http://www.askamathematician.com/2009/12/q-could-a-simple-cup-of-coffee-be-heated-by-a-hand-held-device-designed-to-not-only-mix-but-heat-the-water-through-friction-and-is-that-more-efficient-than-heating-on-a-stove-and-then-mixing/
# Q: Could a simple cup of coffee be heated by a hand held device designed to not only mix but heat the water through friction, and is that more efficient than heating on a stove and then mixing? Physicist: You could definitely make a device that heats water through mixing.  In fact, that's exactly how scientists (Joule) figured out how to equate heat energy and kinetic energy in the first place. (Figure: Joule's device, which turns the energy of a dropping weight into heat.) When you introduce turbulence to a system, the energy flows from large-scale eddies into smaller and smaller eddies.  At some point the eddies are about the size of molecules (this takes about one minute).  At that point you're no longer talking about the flow of a fluid, and are instead talking about the random motion of molecules (heat). Fun fact: in two dimensions turbulence actually starts at small scales and moves up into larger scales!  You can see this exhibited in weather systems larger than about 15 km across in the atmosphere (at these scales the atmosphere is effectively flat). (Figure: a good way to induce large-scale eddy currents.) From this point of view the difference between a mixer and a heater is that a mixer induces large eddy currents, and a heater induces the smallest possible eddies.  Ultrasonic heaters fall neatly in between. Efficiency is defined as $\eta = 1 - \frac{E_{heat}}{E_{in}}$, where $\eta$ is efficiency, $E_{in}$ is the energy put in, and $E_{heat}$ is the energy lost to heat.  You'll notice that no lost heat means 100% ($\eta = 1$) efficiency, and if all the energy put in is lost to heat then the efficiency is 0%.  So the nice thing about trying to create heat intentionally is that you'll always be 100% efficient (if you lose some heat to heat, would you notice?).  Or close enough, at least.  What you have to worry about is accidentally heating up the wrong thing.  Mixers generate large eddies which can move the cup, make noise, and what-have-you.  
In other words, some of the energy is wasted heating up stuff near the cup (if you can hear it, then some of the energy is being wasted in your ear). An electric stove top pushes energy into water at a rate of about 1 kW.  A normal blender (mixer) draws power at about 400 W, and loses almost all of it to noise and vibration. So, to actually answer the question: you can heat coffee through mixing, but you'd get plenty of splashing, it'd be slow, and it'd lose a fair amount of energy through noise and vibration.  You'd be better off with a normal mixing device that has a heating element built in, or heating on a stove first. This entry was posted in -- By the Physicist, Engineering, Physics. Bookmark the permalink. ### 2 Responses to Q: Could a simple cup of coffee be heated by a hand held device designed to not only mix but heat the water through friction, and is that more efficient than heating on a stove and then mixing? 1. Melle says: "Ultrasonic heaters fall neatly in between." I'm not sure I understand correctly, but does that mean that a device with millions of tiny supersonic rotating parts with as little friction as possible could heat water? Can't Freeman Dyson invent that? Or would it still be hugely inefficient? 2. The Physicist says: The idea of this post was that if you have a device that does anything at all other than heat, it'll be less efficient than a heater. The best you can do is just a regular heating element.
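As a rough sanity check on those power figures, here is a back-of-envelope estimate; the cup size and temperature rise are assumed for illustration, not taken from the article:

```python
# Back-of-envelope: time to heat a cup of coffee at stove vs. blender power.
# Assumed figures (not from the article): 250 mL of water, heated by 60 °C.
SPECIFIC_HEAT_WATER = 4186  # J/(kg·K)
mass = 0.25                 # kg (about one cup)
delta_T = 60                # K

energy = mass * SPECIFIC_HEAT_WATER * delta_T  # joules required

for label, watts in [("stove (~1 kW)", 1000), ("blender (~400 W)", 400)]:
    # Idealized time, ignoring all the losses discussed above
    print(f"{label}: {energy / watts:.0f} s at 100% efficiency")
```

Even at perfect efficiency the blender takes over two and a half minutes, before accounting for the noise and vibration losses the article describes.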
2015-11-29 08:41:20
https://tex.stackexchange.com/questions/499524/bibliography-references-not-showing-in-output-file
# Bibliography references not showing in output file

I am using JabRef for bibliography entries. Now when I put in the path to my .bib file, I don't see any bibliography entries, and for citations in the PDF I get ?? instead. \nocite is already active. How can it be solved?

```latex
\documentclass[AMA,STIX1COL]{WileyNJD-v2}
%\usepackage{subcaption}
\usepackage{subfig}
\articletype{Article Type}%
%\usepackage[demo]{graphicx}
\usepackage{graphicx}
%\usepackage[super]{natbib}
\bibliographystyle{unsrtnat}
\begin{document}
\nocite{*}
\bibliography{us}%
\end{document}
```

Bib file entries:

```bibtex
@Article{Gibbon2010,
  author    = {Timothy Braidwood Gibbon and Xianbin Yu and Romeo Gamatham and Neil Guerrero Gonzalez and Roberto Rodes and Jesper Bevensee Jensen and Antonio Caballero and Idelfonso Tafur Monroy},
  title     = {3.125 Gb/s Impulse Radio Ultra-Wideband Photonic Generation and Distribution Over a 50 km Fiber With Wireless Transmission},
  journal   = {{IEEE} Microwave and Wireless Components Letters},
  year      = {2010},
  volume    = {20},
  number    = {2},
  pages     = {127--129},
  month     = {feb},
  doi       = {10.1109/lmwc.2009.2039049},
  publisher = {Institute of Electrical and Electronics Engineers ({IEEE})},
}
```

• Welcome to TeX.SX. Did you click the extra button in your editor to process the bibliography database and make the results accessible to LaTeX? If so, you will start to see more than question marks once you click the usual button to compile your document. – Johannes_B Jul 12 at 4:20
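As the comment suggests, ?? markers usually mean the bibliography processor (BibTeX) was never run between LaTeX passes. A minimal sketch for testing the .bib file outside the Wiley class (assuming us.bib sits next to the .tex file; compile with latex, then bibtex, then latex twice):

```latex
\documentclass{article}
\usepackage[numbers]{natbib} % unsrtnat is a natbib style
\bibliographystyle{unsrtnat}
\begin{document}
\nocite{*}          % list every entry in the .bib file, even uncited ones
\bibliography{us}   % reads us.bib (no extension)
\end{document}
```

If this minimal document shows the entry but the Wiley document does not, the problem lies with the class setup rather than the bibliography database.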
2019-08-19 06:24:44
http://math.stackexchange.com/questions/337397/show-that-the-infinite-intersection-of-nested-non-empty-closed-subsets-of-a-comp
# Show that the infinite intersection of nested non-empty closed subsets of a compact space is not empty I'm given the following problem: Suppose that for every $n\in \mathbb{N}$, $V_n$ is a non-empty, closed subset of a compact space $X$, with $V_n \supseteq V_{n+1}$. Now I have to show that $V_{\infty}= \bigcap_{n=1}^{\infty} V_n \neq \emptyset$. How can I do that? I know the nested interval property from real analysis... The 'answer' should be that the family $\{V_n: n\in \mathbb{N} \}$ has the finite intersection property: the intersection of any finite subfamily $\{V_{n_1}, V_{n_2}, \ldots, V_{n_r} \}$ is $V_N$, where $N = \max \{n_1,n_2, \ldots, n_r \}$, and $V_N \neq \emptyset$.  Hence, since $X$ is compact, another exercise $(*)$ says that $\bigcap_{n=1}^{\infty} V_n$ is non-empty. Exercise $(*)$ was to prove that a space $X$ is compact if and only if, whenever $\{V_i: i\in I \}$ is an indexed family of closed subsets of $X$ such that $\bigcap_{j\in J}V_j$ is non-empty for every finite subset $J \subseteq I$, then $\bigcap_{i\in I}V_i$ is non-empty. So I don't understand the proof now... Can somebody clarify this stuff? :-) Definition of open cover: Let $A$ be a subset of $X$. A family $\mathcal{U}=\{U_i: i\in I\}$ of subsets of $X$ is called a cover for $A$ if $A\subseteq \bigcup_i U_i$. If each $U_i$ is open in $X$, then $\mathcal{U}$ is an open cover for $A$. - If the intersection were empty, then there would be a finite subset $M$ of $\mathbb N$ such that the intersection $\bigcap_{m\in M}V_m=\emptyset$. –  Stefan Hamcke Mar 21 at 23:26 And I don't understand what you don't understand: the current claim's proof or the past (*) proof? It is a rather important property of compact spaces... –  DonAntonio Mar 21 at 23:27 Well it would be nice if anyone could tell me how to prove the second $(*)$ proof. –  MSKfdaswplwq Mar 21 at 23:30 It's just a dualization of the definition of compactness. 
Well, actually it is the contrapositive of this dualization. –  Stefan Hamcke Mar 21 at 23:32 Sorry, I don't know what you mean by dualization of a definition.. :-) –  MSKfdaswplwq Mar 21 at 23:34 Claim: A topological space $\,X\,$ is compact iff every family of closed subsets of $\,X\,$ with the finite intersection property (FIP) has non-empty intersection. Proof: (1) Suppose $\,X\,$ is compact and let $\,\{V_i\}\,$ be a family of closed subsets s.t. $\,\displaystyle{\bigcap_{i}V_i=\emptyset}\,$. Putting now $\,A_i:=X-V_i\,$, we get that $\,\{A_i\}\,$ is a family of open subsets, and $$\bigcap_{i}V_i=\emptyset\Longrightarrow \;X=X-\emptyset=X-\left(\bigcap_iV_i\right)=\bigcup_i\left(X-V_i\right)=\bigcup_iA_i\Longrightarrow\;\{A_i\}$$ is an open cover of $\,X\,$ and thus there exists a finite subcover of it: $$X=\bigcup_{i\in I\,,\,|I|<\aleph_0}A_i=\bigcup_{i\in I\,,\,|I|<\aleph_0}(X-V_i)=X-\left(\bigcap_{i\in I\,,\,|I|<\aleph_0}V_i\right)\Longrightarrow \bigcap_{i\in I\,,\,|I|<\aleph_0}V_i=\emptyset\Longrightarrow$$ the family $\,\{V_i\}\,$ fails the FIP. Contrapositively: in a compact space, any family of closed sets with the FIP has non-empty intersection. (2) Suppose now that every family of closed subsets of $\,X\,$ with the FIP has non-empty intersection, and let $\,\{A_i\}\,$ be an open cover of $X$. Put $\,U_i:=X-A_i\,$, so $\,U_i\,$ is closed for every $\,i\,$: $$\bigcap_iU_i=\bigcap_i(X-A_i)=X-\bigcup_i A_i=X-X=\emptyset$$ By (the contrapositive of) the assumption, there exists a finite set $\,J\,$ s.t. $\,\displaystyle{\bigcap_{i\in J}U_i=\emptyset}\,$, but then $$X=X-\emptyset=X-\bigcap_{i\in J}U_i=\bigcup_{i\in J}(X-U_i)=\bigcup_{i\in J}A_i\Longrightarrow$$ $\,\{A_i\}_{i\in J}$ is a finite subcover for $\,X\,$ and thus it is compact....QED. Please be sure you can follow the above and justify all steps. Check where we used De Morgan's laws, for example, and note that we used the contrapositive of the FIP's definition... - If you are working in a metric space then this can be proved using the sequential definition of compactness. For each $V_n$ choose $x_n \in V_n$. 
Then all the $x_n$ live in $V_1$; since this is compact, there is a subsequence $x_{n_k}$ converging to some $x \in V_1$. Now we claim that in fact $x \in V_i$ for all $i$. Suppose not, so that there exists $i> 0$ such that $x \notin V_i$. Then $V_i$ being closed implies that we can find an open set $U$ such that $x \in U$ and $U \cap V_i = \emptyset$. But $x_{n_k} \in V_{n_k} \subseteq V_i$ whenever $n_k \geq i$, so $U$ can only meet finitely many terms of the sequence $x_{n_k}$, contradicting $x_{n_k} \to x$. - Since $V_{n+1}\subset V_n$ and $V_n\neq\varnothing$ for each $n\in\mathbb{N}$, we have that the family of closed subsets of $X$, $\{V_n\}_{n\in\mathbb{N}}$, has the finite intersection property. Since $X$ is compact and $\{V_n\}_{n\in\mathbb{N}}$ has the finite intersection property, $\bigcap\limits_{n\in\mathbb{N}} V_n\neq\varnothing$.
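A standard pair of examples (not from the thread) illustrates both the result and why compactness cannot be dropped:

```latex
% In the compact space X = [0,1], the nested non-empty closed sets
\[
V_n=\left[0,\tfrac1n\right]\quad\Longrightarrow\quad \bigcap_{n=1}^{\infty}V_n=\{0\}\neq\emptyset .
\]
% In the non-compact space X = (0,1], the sets below are closed in X and nested,
% yet their intersection is empty:
\[
V_n=\left(0,\tfrac1n\right]\quad\Longrightarrow\quad \bigcap_{n=1}^{\infty}V_n=\emptyset .
\]
```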
2013-12-19 18:25:43
https://tentgar.ru/carbon-dating-exponential-functions-175.html
# Carbon dating exponential functions

In our choice of a function to serve as a mathematical model, we often use data points gathered by careful observation and measurement to construct points on a graph, and hope we can recognize the shape of the graph. Exponential growth and decay graphs have a distinctive shape, as we can see in the figures. We may use the exponential decay model when we are calculating half-life, the time it takes for a substance to exponentially decay to half of its original quantity. We use half-life in applications involving radioactive isotopes.

Starting from the decay model $A = A_0 e^{kt}$ and setting $A = \frac{A_0}{2}$, the half-life $t$ is derived as follows:

$$\begin{aligned} \frac{A_0}{2} &= A_0 e^{kt} && \\ \frac{1}{2} &= e^{kt} && \text{divide by } A_0 \\ \ln\left(\frac{1}{2}\right) &= kt && \text{take the natural log} \\ -\ln 2 &= kt && \text{apply the power rule for logs} \\ t &= -\frac{\ln 2}{k} && \text{divide by } k \end{aligned}$$

Since $t$, the time, is positive, $k$ must, as expected, be negative. Expressed in scientific notation, this is $4.01134972 \times 10^{13}$. In this section, we explore some important applications in more depth, including radioactive isotopes and Newton's Law of Cooling. In real-world applications, we need to model the behavior of a function. Carbon dating compares the ratio of two isotopes of carbon in an organic artifact or fossil to the ratio of those two isotopes in the air. It is believed to be accurate to within about $1\%$ error for plants or animals that died within the last $60{,}000$ years. Again, we have the form $y=A_0e^{kt}$, where $A_0$ is the starting value and $e$ is Euler's constant.
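The half-life relation above turns directly into a dating calculation; a short sketch in Python, using the commonly quoted approximate half-life of carbon-14 (about 5730 years):

```python
import math

HALF_LIFE_C14 = 5730  # years; commonly quoted approximate half-life of carbon-14

# Decay constant k from the half-life relation t = -ln(2)/k
k = -math.log(2) / HALF_LIFE_C14

def age_from_ratio(ratio):
    """Years elapsed, given the remaining fraction A/A0 of carbon-14."""
    return math.log(ratio) / k

# A sample retaining half its carbon-14 is one half-life old.
print(round(age_from_ratio(0.5)))  # 5730
```

A sample retaining a quarter of its carbon-14 comes out to two half-lives, as expected.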
2020-06-01 03:28:54
https://mathoverflow.net/questions/20070/functoriality-of-hironakas-resolution-of-singularities
# Functoriality of Hironaka's resolution of singularities Is Hironaka's resolution of singularities functorial? I know that the resolution is not unique; there are flips etc. But if we have a rational map $f\colon X \dashrightarrow Y$, can we choose resolutions $X'\to X$ and $Y'\to Y$ and a map $f_*\colon X'\to Y'$ that makes the relevant diagram commute? • First resolve Y, then resolve the closure of the graph of the resulting rational map: this should do the trick! – damiano Apr 1 '10 at 13:25 • I think the tag projective-resolution is intended for homological algebra use, so it is probably not appropriate here. – Andrea Ferretti Apr 1 '10 at 13:26 • @Andrea: indeed. I've taken the liberty to detag. – José Figueroa-O'Farrill Apr 1 '10 at 14:08 • Darn!! I tried to create a resolution of singularities tag, only to find that it was truncated as resolution-of-singulariti. We really need more letters for tags. – Regenbogen Apr 1 '10 at 15:00 • Regenbogen: You can create the truncated tag for now, and then later when (if?) the character limit for tags is increased, the administrators can change the tag name. – Kevin H. Lin Apr 1 '10 at 17:33 A useful (at least for me) example is given in Kollár's article/book on resolutions of singularities about how you can't expect to get a "resolution functor": take a quadric cone $$C = \{(x,y,z) \in \mathbb A^3: xy-z^2=0\}$$ in $\mathbb A^3$. Then you have the obvious map $\phi\colon \mathbb A^2 \to C$. But now suppose that $C'$ is a resolution of $C$ provided by a putative "resolution functor". Then if we let $\tilde{C}$ be the minimal resolution, $C'$ factors through $\tilde{C}$. If we assume that $\mathbb A^2$ is resolved by itself (as seems reasonable!) then we'd have to have $\phi$ lifting to a map $\mathbb A^2 \to \tilde{C}$ compatibly with the original morphism, which of course one cannot do.
2020-09-22 15:23:17
https://poissonconsulting.github.io/universals/
universals provides S3 generic methods and some default implementations for Bayesian analyses that generate Markov Chain Monte Carlo (MCMC) samples. The purpose of ‘universals’ is to reduce package dependencies and conflicts.

## Philosophy

The methods are primarily designed for Bayesian analyses that generate Markov Chain Monte Carlo (MCMC) samples, but many can also be used for Maximum Likelihood (ML) and other types of analyses. The names of the functions are based on the following definitions/concepts:

- A term is a single real or integer value.
- A par (short for parameter) is a numeric object of terms.
- An MCMC object is a collection of MCMC samples that refer to a set of terms.
- The samples are arranged in one or more chains of the same length (number of iterations).
- The number of simulations is the product of the number of iterations and the number of chains.
- The number of samples is the product of the number of simulations and the number of terms.

The ‘nlist’ package implements many of the methods for its ‘nlists’ class.

## Installation

To install the latest release from CRAN:

```r
install.packages("universals")
```

To install the development version from GitHub:

```r
# install.packages("remotes")
remotes::install_github("poissonconsulting/universals")
```

## Usage

universals is designed to be used by package developers. It is recommended to import and re-export the generics of interest. For example, to provide a method for the pars() generic, use the following roxygen2 code:

```r
#' @importFrom universals pars
#' @export
universals::pars
```
2020-12-01 17:26:14
https://www.jobilize.com/precalculus/section/using-double-angle-formulas-to-verify-identities-by-openstax?qcr=www.quizover.com
# 9.3 Double-angle, half-angle, and reduction formulas  (Page 2/8)

Given $\sin \alpha =\frac{5}{8}$, with $\alpha$ in quadrant I, find $\cos(2\alpha)$.

$$\cos(2\alpha)=\frac{7}{32}$$

## Using the double-angle formula for cosine without exact values

Use the double-angle formula for cosine to write $\cos(6x)$ in terms of $\cos(3x)$.

$$\begin{aligned} \cos(6x) &= \cos(3x+3x)\\ &= \cos 3x \cos 3x - \sin 3x \sin 3x\\ &= \cos^2 3x - \sin^2 3x \end{aligned}$$

## Using double-angle formulas to verify identities

Establishing identities using the double-angle formulas is performed using the same steps we used to derive the sum and difference formulas. Choose the more complicated side of the equation and rewrite it until it matches the other side.

## Using the double-angle formulas to verify an identity

Verify the following identity using double-angle formulas:

$$1+\sin(2\theta) = (\sin\theta + \cos\theta)^2$$

We will work on the right side of the equal sign and rewrite the expression until it matches the left side. 
$$\begin{aligned} (\sin\theta+\cos\theta)^2 &= \sin^2\theta + 2\sin\theta\cos\theta + \cos^2\theta\\ &= (\sin^2\theta+\cos^2\theta) + 2\sin\theta\cos\theta\\ &= 1 + 2\sin\theta\cos\theta\\ &= 1 + \sin(2\theta) \end{aligned}$$

Verify the identity: $\cos^4\theta-\sin^4\theta=\cos(2\theta)$.

$$\cos^4\theta-\sin^4\theta=(\cos^2\theta+\sin^2\theta)(\cos^2\theta-\sin^2\theta)=\cos(2\theta)$$

## Verifying a double-angle identity for tangent

Verify the identity: $\tan(2\theta)=\dfrac{2}{\cot\theta-\tan\theta}$

In this case, we will work with the left side of the equation and simplify or rewrite until it equals the right side of the equation. 
Verify the identity: $\cos(2\theta)\cos\theta=\cos^3\theta-\cos\theta\sin^2\theta$.

$$\cos(2\theta)\cos\theta=(\cos^2\theta-\sin^2\theta)\cos\theta=\cos^3\theta-\cos\theta\sin^2\theta$$

## Use reduction formulas to simplify an expression

The double-angle formulas can be used to derive the reduction formulas, which are formulas we can use to reduce the power of a given expression involving even powers of sine or cosine. They allow us to rewrite the even powers of sine or cosine in terms of the first power of cosine. These formulas are especially important in higher-level math courses, calculus in particular. Also called the power-reducing formulas, three identities are included and are easily derived from the double-angle formulas. We can use two of the three double-angle formulas for cosine to derive the reduction formulas for sine and cosine. 
Let’s begin with $\cos(2\theta)=1-2\sin^2\theta$. Solve for $\sin^2\theta$:

$$\begin{aligned} \cos(2\theta) &= 1-2\sin^2\theta\\ 2\sin^2\theta &= 1-\cos(2\theta)\\ \sin^2\theta &= \frac{1-\cos(2\theta)}{2} \end{aligned}$$

Next, we use the formula $\cos(2\theta)=2\cos^2\theta-1$. Solve for $\cos^2\theta$:

$$\cos^2\theta=\frac{1+\cos(2\theta)}{2}$$

The last reduction formula is derived by writing tangent in terms of sine and cosine:

$$\tan^2\theta=\frac{\sin^2\theta}{\cos^2\theta}=\frac{1-\cos(2\theta)}{1+\cos(2\theta)}$$

## Reduction formulas

The reduction formulas are summarized as follows:

$$\sin^2\theta=\frac{1-\cos(2\theta)}{2}$$
$$\cos^2\theta=\frac{1+\cos(2\theta)}{2}$$
$$\tan^2\theta=\frac{1-\cos(2\theta)}{1+\cos(2\theta)}$$

## Writing an equivalent expression not containing powers greater than 1

Write an equivalent expression for $\cos^4 x$ that does not involve any powers of sine or cosine greater than 1. We will apply the reduction formula for cosine twice.
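The reduction formulas are easy to check numerically; a quick sketch in Python (the angle 0.7 rad is arbitrary):

```python
import math

theta = 0.7  # arbitrary test angle in radians

# Power-reducing (reduction) formulas from the text
sin_sq = (1 - math.cos(2 * theta)) / 2
cos_sq = (1 + math.cos(2 * theta)) / 2
tan_sq = (1 - math.cos(2 * theta)) / (1 + math.cos(2 * theta))

# Each should match the directly squared trig value
print(math.isclose(sin_sq, math.sin(theta) ** 2))  # True
print(math.isclose(cos_sq, math.cos(theta) ** 2))  # True
print(math.isclose(tan_sq, math.tan(theta) ** 2))  # True
```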
2020-08-13 01:54:06
https://read.somethingorotherwhatever.com/entry/Paperfoldingmorphismsplanefillingcurvesandfractaltiles
# Paperfolding morphisms, planefilling curves, and fractal tiles

- Published in 2010
- In the collections geometry and things-to-make-and-do

An interesting class of automatic sequences emerges from iterated paperfolding. The sequences generate curves in the plane with an almost periodic structure. We generalize the results obtained by Davis and Knuth on the self-avoiding and planefilling properties of these curves, giving simple geometric criteria for a complete classification. Finally, we show how the automatic structure of the sequences leads to self-similarity of the curves, which turns the planefilling curves in a scaling limit into fractal tiles. For some of these tiles we give a particularly simple formula for the Hausdorff dimension of their boundary.

### BibTeX entry

```bibtex
@article{Paperfoldingmorphismsplanefillingcurvesandfractaltiles,
  title = {Paperfolding morphisms, planefilling curves, and fractal tiles},
  abstract = {An interesting class of automatic sequences emerges from iterated paperfolding. The sequences generate curves in the plane with an almost periodic structure. We generalize the results obtained by Davis and Knuth on the self-avoiding and planefilling properties of these curves, giving simple geometric criteria for a complete classification. Finally, we show how the automatic structure of the sequences leads to self-similarity of the curves, which turns the planefilling curves in a scaling limit into fractal tiles. For some of these tiles we give a particularly simple formula for the Hausdorff dimension of their boundary.},
  url = {http://arxiv.org/abs/1011.5788v2 http://arxiv.org/pdf/1011.5788v2},
  year = 2010,
  author = {Michel Dekking},
  comment = {},
  urldate = {2020-09-21},
  archivePrefix = {arXiv},
  eprint = {1011.5788},
  primaryClass = {math.CO},
  collections = {geometry,things-to-make-and-do}
}
```
http://math.stackexchange.com/questions/54915/is-there-general-formula-for-the-exponential-of-a-tridiagonal-matrix
# Is there a general formula for the exponential of a tridiagonal matrix?

For an arbitrary tridiagonal matrix of the form $$A = \begin{pmatrix} b_1 & c_1 & 0 & 0 & ... \\ a_2 & b_2 & c_2 & 0 & ... \\ 0 & a_3 & b_3 & c_3 & ... \\ \vdots &&\ddots&\ddots&\ddots\end{pmatrix}$$ is there a formula to calculate $\exp(A)$? Or at least for some special tridiagonal matrices? The special case I am most interested in is a $(2n+1)^2$ matrix with $b_k = i(k-n-1)$ and $c_k = (a_{2n+2-k})^*$, i.e. $$\begin{pmatrix} -in & c_1 & 0 & \\ c_{2n}^* & -i(n-1) & c_2 & \\ 0 & c_{2n-1}^* & -i(n-2) & \ddots \\ &&\ddots&\ddots \end{pmatrix}$$

- A closed form for that exponential would entail finding a closed form for the characteristic polynomial of the tridiagonal matrix, since the eigenvectors can be expressed in terms of derivatives of the characteristic polynomial evaluated at appropriate values... – Guess who it is. Aug 10 '11 at 8:53
- Did you ever find a solution to your problem? – John Salvatier Jan 10 '13 at 16:55
- @JohnSalvatier I'm afraid not :-/ – Tobias Kienzler Jan 10 '13 at 17:00
- I'm looking for a way to compute exp(At)*x_0 cheaply when A is a symmetric tridiagonal matrix. I think I may just have to eigen-decompose A and do it that way. Luckily I only have to decompose A once, and then it's O(n**2), which I guess is okay. Since you can compute Ax_0 in O(n) steps because it's tridiagonal, I was hoping for something better, but maybe that's not possible. – John Salvatier Jan 10 '13 at 20:26
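For the symmetric tridiagonal case raised in the comments, the eigendecomposition route can be sketched as follows. This is an illustrative example of mine (the function name is made up), using SciPy's tridiagonal eigensolver:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def expm_tridiag_apply(d, e, x0, t=1.0):
    """Compute exp(A t) @ x0 for a symmetric tridiagonal A with
    main diagonal d and off-diagonal e, via A = V diag(w) V^T."""
    w, V = eigh_tridiagonal(d, e)            # decompose once
    return V @ (np.exp(w * t) * (V.T @ x0))  # O(n^2) per vector afterwards

d = np.array([1.0, 2.0, 3.0])   # main diagonal
e = np.array([0.5, 0.5])        # off-diagonal
x0 = np.array([1.0, 0.0, 0.0])
y = expm_tridiag_apply(d, e, x0)
```

The result can be checked against `scipy.linalg.expm` applied to the dense matrix; the tridiagonal solver only avoids the cost of forming and decomposing the full matrix.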
http://math.stackexchange.com/questions/131790/the-longest-run-of-heads
# The longest run of heads

Let $X_1, X_2, \ldots$ be independent and identically distributed (i.i.d.) random variables with common distribution $\mathbb{P}(X_k=1)=p,\; \mathbb{P}(X_k=0)=1-p$. Fix a parameter $\lambda > 1$ and, for $k=1, 2, 3, \dots$, let $A_k^{(\lambda)}$ denote the event $$A_k^{(\lambda)}: = \{ \exists r \in [[\lambda^k], [\lambda^{k+1}]-1]\cap\mathbb{N}: X_r=X_{r+1}= \dots =X_{r+k-1}=1\}$$ where $[\lambda^k]$ denotes the floor. That is, $A_k^{(\lambda)}$ is the event that between $[\lambda^k]$ and $[\lambda^{k+1}]-1$ there is a run of $k$ consecutive ones. Let's prove:

a) If $\lambda < p^{-1}$ then, with probability 1, the events $A_k^{(\lambda)}$ occur for only finitely many $k$.

b) If $\lambda > p^{-1}$ then, with probability 1, the events $A_k^{(\lambda)}$ occur for infinitely many $k$.

c) What happens when $\lambda=p^{-1}$?

Any help would be appreciated.

- I think the third question should be: "What happens if $\lambda=1/p$?" – Norbert Apr 14 '12 at 17:51
- (+1) Is this homework? If so, kindly add the homework tag. :) – cardinal Apr 14 '12 at 18:23
- Does $[\lambda^k]$ denote the nearest integer, or floor, or ceiling, or what? – Quinn Culver Apr 14 '12 at 19:19
- Does "...among $A_{k}^{(\lambda)}$ events occurs only finitely many..." mean that $A_{k}^{(\lambda)}$ occurs for only finitely many $k$? – Quinn Culver Apr 14 '12 at 19:35
- Thank you. I added the tag as you suggested and completed the text that $[\lambda^k]$ denotes the floor, plus I corrected the sentence that was not clear. – Dawson Apr 14 '12 at 20:16

The three parts are consequences of the Borel–Cantelli lemmas, hence we begin by controlling the probability of $A^{(\lambda)}_k$.
On the one hand, this event involves at most $\lambda^{k+1}-\lambda^k$ (non independent) events of probability $p^k$ hence, introducing $\mu=\lambda-1\gt0$, $$\mathrm P(A^{(\lambda)}_k)\leqslant \mu\cdot(p\lambda)^k.$$ On the other hand, decomposing the interval $(\lambda^k,\lambda^{k+1})$ into disjoint intervals of length $k$, one gets at least $\mu\lambda^k/k$ (independent) events of probability $p^k$ hence $$\mathrm P(A^{(\lambda)}_k)\geqslant 1-(1-p^k)^{\mu\lambda^k/k}.$$ These two estimates imply the following: • In case (a), $p\lambda\lt1$ hence $\sum\limits_k\mathrm P(A^{(\lambda)}_k)$ converges and the first Borel-Cantelli lemma yields the conclusion in your post. • In case (b), $p\lambda\gt1$ hence $(1-p^k)^{\mu\lambda^k/k}\to0$, $\mathrm P(A^{(\lambda)}_k)\to1$ and $\sum\limits_k\mathrm P(A^{(\lambda)}_k)$ diverges. Since the events $(A^{(\lambda)}_k)_k$ are independent, the second Borel-Cantelli lemma yields the conclusion in your post. • In case (c), $(1-p^k)^{\mu\lambda^k}=(1-p^k)^{\mu/p^k}\to\mathrm e^{-\mu}\gt0$ hence $\mathrm P(A^{(\lambda)}_k)\geqslant (\mu+o(1))/k$ and the second Borel-Cantelli lemma shows that, almost surely, infinitely many events $A^{(\lambda)}_k$ occur.
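The estimates in the answer can be checked numerically. This Monte Carlo sketch (my own illustration, not part of the original answer; function names are made up) estimates $\mathrm P(A_k^{(\lambda)})$ by simulating the coin sequence and scanning the window for a starting position of a length-$k$ run:

```python
import random

def window_has_run(bits, lo, hi, k):
    """True if some r in [lo, hi] (1-indexed) starts a run of k ones."""
    return any(all(bits[r - 1 + j] for j in range(k))
               for r in range(lo, hi + 1))

def estimate_prob(p, lam, k, trials=2000, seed=1):
    """Monte Carlo estimate of P(A_k) for the event defined above."""
    rng = random.Random(seed)
    lo, hi = int(lam ** k), int(lam ** (k + 1)) - 1
    hits = sum(
        window_has_run([rng.random() < p for _ in range(hi + k)], lo, hi, k)
        for _ in range(trials)
    )
    return hits / trials
```

For $p\lambda < 1$ the estimates shrink rapidly with $k$, while for $p\lambda > 1$ they approach 1, matching the two Borel–Cantelli regimes.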
https://www.physicsforums.com/threads/em-wave-and-complex-number.514476/
# Em wave and complex number

1. Jul 16, 2011

### phabos

Why are electromagnetic waves represented by complex numbers?

2. Jul 16, 2011

### Lavabug

My answer is a bit general, but I think it's pretty relevant: waves and harmonic oscillators are represented by sinusoidal functions. Using Euler's formula you can rewrite them as the real part of a complex exponential, where the exponent is $i$ times the same argument you would use for an oscillator ($\omega t$ + phase) or a wave ($kx \pm \omega t$ + phase). It's a bit more convenient to work with complex exponentials since they're more compact, for example when taking their time derivatives to get velocities.

Something worth trying to illustrate that point: show that the total energy (T+V) of a harmonic oscillator is proportional to the square of the amplitude. You can do this either way, but I think it's more compact if you use $y(t) = \Re e\{Ae^{i\omega t}\}$ instead of $A\cos\omega t$ as your starting point.

3. Jul 17, 2011

### yungman

EM waves are usually sinusoidal in nature. It is easiest to represent a harmonic (sinusoidal) wave as a cosine wave: $$\vec E =E_0 \cos\;(\omega t -\vec k\cdot \vec R)\;=\; \Re e [E_0 e^{j\omega t}e^{-j\vec k \cdot \vec R}]$$ And then use the phasor form, where $\tilde E = E_0 e^{-j\vec k \cdot \vec R} \;\hbox { and }\;\vec E = \Re e [\tilde E \;e^{j\omega t}]$. The solution of the homogeneous harmonic wave equation $$\nabla^2 \vec E - \delta^2 \vec E = 0$$ is of the form $$E^+ e^{-\delta \vec k \cdot \vec R} +E^- e^{\delta \vec k \cdot \vec R}, \qquad \delta = \alpha + j\beta.$$ This notation is not as common in physics as in RF and microwave electronics. In RF we deal with transmission lines, where we can assume propagation in the z direction, which simplifies the calculation tremendously: we avoid all the partial differential equations, integration and differentiation. In fact I learned in reverse order.
I have been using phasor calculations to design filters and matching networks for years before I really started learning EM! Last edited: Jul 17, 2011
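The point about complex exponentials being easier to manipulate can be checked numerically. A small sketch of my own (assuming nothing beyond NumPy): the real part of the phasor form reproduces the cosine wave, and time differentiation reduces to multiplication by $j\omega$:

```python
import numpy as np

A, w, phi = 2.0, 3.0, 0.4   # amplitude, angular frequency, phase
t = np.linspace(0.0, 2.0, 1000)

# Phasor form: y(t) = Re{ A e^{j(wt + phi)} }
phasor = A * np.exp(1j * (w * t + phi))
y = phasor.real

# Identical to the real sinusoid
assert np.allclose(y, A * np.cos(w * t + phi))

# d/dt acts as multiplication by jw on the phasor:
velocity = (1j * w * phasor).real   # equals -A w sin(wt + phi)
assert np.allclose(velocity, -A * w * np.sin(w * t + phi))
```

This is exactly why phasors make the harmonic-oscillator energy calculation mentioned above so compact: derivatives never leave the family $e^{j\omega t}$.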
https://pos.sissa.it/282/492/
Volume 282 - 38th International Conference on High Energy Physics (ICHEP2016) - Neutrino Physics

Results and Status from KamLAND-Zen

J. Ouellet on behalf of the KamLAND-Zen Collaboration

Pre-published on: February 06, 2017. Published on: April 19, 2017

Abstract: The KamLAND-Zen experiment is searching for the $0\nu\beta\beta$ decay of $^{136}$Xe using 320–380 kg of enriched Xe, mounted in the KamLAND detector in the Kamioka mine in Japan. In this talk, we report on the results of 504 kg·yr of data collected during the second phase of data taking, which followed a purification program of the liquid scintillator and Xe. Using this data, we set a limit on the $0\nu\beta\beta$ half-life of $T_{1/2}^{0\nu}(^{136}\mathrm{Xe})>9.2\times10^{25}$ yr at 90% C.L. Combined with the results of the first phase, this limit improves to $T_{1/2}^{0\nu}(^{136}\mathrm{Xe})>1.07\times10^{26}$ yr at 90% C.L. Finally, we report on the current status of the next phase of the experiment, KamLAND-Zen 800.

DOI: https://doi.org/10.22323/1.282.0492
http://tex.stackexchange.com/questions/46441/reindexing-theorem-numbers
# Reindexing Theorem Numbers

The amsthm package comes with several theorem-like environments. Is there a way to adjust the numbering of a sequence of theorems to skip a number? That is, I would like to have it do something like this:

Theorem 1. Theorem 3. Theorem 4.

without anything appearing between Theorem 1 and Theorem 3.

- Why do you want to do this? Is the number 2 used for some other environment and do you want to share the number? If yes then amsmath lets you do this. – Marc van Dongen Mar 2 '12 at 5:49
- No, I just want to skip it. I'm taking a test where I only have to answer some of the questions, and wanted the problem numbers on my solutions to correspond to the problem numbers on the test. – Dane Mar 3 '12 at 2:50

\stepcounter{theorem} after Theorem 1.

- wouldn't you want \refstepcounter? – cmhughes Mar 2 '12 at 4:23
- @cmhughes: I didn't think of that, but actually, you wouldn't. \refstepcounter sets the current label, and therefore, after calling it, every \label command until the next \refstepcounter (a section or another theorem) will incorrectly point to the nonexistent Theorem 2. Theorem 3 itself will \refstepcounter in the correct place. – Ryan Reich Mar 2 '12 at 4:36
- Actually, it's even worse than that, since if you are calling \refstepcounter in the global context, it requires another globally-scoped \refstepcounter to override it. So only another section will do (inside of environments it can be overridden locally). Of course, most of the time you don't have \labels flying around in the middle of the text, but it's still philosophically wrong. – Ryan Reich Mar 2 '12 at 5:14
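A minimal document illustrating the accepted suggestion (a sketch of mine; the `theorem` environment definition is the usual amsthm setup, not taken from the question):

```latex
\documentclass{article}
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}

\begin{document}

\begin{theorem} First result. \end{theorem}   % prints as Theorem 1

\stepcounter{theorem} % silently skip the number 2 (no label set)

\begin{theorem} Second result. \end{theorem}  % prints as Theorem 3

\end{document}
```

Because \stepcounter (unlike \refstepcounter) does not set the current label, any \label inside the later theorems still resolves correctly, as the comments explain.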
https://javalab.org/en/equilibrium_constants_en/
# Equilibrium Constants

Chemical equilibrium

In a chemical reaction, chemical equilibrium is the state in which both reactants and products are present in concentrations that have no further tendency to change with time. At equilibrium at constant temperature, the ratio of the product of the product concentrations to the product of the reactant concentrations is constant. This is called the law of chemical equilibrium. In general, the equilibrium constant $K$ is the ratio of products to reactants. Assuming that the following reaction is in equilibrium, $aA+bB\rightleftharpoons cC+dD$ the forward reaction rate $V_1$ and the backward reaction rate $V_2$ are equal, and the equilibrium constant $K$ can be obtained as follows: $K=\frac { { [C] }^{ c }{ [D] }^{ d } }{ { [A] }^{ a }{ [B] }^{ b } }$ The equilibrium constant $K$ always takes the same value at a given temperature.

What can be known from equilibrium constants

The value of $K$ tells you in which direction the reaction will proceed. If the current concentration ratio (the reaction quotient, built like $K$ but from the current concentrations) is less than $K$, the forward reaction will occur. On the contrary, if it is greater, the backward reaction will occur. If it is equal to $K$, the system is at equilibrium.
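The comparison just described can be written as a tiny function. A sketch of my own (illustrative names) that computes the reaction quotient for $aA+bB \rightleftharpoons cC+dD$ and predicts the direction:

```python
def reaction_direction(conc, coeffs, K):
    """conc = ([A], [B], [C], [D]); coeffs = (a, b, c, d).
    Compare the reaction quotient Q with the equilibrium constant K."""
    A, B, C, D = conc
    a, b, c, d = coeffs
    Q = (C**c * D**d) / (A**a * B**b)
    if Q < K:
        return "forward"    # more products will form
    if Q > K:
        return "backward"   # excess products convert back to reactants
    return "equilibrium"

# Example: K = 4 for A + B <-> 2C (the D slot is unused: d = 0, [D] = 1)
print(reaction_direction((1.0, 1.0, 1.0, 1.0), (1, 1, 2, 0), 4.0))  # forward
```

Here Q = 1 < K = 4, so the forward reaction proceeds until Q rises to K.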
https://www.physicsforums.com/threads/kcl-question.375814/
# KCL Question

1. Feb 5, 2010

### Paymemoney

1. The problem statement, all variables and given/known data
Determine the current in each branch of the circuit in the below diagram.
http://img341.imageshack.us/img341/7901/kclquestion.jpg [Broken]

2. Relevant equations
V=IR
P=VI

3. The attempt at a solution
This is what I have done, but it is incorrect.
Vtotal=16V
V=IR
$$Io1=\frac{16.00}{8.00}$$ =2.00A
$$Io2=\frac{16}{6}$$ =0.75A
$$Io3=\frac{16}{1}+\frac{16}{3}$$ =21.3A
P.S

Last edited by a moderator: May 4, 2017

2. Feb 6, 2010

### Staff: Mentor

You are not writing the simultaneous KCL equations as far as I can tell. Combine series resistors where you can, and write the 1 KCL equation for this circuit (why just 1?). Then solve it.

Last edited by a moderator: May 4, 2017

3. Feb 6, 2010

### Zayer

Use mesh analysis and KVL

4. Feb 6, 2010

### Staff: Mentor

Is there a reason to pick KVL over KCL here? Just curious.

5. Feb 6, 2010

### Paymemoney

After redoing the question, this is what I have done. Three equations:
Equation 1: Io3=Io1+Io2
Equation 2: 8-(4)Io2-(6)Io3=0
Equation 3: 4-(8)Io1-(6)Io2=0

Substitute Equation 1 into Equation 2:
Equation 4: 8-(4)Io2-(6)(Io1+Io2)=0, i.e. 8=(10)Io2+(6)Io1

Equation 4 - Equation 3:
8=(10)Io2+(6)Io1
- 4=(8)Io1+(6)Io2
Io2=0.90Amps

Substitute Io2 into Equation 3:
$$Io1=\frac{4-(6*0.90)}{8}$$ =0.175Amps

Substitute Io1 and Io2 into Equation 1:
Io3=Io1+Io2 =0.90+0.175 =1.075Amps

I checked with the book's answers and my solutions are incorrect. Can someone tell me where I have gone wrong?

6. Feb 6, 2010

### Staff: Mentor

I do not understand your equations. Are they meant to be KCL equations? The KCL only involves the voltage at that top node, and the 3 currents that flow out of that node.

7. Feb 6, 2010

### Paymemoney

yes they are KCL equations

8. Feb 7, 2010

9. Feb 7, 2010

### Paymemoney

ok thanks, I know how to do it now.
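Taking the poster's three equations at face value (the circuit diagram is no longer available, so the values below are only those implied by the posted equations, not the book's circuit), the linear system can be solved directly, which also exposes the sign error in the hand solution:

```python
import numpy as np

# Unknowns [Io1, Io2, Io3], from the posted equations:
#   Io3 = Io1 + Io2
#   8 - 4*Io2 - 6*Io3 = 0
#   4 - 8*Io1 - 6*Io2 = 0
A = np.array([[1.0, 1.0, -1.0],
              [0.0, 4.0,  6.0],
              [8.0, 6.0,  0.0]])
b = np.array([0.0, 8.0, 4.0])

Io1, Io2, Io3 = np.linalg.solve(A, b)
print(Io1, Io2, Io3)  # approx -0.182, 0.909, 0.727 (amps)
```

Note Io1 comes out negative (−2/11 A), whereas the hand calculation above dropped the sign and got +0.175 A, consistent with the poster reporting that their answers did not match the book.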
https://docs.nnpdf.science/data/exp-data-files.html
# Experimental data files

Data made available by experimental collaborations comes in a variety of formats. For use in a fitting code, this data must be converted into a common format that contains all the information required for PDF fitting. Existing formats commonly used by the community, such as in HepData, are generally unsuitable, principally because they often do not fully describe the breakdown of systematic uncertainties. Therefore, over several years an NNPDF standard data format has been iteratively developed, now denoted CommonData. In addition to the CommonData files themselves, in the nnpdf++ project the user has the ability to vary the treatment of individual systematic errors by use of parameter files denoted SYSTYPE files. In this section we detail the specifications of these two files. In principle, the file specification and classes described in this section are independent of the nnpdf++ project and may be generated by whatever means the user sees fit. In practice, the CommonData and SYSTYPE files are generated by the buildmaster project of nnpdf++ from the raw experimental data files.

## Process types and kinematics

Before going into the file formats, we summarise the identifying features used for data in the nnpdf++ code. Each data point has an associated process type string. This can be specified by the user, but must begin with the appropriate identifying base process type. Additionally, for each data point three kinematic values are given, the process type serving primarily to identify the nature of these values. Typically the first kinematic variable is the principal differential quantity used in the measurement, the second defines the scale of the process, and the third is generally the centre-of-mass energy of the process, or the inelasticity in the case of DIS. The allowed basic process types, and their corresponding three kinematic variables, are outlined below.
- DIS - Deep inelastic scattering measurements: $$(x,Q^2,y)$$
- DYP - Fixed-target Drell-Yan measurements: $$(y,M^2,\sqrt{s})$$
- JET - Jet production: $$(\eta,p_T^2,\sqrt{s})$$
- DIJET - Dijet production: $$(\eta,m_{12},\sqrt{s})$$
- PHT - Photon production: $$(\eta_\gamma,E_{T,\gamma}^2,\sqrt{s})$$
- INC - A total inclusive cross-section: $$(0,\mu^2,\sqrt{s})$$
- EWK_RAP - Collider electroweak rapidity distribution: $$(\eta/y,M^2,\sqrt{s})$$
- EWK_PT - Collider electroweak $$p_T$$ distribution: $$(p_T,M^2,\sqrt{s})$$
- EWK_PTRAP - Collider electroweak $$p_T, y$$ distribution: $$(\eta/y, p_T^2,\sqrt{s})$$
- EWK_MLL - Collider electroweak lepton-pair mass distribution: $$(M_{ll},M_{ll}^2,\sqrt{s})$$
- EWJ_(J)RAP - Collider electroweak + jet boson(jet) rapidity distribution: $$(\eta/y,M^2,\sqrt{s})$$
- EWJ_(J)PT - Collider electroweak + jet boson(jet) $$p_T$$ distribution: $$(p_T,M^2,\sqrt{s})$$
- EWJ_(J)PTRAP - Collider electroweak + jet boson(jet) $$p_T, y$$ distribution: $$(\eta/y, p_T^2,\sqrt{s})$$
- EWJ_MLL - Collider electroweak + jet lepton-pair mass distribution: $$(M_{ll},M_{ll}^2,\sqrt{s})$$
- HQP_YQQ - Heavy diquark system rapidity: $$(y^{QQ},\mu^2,\sqrt{s})$$
- HQP_MQQ - Heavy diquark system mass: $$(M^{QQ},\mu^2,\sqrt{s})$$
- HQP_PTQQ - Heavy diquark system $$p_T$$: $$(p_T^{QQ},\mu^2,\sqrt{s})$$
- HQP_YQ - Heavy quark rapidity: $$(y^Q,\mu^2,\sqrt{s})$$
- HQP_PTQ - Heavy quark $$p_T$$: $$(p_T^Q,\mu^2,\sqrt{s})$$
- HIG_RAP - Higgs boson rapidity distribution: $$(y,M_H^2,\sqrt{s})$$

As examples of process type strings, consider EWK_RAP for a collider $$W$$ boson asymmetry measurement binned in rapidity, and DIS_F2P for the $$F_2^p$$ structure function in DIS. The user is free to choose something identifying for the second segment of the process type, the important feature being the basic process type. However, users are encouraged to only use this freedom when absolutely necessary (such as when used in combination with APFEL).
One special case is that of $$W$$ boson lepton asymmetry measurements, which, being cross-section asymmetries, may occasionally have negative data points. Therefore asymmetry measurements must have the final tag ASY to ensure that artificial data generation permits negative data values. An example process type string would be EWK_RAP_ASY.

### Notes for the future

In the future it would be nice to have a more flexible treatment of the kinematic variables, both in their number and labelling.

## CommonData file format

Each experimental Dataset has its own CommonData file. CommonData files contain the bulk of the experimental information used in the nnpdf++ project, with the only other experimental data files controlling the treatment and correlation of systematic errors. Each CommonData file is a plaintext file whose layout is described in the following. The first line contains the Dataset name, the number of systematic errors, and the number of data points in the set, whitespace separated. For example, for the ATLAS 2010 jet measurement the first line of the file reads:

ATLASR04JETS36PB 91 90

which shows that the set name is 'ATLASR04JETS36PB', and that there are 91 sources of systematic uncertainty and 90 data points. As another example, consider the NMCPD Dataset:

NMCPD 5 211

Here there are 5 sources of systematic uncertainty and 211 data points. Following this, each line specifies the details of a single data point. The first value is the data point index $$1 \leq i_{\text{dat}} \leq N_{\mathrm{dat}}$$, followed by the process type string as outlined above, and the three kinematic variables in order. These are followed by the value of the experimental data point itself, and the (absolute) value of the statistical uncertainty associated with it. Finally the systematic uncertainties are specified.
The layout per data point is therefore

$$i_{\mathrm{dat}}$$ ProcessType $$\text{kin}_1\ \text{kin}_2\ \text{kin}_3$$ data_value stat_error $$[..$$ systematics $$..]$$

For example, in the case of a DIS data point from the BCDMSD Dataset:

1 DIS_F2D 7.0e-02 8.75e+00 5.666e-01 3.6575e-01 6.43e-03 $$[..$$ systematics $$..]$$

In these lines the systematic uncertainties are laid out as follows. For each uncertainty, additive and multiplicative versions are given. The additive uncertainty is given by its absolute value, and the multiplicative one as a percentage of the data value (that is, the relative error multiplied by 100). The systematics string is formed by the sequence of $$N_{\text{sys}}$$ pairs of systematic uncertainties: $$[..$$ systematics $$..] = \sigma^{\mathrm{add}}_0 \quad \sigma^{\mathrm{mul}}_0\quad \sigma^{\mathrm{add}}_1 \quad \sigma^{\mathrm{mul}}_1 \quad....\quad \sigma^{\mathrm{add}}_n \quad\sigma^{\mathrm{mul}}_n$$ where $$\sigma^{\mathrm{add}}_i$$ and $$\sigma^{\mathrm{mul}}_i$$ are the additive and multiplicative versions respectively of the systematic uncertainty arising from the $$i\text{th}$$ source. While it may seem at first that the multiplicative error is spurious given the presence of the additive error and the data central value, this may not be the case. For example, in a closure-test scenario the data central values may have been replaced in the CommonData file by theoretical predictions. Therefore, if you wish to use a covariance matrix generated with the original multiplicative uncertainties via the $$t_0$$ method, you must also store the original multiplicative (percentage) error. For flexibility and ease of I/O this is therefore done in the CommonData file itself. For a Dataset with $$N_{\text{dat}}$$ data points and $$N_{\text{sys}}$$ sources of systematic uncertainty, the total CommonData file should therefore be $$N_{\text{dat}}+1$$ lines long.
Its first line contains the set parameters, and every subsequent line consists of the description of a single data point. Each data point line therefore contains $$7 + 2N_{\text{sys}}$$ columns.

## SYSTYPE file format

The explicit presentation of the systematic uncertainties in the CommonData file allows a great deal of flexibility in the treatment of these errors: specifically, whether they should be treated as additive or multiplicative uncertainties, and how they are correlated, both within the Dataset and within a larger Experiment. A specification of how the systematic uncertainties should be treated is provided by a SYSTYPE file. As there is not always an unambiguous method for the treatment of these uncertainties, this information is kept outside the (unambiguous) CommonData file. Several options for this treatment are often provided in the form of multiple SYSTYPE files, which may be selected between in the fit. Each SYSTYPE file begins with a line specifying the total number of systematics; naturally this must match the $$N_{\text{sys}}$$ value specified in the associated CommonData file. This is presented as a single integer. For example, in the case of the BCDMSD SYSTYPE files, the first line is 8, as there are $$N_{\text{sys}}=8$$ sources of systematic uncertainty for this Dataset. Following this line there are $$N_{\text{sys}}$$ lines describing each source of systematic uncertainty. For each source two parameters are provided: the uncertainty treatment and the uncertainty description. These are laid out for each systematic as:

$$i_{\text{sys}}$$ [uncertainty treatment] [uncertainty description]

where $$1 \leq i_{\text{sys}} \leq N_{\mathrm{sys}}$$ enumerates each systematic. The uncertainty treatment determines whether the uncertainty should be treated as additive, multiplicative, or, in cases where the choice is unclear, as randomised on a replica-by-replica basis.
These choices are selected by using the strings ADD, MULT, or RAND. The uncertainty description specifies how the systematic is to be correlated with other data points. There are five special cases for the uncertainty description, specified by the strings CORR, UNCORR, THEORYCORR, THEORYUNCORR and SKIP. The first two specify whether the systematic is fully correlated only within the Dataset (CORR), or whether it is totally uncorrelated (UNCORR). The THEORY descriptors are used to describe theoretical systematics due to e.g. missing NNLO corrections, which are treated as either CORR or UNCORR according to their suffix, but are not included in the generation of artificial replicas (their only contribution is to the fitting error function). If the user wishes to correlate a specific uncertainty between multiple Datasets within an Experiment, they should use a custom uncertainty description. When building a covariance matrix for an Experiment, the nnpdf++ code checks for matches between the uncertainty descriptions of systematics of its constituent Datasets. If a match is found, the code will correlate those systematics over the relevant Datasets. The SKIP descriptor removes the systematic from the covariance matrices, for debugging purposes. As an example, let us consider an NNPDF2.3 standard SYSTYPE for the BCDMSD Dataset:

8
4 MULT BCDMSNORM
5 MULT BCDMSRELNORMTARGET
6 MULT CORR
7 MULT CORR
8 MULT CORR

Here the first five systematics have custom uncertainty descriptions, thereby allowing them to be cross-correlated with other Datasets in a larger Experiment. Systematics six to eight are specified as being fully correlated, but only within the BCDMSD Dataset. Additionally, note that the first three systematics are specified as additive, and the remainder are multiplicative.
If we compare now to the equivalent SYSTYPE file for the BCDMSP Dataset (again with only entries 4–11 appearing in this excerpt):

```
11
4 MULT BCDMSNORM
5 MULT BCDMSRELNORMTARGET
6 MULT CORR
7 MULT CORR
8 MULT CORR
9 MULT CORR
10 MULT CORR
11 MULT CORR
```

it is clear that the first five systematics are the same as in the BCDMSD Dataset, and therefore, should the two sets be combined into a common Experiment, the code will cross-correlate them appropriately.

The combination of SYSTYPE and CommonData is quite flexible. As stated previously, once generated from the original raw experimental data, the CommonData file is fixed and should not be altered except to correct errors. In practice, the full details of the systematic correlations and their treatment are often not precisely specified. This system allows for the safe variation of these parameters for testing purposes.
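The description-matching rule can be illustrated with a small sketch. The function name here is hypothetical, and the real covariance-matrix construction in nnpdf++ is more involved; this only shows the matching step between uncertainty descriptions.

```python
# Hypothetical sketch (not the actual nnpdf++ logic) of the
# description-matching rule used to correlate systematics across
# Datasets when building an Experiment covariance matrix.

SPECIAL_DESCRIPTIONS = {"CORR", "UNCORR", "THEORYCORR", "THEORYUNCORR", "SKIP"}

def cross_correlated(desc_a, desc_b):
    """True when two systematics from different Datasets share a
    custom (non-special) uncertainty description."""
    if desc_a in SPECIAL_DESCRIPTIONS or desc_b in SPECIAL_DESCRIPTIONS:
        # CORR only correlates within its own Dataset; UNCORR and SKIP
        # never correlate across Datasets.
        return False
    return desc_a == desc_b

# BCDMSD and BCDMSP both label their normalisation systematic BCDMSNORM,
# so it is correlated across the two sets in a combined Experiment:
print(cross_correlated("BCDMSNORM", "BCDMSNORM"))  # True
print(cross_correlated("CORR", "CORR"))            # False
```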
https://www.ademcetinkaya.com/2023/01/cnty-century-casinos-inc-common-stock.html
Outlook: Century Casinos Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy: Sell. Time series to forecast n: 19 Jan 2023 for (n+4 weeks). Methodology: Deductive Inference (ML)

## Abstract

Century Casinos Inc. Common Stock prediction model is evaluated with Deductive Inference (ML) and ElasticNet Regression1,2,3,4 and it is concluded that the CNTY stock is predictable in the short/long term. According to price forecasts for the (n+4 weeks) period, the dominant strategy among neural networks is: Sell.

## Key Points

1. Probability Distribution
2. Why do we need predictive models?
3. Can stock prices be predicted?

## CNTY Target Price Prediction Modeling Methodology

We consider Century Casinos Inc. Common Stock Decision Process with Deductive Inference (ML), where A is the set of discrete actions of CNTY stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4

$$F(\text{ElasticNet Regression})^{5,6,7} = \begin{pmatrix} p_{a1} & p_{a2} & \dots & p_{an} \\ \vdots & & & \vdots \\ p_{j1} & p_{j2} & \dots & p_{jn} \\ \vdots & & & \vdots \\ p_{k1} & p_{k2} & \dots & p_{kn} \\ \vdots & & & \vdots \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix} \times R(\text{Deductive Inference (ML)}) \times S(n) \to (n+4\ \text{weeks}), \qquad \sum_{i=1}^{n} s_i$$

n: Time series to forecast; p: Price signals of CNTY stock; j: Nash equilibria (Neural Network); k: Dominated move; a: Best response for target price

For further technical information as per how our model works we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work?

## CNTY Stock Forecast (Buy or Sell) for (n+4 weeks)

Sample Set: Neural Network
Stock/Index: CNTY Century Casinos Inc.
Common Stock
Time series to forecast n: 19 Jan 2023 for (n+4 weeks)

According to price forecasts for the (n+4 weeks) period, the dominant strategy among neural networks is: Sell.

X axis: *Likelihood% (the higher the percentage value, the more likely the event will occur)
Y axis: *Potential Impact% (the higher the percentage value, the more likely the price will deviate)
Z axis (Grey to Black): *Technical Analysis%

## IFRS Reconciliation Adjustments for Century Casinos Inc. Common Stock

1. When designating a group of items as the hedged item, or a combination of financial instruments as the hedging instrument, an entity shall prospectively cease applying paragraphs 6.8.4–6.8.6 to an individual item or financial instrument in accordance with paragraphs 6.8.9, 6.8.10, or 6.8.11, as relevant, when the uncertainty arising from interest rate benchmark reform is no longer present with respect to the hedged risk and/or the timing and the amount of the interest rate benchmark-based cash flows of that item or financial instrument.
2. An entity shall assess whether contractual cash flows are solely payments of principal and interest on the principal amount outstanding for the currency in which the financial asset is denominated.
3. Such designation may be used whether paragraph 4.3.3 requires the embedded derivatives to be separated from the host contract or prohibits such separation. However, paragraph 4.3.5 would not justify designating the hybrid contract as at fair value through profit or loss in the cases set out in paragraph 4.3.5(a) and (b) because doing so would not reduce complexity or increase reliability.
4. An entity shall apply this Standard for annual periods beginning on or after 1 January 2018. Earlier application is permitted. If an entity elects to apply this Standard early, it must disclose that fact and apply all of the requirements in this Standard at the same time (but see also paragraphs 7.1.2, 7.2.21 and 7.3.2).
It shall also, at the same time, apply the amendments in Appendix C.

*The International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, the neural network makes adjustments to the financial statements to bring them into compliance with the IFRS.

## Conclusions

Century Casinos Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Century Casinos Inc. Common Stock prediction model is evaluated with Deductive Inference (ML) and ElasticNet Regression1,2,3,4 and it is concluded that the CNTY stock is predictable in the short/long term. According to price forecasts for the (n+4 weeks) period, the dominant strategy among neural networks is: Sell.

### CNTY Century Casinos Inc. Common Stock Financial Analysis*

| Rating | Short-Term | Long-Term Senior |
| --- | --- | --- |
| Outlook* | Ba1 | Ba1 |
| Income Statement | B3 | B2 |
| Balance Sheet | B2 | B1 |
| Leverage Ratios | C | Ba1 |
| Cash Flow | C | C |
| Rates of Return and Profitability | B2 | Baa2 |

*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand the financial state of the company?

### Prediction Confidence Score

Trust metric by Neural Network: 79 out of 100 with 540 signals.

## References

1. Athey S, Tibshirani J, Wager S. 2016b. Generalized random forests. arXiv:1610.01271 [stat.ME]
2. V. Borkar. A sensitivity formula for the risk-sensitive cost and the actor-critic algorithm. Systems & Control Letters, 44:339–346, 2001
3. P. Artzner, F. Delbaen, J. Eber, and D. Heath. Coherent measures of risk. Journal of Mathematical Finance, 9(3):203–228, 1999
4. H. Kushner and G. Yin.
Stochastic approximation algorithms and applications. Springer, 1997.
5. Tibshirani R, Hastie T. 1987. Local likelihood estimation. J. Am. Stat. Assoc. 82:559–67
6. Y. Chow and M. Ghavamzadeh. Algorithms for CVaR optimization in MDPs. In Advances in Neural Information Processing Systems, pages 3509–3517, 2014.
7. Hirano K, Porter JR. 2009. Asymptotics for statistical treatment rules. Econometrica 77:1683–701

## Frequently Asked Questions

Q: What is the prediction methodology for CNTY stock?
A: CNTY stock prediction methodology: We evaluate the prediction models Deductive Inference (ML) and ElasticNet Regression.

Q: Is CNTY stock a buy or sell?
A: The dominant strategy among neural networks is to Sell CNTY Stock.

Q: Is Century Casinos Inc. Common Stock stock a good investment?
A: The consensus rating for Century Casinos Inc. Common Stock is Sell and is assigned short-term Ba1 & long-term Ba1 estimated rating.

Q: What is the consensus rating of CNTY stock?
A: The consensus rating for CNTY is Sell.

Q: What is the prediction period for CNTY stock?
A: The prediction period for CNTY is (n+4 weeks).
https://math.stackexchange.com/questions/2976009/does-the-functor-v-mapsto-v-v-for-dim-v-infty-reflect-monomorphisms
# Does the functor $V\mapsto V^{**}/V$ (for $\dim V=\infty$) reflect monomorphisms, epimorphisms or isomorphisms?

Let $$\textbf{Vect}_\infty$$ be the category of infinite dimensional vector spaces (over some fixed ground field), and consider the endofunctor $$F$$ of $$\textbf{Vect}_\infty$$ whose effect on objects is given by $$F(V):=V^{**}/V$$, and whose effect on morphisms is the obvious one. (Here $$V^{**}/V$$ is the cokernel of the canonical monomorphism $$V\to V^{**}$$.)

Does $$F$$ reflect monomorphisms? Does it reflect epimorphisms? Does it reflect isomorphisms? (Recall that $$F$$ reflects monomorphisms if the condition that $$F(f)$$ is a monomorphism implies that $$f$$ is also a monomorphism. Reflection of epimorphisms and isomorphisms is defined similarly.)

• If $V$ is an infinite dimensional vector space, if $K$ is the ground field, and if $f$ is the endomorphism $(\lambda,v)\mapsto(0,v)$ of $K\times V$, then $F(f)$ is (isomorphic to) the identity of $F(V)$. This shows that the answer to the three questions is No. – Pierre-Yves Gaillard Oct 30 '18 at 9:31

The endofunctor $$F:\mathbf{Vect}_\infty\rightsquigarrow\mathbf{Vect}_\infty$$ does not reflect monomorphisms. Pick $$V$$ to be the vector space of sequences $$(x_1,x_2,x_3,\ldots)$$ with finitely many non-zero terms. Choose $$f:V\to V$$ to be the left shift operator $$(x_1,x_2,x_3,\ldots)\mapsto (x_2,x_3,x_4,\ldots).$$ Clearly, $$f$$ is not monic. We claim that $$F(f)$$ is monic. The kernel of $$F(f)$$ consists of $$\alpha +V$$ with $$\alpha \in V^{**}$$ satisfying $$f^{**}\alpha \in V$$. Because $$f$$ is an epimorphism, $$f^{**}\alpha=f(v)$$ for some $$v\in V$$. Now, for an arbitrary $$\varphi\in V^*$$, we have $$\alpha(\varphi\circ f)=f^{**}\alpha(\varphi)=f(v)(\varphi)=\varphi\big(f(v)\big)=(\varphi\circ f)(v)=v(\varphi\circ f).$$ Therefore, $$(\alpha-v)(f^*\varphi)=(\alpha-v)(\varphi\circ f)=0$$ for all $$\varphi\in V^*$$.
For convenience, let $$e_j\in V$$ denote the sequence $$(0,0,0,\ldots,0,1,0,0,0,\ldots)$$, where the only $$1$$ is at the $$j$$th term, and all other terms are $$0$$. So, $$V=\bigoplus_{j=1}^\infty Ke_j$$ if $$K$$ is the base field, and $$V^*$$ can be identified with $$\prod_{j=1}^\infty Ke_j$$. Hence, we can identify $$V$$ as a subspace of $$V^*$$ as well. Observe that $$V^*=Ke_1\oplus \operatorname{im}f^*$$. That is, if $$\lambda=\alpha(e_1)$$, then $$\alpha=v+\big(\lambda-v_1\big)e_1\in V,$$ where $$v=(v_1,v_2,v_3,\ldots)$$. That is, $$F(f)$$ is monic, but $$f$$ is not.

Note that $$F(f)$$ in the example above is actually an isomorphism. To show that $$F(f)$$ is epic, we observe that for $$\beta\in V^{**}$$, there exists $$\alpha \in V^{**}$$ such that $$\beta=f^{**}\alpha$$. We define $$\alpha(\varphi_1,\varphi_2,\varphi_3,\ldots)=\beta(\varphi_2,\varphi_3,\varphi_4,\ldots)$$ for all $$\varphi=(\varphi_1,\varphi_2,\varphi_3,\ldots)\in V^*$$. That is, $$f^{**}\alpha(\varphi_1,\varphi_2,\varphi_3,\ldots)=\alpha(0,\varphi_1,\varphi_2,\varphi_3,\ldots)=\beta(\varphi_1,\varphi_2,\varphi_3,\ldots)$$ for all $$\varphi\in V^{*}$$, so $$f^{**}\alpha=\beta$$. This proves that $$F(f)$$ is epic. Since it is monic, $$F(f)$$ is an isomorphism, but as we learned, $$f$$ is not an isomorphism. Therefore, $$F$$ does not reflect isomorphisms.

The endofunctor $$F$$ does not reflect epimorphisms. Pick $$V$$ to be the same as above, but now let $$f$$ be the right shift operator $$(x_1,x_2,x_3,\ldots)\mapsto (0,x_1,x_2,x_3,\ldots).$$ Clearly, $$f$$ is not epic. We claim that $$F(f)$$ is epic. Let $$\beta \in V^{**}$$. We want to find $$\alpha \in V^{**}$$ such that $$\beta-f^{**}\alpha\in V$$. With the identification $$V^*=\prod_{j=1}^\infty Ke_j$$ as before, we take $$\alpha(\varphi_1,\varphi_2,\varphi_3,\ldots)=\beta(0,\varphi_1,\varphi_2,\varphi_3,\ldots)$$ for all $$\varphi=(\varphi_1,\varphi_2,\varphi_3,\ldots)\in V^*$$ with $$\varphi_j=\varphi(e_j)$$.
Therefore, for all $$\varphi\in V^*$$, $$\big(\beta-f^{**}\alpha\big)(\varphi)=\beta(\varphi_1,0,0,0,\ldots)=\varphi_1\beta(e_1)=\beta(e_1)\ e_1(\varphi).$$ That is, $$\beta-f^{**}\alpha =\beta(e_1)\ e_1\in V.$$ Consequently, $$\beta+V=F(f)(\alpha +V)$$, and so $$F(f)$$ is epic without $$f$$ being epic.

However, the following is true. If $$f:V\to W$$ is such that $$F(f)$$ is monic, then $$W\cap \big(\operatorname{im}f^{**}\big)\subseteq \operatorname{im} f$$. Conversely, if $$f:V\to W$$ is such that $$W\cap \big(\operatorname{im}f^{**}\big)\subseteq \operatorname{im} f$$, with the extra condition that $$f$$ is monic, then $$F(f)$$ is monic.

To show this, note that the kernel of $$F(f)$$ consists of $$\alpha+V$$ with $$\alpha \in V^{**}$$ satisfying $$f^{**}\alpha \in W$$. Because $$F(f)$$ is monic, $$\alpha=v$$ for some $$v\in V$$. As $$f^{**}\alpha=w$$ for some $$w\in W\cap\big(\operatorname{im}f^{**}\big)$$, $$\varphi\big(f(v)\big)=(\varphi\circ f)(v)=v(\varphi\circ f)=\alpha(\varphi \circ f)=f^{**}\alpha(\varphi)=w(\varphi)=\varphi(w)$$ for all $$\varphi \in W^*$$. That is, $$\varphi\big(w-f(v)\big)=0$$ for every $$\varphi\in W^*$$. This shows that $$w=f(v)$$, and so $$w\in\operatorname{im}f$$.

The converse can be proven as follows. Let $$\alpha\in V^{**}$$ be such that $$\alpha+V\in\ker F(f)$$. Then, $$f^{**}\alpha \in W\cap \big(\operatorname{im} f^{**}\big)\subseteq \operatorname{im}f$$, so $$f^{**}\alpha=f(v)$$ for some $$v\in V$$. Thus, $$(\alpha-v)(\varphi\circ f)=f^{**}\alpha(\varphi)-f(v)(\varphi)=0$$ for all $$\varphi\in W^*$$; the injectivity of $$f$$ implies surjectivity of $$f^*:W^*\to V^*$$, which means $$\alpha=v$$.

• Thank you very much for this magnificent answer!!! However I don't understand the third point. It seems to me $F(f)$ is injective iff $f^{**}(\alpha)\in W$ implies $\alpha\in V$. I don't see why your $w$ is an arbitrary vector of $W$. – Pierre-Yves Gaillard Oct 29 '18 at 14:47
• Ach! You are very right. This part will be modified.
– user593746 Oct 29 '18 at 15:03 • Now I understand! Nice! - Just to make sure I understand correctly the status of my 3 questions after your post, do you agree that the reflection of isomorphisms (or conservatism) remains open? – Pierre-Yves Gaillard Oct 29 '18 at 15:36 • All of them have been answered, if I didn't make stupid mistakes. The same example that fails reflection of monomorphisms works as a counterexample for the reflection of isomorphisms too. – user593746 Oct 29 '18 at 15:37 • I will take a look at them later, but I cannot promise to be able to help. – user593746 Oct 29 '18 at 16:00
http://mathoverflow.net/questions/132619/are-the-elements-of-a-division-algebra-which-commute-with-all-commutators-in-the
# Are the elements of a division algebra which commute with all commutators in the center of the algebra?

I asked this question five days ago at http://math.stackexchange.com/questions/406669/are-the-elements-of-a-division-algebra-which-commute-with-all-commutators-in-the Some good people have given good comments there.

- This is a lemma in Ancohea's paper jstor.org/discover/10.2307/…. I cannot understand the proof. – yanyu Jun 3 '13 at 4:43
- Hmm, the proof of the lemma in that paper seems to suggest Ancohea assumes the division algebra is finite-dimensional over the center, but this is never clearly stated earlier in the paper. How about the following 1-sentence proof: the commutators over $k$ span the space of commutators over $\overline{k}$, so it suffices to treat the case of a matrix algebra, which you can treat by bare hands. QED – user30180 Jun 3 '13 at 6:22
https://ufn.ru/en/articles/2015/8/c/citation/en/refworks.html
# Physics of our days

# Physical laboratory at the center of the Galaxy

V I Dokuchaev a,b, Yu N Eroshenko a

a Institute for Nuclear Research, Russian Academy of Sciences, prosp. 60-letiya Oktyabrya 7a, Moscow, 117312, Russian Federation
b National Research Nuclear University 'MEPhI', Kashirskoe shosse 31, Moscow, 115409, Russian Federation

This paper reviews the physical processes that occur at the center of the Galaxy and are related to the supermassive black hole SgrA* residing there. The discovery of high-velocity S0 stars orbiting SgrA* allowed for the first time the measurement of the mass of this, our closest, supermassive black hole with 10% accuracy, with the result Mh = (4.1 ± 0.4) × 10^6 M⊙. Further monitoring can potentially discover the Newtonian precession of the S0 star orbits in the gravitational field of the black hole due to the invisible distributed matter. This will yield the "weight" of the elusive dark matter concentrated there and provide new information for the identification of dark matter particles. The weak accretion activity of the "dormant quasar" at the Galactic center occasionally shows up as quasiperiodic X-ray and near-IR oscillations with mean periods of 11 and 19 min, oscillations which can possibly be interpreted as related to the rotation frequency of the SgrA* event horizon and to the latitude oscillations of hot plasma spots in the accretion disk. Both these frequencies depend only on the black hole gravitational field and not on the accretion model. Using this interpretation yields quite accurate values both for the mass Mh and spin a (the Kerr rotation parameter) of SgrA*: Mh = (4.2 ± 0.2) × 10^6 M⊙ and a = 0.65 ± 0.05.

Keywords: black holes, Galactic center, dark matter
PACS: 95.35.+d, 97.60.Lf, 98.35.Gi, 98.35.Jk (all)
DOI: 10.3367/UFNe.0185.201508c.0829
URL: https://ufn.ru/en/articles/2015/8/c/

Citation: Dokuchaev V I, Eroshenko Yu N "Physical laboratory at the center of the Galaxy" Phys. Usp.
58 772–784 (2015)

RefWorks record:

```
RT Journal
T1 Physical laboratory at the center of the Galaxy
A1 Dokuchaev,V.I.
A1 Eroshenko,Yu.N.
PB Physics-Uspekhi
PY 2015
FD 10 Aug, 2015
JF Physics-Uspekhi
JO Phys. Usp.
VO 58
IS 8
SP 772-784
DO 10.3367/UFNe.0185.201508c.0829
LK https://ufn.ru/en/articles/2015/8/c/
```

Received: 30 May 2015, 9 June 2015

Original (in Russian): Dokuchaev V I, Eroshenko Yu N "Fizicheskaya laboratoriya v tsentre Galaktiki" [Physical laboratory at the center of the Galaxy] Usp. Fiz. Nauk 185 829–843 (2015); DOI: 10.3367/UFNr.0185.201508c.0829
http://physics.stackexchange.com/questions/28109/existence-of-electric-field-lines?answertab=oldest
# Existence Of Electric Field Lines [closed]

Can an electric field with field lines like so exist? One of my friends said it couldn't, as the field lines here are not conservative, so such a field cannot exist. Is he right? Or can it be made to exist?

closed as unclear what you're asking by Emilio Pisanty, Brandon Enright, Chris White, tpg2114, John Rennie Feb 3 '14 at 11:18

• Dont the Electric Field Lines mean the same thing? – The-Ever-Kid May 10 '12 at 12:37
• Yep. "Lines" is missing in your question. EDIT: I see, you have it in title, but you dont have it in the text. Never mind. – Pygmalion May 10 '12 at 12:39
• "Existence Of Electric Field Lines" & "as the field lines here" Ive used the word like twice...so.....okay im editing it. – The-Ever-Kid May 10 '12 at 12:40
• OK, fine, I somehow missed title reading text :) – Pygmalion May 10 '12 at 12:41
• The link to this image is broken. Please replace it with an image hosted at the standard StackExchange image hosting service by using the image upload button on the question editing window. Or does anyone remember what that picture was? – Emilio Pisanty Feb 3 '14 at 0:46

Yes, your friend is right. Within electrostatics, an electric field $\vec{E}$ should be curl-free: $\vec{\nabla} \times\vec{E}= \vec{0}$. The drawn electric field lines look like the electric field is of the form $$E_x=E_x(y), \qquad E_y=0, \qquad E_z=0,$$ cf. the rule that to depict the magnitude $|\vec{E}|$, a selection of field lines is drawn such that the density of field lines (number of field lines per unit perpendicular area) at any location is proportional to $|\vec{E}|$ at that point.
Here the $x$-axis is horizontal, the $y$-axis is vertical, and the $z$-axis perpendicular to the plane. This is only curl-free if $E_x=E_x(y)$ is independent of $y$, which it isn't on the figure.

• Can we Make Something that generates a field like so 'coz I've Heard that induced electric field is non conservative – The-Ever-Kid May 10 '12 at 12:34
• He didn't say the electric field was conservative – Physiks lover May 10 '12 at 18:43
• @The-Ever-Kid: Perhaps you are referring to Faraday's induction law. With the help of an appropriate time-varying magnetic field, the electric field lines in the figure may be realized. – Qmechanic May 10 '12 at 18:51
• @Physiks lover: Well, indirectly. OP tagged it as electrostatics, which traditionally implies a curlfree electric field. – Qmechanic May 10 '12 at 18:56
• +1, although technically if there is nontrivial $z$-dependence, this could be a slice of a zero-curl configuration. (Admittedly one is not given any reason to believe that is the case.) – David Z May 10 '12 at 20:52
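For completeness, the curl of the assumed field profile can be written out directly (a short check, using the axes as defined in the answer):

```latex
\vec{\nabla}\times\vec{E}
  = \begin{vmatrix}
      \hat{x} & \hat{y} & \hat{z} \\
      \partial_x & \partial_y & \partial_z \\
      E_x(y) & 0 & 0
    \end{vmatrix}
  = \left(0,\; 0,\; -\frac{\partial E_x}{\partial y}\right),
```

so $\vec{\nabla}\times\vec{E}=\vec{0}$ forces $\partial E_x/\partial y = 0$, i.e. $E_x$ must be independent of $y$, which contradicts the drawn field lines.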
https://academ.us/list/math/
Exact Solutions of the Time Derivative Fokker-Planck Equation: A Novel Approach

In the present article, an approach to find the exact solution of the fractional Fokker-Planck equation is presented. It is based on transforming it to a system of first-order partial differential equations via the Hopf transformation, together with implementing the extended unified method; in addition, the fractional derivatives are reduced to non-autonomous ordinary derivatives. Thus the fractional Fokker-Planck equation is reduced to non-autonomous classical ones. Some explicit solutions of the classical, fractional time derivative Fokker-Planck equation are obtained. It is shown that the solution of the Fokker-Planck equation is bi-Gaussian. It is found that a high friction coefficient plays a significant role in lowering the standard deviation. Further, it is found that fractionality has a stronger effect than fractality. It is worth mentioning that the mixture of Gaussians is a powerful tool in machine learning. Further, varying the order of the fractional time derivatives results in only slight effects on the probability distribution function. Also, it is shown that the mean and mean square of the velocity vary slowly.

A descent view on Mitchell's theorem

In this short note, we give a new proof of Mitchell's theorem that $L_{T\left(n\right)} K(Z) \cong 0$ for $n \geq 2$. Instead of reducing the problem to delicate representation theory, we use recently established hyperdescent technology for chromatically-localized algebraic K-theory.

Confluent Darboux transformations, isospectral deformations, and Exceptional Legendre polynomials

Exceptional orthogonal polynomials are families of orthogonal polynomials that arise as solutions of Sturm-Liouville eigenvalue problems. They generalize the classical families of Hermite, Laguerre, and Jacobi polynomials by allowing for polynomial sequences that are missing a finite number of "exceptional" degrees.
In this note we sketch the construction of multi-parameter exceptional Legendre polynomials by considering the isospectral deformation of the classical Legendre operator. Using confluent Darboux transformations and a technique from inverse scattering theory, we obtain a fully explicit description of the operators and polynomials in question.

Diffusions on a space of interval partitions: The two-parameter model

In this paper, we study interval partition diffusions with Poisson–Dirichlet$(\alpha,\theta)$ stationary distribution for parameters $\alpha\in(0,1)$ and $\theta\ge 0$. This extends previous work on the cases $(\alpha,0)$ and $(\alpha,\alpha)$ and builds on our recent work on measure-valued diffusions. We work on spaces of interval partitions with $\alpha$-diversity. These processes can be viewed as diffusions on the boundary of a branching graph of integer compositions. The additional order and diversity structure of such interval partitions is essential for applications to continuum random tree models such as stable CRTs and limit structures of other regenerative tree growth processes, where intervals correspond to masses of spinal subtrees (or spinal bushes) in spinal order and diversities give distances between any two spinal branch points. We further show that our processes can be extended to enter continuously from the Hausdorff completion of our state space.

Todxs cuentan: cultivating diversity in combinatorics

This article describes the efforts of the SFSU-Colombia Combinatorics Initiative to build a research and learning community between California and Colombia. It seeks to broaden and deepen representation in mathematics, based on four underlying principles: 1. Mathematical potential is distributed equally among different groups, irrespective of geographic, demographic, and economic boundaries. 2. Everyone can have joyful, meaningful, and empowering mathematical experiences. 3.
Mathematics is a powerful, malleable tool that can be shaped and used differently by various communities to serve their needs. 4. Every student deserves to be treated with dignity and respect.

Todxs cuentan: building community and welcoming humanity from the first day of class

Everyone can have joyful, meaningful, and empowering academic experiences; but no single academic experience is joyful, meaningful, and empowering to everyone. How do we build academic spaces where every participant can thrive? Audre Lorde advises us to use our differences to our advantage. bell hooks highlights the key role of building community while addressing power dynamics. Rochelle Gutiérrez emphasizes the importance of welcoming students' full humanity. This note discusses some efforts to implement these ideas in a university classroom, focusing on the first day of class.

Three solutions for a new Kirchhoff-type problem

This article concerns the existence of multiple solutions for a new Kirchhoff-type problem with negative modulus. We prove that there exist three nontrivial solutions when the parameter is small enough, via variational methods and algebraic analysis. Moreover, our fundamental tools are the Mountain Pass Lemma, the Ekeland variational principle, and the Minimax principle.

Error Estimates for Deep Learning Methods in Fluid Dynamics

In this study, we provide error estimates and stability analysis of deep learning techniques for certain partial differential equations including the incompressible Navier-Stokes equations. In particular, we obtain explicit error estimates (in suitable norms) for the solution computed by optimizing a loss function in a Deep Neural Network (DNN) approximation of the solution, with a fixed complexity.
A Poisson basis theorem for symmetric algebras of infinite-dimensional Lie algebras We consider when the symmetric algebra of an infinite-dimensional Lie algebra, equipped with the natural Poisson bracket, satisfies the ascending chain condition (ACC) on Poisson ideals. We define a combinatorial condition on a graded Lie algebra which we call Dicksonian because it is related to Dickson's lemma on finite subsets of $\mathbb N^k$. Our main result is: Theorem. If $\mathfrak g$ is a Dicksonian graded Lie algebra over a field of characteristic zero, then the symmetric algebra $S(\mathfrak g)$ satisfies the ACC on radical Poisson ideals. As an application, we establish this ACC for the symmetric algebra of any graded simple Lie algebra of polynomial growth, and for the symmetric algebra of the Virasoro algebra. We also derive some consequences connected to the Poisson primitive spectrum of finitely Poisson-generated algebras. Quaternionic numerical range of complex matrices The paper further explores the computation of the quaternionic numerical range of a complex matrix. We prove a modified version of a conjecture by So and Tompson. Specifically, we show that the shape of the quaternionic numerical range for a complex matrix depends on the complex numerical range and two real values. We establish under which conditions the bild of a complex matrix coincides with its complex numerical range and when the quaternionic numerical range is convex. Iterative Pre-Conditioning for Expediting the Gradient-Descent Method: The Distributed Linear Least-Squares Problem This paper considers the multi-agent linear least-squares problem in a server-agent network. In this problem, the system comprises multiple agents connected to a server, each agent holding a set of local data points. The goal for the agents is to compute a linear mathematical model that optimally fits the collective data points held by all the agents, without sharing their individual local data points.
This goal can be achieved, in principle, using the server-agent variant of the traditional iterative gradient-descent method. The gradient-descent method converges linearly to a solution, and its rate of convergence is lower bounded by the conditioning of the agents' collective data points. If the data points are ill-conditioned, the gradient-descent method may require a large number of iterations to converge. We propose an iterative pre-conditioning technique that mitigates the deleterious effect of the conditioning of data points on the rate of convergence of the gradient-descent method. We rigorously show that the resulting pre-conditioned gradient-descent method, with the proposed iterative pre-conditioning, achieves superlinear convergence when the least-squares problem has a unique solution. In general, the convergence is linear with an improved rate of convergence in comparison to the traditional gradient-descent method and the state-of-the-art accelerated gradient-descent methods. We further illustrate the improved rate of convergence of our proposed algorithm through experiments on different real-world least-squares problems in both noise-free and noisy computation environments. Commutator length of powers in free products of groups Given groups $A$ and $B$, what is the minimal commutator length of the 2020th (for instance) power of an element $g\in A*B$ not conjugate to elements of the free factors? The exhaustive answer to this question is still unknown, but we can give an almost complete answer: this minimum is one of two numbers (simply depending on $A$ and $B$). Other similar problems are also considered. Numerical Methods for a Diffusive Class of Nonlocal Operators In this paper we develop a numerical scheme based on quadratures to approximate solutions of integro-differential equations involving convolution kernels, $\nu$, of diffusive type. In particular, we assume $\nu$ is symmetric and exponentially decaying at infinity.
We consider problems posed in bounded domains and in $\mathbb{R}$. In the case of bounded domains with nonlocal Dirichlet boundary conditions, we show the convergence of the scheme for kernels that have positive tails, but that can take on negative values. When the equations are posed on all of $\mathbb{R}$, we show that our scheme converges for nonnegative kernels. Since nonlocal Neumann boundary conditions lead to an equivalent formulation as in the unbounded case, we show that these last results also apply to the Neumann problem. Efficient multiscale algorithms for simulating nonlocal optical response of metallic nanostructure arrays In this paper, we consider numerical simulations of the nonlocal optical response of metallic nanostructure arrays inside a dielectric host, which is of particular interest to the nanoplasmonics community due to many unusual properties and potential applications. Mathematically, it is described by Maxwell's equations with discontinuous coefficients coupled with a set of Helmholtz-type equations defined only on the domains of metallic nanostructures. To solve this challenging problem, we develop an efficient multiscale method consisting of three steps. First, we extend the system into the domain occupied by the dielectric medium in a novel way, resulting in a coupled system with rapidly oscillating coefficients. A rigorous analysis of the error between the solutions of the original system and the extended system is given. Second, we derive the homogenized system and define the multiscale approximate solution for the extended system by using the multiscale asymptotic method. Third, to fix the inaccuracy of the multiscale asymptotic method inside the metallic nanostructures, we solve the original system in each metallic nanostructure separately with boundary conditions given by the multiscale approximate solution. A fast algorithm based on the $LU$ decomposition is proposed for solving the resulting linear systems.
By applying the multiscale method, we obtain results that are in good agreement with those obtained by solving the original system directly, at a much lower computational cost. Numerical examples are provided to validate the efficiency and accuracy of the proposed method. Performance of Underwater Wireless Optical Communications in the Presence of Cascaded Mixture Exponential-Generalized Gamma Turbulence Underwater wireless optical communication is one of the critical technologies for buoy-based high-speed cross-sea surface communication, where the communication nodes are vertically deployed. Due to the vertically inhomogeneous nature of the underwater environment, seawater is usually vertically divided into multiple layers with different parameters that reflect the real environment. In this work, we consider a generalized UWOC channel model that contains $N$ layers. To capture the effects of air bubbles and temperature gradients on channel statistics, we model each layer by a mixture Exponential-Generalized Gamma (EGG) distribution. We derive the PDF and CDF of the end-to-end SNR in exact closed form. Then, unified BER and outage expressions using OOK and BPSK are also derived. The performance and behavior of common vertical underwater optical communication scenarios are thoroughly analyzed through the appropriate selection of parameters. All the derived expressions are verified via Monte Carlo simulations. Functional Limit Theorems of moving averages of Hermite processes and an application to homogenization We generalise the homogenisation theorem in \cite{Gehringer-Li-homo,Gehringer-Li-tagged} for a passive tracer in a fractional Gaussian field to fractional non-Gaussian fields. We also obtain the limit theorems of normalized functionals of Hermite-Volterra processes, extending the result in \cite{Diu-Tran} to power series with fast decaying coefficients.
We obtain either convergence to a Wiener process, in the short-range dependent case, or to a Hermite process, in the long-range dependent case. Furthermore, we prove convergence in the multivariate case with both short- and long-range dependent components and give an application to homogenization of fast/slow systems. Todxs cuentan in ECCO: community and belonging in mathematics The Encuentro Colombiano de Combinatoria (ECCO) is an international summer school that welcomes students and researchers with a wide variety of mathematical and personal experiences. ECCO has taught us a lot about what it might mean to truly find community and belonging in a mathematical space. The goal of this article is to share a few of the lessons that we have learned from helping to build it. Noncommutative Networks on a Cylinder In this paper, a double quasi Poisson bracket in the sense of Van den Bergh is constructed on the space of noncommutative weights of arcs of a directed graph embedded in a disk or cylinder $\Sigma$, which gives rise to the quasi Poisson bracket of G. Massuyeau and V. Turaev on the group algebra $\mathbf k\pi_1(\Sigma,p)$ of the fundamental group of a surface based at $p\in\partial\Sigma$. This bracket also induces a noncommutative Goldman Poisson bracket on the cyclic space $\mathcal C_\natural$, which is a $\mathbf k$-linear space of unbased loops. We show that the induced double quasi Poisson bracket between boundary measurements can be described via noncommutative $r$-matrix formalism. This gives a more conceptual proof of the result of N. Ovenhouse that traces of powers of the Lax matrix form an infinite collection of noncommutative Hamiltonians in involution with respect to the noncommutative Goldman bracket on $\mathcal C_\natural$.
Mesh sampling and weighting for the hyperreduction of nonlinear Petrov-Galerkin reduced-order models with local reduced-order bases The energy-conserving sampling and weighting (ECSW) method is a hyperreduction method originally developed for accelerating the performance of Galerkin projection-based reduced-order models (PROMs) associated with large-scale finite element models, when the underlying projected operators need to be frequently recomputed as in parametric and/or nonlinear problems. In this paper, this hyperreduction method is extended to Petrov-Galerkin PROMs where the underlying high-dimensional models can be associated with arbitrary finite element, finite volume, and finite difference semi-discretization methods. Its scope is also extended to cover local PROMs based on piecewise-affine approximation subspaces, such as those designed for mitigating the Kolmogorov $n$-width barrier issue associated with convection-dominated flow problems. The resulting ECSW method is shown in this paper to be robust and accurate. In particular, its offline phase is shown to be fast and parallelizable, and the potential of its online phase for large-scale applications of industrial relevance is demonstrated for turbulent flow problems with $O(10^7)$ and $O(10^8)$ degrees of freedom. For such problems, the online part of the ECSW method proposed in this paper for Petrov-Galerkin PROMs is shown to enable wall-clock time and CPU time speedup factors of several orders of magnitude while delivering exceptional accuracy. Sharp upper diameter bounds for compact shrinking Ricci solitons We give a sharp upper diameter bound for a compact shrinking Ricci soliton in terms of its scalar curvature integral and Perelman's entropy functional. Sharpness can occur at round spheres. The proof mainly relies on the sharp logarithmic Sobolev inequality of gradient shrinking Ricci solitons and a Vitali-type covering argument.
Binomial ideals of domino tilings In this paper, we consider the set of all domino tilings of a cubiculated region. The primary question we explore is: How can we move from one tiling to another? Tiling spaces can be viewed as spaces of subgraphs of a fixed graph with a fixed degree sequence. Moves to connect such spaces have been explored in algebraic statistics. Thus, we approach this question from an applied algebra viewpoint, making new connections between domino tilings, algebraic statistics, and toric algebra. Using results from toric ideals of graphs, we are able to describe moves that connect the tiling space of a given cubiculated region of any dimension. This is done by studying binomials that arise from two distinct domino tilings of the same region. Additionally, we introduce tiling ideals and flip ideals and use these ideals to restate what it means for a tiling space to be flip connected. Finally, we show that if $R$ is a $2$-dimensional simply connected cubiculated region, any binomial arising from two distinct tilings of $R$ can be written in terms of quadratic binomials. As a corollary to our main result, we obtain an alternative proof of the fact that the set of domino tilings of a $2$-dimensional simply connected region is connected by flips. Generic Fibers of Parahoric Hitchin Systems In this paper, we study parahoric Hitchin systems over smooth projective curves with structure group a semisimple simply connected group. We describe the geometry of generic fibers of parahoric Hitchin fibrations using root stacks. We work over an algebraically closed field under a mild assumption on the characteristic.
All of these can be treated as a generalization of the $\mathrm{GL}_n$ case in [SWW19]. Unifying Compactly Supported and Matern Covariance Functions in Spatial Statistics The Mat{\'e}rn family of covariance functions has played a central role in spatial statistics for decades, being a flexible parametric class with one parameter determining the smoothness of the paths of the underlying spatial field. This paper proposes a new family of spatial covariance functions, which stems from a reparameterization of the generalized Wendland family. As for the Mat{\'e}rn case, the new class allows for a continuous parameterization of the smoothness of the underlying Gaussian random field, being additionally compactly supported. More importantly, we show that the proposed covariance family generalizes the Mat{\'e}rn model, which is attained as a special limit case. The practical implication of our theoretical results questions the effective flexibility of the Mat{\'e}rn covariance from modeling and computational viewpoints. Our numerical experiments elucidate the speed of convergence of the proposed model to the Mat{\'e}rn model. We also inspect the level of sparseness of the associated (inverse) covariance matrix and the asymptotic distribution of the maximum likelihood estimator under increasing and fixed domain asymptotics. The effectiveness of our proposal is illustrated by analyzing a georeferenced dataset on maximum temperatures over the southeastern United States, and performing a re-analysis of a large spatial point referenced dataset of yearly total precipitation anomalies. A Vectorial Approach to Unbalanced Optimal Mass Transport Unbalanced optimal mass transport (OMT) seeks to remove the conservation of mass constraint by adding a source term to the standard continuity equation in the Benamou-Brenier formulation of OMT. In this note, we show how the addition of the source fits into the vector-valued OMT framework.
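To make the smoothness parameterization in the Mat\'ern covariance abstract above concrete: under the standard Mat\'ern parameterization (assumed here for illustration; the paper itself works with a reparameterized generalized Wendland family), the value $\nu = 1/2$ recovers the exponential covariance, which the sketch below checks numerically.

```python
import numpy as np
from scipy.special import gamma, kv

def matern(d, nu, ell=1.0, sigma2=1.0):
    """Standard Matern covariance, valid for distances d > 0:
    sigma2 * 2^(1-nu)/Gamma(nu) * x^nu * K_nu(x), x = sqrt(2 nu) d / ell."""
    x = np.sqrt(2.0 * nu) * d / ell
    return sigma2 * 2.0 ** (1.0 - nu) / gamma(nu) * x ** nu * kv(nu, x)

d = np.linspace(0.1, 3.0, 50)
matern_half = matern(d, nu=0.5)   # should equal exp(-d) when ell = 1
```

The identity follows from $K_{1/2}(x) = \sqrt{\pi/(2x)}\, e^{-x}$; larger $\nu$ gives smoother sample paths, with the squared-exponential covariance as the $\nu \to \infty$ limit.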
On a quaternification of complex Lie algebras We give a definition of quaternion Lie algebras and of the quaternification of a complex or a real Lie algebra. The algebras so*(2n), sp(n) and sl(n,H) become quaternion Lie algebras. Then we prove that every simple Lie algebra has a quaternification. For the proof we follow the well-known argument due to Harish-Chandra, Chevalley and Serre to construct the simple Lie algebra from its corresponding root system. The root space decomposition of this quaternion Lie algebra will be given. Each root space of a fundamental root is 2-dimensional. For example, the quaternion special linear algebra sl(n,H) is the quaternification of the complex special linear algebra sl(n,C). Generalized Silver and Miller measurability We present some results about the burgeoning research area concerning set theory of the $\kappa$-reals. We focus on some notions of measurability coming from generalizations of Silver and Miller trees. We present analogies and mostly differences from the classical setting. Classification of genus-$1$ holomorphic Lefschetz pencils In this paper, we classify relatively minimal genus-$1$ holomorphic Lefschetz pencils up to smooth isomorphism. We first show that such a pencil is isomorphic to either the pencil on $\mathbb{P}^1\times \mathbb{P}^1$ of bi-degree $(2,2)$ or a blow-up of the pencil on $\mathbb{P}^2$ of degree $3$, provided that no fiber of a pencil contains an embedded sphere. (Note that one can easily classify genus-$1$ Lefschetz pencils with an embedded sphere in a fiber.) We further determine the monodromy factorizations of these pencils and show that the isomorphism class of a blow-up of the pencil on $\mathbb{P}^2$ of degree $3$ does not depend on the choice of blown-up base points.
We also show that the genus-$1$ Lefschetz pencils constructed by Korkmaz-Ozbagci (with nine base points) and Tanaka (with eight base points) are respectively isomorphic to the pencils on $\mathbb{P}^2$ and $\mathbb{P}^1\times \mathbb{P}^1$ above; in particular, these are both holomorphic. Automata groups generated by Cayley machines of groups of nilpotency class two We give presentations of automata groups generated by Cayley machines of finite groups of nilpotency class two and show that these automata groups are all cross-wired lamplighters. Kernel Embedding based Variational Approach for Low-dimensional Approximation of Dynamical Systems Transfer operators such as the Perron-Frobenius or Koopman operators play a key role in modeling and analysis of complex dynamical systems, as they allow linear representations of nonlinear dynamics by transforming the original state variables to feature spaces. However, it remains challenging to identify the optimal low-dimensional feature mappings from data. The variational approach for Markov processes (VAMP) provides a comprehensive framework for the evaluation and optimization of feature mappings based on the variational estimation of modeling errors, but it still suffers from a flawed assumption on the transfer operator and therefore sometimes fails to capture the essential structure of system dynamics. In this paper, we develop a powerful alternative to VAMP, called kernel embedding based variational approach for dynamical systems (KVAD). By using the distance measure of functions in the kernel embedding space, KVAD effectively overcomes the theoretical and practical limitations of VAMP. In addition, we develop a data-driven KVAD algorithm for seeking the ideal feature mapping within a subspace spanned by given basis functions, and numerical experiments show that the proposed algorithm can significantly improve the modeling accuracy compared to VAMP.
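For readers unfamiliar with the variational setting that KVAD refines, a toy version of the VAMP-style construction (an illustrative sketch under assumed notation, not the paper's KVAD algorithm) estimates the singular values of the transfer operator of a two-state Markov chain directly from simulated data:

```python
import numpy as np

# Toy VAMP-style estimate: simulate a two-state Markov chain, use
# one-hot feature mappings, and read off the singular values of
# C00^{-1/2} C01 C11^{-1/2}, which approximate the leading singular
# values of the transfer (Koopman) operator.  Illustrative only.
rng = np.random.default_rng(1)
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])        # true eigenvalues: 1.0 and 0.8

n = 50_000
states = np.empty(n, dtype=int)
states[0] = 0
for t in range(1, n):
    states[t] = rng.choice(2, p=P[states[t - 1]])

X = np.eye(2)[states[:-1]]        # features chi(x_t)
Y = np.eye(2)[states[1:]]         # features chi(x_{t+1})
m = n - 1
C00, C01, C11 = X.T @ X / m, X.T @ Y / m, Y.T @ Y / m

def inv_sqrt(C):
    """Inverse matrix square root of a symmetric positive definite C."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** -0.5) @ V.T

sing_vals = np.linalg.svd(inv_sqrt(C00) @ C01 @ inv_sqrt(C11),
                          compute_uv=False)
```

The estimated singular values should be close to 1.0 and 0.8, the spectrum of the chain's transition matrix; KVAD replaces the implicit function-space geometry of this construction with a kernel-embedding distance.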
Distribution of genus among numerical semigroups with fixed Frobenius number A numerical semigroup is a sub-semigroup of the natural numbers that has a finite complement. The size of its complement is called the genus and the largest number in the complement is called its Frobenius number. We consider the set of numerical semigroups with a fixed Frobenius number $f$ and analyse their genus. We find the asymptotic distribution of genus in this set of numerical semigroups and show that it is a product of a Gaussian and a power series. We show that almost all numerical semigroups with Frobenius number $f$ have genus close to $\frac{3f}{4}$. We denote the number of numerical semigroups of Frobenius number $f$ by $N(f)$. While $N(f)$ is not monotonic, we prove that $N(f)<N(f+2)$ for every $f$. Fitting ideals in two-variable equivariant Iwasawa theory and an application We study equivariant Iwasawa theory for two-variable abelian extensions of an imaginary quadratic field. One of the main goals of this paper is to describe the Fitting ideals of Iwasawa modules using $p$-adic $L$-functions. We also provide an application to Selmer groups of elliptic curves with complex multiplication. Large deviation principle for the three dimensional planetary geostrophic equations of large-scale ocean circulation with small multiplicative noise We demonstrate the large deviation principle in the small noise limit for the three dimensional stochastic planetary geostrophic equations of large-scale ocean circulation. In this paper, we first prove the well-posedness of weak solutions to this system by the method of monotonicity. A recently developed method, the weak convergence method, has been employed in studying large deviations; this method is essentially based on the main result of \cite{ba2}, which discloses the variational representation of exponential integrals with respect to the Brownian noise.
The It\^{o} inequality and the Burkholder-Davis-Gundy inequality are the main tools in our proofs, and the weak convergence method introduced by Budhiraja, Dupuis and Ganguly in \cite{ba3} is also used to establish the large deviation principle. Domination number of middle graphs In this paper, we study the domination number of middle graphs. Indeed, we obtain tight bounds for this number in terms of the order of the graph. We also compute the domination number of some families of graphs such as star graphs, double star graphs, path graphs, cycle graphs, wheel graphs, complete graphs, complete bipartite graphs and friendship graphs, explicitly. Moreover, some Nordhaus-Gaddum-like relations are presented for the domination number of middle graphs. A Channel Model of Transceivers for Multiterminal Secret Key Agreement Information theoretic secret key agreement is impossible without making initial assumptions. One type of initial assumption is correlated random variables that are generated by using a noisy channel that connects the terminals. Terminals use the correlated random variables and communication over a reliable public channel to arrive at a shared secret key. Previous channel models assume that each terminal either controls one input to the channel, or receives one output variable of the channel. In this paper, we propose a new channel model of transceivers where each terminal simultaneously controls an input variable and observes an output variable of the (noisy) channel. We give upper and lower bounds for the secret key capacity (i.e., highest achievable key rate) of this transceiver model, and prove the secret key capacity under the conditions that the public communication is noninteractive and input variables of the noisy channel are independent. On the invertibility in periodic ARFIMA models The present paper characterizes the invertibility and causality conditions of periodic ARFIMA (PARFIMA) models.
We first discuss the conditions in the multivariate case by considering the corresponding p-variate stationary ARFIMA models. Second, we construct the conditions using the univariate case and deduce a new infinite autoregressive representation for the PARFIMA model. The results are investigated through a simulation study. From formal to actual Puiseux series solutions of algebraic differential equations of first order The existence and uniqueness of formal Puiseux series solutions of non-autonomous algebraic differential equations of the first order at a nonsingular point of the equation is proven. The convergence of those Puiseux series is established. Several new examples are provided. Relationships to the celebrated Painlev\'e theorem and lesser-known results of Petrovi\'c are discussed in detail. Emergent dynamics of the Lohe Hermitian sphere model with frustration We study emergent dynamics of the Lohe hermitian sphere (LHS) model, which can be derived from the Lohe tensor model \cite{H-P2} as a complex counterpart of the Lohe sphere (LS) model. The Lohe hermitian sphere model describes aggregate dynamics of point particles on the hermitian sphere $\mathbb{H}\mathbb{S}^d$ lying in ${\mathbb C}^{d+1}$, and the coupling in the LHS model consists of two terms. For an identical ensemble with the same free flow dynamics, we provide a sufficient framework leading to the complete aggregation in which all point particles form a giant one-point cluster asymptotically. In contrast, for a non-identical ensemble, we also provide a sufficient framework for the practical aggregation. Our sufficient framework is formulated in terms of coupling strengths and initial data. We also provide several numerical examples and compare them with our analytical results.
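The complete aggregation claimed for identical ensembles can be watched numerically. The sketch below simulates the real Lohe sphere (LS) model with zero free flow and illustrative assumed parameters (a toy for the real-sphere counterpart, not the paper's hermitian model): all particles collapse to a one-point cluster.

```python
import numpy as np

# Euler simulation of the real Lohe sphere model with identical
# particles and zero free flow:
#   dx_i/dt = (kappa/N) * sum_j (x_j - <x_i, x_j> x_i).
# Parameters are illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(0)
N, kappa, dt, steps = 6, 1.0, 0.01, 5000

x = rng.normal(size=(N, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)    # start on the unit sphere

for _ in range(steps):
    gram = x @ x.T                               # pairwise inner products
    drift = (kappa / N) * (x.sum(axis=0)
                           - gram.sum(axis=1, keepdims=True) * x)
    x = x + dt * drift
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # re-project to sphere

consensus = (x @ x.T).min()   # -> 1 when all particles coincide
```

The minimum pairwise inner product approaching 1 is exactly the "giant one-point cluster" of the abstract; non-identical frequencies or frustration would instead yield only practical aggregation.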
Joint Uplink-and-Downlink Optimization of 3D UAV Swarm Deployment for Wireless-Powered NB-IoT Networks This paper investigates full-duplex orthogonal-frequency-division multiple access (OFDMA) based wireless-powered Internet-of-Things (IoT) networks enabled by multiple unmanned aerial vehicles (UAVs). In this paper, a swarm of UAVs is first deployed in three dimensions (3D) to simultaneously charge all devices, i.e., a downlink (DL) charging period, and then flies to new locations within this area to collect information from scheduled devices in several epochs via OFDMA due to the potentially limited number of channels available in Narrowband IoT (NB-IoT), i.e., an uplink (UL) communication period. To maximize the UL throughput of IoT devices, we jointly optimize the UL-and-DL 3D deployment of the UAV swarm, including the device-UAV association, the scheduling order, and the UL-DL time allocation. In particular, the DL energy harvesting (EH) threshold of devices and the UL signal decoding threshold of UAVs are taken into consideration when studying the problem. Besides, both line-of-sight (LoS) and non-line-of-sight (NLoS) channel models are studied depending on the position of sensors and UAVs. The influence of the potentially limited number of channels in NB-IoT is also considered by studying the IoT scheduling policy. Two scheduling policies, a near-first (NF) policy and a far-first (FF) policy, are studied. It is shown that the NF scheme outperforms the FF scheme in terms of sum throughput maximization, whereas the FF scheme outperforms the NF scheme in terms of system fairness. Coherent sheaves on the stack of Langlands parameters We formulate a few conjectures on some hypothetical coherent sheaves on the stacks of arithmetic local Langlands parameters, including their roles played in the local-global compatibility in the Langlands program. We survey some known results as evidence for these conjectures.
On approximations of the point measures associated with the Brownian web by means of the fractional step method and the discretization of the initial interval The rate of the weak convergence in the fractional step method for the Arratia flow is established in terms of the Wasserstein distance between the images of the Lebesgue measure under the action of the flow. We introduce finite-dimensional densities describing sequences of collisions in the Arratia flow and derive an explicit expression for them. With the initial interval discretized, the convergence of the corresponding approximations of the point measure associated with the Arratia flow is discussed in terms of such densities. Polynomial-time algorithms for Multimarginal Optimal Transport problems with structure Multimarginal Optimal Transport (MOT) has recently attracted significant interest due to its many applications. However, in most applications, the success of MOT is severely hindered by a lack of sub-exponential time algorithms. This paper develops a general theory about "structural properties" that make MOT tractable. We identify two such properties: decomposability of the cost into either (i) local interactions and simple global interactions; or (ii) low-rank interactions and sparse interactions. We also provide strong evidence that (iii) repulsive costs make MOT intractable by showing that several such problems of interest are NP-hard to solve--even approximately. These three structures are quite general, and collectively they encompass many (if not most) current MOT applications. We demonstrate our results on a variety of applications in machine learning, statistics, physics, and computational geometry.
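The two-marginal problem that MOT generalizes is itself solvable in polynomial time via entropic regularization; the Sinkhorn sketch below (standard textbook material under assumed parameters, not the structured MOT algorithms of the abstract) illustrates the kind of scaling iteration involved.

```python
import numpy as np

# Entropic two-marginal optimal transport via Sinkhorn scaling --
# the classical building block that multimarginal OT generalizes.
rng = np.random.default_rng(0)
n, eps = 50, 0.1
x, y = np.sort(rng.random(n)), np.sort(rng.random(n))
mu = np.full(n, 1.0 / n)                 # source marginal
nu = np.full(n, 1.0 / n)                 # target marginal

C = (x[:, None] - y[None, :]) ** 2       # quadratic cost matrix
K = np.exp(-C / eps)                     # Gibbs kernel

u, v = np.ones(n), np.ones(n)
for _ in range(2000):                    # alternating marginal scalings
    u = mu / (K @ v)
    v = nu / (K.T @ u)

plan = u[:, None] * K * v[None, :]       # approximate optimal coupling
```

A $k$-marginal version replaces the matrix $K$ by a $k$-way tensor, so each scaling step costs $n^k$ in general; the paper's structural properties (graphical, low-rank plus sparse) are precisely what make such steps implementable in polynomial time.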
On an $L^2$ extension theorem from log-canonical centres with log-canonical measures With a view to proving an Ohsawa-Takegoshi type $L^2$ extension theorem with $L^2$ estimates given with respect to the log-canonical (lc) measures, a sequence of measures each supported on lc centres of specific codimension defined via multiplier ideal sheaves, this article aims at providing evidence and possible means to prove the $L^2$ estimates on compact K\"ahler manifolds $X$. A holomorphic family of $L^2$ norms on the ambient space $X$ is introduced which is shown to "deform holomorphically" to an $L^2$ norm with respect to an lc-measure. Moreover, the latter norm is shown to be invariant under a certain normalisation which leads to a "non-universal" $L^2$ estimate on compact $X$. Explicit examples on $\mathbb{P}^3$ with detailed computation are presented to verify the expected $L^2$ estimates for extensions from lc centres of various codimensions and to provide hints for the proof of the estimates in general. On singular control for Lévy processes We revisit the classical singular control problem of minimizing running and controlling costs. The problem arises in inventory control, as well as in healthcare management and mathematical finance. Existing studies have shown the optimality of a barrier strategy when the process is driven by Brownian motion or L\'evy processes with one-sided jumps. Under the assumption that the running cost function is convex, we show the optimality of a barrier strategy for a general class of L\'evy processes. Numerical results are also given. $q$-dimensions of highest weight crystals and cyclic sieving phenomenon In this paper, we compute explicitly the $q$-dimensions of highest weight crystals modulo $q^n-1$ for a quantum group of arbitrary finite type under a certain assumption, and interpret the modulo computations in terms of the cyclic sieving phenomenon. This interpretation gives an affirmative answer to the conjecture by Alexandersson and Amini.
As an application, under the assumption that $\lambda$ is a partition of length $<m$ and there exists a fixed point in $\mathsf{SST}_m(\lambda)$ under the action $\mathsf{c}$ arising from the crystal structure, we show that the triple $(\mathsf{SST}_m(\lambda), \langle \mathsf{c} \rangle, \mathsf{s}_{\lambda}(1,q,q^2, \ldots, q^{m-1}))$ exhibits the cyclic sieving phenomenon if and only if $\lambda$ is of the form $((am)^{b})$, where either $b=1$ or $b=m-1$. Moreover, in this case, we give an explicit formula to compute the number of all orbits of size $d$ for each divisor $d$ of $n$. A Whitney type theorem for surfaces: characterising graphs with locally planar embeddings We prove that a graph G embeds r-locally planarly in a pseudo-surface if and only if a certain matroid associated to the graph G is co-graphic. This extends Whitney's abstract planar duality theorem from 1932. Characterising graphs with no subdivision of a wheel of bounded diameter We prove that a graph has an r-bounded subdivision of a wheel if and only if it does not have a graph-decomposition of locality r and width at most two. Local 2-separators How can sparse graph theory be extended to large networks, where algorithms whose running time is estimated using the number of vertices are not good enough? I address this question by introducing 'Local Separators' of graphs. Applications include: 1. A unique decomposition theorem for graphs along their local 2-separators analogous to the 2-separator theorem; 2. an exact characterisation of graphs with no bounded subdivision of a wheel; 3. an analogue of the tangle-tree theorem of Robertson and Seymour, where the decomposition-tree is replaced by a general graph. Asymptotic energy distribution of one-dimensional nonlinear wave equation In this work we consider the defocusing nonlinear wave equation in one-dimensional space. We show that almost all energy is located near the light cone $|x|=|t|$ as time tends to infinity.
We also prove that any light cone will eventually contain some energy. As an application we obtain a result about the asymptotic behaviour of solutions to the focusing one-dimensional wave equation with compactly supported initial data. Sobolev spaces of vector-valued functions We are concerned here with Sobolev-type spaces of vector-valued functions. For an open subset $\Omega\subset\mathbb{R}^N$ and a Banach space $V$, we compare the classical Sobolev space $W^{1,p}(\Omega, V)$ with the so-called Sobolev-Reshetnyak space $R^{1,p}(\Omega, V)$. We see that, in general, $W^{1,p}(\Omega, V)$ is a closed subspace of $R^{1,p}(\Omega, V)$. As a main result, we obtain that $W^{1,p}(\Omega, V)=R^{1,p}(\Omega, V)$ if, and only if, the Banach space $V$ has the Radon-Nikod\'ym property. On the Balasubramanian-Ramachandra method close to Re(s)=1 We study the problem of how to get good lower estimates for the integral $$\int_T^{T+H} |\zeta(\sigma+it)| dt,$$ when $H \ll 1$ is small and $\sigma$ is close to $1$, as well as related integrals for other Dirichlet series, by using ideas related to the Balasubramanian-Ramachandra method. We use kernel functions constructed by means of the Paley-Wiener theorem as well as the kernel function of Ramachandra. We also notice that the Fourier transform of Ramachandra's kernel function is in fact a $K$-Bessel function. This simplifies some aspects of the Balasubramanian-Ramachandra method, since it allows the use of the theory of Bessel functions.
Homogenization of nonstationary periodic Maxwell system in the case of constant permeability

In $L_2({\mathbb R}^3;{\mathbb C}^3)$, we consider a selfadjoint operator ${\mathcal L}_\varepsilon$, $\varepsilon >0$, given by the differential expression $\mu_0^{-1/2}\operatorname{curl} \eta(\mathbf{x}/\varepsilon)^{-1} \operatorname{curl} \mu_0^{-1/2} - \mu_0^{1/2}\nabla \nu(\mathbf{x}/\varepsilon) \operatorname{div} \mu_0^{1/2}$, where $\mu_0$ is a constant positive matrix, and the matrix-valued function $\eta(\mathbf{x})$ and the real-valued function $\nu(\mathbf{x})$ are periodic with respect to some lattice, positive definite and bounded. We study the behavior of the operator-valued functions $\cos (\tau {\mathcal L}_\varepsilon^{1/2})$ and ${\mathcal L}_\varepsilon^{-1/2} \sin (\tau {\mathcal L}_\varepsilon^{1/2})$ for $\tau \in {\mathbb R}$ and small $\varepsilon$. It is shown that these operators converge to the corresponding operator-valued functions of the operator ${\mathcal L}^0$ in the norm of operators acting from the Sobolev space $H^s$ (with a suitable $s$) to $L_2$. Here ${\mathcal L}^0$ is the effective operator with constant coefficients. Also, an approximation with corrector in the $(H^s \to H^1)$-norm for the operator ${\mathcal L}_\varepsilon^{-1/2} \sin (\tau {\mathcal L}_\varepsilon^{1/2})$ is obtained. We prove error estimates and study the sharpness of the results regarding the type of the operator norm and regarding the dependence of the estimates on $\tau$. The results are applied to homogenization of the Cauchy problem for the nonstationary Maxwell system in the case where the magnetic permeability is equal to $\mu_0$ and the dielectric permittivity is given by the matrix $\eta(\mathbf{x}/\varepsilon)$.
Homotopy classification of maps between $\mathbf{A}_n^2$-complexes

In this article, the author improves Baues's homotopy classification of maps between indecomposable $(n-1)$-connected $(n+2)$-dimensional finite CW-complexes $X,Y$, $n>3$, by finding a generating set for each abelian group $[X,Y]$. Using these generators, the author first finds splitting cofiber sequences which imply Zhu-Pan's decomposability result for smash products of the complexes, and second obtains partial results on the groups of homotopy classes of self-homotopy equivalences of the complexes and some of their natural subgroups.

Self-similar Gaussian Markov processes

We define a two-parameter family of Gaussian Markov processes, which includes Brownian motion as a special case. Our main result is that any centered self-similar Gaussian Markov process is a constant multiple of a process from this family. This yields short and easy proofs of some non-Markovianity results concerning variants of fractional Brownian motion (most of which are known). In the proof of our main theorem, we use some properties of additive functions, i.e. solutions of Cauchy's functional equation. In an appendix, we show that a certain self-similar Gaussian process with asymptotically stationary increments is not a semimartingale.

Algebraic discretization of time-independent Hamiltonian systems using a Lie-group/algebra approach

In this paper, time-independent Hamiltonian systems are investigated via a Lie-group/algebra formalism. The (unknown) solution linked with the Hamiltonian is considered to be a Lie-group transformation of the initial data, where the group parameter acts as the time. The time-evolution generator (i.e. the Lie algebra associated to the group transformation) is constructed at an algebraic level, hence avoiding discretization of the time-derivatives in the discrete case.
This formalism makes it possible to investigate the continuous and discrete versions of time for time-independent Hamiltonian systems, and no additional information on the system is required (besides the Hamiltonian itself and the initial conditions of the solution). When the time-independent Hamiltonian system is integrable in the sense of Liouville, one can use the action-angle coordinates to straighten the time-evolution generator and construct an exact scheme (i.e. a scheme without discretization errors). In addition, a method to analyse the errors of approximate/numerical schemes is provided. These considerations are applied to well-known examples associated with the one-dimensional harmonic oscillator.

On some $\overrightarrow{p(x)}$ anisotropic elliptic equations in unbounded domain

We study a class of nonlinear elliptic problems with Dirichlet conditions in the framework of anisotropic Sobolev spaces with variable exponent, involving an anisotropic operator on an unbounded domain $\Omega\subset \mathbb{R}^{N}$ ($N \geq 2$). We prove the existence of entropy solutions, avoiding any sign condition or coercivity assumption on the lower-order terms.

An information geometry approach for robustness analysis in uncertainty quantification of computer codes

Robustness analysis is an emerging field in the domain of uncertainty quantification. It consists of analysing the response of a computer model with uncertain inputs to the perturbation of one or several of its input distributions. Thus, a practical robustness analysis methodology should rely on a coherent definition of a distribution perturbation. This paper addresses this issue by exposing a rigorous way of perturbing densities. The proposed methodology is based on the Fisher distance on manifolds of probability distributions. A numerical method to calculate perturbed densities in practice is presented. This method comes from Lagrangian mechanics and consists of solving a system of ordinary differential equations.
This perturbation definition is then used to compute quantile-oriented robustness indices. The resulting Perturbed-Law based sensitivity Indices (PLI) are illustrated on several numerical models. This methodology is also applied to an industrial study (simulation of a loss-of-coolant accident in a nuclear reactor), where several tens of the model's physical parameters are uncertain, with limited knowledge concerning their distributions.

$X$-States From a Finite Geometric Perspective

It is found that $15$ different types of two-qubit $X$-states split naturally into two sets (of cardinality $9$ and $6$) once their entanglement properties are taken into account. We characterize both the validity and the entangled nature of the $X$-states with maximally-mixed subsystems in terms of certain parameters, and show that their properties are related to a special class of geometric hyperplanes of the symplectic polar space of order two and rank two. Finally, we introduce the concept of hyperplane-states and briefly address their non-local properties.

Special idempotents and projections

We define, for any special matching of a finite graded poset, an idempotent, regressive and order-preserving function. We consider the monoid generated by such functions. The idempotents of this monoid are called special idempotents. They are interval retracts. Some of them realize a kind of parabolic map and are called special projections. We prove that, in Eulerian posets, the image of a special projection, and its complement, are graded induced subposets. In a finite Coxeter group, all projections onto right and left parabolic quotients are special projections, and so are some projections onto double quotients. We extend our results to special partial matchings.

$L^p$-theory for Cauchy-transform on the unit disk

Let $\mathbb{D}$ be the unit disk and $\varphi\in L^p(\mathbb{D}, \mathrm{d}A)$, where $1\leq p\leq\infty$.
For $z\in\mathbb{D}$, the Cauchy transform on $\mathbb{D}$, denoted by $\mathcal{P}$, is defined as follows: $$\mathcal{P}[\varphi](z)=-\int_{\mathbb{D}}\left(\frac{\varphi(w)}{w-z}+\frac{z\overline{\varphi(w)}}{1-\bar{w}z}\right)\mathrm{d}A(w).$$ The Beurling transform on $\mathbb{D}$, denoted by $\mathcal{H}$, is then defined as the $z$-derivative of $\mathcal{P}$. In this paper, by using Hardy-type inequalities and Bessel functions, we show that $\|\mathcal{P}\|_{L^2\to L^2}=\alpha\approx1.086$, where $\alpha$ is a solution to the equation $2J_0(2/\alpha)-\alpha J_1(2/\alpha)=0$, and $J_0$, $J_1$ are Bessel functions. Moreover, for $p>2$, by using Taylor expansion, Parseval's formula and hypergeometric functions, we also prove that $\|\mathcal{P}\|_{L^p\to L^{\infty}}=2(\Gamma(2-q)/\Gamma^2(2-\frac{q}{2}))^{1/q}$, where $q=p/(p-1)$ is the conjugate exponent of $p$, and $\Gamma$ is the Gamma function. Finally, applying the same techniques developed in this paper, we show that the Beurling transform $\mathcal{H}$ acts as an isometry of $L^2(\mathbb{D}, \mathrm{d}A)$.

The Graham--Knuth--Patashnik recurrence: Symmetries and continued fractions

We study the triangular array defined by the Graham--Knuth--Patashnik recurrence $T(n,k) = (\alpha n + \beta k + \gamma)\, T(n-1,k)+(\alpha' n + \beta' k + \gamma') \, T(n-1,k-1)$ with initial condition $T(0,k) = \delta_{k0}$ and parameters $\mathbf{\mu} = (\alpha,\beta,\gamma, \alpha',\beta',\gamma')$. We show that the family of arrays $T(\mathbf{\mu})$ is invariant under a 48-element discrete group isomorphic to $S_3 \times D_4$. Our main result is to determine all parameter sets $\mathbf{\mu} \in \mathbb{C}^6$ for which the ordinary generating function $f(x,t) = \sum_{n,k=0}^\infty T(n,k) \, x^k t^n$ is given by a Stieltjes-type continued fraction in $t$ with coefficients that are polynomials in $x$.
We also exhibit some special cases in which $f(x,t)$ is given by a Thron-type or Jacobi-type continued fraction in $t$ with coefficients that are polynomials in $x$.

The stochastic heat equation as the limit of a stirring dynamics perturbed by a voter model

We prove that in dimension $d\le 3$ a modified density field of a stirring dynamics perturbed by a voter model converges to the stochastic heat equation.

A Note on Non-tangential Convergence for Schrödinger Operators

The goal of this note is to establish non-tangential convergence results for Schr\"{o}dinger operators along restricted curves. We consider the relationship between the dimension of this kind of approach region and the regularity of the initial data which implies convergence. As a consequence, we obtain an upper bound for $p$ such that the Schr\"{o}dinger maximal function is bounded from $H^{s}(\mathbb{R}^{n})$ to $L^{p}(\mathbb{R}^{n})$ for any $s > \frac{n}{2(n+1)}$.

A formula for the $r$-coloured partition function in terms of the sum of divisors function and its inverse

Let $p_{-r}(n)$ denote the $r$-coloured partition function, and let $\sigma(n)=\sum_{d|n}d$ denote the sum of positive divisors of $n$. The aim of this note is to prove the following: $$p_{-r}(n)=\theta(n)+\,\sum_{k=1}^{n-1}\frac{r^{k+1}}{(k+1)!} \sum_{\alpha_1\,= k}^{n-1} \, \sum_{\alpha_2\,= k-1}^{\alpha_1-1} \cdots \sum_{\alpha_k\, = 1}^{\alpha_{k-1}-1}\theta(n-\alpha_1) \theta(\alpha_1 -\alpha_2) \cdots \theta(\alpha_{k-1}-\alpha_k) \theta(\alpha_k),$$ where $\theta(n)=n^{-1}\, \sigma(n)$, together with its inverse $$\sigma(n) = n\,\sum_{r=1}^n \frac{(-1)^{r-1}}{r}\, \binom{n}{r}\, p_{-r}(n).$$

On the Distribution of the Sum of Málaga-$\mathcal{M}$ Random Variables and Applications

In this paper, a very accurate approximation method for the statistics of the sum of M\'{a}laga-$\mathcal{M}$ random variates with pointing error (MRVs) is proposed.
In particular, the probability density function of an MRV is approximated by a Fox's $H$-function through the moment-based approach. Then, the respective moment-generating function of the sum of $N$ MRVs is provided, based on which the average symbol error rate is evaluated for an $N$-branch maximal-ratio combining (MRC) receiver. The retrieved results show that the proposed approximate results match the exact simulated ones accurately. Additionally, the results show that the achievable diversity order increases as a function of the number of MRC diversity branches.

Double covers and extensions

In this paper we consider double covers of the projective space in relation with the problem of extensions of varieties, specifically of extensions of canonical curves to $K3$ surfaces and Fano 3-folds. In particular we consider $K3$ surfaces which are double covers of the plane branched over a general sextic: we prove that the general curve in the linear system pull-back of plane curves of degree $k\geq 7$ lies on a unique $K3$ surface. If $k\leq 6$ the general such curve is instead extendable to a higher-dimensional variety. In the cases $k=4,5,6$, this gives the existence of singular index $k$ Fano varieties of dimensions 8, 5, 3, and genera 17, 26, 37, respectively. For $k = 6$ we recover the Fano variety $\mathbf{P}(3, 1, 1, 1)$, one of only two Fano threefolds with canonical Gorenstein singularities with the maximal genus 37, found by Prokhorov. We show that the latter variety is not further extendable. We also study the extensions of smooth degree 2 sections of $K3$ surfaces of genus 3. In all these cases, we compute the co-rank of the Gauss--Wahl maps of the curves under consideration. Finally we observe that linear systems on double covers of the projective plane provide superabundant logarithmic Severi varieties.

Injectors in $π$-separable groups

Let $\pi$ be a set of primes.
We show that $\pi$-separable groups have a conjugacy class of $\mathfrak F$-injectors for suitable Fitting classes $\mathfrak F$, which coincide with the usual ones when specializing to soluble groups.

On local and global structures of transmission eigenfunctions and beyond

The (interior) transmission eigenvalue problems are a type of non-elliptic, non-selfadjoint and nonlinear spectral problems that arise in the theory of wave scattering. They connect to the direct and inverse scattering problems in many aspects in a delicate way. The properties of the transmission eigenvalues have been extensively and intensively studied over the years, whereas the intrinsic properties of the transmission eigenfunctions are much less studied. Recently, in a series of papers, several intriguing local and global geometric structures of the transmission eigenfunctions were discovered. Moreover, these long-hidden geometric properties produce some interesting applications, of both theoretical and practical importance, to direct and inverse scattering problems. This paper reviews those developments in the literature by summarizing the results obtained so far and discussing the rationales behind them. Side results of this paper include general formulations of several types of transmission eigenvalue problems, some interesting observations on the connection between transmission eigenvalue problems and several challenging inverse scattering problems, and several conjectures on the spectral properties of transmission eigenvalues and eigenfunctions, most of which are new to the literature.

Game-Theoretic Upper Expectations for Discrete-Time Finite-State Uncertain Processes

Game-theoretic upper expectations are joint (global) probability models that mathematically describe the behaviour of uncertain processes in terms of supermartingales: capital processes corresponding to available betting strategies.
Compared to the more common measure-theoretic expectation functionals, they are not bound by restrictive assumptions such as measurability or precision, yet succeed in preserving, or even generalising, many of their fundamental properties. We focus on a discrete-time setting where local state spaces are finite and, in this specific context, build on the existing work of Shafer and Vovk, the main developers of the framework of game-theoretic upper expectations. In a first part, we study Shafer and Vovk's characterisation of a local upper expectation and show how it is related to Walley's behavioural notion of coherence. The second part consists of a study of game-theoretic upper expectations on a more global level, where several alternative definitions, as well as a broad range of properties, are derived, e.g. the law of iterated upper expectations, compatibility with local models, and coherence properties. Our main contribution, however, concerns the continuity behaviour of these operators. We prove continuity with respect to non-increasing sequences of so-called lower cuts and continuity with respect to non-increasing sequences of finitary functions. We moreover show that the game-theoretic upper expectation is uniquely determined by its values on the domain of bounded-below limits of finitary functions, and additionally show that, for any such limit, the limiting sequence can be constructed in such a way that the game-theoretic upper expectation is continuous with respect to this particular sequence.

The Life and Mathematical Legacy of Thomas M. Liggett

Thomas Milton Liggett was a world-renowned UCLA probabilist, famous for his monograph Interacting Particle Systems. He passed away peacefully on May 12, 2020. This is a perspective article in memory of both Tom Liggett the person and Tom Liggett the mathematician.
A fractional degenerate parabolic-hyperbolic Cauchy problem with noise

We consider the Cauchy problem for a stochastic scalar parabolic-hyperbolic equation in any space dimension with nonlocal, nonlinear, and possibly degenerate diffusion terms. The equations are nonlocal because they involve fractional diffusion operators. We adapt the notion of stochastic entropy solution and provide a new technical framework to prove uniqueness. The existence proof relies on the vanishing viscosity method. Moreover, using bounded variation (BV) estimates for the vanishing viscosity approximations, we derive an explicit continuous dependence estimate on the nonlinearities and an error estimate for the stochastic vanishing viscosity method. In addition, we develop a uniqueness method \`a la Kruzkov for more general equations, where the noise coefficient may depend explicitly on the spatial variable.

Quartic Graphs with Minimum Spectral Gap

Aldous and Fill conjectured that the maximum relaxation time for the random walk on a connected regular graph with $n$ vertices is $(1+o(1)) \frac{3n^2}{2\pi^2}$. This conjecture can be rephrased in terms of the spectral gap as follows: the spectral gap (algebraic connectivity) of a connected $k$-regular graph on $n$ vertices is at least $(1+o(1))\frac{2k\pi^2}{3n^2}$, and the bound is attained for at least one value of $k$. We determine the structure of connected quartic graphs on $n$ vertices with minimum spectral gap, which enables us to show that the minimum spectral gap of a connected quartic graph on $n$ vertices is $(1+o(1))\frac{4\pi^2}{n^2}$. From this result, the Aldous--Fill conjecture follows for $k=4$.

A note on the asymptotic stability of the Semi-Discrete method for Stochastic Differential Equations

We study the asymptotic stability of the semi-discrete (SD) numerical method for the approximation of stochastic differential equations.
Recently, we examined the order of $\mathcal L^2$-convergence of the truncated SD method and showed that it can be arbitrarily close to $1/2$; see \textit{Stamatiou, Halidias (2019), Convergence rates of the Semi-Discrete method for stochastic differential equations, Theory of Stochastic Processes, 24(40)}. We show that the truncated SD method is able to preserve the asymptotic stability of the underlying SDE. Motivated by a numerical example, we also propose a different SD scheme, applying the Lamperti transformation to the original SDE, which we call the Lamperti semi-discrete (LSD) scheme. Numerical simulations support our theoretical findings.

A remark on quantum Hochschild homology

Beliakova-Putyra-Wehrli studied various kinds of traces, in relation to annular Khovanov homology. In particular, to a graded algebra and a graded bimodule over it, they associate a quantum Hochschild homology of the algebra with coefficients in the bimodule, and use this to obtain a deformation of the annular Khovanov homology of a link. A spectral refinement of the resulting invariant was recently given by Akhmechet-Krushkal-Willis. In this short note we observe that quantum Hochschild homology is a composition of two familiar operations, and give a short proof that it gives an invariant of annular links, in some generality. Much of this is implicit in Beliakova-Putyra-Wehrli's work.

A sequential optimality condition for Mathematical Programs with Cardinality Constraints

In this paper we propose an Approximate Weak stationarity ($AW$-stationarity) concept designed to deal with {\em Mathematical Programs with Cardinality Constraints} (MPCaC), and we prove that it is a legitimate optimality condition independently of any constraint qualification. Such a sequential optimality condition improves on the weaker stationarity conditions presented in a previous work.
Much research on sequential optimality conditions for nonlinear constrained optimization has been carried out in the last few years, including some works in the context of MPCC; as far as we know, however, no sequential optimality condition had been proposed for MPCaC problems. We also establish some relationships between our $AW$-stationarity and other usual sequential optimality conditions, such as AKKT, CAKKT and PAKKT. We point out that, despite the computational appeal of sequential optimality conditions, in this work we are not concerned with algorithmic consequences. Our aim is purely to discuss theoretical aspects of such conditions for MPCaC problems.

On the monoid of cofinite partial isometries of $\mathbb{N}$ with the usual metric

In this paper we show that the monoid $\mathbf{I}\mathbb{N}_{\infty}$ of all partial cofinite isometries of the positive integers does not embed isomorphically into the monoid $\mathbf{ID}_{\infty}$ of all partial cofinite isometries of the integers. Moreover, every non-annihilating homomorphism $\mathfrak{h}\colon \mathbf{I}\mathbb{N}_{\infty}\to\mathbf{ID}_{\infty}$ has the following property: the image $(\mathbf{I}\mathbb{N}_{\infty})\mathfrak{h}$ is isomorphic either to the two-element cyclic group $\mathbb{Z}_2$ or to the additive group of integers $\mathbb{Z}(+)$. We also prove that the monoid $\mathbf{I}\mathbb{N}_{\infty}$ is not finitely generated, and moreover $\mathbf{I}\mathbb{N}_{\infty}$ does not contain a minimal generating set.

On the duality of the symmetric strong diameter $2$ property in Lipschitz spaces

We characterise the weak$^*$ symmetric strong diameter $2$ property in Lipschitz function spaces by a property of the predual, the Lipschitz-free space. We call this new property decomposable octahedrality and study its duality with the symmetric strong diameter $2$ property in general.
For a Banach space to be decomposably octahedral it is sufficient that its dual space has the weak$^*$ symmetric strong diameter $2$ property. Whether this is also a necessary condition remains open.

Minimal Set of Generators of Ideals Defining Nilpotent Orbit Closures

Over a field of characteristic $0$, we construct a minimal set of generators of the defining ideals of closures of nilpotent conjugacy classes in the space of $n \times n$ matrices. This modifies a conjecture of Weyman and provides a complete answer to it.

Roaming in Restricted Acetaldehyde: a Phase Space Perspective

Recent experimental and theoretical results show that many molecules dissociate in a slow and complicated manner called roaming, which is due to a mechanism independent of the mechanisms for molecular and radical dissociation. While in most molecules the conventional molecular mechanism dominates roaming, acetaldehyde stands out by predominantly dissociating to products characteristic of roaming. This work contributes to the discussion of the prominence of roaming in (restricted) acetaldehyde from a dynamical systems perspective. We find two mechanisms, consisting of invariant phase space structures, that may lead to identical molecular products. One of them is a slow passage via the flat region that fits the term frustrated dissociation used to describe roaming, and is similar to the roaming mechanism in formaldehyde. The other mechanism is fast and avoids the flat region altogether. Trajectory simulations show that the fast mechanism is significantly more likely than roaming.

Hamiltonian cycles and 1-factors in 5-regular graphs

It is proven that for any integer $g \ge 0$ and $k \in \{ 0, \ldots, 10 \}$, there exist infinitely many 5-regular graphs of genus $g$ containing a 1-factorisation with exactly $k$ pairs of 1-factors that are perfect, i.e. form a hamiltonian cycle. For $g = 0$, this settles a problem of Kotzig from 1964.
Motivated by Kotzig and Labelle's "marriage" operation, we discuss two gluing techniques aimed at producing graphs of high cyclic edge-connectivity. We prove that there exist infinitely many planar 5-connected 5-regular graphs in which every 1-factorisation has zero perfect pairs. On the other hand, by the Four Colour Theorem and a result of Brinkmann and the first author, every planar 4-connected 5-regular graph satisfying a condition on its hamiltonian cycles has a linear number of 1-factorisations each containing at least one perfect pair. We also prove that every planar 5-connected 5-regular graph satisfying a stronger condition contains a 1-factorisation with at most nine perfect pairs; hence, every such graph admitting a 1-factorisation with ten perfect pairs has at least two edge-Kempe equivalence classes. The paper concludes with further results on edge-Kempe equivalence classes in planar 5-regular graphs.

Computing cohomology intersection numbers of GKZ hypergeometric systems

In this review article, we report on some recent advances in the computational aspects of cohomology intersection numbers of GKZ systems developed in \cite{GM}, \cite{MH}, \cite{MT} and \cite{MT2}. We also discuss the relation between intersection theory and the evaluation of an integral of a product of powers of absolute values of polynomials.

Generating Sparse Stochastic Processes Using Matched Splines

We provide an algorithm to generate trajectories of sparse stochastic processes that are solutions of linear ordinary differential equations driven by L\'evy white noises. A recent paper showed that these processes are limits in law of generalized compound-Poisson processes. Based on this result, we derive an off-the-grid algorithm that generates arbitrarily close approximations of the target process. Our method relies on a B-spline representation of generalized compound-Poisson processes. We illustrate numerically the validity of our approach.
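As a toy illustration of the kind of process discussed in the abstract above (not the authors' matched-spline algorithm; the function name, parameters, and noise model below are our own assumptions): a first-order linear ODE $X'(t)+\theta X(t)=w(t)$ driven by compound-Poisson noise $w$ has an explicit solution, a superposition of decaying exponentials anchored at the Poisson jump times, so its trajectories can be sampled directly.

```python
import math
import random

def ou_compound_poisson(theta, jump_rate, T, dt, seed=0):
    """Sample a trajectory of X'(t) + theta*X(t) = sum_k a_k delta(t - t_k),
    a sparse (OU-type) process driven by compound-Poisson noise.
    Jump times t_k follow a Poisson process of intensity jump_rate on [0, T];
    jump amplitudes a_k are standard Gaussian."""
    rng = random.Random(seed)
    # draw the Poisson jump times and their amplitudes
    jumps = []
    t = rng.expovariate(jump_rate)
    while t < T:
        jumps.append((t, rng.gauss(0.0, 1.0)))
        t += rng.expovariate(jump_rate)
    # superpose the Green's function exp(-theta*(t - t_k)) of d/dt + theta
    n = round(T / dt) + 1
    traj = []
    for i in range(n):
        ti = i * dt
        traj.append(sum(a * math.exp(-theta * (ti - tk))
                        for tk, a in jumps if tk <= ti))
    return traj

traj = ou_compound_poisson(theta=1.0, jump_rate=5.0, T=2.0, dt=0.01)
```

Between jumps the trajectory decays exponentially, which is the "sparse" signature: a piecewise-smooth path with isolated innovations, the discrete analogue of the Lévy-driven processes the abstract targets.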
Analysis of the second order BDF scheme with variable steps for the molecular beam epitaxial model without slope selection

In this work, we are concerned with the stability and convergence analysis of the second order BDF (BDF2) scheme with variable steps for the molecular beam epitaxial model without slope selection. We first show that the variable-step BDF2 scheme is convex and uniquely solvable under a weak time-step constraint. Then we show that it preserves an energy dissipation law if the adjacent time-step ratios satisfy $r_k:=\tau_k/\tau_{k-1}<3.561$. Moreover, with a novel discrete orthogonal convolution kernels argument and some new discrete convolutional inequalities, the $L^2$ norm stability and rigorous error estimates are established under the same step-ratio constraint that ensures the energy stability, i.e., $0<r_k<3.561$. This is the best result known in the literature. We finally adopt an adaptive time-stepping strategy to accelerate the computation of the steady state solution and confirm our theoretical findings by numerical examples.

Hartree-Fock type systems: existence of ground states and asymptotic behavior

In this paper we consider a Hartree-Fock type system made of two Schr\"odinger equations in the presence of a Coulomb interacting term and a cooperative pure power and subcritical nonlinearity, driven by a suitable parameter $\beta \geq 0$. We show the existence of semitrivial and vectorial ground state solutions depending on the parameters involved. The asymptotic behavior of these solutions with respect to the parameter $\beta$ is also studied.

Spaces of knots in the solid torus, knots in the thickened torus, and irreducible links in the 3-sphere

We recursively determine the homotopy type of the space of any irreducible framed link in the 3-sphere, modulo rotations. This leads us to the homotopy type of the space of any knot in the solid torus, thus answering a question posed by Arnold.
We similarly study spaces of unframed links in the 3-sphere, modulo rotations, and spaces of knots in the thickened torus. The subgroup of meridional rotations splits as a direct factor of the fundamental group of the space of an irreducible framed link. Its generators can be viewed as generalizations of the Gramain loop in the space of long knots. Taking the quotient by certain such rotations relates the spaces we study. All of our results generalize previous work of Hatcher and Budney. We provide many examples and explicitly describe generators of fundamental groups.

On abelian subcategories of triangulated categories

The stable module category of a selfinjective algebra is triangulated, but need not have any nontrivial $t$-structures, and in particular, full abelian subcategories need not arise as hearts of a $t$-structure. The purpose of this paper is to investigate full abelian subcategories of triangulated categories whose exact structures are related, and more precisely, to explore relations between invariants of finite-dimensional selfinjective algebras and full abelian subcategories of their stable module categories.

On the metabelian property of quotient groups of solvable groups of orientation-preserving homeomorphisms of the line

For the class of solvable groups of orientation-preserving homeomorphisms of the line containing a freely acting element, we establish the metabelianity of the quotient group $G/H_G$, where the elements of the normal subgroup $H_G$ are stabilizers of the minimal set. This fact is an important element in the classification theorem, used, in particular, in the study of Thompson's group $F$.

Boolean Types in Dependent Theories

The notion of a complete type can be generalized in a natural manner to allow assigning a value in an arbitrary Boolean algebra $B$ to each formula.
We show some basic results regarding the effect of the properties of $B$ on the behavior of such types, and show that they are particularly well behaved in the case of NIP theories. In particular, we generalize the third author's result about counting types, as well as the notion of a smooth type and extending a type to a smooth one. We then show that Keisler measures are tied to certain Boolean types and that some of the results can thus be transferred to measures - in particular, giving an alternative proof of the fact that every measure in a dependent theory can be extended to a smooth one. We also study the stable case. We consider this paper as an invitation for more research into the topic of Boolean types.

Profit maximization via capacity control for distribution logistics problems

We consider a distribution logistics scenario where a shipping operator, managing a limited amount of resources, receives a stream of collection requests, issued by a set of customers along a booking time-horizon, that refer to a future operational period. The shipping operator must then decide about accepting or rejecting each incoming request at the time it is issued, accounting for revenues but also considering resource consumption. In this context, the decision process is based on dynamically finding the best trade-off between the immediate return of accepting the request and the convenience of preserving capacity to possibly exploit more valuable future requests. We give a dynamic formulation of the problem aimed at maximizing the operator's revenues, accounting also for the operational distribution costs. Due to the "curse of dimensionality", the dynamic program cannot be solved optimally. For this reason, we propose a mixed-integer linear programming approximation, whose exact or approximate solutions provide the relevant information to apply some commonplace revenue management policies in real-time decision-making.
Adopting a capacitated vehicle routing problem as an underlying distribution application, we analyze the computational behaviour of the proposed techniques on a set of academic test problems. Outer invariance entropy for discrete-time linear systems on Lie groups We introduce discrete-time linear control systems on connected Lie groups and present an upper bound for the outer invariance entropy of admissible pairs. In the case of solvable Lie groups, the upper bound coincides with the outer invariance entropy. The open XXZ chain at $\Delta=-1/2$ and the boundary quantum Knizhnik-Zamolodchikov equations The open XXZ spin chain with the anisotropy parameter $\Delta=-\frac12$ and diagonal boundary magnetic fields that depend on a parameter $x$ is studied. For real $x>0$, the exact finite-size ground-state eigenvalue of the spin-chain Hamiltonian is explicitly computed. In a suitable normalisation, the ground-state components are characterised as polynomials in $x$ with integer coefficients. Linear sum rules and special components of this eigenvector are explicitly computed in terms of determinant formulas. These results follow from the construction of a contour-integral solution to the boundary quantum Knizhnik-Zamolodchikov equations associated with the $R$-matrix and diagonal $K$-matrices of the six-vertex model. A relation between this solution and a weighted enumeration of totally-symmetric alternating sign matrices is conjectured. A Probabilistic Numerical Extension of the Conjugate Gradient Method We present a Conjugate Gradient (CG) implementation of the probabilistic numerical solver BayesCG, whose error estimates are a fully integrated design feature, easy to compute, and competitive with the best existing estimators.
More specifically, we extend BayesCG to singular prior covariances, derive recursions for the posterior covariances, express the posteriors as projections, and establish that BayesCG retains the minimization properties over Krylov spaces regardless of the singular priors. We introduce a possibly singular Krylov prior covariance, under which the BayesCG posterior means coincide with the CG iterates and the posteriors can be computed efficiently. Because of its factored form, the Krylov prior is amenable to low-rank approximation, which produces an efficient BayesCG implementation as a CG method. We also introduce a probabilistic error estimator, the $S$-statistic. Although designed for sampling from BayesCG posteriors, its mean and variance under approximate Krylov priors can be computed with CG. An approximation of the $S$-statistic by a 95 percent credible interval avoids the cost of sampling altogether. Numerical experiments illustrate that the resulting error estimates are competitive with the best existing methods and are easy to compute. Bubbles with constant mean curvature, and almost constant mean curvature, in the hyperbolic space Given a constant $k>1$, let $Z$ be the family of round spheres of radius $\textrm{artanh}(k^{-1})$ in the hyperbolic space $\mathbb{H}^3$, so that any sphere in $Z$ has mean curvature $k$. We prove a crucial nondegeneracy result involving the manifold $Z$. As an application, we provide sufficient conditions on a prescribed function $\phi$ on $\mathbb{H}^3$, which ensure the existence of a ${\cal C}^1$-curve, parametrized by $\varepsilon\approx 0$, of embedded spheres in $\mathbb{H}^3$ having mean curvature $k +\varepsilon\phi$ at each point. Ensemble Control on Lie Groups Problems involving control of large ensembles of structurally identical dynamical systems, called \emph{ensemble control}, arise in numerous scientific areas from quantum control and robotics to brain medicine.
In many such applications, control can only be implemented at the population level, i.e., through broadcasting an input signal to all the systems in the population, and this new control paradigm challenges classical systems theory. In recent years, considerable efforts have been made to investigate controllability properties of ensemble systems, and most works focused on linear and some forms of bilinear and nonlinear ensemble systems. In this paper, we study controllability of a broad class of bilinear ensemble systems defined on semisimple Lie groups, for which we define the notion of ensemble controllability through a Riemannian structure of the state space Lie group. Leveraging the Cartan decomposition of semisimple Lie algebras in representation theory, we develop a \emph{covering method} that decomposes the state space Lie group into a collection of Lie subgroups generating the Lie group, which enables the determination of ensemble controllability by controllability of the subsystems evolving on these Lie subgroups. Using the covering method, we show the equivalence between ensemble and classical controllability, i.e., controllability of each individual system in the ensemble implies ensemble controllability, for bilinear ensemble systems evolving on semisimple Lie groups. This equivalence makes the examination of controllability for infinite-dimensional ensemble systems as tractable as for a single finite-dimensional system. The Tensor Quadratic Forms We consider the following data perturbation model, where the covariates incur multiplicative errors. For two $n \times m$ random matrices $U, X$, we denote by $U \circ X$ the Hadamard or Schur product, which is defined as $(U \circ X)_{ij} = (U_{ij}) \cdot (X_{ij})$.
In this paper, we study the subgaussian matrix variate model, where we observe the matrix variate data $X$ through a random mask $U$: \begin{equation*} {\mathcal X} = U \circ X \; \; \; \text{ where} \; \; \;X = B^{1/2} {\mathbb Z} A^{1/2}, \end{equation*} where ${\mathbb Z}$ is a random matrix with independent subgaussian entries, and $U$ is a mask matrix with either zero or positive entries, where ${\mathbb E} U_{ij} \in [0, 1]$ and all entries are mutually independent. Subsampling in rows, or columns, or random sampling of entries of $X$ are special cases of this model. Under the assumption of independence between $U$ and $X$, we introduce componentwise unbiased estimators for estimating the covariances $A$ and $B$, and prove concentration of measure bounds in the sense of guaranteeing the restricted eigenvalue conditions to hold on the estimator for $B$, when columns of the data matrix $X$ are sampled with different rates. Our results provide insight for sparse recovery for relationships among people (samples, locations, items) when features (variables, time points, user ratings) are present in the observed data matrix ${\mathcal X}$ with heterogeneous rates. Our proof techniques can certainly be extended to other scenarios. Quasinilpotent operators on separable Hilbert spaces have nontrivial invariant subspaces The invariant subspace problem is a well-known unsolved problem in functional analysis. While many partial results are known, the general case for complex, infinite-dimensional separable Hilbert spaces is still open. It has been shown that the problem can be reduced to the case of operators which are norm limits of nilpotents. One of the most important subcases is the one of quasinilpotent operators, for which the problem has been extensively studied for many years. In this paper, we will prove that every quasinilpotent operator has a nontrivial invariant subspace.
This will imply that all the operators for which the ISP has not been established yet are norm-limits of operators having nontrivial invariant subspaces. Optimal quantization for discrete distributions In this paper, we first determine the optimal sets of $n$-means and the $n$th quantization errors for all $1\leq n\leq 6$ for two nonuniform discrete distributions with support the set $\{1, 2, 3, 4, 5, 6\}$. Then, for a probability distribution $P$ with support $\{\frac 1n : n\in \mathbb N\}$ associated with a mass function $f$, given by $f(x)=\frac 1 {2^k}$ if $x=\frac 1 k$ for $k\in \mathbb N$, and zero otherwise, we determine the optimal sets of $n$-means and the $n$th quantization errors for all positive integers up to $n=300$. Further, for a probability distribution $P$ with support the set $\mathbb N$ of natural numbers associated with a mass function $f$, given by $f(x)=\frac 1 {2^k}$ if $x=k$ for $k\in \mathbb N$, and zero otherwise, we determine the optimal sets of $n$-means and the $n$th quantization errors for all positive integers $n$. Finally, we discuss, for a discrete distribution, how to recover the probability distribution when the optimal sets are given. A Particular Upper Expectation as Global Belief Model for Discrete-Time Finite-State Uncertain Processes To model discrete-time finite-state uncertain processes, we argue for the use of a global belief model in the form of an upper expectation that is the most conservative one under a set of basic axioms. Our motivation for these axioms, which describe how local and global belief models should be related, is based on two possible interpretations for an upper expectation: a behavioural one similar to Walley's, and an interpretation in terms of upper envelopes of linear expectations. We show that the most conservative upper expectation satisfying our axioms, that is, our model of choice, coincides with a particular version of the game-theoretic upper expectation introduced by Shafer and Vovk.
This has two important implications: it guarantees that there is a unique most conservative global belief model satisfying our axioms; and it shows that Shafer and Vovk's model can be given an axiomatic characterisation and thereby provides an alternative motivation for adopting this model, even outside their game-theoretic framework. Finally, we relate our model to the upper expectation resulting from a traditional measure-theoretic approach. We show that this measure-theoretic upper expectation also satisfies the proposed axioms, which implies that it is dominated by our model or, equivalently, the game-theoretic model. Moreover, if all local models are precise, all three models coincide. Algebraic and symplectic viewpoint on compactifications of two-dimensional cluster varieties of finite type In this article we explore compactifications of cluster varieties of finite type in complex dimension two. Cluster varieties can be viewed as the Spec of a ring generated by theta functions, and a compactification of such varieties can be given by a grading on that ring, which can be described by positive polytopes [17]. In the examples we exploit, the cluster variety can be interpreted as the complement of certain divisors in del Pezzo surfaces. In the symplectic viewpoint, they can be described via almost toric fibrations over $\R^2$ (after completion). After identifying them as almost toric manifolds, one can symplectically view them inside other del Pezzo surfaces. So we can identify other symplectic compactifications of the same cluster variety, which we expect should also correspond to different algebraic compactifications. Both viewpoints are presented here and several compactifications have their corresponding polytopes compared. The finiteness of the cluster mutations is explored to provide cycles in the graph describing monotone Lagrangian tori in del Pezzo surfaces connected via almost toric mutation [34].
Sklyanin-like algebras for ($q$-)linear grids and ($q$-)para-Krawtchouk polynomials S-Heun operators on linear and $q$-linear grids are introduced. These operators are special cases of Heun operators and are related to Sklyanin-like algebras. The continuous Hahn and big $q$-Jacobi polynomials are functions on which these S-Heun operators have natural actions. We show that the S-Heun operators encompass both the bispectral operators and Kalnins and Miller's structure operators. These four structure operators realize special limit cases of the trigonometric degeneration of the original Sklyanin algebra. Finite-dimensional representations of these algebras are obtained from a truncation condition. The corresponding representation bases are finite families of polynomials: the para-Krawtchouk and $q$-para-Krawtchouk ones. A natural algebraic interpretation of these polynomials that had been missing is thus obtained. We also recover the Heun operators attached to the corresponding bispectral problems as quadratic combinations of the S-Heun operators. On transitivity and connectedness of Cayley graphs of gyrogroups In this work, we explore edge direction, transitivity, and connectedness of Cayley graphs of gyrogroups. More specifically, we find conditions for a Cayley graph of a gyrogroup to be undirected, transitive, and connected. We also show a relationship between the cosets of a certain type of subgyrogroups and the connected components of Cayley graphs. Some examples regarding these findings are provided. Coloring the normalized Laplacian for oriented hypergraphs Independence number, coloring number and related constants are investigated in the setting of oriented hypergraphs using the spectrum of the normalized Laplace operator.
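The normalized Laplacian at the heart of the last abstract can be made concrete for ordinary graphs (a minimal sketch, not from the paper, which works with oriented hypergraphs): $L = I - D^{-1/2} A D^{-1/2}$ has spectrum in $[0,2]$, and for a connected graph the top eigenvalue equals $2$ exactly when the graph is bipartite, i.e. 2-colorable.

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for a simple graph with adjacency matrix A."""
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# 4-cycle C4 (bipartite) versus triangle K3 (odd cycle, chromatic number 3)
C4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
K3 = np.ones((3, 3)) - np.eye(3)

eig_C4 = np.linalg.eigvalsh(normalized_laplacian(C4))  # [0, 1, 1, 2]
eig_K3 = np.linalg.eigvalsh(normalized_laplacian(K3))  # [0, 1.5, 1.5]
```

The top eigenvalue reaching $2$ for $C_4$ but not for $K_3$ is the simplest spectral witness of 2-colorability that such Laplacian methods generalize.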
Dressing a new integrable boundary of the nonlinear Schrödinger equation We further develop the method of dressing the boundary for the focusing nonlinear Schr\"odinger equation (NLS) on the half-line to include the new boundary condition presented by Zambon. Additionally, we lay the foundation for comparing the solutions to those produced by the mirror-image technique by explicitly computing the change of scattering data under the Darboux transformation. In particular, the developed method is applied to insert pure soliton solutions. Total Variation Diminishing (TVD) method for Elastohydrodynamic Lubrication (EHL) problem on Parallel Computers In this article, we offer a novel numerical approach for the solution of elastohydrodynamic lubrication line and point contact problems using a class of total variation diminishing (TVD) schemes on parallel computers. A direct parallel approach is presented by introducing a novel solver, named projected alternate quadrant interlocking factorization (PAQIF), for solving the discrete variational inequality. For the one-dimensional EHL case, we use a weighted change in the Newton-Raphson approximation to compute the Jacobian matrix in the form of a banded matrix by dividing the whole computational domain into two subregions. Such subregion matrices are assembled by measuring the ratio of the diffusive coefficient to the discrete grid length on the domain of interest. The banded matrix is then passed to parallel computers for solving the discrete linearized complementarity system using the PAQIF algorithm. The idea extends easily to the two-dimensional EHL case by taking an appropriate splitting in the alternating x and y directions. Numerical experiments are performed and analyzed to validate the performance of the computed solution on serial and parallel computers.
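As a generic illustration of the total variation diminishing property invoked in the last abstract (a toy sketch, not the paper's PAQIF solver: first-order upwind for linear advection stands in for the TVD discretization), the total variation $\sum_i |u_{i+1}-u_i|$ of the numerical solution never grows:

```python
import numpy as np

def upwind_step(u, cfl):
    """One step of first-order upwind for u_t + u_x = 0 on a periodic grid.
    The update is a convex combination for 0 <= cfl <= 1, hence monotone and TVD."""
    return u - cfl * (u - np.roll(u, 1))

def total_variation(u):
    """Total variation on the periodic grid, including the wrap-around jump."""
    return np.abs(np.diff(np.concatenate([u, u[:1]]))).sum()

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = (x < 0.5).astype(float)       # step profile: TV = 2 on the periodic domain
tv0 = total_variation(u)
for _ in range(50):
    u = upwind_step(u, cfl=0.8)
tv1 = total_variation(u)          # TVD: tv1 <= tv0
```

Higher-order TVD schemes of the class used in the paper add limited correction terms to this upwind flux while preserving the same non-increase of total variation.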
About finite posets R and S with \# H(P,R) <= \# H(P,S) for every finite poset P Finite posets $R$ and $S$ are studied with $\# {\cal H}(P,R) \leq \# {\cal H}(P,S)$ for every finite poset $P$, where ${\cal H}(P,Q)$ is the set of order homomorphisms from $P$ to $Q$. It is shown that under an additional regularity condition, $\# {\cal H}(P,R) \leq \# {\cal H}(P,S)$ for every finite poset $P$ is equivalent to $\# {\cal S}(P,R) \leq \# {\cal S}(P,S)$ for every finite poset $P$, where ${\cal S}(P,Q)$ is the set of strict order homomorphisms from $P$ to $Q$. A method is developed for the rearrangement of a finite poset $R$, resulting in a poset $S$ with $\# {\cal H}(P,R) \leq \# {\cal H}(P,S)$ for every finite poset $P$. The results are used in constructing pairs of posets $R$ and $S$ with this property. Scanning electron diffraction tomography of strain Strain engineering is used to obtain desirable materials properties in a range of modern technologies. Direct nanoscale measurement of the three-dimensional strain tensor field within these materials has, however, been limited by a lack of suitable experimental techniques and data analysis tools. Scanning electron diffraction has emerged as a powerful tool for obtaining two-dimensional maps of strain components perpendicular to the incident electron beam direction. Extension of this method to recover the full three-dimensional strain tensor field has, though, been restricted by the absence of a formal framework for tensor tomography using such data. Here, we show that it is possible to reconstruct the full non-symmetric strain tensor field as the solution to an ill-posed tensor tomography inverse problem. We then demonstrate the properties of this tomography problem both analytically and computationally, highlighting why incorporating precession to perform scanning precession electron diffraction may be important.
We establish a general framework for non-symmetric tensor tomography and demonstrate computationally its applicability for achieving strain tomography with scanning precession electron diffraction data. One idea and two proofs of the KMT theorems Two proofs of the Koml\'os-Major-Tusn\'ady embedding theorems, one for the uniform empirical process and one for the simple symmetric random walk, are given. More precisely, what are proved are the univariate coupling results needed in the proofs, such as Tusn\'{a}dy's lemma. These proofs are modifications of existing proof architectures, one combinatorial (the original proof with many modifications, due to Cs\"{o}rg\~o, R\'{e}v\'{e}sz, Bretagnolle, Massart, Dudley, Carter, Pollard etc.) and one analytical (due to Sourav Chatterjee). There is one common idea to both proofs: we compare binomial and hypergeometric distributions among themselves, rather than with the Gaussian distribution. In the combinatorial approach, this involves comparing the Binomial(n,1/2) distribution with the Binomial(4n,1/2) distribution, which mainly involves comparison between the corresponding binomial coefficients. In the analytical approach, this reduces Chatterjee's method to coupling nearest neighbour Markov chains on integers so that they stay close. Foatic actions of the symmetric group and fixed-point homomesy We study maps on the set of permutations of n generated by the R\'enyi-Foata map intertwined with other dihedral symmetries (of a permutation considered as a 0-1 matrix). Iterating these maps leads to dynamical systems that in some cases exhibit interesting orbit structures, e.g., every orbit size being a power of two, and homomesic statistics (ones which have the same average over each orbit). In particular, the number of fixed points (aka 1-cycles) of a permutation appears to be homomesic with respect to three of these maps, even in one case where the orbit structures are far from nice.
For the most interesting such "Foatic" action, we give a heap analysis and recursive structure that allows us to prove the fixed-point homomesy and orbit properties, but two other cases remain conjectural. On equations over direct powers of algebraic structures We study systems of equations over graphs, posets and matroids. We give criteria for when a direct power of such algebraic structures is equationally Noetherian. Moreover, we prove that any direct power of a finite algebraic structure is weakly equationally Noetherian. IEEE 802.11be: Wi-Fi 7 Strikes Back As hordes of data-hungry devices challenge its current capabilities, Wi-Fi strikes back with 802.11be, alias Wi-Fi 7. This brand-new amendment promises a (r)evolution of unlicensed wireless connectivity as we know it. With its standardisation process being consolidated, we provide an updated digest of 802.11be essential features, vouching for multi-AP coordination as a must-have for critical and latency-sensitive applications. We then get down to the nitty-gritty of one of its most enticing implementations, coordinated beamforming, for which our standard-compliant simulations confirm near-tenfold reductions in worst-case delays. Interacting Conformal Carrollian Theories: Cues from Electrodynamics We construct the free Lagrangian of the magnetic sector of Carrollian electrodynamics, which, surprisingly, is not obtainable as an ultra-relativistic limit of Maxwellian Electrodynamics. The construction relies on the Helmholtz integrability condition for differential equations in a self-consistent algorithm working hand in hand with imposing invariance under the infinite-dimensional Conformal Carroll algebra (CCA). It requires inclusion of new fields in the dynamics, and the system is free of gauge redundancies. We calculate two-point functions in the free theory based entirely on symmetry principles.
We next add interaction (quartic) terms to the free Lagrangian, strictly constrained by conformal invariance and Carrollian symmetry. Finally, a successful dynamical realization of the infinite-dimensional CCA is presented at the level of charges, for the interacting theory. In conclusion, we calculate the Poisson brackets for these charges. Quantum field theory with dynamical boundary conditions and the Casimir effect II: Coherent states We have previously studied, in part I, the quantization of a mixed bulk-boundary system describing the coupled dynamics between a bulk quantum field confined to a spacetime with finite space slice and with timelike boundary, and a boundary observable defined on the boundary. Our bulk system is a quantum field in a spacetime with timelike boundary and a dynamical boundary condition, namely the boundary observable's equation of motion. Owing to important physical motivations, in part I, we have computed the renormalized local state polarization and local Casimir energy for both the bulk quantum field and the boundary observable in the ground state and in a Gibbs state at finite, positive temperature. In this work, we introduce an appropriate notion of coherent and thermal coherent states for this mixed bulk-boundary system, and extend our previous study of the renormalized local state polarization and local Casimir energy to coherent and thermal coherent states. General Initial Value Problem for the Nonlinear Shallow Water Equations: Runup of Long Waves on Sloping Beaches and Bays We formulate a new approach to solving the initial value problem of the shallow water-wave equations utilizing the famous Carrier-Greenspan transformation [G. Carrier and H. Greenspan, J. Fluid Mech. 01, 97 (1957)]. We use a Taylor series approximation to deal with the difficulty associated with the initial conditions given on a curve in the transformed space.
This extends earlier solutions to waves with near-shore initial conditions, large initial velocities, and more complex U-shaped bathymetries, and allows verification of tsunami wave inundation models in a more realistic 2-D setting. A Multiperiod Workforce Scheduling and Routing Problem with Dependent Tasks In this paper, we study a new Workforce Scheduling and Routing Problem, denoted Multiperiod Workforce Scheduling and Routing Problem with Dependent Tasks. In this problem, customers request services from a company. Each service is composed of dependent tasks, which are executed by teams of varying skills over one or more days. Tasks belonging to a service may be executed by different teams, and customers may be visited more than once a day, as long as precedences are not violated. The objective is to schedule and route teams so that the makespan is minimized, i.e., all services are completed in the minimum number of days. In order to solve this problem, we propose a Mixed-Integer Programming model, a constructive algorithm and heuristic algorithms based on the Ant Colony Optimization (ACO) metaheuristic. The presence of precedence constraints makes it difficult to develop efficient local search algorithms. This motivates the choice of the ACO metaheuristic, which is effective in guiding the construction process towards good solutions. Computational results show that the model is capable of consistently solving problems with up to about 20 customers and 60 tasks. In most cases, the best-performing ACO algorithm was able to match the best solution provided by the model in a fraction of its computational time. A minimalist model for co-evolving supply and drainage networks Numerous complex systems, both natural and artificial, are characterized by the presence of intertwined supply and/or drainage networks.
Here we present a minimalist model of such co-evolving networks in a spatially continuous domain, where the obtained networks can be interpreted as a part of either the counter-flowing drainage or co-flowing supply and drainage mechanisms. The model consists of three coupled, nonlinear partial differential equations that describe spatial density patterns of input and output materials by modifying a mediating scalar field, on which supply and drainage networks are carved. In the 2-dimensional case, the scalar field can be viewed as the elevation of a hypothetical landscape, of which supply and drainage networks are ridges and valleys, respectively. In the 3-dimensional case, the scalar field serves as the chemical signal strength, in which vascularization of the supply and drainage networks occurs above a critical 'erosion' strength. The steady-state solutions are presented as a function of non-dimensional channelization indices for both materials. The spatial patterns of the emerging networks are classified within the branched and congested extreme regimes, within which the resulting networks are characterized based on the absolute as well as the relative values of two non-dimensional indices. Privacy-Preserved Collaborative Estimation for Networked Vehicles with Application to Road Anomaly Detection Road information such as road profile and traffic density has been widely used in intelligent vehicle systems to improve road safety, ride comfort, and fuel economy. However, vehicle heterogeneity and parameter uncertainty make it extremely difficult for a single vehicle to accurately and reliably measure such information. In this work, we propose a unified framework for learning-based collaborative estimation to fuse local road estimates from a fleet of connected heterogeneous vehicles.
The collaborative estimation scheme exploits the sequential measurements made by multiple vehicles traversing the same road segment and lets these vehicles relay a learning signal to iteratively refine local estimates. Given that the privacy of individual vehicles' identity must be protected in collaborative estimation, we directly incorporate privacy-protection design into the collaborative estimation design and establish a unified framework for privacy-preserving collaborative estimation. Unlike patched-on conventional privacy mechanisms such as differential privacy, which compromises algorithmic accuracy, or homomorphic encryption, which incurs heavy communication/computational overhead, we leverage the dynamical properties of collective estimation to enable inherent privacy protection without sacrificing accuracy or significantly increasing communication/computation overhead. Numerical simulations confirm the effectiveness and efficiency of our proposed framework. Antiphase Versus In-Phase Synchronization of Coupled Pendulum Clocks and Metronomes In 1665, Huygens observed that two pendulum clocks hanging from the same beam became synchronized in antiphase after hundreds of swings. On the other hand, modern experiments with metronomes placed on a movable platform show that they tend to synchronize in phase, not antiphase. Here, using a simple model of coupled clocks and metronomes, we calculate the regimes where in-phase and antiphase synchronization are stable. Unusual features of our approach include its treatment of the escapement mechanism, a small-angle approximation up to cubic order, and a three-time-scale asymptotic analysis. Construction of a minimum energy path for the VT flash model by an exponential time differencing scheme with the string method Phase equilibrium calculation, also known as flash calculation, plays significant roles in various aspects of petroleum and chemical industries.
Since Michelsen proposed his milestone studies in 1982, through several decades of development, research interest in flash calculation has shifted from accuracy to efficiency, but the ultimate goal remains the same: estimating the equilibrium phase amounts and phase compositions under a given variable specification. However, finding the transition route and its related saddle points is very often helpful for studying the evolution of phase change and partition. Motivated by this, in this study we apply the string method to find the minimum energy paths and saddle point information of a single-component VT flash model with the Peng-Robinson equation of state. As the system has strong stiffness, common ordinary differential equation solvers have their limitations. To overcome these issues, a Rosenbrock-type exponential time differencing scheme is employed to reduce the computational difficulty caused by the high stiffness of the investigated system. In comparison with the published results and experimental data, the proposed numerical algorithm not only shows good feasibility and accuracy in phase equilibrium calculation, but also successfully calculates the minimum energy path and saddle point of the single-component VT flash model with strong stiffness. Fractal Gaussian Networks: A sparse random graph model based on Gaussian Multiplicative Chaos We propose a novel stochastic network model, called Fractal Gaussian Network (FGN), that embodies well-defined and analytically tractable fractal structures. Such fractal structures have been empirically observed in diverse applications. FGNs interpolate continuously between the popular purely random geometric graphs (a.k.a. the Poisson Boolean network) and random graphs with increasingly fractal behavior. In fact, they form a parametric family of sparse random geometric graphs that are parametrized by a fractality parameter $\nu$ which governs the strength of the fractal structure.
FGNs are driven by the latent spatial geometry of Gaussian Multiplicative Chaos (GMC), a canonical model of fractality in its own right. We asymptotically characterize the expected number of edges and triangles in FGNs. We then examine the natural question of detecting the presence of fractality and the problem of parameter estimation based on observed network data, in addition to fundamental properties of the FGN as a random graph model. We also explore fractality in community structures by unveiling a natural stochastic block model in the setting of FGNs. Energy Communities: From European Law to Numerical Modeling In 2019, the European Union introduced two new actors in the European energy system: Renewable and Citizen Energy Communities (RECs and CECs). Modelling these two new actors and their effects on the energy system is crucial when implementing the European Legislation, incorporating energy communities (ECs) into the electric grid, planning ECs, and conducting academic research. This paper aims to bridge the gap between the letter of the law and numerical models of ECs. After introducing RECs and CECs, we list elements of the law to be considered by regulators, distribution system operators, EC planners, researchers, and other stakeholders when modelling ECs. Finally, we provide three case studies of EC models that explicitly include elements of the European Law. Hierarchical Clusterings of Unweighted Graphs We study the complexity of finding an optimal hierarchical clustering of an unweighted similarity graph under the recently introduced Dasgupta objective function. We introduce a proof technique, called the normalization procedure, that takes any such clustering of a graph $G$ and iteratively improves it until a desired target clustering of $G$ is reached. We use this technique to show both a negative and a positive complexity result. Firstly, we show that in general the problem is NP-complete.
Secondly, we consider min-well-behaved graphs, which are graphs $H$ with the property that for any $k$, the graph $H(k)$, defined as the join of $k$ copies of $H$, has an optimal hierarchical clustering that splits each copy of $H$ in the same optimal way. To optimally cluster such a graph $H(k)$ we thus only need to optimally cluster the smaller graph $H$. Co-bipartite graphs are min-well-behaved, but otherwise min-well-behaved graphs seem to be scarce. We use the normalization procedure to show that the cycle on 6 vertices is also min-well-behaved. Sphere partition function of Calabi-Yau GLSMs The sphere partition function of Calabi-Yau gauged linear sigma models (GLSMs) has been shown to compute the exact Kaehler potential of the Kaehler moduli space of a Calabi-Yau. We propose a universal expression for the sphere partition function evaluated in hybrid phases of Calabi-Yau GLSMs that are fibrations of Landau-Ginzburg orbifolds over some base manifold. Special cases include Calabi-Yau complete intersections in toric ambient spaces and Landau-Ginzburg orbifolds. The key ingredients that enter the expression are Givental's I/J-functions, the Gamma class and further data associated to the hybrid model. We test the proposal for one- and two-parameter abelian GLSMs, making connections, where possible, to known results from mirror symmetry and FJRW theory. Multilevel Monte Carlo for quantum mechanics on a lattice Monte Carlo simulations of quantum field theories on a lattice become increasingly expensive as the continuum limit is approached since the cost per independent sample grows with a high power of the inverse lattice spacing. Simulations on fine lattices suffer from critical slowdown, the rapid growth of autocorrelations in the Markov chain with decreasing lattice spacing. This causes a strong increase in the number of lattice configurations that have to be generated to obtain statistically significant results.
This paper discusses hierarchical sampling methods to tame this growth in autocorrelations. Combined with multilevel variance reduction, this significantly reduces the computational cost of simulations for given tolerances $\epsilon_{\text{disc}}$ on the discretisation error and $\epsilon_{\text{stat}}$ on the statistical error. For an observable with lattice errors of order $\alpha$ and an integrated autocorrelation time that grows like $\tau_{\mathrm{int}}\propto a^{-z}$, multilevel Monte Carlo (MLMC) can reduce the cost from $\mathcal{O}(\epsilon_{\text{stat}}^{-2}\epsilon_{\text{disc}}^{-(1+z)/\alpha})$ to $\mathcal{O}(\epsilon_{\text{stat}}^{-2}\vert\log \epsilon_{\text{disc}} \vert^2+\epsilon_{\text{disc}}^{-1/\alpha})$. Even higher performance gains are expected for simulations of quantum field theories in $D$ dimensions. The efficiency of the approach is demonstrated on two model systems, including a topological oscillator that is badly affected by critical slowdown due to freezing of the topological charge. On fine lattices, the new methods are orders of magnitude faster than standard sampling based on Hybrid Monte Carlo. For high resolutions, MLMC can be used to accelerate even the cluster algorithm for the topological oscillator. Performance is further improved through perturbative matching which guarantees efficient coupling of theories on the multilevel hierarchy. Low-Congestion Shortcuts for Graphs Excluding Dense Minors We prove that any $n$-node graph $G$ with diameter $D$ admits shortcuts with congestion $O(\delta D \log n)$ and dilation $O(\delta D)$, where $\delta$ is the maximum edge-density of any minor of $G$. Our proof is simple, elementary, and constructive - featuring a $\tilde{\Theta}(\delta D)$-round distributed construction algorithm. Our results are tight up to $\tilde{O}(1)$ factors and generalize, simplify, unify, and strengthen several prior results. 
For example, for graphs excluding a fixed minor, i.e., graphs with constant $\delta$, only a $\tilde{O}(D^2)$ bound was known based on a very technical proof that relies on the Robertson-Seymour Graph Structure Theorem. A direct consequence of our result is that many graph families, including any minor-excluded ones, have near-optimal $\tilde{\Theta}(D)$-round distributed algorithms for many fundamental communication primitives and optimization problems including minimum spanning tree, minimum cut, and shortest-path approximations. Gluing resource proof-structures: inhabitation and inverting the Taylor expansion A Multiplicative-Exponential Linear Logic (MELL) proof-structure can be expanded into a set of resource proof-structures: its Taylor expansion. We introduce a new criterion characterizing those sets of resource proof-structures that are part of the Taylor expansion of some MELL proof-structure, through a rewriting system acting both on resource and MELL proof-structures. As a consequence, we also prove the semi-decidability of the type inhabitation problem for MELL proof-structures. Inverse variational problem for nonlinear dynamical systems In this paper we have chosen to work with two different approaches to solving the inverse problem of the calculus of variations. The first approach is based on an integral representation of the Lagrangian function that uses the first integral of the equation of motion, while the second one relies on a generalization of the well-known Noether's theorem and constructs the Lagrangian directly from the equation of motion. As an application of the integral representation of the Lagrangian function we first provide some useful remarks for the Lagrangian of the modified Emden-type equation and then obtain results for Lagrangian functions of (i) cubic-quintic Duffing oscillator, (ii) Li\'{e}nard-type oscillator and (iii) Mathews-Lakshmanan oscillator.
As with the modified Emden-type equation, these oscillators were found to be characterized by nonstandard Lagrangians, except that one could also assign a standard Lagrangian to the Duffing oscillator. We used the second approach to find indirect analytic (Lagrangian) representations for three velocity-dependent equations: (iv) the Abraham-Lorentz oscillator, (v) the Lorentz oscillator and (vi) the Van der Pol oscillator. For each of the dynamical systems (i)-(vi) we calculated the Jacobi integral and thereby provided a method to obtain the Hamiltonian function without recourse to the Legendre transformation. Crossing versus locking: Bit threads and continuum multiflows Bit threads are curves in holographic spacetimes that manifest boundary entanglement, and are represented mathematically by continuum analogues of network flows or multiflows. Subject to a density bound, the maximum number of threads connecting a boundary region to its complement computes the Ryu-Takayanagi entropy. When considering several regions at the same time, for example in proving entropy inequalities, there are various inequivalent density bounds that can be imposed. We investigate for which choices of bound a given set of boundary regions can be "locked", in other words can have their entropies computed by a single thread configuration. We show that under the most stringent bound, which requires the threads to be locally parallel, non-crossing regions can in general be locked, but crossing regions cannot, where two regions are said to cross if they partially overlap and do not cover the entire boundary. We also show that, under a certain less stringent density bound, a crossing pair can be locked, and conjecture that any set of regions not containing a pairwise crossing triple can be locked, analogously to the situation for networks.
Log-modulated rough stochastic volatility models We propose a new class of rough stochastic volatility models obtained by modulating the power-law kernel defining the fractional Brownian motion (fBm) by a logarithmic term, such that the kernel retains square integrability even in the limit case of vanishing Hurst index $H$. The so-obtained log-modulated fractional Brownian motion (log-fBm) is a continuous Gaussian process even for $H = 0$. As a consequence, the resulting super-rough stochastic volatility models can be analysed over the whole range $0 \le H < 1/2$ without the need of further normalization. We obtain skew asymptotics of the form $\log(1/T)^{-p} T^{H-1/2}$ as $T\to 0$, $H \ge 0$, so no flattening of the skew occurs as $H \to 0$. 6G Wireless Systems: Vision, Requirements, Challenges, Insights, and Opportunities Mobile communications have been undergoing a generational change every ten years or so. However, the time difference between the so-called "G's" is also decreasing. While fifth-generation (5G) systems are becoming a commercial reality, there is already significant interest in systems beyond 5G - which we refer to as the sixth-generation (6G) of wireless systems. In contrast to the many published papers on the topic, we take a top-down approach to 6G. We present a holistic discussion of 6G systems beginning with the lifestyle and societal changes driving the need for next generation networks, to the technical requirements needed to enable 6G applications, through to the challenges, as well as possibilities for practically realizable system solutions across all layers of the Open Systems Interconnection stack. Since many of the 6G applications will need access to an order-of-magnitude more spectrum, utilization of frequencies between 100 GHz and 1 THz becomes of paramount importance. 
We comprehensively characterize the limitations that must be overcome to realize working systems in these bands; and provide a unique perspective on the physical, as well as higher layer challenges relating to the design of next generation core networks, new modulation and coding methods, novel multiple access techniques, antenna arrays, wave propagation, radio-frequency transceiver design, as well as real-time signal processing. We rigorously discuss the fundamental changes required in the core networks of the future, such as the redesign or significant reduction of the transport architecture that serves as a major source of latency. While evaluating the strengths and weaknesses of key technologies, we differentiate what may be practically achievable over the next decade, relative to what is possible in theory. For each discussed system aspect, we present concrete research challenges. ESPRESSO: Entropy and ShaPe awaRe timE-Series SegmentatiOn for processing heterogeneous sensor data Extracting informative and meaningful temporal segments from high-dimensional wearable sensor data, smart devices, or IoT data is a vital preprocessing step in applications such as Human Activity Recognition (HAR), trajectory prediction, gesture recognition, and lifelogging. In this paper, we propose ESPRESSO (Entropy and ShaPe awaRe timE-Series SegmentatiOn), a hybrid segmentation model for multi-dimensional time-series that is formulated to exploit the entropy and temporal shape properties of time-series. ESPRESSO differs from existing methods that focus exclusively upon particular statistical or temporal properties of time-series. As part of model development, a novel temporal representation of time-series, $WCAC$, was introduced along with a greedy search approach that estimates segments based upon the entropy metric. ESPRESSO was shown to offer superior performance to four state-of-the-art methods across seven public datasets of wearable and wear-free sensing.
In addition, we undertake a deeper investigation of these datasets to understand how ESPRESSO and its constituent methods perform with respect to different dataset characteristics. Finally, we provide two interesting case-studies to show how applying ESPRESSO can assist in inferring daily activity routines and the emotional state of humans.
http://sharepoint.stackexchange.com/questions/28051/list-add-new-item-link-at-top-of-list
The 'add new item' link is always located at the end of the list. Is there a way that we can add a similar link at the top of the list?

Not possible by just configuring the web part. Of course you could modify the Title Url of the web part to point to the Add new item URL, but I don't count that as a valid suggestion ;) I would use javascript (jQuery) to search for the bottom Add new item link and inject a similar one into the correct position below the web part title. Some web parts also allow you to edit the XSL of the web part. I haven't tried this myself, but you might be able to achieve the same in certain web parts using this method. jQuery example (not on my dev machine now, so will share the idea at least):

1. Find the Add new item node using its ID and get its HTML: `$("table[summary='yourtablename'] td.ms-addnew").html()`
2. Find the web part table; it should have a "summary" attribute that might be easiest to find: `$("table[summary='yourtablename']")`
3. Append the Add new item TD you got from step 1 wherever you wish, or get the TR above that TD and append the row below/above the column header row of the table
4. You may want to change the ID attribute of the actual link to some other value, as there might be conflicts if you have two links with the same ID attribute in the HTML

- Hi Jussi, could u provide a example as to getting the "add new item" link using jquery – spStacker Jan 31 '12 at 15:23
- Hi, edited the answer above. – Jussi Palo Feb 1 '12 at 21:12
- Thanks jussi for the reply.i will try it at work tomorrow – spStacker Feb 2 '12 at 3:55

You can edit the page to add a Content Editor Web Part up top and adjust the chrome type so that title / borders / etc don't display. Then edit the content of the web part to be a hyperlink to the same place as the other one.

Copy the HTML markup from the existing Add new item link. You can see it with help of the F12 developer tools of Internet Explorer or inspect element of Firefox/Chrome.
Example:

```html
<span class="s4-clust" style="position: relative; width: 10px; display: inline-block; height: 10px; overflow: hidden">
  <img alt="" src="/_layouts/images/fgimg.png" style="position: absolute; top: -128px !important; left: 0px !important">
</span>
```

Paste the HTML markup into the Content Editor web part. You may need to wrap the above code in a div and add additional styling to it as needed.

- thanks for your valuable answer dhirschl – Naresh Nemuri Dec 8 '11 at 6:04
- but Mr.dhirschl,i wants the bottom link to be removed – Naresh Nemuri Dec 8 '11 at 6:08
- Change the toolbar type to No Toolbar in the XSLT List web part properties. – NLV Dec 12 '11 at 13:52

Actually, the simplest approach and probably best solution is to simply link to the form on your site ending with NewForm.aspx. It looks even better if you add the + image in front too. Example: New Issue Tracking Form

Assuming you have jQuery available in your environment, here is a quick way to knock it out:

```html
<script type="text/javascript" id="ms-addnew_js">
  $(document).ready(function () {
    $('.ms-addnew').filter(':last').hide();
  });
</script>
```

- But the catch is that jQuery solutions always come with a latency. You'll see the link for a minute. The best way is to edit the XSL underneath the List XSLT web part in the view page. – NLV Dec 12 '11 at 13:52

This code must work with jQuery:

```javascript
var $addnew = $('.ms-addnew');
$('#WebPartWPQ2').prepend('<br>').prepend($addnew.html());
```
http://xml.jips-k.org/full-text/view?doi=10.3745/JIPS.04.0162
Zhao* and Jiang*: Adaptive Signal Separation with Maximum Likelihood

# Adaptive Signal Separation with Maximum Likelihood

Abstract: Maximum likelihood (ML) is asymptotically the best estimator as the number of training samples approaches infinity. This paper deduces an adaptive algorithm for the blind signal processing problem based on a gradient optimization criterion. A parametric density model is introduced through a parameterized generalized distribution family in the ML framework. After specifying a limited number of parameters, the density of a specific original signal can be approximated automatically by the constructed density function. Consequently, signal separation can be conducted without any prior information about the probability density of the desired original signal. Simulations on classical biomedical signals confirm the performance of the deduced technique.

Keywords: Density, Estimator, Framework, Kurtosis, Likelihood, Separation

## 1. Introduction

In recent decades, blind signal processing (BSS) has attracted wide attention for its potential applications in biomedical signal processing, automatic control, advanced statistics, and other academic and industrial fields [1-3]. Generally speaking, BSS technology obtains the relevant information we are interested in from the observed data of hybrid systems (such as wireless channels, communication systems, radar systems, mixing processes, etc.) through signal processing methods. The term "blind" denotes that all information about the mixing system is unknown in advance except for the observed data. BSS is a typical neural network technique, whose goal is to estimate the original signals based on the observed signals without prior knowledge of the original signals and mixing parameters [2,3]. The BSS problem is mathematically underdetermined. Indeed, there are two uncertainties in the results of BSS separation.
The first is the sequence uncertainty of the separation results, and the second is the amplitude uncertainty of the separation results. The information to be transmitted is often contained in the signal waveform. Therefore, these two uncertainties do not affect the actual application of BSS technology. As a powerful statistical and computational technique, BSS has been proved to be reasonable and reliable in theory and will have great vitality [4-6]. Since Jutten and Herault [7] first proposed the original BSS approach in a typical feedback structure framework, a growing number of new theories and technologies have been put forward by follow-up researchers from various application fields such as digital images and economic indicators [2,8-11]. For instance, Virta and Nordhausen [8] proposed a blind signal separation method for multivariate time series. This method can be utilized to extract functional magnetic resonance imaging (fMRI) information from noisy high-dimensional observations. However, it works well only when the observed series is a linear transformation and is not correlated in time. If the period of the desired source signal is close to that of another one, this method will lack robustness and reliability. Pehlevan et al. [9] formulated a non-negative blind signal separation scheme based on approximate similarity measurement. Indeed, such measurement has been utilized successfully in geophysical exploration. As a result, a corresponding explicit neural network approach was developed through the exploration of a typical similarity matching objective. The local learning principle has been biologically verified and is widely used in many disciplines. The synaptic weights of the designed neural network are updated successively according to this learning principle. However, the proposed approach can only deal with a special case where the expected original signals are confirmed to be non-negative in advance. This greatly limits its practical application.
These typical BSS methods have opened a remarkable chapter in the history of signal processing. So far, BSS technology has been applied in many interdisciplinary fields due to its reliability and practicability [8,11]. The basic criteria for solving the BSS problem fall into two kinds: second-order statistics and higher-order statistics [1,6,9]. Higher-order statistics contain a lot of information that second-order statistics do not. Until now, higher-order statistics have shown stronger vitality than second-order statistics in signal detection, array processing, and object recognition [6,8,9]. Maximum likelihood (ML) is a powerful technique of higher-order statistics estimation [6]. The best way to separate signals in the BSS problem is to simulate the mixing process of the original signals directly. However, it is difficult to know how the original signals are mixed in practice. Although the probability distribution of the observed mixture is unknown, it is a fixed value that exists objectively. A fundamental strategy for estimating conditional probability is to assume a certain probability distribution form and then optimize the parameters of the probability distribution according to the training samples [1,6]. The strategy of ML estimation is to estimate probability parameters based on data sampling. Such a strategy makes it possible to separate the non-Gaussian component from its mixture successfully. The main appeal of ML estimation is its potential to become the best estimator asymptotically as the number of training samples approaches infinity [1,2]. As a powerful technique of statistical estimation, ML estimation is generally classified as the preferred estimator for the BSS problem. Under proper conditions, the ML estimator has the property of consistency [1,2]. In other words, as the number of training samples tends towards infinity, the ML estimate of a parameter converges to the true value of the parameter.
Here, we regard ML as an attempt to match the probability density of the model to the real data that generate the probability density. The probability density function (PDF) is an expression representing the probability distribution of continuous random variables. An important question in the BSS problem under the ML framework is how to select a proper PDF to match the true data-generating probability density distribution of the independent component (original signal). In other words, the PDF plays a vital role for the BSS problem in the ML framework. In essence, the estimation of probability densities is a typical nonparametric problem [2,10,11]. Although no direct access to the true data-generating probability density distribution is available, efforts will be made to match this distribution to the greatest extent. In this paper, a parametric density model is introduced correspondingly through a generalized exponential power family in the ML framework. In practice, source signals may have different kinds of probability densities. It must be mentioned that the introduced density functions can be adapted to various marginal probability densities after specifying a limited number of parameters. As a result, the probability densities of the original signals can be approximated automatically by the constructed density functions. Subsequently, a gradient learning algorithm for the BSS problem is deduced in the ML framework by solving a constrained optimization problem. As a result, an adaptive method is deduced for the BSS problem based on a gradient optimization criterion. Compared with other methods in existence [7-9], the proposed method substantially differs in two aspects. Firstly, the signal separation can be carried out without any accurate prior information about the probability density of the desired original signal. Secondly, the proposed technique can separate different kinds of desired original signals from the observed mixture.
The only assumption made for the signal separation process is that we should know in advance the kurtosis property of the desired original signal. Indeed, such a signal kurtosis property is readily known to the expert. Simulations on classical biomedical signals confirm the performance of the deduced technique.

## 2. BSS Problem Formulation

The fundamental BSS model can be represented in Fig. 1.

Fig. 1. The fundamental BSS model.

Suppose there are N independent original signals, expressed in vector form as $s(t)=\left[s_{1}(t), s_{2}(t), \cdots, s_{N}(t)\right]^{T}$. Here the superscript T represents the transpose of a vector and $t=0,1,2,\cdots$. The M observed signals are obtained by the linear instantaneous mixing of the N source signals. That is, at every moment the following relationship holds:

##### (1)
$$x_{i}(t)=\sum_{j=1}^{N} a_{i j} s_{j}(t), \quad i=1,2, \cdots, M.$$

Eq. (1) can be further rewritten in vector-matrix form as

##### (2)
$$x(t)=A s(t),$$

where A is a mixing matrix composed of a series of mixing coefficients $\left\{a_{i j}\right\}$. The original signals and the mixing matrix are unknown, and only the mixed signals can be observed. Indeed, blindness is not complete, since the original signals should be mutually independent in a statistical sense. In many cases, such as biomedical signal processing, only one specific original signal is of interest and the others can be ignored. Here, we focus on such practical applications. Our main purpose is to obtain a separating vector and recover a particular original signal based on this vector from the observed mixtures. A fundamental strategy is to deduce an iterative process so as to obtain a separating vector $w$, which yields $y=w^{T} x=w^{T} A s$ as a scaled version of the interested original signal [12].

## 3. Proposed Algorithm

A crucial factor that constitutes the foundation of BSS is that the original signals are statistically independent.
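The linear instantaneous mixing of Eqs. (1)-(2) can be sketched in a few lines of NumPy. This is a minimal illustration with two hypothetical sources and a made-up mixing matrix, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1000) / 100.0

# N = 2 hypothetical independent sources: a sub-Gaussian square wave
# and super-Gaussian Laplacian noise (illustrative choices only)
s = np.vstack([np.sign(np.sin(2 * np.pi * 3 * t)),
               rng.laplace(size=t.size)])

# M x N mixing matrix A; unknown in practice, chosen here for illustration
A = np.array([[0.8, 0.3],
              [0.4, 0.9]])

# Eq. (2): each observed channel is a linear combination of the sources
x = A @ s
assert x.shape == (2, t.size)
```

Only `x` is available to the separation algorithm; `A` and `s` play the role of the unknowns in model (2).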
In other words, the value of one variable cannot give any information on the value of another variable. Mathematically, statistical independence means that the joint probability density distribution of two random variables is equal to the product of their respective probability density distributions. In practical applications, signal independence may be measured with maximum entropy, mutual information, and ML [2,13,14]. The ML estimator has the properties of consistency and efficiency. ML becomes the best estimator asymptotically as the number of training samples increases and approaches infinity. When the number of samples is small enough to result in overfitting behavior, regularization strategies such as weight decay can be utilized to obtain a biased version of ML that has less variance when training samples are limited [1,2,8]. In fact, ML is closely related to the approach of information flow maximization/minimization in neural networks. In recent decades, ML has become the preferred estimator to utilize for machine learning. To perform ML estimation in practice, we should deduce a learning algorithm to carry out the numerical maximization/minimization of the likelihood. The basic idea of BSS in the ML framework is to exploit the gradient of the likelihood function with a likelihood optimization criterion [3,15]. At the convergence point of the gradient optimization learning criterion, the gradient must point in the direction of $w$. In other words, the optimal gradient must be equal to the product of a constant scalar and the direction $w$. In such a case, adding the gradient to $w$ does not change its direction and the optimization algorithm converges there. Accordingly, the likelihood optimization criterion for separating a specific original signal in model (2) can be described by [11,16,17]:

##### (3)
$$\left\{\begin{array}{ll} \max & \psi(w)=E\left\{\log p\left(w^{T} x\right)\right\} \\ \text{s.t.} & \|w\|=1 \end{array}\right.$$

Here $p$ expresses the PDF of the original signal, which is unknown in practice and should be estimated in advance. To solve the constrained optimization problem (3), we should calculate the optimal direction, which is mathematically the steepest direction. In practice, we can start with a particular vector $w$, calculate the direction in which $\psi(w)$ grows at the fastest speed based on the available samples, and then turn $w$ in that direction. This idea can be conducted in terms of the stochastic gradient optimization rule [1-3]. As a result, the following gradient learning algorithm can be deduced in the ML framework:

##### (4)
$$\left\{\begin{array}{l} w^{+}(i+1)=w(i)-\xi(i) E\left\{g\left(w(i)^{T} x\right) x\right\} / E\left\{g^{\prime}\left(w(i)^{T} x\right)\right\} \\ w(i+1)=w^{+}(i+1) /\left\|w^{+}(i+1)\right\| \end{array}\right.$$

Here $i$ expresses the iteration index, $\xi(i)$ is a step size related to $i$, and $g(\cdot)$ describes the nonlinearity defined by $g(\cdot)=(\log p(\cdot))^{\prime}=p^{\prime}(\cdot) / p(\cdot)$. After the iterative process in (4) converges to a particular weight vector $\tilde{w}$, the specific interested original signal can be estimated with $y=\tilde{w}^{T} x=\tilde{w}^{T} A s$. In the following, we call the algorithm in (4) MLBSS. The function $p$, which should match the true data-generating probability densities of the independent components, plays a vital role when calculating the likelihood. In essence, the likelihood is a function of probability densities. These independent components are actually the source signals. How to choose the form of the function $p$ is an important and open research topic in the ML framework.
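Update rule (4) can be sketched with sample averages standing in for the expectations. The nonlinearity below comes from an assumed super-Gaussian model density $p(y) \propto 1/\cosh(y)$, so $g = (\log p)' = -\tanh$; this is a standard illustrative choice, not the paper's exact parameterization:

```python
import numpy as np

def mlbss_update(w, x, g, g_prime, xi=1.0):
    """One sweep of update (4): a scaled gradient step with the
    expectations replaced by sample means over the columns of x,
    followed by renormalization onto the unit sphere (||w|| = 1)."""
    y = w @ x                               # y = w^T x for every sample
    grad = (x * g(y)).mean(axis=1)          # E{ g(w^T x) x }
    denom = g_prime(y).mean()               # E{ g'(w^T x) }
    w_plus = w - xi * grad / denom
    return w_plus / np.linalg.norm(w_plus)

# assumed model density p(y) ~ 1/cosh(y): g = -tanh, g' = -(1 - tanh^2)
g = lambda y: -np.tanh(y)
g_prime = lambda y: -(1.0 - np.tanh(y) ** 2)
```

Iterating `mlbss_update` on centered and whitened observations until `w` stops changing yields the separating vector $\tilde{w}$; with $\xi(i)=1$ the step is, up to normalization, a fixed-point iteration of the FastICA type.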
If the true data-generating probability densities lie within the constructed model family, the ML estimator will converge to the true results as the number of training samples approaches infinity [1,2]. The choice of the function $p$ is essentially a nonparametric problem, which contains an infinite number of parameters. A promising method is to select proper functions to approximate the densities of the original signals [2,11].

Theorem 1. Suppose the assumed probability density of the $i$-th original signal is $\tilde{p}_{i}$, and

##### (5)
$$g_{i}\left(s_{i}\right)=\frac{\tilde{p}_{i}^{\prime}\left(s_{i}\right)}{\tilde{p}_{i}\left(s_{i}\right)}.$$

The constraint of independent component estimation is set to be unit variance and uncorrelatedness. If the supposed probability density $\tilde{p}_{i}$ satisfies

##### (6)
$$E\left\{s_{i} g_{i}\left(s_{i}\right)-g_{i}^{\prime}\left(s_{i}\right)\right\}>0$$

for all $i$, the ML estimator will be locally consistent.

A detailed proof of Theorem 1 can be found in [1,11]. Theorem 1 shows that one can exploit a family of density functions that contains only two densities, and make one of the densities satisfy the condition in (6). This means that one may make small mistakes in determining the density of the independent component, since the estimated density can be guaranteed to lie within the same half of the probability density space. This also means that one can estimate the independent component with a very simple density model. Specifically, one can utilize a model which consists of only two densities. In essence, ML can be considered an attempt to make the model probability density distribution match the empirical probability density distribution. Ideally, we would like to match the true data-generating probability density distribution.
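Condition (6) can be probed numerically. Below, an assumed model density $\tilde{p} \propto 1/\cosh$ (so $g = -\tanh$) is tested against a super-Gaussian (Laplacian) and a sub-Gaussian (uniform) source, both of unit variance; for a Gaussian source the expectation vanishes exactly by Stein's lemma, so the sign of the margin indicates which half of the density space a source falls in. This is an illustrative check, not the paper's procedure:

```python
import numpy as np

def consistency_margin(s, g, g_prime):
    """Sample estimate of E{ s g(s) - g'(s) } from condition (6)."""
    return np.mean(s * g(s) - g_prime(s))

# assumed model density ~ 1/cosh: g = (log p)' = -tanh
g = lambda s: -np.tanh(s)
g_prime = lambda s: -(1.0 - np.tanh(s) ** 2)

rng = np.random.default_rng(0)
super_gauss = rng.laplace(scale=1 / np.sqrt(2), size=200_000)    # unit variance
sub_gauss = rng.uniform(-np.sqrt(3), np.sqrt(3), size=200_000)   # unit variance

print(consistency_margin(super_gauss, g, g_prime))  # positive: locally consistent
print(consistency_margin(sub_gauss, g, g_prime))    # negative: condition violated
```

A positive margin for the Laplacian source and a negative one for the uniform source show that this single super-Gaussian model density is locally consistent for one half of the density space only, as Theorem 1 suggests.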
Though we lack direct access to it, the true data-generating probability density distribution must belong to the constructed model density family, for otherwise no estimator can recover it. Here we consider the parameterized generalized Gaussian distribution to match the probability density distribution of the original signal of interest,

##### (7)

$$p(y ; \alpha)=\frac{\alpha}{2 \beta \xi(1 / \alpha)} e^{-\left|\frac{y}{\beta}\right|^{\alpha}},$$

where $\xi(\cdot)$ is the Gamma function, $\xi(m)=\int_{0}^{\infty} t^{m-1} e^{-t} d t$, and the Gaussian exponent $\alpha$ is used to control the peakedness of the density distribution. Since unit variance and uncorrelatedness mean $E\left\{y y^{T}\right\}=w E\left\{x x^{T}\right\} w^{T}=I$, we can further deduce $\beta=\sqrt{\xi(1 / \alpha) / \xi(3 / \alpha)}$. In practical applications, the desired original signal may have different kinds of probability density distribution. In recent years, there has been a trend to exploit kurtosis as a non-Gaussianity measure. For each iteration of learning algorithm (4), we can calculate the value of the kurtosis. After convergence of this algorithm, $w$ will closely approximate the optimal solution $w^{*}$, so the calculated kurtosis closely approximates its real value. Accordingly, the expected original signal can be estimated with $y=(w^{*})^{T} x$. In practice, one may adjust the Gaussian exponent $\alpha$ to control the peakedness of the probability density distribution, thus making the family in (7) consist of only two kinds of densities, i.e., a single binary parameter. With $\alpha$ set to various values, different kinds of probability densities ranging from super-Gaussian to sub-Gaussian are available. In other words, we can utilize the parameterization of the probability density distribution in (7), consisting of the choice between two densities.
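The density in (7), with $\beta$ chosen as above, can be checked numerically: it should integrate to one and have unit variance for every exponent. This is a sketch; the integration grid limits are arbitrary choices of mine:

```python
import math
import numpy as np

def generalized_gaussian(y, alpha):
    """Eq. (7) with beta = sqrt(Gamma(1/alpha)/Gamma(3/alpha)), giving unit variance."""
    beta = math.sqrt(math.gamma(1 / alpha) / math.gamma(3 / alpha))
    coef = alpha / (2 * beta * math.gamma(1 / alpha))
    return coef * np.exp(-np.abs(y / beta) ** alpha)

y = np.linspace(-20, 20, 200_001)
dy = y[1] - y[0]
for alpha in (1.0, 1.6, 2.0, 3.0):     # alpha = 2 recovers the standard Gaussian
    p = generalized_gaussian(y, alpha)
    # each row prints: alpha, total mass (~1.0), variance (~1.0)
    print(alpha, round((p * dy).sum(), 3), round((y ** 2 * p * dy).sum(), 3))
```

Small $\alpha$ gives heavy-tailed (super-Gaussian) densities, large $\alpha$ gives flat (sub-Gaussian) ones, while the normalization and unit variance hold throughout.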
Specifically, the MLBSS algorithm can be expressed as follows:

Step 1. Randomly initialize the separating vector $w$.
Step 2. Calculate the kurtosis value of the current estimate.
Step 3. Specify the Gaussian exponent $\alpha$.
Step 4. Update the parameterized generalized distribution with Eq. (7).
Step 5. Update the separating vector $w$ with Eq. (4).
Step 6. If vector $w$ does not converge, go back to Step 2.
Step 7. Separate the desired original signal with $y=(w^{*})^{T} x$.

## 4. Computer Simulations and Performance Analysis

A main purpose of biomedical signal processing is to separate a desired component from observed biosignal measurements that also contain signals of no interest. The ultimate objective is to extract clinically relevant information so as to improve medical diagnosis. Fortunately, biomedical signals observed from a series of multiple measurements are statistically independent. Recently, there has been a trend to separate the desired biomedical signal from its observed measurements based on the BSS technique. A nonsingular $4 \times 4$ mixing matrix A was generated randomly as

##### (9)

$$A=\left[\begin{array}{llll} 0.7965 & 0.1021 & 0.1323 & 0.4024 \\ 0.6285 & 0.1982 & 0.2019 & 0.4088 \\ 0.1978 & 0.5987 & 0.5378 & 0.2968 \\ 0.3985 & 0.1098 & 0.4986 & 0.8956 \end{array}\right].$$

Four source signals as depicted in Fig. 2 were linearly mixed by matrix A. The corresponding mixing results are drawn in Fig. 3. As a graphic record of the bioelectrical signals produced by the human body during cardiac circulation, the electrocardiogram (ECG) may reveal a lot about the medical condition of an individual. However, the measured ECG is always contaminated by other signals or noise and is non-stationary in nature. It is imperative to analyze and interpret the ECG signal with a powerful tool.

Fig. 2. Four source signals.

Fig. 3. Signals mixed by matrix A.

Our main goal was to separate a clean ECG exclusively from its signal mixtures.
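Steps 2 and 3 of the MLBSS procedure (estimating the kurtosis and choosing the exponent) can be sketched as follows. The exponent values 1.6 and 3.5 follow the simulation results reported below, while the zero threshold on the excess kurtosis is an assumption of mine:

```python
import numpy as np

def excess_kurtosis(y):
    """Sample excess kurtosis of an estimate y (zero for a Gaussian)."""
    y = (y - y.mean()) / y.std()
    return np.mean(y ** 4) - 3.0

def choose_alpha(y):
    """Pick the Gaussian exponent in Eq. (7) from the kurtosis sign (Steps 2-3)."""
    return 1.6 if excess_kurtosis(y) > 0 else 3.5

rng = np.random.default_rng(0)
print(choose_alpha(rng.laplace(size=50_000)))   # 1.6: super-Gaussian estimate
print(choose_alpha(rng.uniform(size=50_000)))   # 3.5: sub-Gaussian estimate
```

Positive excess kurtosis (peaked, heavy-tailed estimates such as ECG) selects the super-Gaussian branch of the density family; negative excess kurtosis selects the sub-Gaussian branch.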
For comparison purposes, we ran three typical signal separation algorithms in sequence: heart sound segmentation BSS (HSSBSS) [5], constrained-linear-prediction BSS (CLPBSS) [17], and our algorithm (MLBSS). ECG is a typical super-Gaussian signal. When the MLBSS algorithm was conducted, the Gaussian exponent $\alpha$ in (7) was set to 1.0, 1.6, and 3.0 in turn. All algorithms' parameters were tuned in advance so that we could obtain the best average performance. The separation results are drawn in Fig. 4, in the same order as the three separation algorithms. An important feature of the HSSBSS algorithm is that it needs only a short data record because of the reduction of the small-sample separation error. One can find that the signal separated by the HSSBSS algorithm is always contaminated by other signals or noise. Functionally, the HSSBSS algorithm extracts source signals based on mutual information dependence and kurtosis characteristics. Since the extracted original signal has the largest kurtosis value among all mixed source signals, the algorithm may perform well for sound segmentation and extraction. Unfortunately, the desired ECG cannot satisfy this condition. The CLPBSS algorithm employs linear prediction to extract rhythm for signal separation, showing moderate separation performance. A linear-prediction-based method can extract source signals which have a specific temporal structure. Fortunately, this type of information is readily available in the desired ECG. However, such a method works well only when the frequency and location of the original signal of interest are available, which is not always realistic. In fact, the MLBSS algorithm has the best performance of the three. In particular, when $\alpha$ is adjusted to 1.6, signal $y_{4}$, separated by the MLBSS algorithm, approximates the original signal $s_2$ to a great extent.
Generally speaking, we note that $\alpha=1.6$ is the best choice to separate a super-Gaussian signal from its mixture. In essence, data in ECG are often skewed due to larger signal amplitudes in activated regions, so the exponential family can describe the probability densities of the desired ECG effectively. In simulation, we found that one could adjust the Gaussian exponent $\alpha$ to control the peakedness of the probability distribution, thus making the family in (7) consist of only two kinds of probability densities. Specifically, when the desired source signal was super-Gaussian, setting the Gaussian exponent to 1.6 was close to the best option; when the desired source signal was sub-Gaussian, the most appropriate value was approximately 3.5. That is to say, the MLBSS algorithm can estimate different kinds of expected original signals. Most of all, as a signal separation algorithm in the ML framework, the signal processing can be carried out without any prior information about the probability density of the expected original signal. The MLBSS algorithm may separate a clean signal as long as its kurtosis property is known in advance.

Fig. 4. Separating results by various algorithms.

## 5. Discussion and Conclusions

ML estimation can transform the probability density estimation into a parameter estimation problem. As a fundamental method for statistical estimation, ML generally represents the preferred technique for the BSS problem. In this paper, a learning algorithm, called MLBSS, has been proposed based on the stochastic gradient optimization rule for separating an underlying component from source mixtures. A family of parameterized generalized distribution functions, which are adaptive to various marginal densities, has been deduced in this paper. One may set different exponential parameters, based on the kurtosis properties of the desired source signal, to match different possible signal distributions.
As a result, a gradient learning algorithm, which can separate different kinds of desired original signals, is deduced in the ML framework. In fact, the MLBSS algorithm works in a semi-blind setting, since we should know the non-Gaussianity of the original signal of interest in advance. In contrast to other ML-based methods [1,2], the MLBSS algorithm has many advantages. Firstly, the existing ML-based methods need to know the probability density of the source signal in advance, while the MLBSS algorithm can be implemented effectively as long as the kurtosis property of the expected original signal is known. Secondly, the existing ML-based methods can only separate a few specific source signals. In contrast, the MLBSS algorithm may separate signals with super-Gaussian distribution or sub-Gaussian distribution, which is important in practice.

## Acknowledgement

This work is supported by Shandong Provincial Natural Science Foundation, China (No. ZR2017MA046).

## Biography

##### Yongjian Zhao https://orcid.org/0000-0002-9203-8991

He received the Ph.D. degree in biomedical engineering from Shandong University, Jinan, China, in 2009. He is currently an associate professor at Shandong University. His research interests include deep learning, signal processing, and pattern recognition.

## Biography

##### Bin Jiang https://orcid.org/0000-0003-2447-6946

He received the Ph.D. degree from the University of Chinese Academy of Sciences, Beijing, China, in 2012. He is an associate professor at Shandong University. His research interests include pattern recognition, data mining, and information security.

## References

• 1 I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning.
Cambridge, MA: The MIT Press, 2016.
• 2 A. Hyvarinen, J. Karhunen, and E. Oja, Independent Component Analysis. New York, NY: John Wiley & Sons, 2001.
• 3 A. Cichocki, D. Mandic, L. De Lathauwer, G. Zhou, Q. Zhao, C. Caiafa, and H. A. Phan, "Tensor decompositions for signal processing applications: from two-way to multiway component analysis," IEEE Signal Processing Magazine, vol. 32, no. 2, pp. 145-163, 2015. doi: 10.1109/MSP.2013.2297439
• 4 R. Llinares, J. Igual, A. Salazar, and A. Camacho, "Semi-blind source extraction of atrial activity by combining statistical and spectral features," Digital Signal Processing, vol. 21, no. 2, pp. 391-403, 2011. doi: 10.1016/j.dsp.2010.06.005
• 5 C. D. Papadaniil and L. J. Hadjileontiadis, "Efficient heart sound segmentation and extraction using ensemble empirical mode decomposition and kurtosis features," IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 4, pp. 1138-1152, 2014. doi: 10.1109/JBHI.2013.2294399
• 6 H. Zhang, C. Wang, and X. Zhou, "An improved secure semi-fragile watermarking based on LBP and Arnold transform," Journal of Information Processing Systems, vol. 13, no. 5, pp. 1382-1396, 2017. doi: 10.3745/JIPS.02.0063
• 7 C. Jutten and J. Herault, "Blind separation of sources. Part I: An adaptive algorithm based on neuromimetic architecture," Signal Processing, vol. 24, no. 1, pp. 1-10, 1991. doi: 10.1016/0165-1684(91)90079-X
• 8 J. Virta and K. Nordhausen, "Blind source separation of tensor-valued time series," Signal Processing, vol. 141, pp. 204-216, 2017. doi: 10.1016/j.sigpro.2017.06.008
• 9 C. Pehlevan, S. Mohan, and D. B. Chklovskii, "Blind nonnegative source separation using biological neural networks," Neural Computation, vol. 29, no. 11, pp. 2925-2954, 2017. doi: 10.1162/neco_a_01007
• 10 Y. Bengio, A. Courville, and P. Vincent, "Representation learning: a review and new perspectives," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798-1828, 2013. doi: 10.1109/TPAMI.2013.50
• 11 W. Y. Leong and D. P. Mandic, "Noisy component extraction (NoiCE)," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 57, no. 3, pp. 664-671, 2010.
• 12 G. Chabriel, M. Kleinsteuber, E. Moreau, H. Shen, P. Tichavsky, and A. Yeredor, "Joint matrices decompositions and blind source separation: a survey of methods, identification, and applications," IEEE Signal Processing Magazine, vol. 31, no. 3, pp. 34-43, 2014. doi: 10.1109/MSP.2014.2298045
• 13 J. Nikunen and T. Virtanen, "Direction of arrival based spatial covariance model for blind sound source separation," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 3, pp. 727-739, 2014. doi: 10.1109/TASLP.2014.2303576
• 14 Y. Zhao, B. Liu, and S. Wang, "A robust extraction algorithm for biomedical signals from noisy mixtures," Frontiers of Computer Science in China, vol. 5, no. 4, pp. 387-394, 2011. doi: 10.1007/s11704-011-1043-5
• 15 M. Taseska and E. A. Habets, "Blind source separation of moving sources using sparsity-based source detection and tracking," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 3, pp. 657-670, 2018. doi: 10.1109/TASLP.2017.2780993
• 16 E. Santana, J. C. Principe, E. E. Santana, R. C. S. Freire, and A. K. Barros, "Extraction of signals with specific temporal structure using kernel methods," IEEE Transactions on Signal Processing, vol. 58, no. 10, pp. 5142-5150, 2010. doi: 10.1109/TSP.2010.2053359
• 17 S. Ferdowsi, S. Sanei, and V. Abolghasemi, "A predictive modeling approach to analyze data in EEG–fMRI experiments," International Journal of Neural Systems, vol. 25, no. 1, 2015.
https://math.stackexchange.com/questions/1999533/why-are-the-domains-for-ln-x2-and-2-ln-x-different
# Why are the domains for $\ln x^2$ and $2\ln x$ different? If I have a function like this $f(x)=2 \ln(x)$ and I want to find the domain, I put $x>0$. But if I use the properties of logarithmic functions, I can write that function like $f(x)=\ln(x^2)$ and so the domain is all $\mathbb{R}$ and the graph of the function is different. Where is the mistake? • You can't have a function without having a domain. You can have an expression however and then ask for what values is the expression meaningful. That seems to be what you're doing. – zhw. Nov 4 '16 at 21:31 • My question is: why do, for example, $\ln(x)+\ln(x-3)$ and $\ln(x(x-3))$, which are supposedly the same, have two different domains and graphs? – L.G.A.G. Nov 4 '16 at 22:13 • @L.G.A.G. They are the same where they are both defined, but they need not be defined in the same places. This is a tricky thing about logarithms that I find precalculus references don't cover very well. – Ian Nov 5 '16 at 2:23 • just because they tell you in algebra that you can't have log(n) n<=0 doesn't mean that you can't... youtube.com/watch?v=IX_23EWpF5U also youtube.com/watch?v=soFDU-1knNE – cmarangu Jul 6 at 0:09 ## 5 Answers First, you could use $\ln x$ to define functions with different domains as long as $\ln x$ is defined in that domain. Second, the rule $\ln x^n=n\cdot \ln x$ is a bit sloppy. It should always be pointed out that $x>0$. Likewise, $\ln ab=\ln a+\ln b$, only if $a,b>0$. Note that: $$\ln (x^2)=2\ln |x| \ne 2 \ln x$$ so the two functions are different and have different domains. • Thanks. But if I take $\ln(x)+\ln(x-3)$, this function is $\ln(x(x-3))$ from the properties of logarithmic functions, but the domains are different. Why, if they should be the same thing? – L.G.A.G. Nov 4 '16 at 18:26 The functions $f(x) =2 \ln x$ and $g(x) = \ln x^2$ have different domains. The domain of $f$ is $(0,\infty)$, and the domain of $g$ is $\mathbb{R} - \{0\}$.
But as you said, when $x$ is in the domain of $f$ and the domain of $g$, we have $f(x) = g(x)$. The domain of a function is part of its definition. Restricting ourselves to functions from subsets of the real numbers to the real numbers, the logarithm function $x \mapsto \ln x$ is defined to have the domain $(0,\infty) \subset \mathbb R$. (I mention the restriction to $\mathbb R$ because there is also a function named $\ln$ whose domain is the non-zero complex numbers.) There is nothing to stop you from defining a function with $f_1$ with domain $[17,23]$ such that $f_1(x) = \ln x$ whenever $17\leq x \leq23$. The new function $f_1$ does not have all of the nice properties of $\ln$, for example it is never true that $f_1(x) + f_1(y) = f_1(xy)$, because $x$, $y$, and $xy$ cannot all simultaneously be in the domain of $f_1$. Nevertheless, $f_1$ is a perfectly well-defined function, even if it is far less useful than $\ln$, just as the function $\ln$ with domain $(0,\infty)$ is a perfectly-well defined function despite being less useful (for some purposes) than the complex principal logarithm function. Because the domain of $f_1$ is different from the domain of $\ln$, $f_1$ is not the same function as $\ln$. Now you want to define a function with the formula $f(x) = 2 \ln x$, but the definition needs a domain. I would argue that there is no such thing as "the" domain for a function defined by that formula, since it is possible to use the mapping $x \mapsto 2 \ln x$ to define functions over many different domains such as $(0,10]$, $[17,71]$, or $(3,4]\cup[10,11)\cup\{37\}$, but there is a maximal domain for functions of that kind, namely, the union of the domains of all possible functions that can be defined by that formula. That domain is again $(0,\infty)$. So if someone asks me for "the" domain of $f(x) = 2 \ln x$ I would guess that they meant the domain $(0,\infty)$; it is the "best" choice for most purposes. 
An alternative definition of the function $f(x) = 2 \ln x$ on the domain $(0,\infty)$ is to say that $f(x) = \ln(x^2)$ when $x \in (0,\infty)$. This is the same function because it has the same domain and takes the same value at each point in that domain. It is also possible to define a function $g(x) = \ln(x^2)$ for all $x \in \mathbb R - \{0\}$. That is a perfectly well-defined function, but it is a different function than $f$ since it has a different domain.

In $\ln(x^2)$, the input $x$ can take negative values because it is squared before being fed to the logarithm function.
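The domain difference is easy to see numerically; here is a small Python illustration (mine, not part of the original answers):

```python
import math

x = -2.0
print(math.log(x ** 2))            # ln(4): defined, since x is squared first

try:
    2 * math.log(x)                # ln(-2): undefined over the reals
except ValueError:
    print("2*ln(x) is undefined at x = -2")

# Where both are defined (x > 0) they agree, since ln(x^2) = 2*ln|x|:
x = 3.0
print(math.isclose(math.log(x ** 2), 2 * math.log(x)))   # True
```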
http://mathhelpforum.com/algebra/146415-equation.html
1. ## equation Write an equation in standard form for the line that passes through the points (-4, -7) and (-2, -6). Thanks 2. Originally Posted by ecdino2 Write an equation in standard form for the line that passes through the points (-4, -7) and (-2, -6). Thanks 1. calculate the slope ... $m = \frac{y_2-y_1}{x_2-x_1}$ 2. using either of the given points and the calculated slope, write a linear equation in point-slope form ... $y - y_1 = m(x - x_1)$ 3. convert the point-slope equation to standard form with integer coefficients ... $Ax + By = C$
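The three steps above can also be carried out in code; a small sketch using exact rational arithmetic (the helper name is made up for illustration, and vertical lines are not handled):

```python
from fractions import Fraction
from math import gcd, lcm

def standard_form(p1, p2):
    """Return integers (A, B, C) with A*x + B*y = C for the non-vertical
    line through points p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    m = Fraction(y2 - y1, x2 - x1)          # step 1: the slope
    # step 2: point-slope form, y - y1 = m*(x - x1)
    # step 3: rearrange to m*x - y = m*x1 - y1 and clear denominators
    c = m * x1 - y1
    k = lcm(m.denominator, c.denominator)   # common denominator to clear
    A, B, C = int(m * k), -k, int(c * k)
    g = gcd(gcd(abs(A), abs(B)), abs(C))    # reduce to lowest terms
    return A // g, B // g, C // g

A, B, C = standard_form((-4, -7), (-2, -6))
print(A, B, C)   # 1 -2 10, i.e. x - 2y = 10
```

Both given points satisfy the result: (-4) - 2(-7) = 10 and (-2) - 2(-6) = 10.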
https://www.gamedev.net/blogs/entry/2256513-small-update/
# Small Update

Hi again! Turns out uni is giving me a hard time over the last few weeks, so I don't have much time at all to code. So this will be a small update, no pics, sorry :P

The third time is the...

So, I decided to implement the algorithm in the paper pretty much literally, without paying much attention to what it's actually doing (besides a major issue I did notice). In short, my implementation doesn't work: height points get generated, but the midpoint displacement pass fails to create anything useable. I did notice that the algorithm as it stands in the paper uses an awful amount of memory. It has to store the "parent" points (usually 2 to 4) for each of the height values of the map (I'll explain what a parent point is in another journal entry). For a 512x512 map it's no problem, but for a 4k heightmap you can get 100 million (or more) objects jumping around, which isn't nice at all. You could use plain arrays and do some index calculations instead, but it still is an awful amount of height points to store.

Not everything is lost

Turns out I didn't pay attention at all to the other version of the paper I had (or maybe I just didn't know what it was doing at the time), but it has the pseudocode of a much nicer way to go about the midpoint displacement inverse process. It is recursive, so it might produce a stack overflow, but an iterative approach should be doable if I can wrap my head around the algorithm (which means, wish me luck :P ).

And...?

And that's it for now. See you next time! :D

You can avoid call stack overflows by using a custom stack implementation, for example using a queue:

```cpp
std::queue<Point*> queue;
queue.push(pointFirst);
while (!queue.empty()) {
    Point *point = queue.front();
    queue.pop();
    // process point
    // ...
    // process sub-points
    for (Point *sub : point->subs)
        queue.push(sub);
}
```

That...
Makes a lot of sense :D I'll have to check out the algorithm to know if it would work, if I have too many elements in the stack I might return to the issue I had with the previous version (millions of objects, and Java's GC looking angry at me). Thanks for the feedback!
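For reference, the ordinary (forward) midpoint displacement pass that the post builds on can be written iteratively with no recursion at all, by sweeping the array level by level. This is a sketch in Python; the parameter names and the roughness constant are my own choices, not from the paper discussed above:

```python
import random

def midpoint_displacement_1d(levels, roughness=0.5, seed=0):
    """Iterative 1D midpoint displacement: returns 2**levels + 1 heights."""
    rng = random.Random(seed)
    n = 2 ** levels
    h = [0.0] * (n + 1)
    h[0], h[n] = rng.uniform(-1, 1), rng.uniform(-1, 1)
    step, amp = n, 1.0
    while step > 1:
        half = step // 2
        for i in range(half, n, step):      # every midpoint at this level
            h[i] = (h[i - half] + h[i + half]) / 2 + rng.uniform(-amp, amp)
        step, amp = half, amp * roughness   # smaller features, smaller offsets
    return h

heights = midpoint_displacement_1d(5)
print(len(heights))   # 33 points for 5 subdivision levels
```

Because each level only reads values written at coarser levels, a flat array plus index arithmetic replaces both the recursion and the per-point "parent" objects.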
https://www.groundai.com/project/batch-normalization-biases-deep-residual-networks-towards-shallow-paths/
# Batch Normalization Biases Deep Residual Networks Towards Shallow Paths

## Abstract

Batch normalization has multiple benefits. It improves the conditioning of the loss landscape, and is a surprisingly effective regularizer. However, the most important benefit of batch normalization arises in residual networks, where it dramatically increases the largest trainable depth. We identify the origin of this benefit: At initialization, batch normalization downscales the residual branch relative to the skip connection, by a normalizing factor proportional to the square root of the network depth. This ensures that, early in training, the function computed by deep normalized residual networks is dominated by shallow paths with well-behaved gradients. We use this insight to develop a simple initialization scheme which can train very deep residual networks without normalization. We also clarify that, although batch normalization does enable stable training with larger learning rates, this benefit is only useful when one wishes to parallelize training over large batch sizes. Our results help isolate the distinct benefits of batch normalization in different architectures.

## 1 Introduction

The combination of skip connections He et al. (2016a, b) and batch normalization Ioffe and Szegedy (2015) dramatically increases the largest trainable depth of neural networks. This has led to a rapid improvement in model performance in recent years Tan and Le (2019); Xie et al. (2019). While some authors have succeeded in training very deep networks without normalization layers or skip connections (Saxe et al., 2013; Xiao et al., 2018), these papers required orthogonal initialization schemes which are not compatible with ReLU activation functions.
In contrast, batch normalized residual networks have been trained with thousands of layers without requiring careful initialization tricks (He et al., 2016a, b). A number of other normalization variants have also been proposed Ulyanov et al. (2016); Salimans and Kingma (2016); Ba et al. (2016); Wu and He (2018); Singh and Krishnan (2019). Following the introduction of layer normalization (Ba et al., 2016) and the growing popularity of transformers (Vaswani et al., 2017; Radford et al., 2019), almost all state-of-the-art networks currently contain both skip connections and normalization layers. However to our knowledge, despite their popularity, there is still not a simple explanation for why very deep normalized residual networks are trainable. Our contributions. The main contribution of this paper is to provide a simple explanation of why normalization layers enable us to train deep residual networks. By viewing a residual network as an ensemble of paths with shared weights but different depths (similar to Veit et al. (2016)), we show how batch normalization ensures that, early in training, very deep residual networks with tens of thousands of layers are dominated by shallow paths containing only tens of layers. This occurs because batch normalization downscales the residual branch relative to the skip connection, by a factor proportional to the square root of the network depth. This provides an intuitive account of why deep normalized residual networks can be efficiently optimized early in training, since they behave like ensembles of shallow networks with well behaved gradients. Our analysis is related to recent work studying the initialization conditions that make deep residual networks trainable (Hanin and Rolnick, 2018; Zhang et al., 2019), but these papers did not identify how normalization layers ameliorate bad initialization choices. 
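The square-root-of-depth downscaling described above can be seen in a toy simulation: if each residual branch matches the variance of its input, the signal variance doubles per block, whereas normalizing the branch to unit variance makes the variance grow only linearly, so each branch contributes an O(1/depth) fraction of the output. This is a sketch of mine, not the paper's code:

```python
import numpy as np

def final_variance(depth, normalize, n=100_000, seed=0):
    """Variance after `depth` residual blocks x -> x + f(x), where f(x) is a
    random branch whose scale matches its input (as in an unnormalized net)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    for _ in range(depth):
        branch = rng.standard_normal(n) * x.std()   # branch inherits input scale
        if normalize:
            branch /= branch.std()                  # batch norm: unit variance
        x = x + branch
    return x.var()

print(final_variance(16, normalize=False))  # ~2**16: variance doubles per block
print(final_variance(16, normalize=True))   # ~17: variance grows by one per block
```

In the normalized case the skip connection carries variance of order depth while each branch contributes variance one, which is the sqrt(depth) relative downscaling of the residual branch.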
The observation above suggests that one should be able to train deep residual networks without normalization or careful initialization, simply by downscaling the residual branch. To verify this claim, we introduce a one-line code change that can train very deep residual networks without normalization (“SkipInit”). Combined with additional regularization, SkipInit networks are competitive with their batch normalized counterparts at typical batch sizes (e.g. batch sizes below 1024 for ResNet-50 on ImageNet). SkipInit is similar in some aspects to the recently proposed Fixup initialization (Zhang et al., 2019). However, Fixup contains a number of components, and the relationship between these components and the effects of batch normalization layers is not clear. Our primary intention in introducing SkipInit is to provide additional evidence to support our explanation of why deep normalized residual networks are trainable. Finally, we provide a detailed empirical analysis to help isolate the different benefits of batch normalization for both shallow and deep residual networks on CIFAR-10 and ImageNet. Our results demonstrate that, as well as enabling us to train very deep residual networks, batch normalization also increases the largest stable learning rate of shallow networks. However, contrary to previous work claiming that enabling large learning rates is the primary benefit of batch normalization (Santurkar et al., 2018; Bjorck et al., 2018), we show that this effect does not explain why batch normalized shallow networks achieve higher test accuracies than unnormalized networks under constant epoch budgets. Large learning rates are only beneficial during large batch training Shallue et al. (2018); McCandlish et al. (2018), while for smaller batch sizes, batch normalized networks and unnormalized networks have similar optimal learning rates. 
These results further demonstrate the importance of evaluating the performance of alternatives to batch normalization at a wide range of batch sizes. We also verify that batch normalization can have an additional regularization effect, and we show that this benefit can be tuned by optimizing the “ghost batch size” (Hoffer et al., 2017) (the number of examples over which one computes batch statistics). Layout of the paper. We discuss background material and related work in section 2, before introducing our analysis of deep normalized residual networks in section 3. This analysis demonstrates why networks containing identity skip connections and batch normalization layers are dominated by shallow paths early in training. We introduce SkipInit in section 4, a simple initialization scheme that can train deep residual networks without normalization. We also discuss the similarities between SkipInit and Fixup. To study the different benefits of batch normalization, in section 5 we provide an empirical evaluation of Wide-ResNets on CIFAR-10 with and without batch normalization at a wide range of batch sizes, learning rates, and network depths. We provide an empirical comparison of batch normalization, SkipInit and Fixup on ImageNet in section 6. ## 2 Background and related work Residual Networks (ResNets). ResNets contain a sequence of residual blocks, which are composed of a “residual branch” comprising a number of convolutions, normalization layers and non-linearities, as well as a “skip connection”, which is typically just the identity. These skip connections create pathways through the network which have a shorter effective depth than the path through all the residual branches. See figure 1 for an illustration of a residual block. Most of our experiments will follow the popular Wide-ResNet architecture (Zagoruyko and Komodakis, 2016). We also consider ResNet50-V2 (He et al., 2016b) in section 6 and ResNet50-V1 (He et al., 2016a) in appendix E. Batch Normalization. 
As in most previous work, we apply batch normalization to convolutional layers. The inputs to and outputs from batch normalization layers are therefore 4-dimensional tensors, which we denote by $I_{b,x,y,c}$ and $O_{b,x,y,c}$. Here $b$ denotes the minibatch, $c$ denotes the channels, and $x$ and $y$ denote the two spatial dimensions. Batch normalization applies the same normalization to every input in the same channel (Ioffe and Szegedy, 2015):

$$O_{b,x,y,c} = \gamma_c \, \frac{I_{b,x,y,c} - \mu_c}{\sqrt{\sigma_c^2 + \epsilon}} + \beta_c.$$

Here, $\mu_c$ denotes the per-channel mean and $\sigma_c^2$ denotes the per-channel variance of the inputs, where both statistics are computed over the minibatch and the spatial dimensions $x$ and $y$. A small constant $\epsilon$ is added to the variance for numerical stability. The “scale” and “shift” parameters, $\gamma_c$ and $\beta_c$ respectively, are learnt during training. Running averages of the mean and variance are also maintained during training, and these averages are used at test time to ensure the predictions are independent of other examples in the batch. For distributed training, the batch statistics are usually estimated locally on a subset of the training minibatch (“ghost batch normalization” (Hoffer et al., 2017)). We discussed some of the benefits of batch normalization in the introduction. However, batch normalization also has many limitations. It breaks the independence between training samples in a minibatch, which makes it hard to apply in certain models (Girshick, 2015), and also contradicts the assumptions of most theoretical models of optimization (Mandt et al., 2017; Park et al., 2019). The normalization operation itself is expensive, and can constitute a significant fraction of the total cost of computing a parameter update (Wu et al., 2018). It also performs poorly when the batch size is too small (Ioffe, 2017; Wu and He, 2018), which can limit the size of model that can be trained on a single device. There is therefore great interest in understanding the benefits of batch normalization and identifying simple alternatives. Related Work. Balduzzi et al.
(2017) and Yang et al. (2019) both argued that ResNets with identity skip connections and batch normalization layers on the residual branch preserve correlations between different minibatches in very deep networks, and Balduzzi et al. (2017) suggested this effect can be mimicked by initializing networks close to linear functions. However, neither paper gives a clear explanation of why batch normalization has this benefit. Furthermore, even deep linear networks are difficult to train with Gaussian weights (Saxe et al., 2013), which suggests that linearity is not sufficient. Veit et al. (2016) argued that residual networks can be interpreted as an ensemble over many paths of different depths, and they found empirically that this ensemble is dominated by short paths in normalized networks. However they do not explain why this occurs or discuss whether batch normalization would affect this conclusion. Indeed, most papers studying the benefits of batch normalization have not discussed the combination of normalization and skip connections. Some authors have observed that batch normalization has a regularizing effect (Hoffer et al., 2017; Luo et al., 2019). Meanwhile, Santurkar et al. (2018) and Bjorck et al. (2018) both argued that the primary benefit of batch normalization is to improve the conditioning of the loss landscape, which enables stable training with larger learning rates. However while batch normalization does improve the conditioning of a network (Jacot et al., 2019), this conditioning decays rapidly as the network depth increases if skip connections are not present (Yang et al., 2019). Deep normalized networks without skip connections are therefore not trainable (Yang et al., 2019; Sankararaman et al., 2019). Fixup. Zhang et al. (2019) proposed Fixup initialization, which can train deep residual networks without batch normalization. They also confirmed that Fixup can replace layer normalization in transformers (Vaswani et al., 2017) for machine translation. 
Fixup has four components:

1. The classification layer and final convolution of each residual branch are initialized to zero.
2. The initial weights of the remaining convolutions are scaled down by $L^{-1/(2m-2)}$, for a network with $L$ residual branches and $m$ convolutions per branch.
3. A scalar multiplier is introduced at the end of each residual branch, initialized to one.
4. Scalar biases are introduced before every convolution, linear or activation function layer, initialized to zero.

The authors claim that component 2 above is crucial; however, we will demonstrate below that it is not necessary in practice at typical batch sizes. We show that a significantly simpler initialization scheme (SkipInit) enables efficient training of deep unnormalized ResNets so long as the batch size is not too big. The additional components of Fixup enable it to scale efficiently to slightly larger batch sizes than SkipInit, however neither Fixup nor SkipInit scales to the very large batch sizes possible with batch normalization.

## 3 A simple explanation of why deep normalized residual networks are trainable

Veit et al. (2016) argued that a residual network composed of $d$ residual blocks can be described as an ensemble over paths, which can be grouped according to the number of residual blocks that they traverse (which we refer to as the depth of the path, with $d$ being the total network depth):

$$\hat{f} = \prod_{i=1}^{d}\left(\hat{I} + \hat{f}_i\right) \qquad (1)$$

$$\;\; = \hat{I} + \sum_i \hat{f}_i + \sum_{i>j} \hat{f}_i \hat{f}_j + \ldots + \prod_{i=1}^{d} \hat{f}_i. \qquad (2)$$

In equation 1, $\hat{f}$ is an operator representing the $d$ residual blocks, $\hat{f}_i$ represents the $i$-th residual branch, and $\hat{I}$ is the identity operator representing the skip connection. We provide an illustration of this in figure 2. The distribution of path depths follows a binomial distribution, and the mean depth of a path is $d/2$. If the trainable depth were governed by the depth of a typical path, then introducing skip connections should roughly double the trainable depth. This was observed in unnormalized residual networks by Sankararaman et al. (2019).
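The binomial path-depth distribution above follows directly from expanding equation 1: a path of depth $k$ traverses the residual branch in exactly $k$ of the $d$ blocks, so there are $\binom{d}{k}$ such paths. A quick illustrative check (the depth $d = 16$ is an arbitrary choice):

```python
from math import comb

d = 16  # number of residual blocks (arbitrary illustrative choice)

# Number of paths of depth k: choose which k of the d residual
# branches the path traverses.
n_paths = [comb(d, k) for k in range(d + 1)]

total_paths = sum(n_paths)  # the expansion of equation 1 has 2^d terms
mean_depth = sum(k * n for k, n in enumerate(n_paths)) / total_paths
```

As expected, `mean_depth` equals $d/2$, since $\sum_k k \binom{d}{k} = d \, 2^{d-1}$.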
However, normalized residual networks can be trained at depths significantly greater than twice the depth of their non-residual counterparts. To understand this effect, we consider the variance of the hidden activations at initialization, on both the skip path and the residual branch. To present our main argument, we focus here on the variance across multiple channels, but we also discuss the variance across the batch on a single channel in appendix A. For simplicity, we assume the convolutions are initialized using He initialization (He et al., 2015), which preserves the variance of the incoming activations under ReLU non-linearities. We conclude:

Unnormalized networks: If the residual branch is not normalized, then we expect each residual branch $\hat{f}_i$ to preserve the variance of its input. If the inputs are mean centered with variance 1, the skip path after $d$ residual blocks will have expected variance $2^d$. One can prevent the variance from exploding by introducing a factor of $1/\sqrt{2}$ at the end of each residual block. Notice however that all paths contribute equally, which implies that the function at initialization is dominated by deep paths that traverse roughly half the residual branches.

Normalized networks: When the residual branch is normalized, it is reasonable to assume that the output of the normalization layer will have variance 1. Consequently, each residual block increases the variance of the skip path by 1, and the activations just before the $i$-th residual block will have expected variance $i$. Therefore, in expectation, the variance of any path which traverses the $i$-th residual branch will be suppressed by a factor of $i$, which implies that the hidden activations on that branch are suppressed by a factor of $\sqrt{i}$. As shown in figure 3, this downscaling factor is sufficiently strong to ensure that almost all of the variance of a network with 10000 residual blocks arises from shallow paths that traverse only a small fraction of the residual branches.
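A quick Monte Carlo sketch of this argument (illustrative and hypothetical: each residual branch is modelled as a random Gaussian signal standing in for a He-initialized branch, rather than an actual network):

```python
import numpy as np

rng = np.random.default_rng(0)
width, n_blocks = 10_000, 50

def skip_path_variances(normalized):
    """Track the variance of the skip path over successive residual blocks."""
    x = rng.normal(size=width)  # unit-variance input activations
    variances = []
    for _ in range(n_blocks):
        if normalized:
            # A normalized residual branch outputs variance ~1,
            # independent of the variance of its input.
            branch = rng.normal(size=width)
        else:
            # An unnormalized branch preserves the variance of its input
            # (modelled as an independent signal of the same scale).
            branch = x.std() * rng.normal(size=width)
        x = x + branch
        variances.append(x.var())
    return variances

normalized_var = skip_path_variances(True)     # grows linearly, like the depth
unnormalized_var = skip_path_variances(False)  # grows exponentially, like 2^depth
```

The normalized case ends with variance close to $d + 1$, while the unnormalized case explodes towards $2^d$.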
The index of a typical residual block is proportional to the total number of residual blocks $d$, which suggests that batch normalization downscales residual branches by a factor on the order of $\sqrt{d}$. Although weaker than the factor of $d$ proposed by Hanin and Rolnick (2018), we will find empirically that it is sufficiently strong to train networks with 1000 layers. To verify this argument, we evaluate the variance across channels, as well as the batch normalization statistics, of two normalized residual networks at initialization in figure 4. We define both networks in appendix A. In figure 4(a), the variance on the skip path of a deep linear ResNet is approximately equal to the current depth $d$, while the variance at the end of each residual branch is approximately 1. This occurs because the batch normalization moving variance is also approximately equal to the depth, confirming that normalization downscales the residual branch by a factor of $\sqrt{d}$. In figure 4(b), we consider a convolutional ResNet with ReLU activations evaluated on CIFAR-10. The variance on the skip path remains proportional to the depth, but with a coefficient slightly below 1. This is likely caused by zero padding at the image boundary. The batch normalization moving variance is also proportional to the depth, but slightly smaller than the variance across channels on the skip path. This occurs because ReLU activations introduce correlations between independent examples in the batch, which we discuss in appendix A. These correlations also cause the square of the batch normalization moving mean to grow with depth. The above provides a simple explanation of why deep normalized ResNets are trainable. Our argument extends to other normalization variants and model architectures, including layer normalization (Ba et al., 2016) and “pre-norm” transformers (where the normalization layers are on the residual branch) (Radford et al., 2019).
However, our argument does not apply to the original transformer, which placed normalization layers on the skip path (Vaswani et al., 2017). This original transformer is famously difficult to train.

## 4 SkipInit; an alternative to normalization

We claim that normalization enables us to train deep residual networks, because in expectation it downscales the residual branch at initialization by a normalizing factor proportional to the square root of the network depth. To verify this claim, we propose a simple alternative to normalization, “SkipInit”:

SkipInit: Include a scalar multiplier $\alpha$ at the end of every residual branch, and initialize $\alpha = 0$ (see figure 1).

After normalization is removed, it should be possible to implement SkipInit as a one line code change. In section 5.1, we show that we can train very deep residual networks, so long as $\alpha$ is initialized at a value of $1/\sqrt{d}$ or smaller, where $d$ denotes the total number of residual blocks (see table 1). We recommend setting $\alpha = 0$, so that the residual block represents the identity function at initialization. We emphasize that SkipInit is designed for ResNets which contain an identity skip connection (He et al., 2016b). We discuss how to extend SkipInit to the original ResNet-V1 formulation of ResNets in appendix E (He et al., 2016a). We introduced Fixup in section 2 (Zhang et al., 2019), which also ensures that the residual block represents the identity at initialization. However, Fixup contains multiple additional components. In practice, we have found that either component 1 or component 2 of Fixup is sufficient to train deep ResNet-V2s without normalization. Component 1 initializes residual blocks to the identity, while component 2 downscales the residual branch by a factor of $L^{-1/(2m-2)}$. Component 3 enhances the rate of convergence, while component 4 is required for deep ResNet-V1s (He et al., 2016a) (see appendix E). We introduce SkipInit to clarify the minimal conditions required to train deep ResNets without normalization.
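A minimal numpy sketch of a residual block with SkipInit (hypothetical code, using a single fully connected layer in place of a convolutional residual branch):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def residual_block(x, w, alpha):
    """Pre-activation residual block with a SkipInit multiplier alpha.

    The residual branch (here just ReLU followed by a linear layer) is
    scaled by the trainable scalar alpha before joining the skip path.
    """
    return x + alpha * (relu(x) @ w)

rng = np.random.default_rng(0)
width = 64
w = rng.normal(scale=np.sqrt(2.0 / width), size=(width, width))  # He init

x = rng.normal(size=(8, width))
# With alpha initialized to zero, the block is exactly the identity
# function at initialization, regardless of how many blocks are stacked.
out = residual_block(x, w, alpha=0.0)
```

Training then learns a non-zero $\alpha$, gradually reintroducing the residual branch.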
Both SkipInit and Fixup enable us to increase the depth of residual networks without increasing the network width. By contrast, for Gaussian initialization, the trainable depth of vanilla networks without skip connections is proportional to their width (Hanin and Rolnick, 2018; Hu et al., 2020).

## 5 An empirical study of the benefits of batch normalization in residual networks

In this section, we provide a thorough empirical study of batch normalization in residual networks, which identifies multiple distinct benefits. In section 5.1, we verify the claims made in sections 3 and 4 by studying the minimal components required to train very deep residual networks. In section 5.2, we investigate the claim made in multiple previous papers (Bjorck et al., 2018; Santurkar et al., 2018) that the primary benefit of batch normalization is to increase the largest stable learning rate. We study the additional regularization benefits of batch normalization in section 5.3.

### 5.1 Normalization and deep networks

In table 1, we provide the mean performance of an $n$-2 Wide-ResNet (Zagoruyko and Komodakis, 2016), trained on CIFAR-10 for 200 epochs at batch size 64, at a range of depths $n$ between 16 and 1000 layers. At each depth, we train the network 7 times for a range of learning rates on a logarithmic grid, and we independently measure the mean and standard deviation of the test accuracy for the best 5 runs. This procedure ensures that our results are not corrupted by outliers or failed runs. Throughout this paper, we train using SGD with heavy ball momentum, with a fixed momentum coefficient of 0.9. The optimal test accuracy is the mean performance at the learning rate whose mean test accuracy was highest. We always verify that the optimal learning rates are not at the boundary of our grid search. Although we tune the learning rate on the test set, we emphasize that our goal is not to achieve state of the art results.
Our goal is to compare the performance of different training procedures, and we apply the same experimental protocol in each case. We hold the learning rate constant for 100 epochs, before dropping the learning rate by a factor of 2 every 10 epochs. This simple schedule achieves higher test accuracies than the original schedule proposed in He et al. (2016a). We apply data augmentation including padding, random crops and left-right flips, and we also apply L2 regularization. We initialize convolutional layers using He initialization (He et al., 2015). We provide the optimal training losses in appendix C. As expected, batch normalized Wide-ResNets are trainable for a wide range of depths, and the optimal learning rate is only weakly dependent on the depth. We can recover this effect without normalization by incorporating SkipInit and initializing $\alpha = 1/\sqrt{d}$ or smaller, where $d$ denotes the number of residual blocks. This provides additional evidence to support our claim that batch normalization enables us to train deep residual networks by biasing the network towards shallow paths at initialization. Just like normalized networks, the optimal learning rate with SkipInit is almost independent of the network depth. SkipInit slightly underperforms batch normalization on the test set at all depths. However, as shown in sections 5.3 and 6, it may be possible to close this gap with sufficiently strong regularization. In table 2, we verify that one cannot train deep residual networks with SkipInit if $\alpha = 1$, confirming that it is necessary to downscale the residual branch. We also confirm that for unnormalized residual networks, it is not sufficient merely to ensure the activations do not explode on the forward pass (which can be achieved by simply multiplying the activations by $1/\sqrt{2}$ every time the residual branch and skip path merge).
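The schedule above (constant for 100 epochs, then halved every 10 epochs over the 200 epoch budget) can be written as a small helper; `base_lr` is a placeholder for the tuned learning rate:

```python
def learning_rate(epoch, base_lr):
    """Constant for the first 100 epochs, then halved every 10 epochs."""
    if epoch < 100:
        return base_lr
    n_drops = (epoch - 100) // 10 + 1
    return base_lr / (2 ** n_drops)
```

For example, with `base_lr = 0.1` the learning rate is 0.1 through epoch 99, 0.05 for epochs 100 to 109, 0.025 for epochs 110 to 119, and so on.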
Finally, we noticed that, at initialization, the loss in very deep networks is dominated by the L2 regularization term, which causes the weights to shrink rapidly early in training. To clarify whether this effect is necessary, we also evaluated the performance of SkipInit ($\alpha = 0$) without L2 regularization. We find that L2 regularization is not necessary for trainability. This demonstrates that we can train very deep residual networks without normalization and without reducing the scale of the weights at initialization.

### 5.2 Normalization and large batch training

In this section, we investigate the claim made by Santurkar et al. (2018) and Bjorck et al. (2018) that the primary benefit of batch normalization is to improve the conditioning of the loss landscape, which increases the largest stable learning rate. To study this, in figure 5 we provide the mean performance of a 16-4 Wide-ResNet, trained on CIFAR-10 for 200 epochs at a wide range of batch sizes. We follow the same experimental protocol described in section 5.1, however we average over the best 12 out of 15 runs. To enable us to consider extremely large batch sizes on a single GPU, we evaluate the batch statistics over a fixed ghost batch size of 64 (Hoffer et al., 2017), before accumulating gradients to form larger batches. We are therefore unable to consider batch sizes below 64 with batch normalization. Evaluating batch statistics over a fixed number of training examples improves the test accuracy achieved with large batches (Hoffer et al., 2017; Goyal et al., 2017) and reduces communication overheads in distributed training. We repeat this experiment in the small batch limit in section 5.3, evaluating batch statistics over the full training batch. We see that performance is better with batch normalization than without batch normalization on both the test set and the training set at all batch sizes.
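The ghost batch trick described above can be sketched as follows (a hypothetical numpy helper for a 2D activation tensor; each ghost batch is normalized independently using its own statistics):

```python
import numpy as np

def ghost_batch_norm(x, ghost_size, eps=1e-5):
    """Normalize activations x of shape (B, F) over ghost batches.

    Each ghost batch of `ghost_size` examples is normalized with its own
    per-feature statistics, decoupling the normalization noise from the
    total batch size (Hoffer et al., 2017).
    """
    B = x.shape[0]
    assert B % ghost_size == 0, "batch must divide into ghost batches"
    out = np.empty_like(x)
    for start in range(0, B, ghost_size):
        g = x[start:start + ghost_size]
        out[start:start + ghost_size] = (g - g.mean(0)) / np.sqrt(g.var(0) + eps)
    return out

x = np.random.default_rng(0).normal(2.0, 3.0, size=(256, 10))
out = ghost_batch_norm(x, ghost_size=64)
```

With a fixed ghost size, gradients can then be accumulated across ghost batches to form arbitrarily large training batches.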
For clarity, in figure 5 we provide the training loss excluding the L2 regularization term. Additionally, both with and without batch normalization, the final test accuracy is independent of batch size in the small batch limit, before beginning to decrease when the batch size exceeds some threshold. This threshold is significantly larger when batch normalization is used, which demonstrates that one can efficiently scale training to larger batch sizes in normalized networks (Goyal et al., 2017). To better understand why batch normalized networks can scale training efficiently to larger batches, we provide the optimal learning rates which maximize the test accuracy and minimize the training loss in figure 6. When the batch size is small, the optimal learning rates are proportional to the batch size (Mandt et al., 2017; Smith and Le, 2017), and the optimal learning rate is similar with and without batch normalization. However, when the batch size is large, the optimal learning rates are independent of the batch size (McCandlish et al., 2018). Intuitively, we have reached the largest stable learning rate, above which training diverges. We find that batch normalization increases the largest stable learning rate, confirming that it improves the conditioning of the loss (Santurkar et al., 2018; Bjorck et al., 2018). It is this benefit which enables us to efficiently scale training to larger batches. SkipInit reduces the gap in test accuracy between normalized and unnormalized networks, and it achieves smaller training losses than batch normalization when the batch size is small. However, SkipInit does not share the full conditioning benefits of batch normalization, and therefore it is not competitive with batch normalized networks in the large batch limit. We show in section 6 that similar limitations apply to Fixup. Our results confirm that batch normalization increases the largest stable learning rate, which enables large batch training.
However, we emphasize that this benefit does not increase the test accuracy one can achieve within a finite epoch budget. As figures 5 and 6 demonstrate, we always achieve the best performance at small batch sizes, for which the optimal learning rates with and without batch normalization are significantly smaller than the largest stable learning rate. Santurkar et al. (2018) and Bjorck et al. (2018) claimed that the primary benefit of batch normalization is to increase the largest stable learning rate, however our results show that this is not correct for ResNets. In ResNets, the primary benefit of batch normalization is to bias the network towards shallow paths. This allows us to train deeper networks, and it also improves the test accuracies of shallow networks. We show in the next two sections that the gap in test accuracy between batch normalization and SkipInit can be further reduced or completely closed with additional regularization.

### 5.3 Normalization and regularization

In this section, we study the regularization benefit of batch normalization at a range of batch sizes. We also explore ways to recover this regularization benefit using “Regularized SkipInit”, which comprises three components:

1. We introduce a scalar multiplier at the end of each residual branch, initialized to zero (SkipInit).
2. We introduce biases to the convolutional layers.
3. We apply Dropout on the classification layer.

We provide the performance of our 16-4 Wide-ResNet at a range of batch sizes in the small batch limit in figure 7. To focus on the regularization benefit of batch normalization, we evaluate the batch statistics over the full training batch, enabling us to consider any batch size (note that batch normalization reduces to instance normalization when the batch size is 1). We provide the corresponding optimal learning rates in appendix D. The test accuracy achieved with batch normalization initially improves as the batch size rises, before decaying for larger batch sizes.
Meanwhile, the training loss increases as the batch size rises from 1 to 2, but then decreases consistently as the batch size rises further. This confirms that the noise arising from uncertainty in the batch statistics does have a generalization benefit if properly tuned (Luo et al., 2019), which is why we use a ghost batch size of 64 in the preceding sections. The performance of SkipInit and Regularized SkipInit is independent of the batch size in the small batch limit, and consequently Regularized SkipInit achieves higher test accuracies than batch normalization when the batch size is very small. Note that we introduced Dropout (Srivastava et al., 2014) to show that extra regularization may be necessary to close the performance gap when normalization is removed; more sophisticated regularizers would likely achieve higher test accuracies.

## 6 A comparison of batch normalization, SkipInit and Fixup on ImageNet

We confirm that SkipInit scales to large, challenging data distributions by providing an empirical comparison of SkipInit, Fixup initialization (Zhang et al., 2019) and batch normalization on ImageNet. Since SkipInit is designed for residual networks with an identity skip connection, we consider the ResNet50-V2 architecture (He et al., 2016b). We provide additional experiments on ResNet50-V1 in appendix E (He et al., 2016a). We use the original architectures, and do not apply the popular modifications to these networks described in Goyal et al. (2017). When batch normalization is used, we set the ghost batch size equal to 256. We train for 90 epochs. The learning rate is linearly increased from 0 to the specified value over the first 5 epochs of training (Goyal et al., 2017), then held constant for 40 epochs, before being decayed by a factor of 2 every 5 epochs. As before, we tune the learning rate at each batch size on a logarithmic grid. We provide the optimal validation accuracies in table 3.
We found that including biases in convolutional layers led to a small boost in validation accuracy for SkipInit, and we therefore included biases in all SkipInit runs. SkipInit matches the validation set performance of batch normalization and Fixup at the standard batch size of 256. However, both SkipInit and Fixup underperform batch normalization when the batch size is very large. Both SkipInit and Fixup can outperform batch normalization on the validation set with extra regularization (Dropout) at small batch sizes. Fixup outperforms SkipInit when the batch size is very large, suggesting that the second component of Fixup has a small conditioning benefit (see section 2). However, we found in section 5.1 that this component is not necessary to train very deep residual networks, and we confirm here that it does not improve performance at the standard batch size of 256.

## 7 Discussion

Our work confirms that batch normalization has three principal benefits. In (subjective) order of importance:

1. Batch normalization enables us to train very deep ResNets.
2. Batch normalization improves conditioning, which enables us to scale training to larger batch sizes.
3. Batch normalization has a regularizing effect.

This work identifies a simple explanation for benefit 1: normalization layers bias deep residual networks towards shallow paths. These shallow paths have well-behaved gradients, enabling efficient training (Balduzzi et al., 2017; Hanin and Rolnick, 2018; Yang et al., 2019). This benefit also applies to other normalization schemes, including layer normalization in “pre-norm” transformers (Radford et al., 2019). A single normalization layer per residual branch is sufficient, and normalization layers should not be placed on the skip path (as in the original transformer of Vaswani et al. (2017)).
We can recover benefit 1 without normalization by introducing a scalar multiplier on the residual branch inversely proportional to the square root of the network depth (or zero for simplicity). This one line code change can train deep residual networks without normalization, and also enhances the performance of shallow residual networks. We therefore conclude that one no longer needs normalization layers to train deep residual networks with small batch sizes (e.g., batch sizes below 1024 for ResNet-50 on ImageNet). However, the conditioning benefit (benefit 2) of batch normalization remains important when one wishes to train with large batch sizes. This could make normalization necessary in time-critical situations, for instance if a production model is retrained frequently in response to changing user preferences. Also, since batch normalization has a regularizing effect (benefit 3), it may be beneficial in some architectures if one wishes to achieve the highest possible test accuracy. Note however that one can often exceed the test accuracy of normalized networks by introducing alternative regularizers (see section 6 or Zhang et al. (2019)). We therefore believe future work should focus on identifying an alternative to batch normalization that recovers its conditioning benefits. We would like to comment briefly on the similarity between SkipInit for residual networks, and Orthogonal initialization of vanilla fully connected tanh networks (Saxe et al., 2013). Orthogonal initialization is currently the only initialization scheme capable of training very deep networks without skip connections. It initializes the weights of each layer as an orthogonal matrix, such that the activations after a linear layer are a rotation (or reflection) of the activations before the layer. Meanwhile, the tanh non-linearity is approximately equal to the identity for small activations over a region of scale 1 around the origin. 
Intuitively, if the incoming activations are mean centered with scale 1, they will pass through the non-linearity almost unchanged. Since rotations compose, the approximate action of the entire network at initialization is to rotate (or reflect) the input. Like SkipInit, the initial functions generated by orthogonal initialization of vanilla tanh networks are almost independent of the network depth. However ReLUs are not compatible with orthogonal initialization, since they are not linear about the origin. This has limited the use of orthogonal initialization in practice. To conclude: Batch normalization and SkipInit have one crucial property in common. At initialization, they both bias deep residual networks towards an ensemble of shallow paths with well-behaved gradients. This property enables us to train deep residual networks without increasing the network width (Hanin and Rolnick, 2018; Hu et al., 2020), and it is therefore a major factor behind the popularity of normalized residual networks in deep learning. Batch normalization also has both a conditioning benefit and a regularization benefit. However, we demonstrate in this paper that one can train competitive networks without normalization by choosing a sensible initialization scheme, introducing extra regularization, and training with small minibatches. ## Acknowledgements We thank Jeff Donahue, Chris Maddison, Erich Elsen, James Martens, Razvan Pascanu, Chongli Qin, Karen Simonyan, Yann Dauphin, Esme Sutherland and Yee Whye Teh for various discussions that have helped improve the paper. ## Appendix A The influence of ReLU non-linearities on the batch statistics in residual networks. In figure 4 of the main text, we studied the batch statistics of residual blocks at a wide range of depths in two different architectures; a deep linear fully connected normalized ResNet and a deep convolutional normalized ResNet with ReLU non-linearities. 
We now define both models in full: Deep fully connected linear ResNet: The inputs are 100 dimensional vectors composed of independent random samples from the unit normal distribution, and the batch size is 1000. These inputs first pass through a batch normalization layer and a single fully connected linear layer of width 1000. We then apply a series of residual blocks. Each block contains a skip path (the identity), and a residual branch composed of a batch normalization layer and a fully connected linear layer of width 1000. All linear layers are initialized with LeCun normal initialization (LeCun et al., 2012) to preserve variance in the absence of non-linearities. Deep convolutional ReLU ResNet: The inputs are batches of 100 images from the CIFAR-10 training set. We first apply a convolution of width 100 and stride 2, followed by a batch normalization layer, a ReLU non-linearity, and an additional convolution of width 100 and stride 2. We then apply a series of residual blocks. Each block contains a skip path (the identity), and a residual branch composed of a batch normalization layer, a ReLU non-linearity, and a convolution of width 100 and stride 1. All convolutions are initialized with He initialization (He et al., 2015) to preserve variance in the presence of ReLU non-linearities. In both networks, we evaluate the variance across channels at initialization on both the skip path and at the end of the residual branch, as well as the mean moving variance (i.e., the single channel moving variance averaged over channels) and mean squared moving mean (i.e., squared moving mean averaged over channels) of the batch normalization layer. To obtain the batch normalization statistics, we set the momentum parameter of the batch normalization layers to 0, and then apply a single update to the batch statistics. 
In the main text, we found that for the deep linear network, the variance across channels on the skip path was equal to the mean moving variance of the batch normalization layer, while the mean squared moving mean of the batch normalization layer was close to zero. However, when we introduced ReLU non-linearities in the deep convolutional ResNet, the mean moving variance of the batch normalization layer was smaller than the variance across channels on the skip path, and the mean squared moving mean of the normalization layer grew proportional to the depth. To clarify the origin of this effect, we consider an additional fully connected deep ReLU ResNet in figure 8. We form this network from the fully connected linear ResNet above by inserting a ReLU layer after each batch normalization layer, and we replace LeCun initialization with He initialization. This network is easier to analyze than the ConvNet, since the inputs are drawn from the normal distribution and there are no boundary effects due to padding. In figure 8, we find that the variance over channels on the skip path is approximately equal to the depth $\ell$ of the residual block, while the variance over channels at the end of the residual branch is approximately 1. This is identical to the deep linear network and matches our theoretical predictions in section 3. However, the mean moving variance of the batch normalization layer is approximately equal to $(1 - 1/\pi)\ell$, while the mean squared moving mean of the normalization layer is approximately equal to $\ell/\pi$. Notice that the sum of the mean squared moving mean and the mean moving variance of the normalization layer is equal to the depth $\ell$ of the block, which confirms that the batch normalization layer still reduces the variance over channels on the residual branch by a factor equal to the depth $\ell$.
To understand this plot, we note that the outputs of a ReLU non-linearity have non-zero mean, and therefore the ReLU layer will cause the hidden activations of different examples in the batch to become increasingly correlated. Because the hidden activations of different examples are correlated, the variance across channels (the variance of hidden activations across multiple examples and multiple channels) becomes different from the variance over a single channel (the variance across multiple examples for a single channel). For example, consider a simple network whose inputs are a batch of $B$ samples of dimension $W$ from a Gaussian distribution with mean $\mathbb{E}(x_{ij}) = 0$ and covariance $\mathbb{E}(x_{ij} x_{lm}) = \delta_{il} \delta_{jm}$, where $\delta$ is the Kronecker delta. The first dimension corresponds to the features and the second dimension corresponds to the batch. The network consists of a ReLU layer, followed by a fully connected linear layer with output dimension $W$, and finally a batch normalization layer. The linear layer is initialized from an uncorrelated Gaussian distribution with mean $\mathbb{E}(\omega_{ik}) = 0$ and covariance $\mathbb{E}(\omega_{ik} \omega_{jl}) = (2/W)\,\delta_{ij}\delta_{kl}$ (He initialization). The inputs to the normalization layer, $y_{ij} = \sum_k \omega_{ik} x^+_{kj}$, where $x^+ = \max(x, 0)$ denotes the output of the ReLU non-linearity. The mean activation $\mathbb{E}(y_{ij}) = 0$, while the covariance,

$$\mathbb{E}(y_{ij} y_{lm}) = \mathbb{E}\Big( \sum_{k,n} \omega_{ik} x^+_{kj} \omega_{ln} x^+_{nm} \Big) = \frac{2 \delta_{il}}{W} \sum_k \mathbb{E}(x^+_{kj} x^+_{km}) = \delta_{il} \left( \frac{1 + (\pi - 1)\delta_{jm}}{\pi} \right). \qquad (3)$$

To derive equation 3, we recalled that $x \sim \mathcal{N}(0, 1)$, which implies that $\mathbb{E}((x^+)^2) = 1/2$ while $\mathbb{E}(x^+)^2 = 1/(2\pi)$. We can now apply equation 3 to compute the expected variance across multiple channels, $\mathbb{E}(\sigma^2) = \mathbb{E}(y_{ij}^2) - \mathbb{E}(y_{ij})^2 = 1$. The expected mean squared activation across a single channel (the expected BatchNorm moving mean squared),

$$\mathbb{E}(\mu_c^2) = \mathbb{E}\bigg( \Big( \frac{1}{B} \sum_j y_{cj} \Big)^2 \bigg) = \frac{1}{B^2} \sum_{j,k} \mathbb{E}(y_{cj} y_{ck}) = \frac{1}{\pi} + \frac{\pi - 1}{\pi B} \approx 1/\pi. \qquad (4)$$

Note that to reach the final approximation in equation 4 we assumed that $B \gg 1$. It is immediately clear that the expected variance across a single channel (the expected BatchNorm moving variance), $\mathbb{E}(\sigma_c^2) = \mathbb{E}\big( \frac{1}{B} \sum_j y_{cj}^2 \big) - \mathbb{E}(\mu_c^2) \approx (1 - 1/\pi)$. These predictions match the scaling factors for batch normalization statistics we observed empirically in figure 8.
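These closed-form predictions are easy to verify with a short Monte Carlo check. This is our own sketch; the sizes W, B and the trial count are arbitrary choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
W, B, trials = 256, 256, 100  # width, batch size, Monte Carlo trials
mu2, var_c = [], []
for _ in range(trials):
    x = rng.normal(size=(W, B))                      # inputs: features x batch
    xp = np.maximum(x, 0)                            # ReLU output x+
    w = rng.normal(size=(W, W)) * np.sqrt(2.0 / W)   # He initialization
    y = w @ xp                                       # inputs to the normalization layer
    mu2.append(np.mean(y.mean(axis=1) ** 2))         # mean squared moving mean
    var_c.append(np.mean(y.var(axis=1)))             # mean moving variance
print(round(np.mean(mu2), 3), round(np.mean(var_c), 3))  # ~ 1/pi and ~ 1 - 1/pi
```

The two printed estimates should sit close to 1/π ≈ 0.318 and 1 − 1/π ≈ 0.682, with a small positive/negative bias of order (π − 1)/(πB) from the finite batch size.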
We emphasize that this derivation does not directly apply to the residual network, since it does not account for the presence of paths containing multiple normalization layers. However it does provide a simple example in which ReLU non-linearities introduce correlations in the hidden activations between training examples. These correlations are responsible for the emergence of a large drift in the mean value of each hidden activation as the network depth increases, and a corresponding reduction in the variance observed on a single channel across multiple examples.

## Appendix B Estimating the contributions to logit variance from paths of different depths

In figure 3, we estimated the contributions to the variance of the logits at initialization from paths of different depths, for normalized and unnormalized residual networks. We compute these estimates using a simple dynamical program. We begin with zero residual blocks, for which there is a single path of depth zero with variance 1. We then add one residual block at a time, and we calculate the variance arising from paths of every possible path depth. When estimating the variance of paths in unnormalized residual networks, we assume that every path has variance 1, such that we simply have to count the paths. Therefore, if $V_p(d)$ denotes the variance arising from all paths of depth $p$ after $d$ residual blocks, then $V_p(d+1) = V_p(d) + V_{p-1}(d)$, and hence $V_p(d) = \binom{d}{p}$. However, when estimating the variance of paths in normalized residual networks, we assume that every path which traverses the residual branch of block $d+1$ is suppressed by a factor of $(d+1)$, which implies that $V_p(d+1) = V_p(d) + V_{p-1}(d)/(d+1)$.

## Appendix C The optimal training losses of Wide-ResNets at a range of network depths

In tables 4 and 5, we provide the minimum training losses, as well as the optimal learning rates at which the training loss is minimized, when training an $n$-2 Wide-ResNet for a range of depths $n$. At each depth, we train for 200 epochs on CIFAR-10 following the training procedure described in section 5.1 of the main text.
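The dynamical program described in Appendix B above can be sketched in a few lines. This is our own reconstruction, assuming that a path through the residual branch of block d is suppressed by a factor of d in the normalized network:

```python
from math import comb

def path_variances(blocks, normalized):
    # v[p] holds the variance contributed by all paths of depth p.
    v = [1.0]  # zero blocks: a single path of depth zero with variance 1
    for d in range(1, blocks + 1):
        new = [0.0] * (len(v) + 1)
        for p, var in enumerate(v):
            new[p] += var  # the path skips block d via the identity
            # Paths through the residual branch of block d are suppressed by
            # a factor of d in normalized networks (and not at all otherwise).
            new[p + 1] += var / d if normalized else var
        v = new
    return v

# Unnormalized: the variance from depth-p paths simply counts the paths, C(n, p).
assert path_variances(6, normalized=False)[2] == comb(6, 2)
# Normalized: the total variance grows only linearly with depth, to n + 1.
assert round(sum(path_variances(10, normalized=True)), 6) == 11.0
```

The two assertions reproduce the qualitative picture in figure 3: without normalization the deep paths dominate (binomial growth), while with normalization the total variance grows only linearly and shallow paths carry most of it.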
These results correspond to the same architectures considered in tables 1 and 2, where we provided the associated test set accuracies. We provide the training loss excluding the L2 regularization term (i.e., the training set cross entropy), since one cannot meaningfully compare the L2 regularization penalties of normalized and unnormalized networks. In table 4, we confirm that both batch normalization and SkipInit can achieve training losses which depend only weakly on the network depth. In table 5, we confirm that SkipInit cannot train very deep residual networks if the initial value of the scalar multipliers is too large. We also confirm that one cannot train very deep residual networks solely by normalizing the forward pass (which can be achieved by dividing the output of each residual block by $\sqrt{2}$). Finally, we confirm that SkipInit can achieve extremely small training losses across a wide range of depths even if we do not apply L2 regularization.

## Appendix D The optimal learning rates of Wide-ResNets in the small batch limit

In figure 9, we provide the optimal learning rates of SkipInit, Regularized SkipInit and Batch Normalization, when training a 16-4 Wide-ResNet on CIFAR-10. These optimal learning rates correspond to the training losses and test accuracies provided in figure 7. We evaluate the batch statistics for batch normalization layers over the full training batch.

## Appendix E ImageNet results for ResNet-50-V1

In table 6, we present the performance of batch normalization, Fixup and Regularized SkipInit when training ResNet-50-V1 (He et al., 2016a). Unlike ResNet-V2 and Wide-ResNets, this network introduces a ReLU at the end of the residual block, after the skip connection and residual branch merge. We find that Fixup slightly underperforms batch normalization when the batch size is small, but considerably underperforms batch normalization when the batch size is large (similar to the results on ResNet-50-V2).
However, Regularized SkipInit significantly underperforms both batch normalization and Fixup at all batch sizes. This is not surprising, since we designed SkipInit for models which contain an identity skip connection through the residual block. We provide additional results for a modified version of Regularized SkipInit, which contains a single additional scalar bias in each residual block, just before the final ReLU (after the skip connection and residual branch merge). This scalar bias eliminates the gap in validation accuracy between Fixup and Regularized SkipInit when the batch size is small. We conclude that only two components of Fixup are essential to train the original ResNet-V1: initializing the residual branch at zero, and introducing a scalar bias after the skip connection and residual branch merge.

### Footnotes

1. Normalized networks achieve smaller L2 losses because they can shrink the weights without changing the network function.
2. As the batch size grows, the number of updates decreases, since the number of epochs is fixed. The performance might not degrade with batch size under a constant step budget (Shallue et al., 2018).

### References

1. Layer normalization. arXiv preprint arXiv:1607.06450.
2. The shattered gradients problem: if resnets are the answer, then what is the question?. In Proceedings of the 34th International Conference on Machine Learning, pp. 342–350.
3. Understanding batch normalization. In Advances in Neural Information Processing Systems, pp. 7694–7705.
4. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1440–1448.
5. Accurate, large minibatch SGD: training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677.
6. How to start training: the effect of initialization and architecture. In Advances in Neural Information Processing Systems, pp. 571–581.
7. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034.
8. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
9. Identity mappings in deep residual networks. In European Conference on Computer Vision, pp. 630–645.
10. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. In Advances in Neural Information Processing Systems, pp. 1731–1741.
11. Provable benefit of orthogonal initialization in optimizing deep linear networks. arXiv preprint arXiv:2001.05992.
12. Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
13. Batch renormalization: towards reducing minibatch dependence in batch-normalized models. In Advances in Neural Information Processing Systems, pp. 1945–1953.
14. Freeze and chaos for DNNs: an NTK view of batch normalization, checkerboard and boundary effects. arXiv preprint arXiv:1907.05715.
15. Efficient backprop. In Neural Networks: Tricks of the Trade, pp. 9–48.
16. Towards understanding regularization in batch normalization. In International Conference on Learning Representations.
17. Stochastic gradient descent as approximate Bayesian inference. The Journal of Machine Learning Research 18 (1), pp. 4873–4907.
18. An empirical model of large-batch training. arXiv preprint arXiv:1812.06162.
19. The effect of network width on stochastic gradient descent and generalization: an empirical study. arXiv preprint arXiv:1905.03776.
20. Language models are unsupervised multitask learners. OpenAI Blog 1 (8), pp. 9.
21. Weight normalization: a simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pp. 901–909.
22. The impact of neural network overparameterization on gradient confusion and stochastic gradient descent. arXiv preprint arXiv:1904.06963.
23. How does batch normalization help optimization?. In Advances in Neural Information Processing Systems, pp. 2483–2493.
24. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120.
25. Measuring the effects of data parallelism on neural network training. arXiv preprint arXiv:1811.03600.
26. Filter response normalization layer: eliminating batch dependence in the training of deep neural networks. arXiv preprint arXiv:1911.09737.
27. A Bayesian perspective on generalization and stochastic gradient descent. arXiv preprint arXiv:1710.06451.
28. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15 (1), pp. 1929–1958.
29. EfficientNet: rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946.
30. Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022.
31. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
32. Residual networks behave like ensembles of relatively shallow networks. In Advances in Neural Information Processing Systems, pp. 550–558.
33. L1-norm batch normalization for efficient training of deep neural networks. IEEE Transactions on Neural Networks and Learning Systems.
34. Group normalization. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 3–19.
35. Dynamical isometry and a mean field theory of CNNs: how to train 10,000-layer vanilla convolutional neural networks. arXiv preprint arXiv:1806.05393.
36. Self-training with noisy student improves ImageNet classification. arXiv preprint arXiv:1911.04252.
37. A mean field theory of batch normalization. arXiv preprint arXiv:1902.08129.
38. Wide residual networks. arXiv preprint arXiv:1605.07146.
39. Fixup initialization: residual learning without normalization. arXiv preprint arXiv:1901.09321.
Summary of Some Reactions and Reaction Mechanisms

ELIMINATION VERSUS SUBSTITUTION IN HALOGENOALKANES

The factors that decide whether halogenoalkanes undergo elimination reactions or nucleophilic substitution when they react with hydroxide ions from, say, sodium hydroxide or potassium hydroxide. Details for each of these types of reaction are given elsewhere.

The reactions

Both reactions involve heating the halogenoalkane under reflux with sodium or potassium hydroxide solution.

Nucleophilic substitution

The hydroxide ions present are good nucleophiles, and one possibility is a replacement of the halogen atom by an -OH group to give an alcohol via a nucleophilic substitution reaction.

Elimination

Halogenoalkanes also undergo elimination reactions in the presence of sodium or potassium hydroxide. The 2-bromopropane has reacted to give an alkene - propene. Notice that a hydrogen atom has been removed from one of the end carbon atoms together with the bromine from the centre one. In all simple elimination reactions the things being removed are on adjacent carbon atoms, and a double bond is set up between those carbons.

What decides whether you get substitution or elimination?

The reagents you are using are the same for both substitution and elimination - the halogenoalkane and either sodium or potassium hydroxide solution. In all cases, you will get a mixture of both reactions happening - some substitution and some elimination. What you get most of depends on a number of factors.

The type of halogenoalkane

This is the most important factor.

• primary halogenoalkanes: mainly substitution
• secondary halogenoalkanes: both substitution and elimination
• tertiary halogenoalkanes: mainly elimination

For example, whatever you do with tertiary halogenoalkanes, you will tend to get mainly the elimination reaction, whereas with primary ones you will tend to get mainly substitution.
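The two competing reactions described above can be written as overall equations (our own summary, shown here because the original structural diagrams are not reproduced), using 2-bromopropane as the example:

```latex
% Nucleophilic substitution: the halogen is replaced by -OH, giving an alcohol
\mathrm{CH_3CHBrCH_3 + OH^- \longrightarrow CH_3CH(OH)CH_3 + Br^-}

% Elimination: H and Br are lost from adjacent carbons, giving an alkene
\mathrm{CH_3CHBrCH_3 + OH^- \longrightarrow CH_2{=}CHCH_3 + H_2O + Br^-}
```

Both equations start from the same reagents; the factors below decide which product dominates.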
However, you can influence things to some extent by changing the conditions.

The solvent

The proportion of water to ethanol in the solvent matters.
• Water encourages substitution.
• Ethanol encourages elimination.

The temperature

Higher temperatures encourage elimination.

Concentration of the sodium or potassium hydroxide solution

Higher concentrations favour elimination.

In summary

For a given halogenoalkane, to favour elimination rather than substitution, use:
• heat
• a concentrated solution of sodium or potassium hydroxide
• pure ethanol as the solvent

The role of the hydroxide ions

The role of the hydroxide ion in a substitution reaction

In the substitution reaction between a halogenoalkane and OH- ions, the hydroxide ions are acting as nucleophiles. For example, one of the lone pairs on the oxygen can attack the slightly positive carbon. This leads on to the loss of the bromine as a bromide ion, and the -OH group becoming attached in its place.

The role of the hydroxide ion in an elimination reaction

Hydroxide ions have a very strong tendency to combine with hydrogen ions to make water - in other words, the OH- ion is a very strong base. In an elimination reaction, the hydroxide ion hits one of the hydrogen atoms in the CH3 group and pulls it off. This leads to a cascade of electron pair movements resulting in the formation of a carbon-carbon double bond, and the loss of the bromine as Br-.

THE MECHANISM FOR THE ESTERIFICATION REACTION

This page looks in detail at the mechanism for the formation of esters from carboxylic acids and alcohols in the presence of concentrated sulphuric acid acting as the catalyst. It uses the formation of ethyl ethanoate from ethanoic acid and ethanol as a typical example.

The mechanism for the formation of ethyl ethanoate

A reminder of the facts

Ethanoic acid reacts with ethanol in the presence of concentrated sulphuric acid as a catalyst to produce the ester, ethyl ethanoate. The reaction is slow and reversible.
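As a reminder, the overall reversible equation for this esterification (our own summary; the original diagram is not reproduced) is:

```latex
\mathrm{CH_3COOH + C_2H_5OH
  \;\underset{\text{conc. } H_2SO_4}{\rightleftharpoons}\;
  CH_3COOC_2H_5 + H_2O}
```

The equilibrium position explains why the ester is removed by distillation as it forms.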
To reduce the chances of the reverse reaction happening, the ester is distilled off as soon as it is formed.

The mechanism

All the steps in the mechanism below are shown as one-way reactions because it makes the mechanism look less confusing. The reverse reaction is actually done sufficiently differently that it affects the way the mechanism is written. You will find a link to the hydrolysis of esters further down the page if you are interested.

Step 1

In the first step, the ethanoic acid takes a proton (a hydrogen ion) from the concentrated sulphuric acid. The proton becomes attached to one of the lone pairs on the oxygen which is double-bonded to the carbon. The transfer of the proton to the oxygen gives it a positive charge, but it is actually misleading to draw the structure in this way (although nearly everybody does!). The positive charge is delocalised over the whole of the right-hand end of the ion, with a fair amount of positiveness on the carbon atom. In other words, you can think of an electron pair shifting to give this structure: You could also imagine another electron pair shift producing a third structure: So which of these is the correct structure of the ion formed? None of them! The truth lies somewhere in between all of them. One way of writing the delocalised structure of the ion is like this: The double headed arrows are telling you that each of the individual structures makes a contribution to the real structure of the ion. They don't mean that the bonds are flipping back and forth between one structure and another. The various structures are known as resonance structures or canonical forms. There will be some degree of positive charge on both of the oxygen atoms, and also on the carbon atom.
Each of the bonds between the carbon and the two oxygens will be the same - somewhere between a single bond and a double bond. For the purposes of the rest of this discussion, we are going to use the structure where the positive charge is on the carbon atom.

Step 2

The positive charge on the carbon atom is attacked by one of the lone pairs on the oxygen of the ethanol molecule.

Note: You could work out precisely why that particular oxygen carries the positive charge on the right-hand side. On the other hand, you could realise that there has to be a positive charge somewhere (because you started with one), and that particular oxygen doesn't look right - it has too many bonds. Put the charge on there! That's a quick rough-and-ready reasoning which works every time I use it!

Step 3

What happens next is that a proton (a hydrogen ion) gets transferred from the bottom oxygen atom to one of the others. It gets picked off by one of the other substances in the mixture (for example, by attaching to a lone pair on an unreacted ethanol molecule), and then dumped back onto one of the oxygens more or less at random. The net effect is:

Step 4

Now a molecule of water is lost from the ion. The product ion has been drawn in a shape to reflect the product which we are finally getting quite close to! The structure for the latest ion is just like the one we discussed at length back in step 1. The positive charge is actually delocalised all over that end of the ion, and there will also be contributions from structures where the charge is on either of the oxygens: It is easier to follow what is happening if we keep going with the structure with the charge on the carbon.

Step 5

The hydrogen is removed from the oxygen by reaction with the hydrogen sulphate ion which was formed way back in the first step. And there we are! The ester has been formed, and the sulphuric acid catalyst has been regenerated.
NUCLEOPHILIC ADDITION / ELIMINATION IN THE REACTION BETWEEN ACYL CHLORIDES AND AMMONIA

The facts and a simple, uncluttered mechanism for the nucleophilic addition / elimination reaction between acyl chlorides (acid chlorides) and ammonia. If you want the mechanism explained to you in detail, there is a link at the bottom of the page. Ethanoyl chloride is taken as a typical acyl chloride. Any other acyl chloride will behave in the same way. Simply replace the CH3 group in what follows by anything else you want.

The reaction between ethanoyl chloride and ammonia

The facts

Ethanoyl chloride reacts violently with a cold concentrated solution of ammonia. A white solid product is formed which is a mixture of ethanamide (an amide) and ammonium chloride. Notice that, unlike the reactions between ethanoyl chloride and water or ethanol, hydrogen chloride isn't produced - at least, not in any quantity. Any hydrogen chloride formed would immediately react with excess ammonia to give ammonium chloride.

The mechanism

The first stage (the addition stage of the reaction) involves a nucleophilic attack on the fairly positive carbon atom by the lone pair on the nitrogen atom in the ammonia. The second stage (the elimination stage) happens in two steps. In the first, the carbon-oxygen double bond reforms and a chloride ion is pushed off. That is followed by removal of a hydrogen ion from the nitrogen. This might happen in one of two ways: It might be removed by a chloride ion, producing HCl (which would immediately react with excess ammonia to give ammonium chloride as above) ...
or it might be removed directly by an ammonia molecule. The ammonium ion, together with the chloride ion already there, makes up the ammonium chloride formed in the reaction.

THE ELIMINATION REACTIONS PRODUCING ALKENES FROM SIMPLE HALOGENOALKANES

ELIMINATION FROM UNSYMMETRIC HALOGENOALKANES

The elimination from unsymmetric halogenoalkanes such as 2-bromobutane. 2-bromobutane is an unsymmetric halogenoalkane in the sense that it has a CH3 group one side of the C-Br bond and a CH2CH3 group the other. You have to be careful with compounds like this because of the possibility of more than one elimination product depending on where the hydrogen is removed from. The basic facts and mechanisms for these reactions are exactly the same as with simple halogenoalkanes like 2-bromopropane. This page only deals with the extra problems created by the possibility of more than one elimination product.

Background to the mechanism

You will remember that elimination happens when a hydroxide ion (from, for example, sodium hydroxide) acts as a base and removes a hydrogen as a hydrogen ion from the halogenoalkane. For example, in the simple case of elimination from 2-bromopropane: The hydroxide ion removes a hydrogen from one of the carbon atoms next door to the carbon-bromine bond, and the various electron shifts then lead to the formation of the alkene - in this case, propene. With an unsymmetric halogenoalkane like 2-bromobutane, there are several hydrogens which might possibly get removed. You need to think about each of these possibilities.

Where does the hydrogen get removed from?

The hydrogen has to be removed from a carbon atom adjacent to the carbon-bromine bond. If an OH- ion hit one of the hydrogens on the right-hand CH3 group in the 2-bromobutane (as we've drawn it), there's nowhere for the reaction to go. To make room for the electron pair to form a double bond between the carbons, you would have to expel a hydrogen from the CH2 group as a hydride ion, H-.
That is energetically much too difficult, and so this reaction doesn't happen. That still leaves the possibility of removing a hydrogen either from the left-hand CH3 or from the CH2 group. If it was removed from the CH3 group, the product is but-1-ene, CH2=CHCH2CH3. If it was removed from the CH2 group, this time the product is but-2-ene, CH3CH=CHCH3. In fact the situation is even more complicated than it looks, because but-2-ene exhibits geometric isomerism. You get a mixture of two isomers formed - cis-but-2-ene and trans-but-2-ene. Which isomer gets formed is just a matter of chance.

The elimination reaction involving 2-bromopropane and hydroxide ions

The facts

2-bromopropane is heated under reflux with a concentrated solution of sodium or potassium hydroxide in ethanol. Heating under reflux involves heating with a condenser placed vertically in the flask to avoid loss of volatile liquids. Propene is formed and, because this is a gas, it passes through the condenser and can be collected. Everything else present (including anything formed in the alternative substitution reaction) will be trapped in the flask.

The mechanism

In elimination reactions, the hydroxide ion acts as a base - removing a hydrogen as a hydrogen ion from the carbon atom next door to the one holding the bromine. The resulting re-arrangement of the electrons expels the bromine as a bromide ion and produces propene.

Geometric isomerism

Isomerism is where you can draw more than one arrangement of the atoms for a given molecular formula. Geometric isomerism is a special case of this involving molecules which have restricted rotation around one of the bonds - in this case, a carbon-carbon double bond. The C=C bond could only rotate if enough energy is put in to break the pi bond. Effectively, except at high temperatures, the C=C bond is "locked". In the case of but-2-ene, the two CH3 groups will either both be locked on one side of the C=C (to give the cis isomer), or on opposite sides (to give the trans one).

Beware! It is easy to miss geometric isomers in an exam.
Always draw alkenes with the correct 120° bond angles around the C=C bond, as shown in the diagrams for the cis and trans isomers above. If you take a short cut and write but-2-ene as CH3CH=CHCH3, you will almost certainly miss the fact that cis and trans forms are possible. This is a rich source of questions in an exam, and you could easily throw away marks if you miss these possibilities.

Nucleophilic Aliphatic Substitution Reactions: An Overview

### Introduction

In our discussion of acid-base chemistry we alluded to the similarities between the reactions of NaOH with HCl on the one hand and CH3Cl on the other. Scheme 1 ("One and the Same") compares the two. The reaction of HCl with NaOH is a specific example of a more general type of reaction known as a nucleophilic substitution. The reaction of CH3Cl with NaOH exemplifies a type of nucleophilic substitution called nucleophilic aliphatic substitution, where the word aliphatic indicates that the substitution occurs at an sp3 hybridized carbon atom. The study of nucleophilic aliphatic substitution reactions has provided chemists with detailed insights into the nature of chemical reactivity. Scheme 2 ("Nucleophilic Aliphatic Substitution") presents a generalized description of nucleophilic aliphatic substitution reactions. Before we begin our discussion of the details of Scheme 2, let's clarify some terms.

### Definitions

• nucleophile: A nucleophile is a Lewis base, i.e. an electron pair donor. Any reagent that contains an atom with at least one lone pair of electrons is a potential nucleophile. Common examples include halide ions such as I- and Br-, hydroxide ion, OH-, water, H2O, and ammonia, NH3. Note that X may represent a single atom or a polyatomic group. Note, too, that the central atom of a nucleophile may have a formal charge of 0 or -1.

• electrophile: An electrophile is a Lewis acid, i.e. an electron pair acceptor.
Any reagent that contains an atom which has a formal or a partial positive charge is a potential electrophile.

• leaving group A leaving group is any atom or polyatomic group that is replaced by a nucleophile during a nucleophilic aliphatic substitution reaction. Leaving groups are generally conjugate bases of strong acids. Common examples include halide ions such as -:I and -:Br, water, H2O:, and carboxylate ions such as trifluoroacetate, CF3CO2:-. The central atom of the leaving group is always an electronegative atom, most commonly halogen or oxygen.

• substituent A substituent is any atom or polyatomic group attached to the electrophilic center of the substrate.

There are two features of Scheme 2 that merit further comment. First, notice that the reaction is depicted as an equilibrium process. In most nucleophilic aliphatic substitution reactions, the value of the equilibrium constant is very large, i.e. the reaction is essentially irreversible. The rules that we developed for assessing the equilibrium constant of any acid-base reaction will serve as a useful guide in estimating the equilibrium constant for nucleophilic aliphatic substitution reactions as well. Second, since the reaction is an equilibrium process, there is no fundamental difference between a nucleophile and a leaving group. It all depends on your perspective; the reagent that acts as a nucleophile in the forward direction assumes the role of a leaving group in the reverse direction.

Having seen the features of nucleophilic aliphatic substitution reactions in broad outline, we will now examine the process in more detail. Specifically, we will look at the effect of each of the following parameters on the rates of nucleophilic aliphatic substitution reactions:

• the substituent(s) connected to the substrate
• the leaving group
• the nucleophile
• the solvent

### Introduction

Organometallic reagents are compounds which contain carbon-metal bonds.
For the purposes of the discussion that follows, the only compounds we will consider will be ones where M = Li or Mg. When M = Li, the organometallic reagent is called an organolithium reagent. When M = Mg, it is called a Grignard reagent. Historically, Grignard reagents were developed before organolithium reagents. In recent years, however, organolithium reagents have taken over the key role that Grignard reagents played as the most versatile source of nucleophilic carbon.

The nucleophilic character of organometallic reagents stems from the fact that the C-M bond is polarized in such a way that the carbon atom is negative while the metal atom is positive. As the picture above indicates, the carbon-metal bond has "ionic character". In fact, it is useful to think about Grignard reagents and organolithium reagents as sources of negatively charged carbon atoms, i.e. carbanions. Since carbon is not a very electronegative element, it is very reactive when it bears a negative or partial negative charge.

### Organometallic Reagents as Bases

In order to appreciate just how reactive carbanions are, consider the series of anions and their conjugate acids shown in Figure 1.

Figure 1 A Comparison of Acid/Base Strengths (The pK values are approximate.)

Ignoring for the moment the different ways in which chemists write methane and ammonia on the one hand and water and hydrogen fluoride on the other (weird, huh?), recall that pKa values are a measure of the acidity of a compound. Furthermore, the pKa scale is logarithmic. Thus, hydrogen fluoride is 10¹³ times more acidic than water. Similarly, water is 10²² times more acidic than ammonia. Knowing the relative acidities of the compounds in Figure 1 means you also know their relative basicities: the amide anion is 10²² times more basic than hydroxide ion, which is 10¹³ times more basic than fluoride ion.
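The arithmetic behind these comparisons can be sketched in a few lines of Python. This is only an illustration: the pKa values are the approximate ones quoted above, and the function name is ours, not the text's.

```python
# Approximate pKa values quoted in Figure 1 of the text.
pKa = {"HF": 3, "H2O": 16, "NH3": 38, "CH4": 50}

def times_more_acidic(acid_a, acid_b):
    """Ratio of acid strengths: Ka(a)/Ka(b) = 10**(pKa(b) - pKa(a))."""
    return 10 ** (pKa[acid_b] - pKa[acid_a])

# Hydrogen fluoride vs water: 10**13
print(times_more_acidic("HF", "H2O"))
# Water vs ammonia: 10**22
print(times_more_acidic("H2O", "NH3"))
```

Because pKa is a logarithmic scale, the ratio of base strengths follows from the same subtraction, just read in the opposite direction.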
Clearly then, methane is the weakest acid of the four conjugate acids shown in Figure 1, while the methide anion is the strongest base, and, by extension, the best nucleophile. The trend in base strength exhibited by the four anions in Figure 1 is attributed to the difference in nuclear charge of the central atom in each ion: Carbon has 6 protons attracting the lone pair of electrons, nitrogen has 7, oxygen 8, and fluorine 9.

Reagents such as phenylmagnesium bromide and methyl lithium are among the strongest bases there are. Consequently they will deprotonate compounds such as amines, alcohols, and carboxylic acids. Figure 2 presents one reaction that is representative of each of these situations.

Figure 2 Acid-Base Reactions of Organometallic Reagents

The equilibrium constant for each of these reactions is very large. These reactions all occur extremely rapidly, sometimes explosively! The extreme reactivity of organometallic reagents towards O-H and N-H groups makes these groups incompatible with such strong bases.

#### Organometallic Reagents as Nucleophiles

Whether an organometallic reagent is classified as a base or a nucleophile depends on whether it forms a bond with a hydrogen atom or a carbon atom. If a reactant contains an electrophilic carbon and does not contain O-H or N-H groups, then an organometallic reagent will act as a nucleophile towards that electrophilic carbon atom. The most common source of electrophilic carbon is the carbonyl group, especially the carbonyl group of aldehydes and ketones. Equations 3 and 4 illustrate the nucleophilic reactivity of phenylmagnesium bromide and methyl lithium towards a simple aldehyde and a simple ketone, respectively. As these equations emphasize, each of these reactions leads to the formation of a C-C bond. This is the basis of synthetic organic chemistry!
A fundamental principle that guides the development of logical approaches to the preparation of new molecules is that carbon-carbon bond formation requires the interaction of molecules which contain carbon atoms of opposite polarity. We have already seen that the C-O bond in a carbonyl group has a bond dipole with the carbon atom being electron deficient. In Equation 4 the carbonyl carbon of propiophenone is highlighted in blue to emphasize its electrophilic character, while the nucleophilic nature of the carbon in methyl lithium is stressed by colouring it red.

Reaction Mechanism

The reaction of an organometallic reagent with an aldehyde or ketone embodies the most fundamental reaction of the carbonyl group: nucleophilic addition. The mechanism of the reaction involves two steps:

1. addition of the organometallic reagent to the carbonyl carbon to form a tetrahedral intermediate
2. protonation of the resulting alkoxide ion.

The second step occurs during the work-up of the reaction. Figure 3 summarizes these two steps.

Figure 3 The Mechanism of Nucleophilic Addition to a Carbonyl Group

### Examples

In the following reactions, the carbon-carbon bond that is formed is indicated in red. Equation 5 describes a Grignard reaction that was used in the first total synthesis of the hormone progesterone. The carbon atoms in the product of reaction 5 are numbered to match their positions in progesterone. An aromatic Grignard reagent played a key role in the synthesis of monensin, a polyether antibiotic, as shown in Equation 6. Finally, Equation 7 illustrates three of the final steps in the synthesis of a terpene called Δ2-8-epicedrene. The middle step involves the nucleophilic addition of methyl lithium to the carbonyl carbon atom of a ketone. The final step in the synthesis entails dehydration of the 3° alcohol formed in the second step. How would you accomplish this?
## NUCLEOPHILIC ADDITION / ELIMINATION IN THE REACTION BETWEEN ACYL CHLORIDES AND AMMONIA

### Introduction and Review

As part of our discussion of organometallic reagents we compared the base strengths of methanide ion, amide ion, hydroxide ion, and fluoride ion. Figure 1 reiterates that comparison.

Figure 1 A Comparison of Acid/Base Strengths (The pK values are approximate.)

We also considered how it was possible to estimate the equilibrium constant for an acid-base reaction by comparing the pKa values of the two acids involved in the reaction. Equation 1 presents another example. What is the approximate equilibrium constant for this reaction?

Within limits, it is possible to extend the use of pKa values to other reactions. Consider the addition of methyl lithium to formaldehyde as shown in Equation 2. In this reaction, the negative charge starts out on a carbon atom and ends up on an oxygen. In terms of electronegativity, this is a favorable change. The alkoxide ion is a more stable species than the methanide ion. In other words, the alkoxide ion is a weaker base than the methanide ion. This means the equilibrium constant for reaction 2 will be greater than 1, i.e. at equilibrium there will be more product than reactant. How much more? To make a numerical estimate, compare the pKa values of the conjugate acids of the methanide and ethoxide ions in reaction 2; they are 50 and 16, respectively. Since methane is 10³⁴ times less acidic than ethanol, methanide ion is 10³⁴ times more basic than ethoxide ion. Hence the equilibrium constant for equation 2 is approximately 10³⁴.

Now consider the reactions shown in Equations 3, 4, and 5. Estimate the equilibrium constant for each of these reactions. Is the addition of amide ion to a carbonyl group favorable? What about the addition of hydroxide ion and fluoride (and by extension, any halide ion)?
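The estimate above generalises: for a reaction that converts one base into another, Keq is roughly 10 raised to the difference of the two conjugate-acid pKa values. A minimal sketch of that arithmetic, with a function name of our own choosing:

```python
def estimate_keq(pka_reactant_side, pka_product_side):
    """Rough equilibrium constant from conjugate-acid pKa values:
    Keq ≈ 10**(pKa of reactant-side conjugate acid
               - pKa of product-side conjugate acid)."""
    return 10 ** (pka_reactant_side - pka_product_side)

# Equation 2: methanide ion (conjugate acid methane, pKa ~50) is converted
# into an alkoxide ion (conjugate acid ethanol, pKa ~16), so Keq ≈ 10**34.
print(estimate_keq(50, 16))
```

Running the same estimate for amide, hydroxide, and fluoride nucleophiles (pKa values 38, 16, and 3 from Figure 1) shows why the addition becomes progressively less favorable as the nucleophilic atom changes from C to N to O to F.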
As this discussion indicates, the addition of anionic nucleophiles to the carbonyl group of aldehydes and ketones becomes less favorable as the nucleophilic atom changes from C to N to O to F. In fact, when the nucleophilic atom is F, the equilibrium constant is so small that it is safe to say that fluoride ion, and by extension, any halide ion, does not add to the carbonyl group of aldehydes or ketones.

Up to this point we have considered the relative nucleophilicities of anionic nucleophiles. The emphasis on the word anionic is necessitated by the fact that, unlike ammonia, water, and hydrogen fluoride, methane does not have a lone pair of electrons on the central atom. In other words, methane is not nucleophilic. The methanide ion is. But ammonia and water, and by extension, amines and alcohols, can act as neutral nucleophiles. And under the right conditions they do.

### Water and Alcohols as Nucleophiles

When formaldehyde is added to water, the equilibrium shown in Equation 6 is established rapidly. An aqueous solution of formaldehyde is called formalin. It contains virtually no free formaldehyde, i.e. Keq >>> 1. This reaction is analogous to the hydration of an alkene. If you recall, the mechanism of the acid catalysed hydration of an alkene involves protonation of the pi bond to produce an intermediate carbocation. Acid is required because the C-C bond is not polar. There is no C-C bond dipole to exert a Coulombic attraction toward a water molecule. Protonation produces a positive charge on one of the carbon atoms, which then attracts a lone pair of electrons on the oxygen atom of a water molecule.

Since the pi bond in formaldehyde is polarized, water will be attracted to the positive end of the C-O bond dipole without the need for an acid. The same is true of other aldehydes and ketones. But, as the bulk of the substituents attached to the carbonyl carbon increases, access to that carbon becomes more difficult and the rate of addition becomes slower.
More importantly, the value of Keq becomes smaller. In the same way that acid catalyses the addition of water to an alkene, it will also increase the rate of addition of water (and alcohols) to an aldehyde or ketone. Figure 2 compares the two processes.

Figure 2 Addition of Water to C=O and C=C Bonds

The reactions of aldehydes and ketones with alcohols parallel their reactions with water. Figure 3 illustrates both processes.

Figure 3

### Introduction

We have seen that alcohols undergo acid catalysed nucleophilic addition to aldehydes and ketones to produce hemiacetals, acetals, hemiketals, and ketals as summarized in Figure 1.

Figure 1 Nucleophilic Addition of Alcohols to Aldehydes and Ketones

Amines undergo similar acid catalysed nucleophilic addition reactions. The following discussion is limited to the reactions of primary amines with aldehydes and ketones. Primary amines are amines in which the nitrogen atom is bonded to 1 carbon atom. Figure 2 presents several examples of primary amines.

Figure 2 Three Primary Amines

More important than the number of carbons attached to the nitrogen is the number of hydrogens. The reason for this becomes apparent when we consider, step-by-step, what happens upon addition of a primary amine to an aldehyde or ketone. As a simple example, consider the reaction of methylamine with acetaldehyde. This reaction could be performed by dissolving acetaldehyde and methylamine in aqueous acid. Under those reaction conditions, the equilibria shown in Equations 1 and 2 would be established. The trick here is to adjust the pH of the solution so that some of the aldehyde will be protonated while some of the amine is unprotonated. Protonating the aldehyde makes the carbonyl carbon more electrophilic, thus increasing its reactivity toward the nucleophilic nitrogen of an unprotonated methylamine. Figure 3 outlines the complete reaction.
Figure 3 Nucleophilic Addition of Methylamine to Acetaldehyde

The ammonium ion 1 enclosed in the box in Step 2 of the process is analogous to the oxonium ion produced in the reaction of acetaldehyde with methanol. The resonance contributor 2 highlighted in Step 4 parallels that generated during the formation of acetaldehyde dimethylacetal. While these two structures are very similar, the products they yield are very different, as Figure 4 indicates.

Figure 4 Alternative Fates for Similar Structures

The difference stems from the fact that in intermediate 2 the nitrogen atom has an exchangeable H attached to it while the oxygen atom in intermediate 3 does not. Formation of the carbon-nitrogen double bond by deprotonation of the nitrogen atom is simply the most likely fate of intermediate 2. Since intermediate 3 does not have a comparable pathway available to it, an alternative reaction occurs: the electron deficient carbon gains a pair of electrons by forming a bond with another methanol molecule.

### Examples

Equation 1 outlines the reaction of a cyclic ketone with a type of amine called a hydrazine. (In this equation TBS represents an OH protecting group while Ar stands for an aromatic ring.) Although hydrazines are technically not primary amines, they possess the more essential feature required for the formation of imines: two hydrogen atoms attached to the terminal nitrogen atom. (What happens to those two hydrogens and the oxygen atom of the carbonyl group?) Reaction 1 constituted an early step in the first synthesis of taxol. The product of reaction 1 is a special type of imine called a hydrazone. Before the development of modern spectroscopic techniques, hydrazones and related compounds such as oximes and semicarbazones played important roles in the characterization of the structures of aldehydes and ketones. Equations 2-4 illustrate the formation of one compound of each type.

Imines are formed as intermediates in the Strecker synthesis of amino acids.
This reaction sequence begins with the reaction of an aldehyde with ammonia to produce an imine. The imine then reacts with cyanide ion to form an α-aminonitrile. Hydrolysis of the nitrile group yields the amino acid. The overall sequence is outlined in Figure 5 for the amino acid phenylalanine.

Figure 5 The Strecker Synthesis of Phenylalanine

As a final example, Figure 6 depicts the formation of an imine from the reaction of pyridoxal-5'-phosphate with the amino group of aspartic acid.

Figure 6 An Imine Intermediate in a Biochemical Decarboxylation

Pyridoxal-5'-phosphate is the coenzyme form of vitamin B6. It is involved in a variety of important biochemical transformations. In the present case, the imine intermediate undergoes loss of carbon dioxide, followed by a series of proton transfers, to produce another amino acid, alanine. Note that the reaction sequence is catalytic in that pyridoxal-5'-phosphate is regenerated.

#### Introduction

In most stable organic compounds bonds to carbon are either non-polar (C-C) or they have a bond dipole in which the carbon atom is electron deficient (C-X). Figure 1 summarizes the normal state of affairs.

Figure 1 Bond Polarity in C-C and C-X Bonds

The basic idea behind synthetic organic chemistry is simple: mix a compound which contains an electron rich carbon with one that contains an electron deficient carbon and Coulomb's Law will do the rest. The problem lies in making compounds that contain an electron rich carbon atom, i.e. a nucleophilic carbon. One of the most common ways to do so is to convert an alkyl halide into an organometallic compound. The Wittig reaction offers another approach.

#### The Wittig Reaction

The Wittig reaction is a one-flask, 3-step sequence that converts aldehydes and ketones into alkenes. The three steps are:

1. Reaction of an alkyl halide with a tertiary phosphine to produce a phosphonium salt.
2. Deprotonation of the phosphonium salt to produce an ylide.
3.
Nucleophilic addition of the ylide to an aldehyde or ketone.

Figure 2 illustrates each of these steps.

Figure 2 The Three Steps of the Wittig Reaction

The first step of the sequence involves an SN2 reaction in which the phosphorus displaces the bromine from the methyl bromide. (As such, it is subject to all the usual limitations of the SN2 mechanism.) The resulting phosphonium salt generally precipitates from the reaction mixture as a white solid. The positive charge on the phosphorus atom of this salt pulls electron density away from the C-H bonds of the methyl group, making those hydrogens more acidic. The pKa of the methyl protons in methyl triphenylphosphonium bromide is approximately 15. Addition of a strong base, in this case n-butyl lithium, deprotonates the methyl group. The carbanion that is produced is called an ylide. The negatively charged carbon gains stabilization by donating electron density into a vacant d orbital on the phosphorus atom. Here's a photo of a simple ylide:

As the phosphonium salt reacts with the n-butyl lithium, it disappears and an orange solution is formed. Addition of the ketone to this solution, followed by a brief period of reflux, produces the alkene along with a white precipitate of triphenylphosphine oxide.

While the resonance structure on the right above suggests that the carbon atom is neutral, the structure on the left indicates that the electron density on the carbon is high, i.e. the carbon is nucleophilic. The ability of the phosphorus to accommodate the negative charge that develops when the carbon is deprotonated is what makes phosphonium salts suitable precursors of nucleophilic carbon. Ammonium salts cannot afford comparable resonance stabilization, and hence are not viable sources of nucleophilic carbon atoms. This difference is discussed in more detail in Phosphines vs. Amines.

The value of the Wittig reaction lies in its generality. It works well with aliphatic and aromatic aldehydes and ketones.
Furthermore, these compounds may contain other functional groups such as alcohols and esters which are not compatible with Grignard reagents. Is it possible to accomplish the synthesis outlined in Figure 2 using Grignard chemistry? Reaction of cyclohexanone with methyl magnesium bromide would produce 1-methylcyclohexanol. But dehydration of this alcohol using concentrated sulfuric or phosphoric acid may produce 1-methylcyclohexene and/or methylenecyclohexane. Being a trisubstituted alkene, 1-methylcyclohexene is more stable than methylenecyclohexane, and it is the preferred product. Figure 3 summarizes this alternative.

Figure 3 An Alternative to the Wittig Reaction

### Summary

The Wittig reaction converts aldehydes and ketones into alkenes.

### Examples

Scheme 1 describes two steps in the total synthesis of monensin.

Scheme 1

The desired cis alkene was formed in approximately 70% yield along with about 20% of the undesired trans isomer. Note the use of the cyclic ketal as a protecting group during the last step of the sequence. A similar strategy was utilized during the total synthesis of racemic progesterone as outlined in Scheme 2.

Scheme 2

The Wittig reaction generally gives a mixture of cis and trans isomers. In this instance, the cis/trans mixture was treated with additional phenyl lithium to isomerize the cis alkene to the more stable trans configuration. (The methanol is merely a source of protons.) An interesting example of the Wittig reaction comes from a synthesis of leukotriene A4 as shown in Equation 1. The starting material in this reaction is a cyclic hemiacetal. Under the reaction conditions it exists in equilibrium with a small amount of the corresponding hydroxy aldehyde. The squiggly line between the carboethoxy group and the double bond indicates that the product is a mixture of the cis and trans isomers.
NUCLEOPHILIC ADDITION / ELIMINATION IN THE REACTION BETWEEN ACYL CHLORIDES AND AMINES

This page gives you the facts and a simple, uncluttered mechanism for the nucleophilic addition / elimination reaction between acyl chlorides (acid chlorides) and amines. If you want the mechanism explained to you in detail, there is a link at the bottom of the page.

Ethanoyl chloride is taken as a typical acyl chloride. Any other acyl chloride will behave in the same way. Simply replace the CH3 group in what follows by anything else you want. Similarly, ethylamine is taken as a typical amine. Any other amine will behave in the same way. Replacing the CH3CH2 group by any other hydrocarbon group won't affect the mechanism in any way.

The reaction between ethanoyl chloride and ethylamine

The facts

Ethanoyl chloride reacts violently with a cold concentrated solution of ethylamine. A white solid product is formed which is a mixture of N-ethylethanamide (an N-substituted amide) and ethylammonium chloride. Notice that, unlike the reactions between ethanoyl chloride and water or ethanol, hydrogen chloride isn't produced - at least, not in any quantity. Any hydrogen chloride formed would immediately react with excess ethylamine to give ethylammonium chloride.

The mechanism

The first stage (the addition stage of the reaction) involves a nucleophilic attack on the fairly positive carbon atom by the lone pair on the nitrogen atom in the ethylamine. The second stage (the elimination stage) happens in two steps. In the first, the carbon-oxygen double bond reforms and a chloride ion is pushed off. That is followed by removal of a hydrogen ion from the nitrogen. This might happen in one of two ways: it might be removed by a chloride ion, producing HCl (which would immediately react with excess ethylamine to give ethylammonium chloride as above), or it might be removed directly by an ethylamine molecule.
The ethylammonium ion, together with the chloride ion already there, makes up the ethylammonium chloride formed in the reaction.

THE NUCLEOPHILIC SUBSTITUTION REACTIONS BETWEEN HALOGENOALKANES AND AMMONIA

This page gives you the facts and simple, uncluttered mechanisms for the nucleophilic substitution reactions between halogenoalkanes and ammonia to produce primary amines. If you want the mechanisms explained to you in detail, there is a link at the bottom of the page. If you are interested in further substitution reactions, you will also find a link to a separate page dealing with these.

The reaction of primary halogenoalkanes with ammonia

Important!  If you aren't sure about the difference between primary, secondary and tertiary halogenoalkanes, it is essential that you follow this link before you go on.

The facts

The halogenoalkane is heated with a concentrated solution of ammonia in ethanol. The reaction is carried out in a sealed tube. You couldn't heat this mixture under reflux, because the ammonia would simply escape up the condenser as a gas. We'll talk about the reaction using 1-bromoethane as a typical primary halogenoalkane. The reaction happens in two stages. In the first stage, a salt is formed - in this case, ethylammonium bromide. This is just like ammonium bromide, except that one of the hydrogens in the ammonium ion is replaced by an ethyl group. There is then the possibility of a reversible reaction between this salt and excess ammonia in the mixture. The ammonia removes a hydrogen ion from the ethylammonium ion to leave a primary amine - ethylamine. The more ammonia there is in the mixture, the more the forward reaction is favoured.

Note:  You will find considerable disagreement in textbooks and other sources about the exact nature of the products in this reaction. Some of the information you'll come across is simply wrong!

The mechanism

The mechanism involves two steps.
The first is a simple nucleophilic substitution reaction:

Because the mechanism involves collision between two species in this slow step of the reaction, it is known as an SN2 reaction.

Note:  Unless your syllabus specifically mentions SN2 by name, you can just call it nucleophilic substitution.

In the second step of the reaction an ammonia molecule may remove one of the hydrogens on the -NH3+. An ammonium ion is formed, together with a primary amine - in this case, ethylamine. This reaction is, however, reversible. Your product will therefore contain a mixture of ethylammonium ions, ammonia, ethylamine and ammonium ions. Your major product will only be ethylamine if the ammonia is present in very large excess.

Unfortunately the reaction doesn't stop here. Ethylamine is a good nucleophile, and goes on to attack unused bromoethane. This gets so complicated that it is dealt with on a separate page. You will find a link at the bottom of this page.

The reaction of tertiary halogenoalkanes with ammonia

The facts

The facts of the reactions are exactly the same as with primary halogenoalkanes. The halogenoalkane is heated in a sealed tube with a solution of ammonia in ethanol. For example:

Followed by:

The mechanism

This mechanism involves an initial ionisation of the halogenoalkane, followed by a very rapid attack by the ammonia on the carbocation (carbonium ion) formed. This is again an example of nucleophilic substitution. This time the slow step of the reaction only involves one species - the halogenoalkane. It is known as an SN1 reaction. There is a second stage exactly as with primary halogenoalkanes. An ammonia molecule removes a hydrogen ion from the -NH3+ group in a reversible reaction. An ammonium ion is formed, together with an amine. It is very unlikely that any of the current UK-based syllabuses for 16 - 18 year olds will ask you about this.
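The kinetic difference between the two mechanisms can be sketched as rate laws: the SN2 slow step involves both species, the SN1 slow step only the halogenoalkane. This is a hypothetical illustration (the rate constants and concentrations are arbitrary numbers, not measured data):

```python
def rate_sn2(k, conc_halogenoalkane, conc_nucleophile):
    # The slow step is a collision between two species, so the rate
    # depends on the concentrations of both.
    return k * conc_halogenoalkane * conc_nucleophile

def rate_sn1(k, conc_halogenoalkane):
    # The slow step is ionisation of the halogenoalkane alone, so the
    # nucleophile concentration does not appear at all.
    return k * conc_halogenoalkane

# Doubling the nucleophile concentration doubles an SN2 rate...
print(rate_sn2(2, 1, 2), rate_sn2(2, 1, 4))
# ...but has no effect on an SN1 rate.
print(rate_sn1(2, 1), rate_sn1(2, 1))
```

This is why rate measurements can distinguish the two pathways experimentally.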
In the extremely unlikely event that you will ever need it, secondary halogenoalkanes use both an SN2 mechanism and an SN1. Make sure you understand what happens with primary and tertiary halogenoalkanes, and then adapt it for secondary ones should you ever need to.

THE REACTION BETWEEN METHANE AND BROMINE

This page gives you the facts and a simple, uncluttered mechanism for the free radical substitution reaction between methane and bromine. If you want the mechanism explained to you in detail, there is a link at the bottom of the page.

The facts

This reaction between methane and bromine happens in the presence of ultraviolet light - typically sunlight. This is a good example of a photochemical reaction - a reaction brought about by light.

Note:  These reactions are sometimes described as examples of photocatalysis - reactions catalysed by light. It is better to use the term "photochemical" and keep the word "catalysis" for reactions speeded up by actual substances rather than light.

CH4  +  Br2  →  CH3Br  +  HBr

The organic product is bromomethane. One of the hydrogen atoms in the methane has been replaced by a bromine atom, so this is a substitution reaction. However, the reaction doesn't stop there, and all the hydrogens in the methane can in turn be replaced by bromine atoms. Multiple substitution is dealt with on a separate page, and you will find a link to that at the bottom of this page.

Warning!  Check yourself at this point. If you want to know about the free radical substitution reaction between methane and chlorine as well as this one, don't waste time trying to learn both mechanisms. The two mechanisms are identical. You just need to learn one of them. If you are asked for the other one, all you need to do is to write bromine, say, instead of chlorine. In writing the bromine mechanisms on these pages, that's exactly what I've done! If you read both chlorine and bromine versions, you'll find them boringly repetitive!
The mechanism

The mechanism involves a chain reaction. During a chain reaction, for every reactive species you start off with, a new one is generated at the end - and this keeps the process going.

Species:  a useful word which is used in chemistry to mean any sort of particle you want it to mean. It covers molecules, ions, atoms, or (in this case) free radicals.

The overall process is known as free radical substitution, or as a free radical chain reaction.

Chain initiation

The chain is initiated (started) by UV light breaking a bromine molecule into free radicals.

Br2  →  2Br•

Chain propagation reactions

These are the reactions which keep the chain going.

CH4  +  Br•  →  CH3•  +  HBr

CH3•  +  Br2  →  CH3Br  +  Br•

Chain termination reactions

These are reactions which remove free radicals from the system without replacing them by new ones.

2Br•  →  Br2

CH3•  +  Br•  →  CH3Br

CH3•  +  CH3•  →  CH3CH3

THE NUCLEOPHILIC SUBSTITUTION REACTIONS BETWEEN HALOGENOALKANES AND CYANIDE IONS

This page gives you the facts and simple, uncluttered mechanisms for the nucleophilic substitution reactions between halogenoalkanes and cyanide ions (from, for example, potassium cyanide). If you want the mechanisms explained to you in detail, there is a link at the bottom of the page.

Important!  If you aren't sure about the difference between primary, secondary and tertiary halogenoalkanes, it is essential that you follow this link before you go on.

The facts

If a halogenoalkane is heated under reflux with a solution of sodium or potassium cyanide in ethanol, the halogen is replaced by a -CN group and a nitrile is produced. Heating under reflux means heating with a condenser placed vertically in the flask to prevent loss of volatile substances from the mixture. The solvent is important. If water is present you tend to get substitution by -OH instead of -CN.

Note:  A solution of potassium cyanide in water is quite alkaline, and contains significant amounts of hydroxide ions. These react with the halogenoalkane.
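The defining feature of each stage of the chain - initiation creates radicals, propagation conserves their number, termination destroys them - can be captured in a toy bookkeeping sketch. This is purely our own illustration, not standard chemical notation:

```python
# Each step is recorded as (radicals consumed, radicals produced).
initiation  = (0, 2)  # Br2 -> 2Br•
propagation = (1, 1)  # e.g. CH4 + Br• -> CH3• + HBr
termination = (2, 0)  # e.g. CH3• + Br• -> CH3Br

def net_radical_change(step):
    """Net change in the number of free radicals for one step."""
    consumed, produced = step
    return produced - consumed

print(net_radical_change(initiation))   # prints 2: starts the chain
print(net_radical_change(propagation))  # prints 0: keeps the chain alive
print(net_radical_change(termination))  # prints -2: ends the chain
```

Because propagation steps leave the radical count unchanged, a single initiation event can drive many substitution cycles before a termination collision stops the chain.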
For example, using 1-bromopropane as a typical primary halogenoalkane:

You could write the full equation rather than the ionic one, but it slightly obscures what's going on:

The bromine (or other halogen) in the halogenoalkane is simply replaced by a -CN group - hence a substitution reaction. In this example, butanenitrile is formed.

Note:  When you are naming nitriles, you have to remember to include the carbon in the -CN group when you count the longest chain. In this example, there are 4 carbons in the longest chain - hence butanenitrile.

The mechanism

Here is the mechanism for the reaction involving bromoethane:

This is an example of nucleophilic substitution. Because the mechanism involves collision between two species in the slow step (in this case, the only step) of the reaction, it is known as an SN2 reaction.

Note:  Unless your syllabus specifically mentions SN2 by name, you can just call it nucleophilic substitution.

If your examiners want you to show the transition state, draw the mechanism like this:

The facts

The facts of the reaction are exactly the same as with primary halogenoalkanes. If the halogenoalkane is heated under reflux with a solution of sodium or potassium cyanide in ethanol, the halogen is replaced by -CN, and a nitrile is produced. For example:

Or if you want the full equation rather than the ionic one:

The mechanism

This mechanism involves an initial ionisation of the halogenoalkane, followed by a very rapid attack by the cyanide ion on the carbocation (carbonium ion) formed. This is again an example of nucleophilic substitution. This time the slow step of the reaction only involves one species - the halogenoalkane. It is known as an SN1 reaction.

The facts

The facts of the reaction are exactly the same as with primary or tertiary halogenoalkanes. The halogenoalkane is heated under reflux with a solution of sodium or potassium cyanide in ethanol. For example:

The mechanism

Secondary halogenoalkanes use both SN2 and SN1 mechanisms.
For example, the SN2 mechanism is:

Should you need it, the two stages of the SN1 mechanism are:

Reactions of Arenes. Electrophilic Aromatic Substitution

Substituent Effects

Here is a table that shows the effect substituents on a benzene ring have on both the rate and orientation of electrophilic aromatic substitution reactions.

Study Tip: This is a VERY important table! It is worth knowing... your best understanding will come if you learn HOW it works. Its application goes way beyond electrophilic aromatic substitution reactions. Key concepts to review: resonance and electronegativity.

These effects are a combination of RESONANCE and INDUCTIVE effects. The effects are also important in other reactions and properties (e.g. acidity of the substituted benzoic acids). Here are some general pointers for recognising the substituent effects:

• The H atom is the standard and is regarded as having no effect.
• Activating groups increase the rate; deactivating groups decrease the rate.
• EDG = electron donating group. EDG can be recognised by lone pairs on the atom adjacent to the π system, e.g. -OCH3 (the exceptions are -R, -Ar or -vinyl, which donate via hyperconjugation or π electrons).
• EWG = electron withdrawing group. EWG can be recognised either by the atom adjacent to the π system having several bonds to more electronegative atoms, or by it having a formal +ve or δ+ve charge, e.g. -CO2R, -NO2.
• EDG / activating groups direct ortho / para.
• EWG / deactivating groups direct meta, except the halogens (-X), which are deactivating BUT direct ortho / para.
• EDG add electron density to the π system, making it more nucleophilic; EWG remove electron density from the π system, making it less nucleophilic.

There are two main electronic effects that substituents can exert:

RESONANCE effects are those that occur through the pi system and can be represented by resonance structures. These can be either electron donating (e.g. -OCH3), where pi electrons are pushed toward the arene, or electron withdrawing (e.g.
-C=O), where pi electrons are drawn away from the arene.

In certain cases, molecules can be represented by more than one reasonable Lewis structure that differ only in the location of π electrons. Electrons in σ bonds have a fixed location and so are said to be localised. In contrast, π electrons that can be drawn in different locations are said to be delocalised. Collectively these Lewis diagrams are known as resonance structures, resonance contributors, or canonical forms. The "real" structure has characteristics of each of the contributors, and is often represented as the resonance hybrid (think of a hybrid breed, which is a mixed breed). In a way, the resonance hybrid is a mixture of the contributors. (Note that a resonance hybrid cannot normally be written as an individual Lewis diagram!) You should be able to draw all reasonable resonance structures for a given organic molecule. The best way to "derive" resonance structures is by learning to "push" curly arrows, starting from a reasonable Lewis structure.

INDUCTIVE effects are those that occur through the sigma system due to electronegativity effects. These too can be either electron donating (e.g. -Me), where sigma electrons are pushed toward the arene, or electron withdrawing (e.g. -CF3, -NR3+), where sigma electrons are drawn away from the arene.

• Electronegativity is defined as the ability of an atom to attract electrons towards itself.
• It is one of the most important properties for rationalising and predicting reactivity.
• Some key Pauling electronegativities:

H 2.1
Li 1.0   Be 1.5   B 2.0   C 2.5   N 3.0   O 3.5   F 4.0
Na 0.9   Mg 1.2   Al 1.5   Si 1.8   P 2.1   S 2.5   Cl 3.0
K 0.8    Ca 1.0   Br 2.8   I 2.5

• Electronegativity increases left to right across a row in the periodic table, e.g.
C < N < O < F (as you move left to right, nuclear charge increases, so there is a greater attraction for electrons).
• Electronegativity decreases as you move down a group in the periodic table, e.g. F > Cl > Br > I (each step down a group increases the atomic radius as a "new shell" of electrons is added, and the nuclear charge is further shielded by the core electrons; both factors decrease the attraction for electrons).
• F is the most electronegative element.
• Metals, e.g. Li and Mg, are less electronegative than C (i.e. metals are electropositive compared to C).

A simplified approach to understanding substituent effects is provided here, based on the "isolated molecule approach". The text (as do most others) uses the more rigorous approach of drawing the resonance structures for each of the intermediate carbocations formed by attack at each of the o-, m- and p- positions and looking at how the initial substituent influences the stability of the system.

We are going to break down the types of substituents into various subgroups based on the structural features of the substituent immediately adjacent to the aromatic ring:

• type 1 = substituents with lone pairs (e.g. -OCH3, -NH2) on the atoms adjacent to the pi system
• type 2 = substituents that are C-H systems (i.e. -alkyl, -vinyl or -aryl)
• type 3 = substituents that are C=C systems (i.e. -vinyl or -aryl)
• type 4 = substituents with pi bonds to electronegative atoms (e.g. -C=O, -NO2)
• type 5 = substituents with several bonds to electronegative atoms (e.g. -CF3)
• type 6 = substituents that are halogens (i.e. -F, -Cl, -Br, -I)

Resonance

In chemistry, resonance is a tool used to represent and model certain types of non-classical molecular structures. Resonance is a key component of valence bond theory and arises when no single conventional model, using only an even number of electrons shared exclusively by two atoms, can actually represent the observed molecule.
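The EDG/EWG directing rules summarised in the bullet points earlier in this section can be encoded as a small lookup. The group lists below are just illustrative strings chosen to match the stated rules, including the halogen exception:

```python
# Classification follows the rules stated above: activating groups (EDG)
# direct ortho/para, deactivating groups (EWG) direct meta, and the
# halogens are the exception: deactivating BUT ortho/para directors.
EDG = {"-OCH3", "-NH2", "-OH", "-CH3"}       # lone pairs or alkyl
EWG = {"-NO2", "-CO2R", "-CHO", "-CF3"}      # pi bonds / bonds to electronegative atoms
HALOGENS = {"-F", "-Cl", "-Br", "-I"}

def predict(substituent):
    """Predict (rate effect, directing preference) for a ring substituent."""
    if substituent in HALOGENS:
        return ("deactivating", "ortho/para")   # the exception
    if substituent in EDG:
        return ("activating", "ortho/para")
    if substituent in EWG:
        return ("deactivating", "meta")
    raise ValueError("unknown substituent")

print(predict("-OCH3"))   # ('activating', 'ortho/para')
print(predict("-NO2"))    # ('deactivating', 'meta')
print(predict("-Cl"))     # ('deactivating', 'ortho/para')
```

The value of writing the rules out this way is that the halogen exception has to be checked first, which mirrors how you should apply the rules on paper.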
There are two closely related but useful-to-distinguish meanings given to the term resonance. One has to do with the diagrammatic representation of molecules using Lewis structures, while the other has to do with the mathematical description of a molecule using valence bond theory. In both cases, resonance involves representing or modeling the structure of a molecule as an intermediate, average (a resonance hybrid) between several simpler but individually incorrect structures.

History

The concept of resonance was introduced by Linus Pauling in 1928. The term "resonance" came from the analogy between the quantum mechanical treatment of the H2 molecule and a classical system consisting of two coupled oscillators. In the classical system, the coupling produces two modes, one of which is lower in frequency than either of the uncoupled vibrations; quantum-mechanically, this lower frequency is interpreted as a lower energy. The alternative term mesomerism, popular in German and French publications with the same meaning, was introduced by Christopher Ingold in 1938 but did not catch on in the English literature. The current concept of the mesomeric effect has taken on a related but different meaning. The double-headed arrow was introduced by the German chemist Arndt (also responsible for the Arndt-Eistert synthesis), who preferred the German phrase Zwischenstufe, or intermediate phase.

Because of confusion with the physical meaning of the word resonance (no elements actually appear to be resonating), it has been suggested that the term resonance be abandoned in favor of delocalization.[2] Resonance energy would become delocalization energy, and a resonance structure would become a contributing structure. The double-headed arrows would be replaced by commas.

Examples

Scheme 2 gives examples of resonance: ozone, benzene and the allyl cation. The ozone molecule is represented by two resonance structures in the top of scheme 2.
In reality the two terminal oxygen atoms are equivalent, and the hybrid structure is drawn on the right with a charge of -1/2 on both oxygen atoms and partial double bonds. The concept of benzene as a hybrid of two conventional structures (middle of scheme 2) was a major breakthrough in chemistry made by Kekulé, and the two forms of the ring which together represent the total resonance of the system are called Kekulé structures. In the hybrid structure on the right, the circle replaces three double bonds.

Resonance as a diagrammatic tool

Scheme 1. Resonance structures of benzene

A single Lewis structure often cannot represent the true electronic structure of a molecule. While one can only show an integral number of covalent bonds between two and only two atoms using these diagrams, one often finds that the experimentally deduced or calculated (from quantum mechanics) structure of the molecule does not match any of the possible Lewis structures but rather has properties in some sense intermediate to these. Resonance structures are then employed to approximate the true electronic structure.

Take the example of benzene (shown above, right). In a Lewis diagram, two carbons can be connected by one or two covalent bonds, but in the observed benzene molecule the bond lengths are longer than double bonds yet shorter than single bonds. More importantly, they are all equivalent, a fact no single Lewis structure can explain. Therefore one calls the two Lewis structures canonical, contributing or resonating structures, and the real molecule is considered to be their average, called a resonance hybrid. Resonance structures of the same molecule are connected with a double-headed arrow.

This form of resonance is simply a way of representing the structure graphically. It is only a notation and does not represent a real phenomenon. The individual resonance structures do not exist in reality: the molecule does not inter-convert between them.
Instead, the molecule exists in a single unchanging state, intermediate between the resonance structures and only partially described by any one of them. This sharply distinguishes resonance from tautomerism. When it is said that a molecule is stabilized by resonance, or that amides are less basic because the lone pair on nitrogen is involved in resonance with the carbonyl pi electrons, no phenomenon is implied. What is meant is simply that the molecule behaves differently from what we expect by looking at its Lewis structure, because the structure diagrammed does not represent the actual structure of the molecule. From this viewpoint, the terminology treating resonance as something that 'happens' is perhaps an unfortunate historical burden.

It is also not correct to say that resonance occurs because electrons "flow", "circulate", or change their place within the molecules. Such behavior would produce a magnetic field, an effect that is not observed in reality. However, a phenomenon of this sort may be induced by the application of an external magnetic field perpendicular to the plane of an aromatic ring: this causes the appearance of an opposing magnetic field, demonstrating that the delocalised pi electrons can truly flow. The applied magnetic field induces a current density ("ring current") of circulating electrons in the pi system; this current in turn induces a magnetic field. A common manifestation of this effect is the large chemical shift observed in the NMR spectrum of aromatic structures.

A Vector Analogy

An accurate analogy of resonance is given by the algebra of vectors. A vector r is written in component form as xi + yj + zk, where x, y, and z are components and i, j, and k are the standard orthogonal Cartesian unit vectors. Just as the real vector r is neither i, nor j, nor k, but rather a combination of all three, a resonance hybrid is a conceptual combination of resonance structures.
x, y, and z have no independent existence; they are considered only as a decomposition of r into easier-to-handle components, as is the case with resonance structures. In fact this analogy is very close to the reality, as will be made clear in the following section.

True Nature of Resonance

Though resonance is often introduced in such a diagrammatic form in elementary chemistry, it actually has a deeper significance in the mathematical formalism of valence bond theory (VB). When a molecule cannot be represented by the standard tools of valence bond theory (promotion, hybridisation, orbital overlap, sigma and pi bond formation) because no single structure predicted by VB can account for all the properties of the molecule, one invokes the concept of resonance.

Valence bond theory gives us a model for benzene where each carbon atom makes two sigma bonds with its neighbouring carbon atoms and one with a hydrogen atom. But since carbon is tetravalent, it has the ability to form one more bond. In VB it can form this extra bond with either of the neighbouring carbon atoms, giving rise to the familiar Kekulé ring structure. But this cannot account for all carbon-carbon bond lengths being equal in benzene. A solution is to write the actual wavefunction of the molecule as a linear superposition of the two possible Kekulé structures (or rather the wavefunctions representing these structures), creating a wavefunction that is neither of its components but rather a superposition of them, just as in the vector analogy above (which is formally equivalent to this situation).

In benzene both Kekulé structures have equal weight, but this need not be the case. In general, the superposition is written with undetermined constant coefficients, which are then variationally optimized to find the lowest possible energy for the given set of basis wavefunctions.
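The superposition idea can be illustrated with the simplest possible two-basis variational problem. The numbers here are arbitrary stand-ins, not real benzene energies:

```python
# Two degenerate basis structures (the two Kekulé forms) with energy E0,
# coupled by a mixing integral beta < 0. For the 2x2 symmetric problem
# [[E0, beta], [beta, E0]] the eigenvalues are E0 + beta and E0 - beta,
# so the equal-weight superposition lies below either structure alone.
E0, beta = -10.0, -1.5    # arbitrary illustrative values (beta < 0)

E_lower = E0 + beta       # symmetric combination: equal weights, as in benzene
E_upper = E0 - beta       # antisymmetric combination

assert E_lower < E0 < E_upper
print(E_lower)            # -11.5: stabilised relative to either structure
```

The gap between E_lower and E0 plays the role of the resonance (delocalization) energy in this toy model; with more than two structures, or unequal weights, the coefficients would be found by minimizing the energy, as the text describes.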
This is taken to be the best approximation that can be made to the real structure, though a better one may be made with the addition of more structures.

In molecular orbital theory, the main alternative to VB, resonance often (but not always) translates to a delocalization of electrons in pi orbitals (which are a separate concept from pi bonds in VB). For example, in benzene the MO model gives us 6 pi electrons completely delocalised over all 6 carbon atoms, thus contributing something like half-bonds. This MO interpretation has inspired the picture of the benzene ring as a hexagon with a circle inside. Often when describing benzene the VB picture and the MO picture are intermixed, talking both about sigma 'bonds' (strictly a concept from VB) and 'delocalized' pi electrons (strictly a concept from MO).

Resonance Energy

Resonance hybrids are always more stable than any of the canonical structures would be, if they existed.[1] The delocalization of the electrons lowers the orbital energies, imparting this stability. The resonance in benzene gives rise to the property of aromaticity. The gain in stability of the resonance hybrid over the most stable of the (non-existent) canonical structures is called the resonance energy. A canonical structure that is lower in energy makes a relatively greater contribution to the resonance hybrid, or the actual picture of the molecule. In fact, resonance energy, and consequently stability, increase with the number of canonical structures possible, especially when these (non-existent) structures are equal in energy.

The resonance energy of a conjugated system can be 'measured' by the heat of hydrogenation of the molecule. Consider the example of benzene. The energy required to hydrogenate an isolated pi bond is around 28.6 kcal/mol (120 kJ/mol). Thus, according to the VB picture of benzene (having three pi bonds), the complete hydrogenation of benzene should require 85.8 kcal/mol (360 kJ/mol).
However, the experimental heat of hydrogenation of benzene is around 49.8 kcal/mol (210 kJ/mol). The difference of 36 kcal/mol (150 kJ/mol) can be looked upon as a measure of the resonance energy. One must bear in mind again that resonance structures have no physical existence. So, even though the term 'resonance energy' is strictly speaking meaningless, it offers an insight into how different the VB picture of a molecule is from the actual molecule itself. The resonance energy can be used to calculate electronegativities on the Pauling scale.

Writing Resonance Structures

1. The positions of the nuclei must be the same in all structures; otherwise they would be isomers, with real existence.
2. The total number of electrons, and thus the total charge, must be constant.
3. When separating charge (giving rise to ions), structures placing negative charges on less electronegative elements usually have little contribution, but this may not be true if additional bonds are gained.
4. Resonance hybrids cannot be made to have lower energy than the actual molecules.

Reactive Intermediates

Often, reactive intermediates such as carbocations and free radicals have a more delocalised structure than their parent reactants, giving rise to unexpected products. The classical example is allylic rearrangement. When 1 mole of HCl adds to 1 mole of 1,3-butadiene, in addition to the ordinarily expected product 3-chloro-1-butene, we also find 1-chloro-2-butene. Isotope labelling experiments have shown that what happens here is that the double bond shifts from the 1,2 position to the 2,3 position in some of the product. This and other evidence (such as NMR in superacid solutions) shows that the intermediate carbocation must have a highly delocalised structure, different from its mostly classical (delocalisation exists but is small) parent molecule. This cation (an allylic cation) can be represented using resonance, as shown above.
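Returning to the benzene hydrogenation figures quoted above (28.6 kcal/mol per isolated pi bond versus 49.8 kcal/mol measured), the arithmetic behind the resonance energy is:

```python
# Figures quoted in the text (kcal/mol)
per_pi_bond = 28.6      # hydrogenation of one isolated pi bond
benzene_exp = 49.8      # measured heat of hydrogenation of benzene

predicted = 3 * per_pi_bond               # VB picture: three pi bonds
resonance_energy = predicted - benzene_exp

print(round(predicted, 1))          # 85.8 kcal/mol
print(round(resonance_energy, 1))   # 36.0 kcal/mol
```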
This observation of greater delocalisation in less stable molecules is quite general. The excited states of conjugated dienes are stabilised more by conjugation than their ground states, causing them to become organic dyes.

A well-studied example of delocalisation that does not involve pi electrons (hyperconjugation) can be observed in the non-classical norbornyl cation. Other examples are diborane and methanium (CH5+). These are known as 3-center-2-electron bonds and are represented either by resonance structures involving rearrangement of sigma electrons or by a special notation, a Y that has the three nuclei at its three points.

# Structure and Bonding

## Basic Principles

What are molecules made from? They are made from atoms, which are themselves made from nuclei and electrons. These building blocks carry an electrical charge: nuclei are positively charged, and electrons are negatively charged. The nuclei themselves are made up of (positively charged) protons and (neutral) neutrons. This is all summarised on the following picture:

Different types of atom have different numbers of protons, neutrons, and electrons. For example, carbon atoms have 6 protons, 6 neutrons, and 6 electrons.

Charged species interact with each other: like charges (+ and +, or - and -) repel each other, opposite charges (- and +) attract each other. This well-known principle from physics is summarised by Coulomb's law:

F = q1 q2 / (4 π ε0 r²)

(Here, F is the force between the two charges; ε0 is a constant (not important here), q1 and q2 are the values of the charges involved, and r is the distance between them.)

This force between charged species is central to all of chemistry, and in particular to all the types of bonding we will discuss. First, it explains how atoms hold together: the negatively charged electrons are attracted to the positively charged nucleus more than they are repelled by the other electrons.
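Coulomb's law can be put to work numerically. As an illustration (the constant values are standard physical constants, assumed rather than taken from the text), here is the proton-electron attraction in a hydrogen atom at the Bohr radius:

```python
K = 8.9875e9     # Coulomb constant 1/(4*pi*eps0), N*m^2/C^2
E = 1.602e-19    # elementary charge, C
R = 5.29e-11     # Bohr radius, m

# F = q1*q2 / (4*pi*eps0*r^2): magnitude of the attractive force
F = K * E**2 / R**2
print(F)         # roughly 8.2e-8 N
```

A force of about 10^-7 N sounds tiny, but acting on a particle as light as an electron it is enormous, which is why atoms hold together so robustly.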
There is a fine balance between the attractive force holding the electrons close to the nucleus, and the repulsive force which tends to keep electrons away from each other. The result of this competition between attractive and repulsive charge-charge interactions is what explains the detailed structure of atoms. The electrons in atoms tend to form into concentric shells. For the hydrogen atom, with just one electron and one proton (Z = 1), the electron sits in the first shell, as shown here:

The nucleus is shown in purple. Also shown is the structure of the helium atom, with two protons, two neutrons (all shown together as the purple nucleus), and 2 electrons (i.e., Z = 2). Both electrons sit in the first shell. For elements with more electrons, there is no more room in the first shell, and so a second shell is occupied. This is shown below for carbon (Z = 6) and oxygen (Z = 8). Once an atom has 10 electrons, the second shell contains eight electrons and is full. For the elements beyond (starting with sodium, Z = 11), the last electrons therefore sit in the third shell, as shown here for sodium and chlorine (Z = 17):

This structural description leads naturally to an important property of atoms, the octet rule: atoms have a strong tendency to lose, gain, or share electrons if this leads to them having a complete shell of electrons around them. In other words, atoms prefer to have a total of 2, 10, or 18 electrons around them.

# Structure and Bonding in Chemistry

## Ionic Bonds

Elements in the first few columns of the periodic table have a few more electrons than predicted by the octet rule: they therefore lose the electrons in their outermost shells fairly easily. For example, the alkali metals (group I), such as sodium (Na) or potassium (K), which have, respectively, 11 and 19 electrons, easily lose one electron to form monopositive ions, Na+ and K+. These ions have 10 and 18 electrons, respectively, so they are quite stable according to the octet rule.
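The simplified octet-rule statement above (atoms prefer 2, 10 or 18 electrons in this shell picture) suggests a tiny predictor of common ion charges. This is only a sketch of the rule as stated, ignoring transition metals and everything the simple shell model leaves out:

```python
STABLE_COUNTS = (2, 10, 18)   # the filled-shell electron counts used above

def likely_ion_charge(electrons):
    """Charge after moving to the nearest filled-shell electron count."""
    nearest = min(STABLE_COUNTS, key=lambda n: abs(n - electrons))
    return electrons - nearest   # losing electrons gives a positive charge

print(likely_ion_charge(11))  # Na: +1
print(likely_ion_charge(17))  # Cl: -1
print(likely_ion_charge(12))  # Mg: +2
print(likely_ion_charge(8))   # O:  -2
```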
Elements in the last few columns of the periodic table have one, two or three fewer electrons than predicted by the octet rule: they therefore gain electrons fairly easily. For example, the halogens (group VII), such as fluorine (F) or chlorine (Cl), which have, respectively, 9 and 17 electrons, easily gain one electron to form mononegative ions, F- or Cl-. These ions have 10 and 18 electrons, respectively. Likewise, elements in group II form doubly positive ions such as Mg++ or Ca++, and elements in group VI form doubly negative ions such as O-- or S--. All these ions obey the octet rule and so are fairly stable.

Now, imagine what will happen when one sodium atom meets one chlorine atom: the sodium atom will lose one electron to give Na+, and the chlorine atom will gain that electron to give Cl-. This can be represented schematically in the following way:

The resulting ions, which have opposite charges, will be attracted to one another, and will draw closer, until they "touch". This happens when the inner shell of electrons on the sodium ion (shown in blue) starts to overlap with the outer shell of electrons on the chloride anion (shown in green). This pair of ions looks something like this:

It is possible to determine where the valence electrons are situated in this pair of ions. They are almost entirely situated on the chlorine atom, as expected: the sodium atom has lost its only valence (3s) electron, whereas chlorine has gained an electron and has the 3s23p6 valence configuration. The blue transparent surface on this picture encloses the region of space where the valence electrons spend most of the time:

NaCl, or sodium chloride, is however more complicated than this! This is because charge-charge interaction occurs in all directions. Once an Na+ cation has attracted a Cl- anion in one direction, it can attract another in a different direction.
So two pairs of ions such as above can come together to form a species with four ions in total, all placed so as to interact favourably with ions of opposite charge:

Here, too, all the valence electrons sit on the chlorine atoms:

And this need not stop here... The next step is to get 8 ions, 4 each of sodium and chlorine:

The stable form of sodium chloride involves a very large number of NaCl units arranged in a lattice (or regular arrangement) millions of atoms across. Because the lattice is rigid, this means that one gets a solid: the ions do not move much with respect to one another. Also, because atoms are so small, even a small crystal of salt will have billions of sodium chloride units in it! The ions are arranged so that each positive (sodium) ion is close to many negative (chloride) ions, as shown on the following picture:

Can you count how many ions each sodium is next to? And how many ions each chlorine is next to?

These pairs of ions in close contact are shown with lines joining them. These lines illustrate the strong ionic bonding between ions of opposite charge which are next to each other. However, you should remember that these close contacts are not the same as covalent bonds - there is no pair of electrons shared between the two atoms connected by the lines. Also, there is some ionic bonding between ions which are further away from each other - ions of opposite charge always attract each other, however far apart they are. Nevertheless, the force holding them together is largest when they are close together. The lines connecting ions in this lattice (and others below) are there to make it easier to detect the pairs of ions in close contact with each other.

Remember - atoms are very small! The distance between a sodium ion and its nearest chloride ion neighbours is about 3 ten-millionths of a millimeter. Imagine a cubic grain of salt with edges which are 3 tenths of a millimeter long.
That means there will be a line of about a million ions along each edge, and the grain will contain one billion billion ions in total. If each ion were replaced by a ping-pong ball (roughly 3 centimeters in diameter), each edge would be one hundred million times longer. Instead of being 3 tenths of a millimeter wide, this "grain" would be roughly thirty kilometers (or twenty miles) wide!! Enough to cover most of London...

All ionic compounds adopt a similar three-dimensional structure in which the ions are close to many ions of the opposite charge. There are however several ways of doing this. Caesium chloride (CsCl), for example, adopts a different structure to that of NaCl, as shown on the following picture:

Can you count how many ions each caesium ion (pink) is in close contact with? And how many ions each chloride ion (green) is close to?

As another example, let us consider a salt with a divalent (doubly positive) ion, for example calcium fluoride, CaF2 (the mineral fluorite). This adopts the structure shown below (the calcium atoms are shown as large grey spheres, the fluorine atoms are smaller and orange):

Can you count how many fluoride ions each calcium ion is in close contact with? And how many calcium ions each fluoride ion is close to?

Experienced chemists can often predict the structure that a given ionic species will adopt, based on the nature of the ions involved. This means that it is often possible to design ionic compounds having certain well-defined and desirable properties. As an example, chemists have been able to make high-temperature superconductors, such as the complicated ionic compound YBa2Cu3O7. This solid conducts electricity with no resistance at all at low temperature (below ca. -100 degrees centigrade). Previous superconductors only had this property at much lower temperatures. The lack of resistance makes superconductors very useful in a number of technological applications - e.g. in designing high-speed trains that levitate above the track!
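The grain-of-salt estimate above can be checked directly from the two lengths quoted in the text:

```python
spacing_m = 3e-10   # Na-Cl spacing: 3 ten-millionths of a millimetre
edge_m    = 3e-4    # grain edge: 3 tenths of a millimetre

ions_per_edge = edge_m / spacing_m
total_ions    = ions_per_edge ** 3

# Ping-pong-ball scale-up: each ion becomes a 3 cm ball
scale    = 0.03 / spacing_m     # one hundred million times bigger
new_edge = edge_m * scale       # in metres

print(ions_per_edge)    # ~1e6: a million ions along each edge
print(total_ions)       # ~1e18: a billion billion ions
print(new_edge / 1000)  # ~30 kilometres
```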
The repeating structure of this solid is shown below (oxygen is large and red, barium large and yellow-ish, yttrium small and pink, and copper small and blue). Notice how many oxygen ions surround each barium and yttrium ion.

## Ionic Bonds - Conclusions

Ionic bonds form between elements which readily lose electrons and others which readily gain electrons. Because the interaction between charges as given by Coulomb's law is the same in all directions, ionic compounds do not form molecules. Instead, periodic lattices with billions of ions form, in which each ion is surrounded by many ions of opposite charge. Therefore, ionic compounds are almost always solids at room temperature. By careful consideration of the properties of each ion, it is possible to design ionic solids with certain well-defined and desirable properties, like superconductors.

# Structure and Bonding in Chemistry

## Covalent Bonds

In the previous page, we saw how atoms could achieve a complete shell of electrons by losing or gaining one or more electrons, to form ions. There is another way atoms can satisfy the octet rule: they can share electrons. For example, two hydrogen atoms can share their electrons, as shown below. Because each of the shared electrons then "belongs" to both atoms, both atoms then have a filled shell, with two electrons. The pair of shared electrons is symbolised by the heavy line between the atoms.

In terms of charge-charge interactions, what happens is that the shared electrons are located between the two bonded atoms. The force attracting them to both nuclei is stronger than the repulsive force between the nuclei.

The methane (CH4) molecule illustrates a more complex example. Each of the 4 electrons in the outermost ("valence") shell of carbon is shared with one hydrogen. In turn, each of the hydrogens also shares one electron with carbon. Overall, carbon "owns" 10 electrons - satisfying the octet rule - and each hydrogen has 2.
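The "ownership" counting for methane can be written out explicitly, using the scheme the text uses (core electrons, plus an atom's own valence electrons, plus the electrons shared in by its partners):

```python
def owned_electrons(core, valence, shared_in):
    """Electrons an atom 'owns' in this simple shared-pair picture."""
    return core + valence + shared_in

carbon   = owned_electrons(core=2, valence=4, shared_in=4)  # four H atoms each share 1
hydrogen = owned_electrons(core=0, valence=1, shared_in=1)  # carbon shares 1 back

print(carbon)    # 10: a filled first and second shell
print(hydrogen)  # 2:  a filled first shell
```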
This is shown here:

When a molecule of methane is studied experimentally, it is found that the four hydrogens spread out evenly around the carbon atom, leading to the three-dimensional structure shown here:

As you would expect, given that the electrons are shared, if we plot the region where the electrons sit, it is not localised on one atom, as it was for the ionic compounds, but spread all over the molecule:

### Covalent Bonding and Isomers

As we have seen above, atoms can share electrons with others to form chemical bonds. This can also take place between two carbon atoms, to form a molecule such as ethane (C2H6):

When we add two more carbon atoms and 4 more hydrogens, to make butane (C4H10), an interesting situation arises: there are two different ways of bonding the carbons together, to form two different molecules, or isomers!!! These are shown below. For one of the isomers, the first carbon is bonded to three hydrogens and to the second carbon, which is itself bonded to another two hydrogens and to the third carbon, which is in turn bonded to the fourth carbon. In the other isomer, one of the carbons forms a bond to all three other carbon atoms:

Larger compounds can also be formed, and they will have even more isomers! For example, this compound with 8 carbons is called isooctane, and is one of the main components of petrol for cars: Can you check that the formula for this compound is C8H18? Can you sketch another compound with the same formula?

Because covalent bonds can be formed in many different ways, it is possible to write down, and to make, many different molecules. Many of these are natural compounds, made by living animals or plants within their cells. This example shows one such molecule, cholesterol (C27H46O), which can contribute to heart disease in people whose diet is too rich in fats:

Note that in this structure, two neighbouring carbon atoms appear to form only three bonds, which would go against the octet rule.
In fact, these atoms bond by sharing two electrons each (a total of four electrons). In this way, they complete their electron shell like the others. This situation is referred to as a double bond, and is shown in the pop-up window as a thicker stick between those two atoms. (Can you find this bond? Check that all the other carbon atoms do form four bonds.)

Other compounds are synthetic: they are made by chemists. Chemists can also make the natural compounds, starting from only simple things like methane and water. The "natural" molecules made in this way are identical to the "real" natural compounds! Other synthetic molecules do not exist in nature. They can have desirable properties; for example, many medicines are made in this way. An example of a "small" medicine molecule is aspirin, C9H8O4, shown below. In this molecule, two of the bonds between carbon and oxygen are double bonds, and are shown as thicker sticks in the model.

### Properties of Covalent Molecules

The covalent bonds between atoms in a given molecule are very strong, as strong as ionic bonds. However, unlike ionic bonds, there is a limit to the number of covalent bonds to other atoms that a given atom can form. For example, carbon can make four bonds - not more. Oxygen can form two bonds. As a result, once each atom has made all the bonds it can make, as in all the molecules shown above, the atoms can no longer interact with other ones. For this reason, two covalent molecules barely stick together. Light molecules are therefore gases, such as methane or ethane (above), hydrogen (H2), nitrogen (N2, the main component of the air we breathe), etc. Heavier molecules, such as the isooctane molecule, are liquids at room temperature, and others still, such as cholesterol, are solids.
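The "can you check that the formula is C8H18?" question earlier follows from the general rule for saturated, acyclic alkanes, CnH2n+2 (a standard rule, though not stated explicitly in the text):

```python
def alkane_formula(n_carbons):
    """Molecular formula of a saturated, acyclic alkane: CnH2n+2."""
    return f"C{n_carbons}H{2 * n_carbons + 2}"

print(alkane_formula(4))  # "C4H10": butane and its branched isomer
print(alkane_formula(8))  # "C8H18": isooctane and its many isomers
```

Note the formula cannot distinguish isomers: every way of connecting the carbon skeleton gives the same count, which is exactly why C4H10 corresponds to two different molecules.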
### Covalent Solids

As well as the solids just referred to, formed by piling lots of covalent molecules together and relying on their slight "stickiness" to hold the solid together, one can also form solids entirely bound together by covalent bonds. An excellent example is diamond, which is pure carbon, with each carbon atom bonding to four others to form a huge "molecule" containing many millions of millions of atoms. This shows a part of a diamond molecule:

In diamond, all the carbon atoms share one electron with each of their four neighbouring carbon atoms.

There is another form in which pure carbon can be found: graphite. This is the main component of the "lead" in pencils. Here, instead of each carbon having four neighbours, it only has three. Each carbon shares one electron with two of its neighbours, and two electrons with the third neighbour. In this way, one C-C bond out of three is a double bond. The atoms all bond together in planes, and the planes stack on top of each other as shown:

In graphite, the C-C bonds within the planes are very strong, but the force between the different planes is quite weak, and they can slip over one another. This explains the "soft" feel of graphite, and the fact that it is used as a lubricant, for example in motor oil.

### Other "Big" Covalent Molecules

In solids like diamond and graphite, the different atoms all bond to one another to form one very large molecule. The atoms are bonded to each other in all directions in diamond, and in two directions (within the planes) in graphite, with no bonding in the third direction. Some important covalent molecules involve atoms bonding to each other repeatedly along just one direction, with no bonds in the others. These are called polymers, and one simple example is polyethene (also called polythene, or polyethylene).
The structure of polythene is shown here (the dangling bonds at each end indicate how the bonding should really continue for thousands of atoms on each side):

Polythene is what most plastic bags are made of. Other polymers include molecules such as nylon and teflon (these, like polythene, are man-made), or cellulose (the stuff that makes wood hard), a biological polymer.

## Covalent Bonds - Conclusions

Covalent bonds involve sharing electrons between atoms. The shared electrons "belong" to both atoms in the bond. Each atom forms the right number of bonds, such that it ends up with a filled shell. There is a lot of flexibility in terms of which atom bonds to which other ones. This means that many isomeric molecules can be formed, and both Nature and chemists are skilled at designing and making new molecules with desirable properties.

In most cases, only a small number of atoms are bonded together to make a molecule, and there is no bonding between atoms in one molecule and atoms in other molecules. This means that molecules are only very slightly "sticky" towards each other, and covalent compounds are either gases, or liquids, or sometimes solids. In some cases, bonding occurs to form large molecules with thousands or millions of atoms, and these can be solids.

# Structure and Bonding in Chemistry

## Other Types of Bonding

On the previous page, we learned about the two most important types of bonding: ionic bonding and covalent bonding. Both of these are ultimately driven by the desire that atoms have to be surrounded by a complete shell of electrons. They achieve this either by gaining or losing, or by sharing, one or more electrons. There are other principles which can lead to atoms bonding to each other, and we will examine here two important cases: metallic bonding, and hydrogen bonding.

### Metallic Bonding

Metals are well known to be solids (except for mercury!).
The bonds between metals can loosely be described as covalent bonds (due to sharing electrons), except that the metal atoms do not just share electrons with 1, 2, 3 or 4 neighbours, as in covalent bonding, but with many atoms. The structure of the metal is determined by the fact that each atom tries to be as close to as many other atoms as possible. This is shown here for one typical metal structure (adopted, for instance, by iron at some temperatures):

Can you count how many neighbours each iron atom is bonded to? Contrast this with the structure of diamond seen previously.

Because the electrons are shared with all the neighbours, it is quite easy for the electrons in metals to move around. If each "shared" electron shifts one atom to the right or left, this leads to a net shift of charge. This occurs quite easily in metals, but much less so in ionic solids, or covalent ones, where the electrons are rigidly associated with either a particular atom or ion, or a particular pair of atoms. It is because electrons can move around so easily inside metals that the latter conduct electricity.

### Hydrogen Bonding

In covalent bonds, the electrons are shared, so that each atom gets a filled shell. When the distribution of electrons in molecules is considered in detail, it becomes apparent that the "sharing" is not always perfectly "fair": often, one of the atoms gets "more" of the shared electrons than the other does. This occurs, in particular, when atoms such as nitrogen, fluorine, or oxygen bond to hydrogen. For example, in HF (hydrogen fluoride), the structure can be described by the following "sharing" picture:

However, this structure does not tell the whole truth about the distribution of electrons in HF. Indeed, the following, "ionic" structure also respects the filled (or empty) shell rule:

In reality, HF is described by both these structures, so that the H-F bond is polar, with each atom bearing a small positive (δ+) or negative (δ-) charge.
When two hydrogen fluoride molecules come close to each other, the opposite charges attract each other, and one gets a "molecule" of di-hydrogen fluoride as shown:

The weak "bond" between the F atom and the H is called a hydrogen bond, and is shown here as the dotted green line. Hydrogen bonds can also occur between oxygen atoms and hydrogen. One of the most important hydrogen bonds is of this type: the one occurring in water. As discussed for HF, the electrons in H₂O molecules are not evenly "shared": the oxygen atom has more of them than the hydrogen atoms. As a result, oxygen has a partial negative charge, and the hydrogens have a partial positive charge. When two water molecules come close to one another, a hydrogen atom on one of the molecules is attracted to the oxygen of the other molecule, to give a dimer. The structure of this dimer is shown here:

Notice how the oxygen, hydrogen, and oxygen atoms involved in the hydrogen bond are arranged more or less in a straight line. This is the preferred geometry for hydrogen bonds, and explains why only one hydrogen bond can be formed in the water dimer.

Upon going to three water molecules, it is now possible to form several hydrogen bonds. This is shown here:

How many hydrogen bonds is each water molecule involved in?

In liquid water or ice, many water molecules are close to each other, and they form dense networks of hydrogen bonds. In ice, the arrangement of the water molecules with respect to each other is regular, whereas in liquid water, it is random. The following picture shows a typical arrangement of water molecules similar to what you might find in the liquid:

Can you see some of the hydrogen bonds? These bonds are weaker than typical covalent or ionic bonds, but nevertheless strong enough to make molecules which can hydrogen bond much more "sticky" towards each other than other covalent molecules with otherwise similar properties.
For example, the molecular mass of water is 18, and that of nitrogen is 28, yet nitrogen is a gas down to almost -200 degrees centigrade, whereas water is a liquid up to 100 degrees!

### Hydrogen Bonds in Biology

The cells of living things are made up of many different sorts of molecule. Two important classes of molecule are proteins and nucleic acids. In both of these molecules, parts of the (very large) molecules are involved in hydrogen bonds with other parts of the same molecules. This is very important in establishing the structure and properties of these molecules. The structure of DNA, one of the most important nucleic acids, is a classic illustration of the important role of hydrogen bonding.

### Conclusions

Ionic and covalent bonding are not the only kinds of bond between atoms. Some important other types of bond include metallic bonds and hydrogen bonds. These explain the properties of metals, e.g. that they conduct electricity, and are very important in establishing the properties of water and living cells.

## Conclusions

On the previous pages, we have been able to view the structure of a number of typical chemical compounds. We have also learnt how structure is dependent upon the bonding between atoms. We have seen examples of the two most important types of chemical bond, ionic bonds and covalent ones. We have also learned that the overall properties of a compound can be related to its structure, and thus to its bonding. Finally, we have briefly examined two more types of bonding: metallic and hydrogen bonding.

This page explores how you write electronic structures for atoms using s, p, and d notation. It assumes that you know about simple atomic orbitals - at least as far as the way they are named, and their relative energies.
If you want to look at the electronic structures of simple monatomic ions (such as Cl⁻, Ca²⁺ and Cr³⁺), you will find a link at the bottom of the page.

The electronic structures of atoms

Relating orbital filling to the Periodic Table

UK syllabuses for 16 - 18 year olds tend to stop at krypton when it comes to writing electronic structures, but it is possible that you could be asked for structures for elements up as far as barium. After barium you have to worry about f orbitals as well as s, p and d orbitals - and that's a problem for chemistry at a higher level. It is important that you look through past exam papers as well as your syllabus so that you can judge how hard the questions are likely to get. This page looks in detail at the elements in the shortened version of the Periodic Table above, and then shows how you could work out the structures of some bigger atoms.

The first period

Hydrogen has its only electron in the 1s orbital - 1s¹ - and at helium the first level is completely full - 1s².

The second period

Now we need to start filling the second level, and hence start the second period. Lithium's electron goes into the 2s orbital because that has a lower energy than the 2p orbitals. Lithium has an electronic structure of 1s²2s¹. Beryllium adds a second electron to this same level - 1s²2s². Now the 2p levels start to fill. These levels all have the same energy, and so the electrons go in singly at first.

B   1s²2s²**2px¹**
C   1s²2s²**2px¹2py¹**
N   1s²2s²**2px¹2py¹2pz¹**

Note: The orbitals where something new is happening are shown in bold type. You wouldn't normally write them any differently from the other orbitals.

The next electrons to go in will have to pair up with those already there.

O   1s²2s²**2px²**2py¹2pz¹
F   1s²2s²2px²**2py²**2pz¹
Ne  1s²2s²2px²2py²**2pz²**

You can see that it is going to get progressively tedious to write the full electronic structures of atoms as the number of electrons increases.
There are two ways around this, and you must be familiar with both.

Shortcut 1: All the various p electrons can be lumped together. For example, fluorine could be written as 1s²2s²2p⁵, and neon as 1s²2s²2p⁶. This is what is normally done if the electrons are in an inner layer. If the electrons are in the bonding level (those on the outside of the atom), they are sometimes written in shorthand, sometimes in full. Don't worry about this. Be prepared to meet either version, but if you are asked for the electronic structure of something in an exam, write it out in full showing all the px, py and pz orbitals in the outer level separately. For example, although we haven't yet met the electronic structure of chlorine, you could write it as 1s²2s²2p⁶3s²3px²3py²3pz¹. Notice that the 2p electrons are all lumped together whereas the 3p ones are shown in full. The logic is that the 3p electrons will be involved in bonding because they are on the outside of the atom, whereas the 2p electrons are buried deep in the atom and aren't really of any interest.

Shortcut 2: You can lump all the inner electrons together using, for example, the symbol [Ne]. In this context, [Ne] means the electronic structure of neon - in other words: 1s²2s²2px²2py²2pz². You wouldn't do this with helium because it takes longer to write [He] than it does 1s². On this basis the structure of chlorine would be written [Ne]3s²3px²3py²3pz¹.

The third period

At neon, all the second level orbitals are full, and so after this we have to start the third period with sodium. The pattern of filling is now exactly the same as in the previous period, except that everything is now happening at the 3-level. For example:

Mg  1s²2s²2p⁶3s²                 (short version [Ne]3s²)
S   1s²2s²2p⁶3s²3px²3py¹3pz¹     ([Ne]3s²3px²3py¹3pz¹)
Ar  1s²2s²2p⁶3s²3px²3py²3pz²     ([Ne]3s²3px²3py²3pz²)

Note: Check that you can do these. Cover the text and then work out these structures for yourself.
Then do all the rest of this period. When you've finished, check your answers against the corresponding elements from the previous period. Your answers should be the same except a level further out.

The beginning of the fourth period

At this point the 3-level orbitals aren't all full - the 3d levels haven't been used yet. But if you refer back to the energies of the orbitals, you will see that the next lowest energy orbital is the 4s - so that fills next.

K   1s²2s²2p⁶3s²3p⁶4s¹
Ca  1s²2s²2p⁶3s²3p⁶4s²

There is strong evidence for this in the similarities in the chemistry of elements like sodium (1s²2s²2p⁶3s¹) and potassium (1s²2s²2p⁶3s²3p⁶4s¹). The outer electron governs their properties, and that electron is in the same sort of orbital in both of the elements. That wouldn't be true if the outer electron in potassium was 3d¹.

s- and p-block elements

The elements in group 1 of the Periodic Table all have an outer electronic structure of ns¹ (where n is a number between 2 and 7). All group 2 elements have an outer electronic structure of ns². Elements in groups 1 and 2 are described as s-block elements. Elements from group 3 across to the noble gases all have their outer electrons in p orbitals. These are then described as p-block elements.

d-block elements

Remember that the 4s orbital has a lower energy than the 3d orbitals and so fills first. Once the 3d orbitals have filled up, the next electrons go into the 4p orbitals as you would expect. d-block elements are elements in which the last electron to be added to the atom is in a d orbital. The first series of these contains the elements from scandium to zinc, which at GCSE you probably called transition elements or transition metals. The terms "transition element" and "d-block element" don't quite have the same meaning, but it doesn't matter in the present context.

If you are interested: A transition element is defined as one which has partially filled d orbitals either in the element or any of its compounds.
Zinc (at the right-hand end of the d-block) always has a completely full 3d level (3d¹⁰) and so doesn't count as a transition element.

d electrons are almost always described as, for example, d⁵ or d⁸ - and not written as separate orbitals. Remember that there are five d orbitals, and that the electrons will inhabit them singly as far as possible. Up to 5 electrons will occupy orbitals on their own. After that they will have to pair up. So d⁵ means one electron in each of the five d orbitals, while d⁸ means that three of the five orbitals contain electron pairs and the other two hold single electrons.

Notice in what follows that all the 3-level orbitals are written together, even though the 3d electrons are added to the atom after the 4s.

Sc  1s²2s²2p⁶3s²3p⁶3d¹4s²
Ti  1s²2s²2p⁶3s²3p⁶3d²4s²
V   1s²2s²2p⁶3s²3p⁶3d³4s²
Cr  1s²2s²2p⁶3s²3p⁶3d⁵4s¹

Whoops! Chromium breaks the sequence. In chromium, the electrons in the 3d and 4s orbitals rearrange so that there is one electron in each orbital. It would be convenient if the sequence was tidy - but it's not!

Mn  1s²2s²2p⁶3s²3p⁶3d⁵4s²  (back to being tidy again)
Fe  1s²2s²2p⁶3s²3p⁶3d⁶4s²
Co  1s²2s²2p⁶3s²3p⁶3d⁷4s²
Ni  1s²2s²2p⁶3s²3p⁶3d⁸4s²
Cu  1s²2s²2p⁶3s²3p⁶3d¹⁰4s¹  (another awkward one!)
Zn  1s²2s²2p⁶3s²3p⁶3d¹⁰4s²

And at zinc the process of filling the d orbitals is complete.

Filling the rest of period 4

The next orbitals to be used are the 4p, and these fill in exactly the same way as the 2p or 3p. We are back now with the p-block elements from gallium to krypton. Bromine, for example, is 1s²2s²2p⁶3s²3p⁶3d¹⁰4s²4px²4py²4pz¹.

Useful exercise: Work out the electronic structures of all the elements from gallium to krypton. You can check your answers by comparing them with the elements directly above them in the Periodic Table. For example, gallium will have the same sort of arrangement of its outer level electrons as boron or aluminium - except that gallium's outer electrons will be in the 4-level.

Summary

Writing the electronic structure of an element from hydrogen to krypton

• Use the Periodic Table to find the atomic number, and hence number of electrons.
• Fill up orbitals in the order 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p - until you run out of electrons. The 3d is the awkward one - remember that specially. Fill p and d orbitals singly as far as possible before pairing electrons up.
• Remember that chromium and copper have electronic structures which break the pattern in the first row of the d-block.

Writing the electronic structure of big s- or p-block elements

Note: We are deliberately excluding the d-block elements apart from the first row that we've already looked at in detail. The pattern of awkward structures isn't the same in the other rows. This is a problem for degree level.

First work out the number of outer electrons. This is quite likely all you will be asked to do anyway. The number of outer electrons is the same as the group number. (The noble gases are a bit of a problem here, because they are normally called group 0 rather than group 8. Helium has 2 outer electrons; the rest have 8.) All elements in group 3, for example, have 3 electrons in their outer level. Fit these electrons into s and p orbitals as necessary. Which level orbitals? Count the periods in the Periodic Table (not forgetting the one with H and He in it).

Iodine is in group 7 and so has 7 outer electrons. It is in the fifth period and so its electrons will be in 5s and 5p orbitals. Iodine has the outer structure 5s²5px²5py²5pz¹.

What about the inner electrons if you need to work them out as well? The 1, 2 and 3 levels will all be full, and so will the 4s, 4p and 4d. The 4f levels don't fill until after anything you will be asked about at A'level. Just forget about them! That gives the full structure: 1s²2s²2p⁶3s²3p⁶3d¹⁰4s²4p⁶4d¹⁰5s²5px²5py²5pz¹.

When you've finished, count all the electrons to make sure that they come to the same as the atomic number. Don't forget to make this check - it's easy to miss an orbital out when it gets this complicated.

Barium is in group 2 and so has 2 outer electrons. It is in the sixth period.
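The filling procedure just described can be sketched in a few lines of code (an illustrative sketch, not part of the original page; the function name is invented, the chromium/copper exceptions and the px/py/pz detail are deliberately ignored, and orbitals come out in filling order rather than grouped by level):

```python
# Fill orbitals in the order 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, ... until
# the electrons run out. Each entry is (orbital label, capacity).
FILLING_ORDER = [
    ("1s", 2), ("2s", 2), ("2p", 6), ("3s", 2), ("3p", 6),
    ("4s", 2), ("3d", 10), ("4p", 6), ("5s", 2), ("4d", 10),
    ("5p", 6), ("6s", 2),
]

def electronic_structure(atomic_number):
    """Return e.g. '1s2 2s2 2p6 3s1' for sodium (Z = 11).
    Note: does NOT handle the chromium (Z = 24) and copper (Z = 29)
    exceptions mentioned above."""
    remaining = atomic_number
    parts = []
    for label, capacity in FILLING_ORDER:
        if remaining == 0:
            break
        n = min(remaining, capacity)   # fill each orbital set as far as it goes
        parts.append(f"{label}{n}")
        remaining -= n
    return " ".join(parts)

print(electronic_structure(17))  # chlorine
print(electronic_structure(19))  # potassium: 4s fills before 3d
print(electronic_structure(35))  # bromine
print(electronic_structure(56))  # barium: ends in 6s2, no f orbitals needed
```

For barium (Z = 56) the sketch correctly ends in 6s2, and adding up the superscripts reproduces the electron-count check recommended above.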
Barium has the outer structure 6s². Including all the inner levels: 1s²2s²2p⁶3s²3p⁶3d¹⁰4s²4p⁶4d¹⁰5s²5p⁶6s². It would be easy to include 5d¹⁰ as well by mistake, but the d level always fills after the next s level - so 5d fills after 6s just as 3d fills after 4s. As long as you counted the number of electrons you could easily spot this mistake, because you would have 10 too many.

Note: Don't worry too much about these complicated structures. You need to know how to work them out in principle.

This page explains what covalent bonding is. It starts with a simple picture of the single covalent bond, and then modifies it slightly for A'level purposes. It also takes a more sophisticated view (beyond A'level) if you are interested. You will find a link to a page on double covalent bonds at the bottom of the page.

A simple view of covalent bonding

The importance of noble gas structures

At a simple level (like GCSE) a lot of importance is attached to the electronic structures of noble gases like neon or argon, which have eight electrons in their outer energy levels (or two in the case of helium). These noble gas structures are thought of as being in some way a "desirable" thing for an atom to have. You may well have been left with the strong impression that when other atoms react, they try to achieve noble gas structures. As well as achieving noble gas structures by transferring electrons from one atom to another, as in ionic bonding, it is also possible for atoms to reach these stable structures by sharing electrons to give covalent bonds.

Some very simple covalent molecules

Chlorine

For example, two chlorine atoms could both achieve stable structures by sharing their single unpaired electron, as in the diagram. The fact that one chlorine has been drawn with electrons marked as crosses and the other as dots is simply to show where all the electrons come from. In reality there is no difference between them. The two chlorine atoms are said to be joined by a covalent bond.
The reason that the two chlorine atoms stick together is that the shared pair of electrons is attracted to the nucleus of both chlorine atoms.

Hydrogen

Hydrogen atoms only need two electrons in their outer level to reach the noble gas structure of helium. Once again, the covalent bond holds the two atoms together because the pair of electrons is attracted to both nuclei.

Hydrogen chloride

The hydrogen has a helium structure, and the chlorine an argon structure.

Covalent bonding at A'level

Cases where there isn't any difference from the simple view

If you stick closely to modern A'level syllabuses, there is little need to move far from the simple (GCSE) view. The only thing which must be changed is the over-reliance on the concept of noble gas structures. Most of the simple molecules you draw do in fact have all their atoms with noble gas structures. For example:

Even with a more complicated molecule like PCl₃, there's no problem. In this case, only the outer electrons are shown for simplicity. Each atom in this structure has inner layers of electrons of 2, 8. Again, everything present has a noble gas structure.

Cases where the simple view throws up problems

Boron trifluoride, BF₃

A boron atom only has 3 electrons in its outer level, and there is no possibility of it reaching a noble gas structure by simple sharing of electrons. Is this a problem? No. The boron has formed the maximum number of bonds that it can in the circumstances, and this is a perfectly valid structure. Energy is released whenever a covalent bond is formed. Because energy is being lost from the system, it becomes more stable after every covalent bond is made. It follows, therefore, that an atom will tend to make as many covalent bonds as possible. In the case of boron in BF₃, three bonds is the maximum possible because boron only has 3 electrons to share.

Note: You might perhaps wonder why boron doesn't form ionic bonds with fluorine instead.
Boron doesn't form ions because the total energy needed to remove three electrons to form a B³⁺ ion is simply too great to be recoverable when attractions are set up between the boron and fluoride ions.

Phosphorus(V) chloride, PCl₅

In the case of phosphorus, 5 covalent bonds are possible - as in PCl₅. Phosphorus forms two chlorides - PCl₃ and PCl₅. When phosphorus burns in chlorine, both are formed - the majority product depending on how much chlorine is available. We've already looked at the structure of PCl₃. The diagram of PCl₅ (like the previous diagram of PCl₃) shows only the outer electrons. Notice that the phosphorus now has 5 pairs of electrons in the outer level - certainly not a noble gas structure. You would have been content to draw PCl₃ at GCSE, but PCl₅ would have looked very worrying.

Why does phosphorus sometimes break away from a noble gas structure and form five bonds? In order to answer that question, we need to explore territory beyond the limits of current A'level syllabuses. Don't be put off by this! It isn't particularly difficult, and is extremely useful if you are going to understand the bonding in some important organic compounds.

A more sophisticated view of covalent bonding

The bonding in methane, CH₄

Warning! If you aren't happy with describing electron arrangements in s and p notation, and with the shapes of s and p orbitals, you need to read about orbitals before you go on.

What is wrong with the dots-and-crosses picture of bonding in methane?

We are starting with methane because it is the simplest case which illustrates the sort of processes involved. You will remember that the dots-and-crosses picture of methane looks like this.

There is a serious mismatch between this structure and the modern electronic structure of carbon, 1s²2s²2px¹2py¹. The modern structure shows that there are only 2 unpaired electrons for hydrogens to share with, instead of the 4 which the simple view requires.
You can see this more readily using the electrons-in-boxes notation. Only the 2-level electrons are shown. The 1s² electrons are too deep inside the atom to be involved in bonding. The only electrons directly available for sharing are the 2p electrons. Why then isn't methane CH₂?

Promotion of an electron

When bonds are formed, energy is released and the system becomes more stable. If carbon forms 4 bonds rather than 2, twice as much energy is released and so the resulting molecule becomes even more stable. There is only a small energy gap between the 2s and 2p orbitals, and so it pays the carbon to provide a small amount of energy to promote an electron from the 2s to the empty 2p to give 4 unpaired electrons. The extra energy released when the bonds form more than compensates for the initial input.

Note: People sometimes worry that the promoted electron is drawn as an up-arrow, whereas it started as a down-arrow. The reason for this is actually fairly complicated - well beyond the level we are working at. Just get in the habit of writing it like this because it makes the diagrams look tidy!

Now that we've got 4 unpaired electrons ready for bonding, another problem arises. In methane all the carbon-hydrogen bonds are identical, but our electrons are in two different kinds of orbitals. You aren't going to get four identical bonds unless you start from four identical orbitals.

Hybridisation

The electrons rearrange themselves again in a process called hybridisation. This reorganises the electrons into four identical hybrid orbitals called sp³ hybrids (because they are made from one s orbital and three p orbitals). You should read "sp³" as "s p three" - not as "s p cubed". sp³ hybrid orbitals look a bit like half a p orbital, and they arrange themselves in space so that they are as far apart as possible. You can picture the nucleus as being at the centre of a tetrahedron (a triangularly based pyramid) with the orbitals pointing to the corners.
For clarity, the nucleus is drawn far larger than it really is.

What happens when the bonds are formed?

Remember that hydrogen's electron is in a 1s orbital - a spherically symmetric region of space surrounding the nucleus where there is some fixed chance (say 95%) of finding the electron. When a covalent bond is formed, the atomic orbitals (the orbitals in the individual atoms) merge to produce a new molecular orbital which contains the electron pair which creates the bond. Four molecular orbitals are formed, looking rather like the original sp³ hybrids but with a hydrogen nucleus embedded in each lobe. Each orbital holds the 2 electrons that we've previously drawn as a dot and a cross.

The principles involved - promotion of electrons if necessary, then hybridisation, followed by the formation of molecular orbitals - can be applied to any covalently-bound molecule.

Note: You will find this bit on methane repeated in the organic section of this site. That article on methane goes on to look at the formation of carbon-carbon single bonds in ethane.

The bonding in the phosphorus chlorides, PCl₃ and PCl₅

What's wrong with the simple view of PCl₃?

This diagram only shows the outer (bonding) electrons. Nothing is wrong with this! (Although it doesn't account for the shape of the molecule properly.) If you were going to take a more modern look at it, the argument would go like this:

Phosphorus has the electronic structure 1s²2s²2p⁶3s²3px¹3py¹3pz¹. If we look only at the outer electrons as "electrons-in-boxes", there are 3 unpaired electrons that can be used to form bonds with 3 chlorine atoms. The four 3-level orbitals hybridise to produce 4 equivalent sp³ hybrids just as in carbon - except that one of these hybrid orbitals contains a lone pair of electrons. Each of the 3 chlorines then forms a covalent bond by merging the atomic orbital containing its unpaired electron with one of the phosphorus unpaired electrons to make 3 molecular orbitals.
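As an aside, you can check numerically that sp³ hybrids pointing to alternate corners of a cube really do sit at the familiar tetrahedral angle of about 109.5° (a small illustrative script, not part of the original pages):

```python
# sp3 hybrids point to alternate corners of a cube centred on the nucleus.
# The angle between any two such directions is the tetrahedral angle.
import math

directions = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

def angle_deg(u, v):
    """Angle in degrees between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / norm))

print(round(angle_deg(directions[0], directions[1]), 2))  # 109.47
```

Every pair of directions gives the same answer, which is why all four bonds in methane are equivalent.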
You might wonder whether all this is worth the bother! Probably not! It is worth it with PCl₅, though.

What's wrong with the simple view of PCl₅?

You will remember that the dots-and-crosses picture of PCl₅ looks awkward because the phosphorus doesn't end up with a noble gas structure. This diagram also shows only the outer electrons. In this case, a more modern view makes things look better by abandoning any pretence of worrying about noble gas structures. If the phosphorus is going to form PCl₅ it has first to generate 5 unpaired electrons. It does this by promoting one of the electrons in the 3s orbital to the next available higher energy orbital.

Which higher energy orbital? It uses one of the 3d orbitals. You might have expected it to use the 4s orbital, because this is the orbital that fills before the 3d when atoms are being built from scratch. Not so! Apart from when you are building the atoms in the first place, the 3d always counts as the lower energy orbital. This leaves the phosphorus with this arrangement of its electrons:

The 3-level electrons now rearrange (hybridise) themselves to give 5 hybrid orbitals, all of equal energy. They would be called sp³d hybrids because that's what they are made from. The electrons in each of these orbitals would then share space with electrons from five chlorines to make five new molecular orbitals - and hence five covalent bonds.

Why does phosphorus form these extra two bonds? It puts in an amount of energy to promote an electron, which is more than paid back when the new bonds form. Put simply, it is energetically profitable for the phosphorus to form the extra bonds. The advantage of thinking of it in this way is that it completely ignores the question of whether you've got a noble gas structure, and so you don't worry about it.

A non-existent compound - NCl₅

Nitrogen is in the same group of the Periodic Table as phosphorus, and you might expect it to form a similar range of compounds. In fact, it doesn't.
For example, the compound NCl₃ exists, but there is no such thing as NCl₅. Nitrogen is 1s²2s²2px¹2py¹2pz¹. The reason that NCl₅ doesn't exist is that in order to form five bonds, the nitrogen would have to promote one of its 2s electrons. The problem is that there aren't any 2d orbitals to promote an electron into - and the energy gap to the next level (the 3s) is far too great. In this case, then, the energy released when the extra bonds are made isn't enough to compensate for the energy needed to promote an electron - and so that promotion doesn't happen. Atoms will form as many bonds as possible provided it is energetically profitable.

This page explains what co-ordinate (also called dative covalent) bonding is. You need to have a reasonable understanding of simple covalent bonding before you start.

Co-ordinate (dative covalent) bonding

A covalent bond is formed by two atoms sharing a pair of electrons. The atoms are held together because the electron pair is attracted by both of the nuclei. In the formation of a simple covalent bond, each atom supplies one electron to the bond - but that doesn't have to be the case. A co-ordinate bond (also called a dative covalent bond) is a covalent bond (a shared pair of electrons) in which both electrons come from the same atom. For the rest of this page, we shall use the term co-ordinate bond - but if you prefer to call it a dative covalent bond, that's not a problem!

The reaction between ammonia and hydrogen chloride

If these colourless gases are allowed to mix, a thick white smoke of solid ammonium chloride is formed. Ammonium ions, NH₄⁺, are formed by the transfer of a hydrogen ion from the hydrogen chloride to the lone pair of electrons on the ammonia molecule. When the ammonium ion, NH₄⁺, is formed, the fourth hydrogen is attached by a dative covalent bond, because only the hydrogen's nucleus is transferred from the chlorine to the nitrogen.
The hydrogen's electron is left behind on the chlorine to form a negative chloride ion. Once the ammonium ion has been formed it is impossible to tell any difference between the dative covalent and the ordinary covalent bonds. Although the electrons are shown differently in the diagram, there is no difference between them in reality. Representing co-ordinate bonds In simple diagrams, a co-ordinate bond is shown by an arrow. The arrow points from the atom donating the lone pair to the atom accepting it. Dissolving hydrogen chloride in water to make hydrochloric acid Something similar happens. A hydrogen ion (H+) is transferred from the chlorine to one of the lone pairs on the oxygen atom. The H3O+ ion is variously called the hydroxonium ion, the hydronium ion or the oxonium ion. In an introductory chemistry course (such as GCSE), whenever you have talked about hydrogen ions (for example in acids), you have actually been talking about the hydroxonium ion. A raw hydrogen ion is simply a proton, and is far too reactive to exist on its own in a test tube. If you write the hydrogen ion as H+(aq), the "(aq)" represents the water molecule that the hydrogen ion is attached to. When it reacts with something (an alkali, for example), the hydrogen ion simply becomes detached from the water molecule again. Note that once the co-ordinate bond has been set up, all the hydrogens attached to the oxygen are exactly equivalent. When a hydrogen ion breaks away again, it could be any of the three. The reaction between ammonia and boron trifluoride, BF3 If you have recently read the page on covalent bonding, you may remember boron trifluoride as a compound which doesn't have a noble gas structure around the boron atom. The boron only has 3 pairs of electrons in its bonding level, whereas there would be room for 4 pairs. BF3 is described as being electron deficient. 
The lone pair on the nitrogen of an ammonia molecule can be used to overcome that deficiency, and a compound is formed involving a co-ordinate bond. Using lines to represent the bonds, this could be drawn more simply as: The second diagram shows another way that you might find co-ordinate bonds drawn. The nitrogen end of the bond has become positive because the electron pair has moved away from the nitrogen towards the boron - which has therefore become negative. We shan't use this method again - it's more confusing than just using an arrow. The structure of aluminium chloride Aluminium chloride sublimes (turns straight from a solid to a gas) at about 180°C. If it simply contained ions it would have a very high melting and boiling point because of the strong attractions between the positive and negative ions. The implication is that when it sublimes at this relatively low temperature, it must be covalent. The dots-and-crosses diagram shows only the outer electrons. AlCl3, like BF3, is electron deficient. There is likely to be a similarity, because aluminium and boron are in the same group of the Periodic Table, as are fluorine and chlorine. Measurements of the relative formula mass of aluminium chloride show that its formula in the vapour at the sublimation temperature is not AlCl3, but Al2Cl6. It exists as a dimer (two molecules joined together). The bonding between the two molecules is co-ordinate, using lone pairs on the chlorine atoms. Each chlorine atom has 3 lone pairs, but only the two important ones are shown in the line diagram. Note:  The uninteresting electrons on the chlorines have been faded in colour to make the co-ordinate bonds show up better. There's nothing special about those two particular lone pairs - they just happen to be the ones pointing in the right direction. Energy is released when the two co-ordinate bonds are formed, and so the dimer is more stable than two separate AlCl3 molecules.
Note:  Aluminium chloride is complicated because of the way it keeps changing its bonding as the temperature increases. If you are interested in exploring this in more detail, you could have a look at the page about the Period 3 chlorides. It isn't particularly relevant to the present page, though. The bonding in hydrated metal ions Water molecules are strongly attracted to ions in solution - the water molecules clustering around the positive or negative ions. In many cases, the attractions are so great that formal bonds are made, and this is true of almost all positive metal ions. Ions with water molecules attached are described as hydrated ions. Although aluminium chloride is covalent, when it dissolves in water, ions are produced. Six water molecules bond to the aluminium to give an ion with the formula Al(H2O)6³⁺. It's called the hexaaquaaluminium ion - which translates as six ("hexa") water molecules ("aqua") wrapped around an aluminium ion. The bonding in this (and the similar ions formed by the great majority of other metals) is co-ordinate (dative covalent) using lone pairs on the water molecules. Aluminium is 1s² 2s² 2p⁶ 3s² 3px¹. When it forms an Al3+ ion it loses the 3-level electrons to leave 1s² 2s² 2p⁶. That means that all the 3-level orbitals are now empty. The aluminium re-organises (hybridises) six of these (the 3s, three 3p, and two 3d) to produce six new orbitals all with the same energy. These six hybrid orbitals accept lone pairs from six water molecules. You might wonder why it chooses to use six orbitals rather than four or eight or whatever. Six is the maximum number of water molecules it is possible to fit around an aluminium ion (and most other metal ions). By making the maximum number of bonds, it releases most energy and so becomes most energetically stable. Only one lone pair is shown on each water molecule. The other lone pair is pointing away from the aluminium and so isn't involved in the bonding.
The resulting ion looks like this: Because of the movement of electrons towards the centre of the ion, the 3+ charge is no longer located entirely on the aluminium, but is now spread over the whole of the ion. Note:  Dotted arrows represent lone pairs coming from water molecules behind the plane of the screen or paper. Wedge shaped arrows represent bonds from water molecules in front of the plane of the screen or paper. Two more molecules Carbon monoxide, CO Carbon monoxide can be thought of as having two ordinary covalent bonds between the carbon and the oxygen plus a co-ordinate bond using a lone pair on the oxygen atom. Nitric acid, HNO3 In this case, one of the oxygen atoms can be thought of as attaching to the nitrogen via a co-ordinate bond using the lone pair on the nitrogen atom. In fact this structure is misleading because it suggests that the two oxygen atoms on the right-hand side of the diagram are joined to the nitrogen in different ways. Both bonds are actually identical in length and strength, and so the arrangement of the electrons must be identical. There is no way of showing this using a dots-and-crosses picture. The bonding involves delocalisation. If you are interested:  The bonding is rather similar to the bonding in the ethanoate ion (although without the negative charge). You will find this described on a page about the acidity of organic acids. This page explains the origin of hydrogen bonding - a relatively strong form of intermolecular attraction. If you are also interested in the weaker intermolecular forces (van der Waals dispersion forces and dipole-dipole interactions), there is a link at the bottom of the page. Many elements form compounds with hydrogen - referred to as "hydrides". If you plot the boiling points of the hydrides of the Group 4 elements, you find that the boiling points increase as you go down the group. 
The increase in boiling point happens because the molecules are getting larger with more electrons, and so van der Waals dispersion forces become greater. If you repeat this exercise with the hydrides of elements in Groups 5, 6 and 7, something odd happens. Although for the most part the trend is exactly the same as in group 4 (for exactly the same reasons), the boiling point of the hydride of the first element in each group is abnormally high. In the cases of NH3, H2O and HF there must be some additional intermolecular forces of attraction, requiring significantly more heat energy to break. These relatively powerful intermolecular forces are described as hydrogen bonds. The molecules which have this extra bonding are: Note:  The solid line represents a bond in the plane of the screen or paper. Dotted bonds are going back into the screen or paper away from you, and wedge-shaped ones are coming out towards you. Notice that in each of these molecules: • The hydrogen is attached directly to one of the most electronegative elements, causing the hydrogen to acquire a significant amount of positive charge. • Each of the elements to which the hydrogen is attached is not only significantly negative, but also has at least one "active" lone pair. Lone pairs at the 2-level have the electrons contained in a relatively small volume of space which therefore has a high density of negative charge. Lone pairs at higher levels are more diffuse and not so attractive to positive things. Consider two water molecules coming close together. The δ+ hydrogen is so strongly attracted to the lone pair that it is almost as if you were beginning to form a co-ordinate (dative covalent) bond. It doesn't go that far, but the attraction is significantly stronger than an ordinary dipole-dipole interaction. Hydrogen bonds have about a tenth of the strength of an average covalent bond, and are being constantly broken and reformed in liquid water.
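The boiling-point anomaly described above can be sketched with a few figures. This is a rough check, not from this page: the boiling points are approximate, widely quoted literature values, and the `anomalous` helper is illustrative.

```python
# Approximate boiling points (deg C) of the first two hydrides in each
# group; widely quoted literature values, not taken from the text.
BP = {
    "Group 4": [("CH4", -161), ("SiH4", -112)],
    "Group 5": [("NH3",  -33), ("PH3",  -88)],
    "Group 6": [("H2O",  100), ("H2S",  -60)],
    "Group 7": [("HF",    20), ("HCl",  -85)],
}

def anomalous(group):
    """True if the first hydride boils higher than the second -
    the signature of the extra hydrogen-bonding attractions."""
    (_, bp1), (_, bp2) = BP[group]
    return bp1 > bp2

for g in BP:
    # Group 4 follows the normal size trend; Groups 5-7 show the anomaly
    print(g, anomalous(g))
```

Only Group 4, with no N-H, O-H or F-H bonds, follows the plain size trend.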
If you liken the covalent bond between the oxygen and hydrogen to a stable marriage, the hydrogen bond has "just good friends" status. On the same scale, van der Waals attractions represent mere passing acquaintances! Water as a "perfect" example of hydrogen bonding Notice that each water molecule can potentially form four hydrogen bonds with surrounding water molecules. There are exactly the right numbers of δ+ hydrogens and lone pairs so that every one of them can be involved in hydrogen bonding. This is why the boiling point of water is higher than that of ammonia or hydrogen fluoride. In the case of ammonia, the amount of hydrogen bonding is limited by the fact that each nitrogen only has one lone pair. In a group of ammonia molecules, there aren't enough lone pairs to go around to satisfy all the hydrogens. In hydrogen fluoride, the problem is a shortage of hydrogens. In water, there are exactly the right number of each. Water could be considered as the "perfect" hydrogen bonded system. Note:  You will find more discussion on the effect of hydrogen bonding on the properties of water in the page on molecular structures. More complex examples of hydrogen bonding The hydration of negative ions When an ionic substance dissolves in water, water molecules cluster around the separated ions. This process is called hydration. Water frequently attaches to positive ions by co-ordinate (dative covalent) bonds. It bonds to negative ions using hydrogen bonds. Note:  If you are interested in the bonding in hydrated positive ions, you could follow this link to co-ordinate (dative covalent) bonding. The diagram shows the potential hydrogen bonds formed to a chloride ion, Cl-. Although the lone pairs in the chloride ion are at the 3-level and wouldn't normally be active enough to form hydrogen bonds, in this case they are made more attractive by the full negative charge on the chlorine.
However complicated the negative ion, there will always be lone pairs that the hydrogen atoms from the water molecules can hydrogen bond to. Hydrogen bonding in alcohols An alcohol is an organic molecule containing an -O-H group. Any molecule which has a hydrogen atom attached directly to an oxygen or a nitrogen is capable of hydrogen bonding. Such molecules will always have higher boiling points than similarly sized molecules which don't have an -O-H or an -N-H group. The hydrogen bonding makes the molecules "stickier", and more heat is necessary to separate them. Ethanol, CH3CH2-O-H, and methoxymethane, CH3-O-CH3, both have the same molecular formula, C2H6O. Note:  If you haven't done any organic chemistry yet, don't worry about the names. They have the same number of electrons, and a similar length to the molecule. The van der Waals attractions (both dispersion forces and dipole-dipole attractions) in each will be much the same. However, ethanol has a hydrogen atom attached directly to an oxygen - and that oxygen still has exactly the same two lone pairs as in a water molecule. Hydrogen bonding can occur between ethanol molecules, although not as effectively as in water. The hydrogen bonding is limited by the fact that there is only one hydrogen in each ethanol molecule with sufficient δ+ charge. In methoxymethane, the lone pairs on the oxygen are still there, but the hydrogens aren't sufficiently δ+ for hydrogen bonds to form. Except in some rather unusual cases, the hydrogen atom has to be attached directly to the very electronegative element for hydrogen bonding to occur. The boiling points of ethanol and methoxymethane show the dramatic effect that the hydrogen bonding has on the stickiness of the ethanol molecules:

ethanol (with hydrogen bonding): 78.5°C
methoxymethane (without hydrogen bonding): -24.8°C

The hydrogen bonding in the ethanol has lifted its boiling point about 100°C.
It is important to realise that hydrogen bonding exists in addition to van der Waals attractions. For example, all the following molecules contain the same number of electrons, and the first two are much the same length. The higher boiling point of the butan-1-ol is due to the additional hydrogen bonding. Comparing the two alcohols (containing -OH groups), both boiling points are high because of the additional hydrogen bonding due to the hydrogen attached directly to the oxygen - but they aren't the same. The boiling point of the 2-methylpropan-1-ol isn't as high as the butan-1-ol because the branching in the molecule makes the van der Waals attractions less effective than in the longer butan-1-ol. Hydrogen bonding in organic molecules containing nitrogen Hydrogen bonding also occurs in organic molecules containing N-H groups - in the same sort of way that it occurs in ammonia. Examples range from simple molecules like CH3NH2 (methylamine) to large molecules like proteins and DNA. The two strands of the famous double helix in DNA are held together by hydrogen bonds between hydrogen atoms attached to nitrogen on one strand, and lone pairs on another nitrogen or an oxygen on the other one. Structure & Bonding The study of organic chemistry must at some point extend to the molecular level, for the physical and chemical properties of a substance are ultimately explained in terms of the structure and bonding of molecules. This module introduces some basic facts and principles that are needed for a discussion of organic molecules. 
## Electron Configurations in the Periodic Table

| Z | Symbol | Electron configuration |
| --- | --- | --- |
| 1 | H | 1s¹ |
| 2 | He | 1s² |
| 3 | Li | 1s² 2s¹ |
| 4 | Be | 1s² 2s² |
| 5 | B | 1s² 2s² 2p¹ |
| 6 | C | 1s² 2s² 2p² |
| 7 | N | 1s² 2s² 2p³ |
| 8 | O | 1s² 2s² 2p⁴ |
| 9 | F | 1s² 2s² 2p⁵ |
| 10 | Ne | 1s² 2s² 2p⁶ |
| 11 | Na | [Ne] 3s¹ |
| 12 | Mg | [Ne] 3s² |
| 13 | Al | [Ne] 3s² 3p¹ |
| 14 | Si | [Ne] 3s² 3p² |
| 15 | P | [Ne] 3s² 3p³ |
| 16 | S | [Ne] 3s² 3p⁴ |
| 17 | Cl | [Ne] 3s² 3p⁵ |
| 18 | Ar | [Ne] 3s² 3p⁶ |

Four elements, hydrogen, carbon, oxygen and nitrogen, are the major components of most organic compounds. Consequently, our understanding of organic chemistry must have, as a foundation, an appreciation of the electronic structure and properties of these elements. The truncated periodic table shown above provides the orbital electronic structure for the first eighteen elements (hydrogen through argon). According to the Aufbau principle, the electrons of an atom occupy quantum levels or orbitals starting from the lowest energy level, and proceeding to the highest, with each orbital holding a maximum of two paired electrons (opposite spins). Electron shell #1 has the lowest energy and its s-orbital is the first to be filled. Shell #2 has four higher energy orbitals, the 2s-orbital being lower in energy than the three 2p-orbitals (x, y and z). As we progress from lithium (atomic number = 3) to neon (atomic number = 10) across the second row or period of the table, all these atoms start with a filled 1s-orbital, and the 2s-orbital is occupied with an electron pair before the 2p-orbitals are filled. In the third period of the table, the atoms all have a neon-like core of 10 electrons, and shell #3 is occupied progressively with eight electrons, starting with the 3s-orbital. The highest occupied electron shell is called the valence shell, and the electrons occupying this shell are called valence electrons. The chemical properties of the elements reflect their electron configurations. For example, helium, neon and argon are exceptionally stable and unreactive monoatomic gases.
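The Aufbau filling just described is easy to express in code. Below is a minimal sketch (the `electron_config` function and the plain-text notation are illustrative, not from the text) that fills the subshells 1s, 2s, 2p, 3s, 3p in order for the first eighteen elements:

```python
# Filling order and electron capacities for Z = 1..18 (1s, 2s, 2p, 3s, 3p),
# following the Aufbau principle: lowest-energy subshells fill first.
SUBSHELLS = [("1s", 2), ("2s", 2), ("2p", 6), ("3s", 2), ("3p", 6)]

def electron_config(z):
    """Return the ground-state configuration for atomic number z (<= 18)."""
    parts = []
    for name, capacity in SUBSHELLS:
        if z <= 0:
            break
        n = min(z, capacity)       # put as many electrons here as fit
        parts.append(f"{name}{n}")
        z -= n
    return " ".join(parts)

print(electron_config(15))  # phosphorus -> 1s2 2s2 2p6 3s2 3p3
```

Beyond argon the simple list above is no longer enough, because the 4s/3d ordering has to be handled explicitly.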
Helium is unique since its valence shell consists of a single s-orbital. The other members of group 8 have a characteristic valence shell electron octet (ns² npx² npy² npz²). This group of inert (or noble) gases also includes krypton (Kr: 4s², 4p⁶), xenon (Xe: 5s², 5p⁶) and radon (Rn: 6s², 6p⁶). In the periodic table above these elements are colored beige. The halogens (F, Cl, Br etc.) are one electron short of a valence shell octet, and are among the most reactive of the elements (they are colored red in this periodic table). In their chemical reactions halogen atoms achieve a valence shell octet by capturing or borrowing the eighth electron from another atom or molecule. The alkali metals Li, Na, K etc. (colored violet above) are also exceptionally reactive, but for the opposite reason. These atoms have only one electron in the valence shell, and on losing this electron arrive at the lower shell valence octet. As a consequence of this electron loss, these elements are commonly encountered as cations (positively charged atoms). The elements in groups 2 through 7 all exhibit characteristic reactivities and bonding patterns that can in large part be rationalized by their electron configurations. It should be noted that hydrogen is unique. Its location in the periodic table should not suggest a kinship to the chemistry of the alkali metals, and its role in the structure and properties of organic compounds is unlike that of any other element.

## Chemical Bonding and Valence

As noted earlier, the inert gas elements of group 8 exist as monoatomic gases, and do not in general react with other elements. In contrast, other gaseous elements exist as diatomic molecules (H2, N2, O2, F2 & Cl2), and all but nitrogen are quite reactive. Some dramatic examples of this reactivity are shown in the following equations.
2Na + Cl2 → 2NaCl
2H2 + O2 → 2H2O
C + O2 → CO2
C + 2F2 → CF4

Why do the atoms of many elements interact with each other and with other elements to give stable molecules? In addressing this question it is instructive to begin with a very simple model for the attraction or bonding of atoms to each other, and then progress to more sophisticated explanations. When sodium is burned in a chlorine atmosphere, it produces the compound sodium chloride. This has a high melting point (800 ºC) and dissolves in water to give a conducting solution. Sodium chloride is an ionic compound, and the crystalline solid has the structure shown on the right. Transfer of the lone 3s electron of a sodium atom to the half-filled 3p orbital of a chlorine atom generates a sodium cation (neon valence shell) and a chloride anion (argon valence shell). Electrostatic attraction results in these oppositely charged ions packing together in a lattice. The attractive forces holding the ions in place can be referred to as ionic bonds. The other three reactions shown above give products that are very different from sodium chloride. Water is a liquid at room temperature; carbon dioxide and carbon tetrafluoride are gases. None of these compounds is composed of ions. A different attractive interaction between atoms, called covalent bonding, is involved here. Covalent bonding occurs by a sharing of valence electrons, rather than an outright electron transfer. Similarities in physical properties (they are all gases) suggest that the diatomic elements H2, N2, O2, F2 & Cl2 also have covalent bonds. Examples of covalent bonding shown below include hydrogen, fluorine, carbon dioxide and carbon tetrafluoride. These illustrations use a simple Bohr notation, with valence electrons designated by colored dots. Note that in the first case both hydrogen atoms achieve a helium-like pair of 1s-electrons by sharing.
In the other examples carbon, oxygen and fluorine achieve neon-like valence octets by a similar sharing of electron pairs. Carbon dioxide is notable because it is a case in which two pairs of electrons (four in all) are shared by the same two atoms. This is an example of a double covalent bond. These electron sharing diagrams (Lewis formulas) are a useful first step in understanding covalent bonding, but it is quicker and easier to draw Couper-Kekulé formulas in which each shared electron pair is represented by a line between the atom symbols. Non-bonding valence electrons are shown as dots. These formulas are derived from the graphic notations suggested by A. Couper and A. Kekulé, and are not identical to their original drawings. Some examples of such structural formulas are given in the following table.

| Common Name | Molecular Formula |
| --- | --- |
| Methane | CH4 |
| Ammonia | NH3 |
| Ethane | C2H6 |
| Methyl Alcohol | CH4O |
| Ethylene | C2H4 |
| Formaldehyde | CH2O |
| Acetylene | C2H2 |
| Hydrogen Cyanide | CHN |

The sharing of two or more electron pairs is illustrated by ethylene and formaldehyde (each has a double bond), and acetylene and hydrogen cyanide (each with a triple bond). Boron compounds such as BH3 and BF3 are exceptional in that conventional covalent bonding does not expand the valence shell occupancy of boron to an octet. Consequently, these compounds have an affinity for electrons, and they exhibit exceptional reactivity when compared with the compounds shown above.

#### Valence

The number of valence shell electrons an atom must gain or lose to achieve a valence octet is called valence. In covalent compounds the number of bonds which are characteristically formed by a given atom is equal to that atom's valence. From the formulas written above, we arrive at the following general valence assignments:

| Atom | H | C | N | O | F | Cl | Br | I |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Valence | 1 | 4 | 3 | 2 | 1 | 1 | 1 | 1 |

The valences noted here represent the most common form these elements assume in organic compounds.
Many elements, such as chlorine, bromine and iodine, are known to exist in several valence states in different inorganic compounds.

## Charge Distribution

If the electron pairs in covalent bonds were donated and shared absolutely evenly there would be no fixed local charges within a molecule. Although this is true for diatomic elements such as H2, N2 and O2, most covalent compounds show some degree of local charge separation, resulting in bond and/or molecular dipoles. A dipole exists when the centers of positive and negative charge distribution do not coincide.

#### Formal Charges

A large local charge separation usually results when a shared electron pair is donated unilaterally. The three Kekulé formulas shown here illustrate this condition. In the formula for ozone the central oxygen atom has three bonds and a full positive charge while the right hand oxygen has a single bond and is negatively charged. The overall charge of the ozone molecule is therefore zero. Similarly, nitromethane has a positive-charged nitrogen and a negative-charged oxygen, the total molecular charge again being zero. Finally, azide anion has two negative-charged nitrogens and one positive-charged nitrogen, the total charge being minus one. In general, for covalently bonded atoms having valence shell electron octets, if the number of covalent bonds to an atom is greater than its normal valence it will carry a positive charge. If the number of covalent bonds to an atom is less than its normal valence it will carry a negative charge. The formal charge on an atom may also be calculated by the following formula: formal charge = (number of valence shell electrons in the free atom) − (number of unshared electrons) − (number of bonds to the atom).

#### Polar Covalent Bonds

Because of their differing nuclear charges, and as a result of shielding by inner electron shells, the different atoms of the periodic table have different affinities for nearby electrons. The ability of an element to attract or hold onto electrons is called electronegativity.
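The formal-charge bookkeeping can be turned into a one-line function. A minimal sketch (the `formal_charge` helper is illustrative), using the standard count of valence electrons minus unshared electrons minus one electron per covalent bond; the ozone numbers below match the example in the text:

```python
def formal_charge(valence_electrons, unshared_electrons, bonds):
    """Formal charge = valence electrons of the free atom
    - unshared (lone-pair) electrons - one electron per covalent bond."""
    return valence_electrons - unshared_electrons - bonds

# Ozone, matching the example in the text:
print(formal_charge(6, 2, 3))  # central O: three bonds, one lone pair -> 1
print(formal_charge(6, 6, 1))  # single-bonded terminal O: three lone pairs -> -1
print(formal_charge(6, 4, 2))  # double-bonded terminal O: two lone pairs -> 0
```

Summing the three values gives zero, the overall charge of the ozone molecule, as stated above.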
A rough quantitative scale of electronegativity values was established by Linus Pauling, and some of these are given in the table below. A larger number on this scale signifies a greater affinity for electrons. Fluorine has the greatest electronegativity of all the elements, and the heavier alkali metals such as potassium, rubidium and cesium have the lowest electronegativities. It should be noted that carbon is about in the middle of the electronegativity range, and is slightly more electronegative than hydrogen.

Electronegativity Values for Some Elements

| H 2.20 |  |  |  |  |  |  |
| --- | --- | --- | --- | --- | --- | --- |
| Li 0.98 | Be 1.57 | B 2.04 | C 2.55 | N 3.04 | O 3.44 | F 3.98 |
| Na 0.90 | Mg 1.31 | Al 1.61 | Si 1.90 | P 2.19 | S 2.58 | Cl 3.16 |
| K 0.82 | Ca 1.00 | Ga 1.81 | Ge 2.01 | As 2.18 | Se 2.55 | Br 2.96 |

When two different atoms are bonded covalently, the shared electrons are attracted to the more electronegative atom of the bond, resulting in a shift of electron density toward the more electronegative atom. Such a covalent bond is polar, and will have a dipole (one end is positive and the other end negative). The degree of polarity and the magnitude of the bond dipole will be proportional to the difference in electronegativity of the bonded atoms. Thus an O–H bond is more polar than a C–H bond, with the hydrogen atom of the former being more positive than the hydrogen bonded to carbon. Likewise, C–Cl and C–Li bonds are both polar, but the carbon end is positive in the former and negative in the latter. The dipolar nature of these bonds is often indicated by a partial charge notation (δ+/δ–) or by an arrow pointing to the negative end of the bond. Although there is a small electronegativity difference between carbon and hydrogen, the C–H bond is regarded as weakly polar at best, and hydrocarbons in general are considered to be non-polar compounds. The shift of electron density in a covalent bond toward the more electronegative atom or group can be observed in several ways. For bonds to hydrogen, acidity is one criterion.
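Pauling's values lend themselves to a small comparison utility. A sketch (the `bond_polarity` helper is illustrative, not a standard API), using electronegativities from the table in the text:

```python
# Pauling electronegativities, taken from the table in the text.
PAULING = {"H": 2.20, "Li": 0.98, "C": 2.55, "N": 3.04,
           "O": 3.44, "F": 3.98, "Cl": 3.16}

def bond_polarity(a, b):
    """Return (negative end of the a-b bond, electronegativity difference)."""
    diff = PAULING[a] - PAULING[b]
    negative_end = a if diff > 0 else b   # more electronegative atom is delta-
    return negative_end, round(abs(diff), 2)

print(bond_polarity("O", "H"))   # ('O', 1.24): O-H is more polar...
print(bond_polarity("C", "H"))   # ('C', 0.35): ...than C-H
print(bond_polarity("C", "Li"))  # ('C', 1.57): carbon is the negative end
```

The three calls reproduce the O–H vs C–H and C–Li comparisons made in the paragraph above.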
If the bonding electron pair moves away from the hydrogen nucleus the proton will be more easily transferred to a base (it will be more acidic). A comparison of the acidities of methane, water and hydrofluoric acid is instructive. Methane is essentially non-acidic, since the C–H bond is nearly non-polar. As noted above, the O–H bond of water is polar, and it is at least 25 powers of ten more acidic than methane. H–F is over 12 powers of ten more acidic than water as a consequence of the greater electronegativity difference in its atoms. Electronegativity differences may be transmitted through connecting covalent bonds by an inductive effect. Replacing one of the hydrogens of water by a more electronegative atom increases the acidity of the remaining O–H bond. Thus hydrogen peroxide, HO–O–H, is ten thousand times more acidic than water, and hypochlorous acid, Cl–O–H is one hundred million times more acidic. This inductive transfer of polarity tapers off as the number of transmitting bonds increases, and the presence of more than one highly electronegative atom has a cumulative effect. For example, trifluoroethanol, CF3CH2–O–H is about ten thousand times more acidic than ethanol, CH3CH2–OH. Excellent physical evidence for the inductive effect is found in the influence of electronegative atoms on the NMR chemical shifts of nearby hydrogen atoms.

## Functional Groups

Functional groups are atoms or small groups of atoms (two to four) that exhibit a characteristic reactivity when treated with certain reagents. A particular functional group will almost always display its characteristic chemical behavior when it is present in a compound. Because of their importance in understanding organic chemistry, functional groups have characteristic names that often carry over in the naming of individual compounds incorporating specific groups.
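The "powers of ten" comparisons above can be checked against pKa values. A sketch using approximate literature pKa figures that are not given in the text (each pKa unit corresponds to one power of ten in acid strength):

```python
# Approximate pKa values (literature figures, not from the text).
PKA = {"CH4": 50, "H2O": 15.7, "HF": 3.2}

def powers_of_ten_more_acidic(stronger, weaker):
    """How many powers of ten more acidic `stronger` is than `weaker`."""
    return PKA[weaker] - PKA[stronger]

print(powers_of_ten_more_acidic("H2O", "CH4"))  # about 34 ("at least 25")
print(powers_of_ten_more_acidic("HF", "H2O"))   # about 12.5 ("over 12")
```

Both results are consistent with the hedged statements in the paragraph above.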
In the following table the atoms of each functional group are colored red and the characteristic IUPAC nomenclature suffix that denotes some (but not all) functional groups is also colored.

## Functional Group Tables

| Class Name | Specific Example | IUPAC Name | Common Name |
| --- | --- | --- | --- |
| Alkene | H2C=CH2 | Ethene | Ethylene |
| Alkyne | HC≡CH | Ethyne | Acetylene |
| Arene | C6H6 | Benzene | Benzene |

| Class Name | Specific Example | IUPAC Name | Common Name |
| --- | --- | --- | --- |
| Halide | H3C-I | Iodomethane | Methyl iodide |
| Alcohol | CH3CH2OH | Ethanol | Ethyl alcohol |
| Ether | CH3CH2OCH2CH3 | Diethyl ether | Ether |
| Amine | H3C-NH2 | Aminomethane | Methylamine |
| Nitro Compound | H3C-NO2 | Nitromethane |  |
| Thiol | H3C-SH | Methanethiol | Methyl mercaptan |
| Sulfide | H3C-S-CH3 | Dimethyl sulfide |  |

### Functional Groups with Multiple Bonds to Heteroatoms

| Class Name | Specific Example | IUPAC Name | Common Name |
| --- | --- | --- | --- |
| Nitrile | H3C-CN | Ethanenitrile | Acetonitrile |
| Aldehyde | H3CCHO | Ethanal | Acetaldehyde |
| Ketone | H3CCOCH3 | Propanone | Acetone |
| Carboxylic Acid | H3CCO2H | Ethanoic Acid | Acetic acid |
| Ester | H3CCO2CH2CH3 | Ethyl ethanoate | Ethyl acetate |
| Acid Halide | H3CCOCl | Ethanoyl chloride | Acetyl chloride |
| Amide | H3CCON(CH3)2 | N,N-Dimethylethanamide | N,N-Dimethylacetamide |
| Acid Anhydride | (H3CCO)2O | Ethanoic anhydride | Acetic anhydride |

Dipole Moment - A Measure of Degree of Polarity

Molecules having two equal and opposite charges separated by a certain distance are said to possess an electric dipole. In the case of such polar molecules, the centre of negative charge does not coincide with the centre of positive charge. The extent of polarity in such covalent molecules can be described by the term dipole moment. Dipole moment can be defined as the product of the magnitude of the charge and the distance of separation between the charges. It is represented by the Greek letter μ (mu).
Mathematically, dipole moment (μ) = charge (q) × distance of separation (d). It is expressed in units of Debye, written as D (1 Debye = 1 × 10⁻¹⁸ e.s.u. cm). Dipole moment is a vector quantity and is represented by a small arrow with tail at the positive centre and head pointing towards the negative centre. For example, the dipole moment of the HCl molecule is 1.03 D and that of H2O is 1.84 D. The dipole of HCl may be represented as: Dipole Moment and Molecular Structure Diatomic molecules A diatomic molecule has two atoms bonded to each other by a covalent bond. In such a molecule, the dipole moment of the bond gives the dipole moment of the molecule. Thus, a diatomic molecule is polar if the bond formed between the atoms is polar. The greater the electronegativity difference between the atoms, the greater the dipole moment. The dipole moment of hydrogen halides decreases with decreasing electronegativity of the halogen atom. Polyatomic molecules In polyatomic molecules the dipole moment not only depends upon the individual dipole moments of the bonds but also on the spatial arrangement of the various bonds in the molecule. In such molecules the dipole moment of the molecule is the vector sum of the dipole moments of the various bonds. For example, carbon dioxide (CO2) and water (H2O) are both triatomic molecules but the dipole moment of carbon dioxide is zero whereas that of water is 1.84 D. This is because CO2 is a linear molecule in which the two C=O (μ = 2.3 D) bonds are oriented in opposite directions at an angle of 180°. Due to the linear geometry the dipole moment of one C=O bond cancels that of the other. Therefore, the resultant dipole moment of the molecule is zero and it is a non-polar molecule. The water molecule has a bent structure with the two O-H bonds oriented at an angle of 104.5°. The dipole moment of water is 1.84 D, which is the resultant of the dipole moments of the two O-H bonds.
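The vector addition for two equal bond dipoles can be verified numerically. A sketch; the O–H bond moment of roughly 1.5 D is an assumed value chosen to be consistent with the quoted 1.84 D resultant, not a figure from the text:

```python
import math

def resultant_dipole(bond_moment, angle_deg):
    """Vector sum of two equal bond dipoles separated by angle_deg:
    2 * mu_bond * cos(angle / 2)."""
    return 2 * bond_moment * math.cos(math.radians(angle_deg) / 2)

# Water: two O-H moments (~1.5 D, assumed) at 104.5 degrees
print(round(resultant_dipole(1.5, 104.5), 2))  # 1.84 D, as quoted
# CO2: two C=O moments (2.3 D) at 180 degrees cancel
print(round(resultant_dipole(2.3, 180.0), 2))  # 0.0
```

The same formula shows why the CO2 result is exactly zero: cos(90°) = 0, however large the individual bond moments are.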
Similarly, among tetra-atomic molecules such as BF3 and NH3, the dipole moment of the BF3 molecule is zero while that of NH3 is 1.49 D. This suggests that BF3 has a symmetrical structure in which the three B-F bonds are oriented at an angle of 120° to one another. The three bonds lie in one plane, so the dipole moments of these bonds cancel one another, giving a net dipole moment equal to zero. NH3 has a pyramidal structure; the individual dipole moments of the three N-H bonds give the resultant dipole moment of 1.49 D. Thus, the presence of polar bonds in a polyatomic molecule does not mean that the molecule is polar.

Importance of dipole moment

Dipole moment plays a very important role in understanding the nature of chemical bonds. The measurement of dipole moment helps in distinguishing between polar and non-polar molecules. Non-polar molecules have zero dipole moment while polar molecules have a non-zero dipole moment. For example:

Non-polar molecules: O2, Cl2, BF3, CH4

Polar molecules: HF (1.91 D), HCl (1.03 D), H2S (0.90 D)

Dipole moment measurement gives an idea of the degree of polarity in a diatomic molecule: the greater the dipole moment, the greater the polarity of the molecule. Dipole moment is also used to find the shapes of molecules, because the dipole moment depends not only upon the individual dipole moments of the bonds but also on the arrangement of the bonds. It is possible to predict the nature of the chemical bond formed from the electronegativities of the atoms involved in a molecule. The bond will be highly polar if the electronegativity difference between the two atoms is large. When an electron is completely transferred from one atom to another, an ionic bond is formed (an ionic bond is an extreme case of a polar covalent bond). The greater the difference in electronegativities of the bonded atoms, the higher the ionic character.
When the electronegativity difference between two atoms is 1.7, the bond is 50% ionic and 50% covalent. If the electronegativity difference is more than 1.7, the chemical bond is largely ionic (more than 50% ionic character), and if the difference is less than 1.7, the bond formed is mainly covalent. The percentage of ionic character can be calculated from the ratio of the observed dipole moment to the dipole moment for complete electron transfer (100% ionic character). In the HCl molecule, the observed dipole moment is 1.03 D and its bond length is 1.275 Å. Assuming 100% ionic character, the charge developed on the H and Cl atoms would be 4.8 × 10^-10 e.s.u. Therefore, the dipole moment for 100% ionic character would be q × d = 4.8 × 10^-10 e.s.u. × 1.275 × 10^-8 cm = 6.12 × 10^-18 e.s.u. cm = 6.12 D (1 D = 10^-18 e.s.u. cm).

Problem 12. Calculate the ionic character of HCl. Its measured dipole moment is 3.436 × 10^-30 coulomb meter. The HCl bond length is 1.29 × 10^-10 meter.

Solution

Dipole moment corresponding to 100% ionic character of HCl = 1.602 × 10^-19 C × 1.29 × 10^-10 m = 20.67 × 10^-30 C m

Actual dipole moment of HCl = 3.436 × 10^-30 C m

Percentage ionic character = (3.436 × 10^-30 / 20.67 × 10^-30) × 100 ≈ 16.6%

Problem 13. The C-Cl bond is polar but the CCl4 molecule is non-polar. Explain.

Solution

The C-Cl bond is polar because the chlorine atom, being more electronegative, pulls the shared electron pair towards itself. In CCl4 there are four C-Cl bonds. Since these polar bonds are symmetrically arranged, the polarities of the individual bonds cancel each other, resulting in a zero dipole moment for the molecule. The net result is that the CCl4 molecule is non-polar.

Two types of covalent bonds are formed depending upon the electronegativity of the combining elements.

Non-polar covalent bond

When a covalent bond is formed between two atoms of the same element, the shared electron pair lies exactly midway between the two atoms, i.e. the electrons are equally shared by the atoms.
The resulting molecule will be electrically symmetrical, i.e., the centre of the negative charge coincides with the centre of the positive charge. This type of covalent bond is described as a non-polar covalent bond. The bonds in the molecules H2, O2, Cl2 etc. are non-polar covalent bonds.

Polar covalent bond

A bond between two unlike atoms, which differ in their affinities for electrons, is said to be a polar covalent bond. When a covalent bond is formed between two atoms of different elements, the bonding pair of electrons lies more towards the atom which has the greater affinity for electrons. Since this electron pair does not lie exactly midway between the two atoms, the atom with the higher affinity for electrons develops a slight negative charge and the atom with the lesser affinity a slight positive charge. Such molecules are called polar molecules. In the hydrogen chloride (HCl) molecule, the shared pair of electrons lies more towards the Cl atom (because Cl is more electronegative than H). Therefore, the Cl atom acquires a slight negative charge and the H atom a slight positive charge. This gives the covalent bond between H and Cl an appreciable ionic character. Compounds having polar bonds are termed polar compounds. Polar substances in their pure forms do not conduct electricity, but give conducting solutions when dissolved in polar solvents.

Cause of polarity in bonds

A pure covalent bond is formed when the shared pair of electrons is shared equally by the two atoms. Conversely, when the two combining atoms share the electron pair unequally, the bond formed is a polar covalent bond. Unequal sharing arises from the unequal electron-attracting tendencies of the two atoms. The electron-attracting tendency of an atom in a molecule is described in terms of electronegativity. Polarity in a bond arises due to the difference in the electronegativities of the combining atoms.
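The dependence of bond character on electronegativity difference can be sketched numerically. The Pauling electronegativities below are standard tabulated values, and the 1.7 cutoff is the rough rule quoted earlier in the text, not a sharp physical boundary.

```python
# Pauling electronegativities (standard tabulated values)
EN = {"H": 2.20, "F": 3.98, "Cl": 3.16, "Br": 2.96, "I": 2.66, "Li": 0.98}

def bond_character(a, b):
    """Classify an A-B bond by electronegativity difference (rough 1.7 rule)."""
    diff = abs(EN[a] - EN[b])
    if diff == 0:
        kind = "non-polar covalent"
    elif diff < 1.7:
        kind = "polar covalent"
    else:
        kind = "largely ionic"
    return diff, kind

# The H-X series: polarity falls from H-F to H-I.
# Note: the crude 1.7 rule labels H-F "largely ionic" even though HF is a
# molecular (polar covalent) substance -- a reminder that the cutoff is rough.
for x in ["F", "Cl", "Br", "I"]:
    diff, kind = bond_character("H", x)
    print(f"H-{x}: dEN = {diff:.2f} ({kind})")
```

Ranking the differences reproduces the polarity order H-F > H-Cl > H-Br > H-I discussed in the text.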
Thus, the atom of the element with the higher electronegativity has the greater electron-attracting tendency. For example, two atoms of hydrogen combine to form a molecule of hydrogen in which H-H is a pure covalent bond. But when atoms of different elements (with different electronegativities) combine, the atom of the more electronegative element attracts the shared pair of electrons more towards itself. This leads to the formation of a polar bond. Hydrogen and chlorine react to give hydrogen chloride (HCl). HCl is a polar molecule because of the difference in the electronegativity values of hydrogen and chlorine. The greater the difference in the electronegativity values of the combining atoms, the greater the polar character of the bond so formed. For example, in the series H-X (X = F, Cl, Br, I), the electronegativity difference between the H and X atoms follows the order H-F > H-Cl > H-Br > H-I. Therefore, the polarity of the H-X bond follows the same order, H-F > H-Cl > H-Br > H-I, i.e., the H-F bond is the most polar and the H-I bond is the least polar in this series of compounds.

Hybridisation

Hybridisation is the intermixing of atomic orbitals of slightly different energies to give a new set of orbitals of equivalent energy. The new orbitals are called hybrid or hybridised orbitals. The number of hybridised orbitals formed is equal to the number of atomic orbitals taking part in hybridisation. This phenomenon is most prominent in carbon-containing compounds, so to understand the concept, a study of the electronic structure of carbon is essential.

Tetravalency of Carbon

Carbon forms a fascinating variety of compounds. This is because of its unusual property of catenation, in which one carbon atom bonds to another to form long chains and rings. It is this property which is responsible for the existence of millions of compounds of carbon. This feature can be explained on the basis of the tetravalency of carbon.
The electronic configuration of carbon is 1s² 2s² 2px¹ 2py¹ 2pz⁰. In the box notation this is represented as:

As there are two half-filled orbitals in the valence shell of carbon, its bonding capacity should be two. However, in actual practice carbon exhibits a bonding capacity of four and forms molecules of the type CH4, CCl4, etc. To explain this tetravalency, it is proposed that one of the electrons from the filled '2s' orbital is promoted to the empty '2p' orbital (2pz), which is in a higher energy state. In this way four half-filled orbitals are formed in the valence shell, which can account for carbon's capacity to form four bonds. This state is known as the excited state, in which the electronic configuration of carbon is:

From the above configuration it is clear that the four bonds of carbon would not all be identical. One bond would be formed by overlap of the '2s' orbital and would have more 's' character, while the other three bonds would be formed by overlap of the '2p' orbitals and would have more 'p' character. Therefore the four bonds would not be equivalent. In practice, however, most carbon compounds have all four bonds equal. This behaviour can be explained in terms of hybridisation.

Characteristics of hybridisation

- The hybridised orbitals are always equivalent in energy and shape.
- The number of hybridised orbitals formed is equal to the number of orbitals that undergo hybridisation.
- Hybridised orbitals form more stable bonds.
- Hybridised orbitals orient themselves in preferred directions in space and so give a fixed geometry or shape to the molecule.

Conditions for hybridisation

- Only the valence shell orbitals of the atom are hybridised.
- Orbitals undergoing hybridisation should have only a small difference in their energies.
- It is not necessary that only half-filled orbitals participate in hybridisation. Even filled orbitals of the valence shell can take part in hybridisation.
Rearrangement by way of promotion to different orbitals is not an essential condition for hybridisation.

## Electron Dot Structures

Electron dot structures, also called Lewis structures, give a representation of the valence electrons surrounding an atom. Each valence electron is represented by one dot; thus, a lone atom of hydrogen would be drawn as an H with one dot, a lone atom of helium as an He with two dots, and so forth. Two atoms joined by a covalent bond are represented by drawing the atomic symbols near each other and drawing a single line to represent the shared pair of electrons. It is important to note: a single valence electron is represented by a dot, whereas a pair of electrons is represented by a line. The covalent compound hydrogen fluoride, for example, would be represented by the symbol H joined to the symbol F by a single line, with three pairs (six more dots) surrounding the symbol F. The line represents the two electrons shared by both hydrogen and fluorine, whereas the six paired dots represent fluorine's remaining six valence electrons. Dot structures are useful for illustrating simple covalent molecules, but their limitations become obvious when diagramming even relatively simple organic molecules. Dot structures cannot represent the actual physical orientation of molecules, and they become overly cumbersome when more than three or four atoms are represented. Lewis dot structures are useful for introducing the ideas of covalence and bonding in small molecules, but other model types have much more capability to communicate chemistry concepts.

### Drawing electron dot structures

Here are some examples of electron dot structures for a few commonly encountered molecules from inorganic chemistry.

#### A note about Gilbert N. Lewis

Lewis was born in Weymouth, Massachusetts, the son of a Dartmouth-graduated lawyer/broker. He attended the University of Nebraska at age 14, then three years later transferred to Harvard.
After showing an initial interest in economics, Gilbert Newton Lewis earned first a B.A. in Chemistry, and then a Ph.D. in Chemistry in 1899. For a few years after obtaining his doctorate, Lewis worked and studied both in the United States and abroad (including Germany and the Philippines), and he was a professor at M.I.T. from 1907 until 1911. He then went to U.C. Berkeley as Dean of the College of Chemistry in 1912. In 1916 Dr. Lewis formulated the idea that a covalent bond consists of a shared pair of electrons. His ideas on chemical bonding were expanded upon by Irving Langmuir and became the inspiration for the studies on the nature of the chemical bond by Linus Pauling. In 1923, he formulated the electron-pair theory of acid-base reactions. In the so-called Lewis theory of acids and bases, a "Lewis acid" is an electron-pair acceptor and a "Lewis base" is an electron-pair donor. In 1926, he coined the term "photon" for the smallest unit of radiant energy. Lewis was also the first to produce a pure sample of deuterium oxide (heavy water), in 1933. By accelerating deuterons (deuterium nuclei) in Ernest O. Lawrence's cyclotron, he was able to study many of the properties of atomic nuclei. During his career he published on many other subjects, and he died at age 70 of a heart attack while working in his laboratory in Berkeley. He had one daughter and two sons; both of his sons became chemistry professors themselves.

### Formal Charge

The formal charge of an atom is the charge that it would have if every bond were 100% covalent (non-polar). Formal charges are computed by using a set of rules and are useful for accounting for the electrons when writing a reaction mechanism, but they don't have any intrinsic physical meaning. They may also be used for qualitative comparisons between different resonance structures (see below) of the same molecule, and often have the same sign as the partial charge of the atom, but there are exceptions.
The formal charge of an atom is computed as the difference between the number of valence electrons that a neutral atom would have and the number of electrons that "belong" to it in the Lewis structure, where lone pair electrons are counted as belonging fully to the atom and electrons in covalent bonds are split equally between the atoms involved in the bond:

FC = V - (L + B/2)

where V is the number of valence electrons of the neutral atom, L is the number of lone pair (non-bonding) electrons on the atom, and B is the total number of electrons the atom shares in covalent bonds. The total of the formal charges on an ion should equal the charge on the ion, and the total of the formal charges on a neutral molecule should equal zero.

For example, in the hydronium ion, H3O+, the oxygen atom has 5 electrons for the purpose of computing the formal charge: 2 from one lone pair, and 3 from the covalent bonds with the hydrogen atoms. The other 3 electrons in the covalent bonds are counted as belonging to the hydrogen atoms (one each). A neutral oxygen atom has 6 valence electrons (due to its position in group 16 of the periodic table); therefore the formal charge on the oxygen atom is 6 - 5 = +1. A neutral hydrogen atom has one electron. Since each of the hydrogen atoms in the hydronium ion has one electron from a covalent bond, the formal charge on the hydrogen atoms is zero. The sum of the formal charges is +1, which matches the total charge of the ion.

When determining the correct Lewis structure (or predominant resonance structure) for a molecule, the structure is chosen such that the formal charge on each of the atoms is minimized.
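The counting rule above is mechanical enough to write down directly. This is a direct transcription of FC = V - (L + B/2); all the electron counts used below come from the worked examples in the text.

```python
# FC = V - (L + B/2): valence electrons of the neutral atom, minus lone pair
# electrons, minus half the electrons in bonds to the atom.
def formal_charge(valence, lone_electrons, bonding_electrons):
    return valence - (lone_electrons + bonding_electrons // 2)

# Oxygen in hydronium (H3O+): 6 valence, one lone pair, three single bonds
print(formal_charge(6, 2, 6))   # -> 1

# Each hydrogen in H3O+: 1 valence electron, no lone pairs, one bond
print(formal_charge(1, 0, 2))   # -> 0

# Nitrogen in NO2- (one lone pair, one N=O and one N-O bond)
print(formal_charge(5, 2, 6))   # -> 0

# Single-bonded oxygen in NO2-: three lone pairs, one bond
print(formal_charge(6, 6, 2))   # -> -1
```

Summing the charges recovers the overall charge: +1 for H3O+ (1 + 3×0) and -1 for NO2- (0 + 0 + (-1)).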
### Examples

- carbon in methane
- nitrogen in $NO_2^{-}$
- the double-bonded oxygen in $NO_2^{-}$
- the single-bonded oxygen in $NO_2^{-}$

Methane (CH4): black is carbon, white is hydrogen. Nitrogen dioxide (NO2): blue is nitrogen, red is oxygen.

# Organic Chemistry/Spectroscopy

There are several spectroscopic techniques which can be used to identify organic molecules: infrared (IR), mass spectrometry (MS), UV/visible spectroscopy (UV/Vis) and nuclear magnetic resonance (NMR). IR, NMR and UV/Vis spectroscopy are based on observing the frequencies of electromagnetic radiation absorbed and emitted by molecules. MS is based on measuring the mass of the molecule and of any fragments of the molecule which may be produced in the MS instrument.

## UV/Visible Spectroscopy

UV/Vis spectroscopy uses ultraviolet and/or visible light to examine the electronic properties of molecules. Irradiating a molecule with UV or visible light of a specific wavelength can cause the electrons in the molecule to transition to an excited state. This technique is most useful for analyzing molecules with conjugated systems or carbonyl bonds.

## NMR Spectroscopy

Nuclear magnetic resonance (NMR) spectroscopy is one of the most useful analytical techniques for determining the structure of an organic compound. There are two main types of NMR: 1H-NMR (proton NMR) and 13C-NMR (carbon NMR). NMR is based on the fact that the nuclei of atoms have a quantized property called spin. When a magnetic field is applied to a 1H or 13C nucleus, the nucleus can align either with (spin +1/2) or against (spin -1/2) the applied magnetic field. These two states have different potential energies, and the energy difference depends on the strength of the magnetic field. The strength of the magnetic field about a nucleus, however, depends on the chemical environment around the nucleus.
For example, the negatively charged electrons around and near the nucleus can shield the nucleus from the magnetic field, lowering the strength of the effective magnetic field felt by the nucleus. This, in turn, lowers the energy needed to transition between the +1/2 and -1/2 states. Therefore, the transition energy is lower for nuclei attached to electron donating groups (such as alkyl groups) and higher for nuclei attached to electron withdrawing groups (such as a hydroxyl group). In an NMR machine, the compound being analyzed is placed in a strong magnetic field and irradiated with radio waves, promoting 1H and 13C nuclei to the higher-energy -1/2 state. As the nuclei relax back to the +1/2 state, they release radio waves corresponding to the energy difference between the two spin states. The radio waves are recorded and analyzed by computer to give an intensity versus frequency plot of the sample. This information can then be used to determine the structure of the compound.

### Aromatics in H-NMR

Electron Donating Groups vs. Electron Withdrawing Groups

Electron donating groups increase the electron density around nearby protons, shielding them, so their signals appear at lower chemical shifts (upfield). An example of an electron donating group is methyl (-CH3); in reactions, such groups also stabilize carbocations by releasing electron density toward a reaction center. Electron withdrawing groups pull electron density away, deshielding nearby protons and moving their signals to higher chemical shifts (downfield); in reactions they can stabilize an electron-rich carbanion. Some examples of electron withdrawing groups are halogens (-Cl, -F) and carboxylic acid (-COOH). Looking at the 1H NMR spectrum of ethylbenzene, the shielded methyl protons appear at the lowest chemical shift, while the strongly deshielded aromatic ring protons appear at the highest chemical shift.
The sum of the integrated intensity values for the entire aromatic region shows how many ring protons remain, so a total value of 4 indicates that the ring has 2 substituents. When a benzene ring has two substituent groups, each exerts an influence on subsequent substitution reactions. The site at which a new substituent is introduced depends on the orientation of the existing groups and their individual directing effects. For a disubstituted benzene ring, there are three possible NMR patterns. Note that para-substituted rings usually show two symmetric sets of peaks that look like doublets. The order of these peaks depends on the nature of the two substituents. For example, the three NMR spectra of the chloronitrobenzene isomers are below:

## Mass Spectrometry

A mass spectrometer measures the exact mass of ions, relative to their charge. Often some form of separation is done beforehand, enabling a spectrum to be collected on a relatively pure sample. An organic sample can be introduced into a mass spectrometer and ionised. This also breaks some molecules into smaller fragments. The resulting mass spectrum shows:

1) The heaviest ion is simply the ionised molecule itself. We can simply record its mass.

2) Other ions are fragments of the molecule and give information about its structure. Common fragments are:

| Species | Formula | Mass |
|---------|---------|------|
| methyl | CH3+ | 15 |
| ethyl | C2H5+ | 29 |
| phenyl | C6H5+ | 77 |

## Infrared Spectroscopy

Absorbing infrared radiation makes covalent bonds vibrate. Different types of bond absorb different wavelengths of infrared. Instead of wavelength, infrared spectroscopists record the wavenumber: the number of waves that fit into 1 cm. (This is easily converted to the energy of the wave.) By convention the spectra are recorded backwards (from 4000 to 500 cm^-1 is typical), often with a different scale below 1000 cm^-1 (to see the fingerprint region more clearly) and upside-down (% radiation transmitted is recorded instead of the absorbance of radiation).
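The conversion from wavenumber to energy mentioned above uses the standard relations E = h·c·ν̃ and λ = 1/ν̃. A minimal sketch, using the usual physical constants (c expressed in cm/s so that it matches wavenumbers in cm^-1):

```python
# Convert an IR wavenumber (cm^-1) to photon energy and wavelength.
H_PLANCK = 6.626e-34   # Planck constant, J s
C_LIGHT = 2.998e10     # speed of light in cm/s, to match cm^-1 wavenumbers

def wavenumber_to_energy_j(nu_tilde):
    """Photon energy (J) for a wavenumber in cm^-1: E = h * c * nu_tilde."""
    return H_PLANCK * C_LIGHT * nu_tilde

def wavenumber_to_wavelength_um(nu_tilde):
    """Wavelength in micrometres for a wavenumber in cm^-1."""
    return 1.0e4 / nu_tilde

# A carbonyl stretch near 1700 cm^-1:
print(wavenumber_to_wavelength_um(1700))   # roughly 5.9 micrometres
print(wavenumber_to_energy_j(1700))        # roughly 3.4e-20 J per photon
```

This makes the "backwards" axis less mysterious: higher wavenumber simply means higher vibrational energy.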
The wavenumbers of the absorbed IR radiation are characteristic of many bonds, so IR spectroscopy can determine which functional groups are contained in the sample. For example, the carbonyl (C=O) bond will absorb at 1650-1760 cm^-1.

## Summary of absorptions of bonds in organic molecules

Infrared Spectroscopy Correlation Table

| Bond | Minimum wavenumber (cm^-1) | Maximum wavenumber (cm^-1) | Functional group (and other notes) |
|------|------|------|------|
| C-O | 1000 | 1300 | Alcohols and esters |
| N-H | 1580 | 1650 | Amine or amide |
| C=C | 1610 | 1680 | Alkenes |
| C=O | 1650 | 1760 | Aldehydes, ketones, acids, esters, amides |
| O-H | 2500 | 3300 | Carboxylic acids (very broad band) |
| C-H | 2850 | 3000 | Alkane |
| C-H | 3050 | 3150 | Alkene (compare intensity to alkane for a rough idea of the relative number of H atoms involved) |
| O-H | 3230 | 3550 | H-bonded in alcohols |
| N-H | 3300 | 3500 | Amine or amide |
| O-H | 3580 | 3670 | Free -OH in alcohols (only in samples diluted with non-polar solvent) |

Absorptions listed in cm^-1.

## Typical method

Typical apparatus

A beam of infrared light is produced and split into two separate beams. One is passed through the sample, the other through a reference, which is often the solvent the sample is dissolved in. The beams are both reflected back towards a detector, but first they pass through a splitter which quickly alternates which of the two beams enters the detector. The two signals are then compared and a printout is obtained. A reference is used for two reasons:

• This prevents fluctuations in the output of the source from affecting the data
• This allows the effects of the solvent to be cancelled out (the reference is usually a pure form of the solvent the sample is in)

## Ionic and Covalent Bonds

Ionic bonds are formed when two ions are held together by electrostatic attraction. Shown on the left above is an ionic bond between lithium and fluorine in Li-F. These atoms are plotted with electrostatic potential surfaces. That is fancy language for saying that the overall charge is plotted in color.
The plot on the left shows the surface as solid, while the plot on the right is the same except the surface is transparent so you can see the nuclei inside. The red color indicates the negative charge of the fluoride anion, the blue charge indicates the positive charge of the lithium cation. The atoms are held together because opposite charges attract each other. This is an ionic bond. Contrast this to the situation with the fluorine molecule (F-F) shown on the right in which both atoms have the same charge, so there is only green color, no red or blue. The type of bond found in the fluorine molecule is called a covalent bond and comes about because the atoms share electron density in order to each obtain a noble gas configuration. The figure above uses three types of electron density models to compare the bonding and polarity of simple hydrides: LiH, H2, and HF. The mesh surfaces identify points where the electron density is relatively low (0.002 a.u.). These points more or less define the "edge" of the electron "cloud" in each molecule. Notice how the size of the electron cloud near H shrinks as its bonding partner changes from Li -> H -> F. Recalling the analysis of Li and Li+ given in Figure 1, you can conclude that the amount of electron density belonging to H is greatest in LiH and least in HF. In other words, Li and F do not share bonding electrons equally with H (equal sharing must occur in H2). Li donates electron density to H, but F "steals" electron density from H. Chemists describe the ability of an atom to "steal" bonding electrons from its partner as its electronegativity. These figures demonstrate that electronegativity increases in the order Li < H < F. The colored maps show how each molecule's electrostatic potential varies on the 0.002 isodensity surfaces.
The variation in potential is shown by color - RED (lowest) -> ORANGE -> YELLOW -> GREEN -> BLUE (highest) - and indicates whether a particular region is electron-rich (RED) or electron-poor (BLUE). LiH and HF are polar molecules in that the two ends of these molecules are electron-rich and electron-poor respectively. The change in potential around H is also consistent with the previous analysis based on surface size; H is electron-rich in LiH (RED), neutral in H2 (GREEN), and electron-poor in HF (BLUE). Finally, the solid surfaces (inside the mesh surfaces) identify points where the electron density is relatively high (0.08 a.u.). Atoms that share electrons (covalently bonded) build up electron density in the region between the two nuclei. The models of H2 and HF show that these molecules contain covalent bonds - the electron density between the nuclei is 0.08 a.u. or greater. The model of LiH, on the other hand, suggests that this bond is largely ionic - although there are regions of high electron density around each nucleus, the electron density is less than 0.08 between the two nuclei.

## Lewis Acids and Bases

Shown here is the molecule BF3 represented in three different ways. The ball-and-stick model is drawn on the lower left. Boron is shown in purple and the fluorine atoms are shown in green. On the right, different views of the empty 2p-orbital belonging to boron are shown. With three bonds to fluorine (sp2 hybridization) and no lone pairs, there remains one 2p-orbital that is unhybridized and empty. Thus, boron wants a lone pair of electrons to give it an octet of electrons. Also, fluorine is highly electronegative, withdrawing electron density from the boron atom. This is represented in the electrostatic potential model at the upper left, with fluorine atoms in red (partial negative charge) and boron in blue (partial positive charge). For all these reasons, the molecule BF3 is a good acceptor of electrons and therefore a good Lewis acid.
In this picture, an acid/base reaction is shown. NH3 has a lone pair of electrons, represented by the red area on the electrostatic potential model. The H+ is positively charged, as shown by its blue color. The NH3 molecule donates its lone pair of electrons to the H+, so here NH3 is acting as a Lewis base as well as a Bronsted-Lowry base, and the H+ is acting as a Lewis acid as well as a Bronsted-Lowry acid. Notice that by convention, the arrow points from the electron donor to the electron acceptor. The bottom picture shows H+ bound to the NH3 molecule, forming NH4+. Note that the nitrogen is now sp3 hybridized and the molecule has a formal charge of +1. In this diagram, NH3 again acts as a Lewis base. BF3 acts as a Lewis acid when it accepts the lone pair of electrons that NH3 donates. This reaction fills BF3's empty 2p-orbital, and the boron is now sp3 hybridized where previously (as BF3) it was sp2 hybridized.

## Dipole Moments

For the model on the left, the white atom is hydrogen and the green atom is fluorine. The surface on the right uses color to indicate where the electrons are located in the H-F molecule. Here, the red color represents a PARTIAL negative charge (on the fluorine atom), while the blue color represents a PARTIAL positive charge (on the hydrogen atom). There is a large difference in electronegativity between hydrogen and fluorine, so the majority of the electron density in the hydrogen-fluorine bond ends up on the much more electronegative fluorine. Bonds such as this, in which the electrons are not shared evenly, are referred to as polar covalent bonds. Polar covalent bonds have a bond dipole moment. For the molecular model shown on the left, the green atoms are fluorine, the light blue atom is carbon and the white atom is hydrogen. Difluoromethane (CH2F2) has two polar covalent C-F bonds as shown.
For the surface on the right, which indicates where the electrons are, the red color represents PARTIAL negative charge, while the blue color represents PARTIAL positive charge. As you can see, the entire molecule has a molecular dipole moment resulting from the vector sum of the two C-F bond dipole moments. For the molecular model shown on the left, the light blue atom is carbon and the red atoms are oxygen. Carbon dioxide has two polar covalent C-O bonds. However, the bond dipole moments exactly cancel each other since they point in exactly opposite directions. Thus, CO2 has no molecular dipole moment. Please note that the cancellation of bond dipole moments does not change the existence or placement of electron density. Each individual C-O bond has its normal bond dipole moment even though the molecular dipole moment is zero. By the way, you should be able to identify the carbon atom in CO2 as being sp hybridized. For the molecular model shown on the left, the white atoms are hydrogen and the red atom is oxygen. Water has two very polar covalent bonds between oxygen and hydrogen. Because water is bent (the oxygen atom is sp3 hybridized), the two bond dipole moments add up to give water a relatively large molecular dipole moment. This molecular dipole moment, combined with the individual bond dipole moments, gives water its unique properties that have allowed life to evolve as it has. In particular, later in the semester, we will see how these dipole moments explain water's remarkably high boiling point, as well as its ability to dissolve charged species such as the salt in the ocean. CF4 has four polar covalent bonds. Because they are arranged in a symmetrical tetrahedral array, all of the bond dipole moment vectors exactly cancel, leaving NO MOLECULAR DIPOLE MOMENT for CF4.
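The tetrahedral cancellation in CF4 can be checked by summing four bond-dipole vectors pointing along ideal tetrahedral directions (alternating corners of a cube). The bond dipole magnitude used below (1.4 D) is an assumed illustrative value; the cancellation does not depend on it.

```python
import math

# Four C-F bond dipoles along ideal tetrahedral directions. These four
# cube corners are separated by the tetrahedral angle of ~109.5 degrees.
directions = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
mu_bond = 1.4          # assumed illustrative bond dipole, in debye
norm = math.sqrt(3.0)  # length of each (±1, ±1, ±1) direction vector

total = [0.0, 0.0, 0.0]
for d in directions:
    for i in range(3):
        total[i] += mu_bond * d[i] / norm

magnitude = math.sqrt(sum(c * c for c in total))
print(magnitude)   # -> 0.0, the vector sum vanishes
```

Dropping any one of the four vectors (as in CHF3) leaves a non-zero resultant, which is why replacing one fluorine with hydrogen restores a molecular dipole moment.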
Hopefully, you can appreciate that deducing whether a given molecule has a molecular dipole moment is a favorite question of mine, because it forces you to synthesize everything we have learned thus far (molecular shape based on hybridization state and/or VSEPR, electronegativities) into the answer to a single question. Best of all, the prediction of molecular dipole moments will allow you to predict the properties of different molecules. You will encounter many examples of this as the semester unfolds. ### Introduction The cleavage of a covalent heteronuclear chemical bond generally results in the formation of two oppositely charged ions, with the negative charge developing on the more electronegative atom of the bonded pair and the positive charge on the less electronegative atom. As the electronegativity difference between two bonded atoms becomes smaller and smaller, the tendency of the bond between them to break homolytically becomes larger and larger; each atom retains one of the electrons from the bond. Figure 1 presents several examples of reactions that involve both heterolytic and homolytic bond cleavage. Figure 1 Divvy 'em up Species in which an atom has an unpaired electron are called free radicals. The three free radicals shown in the bottom panel of Figure 1 are called chlorine atoms, hydroxyl radicals, and triphenylmethyl radicals. Chlorine atoms and hydroxyl radicals are extremely reactive materials. The triphenylmethyl radical, on the other hand, is very stable. In fact, no one has ever isolated hexaphenylethane, presumably because it decomposes spontaneously into two triphenylmethyl radicals. According to VSEPR theory carbocations, R3C+, should be trigonal planar, while the carbanions, R3C:-, should be pyramidal. These predictions have been verified experimentally. You might expect that the geometry of a free radical, R3C., would be intermediate between that of a carbocation and a carbanion. 
Experimental measurements indicate, however, that carbon free radicals are essentially planar. Figure 2 compares the structures of these three species.

Figure 2 Three Forms of Trivalent Carbon

The stability of carbocations increases in the order CH3+ < CH3CH2+ < (CH3)2CH+ < (CH3)3C+. In other words, methyl < 1° < 2° < 3°. The structural similarity between carbocations and carbon free radicals illustrated in Figure 2 suggests that these species should display a similar increase in stability as a function of increasing substitution at the central carbon. This expectation is borne out by experimental measurements. Presumably, replacement of a hydrogen atom by an alkyl group creates the possibility for hyperconjugative stabilization of the free radical in the same way it does for the carbocation. Figure 3 demonstrates this partial orbital overlap between a C-H sigma bond of a methyl group and a p orbital on the central carbon atom of a free radical.

Figure 3 Hyperconjugation to the Rescue

When a free radical center is flanked by a pi system, a resonance interaction between the p orbital on the central carbon and the p orbitals of the pi bond(s) is possible. Figure 4 presents two views of this type of interaction for one common structure, the allylic radical. Note in this figure that the unpaired electron is originally on the carbon atom bearing the R groups. Consideration of the picture in the left hand panel reveals that overlap of the p orbital on the middle carbon with either of the flanking p orbitals is, to a first approximation, equally likely. This view is reinforced by the familiar electron pushing scheme depicted in the right hand panel. Note the use of a single-barbed arrow to denote resonance delocalization of a single electron.

Figure 4

#### Polymerization

In our discussion of addition polymers we described the polymerization of alkenes in terms of an acid-catalysed process.
In practice, polymerization of simple alkenes is more often initiated by a free radical than by a proton. Regardless of the initiator, the same basic chain reaction mechanism applies. Figure 5 reviews the process, which most often begins with the homolytic cleavage of the O-O bond of an organic peroxide.

Figure 5

#### Halogenation

At room temperature, in the absence of light, a mixture of methane and dichlorine is stable indefinitely. However, upon exposure to visible light, the components of the mixture react rapidly as shown in Equation 1. The reaction is called a free radical substitution. The composition of the product depends upon the molar ratio of methane to dichlorine; the exclusive formation of chloromethane would require a very large excess of methane. The most interesting feature of reaction 1, however, is that a single photon causes the formation of thousands of molecules of chloromethane. In other words, one photon sets off a chain reaction. This and other observations led to the formulation of the 4-step mechanism shown in Figure 6.

Figure 6 Workin' on the Chain Gang

The process begins with the photochemically induced homolytic cleavage of the Cl-Cl bond to produce two chlorine atoms. This step is called initiation. In the second step, one of the chlorine atoms collides with a molecule of methane; the collision results in the transfer of a hydrogen atom from the carbon to the chlorine. The hydrogen atom transfer produces a molecule of hydrogen chloride and a methyl radical. Since this step creates a new radical, it is called a propagation reaction. A second propagation step follows as the methyl radical collides with a dichlorine molecule, an encounter that yields a molecule of chloromethane and a chlorine atom. Collision of the chlorine atom generated in the second propagation step with another molecule of methane forms a second molecule of HCl and a second methyl radical.
The two propagation steps continue until the concentrations of Cl2 and/or CH4 are reduced to the point where the probability of a collision between two radicals becomes significant. Such a collision constitutes the termination step of the process.

Free radical halogenation of alkanes is a general reaction, and the chlorination of 2,3-dimethylbutane provides an informative example. This simple alkane contains two types of hydrogen atoms, twelve primary and two tertiary. As shown in Equation 2, chlorination of this compound produces a mixture of 2-chloro-2,3-dimethylbutane and 1-chloro-2,3-dimethylbutane. The composition of this mixture is 82.5% 2-chloro-2,3-dimethylbutane and 17.5% 1-chloro-2,3-dimethylbutane. This product distribution is different from what would be expected if the only factor governing the ratio of the two isomers were the ratio of primary to tertiary hydrogens in the starting material. On a per-hydrogen basis, the product distribution in reaction 2 suggests that a 3° hydrogen atom is roughly 28 times more reactive than a 1° hydrogen atom ((82.5/2) ÷ (17.5/12) ≈ 28). This increased reactivity has been attributed to the greater stability of the 3° free radical in comparison to the 1° alternative. The more stable free radical is formed more readily, i.e. faster.

Figure 7 presents a reaction coordinate diagram that compares the relative energies of the reactants, intermediates, and products involved in reaction 2. The key feature of the figure is not the relative energies of the products, but rather the relative energies of the intermediates, and, more importantly, the relative activation energies for their formation.

Figure 7 One Reactant, Two Products

Follow the reaction pathway highlighted in red in the figure: When a chlorine atom collides with a molecule of 2,3-dimethylbutane at one of the 3° hydrogen atoms, the C-H bond begins to break as the H-Cl bond begins to form. At the point labeled T3,1 the configuration of the C---H---Cl fragment has its maximum energy.
As the H-Cl bond becomes stronger, the C-H bond weakens. As the Cl atom rebounds from the collision, it takes the H atom with it, leaving the intermediate tertiary free radical, I3, behind. Collision of this intermediate with a molecule of dichlorine produces further bonding changes that lead to the transition state labeled T3,2. From that point the path to the product is energetically downhill. A similar series of events occurs along the pathway highlighted in blue. The key difference is that E3act, the activation energy for formation of the tertiary radical, is less than E1act. While we now interpret experimental outcomes such as those seen for reaction 2 in terms of the relative stabilities of the intermediates formed along the alternative reaction pathways, it is important to remember that our understanding of relative stabilities evolved from the analysis of product distribution data from many reactions similar to reaction 2.

#### Combustion

As shown in Equation 3, the chlorination of methane will produce carbon tetrachloride if there is sufficient dichlorine in the reaction mixture. The carbon atom in this reaction undergoes a change in oxidation level from -4 to +4; it is oxidized. A similar oxidation occurs when methane is treated with dioxygen: in this case the reaction is called combustion. Like chlorination, the combustion of methane, Equation 4, involves free radical reactions. In our introduction to MO theory we saw that dioxygen is a diradical; there is an unpaired electron on each oxygen atom. Dioxygen is a very reactive molecule. While its reactivity is essential for our existence, it can be a bane to that existence, and it has even been implicated in our demise. Dioxygen-promoted free radical reactions are definitely involved in the spoilage of food and have been implicated as contributors to the aging process. Since many free radical reactions are chain reactions, a simple strategy for breaking the chain involves addition of a compound that will form stable, i.e. non-reactive, free radicals.
For example, the free radical produced by the reaction of 2,6-di-t-butyl-4-methylphenol with dioxygen, Equation 5, has very low reactivity. Not only is the free radical center sterically hindered by the two bulky t-butyl groups, it is also stabilized by resonance interaction with the aromatic ring. The abbreviation for 2,6-di-t-butyl-4-methylphenol is BHT, which stands for butylated hydroxytoluene. This compound is added to many packaged foods as a preservative to prevent the food from becoming stale, a process that involves oxygen-promoted free radical reactions. Another common preservative is BHA, which stands for butylated hydroxyanisole, a compound in which the CH3 group of BHT is replaced by an OCH3 group.

Free radical inhibitors are also used to prevent spoilage of food products that contain unsaturated fats and oils. The allylic C-H bonds in these structures are easily attacked by free radicals, which in turn react with dioxygen in a radical chain reaction called autooxidation. Figure 8 summarizes the essential features of the process using the polyunsaturated fatty acid linolenic acid as an example.

Figure 8 Autooxidation of An Unsaturated Fatty Acid

In this scheme In. represents any species that initiates the chain reaction. Note that the free radical formed in step 1 is doubly allylic; therefore it is formed readily. Once formed, it reacts with dioxygen in step 2 to produce a peroxy radical, which abstracts a hydrogen atom from another linolenic acid molecule in step 3. This produces an organic peroxide and a new free radical, thereby propagating the chain reaction. Food additives such as BHT inhibit autooxidation and retard spoilage. Fats present in cells within our bodies are subject to similar autooxidation, which may lead to cellular damage. Vitamin E is a natural antioxidant that has received considerable attention in the popular press recently for its "anti-aging" capabilities.
Consideration of the structure of vitamin E would suggest that, to the extent that free radical reactions contribute to aging, there is some basis for these claims.

Oxidation Levels

### Introduction

Most atoms have one or two stable oxidation states. Carbon has 9!! Many of the reactions that organic molecules undergo involve changes in the oxidation level of one or more carbon atoms within the compound. For example, during the combustion of methane, which produces carbon dioxide, the oxidation level of the carbon atom changes from -4 to +4. The procedure for calculating the oxidation level of an atom is similar to that for determining its formal charge.

#### Rule 3: Calculating Oxidation Levels

To determine the oxidation level of an atom within a molecule, separate the atom from its bonding partner(s), assigning all bonding electrons to the more electronegative of the bonded atoms. Then compare the number of electrons that "belong" to each atom to the atomic number of that atom. Figure 1 uses color coding to illustrate the procedure for methane, CH4.

Figure 1 Assigning Electrons I

Since carbon is more electronegative than hydrogen, both electrons from each C-H bond are assigned to the carbon. Counting its two inner shell electrons, the carbon has 10 electrons assigned to it. Its oxidation level is the sum of its nuclear charge (atomic number) and its electronic charge: 6 + (-10) = -4. The oxidation level of each hydrogen atom is 1 + (0) = +1. Note that the sum of the oxidation levels of all of the atoms in the molecule equals zero. This is always true of neutral molecules and provides a convenient way for you to check your calculations. Figure 2 illustrates the assignment of electrons in carbon dioxide. No color coding is used.

Figure 2 Assigning Electrons II

All of the bonding electrons are assigned to the oxygen atoms. So are the lone pairs. Counting its two inner shell electrons, each oxygen has 10 electrons assigned to it.
The oxidation level of each oxygen is 8 + (-10) = -2. The oxidation level of the carbon is 6 + (-2) = +4. Again, note that the sum of the oxidation levels of all the atoms in CO2 equals zero. If two bonded atoms have the same electronegativity, the electrons they share are divided equally between the two atoms. Figure 3 shows how the electrons are assigned for acetic acid.

Figure 3 Assigning Electrons III

The oxidation level of each hydrogen atom is +1. The oxidation levels of the methyl and carboxyl carbons are -3 and +3, respectively. Right? The oxidation level of each oxygen is -2. To check: 4 x (+1) + (-3) + (+3) + 2 x (-2) = 0!

An equivalent method of calculating oxidation levels ignores the inner shell electrons: the oxidation level of an atom is the difference between its group number and the number of valence electrons assigned to it. Prove to yourself that this method works by using it to calculate the oxidation levels of all the atoms in acetic acid. In the same way that chemists calculate the index of hydrogen deficiency of an empirical formula almost without thinking, they also perform subconscious calculations of the oxidation level of each atom within a structure. So, if you want to think like an organic chemist (which is advisable if you want to get a good grade), you should practice calculating oxidation levels until you can do it in your head.

Oxidation And Reduction Reactions in Organic Chemistry

### Introduction

As we saw in our discussion of oxidation levels, one of the unique characteristics of carbon is that it has nine stable oxidation states. It should not be surprising that organic chemists have developed reagents that allow them to alter these oxidation levels. This topic presents a survey of some of those reagents.

### Oxidizing Reagents

We have already seen several examples of such reagents in our discussion of the oxidation of alcohols. They are repeated here for the sake of completeness.
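Before turning to the individual reagents, note that the oxidation-level bookkeeping from the previous section is easy to check mechanically. This sketch uses the valence-electron shortcut described there (group number minus assigned valence electrons), with the electron assignments taken from the methane and acetic acid examples above.

```python
# Valence-electron shortcut for oxidation levels:
# oxidation level = group number - valence electrons assigned to the atom.
def oxidation_level(group_number, assigned_valence_electrons):
    return group_number - assigned_valence_electrons

# Methane: carbon (group 4) takes both electrons of all four C-H bonds (8 e-).
assert oxidation_level(4, 8) == -4                      # carbon
assert oxidation_level(1, 0) == +1                      # each hydrogen
assert oxidation_level(4, 8) + 4 * oxidation_level(1, 0) == 0  # neutral check

# CO2: carbon keeps nothing; each oxygen gets both bond pairs plus lone pairs.
assert oxidation_level(4, 0) == +4                      # carbon
assert oxidation_level(6, 8) == -2                      # each oxygen

# Acetic acid, CH3COOH: the methyl carbon gets 6 e- from its C-H bonds plus
# half of the C-C pair (7 total); the carboxyl carbon keeps only its half of
# the C-C pair (1), since both oxygens outcompete it.
assert oxidation_level(4, 7) == -3                      # methyl carbon
assert oxidation_level(4, 1) == +3                      # carboxyl carbon

print("all oxidation-level checks pass")
```

The asserts double as the "sum to zero for a neutral molecule" sanity check recommended above.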
#### Chromic Acid This reagent is prepared by mixing sodium or potassium dichromate with sulfuric acid as shown in Equation 1. It is used to oxidize secondary alcohols to ketones: It may also be used to oxidize primary alcohols to carboxylic acids. As Equation 3 indicates, the alcohol is initially oxidized to an aldehyde. Under the reaction conditions, a molecule of water adds to the carbonyl group to form a hydrate which is subsequently oxidized to the carboxylic acid. #### Pyridinium Chlorochromate (PCC) In order to prevent aldehydes from further oxidation, it is necessary to avoid the addition of water to the carbonyl group. PCC was developed as a non-aqueous alternative to chromic acid. Using this reagent, 2-phenylethanol may be oxidized to phenylacetaldehyde without subsequent oxidation to phenylacetic acid: #### Potassium permanganate (KMnO4) and Osmium tetroxide (OsO4) These reagents are used to convert alkenes into the corresponding 1,2-diols (glycols) by a process called syn hydroxylation. Equation 5 illustrates the process for the reaction of 1,2-dimethylcyclohexene with a dilute solution of potassium permanganate. The reaction is thought to involve the formation of an intermediate cyclic permanganate ester which is readily hydrolysed under the reaction conditions to yield the 1,2-diol. A cyclic osmate ester is generated with OsO4. Since aqueous KMnO4 is purple, this reaction is often used as a qualitative test for the presence of an alkene: a dilute solution of permanganate is added to a sample of the unknown compound; if the color is discharged, the test is taken as positive. The formation of a grey-black precipitate of manganese dioxide confirms the analysis. #### Ozone Ozone, O3, is an allotrope of oxygen. It is a highly reactive molecule that is generated by passing a stream of dioxygen over a high voltage electric discharge. (It is possible to smell ozone in the atmosphere after a lightning storm if the lightning has struck nearby.) 
It is not possible to draw a single structure for O3 in which each oxygen atom has a filled valence shell and is, at the same time, uncharged. Rather, resonance theory describes the structure of this compound as a hybrid of the three resonance contributors shown in Figure 1.

Figure 1 Ozone: An Allotrope of Oxygen

In a process called ozonolysis, an alkene is treated with ozone to produce intermediates called ozonides, which are reduced directly, generally with zinc metal in acetic acid, to yield aldehydes or ketones, depending on the substituents attached to the double bond of the initial alkene. Equations 6-8 provide three specific examples. Note that an aromatic ring is resistant to ozone. The value of ozonolysis lies in the structural insight it affords a chemist who is trying to determine the identity of an unknown compound. Figure 2 illustrates this idea.

Figure 2 Structural Elucidation with Ozonolysis

The unknown is degraded into smaller, simpler molecules that are more readily identified. Once identified, these fragments are then mentally reconnected by joining the carbonyl carbons together to recreate the alkene.

### Reducing Agents

In our discussion of the oxidation of alcohols, we classified that process as a 1,2-elimination of the "elements of" dihydrogen. The reverse process, the 1,2-addition of the "elements of" dihydrogen to a multiple bond, constitutes a reduction. The reagents used for this addition depend upon the nature of the multiple bond. For homonuclear multiple bonds, i.e. alkenes and alkynes, the most common method is called catalytic hydrogenation: a solution of the alkene or alkyne is mixed with dihydrogen gas in the presence of a catalytic quantity of a transition metal. For heteronuclear multiple bonds (aldehydes, ketones, nitriles, esters, etc.), addition of the "elements of" dihydrogen is generally accomplished in two steps: addition of hydride ion, :H-, followed by addition of H+.
Since the addition of hydride ion is rate determining, these reductions are called hydride ion reductions. We'll take a look at catalytic hydrogenation first.

#### Catalytic Hydrogenation

The catalyst most commonly used to reduce carbon-carbon multiple bonds consists of platinum metal dispersed over the surface of finely divided charcoal (Pt/C). Palladium and rhodium are also used (Pd/C and Rh/C). These reagents are available commercially. The catalyst is mixed with a solution of the alkene or alkyne dissolved in an inert solvent such as ethanol or diethyl ether. Since the reaction mixture is heterogeneous, it is important that the catalyst be dispersed over a large surface area in order to ensure adequate contact between the reactants and the catalyst. The charcoal provides the required surface area.

The mechanistic details of catalytic hydrogenations are uncertain because of the difficulties associated with studying heterogeneous reactions. Adsorption onto the catalytic surface brings the reactants into proximity. It also weakens the H-H and the C-C bonds, increasing the reactivity of the reactants. The details of the transfer of the hydrogen atoms from the platinum to the alkyne are uncertain. However, as the animation indicates, both hydrogen atoms add to the same side of the pi bond, leading to the formation of cis-2-butene. In other words, catalytic hydrogenation of alkenes and alkynes involves the syn addition of the "elements of" dihydrogen to the multiple bond. Equation 9 illustrates the syn addition of dihydrogen to (E)-3-chloro-2-phenyl-2-butene. Since the addition occurs at the top and bottom faces of the pi bond with equal probability, the reaction produces a racemic mixture of the two stereoisomers shown. As reaction 10 indicates, catalytic hydrogenations are not restricted to alkenes and alkynes. Compounds containing multiple bonds between carbon and a heteroatom such as oxygen or nitrogen may also be reduced catalytically.
While the outcome depicted in reaction 10 may be desirable, it would be nice to have a method to hydrogenate heteronuclear multiple bonds while leaving carbon-carbon multiple bonds untouched. Such a method exists. It takes advantage of the fact that, unlike C-C multiple bonds, which are relatively non-polar, heteronuclear multiple bonds have a permanent bond dipole: the carbon is electron deficient while the heteroatom is electron rich. This means that the carbon atom of a heteronuclear multiple bond is inherently reactive toward negatively charged reagents, while the heteroatom is inherently reactive toward positively charged reagents. Figure 3 demonstrates this reality for negatively and positively charged hydrogen atoms.

Figure 3 Opposites Attract

The net result of the addition of :H- and H+ to the multiple bond is the 1,2-addition of the "elements of" dihydrogen. Since it is not possible to have significant concentrations of :H- and H+ in the same flask at the same time (why?), hydrogenation of heteronuclear multiple bonds is normally a 2-step process: hydride ion adds to the carbon atom in the first step, while a proton adds to the heteroatom in the second. The remainder of this topic will consider different reagents that act as a source of hydride ion.

#### Hydride Ion Reductions

Reagents that act as hydride ion donors all share one structural feature: they all contain at least one hydrogen atom that is bonded to another atom which is less electronegative than hydrogen. The greater the difference in electronegativity, the more reactive the reagent will be as a hydride donor. The most reactive source of hydride ion is lithium aluminum hydride, LiAlH4. This material is a grey solid that reacts violently with protic solvents. Most commonly it is used as a suspension in a dry, inert solvent such as diethyl ether or THF.
A solution of the compound to be reduced is added to this suspension and stirred vigorously until analysis indicates that all of the starting material has reacted. At this point the mixture is acidified by the careful addition of aqueous acid. Figure 4 illustrates these two steps for the reduction of acetophenone.

Figure 4 LiAlH4 Reduction of Acetophenone

Note that all four hydrogen atoms attached to the aluminum in LiAlH4 are active; one mole of LiAlH4 will reduce four moles of the ketone. LiAlH4 is so reactive that it will reduce almost any type of heteronuclear multiple bond. It will even reduce carboxylic acids and esters to the corresponding primary alcohols, as indicated in reactions 11 and 12, and it reduces amides to amines, as shown in Equation 13. Clearly these reactions are more complicated than the mechanism shown in Figure 5 would suggest; elimination of water as well as reduction must be involved.

Before we consider less reactive hydride donors, let's revisit the reaction of the unsaturated ketone we considered in Equation 10. Figure 5 compares the catalytic hydrogenation of pent-4-en-2-one with its reduction by LiAlH4.

Figure 5 Reduction of Homonuclear and Heteronuclear Multiple Bonds

Because simple, i.e. non-conjugated, double bonds are non-polar, they are non-reactive towards nucleophilic reagents. While lithium aluminum hydride's high reactivity may be useful, it can also be a disadvantage if you want to selectively reduce one heteronuclear multiple bond in a compound that contains several. In that case it is desirable to have a reagent that is less reactive and more selective. One such reagent is sodium borohydride, NaBH4. This material is a white solid, and it is much less reactive than LiAlH4. In fact, it is possible to reduce aldehydes, ketones, and esters with NaBH4 even in protic solvents such as ethanol. It will not reduce carboxylic acids or amides. The mechanism of the reaction is very similar to that shown in Figure 5.
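This difference in selectivity can be captured in a small lookup, a sketch based only on the selectivities just stated; the group names and function are illustrative, not a standard convention.

```python
# Which heteronuclear groups each hydride reagent reduces, per the text above.
REDUCES = {
    "LiAlH4": {"aldehyde", "ketone", "ester", "amide", "carboxylic acid"},
    "NaBH4": {"aldehyde", "ketone", "ester"},
}

def reduced_groups(reagent, groups_in_molecule):
    """Return (sorted) the functional groups the reagent will attack;
    everything else in the molecule survives untouched."""
    return sorted(REDUCES[reagent] & set(groups_in_molecule))

# A substrate carrying both an ester and an amide:
print(reduced_groups("LiAlH4", ["ester", "amide"]))  # -> ['amide', 'ester']
print(reduced_groups("NaBH4", ["ester", "amide"]))   # -> ['ester']
```

With LiAlH4 both groups are reduced; with NaBH4 the amide is left alone, which is exactly the contrast drawn in the comparison that follows.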
Figure 6 compares the results of the reduction of a compound that contains both an ester and an amide group with LiAlH4 and NaBH4.

Figure 6 LiAlH4 vs NaBH4

Now let's consider another aspect of reactivity. For compounds that belong to the carboxylic acid family, in particular carboxylic acids, esters, and amides, the oxidation level of the carboxyl carbon is +3. As you can see from the reactions in Figure 7, the oxidation level of the carboxyl carbon decreases to -1 when the carboxyl group is reduced to a primary alcohol. The question then becomes, "Can you stop the reduction at an intermediate oxidation level?" The answer is yes. Equation 14 shows how. Here the ester is reduced to an aldehyde; the oxidation level of the carbonyl carbon decreases from +3 to +1. The trick is to use a sterically hindered reducing agent that has only one active hydrogen. In this case the reagent is diisobutyl aluminum hydride, sometimes abbreviated DIBAL. By using 1 equivalent of DIBAL at low temperatures it is possible to reduce the ester to the corresponding aldehyde without further reduction of the aldehyde to the primary alcohol. Use of more than 1 equivalent will lead to reduction of the aldehyde. Finally, Table 1 summarizes the reactivities of the various reducing agents we have considered in this topic.

Table 1

| Reducing Agent | Alkenes | Aldehydes | Ketones | Esters | Amides | Carboxylic Acids |
|---|---|---|---|---|---|---|
| H2, Pt/C | alkanes | 1° alcohols | 2° alcohols | NR | NR | NR |
| LiAlH4 | NR | 1° alcohols | 2° alcohols | 1° alcohols | amines | 1° alcohols |
| NaBH4 | NR | 1° alcohols | 2° alcohols | 1° alcohols | NR | NR |
| DIBAL | .... | 1° alcohols | 2° alcohols | aldehydes | aldehydes | 1° alcohols |

Hydride Reductions

Oxidation of Alcohols

### Introduction

The conversion of alcohols into aldehydes and ketones is one of the most common and most useful transformations available to the synthetic organic chemist. The general features of this oxidation reaction are outlined in Figure 1.
Figure 1 The Oxidation of Alcohols

A synonym for the oxidation of alcohols, dehydrogenation, suggests the structural feature that is required for this process: H-C-O-H. The OH group must be attached to a carbon atom that is bonded to at least one hydrogen atom. In other words, oxidation of alcohols involves the 1,2-elimination of "the elements of" dihydrogen, H and H. This structural requirement means that 3° alcohols do not normally undergo oxidation. None of the alcohols shown below will undergo oxidation under reaction conditions where 1° and 2° alcohols react readily.

### Oxidizing Agents

There is a wide variety of reagents that are used for the oxidation of alcohols. Two of the most common are chromic acid, H2Cr2O7, and pyridinium chlorochromate, PCC. Chromic acid is prepared by treatment of sodium or potassium dichromate with aqueous sulfuric acid as shown in Equation 1. Chromic acid is most commonly used to oxidize 2° alcohols to ketones. One example is given in Equation 2. Pyridinium chlorochromate is made by mixing chromium trioxide with pyridine and hydrochloric acid as indicated in Equation 3. The oxidizing component of PCC is the chlorochromate anion, CrClO3-. PCC was developed especially for the oxidation of 1° alcohols to aldehydes, a transformation which is difficult to accomplish using chromic acid because aldehydes react rapidly with aqueous chromic acid to produce carboxylic acids. Figure 2 compares the oxidation of 2-phenylethanol by chromic acid and PCC.

Figure 2 Oxidation Alternatives

### Mechanism

Oxidation of alcohols is basically a two step process. The first step involves the formation of a chromate ester. In our discussion of esterification, we saw that alcohols react with carboxylic acids, phosphoric acid, and sulfonic acids to produce various types of esters. The same is true for chromic acid and PCC; they react with alcohols to produce chromate esters.
Once the chromate ester is formed, it undergoes an elimination reaction to generate the carbonyl group of the aldehyde or ketone. These two steps are outlined in Figure 3.

Figure 3 The Mechanism of Chromate Oxidations

### Examples

The oxidation of the secondary alcohol menthol to the ketone menthone, as outlined in Equation 4, provides a simple example of a dichromate oxidation. The product is formed in over 90% yield. Equation 5 presents a variation on the theme discussed above. In this reaction the oxidizing agent is chromium trioxide, CrO3, dissolved in acetic acid. It converts a 2° alcohol into a ketone. This oxidation is part of a multi-step synthesis of a terpene called longifolene, which is a component of pine oil. Synthetic chemists often pursue exotic targets. The synthesis of sirenin, Equation 6, offers a case in point. A key step in the multi-step synthesis of this material was the PCC oxidation of a 1° alcohol to an aldehyde. Sirenin is the sperm attractant from the female gametes of a water mold. Now that's exotic.

Enolization

### Introduction

Hydrocarbons are among the weakest acids known. Conversely, their conjugate bases are some of the strongest bases there are. Why is it so hard to remove a proton from a carbon atom? Because the conjugate base is very unstable: when an excess of electron density develops on a carbon atom, the nucleus of that atom cannot offer much Coulombic attraction to stabilize it. Consider the hypothetical acid-base reactions shown in Figure 1.

Figure 1 Comparison of Four Acid-Base Reactions

In each case removal of a proton from the central atom generates an anion. The central atom of each anion has one more electron than there are protons in its nucleus. Figure 2 offers a schematic comparison of the central atoms of an alkanide and an amide ion.

Figure 2 Comparison of Charge Ratios in Alkanide and Amide Ions

The carbon has 2 electrons in its inner shell and 5 electrons in its valence shell.
The ratio of electrons to protons is 7/6 = 1.167. For the amide ion this ratio is 8/7 = 1.143, which is closer to 1/1 than in the case of carbon. Hence the amide ion is more stable than the alkanide ion. While the numerical difference in these ratios is small, the effect is huge: ammonia is 10¹² times more acidic than methane. Conversely, the amide ion is 10¹² times more stable than the methanide ion. You should extend this logic to hydroxide and fluoride ions.

Now consider the acid-base reaction of acetone shown in Equation 1. Experiments have shown that the pKa of acetone is 19. It is 10³¹ times more acidic than methane! This means that the conjugate base of acetone is 10³¹ times more stable than the conjugate base of methane. Why? What structural feature is there in the conjugate base of acetone that affords such stabilization? Clearly the answer must be the carbonyl group. As the proton is being removed from the carbon, the geometry about the carbon begins to change. As the negative charge increases, the geometry around the α-carbon changes in order to maximize the overlap between the orbital containing the lone pair of electrons and the pi orbitals of the carbonyl group.

### Interlude

#### Vocabulary

Figure 3 presents several structures with annotations that illustrate important definitions associated with this area of chemistry.

Figure 3 The Language of Enolization

### Experimental Evidence

#### Isotope Exchange

When acetone is treated with base in D2O, it is slowly converted into hexadeuteroacetone as shown in Equation 2.

#### Bromination

Compounds that contain at least one hydrogen atom α to a carbonyl group quickly decolorize dilute aqueous solutions of dibromine and sodium hydroxide. Equation 3 illustrates the reaction for acetone. The disappearance of the red-orange color indicates that the dibromine has been consumed. The products of the reaction, α-bromoacetone and sodium bromide, are colorless.
The reaction involves nucleophilic attack of the α-carbon of an enolate ion on a molecule of dibromine. See Figure 5.

#### The Iodoform Reaction

Given the chemistry described in Equation 3, it shouldn't be surprising that compounds which contain at least one hydrogen atom α to a carbonyl group also decolorize aqueous solutions of diiodine and sodium hydroxide. In the case of methyl ketones, the decoloration is accompanied by the formation of a molecule of iodoform, CHI3, a yellow solid. The decoloration of the solution and the formation of the yellow solid are taken as a positive result. Equation 4 illustrates the process for 2-pentanone. Note that the starting ketone is converted into a carboxylic acid that contains one fewer carbon atom. In reaction 4, each of the three hydrogens on the methyl group attached to the carbonyl carbon is replaced by an iodine atom. The hydrogen atom in the iodoform comes from the acid that is added in the second step of the process.

#### Silyl Enol Ethers

If a strongly basic solution of a ketone or aldehyde containing at least one α-hydrogen is "quenched" with chlorotrimethylsilane, it is possible to isolate a silyl enol ether as shown in Equation 5. Equation 5 differs from Equations 3 and 4 in two important ways. First, the equilibrium constant for the first step of reaction 5, i.e. deprotonation of the α-hydrogen, is approximately 10¹⁹, while that for reactions 3 and 4 is about 10⁻³. This means that there is a very high concentration of the enolate ion in reaction 5, but not in reactions 3 and 4. The importance of this fact will become obvious when we discuss aldol condensations in general, and crossed aldol condensations in particular. Second, the oxygen atom of the enolate ion acts as the nucleophile. This reflects the fact that the Si-O bond is extremely strong, much stronger than a Si-C bond.
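The practical consequence of those two equilibrium constants is easy to make concrete. This sketch simply converts the deprotonation constant K into the equilibrium fraction of carbonyl compound present as enolate (a simplification that ignores base concentration).

```python
# Equilibrium fraction of a carbonyl compound present as its enolate,
# given the deprotonation equilibrium constant K quoted in the text.
# (A simplification: base concentration is folded into K.)
def enolate_fraction(K):
    return K / (1.0 + K)

print(enolate_fraction(1e-3))   # hydroxide, reactions 3 and 4: ~0.001 (0.1%)
print(enolate_fraction(1e19))   # strong base, reaction 5: -> 1.0 (complete)
```

A tiny standing concentration of enolate is enough for bromination and iodoform chemistry, but essentially complete conversion is what makes the silyl enol ether isolable.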
### NMR Spectroscopy

In molecules which contain hydrogen atoms that are α to two carbonyl groups, the enol tautomer may actually be the predominant form of the compound even in the absence of base. Figure 4 shows a partial ¹H-NMR spectrum of 2,4-pentanedione. Integration of the signals for the methyl groups of the keto and enol forms of this compound reveals the relative amounts of each tautomer.

Figure 4 A Partial NMR Spectrum of 2,4-Pentanedione

The enol/keto ratio depends upon the solvent. In the case of 2,4-pentanedione it varies between 3/1 and 4/1.

### Mechanistic Interpretation

All of the lines of evidence described above may be rationalized by a process that involves the formation of an enolate ion. Figure 5 outlines the steps involved in the bromination of acetone.

Figure 5 Mechanistic Interpretation of the Bromination of Acetone

In the first step, the base abstracts a hydrogen atom from a methyl carbon. Electrons from the C-H bond move toward the carbonyl group, generating the enolate ion. In the second step, the negative charge on the oxygen atom moves back toward the carbon in order to regenerate the carbonyl group. As this happens, the electrons in the C-C double bond of the enolate ion form a bond to one of the bromine atoms in dibromine. This is a nucleophilic substitution reaction in which the enolate ion acts as a nucleophile and displaces a bromide ion from the dibromine. In the isotope exchange experiment described in Equation 2, the enolate ion displaces ⁻OD from D₂O in a similar manner. The process is repeated until all of the hydrogen atoms have been replaced by deuterium. Using D₂O as the solvent ensures that all of the hydrogen atoms will be exchanged. In the iodoform reaction, the exchange of the three methyl hydrogens for iodine atoms produces a triiodomethyl ketone. The triiodomethyl group is then displaced by hydroxide ion as shown in Figure 6, again using 2-pentanone as an example.
Figure 6 The Last Stage of the Iodoform Reaction

Reaction of the triiodomethyl ketone with hydroxide ion as shown in Figure 6 is a nucleophilic acyl substitution reaction. The triiodomethanide ion is a reasonable leaving group since the negative charge on the carbon atom is stabilized by the three iodine atoms.

### Carbon vs Hydrogen

Consider the reactions shown in Equations 6 and 7. Given that the equilibrium constant for reaction 6 is approximately 10³ times greater than that for reaction 7, how is it that hydroxide ion prefers to act as a base rather than as a nucleophile? The answer is that it doesn't. Reaction 6 occurs in preference to reaction 7. In fact, for every time a hydroxide ion abstracts an α-hydrogen from one acetaldehyde molecule, 1000 hydroxide ions add to the carbonyl groups of other acetaldehyde molecules. However, reaction 6 is readily reversible. Reaction 7, while reversible, may also proceed forward if the enolate ion reacts with an electrophile such as water, dibromine, diiodine, or chlorotrimethylsilane. It's sort of like the tortoise and the hare; reaction 6 gets off to a speedy start, but reaction 7 wins out in the end. Reaction 7 is the first step in an important carbon-carbon bond-making process called the aldol reaction.

The Aldol Reaction

### Introduction

Near the end of the discussion of enolization there was an analysis of the reactivity of hydroxide ion toward the carbonyl carbon compared to the hydrogen α to the carbonyl carbon in acetaldehyde. That analysis concluded that addition to the carbonyl group was approximately 1,000 times more likely than abstraction of an α-hydrogen. But because addition of hydroxide ion to the carbonyl group is readily reversible, it is a non-productive reaction. While the abstraction of an α-hydrogen is also a reversible reaction, the enolate ion that is formed has an alternative reaction pathway available to it: addition to the carbonyl carbon of another acetaldehyde molecule.
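The tortoise-and-hare behavior described above can be sketched with a toy kinetic model. All rate constants below are illustrative assumptions, not measured values: the reversible addition is taken to be 1000 times faster than deprotonation, but only the enolate goes on to an irreversible productive step.

```python
# Toy model: fast reversible addition vs. slow but productive enolization.
# Rate constants are illustrative assumptions, not experimental values.

k_add, k_rev = 1000.0, 1000.0  # fast, fully reversible addition (reaction 6)
k_enol = 1.0                   # deprotonation, 1000x slower (reaction 7)
k_prod = 50.0                  # enolate captured irreversibly (productive step)

aldehyde, adduct, enolate, product = 1.0, 0.0, 0.0, 0.0
dt, steps = 1e-4, 200_000      # simple forward-Euler integration

for _ in range(steps):
    v_add, v_rev = k_add * aldehyde, k_rev * adduct
    v_enol, v_prod = k_enol * aldehyde, k_prod * enolate
    aldehyde += (v_rev - v_add - v_enol) * dt
    adduct   += (v_add - v_rev) * dt
    enolate  += (v_enol - v_prod) * dt
    product  += v_prod * dt

print(f"final product fraction: {product:.4f}")
```

Even though the hydroxide adduct forms a thousand times faster, it is a dead end; in the model virtually all of the material eventually drains through the enolate into the productive pathway, i.e. addition to a second acetaldehyde molecule.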
This is called the aldol reaction. Figure 1 summarizes the process.

Figure 1 The Aldol Reaction of Acetaldehyde

Note that the net effect of the aldol reaction is to add the "components of" one molecule of acetaldehyde to the carbonyl group of a second molecule of acetaldehyde, the "components of" acetaldehyde being H and HCOCH₂. The aldol reaction of two aldehydes is of limited synthetic utility. However, there are many "aldol-like" reactions which involve the essential features described in Figure 1. Figure 2 highlights these features.

Figure 2 General Features of the Aldol Reaction

The aldol reaction requires an aldehyde or ketone that contains at least one α-hydrogen. The α-carbon becomes nucleophilic when it is deprotonated by a base. The carbonyl carbon is electrophilic. Coulomb's Law brings these two oppositely charged species together to form a C-C bond. The R groups may be H, alkyl, or aryl. When the R groups in one molecule are different from those in the other, the reaction is called a crossed-aldol reaction. The ability to join different aldehydes and ketones together is what gives this process its synthetic value. The word aldol is a common name for the product of the reaction shown in Figure 1. It is a type of compound called a β-hydroxyaldehyde. Generally the word aldol is used to refer to any β-hydroxyaldehyde or β-hydroxyketone. Like other alcohols, β-hydroxyaldehydes and β-hydroxyketones undergo dehydration to produce alkenes. In fact, it is difficult to isolate β-hydroxyaldehydes and β-hydroxyketones because they are very prone to dehydration.

### Dehydration of β-hydroxyaldehydes and β-hydroxyketones

In order to isolate the product shown in Figure 1, the reaction conditions must be mild; the temperature must be kept low and the amount of acid used to protonate the alkoxide ion intermediate must be carefully controlled.
If too much acid is added, or if the temperature is too high, the aldol will dehydrate to form a conjugated alkene as demonstrated in Figure 3.

Figure 3 Dehydration of a β-hydroxyaldehyde

The conjugated alkene shown in Figure 3 is also called an α,β-unsaturated aldehyde. This means that the double bond is between the carbon atoms α and β to the carbonyl carbon. In this position the p orbitals of the double bond may interact with those of the carbonyl group to form an extended, i.e. delocalized, pi system. The delocalization of the electrons in this pi system results in greater stabilization since the electrons experience greater nuclear attraction. Figure 4 offers two perspectives of the orbital geometry that affords this extra stabilization. The trans isomer was chosen arbitrarily.

Figure 4 Orbital Alignment in α,β-unsaturated Systems

### Retrosynthetic Analysis

For synthetic organic chemists it's important to develop the ability to mentally "deconstruct" a target structure into simpler molecules from which that target may be made. The process of intellectual deconstruction is called retrosynthetic analysis. The focal point of such endeavors is inevitably the functional group(s) within the target molecule. In the case of α,β-unsaturated aldehydes or ketones the functional group of interest is the C-C double bond. Disconnecting these two carbons as animated in Figure 5 reveals the structures of the two components from which the target molecule was prepared.

Figure 5 Take One Step Back

### Examples

Treatment of acetone with base results in the aldol reaction shown in Equation 1. Experimentally reaction 1 is tricky to perform. However, if the product is separated from the reaction mixture as it is formed, it is possible to isolate the product in over 70% yield. Acetone participates in a crossed-aldol reaction with furfural, an aldehyde produced from corn stalks, as described by Equation 2, where the carbon-carbon bond that is formed is highlighted in red.
An amazing aldol-type reaction was involved in the total synthesis of ginkgolide B, one of the active components in extracts from the Ginkgo biloba tree. Equation 3 outlines this key step. In this reaction the potentially nucleophilic carbon is α to the carbonyl group of an ester rather than a ketone or aldehyde. Lithium diisopropylamide (LDA) was used to deprotonate this carbon. The resulting enolate ion added to the carbonyl carbon of the complex pentacyclic ketone to form the C-C bond shown in red. Equation 4 depicts an intramolecular crossed-aldol reaction that constituted the last step in a total synthesis of racemic progesterone. Even though the reaction conditions were very mild, the intermediate β-hydroxyketone underwent spontaneous dehydration to produce the α,β-unsaturated ketone.

### Introduction

Resonance theory is a valuable extension of valence bond theory because it offers chemists a simple and reliable way to rationalize and/or predict the results of many reactions involving conjugated systems. In this topic we will examine a small but important group of reactions of molecules containing a carbonyl group that is conjugated to a carbon-carbon double bond. We encountered this molecular fragment during our discussion of aldol condensations. Figure 1 reviews this reaction sequence. The process begins when a base removes a proton from the α-carbon atom of the aldehyde. This generates a low concentration of enolate ion A, which reacts with a molecule of acetaldehyde that has not been deprotonated as indicated by the arrows labeled 1, 2, and 3. Protonation of the resulting alkoxide ion B leads to the β-hydroxyaldehyde known as aldol. While it is possible to isolate this compound, it is also easy to dehydrate it; 1,2-elimination of water leads to the α,β-unsaturated aldehyde 2-butenal.

Figure 1 The Aldol Condensation

Now that you know how to prepare them, let's take a look at an important aspect of the chemistry of these molecules.
### Nucleophilic Addition Reactions of α,β-unsaturated Systems

Because of the conjugation of the double bond with the carbonyl group, α,β-unsaturated aldehydes and ketones possess two electrophilic carbons, the carbonyl carbon and the β-carbon. Figure 2 shows the familiar resonance interaction between the two functional groups in 2-butenal.

Figure 2

It is apparent that the carbonyl carbon of 2-butenal is electron deficient, simply by virtue of being bonded to a more electronegative oxygen atom, i.e. the oxygen atom draws electron density away from the carbon atom by virtue of its inductive effect (its greater electronegativity). Structure A emphasizes the fact that the resonance interaction depicted by the arrow labeled 1 reinforces the inductive withdrawal of electron density from the carbon by the oxygen. The development of a positive charge on the carbonyl carbon in A leads to the resonance interaction indicated by the arrow labeled 2. This reduces the electron density at the β-carbon as indicated by the positive charge on that atom in resonance structure B. While the foregoing discussion should be familiar to you by now, it bears repeating because understanding resonance interactions like those shown in Figure 2 allows you to make predictions about the outcome of many chemical reactions. The prediction that is central to this topic is this: nucleophilic reagents can react with the carbonyl carbon and/or the β-carbon atom of α,β-unsaturated aldehydes and ketones. Which of these alternatives is realized in a given reaction is an issue of kinetic vs. thermodynamic control. Consider the reaction shown in Equation 1. Of the two products, cyclohexanone is the more stable. This is primarily due to the fact that the carbon-oxygen double bond is an especially stable molecular fragment, considerably more stable than a carbon-carbon double bond.
The average bond strength of a carbonyl group in aldehydes and ketones is about 178 kcal/mol, while that of the carbon-carbon double bond in alkenes is approximately 146 kcal/mol. Whether a nucleophile adds to the carbonyl carbon or the β-carbon of an α,β-unsaturated system depends in large measure upon the reactivity of the nucleophile. With highly reactive nucleophiles, addition is kinetically controlled and the nucleophile adds to the carbonyl carbon because it is more electron deficient than the β-carbon. Less reactive nucleophiles are more selective in their choice of bonding partners and prefer to bond to the β-carbon, producing the more stable product. The change from kinetic to thermodynamic control is clearly illustrated by the product distributions obtained in the reactions of methyl lithium, methyl magnesium bromide, and lithium dimethyl cuprate with 5-methyl-2-cyclohexenone. The nucleophilic reactivity of these reagents is determined primarily by the difference in electronegativity of the methyl carbon and the metal atom to which it is bonded. Equation 2 describes the reaction of 5-methyl-2-cyclohexenone with these three organometallic reagents in general terms. Here M stands for the metal atom. The product distribution in these three reactions is summarized in Table 1.

Table 1 Nucleophilic Addition: Kinetic vs. Thermodynamic Control

| Nucleophile | 1,2-Addition | 1,4-Addition |
| --- | --- | --- |
| CH₃Li | >99% | <1% |
| CH₃MgBr | 79% | 21% |
| (CH₃)₂CuLi | 2% | 98% |

Note the almost total reversal from kinetic control to thermodynamic control as the nucleophile is changed from the highly reactive methyl lithium to the much less reactive lithium dimethyl cuprate. Lithium dimethyl cuprate is prepared by the reaction of copper(I) iodide with methyl lithium as shown in Equation 3. While the preparation of this complex and the determination of its structure are interesting topics in themselves, the key feature of this organometallic reagent is that the methyl groups are nucleophilic.
They are less nucleophilic than a methyl group from methyl lithium or methyl magnesium bromide because the electronegativity difference between C and Cu is less than it is between C and Li or C and Mg. Now let's consider another, more familiar, way of assessing the relative reactivities of carbon nucleophiles.

### Relative Reactivity of Carbon Nucleophiles

The relative reactivity of a series of carbon nucleophiles may be assessed to a first approximation by comparing the pKa values of their conjugate acids. The weakest acids produce the strongest conjugate bases, i.e. the most reactive nucleophiles. Table 2 presents a list of carbon acids ranked in order of increasing acidity.

Table 2 pKa Values to the Rescue Again

| Carbon Acid | pKa |
| --- | --- |
| Alkanes | ~50 |
| Alkenes/Arenes | ~44 |
| Terminal Alkynes | ~25 |
| Aldehydes/Ketones | ~19 |
| Phosphonium Salts | ~15 |
| Hydrogen cyanide (commercially available as NaCN or KCN) | ~9 |
| β-Dicarbonyl compounds | ~9 |

The remainder of this topic will present a variety of reactions that involve the addition of nucleophilic reagents to the β-carbon of an α,β-unsaturated system. These reactions are also called Michael additions or conjugate additions. A simple example of the shift from kinetic to thermodynamic control is available in the reaction of the enolate ion derived from methyl isobutyrate with cyclohexenone as shown in Scheme 1.

Scheme 1

A more dramatic example of Michael addition is illustrated in Equation 4, where conjugate addition occurs to the exclusion of 1,2-addition. Scheme 2 outlines a transformation that constituted the first step in a multi-step synthesis of the steroid estrone.

Scheme 2 Body Builder

The nucleophilic reagent was generated by reacting cuprous iodide with the Grignard reagent derived from vinyl bromide. The divinyl cuprate added to the β-carbon of 2-methylcyclopentenone to produce an enolate ion that reacted with chlorotrimethylsilane to form the silyl enol ether in 89% yield.
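The electronegativity argument used above to rank these nucleophiles can be made concrete. The Pauling electronegativity values below are supplied here for illustration (the text quotes only the trend, not the numbers):

```python
# Pauling electronegativities (standard values, supplied for illustration).
EN = {"C": 2.55, "Li": 0.98, "Mg": 1.31, "Cu": 1.90}

# 1,2-addition percentages from Table 1 (">99%" rounded to 99 here).
pct_1_2 = {"Li": 99, "Mg": 79, "Cu": 2}

for metal in ("Li", "Mg", "Cu"):
    delta = EN["C"] - EN[metal]
    print(f"C-{metal}: deltaEN = {delta:.2f}  ->  {pct_1_2[metal]}% 1,2-addition")
```

The larger the C-M electronegativity difference, the more carbanion-like (and more reactive) the methyl group, and the more the kinetically controlled 1,2-addition dominates, in line with Table 1.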
A standard method for forming 6-membered rings known as the Robinson annulation (annulation means ring formation) begins with a Michael addition. Scheme 3 outlines the steps involved in this transformation.

Scheme 3 The Robinson Annulation

The sequence begins with the deprotonation of one of the α-carbons of 2-methylcyclohexanone. This is the same as the first step of an aldol condensation. However, the enolate ion, A, generated in the first step adds to the β-carbon atom of the methyl vinyl ketone that is present in the reaction mixture (arrows 4-7) to produce a new enolate ion, B, which abstracts a proton from a water molecule as shown by arrows 8-10. The resultant ketone C is then converted into its enolate D, which cyclizes as shown by arrows 12-14. This generates the β-hydroxyketone E, which spontaneously dehydrates to yield the final product F. Carbon is not the only nucleophilic species to participate in Michael additions. Scheme 4 illustrates an intramolecular Michael addition that was a key step in the first total synthesis of the antibiotic indolizomycin.

Scheme 4

Addition of the hydroperoxide anion to the β-carbon of the starting material initiated the flow of electrons indicated by arrows 1-3. Regeneration of the carbonyl group (arrow 4) of the resulting enolate ion was accompanied by an intramolecular nucleophilic substitution reaction as shown by arrows 5 and 6. The resulting epoxide was isolated in 97% yield!

Condensation Reactions of Esters

### Introduction

In our discussion of the aldol reaction we saw that competing reactions occur when aldehydes and ketones are treated with hydroxide ion. On the one hand, the hydroxide ion may add to the carbonyl carbon in a nucleophilic addition reaction, while on the other it may abstract an α-hydrogen. As the color coding in Figure 1 indicates, that duality of reaction pathways stems from the fact that both the carbonyl carbon and the α-hydrogen of the aldehyde or ketone are electrophilic.
For purposes of the following discussion it is important to note that addition of hydroxide ion to the carbonyl carbon generates a tetrahedral intermediate labeled T-1 in Figure 1.

Figure 1 Alternative Reaction Pathways for Aldehydes and Ketones

### Aldehydes and Ketones vs. Esters

Esters are structurally related to aldehydes and ketones: all three classes of compounds contain a carbonyl group. It shouldn't be surprising, then, that esters display reactivity patterns similar to those of aldehydes and ketones. As we shall see, they also display some interesting and significant differences. Figure 2 presents a scheme analogous to that shown in Figure 1 for the reaction of simple esters with hydroxide ion. Nucleophilic addition of hydroxide to the carbonyl carbon generates a tetrahedral intermediate T-2.

Figure 2 Alternative Reaction Pathways for Esters

One of the most favorable pathways available to these tetrahedral intermediates involves regeneration of the carbonyl group. This is favorable because the carbonyl group is an especially stable entity. Figure 3 compares the alternative ways in which the tetrahedral intermediates T-1 and T-2 might regenerate the carbonyl group.

Figure 3 Alternative Fates of Tetrahedral Intermediates

In the case of aldehydes and ketones, regeneration of the carbonyl group is a reasonable alternative only when it results in expulsion of hydroxide ion from T-1. As we have seen, regeneration of the starting material by this path allows for the less likely alternative, abstraction of an α-hydrogen, and for the formation of the aldol condensation products which follow that event. In the case of T-2, regeneration of the carbonyl group may be achieved by expulsion of hydroxide ion or alkoxide ion. In fact, the latter possibility is preferred. To understand why, it is necessary to consider the equilibria shown in Figure 4.

Figure 4

The equilibrium constant for expulsion of alkoxide ion from T-2 is approximately 1.
Note that the product of this pathway is a carboxylic acid. Since this acid is formed in a strongly basic solution, it will be deprotonated rapidly. Given that the pKa of a carboxylic acid is about 5, the equilibrium constant for its deprotonation is approximately 10¹¹. In other words, while the expulsion of hydroxide ion from T-2 (Figure 3) is about as likely as expulsion of alkoxide ion, the latter pathway is greatly preferred because a subsequent reaction has a much larger equilibrium constant. Consequently, treatment of an ester with aqueous sodium hydroxide results in the formation of the conjugate base of a carboxylic acid.

#### Summary

In our discussion of the reactions of aldehydes and ketones with hydroxide ion, we saw that addition to the carbonyl carbon was more probable than abstraction of an α-hydrogen, but that the latter pathway was the one followed because the addition reaction was non-productive. In the case of structurally similar esters, i.e. esters containing at least one α-hydrogen, the more probable reaction is a productive reaction. It produces carboxylic acids. The process is called saponification.

### Saponification of Esters

The formation of carboxylic acids by treatment of esters with sodium hydroxide is known as saponification. Equation 1 summarizes the net transformation for the saponification of methyl salicylate, a fragrant component of oil of wintergreen. The general procedure involves refluxing the ester in 6 M NaOH until the mixture becomes homogeneous, indicating complete formation of the water-soluble carboxylate salt, RCO₂⁻. Acidification of the mixture during the work-up produces the carboxylic acid. Equation 2 provides another example of saponification of an even simpler ester, ethyl acetate. Suppose that you were to treat ethyl acetate with sodium ethoxide rather than sodium hydroxide. What would happen? The answer: the Claisen condensation.
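The roughly 10^11 equilibrium constant quoted above for deprotonation of the carboxylic acid follows directly from a comparison of pKa values. A minimal check, assuming the common textbook value of about 15.7 for the pKa of water (some texts use 14, which changes the exponent but not the conclusion):

```python
# For RCO2H + OH- <=> RCO2- + H2O, the equilibrium constant is
# K = 10**(pKa of the product acid (H2O) - pKa of the reactant acid (RCO2H)).
pka_acid = 5.0     # typical carboxylic acid, as in the text
pka_water = 15.7   # assumed textbook value

K = 10 ** (pka_water - pka_acid)
print(f"K ~ 10^{pka_water - pka_acid:.1f}")  # about 10^11
```

The large downhill constant for this follow-up step is what pulls the whole saponification sequence forward.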
The Claisen Condensation

### Introduction

At the end of our comparison of the reactions of aldehydes and ketones and esters with hydroxide ion, we posed the question "Suppose that you were to treat ethyl acetate with sodium ethoxide rather than sodium hydroxide. What would happen?" Two alternatives are likely:

1. The ethoxide ion will abstract a proton from the methyl carbon α to the carbonyl group, forming an enolate ion.
2. The ethoxide ion will add to the carbonyl carbon to form a tetrahedral intermediate.

The latter possibility is more likely. As can be seen in Figure 1, this intermediate contains two identical ethoxy groups. Consequently, reformation of the carbonyl group has to regenerate the starting ester. As in the case of the aldol reaction, the more likely reaction is a non-productive one. And for the same reason a less likely alternative, abstraction of an α-hydrogen, becomes the productive reaction.

Figure 1 Alternative Paths for the Reaction of Ethyl Acetate with Sodium Ethoxide

Again, as in the case of the aldol condensation, abstraction of an α-hydrogen produces a low concentration of a highly reactive enolate ion. What happens when this highly reactive nucleophilic species finds itself surrounded by ethyl acetate molecules that have not been deprotonated? It reacts.

### The Claisen Condensation

The nucleophilic carbon of the enolate ion adds to the electrophilic carbon of a molecule of ethyl acetate that has not been deprotonated. A carbon-carbon bond is formed. The reaction produces yet another tetrahedral intermediate as shown in Equation 1. Since the newly formed tetrahedral center has an electronegative atom attached to it, reformation of the carbonyl group, as shown in Equation 2, is a reasonable process. This step results in the formation of a β-ketoester, which in this case is called ethyl acetoacetate.
In the same way that β-hydroxyaldehydes and β-hydroxyketones are signature structures of the aldol reaction, β-ketoesters suggest the Claisen condensation. A complete description of the mechanism of the Claisen condensation is, in fact, a bit more complicated than indicated in Equations 1 and 2, so, if you'd like to know more...

### Retrosynthetic Analysis

To a synthetic organic chemist who is planning the synthesis of a target molecule, the presence of a β-ketoester fragment within that target should suggest the use of a Claisen condensation at some point during the synthesis. Retrosynthetic analysis of the target will then reveal the structure of the appropriate starting ester. Figure 2 demonstrates this retrosynthetic approach for a generic β-ketoester.

Figure 2 Taking Another Step Backwards

There are three points worth remembering about the retrosynthesis animated in Figure 2. First, R, R', and R'' may be the same or they may be different. Second, while R and R' may be H, R'' may not. Third, when R and R' are not the same, the condensation is called a crossed Claisen condensation.

### Examples

An intramolecular version of the Claisen condensation is known as the Dieckmann condensation. Equation 3 shows how this reaction was put to good use as part of the total synthesis of the prostaglandin PGA2. Equation 4 offers another example of the Dieckmann condensation that was involved in the synthesis of tropinone, a degradation product that was produced during the determination of the structure of the physiologically active alkaloid atropine. An early synthesis of cholesterol involved the "mixed Claisen reaction" shown in Equation 5.

Alkylation of Enolate Ions

### Introduction

In our discussions of the aldol condensation and the Claisen condensation we saw how deprotonation of a carbon atom α to a carbonyl group generated an enolate ion.
Suppose for the moment that you wanted to react such a nucleophilic species with an alkyl halide as suggested in Figure 1 for the specific case of acetone. In other words, you want to synthesize 2-butanone from acetone.

Figure 1 A Viable Synthesis?

Is this a viable process? Let's consider the nature of this reaction. First, recall that treatment of acetone with aqueous NaOH establishes an equilibrium with an equilibrium constant of approximately 10⁻⁴. In other words, the solution contains comparatively high concentrations of acetone and NaOH and a relatively low concentration of the enolate ion. When you add CH₃I to this mixture, there is a chance that it will react with the enolate ion as shown in Figure 1. However, alternative pathways are more likely. One alternative is an aldol condensation. Another is an SN2 reaction of hydroxide ion with methyl iodide. Both of these possibilities are more likely than the desired reaction. So the answer to the question posed earlier is no, the synthesis proposed in Figure 1 is not viable. Fortunately it is possible to accomplish the transformation outlined in Figure 1 by alternative methods. We will consider two, the first a direct method, the second indirect. Then we will extend our discussion of the second method to a related system. All three approaches accomplish the same goal: alkylation of an enolate ion.

### Direct Alkylation of Enolate Ions

Consider the consequences of using a stronger base than hydroxide ion to deprotonate acetone in Figure 1. First, the concentration of enolate ion at equilibrium would be higher. This would increase the likelihood of a reaction with the methyl iodide. Second, the concentration of acetone would be lower. This would reduce the probability of an aldol condensation. Both of these factors increase the likelihood of alkylation. When lithium diisopropylamide, LDA, is used as the base to deprotonate acetone, Equation 1, the equilibrium constant for the reaction is approximately 10¹⁹.
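Using the two equilibrium constants just quoted (~10^-4 for hydroxide, ~10^19 for LDA), a short calculation shows how dramatically the choice of base changes the enolate concentration. This sketch assumes one equivalent of base and equal starting concentrations of ketone and base:

```python
import math

# For ketone + base <=> enolate + conjugate acid, starting from equal
# concentrations c of ketone and base, x/(c - x) = sqrt(K), so the
# fraction of ketone converted to enolate is sqrt(K) / (1 + sqrt(K)).

def fraction_deprotonated(K):
    r = math.sqrt(K)
    return r / (1 + r)

print(f"NaOH (K ~ 1e-4): {fraction_deprotonated(1e-4):.2%} enolate at equilibrium")
print(f"LDA  (K ~ 1e19): {fraction_deprotonated(1e19):.6f} (essentially complete)")
```

With hydroxide, only about one molecule in a hundred is deprotonated at any moment; with LDA the deprotonation is effectively quantitative, which is the basis of the next paragraph.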
Virtually all of the acetone is deprotonated. The chances of an aldol reaction are reduced to essentially zero. Because LDA is such a strong base, it is possible to use a stoichiometric amount of it to deprotonate all of the acetone. Thus, at equilibrium, there is no unreacted LDA to compete for any alkyl halide that might be added to the reaction mixture. Therefore the reaction of the enolate ion with added methyl iodide, Equation 2, becomes the most probable event under these conditions.

### The Acetoacetic Ester Synthesis

Before the direct alkylation of lithium enolates was developed, chemists used an alternative, indirect method to achieve transformations such as that illustrated in Figure 1. Recall that the Claisen condensation of ethyl acetate produces a β-keto ester called ethyl acetoacetate. Recall, too, that the pKa of such compounds is approximately 10.

Figure 2 The Claisen Condensation (Again)

Under the reaction conditions the ethyl acetoacetate is deprotonated by the sodium ethoxide present in the mixture. The equilibrium constant for this acid-base reaction is approximately 10⁶. In other words, the concentration of sodioacetoacetic ester is high. Addition of methyl iodide to the reaction mixture results in methylation of the enolate ion as shown in Equation 3. The ester fragment of the product of reaction 3 is shown in blue to emphasize the idea that if we could replace that fragment with a hydrogen atom, we would have 2-butanone, the same product that was formed by direct methylation of acetone. Such a transformation is possible. It involves a 2-step, 1-pot reaction. The first step is the saponification of the ester. This results in the formation of the conjugate base of a β-ketocarboxylic acid. Acidification of the reaction mixture, followed by heating, results in the decarboxylation of the β-keto acid and the formation of the corresponding ketone. These steps are outlined in Figure 3.
Figure 3 It's a Gas

Figure 4 compares the outcomes of the direct alkylation of acetone with the indirect alkylation-saponification-decarboxylation sequence. Because the latter approach yields the desired target molecule, ethyl acetoacetate is considered to be the synthetic equivalent of acetone. Such a reagent is called a synthon. We'll see another example of a synthon when we discuss the malonic ester synthesis at the end of this topic.

Figure 4 There's More Than One Way to Skin a Cat

The decarboxylation of β-keto acids is a general phenomenon. It occurs readily because the carboxylic acid proton is transferred to the oxygen atom of the β-keto group intramolecularly. The transition state for the transfer is shown in Figure 5.

Figure 5 Decarboxylation

The immediate product of this decarboxylation is an enol which rapidly tautomerizes to the isomeric ketone, i.e. 2-butanone.

### Retrosynthetic Analysis

To a synthetic organic chemist who is planning the synthesis of a target molecule, the presence of a β-ketoester fragment within that target should suggest the use of a Claisen condensation at some point during the synthesis. Retrosynthetic analysis of the target will then reveal the structure of the appropriate starting ester. Figure 6 demonstrates this retrosynthetic approach for a generic β-ketoester.

Figure 6 Taking Another Step Backwards

There are three points worth remembering about the retrosynthesis animated in Figure 6. First, R, R', and R'' may be the same or they may be different. Second, while R and R' may be H, R'' may not. Third, when R and R' are not the same, the condensation is called a crossed Claisen condensation.

The alkylation-decarboxylation sequence outlined in Figure 3 has a direct parallel in a related synthetic scheme called the malonic ester synthesis. We'll conclude this topic with a comparison of the malonic ester and the acetoacetic ester syntheses.
### The Malonic Ester Synthesis

Malonic ester, sometimes called diethyl malonate, is the diethyl ester of malonic acid. The structures of these two compounds are shown in Figure 7. Notice the structural similarities between malonic ester and acetoacetic ester. The similarity of their chemical reactivity stems from their structural relationship.

Figure 7 Meet the Malonates

Malonic ester is a common starting material for the synthesis of carboxylic acids. Figure 8 describes a typical sequence.

Figure 8 A Malonic Ester Synthesis

The Mannich Reaction

### Introduction

We have seen that attempts to alkylate simple aldehydes, ketones, and esters may be rendered ineffective by the occurrence of competing reactions, notably aldol and Claisen condensations as well as SN2 and E2 reactions. Deprotonation of aldehydes, ketones, and esters with LDA allows for direct alkylation of these compounds, while deprotonation of dithiane derivatives of aldehydes offers an indirect method for replacing the aldehydic proton with an alkyl group. In this topic we will consider another approach to the alkylation of aldehydes and ketones, namely the Mannich reaction. The development of this approach, in the early 1900s, was guided in some measure by insights into the biochemical pathways that plants use to make natural products called secondary metabolites. Secondary metabolites are compounds that an organism produces that are not essential to its survival, i.e. unlike primary metabolites, secondary metabolites are not used in the synthesis of the proteins, lipids, nucleic acids, or energy that an organism requires for survival. However, it is now clear that these natural products are beneficial, if not essential, to the organisms that produce them. Probably the most familiar group of secondary metabolites is the pheromones. Plants and insects alike release these compounds as a form of chemical communication, conveying simple messages such as "danger", "this way to lunch", or "Poison!
Eat at your own risk". In plants, alkaloids constitute another group of secondary metabolites. Alkaloids are natural products that contain an amino group. The name is derived from the fact that aqueous solutions of these compounds are slightly basic, i.e. alkaline, due to the presence of the amino group. The reactions that produce alkaloids generally involve the secondary metabolism of amino acids. Figure 1 presents a highly abbreviated picture of the biosynthesis of nicotine from the amino acid ornithine.

Figure 1 An Abbreviated Biosynthesis of Nicotine

In the first step of this sequence the C-2 amino group of ornithine adds to the carbonyl group of pyridoxal phosphate to form an imine. (The pyridoxal phosphate is bound to an enzyme, which is not shown in this diagram.) Decarboxylation followed by tautomerization generates an isomeric imine which undergoes hydrolysis to produce 4-aminobutanal along with an enamine that is converted back to pyridoxal phosphate by hydrolysis. The 4-aminobutanal exists in equilibrium with the 5-membered heterocycle Δ1-pyrroline. As emphasized in the red box, an enolate ion derived from acetoacetyl coenzyme A adds to the protonated imine group of this 5-membered ring. Subsequent reactions complete the biosynthesis of nicotine. It is the addition of the enolate ion to the iminium ion that led to the development of the Mannich reaction. Synthetic methodologies that are designed in this way are referred to as biomimetic reactions. The Mannich reaction was the first biomimetic synthesis to be developed.

### The Mannich Reaction

So what is the Mannich reaction? In its simplest form, it involves the nucleophilic addition of an enol to an iminium ion formed by the reaction of formaldehyde with a secondary amine. Equation 1 provides a specific example. The Mannich reaction involves several acid-catalysed equilibria.
Like the aldol condensation, the success of the Mannich reaction depends on being able to generate both nucleophilic and electrophilic carbons in the reaction mixture at the same time. Figure 2 shows how this is done in the reaction of dimethylamine, formaldehyde, and acetone.

Figure 2 The Mannich Mechanism

When performing a Mannich reaction, it is common practice to use the hydrochloride salt of the amine as one of the starting materials. In aqueous solution the salt exists in equilibrium with the free amine. The proton that accompanies the formation of the free amine in Equilibrium 1 is available to protonate other reactants in the solution (Equilibria 2 and 3). Addition of the free amine to a protonated molecule of formaldehyde leads to the formation of the iminium ion shown at the right of Equilibrium 4. The enol of acetone then adds to the carbon atom of the iminium ion in Equilibrium 5. Loss of a proton from the oxonium ion intermediate during the work-up of the reaction yields the final product. Equations 2 and 3 provide two examples of applications of the Mannich reaction in the synthesis of natural products. Equation 4 illustrates a transformation that was achieved in 1917 and is the first example of a biomimetic synthesis. Naturally occurring atropine is a toxic alkaloid isolated from the plant Atropa belladonna, commonly called nightshade. Cocaine is a close structural analog of atropine, and the syntheses of these two alkaloids involve the same basic approach. Finally, Figure 3 describes an interesting example of an intramolecular Mannich reaction that was used in the total synthesis of strychnine.

Figure 3 These Alkaloids Are Killing Me

The essential chemistry to consider in Figure 3 begins with the reaction of the secondary amine with formaldehyde. Under the conditions of the reaction, the resulting iminium ion intermediate was not isolated.
Rather it underwent a thermal rearrangement, as indicated by the red arrows, to produce a new intermediate that contained an enol and an iminium ion in close proximity. As the blue arrows suggest, these electronically complementary groups reacted spontaneously to form the tricyclic structure shown at the lower right of the figure. The rest of the molecule was elaborated in a series of steps that are not relevant to this discussion. The cyclohexane ring of the final product which arose from the intramolecular Mannich reaction is highlighted in blue.

Pericyclic Reactions

### Introduction

For the synthetic organic chemist, the development of a general procedure that leads to the formation of carbon-carbon bonds is considered a laudable achievement. A general method that results in the simultaneous formation of two carbon-carbon bonds is worthy of a Nobel prize. In 1950, two chemists, Otto Diels and Kurt Alder, received that accolade for their discovery of a general method of preparing cyclohexene derivatives that is now known as the Diels-Alder reaction. The Diels-Alder reaction is one type of a broader class of reactions that are known as pericyclic reactions. In 1965 two other Nobel laureates, Robert B. Woodward and Roald Hoffmann, published a series of short communications in which they presented a theoretical basis for these well-known but poorly understood pericyclic reactions. Their theory is called orbital symmetry theory. Subsequently other chemists published alternative interpretations of pericyclic reactions, one called frontier orbital theory, and another named aromatic transition state theory. All of these theories are based upon MO theory. In this topic we will use the Diels-Alder reaction to illustrate aspects of each of these theories.

### The Diels-Alder Reaction

The essential nature of the Diels-Alder reaction is summarized by Equation 1, where the substituents attached to the reactants have been omitted for clarity.
Figure 1 presents two examples of Diels-Alder reactions that have played vital roles in the total synthesis of biologically important molecules. The first example involves a synthesis of the steroid cortisone reported by R. B. Woodward in 1951. The second synthesis was performed by another Nobel laureate, E. J. Corey, in 1969. In both cases many steps were required after the Diels-Alder reaction. The carbon atoms of the reactants are numbered to allow their identification in the final product.

Figure 1 Applying the Diels-Alder Reaction

Figure 1 demonstrates the practical side of the Diels-Alder reaction. Now let's take a look at the theoretical side, starting with orbital symmetry theory.

### Orbital Symmetry Theory

Orbital symmetry theory was a new paradigm in chemistry. As such it required new language. New words and new concepts had to be defined. The new vocabulary included the following words and terms:

• pericyclic reactions
• cycloaddition reaction
• electrocyclic reaction
• sigmatropic rearrangement
• cheletropic reaction
• suprafacial
• antarafacial
• orbital correlation diagram
• symmetry allowed reaction
• symmetry forbidden reaction

The first term is the general reaction type described by orbital symmetry theory. The next four terms are specific types of pericyclic reactions. This discussion is limited to the first type, namely cycloaddition reactions. The term orbital correlation diagram describes the theoretical device that Woodward and Hoffmann developed to interpret pericyclic reactions. According to orbital symmetry theory the symmetry of the orbitals of the reactants must be conserved as they are transformed into the orbitals of the product. Consider the simplest example of a cycloaddition reaction, the head-to-head coupling of two ethene molecules to form cyclobutane as shown in Equation 2.
Figure 3 defines the conditions implied by the term head-to-head, namely that the reactants approach each other in parallel planes with the pi orbitals overlapping in the head-to-head fashion required for the formation of sigma bonds (shown in red in Equation 2). The sigma-bonded atoms of each ethene lie in the two parallel planes shown in black in the figure. The p orbitals on each carbon lie in the vertical plane which is shown in blue. The two planes shown in red are symmetry planes. Plane 1 bisects the C-C bond of each ethene, while plane 2 lies half way between the two planes shown in black.

Figure 3

According to the conventions of orbital symmetry theory, the reaction shown in Figure 3 is classified as a p2s + p2s cycloaddition, where p indicates that the reaction involves a p (pi) system, the number 2 is the number of electrons in the reacting p system, and the letter s stands for suprafacial: if one lobe of a p orbital is considered as the top face, while the other lobe is called the bottom face, then a suprafacial interaction is one in which the bonding occurs on the same face at both ends of the p system. The orientation of the interacting π systems depicted in Figure 3 implies something about the stereochemical outcome of the reaction. Consider the p2s + p2s cycloaddition shown in Equation 3, where the two new sigma bonds are shown in red. Because the interacting π systems approach each other in a "head-to-head" fashion, the relative stereochemistry is the same in the products as it is in the reactants, i.e. the two carboxyl groups that are "cis" to each other in the alkene are "cis" to each other in the products. The alternative to a suprafacial interaction is an antarafacial one: in this case bonding occurs on the top face at one end of the π system and on the bottom face at the other. We will consider these ideas again when we discuss the Diels-Alder reaction.
In classifying an orbital as symmetric (S) or antisymmetric (A) with respect to a symmetry plane, it is necessary to compare the phase of the lobes of the orbitals on each side of the symmetry plane: the phase may be indicated by shading or by labeling one lobe of an orbital + and the other -. Molecular orbital theory describes the formation of the product in reaction 2 in terms of linear combinations of the molecular orbitals of the reactants. Each reactant has two pi molecular orbitals of interest, ψ1 and ψ2. There are four possible combinations of these pi orbitals: ψ1A + ψ1B, ψ1A - ψ1B, ψ2A + ψ2B, and ψ2A - ψ2B, where the subscripts A and B are used to distinguish one ethene from the other. These combinations transform into the four sigma molecular orbitals in the product: σ1A + σ1B, σ1A - σ1B, σ2A + σ2B, and σ2A - σ2B. The diagram depicting the correlation of the reactant and product orbitals is shown in Figure 4. The numbers 1 and 2 in the diagram refer to the symmetry planes 1 and 2 in Figure 3.

Figure 4 An Orbital Correlation Diagram

The horizontal dashed line in the figure represents the energy level of an isolated p orbital. The most important feature to note about this admittedly complex diagram is that the linear combination ψ1A - ψ1B of the reactants correlates with, i.e. has the same symmetry, SA, as the σ2A + σ2B combination in the product. This is the central postulate of orbital symmetry theory: orbitals in the reactants must transform into product orbitals that have the same symmetry. Since the σ2A + σ2B orbital in the product is a high energy orbital, there must be a high activation energy for the formation of the product. Hence reaction 2 is said to be symmetry forbidden. Since orbital correlation diagrams for reactions involving more atoms are necessarily more complex, we will not deal further with this approach to pericyclic reactions. Rather we will examine an alternative theory known as Frontier Orbital Theory.
### Frontier Orbital Theory

According to Frontier Orbital Theory it is possible to determine if a pericyclic reaction is allowed or forbidden by simply considering the symmetry relationship of the frontier orbitals of the reactants. The frontier orbitals are the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO). The interaction between these orbitals, a so-called HOMO-LUMO interaction, is a concept that is similar to Lewis acid-Lewis base chemistry, which involves the interaction of a filled orbital of the base with an empty orbital of the acid. According to Frontier Orbital Theory, a pericyclic reaction is allowed when the HOMO of one reactant has the same symmetry as the LUMO of the other. We will now return to reaction 1, the Diels-Alder reaction, to illustrate this idea. Ethene has two pi orbitals which we will label ψ1E and ψ2E, the latter being the LUMO. 1,3-Butadiene has four pi orbitals, ψ1B, ψ2B, ψ3B, and ψ4B, with ψ2B the HOMO. Figure 5 shows an idealized geometry for the approach of these frontier orbitals in parallel planes. The blue lines highlight the incipient overlap of the terminal lobes of the pi system of the diene with the orbitals of the alkene. The plane shown in red is a symmetry plane that bisects both molecules. Both the LUMO of ethene and the HOMO of 1,3-butadiene are antisymmetric (A) with respect to this plane. Since the HOMO-LUMO interaction shown in Figure 5 involves orbitals of the same symmetry, the reaction is allowed.

Figure 5 Frontier Orbital Interactions

Using the convention we discussed earlier, the Diels-Alder reaction is classified as a p2s + p4s cycloaddition. The alkene is a 2p electron system, while the diene is a 4p electron system. As the blue lines in Figure 5 indicate, the interaction between these two p systems occurs on the top face of each lobe of the alkene and the bottom face of each lobe of the diene. Hence, the interaction is suprafacial on both components.
As we will see shortly, it is useful to regard the Diels-Alder reaction as a reference point for allowed cycloadditions. Returning for a moment to the relationship between the relative stereochemistry of the reactants and products, Equation 4 presents a typical result for a p2s + p4s cycloaddition. Note that, as was the case with the p2s + p2s cycloaddition summarized in Equation 3, the p2s + p4s cycloaddition results in retention of configuration.

### Electronic Complementarity

One of the values of the Diels-Alder reaction is its generality. It works well with a large variety of substituents on both the diene and the dienophile (alkene). However, the reaction works best when the two reactants are electronically complementary, which is to say that one of them is electron rich, while the other is electron poor. Fortunately we have already considered the effect of substituents on the electron density of a π system in our discussion of substituent effects in electrophilic aromatic substitution reactions. Those groups that are activators in electrophilic aromatic substitution reactions exert their activating effect by increasing the electron density in the π system of the aromatic ring, i.e. making it electron rich relative to benzene. Deactivators, on the other hand, reduce the electron density of the π system, making it electron poor relative to benzene. In the Diels-Alder reaction our reference diene is 1,3-butadiene, while our reference dienophile is ethene (see Equation 1). In order to use any of the three theories discussed in this topic effectively, you must be able to draw the molecular orbitals of the reactants. This is straightforward, and we will illustrate the approach with the four MOs of 1,3-butadiene. These MOs are formed by taking linear combinations of the p orbital on each of the four sigma bonded carbons. The lowest energy MO, ψ1 = p1 + p2 + p3 + p4, has no nodes.
Remember: a node is that point where the phase of a standing wave changes from positive to negative. The next orbital, ψ2 = p1 + p2 - p3 - p4, has 1 node; ψ3 = p1 - p2 - p3 + p4 has 2 nodes; and ψ4 = p1 - p2 + p3 - p4 has 3 (a node occurs wherever the sign changes between adjacent p orbitals). The nodes in each MO are placed symmetrically. Figure 6 offers three representations of the MOs of 1,3-butadiene along with their classification as either symmetric (S) or antisymmetric (A) with respect to a symmetry plane that is perpendicular to the sigma bonded framework and bisects the C-2-C-3 bond. The red dots in MOs ψ2-ψ4 depict nodes. Notice that the symmetry of the orbitals alternates S-A-S-A as you go from one orbital to the next.

Figure 6 Assessing MOs

### Aromatic Transition State Theory

How can you decide if a pericyclic reaction involves an aromatic transition state? In our discussion of aromaticity we classified structures as aromatic or anti-aromatic based upon Hückel's rule: an aromatic molecule contains a cyclic array of orbitals in which there are 4n + 2 electrons. Consider the description of the Diels-Alder reaction shown in Figure 7.

Figure 7 An Aromatic Transition State

The dashed red lines indicate bonds that are being formed in the transition state, while the dashed blue lines depict those that are breaking. Since the reaction involves the 4 pi electrons of 1,3-butadiene and the 2 pi electrons of ethene, there must be 6 electrons involved in the transition state, which, therefore, is aromatic because it conforms to Hückel's rule where n = 1. Note that this analysis assumes the geometry shown in Figure 5.

### The Fine Points

All of the theories just described involve two basic assumptions:

1. The orbitals overlap suprafacially on both p systems as shown in Figures 3 and 5.
2. The reaction is thermally induced.

Given these assumptions, we can state the following:

• A thermally induced p2s + p4s cycloaddition reaction is allowed.
• A thermally induced p2s + p2s cycloaddition reaction is forbidden.

The rules are reversed when the reaction is photochemically induced:

• A photochemically induced p2s + p4s cycloaddition reaction is forbidden.
• A photochemically induced p2s + p2s reaction is allowed.

These rules are summarized in Table 1. Note that changing just one of the variables, i.e. faciality, energy source, or number of electrons (two electrons at a time), changes an allowed reaction to a forbidden one and vice versa.

Table 1 Pericyclic Reaction Rules

| Designation | Thermal | Photochemical |
| --- | --- | --- |
| p2s + p2s | forbidden | allowed |
| p2s + p4s | allowed | forbidden |
| p2s + p2a | allowed | forbidden |
| p2s + p4a | forbidden | allowed |
| p2a + p2a | forbidden | allowed |
| p2a + p4a | allowed | forbidden |

Before we move on, it is worthwhile to clarify the implications of the words allowed and forbidden. An allowed reaction is simply one with a low activation energy relative to some other pathway, while a forbidden reaction is a process for which there is a significant activation energy. In terms of transition states, an allowed reaction proceeds via an aromatic transition state, while a forbidden reaction does not occur because the transition state would be "anti-aromatic."

### Solar Energy Storage?

Figure 7 outlines a strategy for conversion of solar energy to chemical energy in a way that offers potential for the design of a solar energy storage device.

Figure 7 Pericyclic Reactions to the Rescue

The idea begins with a thermally allowed Diels-Alder reaction in which the 2-pi electron component is an alkyne rather than an alkene. The product, 1, is a cyclohexadiene. Irradiation of this diene should promote an allowed intramolecular 2+2 cycloaddition, leading to the formation of the highly strained tricyclic ring system 2. Since 2 cannot revert to 1 by a thermally allowed process, it is trapped in this high energy state. Recall that our definition of a forbidden reaction merely means that it is a process that has a high activation energy.
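The selection rules summarized in Table 1 can be captured in a few lines of code. The following Python sketch is not from the original text; it simply encodes the generalized rule (total electron count modulo 4, parity of antarafacial components, and thermal vs. photochemical activation) and reproduces the table's verdicts:

```python
def cycloaddition_allowed(total_pi_electrons, n_antarafacial, photochemical=False):
    """Selection rule distilled from Table 1 (an illustrative encoding, not the
    author's notation): a thermal cycloaddition is allowed when it involves
    4n+2 electrons and an even number of antarafacial components, or 4n
    electrons and an odd number; photochemical activation reverses the verdict."""
    huckel_count = (total_pi_electrons % 4 == 2)    # 4n + 2 electrons?
    odd_antara = (n_antarafacial % 2 == 1)
    thermally_allowed = huckel_count != odd_antara  # exclusive or
    return not thermally_allowed if photochemical else thermally_allowed

# Reproduce the thermal column of Table 1:
print(cycloaddition_allowed(4, 0))  # p2s + p2s -> False (forbidden)
print(cycloaddition_allowed(6, 0))  # p2s + p4s -> True  (allowed)
print(cycloaddition_allowed(4, 1))  # p2s + p2a -> True  (allowed)
```

Running the function over all six rows of Table 1, for both thermal and photochemical activation, reproduces every entry, including the "changing one variable flips the verdict" observation made above.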
This suggests that the reverse reaction, 2 ---> 1, might be enabled with an appropriate catalyst. The requirements to implement this approach, then, are:

1. Selection of appropriate substituents, G, to ensure that 1 absorbs solar radiation
2. Design of a catalyst to promote the regeneration of 1 from 2

© M. EL-Fellah, Chemistry Department, Benghazi University
Norm of an inner product space

I want to know why the norm of an inner product space is defined by $$\|v\| = \langle v | v \rangle^{1/2}$$ I would assume it's not arbitrary, but I don't see anything that would lead it to be defined in this way. What would the consequences be if this definition was changed? -

Your question is unclear. On an inner product space, there is a priori no norm. You can define a norm by that formula. – wildildildlife Oct 12 '12 at 22:53
I guess what I'm getting at is this: why is it defined that way? – user1520427 Oct 12 '12 at 22:54
You want ordinary Euclidean distance as a special case, which you don't get if you use a different definition. – Qiaochu Yuan Oct 12 '12 at 23:05
In one word: Pythagorean theorem. – Berci Oct 12 '12 at 23:54
Because it satisfies the requirements of a norm: $\|u\| \geq 0$ for all $u$, with equality iff $u = 0$; $\|au\| = |a| \|u\|$ for all "scalars" $a$ and "vectors" $u$; and (triangle inequality) $\|u + v\| \leq \|u\| + \|v\|$ for all $u, v$ in the inner product space. I think those are all the properties of a norm - it is surprisingly difficult to find a self-contained definition on the Web. – Stefan Smith Oct 13 '12 at 1:25

The 2-norm $$\left|v\right| = \sqrt{\left<v\big| v\right>}$$ is the most common because it yields the length of a vector $v \in R^n$, with the inner product being the dot product. Any norm satisfying the definition is valid, however, such as the more general p-norm. -

One reason (though probably not the main one) is that if it wasn't defined this way it would not always agree with the Euclidean conception of distance. Imagine you are asked for the points within a certain distance from one given point and such that the three of them are on a straight line. Intuitively, there are two solutions (one on each side of the given point).
If the norm were the inner product raised to the power one, then the answer would be a single point, and while this might look OK in an abstract sense, it would not agree with "reality", if you want to call it that way. I mean, you would be missing half the solution to the problem. -

Ok cool, so abstractly there's no real reason to define it that way, but in doing so it agrees with Euclidean distance as Qiaochu Yuan said? – user1520427 Oct 12 '12 at 23:16
Yeah I think so, but from the point of view of Functional Analysis there are still powerful reasons to define it that way. As somebody has already commented, if $|| \cdot ||$ wants to be a norm it must retain the properties of a norm. – busman Oct 12 '12 at 23:18

What you wrote is usually the definition of $\|v\|$. If you want it to be derived, what is your definition of $\|v\|$? -

That's a good point and I guess it's a weird question to ask. But what I'm trying to get at is why do we use $\langle v | v \rangle ^ {1/2}$ and not, I don't know, $\langle v | v \rangle ^ 1$ – user1520427 Oct 12 '12 at 22:53
@user1520427: if you don't take the square root, it does not define a norm. – wildildildlife Oct 12 '12 at 23:13
Yeah but that's pretty circular reasoning.. – user1520427 Oct 12 '12 at 23:14
No, it's not circular reasoning. If you try to define a norm by $\|v\|=\langle v,v\rangle$, you get that $\|2v\|=4\|v\|$. As the idea behind a norm is that it is a distance, you expect it to respect scaling, i.e. $\|2v\|$ should be $2\|v\|$. And it doesn't satisfy the triangle inequality either. So, as @wildildildlife said, it is not a norm. – Martin Argerami Oct 12 '12 at 23:28
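Martin Argerami's point is easy to verify numerically. The short Python sketch below (mine, not from the thread) compares $\langle v, v \rangle^{1/2}$ with $\langle v, v \rangle^{1}$ on vectors in the plane and shows that the latter violates both absolute homogeneity and the triangle inequality:

```python
import math

def inner(u, v):
    # standard dot product on R^n
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    # the usual definition: <v, v>^(1/2)
    return math.sqrt(inner(v, v))

def not_a_norm(v):
    # the proposed alternative: <v, v>^1
    return inner(v, v)

u = [1.0, 0.0]

# Homogeneity: ||2u|| should equal 2||u||.
print(norm([2 * a for a in u]), 2 * norm(u))              # 2.0 2.0 -> OK
print(not_a_norm([2 * a for a in u]), 2 * not_a_norm(u))  # 4.0 2.0 -> fails

# Triangle inequality: ||u + u|| <= ||u|| + ||u||.
print(not_a_norm([a + a for a in u]), 2 * not_a_norm(u))  # 4.0 2.0 -> fails
```

The square root is exactly what restores the linear scaling $\|2v\| = 2\|v\|$ that a distance should have.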
# QUESTION

All the red face cards are removed from a deck of playing cards. The remaining cards are well shuffled and then a card is drawn at random from them. Find the probability that the card drawn is:
i) A red card
ii) A face card
iii) A card of clubs

Hint: Face cards are the king, queen and the jack cards. We will have to find the total outcomes and favourable outcomes in each case in order to determine the probability. We know that there are 26 red cards in a deck of 52 cards and 6 red face cards are removed from the deck.

In a deck of playing cards there are 52 cards. When all the red face cards are removed,
Total no. of cards left = 52 – 6 = 46
Total outcomes = 46.
$Probability = \dfrac{\text{No. of favourable outcomes}}{\text{Total no. of outcomes}}$

i) Let (R) be the event of getting a red card.
Total no. of red cards in the deck of 52 cards = 13 + 13 = 26
We removed 6 red face cards, so
Total red cards left = 26 – 6 = 20
No. of favourable outcomes in R = 20
$\Rightarrow P(R) = \dfrac{20}{46} = \dfrac {10}{23}$
$\therefore$ The probability of getting a red card = $\dfrac{10}{23}$

(ii) Let (F) be the event of getting a face card. We know that there are a total of 12 face cards (6 red and 6 black). After removing 6 red face cards,
No. of face cards left = 12 – 6 = 6
No. of favourable outcomes in F = 6
$\Rightarrow P(F) = \dfrac{6}{46} = \dfrac {3}{23}$
$\therefore$ The probability of getting a face card = $\dfrac{3}{23}$

(iii) Let (C) be the event of getting a card of clubs. The cards of clubs are black in colour, so nothing has been removed from them.
Total no. of club cards = 13
No. of favourable outcomes = 13
$\Rightarrow P(C) = \dfrac{13}{46}$
$\therefore$ The probability of getting a club card = $\dfrac{13}{46}$

Note: In these types of questions, firstly we have to write the sample space of the experiment based on the conditions provided, then find the favourable no. of outcomes and thereby find the probability.
All outcomes must be considered to avoid mistakes.
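The three answers can be double-checked by brute-force enumeration of the reduced deck. This Python sketch (not part of the original solution) builds the 46-card deck and counts favourable outcomes with exact fractions:

```python
from fractions import Fraction
from itertools import product

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['hearts', 'diamonds', 'clubs', 'spades']  # hearts and diamonds are red
face_ranks = {'J', 'Q', 'K'}
red_suits = {'hearts', 'diamonds'}

# Build the deck with all red face cards removed:
deck = [(r, s) for r, s in product(ranks, suits)
        if not (r in face_ranks and s in red_suits)]

total = len(deck)                                                # 46
p_red = Fraction(sum(s in red_suits for _, s in deck), total)    # 10/23
p_face = Fraction(sum(r in face_ranks for r, _ in deck), total)  # 3/23
p_club = Fraction(sum(s == 'clubs' for _, s in deck), total)     # 13/46
print(total, p_red, p_face, p_club)
```

`Fraction` reduces automatically, so 20/46 prints as 10/23 and 6/46 as 3/23, matching the worked answers, while 13/46 is already in lowest terms.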
## 3.2 Ordinary Least Squares (OLS)

In this chapter we will present a general method of ordinary least squares (OLS), which, under certain conditions, produces estimates with desirable statistical properties. Let our (random) samples of $$\epsilon$$, $$X$$ and $$Y$$ be: $$\boldsymbol{\varepsilon} = (\epsilon_1, ..., \epsilon_N)^\top$$, $$\mathbf{X} = (X_1, ..., X_N)^\top$$, and $$\mathbf{Y} = (Y_1, ..., Y_N)^\top$$.

### 3.2.1 Key assumptions in Regression Analysis

The required conditions are:

(UR.1) The Data Generating Process (DGP), or in other words, the population, is described by a linear (in terms of the coefficients) model: $Y = \beta_0 + \beta_1 X + \epsilon$ Another example of a linear model is: $\log (Y) = \beta_0 + \beta_1 \dfrac{1}{X} + \epsilon \iff U = \beta_0 + \beta_1 V + \epsilon,\ \text{where}\ U = \log(Y),\ V = \dfrac{1}{X}$ In a linear model $$\epsilon$$ and $$Y$$ are always dependent, thus $$\mathbb{C}{\rm ov} (Y, \epsilon) \neq 0$$. However, $$\epsilon$$ may (or may not) depend on $$X$$. This leads us to further requirements for $$\epsilon$$.

(UR.2) The error term $$\epsilon$$ has an expected value of zero, given any value of the explanatory variable: $\mathbb{E}(\epsilon_i| X_j) = 0,\ \forall i,j = 1,...,N$ This means that whatever the observations $$X_j$$ are, the errors $$\epsilon_j$$ are on average 0. This also implies that $$\mathbb{E}(\epsilon_i X_i) = 0$$ and $$\mathbb{E}(\epsilon_i) = \mathbb{E}\left( \mathbb{E}\left( \epsilon_i | X_i\right) \right)=0$$. An example would be the case where $$X_j$$ and $$\epsilon_i$$ are independent r.v.'s and $$\mathbb{E}(\epsilon_i) = 0$$ $$\forall i,j = 1,...,N$$.
Furthermore, this property implies that: $\mathbb{E}(Y_i|X_i) = \beta_0 + \beta_1 X_i$ On the other hand, if $$\mathbb{C}{\rm ov}(X_i, \epsilon_i) \neq 0$$, then expressing the covariance as $$\mathbb{C}{\rm ov}(X_i, \epsilon_i) = \mathbb{E}\left[ (X_i - \mathbb{E}(X_i))(\epsilon_i - \mathbb{E}(\epsilon_i))\right] = \mathbb{E}\left[ (X_i - \mathbb{E}(X_i))\epsilon_i\right] = \mathbb{E} \left[ (X_i - \mathbb{E}(X_i))\mathbb{E}(\epsilon_i | X_i) \right] \neq 0$$, which implies that $$\mathbb{E}(\epsilon_i| X_j) \neq 0$$, i.e. assumption (UR.2) does not hold.

(UR.3) The error term $$\epsilon$$ has the same variance given any value of the explanatory variable (i.e. homoskedasticity): $\mathbb{V}{\rm ar} (\epsilon_i | \mathbf{X} ) = \sigma^2_\epsilon,\ \forall i = 1,...,N$ and the error terms are not correlated across observations (i.e. no autocorrelation): $\mathbb{C}{\rm ov} (\epsilon_i, \epsilon_j) = 0,\ i \neq j$ This implies that the conditional variance-covariance matrix of a vector of disturbances $$\boldsymbol{\varepsilon}$$ is a unit (or identity) matrix, times a constant, $$\sigma^2_\epsilon$$: $\mathbb{V}{\rm ar}\left( \boldsymbol{\varepsilon} | \mathbf{X} \right) = \begin{bmatrix} \mathbb{V}{\rm ar} (\epsilon_1) & \mathbb{C}{\rm ov} (\epsilon_1, \epsilon_2) & ... & \mathbb{C}{\rm ov} (\epsilon_1, \epsilon_N) \\ \mathbb{C}{\rm ov} (\epsilon_2, \epsilon_1) & \mathbb{V}{\rm ar} (\epsilon_2) & ... & \mathbb{C}{\rm ov} (\epsilon_2, \epsilon_N) \\ \vdots & \vdots & \ddots & \vdots \\ \mathbb{C}{\rm ov} (\epsilon_N, \epsilon_1) & \mathbb{C}{\rm ov} (\epsilon_N, \epsilon_2) & ...
& \mathbb{V}{\rm ar} (\epsilon_N) \end{bmatrix} = \sigma^2_\epsilon \mathbf{I}$ This means that the disturbances $$\epsilon_i$$ and $$\epsilon_j$$ are independent $$\forall i \neq j$$ and independent of $$\mathbf{X}$$, thus: $\mathbb{C}{\rm ov} (\epsilon_i, \epsilon_j | \mathbf{X} ) = \mathbb{E} (\epsilon_i \epsilon_j | \mathbf{X} ) = 0,\ \forall i \neq j.$ (UR.4) (optional) The residuals are normal: $\boldsymbol{\varepsilon} | \mathbf{X} \sim \mathcal{N} \left( \mathbf{0}, \sigma^2_\epsilon \mathbf{I} \right)$ This optional condition simplifies some statistical properties of the parameter estimators. We can combine the requirements and restate them as the following: The Data Generating Process $$Y = \beta_0 + \beta_1 X + \epsilon$$ satisfies (UR.2) and (UR.3), if: (conditionally on all $$\mathbf{X}$$’s) $$\mathbb{E} (\epsilon_i) = 0$$, $$\mathbb{V}{\rm ar}(\epsilon_i) = \sigma^2_\epsilon$$ and $$\mathbb{C}{\rm ov} (\epsilon_i, \epsilon_j) = 0$$, $$\forall i \neq j$$ and $$\mathbb{C}{\rm ov} (\epsilon_i, X_j) = \mathbb{E} (\epsilon_i X_j) = 0$$, $$\forall i,j$$. The linear relationship $$Y = \beta_0 + \beta_1 X + \epsilon$$ is also referred as the regression line with an intercept $$\beta_0$$ and a slope $$\beta_1$$. From (UR.2) we have that the regression line coincides with the expected value of $$Y_i$$, given $$X_i$$. In general, we do not know the true coefficient $$\beta_0$$ and $$\beta_1$$ values but we would like to estimate them from our sample data, which consists of points $$(X_i, Y_i)$$, $$i = 1,...,N$$. 
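The identity structure of the variance-covariance matrix in (UR.3) can be illustrated with a quick Monte Carlo check (an illustration of mine, not part of the text): drawing many independent disturbance vectors with i.i.d. components and computing their sample covariance matrix should recover approximately $$\sigma^2_\epsilon \mathbf{I}$$:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 5            # length of the disturbance vector
sigma = 1.0      # standard deviation of each epsilon_i

# 200,000 independent draws of the N-dimensional disturbance vector:
eps = rng.normal(loc=0.0, scale=sigma, size=(200_000, N))

# Sample variance-covariance matrix (rows are observations):
V = np.cov(eps, rowvar=False)
print(np.round(V, 2))  # approximately sigma^2 * I: ones on the diagonal, zeros elsewhere
```

Any noticeable off-diagonal entries, or unequal diagonal entries, would instead signal autocorrelation or heteroskedasticity, i.e. a violation of (UR.3).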
We would also like to use the data in the best way possible to obtain an estimate of the regression: $\widehat{Y} = \widehat{\beta}_0 + \widehat{\beta}_1 X$ ### 3.2.2 Derivation of the Ordinary Least Squares Estimator Our random sample $$(X_1, Y_1), ..., (X_N, Y_N)$$ comes from the data generating process $$Y = \beta_0 + \beta_1 X + \epsilon$$, which we can rewrite as: $Y_i = \beta_0 + \beta_1 X_i + \epsilon_i,\ i = 1,...,N$ We want to use the data to obtain estimates of the intercept $$\beta_0$$ and the slope $$\beta_1$$. To achieve this, we will present a number of methods for deriving the estimates. We will assume that the process satisfies conditions (UR.1) - (UR.3). Furthermore, we will use the following generated data sample for the estimation methods outlined in this section: Example 3.1 Below we will assume that our data generating process satisfies assumptions (UR.1) - (UR.4) with the following parameters: • $$\beta_0 = 1$$, $$\beta_1 = 0.5$$ • $$X_i = i-1$$, $$\epsilon_i \sim \mathcal{N}(0, 1^2), i = 1,...,51$$ set.seed(234) # Set the coefficients: N = 50 beta_0 = 1 beta_1 = 0.5 # Generate sample data: x <- 0:N e <- rnorm(mean = 0, sd = 1, n = length(x)) y <- beta_0 + beta_1 * x + e # Plot the data plot(x, y) import numpy as np import pandas as pd import matplotlib.pyplot as plt np.random.seed(234) # Set the coefficients: N = 50 beta_0 = 1 beta_1 = 0.5 # Generate sample data: x = np.arange(start = 0, stop = N + 1, step = 1) #x = list(range(0, N + 1)) # not np.ndarray e = np.random.normal(loc = 0, scale = 1, size = len(x)) y = beta_0 + beta_1 * x + e # Plot the data _ = plt.figure(num = 0, figsize = (10, 8)) _ = plt.plot(x, y, linestyle = "None", marker = "o", markerfacecolor = 'none') plt.show() #### 3.2.2.1 The Method of Moments (MM) As we have seen before, a consequence of the (UR.2) condition is that $$\mathbb{E}(\epsilon) = 0$$ as well as $$\mathbb{C}{\rm ov} (\epsilon, X) = \mathbb{E} \left(\epsilon X \right) -
\mathbb{E}(\epsilon)\mathbb{E}(X) = \mathbb{E} \left(\epsilon X \right) = 0$$. Using these properties and the linear relationship between $$Y$$ and $$X$$, we have the following relations: $\begin{cases} \mathbb{E}(\epsilon) &= 0 \\ \mathbb{E}(\epsilon X) &= 0 \\ \epsilon &= Y - \beta_0 - \beta_1 X \end{cases}$ which can be written as: $\begin{cases} \mathbb{E}\left[Y - \beta_0 - \beta_1 X \right] &= 0 \\\\ \mathbb{E}\left[X \left(Y - \beta_0 - \beta_1 X \right) \right] &= 0 \end{cases}$ We have two unknown parameters and two equations, which we can use to estimate the unknown parameters. Going back to our random sample, we can estimate the unknown parameters by replacing each expectation with its sample counterpart. Then, we want to choose the estimates $$\widehat{\beta}_0$$ and $$\widehat{\beta}_1$$ such that: $$$\begin{cases} \dfrac{1}{N} \sum_{i = 1}^N \left[Y_i - \widehat{\beta}_0 - \widehat{\beta}_1 X_i \right] &= 0 \\\\ \dfrac{1}{N} \sum_{i = 1}^N \left[X_i \left(Y_i - \widehat{\beta}_0 - \widehat{\beta}_1 X_i \right) \right] &= 0 \end{cases} \tag{3.3}$$$ This is an example of the method of moments estimation approach.
By setting: \begin{aligned} \overline{Y} = \dfrac{1}{N} \sum_{i = 1}^N Y_i,\quad \overline{X} = \dfrac{1}{N} \sum_{i = 1}^N X_i \end{aligned} we can rewrite the first equation of (3.3) as: $$$\widehat{\beta}_0 = \overline{Y} - \widehat{\beta}_1 \overline{X} \tag{3.4}$$$ and plug it into the second equation of (3.3) to get: $\sum_{i = 1}^N X_i\left( Y_i - \overline{Y} + \widehat{\beta}_1 \overline{X}- \widehat{\beta}_1 X_i \right) = 0 \Longrightarrow \sum_{i = 1}^N X_i\left( Y_i - \overline{Y} \right) = \widehat{\beta}_1 \sum_{i = 1}^N X_i\left( X_i - \overline{X} \right)$ which gives us: $\widehat{\beta}_1 = \dfrac{\sum_{i = 1}^N X_i\left( Y_i - \overline{Y} \right)}{\sum_{i = 1}^N X_i\left( X_i - \overline{X} \right)}$ Using the following properties: \begin{aligned} \sum_{i = 1}^N X_i\left( X_i - \overline{X} \right) &= \sum_{i = 1}^N \left( X_i - \overline{X} \right)^2 \\ \sum_{i = 1}^N X_i\left( Y_i - \overline{Y} \right) &= \sum_{i = 1}^N \left( X_i - \overline{X} \right) \left( Y_i - \overline{Y} \right) \end{aligned} yields the following estimate of $$\beta_1$$: $$$\widehat{\beta}_1 = \dfrac{\sum_{i = 1}^N \left( X_i - \overline{X} \right) \left( Y_i - \overline{Y} \right)}{\sum_{i = 1}^N \left( X_i - \overline{X} \right)^2} = \dfrac{\widehat{\mathbb{C}{\rm ov}}(X, Y)}{\widehat{\mathbb{V}{\rm ar}}(X)} \tag{3.5}$$$ It is important to note that by this construction: • The sample mean of the residuals is always zero. • The residuals are uncorrelated with $$X$$. The expectation is also referred to as the first moment, which we then estimate from a sample, hence the name - Method of Moments. There is an alternative way of approaching our task: minimizing the sum of the squared errors.
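To make the estimator concrete, here is a minimal self-contained sketch (simulated data with an arbitrary seed, so the numbers differ from those in Example 3.1) that computes the Method of Moments estimates (3.4) and (3.5) and verifies the two sample moment conditions in (3.3):

```python
import numpy as np

np.random.seed(123)
# Simulate the DGP from Example 3.1: Y = 1 + 0.5 X + e
x = np.arange(0, 51)
y = 1 + 0.5 * x + np.random.normal(size = len(x))

# Method of Moments: beta_1 = Cov(X, Y) / Var(X), beta_0 = mean(Y) - beta_1 * mean(X)
x_bar, y_bar = x.mean(), y.mean()
b1 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
b0 = y_bar - b1 * x_bar

# By construction, the residuals satisfy both moment conditions in (3.3):
resid = y - b0 - b1 * x
print(round(b0, 4), round(b1, 4))
print(np.isclose(resid.sum(), 0), np.isclose((x * resid).sum(), 0))
```

The estimates land close to the true values $$\beta_0 = 1$$ and $$\beta_1 = 0.5$$, and the two moment conditions hold up to floating-point error regardless of the particular sample drawn.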
#### 3.2.2.2 OLS - System of Partial Derivatives Method Suppose that we choose $$\widehat{\beta}_0$$ and $$\widehat{\beta}_1$$ to minimize the sum of squared residuals: $\text{RSS} = \sum_{i = 1}^N \widehat{\epsilon}_i^2 = \sum_{i = 1}^N \left( Y_i - \widehat{\beta}_0 - \widehat{\beta}_1 X_i \right)^2$ The term Ordinary Least Squares (OLS) comes from the fact that these estimates minimize the sum of squared residuals. To minimize $$\text{RSS}$$, we differentiate it with respect to $$\widehat{\beta}_0$$ and $$\widehat{\beta}_1$$ and equate the derivatives to zero, which gives the following system of equations: $$$\begin{cases} \dfrac{\partial \text{RSS}}{\partial \widehat{\beta}_0} &= -2\sum_{i = 1}^N \left( Y_i - \widehat{\beta}_0 - \widehat{\beta}_1 X_i \right) = 0\\ \dfrac{\partial \text{RSS}}{\partial \widehat{\beta}_1} &= -2\sum_{i = 1}^N X_i\left( Y_i - \widehat{\beta}_0 - \widehat{\beta}_1 X_i \right) = 0 \end{cases} \tag{3.6}$$$ If we multiply both equations by $$-\dfrac{1}{2}$$, we get the same equation system as in (3.3). So the solution here will have the same expression as for the Method of Moments estimators, namely: $\begin{cases} \widehat{\beta}_1 = \dfrac{\sum_{i = 1}^N \left( X_i - \overline{X} \right) \left( Y_i - \overline{Y} \right)}{\sum_{i = 1}^N \left( X_i - \overline{X} \right)^2} = \dfrac{\widehat{\mathbb{C}{\rm ov}}(X, Y)}{\widehat{\mathbb{V}{\rm ar}}(X)} \\\\ \widehat{\beta}_0 = \overline{Y} - \widehat{\beta}_1 \overline{X} \end{cases}$ $$\widehat{\beta}_1$$ and $$\widehat{\beta}_0$$ derived in this way are called the OLS estimates of the linear regression parameters. Below we continue our example and estimate the parameters from our generated sample data: beta_1_est <- cov(x, y) / var(x) beta_0_est <- mean(y) - beta_1_est * mean(x) print(paste0("Estimated beta_0 = ", beta_0_est, ". True beta_0 = ", beta_0)) ## [1] "Estimated beta_0 = 0.833740099062318.
True beta_0 = 1" print(paste0("Estimated beta_1 = ", beta_1_est, ". True beta_1 = ", beta_1)) ## [1] "Estimated beta_1 = 0.504789456221484. True beta_1 = 0.5" beta_1_est = np.cov(x, y, bias = True)[0][1] / np.var(x) beta_0_est = np.mean(y) - beta_1_est * np.mean(x) print("Estimated beta_0 = " + str(beta_0_est) + ". True beta_0 = " + str(beta_0)) ## Estimated beta_0 = 0.8385362094098401. True beta_0 = 1 print("Estimated beta_1 = " + str(beta_1_est) + ". True beta_1 = " + str(beta_1)) ## Estimated beta_1 = 0.5099221954865624. True beta_1 = 0.5 Note that bias = True calculates the population covariance (with division by $$N$$), whereas the default bias = False calculates the sample covariance (with $$N-1$$ instead of $$N$$), and the variance function var() calculates the population variance by default. Since we are dividing the covariance by the variance, both need to be calculated with the same convention - either both for the population, or both for the sample: print("With bias = False (default) the estimate of beta_1 is biased: \n\t" + str(np.cov(x, y)[0][1] / np.var(x))) ## With bias = False (default) the estimate of beta_1 is biased: ## 0.5201206393962936 print("With sample variance the estimate of beta_1 is unbiased: \n\t" + str(np.cov(x, y)[0][1] / np.var(x, ddof = 1))) ## With sample variance the estimate of beta_1 is unbiased: ## 0.5099221954865624 We can further re-write the estimates in matrix notation. #### 3.2.2.3 OLS - The Matrix Method It is convenient to use matrices when solving equation systems.
Looking at our random sample equations: $\begin{cases} Y_1 &= \beta_0 + \beta_1 X_1 + \epsilon_1 \\ Y_2 &= \beta_0 + \beta_1 X_2 + \epsilon_2 \\ \vdots \\ Y_N &= \beta_0 + \beta_1 X_N + \epsilon_N \end{cases}$ which we can re-write in the following matrix notation: $\begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_N \end{bmatrix} = \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_N \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_N \end{bmatrix}$ or, in a more compact form: $\mathbf{Y} = \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\varepsilon}$ where $$\mathbf{Y} = [Y_1,...,Y_N]^\top$$, $$\boldsymbol{\varepsilon} = [\epsilon_1,...,\epsilon_N]^\top$$, $$\boldsymbol{\beta} =[ \beta_0, \beta_1]^\top$$ and $$\mathbf{X} = \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_N \end{bmatrix}$$. As in the previous case, we want to minimize the sum of squared residuals: \begin{aligned} RSS(\boldsymbol{\beta}) &= \boldsymbol{\varepsilon}^\top \boldsymbol{\varepsilon} \\ &= \left( \mathbf{Y} - \mathbf{X} \boldsymbol{\beta} \right)^\top \left( \mathbf{Y} - \mathbf{X} \boldsymbol{\beta} \right) \\ &= \mathbf{Y} ^\top \mathbf{Y} - \boldsymbol{\beta}^\top \mathbf{X}^\top \mathbf{Y} - \mathbf{Y}^\top \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\beta}^\top \mathbf{X}^\top \mathbf{X} \boldsymbol{\beta} \rightarrow \min_{\beta_0, \beta_1} \end{aligned} After using some matrix calculus and equating the partial derivative to zero: $\dfrac{\partial RSS(\widehat{\boldsymbol{\beta}})}{\partial \widehat{\boldsymbol{\beta}}} = -2 \mathbf{X}^\top \mathbf{Y} + 2 \mathbf{X}^\top \mathbf{X} \widehat{\boldsymbol{\beta}} = 0$ gives us the OLS estimator: $$$\widehat{\boldsymbol{\beta}} = \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \mathbf{Y} \tag{3.7}$$$ Note that with the matrix notation we estimate both parameters at the same time, whereas with the Method of Moments, or by taking 
partial derivatives from the RSS and solving the equation system, we needed to first estimate $$\widehat{\beta}_1$$ via (3.5) and then plug it into (3.4) in order to estimate $$\widehat{\beta}_0$$. In the univariate regression case, we have that: \begin{aligned} \mathbf{X}^\top \mathbf{X} &= \begin{bmatrix} 1 & 1 & ... & 1\\ X_1 & X_2 & ... & X_N \end{bmatrix} \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_N \end{bmatrix} = \begin{bmatrix} N & \sum_{i = 1}^N X_i \\ \sum_{i = 1}^N X_i & \sum_{i = 1}^N X_i^2 \end{bmatrix} \\ \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} &= \dfrac{1}{N\sum_{i = 1}^N X_i^2 - \left( \sum_{i = 1}^N X_i \right)^2} \begin{bmatrix} \sum_{i = 1}^N X_i^2 & - \sum_{i = 1}^N X_i\\ -\sum_{i = 1}^N X_i & N \end{bmatrix}\\ \mathbf{X}^\top \mathbf{Y} &= \begin{bmatrix} \sum_{i = 1}^N Y_i \\ \sum_{i = 1}^N X_i Y_i \end{bmatrix} \end{aligned} and: $\left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \mathbf{Y} = \begin{bmatrix} \dfrac{N \left( \dfrac{1}{N}\sum_{i = 1}^N X_i^2 \right) \cdot N \left( \dfrac{1}{N} \sum_{i = 1}^N Y_i \right) - N \left( \dfrac{1}{N}\sum_{i = 1}^N X_i\right) \cdot N \left( \dfrac{1}{N} \sum_{i = 1}^N X_i Y_i \right)}{N^2\left( \dfrac{1}{N}\sum_{i = 1}^N X_i^2 \right) - N^2\left( \dfrac{1}{N}\sum_{i = 1}^N X_i \right)^2} \\ \dfrac{N^2 \dfrac{1}{N}\sum_{i = 1}^N X_i Y_i - N \left( \dfrac{1}{N} \sum_{i = 1}^N X_i \right) \cdot N \left( \dfrac{1}{N} \sum_{i = 1}^N Y_i \right)}{N^2\left( \dfrac{1}{N}\sum_{i = 1}^N X_i^2 \right) - N^2\left( \dfrac{1}{N}\sum_{i = 1}^N X_i \right)^2} \end{bmatrix}$ This gives us the following expression for the OLS estimators for the univariate regression: $\widehat{\boldsymbol{\beta}} = \begin{bmatrix} \widehat{\beta_0} \\ \widehat{\beta_1} \end{bmatrix} = \begin{bmatrix} \dfrac{\overline{X^2} \cdot\overline{Y} - \overline{X} \cdot\overline{XY}}{\widehat{\mathbb{V}{\rm ar}}(X)} \\ \dfrac{\widehat{\mathbb{C}{\rm ov}}(X, Y)}{\widehat{\mathbb{V}{\rm ar}}(X)} \end{bmatrix}$ Note that
in this case, we have an exact expression for $$\widehat{\beta}_0$$, which does not require estimating $$\widehat{\beta}_1$$ beforehand. Defining the estimates in matrix notation is not only convenient (the expressions generalize directly to the multivariate case) but, in some cases, even faster to compute via software. Below we will continue our example and show not only how to calculate the parameters manually using vectorization, but also via the built-in OLS estimation methods. x_mat <- cbind(1, x) beta_mat <- solve(t(x_mat) %*% x_mat) %*% t(x_mat) %*% y row.names(beta_mat) <- c("beta_0", "beta_1") print(beta_mat) ## [,1] ## beta_0 0.8337401 ## beta_1 0.5047895 x_mat = np.column_stack((np.ones(len(x)), x)) beta_mat = np.dot(np.linalg.inv(np.dot(np.transpose(x_mat), x_mat)), np.dot(np.transpose(x_mat), y)) print(beta_mat) ## [0.83853621 0.5099222 ] Alternatively, Python 3.5 introduced the @ symbol (see PEP465, or the Python 3.5 release notes) for matrix multiplication. This allows us to rewrite the above expression as: print(np.linalg.inv(x_mat.T @ x_mat) @ x_mat.T @ y) ## [0.83853621 0.5099222 ] This will not, however, be compatible with earlier Python versions. On the other hand, using the @ symbol is faster on larger (and higher-dimensional) matrices. To further complicate things, according to the official documentation: The @ (at) operator is intended to be used for matrix multiplication. No builtin Python types implement this operator. In other words, @, which is a built-in operator in Python itself, only works with external packages that have implemented this operator (like NumPy). This is in part because Python does not let you define new operators; it comes predefined with a set of operators, which can be overridden. Both R and Python have packages with these estimation methods already defined - we only need to specify the data.
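Before turning to those built-in functions, it is worth checking numerically that the univariate closed-form expressions derived above agree with the general matrix formula (3.7). A small sketch on made-up data (separate from the running example):

```python
import numpy as np

x = np.array([1., 2., 3., 4., 5., 6.])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])

# Matrix formula (3.7): beta = (X'X)^{-1} X'y, with a column of ones for the intercept
X = np.column_stack((np.ones_like(x), x))
beta_mat = np.linalg.inv(X.T @ X) @ (X.T @ y)

# Univariate closed-form expressions:
b1 = (np.mean(x * y) - x.mean() * y.mean()) / np.var(x)                  # Cov(X,Y) / Var(X)
b0 = (np.mean(x**2) * y.mean() - x.mean() * np.mean(x * y)) / np.var(x)  # (mean(X^2) * mean(Y) - mean(X) * mean(XY)) / Var(X)

print(beta_mat, b0, b1)
```

Both routes produce the same pair of estimates, since the matrix expression is just the two scalar formulas written at once.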
We will explore these functions in more detail at the end of the chapter, but for now we will show only the relevant output for the currently discussed topic below: lm_result <- lm(y ~ 1 + x) # Estimate the parameters print(lm_result$coefficients) # Extract the parameter estimates ## (Intercept) x ## 0.8337401 0.5047895 Note that we can use y ~ x, instead of y ~ 1 + x - the constant term (Intercept) is added automatically. import statsmodels.api as sm # x_mat = sm.add_constant(x) # Add a constant column - not optional! lm_model = sm.OLS(y, x_mat) # Create the OLS regression object lm_fit = lm_model.fit() # Estimate the parameters print(lm_fit.params) # Extract the parameter estimates ## [0.83853621 0.5099222 ] ### 3.2.3 Relationship between estimates, residuals, fitted and actual values After obtaining the estimates $$\widehat{\beta}_0$$ and $$\widehat{\beta}_1$$, we may want to examine the following values: • The fitted values of $$Y$$, which are defined as the following OLS regression line (or more generally, the estimated regression line): $\widehat{Y}_i = \widehat{\beta}_0 + \widehat{\beta}_1 X_i$ where $$\widehat{\beta}_0$$ and $$\widehat{\beta}_1$$ are estimated via OLS. By definition, each fitted value of $$\widehat{Y}_i$$ is on the estimated OLS regression line. • The residuals, which are defined as the difference between the actual and fitted values of $$Y$$: $\widehat{\epsilon}_i = \widehat{e}_i = Y_i - \widehat{Y}_i = Y_i - \widehat{\beta}_0 - \widehat{\beta}_1 X_i$ which are hopefully close to the errors $$\epsilon_i$$. 
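A minimal sketch of computing the fitted values and residuals on made-up data (np.polyfit is used here only to keep the example self-contained; with deg = 1 it minimizes the same RSS as OLS):

```python
import numpy as np

x = np.array([0., 1., 2., 3., 4.])
y = np.array([1.1, 1.4, 2.2, 2.4, 3.1])

# OLS slope and intercept (np.polyfit returns the highest-degree coefficient first):
b1, b0 = np.polyfit(x, y, deg = 1)

y_fit = b0 + b1 * x   # fitted values: points on the estimated regression line
resid = y - y_fit     # residuals: actual minus fitted values

print(np.round(y_fit, 3))
```

By the OLS construction, these residuals sum to zero and are orthogonal to the regressor, which we will verify as general properties in the next subsection.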
Below we present a graphical representation of some of these terms: # The unknown DGP: y_dgp <- beta_0 + beta_1 * x # The fitted values: y_fit <- beta_mat[1] + beta_mat[2] * x # Plot the sample data plot(x = x, y = y) # Plot the Unknown Population regression: lines(x = x, y = y_dgp, col = "darkgreen", lty = 2) # Plot the fitted regression line: lines(x = x, y = y_fit, col = "blue") legend(x = 0, y = 25, legend = c(expression(paste("E(Y|X) = ", beta[0] + beta[1] * X)), expression(paste(widehat(Y) == widehat(beta)[0] + widehat(beta)[1] * X))), lty = c(2, 1), lwd = c(1, 1), pch = c(NA, NA), col = c("darkgreen", "blue")) # The unknown DGP: y_dgp = beta_0 + beta_1 * x # The fitted values: y_fit = beta_mat[0] + beta_mat[1] * x # Plot the sample data _ = plt.figure(num = 0, figsize = (10, 8)) _ = plt.plot(x, y, linestyle = "None", marker = "o", markerfacecolor = 'green') _ = plt.title("Scatter diagram of (X,Y) sample data and the regression line") # Plot the Unknown Population regression: _ = plt.plot(x, y_dgp, linestyle = "--", color = "green", label='$E(Y|X) = \\beta_0 + \\beta_1 X$') # Plot the fitted regression line: _ = plt.plot(x, y_fit, linestyle = "-", color = "blue", label='$\widehat{Y} = \widehat{\\beta}_0 + \widehat{\\beta}_1 X$') _ = plt.legend() plt.show() We see that the estimated regression is very close to the true underlying population regression (i.e. the true $$\mathbb{E} (Y | X)$$).
Looking closer at the fitted values in a subset of the data, we can see where the residuals originate and how the fitted values compare to the sample data: plot(x = x, y = y, pch = 19, col = "black", xlim = c(5, 15), ylim = c(0, 10), yaxs = "i", # Y axis starts at 0, ends at max(Y) xaxt = "n", yaxt = "n", # Do not plot initial ticks in axis xlab = "X", ylab = "Y") # Label the axis title("Scatter diagram of (X,Y) sample data and the regression line") lines(x = x, y = y_fit, col = "darkgreen", lty = 2) # Add axis labels and ticks at specific positions: axis(side = 1, at = c(x[8], x[12]), labels = FALSE) text(x = c(x[8], x[12]), y = -0.2, pos = 1, xpd = TRUE, labels = c(expression(x[8]), expression(x[12]))) # Add vertical lines: segments(x0 = x[8], y0 = 0, x1 = x[8], y1 = y_fit[8], lty = 2) segments(x0 = x[8], y0 = 0, x1 = x[8], y1 = y[8]) segments(x0 = x[12], y0 = 0, x1 = x[12], y1 = y[12]) # Add some curly brackets: pBrackets::brackets(x1 = x[8], y1 = 0, x2 = x[8], y2 = y[8], lwd = 1, h = 0.33, col = "red") pBrackets::brackets(x1 = x[8], y1 = y_fit[8], x2 = x[8], y2 = 0, lwd = 1, h = 0.33, col = "red") pBrackets::brackets(x1 = x[12], y1 = y_fit[12], x2 = x[12], y2 = 0, lwd = 1, h = 0.33, col = "red") pBrackets::brackets(x1 = x[12], y1 = y[12], x2 = x[12], y2 = y_fit[12], lwd = 1, h = 0.33, ticks = 0.25, col = "red") # Add actual, fitted and residual indicator text: text(x = x[8] * 0.94, y = y[8] / 2, labels = expression(Y[8])) text(x = x[8] * 1.07, y = y_fit[8] / 2, labels = expression(hat(Y)[8])) text(x = x[12] * 1.1, y = y_fit[12] / 2, labels = expression(hat(Y)[12] == "fitted value")) text(x = x[12] * 1.08, y = y_fit[12] + (y[12] - y_fit[12]) * 0.8, labels = expression(hat(epsilon)[12] == "residual")) # Add regression line label: text(x = x[9], y = y[11], labels = expression(hat(Y) == hat(beta)[0] + hat(beta)[1] * X)) arrows(x0 = x[9], y0 = y_fit[10]*1.05, x1 = x[10], y1 = y_fit[10], length = 0.1, col = "red") Other mathematical notations when plotting in
R are available here. _ = plt.figure(num = 1, figsize=(10, 8)) _ = plt.plot(x, y, linestyle = "None", marker = "o", markerfacecolor = 'black') _ = plt.ylim(ymin = 0, ymax = 12) _ = plt.xlim(xmin = 10, xmax = 20) _ = plt.title("Scatter diagram of (X,Y) sample data and the regression line") _ = plt.plot(x, y_dgp, linestyle = "--", color = "green") # Add Axis labels and ticks at specific positions: _ = plt.xticks([x[12], x[16]], ["$X_{13}$", "$X_{17}$"]) # Add vertical lines: _ = plt.plot([x[12], x[12]], [0, y_fit[12]], '--', color = "black") _ = plt.plot([x[12], x[12]], [0, y[12]], '-', color = "black") _ = plt.plot([x[16], x[16]], [0, y[16]], '-', color = "black") # Add some brackets: _ = plt.annotate("", xy = (x[12]*0.955, y[12] / 2), xytext = (x[12]*0.975, y[12] / 2), arrowprops = dict(arrowstyle = "]-, widthA=10.3,lengthA=1", connectionstyle = "arc", color='red')) _ = plt.annotate("", xy = (x[12]*1.015, y_fit[12] / 2), xytext = (x[12]*1.035, y_fit[12] / 2), arrowprops = dict(arrowstyle = "-[, widthB=12.5,lengthB=1", connectionstyle = "arc", color='red')) _ = plt.annotate("", xy = (x[16]*1.015, y_fit[16] / 2), xytext = (x[16]*1.035, y_fit[16] / 2), arrowprops = dict(arrowstyle = "-[, widthB=16.2,lengthB=1.2", connectionstyle = "arc", color='red')) _ = plt.annotate("", xy = (x[16]*1.015, (y[16] + y_fit[16]) / 2), xytext = (x[16]*1.035, (y[16] + y_fit[16]) / 2), arrowprops = dict(arrowstyle = "-[, widthB=2.5,lengthB=1.2", connectionstyle = "arc", color='red')) # Add Actual, Fitted and Residual indicator text: _ = plt.text(x[12]*0.93, y[12] / 2.1, r'$Y_{13}$', fontsize = 10) _ = plt.text(x[12]*1.04, y_fit[12] / 2.1, r'$\widehat{Y}_{13}$', fontsize = 10) _ = plt.text(x[16]*1.04, y_fit[16] / 2.1, r'$\widehat{Y}_{17}$ = fitted value', fontsize = 10) _ = plt.text(x[16]*1.04, (y[16] + y_fit[16]) / 2.02, r'$\widehat{\epsilon}_{17}$ = residual', fontsize = 10) # Add Regression line _ = plt.text(x[17] - 0.2, y[20], r'$\widehat{Y} = \widehat{\beta}_0 + \widehat{\beta}_1 X$', fontsize
= 10) _ = plt.annotate("", xy = (x[19], y_fit[19]), xytext = (x[18]*0.99, y[20]), arrowprops = dict(arrowstyle = "->", connectionstyle = "arc", color='red')) plt.show() ### 3.2.4 Properties of the OLS estimator From the construction of the OLS estimators, the following properties hold in the sample: 1. The sum (and by extension, the sample average) of the OLS residuals is zero: $$$\sum_{i = 1}^N \widehat{\epsilon}_i = 0 \tag{3.8}$$$ This follows from the first equation of (3.3). The OLS estimates $$\widehat{\beta}_0$$ and $$\widehat{\beta}_1$$ are chosen such that the residuals sum up to zero, regardless of the underlying sample data. resid <- y - y_fit print(paste0("Sum of the residuals: ", sum(resid))) ## [1] "Sum of the residuals: -1.05582209641852e-13" resid = y - y_fit print("Sum of the residuals: " + str(sum(resid))) ## Sum of the residuals: 2.042810365310288e-14 We see that the sum is very close to zero. 2. The sample covariance between the regressors and the OLS residuals is zero: $$$\sum_{i = 1}^N X_i \widehat{\epsilon}_i = 0 \tag{3.9}$$$ This follows from the second equation of (3.3). Because the sample average of the OLS residuals is zero, $$\sum_{i = 1}^N X_i \widehat{\epsilon}_i$$ is proportional to the sample covariance between $$X_i$$ and $$\widehat{\epsilon}_i$$. print(paste0("Sum of X*resid: ", sum(resid * x))) ## [1] "Sum of X*resid: -5.92859095149834e-13" print(paste0("Sample covariance of X and residuals: ", cov(resid, x))) ## [1] "Sample covariance of X and residuals: 4.1182578180976e-14" print("Sum of X*resid: " + str(sum(np.array(resid) * np.array(x)))) ## Sum of X*resid: -3.765876499528531e-13 print("Sample covariance of X and residuals: " + str(np.cov(resid, x)[0][1])) ## Sample covariance of X and residuals: -1.7815748876159887e-14 We see that both the sum and the sample covariance are very close to zero. 3.
The point $$(\overline{X}, \overline{Y})$$ is always on the OLS regression line - if we calculate $$\widehat{\beta}_0 + \widehat{\beta}_1 \overline{X}$$, the resulting value will be equal to $$\overline{Y}$$. print(paste0("Predicted value with mean(X): ", beta_mat[1] + beta_mat[2] * mean(x))) ## [1] "Predicted value with mean(X): 13.4534765045994" print(paste0("Sample mean of Y: ", mean(y))) ## [1] "Sample mean of Y: 13.4534765045994" print("Predicted value with mean(X): " + str(beta_mat[0] + beta_mat[1] * np.mean(x))) ## Predicted value with mean(X): 13.586591096573901 print("Sample mean of Y: " + str(np.mean(y))) ## Sample mean of Y: 13.5865910965739 We see that the predicted value is identical to the sample mean of $$Y$$. However, these properties alone do not justify the use of the OLS method over some other competing estimator. The main advantage of the OLS estimators can be summarized by the following Gauss-Markov theorem: Gauss-Markov theorem Under the assumption that the conditions (UR.1) - (UR.3) hold true, the OLS estimators $$\widehat{\beta}_0$$ and $$\widehat{\beta}_1$$ are BLUE (Best Linear Unbiased Estimator) and Consistent. #### 3.2.4.1 What is an Estimator? We will restate what we have said at the end of section 3.1 - an estimator is a rule that can be applied to any sample of data to produce an estimate. In other words, the estimator is the rule and the estimate is the result. So, eq. (3.7) is the rule and therefore an estimator. The remaining components of the acronym BLUE are provided below. #### 3.2.4.2 OLS estimators are Linear From the specification of the relationship between $$\mathbf{Y}$$ and $$\mathbf{X}$$ (using the matrix notation for generality): $\mathbf{Y} = \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\varepsilon}$ We see that the model is linear in the parameters; moreover, from eq. (3.7), the estimator $$\widehat{\boldsymbol{\beta}} = \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \mathbf{Y}$$ is a linear function of $$\mathbf{Y}$$, which is what makes OLS a linear estimator.
#### 3.2.4.3 OLS estimators are Unbiased Using the matrix notation for the sample linear equations ($$\mathbf{Y} = \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\varepsilon}$$) and plugging it into eq. (3.7) gives us the following: \begin{aligned} \widehat{\boldsymbol{\beta}} &= \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \mathbf{Y} \\ &= \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \left( \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\varepsilon} \right) \\ &= \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \mathbf{X} \boldsymbol{\beta} + \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \boldsymbol{\varepsilon} \\ &= \boldsymbol{\beta} + \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \boldsymbol{\varepsilon} \end{aligned} If we take the expectation of both sides and use the law of total expectation: \begin{aligned} \mathbb{E} \left[ \widehat{\boldsymbol{\beta}} \right] &= \boldsymbol{\beta} + \mathbb{E} \left[ \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \boldsymbol{\varepsilon} \right] \\ &= \boldsymbol{\beta} + \mathbb{E} \left[ \mathbb{E} \left( \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \boldsymbol{\varepsilon} \biggr\rvert \mathbf{X}\right)\right] \\ &= \boldsymbol{\beta} + \mathbb{E} \left[ \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \mathbb{E} \left( \boldsymbol{\varepsilon} | \mathbf{X}\right)\right] \\ &= \boldsymbol{\beta} \end{aligned} since $$\mathbb{E} \left( \boldsymbol{\varepsilon} | \mathbf{X}\right) = \mathbf{0}$$ from (UR.2). We have shown that $$\mathbb{E} \left[ \widehat{\boldsymbol{\beta}} \right] = \boldsymbol{\beta}$$ - i.e., the OLS estimator $$\widehat{\boldsymbol{\beta}}$$ is an unbiased estimator of $$\boldsymbol{\beta}$$. Unbiasedness does not guarantee that the estimate we get with any particular sample is equal (or even very close) to $$\boldsymbol{\beta}$$. 
It means that if we could repeatedly draw random samples from the population and compute the estimate each time, then the average of these estimates would be (very close to) $$\boldsymbol{\beta}$$. However, in most applications we have just one random sample to work with, though as we will see later on, there are methods for creating additional samples from the available data (usually by creating and analysing different subsamples of the original data). #### 3.2.4.4 OLS estimators are Best (Efficient) When there is more than one unbiased method of estimation to choose from, the estimator with the lowest variance is the best. In other words, we want to show that the OLS estimators are best in the sense that $$\widehat{\boldsymbol{\beta}}$$ are efficient estimators of $$\boldsymbol{\beta}$$ (i.e. they have the smallest variance). To do so, we will calculate the variance - the average squared deviation from the mean - as follows (remember that for OLS estimators, condition (UR.3) holds true): From the proof of unbiasedness of the OLS we have that: \begin{aligned} \widehat{\boldsymbol{\beta}} = \boldsymbol{\beta} + \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \boldsymbol{\varepsilon} \Longrightarrow \widehat{\boldsymbol{\beta}} - \boldsymbol{\beta} = \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \boldsymbol{\varepsilon} \end{aligned} We can then use this expression to calculate the variance-covariance matrix of the OLS estimator: \begin{aligned} \mathbb{V}{\rm ar} (\widehat{\boldsymbol{\beta}}) &= \mathbb{E} \left[(\widehat{\boldsymbol{\beta}} - \mathbb{E}(\widehat{\boldsymbol{\beta}}))(\widehat{\boldsymbol{\beta}} - \mathbb{E}(\widehat{\boldsymbol{\beta}}))^\top \right] \\ &= \mathbb{E} \left[(\widehat{\boldsymbol{\beta}} - \boldsymbol{\beta})(\widehat{\boldsymbol{\beta}} - \boldsymbol{\beta})^\top \right] \\ &= \mathbb{E} \left[ \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \boldsymbol{\varepsilon}
\left( \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \boldsymbol{\varepsilon} \right)^\top \right] \\ &= \mathbb{E} \left[ \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \boldsymbol{\varepsilon} \boldsymbol{\varepsilon}^\top \mathbf{X} \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \right] \\ &= \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \mathbb{E} \left[ \boldsymbol{\varepsilon} \boldsymbol{\varepsilon}^\top\right] \mathbf{X} \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \\ &= \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \left(\sigma^2 \mathbf{I} \right) \mathbf{X} \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \\ &= \sigma^2 \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \mathbf{X} \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \\ &= \sigma^2 \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \end{aligned} For the univariate case, we have already calculated $$\left( \mathbf{X}^\top \mathbf{X}\right)^{-1}$$, which leads to: $\begin{cases} \mathbb{V}{\rm ar} (\widehat{\beta}_0) &= \sigma^2\cdot \dfrac{\sum_{i = 1}^N X_i^2}{N\sum_{i = 1}^N \left( X_i - \overline{X} \right)^2} \\\\ \mathbb{V}{\rm ar} (\widehat{\beta}_1) &= \sigma^2\cdot \dfrac{1}{\sum_{i = 1}^N \left( X_i - \overline{X} \right)^2} \end{cases}$ Which correspond to the diagonal elements of the variance-covariance matrix: $\mathbb{V}{\rm ar} (\widehat{\boldsymbol{\beta}}) = \begin{bmatrix} \mathbb{V}{\rm ar} (\widehat{\beta}_0) & \mathbb{C}{\rm ov} (\widehat{\beta}_0, \widehat{\beta}_1) \\ \mathbb{C}{\rm ov} (\widehat{\beta}_1, \widehat{\beta}_0) & \mathbb{V}{\rm ar} (\widehat{\beta}_1) \end{bmatrix}$ Note that we applied the following relationship: $$N\sum_{i = 1}^N X_i^2 - \left( \sum_{i = 1}^N X_i \right)^2 = N\sum_{i = 1}^N \left( X_i - \overline{X} \right)^2$$. 
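The scalar formulas can be verified against the diagonal of $$\sigma^2 \left( \mathbf{X}^\top \mathbf{X}\right)^{-1}$$ numerically; a small sketch with made-up regressor values and an arbitrary assumed $$\sigma^2$$:

```python
import numpy as np

x = np.array([1., 2., 4., 7., 9.])
sigma2 = 2.0  # an assumed error variance, chosen only for illustration
N = len(x)

# Variance-covariance matrix of the OLS estimator: sigma^2 (X'X)^{-1}
X = np.column_stack((np.ones(N), x))
V = sigma2 * np.linalg.inv(X.T @ X)

# Scalar formulas for the univariate case:
Sxx = np.sum((x - x.mean()) ** 2)
var_b0 = sigma2 * np.sum(x ** 2) / (N * Sxx)
var_b1 = sigma2 / Sxx

print(np.diag(V), var_b0, var_b1)
```

The diagonal of the matrix expression matches the two scalar variance formulas, using the identity $$N\sum X_i^2 - \left(\sum X_i\right)^2 = N\sum \left(X_i - \overline{X}\right)^2$$ noted above.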
Next, assume that we have some other estimator of $$\boldsymbol{\beta}$$, which is also unbiased and can be expressed as: $\widetilde{\boldsymbol{\beta}} = \left[ \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top + \mathbf{D}\right] \mathbf{Y} =\mathbf{C}\mathbf{Y}$ Then, since $$\mathbb{E}\left[ \boldsymbol{\varepsilon} \right] = \boldsymbol{0}$$: \begin{aligned} \mathbb{E}\left[ \widetilde{\boldsymbol{\beta}} \right] &= \mathbb{E}\left[ \left( \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top + \mathbf{D}\right) \left( \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\varepsilon}\right) \right] \\ &= \left( \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top + \mathbf{D}\right) \mathbb{E}\left[ \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\varepsilon} \right] \\ &= \left( \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top + \mathbf{D}\right)\mathbf{X} \boldsymbol{\beta} \\ &= \left( \mathbf{I} + \mathbf{D}\mathbf{X}\right) \boldsymbol{\beta} \\ &= \boldsymbol{\beta} \iff \mathbf{D}\mathbf{X} = \boldsymbol{0} \end{aligned} In other words, $$\widetilde{\boldsymbol{\beta}}$$ is unbiased if and only if $$\mathbf{D}\mathbf{X} = \boldsymbol{0}$$. 
Using this fact, we can calculate its variance as: \begin{aligned} \mathbb{V}{\rm ar} (\widetilde{\boldsymbol{\beta}}) &= \mathbb{V}{\rm ar} (\mathbf{C}\mathbf{Y}) \\ &= \mathbf{C}\mathbb{V}{\rm ar} (\mathbf{X} \boldsymbol{\beta} + \boldsymbol{\varepsilon})\mathbf{C}^\top \\ &= \sigma^2 \mathbf{C}\mathbf{C}^\top \\ &= \sigma^2 \left( \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top + \mathbf{D}\right)\left( \mathbf{X}\left( \mathbf{X}^\top \mathbf{X}\right)^{-1} + \mathbf{D}^\top\right) \\ &= \sigma^2 \left[\left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \mathbf{X}\left( \mathbf{X}^\top \mathbf{X}\right)^{-1} + \left(\mathbf{X}^\top \mathbf{X}\right)^{-1} (\mathbf{D}\mathbf{X})^\top + \mathbf{D} \mathbf{X}\left( \mathbf{X}^\top \mathbf{X}\right)^{-1} + \mathbf{D}\mathbf{D}^\top\right] \\ &= \sigma^2 \left[ \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} + \mathbf{D}\mathbf{D}^\top\right] \\ &= \mathbb{V}{\rm ar} (\widehat{\boldsymbol{\beta}}) + \mathbf{D}\mathbf{D}^\top \geq \mathbb{V}{\rm ar} (\widehat{\boldsymbol{\beta}}) \end{aligned} since $$\mathbf{D}\mathbf{D}^\top$$ is a positive semidefinite matrix. This means that $$\widehat{\boldsymbol{\beta}}$$ has the smallest variance. #### 3.2.4.5 Estimating the variance parameter of the error term We see an immediate problem from the OLS estimator variance formulas - we do not know the true error variance $$\sigma^2$$. However, we can estimate it by calculating the sample residual variance: $\widehat{\sigma}^2 = s^2 = \dfrac{\widehat{\boldsymbol{\varepsilon}}^\top \widehat{\boldsymbol{\varepsilon}}}{N - 2} = \dfrac{1}{N-2} \sum_{i = 1}^N \widehat{\epsilon}_i^2$ Note that if we take $$N$$ instead of $$N-2$$ for the univariate regression case in the denominator, then the variance estimate would be biased. 
This is because the variance estimator would not account for two restrictions that must be satisfied by the OLS residuals, namely (3.8) and (3.9): $\sum_{i = 1}^N \widehat{\epsilon}_i = 0,\quad \sum_{i = 1}^N \widehat{\epsilon}_i X_i = 0$ So, we take $$N - 2$$ instead of $$N$$, because of the number of restrictions on the residuals. Using $$\widehat{\sigma}^2$$ allows us to calculate the estimate of $$\mathbb{V}{\rm ar} (\widehat{\boldsymbol{\beta}})$$, i.e. we can calculate $$\widehat{\mathbb{V}{\rm ar}} (\widehat{\boldsymbol{\beta}})$$. One way to view these restrictions is this: if we know $$N-2$$ of the residuals, we can always recover the remaining two residuals from the restrictions implied by (3.8) and (3.9). Thus, there are only $$N-2$$ degrees of freedom in the OLS residuals, as opposed to $$N$$ degrees of freedom in the errors. Note that this is an estimated variance. Nevertheless, it is a key component in assessing the accuracy of the parameter estimates (when calculating test statistics and confidence intervals). Since we estimate $$\widehat{\boldsymbol{\beta}}$$ from a random sample, the estimator $$\widehat{\boldsymbol{\beta}}$$ is a random variable as well. We can measure the uncertainty of $$\widehat{\boldsymbol{\beta}}$$ via its standard deviation. This is the standard error of our estimate of $$\boldsymbol{\beta}$$: the square roots of the diagonal elements of the variance-covariance matrix $$\widehat{\mathbb{V}{\rm ar}} (\widehat{\boldsymbol{\beta}})$$ are called the standard errors (se) of the corresponding OLS estimators $$\widehat{\boldsymbol{\beta}}$$, which we use to estimate the standard deviation of $$\widehat{\beta}_i$$ from $$\beta_i$$: $\text{se}(\widehat{\beta}_i) = \sqrt{\widehat{\mathbb{V}{\rm ar}} (\widehat{\beta}_i)}$ The standard errors describe the accuracy of an estimator (the smaller, the better).
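A quick way to see why the $N-2$ divisor matters is to simulate many small samples and average the two competing variance estimates (a sketch; the sample size, coefficients and error variance below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0   # true error variance (assumed for this simulation)
N = 10         # small sample, where the bias of dividing by N is visible

x = np.arange(N, dtype=float)
X = np.column_stack((np.ones(N), x))
H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix; residuals are (I - H) y

s2_unbiased, s2_biased = [], []
for _ in range(20000):
    y = 1.0 + 0.5 * x + rng.normal(scale=np.sqrt(sigma2), size=N)
    resid = y - H @ y
    # the two OLS restrictions hold in every sample:
    assert abs(resid.sum()) < 1e-8 and abs((resid * x).sum()) < 1e-8
    s2_unbiased.append(resid @ resid / (N - 2))
    s2_biased.append(resid @ resid / N)

print(np.mean(s2_unbiased))  # close to sigma2 = 4
print(np.mean(s2_biased))    # close to (N - 2) / N * sigma2 = 3.2
```

Averaged over many samples, the $N-2$ version centers on the true $\sigma^2$, while dividing by $N$ systematically underestimates it.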
The standard errors are measures of the sampling variability of the least squares estimates $$\widehat{\beta}_0$$ and $$\widehat{\beta}_1$$ in repeated samples - if we collect a number of different data samples, the OLS estimates will be different for each sample. As such, the OLS estimators are random variables and have their own distribution. See section 3.2.4.7 for an example of generating many different random samples, estimating the parameters, and visualizing the parameter estimates and their distribution. Now is also a good time to highlight the difference between the errors and the residuals. • The random sample, taken from a Data Generating Process (i.e. the population), is described via $Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$ where $$\epsilon_i$$ is the error for observation $$i$$. • After estimating the unknown parameters $$\beta_0$$, $$\beta_1$$, we can re-write the equation as: $Y_i = \widehat{\beta}_0 + \widehat{\beta}_1 X_i + \widehat{\epsilon}_i$ where $$\widehat{\epsilon}_i$$ is the residual for observation $$i$$. The errors appear in the underlying (i.e. true) DGP equation, while the residuals appear in the estimated equation. The errors are never observed, while the residuals are calculated from the data.
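A small simulation illustrates this distinction (all numbers below are made up for illustration): the residuals closely track the unobserved errors, but are never exactly equal to them, because $\widehat{\beta}_0, \widehat{\beta}_1$ differ from $\beta_0, \beta_1$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
x = rng.uniform(0, 10, size=N)
eps = rng.normal(size=N)            # the errors - unobservable in practice
y = 1.0 + 0.5 * x + eps             # the (here, known) DGP

X = np.column_stack((np.ones(N), x))
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat            # the residuals - computed from the data

# Each residual equals the error minus the parameter estimation error:
recon = eps - (beta_hat[0] - 1.0) - (beta_hat[1] - 0.5) * x
print(np.allclose(resid, recon))          # True
print(np.corrcoef(eps, resid)[0, 1])      # close to 1 for large N
print(np.max(np.abs(eps - resid)) > 0.0)  # True: they are not identical
```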
We can also re-write the residuals in terms of the error term and the difference between the true and estimated parameters: \begin{aligned} \widehat{\epsilon}_i &= Y_i - \widehat{Y}_i\\ &= \beta_0 + \beta_1 X_i + \epsilon_i - (\widehat{\beta}_0 + \widehat{\beta}_1 X_i) \\ &= \epsilon_i - \left( \widehat{\beta}_0 - \beta_0\right) - \left( \widehat{\beta}_1 - \beta_1\right) X_i \end{aligned} Going back to our example, we can estimate the standard errors of our coefficients following the formulas in this section:

```r
sigma2_est <- sum(resid^2) / (length(x) - 2)
var_beta <- sigma2_est * solve(t(x_mat) %*% x_mat)
print(sqrt(diag(var_beta)))
##                       x
## 0.266578700 0.009188728
```

```python
sigma2_est = sum(resid**2) / (len(x) - 2)
var_beta = sigma2_est * np.linalg.inv(np.dot(np.transpose(x_mat), x_mat))
print(np.sqrt(np.diag(var_beta)))
## [0.26999713 0.00930656]
```

We can also use the built-in functions, just like we did with the coefficients:

```r
out <- summary(lm_result)
print(out$coefficients[, 2, drop = FALSE])
##              Std. Error
## (Intercept) 0.266578700
## x           0.009188728
```

Note: use names(out) and str(out) to examine the summary() object and see what data can be accessed. Using drop = FALSE allows us to keep the matrix structure of the output.

```python
print(lm_fit.bse)
## [0.26999713 0.00930656]
```

Note: the b in bse stands for the parameter vector $$\boldsymbol{\beta}$$, and se for standard errors. This highlights a potential problem, which will be addressed in later chapters concerning model adequacy/goodness of fit: if the residuals are large (since their mean will still be zero, this concerns the case when the estimated variance of the residuals is large), then the standard errors of the coefficients will be large as well.

#### 3.2.4.6 OLS estimators are Consistent

A consistent estimator has the property that, as the number of data points (which are used to estimate the parameters) increases (i.e.
$$N \rightarrow \infty$$), the estimates converge in probability to the true parameter: Definition 3.2 Let $$W_N$$ be an estimator of a parameter $$\theta$$ based on a sample $$Y_1,...,Y_N$$. Then we say that $$W_N$$ is a consistent estimator of $$\theta$$ if $$\forall \epsilon > 0$$: $\mathbb{P} \left( |W_N - \theta| > \epsilon\right) \rightarrow 0,\text{ as } N \rightarrow \infty$ We can denote this as $$W_N \xrightarrow{P} \theta$$ or $$\text{plim} (W_N) = \theta$$. If $$W_N$$ is not consistent, then we say that $$W_N$$ is inconsistent. Unlike unbiasedness, consistency concerns the behavior of the sampling distribution of the estimator as the sample size $$N$$ gets large: the distribution of $$W_N$$ becomes more and more concentrated about $$\theta$$. In other words, for larger sample sizes, $$W_N$$ is less and less likely to be very far from $$\theta$$. An inconsistent estimator does not help us learn about $$\theta$$, regardless of the size of the data sample. For this reason, consistency is a minimal requirement of an estimator used in statistics or econometrics. Unbiased estimators are not necessarily consistent, but those whose variances shrink to zero as the sample size grows are consistent. In other words: If $$W_N$$ is an unbiased estimator of $$\theta$$ and $$\mathbb{V}{\rm ar}(W_N) \rightarrow 0$$ as $$N \rightarrow \infty$$, then $$\text{plim}(W_N) = \theta$$. Example 3.2 We will show that the sample average of a random sample drawn from a population with mean $$\mu$$ and variance $$\sigma^2$$ is a consistent estimator.
First, we will show that $$\overline{Y} = \dfrac{1}{N} \sum_{i = 1}^N Y_i$$ is an unbiased estimator of the population mean $$\mu$$: \begin{aligned} \mathbb{E} \left( \overline{Y} \right) &= \dfrac{1}{N} \mathbb{E} \left( \sum_{i = 1}^N Y_i\right) = \dfrac{1}{N} \sum_{i = 1}^N \mathbb{E} \left( Y_i\right) \\ &= \dfrac{1}{N} \sum_{i = 1}^N \mu = \dfrac{1}{N} \cdot N \cdot \mu \\ &= \mu \end{aligned} We can now obtain the variance of the sample average (assuming that $$Y_1,...,Y_N$$ are pairwise uncorrelated r.v.’s): \begin{aligned} \mathbb{V}{\rm ar} (\overline{Y}) &= \mathbb{V}{\rm ar} \left( \dfrac{1}{N} \sum_{i = 1}^N Y_i \right ) = \dfrac{1}{N^2} \mathbb{V}{\rm ar} \left( \sum_{i = 1}^N Y_i \right ) \\ &= \dfrac{1}{N^2} \sum_{i = 1}^N \mathbb{V}{\rm ar} \left( Y_i \right ) = \dfrac{1}{N^2} \sum_{i = 1}^N \sigma^2 = \dfrac{N \sigma^2}{N^2} \\ &= \dfrac{\sigma^2}{N} \end{aligned} Therefore, $$\mathbb{V}{\rm ar} (\overline{Y}) = \dfrac{\sigma^2}{N} \rightarrow 0$$, as $$N \rightarrow \infty$$. So, $$\overline{Y}$$ is a consistent estimator of $$\mu$$. Example 3.3 Unbiased but not consistent: Assume that we select $$Y_1$$ as an estimator of $$\mu$$. This estimator is unbiased, since $$\mathbb{E} [Y_1] = \mu$$. However, it is not consistent, since it does not become more concentrated around $$\mu$$ as the sample size increases - regardless of the sample size, $$Y_1 \sim \mathcal{N}(\mu, \sigma^2)$$. Example 3.4 Consistent but not unbiased: Assume that we want to estimate $$\sigma^2$$ using a sample $$Y_1,...,Y_N$$ drawn i.i.d. from $$\mathcal{N}(\mu, \sigma^2)$$.
For our estimator, we select: $\widehat{\sigma}^2 = \dfrac{1}{N} \sum_{i = 1}^N (Y_i - \overline{Y})^2$ In this case: \begin{aligned} \mathbb{E} [\widehat{\sigma}^2] = \mathbb{E} \left[ \dfrac{1}{N} \sum_{i = 1}^N (Y_i - \overline{Y})^2\right] = \dfrac{1}{N} \mathbb{E} \left[ \sum_{i = 1}^N Y_i^2 - 2 \sum_{i = 1}^N Y_i \overline{Y} + \sum_{i = 1}^N \overline{Y}^2 \right] \end{aligned} We know that $$\sum_{i = 1}^N Y_i = N \cdot \overline{Y}$$ and $$\sum_{i = 1}^N \overline{Y}^2 = N \cdot \overline{Y}^2$$. Hence: \begin{aligned} \mathbb{E} [\widehat{\sigma}^2] &= \dfrac{1}{N} \mathbb{E} \left[ \sum_{i = 1}^N Y_i^2 - 2 N \overline{Y}^2 + N \overline{Y}^2 \right] \\ &= \dfrac{1}{N} \mathbb{E} \left[ \sum_{i = 1}^N Y_i^2 - N \overline{Y}^2 \right] \\ &= \dfrac{1}{N} \mathbb{E} \left[ \sum_{i = 1}^N Y_i^2\right] - \mathbb{E} \left[ \overline{Y}^2 \right] \\ &= \mathbb{E} \left[ Y^2 \right] - \mathbb{E} \left[ \overline{Y}^2 \right] \end{aligned} Where we assume that all $$Y_i$$ are i.i.d. drawn from the same distribution, hence $$\mathbb{E} \left[ Y_i^2 \right] = \mathbb{E} \left[ Y^2 \right], \forall i = 1,...,N$$. From the variance definition it follows that: \begin{aligned} \sigma^2_Y &= \sigma^2 = \mathbb{E} \left[ Y^2 \right] - \mathbb{E} \left[ Y \right]^2 \\ \sigma^2_\overline{Y} &= \mathbb{E} \left[ \overline{Y}^2 \right] - \mathbb{E} \left[ \overline{Y} \right]^2 \end{aligned} Thus, using the fact that the samples are drawn i.i.d. (i.e. 
uncorrelated): \begin{aligned} \mathbb{E} [\widehat{\sigma}^2] &= \mathbb{E} \left[ Y^2 \right] - \mathbb{E} \left[ \overline{Y}^2 \right] \\ &= (\sigma^2 + \mu^2) - (\sigma^2_\overline{Y} + \mu^2) \\ &= \sigma^2 - \sigma^2_\overline{Y} \\ &= \sigma^2 - \mathbb{V}{\rm ar}(\overline{Y}) \\ &= \sigma^2 - \dfrac{1}{N^2}\mathbb{V}{\rm ar}\left(\sum_{i = 1}^N Y_i \right) \\ &= \sigma^2 - \dfrac{1}{N}\mathbb{V}{\rm ar}\left(Y \right) \\ &= \dfrac{N-1}{N}\sigma^2 \end{aligned} In other words, $$\mathbb{E} [\widehat{\sigma}^2] \neq \sigma^2$$ and the estimator is biased. We can similarly show that: $\mathbb{V}{\rm ar}\left[ \widehat{\sigma}^2\right] = \dfrac{2 \sigma^4 (N-1)}{N^2}$ As the sample size increases, we note that $$\mathbb{V}{\rm ar}\left[ \widehat{\sigma}^2\right] \rightarrow 0$$ as $$N \rightarrow \infty$$, thus this biased estimator is consistent (furthermore, in this example the bias also vanishes as $$N \rightarrow \infty$$). Some useful properties of consistent estimators are presented below.
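The three examples above can be checked numerically. In the sketch below (assumed values: $\mu = 2$, $\sigma^2 = 9$, normal data), `samples.mean(axis=1)` plays the role of $\overline{Y}$ from Example 3.2, the first observation plays the role of $Y_1$ from Example 3.3, and `samples.var(axis=1)` (which divides by $N$) is the biased estimator of Example 3.4:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma2 = 2.0, 9.0
reps = 5000
results = {}
for N in (10, 100, 1000):
    samples = rng.normal(mu, np.sqrt(sigma2), size=(reps, N))
    results[N] = (
        samples.mean(axis=1).var(),  # Var(Y_bar): shrinks like sigma2 / N
        samples[:, 0].var(),         # Var(Y_1): stays near sigma2 = 9
        samples.var(axis=1).mean(),  # E[sigma2_hat]: near (N - 1) / N * sigma2
    )
for N, vals in results.items():
    print(N, [round(v, 4) for v in vals])
```

The first column shrinks toward zero (consistency of $\overline{Y}$), the second does not (inconsistency of $Y_1$ despite unbiasedness), and the third approaches $\sigma^2$ from below (consistency of the biased variance estimator).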
Let $$\text{plim}(T_N) = \alpha$$ and $$\text{plim}(U_N) = \beta$$, then: • $$\text{plim}(T_N + U_N) = \alpha + \beta$$; • $$\text{plim}(T_N \cdot U_N) = \alpha \cdot \beta$$; • If $$\beta \neq 0$$, then $$\text{plim} \left( \dfrac{T_N}{U_N} \right) = \dfrac{\alpha}{\beta}$$; Going back to our OLS estimators $$\widehat{\beta}_0$$ and $$\widehat{\beta}_1$$ we can use the proof of unbiasedness of the OLS estimators and express $$\widehat{\boldsymbol{\beta}}$$ as: \begin{aligned} \widehat{\boldsymbol{\beta}} &= \boldsymbol{\beta} + \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \boldsymbol{\varepsilon} \\ &= \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \dfrac{1}{N^2 \left( \dfrac{1}{N} \sum_{i = 1}^N X_i ^2 - \left( \dfrac{1}{N} \sum_{i = 1}^N X_i\right)^2\right)} \begin{bmatrix} \sum_{i = 1}^N X_i^2 & - \sum_{i = 1}^N X_i\\ -\sum_{i = 1}^N X_i & N \end{bmatrix} \begin{bmatrix} \sum_{i = 1}^N \epsilon_i \\ \sum_{i = 1}^N X_i \epsilon_i \end{bmatrix} = \\ &= \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \dfrac{1}{\dfrac{1}{N} \sum_{i = 1}^N X_i ^2 - \left( \dfrac{1}{N} \sum_{i = 1}^N X_i\right)^2} \begin{bmatrix} \left(\dfrac{1}{N} \sum_{i = 1}^N \epsilon_i \right) \left(\dfrac{1}{N} \sum_{i = 1}^N X_i^2 \right) - \left(\dfrac{1}{N} \sum_{i = 1}^N X_i \epsilon_i \right) \left(\dfrac{1}{N} \sum_{i = 1}^N X_i \right)\\ \dfrac{1}{N} \sum_{i = 1}^N X_i\epsilon_i - \left(\dfrac{1}{N} \sum_{i = 1}^N X_i \right)\left(\dfrac{1}{N} \sum_{i = 1}^N \epsilon_i \right) \end{bmatrix} \end{aligned} Then, as $$N \rightarrow \infty$$ we have that: \begin{aligned} \widehat{\boldsymbol{\beta}} &\rightarrow \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \dfrac{1}{\mathbb{V}{\rm ar}(X)} \begin{bmatrix} \mathbb{E} (\epsilon) \cdot \mathbb{E}(X^2) - \mathbb{E} (X \epsilon)\cdot \mathbb{E} (X) \\ \mathbb{E} (X \epsilon) - \mathbb{E}(X) \cdot \mathbb{E}(\epsilon) \end{bmatrix} = \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \dfrac{1}{\mathbb{V}{\rm ar}(X)} 
\begin{bmatrix} 0 \\ 0 \end{bmatrix} = \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} \end{aligned} since $$\mathbb{E}(\epsilon) = 0$$ and $$\mathbb{E} (X \epsilon) = \mathbb{E} (X \epsilon) - \mathbb{E}(X) \mathbb{E}(\epsilon) = \mathbb{C}{\rm ov}(X, \epsilon) = 0$$. This means that $$\widehat{\boldsymbol{\beta}} \rightarrow \boldsymbol{\beta}$$, as $$N \rightarrow \infty$$. So, the OLS parameter vector $$\widehat{\boldsymbol{\beta}}$$ is a consistent estimator of $$\boldsymbol{\beta}$$.

#### 3.2.4.7 Practical illustration of the OLS properties

We now return to the example of this chapter. We have proved the unbiasedness and consistency of the OLS estimators. To illustrate these properties empirically, we will generate 5000 replications (i.e. different samples) for each of the different sample sizes $$N \in \{11, 101, 1001 \}$$:

```r
set.seed(1)
# Set the sample size
N <- c(11, 101, 1001)
#
beta_0_est <- NULL
beta_1_est <- NULL
# Generate sample data:
for(n in N){
  x <- 0:n
  x_mat <- cbind(1, x)
  xtx <- t(x_mat) %*% x_mat
  # Repeatedly generate a random sample and estimate the parameters
  beta_0_temp <- NULL
  beta_1_temp <- NULL
  for(smpl in 1:5000){
    # Generate Y:
    e <- rnorm(mean = 0, sd = 1, n = length(x))
    y <- beta_0 + beta_1 * x + e
    # Estimate the parameters:
    xty <- t(x_mat) %*% y
    beta_mat <- solve(xtx) %*% xty
    # Save the estimated parameters:
    beta_0_temp <- c(beta_0_temp, beta_mat[1])
    beta_1_temp <- c(beta_1_temp, beta_mat[2])
  }
  # Save all the estimated parameters to one parameter matrix -
  # each column represents a different sample size from N:
  beta_0_est <- cbind(beta_0_est, beta_0_temp)
  beta_1_est <- cbind(beta_1_est, beta_1_temp)
}
```

```python
np.random.seed(1)
# Set the sample size:
N = [10, 100, 1000]
#
beta_0_est = []
beta_1_est = []
# Generate samples of different sizes:
for n in N:
    x = np.arange(start = 0, stop = n + 1, step = 1)
    x_mat = np.column_stack((np.ones(len(x)), x))
    xtx = np.dot(np.transpose(x_mat), x_mat)
    # Repeatedly generate a random sample and estimate the parameters
    beta_0_temp = []
    beta_1_temp = []
    for smpl in range(0, 5000):
        # Generate Y:
        e = np.random.normal(loc = 0, scale = 1, size = len(x))
        y = beta_0 + beta_1 * x + e
        # Estimate the parameters:
        xty = np.dot(np.transpose(x_mat), y)
        beta_mat = np.dot(np.linalg.inv(xtx), xty)
        # Save the estimated parameters:
        beta_0_temp = np.append(beta_0_temp, [beta_mat[0]])
        beta_1_temp = np.append(beta_1_temp, [beta_mat[1]])
    # Save all the estimated parameters to one parameter matrix -
    # each row represents a different sample size from N:
    if len(beta_0_est) == 0 and len(beta_1_est) == 0:
        beta_0_est = beta_0_temp
        beta_1_est = beta_1_temp
    else:
        beta_0_est = np.vstack((beta_0_est, beta_0_temp))
        beta_1_est = np.vstack((beta_1_est, beta_1_temp))
```

The reason that we choose to generate 5000 different samples for each sample size is to calculate the average and variance of the estimated parameters:

```r
print(paste0("True beta_0 = ", beta_0, ". True beta_1 = ", beta_1))
## [1] "True beta_0 = 1. True beta_1 = 0.5"
```

```python
print("True beta_0 = " + str(beta_0) + ". True beta_1 = " + str(beta_1))
## True beta_0 = 1. True beta_1 = 0.5
```

```r
for(i in 1:length(N)){
  out <- paste0("With N = ", N[i], ":")
  out <- paste0(out, "\n\t the AVERAGE of the estimated parameters:")
  out <- paste0(out, "\n\t\t beta_0: ", round(mean(beta_0_est[, i]), 5))
  out <- paste0(out, "\n\t\t beta_1: ", round(mean(beta_1_est[, i]), 5), "\n")
  cat(out)
}
## With N = 11:
##   the AVERAGE of the estimated parameters:
##     beta_0: 0.98409
##     beta_1: 0.50222
## With N = 101:
##   the AVERAGE of the estimated parameters:
##     beta_0: 0.99733
##     beta_1: 0.50005
## With N = 1001:
##   the AVERAGE of the estimated parameters:
##     beta_0: 0.99982
##     beta_1: 0.5
```

```r
for(i in 1:length(N)){
  out <- paste0("With N = ", N[i], ":")
  out <- paste0(out, "\n\t the VARIANCE of the estimated parameters:")
  out <- paste0(out, "\n\t\t beta_0: ", round(var(beta_0_est[, i]), 5))
  out <- paste0(out, "\n\t\t beta_1: ", round(var(beta_1_est[, i]), 5), "\n")
  cat(out)
}
## With N = 11:
##   the VARIANCE of the estimated parameters:
##     beta_0: 0.29418
##     beta_1: 0.00716
## With N = 101:
##   the VARIANCE of the estimated parameters:
##     beta_0: 0.03831
##     beta_1: 1e-05
## With N = 1001:
##   the VARIANCE of the estimated parameters:
##     beta_0: 0.00396
##     beta_1: 0
```

```python
for i in range(0, len(N)):
    print("With N = " + str(N[i]) + ":" +
          "\n\t the AVERAGE of the estimated parameters:" +
          "\n\t\t beta_0: " + str(np.round(np.mean(beta_0_est[i]), 5)) +
          "\n\t\t beta_1: " + str(np.round(np.mean(beta_1_est[i]), 5)))
## With N = 10:
##   the AVERAGE of the estimated parameters:
##     beta_0: 1.00437
##     beta_1: 0.49984
## With N = 100:
##   the AVERAGE of the estimated parameters:
##     beta_0: 0.99789
##     beta_1: 0.50006
## With N = 1000:
##   the AVERAGE of the estimated parameters:
##     beta_0: 0.99957
##     beta_1: 0.5
```

```python
for i in range(0, len(N)):
    print("With N = " + str(N[i]) + ":" +
          "\n\t the VARIANCE of the estimated parameters:" +
          "\n\t\t beta_0: " + str(np.round(np.var(beta_0_est[i]), 5)) +
          "\n\t\t beta_1: " + str(np.round(np.var(beta_1_est[i]), 5)))
## With N = 10:
##   the VARIANCE of the estimated parameters:
##     beta_0: 0.3178
##     beta_1: 0.00904
## With N = 100:
##   the VARIANCE of the estimated parameters:
##     beta_0: 0.03896
##     beta_1: 1e-05
## With N = 1000:
##   the VARIANCE of the estimated parameters:
##     beta_0: 0.00394
##     beta_1: 0.0
```

Note that unbiasedness is true for any $$N$$, while consistency is an asymptotic property, i.e. it holds when $$N \rightarrow \infty$$. We can see that the means of the estimated parameters are close to the true parameter values regardless of sample size. The variance of the estimated parameters decreases with larger sample sizes, i.e. the larger the sample size, the closer our estimated parameters will be to the true values.

```r
par(mfrow = c(3, 2))
#
for(i in 1:length(N)){
  #
  hist(beta_0_est[, i], col = "cornflowerblue", breaks = 20,
       main = bquote("Histogram of " ~ widehat(beta)[0] ~ " with N = " ~ .(N[i])))
  #
  hist(beta_1_est[, i], col = "cornflowerblue", breaks = 20,
       main = bquote("Histogram of " ~ widehat(beta)[1] ~ " with N = " ~ .(N[i])))
}
```

```python
fig = plt.figure(figsize = (10, 10))
for i in range(0, len(N)):
    _ = plt.subplot(3, 2, 2 * i + 1)
    _ = plt.hist(beta_0_est[i], bins = 20, histtype = 'bar',
                 color = "cornflowerblue", ec = 'black')
    _ = plt.title("Histogram of $\\widehat{\\beta}_0$ with N = " + str(N[i]))
    _ = plt.subplot(3, 2, 2 * i + 2)
    _ = plt.hist(beta_1_est[i], bins = 20, histtype = 'bar',
                 color = "cornflowerblue", ec = 'black')
    _ = plt.title("Histogram of $\\widehat{\\beta}_1$ with N = " + str(N[i]))
plt.tight_layout()
plt.show()
```

We see that the histograms of the OLS estimators have a bell-shaped distribution. Under assumption (UR.4) it can be shown that since $$\boldsymbol{\varepsilon} | \mathbf{X} \sim \mathcal{N} \left( \mathbf{0}, \sigma^2_\epsilon \mathbf{I} \right)$$, then the linear combination of epsilons in $$\widehat{\boldsymbol{\beta}} = \boldsymbol{\beta} + \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \boldsymbol{\varepsilon}$$ will also be normal, i.e.
$\widehat{\boldsymbol{\beta}} | \mathbf{X} \sim \mathcal{N} \left(\boldsymbol{\beta},\ \sigma^2 \left( \mathbf{X}^\top \mathbf{X}\right)^{-1} \right)$ In this chapter we have shown how to derive an OLS estimation method in order to estimate the unknown parameters of a linear regression with one variable. We have also shown that these estimators (under conditions (UR.1) - (UR.3)) are good in the sense that, as the sample size increases, the estimated parameter values approach the true parameter values and that the average of all the estimators, estimated from a number of different random samples, is very close to the true underlying parameter vector. ### 3.2.5 On Generation of the Independent Variable $$X$$ In this chapter we have opted to use $$X = 1,..., N$$ for simplicity. In practice, depending on the type of data, we may have different values of $$Y$$ for repeating values of $$X$$ (for example, if $$X$$ is the income of employees at a firm), which is to be expected, since our model has a random component $$\epsilon$$. 
Nevertheless, we can easily use a more randomized specification of $$X$$, say $$X \sim \mathcal{N} \left( \mu_X,\ \sigma^2_X \right)$$, with $$N = 300$$, $$\mu_X = 5$$ and $$\sigma_X = 2$$:

```r
set.seed(123)
# Set the coefficients:
N = 300
beta_0 = 1
beta_1 = 0.5
# Generate sample data:
x <- rnorm(mean = 5, sd = 2, n = N)
e <- rnorm(mean = 0, sd = 1, n = length(x))
y <- beta_0 + beta_1 * x + e
# Estimate the model:
x_mat <- cbind(1, x)
beta_mat <- solve(t(x_mat) %*% x_mat) %*% t(x_mat) %*% y
print(t(beta_mat))
##                      x
## [1,] 1.169932 0.4682726
```

```python
np.random.seed(123)
# Set the coefficients:
N = 300
beta_0 = 1
beta_1 = 0.5
# Generate sample data:
x = np.random.normal(loc = 5, scale = 2, size = N)
e = np.random.normal(loc = 0, scale = 1, size = len(x))
y = beta_0 + beta_1 * x + e
# Estimate the model:
x_mat = np.column_stack((np.ones(len(x)), x))
beta_mat = np.dot(np.linalg.inv(np.dot(np.transpose(x_mat), x_mat)),
                  np.dot(np.transpose(x_mat), y))
print(beta_mat)
## [1.21815132 0.45602676]
```

```r
# Calculate the fitted values
y_fit <- beta_mat[1] + beta_mat[2] * x
# Plot the data and the fitted regression:
plot(x, y)
lines(x = x, y = y_fit, col = "blue")
```

```python
# Calculate the fitted values
y_fit = beta_mat[0] + beta_mat[1] * x
# Plot the data and the fitted regression:
_ = plt.figure(num = 3, figsize = (10, 8))
_ = plt.plot(x, y, linestyle = "None", marker = "o",
             markerfacecolor = 'None', markeredgecolor = "black")
_ = plt.plot(x, y_fit, linestyle = "-", color = "blue")
plt.show()
```

Alternatively, we can draw a random sample of $$X_i$$, $$i = 1,...,N$$, with replacement from the set $$\{1,...,50 \}$$ with $$N = 300$$:

```r
set.seed(123)
# Set the coefficients:
N = 300
beta_0 = 1
beta_1 = 0.5
# Generate sample data:
x <- sample(1:50, size = N, replace = TRUE)
e <- rnorm(mean = 0, sd = 1, n = length(x))
y <- beta_0 + beta_1 * x + e
# Estimate the model:
x_mat <- cbind(1, x)
beta_mat <- solve(t(x_mat) %*% x_mat) %*% t(x_mat) %*% y
#
print(t(beta_mat))
##                      x
## [1,] 1.171086 0.4953443
```
```python
np.random.seed(123)
# Set the coefficients:
N = 300
beta_0 = 1
beta_1 = 0.5
# Generate sample data:
x = np.random.choice(list(range(1, 51)), size = N, replace = True)
e = np.random.normal(loc = 0, scale = 1, size = len(x))
y = beta_0 + beta_1 * x + e
# Estimate the model:
x_mat = np.column_stack((np.ones(len(x)), x))
beta_mat = np.dot(np.linalg.inv(np.dot(np.transpose(x_mat), x_mat)),
                  np.dot(np.transpose(x_mat), y))
print(beta_mat)
## [1.10062011 0.49214627]
```

```r
# Calculate the fitted values
y_fit <- beta_mat[1] + beta_mat[2] * x
# Plot the data and the fitted regression:
plot(x, y)
lines(x = x, y = y_fit, col = "blue")
```

```python
# Calculate the fitted values
y_fit = beta_mat[0] + beta_mat[1] * x
# Plot the data and the fitted regression:
_ = plt.figure(num = 4, figsize = (10, 8))
_ = plt.plot(x, y, linestyle = "None", marker = "o",
             markerfacecolor = 'None', markeredgecolor = "black")
_ = plt.plot(x, y_fit, linestyle = "-", color = "blue")
plt.show()
```

Note that when plotting the fitted values, we actually need to sort the values of $$\widehat{Y}$$ by $$X$$. For the linear regression case, the regression appears to be drawn as expected, but for nonlinear models, plotting unsorted data will result in an unreadable plot. An example of the first 15 data point pairs of:

• unsorted data points $$(X_i, Y_i)$$:

```r
print(head(cbind(x, y_fit), 15))
##        x     y_fit
##  [1,] 31 16.526761
##  [2,] 15  8.601252
##  [3,] 14  8.105907
##  [4,]  3  2.657120
##  [5,] 42 21.975549
##  [6,] 50 25.938304
##  [7,] 43 22.470893
##  [8,] 37 19.498827
##  [9,] 14  8.105907
## [10,] 25 13.554695
## [11,] 26 14.050040
## [12,] 27 14.545384
## [13,]  5  3.647808
## [14,] 27 14.545384
## [15,] 28 15.040728
```

```python
print(np.column_stack((x, y_fit))[:15])
## [[46. 23.73934845]
##  [ 3.  2.57705892]
##  [29. 15.37286189]
##  [35. 18.3257395 ]
##  [39. 20.29432457]
##  [18.  9.95925294]
##  [20. 10.94354548]
##  [43. 22.26290965]
##  [23. 12.41998428]
##  [34. 17.83359323]
##  [33. 17.34144696]
##  [50. 25.70793353]
##  [48. 24.72364099]
##  [10.  6.02208279]
##  [33. 17.34144696]]
```

• sorted data points $$(X_i, Y_i)$$, such that $$X_i \leq X_j$$ for $$i<j$$:

```r
print(head(cbind(x[order(x)], cbind(y_fit[order(x)])), 15))
##       [,1]     [,2]
##  [1,]    1 1.666431
##  [2,]    1 1.666431
##  [3,]    2 2.161775
##  [4,]    2 2.161775
##  [5,]    2 2.161775
##  [6,]    2 2.161775
##  [7,]    2 2.161775
##  [8,]    3 2.657120
##  [9,]    3 2.657120
## [10,]    3 2.657120
## [11,]    3 2.657120
## [12,]    4 3.152464
## [13,]    4 3.152464
## [14,]    4 3.152464
## [15,]    5 3.647808
```

```python
print(np.column_stack((x[np.argsort(x)], y_fit[np.argsort(x)]))[:15])
## [[1. 1.59276638]
##  [1. 1.59276638]
##  [2. 2.08491265]
##  [2. 2.08491265]
##  [2. 2.08491265]
##  [2. 2.08491265]
##  [2. 2.08491265]
##  [2. 2.08491265]
##  [2. 2.08491265]
##  [2. 2.08491265]
##  [2. 2.08491265]
##  [3. 2.57705892]
##  [3. 2.57705892]
##  [3. 2.57705892]
##  [3. 2.57705892]]
```
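In Python, this sorting amounts to indexing both arrays by `np.argsort(x)`. A minimal sketch (the fitted coefficients below are made-up round numbers, not the estimates from the text):

```python
import numpy as np

rng = np.random.default_rng(123)
x = rng.choice(np.arange(1, 51), size=300, replace=True)
y_fit = 1.1 + 0.49 * x            # hypothetical fitted line for illustration

order = np.argsort(x)             # permutation that sorts the X values
x_sorted = x[order]
y_sorted = y_fit[order]

# x_sorted is non-decreasing, so plotting (x_sorted, y_sorted) as a line
# traces the fitted curve left to right without back-tracking:
print(np.all(np.diff(x_sorted) >= 0))   # True
```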
# Understand the problem

Let $O$ be the circumcenter and $G$ be the centroid of a triangle $ABC$. If $R$ and $r$ are the circumradius and inradius of the triangle, respectively, prove that $$OG \leq \sqrt{ R ( R - 2r ) } .$$

Balkan MO 1996, Geometry

##### Difficulty Level

Easy

Let $I$ be the incentre. Euler’s theorem says that $OI^2=R(R-2r)$. Hence the result actually proves that $OG\le OI$. Do you really need a hint? Try it first!

The distance $OG$ is easily computable from standard formulae. For example, one can use complex numbers by assuming that $O=0$ and $|A|=|B|=|C|=R$. The centroid is given by $\frac{A+B+C}{3}$. Hence $OG^2=\frac{|A+B+C|^2}{9}=\frac{(A+B+C)(\overline{A}+\overline{B}+\overline{C})}{9}=\frac{3R^2+A\overline{B}+A\overline{C}+B\overline{A}+B\overline{C}+C\overline{A}+C\overline{B}}{9}=R^2-\frac{(2R^2-A\overline{B}-B\overline{A})+(2R^2-A\overline{C}-C\overline{A})+(2R^2-B\overline{C}-C\overline{B})}{9}=R^2-\frac{|A-B|^2+|B-C|^2+|C-A|^2}{9}=R^2-\frac{a^2+b^2+c^2}{9}$.

Show that $R(R-2r)=R^2-\frac{abc}{a+b+c}$.

Combining all the hints, the problem reduces to proving that $(a^2+b^2+c^2)(a+b+c)\ge 9abc$. This follows from the AM-GM inequalities $\frac{a^2+b^2+c^2}{3}\ge (abc)^{\frac{2}{3}}$ and $\frac{a+b+c}{3}\ge (abc)^{\frac{1}{3}}$.
### 2016-2017

The Topology / Geometry seminar meets at 3:00 p.m. on Tuesdays in 210 Deady, except as noted. See also the University of Oregon Mathematics Department webpage.

## Fall 2016

| Date | Speaker | Title |
| --- | --- | --- |
| September 23, 10:30 a.m. | Jessica Purcell (Monash) | Limits of knots |
| September 27 | | Organizational meeting |
| October 4 | Shida Wang (UO) | Linear independence in the smooth concordance group |
| October 11 | Dan Dugger (UO) | Involutions on surfaces |
| October 18 | Dev Sinha (UO) | Towards a geometric model for cochains, as an E-infinity algebra |
| October 25 | Scott Baldridge (LSU) | Invariants of Special Lagrangian Cones |
| November 1 | Leanne Merrill (UO) | Algebraic v_n self maps at the prime 2 |
| November 8 | Tian Yang (Stanford) | Volume conjectures for Reshetikhin-Turaev and Turaev-Viro invariants |
| November 15 | Demetre Kazaras (UO) | Minimal hypersurfaces with free boundary and psc-bordism |
| November 22 | Nathan Dunfield (UIUC) | A tale of two norms |
| November 29, 10:00 a.m. | Ailsa Keating (Columbia) | On higher dimensional Dehn twists |
| November 30, 4:00 p.m. | | Course organization meeting |
| December 8 | Vinicius Gripp Barros Ramos (IMPA) | Symplectic embeddings, toric domains and billiards |

## Winter 2017

| Date | Speaker | Title |
| --- | --- | --- |
| January 10 | | No seminar this week. |
| January 17 | | No seminar this week. |
| January 24 | | No seminar this week. |
| January 31, 11:00 a.m. | Boris Botvinnik (UO) | On the topology of the space of Ricci-positive metrics |
| January 31, 3:00 p.m. | David Pengelley (Oregon State) | How is a projective space fitted together? |
| February 7 | Yefeng Shen (Stanford) | LG/CY correspondence via modularity |
| February 14 | TBD | TBD |
| February 21 | Paul Arnaud Songhafouo Tsopméné (University of Regina) | Cosimplicial models for manifold calculus |
| February 28 | Christian Millichap (Linfield College) | Commensurability of hyperbolic knot and link complements |
| March 7 | | No seminar this week. |
| March 14 | Dan Dugger (UO) | Bigraded cohomology for Z/2-spaces |

## Spring 2017

| Date | Speaker | Title |
| --- | --- | --- |
| April 4 | | Organizational meeting |
| April 11 | Hongbin Sun (UC Berkeley) | NonLERFness of arithmetic hyperbolic manifold groups |
| April 18 | Michael Willis (UVA) | The Khovanov homology of infinite braids |
| April 25 | Vassili Gorbounov (U. Aberdeen) | New feature of the Schubert calculus |
| May 1 - 2 | Niven Lectures: Sergei Tabachnikov (Penn State) | TBD |
| May 9 | Biji Wong (Brandeis) | Equivariant corks and Heegaard Floer homology |
| May 15 - 19 | Moursund Lectures: Mikhail Khovanov (Columbia) | TBD |
| May 23 | Boris Botvinnik (UO) | Torelli groups and families of metrics with positive Ricci curvature |
| May 30 | Kirk McDermott (OSU) | Examples of 3-manifold spines arising from a family of cyclic presentations |
| | (Simons Center, Stonybrook) | The mod 2 cohomology of some SU(2) representation spaces for a surface |

## Abstracts

### Fall 2016

September 23, 2016, 10:30 a.m. Jessica Purcell (Monash University), "Limits of knots". Abstract: There are different ways to define the convergence of knots. For example, the diagram graphs of a sequence of knots might converge to another graph, using ideas from graph theory, or the geometric structures on the knot complements might converge to a metric space, using ideas from geometry. In this talk, we will discuss both notions of convergence of knots, some consequences, and open questions.

October 4, 2016. Shida Wang (UO), "Linear independence in the smooth concordance group". Abstract: This talk will start from an expository introduction to the smooth knot concordance group. Then we will review knot Heegaard Floer theory and survey a few recent results on concordance as applications. Most of these results will be about families of infinitely many linearly independent knots, revealing fine structure of the concordance group.

October 18, 2016. Dev Sinha (UO), "Towards a geometric model for cochains, as an E-infinity algebra". Abstract: Algebraic topologists love to find perfect algebraic reflections of topological phenomena, for example how subgroups of the fundamental group correspond to covering spaces. Quillen found that rational homotopy theory was perfectly modeled by differential graded Lie or commutative algebras, remarkably implying that those theories are equivalent. Sullivan extended the theory and showed how de Rham theory makes this calculable. In his thesis, Mandell showed that p-adic homotopy theory is perfectly reflected in (but not equivalent to) singular cochains as an E-infinity algebra. I will give progress on geometric models of this E-infinity algebra for manifolds, where intersection and linking play the role that differential forms play in de Rham theory.

October 25, 2016. Scott Baldridge (Louisiana State), "Invariants of Special Lagrangian Cones". Abstract: Special Lagrangian cones play a central role in the understanding of the SYZ conjecture, an important conjecture in mathematics based upon mirror symmetry and certain string theory models in physics. According to string theory, our universe is a product of the standard Minkowsky space with a Calabi-Yau 3-fold. Strominger, Yau, and Zaslov conjectured that Calabi-Yau 3-folds can be fibered by special Lagrangian 3-tori with singular fibers. To make this idea rigorous one needs control over the singularities, which can be modeled by special Lagrangian cones. In this talk, we discuss special Lagrangian cones, the difficulties involved in defining and computing invariants of them, and the hope that these invariants may offer in understanding the SYZ conjecture.

November 1, 2016. Leanne Merrill (UO), "Algebraic v_n self maps at the prime 2".
Abstract: Algebraic topologists love to find perfect algebraic reflections of topological phenomena, for example how subgroups of the fundamental group correspond to covering spaces. Quillen found that rational homotopy theory is perfectly modeled by differential graded Lie or commutative algebras, remarkably implying that those theories are equivalent. Sullivan extended the theory and showed how de Rham theory makes this calculable. In his thesis, Mandell showed that p-adic homotopy theory is perfectly reflected in (but not equivalent to) singular cochains as an E-infinity algebra. I will give progress on geometric models of this E-infinity algebra for manifolds, where intersection and linking play the role that differential forms play in de Rham theory.

October 25, 2016. Scott Baldridge (Louisiana State), "Invariants of Special Lagrangian Cones".

Abstract: Special Lagrangian cones play a central role in the understanding of the SYZ conjecture, an important conjecture in mathematics based upon mirror symmetry and certain string theory models in physics. According to string theory, our universe is a product of the standard Minkowski space with a Calabi-Yau 3-fold. Strominger, Yau, and Zaslow conjectured that Calabi-Yau 3-folds can be fibered by special Lagrangian 3-tori with singular fibers. To make this idea rigorous one needs control over the singularities, which can be modeled by special Lagrangian cones. In this talk, we discuss special Lagrangian cones, the difficulties involved in defining and computing invariants of them, and the hope that these invariants may offer in understanding the SYZ conjecture.

November 1, 2016. Leanne Merrill (UO), "Algebraic $v_n$ self maps at the prime 2".

Abstract: A central question of algebraic topology is to understand homotopy classes of maps between finite cell complexes.
The Nilpotence Theorem of Hopkins-Devinatz-Smith together with the Periodicity Theorem of Hopkins-Smith describes non-nilpotent self maps of finite spectra. The Morava K-theories $K(n)$ are extraordinary cohomology theories which detect whether a finite spectrum $X$ supports a $v_n$-self map. Such maps are known to exist for each finite spectrum $X$ for an appropriate $n$, but few explicit examples are known. Working at the prime 2, we use a technique of Palmieri-Sadofsky to produce algebraic analogs of $v_n$ maps that are easier to detect and compute. We reproduce the existence proof of Adams's $v_1^4$ map on the mod 2 Moore spectrum, and work towards a $v_2^i$ map for a small value of $i$.

November 8, 2016. Tian Yang (Stanford), "Volume conjectures for Reshetikhin-Turaev and Turaev-Viro invariants".

Abstract: In joint work with Qingtao Chen, we conjecture that at the root of unity $\exp(2\pi i/r)$, instead of the usually considered root $\exp(\pi i/r)$, the Turaev-Viro and Reshetikhin-Turaev invariants of a hyperbolic 3-manifold grow exponentially, with growth rates respectively the hyperbolic and the complex volume of the manifold. This reveals a different asymptotic behavior of the relevant quantum invariants than that of Witten's invariants (which grow polynomially by the Asymptotic Expansion Conjecture), which may indicate a different geometric interpretation of the Reshetikhin-Turaev invariants than SU(2) Chern-Simons gauge theory. Recent progress toward these conjectures will be summarized, including joint work with Renaud Detcherry and Effie Kalfagianni.

November 15, 2016. Demetre Kazaras (UO), "Minimal hypersurfaces with free boundary and psc-bordism".

Abstract: There is a well-known technique due to Schoen-Yau from the late 70s which uses (stable) minimal hypersurfaces to study the topological implications of a (closed) manifold's ability to support positive scalar curvature metrics.
In this talk, we describe a version of this technique for manifolds with boundary and discuss how it can be used to study bordisms of positive scalar curvature metrics.

November 22, 2016. Nathan Dunfield (UIUC), "A tale of two norms".

Abstract: The first cohomology of a hyperbolic 3-manifold has two natural norms: the Thurston norm, which measures the topological complexity of surfaces representing the dual homology class, and the harmonic norm, which is just the $L^2$ norm on the corresponding space of harmonic 1-forms. Bergeron-Sengun-Venkatesh recently showed that these two norms are closely related, at least when the injectivity radius is bounded below. Their work was motivated by the connection of the harmonic norm to the Ray-Singer analytic torsion and issues of torsion growth discussed in the first talk. After carefully introducing both norms and the connection to torsion growth, I will discuss new results that refine and clarify the precise relationship between them; one tool here will be a third norm based on least-area surfaces. This is joint work with Jeff Brock.

November 29, 2016. Ailsa Keating (Columbia), "On higher dimensional Dehn twists".

Abstract: Given a Lagrangian sphere $L$ in a symplectic manifold $M$, one can define a higher-dimensional Dehn twist in $L$, a diffeomorphism of $M$. This generalises the classical notion of a Dehn twist on a Riemann surface. After defining them, we will explore some of their properties, with an emphasis on comparing them with properties in the 2D case. No prior knowledge of symplectic topology will be assumed.

December 8, 2016. Vinicius Gripp Barros Ramos (IMPA), "Symplectic embeddings, toric domains and billiards".

Abstract: Embedded contact homology capacities were defined by Michael Hutchings and they have been shown to provide sharp obstructions to many symplectic embeddings.
In this talk, I will explain how they can be used to study symplectic embeddings of the Lagrangian bidisk and how this space is related to the space of billiards on a round disk.

### Winter 2017

January 31, 2017, 11:00 a.m. Boris Botvinnik (UO), "On the topology of the space of Ricci-positive metrics".

Abstract: This is joint work with David Wraith. We study the space $\mathcal{R}^{\mathrm{Ric}+}(M)$ of metrics with positive Ricci curvature on a closed spin manifold $M$ of $\dim M = d$. There is a natural map $\iota: \mathcal{R}^{\mathrm{Ric}+}(M) \to \mathcal{R}^{+}(M)$ to the space of metrics with positive scalar curvature. Let $g_0 \in \mathcal{R}^{+}(M)$ be any metric; then there is the index-difference map $\mathrm{inddiff}_{g_0}: \mathcal{R}^{+}(M) \to \Omega^{\infty+d+1}KO$ defined by Hitchin. Recently it was established by Botvinnik, Ebert and Randal-Williams that the index-difference map $\mathrm{inddiff}_{g_0}$ detects non-trivial homotopy groups $\pi_q \mathcal{R}^{+}(M)$ for all $q$ such that $KO_{d+q+1} \neq 0$, where $d \geq 6$. We show that for any $\ell \geq 1$ and even integer $d \geq 6$, there exists a spin manifold $W$, $\dim W = d$, together with a metric $g_0 \in \mathcal{R}^{\mathrm{Ric}+}(W)$, such that the composition $$\mathrm{inddiff}_{g_0}: \mathcal{R}^{\mathrm{Ric}+}(W) \xrightarrow{\iota} \mathcal{R}^{+}(W) \xrightarrow{\mathrm{inddiff}_{g_0}} \Omega^{\infty+d+1} KO$$ detects non-trivial homotopy groups $\pi_q \mathcal{R}^{\mathrm{Ric}+}(W)$ for all $q \leq \ell$ such that $KO_{d+q+1} \neq 0$.

January 31, 2017, 3:00 p.m. David Pengelley (Oregon State), "How is a projective space fitted together?".

Abstract: Projective spaces are among the most important geometric objects in mathematics. An example is n-dimensional real projective space, obtained from the n-sphere by identifying antipodal points. We will investigate how the essential geometric cells of various dimensions in a projective space are glued to one another, as detected by cohomology operations that reflect specific geometric attachments.
We find a minimal set of generators and relations modulo two for the cells and attachments, that is, a minimal presentation for the cohomology of a real projective space as a module over the Steenrod algebra of cohomology operations. The morning Homotopy Theory seminar will provide useful, but not necessary, hands-on preparation.

February 7, 2017. Yefeng Shen (Stanford), "LG/CY correspondence via modularity".

Abstract: Using Ramanujan identities and WDVV equations, we prove that the Gromov-Witten generating functions are quasi-modular forms when the target Calabi-Yau is a quotient of an elliptic curve. Furthermore, we apply the Cayley transformation to relate the Gromov-Witten theory of these targets and their counterpart Fan-Jarvis-Ruan-Witten theory. This solves the LG/CY correspondence in these cases. The work is joint with Jie Zhou.

February 21, 2017. Paul Arnaud Songhafouo Tsopméné (University of Regina), "Cosimplicial models for manifold calculus".

Abstract: Manifold calculus is a tool developed by Goodwillie and Weiss which makes it possible to approximate a contravariant functor, $F$, from the category of m-manifolds to the category of spaces (or alike), by its "Taylor approximation", $T_{\infty}F$. I will explain how to construct a fairly explicit and computable cosimplicial model of $T_{\infty}F(M)$ out of a simplicial model of the compact manifold $M$ (i.e. out of a simplicial set whose realization is $M$, with extra tangential information if needed). This cosimplicial model in degree $p$ is then equivalent to the evaluation of $F$ on a disjoint union of as many m-disks as p-simplices in the simplicial model of $M$. As an example, we apply this construction to the functor $F(M) = \mathrm{Emb}(M, W)$ of smooth embeddings in a given manifold $W$; in that case our cosimplicial model in degree $p$ is then just the configuration space of all the p-simplices of $M$ in $W$, times a power of a Stiefel manifold.
When $\dim(W) > \dim(M) + 2$, a theorem of Goodwillie-Klein implies that our explicit cosimplicial space is a model of $\mathrm{Emb}(M, W)$. (This generalizes Sinha's cosimplicial model for the space of long knots, which was for the special case when $M$ is the real line.) This allows one to make explicit computations. As an example, using this cosimplicial model we show that the rational Betti numbers of the space $\mathrm{Emb}(M, \mathbb{R}^n)$ have exponential growth when the Euler characteristic of $M$ is $< -1$. (This is joint work with Pedro Boavida de Brito, Pascal Lambrechts, and Daniel Pryor.)

February 28, 2017. Christian Millichap (Linfield College), "Commensurability of hyperbolic knot and link complements".

Abstract: In general, it is a difficult problem to determine if two manifolds are commensurable, i.e., share a common finite-sheeted cover. Here, we will examine some combinatorial and geometric approaches to analyzing commensurability classes of hyperbolic knot and link complements. In particular, we will discuss current work done with Worden to show that the only commensurable hyperbolic 2-bridge link complements are the figure-eight knot complement and the $6_{2}^{2}$ link complement. Part of this analysis also results in an interesting corollary: a hyperbolic 2-bridge link complement cannot irregularly cover a hyperbolic 3-manifold.

### Spring 2017

April 11, 2017. Hongbin Sun (UC Berkeley), "NonLERFness of arithmetic hyperbolic manifold groups".

Abstract: We will show that, for "almost" all arithmetic hyperbolic manifolds with dimension > 3, their fundamental groups are not LERF. The main ingredient in the proof is a study of certain graphs of groups with hyperbolic 3-manifold groups being the vertex groups. We will also show that a compact irreducible 3-manifold with empty or tori boundary does not support a geometric structure if and only if its fundamental group is not LERF.

April 18, 2017. Michael Willis (UVA), "The Khovanov homology of infinite braids".
Abstract: In this talk, I will show that the limiting Khovanov chain complex of any infinite positive braid categorifies the Jones-Wenzl projector, extending Lev Rozansky's work with infinite torus braids. I will also describe a similar result for the limiting Lipshitz-Sarkar-Khovanov homotopy types of the closures of such braids. Extensions to more general infinite braids will also be considered. This is joint work with Gabriel Islambouli.

April 25, 2017. Vassili Gorbounov (University of Aberdeen), "New feature of the Schubert calculus".

Abstract: In the talk we will describe a new feature of the classical Schubert calculus which holds for all types of the classical Lie groups. As the main example we will use the type A Grassmannians. The usual definition of the Schubert cycles involves a choice of a parameter, namely a choice of a full flag. Studying the dependence of the construction of the Schubert cycles on these parameters in the equivariant cohomology leads to an interesting 1-cocycle on the permutation group, or a solution to the quantum Yang-Baxter equation. This connects the Schubert calculus to the theory of quantum integrable systems. We show the above cocycle is the "Baxterization" (the term introduced by V. Jones) of the natural action of the nil-Coxeter algebra of Bernstein-Gelfand-Gelfand-Demazure difference operators in the equivariant cohomology of partial flag varieties. We will outline some applications of this connection as well.

May 9, 2017. Biji Wong (Brandeis), "Equivariant corks and Heegaard Floer homology".

Abstract: A cork is a contractible smooth 4-manifold with an involution on its boundary that does not extend to a diffeomorphism of the entire manifold. Corks can be used to detect exotic structures; in fact any two smooth structures on a closed simply-connected 4-manifold are related by a cork twist.
Recently, Auckly-Kim-Melvin-Ruberman showed that for any finite subgroup $G$ of SO(4) there exists a contractible 4-manifold with an effective $G$-action on its boundary so that the twists associated to the non-trivial elements of $G$ do not extend to diffeomorphisms of the entire manifold. In this talk we will use Heegaard Floer techniques originating in work of Akbulut-Karakurt to give a different proof of this phenomenon.

May 30, 2017. Kirk McDermott (Oregon State), "Examples of 3-manifold spines arising from a family of cyclic presentations".

Abstract: Cyclically presented groups arise naturally as the fundamental groups of certain closed, orientable 3-manifolds. In this talk, we prove that a particular family of cyclic presentations is a new collection of 3-manifold spines. A common approach is to take a spherical van Kampen diagram and then perform a classical face-pairing technique using an Euler characteristic argument. Here, we instead work with a spherical picture (the dual to a diagram) and show, equivalently, when the picture represents a Heegaard diagram for a 3-manifold. The resulting 3-manifolds have cyclic symmetry, a consequence of the fact that each is a finite cyclic covering of a certain lens space. These examples include and extend earlier results of Cavicchioli, Repovs, and Spaggiari from 2003.

June 6, 2017. Chris Scaduto (Stony Brook), "The mod 2 cohomology of some SU(2) representation spaces for a surface".

Abstract: Consider the space of representations from the fundamental group of a punctured surface to SU(2) that are $-1$ around the puncture. I'll tell you about the 2-torsion in the cohomology of this space. This is a by-product of an investigation into the mod 2 cohomology ring of the space of representations modulo conjugation, which is in turn motivated by a problem in instanton homology. This is joint work with Matt Stoffregen.
https://math.stackexchange.com/questions/1735115/completeness-relation-for-tricomi-confluent-hypergeometric-function
Completeness Relation for Tricomi Confluent Hypergeometric Function

Consider the Kummer differential equation $$\frac{d}{dz}\left[z^be^{-z}\frac{dw}{dz}\right]=az^{b-1}e^{-z}w,\quad z\in\mathbb{R}.$$ It is an eigenvalue problem of Sturm-Liouville type with weight function $W(z)=z^{b-1}e^{-z}$. The two linearly independent solutions are the Kummer function $M(a,b,z)$ and the Tricomi function $U(a,b,z)$. My question is: what type of boundary conditions (BCs) does one need to impose at $z=\pm\infty$ for the usual completeness relation to hold, $$\int da\,w(a,b,z)w(a,b,z')W(z) =\delta(z-z')?$$ Here $w$ is the correct linear combination of $M$ and $U$ to match the BCs. For example, if we want the eigenfunctions $w$ to decay for $z\to\pm\infty$, then $w=U$ is the only choice, because $U\sim z^{-a}$ for large $z$ ($M$ explodes), and then $a$ has to be positive for this to be a proper decay. So I would naively integrate over $a\in\mathbb{R}_+$ in this case. I also tried to look for this type of integral in tables but haven't found anything useful. Does anybody have any ideas how to make this integral precise? Thanks.

What you are looking for is called Weyl's limit point (LP) / limit circle (LC) classification. An endpoint is LC if both solutions are square integrable (w.r.t. the weight function) and LP otherwise. If both endpoints are LC then you need boundary conditions and you will have a complete set of eigenfunctions. Otherwise the spectrum might have a continuous component and the eigenfunction expansion will be an integral transform. The Kummer equation is LC near $0$ for $0<b<2$ and LP otherwise. $\infty$ is always LP. However, I don't know the precise spectrum of this equation.
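As a quick sanity check on the asymptotics quoted in the question ($U \sim z^{-a}$ for large $z$ while $M$ explodes), one can evaluate both solutions numerically. This is only an illustrative sketch using SciPy's special-function routines; the parameter values are arbitrary choices of mine:

```python
from scipy.special import hyp1f1, hyperu

a, b, z = 1.0, 1.5, 100.0

# U(a, b, z) ~ z^{-a} as z -> infinity, so z**a * U(a, b, z) should be near 1.
print(z**a * hyperu(a, b, z))

# M(a, b, z) ~ Gamma(b)/Gamma(a) * e^z * z^{a-b} as z -> infinity: it explodes.
print(hyp1f1(a, b, z))
```

The first number comes out close to 1 (the leading correction is $-a(a-b+1)/z$), while the second is astronomically large, consistent with $U$ being the only decaying choice.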
Near $0$ you can take the Kummer function, which is entire with respect to the spectral parameter; hence there is a corresponding integral transform $$\hat{f}(a) = \int_0^\infty M(a,b,x) f(x) W(x) dx$$ and you can go back via $$f(x) = \int_{-\infty}^\infty M(a,b,x) \hat{f}(a) d\rho(a)$$ where $\rho$ is the associated singular spectral measure. Depending on your background this might however be quite technical. Moreover, it only covers the case where the left endpoint $0$ is regular. For the general case you will need
https://socratic.org/questions/if-a-scuba-diver-releases-a-10-ml-air-bubble-below-the-surface-where-the-pressur
# If a scuba diver releases a 10-mL air bubble below the surface where the pressure is 3.5 atm, what is the volume (in mL) of the bubble when it rises to the surface and the pressure is 1.0 atm?

Aug 14, 2016

Larger! This is a simple manifestation of Boyle's Law.

#### Explanation:

$P_1 V_1 = P_2 V_2$

And thus

$V_2 = \frac{P_1 V_1}{P_2} = \frac{10\ \text{mL} \times 3.5\ \text{atm}}{1.0\ \text{atm}} = 35\ \text{mL}$

This MARKED difference in volume illustrates the key rule in scuba diving, something the instructors hammer home to you from your very first lesson: **NEVER HOLD YOUR BREATH**. Many fatalities have occurred when scuba divers ascend without breathing out (even a 1-2 m ascent is dangerous); the gas in their lungs can expand rapidly upon ascent. Of course at 30-40 m depths they are breathing air at 4-5 atmospheres. This is not so much a problem for those (few) free divers who can simply hold their breaths, as the gas in their lungs compresses upon descent and the volume of expansion is fixed. Even when you are breathing properly at depth, divers can suffer disorientation and confusion due to the narcotic effects of nitrogen at pressure (i.e. "nitrogen narcosis").
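The arithmetic above is easy to script. A minimal sketch (the function name is mine, and it assumes constant temperature, as Boyle's Law requires):

```python
def boyle_final_volume(p1_atm, v1_ml, p2_atm):
    """Boyle's Law at constant temperature: P1*V1 = P2*V2, so V2 = P1*V1/P2."""
    return p1_atm * v1_ml / p2_atm

# The bubble in the question: 10 mL at 3.5 atm rising to 1.0 atm.
print(boyle_final_volume(3.5, 10.0, 1.0))  # 35.0
```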
https://www.shaalaa.com/question-bank-solutions/situational-problems-based-quadratic-equations-related-day-day-activities-be-incorporated-a-motor-boat-whose-speed-still-water-18-km-hr-takes-1-hour-more-go-24-km-up-stream-that-return-down-stream-same-spot-find-speed-stream_63465
# A Motor Boat Whose Speed in Still Water is 18 Km/Hr Takes 1 Hour More to Go 24 Km Upstream than to Return Downstream to the Same Spot. Find the Speed of the Stream. - CBSE Class 10 - Mathematics

Concept: Situational Problems Based on Quadratic Equations Related to Day to Day Activities to Be Incorporated

#### Question

A motor boat whose speed in still water is 18 km/hr takes 1 hour more to go 24 km upstream than to return downstream to the same spot. Find the speed of the stream.

#### Solution

Let the speed of the stream be x km/hr.

Speed of the boat in still water = 18 km/hr.

Total distance = 24 km.

We know that,

Speed of the boat upstream = speed of the boat in still water − speed of the stream = (18 − x) km/hr

Speed of the boat downstream = speed of the boat in still water + speed of the stream = (18 + x) km/hr

Time of upstream journey = t1 = $\frac{24}{18 - x}$

Time of downstream journey = t2 = $\frac{24}{18 + x}$

According to the question, t1 − t2 = 1 hr

$\Rightarrow \frac{24}{18 - x} - \frac{24}{18 + x} = 1$

$\Rightarrow \frac{24(18 + x - 18 + x)}{(18 - x)(18 + x)} = 1$

$\Rightarrow \frac{24(2x)}{18^2 - x^2} = 1$

$\Rightarrow 48x = 324 - x^2$

$\Rightarrow x^2 + 48x - 324 = 0$

$\Rightarrow x^2 + 54x - 6x - 324 = 0$

$\Rightarrow x(x + 54) - 6(x + 54) = 0$

$\Rightarrow (x - 6)(x + 54) = 0$

$\Rightarrow x - 6 = 0 \text{ or } x + 54 = 0$

$\Rightarrow x = 6 \text{ or } x = -54$

Since speed cannot be negative, the speed of the stream is 6 km/hr.
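The same quadratic can also be solved programmatically. A short sketch (function and variable names are mine) that applies the quadratic formula to the rearranged equation:

```python
import math

def stream_speed(still_water_speed, distance, time_difference):
    """Solve d/(b - x) - d/(b + x) = t for x, i.e. the quadratic
    x^2 + (2*d/t)*x - b^2 = 0; return the positive root."""
    b, d, t = still_water_speed, distance, time_difference
    coeff = 2.0 * d / t
    # Quadratic formula for x^2 + coeff*x - b^2 = 0; the "+" branch is positive.
    return (-coeff + math.sqrt(coeff * coeff + 4.0 * b * b)) / 2.0

print(stream_speed(18, 24, 1))  # 6.0
```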
https://brilliant.org/discussions/thread/congratulations-on-600-day-streak/
# Congratulations on $600$ day streak!

A huge round of applause for two great people in our community who have achieved the huge feat of completing a $600$ day streak! These two great people are Marios Patsios and Ferdinand Dvorsak. Once again, congratulations to both of them!

Note by Kunal Joshi
5 years, 8 months ago
Sort by:

@Marios Patsios @Ferdinand Dvorsak Congratulations to both of you! - 5 years, 8 months ago

I'll be congratulating you after 42 days! :D - 5 years, 8 months ago

Yea! Hope that it lasts that long! - 5 years, 8 months ago

Thanks a lot - 5 years, 8 months ago

Congratulations @Marios Patsios sir, @Ferdinand Dvorsak sir!! Wow! I can never make a record like that! Hats off to you guys. - 5 years, 8 months ago

Thank you. - 5 years, 8 months ago

Congratulations - 5 years, 8 months ago

Wow! Nice job! That truly takes some time and patience. I respect you both. - 5 years, 8 months ago
https://toph.co/p/shuvo-noboborsho
# Shuvo Noboborsho!

By Shuvo_Malakar · Limits 1s, 256 MB

"Shuvo Noboborsho" is the traditional greeting for Bengalis in the Bengali new year, which means "Happy New Year". Here "Noboborsho" means "New Year". And "Baishakhi Mela" is the traditional cultural fair held on the occasion of the Bengali new year.

This year, Pranto has got $N$ panjabis and $M$ pajamas as gifts of Noboborsho from his parents and relatives. He decided to visit the Baishakhi Mela with his two best friends, Auditi and Sanonda, wearing a panjabi and a pajama. But he became confused about which one he should wear. So, he got the idea of wearing a panjabi along with a pajama in all possible ways and told Auditi and Sanonda to choose the best combination. It takes $S$ seconds to put on a panjabi along with a pajama. They will leave for the Baishakhi Mela after $K$ seconds. Now, Pranto asked Auditi how many unique combinations of a panjabi and a pajama he can try within this period of time. And Pranto also promised Auditi to buy her a nice gift from the Baishakhi Mela if she can answer his question instantly. Now, Auditi needs help from a programmer like you to calculate the answer instantly. Can you help her?

## Input

The first line of the input contains an integer $T$ $(1 \le T \le 10^5)$, the number of test cases. Each of the $T$ test cases then contains a line of four space-separated integers $N$, $M$, $S$, $K$ $(1 \le N, M, S \le 10^6$, $1 \le K \le 10^9)$: the number of panjabis, the number of pajamas, the time required to put on a panjabi along with a pajama, and the total time they have before leaving, respectively.

## Output

Output contains $T$ lines. The $i$-th $(1 \le i \le T)$ line contains an integer $C$, the number of unique combinations of a panjabi along with a pajama Pranto can try for the $i$-th test case.
## Sample

Input:

    4
    3 9 1 27
    5 7 2 30
    10 9 5 1000
    1000000 1000000 1000000 1000000000

Output:

    27
    15
    90
    1000
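Since Pranto can put on at most $\lfloor K / S \rfloor$ outfits before leaving, and only $N \times M$ distinct pairs exist, the answer is the smaller of the two. A minimal sketch in Python (the function name is my own, not part of the problem):

```python
def count_combinations(n: int, m: int, s: int, k: int) -> int:
    # Pranto can put on at most floor(k / s) outfits before leaving,
    # and there are only n * m distinct panjabi/pajama pairs to try.
    return min(n * m, k // s)

# The four sample cases from the problem statement:
samples = [(3, 9, 1, 27), (5, 7, 2, 30), (10, 9, 5, 1000),
           (10**6, 10**6, 10**6, 10**9)]
for n, m, s, k in samples:
    print(count_combinations(n, m, s, k))  # 27, 15, 90, 1000
```

A full submission would read $T$ and the test cases from standard input, but the whole problem reduces to this one `min` expression per case.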
https://www.codecademy.com/courses/learn-linear-algebra/lessons/introduction-to-linear-algebra/exercises/inverse-matrices
The inverse of a matrix, $A^{-1}$, is one where the following equation is true:

$AA^{-1} = A^{-1}A = I$

This means that the product of a matrix and its inverse (in either order) is equal to the identity matrix.

The inverse matrix is useful in many contexts, one of which includes solving equations with matrices. Imagine we have the following equation:

$xA = BC$

If we are trying to solve for $x$, we need to find a way to isolate that variable. Since we cannot divide matrices, we instead multiply both sides of the equation on the right by the inverse of $A$, resulting in the following:

$xAA^{-1} = BCA^{-1} \rightarrow x = BCA^{-1}$

An important consideration to keep in mind is that not all matrices have inverses. Matrices that do not have an inverse are referred to as singular matrices.

To find the inverse matrix, we can again use Gauss-Jordan elimination. Knowing that $AA^{-1} = I$, we can create the augmented matrix $[\,A \mid I\,]$ and perform row operations such that $[\,A \mid I\,] \rightarrow [\,I \mid A^{-1}\,]$. One way to find the necessary row operations to convert $A \rightarrow I$ is to first create an upper triangular matrix, and then eliminate the elements above the diagonal (starting from the bottom right side of the matrix). If the matrix is invertible, this leads to a matrix whose elements are 0 except along the diagonal. Each row can then be multiplied by the scalar that makes its diagonal element equal 1, producing the identity matrix.

### Instructions

Let's walk through an example of solving for an inverse matrix. We have the following matrix:

$A = \begin{bmatrix} 0 & 2 & 1 \\ -1 & -2 & 0 \\ -1 & 1 & 2 \end{bmatrix}$

We will solve for the inverse matrix by going from the form $[\,A \mid I\,]$ to $[\,I \mid A^{-1}\,]$. Let's follow along with each step of the animation.

- Step 1: Put the system of equations into $[\,A \mid I\,]$ form.
- Step 2: Swap row 1 and row 3.
- Step 3: Subtract the values of row 1 from the values of row 2 to cancel out the first entry in row 2.
- Step 4: Cancel out the first two entries of row 3 by adding 3/2 of each value of row 3 to row 2.
- Step 5: Cancel out the second entry of row 1 by adding triple of each value of row 1 to row 2.
- Step 6: Cancel out the third entry of row 2 by adding row 3 to -1/4 of each value of row 2.
- Step 7: Cancel out the third entry of row 1 by adding row 3 to 1/8 of each value of row 1.
- Step 8: Normalize the diagonals: multiply each value of row 1 by -8/3, each value of row 2 by 4/3, and each value of row 3 by -2.

We are now in $[\,I \mid A^{-1}\,]$ form! The inverse matrix is:

$A^{-1} = \begin{bmatrix} -4 & -3 & 2 \\ 2 & 1 & -1 \\ -3 & -2 & 2 \\ \end{bmatrix}$
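The $[\,A \mid I\,] \rightarrow [\,I \mid A^{-1}\,]$ procedure above can be sketched in code. The following is a minimal Gauss-Jordan inversion in Python using exact rational arithmetic (`fractions.Fraction`); its pivoting order differs from the animation's exact sequence of row operations, but it arrives at the same inverse:

```python
from fractions import Fraction

def invert(matrix):
    """Invert a square matrix via Gauss-Jordan elimination on [A | I]."""
    n = len(matrix)
    # Build the augmented matrix [A | I] with exact rational entries.
    aug = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
           for i, row in enumerate(matrix)]
    for col in range(n):
        # Swap a row with a nonzero pivot into position.
        pivot = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the diagonal entry becomes 1.
        inv_pivot = Fraction(1) / aug[col][col]
        aug[col] = [x * inv_pivot for x in aug[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    # The right half of [I | A^-1] is the inverse.
    return [row[n:] for row in aug]

A = [[0, 2, 1], [-1, -2, 0], [-1, 1, 2]]
print([[int(x) for x in row] for row in invert(A)])
# [[-4, -3, 2], [2, 1, -1], [-3, -2, 2]]
```

The printed result matches the inverse derived step by step in the lesson.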
https://testbook.com/question-answer/a-transformer-has-350-primary-turns-and-1050-secon--5e60db38f60d5d332ddc47aa
# A transformer has 350 primary turns and 1050 secondary turns. The primary winding is connected across a 230 V, 50 Hz supply. The induced EMF in the secondary will be

This question was previously asked in the URSC (ISRO) Technical Assistant (Electronics) paper, held on 24 March 2019.

1. 690 V, 50 Hz
2. 690 V, 150 Hz
3. 350 V, 150 Hz
4. 115 V, 50 Hz

## Answer

Option 1: 690 V, 50 Hz

## Detailed Solution

Concept:

- A transformer changes the level of voltage from one value to another at a constant frequency.
- It is a static device that transfers electrical energy from one circuit to another without any direct electrical connection between them. This is achieved through mutual induction between the two windings.
- In a real transformer, a portion of the input electrical energy is lost as heat.

In a transformer, the relation between the numbers of turns, the currents, and the voltages is given by:

$$n = \frac{{{N_1}}}{{{N_2}}} = \frac{{{V_1}}}{{{V_2}}}=\frac{I_2}{I_1}$$

where:

- N1 and N2 = number of turns in the primary and secondary windings, respectively
- V1 and I1 = voltage and current at the primary end
- V2 and I2 = voltage and current at the secondary end

Calculation:

Given V1 = 230 V at 50 Hz, N1 = 350, and N2 = 1050:

$$\frac{{{N_1}}}{{{N_2}}} = \frac{{{V_1}}}{{{V_2}}} \Rightarrow \frac{{{350}}}{{{1050}}} = \frac{{{230}}}{{{V_2}}} \Rightarrow V_2 = 690 \text{ V}$$

The frequency of the secondary voltage is the same as the input frequency, so the induced EMF is 690 V at 50 Hz.
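The calculation reduces to one line of arithmetic; a quick sketch in Python (the function name is illustrative):

```python
def secondary_emf(v1: float, n1: int, n2: int) -> float:
    # Ideal transformer: V1 / V2 = N1 / N2, so V2 = V1 * N2 / N1.
    # Frequency is unchanged; a transformer alters voltage level, not frequency.
    return v1 * n2 / n1

print(secondary_emf(230, 350, 1050))  # 690.0 (still at 50 Hz)
```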
https://www.isa-afp.org/entries/SDS_Impossibility.html
# The Incompatibility of SD-Efficiency and SD-Strategy-Proofness

Title: The Incompatibility of SD-Efficiency and SD-Strategy-Proofness
Author: Manuel Eberl
Submission date: 2016-05-04

Abstract: This formalisation contains the proof that there is no anonymous and neutral Social Decision Scheme for at least four voters and alternatives that fulfils both SD-Efficiency and SD-Strategy-Proofness. The proof is a fully structured and quasi-human-readable one. It was derived from the (unstructured) SMT proof of the case for exactly four voters and alternatives by Brandl et al. Their proof relies on an unverified translation of the original problem to SMT, and the proof that lifts the argument for exactly four voters and alternatives to the general case is also not machine-checked. In this Isabelle proof, on the other hand, all of these steps are fully proven and machine-checked. This is particularly important, seeing as a previously published informal proof of a weaker statement contained a mistake in precisely this lifting step.

BibTeX:

@article{SDS_Impossibility-AFP,
  author = {Manuel Eberl},
  title = {The Incompatibility of SD-Efficiency and SD-Strategy-Proofness},
  journal = {Archive of Formal Proofs},
  month = may,
  year = 2016,
  note = {\url{https://isa-afp.org/entries/SDS_Impossibility.html}, Formal proof development},
  ISSN = {2150-914x},
}

License: BSD License
Depends on: Randomised_Social_Choice
https://projecteuclid.org/euclid.jca/1407790529
## Journal of Commutative Algebra

### Positive margins and primary decomposition

#### Abstract

We study random walks on contingency tables with fixed marginals, corresponding to a (log-linear) hierarchical model. If the set of allowed moves is not a Markov basis, then there exist tables with the same marginals that are not connected. We study linear conditions on the values of the marginals that ensure that all tables in a given fiber are connected. We show that many graphical models have the positive margins property, which says that all fibers with strictly positive marginals are connected by the quadratic moves that correspond to conditional independence statements. The property persists under natural operations such as gluing along cliques, but we also construct examples of graphical models not enjoying this property. We also provide a negative answer to a question of Engström, Kahle and Sullivant by demonstrating that the global Markov ideal of the complete bipartite graph $K_{3,3}$ is not radical. Our analysis of the positive margins property depends on computing the primary decomposition of the associated conditional independence ideal. The main technical results of the paper are primary decompositions of the conditional independence ideals of graphical models of the $N$-cycle and the complete bipartite graph $K_{2,N-2}$, with various restrictions on the size of the nodes.

#### Article information

Source: J. Commut. Algebra, Volume 6, Number 2 (2014), 173-208.

Dates: First available in Project Euclid: 11 August 2014

Permanent link: https://projecteuclid.org/euclid.jca/1407790529

Digital Object Identifier: doi:10.1216/JCA-2014-6-2-173

Mathematical Reviews number (MathSciNet): MR3249835

Zentralblatt MATH identifier: 1375.13047

#### Citation

Kahle, Thomas; Rauh, Johannes; Sullivant, Seth. Positive margins and primary decomposition. J. Commut. Algebra 6 (2014), no. 2, 173-208. doi:10.1216/JCA-2014-6-2-173.
#### References

• 4ti2–A software package for algebraic, geometric and combinatorial problems on linear spaces, available at www.4ti2.de, 2007.
• J. Besag, Spatial interaction and the statistical analysis of lattice systems, J. Roy. Stat. Soc. 36 (1974), 192-236.
• René Birkner, Polyhedra: A package for computations with convex polyhedral objects, J. Software Alg. Geom. 1 (2009), 11-15.
• Nicolas Bourbaki, Éléments de mathématique, in Algèbre, Hermann, 1950.
• Florentina Bunea and Julian Besag, MCMC in $i \times j \times k$ contingency tables, in Monte Carlo methods, Neal Madras, ed., Fields Inst. Comm. 26, AMS and Fields Institute, 2000.
• Yuguo Chen, Ian H. Dinwoodie and Seth Sullivant, Sequential importance sampling for multiway tables, Ann. Stat. 34 (2006), 523-545.
• Yuguo Chen, Ian Dinwoodie and Ruriko Yoshida, Markov chains, quotient ideals, and connectivity with positive margins, in Algebraic and geometric methods in statistics, Pablo Gibilisco, Eva Riccomagno, Maria Piera Rogantin and Henry P. Wynn, eds., Cambridge University Press, Cambridge, 2010.
• József Dénes and A.D. Keedwell, Latin squares and their applications, Academic Press, New York, 1974.
• Mike Develin and Seth Sullivant, Markov bases of binary graph models, Ann. Comb. 7 (2003), 441-466.
• Persi Diaconis, David Eisenbud and Bernd Sturmfels, Lattice walks and primary decomposition, in Mathematical essays in honor of Gian-Carlo Rota, B. Sagan and R. Stanley, eds., Progr. Math. 161, Birkhäuser, Boston, 1998.
• Persi Diaconis and Bernd Sturmfels, Algebraic algorithms for sampling from conditional distributions, Ann. Stat. 26 (1998), 363-397.
• Mathias Drton, Bernd Sturmfels and Seth Sullivant, Lectures on algebraic statistics, Oberwolfach Sem. 39, Birkhäuser, Springer, Berlin, 2009.
• David Eisenbud and Bernd Sturmfels, Binomial ideals, Duke Math. J. 84 (1996), 1-45.
• Alexander Engström, Thomas Kahle and Seth Sullivant, Multigraded commutative algebra of graph decompositions, J. Alg. Comb. 39 (2014), 335-372.
• Dan Geiger, Christopher Meek and Bernd Sturmfels, On the toric algebra of graphical models, Ann. Stat. 34 (2006), 1463-1492.
• Daniel R. Grayson and Michael E. Stillman, Macaulay2, a software system for research in algebraic geometry, available at http://www.math.uiuc.edu/Macaulay2/.
• Jürgen Herzog, Takayuki Hibi, Freyja Hreinsdóttir, Thomas Kahle and Johannes Rauh, Binomial edge ideals and conditional independence statements, Adv. Appl. Math. 45 (2010), 317-333.
• Thomas Kahle, Decompositions of binomial ideals, J. Software Alg. Geom. 4 (2012), 1-5.
• –––, GraphBinomials, a library for walks on graphs on monomials, available from https://github.com/tom111/GraphBinomials, 2012.
• Thomas Kahle and Ezra Miller, Decompositions of commutative monoid congruences and binomial ideals, 2011, arXiv:1107.4699.
• Steffen L. Lauritzen, Graphical models, Oxford Stat. Sci. Ser., Oxford University Press, Oxford, 1996.
• Jesús A. De Loera and Shmuel Onn, Markov bases of three-way tables are arbitrarily complicated, J. Symb. Comp. 41 (2006), 173-181.
• Peter N. Malkin, Truncated Markov bases and Gröbner bases for integer programming, manuscript, 2006.
• Johannes Rauh, Generalized binomial edge ideals, Adv. Appl. Math. 50 (2013), 409-414.
• Johannes Rauh and Nihat Ay, Robustness, canalyzing functions and systems design, Theor. Biosci. 133 (2014), 63-78.
• Johannes Rauh and Thomas Kahle, The Markov bases database, http://markov-bases.de.
• Ronald C. Read and Robin J. Wilson, An atlas of graphs, Clarendon Press, New York, 1998.
• Bernd Sturmfels, Gröbner bases of toric varieties, Tōhoku Math. J. 43 (1991), 249-261.
• –––, Gröbner bases and convex polytopes, University Lecture Series 8, American Mathematical Society, Providence, RI, 1996.
• –––, Solving systems of polynomial equations, CBMS 97, American Mathematical Society, Providence, 2002.
• Seth Sullivant, Toric fiber products, J. Algebra 316 (2007), 560-577.
• Irena Swanson and Amelia Taylor, Minimal primes of ideals arising from conditional independence statements, J. Alg. 392 (2013), 299-314.
https://www.physicsforums.com/threads/calibrating-a-rain-gauge-problem-interpreting-question.868619/
# Calibrating a rain gauge (problem interpreting question) ## Homework Statement We wish to make a precipitation meter shaped like a paraboloid ##z = x^2 + y^2, 0 \leq z \leq 10##. Devise a scale on the z-axis that tells you the amount of precipitation in cm. In other words, at what height ##z = h## is the surface of water in the dish when there has been ##a## cm of rainfall? ## Homework Equations The rain meter exhibits cylindrical symmetry, so we might need to tackle the problem in cylindrical coordinates, where: \begin{cases} x = rcos(\theta)\\ y = rsin(\theta)\\ z = z\\ x^2 + y^2 = r^2\\ dV = r \ dz\ dr\ d\theta \end{cases} ## The Attempt at a Solution This is a case of me seeing what the question is saying, but not understanding the setting. Specifically, what does it mean there has been ##a## cm of rainfall? Am I supposed to compare the volume of the paraboloid to some standard rain gauge with a certain radius? The article on rain gauges on Wikipedia says that: "The standard NWS rain gauge, developed at the start of the 20th century, consists of a funnel emptying into a graduated cylinder, 2 cm in diameter, which fits inside a larger container which is 20 cm in diameter and 50 cm tall. If the rainwater overflows the graduated inner cylinder, the larger outer container will catch it. When measurements are taken, the height of the water in the small graduated cylinder is measured, and the excess overflow in the large container is carefully poured into another graduated cylinder and measured to give the total rainfall" If my guess is right, in the case of the standard cylinder that we will call ##c##, the volume ##V_c## of the water in ##c## is directly proportional to the height. Specifically V_c = \pi r^2 h = \pi h, since the radius ##r = 1cm##. I'm going to try and calculate the volume of the paraboloid dish ##V_p## as a function of R by integrating in cylindrical coordinates, set it equal to the volume gathered by ##c## and solve for ##h##. 
In general, if we ignore the fact that we know ##z_{max} = 10\ cm##, the paraboloid is limited by the cylindrical parameters as follows:
\begin{cases}
0 \leq r \leq R\\
0 \leq \theta \leq 2 \pi\\
r^2 \leq z \leq R^2 \text{ ||The surface of the water is at some height } h = R^2
\end{cases}
Therefore the volume of the paraboloid is
\begin{align*}
V_p &= \iiint_{T}dV\\
&= \int_{0}^{2\pi}\int_{0}^{R}\int_{r^2}^{R^2} r\ dz\ dr\ d\theta\\
&= \int_{0}^{2\pi}\int_{0}^{R} \left[ rz \right]_{r^2}^{R^2}\ dr\ d\theta\\
&= \int_{0}^{2\pi}\int_{0}^{R} \left( rR^2 - r^3 \right) dr\ d\theta\\
&= \int_{0}^{2\pi} \left[ \frac{1}{2}r^2 R^2 - \frac{1}{4}r^4 \right]_{0}^{R} d\theta\\
&= \int_{0}^{2\pi} \left( \frac{1}{2}R^4 - \frac{1}{4}R^4 \right) d\theta\\
&= \left[ \frac{1}{2}R^4\,\theta - \frac{1}{4}R^4\,\theta \right]_{0}^{2\pi}\\
&= \pi R^4 - \frac{1}{2}\pi R^4\\
&= \frac{1}{2}\pi R^4\ ||\ R^4 \implies cm^4 \text{, but ok...}
\end{align*}
Looking at this result, I immediately see that just setting ##V_c = V_p## is not going to help me solve for ##h(a)##. What exactly is ##a##, and how do I "access" it?

SammyS (Staff Emeritus, Homework Helper, Gold Member) replied:

Your guess is incorrect. The paragraph from Wikipedia does not pertain to this situation. What are the radius and area of the (upper) opening of this rain gauge?

Well, since the projection of the paraboloid onto the xy-plane is a circle with a radius of ##R = \sqrt{10}##, the area of the opening is ##A_R = \pi \sqrt{10}^2 = 10\pi##.
I don't immediately see how this will help me...

SammyS replied:

If you have ##a## cm of rain, what will be the volume of water in this rain gauge?

This is exactly what I'm not seeing. ##a## is not the actual height of the surface of the water in cm; it is the reading given by the meter. But how is it connected to the rest of the dimensions of the vessel? I get that the gauge scoops up water based on the size of its opening, so I can see how the area of the opening would be relevant, but... I think my problem is that I don't know what it means to have ##a## cm of rain. Is it surface area of the top of the gauge per height?

SammyS replied:

A rainfall of ##a## cm of rain will deposit a volume of ##10\pi a## cm³ of water in the rain gauge. What will be the height of this volume of water in this paraboloid rain gauge? The mark at this height should match ##a##. It won't literally be at height ##a##, but if ##a## is 1 cm, or 2 cm, or 3 cm, ... then the marks at the corresponding water levels should be labeled 1 cm, 2 cm, 3 cm, ... respectively.

If the volume of water filling the paraboloid to height ##h = R^2## is
$$V_p = \frac{1}{2}\pi R^4 = \frac{1}{2}\pi h^2$$
then
$$h^2 = \frac{2V_p}{\pi} = \frac{2 \cdot 10\pi a}{\pi} = 20a$$
and the height of the surface of the water is
$$h = \sqrt{20a}$$
And there we go. The answer sheet actually stated as much, but I thought that it was a typo (because ##\sqrt{\text{cm}}## is not a thing as far as I'm aware), so I typed the answer in the OP as ##\sqrt{20}\,a##. Thank you for the help.

SammyS replied:

Good. By the way, if you go back through the workings, that constant coefficient, 20, under the radical carries units of cm of its own, so that ##\sqrt{20a}## correctly comes out in cm.

Of course. I'm just so used to seeing the units included every step of the way (because of physics) that the answer really threw me off.
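The thread's result can be sanity-checked numerically. A quick sketch in Python (the rainfall value `a = 2.0` is just an example of mine, not from the thread):

```python
import math

def water_height(a: float) -> float:
    """Height (cm) of the water surface after `a` cm of rainfall.

    The opening of the dish z = x^2 + y^2, 0 <= z <= 10, has area 10*pi,
    so `a` cm of rain deposits a volume of 10*pi*a cm^3.  Water filling the
    paraboloid to height h has volume (pi/2) h^2, hence h = sqrt(20 a).
    """
    return math.sqrt(20.0 * a)

def paraboloid_volume(h: float) -> float:
    # Volume inside z = r^2 below height h: (pi/2) h^2, as derived in the thread.
    return 0.5 * math.pi * h * h

a = 2.0  # example rainfall in cm
h = water_height(a)
# The volume at that height must equal the collected rain volume, 10*pi*a:
assert math.isclose(paraboloid_volume(h), 10.0 * math.pi * a)
print(round(h, 3))  # sqrt(40) ≈ 6.325
```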
https://pthree.org/page/6/
## Creating Strong Passwords Without A Computer, Part 0 - Understanding Entropy

I've written a new series that investigates the art of creating very strong passwords without the aid of a computer. Sure, there are many software applications that will generate strong passwords for you, each with its own advantages and disadvantages; they're so plentiful that it would be impossible to outline them all. However, generating passwords without the aid of a computer can be a very challenging, yet very rewarding process. It may even be impractical or impossible for you to install software on the computer you're using at the moment when you need to generate a password.

## Introduction

Before we start diving into passwords, we need to define what a "strong password" is. I've defined this many times on this blog, and as long as I keep blogging about passwords, I'll continue to define it. A "strong password" is one that is defined by entropy: the more the entropy, the stronger the password. Further, a "strong password" is one that is defined by true randomness, which means it was not influenced by a human being during its creation. We're told that passwords must have the following characteristics when creating them:

• Passwords should contain both uppercase and lowercase letters.
• Passwords should contain special punctuation characters.
• Passwords should be unique for every account.
• Passwords should be easy to remember.
• Passwords should not be written down.
• Passwords should not be based on dictionary words.
• Passwords should not contain birthdates, anniversaries, or other personally identifiable information.

These rules can be difficult, because following all of them makes passwords hard to remember, especially if you have a unique password for every account. It's almost like creating a witches' brew, just so you can create the perfect password potion:

Double, double toil and trouble; Fire burn and cauldron bubble.
Fillet of a fenny snake, In the cauldron boil and bake; Eye of newt, and toe of frog, Wool of bat, and tongue of dog, Lizard's leg, and howlet's wing, For a charm of powerful trouble, Like a hell-broth boil and bubble.

Personally, I find it all a bit confusing, and even more annoying that security-conscious blogs endorse that garbage. I tend to keep things much more simple:

• A password must contain great amounts of entropy (we'll quantify this in a minute).
• A password must be truly random (no human interference).

So, what is entropy, and how do we quantify it? Let's begin.

## Defining Entropy

Entropy can be found in many fields of study. Abstractly, entropy is defined as:

1. a thermodynamic quantity representing the unavailability of a system's thermal energy for conversion into mechanical work, often interpreted as the degree of disorder or randomness in the system.
2. lack of order or predictability; gradual decline into disorder. "a marketplace where entropy reigns supreme".

So entropy is a measure of disorder, or unpredictability. For our needs, we'll use entropy from Claude Shannon's Information Theory, a branch of mathematics, which is defined mathematically as follows:

$H = L \cdot \log_2(N)$

where:

• H = entropy in binary bits.
• L = the number of symbols chosen for your message (length).
• N = the total number of unique possibilities each symbol could be.

Or, if you would like to enter this on your calculator:

$H = L \cdot \frac{\log(N)}{\log(2)}$

This can be proven quite simply. First off, let us define the logarithm. As with standard mathematics, we'll define it as the inverse of the exponential function. In other words:

$y = \log_b(x) \iff b^y = x$

Suppose we want to find the entropy size of a 13-character password taking advantage of all 94 possible printable symbols on the ASCII keyboard. Then, if all 94 printable symbols are a valid choice for each of the 13 characters in my passphrase, the total number of combinations that my password could be is:

$94^{13}$

Now, a property of logarithms is the ability to change base.
Because entropy is defined in bits, or base-2, I'll change the base of my logarithm, as follows:

H = log2(94^13) = 13 * (log(94) / log(2))

Rewriting the equation, we get the following result:

H = 13 * log2(94) ≈ 85.21 bits

And thus, we've arrived at our conclusion. This assumes that the message was chosen completely at random. However, if there is human intervention in creating the message, then the equation gets much more difficult. As such, for the point of this post, and most of my posts detailing entropy, we'll assume that the password or message was chosen completely at random.

## Some Entropy Equation Examples

If you create a message from lowercase letters only, then the number of unique possibilities for each symbol is 26, as there are only 26 letters in the English alphabet. So, each character provides only 4.7 bits of entropy:

H = L * log2(26) ≈ L * 4.70

If you create a message from lowercase letters, uppercase letters, and digits, then the number of unique possibilities for each symbol is 62: 26 unique lowercase letters, 26 unique uppercase letters, and 10 digits. So, each character would provide only 5.95 bits of entropy:

H = L * log2(62) ≈ L * 5.95

## A Needle In A Haystack

Knowing how much entropy is needed when creating a password can be tricky. Is 50 bits enough? Do I need 60 bits instead? 80 bits? 128? 256? More? In order to get a good firm grasp on quantifying entropy, I want to first create an analogy:

Entropy is to a haystack, as your password is to a needle.

To quantify this a bit: in a previous post, I demonstrated that a SHA1 hash has an output space of 61 bits due to cryptanalysis techniques, and it turns out that is far too small. At the time of this writing, the Bitcoin network is processing 2^61 SHA256 hashes every 76 seconds using specialized hardware called ASICs. That computing power is only going to grow, as there is financial gain to be made from mining. In other words, with these ASICs, 30 quadrillion pieces of hay can be analyzed every second.
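For the curious, Shannon's formula is trivial to evaluate in code. Here is a quick Python sketch I've put together (the function name is mine, not from any library) that reproduces the numbers above:

```python
import math

def entropy_bits(length, symbols):
    """Shannon entropy H = L * log2(N) of a truly random message
    of `length` symbols, each drawn from `symbols` possibilities."""
    return length * math.log2(symbols)

print(round(entropy_bits(1, 26), 2))   # 4.7  bits per lowercase letter
print(round(entropy_bits(1, 62), 2))   # 5.95 bits per alphanumeric character
print(round(entropy_bits(13, 94), 1))  # 85.2 bits for the 13-character password
```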
If your haystack has 2^61 pieces of hay, one of which is actually your needle, this haystack can be fully processed in 76 seconds flat.

## How Entropy Relates To Passwords

If we continue using the same Bitcoin network model for our speed benchmark, then at 30 quadrillion pieces of hay (passwords) every second, it would take the following times to completely exhaust the full output space for these haystack sizes:

• 64 bits: 10 minutes.
• 72 bits: 2 days.
• 80 bits: 15 months.
• 88 bits: 327 years.

In the past, I've shown that 72 bits of entropy (your haystack) seems to be sufficient for passwords (your needle), using the RSA 72-bit distributed computing project on distributed.net. After analyzing the Bitcoin network mining operations, and how trivial it is to build specialized ASICs for these tasks, I'm beginning to think that you should have at least 80 bits of entropy for your passwords. As computing gets stronger and stronger, that number will continue to increase. As a result, to create a "strong password", I would recommend the following:

• A password must contain at least 80 bits of entropy.
• A password must be truly random.

## Conclusion

Entropy is how we measure unpredictability. If we need any sort of unpredictability, such as finding passwords (needles in a haystack), then the more entropy we have, the better off we are. On computer systems, entropy is stored in what's called an "entropy pool". The larger the pool, the more reliably the operating system can generate true random numbers for security applications, such as GPG and long-term SSL keys. The same can be said for passwords. The more entropy we have, the better our passwords can become. So, don't starve yourself by selecting weak 8-10 character passwords and trying to generate random data in your head. Generate real random passwords, and use large amounts of entropy.

## The Reality of SHA1

Many people don't understand crypto. That's okay. I don't either.
But, I do get math, and I want to show you something SIGNIFICANT that affects your everyday habits online.

It's been demonstrated that MD5 is broken. It's now trivial to find what are called "collisions", where two completely different inputs hash to the same output. Here is a demonstration: http://www.mscs.dal.ca/~selinger/md5collision/

SHA1 was meant to be a replacement for MD5. MD5 has an output space of only 128 bits, whereas SHA1 has an output space of 160 bits. SHA1 is also designed differently than MD5, and is meant not to suffer the same sort of weaknesses or attacks that MD5 faces. However, over time, cryptographers have been able to severely attack SHA1, and as a result, they've all been warning us to get off SHA1 and move to SHA2.

It should take 2^160 operations to find a collision with SHA1; however, using the Birthday Paradox, we have a 50% probability of finding a SHA1 collision in about 2^80 operations. Cryptanalysts have since torn down SHA1 to a complexity of only 2^61 operations. Even better.

An output size of only 61 bits is small. For comparison's sake, the distributed computing project http://distributed.net cracked a 64-bit RSA key in just over 5 years, at a snail's pace of 102 billion keys per second: http://stats.distributed.net/projects.php?project_id=5. The motivation was a $10,000 award from RSA Laboratories to find the secret key encrypting a message. Granted, you don't need to search the entire 64-bit keyspace to find the key: it's just as likely you'll find the key immediately at the start of your search as at the end. But it shows how reachable 64 bits is.

The reduced search space of 61 bits from SHA1's attack vector is 8x smaller than the 64-bit search space of that RSA secret key challenge. So, at 102 billion hashes per second, it's reasonable to conclude that you could exhaust the 61-bit search space somewhere between 6 and 9 months.
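That estimate is easy to check with a couple of lines of Python. This is just a back-of-the-envelope sketch using the 102 billion keys per second distributed.net rate quoted above:

```python
# Exhausting a 61-bit search space at distributed.net's observed rate.
search_space = 2 ** 61            # SHA1's reduced collision complexity
rate = 102e9                      # keys per second
seconds = search_space / rate
months = seconds / 86400 / 30     # using rough 30-day months
print(round(months, 1))           # ~8.7, squarely between 6 and 9 months
```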
This is far too slow. Let's look at another comparison: Bitcoin.

First, Bitcoin uses SHA256 rather than SHA1 as part of its mining algorithm. According to http://blockchain.info/charts/hash-rate, the Bitcoin network is processing about 30 million gigahashes per second. That's 30 quadrillion hashes per second. A 61-bit space is a mere 2.3 quintillion hashes. Let's do some math:

2.3 quintillion hashes ÷ 30 quadrillion hashes per second = 76 seconds

What does this mean? The Bitcoin network is currently working over 2^61 SHA256 hashes every minute and 16 seconds. If this were SHA1, we could brute force about 1,150 SHA1 collisions every day.

Why should you care? Because when you connect to a server with SSH, SHA1 is likely presented as the hash. When you use your browser to connect to an HTTPS site using SSL, SHA1 is likely presented as the hash. When you encrypt something with OpenPGP, or cryptographically sign a key or document, SHA1 is likely presented as the hash. Most of your encrypted communication online is using SHA1 in one way or another. And yet, it's well within our reach to process 1,150 SHA1 collisions EVERY. SINGLE. DAY.

It's long overdue that we replace SHA1 with SHA2 or SHA3. Not because some theoretical proof says so, but because we can actually do it.

## SCALE 12x PGP Keysigning Party

This year, at SCALE 12x, I'll be hosting the PGP keysigning party. What is a keysigning party, and why should you attend? Maybe this will clear things up.

## What is a keysigning party?

A PGP keysigning party is an event where PGP users meet together to exchange identity information and PGP fingerprints. At a conference such as SCALE 12x, PGP parties are common. The whole idea of the party is to expand the global "Web of Trust". In reality, however, you attend a keysigning party because you share interests in cryptography, or you are interested in communicating privately with others.
Usually, you can expect 10-20 people to attend keysigning parties at Linux conferences; sometimes more, sometimes less.

## What is the Web of Trust?

The Web of Trust is just a logical mapping of exchanged identities. At the keysigning party, you will exchange photo identification with each other, as well as PGP fingerprints. You do this for two reasons, and two reasons only: to verify that you have the correct key, and to verify that the owner of that key is who they say they are. That's it.

When you leave the party, you will sign every key that you have personally identified. By doing so, you bring that key into your own personal Web of Trust. An arrow from you to them indicates that you have signed their key. If they return the favor, and sign your key, then a return arrow from them to you will result.

It's not too difficult to create a personal Web of Trust that you can view. I have blogged about it in the past at https://pthree.org/2013/03/01/create-your-own-graphical-web-of-trust-updated/. It's interesting to see the large groupings of signatures. It's clear that there was a PGP keysigning party in those large groups.

## What do I bring to a keysigning party?

You really only need three things with you when you come to the party:

1. Something to write with, like a pen or pencil.
2. A government-issued photo identification at a minimum. Additional identification is appreciated.
3. Your own printout of your PGP key fingerprint.

That last item is important. Very important. If you didn't bring item #3, then most PGP keysigning party organizers will not allow you to participate. In order to print out your PGP key fingerprint, run the following command at your terminal:

    $ gpg -K --fingerprint

Print that out on a piece of paper, and bring it with you to the party. Some conferences, such as SCALE 12x, will print your PGP fingerprint on your conference badge. This will allow you to sign keys anywhere, anytime.
All you need to do is verify that the fingerprint on your personal printout matches the fingerprint on the conference badge. Then you may use your conference badge fingerprint at the party. It's important that you bring your own copy of your PGP key fingerprint, however. The keysigning party organizer will hand out a printout of all the PGP key fingerprints for everyone in attendance. This is to verify that the organizer downloaded your correct and current key(s). You will read off your fingerprint from your personal printout, and everyone else will verify the fingerprint on their printout.

## What happens before the party?

All that needs to be done is every attendant must submit their public PGP key to the party organizer. Typically, there is a deadline by which keys can be submitted. It's important that you adhere to that deadline. The party organizer then prints out on paper a list of every PGP key fingerprint of those who are attending. If you submit your key late, it will not be on the paper, and as such, many party organizers will not let you participate.

## What happens at the party?

The keysigning party organizer will pass out sheets of paper with every PGP key fingerprint that has been submitted. Typically, party organizers will also explain the importance of the party, why cryptography matters, and other things related to crypto. Then, the organizer will explain how the party will proceed, at which point every attendee will read their PGP key fingerprint from their own printout. Everyone else in attendance will verify that the organizer has the correct key by following along on the handout. This is done for everyone in attendance.

After fingerprints have been verified, we then get into two equal lines, facing each other. While facing the person in the line opposite of you, you introduce yourself, explain where your key is on the handout, and exchange government-issued identification. After identification has been exchanged, everyone in both lines takes a half step to their right.
This will put you in front of a new person to repeat the process. Those at the ends turn around, facing the opposite direction, and continue shuffling to their right. Think of the whole process as one large conveyor belt. Once you are facing the person you started with, everyone should have successfully verified everyone else's identity and key. At this point, the party is typically over.

## What happens after the party?

This is the most critical step of the whole process, and for some reason, not everyone does it. Now that you have your handout with all the keys printed on it, you need to sign every key that you have personally identified. What you attended was a KEYSIGNING party. This means that you must SIGN KEYS. I know I'm putting a lot of emphasis on this, but I have personally attended close to a dozen PGP keysigning parties, and I would say the rate of signing keys is about 70%; unless I annoy people week in and week out to sign my key, then I'll get a return closer to 90%. It blows my mind that people spend a great amount of time at the PGP keysigning party, then don't actually do anything about it.

There are a lot of utilities out there for keysigning party events that make attempts at making the whole process easier. In all reality, the only "difficult" or troublesome part is converting the fingerprints you have on paper to your computer. Some PGP keysigning organizers will already have a party public keyring, containing only the keys of those who attended, which they will email to the attendees. If that's the case, you have it easy. Otherwise, you must do something like the following:

    $ gpg --recv-keys 0x<KEYID>

Where "<KEYID>" is the last 16 characters of the person's fingerprint. After you have their public key, then you can sign it with the following command for each key:

    $ gpg --default-cert-level 3 --sign-key 0x<KEYID>

It's important that you add the "--default-cert-level 3" as part of the signing process.
This certification level says that you have done very careful checking, and you are 100% confident that the key in question belongs to the owner, and that you have personally verified the owner's identity. After you have signed the key, it is courtesy to email them their public key with your signature. As such, you will need to export their key from your public keyring. You should do this for each key:

    $ gpg --armor --output /tmp/0x<KEYID>.asc --export 0x<KEYID>

Attach "/tmp/0x<KEYID>.asc" to your encrypted email, and send it off.

## Additional Information

• Should I bring a computer to the keysigning party? No. It's not necessary, and many party organizers consider it a security liability. Things like swapping around USB sticks could spread viruses or other badware. If secret keys are on the computer, it's possible they could be compromised. Even worse, the physical computer itself could get damaged. It's generally best to just leave the computer in your backpack or hotel room, and attend the party without it.

• Why should I care about signing PGP keys? Have you ever stopped to think about certificate authorities? When you create an SSL certificate signing request, you ship the CSR off to the CA, along with $200, and they return a signed key. You then install your key on your web server, and browsers automatically trust data encrypted with it. All because you paid someone money. PGP keys do not have a centralized authority. As such, signing keys is done in an ad hoc manner. Further, money is not exchanged when signing keys. Instead, signing keys is done in person, with face-to-face contact. When you sign a key, you are ultimately saying that you have verified the owner is who they claim to be, and that the key you just signed belongs to them. As a result, a decentralized Web of Trust is built.

• What is the Web of Trust really? The PGP Web of Trust is a decentralized web of connected keys, where the connections are made by cryptographically signing keys.
The larger and more connected the Web of Trust is, the stronger the trust becomes for people to send private data to each other within that web. It sounds all geeky and mathematical, but if you just sit back and think about it, it makes sense. No money is involved, as with CAs. No blind trust is involved, such as the behavior of browsers. It's you and me, meeting face-to-face, claiming we are who we say we are, and claiming we own the keys we have. Now, after this meeting, I can rest assured that if you send me cryptographically signed data from your key, I know it came from you, and no one else. If you send me encrypted data, I have a copy of your public key, and can decrypt it knowing that you were the one encrypting it. The Web of Trust is just that: establishing trust.

• What is the PGP Strong Set? The largest and most connected Web of Trust is called the PGP Strong Set. There are more keys in this Web of Trust than in any other. The way you get into the PGP Strong Set is by having someone in the Strong Set sign your key. A great deal of analysis has been done on the Strong Set. You can read about that analysis at http://pgp.cs.uu.nl/plot/. You can get key statistics such as mean signature distance (MSD), and calculate the distance from one key in the Strong Set to another key, such as yours, that may not be in the Strong Set. My key, 0x8086060F, is in the Strong Set. If ever I am at a keysigning party and I sign your key, your key will also be included in the Strong Set.

## Announcing d-note: A Self Destructing Notes Application

I'm pleased to announce something I've been working on, on and off, for over a year. Introducing d-note, a self-hosted web application with self-destructing notes. d-note is written in Python using the Flask web framework.

d-note comes from the idea that sending private information across the Internet can be very insecure.
Ask yourself: how often have you sent usernames and passwords in chat or email, either across the global Internet, or even inside of your work's intranet? Maybe you've sent PINs, SSNs, credit cards, or other sensitive information too. Or maybe you haven't, but someone you know has. d-note aims to solve this problem.

With d-note, notes are compressed and encrypted with Blowfish on disk, using either a shared key stored on the server, or a private key which is not stored. When a note is created, the sender is given a random URL which will link to the note. That URL can only be viewed once, and when viewed, the note is immediately destroyed on the server. If you try to visit the note again, because it doesn't exist, a standard 404 error will be raised.

Here are the current features in this release:

• Data is compressed with zlib, then encrypted with Blowfish. Never at any point is the note in plaintext on the filesystem.
• Form submission is protected by the browser minting a Hashcash token. This will prevent form spam and POST denial-of-service attacks. As such, JavaScript must be enabled.
• The note can be encrypted with a separate private key. This private key must be securely communicated to the recipient, so they may decrypt the note. The private key is not stored on the server.
• The note can also be protected with a duress key, in case someone is trying to coerce the decryption key out of you. The duress key will immediately and silently destroy the note, without decrypting it, and redirect the browser to a standard 404 error.
• Notes are destroyed immediately upon viewing. They are destroyed by securely overwriting the note with random data before removing it from the underlying filesystem.
• Notes can be shared with mobile devices, by scanning a QR code. This allows you to share the note via SMS, email, instant message, or some other application installed on your mobile device.
• Unread notes are automatically and securely destroyed after 30 days.
• d-note tries its best at preventing the browser from caching the session. With that said, the back button is still functional.

Because the application uses Hashcash to protect the submission form, JavaScript must be enabled to post a note. Don't worry, there is no identifying or tracking software in the default source code. Further, it may take your browser a few seconds to mint a valid token before the form is submitted (it may even take longer if using a mobile device to create the note).

Due to the nature of this application, some best practices and precautions should be taken:

• The web application MUST be served over HTTPS.
• The server administrator hosting the d-note application could have modified the source code to store private keys, or even the notes in plaintext. As such, don't put all of your eggs in one basket. Give the usernames over one channel, and use d-note for the passwords, or vice versa. Even better, host this on your own web server, where you have full control.
• Storing the encrypted notes in a ramdisk would be the most secure. However, if the web server stops, unread notes will be lost.

There are still some things that need to be done, such as improving the overall look and feel with much-needed CSS, and making the site a bit more dynamic with JavaScript. I'm also debating creating account support, so you can view which notes you've created, and which ones have not been read. In the long term, I'd like to create an API, so you can create notes from other sources, such as the command line or mobile device apps. However, for the time being, it works, it's feature-complete (for a 0.1 release), and it's mostly bug-free.

If you would like to try a demo of d-note, you can visit https://ae7.st/d/. It's currently using a self-signed certificate.
As such, to verify you have the right server, the certificate fingerprints are:

    MD5 Fingerprint=5A:E1:4E:B6:31:B8:3D:69:B1:D6:C0:A7:6B:46:FE:67
    SHA1 Fingerprint=55:89:CD:C0:D4:85:CC:A5:DE:30:11:5D:9C:C9:12:1C:5C:9D:10:C5
    SHA256 Fingerprint=12:91:BB:4C:E8:2F:1C:0C:D9:96:AF:4E:1D:8C:F7:B0:A8:07:70:C5:9C:89:B8:94:EE:E2:2A:D6:19:43:17:A4

## The Drunken Bishop Cipher Is Weak

Well, it turns out that my own hand cipher is incredibly weak. When I initially started designing it, using a chessboard felt a lot like an S-box lookup. There has been a great deal of research into S-boxes since the release of DES, and many ciphers today use them. What plagued me from day one, and I should have listened to my own intuition, is that my chessboard key remains static throughout the entire operation, unlike the Solitaire Cipher, by Bruce Schneier, where the cards in the deck are dynamically changing all the time. To get an idea of S-boxes, check out this AES animation (flash), which I think describes AES very well.

With ciphers, you need an ingredient of non-linearity in the system. Otherwise, your cipher can fall victim to linear cryptanalysis. Initially, I had thought, incorrectly I might add, that by using the ciphertext as the starting direction of the bishop's walk, I was introducing the non-linearity that I needed. Turns out this isn't the case.

Let's use my board from my initial announcement, and the trivial example of "AAAAAAAAAAAAAAA" as my plaintext. Here's my board:

As per the algorithm, the bishop starts in "a1", which is valued at "38". Converted to binary, this gives us "100110", which means his travel is "SW" (no move), "NE", then finally "SW". He's back in the corner from which he started. Okay. No problem. Let's continue with the cipher then. Let's set up our worksheet.
The character "A" has the value of "0", so my plaintext is:

      0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
    +38
    __ __ __ __ __ __ __ __ __ __ __ __ __ __ __
     38

Now we take our ciphertext, 38, and use this as the start of our next bishop walk. As you can immediately see, we have a problem. He stays stuck in the corner, and we get the following result:

      0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
    +38 38 38 38 38 38 38 38 38 38 38 38 38 38 38
    __ __ __ __ __ __ __ __ __ __ __ __ __ __ __
     38 38 38 38 38 38 38 38 38 38 38 38 38 38 38

Our ciphertext is thus "mmmmm mmmmm mmmmm". While this may not seem like a practical example, it draws up a very real problem: this cipher is subject to a chosen-plaintext attack. And, despite my best efforts to ensure that there are no repeated rounds in the cipher, here's a trivial example that shows they still exist. As such, the cipher is terribly linear. The BIG PROBLEM with this cipher, and one that was hanging over my head the entire time I was designing it, is that the board key remains static.

However, all is not over. Ultimately, the board is just an 8x8 matrix. As such, we should be able to do some matrix operations, such as multiplicative inverses, exclusive OR, addition, subtraction, and so forth. This might be a major problem when writing the board down on a piece of paper, but it is trivial for a calculator, such as the HP-48, to do. But then we lose the allure of using a pure hand cipher, as we begin relying on computing tools to manage the internal state of the system. At this point, why not just use a typical computer, and use a cipher that has been tried and true, such as AES?

I must admit that I was sloppy when designing the cipher. I didn't take my abstract algebra and linear algebra into account. I should have looked at the math when initially designing it.
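For what it's worth, the stuck-in-the-corner walk is easy to reproduce in code. Here is a small Python sketch of the three-step movement (the names are mine); min/max clamping to the board reproduces the edge and corner rules, since a blocked diagonal slides along the wall, or stays put in a corner:

```python
# Two-bit direction fields: 00=NW, 01=NE, 10=SW, 11=SE.
MOVES = {0b00: (-1, 1), 0b01: (1, 1), 0b10: (-1, -1), 0b11: (1, -1)}

def walk(col, row, value):
    """Move the bishop three diagonal steps from (col, row), both 1..8,
    using the three two-bit fields of a 6-bit square value."""
    for shift in (4, 2, 0):
        dc, dr = MOVES[(value >> shift) & 0b11]
        # clamping to the board implements the edge and corner rules
        col = min(8, max(1, col + dc))
        row = min(8, max(1, row + dr))
    return col, row

# Square value 38 = 0b100110: SW (no move), NE, SW -- back to a1.
print(walk(1, 1, 38))  # (1, 1)
```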
Peter Maxwell, on the Cryptography Discussion mailing list, pointed out the following:

> If you view the moving-the-bishop as an s-box lookup, and apply it to itself three times (composition), you end up with another s-box of the same size, let's call it S. Given S doesn't change, things should be rather easy indeed. If your cipher is then roughly akin to C[n] = P[n] + S[ C[n-1] ] with all operations taken modulo 2^6 the problem should now be a little more obvious.

Indeed. Basically, the cipher is weak due to two inherent problems with the system:

1. The internal state of the system is static.
2. The cipher does not contain a non-linear component.

I really want to use a chessboard for a cipher. I think it would be a fantastic tool that you could basically hide in plain sight. But, given the initial design of this cipher, and how weak it really is, it just doesn't work. The drunken bishop may make pretty ASCII art pictures for SSH server keys, but when it comes to cryptography, it's had just too much wine to be practical. I'll hang this one up as a learning exercise, and move on. Designing hand ciphers is much more difficult than I had initially thought.

## Background

Ever since learning Bruce Schneier's Solitaire Cipher, I was interested in creating a hand cipher of my own. Unfortunately, I'm just an amateur cryptographer, and a lousy one at that, so I didn't have any confidence in creating my own hand cipher. However, after learning about the SSH ASCII art, and the drunken bishop, I was confident that I could turn this into a hand cipher. So, that's exactly what I did.

Even though this is technically a hand cipher, and I've done my best to address some of the shortcomings internal to the system, I am not a genius mathematician. So, I can look at some of the numbers specifically, and give a broad sense of what the shortcomings might be, but I have not invested in a full-scale cryptanalysis of the cipher.
If someone stumbles upon this post, and is interested in launching such an attack, I would be interested in your feedback. This post is lengthy, so I'll separate everything with <h2> headers for easier visibility (a standard with my blog posts lately, it seems).

## Cipher Design Overview

All that is needed is a standard 8x8 checker or chess board, and a marker, such as the bishop chess piece, or a checker, to make its way around the board. Each square on the board will be assigned a unique value from 0 through 63. The board should be assigned randomly, as this choice of number assignments is your key for encrypting and decrypting messages. However, at the end of this post, I'll give a few ideas on how you can key the board reliably without the need to communicate a random board.

This cipher is a stream cipher. As with all stream ciphers, the output of step n depends entirely on the accuracy of step n-1. If step n-1 is incorrect, then the rest of the process will be incorrect, encrypting the plaintext from that point forward will be incorrect, and decryption will not be possible. Thankfully, the board number assignments are static, so it shouldn't be difficult to double-check your work, unlike the Solitaire Cipher, which requires keeping a backup copy of your keyed deck.

Because there are 64 squares on the board, this allows us to use a base64 system for encryption and decryption. As such, we can use uppercase letters, lowercase letters, and digits. This provides 62 of the characters. So, we can throw in white space, and padding at the end of the message, giving us our full 64 characters.

One drawback with most hand ciphers is the lack of support for numbers in the message. Either you have to spell out the numbers, lengthening the message, and as a result the time to encrypt and decrypt it, or you need to truncate them out, possibly creating confusion for the person decrypting the message.
By having both uppercase and lowercase letter support, we can now differentiate between proper names and ordinary words.

## The Board

First, you must arrange the chess board such that square "a1" is in the lower left corner, as would be standard in a tournament chess match. This square should be black. Instead of referring to this corner as "a1", we'll refer to it as the "southwest corner", or just "SW". The other three corners on the board will be identified analogously: the lower right corner, or "h1", will be identified as the "southeast corner", or just "SE". The upper left corner, or "a8", will be identified as the "northwest corner", or "NW". Finally, the upper right corner, or "h8", will be identified as the "northeast corner", or just "NE".

Now that we've identified the corners, we need to identify the edges of the board that are not a corner. The bottom of the board we'll identify as edge "B", and the top of the board as edge "T". The left edge of the board will be identified as edge "L", and the right of the board as edge "R". Every additional square that is not identified as an edge or a corner will be identified as a middle, or "M". After making these identifications, our board should have the following layout:

## The Bishop's Movement

As in standard chess, the bishop may only move diagonally across the board. In standard chess, if the bishop is on a black square, then he will remain on black squares throughout game play. Our bishop is drunk, unfortunately. So, when our bishop encounters an edge; specifically, "B", "T", "L", or "R"; it's possible our bishop might switch square color from black to white, or from white to black. Any other time, our bishop is always moving diagonally as he would normally. So, we need to accommodate for when our bishop hits a wall or a corner, and still wishes to move.

Let's look at the bottom edge first. Suppose our bishop traveled to square "e1", which has the value of "44" in our key.
If the bishop wishes to move diagonally NE or NW, that move is unrestricted. However, if the bishop wishes to move SW from square "e1", then it steps onto the white square "d1". If the bishop wishes to move SE from square "e1", then it steps onto the white square "f1". Similar rules hold for the other three edges. In summary, then:

• If at "B", and wishes to move SW, then the new square is (n-1,1).
• If at "B", and wishes to move SE, then the new square is (n+1,1).
• If at "T", and wishes to move NW, then the new square is (n-1,8).
• If at "T", and wishes to move NE, then the new square is (n+1,8).
• If at "L", and wishes to move NW, then the new square is (a,n+1).
• If at "L", and wishes to move SW, then the new square is (a,n-1).
• If at "R", and wishes to move NE, then the new square is (h,n+1).
• If at "R", and wishes to move SE, then the new square is (h,n-1).

If any additional movement is needed from an edge, then this means the bishop wishes to move away from the edge towards the middle of the board, and as such, it would do so in a standard diagonal manner, staying on its same color.

Now that we've handled the four edges of the board, we need to handle the four corners. The movement is analogous to the edges, except for one move:

• If at "SW", and wishes to move SW, no movement is made.
• If at "SE", and wishes to move SE, no movement is made.
• If at "NW", and wishes to move NW, no movement is made.
• If at "NE", and wishes to move NE, no movement is made.

If in a corner, and any other movement needs to be made, then use the previous edge rules. Knowing these rules, we can now describe where our drunk bishop moves when he lands on any square on the board. Now, we just need to generate a random board, which will determine the bishop's movement. When generating a random board, all 64 numbers from 0 through 63 must be assigned to a square. Each chessboard key is one of 64 factorial, a keyspace about the same size as a 296-bit symmetric key.
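That keyspace claim is straightforward to verify with a quick Python check:

```python
import math

# A random board is one of 64! possible assignments; measure that in bits.
keyspace_bits = math.log2(math.factorial(64))
print(round(keyspace_bits))  # 296
```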
Below is one such board that could be generated.

## Generating the stream

Because the board is now a static key, and doesn't change like the cards in the Solitaire Cipher, there is the possibility that the bishop could land on the same square at the end of the algorithm that he did at the start of the algorithm. From that point forward, the same number would be generated in our stream, and our message would fall victim to a frequency analysis attack. Even if this doesn't happen, it is still possible that the bishop could land on a square at the end of his drunken walk that we've landed on before. This means our bishop would be caught in an infinite loop, generating the same sequence of numbers. To accommodate both of these shortcomings, rather than use the number he is on for the start of his next walk, we will add the plaintext number and the stream number to produce an output number. This output number will determine the beginning of his next walk.

Each character in the plaintext will be given a value according to section 3 of RFC 3548. The only adjustments are that whitespace will be replaced with the forward slash "/", the message will be padded modulo 5, as is standard with hand ciphers, and the full stop period will be replaced with the plus character "+" at the end. All other punctuation and special characters must be stripped from the plaintext, as our base64 system cannot handle them. So, our plaintext message "Attack at dawn." would be converted to "Attack/at/dawn+" before we begin encrypting.

## The Four Movements

The bishop always starts on the SW square, or "a1", at the beginning of each message. In order to know which way to travel, each square on the board describes three movements. This is done by converting the square's decimal number into binary, zero-padded up to 6 bits wide. As such, the decimal number "0" will be the binary number "000000", the decimal number "38" will be the binary number "100110", and so forth.
We'll call our 6-bit binary number a "word" for this cipher, even though in computer science a "word" usually refers to something larger (8 bits is a byte). We'll take our binary word, and divide it into 3 sections: the first two bits, the middle two bits, and the last two bits. So, for the case of "38", the binary word divided up would be "10" for the first two bits, "01" for the middle two bits, and "10" for the last two bits. There are four combinations that each of these bit pairs could be: "00", "01", "10", or "11". Because the bishop can move diagonally in one of four directions, each of these bit pairs describes the direction for each of his 3 moves. We will describe our movements as follows:

- 00 - NW
- 01 - NE
- 10 - SW
- 11 - SE

So, for the number "38", where our bishop starts in the random key that we chose earlier, the bishop would move "10" or SW, which would mean no movement, "01" or NE to square "b2", and finally "10" or SW, back to "a1". So, already we see that we have an inherent problem with the system, in that starting with "38" prevents the bishop from moving around the board. However, we'll add this to our plaintext number, and use our output number as the direction for the next walk. So, we won't be stuck for long.

## The Drunken Bishop Algorithm

The algorithm is defined with the following steps:

1. Convert the previous output number to binary, and move the bishop three spaces, as described above, based on this output number.
2. Convert the number of the square the bishop landed on to binary, and again move the bishop three spaces based on this number.
3. Convert the number of the square the bishop landed on to binary, and again move the bishop three spaces based on this number.
4. The bishop should have made a total of 9 movements. Note the number the bishop has landed on. This is your stream number.
5. Add the stream number to the plaintext number modulo 64. This is your output number.
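The three movements encoded in a square's value can be decoded with a short sketch (the helper name is mine, not from the post):

```python
# A short sketch of the decoding above: a square's 0-63 value becomes a
# zero-padded 6-bit word, and each bit pair selects one of the four
# diagonal directions.

DIRECTIONS = {"00": "NW", "01": "NE", "10": "SW", "11": "SE"}

def directions_for(number):
    """Decode a 0-63 value into the bishop's three moves."""
    word = format(number, "06b")              # zero-padded 6-bit binary
    return [DIRECTIONS[word[i:i + 2]] for i in (0, 2, 4)]

print(directions_for(38))   # → ['SW', 'NE', 'SW'], as in the "a1" example
print(directions_for(60))   # → ['SE', 'SE', 'NW']
```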
Repeat the algorithm as necessary until you have generated all the output numbers needed to encrypt your message. To decrypt the message, follow the same algorithm above, but instead of addition modulo 64, use subtraction modulo 64 to get back to the plaintext.

## Encryption Example

For simplicity's sake, we will use the chessboard key generated above, and we will encrypt "We confirm the delivery of 4 packages.". First, let's pad it modulo 5. We end up with "We confirm the delivery of 4 packages.++". Now convert all full stop periods to "+" and all spaces to "/". We now have:

    We/confirm/the/delivery/of/4/packages+++

Now we must convert each character to its decimal equivalent as found in RFC 3548, section 3. Thus, we end up with the numbers "22 30 63 28 40 39 31 34 43 38 63 45 33 30 63 29 30 37 34 47 30 43 50 63 40 31 63 56 63 41 26 28 36 26 32 30 44 62 62 62".

Before the bishop starts his drunken walk around the board, let's set up a workspace, so it will be easy to do our modulo 64 addition. The top line will contain our plaintext numbers, and the second line will contain our stream numbers. These two lines will be added modulo 64 to produce our output numbers. I will be zero-padding the decimal numbers as necessary:

      22 30 63 28 40 39 31 34 43 38 63 45 33 30 63 29 30 37 34 47 30 43 50 63 40 31 63 56 63 41 26 28 36 26 32 30 44 62 62 62
    + __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __

Now our bishop is ready to take his random walk across the board. Let's use our key above, so you can follow along. Our bishop always starts at square SW, or "a1". This has a value of "38", which is "100110" in binary. So, the bishop moves "SW", "NE", "SW", placing him back on the same square. We do this two more times, and our bishop has not moved.
So, we write the number down as our stream number, and add it to 22 modulo 64:

      22 30 63 28 40 39 31 34 43 38 63 45 33 30 63 29 30 37 34 47 30 43 50 63 40 31 63 56 63 41 26 28 36 26 32 30 44 62 62 62
    + 38 __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __
      60

Our output number is 60, so we convert this to binary, and get "111100". So, the bishop moves "SE", "SE", "NW", placing him on square "b2" with a value of "35". We now convert 35 to binary, and get "100011". So, the bishop now moves "SW", "NW", "SE", placing him on square "a2" with a value of "04". We now convert 04 to binary, and get "000100". So, our bishop makes the final move of "NW", "NE", "NW", placing him on square "d1" with a value of "26". We write down 26 in our worksheet, and add it to 30 modulo 64, to get our output number of "56":

      22 30 63 28 40 39 31 34 43 38 63 45 33 30 63 29 30 37 34 47 30 43 50 63 40 31 63 56 63 41 26 28 36 26 32 30 44 62 62 62
    + 38 26 __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __
      60 56

Now we convert 56 to binary, and get "111000", and work our way through the algorithm, getting our third output number.
We continue in like fashion until our worksheet is complete (note that each sum is taken modulo 64):

      22 30 63 28 40 39 31 34 43 38 63 45 33 30 63 29 30 37 34 47 30 43 50 63 40 31 63 56 63 41 26 28 36 26 32 30 44 62 62 62   <--- plaintext
    + 38 26 04 26 09 31 14 13 59 41 59 41 46 14 13 14 13 35 04 38 31 14 13 59 41 13 35 38 50 39 06 48 16 14 27 31 14 59 56 59   <--- final bishop square
      60 56 03 54 49 06 45 47 38 15 58 22 15 44 12 43 43 08 38 21 61 57 63 58 17 44 34 30 49 16 32 12 52 40 59 61 58 57 54 57   <--- output number

Which in turn gives us the ciphertext:

    84D2x GtvmP 6WPsM rrImV 95/6R siexQ gM0o7 96525

## Decryption Example

To decrypt our ciphertext from above, we must first convert the characters back to their RFC 3548 values. We'll refer to these numbers as our "input numbers". Our bishop will start in the SW corner, or square "a1", as he did for our encryption example. Also, as with encryption, we'll convert the "a1" number to binary for the first walk. After that, we'll use the ciphertext input numbers to determine his path. Lastly, we need to subtract modulo 64, rather than add, to reverse our work. So, as we did with our encryption example, let's set up our workspace:

      60 56 03 54 49 06 45 47 38 15 58 22 15 44 12 43 43 08 38 21 61 57 63 58 17 44 34 30 49 16 32 12 52 40 59 61 58 57 54 57   <--- ciphertext number
    - 38 __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __   <--- final bishop square
      22 __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __ __   <--- plaintext

Now convert our first ciphertext number, "60", to binary, and have the bishop do his walk, 3 times, just like you would for encryption. You'll find that he ends up on the square with value "26". Subtract this value from our 2nd ciphertext number to get back to our plaintext value of "30". Continue in a like manner, converting the ciphertext number to binary, starting the walk, and doing two more binary conversions, to land on the right square.
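The whole procedure can be sketched end to end. This is my reading of the algorithm, not the author's promised Python utility, and the key board below is randomly generated, so its output will not match the worked example (which used the specific key board pictured in the post):

```python
# A self-contained sketch of the whole cipher as described above, operating
# on RFC 3548 numbers (0-63).

import random
import string

# RFC 3548 section 3 alphabet: A-Z = 0-25, a-z = 26-51, 0-9 = 52-61, + = 62, / = 63
ALPHABET = string.ascii_uppercase + string.ascii_lowercase + string.digits + "+/"
STEPS = {"00": (-1, 1), "01": (1, 1), "10": (-1, -1), "11": (1, -1)}  # NW NE SW SE

def prepare(plaintext):
    """Substitute, strip, and pad a message per the plaintext rules above."""
    text = plaintext.replace(" ", "/").replace(".", "+")
    text = "".join(c for c in text if c in ALPHABET)
    while len(text) % 5:                     # pad modulo 5 with "+"
        text += "+"
    return [ALPHABET.index(c) for c in text]

def make_board(seed=None):
    """Assign 0-63 to the squares; board[(file, rank)], with "a1" = (0, 0)."""
    values = list(range(64))
    random.Random(seed).shuffle(values)
    return {(f, r): values[r * 8 + f] for r in range(8) for f in range(8)}

def walk(square, number):
    """One walk: three diagonal steps from a 6-bit word, clamped at the edges."""
    word = format(number, "06b")
    for i in (0, 2, 4):
        df, dr = STEPS[word[i:i + 2]]
        square = (min(max(square[0] + df, 0), 7), min(max(square[1] + dr, 0), 7))
    return square

def crypt(board, numbers, decrypting=False):
    """Encrypt or decrypt a list of 0-63 numbers with the given key board."""
    square = (0, 0)                          # the bishop always starts at "a1"
    driver = board[square]                   # the first walk uses a1's own value
    out = []
    for n in numbers:
        square = walk(square, driver)            # walk 1: driven by previous output
        square = walk(square, board[square])     # walk 2
        square = walk(square, board[square])     # walk 3: 9 movements in total
        stream = board[square]
        if decrypting:
            out.append((n - stream) % 64)
            driver = n                       # ciphertext numbers drive the walks
        else:
            driver = (n + stream) % 64
            out.append(driver)
    return out

board = make_board(seed=1)
message = prepare("Attack at dawn.")         # values for "Attack/at/dawn+"
cipher = crypt(board, message)
assert crypt(board, cipher, decrypting=True) == message
print("".join(ALPHABET[n] for n in cipher))
```

The round-trip assert at the bottom is the useful check: whatever board is used, subtracting the same keystream modulo 64 must recover the plaintext.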
Subtract that number modulo 64, and you'll uncover your plaintext.

## Observations

You may have noticed that you are always using the ciphertext number, whether encrypting or decrypting, to start the bishop's initial walk. After that, the bishop makes two more walks around the board, based on the number he landed on. Because we are using the ciphertext number for the initial walk, we need to lengthen his path, for a few reasons:

1. When decrypting, the additional walks place the bishop elsewhere on the board, preventing any knowledge of the key.
2. When both encrypting and decrypting, the 3 walks make it possible for the bishop to reach any number on the board, regardless of his location. This prevents our cipher from being focused on a set of squares based on his initial location.
3. By using the ciphertext to start the initial walk, we prevent the possibility of the bishop getting stuck walking in a circle; i.e., his ending location being the same as his starting location. A simple example is the number "38" in the SW corner, which would normally prevent him from making any movement on the board.

Setting up an 8x8 key grid might be difficult, though certainly not much more difficult than keying a 52-card deck randomly. There may be creative ways to key the board, such as using previously played chess games, using passphrases, or some other method. I haven't had time to think about those yet. If you have good ideas, I'm very interested. At the moment, the best way to key the board is to use a computer to randomly create a board assignment, and securely communicate the random assignment to your recipient.

This cipher is a stream cipher, as already mentioned. As such, it is absolutely critical that you get every movement of the bishop right, and that your mathematics is exact. If you make a mistake, and continue encrypting the message after the mistake, the ciphertext may not be decipherable to plaintext. Double check your movements.
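Keying by computer, as suggested above, can be sketched in a few lines (the function names are mine); `random.SystemRandom` draws from the operating system's cryptographically secure source:

```python
# A sketch of computer keying: a uniformly random assignment of the
# numbers 0-63 to the 64 squares, drawn from the OS CSPRNG.

import random

def generate_key_board():
    rng = random.SystemRandom()     # backed by os.urandom, suitable for keys
    values = list(range(64))
    rng.shuffle(values)             # one of 64! boards, roughly 296 bits of key
    return values                   # indexed as values[rank * 8 + file]

def print_board(values):
    """Print the key with rank 8 on top and "a1" in the lower left."""
    for rank in range(7, -1, -1):
        print(" ".join(f"{values[rank * 8 + f]:02d}" for f in range(8)))

print_board(generate_key_board())
```

The printed grid can then be copied onto paper and securely communicated to your recipient.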
Further, as with all symmetric ciphers, DO NOT USE THE SAME KEY TWICE. If the same key is used for two different messages, producing ciphertexts C1 and C2, it's simple mathematics to strip the key away. Think of it like this:

    (A+K) = C1
    (B+K) = C2
    C1-C2 = (A+K)-(B+K) = A+K-B-K = A-B+K-K = A-B

In other words, using the same key reveals the difference of the two plaintext messages, from which the messages themselves can be recovered. This might not be trivial for you to pull off, but it is for a serious cryptographer. Don't share the key across messages. Just generate a new key every time you wish to encrypt.

Lastly, I have found this cipher a bit faster for encrypting and decrypting messages than the Solitaire cipher, but I make no guarantees as to its strength. However, this is real cryptography, not steganography or obscurity. This cipher is a pseudorandom number generator that you apply to your plaintext to produce a random-looking output. I still have work to do, such as frequency analysis, and discovering if any bias exists, but upon initial inspection, it seems to hold up well. As with all cryptography, however, only time will tell.

## Drunken Bishop Cipher Recommendations

1. Although a chess board can be used, it's not required. If you can draw an 8x8 grid on a piece of paper, populated with the random key, then you are ready to start encrypting and decrypting.
2. Never share a key when encrypting messages. Always use a different key.
3. Use a number 2 pencil, and not ink, when doing your work. Ink bleeds, and can leave traces of your message or work.
4. Use ungummed cigarette paper when actually doing your work. They burn completely, but slowly.
5. Do as much of the work in your head as possible, and do not write on impressionable surfaces, such as a pad of paper.
6. Work with a candle or a lighter. If someone breaks in while you are encrypting or decrypting messages, calmly burn the papers containing your work.
7. Assume that Big Brother is aware that you are using the Drunken Bishop to encrypt and decrypt your messages.
The secret lies in the 8x8 grid key, not in the algorithm. Of course, this doesn't mean you need to advertise that you are using the Drunken Bishop, either. The ciphertext should appear as random as possible, with no obvious clues as to how it was created.
8. Practice makes perfect. After practicing the Drunken Bishop, you should be able to immediately convert a number from 0 through 63 to binary without any external tools. This will speed things up.
9. Which reminds me, the Drunken Bishop is slow, although not as slow as Solitaire. It will probably take you about 30 seconds per character. Keep that in mind; you may need a quiet, secure place and several hours. However, you shouldn't get cramped hands while working the Drunken Bishop, like you do with Solitaire.

## Disclaimer

I am not a professional cryptographer. I study cryptography as a hobby. As such, I do not make any guarantees about the security of this cipher. However, I have studied RC4, as well as stream ciphers in general, and have a thorough understanding of the "big picture" as to what the internals of the cipher should be doing. The Drunken Bishop does its best to follow these practices. I graduated with a degree in Applied Mathematics, and have taken and studied number theory as applied to cryptography.

I have not done any cryptanalysis on this cipher yet. My next goal is to write a Python program that can read a generated 8x8 key, read a text file, and encrypt and decrypt it. Only then will I be able to get more insight into the output that the Drunken Bishop provides, such as biases or other internal problems that I have not addressed. If you go ahead and write a utility to do this testing, and find some interesting results, I would be interested in your feedback, and will publish your results here on this post.

## Cryptanalysis

It turns out this cipher is incredibly weak. It suffers from two problems that make it fall victim to linear cryptanalysis and a chosen plaintext attack: 1.
The chessboard (an S-box) remains static during the algorithm.
2. There is no non-linear component to the system.

It turns out that designing hand ciphers is incredibly difficult. I wrote a follow-up post describing how weak this one really is.

## ZFS Administration, Appendix D- The True Cost Of Deduplication

0. Install ZFS on Debian GNU/Linux
1. VDEVs
2. RAIDZ
3. The ZFS Intent Log (ZIL)
4. The Adjustable Replacement Cache (ARC)
5. Exporting and Importing Storage Pools
6. Scrub and Resilver
7. Getting and Setting Properties
8. Best Practices and Caveats
9. Copy-on-write
10. Creating Filesystems
11. Compression and Deduplication
12. Snapshots and Clones
13. Sending and Receiving Filesystems
14. ZVOLs
15. iSCSI, NFS and Samba
16. Getting and Setting Properties
17. Best Practices and Caveats
A. Visualizing The ZFS Intent Log (ZIL)
B. Using USB Drives
C. Why You Should Use ECC RAM
D. The True Cost Of Deduplication

This post gets filed under the "budget and planning" part of systems administration. When planning out your ZFS storage pool, you will need to make decisions about space efficiency, and the cost required to build out that architecture. We've heard over and over that ZFS block deduplication is expensive, and I've even mentioned it on this blog, but how expensive is it really? What are we looking at out of pocket? That's what this post is about. We'll look at it from two perspectives: enterprise hardware and commodity hardware. We should be able to make some decent conclusions after looking into it.

We're only going to address storage, not total cost, which would include interconnects, board, CPU, etc. Those costs are so variable that they would make this post rather complicated. So, let's stick with the basics. We're going to define enterprise hardware as 15k SAS drives and SLC SSDs, and commodity hardware as 7200 RPM SATA drives and MLC SSDs. In both cases, we'll stick with high quality ECC DDR3 RAM modules. We'll use a base ZFS pool of 10TB.
So, without further ado, let's begin.

## Determining Disk Needs

Before we go off purchasing hardware, we'll need to know what we're looking at for deduplication, and whether it's a good fit for our data needs. This can be a hard puzzle to solve without actually storing all the data in the pool, and seeing where you end up. However, here are a few ideas for coming to that solution (the "three S tests"):

1. Sample Test: Get a good representative sample of your data. You don't need a lot. Maybe 1/5 of the full data. Just something that represents what will actually be stored. This will be the most accurate test, provided you get a good sample (the more, the better). Store that sample on a deduplicated pool, and see where you end up with your dedup ratio.
2. Simulation Test: This will be less accurate than the sample test above, but it can still give a good idea of what you'll be looking at. Run the "zdb -S" command, and see where the cards fall. This will take some time, and may stress your pool, so run this command off hours, if you must do it on a production pool. It won't actually deduplicate your data, just simulate it.
Here is an actual simulation histogram from one of my personal ZFS production servers:

    # zdb -S
    Simulated DDT histogram:

    bucket              allocated                       referenced
    ______   ______________________________   ______________________________
    refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
    ------   ------   -----   -----   -----   ------   -----   -----   -----
         1    5.23M    629G    484G    486G    5.23M    629G    484G    486G
         2     860K   97.4G   86.3G   86.6G    1.85M    215G    190G    190G
         4    47.6K   4.18G   3.03G   3.05G     227K   19.7G   14.2G   14.3G
         8    11.8K    931M    496M    504M     109K   8.49G   4.25G   4.33G
        16    3.89K    306M   64.3M   68.3M    81.8K   6.64G   1.29G   1.37G
        32    5.85K    499M    116M    122M     238K   17.9G   4.64G   4.86G
        64    1.28K   43.7M   20.0M   21.0M     115K   3.74G   1.69G   1.79G
       128    2.60K   50.2M   20.0M   22.0M     501K   9.22G   3.62G   3.99G
       256      526   6.61M   3.18M   3.62M     163K   1.94G    946M   1.06G
       512      265   3.25M   2.02M   2.19M     203K   2.67G   1.72G   1.86G
        1K      134   1.41M    628K    720K     185K   2.13G    912M   1.02G
        2K       75   1.16M    188K    244K     222K   3.37G    550M    716M
        4K       51    127K   85.5K    125K     254K    657M    450M    650M
        8K        2       1K      1K   2.46K    26.7K   13.3M   13.3M   32.8M
       16K        1      512     512   1.94K    31.3K   15.6M   15.6M   60.7M
     Total    6.15M     732G    574G    576G    9.38M    920G    708G    712G

    dedup = 1.24, compress = 1.30, copies = 1.01, dedup * compress / copies = 1.60

3. Supposed Test: Basically, just guess. It's by far the least accurate of our tests, but you might understand your data better than you think. For example, is this 10TB server going to be a Debian or RPM package repository? If so, the data is likely highly duplicated, and you could probably get close to 3:1 savings, or better. Maybe this server will store a lot of virtual machine images, in which case the base operating system will be greatly duplicated. Again, your ratios could be very high as a result. The point is, you know what you are planning on storing, and what to expect.

Now you'll have a deduplication ratio number. In my case, it's 1.24:1. This number will help us "oversubscribe" our storage.
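As a sanity check of my own (not from the post), the summary line follows directly from the "Total" row: dedup compares referenced to allocated physical size, compress compares logical to physical, and copies accounts for ditto-block overhead. Since zdb's totals are rounded, the recomputed figures only match to within about a percent:

```python
# Recompute the zdb -S summary line from the "Total" row above
# (allocated PSIZE 574G; referenced LSIZE 920G, PSIZE 708G, DSIZE 712G).

allocated_psize = 574
referenced_psize = 708
referenced_lsize = 920
referenced_dsize = 712

dedup = referenced_psize / allocated_psize      # how often stored blocks are reused
compress = referenced_lsize / referenced_psize  # logical vs physical size
copies = referenced_dsize / referenced_psize    # ditto-block overhead

print(f"dedup = {dedup:.2f}, compress = {compress:.2f}, copies = {copies:.2f}")
print(f"dedup * compress / copies = {dedup * compress / copies:.2f}")
```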
In order to determine how much disk to purchase, our equation should be:

    Savings = Need - (Need / Ratio)

With my ratio of 1.24:1, from a pool running about a dozen virtual machines, rather than purchasing the full 10TB of disk, we really only need to purchase 8TB of disk. This is a realistic expectation. So, I can save purchasing 2TB worth of storage for this setup. The question then becomes whether or not those savings are worth it.

## Determining RAM Needs

Ok, now that we know how much disk to purchase, we need to determine how much RAM to purchase. Already, we know that the deduplication table (DDT) will occupy no more than 25% of the ARC, by default. This is adjustable with the kernel module, but we'll stick with the default for this post. So, we just need to determine how large that 25% is, so we can understand exactly how much RAM will be needed to safely keep the DDT in the ARC without spilling over to spinning platter disk. In order to get a handle on this metric, we have two options:

1. Counting Blocks: With the "zdb -b" command, you can count the number of currently used blocks in your pool. As with the "zdb -S" command, this will stress your pool, but it will give you the most accurate picture of what to expect with a deduplication table. Below is an actual count of blocks on one of my production servers:

        # zdb -b pool

        Traversing all blocks to verify nothing leaked ...

        No leaks (block sum matches space maps exactly)

                bp count:        11975124
                bp logical:   1023913523200  avg:  85503
                bp physical:   765382441472  avg:  63914  compression:   1.34
                bp allocated:  780946764288  avg:  65214  compression:   1.31
                bp deduped:               0  ref>1:    0  deduplication: 1.00
                SPA allocated: 780946764288  used: 39.19%

    In this case, I have 11975124 used blocks, and my 2TB pool is 39.19% full, or 784GB. Thus, each block is about 65KB in size. You might see something different. According to Oracle, each deduplicated block will occupy about 320 bytes in RAM.
Thus, 2TB divided by 65KB blocks gives a total storage of about 30,700,000 blocks. 30,700,000 blocks multiplied by 320 bytes is 9,824,000,000 bytes, or 9.8GB of RAM for the DDT. Because the DDT is no more than 25% of the ARC, and the ARC is typically 25% of RAM, I need at least 156.8GB, or basically 160GB, of installed RAM to prevent the DDT from spilling to spinning platter disk.

2. Rule of Thumb: This is the "rule of thumb" that you've read in this series, and elsewhere on the Internet. The rule is to assign 5GB of RAM for every 1TB of disk. This ratio comes from the fact that a deduplicated block seems to occupy about 320 bytes of storage in RAM, and your blocks could occupy anywhere between 512 bytes and 128KB, usually averaging about 64KB in size. So, the ratio sits around 1:208, which is where we come up with the "5GB RAM per 1TB disk" metric. So, with a 10TB pool, we can expect to need 50GB of RAM for the DDT, or 200GB of RAM for the ARC.

In both cases, these RAM installations might just be physically or cost prohibitive. In my servers, the motherboards do not allow for more than 32GB of physically installed RAM modules, so even the 40GB of ARC that my own 2TB pool would call for isn't doable. As such, is deduplication out of the question? Not necessarily. If you have a fast SSD, something capable of 100k IOPS, or roughly the equivalent of your RAM install, then you can let the DDT spill out of RAM onto the L2ARC, and performance will not be impacted. A 256GB SSD is much more practical than 200GB of physical RAM modules, both in terms of physical limitations and cost.

## Enterprise Hardware

### Without SSD

15k SAS drives don't come cheap. Currently, the Seagate Cheetah drives go for about $1 per 3GB, or about $330 per 1TB. So, for the 8TB we would purchase, we would be spending about $2600 on disk. We already determined that we need about 200GB of space for the ARC.
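The sizing arithmetic from both methods can be sketched as follows. The 320 bytes per DDT entry and the "DDT is at most 25% of the ARC, ARC is about 25% of RAM" defaults are from the post; the function names are mine:

```python
# A sketch of the DDT sizing arithmetic above.

GB = 1000 ** 3
TB = 1000 ** 4

def ddt_size(pool_bytes, avg_block_bytes, entry_bytes=320):
    """Estimated DDT size once the pool is filled with average-sized blocks."""
    return (pool_bytes / avg_block_bytes) * entry_bytes

def ram_needed(ddt_bytes):
    """RAM so the DDT fits: DDT is 1/4 of the ARC, the ARC is 1/4 of RAM."""
    return ddt_bytes * 4 * 4

# Counting blocks: my 2TB pool averages 65214 bytes per block.
ddt = ddt_size(2 * TB, 65214)
print(round(ddt / GB, 1), round(ram_needed(ddt) / GB, 1))   # ~9.8 GB DDT, ~157 GB RAM

# Rule of thumb: about 5GB of DDT per 1TB of disk (64KB average blocks),
# so a 10TB pool needs roughly 50GB for the DDT, or 200GB for the ARC.
print(round(ddt_size(10 * TB, 64 * 1024) / GB, 1))
```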
If we need to fit everything in RAM, and our motherboard will support the install size, then ECC registered RAM goes for about $320 per 16GB (how convenient). I'll need at least 14 sticks of 16GB RAM modules. This would put my RAM cost at about $4480. Thus, my total bill for storage alone would be $7080. I'm only saving $670 by not purchasing 2 disks to save on deduplication.

### With SSD

Rather than purchasing 14 16GB memory modules, we could easily purchase an enterprise 256GB fast SLC SSD for about $500. A 256GB SSD is attractive because, as an L2ARC, it will be storing more than just the DDT, but other cached pages from disk as well. The SSD could also be partitioned to store the ZIL, acting as a SLOG. So, we'll only need maybe 16GB of installed RAM (2x8GB modules for dual channel), which would put our RAM cost at $320, our SSD cost at $500, and our drive cost at $2600, or $3420 for the total setup. This is half the initial price of using only ECC RAM to fit the DDT. That's significant, IMO. Again, I only saved $670 by not purchasing 2 disks.

## Commodity Hardware

### Without SSD

7200 RPM SATA drives come cheap these days. ZFS was designed with commodity disk in mind, knowing it's full of failures and silent data corruption. I can purchase a single 2TB disk for $80 right now, brand new. Four of those put my total disk cost at $320. However, the ECC RAM doesn't change, and if I needed 14 of the 16GB sticks as with my enterprise setup, then I can count on a total cost for this commodity setup of $4800. But, a RAM install of that size does not make sense for a "commodity" setup, so let's reduce the RAM footprint, and add an SSD.

### With SSD

A fast commodity SSD puts us in MLC SSD territory. The 256GB Samsung 840 Pro is going for $180 right now, and can sustain 100k IOPS, which could possibly rival your DDR3 RAM. So, again sticking with 4 2TB drives at $320, 16GB of RAM at $320, and our Samsung SSD at $180, our total cost for this setup is $820, only saving $80 by not purchasing an additional 2TB SATA drive.
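Pulling the four builds together, using the post's own subtotals (this summary table is mine, not a table from the post):

```python
# The four builds priced above: $2600 of 15k SAS disk or $320 of 7200 RPM
# SATA disk (8TB either way), $320 per 16GB ECC stick (14 sticks for the
# RAM-only builds), a $500 SLC SSD, and a $180 MLC SSD.

builds = {
    "enterprise, RAM only": 2600 + 14 * 320,        # $7080
    "enterprise, with SSD": 2600 + 320 + 500,       # $3420
    "commodity, RAM only":  320 + 14 * 320,         # $4800
    "commodity, with SSD":  320 + 320 + 180,        # $820
}

for name, cost in builds.items():
    print(f"{name}: ${cost}")
```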
This is by far the most cost effective solution.

## Additional Hidden Costs & SSD Performance Considerations

When we made these plans for purchasing RAM, we were only considering the cost of storing the ARC and the DDT. We were not considering that your operating system will still need room outside of the ARC to operate. Most ZFS administrators I know won't give more than 25% of RAM to the ARC on memory intensive setups, and no more than 50% on less memory intensive setups. So, for our 200GB ARC requirement, that may mean as much as 400GB of RAM, or even 800GB. I have yet to administer a server with that sort of RAM install. So, SSDs all of a sudden become MUCH more attractive.

If you decide to go the route of an SSD for an L2ARC, you need to make sure that it performs on par with your installed RAM, otherwise you'll see a performance hit when doing lookups in your DDT. It's expected for DDR3 RAM to sustain 100k to 150k sequential read/write IOPS. Getting an SSD to perform similarly means getting the high end SATA connected SSDs, or the low end PCIe connected ones, such as the OCZ RevoDrive.

However, suppose you don't purchase an SSD that performs equally with your DDR3 modules. Suppose your DDR3 sustains 100k IOPS, but your SSD only does 20k IOPS. That's 5x as slow as DDR3 (spinning 7200 RPM disk only sustains about 100 IOPS). With as frequently as ZFS will be doing DDT lookups, this is a SIGNIFICANT performance hit. So, it's critical that your L2ARC can match the same bandwidth as your RAM.

Further, there's a hidden cost with SSDs, and that's reliability. Typical enterprise SLC SSDs can endure about 10k write cycles, with wear leveling, before the chips begin to wear down. However, commodity, more "consumer grade" SSDs will only sustain about 3k-5k write cycles. Don't fool yourself, though. For our 256GB SSD, this means you can write 256GB 10,000 times, or 2.56PB worth of data, on an SLC SSD, or 256GB 3,000 times, or 768TB, on an MLC SSD.
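The endurance arithmetic above, as a quick sketch (total bytes written before wear-out, assuming perfect wear leveling):

```python
# Total writable data before the flash wears out: capacity times the number
# of program/erase cycles the cells can endure.

def endurance_tb(capacity_gb, write_cycles):
    """Total writable data, in TB, for a drive of the given capacity."""
    return capacity_gb * write_cycles / 1000

print(endurance_tb(256, 10_000))   # SLC: 2560.0 TB, i.e. about 2.56PB
print(endurance_tb(256, 3_000))    # MLC: 768.0 TB
```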
That's a lot of writing, assuming again that the SSDs have wear leveling algorithms on board. But the SSD may fail early, which would mean the DDT spilling to disk, completely killing the performance of the pool. By putting a portion of the DDT on the SSD, the L2ARC becomes much more write intensive, as ZFS expands the DDT for new deduplicated blocks. Without deduplication, the L2ARC is less write intensive, and should be very read intensive. So, by not using deduplication, you can lengthen the life of the SSD.

## Conclusion

So, when you approach the CFO with your hardware quote, a dedup ratio of 1.24:1 is going to be a hard sell. With commodity hardware using an SSD, you're spending about $10.25 for every $1 saved in disk, and with enterprise hardware, you're spending much, much more in absolute terms for a somewhat better ratio ($5.10 spent for every $1 saved). But with commodity hardware, you're spending 1/4 of the enterprise equivalent. In my opinion, it's still too costly. However, if you can get your ratio close to 2:1, or better, then it may be a good fit. You really need to know your data, and you really need to be able to demonstrate that you will get solid ratios. A storage server of virtual machines might be a good fit, or anywhere you have redundant data that is nearly exactly the same. For general purpose storage, especially with a ratio of 1.24:1, it doesn't seem to be worth it.

However, you're not out of luck if you wish to save disk. For good space savings that you get nearly for free, I strongly encourage compression. Compression doesn't tax the CPU, even for heavy workloads, provides similar space savings (in my example above, I am getting a 1.3:1 compression ratio versus a 1.24:1 dedup ratio), doesn't require an expensive DDT, and actually provides enhanced performance.
The extra performance comes from the fact that highly compressible data does not require as much physical data written to the slow spinning platters, and likewise does not require as much physical disk read. Your spinning disk is the biggest bottleneck in your infrastructure, so anything you can do to optimize the reads and writes can provide large gains. Compression wins here. Hopefully, this post helps you analyze your deduplication plans, and identify the necessary costs.

## ZFS Administration, Appendix C- Why You Should Use ECC RAM

## Introduction

With the proliferation of ZFS into FreeBSD, Linux, FreeNAS, Illumos, and many other operating systems, and with the introduction of OpenZFS to unify all the projects under one collective whole, more and more people are beginning to tinker with ZFS in many different situations. Some install it on their main production servers, others install it on large back-end storage arrays, and yet others install it on their workstations or laptops. As ZFS grows in popularity, you'll see more and more ZFS installations on commodity hardware, rather than enterprise hardware. As such, you'll see more and more installations of ZFS on hardware that does not support ECC RAM. The question I pose here: is this a bad idea?
If you spend some time searching the Web, you'll find post after post on why you should choose ECC RAM for your ZFS install, with great arguments, and for good reason too. In this post, I wish to reiterate those points, and make the case for ECC RAM. Your chain is only as strong as your weakest link, and if that link is non-ECC RAM, you lose everything the ZFS developers have worked so hard to achieve in keeping your data free from corruption.

## Good RAM vs Bad RAM vs ECC RAM

To begin, let's make a clear distinction between "Good RAM" and "Bad RAM", and how both compare to "ECC RAM":

- Good RAM- High quality RAM modules with a low failure rate.
- Bad RAM- Low quality RAM modules with a high failure rate.
- ECC RAM- RAM modules with error correcting capabilities.

"Bad RAM" isn't necessarily non-ECC RAM. I've deployed bad ECC RAM in the past, where even though the modules are error correcting, they fail frequently, and need to be replaced. Further, ECC RAM isn't necessarily "Good RAM". I've deployed non-ECC RAM that has been in production for years, and I have yet to see a corrupted file due to the lack of error correction in the hardware. The point is, you can have exceptional non-ECC "Good RAM" that will never fail you, and you can have horrid ECC "Bad RAM" that still creates data corruption.

What you need to realize is the rate of failure. An ECC RAM module can fail just as frequently as a non-ECC module of the same build quality. Hopefully, the failure rate is such that ECC can fix the errors it detects, and still function without data corruption. But just to beat a dead horse, ECC support and hardware failure rates are independent concerns. Just because it's ECC RAM does not mean that the hardware fails less frequently. All it means is that it detects the failures, and attempts to correct them.

## ECC RAM

Failure rates are hard to get a handle on.
If you read the Wikipedia article on ECC RAM, it mentions a couple of studies that attempt to get a handle on how often bit errors occur in DIMM modules:

Work published between 2007 and 2009 showed widely varying error rates with over 7 orders of magnitude difference, ranging from 10^(−10) to 10^(−17) error/bit-hours, roughly one bit error, per hour, per gigabyte of memory to one bit error, per millennium, per gigabyte of memory. A very large-scale study based on Google's very large number of servers was presented at the SIGMETRICS/Performance'09 conference. The actual error rate found was several orders of magnitude higher than previous small-scale or laboratory studies, with 25,000 to 70,000 errors per billion device hours per megabit (about 2.5–7 × 10^(−11) error/bit-hours), i.e. about 5 single bit errors in 8 Gigabytes of RAM per hour using the top-end error rate, and more than 8% of DIMM memory modules affected by errors per year.

So roughly, from what Google was seeing in their datacenters: 5 bit errors in 8 GB of RAM per hour, with more than 8% of their installed DIMMs affected by errors each year. If you don't think this is significant, you're fooling yourself. Most of these bit errors are caused by background radiation affecting the installed DIMMs, due to neutrons from cosmic rays. But voltage fluctuations, bad circuitry, and plain poor build quality can also contribute to "bit flips" in your RAM. ECC RAM works by storing extra check bits alongside your data. A typical ECC DIMM carries 8 check bits for every 64 data bits, which is why a module registering itself as 64 GB to the system physically carries 72 GB of chips. Using these check bits, ECC RAM can correct a single flipped bit per 64-bit word, and detect, but not correct, two flipped bits. If more bits flip than that, ECC RAM will not be able to recover the data.
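A quick back-of-the-envelope check of the arithmetic in that quote (a sketch only; the 8 GB size and the top-end 7 × 10^(−11) error/bit-hour rate are the figures from the quote above):

```python
# Sanity-check the quoted figure: ~5 single-bit errors per hour in 8 GB
# of RAM, at the top-end rate of 7e-11 errors per bit-hour.
bits = 8 * 2**30 * 8        # 8 GiB of RAM expressed in bits
rate = 7e-11                # errors per bit-hour (top end of the quoted range)

errors_per_hour = bits * rate
print(round(errors_per_hour, 1))  # -> 4.8, roughly the "5 errors per hour" quoted
```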
ZFS was designed to detect silent data errors that happen due to hardware faults and other factors. ZFS checksums your data from top to bottom to ensure that you do not have data corruption. If you've read this series from the beginning, you'll know how ZFS is architected, and how data integrity is its first priority. People use ZFS because they cannot stand data corruption anywhere in their filesystem, at any time. However, if your RAM is not ECC RAM, then you do not have the guarantee that your file is not corrupt when stored to disk. If the file was corrupted in RAM, due to a frozen bit, then when it is stored to ZFS, it will be checksummed with this bad bit, as ZFS assumes the data it is receiving is good. As such, you'll have corrupted data in your ZFS dataset, its checksum will match the corruption, and there will be no way to fix the error internally.

## A scenario

To drive the point home further about ECC RAM in ZFS, let's create a scenario. Let's suppose that you are not using ECC RAM. Maybe this is installed on your workstation or laptop, because you like the ZFS userspace tools, and you like the idea behind ZFS. So, you want to use it locally. However, let's assume that you have non-ECC "Bad RAM" as defined above. For whatever reason, you have a "frozen bit" in one of your modules. The DIMM is only storing a "0" or a "1" in a specific location. Let's say it always reports a "0" due to the hardware failure, no matter what should be written there. To keep things simple, we'll look at 8 bits, or 1 byte, in our example. The bad bit is the fifth bit from the left. Your application wishes to write "11001011", but due to your Bad RAM, you end up with "11000011". As a result, "11000011" is sent to ZFS to be stored. ZFS adds a checksum to "11000011" and stores it in the pool. You have data corruption, and ZFS doesn't know any different. ZFS assumes that the data coming out of RAM is intentional, so parity and checksums are calculated based on that result.
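The write path above can be sketched in a few lines. This is a toy model, not ZFS code: SHA-256 stands in for the pool checksum (one of the checksums ZFS offers, though not its default), and the frozen-bit position is the hypothetical one from the example:

```python
import hashlib

def ram_read(value):
    """Toy model of the Bad RAM above: the fifth bit from the left is frozen at 0."""
    return value & 0b11110111

intended = 0b11001011           # what the application wrote
in_ram = ram_read(intended)     # 0b11000011 -- corrupted before ZFS ever sees it

# ZFS checksums whatever RAM hands it, so the checksum vouches
# for the already-corrupted byte.
stored = bytes([in_ram])
checksum = hashlib.sha256(stored).digest()

assert hashlib.sha256(stored).digest() == checksum  # verification passes...
assert in_ram != intended                           # ...yet the data is wrong
```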
But what happens when you read the data off disk and store it back in your faulty non-ECC RAM? Things get ugly at this point. You read "11000011" back into RAM. However, it's stored in almost, but not quite, the same position as before it was sent to disk. Assume it is stored only 4 bits later. Then, you get back "01000011". Not only was your file corrupt on disk, but you've made things worse by storing it back into RAM where the faulty hardware is. But ZFS is designed to correct this, right? So, ZFS can fix the byte back to "11000011", the value its checksum vouches for, but the problem is that this data was already corrupted before it ever reached disk! Things go downhill from here. Because this is a physical hardware failure, we actually can't set that first bit to "1". Any attempt at doing so will immediately revert it back to "0". So, while the data is stored in our faulty non-ECC RAM, the byte will remain "01000011". Now, when we're ready to flush the data in RAM to disk, we compound our errors by storing "01000011" on the platter. ZFS calculates a new checksum based on the newly corrupted data, again assuming our DIMM modules are telling us the truth, and we've further corrupted our data. As you can see, the more we read and write data to and from non-ECC RAM, the greater our chance of corrupting data on the filesystem. ZFS was designed to protect us against this, but our chain is only as strong as its weakest link, which in this case is the non-ECC RAM corrupting our data. No matter how you slice and dice it, you trusted your non-ECC RAM, and your non-ECC RAM failed you, with no recourse to fall back on.

## ECC Pricing

Due to the extra hardware on the DIMM, ECC RAM is certainly more costly than its non-ECC counterpart, but not by much. In fact, because ECC DIMMs carry 9/8 the hardware, the price pretty closely reflects that: in my experience, 64 GB of ECC RAM costs roughly 9/8 as much as 64 GB of non-ECC RAM.
Many general-purpose motherboards will also support unbuffered ECC RAM, although you should choose a motherboard that supports active ECC scrubbing, to keep bit corruption minimized. You can get high quality ECC DDR3 SDRAM off of Newegg for about $50 per 4 GB. Non-ECC DDR3 SDRAM retails for almost exactly the same price. To me, it just seems obvious. All you need is a motherboard supporting it, and Supermicro motherboards supporting ECC RAM can also be relatively inexpensive. I know this is subjective, but I recently built a two-node KVM hypervisor shared-storage cluster with 32 GB of registered ECC RAM in each box, on Tyan motherboards. The RAM was certainly more costly than anything else in the systems, but I was able to get it at ~$150 per 16 GB, or $600 for all 64 GB. The boards were ~$250 each, or $500 total for the two. So, in total, for two very beefy servers, I spent ~$1100, excluding CPU, disk, etc. To me, this is a small investment to ensure data integrity, and I would not have saved much going the non-ECC route. The very small extra investment was very much worth it, to make sure I have data integrity from top to bottom.

## Conclusion

ZFS was built from the ground up with parity, mirroring, checksums and other mechanisms to protect your data. If a checksum fails, ZFS can make attempts at loading good data based on redundancy in the pool, and fix the corrupted bit. But ZFS is assuming that a correct checksum means the bits were correct before the checksum was applied. This is where ECC RAM is so critical. ECC RAM greatly reduces the risk that your bits are incorrect before they get stored in the pool.

• ZFS checksums assume the data is correct from RAM.
• Regular ZFS scrubs will greatly reduce the risk of corrupted bits, but can be your worst enemy with non-ECC RAM hardware failures.
• Backups are only as good as the data they store. If the backup is corrupted, it's not a backup.
• ZFS parity data, checksums, and physical data all need to match. When they don't, repairs start taking place. If the data is corrupted out of the gate, due to non-ECC RAM, why are you using ZFS again?

Thanks to "cyberjock" on the FreeBSD forums for inspiring this post.

## Hello Morse Code

As many of you may know, I am a licensed Amateur Radio operator in the United States. Recently, I've taken up a desire to learn Morse Code at a full 25 WPM using the Koch method. I only started last week, and tonight I copied my first beginners code "A NOTE TO TENNESSEE", and other such silliness. I don't know how many of my readers are hams, or how many of them know their CW. Some equipment that I'm practicing with:

• Morse Code Trainer- An Android application for both sending (tapping the screen) and receiving. It's flexible in that you can choose what to listen to, your speed, as well as your tone frequency and volume. Currently, I'm using it largely just to receive.
• MFJ-557 code oscillator with key. This is very much a beginner's straight key, but it comes with its own speaker, so you can hear how you sound when you transmit.
• Morse Code Reader- Another Android application, this time using the MIC input to listen to outside noise, and translate it to letters. I've found it to be somewhat unreliable, even in a quiet room, with only code to be heard. With that said, during these beginning stages, it still seems to be more reliable at picking up and translating code than I am. So, it's good to look back, and see what I got wrong, and where I need improvement.

I'm hoping by the end of the year, I can copy code at a full 25 WPM with 90% or better accuracy. I'm not actually planning to work on transmitting and spacing until next year.

## Goodbye Ubuntu

In 1999, I discovered GNU/Linux. Before then, I was a Solaris fanboy.
Solaris could do no wrong, and it took until about 2003 before I finally took the plunge, removed Solaris from my Sun Ultra 1 (complete with 21" CRT monitor), and put Debian GNU/Linux on it. It was either Debian or Gentoo that had SPARC support, and compiling software from source didn't sound like a lot of fun. I also had an HP laptop. It ran SUSE, Red Hat Linux, and various other distros, until it too settled on Debian. Then, in October of 2004, while at a local LUG meeting, I learned of this Debian fork called "Ubuntu". I gave it a try. I switched from using Debian to Ubuntu on my laptop. I liked the prospects of using something that had more frequent stable releases. After which, I helped set up the Ubuntu Utah users group. We had install fests, meetings, and other activities. Our group grew fast and strong. Then, I helped to start the Ubuntu US Teams project, building local state and regional groups as strong as the Utah group. Eventually, I applied for Ubuntu membership, and in 2006, I got my membership, syndicated my blog to the Ubuntu Planet, and I have been here ever since. Sometime around 2008, things started changing in the Ubuntu culture, and it was becoming difficult to enjoy working on it. I'm not going to list everything that Canonical has done to Ubuntu, but it's been steady: not committing patches upstream to Linux mainline; breaking ties with the Debian project, including rolling their own packages; group development moving to centralized development; copyright assignments; switching from GNOME to Unity; then Unity lenses and Amazon advertising. Over and over, things kept changing, and as a result, the culture changed. I stopped really loving Ubuntu. Eventually, I went back to Debian for my servers, laptops and workstations. Ubuntu isn't Unix anymore. It's Apple, and I'm not sure I like the change. Now, Micah Lee, who works for the EFF, has put up a "sucks site" showing how to disable the privacy violations in Unity.
Rather than take it in stride, Canonical has decided to abuse trademark law, and issue a cease and desist notice to the Fix Ubuntu site. United States courts have shown over and over that "sucks sites" are free speech and fair use, and do not infringe on the company's mark. In fact, nowhere on the Fix Ubuntu site is the actual Ubuntu trademark. No logo, no marks, nothing. Just text. Yet, Canonical wants to silence their critics with a heavy hand. To be fair, their notice is less grumpy and bullying than most cease and desist notices. However, that doesn't change the principle. I can't be associated with a project like this any longer. Effective immediately, my blog will no longer be syndicated to the Ubuntu Planet. My Ubuntu membership will be cancelled. My "UBUNTU" license plates, which have been on my car since August 2006, will be removed, in favor of my Amateur Radio callsign. I wish everyone in the Ubuntu community the best. I also hope you have the power to change Ubuntu back to what it used to be. I have no ill feelings towards any person in the Ubuntu community. I just wish to distance myself from Ubuntu, and no longer be associated with the project. Canonical's goals and visions do not align with something I think should be a Unix. Don't worry though- I'll keep blogging. You can't get that out of my blood. Ubuntu just isn't for me any longer. Goodbye, Ubuntu.

## Real Life NTP

I've been spending a good amount of my spare time recently configuring NTP, reading the documentation, setting up both a stratum 1 and stratum 2 NTP server, and in general, just playing around with NTP. This post is meant to be a set of notes on what I've learned in the process, and hopefully, it can benefit you. It's not meant to be an exhaustive, or authoritative, set of instructions on how you should configure your own NTP installation.

### Strata

Before getting into the client configuration, we need to understand how NTP serves time to clients.
We need to understand the concept of "strata" or "stratum". Authoritative time sources, such as GPS satellites, cesium atomic fountains, WWVB radio waves, and so forth, are referred to as "stratum 0" clocks. They are authoritative, because they have some way of maintaining extremely accurate timekeeping. Any time source could serve, including a standard quartz oscillating clock. However, knowing that quartz-based clocks can gain or lose up to 15 seconds per month, we don't generally use them as time sources. Instead, we're interested in time sources that don't gain or lose a second in 300,000 years, as an example. Computers that connect to these accurate time sources to set their local time are referred to as "stratum 1" time sources. Because there are inherent latencies involved with connecting to the stratum 0 time source, latencies involved with setting the time, and drift that the stratum 1 clocks themselves exhibit, these stratum 1 computers may not be as accurate as their stratum 0 neighbors. In real life, the clocks on good stratum 1 computers will probably drift enough that their time will be off by a couple of microseconds, compared to the stratum 0 source they are getting their time from. Computers that connect to stratum 1 computers to synchronize their clocks are referred to as "stratum 2" time sources. Again, due to the many latencies involved, stratum 2 clocks may not be as accurate as their stratum 1 neighbors, and even less so compared to the further upstream stratum 0 time sources. In practice, your stratum 2 server will probably be off from its stratum 1 upstream server by anywhere from a few microseconds to a few milliseconds. Many factors come into play in how this is calculated, but realize that stratum 2 computers, in practice, are probably the furthest time sources from stratum 0 that you want to synchronize your clocks with. As you would expect, stratum 3 clocks are connected upstream to stratum 2 clocks.
Stratum 4 clocks are connected upstream to stratum 3 clocks, and so forth. Once you reach stratum 16, the clock is considered unsynchronized. So again, in practice, you probably don't want to sync your computer's clock with any stratum lower than 2, thus making your computer a stratum 3. At that point, you're far enough away from the "true time" source that your computer could exhibit time offsets anywhere from a few milliseconds to several hundred milliseconds. If your clock is off by more than 1000 seconds, NTP will refuse to synchronize it, and manual intervention will be required. If an upstream server from which you are synchronizing your clock is off by 1000 milliseconds, or 1 full second, that time source will not be used when synchronizing your clock, and others will be picked instead (this helps weed out bad time sources).

### Client

Debian, Ubuntu, Fedora, CentOS, and most operating system vendors don't package NTP into separate client and server packages. When you install NTP, you've made your computer both a server and a client simultaneously. If you don't want to serve NTP to the network, then don't open the port in your firewall. In this section, we'll assume that you're not going to use NTP as a server, but wish to use it as a client instead. I'm not going to cover everything in the /etc/ntp.conf configuration file, which is the standard installation path. However, there are a few things I do want to cover. First, the "server" lines. You can have multiple server lines in your configuration file; NTP will actively use up to 10. However, how many should you add? Consider the following:

1. If you only have one server configured, and that server begins to drift, then you will blindly follow the drift. If that server consistently gained 5 seconds every month, so would you.
2. If you only have two servers configured, then both will be automatically assigned as "false tickers" by NTP.
If one of the two servers began to drift, NTP would not be able to tell which upstream server is correct, as there would not be a quorum.

3. If you have three or more servers configured, then you can survive "false tickers", and still have agreement on the correct time. If you have five or six servers, then you can survive two false tickers. If you have seven or eight servers, you can survive three false tickers, and if you have nine or ten servers configured, then you can survive up to four false tickers.

### NTP Pool Project

As a client, rather than pointing your servers to static IP addresses, you may want to consider using the NTP Pool Project. Various people all over the world have donated their stratum 1 and stratum 2 servers to the pool; Microsoft, XMission, and even I have offered servers to the project. As such, clients can point their NTP configuration at the pool, which will round-robin and load-balance the servers you connect to. There are a number of different domains that you can use for the round robin. For example, if you live in the United States, you could use:

• 0.us.pool.ntp.org
• 1.us.pool.ntp.org
• 2.us.pool.ntp.org
• 3.us.pool.ntp.org

There are round-robin domains for each continent, minus Antarctica, and for many countries on each of those continents. There are also round-robin domains for projects, such as Ubuntu and Debian:

• 0.debian.pool.ntp.org
• 1.debian.pool.ntp.org
• 2.debian.pool.ntp.org
• 3.debian.pool.ntp.org

### ntpq(1)

NTP ships with a good client utility for querying NTP: the ntpq(1) utility. However, understanding the output of this utility, as well as its many subcommands, can be daunting. I'll let you read its manpage and documentation online. I do want to discuss its peering output in this blog post, though.
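The false-ticker counts listed earlier boil down to a simple majority rule, sketched here as a small helper (a rule of thumb only, not NTP's actual intersection algorithm):

```python
def tolerable_false_tickers(servers):
    """False tickers survivable while a correct majority of servers remains."""
    if servers < 3:
        return 0   # one server: blind trust; two servers: no way to break a tie
    return (servers - 1) // 2

# 3-4 servers tolerate 1 false ticker, 5-6 tolerate 2, 7-8 tolerate 3, 9-10 tolerate 4.
print([tolerable_false_tickers(n) for n in range(3, 11)])  # -> [1, 1, 2, 2, 3, 3, 4, 4]
```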
On my public NTP stratum 2 server, I run the following command to see its status:

```
$ ntpq -pn
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*198.60.22.240   .GPS.            1 u  912 1024  377    0.488   -0.016   0.098
+199.104.120.73  .GPS.            1 u   88 1024  377    0.966    0.014   1.379
-155.98.64.225   .GPS.            1 u   74 1024  377    2.782    0.296   0.158
-137.190.2.4     .GPS.            1 u 1020 1024  377    5.248    0.194   0.371
-131.188.3.221   .DCFp.           1 u  952 1024  377  147.806   -3.160   0.198
-217.34.142.19   .LFa.            1 u  885 1024  377  161.499   -8.044   5.839
-184.22.153.11   .WWVB.           1 u  167 1024  377   65.175   -8.151   0.131
+216.218.192.202 .CDMA.           1 u   66 1024  377   39.293    0.003   0.121
-64.147.116.229  .ACTS.           1 u   62 1024  377   16.606    4.206   0.216
```

We need to understand each of the columns, so we know what this is saying:

• remote- The remote server you wish to synchronize your clock with.
• refid- The reference ID of the upstream source used by the remote server. For stratum 1 servers, this will be the stratum 0 source.
• st- The stratum level, 0 through 16.
• t- The type of connection. Can be "u" for unicast or manycast, "b" for broadcast or multicast, "l" for local reference clock, "s" for symmetric peer, "A" for a manycast server, "B" for a broadcast server, or "M" for a multicast server.
• when- How long ago the server was last queried for the time. Displayed in seconds by default; "m" will be displayed for minutes, "h" for hours, and "d" for days.
• poll- How often the server is queried for the time, from a minimum of 16 seconds to a maximum of 36 hours. The value displayed is a power of two, typically between 64 and 1024 seconds.
• reach- An 8-bit left-shifting register, displayed in octal, that shows the success or failure of the last eight polls of the remote server. Success means the bit is set, failure means the bit is not set. 377 is the highest value.
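The reach column trips up a lot of people. It can be decoded by converting the octal value to binary, as this small helper shows (in the register, the least significant bit is the most recent poll):

```python
def decode_reach(reach_octal):
    """Expand ntpq's octal 'reach' register into the last 8 poll results,
    oldest poll first; '1' = reply received, '0' = poll missed."""
    return format(int(reach_octal, 8), "08b")

print(decode_reach("377"))  # -> 11111111: the last 8 polls all succeeded
print(decode_reach("376"))  # -> 11111110: the most recent poll failed
print(decode_reach("1"))    # -> 00000001: only one poll so far, and it succeeded
```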
• delay- This value is displayed in milliseconds, and shows the round trip time (RTT) of your computer communicating with the remote server.
• offset- This value is displayed in milliseconds, using root mean squares, and shows how far off your clock is from the reported time the server gave you. It can be positive or negative.
• jitter- An absolute value in milliseconds, showing the root mean squared deviation of your offsets.

Next to the remote server, you'll notice a single character. This character is referred to as the "tally code", and indicates whether or not NTP is using, or will use, that remote server to synchronize your clock. Here are the possible values:

• " " Discarded as not valid. It could be that you cannot communicate with the remote machine (it's not online), that this time source is a ".LOCL." refid time source, that it's a high-stratum server, or that the remote server is using this computer as an NTP server.
• "x" Discarded by the intersection algorithm.
• "." Discarded by table overflow (not used).
• "-" Discarded by the cluster algorithm.
• "+" Included in the combine algorithm. This is a good candidate if the current server we are synchronizing with is discarded for any reason.
• "#" Good remote server to be used as an alternative backup. This is only shown if you have more than 10 remote servers.
• "*" The current system peer. The computer is using this remote server as its time source to synchronize the clock.
• "o" Pulse per second (PPS) peer. This is generally used with GPS time sources, although any time source delivering a PPS will do. This tally code and the "*" tally code will not be displayed simultaneously.

Lastly, in understanding the output, we need to understand what is being used as a reference clock in the "refid" column:

• IP address- The IP address of the remote peer or server.
• .ACST.- NTP manycast server.
• .ACTS.- Automated Computer Time Service clock reference from the American National Institute of Standards and Technology.
• .AUTH.- Authentication error.
• .AUTO.- Autokey sequence error.
• .CRYPT.- Autokey protocol error.
• .DCFx.- LF radio receiver from station DCF77, operating out of Mainflingen, Germany.
• .GAL.- European Galileo satellite receiver.
• .GOES.- American Geostationary Operational Environmental Satellite receiver.
• .GPS.- American Global Positioning System receiver.
• .HBG.- LF radio receiver from station HBG, operating out of Prangins, Switzerland.
• .INIT.- Peer association initialized.
• .IRIG.- Inter-Range Instrumentation Group time code.
• .JJY.- LF radio receiver from station JJY, operating out of Mount Otakadoya, near Fukushima, and also on Mount Hagane, on Kyushu Island, Japan.
• .LOCL.- The local clock on the host.
• .MCST.- NTP multicast server.
• .MSF.- National clock reference from Anthorn Radio Station near Anthorn, Cumbria.
• .NIST.- American National Institute of Standards and Technology clock reference.
• .PPS.- Pulse per second clock discipline.
• .PTB.- Physikalisch-Technische Bundesanstalt clock reference, operating out of Brunswick and Berlin, Germany.
• .RATE.- NTP polling rate exceeded.
• .STEP.- NTP step time change. The offset is less than 1000 milliseconds but more than 125 milliseconds.
• .TDF.- LF radio receiver from station TéléDiffusion de France, operating out of Allouis, France.
• .TIME.- NTP association timeout.
• .USNO.- United States Naval Observatory clock reference.
• .WWVH.- HF radio receiver from station WWVH, operating out of Kekaha, on the island of Kauai in the state of Hawaii, United States.

### Client Best Practice

There seem to be a couple of long-standing myths out there about NTP configuration. The first is that you should only use stratum 1 NTP servers, because they are closest to the true time source. Well, this isn't always the case.
Connecting to stratum 1 time servers that have high RTT latencies can produce large jitter and large offsets. Rather, you should find stratum 1 servers that are physically close to your client. Also, many stratum 1 servers might be overloaded, and finding less-stressed stratum 2 servers might deliver more accurate results. The other myth out there is that you should only connect to physically close NTP servers. This isn't necessarily true either. If the closest NTP servers to you only have one physical link, and that link goes down, you're sunk. Further, if the closest NTP servers to you are stratum 4 or 5 servers, you may exhibit large offsets from the upstream stratum 0 sources. There is a reason why the NTP Pool Project only lists public stratum 1 and stratum 2 servers, and there's a reason why stratum 16 is considered unsynchronized. The point is, there is a balance in configuring NTP. If you have a large infrastructure, it would make sense to build and install a stratum 1 or stratum 2 source at each logically distinct location (geographically or VLAN'd), and have each server and workstation connect to that logically local NTP server. If it's just your personal computer, then it probably makes sense to just use the NTP Pool Project and its round-robin domain names. You should keep efficiency and redundancy in mind. So, you should probably consider the following best practices when configuring your NTP client:

• Use at least 3 servers, and don't statically use busy servers.
• Consider using the NTP Pool Project, if you will be operating as a client only.
• If statically setting IP addresses for your servers, try to keep the following in mind:
  • Use servers that are physically close to your computer. These servers should have low ping latencies.
  • Use servers that are geographically separated across the globe. Just in case the trans-Atlantic cable is cut, you can still communicate with other servers.
  • Use servers that use different time sources.
If all of your servers use GPS as their time source, and GPS goes offline, you will not have anything to synchronize your clocks against.

• Consider using all 3 of the above for a single client.

## Open Letter To All GNU/Linux and Unix Operating System Vendors

This is an open letter to all GNU/Linux and Unix operating system vendors. Please provide some sort of RSS or Atom feed for just new releases. Nothing else. No package updates. No "community" posts. No extra fluff. It shouldn't include news about being included in the Google Summer of Code. It shouldn't provide a list of package security advisories. It shouldn't include why you think dropping one package for a fork is a good idea. Did you just release "H4x0rz Linux 13.37"? Great! Publish that release news to a central spot, where only releases are posted, and give me a feed to parse. Still confused? Let me give you an example: Perfect. It includes alpha releases, which are fine. But it focuses only on the new releases. No community news; that's what planets are for. No package updates; I can figure those out in the OS itself. Just releases. Here's a list of vendors that I would like to put in my feed reader, for which I cannot find any such centralized feed source:

• CentOS
• Debian
• Fedora
• FreeBSD
• Linux Mint
• OpenBSD
• OpenSUSE
• Scientific Linux
• Slackware

I know some projects have web forums, of which there may be a subforum dedicated to releases only. If that forum provides an RSS feed, perfect. I know some mailing list managers also provide RSS feeds for archives. That works too. I don't care where it comes from, just so long as there is a reliable, up-to-date source for news on just the latest release, nothing else. Thanks!

I just recently acquired a Raspberry Pi at SAINTCON 2013. I already had one, and forgot how much fun these little computers can be.
I also forgot what a PITA they can be if you don't have your house hard wired to your switch for Internet access, and have to go into the basement to plug in. Plugging into a monitor and keyboard isn't a big deal for me; it's just the inconvenience of getting to the Internet. So, I downloaded Raspbian and ran through the initial config, including setting up an SSH server. The only thing left to do is get it online, and that will take a little config, which this post is about. My laptop is connected wirelessly, so my ethernet port is available. So, I should be able to plug the Raspberry Pi into the laptop, and have it use the laptop's wireless connection. In other words, I'll be using my laptop as a router and a gateway. So, let's get started. Below is an image of what I am trying to accomplish:

The Raspberry Pi needs to be connected to the laptop via a standard twisted pair ethernet cable. The laptop will be connecting to the Internet wirelessly. So, while I still had my Raspberry Pi connected to the monitor and keyboard, and while it was offline, I edited the /etc/network/interfaces file as follows:

```
# Raspberry Pi
iface eth0 inet static
    address 172.16.1.2
    netmask 255.255.255.252
    gateway 172.16.1.1
```

Then, on my laptop, I gave my ethernet port the address of "172.16.1.1" (mostly because no one ever uses this network, so it shouldn't conflict with your home/office). My laptop must be the gateway to the Internet for the Raspberry Pi. Notice that my laptop does not need a gateway for this interface. Instead, it's going to masquerade off of the "wlan0" interface, which already has a gateway configured:

```
# Laptop
iface eth0 inet static
    address 172.16.1.1
    netmask 255.255.255.252
```

Now, I need to make my laptop a router, so it can route packets from one network (172.16.1.0/30) to another (whatever the "wlan0" interface is connected to). As such, run the following command as root:

```
echo 1 > /proc/sys/net/ipv4/ip_forward
```

Now, at this point, the "eth0" and "wlan0" interfaces are logically disconnected.
Any packets coming into the "eth0" device won't make it any further. So, we need to create a logical pairing, called a "masquerade". This will allow packets going in "eth0" to exit "wlan0", and vice versa. So, as root, pull up a terminal, and type the following:

```
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
iptables -A FORWARD -i eth0 -j ACCEPT
```

If you have any firewall rules in your INPUT chain, you will need to open up access for the 172.16.1.0/30 network. At this point, plug your Raspberry Pi into your laptop, SSH into the Pi, and see if you can ping out to the Internet.

## NTP Drift File

Many things about NTP are elusive. For the casual user, there are a lot of things to understand: broadcast, unicast, multicast, tally codes, servers, peers, stratum, delay, offset, jitter and so much more. Unless you set up your own NTP server, with the intent of providing accurate timekeeping for clients, many of those terms can be discarded. However, one term you may want to be familiar with is "drift". Clock drift is when a clock runs too fast or too slow compared to a reference clock. NTP version 4 has the ability to keep clocks accurate to within 233 picoseconds (called "resolution"). Of course, to have this sort of accuracy, you need exceptionally low latency networks with specialized hardware. High volume stock exchanges might keep time accuracy at this level. Generally speaking, for the average NTP server and client on the Internet, comparing time in milliseconds is usually sufficient. So, where does NTP keep track of the clock drift? For Debian/Ubuntu, you will find this in the /var/lib/ntp/ntp.drift file. In the file, you'll find either a positive or negative number. If it's positive, your clock is fast; if it's negative, your clock is slow. This number, however, is not measured in seconds, milliseconds, nanoseconds or picoseconds. Instead, the number is measuring "parts per million", or PPM.
It's still related to time, and you can convert this number to seconds, which I'll show you here. There are 86,400 seconds in one day. If I were to divide that number into one million pieces, then there would be 0.0864 seconds per piece, or 86.4 milliseconds per piece.

My laptop connects to the standard NTP pool (0.us.pool.ntp.org, etc). I have a number of "3.322" in my drift file. This means that my laptop is fast by 3.322 PPM compared to the time source I am synchronizing my clock with (called the "sys_peer"). If I wanted to convert that to seconds, then:

```
3.322 PPM x 86.4 ms per PPM ≈ 287 ms
```

My laptop is fast by roughly 287 milliseconds per day compared to my "sys_peer".

I just recently announced an open access NTP server. It was critical for me that this server be as accurate as possible with timekeeping. So, all of the stratum 1 time servers that it connected to had to have a ping latency of less than 10 milliseconds. Thankfully, I was able to find 3 servers with latencies of less than 6 milliseconds, one of which is only 500 microseconds away. This became the preferred "sys_peer". The contents of its drift file are currently "-0.059". Again, converting this to seconds:

```
0.059 PPM x 86.4 ms per PPM ≈ 5 ms
```

My NTP server is slow by roughly 5 milliseconds per day compared to the "sys_peer" time source at that specific moment.

Hopefully this clears up the NTP drift file, which I'm sure many of you have noticed. If you connect to NTP servers with very low latencies, then you'll notice that your drift file number approaches zero. It's probably best to find 3 or 5 NTP servers that are physically close to you to keep those latencies low. If you travel a lot with your laptop, then connecting to the NTP pool would probably be best, so you don't need to constantly change the servers you're connecting to.

## New Public NTP Server

I just assembled a public access NTP stratum 2 server. Feel free to use it, if you wish. It is considered "Open Access". It has a public webpage at http://jikan.ae7.st.
This stratum 2 server has a few advantages over some others online:

• It connects to three stratum 1 GPS time-sourced servers.
• Each stratum 1 server is less than 6 milliseconds away.
• The preferred stratum 1 server is about 0.5 milliseconds away.
• Stratum 2 peering is available - just contact me.
• It has a 100 Mbit connection to the Internet.
• The ISP sits behind four redundant upstream transit providers.
• The ISP also peers on the Seattle Internet Exchange.

It is also available in the NTP pool at http://www.pool.ntp.org/en/. If you want to synchronize your computer with this server, then just add the following line to your /etc/ntp.conf configuration file:

```
server jikan.ae7.st
```

Eventually, I'll also offer encrypted NTP for those who wish to have encrypted NTP packets on the wire (only if it's possible to offer both encrypted and unencrypted NTP simultaneously - I think it is). I'm also currently working on finding some other stratum 2 peers that are less than 30 milliseconds away. If you're running an NTP server, and want to peer with me, just let me know. Hopefully, this will be of some benefit to the community.
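As a footnote to the drift-file discussion above, the PPM-to-milliseconds conversion is easy to script. This is a sketch; the helper name is my own choosing, not anything from ntpd itself:

```python
SECONDS_PER_DAY = 86_400

def ppm_to_ms_per_day(ppm):
    """Convert an ntp.drift value (parts per million) to ms of drift per day."""
    # 1 PPM of 86,400 seconds is 0.0864 s, i.e. 86.4 ms per day.
    return ppm * SECONDS_PER_DAY / 1_000_000 * 1000

print(round(ppm_to_ms_per_day(3.322)))      # 287  (the laptop example: fast)
print(round(ppm_to_ms_per_day(-0.059), 1))  # -5.1 (the server example: slow)
```

A positive drift value means the clock is fast, so the sign carries through the conversion unchanged.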
http://mathoverflow.net/feeds/question/104552
Asymptotic behaviour/upper bound for $\int_0^{\infty} \exp(-c x^a+K x^b)\,dx$ for $a>b>0$ as $K\rightarrow\infty$ - MathOverflow

Question by warsaga (2012-08-12):

What is the asymptotic behaviour of, or an upper bound for, $\int_0^{\infty} \exp(-c x^a+K x^b)\,dx$ for $a>b>0$ as $K\rightarrow \infty$? Or any good reference for tools to tackle this question?

I think the growth in $K$ should be polynomial, because $-c x^a+K x^b=0$ yields $x=(K/c)^\frac{1}{a-b}$; on the range $[0,(K/c)^\frac{1}{a-b}]$ the maximum value of the integrand is again a power of $K$ (take the derivative and set it to 0), and the product yields an upper bound on $\int_0^{(K/c)^\frac{1}{a-b}} \exp(-c x^a+K x^b)\,dx$. On the other hand, $\int_{(K/c)^\frac{1}{a-b}}^{\infty} \exp(-c x^a+K x^b)\,dx$ should be decreasing in $K$ for large $K$.

Thank you.

Answer by Alexandre Eremenko (2012-08-12):

This is a simple example for the Laplace method of asymptotic evaluation of integrals. The essence of the method is that the main contribution to the integral comes from a small neighborhood of the critical point of the function under the exponent. The Laplace method is explained in every serious calculus book. For a more comprehensive treatment see the books of Fedoryuk, for example, Fedoryuk, M. V.
(1987), Asymptotics: Integrals and Series, or see here: http://www.encyclopediaofmath.org/index.php?title=Saddle_point_method

Answer by Igor Rivin (2012-08-13):

One can get a full asymptotic expansion as an application of Watson's Lemma (http://en.wikipedia.org/wiki/Watson%27s_lemma). One need only observe that the integrand is maximized at $x_0 = \left(\frac{Kb}{ac}\right)^{1/(a-b)}.$

Substituting $x = x_0 u,$ one gets (where $I$ is the original integral) $I = x_0 \int_0^\infty \exp(-c^{-b/(a-b)} K^{a/(a-b)}((b/a)^{a/(a-b)} u^a - (b/a)^{b/(a-b)} u^b))\, du.$ Letting $t = c^{-b/(a-b)} K^{a/(a-b)},$ the integral breaks up into two Watson Lemma integrals, one from $0$ to $1,$ the second from $1$ to $\infty.$ I leave the final computation of the asymptotics to the interested reader.

EDIT: Actually, this is not quite Watson's lemma. You approximate the function $\phi(u) = (b/a)^{a/(a-b)} u^a - (b/a)^{b/(a-b)} u^b$ by its Taylor series at the point $u=1$, where the integrand is maximized (so $\phi$ is minimized). This will look like $\phi(1) + (u-1)^2 \phi^{\prime\prime}(1)/2$ with $\phi^{\prime\prime}(1)>0.$ This means that the integral is asymptotically approximated by a Gaussian integral (I am too lazy to compute $\phi^{\prime\prime}(1)$...)
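For reference, the leading-order form of the Laplace estimate that both answers point to can be written out explicitly. This is a hedged sketch of the standard Laplace-method formula, not part of either answer:

```latex
% Laplace method, leading order (assumes a > b > 0 and c, K > 0):
% f(x) = -c x^a + K x^b has a single interior maximum at x_0.
\[
  f(x) = -c\,x^{a} + K x^{b}, \qquad
  x_0 = \Bigl(\tfrac{Kb}{ac}\Bigr)^{1/(a-b)},
\]
\[
  \int_0^{\infty} e^{-c x^{a} + K x^{b}}\,dx
  \;\sim\; e^{f(x_0)} \sqrt{\frac{2\pi}{\lvert f''(x_0)\rvert}},
  \qquad K \to \infty .
\]
% Since f(x_0) = c^{-b/(a-b)} K^{a/(a-b)} ((b/a)^{b/(a-b)} - (b/a)^{a/(a-b)}) > 0,
% the growth in K is exponential in K^{a/(a-b)}, not polynomial.
```

The positivity of $f(x_0)$ follows because $(b/a)<1$ and $b/(a-b) < a/(a-b)$, which is why the growth in $K$ turns out faster than the polynomial rate guessed in the question.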
http://tex.stackexchange.com/questions/27803/when-to-use-in-an-if-statement
When to use @ in an \if statement

I am new to LaTeX, and so this question might come across as rather basic. It might also reflect my biases/assumptions from my C/C++ programming days. I was reading the code of the 'exam' class and noticed that some \if<xyz> had an @ while others didn't. For example, below are a few lines of code from the class file (in order of appearance/execution):

1. \newif\ifcancelspace
2. \if@addpoints … \fi
3. \def\addpoints{\global\@addpointstrue}
4. \newif\if@addpoints \@addpointsfalse

I can't understand the following:

• Is the \if@addpoints in [2] evaluated (when it appears) using the value of @addpoints (declared in [4])?
• Is there a significance to prepending an @ to addpoints above?
• Is it just good programming practice for naming variables in class/package files?
• Or, was it necessary because there is another \def\addpoints in [3] and the @ was required to differentiate between the two?
• What is the default scope for variables in LaTeX?
• How can @addpoints be made global in [3] when it is not defined until [4]?

- This may help: tex.stackexchange.com/questions/8351/… – Andrey Vihrov Sep 7 '11 at 19:56

This question is about several things at once, which makes answering it somewhat interesting. Others have taken one tack; I'll take a different one.

First, bear in mind that TeX is a macro expansion language, not a functional language. Secondly, note that TeX has very few built-in variable types. A lot of 'variables' are therefore macros with appropriate structure.

The \newif macro creates three new macros from the argument \if<name>:

• \if<name>, a switch to be used in tests;
• \<name>true, which sets the switch logically true;
• \<name>false, which sets the switch logically false.

Typically, the \if<name> here is something like \if@myswitch or \ifmy@switch. Other answers have mentioned that @ here is a 'letter', which is used to namespace TeX macros. Thus the @ has no special meaning to TeX: it's there for the programmer.
(The usual pattern is to use @ to make the names easier to read, so \if@<package>@<meaning> or \if<package>@<meaning> are common.)

So, analysing the question, the two lines \newif\ifcancelspace and \newif\if@addpoints create the macros

• \ifcancelspace, \cancelspacetrue, \cancelspacefalse
• \if@addpoints, \@addpointstrue, \@addpointsfalse

The two switches can now be used in a construction

    \if@addpoints % or \ifcancelspace
      % Do stuff
    \else
      % Do other stuff
    \fi

Taking point [4] next, the macro \@addpointsfalse sets the \if@addpoints switch to logically false. So it means my test above would 'do other stuff'. Setting \@addpointstrue would mean that the test would 'do stuff'.

The last point to deal with is [3], for which you need to understand grouping in TeX, macro expansion and how the switches actually work. As TeX is a macro language, it does not have a concept of a variable being used 'within' a function. Thus groups are created by the constructs

    { ... }

or

    \begingroup ... \endgroup

(A brace group is also used in places where TeX 'expects' grouping, and so it effectively disappears. That is the case, for example, with the group needed to use \def.) An assignment will be trapped within such a group unless it is made globally.

Now, the definition \def\addpoints{\global\@addpointstrue} means that where we use \addpoints, TeX will replace it with \global\@addpointstrue (macro expansion). We'll come back to \global in a bit, but first note that \@addpointstrue is a macro which expands to \let\if@addpoints\iftrue. This is an assignment, and normally applies only within the current TeX group level. However, the \global prefix means that the assignment ignores grouping. So the result is that \addpoints will globally set the switch \if@addpoints to logically true.

- Beautifully explained. I've learned a lot from this essay! – Mico Sep 7 '11 at 21:25

Thanks Joseph for the wonderfully detailed answer!! I needed a paradigm shift and I think I have it now..
– Abhinav Sep 8 '11 at 6:20

Although this does not answer your bullets in order, it does provide some guidance as to what's going on with the code [1]-[4] you presented.

@ in LaTeX is a reserved symbol, and therefore not treated in the same way as other letters are. Executing \makeatletter reverses this reservation, "making @ a letter", so it can be used in regular variables. Of course, \makeatother restores this change. Therefore \newif\if@addpoints specifies a new boolean called @addpoints. Executing \@addpointstrue/\@addpointsfalse sets it to true/false. In a similar sense \newif\ifcancelspace provides a boolean cancelspace that can be modified in a similar way (\cancelspacetrue or \cancelspacefalse).

I think the motivation behind using @ in variables is that it adds a depth-layer for the macro-programmer. For example, it allows the programmer to (say) prefix all variables with a package name using something like \newif\myfunc@dothis. If dothis was a very common yet descriptive variable name, other packages might conflict with such a definition. However, with the prefix this is avoided. So, in some sense, with a multitude of packages out there and very different programming styles (and uses of variables/macros), it is good programming practice to add this layer of specificity.

The default scope of variables in LaTeX is local to the group within which they are declared/modified. Prefixing an assignment with \global makes the change global, so it exists outside the group in which the assignment was made. For example, consider \def\addpoints{\global\@addpointstrue}. This defines a macro called \addpoints (by means of \def), and the macro executes \global\@addpointstrue. From the above discussion it should be clear that this sets the boolean @addpoints to true. However, since the macro definition is encompassed by braces { }, it defines a group that starts with { and ends with }. Any changes/assignments made within such a group are void outside of it.
Since you used \global, this overrides the scope to extend beyond the group. That is, \@addpointstrue is not "made global". Instead, the scope of the modification is extended globally.

Specific to the last bullet: \newif\if@addpoints is the definition of the boolean @addpoints, while \global adjusts the scope (as mentioned before). From a programming point of view, the use of global may be relevant at variable declaration. However, in LaTeX that is not exclusively the case. It can, in fact, be used at variable declaration (in order to make the declaration global, as in \global\def\mycommand{...}, which is equivalent to \gdef\mycommand{...}, by the way) or during some variable modification.

- The \global is not used to escape from the group created by \def (this is not a group in that sense, but a definition extent), but to escape from groups at point-of-use. – Joseph Wright Sep 7 '11 at 20:20

The @ symbol is frequently (though not exclusively) used in LaTeX as a "special character", meaning (mostly) that any macros that incorporate commands including this character have to be in a "special mode" (OK, this is starting to sound circular!) in order to work. The "special mode" is entered with the command \makeatletter and is exited with the command \makeatother.

The LaTeX kernel and many LaTeX packages define commands for internal use that should not ever be accessed directly in ordinary programming, i.e., by users. To help avoid accidental use of these internal commands, they're often given one or more @ symbols; the theory is that for a user to access such commands, they have to explicitly provide the command \makeatletter before they can do so, in which case (so the theory goes) they know what they're doing...
The examples you provide are illustrative of this point: the instruction \newif\if@addpoints creates a boolean variable named, what else, @addpoints; this is an internal variable and should (in general) not be manipulated directly; its default value is set to "false" by the statement \@addpointsfalse. The command \addpoints, on the other hand, is a user-level command which lets you change the state of this variable to "true". Of course, this programming convention isn't entirely bullet-proof, but experience has shown that code that follows this convention is a lot more robust.
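Distilling the answers above into one minimal sketch (the macro names are the ones from the question; the expansions shown in the comments for \@addpointstrue/\@addpointsfalse are the ones \newif generates):

```latex
\makeatletter
% \newif\if@addpoints creates three macros:
%   \if@addpoints    - the switch itself, initially false
%   \@addpointstrue  - expands to \let\if@addpoints\iftrue
%   \@addpointsfalse - expands to \let\if@addpoints\iffalse
\newif\if@addpoints
\@addpointsfalse                       % set the default explicitly

% User-level command: \global makes the assignment escape grouping
\def\addpoints{\global\@addpointstrue}

\if@addpoints
  % do stuff
\else
  % do other stuff
\fi
\makeatother
```

Running \addpoints anywhere (even inside a group) flips the switch for the rest of the document, which is exactly the behaviour the question's points [3] and [4] combine to produce.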
http://mathoverflow.net/feeds/user/987
User Gray Taylor - MathOverflow

Polygons that are hard to guard (question, 2012-04-26)

Given an $n$-vertex polygonal 'art gallery' $P$ in the plane, it is possible to cover the interior of $P$ by placing 'guards' at (at worst) $\lfloor n/3\rfloor$ of the vertices of $P$. That this is sufficiently many can be shown elegantly by triangulating $P$, then $3$-colouring this triangulation and placing guards at the vertices with the least common colour.

For a lower bound only a single family of examples is needed, and the standard is the $n$-pronged comb (or crown) which has $3n$ vertices and requires one guard for each prong. However, in considering variations on the art gallery problem it can be the case that the comb is easier to guard, and thus other families (which are harder in this new context) are required. So, is there (or can we construct in comments) a big list of 'hard to guard' polygons - that is, $n$-vertex polygons for which $\lfloor n/3 \rfloor$ guards are required - that could be used as starting points for considering variations?

Answer by Gray Taylor for "Video lectures of mathematics courses available online for free" (2012-04-18)

Coursera (https://www.coursera.org/category/math) offers not just the videos, but entire courses: I'm currently following Probabilistic Graphical Models, which has weekly exercises and programming projects (which are marked by an autograder), plus community discussion boards and a wiki for collaborating with other students pursuing the course at the same time.
Although you could presumably just create an account towards the end of term, archive off all the videos and then watch them at your leisure rather than trying to match the (reasonably demanding) schedule.

Need there be infinitely many Gaussian primes along lines that contain at least one? (question, 2012-03-20)

Greetings from EuroCG 2012, from which I post via iPod, so apologies for lack of problem motivation, background research and mathematical formatting.

Question: Suppose $L$ is a horizontal or vertical line in the Argand plane passing through a Gaussian prime. Are there infinitely many Gaussian primes on $L$?

In fact, all I need is a next prime along a line, but of course if that was guaranteed one could repeat the process to keep going forever. Still, if there is a next prime, some idea of how far along it is might also be useful for the application in mind.

Hopefully equivalent question for rational primes in rational integer sequences: let $s(k)=a^2+(b+k)^2$ for $k\ge0$. If $s(0)$ is prime, does the sequence $\{s(k)\}$ contain infinitely many primes?

Records for low-height points on elliptic curves over number fields (question, 2011-10-06)

Elkies maintains a list of nontorsion points of low height on elliptic curves over Q (http://www.math.harvard.edu/~elkies/low_height.html); does anyone know of anything similar for curves over number fields?

Everest and Ward give examples (https://ueaeprints.uea.ac.uk/19694/) of points of height 0.01032... and 0.009721... on curves over Q(w) for w a cube root of unity or the golden ratio respectively.
I have made a modest improvement in the latter case, recovering a point of height 0.009128... .

In the context of the elliptic Lehmer problem the aim is to minimise dh(P) for d the degree of the number field, so working over quadratic extensions a point would have to have height less than 0.005 to be competitive with the examples in Elkies' table. Are there any examples?

Answer by Gray Taylor for "Records for low-height points on elliptic curves over number fields" (2011-11-11)

Since no answers have been given here or via the NMBRTHRY mailing list (and as this question is now the top hit on Google for 'low height points on elliptic curves'), perhaps you'll allow me the luxury of answering my own question...

I have constructed a page detailing some points on curves over quadratic fields with height at most 0.01; two of the examples have height less than 0.005, so (scaling for degree) are competitive with some of those listed by Elkies. The table can be found at http://maths.straylight.co.uk/low_height, and additional contributions would be happily accepted!

Answer by Gray Taylor for "Best online mathematics videos?" (2011-02-07)

At the accessible end of the scale, Vi Hart's "doodling in math class" series and subsequent videos (http://www.youtube.com/user/Vihart) are a delight.

Answer by Gray Taylor for "What is the best graph editor to use in your articles?"
(2010-02-18)

I already had about 30 pages of graphs typeset with xymatrix for my thesis before discovering TikZ, but was so impressed by it that I was happy to rewrite them all. As well as (imho) looking better, it gave me cross-platform compatibility - xypic seems to need pstricks, so on the Mac with TeXShop (which uses pdflatex, I assume) the old graphs couldn't even be rendered.

Its ability to construct graphs iteratively can also be a massive timesaver - for instance, I wanted a bunch of otherwise identical rectangles at various positions, so with TikZ could just loop over a list of their first coordinates rather than having to tediously cut, paste and modify an appropriate number of copies of the command for a rectangle. Particularly handy when I then decided they all needed to be slightly wider!

There's a gallery of TikZ examples at http://www.texample.net/tikz/examples/, to give you some idea of what it's capable of (and with the relevant source code - I did find the manual a bit hard to understand and learnt mostly by examples or trial and error).

The vector graphics package Inkscape (which I used to use for drawing more complicated graphs for inclusion as EPS images) also apparently has a plugin to export as TikZ, although I haven't tried that out.

English reference for a result of Kronecker? (question, 2010-01-06)

Kronecker's paper "Zwei Sätze über Gleichungen mit ganzzahligen Coefficienten" apparently proves the following result that I'd like to reference:

Let $f$ be a monic polynomial with integer coefficients in $x$.
If all roots of $f$ have absolute value at most 1, then $f$ is a product of cyclotomic polynomials and/or a power of $x$ (that is, all nonzero roots are roots of unity).

However, I don't have access to this article, and even if I did my 19th century German skills are lacking; does anyone know a reference in English I could check for details of the proof?

Recovering $\Phi(n)$ from a multiple? (question, 2009-11-20)

I've been attending a series of lectures on cryptography from an engineering perspective, which means that most of the assertions made are supplied without proof... here's one that the lecturer couldn't recall the reason for, nor the original source of.

Given an unfactored $n=pq$, computing $\phi(n)$ is as hard as finding $p,q$; this is the key idea of various "RSA-like" cryptosystems. One presented had a step in which for a secret $k$ and a random $t$, $k-t\phi(n)$ is transmitted. The claim was then that this process should only be applied once, as if an attacker sees $k-t\phi(n)$ and $k-u\phi(n)$ then they can recover $(t-u)\phi(n)$, and it's alleged that this makes it easier to compute $\phi(n)$.

So my question is, why is it easier to compute $\phi(n)$ given a random multiple of it, assuming we're at "cryptographic size"?
(That is, $p,q$ sufficiently large that it's not feasible to try and factor $n$ and $\phi(n)$.)

Answer by Gray Taylor for "Experimental Mathematics" (2010-01-17)

I was recently at this workshop at Fields on 'discovery and experimentation in number theory' (http://www.fields.utoronto.ca/programs/scientific/09-10/FoCM/discovery/): you can get audio and slides from many of the presentations at http://www.fields.utoronto.ca/audio/#discovery, although the one I wanted to recommend - David Bailey's talk - appears to be audio only. It depends what you mean by 'major mathematical advance', but there certainly seem to be many problems in number theory where, even if the proof doesn't depend on heavy computation, gaining insight into what to set about proving did. I personally logged several CPU-weeks trying to get to grips with my own thesis topic.

Answer by Gray Taylor for "ADE type Dynkin diagrams" (2009-11-25)

Let $G$ be a connected graph with the property that all eigenvalues of $G$ lie in $[-2,2]$ (such a $G$ is called cyclotomic). Then $G$ is either one of $\tilde{E}_6,\tilde{E}_7,\tilde{E}_8$, an $\tilde{A}_n$ for $n\ge 2$, a $\tilde{D}_n$ for $n\ge4$, or an induced subgraph of one of these. In other words, the ADE graphs classify the maximal cyclotomic graphs.

Answer by Gray Taylor for "When is a monic integer polynomial the characteristic polynomial of a non-negative integer matrix?"
(2009-11-05)

Second idea, which at least gives some necessary conditions...

The Perron-Frobenius theorem for non-negative matrices (http://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem) ensures that there is always a real eigenvalue equal to the spectral radius. So a polynomial cannot be the char. poly. of such a matrix if it has no real roots, or if the greatest absolute value of any root is greater than the largest real root. This necessary condition thus generalises your observation in the monic quadratic case.

Answer by Gray Taylor for "When is a monic integer polynomial the characteristic polynomial of a non-negative integer matrix?" (2009-11-03)

I can possibly offer a counterexample, from http://arxiv.org/abs/0907.0371.

If $P=x^7-8x^5+19x^3-12x+1$ were the characteristic polynomial of a matrix corresponding to a graph, then it would be the char. poly. of a matrix corresponding to a charged signed graph (symmetric, all entries 0, 1 or -1). For such matrices we define the associated reciprocal polynomial to be $(z^d)X(z+1/z)$, where $X$ is the characteristic polynomial and $d$ its degree. In this case, the associated reciprocal polynomial would be $z^{14}-z^{12}+z^7-z^2+1$. For any integer polynomial we can find a Mahler measure, and the Mahler measure of this polynomial is 1.20261... However, Smyth and McKee determined the Mahler measures less than 1.3 that arise from associated reciprocal polynomials of charged signed graphs, and this quantity is not attained.

So $P$ cannot be the characteristic polynomial of a charged signed graph, of which graphs are a special case.
Does $P$ satisfy your non-negativity conditions on the roots? The sums of odd powers seem to be zero.

Answer by Gray Taylor for "Thematic Programs for 2010-2011?" (2009-10-24)

By next year do you mean the next academic year? If *not*, there's "Number Theory as Experimental and Applied Science" (http://www.crm.umontreal.ca/NT2010/) at CRM in Montreal, Jan-Apr 2010.

Answer by Gray Taylor for "Algorithm to Find all the Cycle Bases in a Graph" (2009-10-22)

By 'clean cycle' do you perhaps mean 'chordless cycle'? This I think is well-defined without an embedding, as it's just a condition on adjacency of vertices. If so, this page seems to describe an algorithm for enumerating such cycles: http://research.nii.ac.jp/~uno/code/cypath.htm

Comment by Gray Taylor on "Polygons that are hard to guard" (2012-04-26)

At the moment the variation I'm playing with is to allow guards to see through a single wall (the $k=1$ case of 'k-transmitters'), but to require that they be placed in the exterior of $P$. For the upper bound this is not much of a variation - if you can $0$-guard at $l$ vertices, you can $1$-guard at $l$ external locations by pushing the guards just outside. But for the lower bound I haven't yet drawn something that I couldn't get away with one less guard on, since they can see into two prongs or spikes or whatever fiddly bit I add. But I suspect this could just be inexperience on my part!
http://mathoverflow.net/questions/77398/how-did-ores-conjecture-become-a-conjecture/77401#77401 Comment by Gray Taylor Gray Taylor 2011-10-07T16:48:16Z 2011-10-07T16:48:16Z I was about to post precisely this! Extra confusion is caused by there being another number-theoretic problem sometimes described as Lehmer's conjecture - the assertion that Ramanujan's tau function is nonzero - for which it's also not clear that he actually made the conjecture, rather than just suggesting the question. http://mathoverflow.net/questions/17730/whats-the-definition-of-saturated-subgraph Comment by Gray Taylor Gray Taylor 2010-03-10T14:44:27Z 2010-03-10T14:44:27Z For context, could you tell us which paper? http://mathoverflow.net/questions/12186/are-there-any-historical-accounts-of-individuals-who-study-math-books-like-novels Comment by Gray Taylor Gray Taylor 2010-01-18T13:08:13Z 2010-01-18T13:08:13Z Could one even learn a novel 'sufficiently well' by reading it 'like a novel'? Excluding those with phonebook-memorizing savant abilities, I don't think reading any book recreationally would be the same as studying it. http://mathoverflow.net/questions/10911/english-reference-for-a-result-of-kronecker/10915#10915 Comment by Gray Taylor Gray Taylor 2010-01-06T14:14:32Z 2010-01-06T14:14:32Z Thanks for the proofs both of you; I'm referencing the result in my thesis so yes, I need something to cite, but for something that admits quick proofs like this I feel I should definitely know <i>how</i> it works rather than just appeal to a source. http://mathoverflow.net/questions/3939/when-is-a-monic-integer-polynomial-the-characteristic-polynomial-of-a-non-negativ/3974#3974 Comment by Gray Taylor Gray Taylor 2009-11-03T18:33:08Z 2009-11-03T18:33:08Z Ah, I see. The basic Mahler measure idea would work for multigraphs (as the only 'good' integer matrices are the ones corresponding to CSMs) but if you relax symmetry by introducing directionality I imagine things break.
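The reciprocal-polynomial construction used in the counterexample answer above can be written out explicitly. This is only a restatement of the definitions and the expansion already quoted in the answer, not anything new:

```latex
% For a characteristic polynomial X of degree d, the associated
% reciprocal polynomial is
\[
  R(z) = z^{d}\, X\!\left(z + \tfrac{1}{z}\right).
\]
% With X(x) = x^7 - 8x^5 + 19x^3 - 12x + 1, so d = 7, this expands
% to the polynomial quoted in the answer:
\[
  R(z) = z^{14} - z^{12} + z^{7} - z^{2} + 1.
\]
```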
https://liucs.net/cs168f18/A08Sol.html
# Assignment 8 solutions

Tue Nov 6

We will begin with the definition of pseudo-random numbers from the notes. The only difference is that we're going to use an abbreviation for the type of the rand function, by defining this type synonym: Using Gen can make it clearer when and where this state-passing pattern is being used. The first argument s is the state type, and the second argument a is the result type. Here is the definition of rand using that type. (The code itself is exactly the same as in the notes.)

# Coin flips

Now let's make a simple enumerated type for representing coin flips. A coin flip can be heads or tails: This is 'isomorphic' to a Boolean value – we could just represent heads as True and tails as False (or vice versa), but for clarity it's sometimes helpful to declare such things as distinct types.

Next, let's make a function to convert any integer into a coin flip. You want the coin to be fair (or as close as possible), so use even or odd to determine whether you produce Heads or Tails. Now create a generator for coin flips. You can call rand and/or coinFromInt within your code.

# Iterating a generator

In the notes we defined randomList, which would generate a list of specified length containing random values. To do that, it hard-coded a call to rand. Now we'll implement a variant where the generator is specified as an argument, so that it can generate a list of anything for which a generator is provided. Here is the more general type: So, given a generator (using state s) of values of type a, and an Int for the length of the list, we can provide a generator of lists of values. The implementation looks a lot like the one in the notes – it has a case for zero, a case for n, and then you thread the state around. The only difference is that you're passing around the generator gen as an argument, and using it instead of rand.

Below are some usages of iterGen. By providing different values, we can generate lists of different things. 
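The code blocks of this page did not survive extraction, so here is a sketch of the definitions described above. It is a reconstruction, not the instructor's exact code: the Lehmer/MINSTD constants (16807 and 2^31 - 1) are an assumption, chosen because they reproduce the sample outputs quoted below (for example, 99 * 16807 = 1663893), and the Heads/Tails orientation of coinFromInt is a guess, since the flipCoin sample output was cut off.

```haskell
newtype Seed = Seed { unSeed :: Int } deriving Show

-- Gen s a: a state-passing computation that consumes a state of type s
-- and produces a result of type a together with the updated state.
type Gen s a = s -> (a, s)

-- Lehmer / MINSTD generator; constants assumed, see note above.
rand :: Gen Seed Int
rand (Seed s) = (s', Seed s')
  where s' = (s * 16807) `mod` 2147483647

-- A coin flip: a distinct type isomorphic to Bool.
data Flip = Heads | Tails deriving (Show, Eq)

-- Fair mapping from integers to coin flips, via parity.
coinFromInt :: Int -> Flip
coinFromInt n = if even n then Heads else Tails

flipCoin :: Gen Seed Flip
flipCoin s = let (n, s') = rand s in (coinFromInt n, s')

-- Run the given generator n times, threading the state through.
iterGen :: Gen s a -> Int -> Gen s [a]
iterGen _   0 s = ([], s)
iterGen gen n s = (x : xs, s'')
  where (x,  s')  = gen s
        (xs, s'') = iterGen gen (n - 1) s'
```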
The state-threading to make use of pseudo-random numbers stays the same.

ghci> iterGen rand 5 (Seed 99)
([1663893,47762240,1728567349,872658027,1597634426],Seed {unSeed = 1597634426})
ghci> iterGen flipCoin 5 (Seed 99)

You can even pass a partial application of iterGen itself as the generator for another iterGen. This is a list of lists of random numbers: essentially, a 5×3 matrix (five rows of three entries). (I added spacing to the output, for clarity.)

ghci> iterGen (iterGen rand 3) 5 (Seed 99)
([[ 1663893,     47762240, 1728567349],
  [ 872658027, 1597634426, 1453759341],
  [1411792268,  445832573,  537610028],
  [1148037667, 2075984621,  906712338],
  [ 570305654,  907610217,  628572478]],
 Seed {unSeed = 628572478})

# Rainbows

Here is an enumerated datatype representing five different colors. Create a generator for colors, where each color is chosen with roughly equal probability. You may want to use mod and toEnum. And perhaps you want a separate function colorFromInt that is analogous to coinFromInt. Some sample usage:

ghci> genColor (Seed 99)
(Green,Seed {unSeed = 1663893})
ghci> iterGen genColor 8 (Seed 99)
([Green,Red,Blue,Yellow,Orange,Orange,Green,Green],Seed {unSeed = 445832573})

Huh, it seemed to pick Green a lot. Is this really a fair choice? Let's generate many colors and count how many greens. It should be roughly one-fifth:

ghci> length $ filter (== Green) $ fst $ iterGen genColor 1000 (Seed 99)
204

So with this initial seed, 204 out of 1000 were green, which is pretty close to 20%. We can try that with a few different seeds and see how much variation there is:

ghci> length $ filter (== Green) $ fst $ iterGen genColor 1000 (Seed 99)
204
ghci> length $ filter (== Green) $ fst $ iterGen genColor 1000 (Seed 29487)
208
ghci> length $ filter (== Green) $ fst $ iterGen genColor 1000 (Seed 474732)
193
ghci> length $ filter (== Green) $ fst $ iterGen genColor 1000 (Seed 59873)
222

The most extreme example was 222, but it's still hovering around 20%. 
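The color generator described above can be sketched as follows. Seed, Gen and rand are repeated from the earlier reconstruction so this compiles on its own. The rainbow ordering of the constructors is an assumption: with toEnum (n `mod` 5) it reproduces the sample outputs (genColor (Seed 99) yields Green, since 1663893 `mod` 5 = 3).

```haskell
-- Repeated from the earlier sketch so this block stands alone.
newtype Seed = Seed { unSeed :: Int } deriving Show
type Gen s a = s -> (a, s)

rand :: Gen Seed Int
rand (Seed s) = (s', Seed s') where s' = (s * 16807) `mod` 2147483647

-- Rainbow order (assumed); index 0 is Red, index 4 is Blue.
data Color = Red | Orange | Yellow | Green | Blue
  deriving (Show, Eq, Enum)

-- Map any integer onto one of the five colors, roughly uniformly.
colorFromInt :: Int -> Color
colorFromInt n = toEnum (n `mod` 5)

-- Generate a random number, then convert it to a color.
genColor :: Gen Seed Color
genColor s = let (n, s') = rand s in (colorFromInt n, s')
```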
# Gen is a functor

Finally, I mentioned in class that the state-passing Gen type can be a functor. For now, let's not mess with making it an actual instance of the Functor type class (I'll explain why later), but we can still implement a function equivalent to fmap: To further understand this function, it can be helpful to expand the Gen synonym, which would make it: Here's a stub for its definition: Here it is in action, all with the same seed for comparison.

ghci> fmapGen (0-) rand (Seed 99)
(-1663893,Seed {unSeed = 1663893})
ghci> fmapGen show rand (Seed 99)
("1663893",Seed {unSeed = 1663893})
ghci> iterGen (fmapGen show rand) 5 (Seed 99)
(["1663893","47762240","1728567349","872658027","1597634426"],Seed {unSeed = 1597634426})

Or used to define some other generators. Simulate rolling a 6-sided die a bunch of times:

ghci> iterGen diceRoll 50 (Seed 99)
([4,3,2,4,3,4,3,6,3,2,6,1,3,4,5,6,4,4,6,1,5,3,6,2,2,
  6,1,3,2,1,4,4,6,6,1,2,2,2,2,4,4,6,5,5,6,3,5,1,5,3],
 Seed {unSeed = 439652522})

Generate a selection of alphabetic passwords!

ghci> iterGen (iterGen alphabetic 12) 10 (Seed 99)
(["xgphyvqruxjm", "whonjjrgumxj", "hfyupajxbjgt", "hzdvrdqmdouu", "ukeaxhjumkmm", "hdglqrrohydr", "mbvtuwstgkob", "yzjjtdtjroqp", "yusfsbjlusek", "ghtfbdctvgmq"], Seed {unSeed = 1959147024})

Or arbitrary passwords:

ghci> iterGen (iterGen anyChar 14) 10 (Seed 99)
(["TG9Bc*&f&R1%!H", "w$pB1%woPRRlsA", "X%TH$f2^3ztBv$", "kI)ckW#cgI^B3M", "8ua63x%xupL#HM", "x9KjV@Q&g7sgi^", "&3Bttv^pDOY9&(", "CXCrrL%IcQMPPh", "jDMz5QMyTs2XRM", "hntf5)exR9owxI"], Seed {unSeed = 1528987498})

# Ignore

Leave these alone – these are some special-purpose generators used in my test cases.
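The fmapGen stub discussed above can be filled in as follows; again a sketch, with Seed, Gen and the assumed rand constants repeated so the block stands alone. The diceRoll definition is one plausible way to reproduce the sample transcript (the first roll from Seed 99 is 1663893 `mod` 6 + 1 = 4).

```haskell
-- Repeated from the earlier sketch so this block stands alone.
newtype Seed = Seed { unSeed :: Int } deriving Show
type Gen s a = s -> (a, s)

rand :: Gen Seed Int
rand (Seed s) = (s', Seed s') where s' = (s * 16807) `mod` 2147483647

-- fmap for Gen: run the generator, apply f to its result, and pass
-- the updated state through unchanged.
fmapGen :: (a -> b) -> Gen s a -> Gen s b
fmapGen f gen s = let (x, s') = gen s in (f x, s')

-- A six-sided die, built by post-processing rand with fmapGen.
diceRoll :: Gen Seed Int
diceRoll = fmapGen (\n -> n `mod` 6 + 1) rand
```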
http://www.lastfm.se/user/gbdupas/library/music/Jag+Panzer/_/The+Setting+Of+The+Sun?setlang=sv
# Library

Music » Jag Panzer »

## The Setting Of The Sun

59 played tracks

Tracks (59): each entry is the track "The Setting Of The Sun" (length 3:24), played at the following dates and times in 2011:

8 May 09:32, 7 May 14:06, 7 May 09:07, 7 May 04:09, 30 Apr 14:53, 30 Apr 08:13, 29 Apr 10:11, 29 Apr 05:12, 28 Apr 05:15, 16 Apr 07:34, 13 Apr 07:22, 11 Apr 07:56, 10 Apr 14:59, 10 Apr 08:19, 8 Apr 06:59, 7 Apr 07:38, 6 Apr 07:27, 4 Apr 05:00, 3 Apr 15:04, 3 Apr 08:24, 2 Apr 09:25, 31 Mar 07:31, 30 Mar 07:20, 29 Mar 07:49, 28 Mar 08:05, 27 Mar 08:24, 26 Mar 08:26, 25 Mar 07:33, 23 Mar 07:46, 21 Mar 08:17, 20 Mar 10:22, 20 Mar 05:24, 19 Mar 10:02, 19 Mar 05:03, 17 Mar 09:07, 17 Mar 04:08, 15 Mar 05:24, 14 Mar 04:36, 9 Mar 14:57, 9 Mar 09:58, 9 Mar 04:59, 7 Mar 09:07, 7 Mar 04:09, 6 Mar 13:24, 6 Mar 08:26, 5 Mar 09:59, 5 Mar 05:01, 4 Mar 23:10, 4 Mar 09:35, 4 Mar 04:36, 2 Mar 09:50, 2 Mar 04:52, 1 Mar 09:17, 1 Mar 04:19, 28 Feb 23:26, 28 Feb 05:43, 27 Feb 12:50, 27 Feb 07:52, 26 Feb 19:21
https://www.hugomac.com/ac-valhalla-gew/what-is-the-full-meaning-of-mathematics-c078d2
[Definition.] Top; Contact us; log in. There is a definite rhetorical structure to each A discipline (a organized, formal field of study) such as mathematics tends to be defined by the types of problems it addresses, the methods it uses to address these problems, and the results it has achieved. India’s contributions to the development of contemporary mathematics were made through the considerable influence of Indian achievements on Islamic mathematics during its formative years. mean Statistics noun The sum of the values of all observations or data points divided by the number of observations; an arithmetical average; the central tendency of a collection of numbers, which is a sum of the numbers divided by the amount of numbers the collection. Mathematics definition, the systematic treatment of magnitude, relationships between figures and forms, and relations between quantities expressed symbolically. factor: a number that will divide into another number exactly, e.g. 1 Educator answer. We also give a “working definition” of a function to help understand just what a function is. Coronavirus basics The novel coronavirus, now called SARS-CoV-2, causes the disease COVID-19. How the connection between mathematics and the world is to beaccounted for remains one of the most challenging problems inphilosophy of science, philosophy of mathematics, and generalphilosophy. BODMAS is a helpful acronym meaning brackets, order, division, multiplication, addition and subtraction, ensuring that equation steps are completed in the right order. The second message is a certain emotional framework that I have to offer: that math is a joyous experience. Latest answer posted June 05, 2012 at 11:45:49 PM solve the following formula for q, Y=p+q+r/4 . There is a definite rhetorical structure to each Easily make logical connections between different facts and concepts. ... 
bringing two or more numbers (or things) together to make a new total.The numbers to be added together are called the \"Addends\": Where μ is mean and x 1, x 2, x 3 …., x i are elements.Also note that mean is sometimes denoted by . The way in which these civilizations influenced one another and the important direct contributions Greece and Islam made to later developments are discussed in the first parts of this article. "Most likely this quote is a summary of his statement in Opere Il Saggiatore: [The universe] cannot be read until we have learnt the language and become familiar with the characters in which it is … The article East Asian mathematics covers the mostly independent development of mathematics in China, Japan, Korea, and Vietnam. A Rotation is a transformation that turns a figure about a fixed point. Professor of Mathematics, Simon Fraser University, Burnaby, British Columbia. In finding the responses or solutions to these external and internal problems, mathematical objects progressively emerge and evolve. When data is collected, summarized and represented as graphs, we can look for trends and try to make predictions based on these facts. What Does PEMDAS Mean? 1. Video Examples: Example of Rotation. Math is all around us, in everything we do. Mathematics plays a central role in our scientific picture of theworld. The study of math statistics includes the collection, analysis, presentation and interpretation of data. Through definition and example, we'll become familiar with what a modulus is and how to work with it. Mathematics definition is - the science of numbers and their operations, interrelations, combinations, generalizations, and abstractions and of space configurations and their structure, measurement, transformations, and generalizations. deriving meaning from understanding cause some difficulties in analyzing the processes of assessing students' understanding. Step 4: [Numerator is positive, denominator is a very small positive number.] 
“Mathematics.” Merriam-Webster.com Dictionary, Merriam-Webster, https://www.merriam-webster.com/dictionary/mathematics. The definition and the value of the symbols are constant. Like reading and writing, math is an important component of learning and "doing" (using one's knowledge) in each academic discipline. Arithmetic definition is - a branch of mathematics that deals usually with the nonnegative real numbers including sometimes the transfinite cardinals and with the application of the operations of addition, subtraction, multiplication, and division to them. All mathematical systems (for example, Euclidean geometry) are combinations of sets of axioms and of theorems that can be logically deduced from the axioms. This article was reprinted on Wired.com. 5! Phi is the 21st letter of the Greek alphabet and in mathematics, it is used as a symbol for the golden ratio. Deviation for above example. There are several kinds of mean in mathematics, especially in statistics.. For a data set, the arithmetic mean, also called the expected value or average, is the central value of a discrete set of numbers: specifically, the sum of the values divided by the number of values.The arithmetic mean of a set of numbers x 1, x 2, ..., x n is typically denoted by ¯. Looking for online definition of MATH or what MATH stands for? Also learn the facts to easily understand math glossary with fun math worksheet online at SplashLearn. Which of the following refers to thin, bending ice, or to the act of running over such ice. Definition Of Rotation. Mathematics as an interdisciplinary language and tool. Elements . Mathematics is called the language of science. What the definition is telling us is that for any number $$\varepsilon > 0$$ that we pick we can go to our graph and sketch two horizontal lines at $$L + \varepsilon$$ and $$L - \varepsilon$$ as shown on the graph above. By contrast, understanding mathematics does not mean to memorize Recipes, Formulas, Definitions, or Theorems. 
See if your knowledge of math "adds up" in this quiz. We introduce function notation and work several examples illustrating how it works. This definition of classical mathematics is far from perfect, as is discussed in [12]. It deals with logical reasoning and quantitative calculation, and its development has involved an increasing degree of idealization and abstraction of its subject matter. Mathematics is the manipulation of the meaningless symbols of a first-order language according to explicit, syntactical rules. Math Formulae . The basic unit of time is the second. This does not mean, however, that developments elsewhere have been unimportant. Please select which sections you would like to print: Corrections? A mathematical question with multiple operations may give different answers depending on the order in which it is solved. Italian astronomer and physicist Galileo Galilei is attributed with the quote, "Mathematics is the language in which God has written the universe. In this section we will formally define relations and functions. 'All Intensive Purposes' or 'All Intents and Purposes'? Also learn the facts to easily understand math glossary with fun math worksheet online at SplashLearn. When determining the measure of the angle in the work equation, it is important to recognize that the angle has a precise definition - it is the angle between the force and the displacement vector. Illustrated definition of Term: In Algebra a term is either a single number or variable, or numbers and variables multiplied together. There are two things I’m trying to get across to them. Then the rest of the scores don't matter for range. It deals with logical reasoning and quantitative calculation, and its development has involved an increasing degree of idealization and abstraction of its subject matter. 
mean Statistics noun The sum of the values of all observations or data points divided by the number of observations; an arithmetical average; the central tendency of a collection of numbers, which is a sum of the numbers divided by the amount of numbers the collection. Clearly there must be some starting point for explaining concepts in terms of simpler concepts. How to use arithmetic in a sentence. Math Insight. Updates? Page Navigation. Choices: A. x = - 1 B. x = 1 C. x = 0 D. x = 2 Correct Answer: B. The range is 100-75=25. By signing up for this email, you are agreeing to news, offers, and information from Encyclopaedia Britannica. a) Mathematics is a human activity involving the solution of problematic situations. One is that math in school is not the whole story — there’s this whole other world that is logical but also beautiful and creative. Mathematics, the science of structure, order, and relation that has evolved from elemental practices of counting, measuring, and describing the shapes of objects. Elements . SplashLearn is an award winning math learning program used by more than 30 Million kids for fun math practice. A mathematical symbol is a figure or a combination of figures that is used to … For example, the mass of the earth is 5,970,000,000,000,000,000,000,000 kilograms, while the mass of a hydrogen atom is 0.00000000000000000000000000167 kilograms. It has been described as "that part of mathematical activity that is done without explicit or immediate consideration of direct application," although what is "pure" in one era often becomes applied later. (used with a singular verb) the systematic treatment of magnitude, relationships between figures and forms, and relations between quantities expressed symbolically. Math is all around us, in everything we do. Let us know if you have suggestions to improve this article (requires login). For example, consider the math of measurement of time such as years, seasons, months, weeks, days, and so on. 
Pure mathematics explores the boundary of mathematics and pure reason. First, calculate the deviations of each data point from the mean, and square the result of each: variance = = 4. 1. Our editors will review what you’ve submitted and determine whether to revise the article. This growth has been greatest in societies complex enough to sustain these activities and to provide leisure for contemplation and the opportunity to build on the achievements of earlier mathematicians. The range is 25. from circa 300 BC codified this mode of presen-tation which, with minor variations in style, is still used today in journal articles and advanced texts. Math Insight. Math. Author of. Related also to Old English mǣd (“mead, … Definition of Angle explained with real life illustrated examples. BODMAS stands for: B: Bracket O: Of D: Division M: Multiplication A: Addition S: Subtraction * Bracket Brackets are dealt first. Algebra, arithmetic, calculus, geometry, and trigonometry are branches of, Just 15, John Paul was full of promise, gifted in, Lucas Fisher, who just turned 20, will graduate from UAA in the spring with a Bachelor of Science degree in, Before joining the military, Jones graduated from Northwestern University with a degree in, Post the Definition of mathematics to Facebook, Share the Definition of mathematics on Twitter, noun, plural in form but usually singular in construction. Test your knowledge - and maybe learn something along the way. What number did the ancient Egyptians consider to be sacred? Mathematics plays a central role in our scientific picture of the world. Understanding Mathematics You understand a piece of mathematics if you can do all of the following: . Ring in the new year with a Britannica Membership. Italian astronomer and physicist Galileo Galilei is attributed with the quote, "Mathematics is the language in which God has written the universe. 
Mathematics is such a useful language and tool that it is considered one of the "basics" in our formal educational system. math, mathematics, maths - a science (or group of related sciences) dealing with the logic of quantity and shape and arrangement arithmetic - the branch of pure mathematics dealing with the theory of numerical calculations geometry - the pure mathematics of points and lines and curves and surfaces What made you want to look up mathematics? The substantive branches of mathematics are treated in several articles. Indeed, to understand the history of mathematics in Europe, it is necessary to know its history at least in ancient Mesopotamia and Egypt, in ancient Greece, and in Islamic civilization from the 9th to the 15th century. Since the 17th century, mathematics has been an indispensable adjunct to the physical sciences and technology, and in more recent times it has assumed a similar role in the quantitative aspects of the life sciences. PEMDAS is an acronym for the words parenthesis, exponents, multiplication, division, addition, subtraction. To read more about the math behind the new coronavirus, visit The New York Times. The past, present and future. Science is full of very large and very small numbers that are difficult to read and write. Be on the lookout for your Britannica newsletter to get trusted stories delivered right to your inbox. In math, there is an agreed-upon set of procedures for the order in which your operations are performed. How the connection between mathematics and the world is to be accounted for remains one of the most challenging problems in philosophy of science, philosophy of mathematics, and general philosophy. Mathematics is based on deductive reasoning though man's first experience with mathematics was of an inductive nature. Illustrated definition of Term: In Algebra a term is either a single number or variable, or numbers and variables multiplied together. The Time - Converting AM/PM to 24 Hour Clock. 
Variance is the sum of squares of differences between all numbers and means. From Middle English math, from Old English mǣþ (“a mowing, that which is mown, cutting of grass”), from Proto-Germanic *mēþą (“a mowing”), from Proto-Indo-European *h₂meh₁- (“to mow”); equivalent to mow +‎ -th. Philosophy of mathematics, branch of philosophy that is concerned with two major questions: one concerning the meanings of ordinary mathematical sentences and the other concerning the issue of whether abstract objects exist. Rotation is also called as turn The fixed point around which a figure is rotated is called as centre of rotation. mathematics has been presented following a format of definition-theorem-proof. What the definition is telling us is that for any number $$\varepsilon > 0$$ that we pick we can go to our graph and sketch two horizontal lines at $$L + \varepsilon$$ and $$L - \varepsilon$$ as shown on the graph above. Cognate with German Mahd (“a mowing, reaping”). Mathematics (Math or Maths) is the science that deals with the logic of shape, quantity and arrangement using numbers and symbols. the science of numbers and their operations, interrelations, combinations, generalizations, and abstractions and of space configurations and their structure, measurement, transformations, and generalizations Let's say your best score all year was a 100 and your worst was a 75. Learn a new word every day. Please tell us where you read or heard it (including the quote, if possible). Consider the following example from evolutionary biology introduced inBaker 200… Top; Contact us; log in. SplashLearn is an award winning math learning program used by more than 30 Million kids for fun math practice. Be sure to avoid mindlessly using any 'ole angle in the equation. 
Platonism about mathematics (or mathematical platonism) is the metaphysical view that there are abstract mathematical objects whose existence is independent of us and our language, thought, and practices.Just as electrons and planets exist independently of us, so do numbers and sets. See more. Explain mathematical concepts and facts in terms of simpler concepts and facts. For example, the Roman letter X represents the value 10 everywhere around us. Time. Mathematics, the science of structure, order, and relation that has evolved from elemental practices of counting, measuring, and describing the shapes of objects. from circa 300 BC codified this mode of presen-tation which, with minor variations in style, is still used today in journal articles and advanced texts. Mathematics is, of course, full of abstract entities such as numbers, functions, sets, etc., and according to Plato all such entities exist outside our mind. The set of x -values is called the domain, and the set of y -values is called the range. We also define the domain and range of a function. MATH is listed in the World's largest and most authoritative dictionary database of abbreviations and acronyms The Free Dictionary One way to organize this set of information is to divide it into the following three categories (of course, they overlap each other): 1. Mathematical explanations in the natural sciences. Euclid™s . Definition of . The mind can discover them but does not create them. 'Nip it in the butt' or 'Nip it in the bud'? This article offers a history of mathematics from ancient times to the present. A daily challenge for crossword fanatics. Delivered to your inbox! Function definition A technical definition of a function is: a relation from a set of inputs to a set of possible outputs where each input is related to exactly one output. Mathematics is both an art and a science, and pure mathematics lies at its heart. 
A separate article, South Asian mathematics, focuses on the early history of mathematics in the Indian subcontinent and the development there of the modern decimal place-value numeral system. From Wikipedia:. What is Math Statistics? Step 2: If then the line x = c, is the vertical asymptote. Or, consider the measur… (used with a singular or plural verb) mathematical procedures, operations, or properties. the factors of 10 are 1, 2 and 5 factorial: the product of all the consecutive integers up to a given number (used to give the number of permutations of a set of objects), denoted by n!, e.g. Mathematics is based on deductive reasoning though man's first experience with mathematics was of an inductive nature. For full treatment of this aspect, see mathematics, foundations of. more ... Time is the ongoing sequence of events taking place. Send us feedback. Inquiries into the logical and philosophical basis of mathematics reduce to questions of whether the axioms of a given system ensure its completeness and its consistency. The golden ratio refers to a special number that is approximately equal to 1.618. (Mathematics) (functioning as singular) a group of related sciences, including algebra, geometry, and calculus, concerned with the study of number, quantity, shape, and space and their interrelationships by using a specialized notation 2. What Does PEMDAS Mean? When you follow the correct order, the answer will be correct. Definition of Curved Line explained with real-life illustrated examples. You will likely come up with a wrong answer if you perform calculations out of the order. As a consequence of the exponential growth of science, most mathematics has developed since the 15th century ce, and it is a historical fact that, from the 15th century to the late 20th century, new developments in mathematics were largely concentrated in Europe and North America. For these reasons, the bulk of this article is devoted to European developments since 1500. 
Mathematics is the science that deals with the logic of shape, quantity and arrangement; it is all around us, in everything we do, and it has been called the language of science. In many cultures, under the stimulus of the needs of practical pursuits such as commerce and agriculture, mathematics has developed far beyond basic counting. Math statistics includes the collection, analysis, presentation and interpretation of data: take, for example, math test scores, whose spread can be summarized by the variance of their deviations from the mean.

The Supreme Mathematics is a system of understanding numerals alongside concepts and qualitative representations that are used along with the Supreme Alphabet.

In this section we formally define relations and functions, introduce piecewise functions, and define the domain and range of a function. Mathematics is a universal language, and the basics of maths are the same everywhere in the universe.
PEMDAS is an acronym for the words parenthesis, exponents, multiplication, division, addition, subtraction; it names an agreed-upon set of procedures for the order of mathematical operations. BODMAS is an equivalent acronym (abbreviation) used to remember the same order.

A rotation is a transformation that turns a figure about a fixed point. We can measure time using clocks.

Solved example on asymptotes: find the vertical asymptote of the graph of the function. Solution: Step 1: the function is clearly discontinuous at x = 1. Step 2: if the limit of the function there is infinite, then the line x = c is the vertical asymptote; here, x = 1.

The Italian astronomer and physicist Galileo Galilei is attributed with the quote that mathematics is the language in which God has written the universe. "Most likely this quote is a summary of his statement in Opere Il Saggiatore: [The universe] cannot be read until we have learnt the language and become familiar with the characters in which it is …"

See also the separate treatments of algebra, analysis, arithmetic, combinatorics, game theory, geometry, number theory, numerical analysis, optimization, probability theory, set theory, statistics, and trigonometry.

The word mathematics comes from the Greek μάθημα (máthēma), which means learning, study, science.
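Python's operator precedence follows the same PEMDAS/BODMAS ordering described above, so the rules can be demonstrated directly:

```python
# multiplication and division before addition and subtraction
assert 2 + 3 * 4 == 14        # not 20
# parentheses (brackets) are evaluated first
assert (2 + 3) * 4 == 20
# exponents (orders) before multiplication
assert 2 ** 3 * 2 == 16       # (2**3) * 2, not 2**(3*2) = 64
# operators of equal precedence evaluate left to right
assert 8 - 4 + 2 == 6         # not 2
print("all order-of-operations checks passed")
```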
https://physics.stackexchange.com/questions/316538/why-does-having-more-coils-in-a-wire-increase-the-current
# Why does having more coils in a wire increase the current? [closed]

When electricity is passed through a solenoid it makes an electromagnet. But for the electromagnet to be stronger, the solenoid has to have more coils. Yet we know that more coils increase resistance. So how can this more resistant coil of wire let more current flow?

• Are you talking about mutual inductance? Your question is unclear. – Yashas Mar 5 '17 at 15:24
• Having more coils does not increase current. It increases the magnetic field. – sammy gerbil Mar 6 '17 at 1:19

To increase the current flowing through the electromagnet you would simply increase the output voltage of your PSU. However, I suspect this is not what you are talking about in your question. One could say that the magnetic field of a solenoid depends on two parameters, the electric current $I$ and the number of wire turns per unit length $n$: $$B = \mu n I$$ I think you are really talking about the product $nI$, which corresponds to the electric current flowing through the solenoid per unit length.

Perhaps an example could help. Let's assume you have made a $10\,\mathrm{cm}$ long solenoid by winding a copper wire around a cylinder in a single layer. If you took a $1\,\mathrm{mm}$ wire, the solenoid would have approximately 100 turns, with a turn density of 10 turns per cm. You have a current source at your disposal which outputs $I_0 = 1\,\mathrm{A}$ at all times. If you hook it up to the solenoid, $I = 1\,\mathrm{A}$ will flow through the wire. In terms of current per unit length this equals 10 ampere-turns per cm. You can calculate the magnetic field inside the solenoid using the formula above.

What would happen if you took another piece of the same wire, added another layer of turns to the solenoid, and connected it in series with the original winding? Your current source would still manage to push $I = 1\,\mathrm{A}$ through the winding, roughly at 2 times higher voltage than before.
The linear density of electric current ($\frac{A}{cm}$) would have increased by a factor of 2, since you now have 20 turns of wire per cm of length: the electric current flowing through that 1 cm of solenoid is now $20 \times 1\,\mathrm{A}$ instead of $10\,\mathrm{A}$ as in the previous case. The magnetic field inside your solenoid has therefore also increased by a factor of 2. I hope the above makes some sense to you.

Edit: It is imperative that you add another layer of turns to the solenoid, not increase its length. Additional turns at the end of the solenoid will only significantly increase the field of a short but wide coil, not that of a rather long solenoid.

It is true that resistance increases with wire length, but the resistance of a conducting wire is usually so small as to make this increase in resistance negligible. Even accounting for this, a conductor in a circuit will always allow current to flow, so adding more turns to the coil results in a proportional increase in the number of current loops contributing to the net magnetic field. The coil may heat a little more because of the added resistance, but this is never an obstacle to increasing the magnetic field.

• How does increasing the number of coils increase the current? You wrote that increasing the length of wire increases resistance, which reduces current. – sammy gerbil Mar 6 '17 at 1:16
• @sammygerbil clarified my answer (I hope) following your comment. – ZeroTheHero Mar 6 '17 at 1:24
• How is it that the resistance of a conducting wire is usually so small as to make this increase in resistance negligible? We know that inside an incandescent light bulb the tungsten wire is coiled up, and its resistance is so large that it is used to create heat and light in the bulb. – avito009 Mar 6 '17 at 6:39
• This is an interesting observation.
However, the wiring inside an incandescent bulb is designed to become very hot so as to glow, whereas normal wiring is not (otherwise houses would immediately be set on fire by their electrical wiring). Hopefully when you buy longer wires (to connect the speakers of your hi-fi system or to cable your DVD player) you do not see glowing wires or a significant decrease in the quality of the signal; this illustrates the difference between using wires to transmit currents or generate magnetic fields, and using wires as part of an incandescent bulb. – ZeroTheHero Mar 6 '17 at 9:06

I think I have the answer. As the number of coils is increased the magnetic field becomes stronger, because each coil has its own magnetic field, so the more coils there are, the more field lines there are, which means a stronger electromagnet. The electromagnet becomes stronger when we add more coils because there are more field lines in a loop than in a straight piece of wire. In a solenoid there are a lot of loops and they are concentrated in the middle; as more loops are added the field lines get larger, therefore making the electromagnet stronger. The magnetic field becomes stronger because the magnetic field around a wire is circular and perpendicular to the wire, and the magnetic fields from each of the turns in the coil add together, so the total magnetic field is much stronger. The magnetic field around a solenoid is much stronger than a bar magnet's because each coil acts like a magnet when a current is passed through it; when the coils are repeated several times it is like having several mini magnets in a row, making it stronger than a bar magnet.

I can't take credit for this answer, so here is the link: http://www.markedbyteachers.com/as-and-a-level/science/how-does-the-number-of-coils-on-an-electromagnet-affect-its-strength-1.html
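The worked example in the first answer ($B = \mu n I$, doubling the turn density with a second layer) can be sketched numerically. This assumes an air-core solenoid, so $\mu \approx \mu_0$:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, in T*m/A

def solenoid_field(turns_per_meter, current_amps):
    """Field inside a long air-core solenoid: B = mu0 * n * I."""
    return MU0 * turns_per_meter * current_amps

# 10 cm solenoid wound from 1 mm wire in a single layer:
# 10 turns/cm = 1000 turns/m, driven by a 1 A current source
single_layer = solenoid_field(1000, 1.0)

# a second layer doubles n (and the product n*I), doubling the field
double_layer = solenoid_field(2000, 1.0)

print(f"single layer: {single_layer * 1e3:.2f} mT")
print(f"double layer: {double_layer * 1e3:.2f} mT")
```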
https://www.tutorialspoint.com/add-two-numbers-in-python
Suppose we are given two non-empty linked lists. These two lists represent two non-negative integers. The digits are stored in reverse order, and each node contains a single digit. Add the two numbers and return the result as a linked list. We assume that the two numbers do not contain any leading zeros, except the number 0 itself. So if the numbers are 120 + 230, then the linked lists will be [0 → 2 → 1] + [0 → 3 → 2] = [0 → 5 → 3] = 350.

To solve this, we will follow these steps

• Take two lists l1 and l2. Initialize head and temp as null
• c := 0
• while l1 or l2 is non-empty
• if l1 is empty, then a := 0, otherwise a := l1.val
• if l2 is empty, then b := 0, otherwise b := l2.val
• n := a + b + c
• if n > 9, then c := 1, otherwise c := 0
• node := create a new node with value n mod 10
• if head does not exist yet, head := node and temp := node
• otherwise, next of head := node and head := node
• l1 := next node of l1, if l1 exists
• l2 := next node of l2, if l2 exists
• if c is non-zero, then
• node := new node with value 1, next of head := node
• return temp

Example(Python)

Let us see the following implementation to get a better understanding

class ListNode:
   def __init__(self, data, next = None):
      self.val = data
      self.next = next

def make_list(elements):
   head = ListNode(elements[0])
   for element in elements[1:]:
      ptr = head
      while ptr.next:
         ptr = ptr.next
      ptr.next = ListNode(element)
   return head

def print_list(head):
   ptr = head
   print('[', end = "")
   while ptr:
      print(ptr.val, end = ", ")
      ptr = ptr.next
   print(']')

class Solution:
   def addTwoNumbers(self, l1: ListNode, l2: ListNode) -> ListNode:
      head = None
      temp = None
      c = 0
      while l1 or l2:
         a = l1.val if l1 else 0
         b = l2.val if l2 else 0
         n = a + b + c
         c = 1 if n > 9 else 0
         node = ListNode(n % 10)
         if not head:
            head = node
            temp = node
         else:
            head.next = node
            head = node
         l1 = l1.next if l1 else None
         l2 = l2.next if l2 else None
      if c:
         node = ListNode(1)
         head.next = node
      return temp

ob1 = Solution()
l1 = make_list([0,2,1])
l2 = make_list([0,3,2])
print_list(ob1.addTwoNumbers(l1, l2))

Input

[0,2,1]
[0,3,2]

Output

[0, 5, 3, ]
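To see why the reversed-digit representation works, here is a small helper (not part of the original tutorial) that converts between plain integers and these digit lists:

```python
def digits_to_int(digits):
    """[0, 2, 1] -> 120: the digit at index i carries weight 10**i."""
    return sum(d * 10 ** i for i, d in enumerate(digits))

def int_to_digits(n):
    """120 -> [0, 2, 1]: least significant digit first."""
    if n == 0:
        return [0]
    out = []
    while n:
        out.append(n % 10)
        n //= 10
    return out

print(digits_to_int([0, 2, 1]))   # 120
print(digits_to_int([0, 3, 2]))   # 230
print(int_to_digits(120 + 230))   # [0, 5, 3]
```

Storing the least significant digit first is what lets the algorithm walk both lists head-to-tail while propagating the carry forward.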
Updated on 27-Apr-2020 11:03:33
https://mirror.uncyc.org/wiki/Heavy_Metal
# Heavy metal

THIS ARTICLE NEEDS A STEAMROLLER!!! Sometimes the foundations are so rotten and bad that the only good and constructive action is demolishing everything and starting from scratch. In other words, rewrite this article. It's in such a bad state that you may ignore all of its current contents if you like. But be a mother fucker! DO IT!!!

Heavy Metal is the 666th element in the periodic table of alchemic mixology, making it the heaviest of the metals and, in fact, the heaviest of all known naturally occurring elements, even heavier than rock. It was discovered in 1964 by the German physicist Max Lucifer and his friend Pelvis Resley, who sold the rights to the discovery to King Diamond in exchange for his soul. It was not until the industrial revolution of the 1970s that it became important as the primary ingredient in the manufacture of Iron men and war pigs. Coincidentally, when the revolution died down in the 1980s, another technique in Heavy Metal handling was found. This new way to handle Heavy Metal gave astounding results, creating a wide range of products: a musical version of Judas' followers, and a brand new torture device which actually gave enjoyment to the victim.

Heavy metal is often blamed for heavy metal poisoning, which can lead to suicidal tendencies in teenagers. It is also one of the sources of headbanging syndrome (along with hard rocks). It has been said that this headbanging disease is spreading all over North America and has made its way to England. These Brits love it, even though their parents are trying to cure it with Scientology, a "medicine" made of pure bullshit that can be purchased at any local Tom Cruise store.

The element is mainly found in Norway, Sweden, Germany, England, the Netherlands, Canada and sometimes the United States of America, and especially in Finland, where it is found in an almost pure form.
It is believed that Finland sits on a potential mine of pure heavy metal, and that the Finns themselves have almost all been mutated into metalheads by it. The Norwegians and Swedes also have been known to undergo mass disfigurements. Lately, heavy metal has gained a huge audience in the retro-80's cultures of Eastern Europe (Russia or Poland) and South America (Chile or Argentina).

## Origin

Lemmy, metal messiah

Diametrically opposing energies in self-sealed plasmadermic bubbles...they make great pets!

The origin of heavy metal is a result of another element, Hendrixium. This element is known for having high power and rocking hard, but it has an incredibly short life. Scientists attempted to synthesize this element for use in electric guitars, thus creating heavy metal. However, this failed miserably, due to Hendrixium being so awesome (in fact, almost as cool as Frank Zappa), so a bunch of people went to Compton, and when gang war broke out between a bunch of honkeys and early chavs, the resulting awesomeness created Heavy Metal (or, as it was called back then, Comptonium).

Manufacturing of this element is dangerous and is best done in experienced professional labs, such as the Metallica Engineering Laboratory AKA Damage Incorporated. Attempts by amateurs to create this element often result in death, loud noise, and demonic possession. If you combine these elements it is possible to fly a maximum of 3 feet high.

After all, we all admit that the Heavy and all kinds of Metals grew up from the back alleys of Camden Street in London... But we can't deny the true Heavy-Fucken-Metal Fans and Metalhood roots'bloody'roots that were formed and are located in Lebanon! All Hail to all Lebanese MetalHeads...! Oy Oy! \m/ And then we all fuck all night to the sound of "Number Of The Beast" that just hit the charts here.

Since the discovery of Heavy Metal, several other elements have been synthesized.
For your convenience, they have been organized into isotopes, or Genrium, and sub-isotopes, which are known in the scientific community as Bandcruftium.

## Isotopes

### Alestormium

A rather unknown element, Alestormium was accidentally discovered by Dr Davy Jones when he was placing some lost souls into the medical morgue. He knew he had hit on something big when Alestormium was combined with copious amounts of rum. Once they had been combined, the Alestormium transmogrified into headbanging pirates. But the headbanging pirates got out of control due to having a little too much to drink, and they quickly escaped the medical centre. They then headed straight for the harbour and hijacked a visiting ship. After keelhauling Dr Davy Jones, they elected Captain Morgan to be the commander of the ship and decided to travel Back Through Time to kill those annoying Vikings. But they were Shipwrecked on the way and, once combined with water, transmogrified back into Alestormium. The island that the Alestormium washed up on has not been discovered yet, but an expedition commissioned by some pesky Vikings could get the Alestormium destroyed forever.

### Judas Priestium

The most powerful element, behind only Saxium. Can be found in the sky, Riding On The Wind. Sometimes used as Painkiller; high doses are highly venomous and can take you Beyond The Realms Of Death. Scientists even theorize that only a touch of evil in the form of this element can be worse than the Devil's Child. Can be found on the Lochness, but only when the Night Comes Down. Use it carefully, or you'll be never satisfied. For greatest effect, you have to ram it down when you insert Judas Priestium into an orifice (such as your Electric Eye). Long-time users are known to become Hell Bent For Leather, though some go on a religious crusade and become Defenders Of The Faith, or auto enthusiasts and begin work on developing faster turbos.
It will also eliminate the need for sleep, as it keeps you Living After Midnight. When immersed in anti-matter, Judas Priestium is known to transform into chickens and porn. Commonly used in the production of Harley Davidson motorcycles, as well as killing machines and jugulators. Possession of Judas Priestium is Breaking The Law. If you use too much, The Ripper (or, if you like, Jack the Knife) will come and attack you in London town streets. If you think that Judas Priestium has no effect on your Body, You've Got Another Thing Coming. Religious folks should be careful when using this, as it may cause Sin After Sin. A famous element in the Killing Machine, which is manufactured from British Steel and will have its operators Screaming For Vengeance. Predicted in AD 1343 by Nostradamus. Be careful, 'cause this element is against the police and might turn you into a leather rebel.

### KISSium

KISSium was discovered in 1973 when Gene Simmons, Paul Stanley, Ace Frehley and Peter Criss were exposed to high volumes of tight pussy. The friction was too much for them to handle. Then one morning all four members of KISS woke up with super powers. Gene gained the ability to vomit blood on stage as well as setting his hair on fire. Paul became really good at acting gay in order to impress the ladies, while Peter learned to drink vast amounts of alcohol, and Ace was able to crash cars and found a way to perform a bong conversion on his guitar for those endless stage shows. Convinced that they all had gained super powers, KISS put KISSium on the market. Not long after the release of KISSium in stores, Angus Young bought 5 packets of the stuff and distributed it to each member of AC/DC, hoping that they would get the same result that KISS got. Instead AC/DC gained the ability to rewrite the same album nine times.

### Iron Maidenium

A base element of all heavy metals.
It was first found Somewhere in Time at about 2 Minutes to Midnight, after a group of Tailgunners commanded by Alexander the Great were attacked by The Trooper and The Phantom of the Opera and forced to Run to the Hills, knowing that they must Be Quick or Be Dead. This element has no half-life, remaining at the same strength until it becomes a Matter of Life and Death. Used in just about everything. Famous for its use in World War 2, when English airmen went Aces High and defeated the Germans, who, as a result, had The Number of the Beast on their necks and had to run to a Brave New World. It is imported to Night Clubs around the World because of its use during the Dance of Death. Famous Killers used it to take a Piece of Mind from their victims. (An Iron Maidenium engineer named Eddie was turned into a zombie by overexposure to this element. Always use caution when handling; side effects sometimes include Fear of the Dark, and the Evil that Men Do is often attributed to misuse of the substance. Proper care will result in excellent vocals and harmonized guitar solos.) During the latter half of the 90's, a project was conducted exploring the possibility of combining Iron Maidenium with the element Blazebaylium; however, very few of these experiments were able to produce useful results. By the end of the 90's the project was abandoned. ### AC/DCium AC and DC are the only two metals known to form a covalent bond. Alone, these two metals are harmless, but combined, AC/DC becomes a mind-blowing bomb known to cause High Voltage and make wires Live; this deceives observers into thinking it is a form of hard rock, rather than a metal. It is often used in making T.N.T. Long-time users are infamous for Spoilin' for a Fight, saluting those about to rock, wanting to be Rock 'N' Roll Singers, and wanting a whole lot of Rosie. It is often found on the Highway to Hell, before money starts talking, but after Bon Scott declares there to be rock.
This metal, found only in Australia, has been known to force the dead to come Back in Black for a short time. During its half-life, this metal may become more magnetized, causing anyone who holds it to become Thunderstruck. These victims often tell stories of Hells Bells and that they want to Shoot to Thrill. The latest discovery for the use of this element has been to fuel trains across plains of black ice. ### Black Sabbathium The purest form of Heavy Metal alloys and elements. This element takes shape naturally in the form of pentagrams, and alloys well. When used in its purest form, it can cause the user to become Paranoid, cast oneself Into the Void, pray to Saint Vitus, watch Laguna Sunrises, and in extremely rare cases, declare War on Pigs. When mixed with Iron (Fe) it can be bonded to human flesh to produce a sort of Iron Man. Not suitable for heavy industrial usage in its deozzyfied form. When mixed with Diocyte, though, it creates two powerful metals called Heaviside and Hellium (not to be confused with Helium). Mixing with Gillium will make it go inactive. Martium will cause a violent chain reaction that will split the atom and thus destroy the element. • Notes: Black Sabbathium, Judas Priestium, AC/DCium, and Iron Maidenium are the base metals; in other words, they can be mixed to create virtually any other form of heavy metal. This is often used by heavy metal purists to determine whether or not a given metal is "true." If a form of metal can't be reproduced by mixing Priestium, Sabbathium, AC/DCium and Maidenium, then it is considered "not true" and is subsequently damned to haul away buckets of tears from emo concerts in hell. Exceptions are given to elements with the properties of Lead Zeppelinium. • Additional note: You're a dirty whore if you didn't notice the absence of Deep Purplium.
### Ozzium An extremely deadly alloy created in a house on Randy Rhoad by separating members of the metal Black Sabbathium and combining them with BLS (Black Label Societium). It is extremely dangerous to small creatures such as bats. It is also capable of inhaling ants and other insects through nostril-like pores on its surface. This metal will excrete liquids when put on national monuments such as the Alamo. Any contact with Ozzium is extremely encouraged, and praised, though trains are known to go Crazy when exposed to it. Side effects include Barking at the Moon and going-home-to-mom syndrome. It is said that Ozzium can make you worship the devil, but it just turns you into a Rock N Roll Rebel. ### Öysterium A rare and destructive metal, commonly believed to cause entire cities to burst into flame. It was discovered by Sir Rastus Bear, a Veteran of the Psychic Wars who earned great fame for his participation in The Siege and Investiture of Baron Von Frankenstein's Castle at Weissaria. While Bear was hanging out at Club Ninja (which is In the Presence of Another World), he reached into his Pocket and found a Magna of Illusion, which he used to go back in time and invent the ümläüt. It was only then that he O.D.'d on Life Itself, and discovered Öysterium in the Cold Grey Light Of Dawn after a Godzilla attack. He was soon driven Moon Crazy, and Screams were heard until After Dark. In modern times, this element is commonly used to summon the Power Underneath Despair, but often causes users to yell "I AM THE STORM!" and engage in acts of Dominance and Submission with Flaming Telepaths, causing Cities to set aflame, rocking and rolling until they are Burnin' for You. ### Queenium A very strange element. Despite originally being thought to be a common rock, like Stone Cold Crazy, after several mutations it transforms into a hard, solid form of metal, according to Professor Mustapha.
It was used in the Middle Ages by a sect called the Princes of the Universe to cure the Flick of the Wrist. However, ingesting a large amount of this material can turn you into a transvestite and make you shout "I Want It All". It is composed of: 24% Mercury, 666% Steel wool, 32% Donothingium, 5% Blondium, 0.0000001% Paulrodgerium, 125% Fagium, 19% Borhapium, 0.241169% Grues. Total: 871.2411691%. ### Gunsenrosium One of the strongest metals known to man. Used as a metal only by people who do not care about chemistry, as this is a form of very Hard Rock. This so-called metal is often underrated in its properties, purely because of its potential uses: Slashing Guitars, Axl spandex liners, Duff beer cans, Izzy coke-flavoured Pepsi bottles, and brainkillers for Nerds. Other than these, it has many other uses, especially as tin foil to bombard NERDS. ### Alice Cooperium Poisonous. First discovered in the Trash of Dragontown on the Brutal Planet, this is a very early metal, and is mined out of Shock Rocks. Being around the ore too long results in large black splodges around one's eyes. Non-fatal to males, but not recommended around females, as Only Women Bleed. This, in fact, is because of their body, answering the age-old question. Can cause Nightmares. An isotope of Alice Cooperium was accidentally delivered to a school, causing it to be out of session forever. ### Anvilium Very little is currently known of this metal. It was forged in fire in the late 70's but was not discovered until 2008. This particle is dangerously heavy, and when you come in contact with it, the side effects include Making Love at School and unconscious flapping of lips. This is one of the earliest forms of thrash metallium. ### Rage Against The Machineium An incendiary and highly volatile isotope of Funkium Metal B. It was first brought to our planet by the People of The Sun in the infamous Battle of Los Angeles; these People harvested the isotope from the Funk Belt planet Vientow.
Exposure to this isotope will quickly cause the inability to speak normally; instead you will talk only in verse and scream the same thing over and over again. You will spit mad Bombtracks and have the urge to go down certain Drives named after southern sports, blasting people with a shotgun (this is ABSOLUTELY true, and victims tend to Testify to this). You will then feel extremely Calm, somewhat Like a Bomb. DO NOT, under any circumstances, allow this element to come in contact with anyone who is a Republican, a conservative, a CEO, or could in any way be considered "A Honkey," as it will cause them to explode, and you'll automatically Kill Them in the Name of this element. This element has also been used for manufacturing radios by guerrillas. Also keep it away from Bulls, as it will cause them to go on Parade. ### Led A somewhat cooler version of Lead. For a full description see Lead. ### Hendrixium Often found after the Wind Cries Mary, those exposed to this metal may experience their vision being overtaken by a Purple Haze. In very, very rare cases, users experience Manic Depression and may have to choose between Love and Confusion. In the end, over-users are thrown in the Red House. Often seen All Along the Watchtower of a castle made of sand. ### Metallicanium Extremely powerful in its first four stages, this metal decays over time after the 5th stage, eventually turning into carbon. In its fourth stage, it gradually Fades to Black. It is used to master puppets with short straws and create sandmen (manufactured by Downey, California-based company Damage, Inc., and its Fort Lauderdale, Florida not-as-good deposit service, Garage, Inc.), of the latter of which only one was created, which is Sad But True. Still, though, it's Better Than You and has No Remorse. The fact that might make people feel Whiplash is that it will never stop, it will never quit, because it's Metallicanium. It was first synthesized by chemist Dr.
Larsiandus Ulrichson and his mate Jaime Hatfielder, who currently resides in the House That Jack Built, previously owned by the Leper Messiah. If you are the owner of a metallicanium mine, you might end up King Nothing one day, having Motorbreath. If you are a current owner, Hit the Lights on the operation! If not, many metalheads will Kill 'Em All. When mining, trucks have to load and reload constantly, as metallicanium is about as heavy as St. Anger. It's a dangerous occupation, as it requires many years of living shit, roaming wherever, binging and purging to be a metallicanium miner. The metallicanium miners' militia, who believe Nothing Else Matters, are Holier Than Thou, usually believe in The God That Failed and the Creeping Death, use Batteries in order to call Cthulhu, and some of them are also Harvesters for whom the bells always toll, in the area Where the Wild Things Are. You're Welcome Home if you want to join the miners' militia, but in the initiation ceremony you'll have to be the Hero of the Day or the Phantom Lord: the main proof is to Seek and Destroy fire, Jump in it, and fight it with more Fire, which can lead you into a state of anesthesia or being Trapped Under Ice (and the miners will have to pull your teeth with Fuel to revive you until you sleep). If you pass, the Fixxxer will welcome you and declare You Are Evil; otherwise you'll be Unforgiven three consecutive times. The metal may be Blackened as it ends its half-life, or while the Garage Days are on. Many miners are encouraged to mine during storms, as they might get a once-in-a-lifetime chance to Ride the Lightning, since lightning is attracted to this element. Later stages, which are not yet known, could cause death of a Magnetic kind, causing you to live as you die and/or return to the first four stages. Though it is highly popular and gives Justice For All, it is not legal in most parts of the world, with the punishment being Unforgiven three times.
Most users of this element have ended up in a San Quentin sanatorium as The Thing That Should Not Be, or as a hero who was disposed of. Due to it being a highly dangerous substance to handle, it is now only developed at Damage Incorporated. So, simply put, it Ain't My Bitch anymore, but it is not The End of the Line for this element. Sometimes it is used as a replacement for Cyanide, at least that's what Mama Said. If bought at the local pharmacy as a Cure, it comes with a great label saying 'Don't Tread on Me'. Overexposure to this element is dangerous (especially on Dyers Eve days). People overexposing themselves may find a Struggle Within. Overexposure to this gave Kirk Hamster his PENTATONIC WAH WAH SEIZURES SYNDROME. ### Panterium Creates bleeding ears with a high-pitched squeal at loud volumes, and the drunken blur of a drunken blur can be heard beneath the squeals. It is believed to be responsible for the Vulgar Southern Trendkill and for Reinventing The Cowboys From Hell, as well as being responsible for multiple Floods. Some states such as Texas have some extreme fans, and after extreme exposure they will force you to take a Walk Five Minutes Alone. That would be as This Love is Becoming Far Beyond Driven; its Domination is Where You Come From, making you Fucking Hostile against everything around you. This War Nerve will Cast your Shadow and leave you Broken, declaring war on everything that isn't Jack Daniels Tennessee Whiskey. People usually think that you're By Demons Being Driven at this stage. This Mouth for War is altered with a Primal Concrete Sledge's bang on your head. To some victims, this Clash With Reality results in a Psycho Holiday on Planet Caravan, leaving you alone with your Shattered mind. It eventually makes you Immortally Insane. But once you come back, when everyone presumed you were already beyond the Cemetery Gates, you will Rise and achieve Strength Beyond Strength. But You've really Gotta Belong To It, or it Makes Them Disappear.
Once your Tennessee Whiskey usage exceeds 25 Years, after your Goddamn Electric life, your last sound made will be a sinister Death Rattle. Some inexperienced victims have even left One or Two Suicide Notes after their death. After its half-life ends, Nu Metallium forms. ### Megadethium Megadethium, first discovered in the Bay Area Thrash Mine, is formed from decaying Metallicanium in its early stages and is known for its high concentration of Amerikhastan, a compound used in many high-Risk surgeries to stabilise patients. This procedure is often very safe, though it can lead to the patient being Youthanised and Rusting in Peace, as a Punishment Due for not having enough control of this element. Professor Dave M. Stayne, known for his position as the conductor of the Symphony of Destruction and his Architectural knowledge of Aggression, used this to make a corrosive acid called Leper Messiah-sulfite, though when Metallicanium is added to the mix it cancels this out. Pure Megadethium can be dated Back in the Day (usually on Good Mournings), when it was used as Cryptic Warnings for a Countdown to Extinction, and many theologists believe the extinction was caused by Sweating Bullets, confirming The System Had Failed. After being heavily studied in a remote Hangar named "18", the United Abominations declared this a class "A" thrashium compound, and it is now heavily sought after in many nations (see the United Abominations French Declaration on Megadethium, "À Tout Le Monde"). Note: The Risk is no longer mentioned, with the advice given to Crush all those that do mention it as if they were the Prince of Darkness, and it should only be used on a Black Friday. It has the curious property of making users Sweat Bullets. Osama Bin Laden famously said: In My Darkest Hour, Megadethium made me very Paranoid of the Dawn Patrol. Rumored to be kept by the Government in the legendary Hangar 18, it is the active ingredient in Youthanasia, a product designed to make you younger.
Synthetic "Megadethium" is made up only of Davimustardium and Dialectic Chaos. The leader of the New World Order decided that Mustanium, the synthetic version of Mustardium, could not "Believe in God", could not Talk to God every day, and that This Day they would not Fight. The Endgame was near; all miners were forced to go to Detention and could not use their cell phones. Deep in A Secret Place in Hangar 18, it was found that "Peace" does actually Sell, but no one invests in it. Megadethium can be ground up and used as tea, but your "Life Goes By so quickly". One day you're there, and another yer' gone. ### Sepulturium A heavy metal only found in Brazil on Chaos A.D., discovered Beneath the Remains by extremely good lyricist Max Cavalera. Known to be extremely deadly in its first four stages. After it reaches its Roots it is completely useless. Often causes Schizophrenia and Morbid Visions. If children are exposed, it causes them to Refuse food and Resist their parents. It could also cause the obliteration of mankind and the destruction of the inner self. ### Machine Headium A very durable metal, this particular element was pronounced of low utility in the early nineties, shortly after its first discovery. Scientists have recently rediscovered this metal; however, it cannot be destroyed, as The More Things Change. Exposure to this metal could cause violent, "mosh"-like seizures which often lead to a violently bleeding nose. According to scientists, this element has the power of Blackening the Skies Through the Ashes of Empires, and massive exposure to it can create a Supercharger that introduces any person who is considered sane to "The Burning Red", a mental disease that turns typical nerds into complete Bad Asses. This disease also has the capability of making weak people (and Disney fans) scream "Burn My Eyes!!!!" repeatedly while they groan in pain. It is also known to cause locusts to come flying out of the mouth.
### Bal-Sagothium A mystical metal created by Lord Byron Roberts, composed of power and black metal, a combination others thought too dangerous. Byron first unleashed its power when the black moon brooded over Lemuria, at the same time that Starfire was burning upon the ice-veiled throne of Ultima Thule. He used its awesome power to enthrone himself in the temple of the Serpent Kings, summon the guardians of the astral gate, dethrone the Witch Queen of Mytos K'unn, storm the Cyclopean Gates of Byzantium, and pursue countless other weird-ass endeavours. This metal has never been duplicated, except for one failed attempt that resulted in the creation of Cradle of Filthium. ### Behemothium A compound of Black and Death metallium, created by an ancient god named Nergal who starts storms near the Baltic. Sick of chanting pandemonic incantations, however, Nergal fused this substance to make it more appealing to Satanica. The result started a Decade of Therion and an antichristian phenomenon. Demigods have used this substance to make slaves serve martinis, sculpt the throne of Seth, and conquer all. Lately, the gods of the left-handed have used the substance to slay the prophets of Isa (whatever the fuck that is). Following the reign of Shemshu-Hor, an Evangelin came to spread the words of Daimonos and create the seed ov I from fire and the void. ### Wintersunium This is a compound of Power and Death metallium created by a mad Finnish scientist named Jari Mäenpää. Its first and so far only phase can give you winter madness and feelings of sadness and hate. Its second phase will be revealed when Time allows it. Until then, Jari hid this substance beyond the dark sun. ## Doomus Oxide One of the slowest and heaviest of the metals, it will often cause Pentagrams to be hung at a Candlemass, oftentimes in a dark and gloomy Cathedral. It is particularly powerful if the Candlemass falls on a Sabbath, in which case users will become Bewitched in Solitude.
Abuse of it during the October Tide can induce a state of Katatonia in which the user thinks they are the Witchfinder General. Related to N.I.B., very few scientists have chosen to experiment with this matter or unleash its sludge-like base to many. Those who have been affected by Doomus Oxide will often find themselves attracted to marijuana or other psychedelic drugs and end up attempting to Swallow The Sun. It also causes the user to become addicted to the Red Lottery. In the end, the user will ultimately wind up in a state of Solitude Aeturnus within Paradise Lost. ### Funeral Doom A person overdosing and dying while on Doomus Oxide often becomes reanimated at their Funeral, but defies Rigor Sardonicous, causing the Mournful Congregation to become Evoken with Skepticism. If this happens during the Fall of Every Season, this will cause a war of Doom Vs. the Tyranny of the church. ## Thrash metallium Very unstable. It is the raw form of Metallicanium and Megadethium. Capable of spreading Anthrax over thousands of miles and slaying enemies with great efficiency. It was also once rumored that in biblical times, they used this metal Exodus to bring forth the Testament. Prolonged exposure may kreate overkill, or result in vio-lence. Thrash metallium found wide usage in the Eighties, when stores of purified heavy metal were converted into hair metallium and thrash metallium. These stores experienced destruction in the mid '90s, though remnants of them can still be found in abandoned mosh pits. Thrash metallium can cause somebody to develop Evile disease and bow to the Thrasher. ### Slayerium Extremely Dangerous. Forms during a Season in the Abyss, somewhere South of Heaven. You can know Slayerium is available when the sky is full of Raining Blood, and if you died by this before blood rained, it can be used Postmortem, too.
Causes subjects to become bald and gain markings on their exposed skulls, believe God Hates Us All, Await for Hell (or vice versa), Haunt the Chapels, summon the Angel of Death, treat Suicide as Mandatory, Paint the World Blood, use it in Chemical Warfare as a chemical weapon while they Fight Till Death, commit Jihads, have the Eyes of the Insane and a Spirit in Black, and make Dead Skin Masks. Slayerium is considered an EXTREMELY dangerous element, and Thrash Metal erudites call this element the Aggressive Perfector and the War Ensemble of the Thrash Metal Isotopes. You can only handle it if you have the Bloodlines or are Showing No Mercy, or if you can assure a Divine Intervention in order to control it; if not, you will Die By The Sword and suffer Psychopathy Red for your Final Six years. It will leave you Scarstruck and crush you Piece by Piece. The Catholic Church and other Christian religions have strongly forbidden the use of this element, even though one of its first discoverers has claimed to be Catholic. Apart from the aforementioned reactions, overusage can cause the subject to grow extra faces: to be exact, Seven Faces. Some rare cases have also developed a habit of playing with dolls and enjoying public displays of dismemberment. Usage is considered 'Criminally Insane'. It is well known to be very reactive with Exodusium. ### Exodusium Created in San Francisco, an area known for synthesis of metals, Exodusium became Bonded by Blood in the 1980s when Paul Baloff's piranha farm became contaminated with deathamphetamines. Use of this will cause a Fabulous Disaster, and if added as a vehicle fuel will lead to an imminent Impact. It is used to make Shovel Headed Kill Machines for the coming Riot Act. If you don't get up off your ass and Toxic Waltz, Gary Holt will teach you a Lesson in Violence you won't soon forget.
### Testamentium Formed in Oakland, right across from where Exodusium was formed, Testamentium is known for establishing New Orders and enabling people to Practice What They Preach. Its presence is Low and demonic, and in large quantities it is known to turn your soul black. This element is one of the many Signs of Chaos. Also... there is More Than Meets the Eye to this element. Do not resuscitate anyone who overingests this element. ### Overkillium This Ironbound element Feels Like Fire. Inhaling said fire will cause parts of the element to take over your body. Once under the influence, several Years of Decay will follow, in which the victim is most likely to only Hear Black. This condition can only be treated by Bloodletting or drinking Necroshine (some kind of moonshine). Used by the Killing Kind to Kill on Command, cause a Power Surge, and Rot Things to the Core. ### Nuclear Assaultium A harsher form of Anthraxium, Nuclear Assaultium will lead to Brain Death, Third World Genocide, and the Plague. It will make any working machine become Out of Order and, in case of overdose, will be difficult to Survive, which is why this element should be Handled With Care. If you don't Handle it With Care, Critical Mass will be achieved, making the world unfit for life; the ruins will be all that's left, so The Earth will be a giant tomb. Well, I think we need another Race to Rape. ### Kreatorium This element, created in Germany along with the closely related elements Destructionium and Sodomium, will lead to Endless Pain and a Pleasure to Kill. One of the Terrible Certainties of Kreatorium includes Extreme Aggression, and Hordes of Chaos created by Enemies of God. The 6th-9th stages of Kreatorium are known to make ears bleed, but at the 10th it resumes its thrash metallium properties. This substance has a Toxic Trace, and prolonged exposure to large groups of people will start a Violent Revolution, which will inexorably bring on pandemonium.
### Destructionium This element is commonly used by the Antichrist as a device to Crack Brains, and by the Inventor of Evil as a tool for Eternal Devastation and Infernal Overkill. After it is used in a Metal Discharge, Satan will be Released from Agony, all hell will break loose and Devolution will begin. ### Sodomium Sodomium was discovered in the Enchanted Land during a Nuclear Winter. Sodomium is popular for its use in M-16s and Agent Orange, and for Tapping the Vein. It is known to cause symptoms of Persecution Mania, and it is one of the Final Signs of Evil. It is known for its attraction to gasmasks, chainguns and chainsaws; users of it believe the Saw is the Law. This metallium will make you Obsessed by Cruelty, will unleash an Outbreak of Evil and a Masquerade in Blood, and will make users feel like they're Better Off Dead if left unchecked. It might also lead to multiple Visual Buggeries, Spiritual Demises or Resurrection, and in some rare cases it might lead to Cowardice. Some users of this element might get a Bullet In The Head, be Skinned Alive, or look for their Body Parts thrown everywhere around (especially after Minejumping). They might also find themselves Among the Weirdcong and used as Cannon Fodder. Some of them will not survive the Baptism Of Fire; the rest will become Marines, fall on the Fields Of Honour and be Remembered as The Fallen. Either way, they will end their lives In War And Pieces. Sometimes they might Reincarnate as the Blasphemer or The Crippler. ### Exumerium THEY SAY THIS METAL IS POSSESSED BY FYAH! AND THE FLAMES ARE BURNIN' HYAH!!! ### Dark Angelium This particular Vendetta element came Falling From the Sky and Descended into Darkness. When it arrived, it Burned Sodom, leaving all the people residing in the city to Perish in Flames in a very Merciless Death. It left scars of Psychosexuality and an ancient Inherited Shame that Time Can't Heal.
Dark Angelium is highly reactive when exposed to children, and thus causes the Death of Innocence, in which they are Never to Rise Again (and No One Answers). This element has also been known to cause Hunger of the Undead through a Promise of Agony that is Older than Time itself. With this particularly violent element you can bet that Death is Certain, Life is Not. Even Hell is on its knees when exposed to this element, because it can guarantee you No Tomorrow at all. ### Stormtroopers of Deathium A harsher form of Anthraxium, just like Nuclear Assaultium, and a very crossoveric thrash metallium. Also known as SoDium (not to be confused with Sodomium), it is the most 'Pax Americanatic' element, with some racism. In fact, the main purpose of this element is pissing people off. You can use it to make someone who can't speak English like you Speak English or Die, to become Bigger Than The Devil, to whip somebody's Pussy, or to Fuck the Middle East. ### Sabbatium (English Isotope) Though not as abundant as the Japanese isotope, it's just as potent and available in more parts of the world. Thought to have the ability of clairvoyance, due to its handlers being able to see the reflections of our yesterdays as well as a history of a time to come. This metal loses most of its potency when mourning has broken. ### Voivodium Invented by a group of Canadian junkies. It is made for Killing Technology in order to inflict War and Pain and make its users RRROOOAAARRR. People with nothing for a face acquired this element and used it to enter Dimension Hatros to talk with Angel Rats at the Outer Limits. This metal was temporarily stolen by Negatron, who traveled with it to Phobos. It was quickly recovered, and the gasmask revival began. ### Sadusium Originally Swallowed in Black, this is a very disgusting, in-your-face type of violent metal that is always Certain About Death.
It is known that it can manifest itself by disguising as Illusions of images from potential abusers wanting to Torture (for the wrong reasons). ### Exhorderium Known to cause Slaughters in the Vatican if handled by a Desecrator who likes to Get Rude. The Law has recently banned the use of Exhorderium because it is known to cause many souls to be Unforgiven, and thus they become Un-Born Again after realizing the truth about Who is the Cross. Can be a Cadence of the Dirge when brought to funerals, and empowering during the tragic periods of Anal Lust. Known to have caused Death in Vain to the entire Legions of Death. When handling Exhorderium, avoid exposure to Panterium, because Panterium is Exhorderium-magnetic and thus can suck out Exhorderium's life energy (and the drained Exhorderium can cause incontinence in the next handler), totally claiming to be an element it is not (aka artificial Exhorderium). ### Heathenium Created by the Victims of Deception. Originally intended to Opiate the Masses. However, after the Evolution of Chaos, this metal was used to make Arrows of Agony for the Dying Season, as well as Goblin's Blades. Users of this metallium usually leave No Stone Unturned. They also believe that Mercy Is No Virtue. You can also Kill the King when a rainbow arises. ### Dirty Rotten Imbecilum Also known as DRIum; if you're confusing it with the drum, you're a useless dumbass. It was discovered in Houston, and was the first crossoveric thrash metallium ever discovered, although it isn't well known. Its main purpose is to cross over Hardcore Punkium and Thrash Metallium; it can make an L.P. Dirty and Rotten, or Deal with something, and collect these elements to use them as 4 of a Kind, turn anywhere into a Thrash Zone (while kicking posers' asses), give a Definition of something, and get somewhere at Full Speed Ahead.
You can also use it while you Thrash Hard, let Acid Rain fall, plan a Five Year Plan, accomplish Violent Pacification, become a Suit and Tie Guy when needed, escape from Beneath the Wheel, or Abduct some kids and get them Underneath the Surface. ### GWAR Oxide GWAR is a weird element found Beyond Hell and This Toilet Earth. Users of this element must be very careful, as It Kills Everything. It is popular for its use in multiple War Parties, Stalin's Organs and Happy Death Days. Mixed with Pepperoni and Ham On The Bone, it makes Meat Sandwiches. ## Industrial Metallium Discovered in Germany by radiologist Till Lindemann. More stable than thrashium, but not by much. It is the cornerstone of the ministry, who use it to create weapons of mass distraction deep within the Fear Factory. It was widely distributed by the 1000 homo djs during the 90s, and can cause the user to develop nine inch nails. If used extensively, the user turns into a form of subhuman called a white zombie, which creates hallucinations of Dragulas and Black Sunshine, and causes them to randomly thunderkiss 65 people. ### Rammsteinium A very combustible element found after an awesome plane crash. Just touching this orange offshoot of industrial metallium causes you to catch on fire and speak German, then die while being sodomised by angry shirtless men, who do indeed Ohne Dich. It might also cause you to scream, "here comes the sun!". Famous for making Mein Teil edible, it makes drunk German men forget that they are German and scream at their wives, girlfriends, and/or bitches: TE QUIERO PUTA. ### Blaze Baylium This metal cannot be destroyed; however, it does tend to swell up during the later stages of its life. It also gets stronger as it grows older if kept out of contact with most other metals. However, it must not be mixed with Iron Maidenium, otherwise an extreme reaction will take place, and this may result in a loss of popularity. Side effects include fits of rage during live shows.
### Nine Inch Nailium

A very fragile metal, it is usually found Ripe (With Decay) and Broken in A Warm Place, although somewhat Closer to Right Where It Belongs than may be expected. In the earliest history, it was used to manufacture a Pretty Hate Machine, but that was a Sin and a Terrible Lie, because it was Sanctified and Some Thing I Can Never Have. This resulted in Head Like A Hole. The Only Time my Ringfinger was Down In It, it started to produce the Purest Feeling; Kinda I Want To, but That's What I Get. Then it enters The Downward Spiral. I Do Not Want This, but Piggy has committed Heresy and become Mr. Self-Destruct. The Becoming spawned Big Man With A Gun on the March of the Pigs. At this state it can be used as an Eraser equivalent to a Reptile and a Ruiner. Skin Contact would Burn and Hurt, likely hurting Johnny Cash as well. At The Fragile state, usually The Day The World Went Away, it becomes Somewhat Damaged. This state is also known as The Frail or The Wretched. The Pilgrimage reaches Even Deeper Into The Void in The Great Below. Where is Everybody? We're In This Together, Just Like You Imagined. Underneath it all, The Mark Has Been Made. The Big Come Down was created by Starfuckers Inc with many Complications. The Way Out Is Through, Please. No You Don't, La Mer, because I'm Looking Forward to Joining You, Finally.

### Ministrium

Exposure to this metal will make you succeed by sucking eggs. Originally developed when a nerdy tall guy joined a bunch of retarded new wave fans and added a bit of punk rock. Then the punkass fool Cuban leader brought in some metal and progressively took more and more drugs. Ministrium is sometimes sprinkled on cooked filth pigs, and traces of ministrium can be found on the dark sides of spoons.

### Extremilium

Metallium from beyond the former Iron Curtain. A heavy metallologist needs to head East into Eastern Europe (i.e. Poland, Russia, or wherever else beyond the former Iron Curtain) for this one rare element of metal.
The fans there appear very loyal (Stalinist), more rowdy (like an overcrowded gulag of rebellious dissenters), and the sound of heavy metal is quite HARDCORE in the frozen steppes, tundras and taigas of "mother" Russia. It involves more alcohol consumption and a rugged Slavic identity that makes the Norse look like wusses, and is more closely related to Metallica with a touch of the death and black metal genres.

## New Wave of British Heavy Metallium

Deep Purple in colour. Discovered at night by a Saxon on a Saturday, in jolly ol' England. Iron Maidenium, Diamond Headium, Cloven Hoofium, Angel Witchium, Atomic Massium/Def Leppardium, and about 400+ more submetals derive from this form.

### Atomic Massium/Def Leppardium

A form of metal which can cause Pyromania and Hysteria, leaving you High 'n' Dry On Through The Night, or just deaf, as displayed since the Overture of this metal's existence. The majority of its users are 40 year olds in wig mullets, worn to cover up the fact they can't grow back much hair. These 40 year olds are attempting to bring back glam metallium along with this metal. If it is heard, It Could Be You suffering such side effects as Answering To Your Master, Getting Your Rocks Off while Wasted, Sorrow with A Woman, Bringing On A Heartattack after fucking your Lady Strange, getting you runnin' before being a victim of Another Hit And Run, turning onto Switch 666, getting your Photograph taken by a Foolin' retard, Coming Under Fire after giving little Billy his free gun, an amputation of the left arm, exposure to lots of pretty Women, blasting off with your boring influential grandparents in a Satellite Rocket, turning into an Animal (you can't be charged with bestiality even when you fuck a dog or kitty), Fracturing Love, Missing your lover in a Heartattack, singing a Song in the Desert, making lotsa Action but Not Words, and/or Excitable amounts of sugar to be poured. Known to destroy Diocyte on contact by hiring Vivian Campbell.
It is heavy before turning soft, light, and poppish after lacking Adrenalized exposure to White Lightning, being exposed to a substance known as MTV (also known as My Tall Vagina), or even an ounce of Love Bites. When it comes down to this level, it's best to either just Let It Go with the Rock Brigade, but still Rock Rock Til' You Drop Two Steps Behind at the Rock of Ages, or, When The Walls Come Tumbling Down, just say... No, NO, no! No, NO, no! I said, No, No.... NOOOOOOOOOOOOOoooooooooooo..........oooo.....OOOOO....!!!!! It Don't Matter if you try to be cool with this metal. Nobody cares about this metal that much, unless they, too, are from the 80s. Don't even try to Slang with me, or I'll expose that you listen to Iron Maidenium. Don't make any Promises, either. Else you'll have to Work It Out with all the Bad Actresses in the Sparkle Lounge. Go, just GO, if you dare. I hope you don't Hallucinate from the pretty shine of this metal.

### Cloven Hoofium

The most unclean and filthy metal of all the metals. Discovered in 1979 inside The Gates of Gehenna by physician Dr. Li Pane, who also realized that this metal was strongly associated with the four elements (Earth, Water, Air & Fire) of Earth. After its Opening Ritual, a whip was cracked by a Sentinel in the Starship, who only wanted to Lay Down The Law by being morbidly exposed to this metal Back In The USA. It can lure plenty of Nightstalkers into the March of The Damned after the Return of The Passover. A Daughter of Darkness who was Raised on Rock unleashed the Eye of The Sun and destroyed the heavy metal-using men of steel, leaving only a quarter of this metal behind to look for other shards of metal to unite with. After a scientist found this metal along with the split-up parts of Tredegarium, he combined them all together, and thus Cloven Hoofium became an even more precious metal to admire.
When a Dominator arrived to take over the Nova Battlestar and track down all of the Warriors of The Wastelands and Fugitives crossing the Road of Eagles, Russ Northium was Rising Up to merge with this metal after Fighting Back against The Invaders Reaching For The Sky. 1001 Nights ago, a Highlander was honored as a Forgotten Hero in the Death Valley for using this metal against an Astral Rider who knew the Silver Surfer. In this Mad, Mad World, a Mistress used and abused this metal for her own means of power, but the metal was not able to fulfill all of her wishes, so she split it up again. Decades later, it was rediscovered by Dr. Li Pane, who believed that he could rebuild the metal by trying out various shards from other metals. However, it was harder than expected; sometimes one wrong metal shard meant complete uselessness throughout the whole metal. A hiker was said to have found Russ Northium and Jon Brownium up on the hills and brought them over to Dr. Li Pane to return the uniqueness to the metal, and the repair was a success.

## Hair Metallium

A common substance in L.A. in the 1980s. Found by flamboyant chemist Gene Simmons, using his tongue of doom to coax it out of a hole. An extremely hot metal, and an aphrodisiac so comically potent that women have been known to administer oral sex after merely looking at the damn thing. Unlike other forms of metal, this one draws (or drew) quality women, and lots of them, coming in motley crews carrying guns and roses. And they made love all day long, just to be left in skid rows later, surrounded by ratts.

### Mötley Crüeium

An element found in hair, typically in Girls, Girls, Girls. First discovered during the Generation 'Swine', this element is used in heart surgery, usually to provide some kind of 'kickstart'. Dr. Feelgood has been a strong supporter of this element, repeatedly stating "You're All I Need".
Apart from surgery, this can also be used as a recreational drug, making the subject feel numb, but at the same time, somehow, 'Welcome' to the Numb. Typically smoked in a masculine environment; overdoses can lead to subjects dying, only to come back to life later. While dead, subjects will Shout at the Devil. Traces of this element were found on Planet Boom, as recounted in the Heroin Diaries. If used past its prime, however, it will cause weight gain and Hepatitis. One can obtain it in large amounts by praying to the Saints of Los Angeles. Famously used by the hottest models, as it causes them to have the Looks that Kill. Also found in surprisingly huge amounts on the wild side. Despite its look-Good qualities, it can't make people fall in Love, as it is Too Fast for Love. Heavy users will wake up in a Theatre of Pain with a New Tattoo that can only be removed by Dr. Feelgood.

### RATTium

A rare shiny metal which should be avoided by emos (see pussies). It releases a strong gas (primarily from the large quantities of hair spray) which, when inhaled, can send humans 'Round and Round'. Originally discovered in 1976, it quickly oxidised. The gas emitted also happens to be addictive and will often confuse humans into going 'Back for More'. Due to it being such a risky metal, an unauthorized owner, it was once reported, can become a 'Wanted Man'. Although scientific evidence has not proven this metal to be dangerous to health, a once regular inhaler of the gas was found 'The Morning After' screaming 'I'm Insane'. The man, however, was not convicted, due to a 'Lack in Communication' down at the 'Scene of the Crime'. In 1983, a male who turned in his girlfriend, a regular user at the time, claimed at the station that he could not leave, as 'She Wants Money'. Later he confessed that she had been a 'Sweet Cheater'. In 1985, after scientists invaded the privacy of some regular users, they concluded that all RATTium users should 'Lay it Down'. It was later stated in a popular issue of Poison magazine by Dr.
Michaels that it was 'Dangerous but Worth the Risk'. It is what makes the World go 'Round and 'Round, according to the Gold Child. Unfortunately, it makes you single, which causes your body to react by growing a tattoo that reads 'I Want a Woman'. It makes good cars, but remember: no one rides for free.

### Poisonium

Exactly what it is. One ounce contains about 99.9% intoxication to your eyes and ears. But if you are able to survive it, you may turn into a flamboyant man with poofy hair. All the AIDS-infected girls will truly talk dirty to you this time and demand some action from you tonight, because they want nothing but a good time, is all.

## Nu Metallium

A highly variable metal with the shortest half-life of all the heavy metals. First discovered in the middle of the 1990s, it was reported to taste like KORN. This discovery set off a rush to find new sources of this exotic metal; sadly, for many years the only other source was Limp Bizkits. Nu Metallium in Bizkit form was highly toxic and was reported to have stank of Hooba. In 2001, new sources of good quality Nu Metallium were discovered by Dr Bennoda in Likin Park. This form of Nu Metallium was found in P.O.Ds and had the ability to leave anything in its path Staind with Evanescence, a strangely beautiful substance. Nu Metallium disappeared overnight on the 24th of February 2004. This is believed to be because all the deposits had been Disturbed, Drowned in a Pool, or tied in a Slipknot.

### Disturbium

Having usually a dark gray tint, it is used by vikings for their faces when they're pillaging small towns off the coast of Vermont. Discovered in 2000, it quickly gave its handler The Sickness after prolonged periods of handling. Several years later, Disturbium was used as a contraceptive. Why not? I mean, hey, if you're into that kind of thing. Some side effects of exposure are severe balding and a tendency to scream like a douche.
The douche screams can, however, be translated into pure unadulterated "the shit" if the listener possesses a third ear under his cerebellum. He shouts 2000 times repeatedly in your face. This ear is called many things, but it is definitely not called Lasagna. A young girl, as a little child, was taken and forsaken after touching the metal. When playing "Stricken" on Guitar Zero III, you not only go Inside the Fire, but you become Indestructible as well. However, if you are Indestructible, Ten Thousand Fists from Hell will beat you while they hold Nu-Metal.

### Drowning Poolium

A success such as Bodies can tell you how much more worthy Guantanamo Bay is to go to, thanks to Drowning Poolium. It was discovered in 1995 by the late, not-so-great Dave Williams, who was high on cocaine when he thought about it in the first place. It was considered the poor man's K.O.R.N. How it was the poor man's K.O.R.N. was never explained. They also played Sinner, which is also a decent song, but that doesn't mean a lot when it comes to playing it on Guitar Zero.

### Slipknotium

Has a vermilion color, similar to the color of blood, and often takes the form of nonagrams. It can only be found by cutting the wrists of goths and chavs. It emits an extremely bad Sulfur-like stench when not handled by appropriate personnel. It continually baffles scientists with its rapid mutation, and causes victims to move in the same aspect as that of a maggot. It also causes victims to inexplicably go psychosocial, grab sticks, and whack barrels for a while. This metal also causes you to spit it out. Also, before I forget, it is important to point out that it has a nine-part structure. Unfortunately, due to the (sic)ness of this element, you must wear a mask if you come into contact with it. Also, Before I Forget, I need to tell you that I was a Creature before I could stand. Basically, Three Nil hits with Duality, Pulsing with the Maggots until All Hope is Gone.
It takes its name from a 1996 song also known as Slipknot.

### System of a Downsyndromium

A stable but pressure-chaotic mix of many, slightly retarded metals. Created by Serj Tankian, Daron Malakian, Shavo Odadjian, and John Dolmayan deep in the Los Angeles underground. The metal is so powerful it bakes stars, makes rivers fly off the Earth, breaks the strongest castles, makes Matthew the gosling a real goose, and even makes mermaids cry. The metal usually also heats the ground it rests on, to dry the feet of people standing nearby. But it is recommended for its remarkable properties of curing sanity.

### Mall Metalium

A variety of Nu and Neo Metal Elements, it was created when a fashion kid in Texas (who had Paris Hilton posters in his room. And she was not naked) inhaled a great amount of Teen Spirit. After that, Aliens reported seeing the same person going every day to the mall to buy (or steal) more Teen Spirit Ultra N00B Version for Members of K.O.R.N. Ironically enough, the kid was seen with a K.O.R.N. deodorant shirt and Slippednote pants. It is said that other elements of Metal react in a bad way to this element, causing explosions and destroying buildings.

## Progressive Metallium

Found recently, yet it only confuses the scientists, because a piece of Progressive Metallium begins somewhere and ends somewhere else, but it's multi-dimensional and can't be examined or measured by any Tool. "What kind of imagination asleep in some lyrical coma, whose vain futile memory could have been so wrong?" comments Charlie Dominicheese, from the University of Ham and Paste, San Francisco, California. Experimenting with progressive metallium has been known to cause many unusual side effects.
Known effects include, but are not limited to: extreme hair growth, a high-pitched voice (especially in butch-looking males), feeling the need to play unfamiliar instruments, playing unfamiliar instruments for the first time during a live performance, playing instruments with the wrong body part, and using unusual objects as instruments. One such victim of these side effects is Claudio Sanchez, frontman of Coheed and Cambria. Since his experimentation with progressive metallium he has suffered from a high voice pitch and an extreme growth of hair; he went from a beautiful bald man to sporting one of the hugest known afros ever owned by a white man. Another side effect of experimenting with progressive metallium is hallucinations. Many scientists, after being exposed to this substance for extended periods of time, started thinking they were in Dream Theaters, Shadow Galleries, or other similar places.

### Ayreonium

A rare and very useful element, this heavy metal is known for its capacity to bond with all kinds of atoms/molecules and is the only element that attracts both electrons and protons. It was discovered in the mid 90s by Arjen Anthony Lucassen, who had no Hope in life because his father thought he was a Loser. But the day he discovered Ayreonium, he started his journey in the search for Love. He exploited this strong bonding/attracting element to the max by involving nearly 3 billion singers in his Human Equation. Unfortunately, the fast rate of reproduction in some countries prevented him from realizing his goal (which at first was gathering 7 billion singers in his project). Therefore, he decided to Ride The Comet and become The New Migrator. In the search for a less populated planet where he could engage all the living population in his project, Lucassen left The Solar System and headed To The Quasar, but he eventually fell Into The Black Hole.
There, it was just Another Time, Another Space, but luckily for him, he wasn't trapped there for too long; soon enough he found Two Gates and chose the one leading him to The Tunnel of Light, finally escaped, and headed straight to Star One in the search for a toilet! After this Amazing Flight, Lucassen is taking a well deserved rest in his House On Mars and is expected to be Back On Planet Earth in June 2084. He's said to be the First Man On Earth to do this One Small Step!

### Dream Theatarium

Probably the rarest of all the heavy metal elements, it's a mix of heavy metal isotopes with other complex elements and complicated substances, and its electrons are always in constant motion. Ancient Prog Mythology claims Dream Theatarium was forged Beyond This Life (that means, the Afterlife) by the Ones who Helped to Set the Sun in a Dark Eternal Night, aided by the Prophets of War using a Killing Hand to Honor Thy Father. In history, it was found Biaxident by a Swiss oiling company in Antarctica; due to collecting Images & Words, the element soon dispersed and killed everyone who was Awake in the area. It became a very exotic mineral in the renaissance, when The Count of Tuscany built his castle walls entirely of this material. As These Walls were very strong and rare, he then proceeded To Live Forever. If exposure to this element is too long, sensory loss and damage will occur for the average mortal, followed by Death (not to be confused with Deathium). People who survive are often found completely speechless, in an Inner Turbulence classified as Six Degrees on the Systematic Chaos metric system. In some cases, patients have been affected with erotomania, becoming attracted to a public figure from the Metropolis who happens to live Under a Glass Moon; some others cannot take Another Day and beg "Take Away my Pain". The people ask for an instant Change of Seasons.
Many scientists have come to believe the element created itself out of nothing, Falling into Infinity and violating all laws of physics and thermodynamics. The media were quick to name it The Root of All Evil. The Great Debate nowadays is between Disappearing all of it from the Earth, or Forsaking it in a Glass Prison. Skeptics state that they are Losing Time: to isolate it would cause the walls to Wither and become a useless Shattered Fortress; and it cannot be disappeared, since some scientists follow their Blind Faith and assure that The Answer Lies Within it. In the Name of God, this is The Test that Stumped them All!! Presidents try to calm down the multitude, saying that "it's Only a Matter of Time before the researchers accomplish their Endless Sacrifice to neutralize it. Please, don't let us be Misunderstood...". Dream Theatarium can be found under Peruvian Skies, and it seems to be more available in the early morning, at 6:00, to be precise. A New Millennium's Eve seems to be The Best Of Times to collect this rare isotope. Misuse of this isotope may result in panic attacks eventually leading to fatal tragedy, or may cause the user to become trapped inside the Octavarium.

### Queensrychism

A Seattle-based (Gimme my Starbucks coffee) progressive metal band, well known for their unintelligent lyrics (Suite Sister Mary from the Holiday Inn) and their general disinterest in anything having to do with music. In the early days, they dressed gay and made fun of the Roman Empire. Their early songs consisted of hits such as "Queen of the Dykes", "Screaming in front of a Digital TV", and "Gonna get close to your boobs".

Then Operation Bartcrime came out and caused a frenzy with all the Simpson Metal Heads. They took the nation by storm with such Neo-Nazi hits as "Operation Bartcrime", "Spreading the disease (because I can)", "I don't believe in jugs", and the smash metal hit "Eyes of a Strangler".
With a dwindling fan base, Queensryche decided to take a different direction in music and join forces with Empire Carpets to write the theme song for the Neo-Nazi lesbian "carpet munching" group. That is where Geoff met Susan. As the years moved on, Geoff Tateryche's voice decided to move south (with the rest of the people from Seattle), and in April of 2012, the other members of Queensryche tried to overthrow the dictatorship of the TATERyches: Susan Tate (Neo-Nazi lesbian) and Geoff Tate (so pussy-whipped by Susan that he shaved his head and thought it was his balls).

### Symphonyum "X"

Also one of the unique metals in this Universe, its appearance presents different Shades Of Grey. Experts on the subject soon managed to discover that it came upon a meteor rock from The Edge Of Forever, and that the collision left a crater akin to the Oculus Ex Inferni. The atmosphere afterwards was blurry and dusty; historians say that you could only see a smoky masquerade of Sins and Shadows. NASA researchers were afraid that this could be an alien attempt at our Domination, of which the first maneuver would be to Set The World On Fire. Other recent research has provided new facts about the symptoms, which make the exposed go Into The Dementia, suffer an Absence of Light stimulation in the eyes, and develop Damnation Games; some hospitals stated that those in the terminal stage claimed to feel as if they were on the Breath Of Poseidon. A Lesson Before Dying: you might experience extremely violent Awakenings, nightmares about the world being led into an Inferno, and screaming in the middle of the night that we all live on a Paradise Lost.
Due to the lack of an authentic and effective antidote, CERN, CIA and NASA scientists are already working on what they call their Accolade: the construction of the biggest spacecraft ever, along with its launching station, referred to as the Church Of The Machine, in order to begin an extremely brave Odyssey into space and make the Sacrifice to cross the Savage Curtain at the Oort Cloud to track down this metal's home planet. Once there, they will follow Professor Hawking's Premonition, and if they succeed, they will find a Rediscovery to counteract the effects and retrieve the Relic home.

### Toolium

Toolium is strictly not Progressive Metallium in any way at all. Visual or physical contact with this metal will instantaneously surge the victim with an LSD-like hallucinatory trip that will make you shit the bed. Goddamn. A wicked bass solo will ensue. This metal, when melted, is known to be exceedingly vicarious.

### Opethyte

This element has always been related to mysticism, due to its usage as an ingredient in The Grand Conjuration. Discovered by Professor Michael Hackerfield (aka The Moor) on a Watershed, one Still Day Beneath the Sun, this isotope is a creator of paranormal events, like The Baying of the Hounds, the Godhead's Lament, a Moonlapse Vertigo, or constant Hours of Wealth. It is a mixture of very rigid Heavy Metal, Death Metal, and very smooth Liquid Metal, like mercury but waaay metaller. Users often have a Burdened, Porcelain Heart and hallucinate a Leper Affinity after its Deliverance, believing they are seeing the face of Melinda, and therefore starting to sing Ghost Reveries guided By The Pain they See in Others, just To Bid them Farewell. This is, of course, a Damnation that will make its owners Benighted and fall upon Their Arms (Your Hearse). In these cases, The Twilight is their Robe in Their Time of Need, and it is the perfect element To Rid the Disease. Users are often found in a Serenity Painted Death when exposed to overuse.
Natural occurrences of the substance have been found in Blackwater Park, Sweden, in a form of Orchid known as Black Rose Immortal. The plants commonly grow on the Morningrise, specifically Under the Weeping Moon during the April Ethereal, and are Harvested during the local festival, Dirge for November.

### Mastodonium

This metal was once commonly found on the slopes of Blood Mountain by colonies of birchmen, until one day its sheer power cracked the skye, sending it into the sea where it was devoured by a white whale. Now it's lost in oblivion.

### Cynicium

This metal was born out of a mix of deathish metal and crazy jazz jizz. It was deep inside the ground, under it actually, for most of the early years, and the main experimenters often experimented on other elements and would leave this metal alone for long periods. This metal often comes in long bars and needs a lot of Focus to absorb it all. The frontman in this discovery once left the use of this metal, and after studying some very famous, acknowledged and revered bugs known as The Beatles, he worked on something more ethereal called an Aeon Spoke (which was thought to be a weapon of mass destruction but turned out to be a harmonic and beautiful rock/diamond). In the last few years, Traces In The Air led to the undigging of this metal, and it has now returned to fashion to be used once again.

### Animasleadarium

This metal is a strange, furry substance that is LITERALLY forged out of space, planets, robots, and the essence of Tempting Time. It is rumored to have been discovered by an extremist known as Tosin Abasi, who is famed for wielding not a 6-handled axe, but 7- and 8-handled axes to get the best heavy sound of metal. He often Behaved Badly while recording, producing a "Zwoooosh" sort of sound that, if slowed down 200 times, can be recognized as a jazz arpeggio on fire, its arpeggio mind distorted with beauty. Tosin Abasi often transcends time and space in moments he calls "Soraya(s)".
His playing is so fresh and full of life it sometimes catapults Waves Of Babies into the crowd. Though Mr. Abasi is a genius, he is never on time, often showing up late, in times such as 13/5 or 9/4.

### Peripherum

This metal is a luminous metal mainly used in Bulbs. It is categorised as an All New Material and can often cause Insomnia if human contact is prolonged. It was discovered via a Letter Experiment with the Light, turning the patient Totla Mad. This metal was discovered by Zyglrox, a Buttersnips addict, currently working on the theory that Icarus Lives.

## Power Metallium

Power metal, also known as The Lord of the Rings metal, is a lighter, shinier isotope of the element. Power metallium may be safely projected at extreme speeds and still retain its stability. Rumor has it that knights from the Eighties discovered the metal on Helloween, when a knight decided it was a good idea to crawl into and squirm around the open wounds of a slain dragon while wearing full armor, which synthesized with the dragon's blood n' stuff to form power metallium. Timo Tolkki managed to harness the power of Power Metallium first. His secret has never been discovered, and to this day he remains the ultimate user of Power Metallium. The blind guardians and men o' war were the Edguys who put the metal to good use. Hansi Kürsch, a blind guardian, is one of the most creative users, having inlaid several power metallium ingots into his vocal cords. Herman Li's guitar has a whammy bar and humbuckers made out of power metallium of unheard-of purity. The basic Powermetallium formula is expressed via Toteman's Law:

$\text{DragonForce} = \text{DragonMass} \times \text{DragonAcceleration}$

### Blind Guardianium

The Blind Guardians, some of the richest users of Powermetallium, of course have their own mines of the stuff. They discovered a new version, which they've shared with very few people.
They found it while Traveling in Time with the Battalions of Fear, in which they were declared to be the Guardians of the Blind. When the Battalions of Fear were Lost in the Twilight Hall, they turned to the soon-to-be heroes and proceeded to Follow the Blind. Somewhere Far Beyond this point in the adventure, the Blind Guardians met and were taught to sing the Bard's Song, by doing so becoming bards themselves. Hansi Kürsch, who at the time was sharing the element with André Olbrich, Thomen Stauch, and Marcus Siepen, continued to mine Blindguardianium, only being able to see it thanks to visions from Imaginations from the Other Side and his shiny Bright Eyes. The group then found greater deposits of it while diving into worlds known only to fantasy; during Nightfall in Middle Earth, they drove Into the Storm and came out the other side When Sorrow Sang with more of the rare element. But, shortly after celebrating with A Night at the Opera, Thomen was driven mad with hunger for more Powermetallium and went on his own journeys. His departure from the Blind Guardians is often called the Harvest of Sorrow. And Then There Was Silence? No... A Twist in the Myth would occur when Frederik Ehmke was declared to be a Blind Guardian and a Bard. The Guardians still believe that This Will Never End, trusting Ehmke to help them get more of the element in the Otherland. They've recently found much of it in the Sacred Worlds, and have continued their Ride into Obsession, looking for more Blindguardianium. They are currently following the Voice in the Dark, hoping to locate new deposits At the Edge of Time.

### Diocyte

A very short but powerful isotope, mixed with deep purpleum to create rainbows, and with iommium to increase the potency of Black Sabbathium. A holy diver named Ronnie James Dio discovered it while diving off the coast of New Hampshire, when he saw a rainbow in the dark waters and decided to name it after himself.
Suddenly he saw a cat in the blue coming after him. It got him straight through the heart, and as Dio was dying he merged with the metal and became the King of Rock and Roll. He went through heaven and hell and made the Devil cry. The Devil then gave him the sacred heart of the Stargazer, and to this day he is still the last in line to the throne of Power Metallium. He was still afraid of this experience, but then he remembered he was the Star of the Masquerade and there was no need to be afraid. Exposure will make you want to stand up and shout, kill the king, and flash the devil horns. A famous source of Magica, it has the awesome property of making drivers end up on strange highways. Heavy use might make you eat your heart out. Great for those lost in the woods, as it just might lock up the wolves. The only metal to affect the behavior of machines, Diocyte makes computers, planes, cars, and toasters very Angry Machines; they are bent on Killing the Dragon and can only be stopped by the legendary Master of the Moon. Unfortunately, Diocyte is no longer found in nature.

### Artchurium

Artchurium is an extremely rare, obscure metal found only in Norway. Much like Astral Doorsium, it is also somewhat like Diocyte. When found fully loaded, it shoots to kill. In its reincarnation, it just follows me where I go. It is also a powerful aphrodisiac, and thus gives power to the man. This element is highly sought after by its avid fans, but others pretty much don't care.

### DivineFireum

A strong form of metal, known to be supported by The Lord. Fused together with Symphonic Metallium and Religion, DivineFireum is a very versatile metal, and can be used in knives, spoons (no forks), rebellion, and arrowheads. Being power metal, DivineFireum contains relatively few grunts and screams compared to death and black metallium, but quite a lot compared to other power metallia.

### Astral Doorsium

Astral Doorsium is a rare and largely unknown metal that is usually found near Scandinavia.
When used correctly, the element starts to act like Diocyte, allowing many to confuse it with Diocyte. Many times when it is found, it can be used as a cloudbreaker. The bride of Christ used it at her wedding with Jesus as they burned down the wheel. Many people such as Quisling started to use it for personal gain, while others use it to rock. In the end, if you abuse the element, you will go to prison for life.

### Helloweenium

A metal from Germany that makes every day Halloween. Was once used to destroy the walls of Jericho. Currently owned by the keeper of the seven keys and Mr. Torture. The band rose from the ashes of Pink Cream 69.

### Gamma Raysium

Helloweenium was sent to a power plant somewhere in space. A majestic new metallium was soon given to a planet with no world order, where it would be placed in the land of the free. Said to be more powerful than the original Helloweenium.

### Hammerfallium

Arrived from the heavens, this element descended to earth on a bolt of crimson thunder. Used to write a history and legacy of kings and bestow glory to the brave. However, this metallium was stolen by renegades and hidden in the threshold of the universe.

### Nightwishium

Once, an angel fell first before time began. They became known as the wishmaster and made Nightwishium. Born from the ocean, this element is known to have a strong luster. Used to construct the dark chest of wonders filled with dark passion play. Nightwishium is easily spotted over the hills and far away while the 10th man down is dead. Nemo the fish is usually seen borne by this element.

### Dragonforcium

The lightest and fastest of all the Power Metals. Often found in The Valley of the Damned, they come from a mixture of Speed Metallium and Power Metallium. Best described as six members, complete with a keytar, standing atop a mountain, sending pterodactyls soaring into the sky with each strum of the guitar.
They have been likened to a blend of Slayerium and Journeyium, with a dash of Maidenium, making this element a lethally heavy musicfest. Putting this element through fire and flames will cause Inhuman Rampage and an Ultra Beatdown. Its behavior can be predicted to a high degree of accuracy using Totman's Law: DragonForce = DragonMass * DragonAcceleration

### Finnish Power Metallium

Power Metallium has been discovered in Finland, but it tends to be slightly different. Finnish Power Metallium is quite possibly the shiniest of all metals. Most types observed tend to be isotopes of some kind of Stratovarium, but can form alloys with a surprising range of other metals. The molecular energy in these metals is fueled by some unknown, yet seemingly unlimited power, and as a result, it can move very fast. Very fast. But only if it feels like it. All the props in the Finnish opera "Sonata Arctica" were supposedly made of Finnish Power Metallium.

### Stratovarium

The original Finnish Power Metallium, created by Timo Tolkki. Forged on the fright night, the metal will give you a vision of black diamonds and a future shock.

### Sonata Arcticum

A Finnish Power Metallium that thinks it's Stratovarium. It's also been known as Ecliptica, the Full Moon, and Victoria's secret. Known to some people as the sole "Flower Metal."

## Black Metallium

And yea, one day Good King Cronos looked out upon his Dark kingdom, pitied his Dark peasants, took a Dark bath in a tub made of pure heavy metal and, dismayed, sat down upon his Dark throne. "What a bleak, cold, miserable bathory of a day it is today!" he lamented. "This grimmest of celtic frosts will destroy us soon, if the light does not take us first." Then the king heard something move; but alas, his kingdom was so Dark that he could not see what it was that was moving. Then suddenly, a blaze in the northern sky for one glorious instant illuminated the land! And he saw a hideous black snake on the ground in front of him. "Grishnackh!"
he cursed. He called for his Necrobutcher, but he was out having pork chops. He reached for his Hellhammer, but Satan was fixing the head on it. In incomparable desperation, Good King Cronos spat at the snake, unaware that he had swallowed some of the heavy-metal-laced water from the bathtub. At precisely the same instant, the mischievous snake spat back at the king, and the Venom in its saliva met with Good King Cronos's for one instant, and the two fused, forming a shiny, black, zephyrous substance. This was the poisonous substance Black Metallium.

An unfortunate victim of TNB Metallium poisoning.

Mayhem ensued across Good King Cronos's kingdom. Churches were burnt to the ground (for the sheer fun of it), Hades and Euronymous rose from the Underworld and played their guitars so loud many ended up Dead, and the peasants across the land lamented the day, dreaming of what once was. As the snake kept forming more and more trve norsk black metallium, many tried to kill it, but in vain; for the snake was immortal. And it grew and grew and grew some more, until it had become an unnatural behemoth. That's why you don't get much news from Norway these days.

### Acoustic Black Metallium

The grimmest of all metals. It was forged in a very, very, very, very, very, very, very frostbitten mountain in the center of Norway by the great Necrowizard, the keeper of all things grim and frostbitten. It is far more grim and metal than even Pure Norsk Black Metallium, because it was created by the grim and frostbitten Seth Putnam and his Ulvers. Acoustic Black Metallium can often be inverted, and is mainly used for poser extermination.

### Gorgorothium

A very deep black-coloured metal, usually associated with this world; this is in fact a false statement, as it is mainly retrieved by suicidal maniacs who have a deathwish and have volunteered to travel between dimensions to obtain Gorgorothium.
Gorgorothium is generally used to make male rhinoceroses fight within the herd to create a stir at local zoos. Gorgorothium was also the apparent cause of the birth of Jesus' Mortal Opponent, the Anti-Christ.

### Venomium

The first black metallium ever discovered. On it was engraved "Welcome to Hell", and it was very Black Metallic. Some say this substance is the source of all black fucking metal. Can often give you an aroma that makes you sexually attractive to your female teachers. However, if you smell this aroma you will get Red Light Fever and your penis will eventually explode.

### Bathoryum

Said to invoke the return of the darkness and evil. Discovered by a young viking warrior named Quorthon under the sign of the black mark. He offered it to Oden, who rode with it over Nordland and set the shores in flames. After Oden returned parts of the element, Quorthon rode to Asa bay to warn about the twilight of the gods. No one listened, and the element was lost in time until the Destroyer of Worlds came to smear blood on ice. Quorthon made one final attempt to preach about the metal. Afterwards, he let himself be picked up by the wheel of the sun and was never seen again.

### Immortalium

The only stable Norwegian Black Metallium. This element, born of cryptic winterstorms, is said to be the heart of winter and all things cold and grim. Possessed by a fire-breathing thing called an Abbath and the sons of the northern darkness, who can hear the call of the wintermoon, this element was used to build weapons for battles in the north. In the right hands, this element has been used to invoke the diabolical fullmoon mysticism and summon blizzard beasts from the mountains of might. However, if used by those damned in black, pure holocaust will begin and all shall fall.

### Mayhemium

One of the most powerful forms of Black Metallium. People are afraid of its ability to cause pure fucking Armageddon. Was found buried in time and dust on the night of the freezing moon.
People's guts have been fucked by chainsaws made of this element. People who hold this element and desire Life Eternal will be cursed in eternity. Today, a maniac made a grand declaration of war for this metallium, but lost it to a warlord named Attila.

### Burzumium

Mixture of this element with Mayhemium will result in a gigantic explosion. After hearing some feeble screams from forests unknown, pyro-demon Varg Vikernes created this element to document his journey to the stars. Its 5th and 6th phases become uninteresting and a poor indicator of what once was.

### Emperorium

A man named Ihsahn was a black wizard from the Welkin. He created this element to make a cosmic key to his creations and time. This key can only be found if you travel beyond the great vast forest and face towards the pantheon in the nightside eclipse. With strength, this metal will burn, but inhaling the fumes will take you into the infinity of thoughts, where it's always the witches sabbath and you will have to face the wrath of the tyrant. If shown live, this metal gains a brilliant luster.

### Nokturnal Mortumium

Nokturnal Mortumium is a powerful form of Black Metallium, discovered by a Communist Christian named Knjaz Varggoth hailing from Kharkiv, Ukraine in 1995. It was discovered at Twilightfall, by reciting Lunar Poetry to a pile of severed Goat Horns travelling To the Gates of Blasphemous Fire, resulting in the NeChrist subscribing to the Weltanschauung of the Voice of Steel.

### Carpathian Forestium

Possibly the most unstable form of Black Metallium, this element will cause grotesque overblown obesity if consumed in heinous quantities. Side effects also include uncontrolled return of the freezing winds ejected from an anus which has been sodomized by satan. This side effect is most undesirable when journeying through chasms, caves and titan woods to the cold moors of Svarttjern, a place with a name similar to the sounds emitted from one's already beleaguered rectum.
Should this malady prove too much for your tired soul, you will then be pecked to death in a circle of ravens. As one turns blue, there is always time to perform the good old enema treatment, which only lasts approximately 01:52. This act of spiritual purification will have you donning your black shining leather in no time, all the while decrying Christian incoherent drivel and developing a morbid fascination of death. This metal is a misanthropic violent hellblast. It's darker than you think...

### King Diamond

The King Diamond is the shiniest diamond on Earth. It's also blacker than the blackest night. It was formed by the mercyful fates in a mansion in sorrow on Never Ending Hill in 1777. The most evil of all metals, this element has caused black horsemen attending dangerous meetings to easily fall and break their necks, be stricken by the curse of the Pharaohs, or receive a visit from the dead. At the sound of the demon bell, 18 will become 9 and everything will burn to hell. Emits a high-pitched screeching noise when nailed with seven silver spikes. Potency increases exponentially when it comes into contact with Andyla rock.

### Satyriconium

A very soluble compound, Satyriconium is a product of human decay after Eczema. It was discovered in dark medieval times, on a very frosty morning in 1349, by a nemesis divina named Mother North while she walked the path of sorrow in the woods to eternity. Satyriconium is also found near Volcanos, where it lingers in the air, filling humans with ravenous hunger and causing them to eat black lava, a known fuel for hatred. Afterwards it becomes K.I.N.G. and the Age of Nero begins, at which point it isn't nearly as potent as it used to be. In many cases only the norsecore of the element remains.

### Darkthronium

A very radioactive metal which has the power to start blazes in the Northern Sky and make Transylvanians hungry.
It was discovered on the night of the funeral moon after an eon of soulside journeying through the twilight dimension of the land of frost. Once brought back to our world, it unleashed the pagan winter. The substance had the Kathaarian Life Code inscribed in it, a code that will invoke the sardonic wrath of the Goatlord who resides in the watchtower. It used to be used by the Nazis in their panzerfaust weapons to cause total death and ravishing grimness. Since then, the metal has been entombed in a Cromlech inside Neptune Towers and closely guarded by a nocturnal cult and Fenriz wolves. Known to have acquired some punk traits in later phases.

### Celtic Frostium

A very valuable metal discovered in the crypts of rays. It was shown to mega therion, who responded with a necromantical scream. Has the power to return emperors after their jeweled thrones have been usurped. Was once enshrined in the circle of the tyrants, but was dropped into a cold lake and never seen again, except by some monotheists who claim to have a telepathic connection with the substance. Eventually recovered and turned into Triptykonium.

### Sabbatium (Japanese isotope)

One of the most abundant black metalliums, if people know where to find it. A Japanese demon named Gezol made this element from the remnants of the Hiroshima bomb. It will envenom your children and evoke them to run to Mion's hill to dance around black fire and worship metalucifer and evilucifer. Calls upon Samurai Zombies to summon Orochi to start a Karmamassacre and general Harmageddon. Gezol also forged a Satanasword with the metal, along with his own Karisma and Fetishism. If you don't get afflicted with Kanashibari, Gezol can still use the metal to send you to the dwelling where the death mask lies.

### Black Gothium

Sometimes found in a pasty white colour. Often confused with Emonium Cytrate. The most common form of Black Gothium is Marilynmansonius, which, if exposed to, causes the user to be a Beautiful Person.
Found in volcanic formations in Iceland and Norway, and in some English crack dens. An addict to the drug, who became pregnant while under a black gothium-caused stupor, said: "I feel terrified to think that I'm going to have this child in nine months, and I don't know who the father is. He must be a horrible, FILTHY man!" A rotting Christ is a common obsession among those who possess this element.

### Melecheshium

Very powerful and rare Black Metallium found mostly in Mesopotamia and in the Sacred Geometry of Burning Jerusalem. Discovered in the Epigenesis by Melechesh Ashmedi the Dead Terrorist, Melecheshium is not like any other Black Metallium and is known to never rust, even when subjected to The Scribes of Kur. Ashmedi climbed up the Ladders to Sumeria and witnessed the Rebirth of the Nemesis, but the Mystics of the Pillars saved him, and then they fell. That is how Ashmedi discovered Melecheshium. If exposed to Melecheshium, either the victim's face generally melts off, or the god Melechesh comes and deems the user worthy to embrace the power of the Ghouls of Nineveh.

## Symphonic Metallium

Loosely related to and often found alloyed with power metal and progressive metal, Symphonic Metallium is known to magnetically attract opera singers and orchestras when electrified. It has also been reported that Symphonic Metallium repels Nu Metallium and Slipknotium at unsafe speeds, to the point that all owners of Nu Metallium and Slipknotium often evacuate their homes and get to a safe distance whenever a Symphonic Metallium owner is in the area. It is incredibly dramatic, and this can lead to the input of string sections into previously untainted metals. In stark contrast to this, however, are those addicted to it as a drug, known for their white faces, poor spelling and constant hunger. These traits were popularized in the parody film "Emperor", when the protagonist scribbled on a piece of paper: "Feuckitt mannn, i wont a bjige goddimmu borgir!"
(Translation: "Fuck it man, I want a big goddamn burger.") An especially pure sample of Symphonic Metallium, discovered in Sweden and called Therionite, is revered by mystics for its alleged occult properties.

## Death Metallium

Considered the "heaviest" of heavy metals, it can react violently to produce both a low growling sound and a high-pitched screech, similar to black gothium but much more grating. This metal is pure glistening black in color, and can be used to create indestructible corpsegrinders and bolt throwers. If it comes into contact with a dead body it will create a Cannibal Corpse, and instances of Deicide, Death, and Immolation have occurred. Via long and excessive grinding of the metal's atomic cores, it can be refined into Grindcore, a metal so much heavier that it turns into a splattering mess 4 seconds after formation due to the extreme forces of gravity and taste. Suffocation, anaal nathrakhs, and behemoth wounds are often symptoms of radiation. People have tried to mix it with Metallium Gothenburgium and Power Metallium, but this resulted in scar symmetry, a medical condition that's even more painful than it sounds. The result of the mixing, however, was discovered intact (after everyone nearby had been evacuated) and found to be surprisingly heavy. Examples of Death Metallium include Abnormalitium and Cephalic Carnagium. Death Metallium has been mixed with thrash and black metallium several times with mixed results. Diabolical creations of this fusion include Destroyerium 666, Rigor Mortisium, and Acid Deathium. If exposed to too much sceneium pollution, Death Metal can deteriorate into deathcore, in a process known as a "breakdown". During this process, a strange sound is produced. This sound is called "pig squeal" by some.

### Bolt Throwerium

Said to be a manlier form of Manowarium. Several British mercenaries entered the Realm of Chaos to create this metal, where in battle there is no law.
Upon completion they were hailed as warmasters by those once loyal, and forged weapons with the metal for use in the IVth crusade. Though the metal has been used for victory with honour, valour, and pride, there are only sparse amounts of this metal left in existence.

### Cannibal Corpsium

The most violent and used of all death metalliums, Cannibal Corpsium is used as a utensil in eating back to life, a tool in butchering at birth, and a key to the tomb of the mutilated. It leads to bloodthirst if one is bleeding in proximity to it, dormant bodies bursting from the anus when ingested, and gore obsession if a victim of the evisceration plague is near it. It was discovered in the vile gallery of suicide. Strongly used by Zombies. Excessive use has been known to cause addiction to vaginal skin.

### Klokatellium

Formed of the blackest of black humours, Klokatellium is found by going into the water, or in the horns of awakened lake trolls. This dethmetallium has perfect symmetry, and it is infused into bloodlines. It brands gears, causes rock and roll clowns to do cocaine, makes people commit mermaid-er, and gets them crushed by comets.

### Cryptopsium

Cryptopsium is used by those who have blasphemy made flesh. It leads to whisper supremacy, and then you'll beg. Many who have seen Cryptopsy often claim afterwards that there are none so vile (referring to other metalliums). It loses its death metallium qualities in its 6th stage.

### Deathium

Deathium, along with Morbid Angelium, was one of the first death metalliums. It is much requested and used by Philosophers, due to its Sacred Serenity, its unmistakable Sound of Perseverance, and its ancient Symbolic meaning; but its complex Individual Thought Patterns have made scientists think that this element has a Lack of Comprehension.
If an average Human tries to manage this element without permission and/or A Moment of Clarity, he (or she) may catch Leprosy, suffer a Rigor Mortis-like experience, and, in the worst cases, scream "Bloody Gore!". In that case, the victim must be attended to immediately by a Death Metallium specialist in order to receive Spiritual Healing. Although highly useful in all of its stages, deathium is no longer found.

### Deicidium

Deicidium is used in the creation of serpents of the light legions. It was often used in torture devices, which is why it was found once upon the cross. It is recommended for use in a band playing the insineratehymn. People who bear scars of the crucifix created by deicidium will have to be tormented in hell. Those around deicidium have noticed a strong stench of redemption. The use of this element is strongly forbidden by the Catholic Church (or any Theistic Religion). Nonetheless, it's one of the most popular Death Metalliums to date.

### Destroyerium 666

A warslut decided that violence was the prince of this world and unchained the wolves in order to find this metal. He finally found it as the phoenix was rising. In defiance of Christianity, the warslut forged satan's hammer with the metal and raped some women in the spiritual wasteland, becoming the Australian Antichrist. This metal is often used as cold steel for an iron age.

### Dying Fetusium

Noted for its association with America, Dying Fetusium is used in purification through violence, killing on adrenaline, destruction of opposition, stopping at nothing, as war spoils in wars of attrition, and in descending into depravity. Many changes in the chemical structure of dying fetusium have occurred, but the properties stay generally the same.

### Morbid Angelium

Morbid Angelium, along with Deathium, was one of the first death metalliums. Morbid angelium is used in sacrifices at the altars of madness inside the chapel of ghouls.
Those who take part in domination use it, and it is a key ingredient in gateways to annihilation and formulas fatal to flesh. It was used to sign the covenant called blessed are the sick. People have chanted immortal rites while holding this element, only to receive visions of the darkside and damnation inside the maze of torment. The Ancient Ones originally found this element on the day of suffering, after they fell from grace. They have since used the metal to build the chamber of dis, create a place for the slime to live, give rise to the God of Emptiness, and chant the Invocation of the Continual One.

### Nileum

Found in ancient Egypt by an archeologist named Karl Sanders amongst the catacombs of Nephren-Ka. Known to spread black seeds of vengeance across the earth and churn the maelstrom of Ramses, bringer of war. Can make your ithyphallus grow for use in masturbating the war god, annihilating the wicked, and selecting those whom the gods detest. Useful if you are lashed to the slave stick, or are trying to escape from a darkened shrine, or really need to permit the noble dead to descend to the underworld.

### Necrophagistium

It's the harbinger of woe, know it for sure... It was found at the onset of putrefaction of Chernobyl; only ash remains there. When it was brought to Germany, a researcher suffered an accidental Stabwound from a sharpened edge of the metal, and in just Seven minutes this poor guy started to feel ignominious and pale, condemned to spend his last few moments to breathe in a casket. The foul body autopsy revealed some horrifying truth: regardless of the host's gender, the final stage is always the same, featuring an ogrish Intestinal Incubation (at first thought to be some kind of advanced corpse tumor), which causes severe poisoning of the blood stream with an extreme unction of viruses, in order to weaken its host until it kills it; then, whilst it is dead, it starts to feed from it (hence the name "Necrophagistium").
The doctors tried to mutilate the Stillborn One... a terrible mistake, since the creature threw a Fermented Offal Discharge at them, and it took just seconds to dissolve their bodies. The epitaph on all of their graves reads the same line: "They died due to a Pseudopathological Vivisection provoked by some virus which is considered Symbiotic in Theory". The antidote is now believed to be just a myth.

### Vaderium

As it turns out, Darth Vader's real name is Peter. Peter created this metal to start his morbid Reich and chant the ultimate incantation. He also used it to write the litany for receiving dark transmissions, to be able to see what black to the blind looks like. As heavens collided, he used the blood of kingu to finally unveil the revelation of Black Moses. Peter changed this metal to make the art that today is known as war, and he will lead us!!! until the darkest age arrives. He currently resides in the Necropolis.

### Rebeldeum

The heaviest and the tr00est of all metals, Rebeldeum. Nothing more needs to be said.

## Grindcorium

A refined version of Death Metallium. Was created over in England in 1982. Incredibly hard and strong in nature, it is mainly used in the production of terrorizers, pig destroyers, and extreme noise terrors, as well as the escalation of insect warfare. Misuse will result in constant breebreebreeing, heinous killings, a spate of napalm deaths, agoraphobic nosebleeds, Analrapophobias, carcasses, and Staircase Abortions. Oftentimes it can cause a sore throat. The brutal truth is, you will likely regurgitate.

### Napalm Deathium

No one knows for sure where this element came from, but its early phases are extremely unstable. It starts to settle down in its third state. Looking at this element can cause fear, emptiness, and despair. As time went on, this metal grew stronger, and those who supported it started a smear campaign ridiculing all metals that weren't Napalm Deathium.
### Anal Cuntium

Why not try "one suicide with grindcore, get one free" with Anal Cuntium, a band that's no relation to Tourette's Guy in any way, shape, or form. In this band, you have a choice of saying that You Sold Your Dog to a Chinese Restaurant. Metal became stronger and stronger from 1988 through the present due to Anal Cuntium.

### Goregrindium

This is an incredibly heavy and unstable isotope of the Grindcore metal. It was discovered back in 1987 by high-power quantum chemist Bill Steer, in an attempt to create an evil carcass army. This metal, upon formation, is so unstable that it emits an ultra-low gurgling sound and explodes, regurgitating into a scalding-hot red-colored goop that continues to emit a low gurgling sound until its 24-second half-life expires. People caught within the radius of the sound wave are likely to develop Chronical Diarrhoea and a Dislocated Cerebrum. In more serious stages, the individual may even develop Paracoccidioidomicosisproctitissarcomucosis, a disease that is basically all the evil of the world put together, and is likely to kill the individual by exploding his brain in a matter of seconds. If that happens, the body must be quickly taken to The County Medical Examiners before the disease spreads. The aforementioned lunatic Bill Steer discovered Goregrindium along with an insane Scouser named Jeffery Dahmer...um, I mean Jeffery Walker, Ken Owen (the inventor of Custard), and a dog called Michael belonging to Prince Charles. These men were not killed by the disease Paracoccidioidomicosisproctitissarcomucosis, but were instead possessed by the evil president of China, Pete Burns, and were made to spread acts of immense atrocity through the power of music. Some retards came up with another theory that these "friends" just made a band and actually eat food, not the decaying pancreas of corpses... Hahaha... they should write for Wikipedia.
### Exhumedium

A very slimy metal that will warp your mind and turn you into a necromaniac, thus causing you to start a slaughtercult, crush caskets, and go crazy with a chainsaw. Also used to create the matter of splatter. Keep out of reach of children.

### Nasumium

Criminally unknown despite its undeniable contribution to Grindcore, this metal was found somewhere in the wild nature of Sweden. Those who have ever encountered this metal were reportedly blown away by its sheer brutality, dissonance, and politically unfriendly aura.

## Metalcorium

This form of heavy metal is formed when the atomic "core" of pure heavy metal converges with an element known as trivium, commonly found at the time where August Burns Red in the semen of sheep belonging to weird-looking holy folk, who usually beTreyu. This element was created accidentally, when an unemployed farmer with extreme Suicidal Tendencies had made a murder-suicide pact with his Valentine; but being unemployed and consequently poor, he could not afford any Bullets with which to complete the act. Seeking a Job fit for a Cowboy, he lied underOath to a local farmer to give him a funeral for a friend. To illustrate how useful he would be on the farm, he offered to manually masturbate a nearby sheep for purposes of artificial insemination, and produced massive quantities of Trivium in the process. Unfortunately, the force of the Escape of the semen, unPlanned for, killed All the poor sheep (named Dillinger) that Remained, and the farmer, enraged, fired at the young boy with his Killswitch Engage (a very useful weapon by even today's standards), but missed, striking the pile of sheep semen on the ground As the sheep Lay Dying, creating a certain type of Hatebreed. As the bullets were made of pure heavy metal, so metalcorium was formed. Out of ammo, the farmer nonetheless Avenged himself by kicking the boy in the nuts Sevenfold, giving him a high-pitched, whiny voice.
### Deathcorium

Although this element is not as famous as Metalcorium, it is certainly more destructive, due to the lethal combination of Metalcorium and Death Metallium. It was discovered by four forensics who were investigating The Black Dahlia Murder case in The Red Shore in Whitechapel, and who managed to handle the isotope well. People who can handle this element Have Wrestled with a Bear Once, can Bring The Horizon to their feet without any problem, declare that Heaven Shall Burn, and can successfully do a Job for a Cowboy. And also, these Arsonists manage to Get All the Girls in the world. However, those who are too weak to manage it are condemned to suffer, to see The Red Dead, a God Forbid, Divine Heresy that consists of Bleeding Through, Bleed from Within, and to constantly receive Winds of Plague while listening to The Red Chord, even As They Sleep, or when seeing the Burning Skies. This constant suffering will decrease their Loathsome Faith and make them scream A Plea for Purging, but nothing will work and they will be forced to make a Suicide Silence. According to deathcorium specialist Dr. Oceano Caliban, All the Despised Icon who don't know how to manage it Shall Perish. It's extremely destructive, but not quite as destructive as Death Metallium.

## Metallium Gothenburgium

Discovered at the gates in the early nineties, it has gained popularity in recent years, and is fast overtaking many of its cousins as the primary active ingredient in M.O.S.H cores, a vital component of modern Nucular Weapons. It has been confirmed that extreme exposure might set people in flames. This Heavy Metal is very expensive, as it took years of Soilwork to obtain it. Has been known to put users in a state of dark tranquillity, causing severe cases of insomnium and nightrage to occur. Mixing with Black Gothium may result in disarmonia mundi, and mixture with pure black metallium is considered hypocrisy.
But the most violent reactions occur when it is mixed with Thrash Metallium in the presence of a woman; this leads to decadence, orphan hate, and the abrupt realization of your arch enemy. The resulting energy from this, in fact, can light entire cities.

### In Flamesite (Pronounced "In-Flam-E-cite")

Originally found in a sphere behind space, this metal uses a lunar strain to place the everlost upon an oaken throne. A sample of this substance was once used as the first prize for the winner of the jester race. The winner was Lord Hypnos, who used the substance to forge a moonshield. He traded off the parts of the substance he didn't use to the Whoracle for some "alone time" with her. The Whoracle in turn used the metal, along with Lord Hypnos's semen, to create a colony that embodies the invisible. Sick of coerced coexistence with the human world, the colony used the remaining parts of the substance to build a Clayman to destroy everything on the pinball map except for satellites and astronauts. Some say there are traces of this element left, but many agree that its power was lost after the Clayman was finished being built.

### Dark Tranquilium

This strong and brutal alloy was discovered in a ghost town, the terminus where death is most alive, in fact. This alloy is made by decomposing In Flamesite in its primitive state. It can also be found in The gallery of the sacred reichstag. People high from smoking Dark Tranquilium have been seen skydancing over the shore of time.

### Hypocrisite

Long ago in a fractured millennium, the Norse god known as Peter Tagtgren recited an incantation from the necronomicon at Roswell 47, which transported him to the 4th dimension. While falling into the abyss, he looked and saw a fire in the sky, and from then on he knew that god is a lie and wanted his own taste of extreme divinity. The arrival of the demons resulted in him being abducted. They used a mind eraser to make him a slave to the parasites, and they used him to adjust the sun.
The penetraliation of the sun created an outpouring of the new element known as Hypocrisite. Caught in a catch-22, he suddenly felt drained and began slippin' away. The aliens tried to dissect him, but he didn't want to be incised before he'd ceased. He went on a warpath and took over the ship, crashing it in the valley of the damned. He took the alien leader and hung him high. Using the newfound element, he formed a new solar empire and today is well on his way to global domination.

## Bodomite

In ancient times, on a silent night, Bodom night, three unsuspecting teenagers from Finland were murdered with a triple-corpse hammerblow from Roy The Reaper by the shores of Lake Bodom. What nobody would discover for decades was that beneath the lake lay an undiscovered bed of razors. These were no ordinary razors; they were made of an alloy of Pure Black Metallium, Thrash Metallium, and Power Metallium. But as the waters of Bodom turned a shade of red while these children of bodom drew their last breaths, their blood mixed with the metal under the surface, unleashing the wrath within, and thus Bodomite was formed. It wasn't discovered until years later, but the five Finnish kids who found it were soon infected with it and given insane playing skills. Soon the rest of the world was too: Bodomite is very catchy. But beware: possession of Bodomite carries a terrible curse that will make the user trashed, lost, and strungout, and feel like they're being needled 24/7.

Bodomite is one of the few Heavy Metals that cannot be cloned easily using an industrial process. Trust me, people have tried it, but not even the Finns can come up with a carbon... err, Bodomite copy. Naildownium and MorsPrincipiumEstium mark two failed experiments in that field. The latter, however, turned out to be very sticky, and one day in 2004, a scientist let it out of their sight, and the test sample just happened to fall out of the beaker. It came into contact with the fragile flesh of other, more pure samples, and absorbed them to make a hell of a badass mixture. Mixing Bodomite with more Thrash Metallium causes cold and angsty winds to blow from the North.

Many believe that the razors used in Bodomite limit its intensity and overall vanire. They came to the conclusion that the razors had various impurities, with only one substance being the main strength (known as Alexite). Near Lake Bodom resides a swamp from which a trident emerged from the growling depths to Dance on the Water. The trident appeared to have been fashioned in the same way as the razors but left to cool in a bog. This, ironically, left it with a cleaner look and more pretentious edges, but it still felt raw and muddy. It was later named "Kalmahdium": so defying of all laws of physics that it should defy the laws of grammar. However, because of its rarity it was never revealed to the public eye, and thus its true benefits were never known. Its discoverer, "Tordah", was negotiating with the Swamplord to have it removed from his property, but it shows no signs of Withering Away. In 2003, the Swamplord attacked Tordah With Terminal Intensity. He was admitted to hospital with cases of Cloned Insanity and Hollow Heart. After Defeat of all medical options, Tordah succumbed to Suodeth. From the police inquiry, the Swamplord said his motives were "For My Nation" and "For The Revolution". Svieri Doroga has continued in place of Tordah ever since. In 2005, the Swamplord was Doubtful About It All and performed a "Black Waltz" in memory of his victim.

## Viking Metallium

One of many pagan gods or goddesses used in pagan heavy metal genre concerts. First synthesized in a bar in Sweden during the 80's by Professor Quorthon of Bathory university in Tumba, it is a moderately stable metal, and equally rare. Ensiferum Metallium is the most recently discovered form, and is touted by scientists as one of the greatest breakthroughs in heavy metal in recent years.
Listeners have been known to become enslaved by this metal being unleashed on their ears. Still more surprisingly, Viking Metallium has been found to bond quite effectively with Progressive Metallium, producing an exquisite alloy called Týrium.

## Folk Metallium

Often considered a close chemical cousin to Viking Metallium, this form of metal was first synthesized by British scientists Martin Walkyier and Steve Ramsey. Eventually, more exotic forms of this metal came to be discovered in remote, previously unexplored territories of rural Finland. Although Folk Metallium is often found in alloys, Finntrollite and Korpiklaanium are considered to be relatively pure specimens, with Finntrollite being the heavier and less stable of the two. There is, however, a wide range of natural variation, from the very light and beautiful lumps of this metal found in Norway, to the very heavy and dense Swiss variety Eluveitite. In general, Folk Metallium is a dangerous substance and must be handled with the utmost care, as repeated exposure is prone to cause rapid, uncontrolled movement along with ethanol saturation in susceptible individuals. Failure to use caution while handling Folk Metallium may result in Moonsorrow.

### Ensiferumite

The most common Finnish Folk Metallium. Often used in swords and magic potions. From Afar, this metal looks like a piece of boring Iron, but only if you are Lost in Despair. If you hold this metal on you when you die, you will be sent to the Heathen Throne near the Twilight Tavern, where you can sing Victory Songs about the metal until The New Dawn.

### Moonsorrowite

The most powerful Finnish Folk Metallium. This metal was born of ice in a stream of shadows, where it ravaged a land and drove it into the fire. It was later found by the stone bearer, the child of oblivion, and taken to the city of the gods.

### Agallochite

Has a pleasant aroma, unlike most metals but not unlike the waves. Often used to make mantels and weave pale folklore. Sometimes found in falling snow and ashes against the grain.

### Eluveitium

This amazing element is still a mystery to many. Some believe it's made of Fire, Wind and Wisdom, although it is highly possible that it was created by some Swiss scientist who tried to call the rain while dancing on a bloodstained ground, and that his primordial breath created this element. Still, this element remains mysterious, as it might come from an Otherworld, but it will always remain as it never was, even though inis mona.

## Avant-Garde Metallium

Discovered by physicist Mike Patton. Not much is known, except that it is used in the production of Fantomas. More info is supposed to be found soon, as the isotope is on the brink of the future, but, by definition, it will always be on the brink of the future. It is said to come from the Third Brightest Star in the Firmament via ways of an uneXpected meteor crash. A particularly prized sample of this metal has been found on the distant star Arcturus, which unfortunately went supernova in 2007, vexing physicists all over the world. Not much is known about Avant-Garde Metallium, apart from the fact that the pioneers of this are Pinkly Smooth. While there he found a body of death in the man with the body of death! He was so distraught he became obsessed with Mezmer, a magical substance jizzed out by The Rev. While in rehab, The Rev discovered a pixel, which he later fused to a nasal and created Nosferatu, who later created a hefty dance.

### UneXpecTium

UneXpecTium, otherwise known as uneXpectium, unexpecTium, or ∞, is a rare Avant-Garde Metallium, so far only known to be found in the ChaotH between Heaven and Hell, an area so unexpected it rips holes in the space-time fabric, the contents of which fall into Quebec. Scientists believe the isotope UneXpecTium to be the cause of the French-Canadian Rift.
Although this metal can generally be used beneficially, some potential threats are involved, such as a case of the Shivers, strange visions of puppets, heads exploding, and the nonsensical replication of bass guitar strings. UneXpecTium has been known to completely reform its radioactivity every couple of years (such as UneXpecTium 1 [Utopium] and UneXpecTium 2 [In A Flesh Aquarium]). UneXpecTium is planned to reform again sometime soon. This new UneXpecTium is very mysterious, although it has been divulged that it will be very harmful to Okapis, as the official website of this isotope says.

## Stoner Metallium

Completely derived from Black Sabbathium, this form of heavy metal claims to be archaic in structure, but is absolutely fucked in nature. Migraines, the shits, and anal rape are constant symptoms of it. It often copies the composition of other elements, such as Saint Vitusium, Earthium, Sunn O)))ium (pronounced Sunium), and Borisium in its early stages. Many of these elements are used as either drugs or medicine.

### ReverendBizarreium

One of the rarest of the Stoner Metalliums, this Metal was discovered in the 1800s by witchunter Albert Witchfinder. Scientists generally dislike classifying it under Stoner Metallium, but it has been categorized as such due to the Sodoma Sunrise era Witchfinder lived in. It was first unearthed in The Rectory of the Bizarre Reverend in Finland, with a second mass expedition led by The Goddess of Doom, Christina Ricci. It is most commonly found in the Eternal Forest of Cromwell and is used as a tool to uncover items From the Void.

### Melvinsium

This Metal has been known to bring great intelligence to its users. In fact, it helped Kurt Cobain achieve Nirvana. It was discovered by the brilliant mad scientist Buzz Osborne, who found the mineral while picking through his gigantic grey afro. He then used it in his pencil to write A History Of Bad Men and History Of Drunks, both widely read in modern universities. There have also been rumors of Houdini putting the mineral in his Hooch to help him in his magic. Buzz Osborne has also created the world's first Civilized Worm using Melvinsium. It is now used in most Gluey Porch Treatments and Lysol.

### Sunn O)))

The most valuable stoner metallium; the only side effect is contraction of the BWOOOM syndrome. It has been used to build my wall, the gates of Ballard, altars, two whites, the Black One, Monoliths and Demensions. It is defeating Earth's gravity. It can also be used by hunters and gatherers in Cydonia and Big Churches. One of Atilla Casthar's favorite metals.

### Clutchium

Generally known for being mined in Maryland, Clutchium is used in electric wiring. Though Electric Worry can complicate this and may spawn a zombie apocalypse (similar to Left 4 Dead 2), 50,000 Unstoppable watts in the wiring can stop this. When given to a large number of People at once, The Mob Goes Wild (Not zombies). If used often for long periods of time, the user becomes Immortal. It is advised that boats never carry this isotope, since it can either Sink 'Em Low or create a Sea Of Destruction. It can also detect who is the chosen one by giving him a Burning Beard. This metallium is used in a drug combined with Spacegrass. The Spacegrass is commonly smoked by several people, including Willie Nelson, A Shogun Named Marcus, and the Left 4 Dead 2 survivors.

### Corrosion Of Conformitite

This element, commonly found in the South, is generally formed from the fossils of Hardcore Punks. It is a crucial part of Downium. It is generally believed to give listeners religious Visions, varying anywhere from being In The Arms Of God to living in hell. Use this Stuff obsessively and you will go Blind from Corrosion of Your Eyes and suffer from Animosity. It is often used in bullets, bullets that people vote with. It can also summon Albatrosses and Clean Your Wounds if used correctly, but misuse could leave someone Drowning in a Daydream.
### Acid Bathium

An extremely rare metal isotope, known to cause obscure hallucinations: exposed people believe that they hear butterflies scream or see 13 fingers on their hands. Exposed people tend to become dope fiends. It can be found in the throat of dead girls just after a man jerked off in their mouths.

### Kyussium

Known to be found in Sky Valley. A perfect alternative for weed.

### QotStoneAgeium

This highly addictive metal can be created by splitting Kyussium into its individual isotopes; Dr. J. Homme discovered this phenomenon while attempting to understand its properties in his parents' basement/laboratory. This is the Philosopher's Stone of crackheads, dopeheads, junkies, Jesus, or any filthy chickenshit hippie. Properly handling this object can simply create Nicotine for those who are beginning to learn its secrets. Valium and Vicodin can be learned easily in the Streets. Marijuana has been known to be created on a daily basis at every college. Ecstasy can be done under a chemistry degree (or not, who cares). Finally, Alcohol is well known to have first been created through this precious metal when Kenny Rogers took over Mordor. It has been speculated that some of the finest alchemists from Hogwarts have been known to make Cocaine, but not much is known.

## Pirateium Metallium

A rare form of metal; the first major discovery of it was the Running Wildium that was originally found. After a few years in which it was fairly abundant, it became harder to find. However, when Battleheartium was discovered it gained a small but dedicated consumer base. Battleheartium, however, evolved into Alestormium, a much more successful and popular variant. Excessive exposure will lead to being followed Over The Seas by The Huntmaster, a mythical creature that allegedly lives Under Jolly Roger and enjoys Wenches and Mead. Another rare form is Swashbucklium. Frequent exposure will bring you Back to the Noose, to join the Crew of the Damned.

## Experimental Metallium

An element brought to humans by the Elderly Normal Samurai Tortoises, mainly seen in the form of Estradasphereium. Estradasphereium combines forms of all elements, and is primarily used in Jungle Warfare.
https://askthetask.com/112304/equation-given-slope-passes-through-given-point-write-equation
Find the equation of a line with the given slope that passes through the given point. Write the equation in the form Ax + By = C.

m = -5, point (-4, -8)

Answer: 5x + y = -28

Step-by-step explanation:

Hi there! We're given that a line has a slope of -5 and contains the point (-4, -8), and we want the equation of this line in standard form, Ax + By = C, where A, B, and C are integer coefficients, A and B are not both zero, and A is not negative.

First write the equation in slope-intercept form, y = mx + b, where m is the slope and b is the y-intercept. Since we already know the slope, plug it in for m:

y = -5x + b

To find b, use the fact that the line passes through (-4, -8): substitute -4 for x and -8 for y, then solve:

-8 = -5(-4) + b
-8 = 20 + b

Subtract 20 from both sides:

b = -28

So in slope-intercept form the equation is y = -5x - 28. Standard form puts the x and y terms on the same side, so add 5x to both sides:

5x + y = -28

Hope this helps!
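The same slope-point-to-standard-form arithmetic can be sketched as a small Python helper. This is an illustration, not part of the original answer; the function name `standard_form` is ours.

```python
def standard_form(m, x0, y0):
    """Return (A, B, C) with A*x + B*y == C for the line of slope m through (x0, y0),
    assuming an integer slope and integer point coordinates."""
    b = y0 - m * x0            # from y = m*x + b evaluated at (x0, y0)
    A, B, C = -m, 1, b         # move the x term over: -m*x + y = b
    if A < 0:                  # standard form wants a non-negative x coefficient
        A, B, C = -A, -B, -C
    return A, B, C

print(standard_form(-5, -4, -8))  # -> (5, 1, -28), i.e. 5x + y = -28
```

With the values from the question this reproduces the answer above; for a slope of 2 through (1, 3) it would give 2x - y = -1.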
http://websrv.cs.umt.edu/isis/index.php?title=Blatter-Pattyn_Boundary_Conditions&diff=prev&oldid=2422
# Blatter-Pattyn Boundary Conditions

We will go through an approximate derivation of the boundary conditions that are implemented with Glimmer/CISM's higher-order scheme. By "approximate" we mean that some of the derivation is guided by physical intuition and what appear to be "reasonable" arguments, rather than through the application of rigorous mathematics. We take comfort in the fact that, in the end, we wind up with the same sets of equations that one ends up with from the more rigorous approach. We will look at the derivation in three parts: (1) the free surface boundary condition, (2) the specified basal traction boundary condition, and (3) lateral boundary conditions.

## Stress Free Surface

At the ice surface, a stress-free boundary condition is applied. The traction vector, T, must be continuous at the ice sheet surface and, assuming that atmospheric pressure and surface tension are small, we have

\begin{align} & T_{i}=-T_{i(boundary)}\approx 0 \\ & T_{i}=\sigma _{ij}n_{j}=\sigma _{i1}n_{1}+\sigma _{i2}n_{2}+\sigma _{i3}n_{3}=0, \\ \end{align}

where the $n_i$ are the components of the outward-facing unit normal vector in Cartesian coordinates. For a function F(x,y,z) = f(x,y) - z = 0, where z = f(x,y) defines the surface, the gradient of F(x,y,z) gives the components of the surface normal vector:

$\nabla F=\left( \frac{\partial f}{\partial x},\frac{\partial f}{\partial y},-1 \right),$

which has magnitude

$a=\sqrt{\left( \frac{\partial f}{\partial x} \right)^{2}+\left( \frac{\partial f}{\partial y} \right)^{2}+1^{2}}.$

For the case of the ice sheet surface, s = f(x,y), and the unit surface normal is given by

$n_{i}=\left( \frac{\partial s}{\partial x},\frac{\partial s}{\partial y},-1 \right)\frac{1}{a}.$
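As a quick numerical check of the normal-vector formula in this section, here is a minimal Python sketch (the function name is ours; `dsdx` and `dsdy` stand for the surface slopes ∂s/∂x and ∂s/∂y):

```python
import math

def surface_normal(dsdx, dsdy):
    """Unit surface normal n_i = (ds/dx, ds/dy, -1)/a at the ice surface z = s(x, y),
    with a = sqrt((ds/dx)^2 + (ds/dy)^2 + 1), following the formulas in the text."""
    a = math.sqrt(dsdx ** 2 + dsdy ** 2 + 1.0)
    return (dsdx / a, dsdy / a, -1.0 / a)

print(surface_normal(0.0, 0.0))  # flat surface -> (0.0, 0.0, -1.0)
```

Dividing by a guarantees the result has unit length, which is what the stress-free condition σ_ij n_j = 0 assumes.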
https://www.aimsciences.org/article/doi/10.3934/dcdss.2012.5.591
# Rate-independent processes with linear growth energies and time-dependent boundary conditions

A rate-independent evolution problem is considered for which the stored energy density depends on the gradient of the displacement. The stored energy density does not have to be quasiconvex and is assumed to exhibit linear growth at infinity; no further assumptions are made on the behaviour at infinity. We analyse an evolutionary process with positively $1$-homogeneous dissipation and time-dependent Dirichlet boundary conditions.

Mathematics Subject Classification: Primary: 74C15; Secondary: 49J45, 74G65.
http://finmath.net/finmath-lib/apidocs/net/finmath/montecarlo/IndependentIncrementsInterface.html
finMath lib documentation

net.finmath.montecarlo

## Interface IndependentIncrementsInterface

### Method Summary

- `IndependentIncrementsInterface getCloneWithModifiedSeed(int seed)`: Return a new object implementing BrownianMotionInterface having the same specifications as this object but a different seed for the random number generator.
- `IndependentIncrementsInterface getCloneWithModifiedTimeDiscretization(TimeDiscretizationInterface newTimeDiscretization)`: Return a new object implementing BrownianMotionInterface having the same specifications as this object but a different time discretization.
- `RandomVariableInterface getIncrement(int timeIndex, int factor)`: Return the increment for a given timeIndex and factor.
- `int getNumberOfFactors()`: Returns the number of factors.
- `int getNumberOfPaths()`: Returns the number of paths.
- `RandomVariableInterface getRandomVariableForConstant(double value)`: Returns a random variable which is initialized to a constant, but has exactly the same number of paths or discretization points as the ones used by this BrownianMotionInterface.
- `TimeDiscretizationInterface getTimeDiscretization()`: Returns the time discretization used for this set of time-discrete Brownian increments.

### Method Detail

#### getIncrement

`RandomVariableInterface getIncrement(int timeIndex, int factor)`

Return the increment for a given timeIndex. The method returns the random variable ΔX_j(t_i) := X_j(t_{i+1}) − X_j(t_i) for the given time index i and a given factor (index) j.

Parameters:
- timeIndex: The time index (corresponding to this class's time discretization)
- factor: The index of the factor (independent scalar increment)

Returns: The factor (component) of the increments (a random variable)

#### getTimeDiscretization

`TimeDiscretizationInterface getTimeDiscretization()`

Returns the time discretization used for this set of time-discrete Brownian increments.

Returns: The time discretization used for this set of time-discrete Brownian increments.

#### getNumberOfFactors

`int getNumberOfFactors()`

Returns the number of factors.

Returns: The number of factors.

#### getNumberOfPaths

`int getNumberOfPaths()`

Returns the number of paths.

Returns: The number of paths.

#### getRandomVariableForConstant

`RandomVariableInterface getRandomVariableForConstant(double value)`

Returns a random variable which is initialized to a constant, but has exactly the same number of paths or discretization points as the ones used by this BrownianMotionInterface.

Parameters:
- value: The constant value to be used for initializing the random variable.

Returns: A new random variable.

#### getCloneWithModifiedSeed

`IndependentIncrementsInterface getCloneWithModifiedSeed(int seed)`

Return a new object implementing BrownianMotionInterface having the same specifications as this object but a different seed for the random number generator. This method is useful if you like to make Monte-Carlo samplings by changing the seed.

Parameters:
- seed: New value for the seed.

Returns: New object implementing BrownianMotionInterface.

#### getCloneWithModifiedTimeDiscretization

`IndependentIncrementsInterface getCloneWithModifiedTimeDiscretization(TimeDiscretizationInterface newTimeDiscretization)`

Return a new object implementing BrownianMotionInterface having the same specifications as this object but a different time discretization.

Parameters:
- newTimeDiscretization: New time discretization

Returns: New object implementing BrownianMotionInterface.
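The interface above is Java and part of finmath-lib; as a language-neutral illustration of the concept it models (not finmath code, and with a function name and data layout of our own choosing), here is a minimal Python sketch of time-discrete Brownian increments: for each interval [t_i, t_{i+1}] and each factor j, draw one sample per path of ΔW_j(t_i) ~ N(0, t_{i+1} − t_i) from a seeded generator, so the analogue of `getCloneWithModifiedSeed` is simply calling again with a different seed.

```python
import math
import random

def brownian_increments(times, n_paths, n_factors, seed):
    """For each time interval and each factor, draw n_paths normal samples
    with mean 0 and variance (t_{i+1} - t_i), using a seeded generator.
    incs[timeIndex][factor] loosely mirrors getIncrement(timeIndex, factor)."""
    rng = random.Random(seed)
    incs = []
    for i in range(len(times) - 1):
        sigma = math.sqrt(times[i + 1] - times[i])  # std dev of the increment
        incs.append([[rng.gauss(0.0, sigma) for _ in range(n_paths)]
                     for _ in range(n_factors)])
    return incs

incs = brownian_increments([0.0, 0.5, 1.0], n_paths=10000, n_factors=2, seed=3141)
dW = incs[0][1]  # increment over [0.0, 0.5] for factor 1, one value per path
```

Re-running with the same seed reproduces the same samples, which is the property the seed-based cloning in the interface relies on.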
https://zbmath.org/authors/?q=ai%3Avinokurov.v-a
# zbMATH — the first resource for mathematics

## Vinokurov, V. A.

Author ID: vinokurov.v-a
Published as: Vinokurov, V.; Vinokurov, V. A.
Documents Indexed: 82 Publications since 1967

#### Co-Authors

- 49 single-authored
- 11 Sadovnichiĭ, Viktor Antonovich
- 7 Plichko, Anatoliĭ Mykolaĭovych
- 3 Petunin, Yuriĭ Ivanovich
- 2 Gladun, L. V.
- 2 Menikhes, Leonid Davidovich
- 2 Novak, V. I.
- 2 Repnikov, N. F.
- 2 Satov, A. K.
- 2 Shaposhnikov, V. L.
- 1 Domanskij, E. N.
- 1 Gaponenko, Yu. L. A. N.
- 1 Kostikov, Yu. A.
- 1 Nurgalieva, I. G.
- 1 Vvekenskaya, E. V.
- 1 Yuvchenko, N. V.

#### Serials

- 23 Soviet Mathematics. Doklady
- 17 U.S.S.R. Computational Mathematics and Mathematical Physics
- 14 Zhurnal Vychislitel'noĭ Matematiki i Matematicheskoĭ Fiziki
- 9 Doklady Mathematics
- 3 Differential Equations
- 3 Differential Equations
- 2 Mathematical Notes
- 1 Doklady Akademii Nauk SSSR
- 1 Izvestiya Vysshikh Uchebnykh Zavedeniĭ, Matematika
- 1 Matematicheskie Zametki
- 1 Soviet Mathematics
- 1 Soviet Physics. Doklady
- 1 Journal of Mathematical Sciences (New York)
- 1 Izvestiya: Mathematics
- 1 Doklady Physics
- 1 Matematicheskie Zapiski

#### Fields

- 32 Numerical analysis (65-XX)
- 22 Ordinary differential equations (34-XX)
- 20 Operator theory (47-XX)
- 14 Functional analysis (46-XX)
- 7 Approximations and expansions (41-XX)
- 6 Measure and integration (28-XX)
- 6 Partial differential equations (35-XX)
- 6 General topology (54-XX)
- 3 Real functions (26-XX)
- 2 Category theory; homological algebra (18-XX)
- 2 Statistics (62-XX)
- 2 Mechanics of deformable solids (74-XX)
- 2 Geophysics (86-XX)
- 1 Order, lattices, ordered algebraic structures (06-XX)
- 1 Linear and multilinear algebra; matrix theory (15-XX)
- 1 Potential theory (31-XX)
- 1 Dynamical systems and ergodic theory (37-XX)
- 1 Harmonic analysis on Euclidean spaces (42-XX)
- 1 Convex and discrete geometry (52-XX)
- 1 Global analysis, analysis on manifolds (58-XX)
- 1 Probability theory and stochastic processes (60-XX)
- 1 Computer science (68-XX)
- 1 Mechanics of particles and systems (70-XX)
- 1 Quantum theory (81-XX)
- 1 Biology and other natural sciences (92-XX)

#### Citations contained in zbMATH

24 Publications have been cited 73 times in 56 Documents.

- Asymptotics of any order for the eigenvalues and eigenfunctions of the Sturm-Liouville boundary value problem on a segment with a summable potential. Zbl 1001.34021. Vinokurov, V. A.; Sadovnichij, V. A. (2000)
- The asymptotics of eigenvalues and eigenfunctions and a trace formula for a potential with Delta functions. Zbl 1043.34092. Vinokurov, V. A.; Sadovnichii, V. A. (2002)
- Asymptotics of eigenvalues and eigenfunctions and the trace formula for a potential that contains $\delta$-functions. Zbl 1090.34603. Vinokurov, V. A.; Sadovnichiĭ, V. A. (2001)
- The eigenvalue and trace of the Sturm-Liouville operator as differentiable functions of a summable potential. Zbl 0967.34076. Vinokurov, V. A.; Sadovnichij, V. A. (1999)
- Strict regularizability of discontinuous functions. Zbl 0626.54014. Vinokurov, V. A. (1985)
- On some problems of linear regularizability. Zbl 0544.47012. Vinokurov, V. A.; Domanskij, E. N.; Menikhes, L. D.; Plichko, A. N. (1983)
- A posteriori estimates of the solutions of ill-posed inverse problems. Zbl 0523.65046. Vinokurov, V. A.; Gaponenko, Yu. L. (1982)
- On the range of variation of an eigenvalue when the potential is varied. Zbl 1143.34325. Vinokurov, V. A.; Sadovnichii, V. A. (2003)
- Uniform equiconvergence of a Fourier series in eigenfunctions of the first boundary value problem and of a Fourier trigonometric series. Zbl 1066.34514. Vinokurov, V. A.; Sadovnichij, V. A. (2001)
- Arbitrary-order asymptotics of the eigenvalues and eigenfunctions of the Sturm-Liouville boundary value problem with integrable potential on a closed interval. Zbl 0953.34019. Vinokurov, V. A.; Sadovnichij, V. A. (1998)
- Regularizability and analytic representability. Zbl 0321.54006. Vinokurov, V. A. (1975)
- On a necessary condition for regularizability in the sense of Tikhonov. Zbl 0218.54003. Vinokurov, V. A. (1970)
- On normalizing subspaces of a conjugate space and regularizability of inverse operators. Zbl 0585.46013. Vinokurov, V. A.; Gladun, L. V.; Plichko, A. N. (1985)
- Measurability and regularizability of mappings inverse to continuous linear operators. Zbl 0447.46001. Vinokurov, V. A.; Petunin, Yu. I.; Plichko, A. N. (1980)
- Regularizable functions in topological spaces and inverse problems. Zbl 0423.65032. Vinokurov, V. A. (1979)
- General properties for the errors of approximate solutions of linear functional equations. Zbl 0217.52803. Vinokurov, V. A. (1971)
- Mathematical description of artificial sense-of-touch systems. Zbl 1385.92004. Vinokurov, V. A.; Sadovnichy, V. A. (2010)
- Analytic dependence of the solution of a linear differential equation on integrable coefficients. Zbl 1098.34048. Vinokurov, V. A. (2005)
- The eigenvalue and the eigenfunction of the Sturm-Liouville problem treated as analytic functions of the integrable potential. Zbl 1092.34578. Vinokurov, V. A. (2005)
- An explicit solution to a linear ordinary differential equation and the main property of the exponential function. Zbl 0912.34010. Vinokurov, V. A. (1997)
- The logarithm of the solution of a linear differential equation, Hausdorff's formula, and conservation laws. Zbl 0772.34012. Vinokurov, V. A. (1991)
- A method for the numerical solution of linear differential equations. Zbl 0529.65032. Vinokurov, V. A. (1982)
- The order of the error when computing a function whose argument is specified approximately. Zbl 0293.65037. Vinokurov, V. A. (1974)
- Two notes on the choice of regularization parameter. Zbl 0257.47007. Vinokurov, V. A. (1973)
2000 The eigenvalue and trace of the Sturm-Liouville operator as differentiable functions of a summable potential. Zbl 0967.34076 Vinokurov, V. A.; Sadovnichij, V. A. 1999 Arbitrary-order asymptotics of the eigenvalues and eigenfunctions of the Sturm-Liouville boundary value problem with integrable potential on a closed interval. Zbl 0953.34019 Vinokurov, V. A.; Sadovnichij, V. A. 1998 An explicit solution to a linear ordinary differential equation and the main property of the exponential function. Zbl 0912.34010 Vinokurov, V. A. 1997 The logarithm of the solution of a linear differential equation, Hausdorff’s formula, and conservation laws. Zbl 0772.34012 Vinokurov, V. A. 1991 Strict regularizability of discontinuous functions. Zbl 0626.54014 Vinokurov, V. A. 1985 On normalizing subspaces of a conjugate space and regularizability of inverse operators. Zbl 0585.46013 Vinokurov, V. A.; Gladun, L. V.; Plichko, A. N. 1985 On some problems of linear regularizability. Zbl 0544.47012 Vinokurov, V. A.; Domanskij, E. N.; Menikhes, L. D.; Plichko, A. N. 1983 A posteriori estimates of the solutions of ill-posed inverse problems. Zbl 0523.65046 Vinokurov, V. A.; Gaponenko, Yu. L. 1982 A method for the numerical solution of linear differential equations. Zbl 0529.65032 Vinokurov, V. A. 1982 Measurability and regularizability of mappings inverse to continuous linear operators. Zbl 0447.46001 Vinokurov, V. A.; Petunin, Yu. I.; Plichko, A. N. 1980 Regularizable functions in topological spaces and inverse problems. Zbl 0423.65032 Vinokurov, V. A. 1979 Regularizability and analytic representability. Zbl 0321.54006 Vinokurov, V. A. 1975 The order of the error when computing a function whose argument is specified approximately. Zbl 0293.65037 Vinokurov, V. A. 1974 Two notes on the choice of regularization parameter. Zbl 0257.47007 Vinokurov, V. A. 1973 General properties for the errors of approximate solutions of linear functional equations. Zbl 0217.52803 Vinokurov, V. A. 
1971 On a necessary condition for regularizability in the sense of Tikhonov. Zbl 0218.54003 Vinokurov, V. A. 1970 all top 5 #### Cited by 57 Authors 7 Mitrokhin, Sergeĭ Ivanovich 4 Leonov, Aleksandr Sergeevich 4 Pechentsov, A. S. 3 Domanskij, E. N. 3 Ostrovskii, Mikhail Iosifovich 3 Plichko, Anatoliĭ Mykolaĭovych 3 Sadovnichaya, I. V. 2 Banakh, Taras Onufrievich 2 Bokalo, Bogdan 2 Sadovnichiĭ, Viktor Antonovich 2 Vinokurov, V. A. 2 Vladimirov, Anton A. 2 Yang, Chuanfu 1 Akhmerova, Eh. F. 1 Bellen, Alfredo 1 Blanes, Sergio 1 Bukovský, Lev 1 Casas, Fernando 1 Cheng, Xiaoliang 1 Chu, Chieping 1 Dragunov, Denys V. 1 Efremova, Liubov S. 1 Ezhak, Svetlana Sergeevna 1 Freiling, Gerhard 1 Gao, Qin 1 Gladun, L. V. 1 Guo, Shuyuan 1 Huang, Zhengda 1 Ignat’ev, Mikhail Yur’evich 1 Kadchenko, Sergeĭ Ivanovich 1 Kakushkin, Sergeĭ Nikolaevich 1 Karulina, Elena S. 1 Kokurin, Mikhail Yur’evich 1 Kolos, Nadiya 1 Koyunbakan, Hikmet 1 Liskovets, O. A. 1 Makarov, Volodymyr Leonidovich 1 Makhnei, Oleksandr V. 1 Menikhes, Leonid Davidovich 1 Moan, Per Christian 1 Morozov, Vladimir Alekseevich 1 Niesen, Jitse 1 Ostrovsky, Alexey Vladimirovich 1 Petunin, Yuriĭ Ivanovich 1 Podol’skiĭ, Vladimir Evgen’evich 1 Popov, Anton Yur’evich 1 Rossokhata, Nataliya O. 1 Shieh, Chung-Tsun 1 Šupina, Jaroslav 1 Telnova, Maria 1 Trynin, Aleksandr Yur’evich 1 Vasin, Vladimir Vasil’evich 1 Viloche Bazán, Fermín S. 
1 Wang, Yuping 1 Xu, Guixin 1 Zennaro, Marino 1 Zhang, Meirong all top 5 #### Cited in 29 Serials 6 Mathematical Notes 6 Siberian Mathematical Journal 5 Differential Equations 4 Topology and its Applications 3 Journal of Computational and Applied Mathematics 3 Journal of Soviet Mathematics 2 Computational Mathematics and Mathematical Physics 2 Russian Mathematics 2 Journal of Mathematical Sciences (New York) 2 Vladikavkazskiĭ Matematicheskiĭ Zhurnal 2 Proceedings of the Steklov Institute of Mathematics 1 Journal of Mathematical Analysis and Applications 1 Journal of Mathematical Physics 1 Moscow University Mathematics Bulletin 1 Ukrainian Mathematical Journal 1 Zhurnal Vychislitel’noĭ Matematiki i Matematicheskoĭ Fiziki 1 Archiv der Mathematik 1 Results in Mathematics 1 Acta Mathematicae Applicatae Sinica. English Series 1 Journal of Scientific Computing 1 Linear Algebra and its Applications 1 Russian Journal of Mathematical Physics 1 Sbornik: Mathematics 1 Foundations of Computational Mathematics 1 Central European Journal of Mathematics 1 Vestnik Samarskogo Gosudarstvennogo Tekhnicheskogo Universiteta. 
Seriya Fiziko-Matematicheskie Nauki 1 Ufimskiĭ Matematicheskiĭ Zhurnal 1 Carpathian Mathematical Publications 1 Sibirskiĭ Zhurnal Chistoĭ i Prikladnoĭ Matematiki all top 5 #### Cited in 18 Fields 29 Ordinary differential equations (34-XX) 24 Operator theory (47-XX) 14 Numerical analysis (65-XX) 8 Functional analysis (46-XX) 4 General topology (54-XX) 3 Calculus of variations and optimal control; optimization (49-XX) 2 Mathematical logic and foundations (03-XX) 2 Measure and integration (28-XX) 2 Partial differential equations (35-XX) 2 Integral equations (45-XX) 1 Nonassociative rings and algebras (17-XX) 1 Real functions (26-XX) 1 Potential theory (31-XX) 1 Difference and functional equations (39-XX) 1 Approximations and expansions (41-XX) 1 Mechanics of deformable solids (74-XX) 1 Operations research, mathematical programming (90-XX) 1 Biology and other natural sciences (92-XX)
https://math.stackexchange.com/questions/127459/each-discrete-space-is-a-polish-space
# Each discrete space is a Polish Space

I'm trying to solve exercise 6.3#7 from Sidney A. Morris' "Topology without tears": "Prove that each discrete space [...] is a Polish space."

I started by proving that discrete spaces are always completely metrizable with the discrete metric. But then I got stuck. As far as I know, the only dense subset of a discrete space is the whole space. But does that not mean that only countable discrete spaces are separable (and therefore Polish) spaces?

- You're right. Polish spaces are second countable by definition, and an uncountable discrete space is not second countable. – t.b. Apr 3 '12 at 0:28
- Thanks! Just a typo then. Do you want to write it as an answer so that I can give you credit? – jerico Apr 3 '12 at 0:30
- Better yet, write your own answer and (after the necessary lapse of time) accept it. – Brian M. Scott Apr 3 '12 at 0:31

## 1 Answer

See t.b.'s comment above confirming my assumption.
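For completeness, the two facts used above — completeness of the discrete metric and separability only in the countable case — can be written out briefly (my sketch, not part of the thread):

```latex
% Discrete metric: $d(x,y) = 1$ for $x \neq y$ and $d(x,x) = 0$.

\paragraph{Completeness.} If $(x_n)$ is Cauchy, choose $N$ with
$d(x_m, x_n) < 1$ for all $m, n \ge N$. In the discrete metric this
forces $x_m = x_n$, so the sequence is eventually constant and
therefore converges. Hence $(X, d)$ is complete.

\paragraph{Separability.} Every singleton $\{x\}$ is open, so the only
dense subset of a discrete space $X$ is $X$ itself. Thus $X$ is
separable if and only if $X$ is countable, and a discrete space is
Polish if and only if it is countable.
```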
https://gateoverflow.in/25432/tifr2013-a-10
# TIFR2013-A-10

Three men and three rakhsasas arrive together at a ferry crossing to find a boat with an oar, but no boatman. The boat can carry one or at most two persons, for example, one man and one rakhsasa, and each man or rakhsasa can row. But if at any time, on any bank (including those who may be in the boat as it touches the bank), rakhsasas outnumber men, the former will eat up the latter. If all have to go to the other side without any mishap, what is the minimum number of times that the boat must cross the river?

1. $7$
2. $9$
3. $11$
4. $13$
5. $15$

The boat must cross the river $11$ times, so option (C) is correct.

Writing $M$ for a man and $R$ for a rakhsasa:

- Step 1: Two rakhsasas row to the other side; one gets out and one rows back. First bank: $3M + 2R$; second bank: $1R$. The boat crosses $2$ times.
- Step 2: Two rakhsasas row over again; one stays and one rows back. First bank: $3M + 1R$; second bank: $2R$. The boat crosses $2$ times.
- Step 3: Two men row over and get out. First bank: $1M + 1R$; second bank: $2M + 2R$. The boat crosses $1$ time.
- Step 4: One man and one rakhsasa row back, then two men row over. First bank: $2R$; second bank: $3M + 1R$. The boat crosses $2$ times.
- Step 5: One rakhsasa rows back and brings one more rakhsasa over. First bank: $1R$; second bank: $3M + 2R$. The boat crosses $2$ times.
- Step 6: Step 5 is repeated for the last rakhsasa, and everyone is across. The boat crosses $2$ times.

Total number of crossings $= 2 + 2 + 1 + 2 + 2 + 2 = 11$.

Correct Answer: $C$

Comments:

- Is there a general procedure for this problem, for $N$ men and $N$ rakhsasas?
- When you try it, you will get a similar sequence automatically; otherwise the rakhsasas can eat a man.
- This is correct; it has to be $11$.
- In step 4, how did the boat come back? Three $M$'s are on the other side, and an $R$ came rowing back. Step 5: it will take the two other $R$'s one by one. That makes a total of $8 + 3 = 11$.
- The answer is right up to step 4. At the end of step 4 the boat is at the right bank with $3M$ and $1R$. The boat returns to the left bank with $1R$ (count: $8$). Now $3R$ are on the left bank, and $2R$ row to the right bank, leaving $1R$ behind (count: $9$). The last rakhsasa can be brought over in $2$ more crossings, with any $R$ or $M$ accompanying it, for a final count of $9 + 2 = 11$.
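On the question of a general procedure for $N$ men and $N$ rakhsasas: a breadth-first search over bank states answers it directly. This is a sketch of my own (not from the thread); it confirms the minimum of $11$ crossings for $N = 3$.

```python
from collections import deque

def min_crossings(n_men=3, n_rak=3, capacity=2):
    """BFS over states (men on left bank, rakhsasas on left bank, boat side).

    A state is safe when, on each bank, men are not outnumbered;
    a bank with zero men is always safe."""
    def safe(m, r):
        # Check both banks; the other bank holds (n_men - m, n_rak - r).
        for men, rak in ((m, r), (n_men - m, n_rak - r)):
            if men > 0 and rak > men:
                return False
        return True

    start = (n_men, n_rak, 0)   # everyone on the left bank, boat on the left
    goal = (0, 0, 1)            # everyone on the right bank
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (m, r, b), d = queue.popleft()
        if (m, r, b) == goal:
            return d
        sign = -1 if b == 0 else 1  # boat moves people away from its bank
        for dm in range(capacity + 1):
            for dr in range(capacity + 1 - dm):
                if dm + dr == 0:    # the boat cannot cross empty
                    continue
                nm, nr = m + sign * dm, r + sign * dr
                if 0 <= nm <= n_men and 0 <= nr <= n_rak and safe(nm, nr):
                    state = (nm, nr, 1 - b)
                    if state not in seen:
                        seen.add(state)
                        queue.append((state, d + 1))
    return None  # no safe schedule exists
```

The same function answers the general question for other sizes, e.g. `min_crossings(2, 2)` gives the known minimum of 5 crossings for two of each.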
http://www.gradesaver.com/textbooks/math/precalculus/precalculus-mathematics-for-calculus-7th-edition/chapter-1-section-1-7-modeling-with-equations-1-7-exercises-page-79/79
## Precalculus: Mathematics for Calculus, 7th Edition

1) $V=180$ feet$^{3}$
2) 2 feet by 6 feet by 15 feet

1) Volume = (length)(width)(height), so the volume is $180$ feet$^{3}$.

2) $x(x-4)(x+9)=180$
$x^{3}+5x^{2}-36x=180$
$x^{3}+5x^{2}-36x-180=0$
$x^{2}(x+5)-36(x+5)=0$
$(x^{2}-36)(x+5)=0$
$(x+5)(x+6)(x-6)=0$
$x+5=0$ gives $x=-5$; $x+6=0$ gives $x=-6$; $x-6=0$ gives $x=6$.
$x=6$ is the only positive answer. Therefore, the box is $x-4=2$ feet by $x=6$ feet by $x+9=15$ feet.
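As a quick numerical check (not part of the textbook solution), the cubic's roots and the resulting box volume can be verified with NumPy:

```python
import numpy as np

# Roots of x^3 + 5x^2 - 36x - 180 = 0, i.e. (x+5)(x+6)(x-6) = 0.
roots = np.roots([1, 5, -36, -180])
real_roots = sorted(int(round(r.real)) for r in roots)  # all three are real

# Only the positive root x = 6 gives valid box dimensions x-4, x, x+9.
x = 6
dims = (x - 4, x, x + 9)
volume = dims[0] * dims[1] * dims[2]
```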
http://www.physicsforums.com/showthread.php?p=1189939
## How to prove trace(A.A*) is positive

Hello. I'd like to know how to prove that the trace of A.A* is positive. I don't really know how to handle the imaginary part of it. If A has any complex numbers in it, is it possible to get traces like 10 - 2i? If yes, do I consider that a positive number or a negative one?

---

First of all, the diagonal entries of AA* are real. You can't really compare two complex numbers like that, as there is no order on C. Now, what does the (i,j)-th entry of AA* look like? What about the (i,i)-th entry?

(Side note: tr(AA*) isn't always positive - it can be zero. So a better thing would be to say that it's nonnegative.)

---

> Quote by devoured_elysium: I'd like to know how to prove that the trace of A.A* is positive.

My sketchy knowledge about linear algebra tells me that you would have to relate the nature of the singular values of AA* to its trace being positive.

---

That would not be a very easy way of doing this question. The trace of (AA*) is $$\sum_{i,j} A_{ij}(A^*)_{ji}$$ What is the definition of A*?

---

Ah yes, thank you for the note.
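The hint above can be made concrete numerically (a sketch using NumPy, not from the thread; A* denotes the conjugate transpose):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))  # any shape works

# The (i,i) entry of AA* is sum_j A_ij * conj(A_ij) = sum_j |A_ij|^2,
# so tr(AA*) equals the sum of |A_ij|^2 over all entries of A:
t = np.trace(A @ A.conj().T)
s = (np.abs(A) ** 2).sum()
```

Since the trace is a sum of squared moduli, it is real and nonnegative, and zero exactly when A is the zero matrix.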
https://science.sciencemag.org/content/318/5852/949?ijkey=b863838b2f1f8b618225f7190d33ae472ef1797a&keytype2=tf_ipsecsha
# The Simplest Double Slit: Interference and Entanglement in Double Photoionization of H2

Report, Science, 09 Nov 2007: Vol. 318, Issue 5852, pp. 949-952. DOI: 10.1126/science.1144959

## Abstract

The wave nature of particles is rarely observed, in part because of their very short de Broglie wavelengths in most situations. However, even with wavelengths close to the size of their surroundings, the particles couple to their environment (for example, by gravity, Coulomb interaction, or thermal radiation). These couplings shift the wave phases, often in an uncontrolled way, and the resulting decoherence, or loss of phase integrity, is thought to be a main cause of the transition from quantum to classical behavior. How much interaction is needed to induce this transition? Here we show that a photoelectron and two protons form a minimum particle/slit system and that a single additional electron constitutes a minimum environment. Interference fringes observed in the angular distribution of a single electron are lost through its Coulomb interaction with a second electron, though the correlated momenta of the entangled electron pair continue to exhibit quantum interference.

One of the most powerful paradigms in the exploration of quantum mechanics is the double-slit experiment. Thomas Young was the first to perform such an experiment, as early as 1801, with light. It took until the late 1950s (1), long after the experimental proof of the wave nature of particles was revealed, for a similar experiment to be carried out with electrons. Today, such experiments have been demonstrated for particles as heavy as C60 (2) and for bound electrons inside a highly excited atom (3). All of these experiments were aimed at a demonstration of double-slit self-interference for a single particle fully isolated from the environment.
If, however, this ideal laboratory situation is relaxed and the quantum particles are put in contact with the environment in a controlled manner, the quantum interference may be diminished so that the particles start behaving in an increasingly classical way (4-6). Recently, Hackermüller et al. (7) have demonstrated this phenomenon by sending heated C60 clusters through a double slit. The hot molecules couple via the emission of thermal photons to the environment, and a loss of interference as a function of their temperature is observed. The emission of the photons alters the relative phase between different pathways of the particle toward the detector, an effect referred to as decoherence. Such decoherence of a quantum system can be caused by single or multiple interactions with an external system (6). Limiting cases are one single hard interaction causing the decoherence by entanglement with the external system and multiple weak couplings to external perturbers (for instance, a bath) at the other extreme. A gradual transition between these two extremes has been demonstrated for photon scattering (6). We experimentally demonstrated that a system of two electrons is already sufficient to observe the transition from a quantum interference pattern to a classical particle-like intensity distribution for an individual electron. The quantum coherence is not destroyed, however, but remains in the entangled two-electron system. By measuring the correlated momenta of both particles, we illustrate this interference pattern, which is otherwise concealed in the two-body wave function. The idea of using a homonuclear molecule as the slit-scattering center of a photoelectron goes back to a paper published in 1966 by Cohen and Fano (8). Because of the coherence in the initial molecular state, the absorption of one photon by the homonuclear molecule launches two coherent electron waves, one at each of the protons of the molecule (Fig. 1, A and B).
The interference pattern of these waves should be visible in the angular distribution of the electron, with respect to the molecular axis. In K shell ionization of heavy diatomics (e.g., N2 and O2, as discussed by Cohen and Fano), interference is visible only if the symmetry (gerade or ungerade) of the molecule with a single 1s hole is resolved (9, 10). For ground-state H2+ and H2 molecules, only gerade orbitals are populated, and thus these systems constitute clean cases where slitlike behavior is expected (11). Still, the originally proposed experiment on H2 (11) has not been carried out, because it requires knowledge of the direction of the molecular axis (12, 13). A signature of the interference effect has nonetheless been observed in the wavelength dependence of electrons emitted from a randomly oriented sample of H2 molecules by ion impact ionization (14, 15). We extended the idea of Cohen and Fano from single photoionization to double photoionization to study the two-body interference of an electron pair. This electron pair is emitted by absorption of a single circularly polarized photon from the H2 molecule (Eq. 1) $$h\nu + \mathrm{H_{2}} \rightarrow \mathrm{H^{+}} + \mathrm{H^{+}} + e_{1} + e_{2} \qquad (1)$$ hν symbolizes a photon of frequency ν. The two electrons are distinguishable by their energy, which allows us to study the interference pattern as a function of the interaction strength or momentum exchanged between the two particles. Single photons from beamlines 4.0 or 11.0 at the Advanced Light Source at Lawrence Berkeley National Laboratory were used to photoeject both electrons of each H2 molecule. A supersonic H2 gas jet was crossed with the photon beam. For each ionized molecule, the vector momenta of all fragment particles—both ions and both electrons—were determined in coincidence. The orientation of the H2 molecule, or molecular double slit, was measured for each fragmentation by detecting the emission direction of the two protons.
Once the two electrons are ejected, the protons rapidly fly apart along the molecular axis, driven by their mutual Coulomb repulsion. A multiparticle imaging technique (cold target recoil ion momentum spectroscopy) (16, 17) was used to detect all particles. The ions and electrons created in the intersection volume of the photon and gas beams were guided by weak electric (50 V/cm) and magnetic fields (8 G) toward two separate multichannel plate detectors with delayline readouts (18). From the position of impact and the time of flight, the initial vector momentum of each particle can be determined. Only three particles (two protons and one electron) need to be detected. The momentum of the second electron (in the present case the more energetic of the two) is deduced through momentum conservation of the total system. The Coulomb explosion of the two protons at the equilibrium distance of H2 of 1.4 atomic units (au) yields a kinetic energy of about 10 eV per proton (19), and the total electronic binding energy of H2 is about 30 eV. The experiment has been performed at two different photon energies of Eγ = 240 and 160 eV, leaving about 190 and 110 eV of energy to be shared among the two electrons, respectively. At the high photon energies under consideration here, double photoionization of H2 leads in most cases to one fast electron and one slow electron (20). Figure 1D shows, for ionization by 240-eV photons, the measured angular distribution for a highly energetic electron (called “1” here) of energy E1: 185 eV < E1 < 190 eV. The second electron, unobserved here, acquires an energy of only E2 < 5 eV. The angular distribution is in the plane perpendicular to the photon propagation vector, and the molecular axis is oriented horizontally in that plane. 
(The data plotted include events where electron 1 and the molecular axis lie within 10 degrees of the ideal plane perpendicular to the photon propagation direction.) The experimental data show a strong interference pattern that qualitatively resembles the pattern induced by a double slit. For the optical double-slit experiment in which the interference results from a superposition of two coherent spherical waves, the intensity distribution I is given by Eq. 2 $$I = C\cos^{2}\!\left(\frac{k_{e}R\cos\Phi_{e\text{-mol}}}{2}\right) \qquad (2)$$ In our case, R is the internuclear distance (1.4 au for H2), Φe–mol is the angle of electron emission with respect to the internuclear axis (12), ke is the momentum of the electron, and C is a proportionality constant. An electron energy of 190 eV (as in Fig. 1) corresponds to ke = 3.75 au. The double-slit prediction of Eq. 2 is shown by the blue line in Fig. 1E. The deviations from the double-slit prediction can be understood from the somewhat more elaborate theoretical treatment shown in Fig. 1E. By treating the electrons as spherical waves, the simple approximation in Eq. 2 neglects the fact that the electrons are ejected by circularly polarized light and further that they must escape from the two-center Coulomb potential of the two nuclei. The helicity of the light leads to a slight clockwise rotation of the angular distribution, as seen in the experiment and the more elaborate calculations. The Coulomb interaction with the nuclei has two major effects. First, the wavelength of the electron in the vicinity of the protons is shorter than the asymptotic value. This property modifies, in particular, the emission probability along the molecular axis due to a phase shift in the near field (21). Second, the original partial wave emerging from one of the nuclei is scattered at the neighboring nucleus, thereby launching another partial wave.
Thus, the final diffraction pattern is the superposition of four (or more) coherent contributions: the primary waves from the left and right nuclei and the singly or multiply scattered waves created subsequently in the molecular potential. We performed two calculations to take the helicity of the photon, as well as multiple scattering effects, into account. The first calculation (red line in Fig. 1E) was based on the random phase approximation (RPA) (22), and the second (black line in Fig. 1E) entailed a multiple scattering calculation, wherein a spherical wave is launched at one proton (23). This wave is then multiply scattered in the two-center potential of two protons, which is terminated at a boundary. The direct and multiple scattered waves are then coherently added and symmetrized. Although conceptually very different, both calculations account for all of the relevant physical features: the two-center interference determining the position of minima and maxima, the molecular potential altering the relative height of the peaks, and the helicity of the ionizing photon inducing a rotation. The details of the molecular potential differ in the calculations. The RPA uses a Hartree-Fock potential, whereas the multiple scattering calculation assumes two bare protons. The full calculations treat the emission of a single electron. Therefore, their good agreement with the experimental data (Fig. 1D) obtained from double ionization might be surprising. This suggests that the additional emission of a slow electron does not substantially alter the wave of the fast particle. For the particular case in which the electron pair consists of a fast and a very slow electron, the diffraction of a coherent electron pair can be treated by simply neglecting the slow electron. This simple one-particle picture completely fails in scenarios where lower primary and higher scattered electron energies result in stronger coupling between the electrons. 
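The simple double-slit prediction of Eq. 2 can be sketched numerically. The snippet below is a toy check of mine (not from the paper), with C set to 1 and the parameters quoted in the text (R = 1.4 au, ke = 3.75 au for a 190-eV electron):

```python
import numpy as np

R, k_e = 1.4, 3.75  # internuclear distance and electron momentum, in au

def intensity(phi):
    # Eq. 2 with C = 1: the phase difference between the waves launched
    # at the two protons is k_e * R * cos(phi).
    return np.cos(0.5 * k_e * R * np.cos(phi)) ** 2

phi = np.linspace(0.0, 2.0 * np.pi, 3601)
I = intensity(phi)
# Emission perpendicular to the molecular axis (phi = 90 deg) is always a
# maximum, because the path difference from the two centers vanishes there.
```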
Figure 2, A and B, shows the results for different energy partitions of the first and second electron after ionization by 160-eV photons. Whereas for E1 ≈ 110 eV and E2 < 1 eV the interference is still visible (Fig. 2A), it completely disappears when E1 ≈ 95 eV and 5 eV < E2 < 25 eV (Fig. 2B). In the latter case, the distribution approaches the isotropic result without two-center interference. By comparing these data to the corresponding theoretical estimates (Fig. 2, C and D), we can now show that the observed loss of interference contrast is a result of decoherence and not of the changing electron wavelength. Coulomb interaction between two quantum mechanical systems (electrons 1 and 2 in our case) does not destroy phases. Rather, it entangles the wave functions of the two subsystems (24, 5). In our experiment, we observed both electrons in coincidence. Therefore, we can investigate this entangled two-particle system to search for the origin of the apparent loss of coherence in a single-particle subsystem. Figure 3 shows the correlation in this two-body system. The horizontal axis is the angle of the fast-electron momentum with respect to the molecular axis (i.e., the angle that is plotted in all other figures). The vertical axis is the angle between the two electrons' momenta. It may be helpful to think of the horizontal axis as the scattering of electron 1 by the double slit and the vertical axis as the scattering angle between the electrons. Marked interference patterns emerge in this display of the two-particle wave function. No vestige of these patterns remains, however, if the distribution is integrated over the vertical axis. When subsets of the data with restricted angular ranges of Φe–e = +70 ± 20° (Fig. 3, B and C) and Φe–e = –70 ± 20° (Fig. 3, D and E) are examined, the interference pattern is resurrected (here, Φe–e is the angle between the two electron trajectories).
However, depending on the angle between the electrons in the selected subset of the data, the interference pattern is tilted to the right (Fig. 3C) or left (Fig. 3E). Without the restriction of this relative angle, the shifted minima and maxima cancel each other out, leading to the almost isotropic distribution of Fig. 2B. The interference maxima are concentrated along two horizontal lines. These lines of highest intensity lie at a relative angle of about 100° between the two electrons. This distribution is a well-known indication of the mechanism whereby the absorption of a single photon by one electron can induce its ejection, as well as that of the other electron, after their binary collision (20, 25). The angles Φe–e = 90° and Φe–e = –90° correspond to a kick of the second electron, either to the left or the right. This strong electron-electron Coulomb interaction mediates the double ionization and creates an entanglement between the two electrons. Electron collisions of this sort in bound systems have been demonstrated directly in pump-probe experiments (26). This situation is an intramolecular version of a scattering event downstream of a double slit (27, 6). When either photons (6) or particles (27) are scattered from a beam after passage through a double slit, the scattering induces a phase shift, which then leads to a shift of the interference pattern. If the momentum transfer is not measured in coincidence (6), the fringe visibility is lost. In this experiment, both electrons are initially delocalized inside the molecule in a completely coherent single quantum state. Before photoabsorption, both electrons are confined in the hydrogen ground state, which is symmetric with respect to its two atomic centers. Thus, we observed not a scattering between classical localized particles but a coherent entanglement of the wave function of the two electrons. 
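The washing-out of the fringes by the sum over shifted patterns can be illustrated with a toy model. This is my own illustration, not the paper's calculation: the fringe function and the phase offsets (±π/4, chosen so the two tilted patterns are exactly complementary) are invented for demonstration only.

```python
import numpy as np

# Toy illustration of decoherence by summation over subsets:
# each relative electron-electron angle selects a fringe pattern
# shifted by a phase delta; summing patterns with opposite shifts
# washes the fringes out, mimicking the near-isotropic Fig. 2B.
phi = np.linspace(0.0, 2.0 * np.pi, 720)

def fringes(phi, delta):
    """A cos^2 fringe pattern tilted by a phase offset delta."""
    return np.cos(5.0 * np.cos(phi) / 2.0 + delta) ** 2

left = fringes(phi, +np.pi / 4)   # subset with one relative angle
right = fringes(phi, -np.pi / 4)  # subset with the opposite angle
summed = 0.5 * (left + right)     # no coincidence selection

def contrast(I):
    """Fringe visibility (max - min) / (max + min)."""
    return (I.max() - I.min()) / (I.max() + I.min())

# Each subset shows full contrast; their sum is flat (contrast ~ 0).
print(contrast(left), contrast(summed))
```

The point mirrors the experiment: conditioning on the partner electron's angle selects one tilted pattern and restores visibility, while ignoring the partner averages complementary patterns into a featureless distribution.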
It is instructive to think of the electronic two-body system as split into its subsystems and to consider one subsystem as the environment of the other. The strong Coulomb interaction entangles the two subsystems and leads to a position-dependent modification of the phase of the single-particle wave function inside each of the two subsystems. The entanglement of the electrons in the pair is directly visible in their mutual angular distribution and is further evidenced by the observation that selecting the momentum of one electron makes the interference pattern of its partner reappear. In the spirit of discussions dating from the early history of quantum mechanics, one particle can be considered an observer that carries partial information about the other particle and its path through the double slit. The amount of which-way information exchanged between the particles is limited by the observer particle's de Broglie wavelength (28). The key difference between the situation depicted in Fig. 2A (which shows interference) and Fig. 2B (which shows no interference) is that the wavelength of the second, unobserved electron is much shorter in the latter case. Our experiment thus reveals that a very small number of particles suffices to induce the emergence of classical properties, such as the loss of coherence. A four-body system, such as fragmented molecular hydrogen, acts as a double slit in the sense that coherence is lost in a subsystem of entangled electrons. Such a fundamental system facilitates the study of the influence of interelectronic Coulomb interactions on the coherence properties of a single electron. In solid-state–based quantum computing devices, such electron-electron interaction represents a key challenge. One advantageous aspect of the finite system investigated here is that, theoretically, it is fully tractable at present (29–32).
https://www.gamedev.net/forums/topic/494359-string/
string

How do I use string? I include <string.h> or <string> in my cpp file (I have tried both), but when I declare it,

    string s1("test");

the compiler gives me an error that string is an undeclared variable. Can anyone help?

Use <string>. string resides in the std namespace, so unless you import it, try

    std::string s("test");

string.h is the C string library; its name in C++ is <cstring>. string is the name from the std::string library. You must prefix std:: before using string. You can also add, after your includes,

    using namespace std;

or

    using std::string;

if you want to get rid of std::. But don't do this in headers, as it will pollute your global namespace.

C++ strings live in the "std" namespace (meaning standard). You can use "std::" as a prefix to tell the compiler that you are using the string from this namespace, like so:

    std::string s1("test");

Alternatively, you can tell the compiler that you are going to be referring to the string inside namespace std like so:

    #include <string>
    using std::string;

    int main()
    {
        string s1("test");
        // ...
    }

If you are using lots and lots of things from namespace std, you can tell the compiler to check there for everything:

    #include <string>
    using namespace std;

    int main()
    {
        string s1("test");
        // ...
    }

The last two options should not be used in header files. Once you write a "using" directive, there is no way for files including that header to "un-use" a namespace. This can, in extreme cases, change the meaning of code. It is best to fully qualify symbols in header files for this reason. Finally, <string.h> is a C header. To use it in C++ write <cstring>. It does not contain std::string.
I have tried; it still gives me an error. It tells me that

    'std' : a namespace with this name does not exist
    'std' : is not a class or namespace name
    'string' : undeclared identifier

Maybe it can't find the path for string.h or cstring.h; I saw that most people use string.h. Do I have to put a path so the compiler can find string.h, or does it find it automatically?

Quote: Original post by totoro9 (quoted above)

It's not string.h, it's just string:

    #include <string>

    int main()
    {
        std::string s = "foo";
        return 0;
    }

The standard C++ library headers do not have a file extension. As in my example, and as mentioned by WanMaster, you need to #include <string>. There is no ".h".
http://www.researchgate.net/publication/1753126_Confined_and_ejective_eruptions_of_kink-unstable_flux_ropes
Article

# Confined and ejective eruptions of kink-unstable flux ropes

##### B. Kliem

The Astrophysical Journal (Impact Factor: 6.73). 07/2005; DOI: 10.1086/462412. Source: arXiv

ABSTRACT The ideal helical kink instability of a force-free coronal magnetic flux rope, anchored in the photosphere, is studied as a model for solar eruptions. Using the flux rope model of Titov & Demoulin (1999) as the initial condition in MHD simulations, both the development of the helical shape and the rise profile of a confined (or failed) filament eruption (on 2002 May 27) are reproduced in very good agreement with the observations. By modifying the model such that the magnetic field decreases more rapidly with height above the flux rope, a full (or ejective) eruption of the rope is obtained in very good agreement with the developing helical shape and the exponential-to-linear rise profile of a fast coronal mass ejection (CME) (on 2001 May 15). This confirms that the helical kink instability of a twisted magnetic flux rope can be the mechanism of the initiation and the initial driver of solar eruptions. The agreement of the simulations with properties that are characteristic of many eruptions suggests that they are often triggered by the kink instability. The decrease of the overlying field with height is a main factor in deciding whether the instability leads to a confined event or to a CME. Comment: minor update to conform to printed version; typo in table corrected

##### Article: Evolution of an eruptive flare loop system

ABSTRACT: Context. Flares, eruptive prominences and coronal mass ejections are phenomena where magnetic reconnection plays an important role. However, the location and the rate of the reconnection, as well as the mechanisms of particle interaction with ambient and chromospheric plasma, are still unclear. Aims.
In order to contribute to the comprehension of the above-mentioned processes, we studied the evolution of the eruptive flare loop system in an active region where a flare, a prominence eruption and a CME occurred on August 24, 2002. Methods. We measured the rate of expansion of the flare loop arcade using TRACE 195 Å images and determined the rising velocity and the evolution of the low- and high-energy hard X-ray sources using RHESSI data. We also fitted HXR spectra and considered the radio emission at 17 and 34 GHz. Results. We observed that the top of the eruptive flare loop system initially rises with a linear behavior and then, 120 min after the start of the event registered by GOES at 1–8 Å, it slows down. We also observed that the heating source (low-energy X-ray) rises faster than the top of the loops at 195 Å and that the high-energy X-ray emission (30–40 keV) changes in time, from footpoint emission at the very onset of the flare to being coincident during the flare peak with the whole flare loop arcade. Conclusions. The evolution of the loop system and of the X-ray sources allowed us to interpret this event in the framework of the Lin & Forbes model (2000), where the absolute rate of reconnection decreases when the current sheet is located at an altitude where the Alfvén speed decreases with height. We estimated that the lower limit for the altitude of the current sheet is $6 \times 10^{4}$ km. Moreover, we interpreted the unusual variation of the high-energy HXR emission as a manifestation of the nonthermal coronal thick-target process, which appears during the flare in a manner consistent with the inferred increase in coronal column density. Astronomy and Astrophysics, 01/2009. Impact Factor: 5.08

##### Article: Initiation and propagation of coronal mass ejections

ABSTRACT: This paper reviews recent progress in the research on the initiation and propagation of CMEs.
In the initiation part, several trigger mechanisms are discussed; in the propagation part, the observations and modelings of EIT waves/dimmings, as the EUV counterparts of CMEs, are described. Comment: 8 pages, 1 figure, an invited review, to appear in J. Astrophys. Astron. Journal of Astrophysics and Astronomy, 12/2007. Impact Factor: 0.34

##### Article: The writhe of open and closed curves

ABSTRACT: Twist and writhe measure basic geometric properties of a ribbon or tube. While these measures have applications in molecular biology, materials science, fluid mechanics and astrophysics, they are under-utilized because they are often considered difficult to compute. In addition, many applications involve curves with endpoints (open curves); but for these curves the definition of writhe can be ambiguous. This paper provides simple expressions for the writhe of closed curves, and provides a new definition of writhe for open curves. The open curve definition is especially appropriate when the curve is anchored at endpoints on a plane or stretches between two parallel planes. This definition can be especially useful for magnetic flux tubes in the solar atmosphere, and for isotropic rods with ends fixed to a plane. Journal of Physics A, General Physics, 06/2006; 39(26):8321.
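For a closed curve, the writhe referred to in the last abstract is the standard Gauss double integral, which can be estimated numerically from a polygonal discretization. This is a generic sketch under my own naming and discretization, not the simple closed-form expressions the paper derives:

```python
import numpy as np

# Numerical estimate of the writhe of a closed space curve via the
# Gauss double integral
#   Wr = (1/4pi) oint oint (T(s) x T(t)) . (r(s) - r(t)) / |r(s)-r(t)|^3 ds dt,
# discretized over edge midpoints of a closed polygon.
def writhe(points):
    """points: (N, 3) array of vertices of a closed polygonal curve."""
    pts = np.asarray(points, dtype=float)
    seg = np.roll(pts, -1, axis=0) - pts      # edge vectors (closing the loop)
    mid = pts + 0.5 * seg                     # edge midpoints
    ds = np.linalg.norm(seg, axis=1)          # edge lengths
    tangent = seg / ds[:, None]               # unit tangents
    n = len(pts)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = mid[i] - mid[j]
            dist = np.linalg.norm(rij)
            total += (np.dot(np.cross(tangent[i], tangent[j]), rij)
                      / dist**3) * ds[i] * ds[j]
    return total / (4.0 * np.pi)

# Sanity check: a planar circle has zero writhe, since the cross
# product of two in-plane tangents is perpendicular to the plane.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
print(writhe(circle))  # ~ 0
```

For open curves this discretization inherits exactly the ambiguity the abstract mentions: the double integral has no closing edge, so the result depends on how the endpoints are treated.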
https://math.stackexchange.com/questions/3036865/weak-homotopy-equivalence-induces-isomorpism-of-sets-of-homotopy-classes
# Weak homotopy equivalence induces isomorphism of sets of homotopy classes?

There is a question in my homework for the algebraic topology course asking whether two spaces $X$ and $Y$ are weakly homotopy equivalent in case for every cellular space $Z$ the sets $[Z,X]$ and $[Z,Y]$ are naturally bijective. I'm wondering if the converse statement holds, at least for cell complexes. Suppose $f\colon K\to K'$ is a weak homotopy equivalence of cell complexes $K,K'$, namely, $f_*\colon \pi_i(K)\to \pi_i(K')$ is an isomorphism for all $i$. I need to show that $f_*\colon [X,K]\to [X,K']$ is a bijection. First I tried to show that it is injective. Suppose $f_*[\alpha]=f_*[\beta]$ for some maps $\alpha,\beta\colon X\to K$. I want to prove that $\alpha\simeq \beta$. Since $f\circ\alpha\simeq f\circ\beta$, for every spheroid $\varphi\colon S^k\to X$ we have $f\circ(\alpha\circ\varphi)\simeq f\circ(\beta \circ\varphi)$, and thus $\alpha\circ\varphi\simeq \beta \circ\varphi$, because $f_*$ is a bijection between $\pi_k(K)=[S^k,K]$ and $\pi_k(K')=[S^k,K']$. Suppose $\varphi\colon S^k\to X$ is the attaching map for the cell $e^k$ in $X$. I then have a homotopy $H\colon S^k\times [0,1]\to K$, $H|_{t=0}=\alpha\circ\varphi$, $H|_{t=1}=\beta\circ\varphi$, which I wish I could extend over the whole cell. Indeed, I can use the HEP in this case for the homotopy $H$ and the map $\alpha|_{D^k}\colon D^k\to K$, and hence I have a homotopy $\tilde H\colon e^k\times [0,1]\to K$ between $\alpha$ and $\beta$. However, there is a problem: these homotopies do not necessarily agree. How do I deal with that?

• Oh my gosh, I've just realized that, in fact, all this time I've been trying to prove Whitehead's theorem – igortsts Dec 12 '18 at 16:58
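For reference, the statement the question is converging on is a standard formulation of Whitehead's theorem (this is my summary, in the question's notation; the theorem environment is assumed to be set up via amsthm):

```latex
% Standard formulation of Whitehead's theorem (see e.g. Hatcher,
% "Algebraic Topology", Theorem 4.5), in the notation of the question.
\begin{theorem}[Whitehead]
Let $f\colon K \to K'$ be a weak homotopy equivalence of CW complexes,
i.e.\ $f_*\colon \pi_i(K) \to \pi_i(K')$ is an isomorphism for all $i$.
Then $f$ is a homotopy equivalence; in particular, for every CW complex
$X$ the induced map
\[
  f_*\colon [X,K] \longrightarrow [X,K'],
  \qquad [\alpha] \longmapsto [f\circ\alpha],
\]
is a bijection.
\end{theorem}
```

The usual proof avoids the cell-by-cell patching problem raised above by working in the mapping cylinder of $f$ and applying the compression lemma, rather than extending homotopies one attaching map at a time.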
https://cran.r-project.org/web/packages/ARIbrain/vignettes/ARIbrain_example.html
# Introduction

ARIbrain is the package for All-Resolutions Inference in neuroscience. Here we show how to compute a lower bound for the proportion of active voxels (or any other spatially located units) within given clusters. The clusters can be defined a priori, on the basis of previous knowledge or on the basis of anatomical regions. Clusters of this kind are usually called ROIs. There are no limitations to the number of ROIs that can be evaluated in the same analysis; the lower bounds for the ROIs are valid simultaneously for all estimates (i.e. corrected for multiplicity). Even more interestingly, the clusters can be defined on the basis of the same data. This is possible because ARI allows for circular analysis while still controlling for multiplicity of inferences. In the following we show an analysis where clusters are defined by a supra-threshold-statistic rule. This is the typical case of cluster-wise analysis followed by a multiplicity correction based on Random Field Theory. Here we follow an alternative way: we provide a lower bound for the estimated proportion of active voxels.

## Syntax and parameters

The syntax of the function is (type ?ARIbrain::ARI for more details)

    ARI(Pmap, clusters, mask = NULL, alpha = 0.05, Statmap = function(ix) -qnorm(Pmap[ix]), summary_stat = c("max", "center-of-mass"), silent = FALSE)

The main input parameters of ARI() are:

• Pmap: the map of p-values, and
• clusters: the map of cluster indices.

The function accepts both character file names and 3D arrays. Therefore the minimal syntax is

    ARI(Pmap, clusters)

Other maps (parameters) are:

• mask, which is a 3D array of logicals (i.e. TRUE/FALSE means in/out of the brain). Alternatively, it may be a (character) nifti file name. If omitted, all voxels are considered.
• Statmap, which is a 3D array of statistics (usually t-values) on which the summaries are based. A file name is also accepted.
# Performing the analysis from nifti (nii) files

In order to perform the analysis you need:

• a zstat.nii.gz containing the test statistic used in the analysis
• a mask.nii.gz (not mandatory, but useful)
• a cluster.nii.gz image of cluster indices.

## Making the map cluster.nii.gz with FSL

You simply need to run in the shell:

    cluster -z zstat1.nii.gz -t 3.2 -o cluster.nii.gz

This will create the cluster.nii.gz that you need.

Hint: in case it returns an error message like cluster: error while loading shared libraries: libutils.so: cannot open shared object file: No such file or directory, type into the shell (replacing the path with your own path of the file fsl.sh):

    source /etc/fsl/5.0/fsl.sh

and try again. Get a complete help for FSL at https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Cluster

# ARI analysis

    library(ARIbrain)
    pvalue_name <- system.file("extdata", "pvalue.nii.gz", package="ARIbrain")
    cluster_name <- system.file("extdata", "cluster_th_3.2.nii.gz", package="ARIbrain")
    zstat_name <- system.file("extdata", "zstat.nii.gz", package="ARIbrain")
    mask_name <- system.file("extdata", "mask.nii.gz", package="ARIbrain")
    res_ARI = ARI(Pmap = pvalue_name, clusters = cluster_name, mask = mask_name, Statmap = zstat_name)

    ## A hommel object for 145872 hypotheses.
    ## Simes inequality is assumed.
    ## Use p.adjust(), discoveries() or localtest() to access this object.
    ##
    ## With 0.95 confidence: at least 10857 discoveries.
    ## 3387 hypotheses with adjusted p-values below 0.05.
    ##
    ##      Size FalseNull TrueNull ActiveProp dim1 dim2 dim3 Stat
    ## cl18 6907 5179 1728 0.74981902 17 57 38 7.826075
    ## cl17 4607 3409 1198 0.73996093 76 53 39 7.512461
    ## cl16 385 0 385 0.00000000 75 71 52 4.536705
    ## cl15 249 15 234 0.06024096 20 65 63 4.882405
    ## cl14 168 0 168 0.00000000 55 60 32 4.590014
    ## cl13 108 0 108 0.00000000 67 78 36 4.653047
    ## cl12 32 0 32 0.00000000 46 92 31 3.532471
    ## cl11 30 0 30 0.00000000 71 59 61 3.958764
    ## cl10 28 0 28 0.00000000 51 57 41 3.572852
    ## cl9 23 0 23 0.00000000 39 51 34 4.069620
    ## cl8 18 0 18 0.00000000 52 50 35 3.909603
    ## cl7 13 0 13 0.00000000 32 54 36 3.654027
    ## cl6 11 0 11 0.00000000 43 49 37 3.496598
    ## cl5 9 0 9 0.00000000 15 36 46 3.762685
    ## cl4 7 0 7 0.00000000 46 53 40 3.504706
    ## cl3 2 0 2 0.00000000 46 44 35 3.330204
    ## cl2 1 0 1 0.00000000 19 87 41 3.237725
    ## cl1 1 0 1 0.00000000 52 58 46 3.376935
    ## cl0 133273 0 133273 0.00000000 42 56 43 3.199741

    str(res_ARI)

    ## num [1:19, 1:8] 6907 4607 385 249 168 ...
    ## - attr(*, "dimnames")=List of 2
    ## ..$ : chr [1:19] "cl18" "cl17" "cl16" "cl15" ...
    ## ..$ : chr [1:8] "Size" "FalseNull" "TrueNull" "ActiveProp" ...

# other ARI examples

## using arrays

    library(RNifti)
    Tmap = readNifti(zstat_name)
    mask = readNifti(mask_name)
    # make sure that the mask is a logical map
    mask = mask != 0
    # compute p-values from the test statistic (referring to the normal distribution, right-sided alternative)
    Pmap = pnorm(-Tmap)
    # create clusters using a threshold equal to 3.2
    clstr = cluster_threshold(Tmap > 3.2)
    table(clstr)

    ## clstr
    ##      0      1      2      3      4      5      6      7      8      9
    ## 890030      1      1      2      7      9     11     13     18     23
    ##     10     11     12     13     14     15     16     17     18
    ##     28     30     32    108    168    249    385   4607   6907

    res_ARI = ARI(Pmap, clusters = clstr, mask = mask, Statmap = Tmap)

    ## A hommel object for 145872 hypotheses.
    ## Simes inequality is assumed.
    ## Use p.adjust(), discoveries() or localtest() to access this object.
    ##
    ## With 0.95 confidence: at least 10857 discoveries.
    ## 3387 hypotheses with adjusted p-values below 0.05.
    ##
    ##      Size FalseNull TrueNull ActiveProp dim1 dim2 dim3 Stat
    ## cl18 6907 5179 1728 0.74981902 17 57 38 7.826075
    ## cl17 4607 3409 1198 0.73996093 76 53 39 7.512461
    ## cl16 385 0 385 0.00000000 75 71 52 4.536705
    ## cl15 249 15 234 0.06024096 20 65 63 4.882405
    ## cl14 168 0 168 0.00000000 55 60 32 4.590014
    ## cl13 108 0 108 0.00000000 67 78 36 4.653047
    ## cl12 32 0 32 0.00000000 46 92 31 3.532471
    ## cl11 30 0 30 0.00000000 71 59 61 3.958764
    ## cl10 28 0 28 0.00000000 51 57 41 3.572852
    ## cl9 23 0 23 0.00000000 39 51 34 4.069620
    ## cl8 18 0 18 0.00000000 52 50 35 3.909603
    ## cl7 13 0 13 0.00000000 32 54 36 3.654027
    ## cl6 11 0 11 0.00000000 43 49 37 3.496598
    ## cl5 9 0 9 0.00000000 15 36 46 3.762685
    ## cl4 7 0 7 0.00000000 46 53 40 3.504706
    ## cl3 2 0 2 0.00000000 46 44 35 3.330204
    ## cl2 1 0 1 0.00000000 52 58 46 3.376935
    ## cl1 1 0 1 0.00000000 19 87 41 3.237725
    ## cl0 133273 0 133273 0.00000000 42 56 43 3.199741

## Define threshold and clusters on the basis of the concentration set (optimal threshold)

    hom = hommel::hommel(Pmap[mask])
    (thr_p = hommel::concentration(hom))

    ## [1] 0.0008627815

    (thr_z = -qnorm(thr_p))

    ## [1] 3.133804

    Tmap[!mask] = 0
    clstr = cluster_threshold(Tmap > thr_z)
    table(clstr)

    ## clstr
    ##      0      1      2      3      4      5      6      7      8      9
    ## 889443      1      2      3      9     16     21     35     38     39
    ##     10     11     12     13     14     15     16
    ##     66    119    194    257    410   4748   7228

    res_ARI_conc = ARI(Pmap, clusters = clstr, mask = mask, Statmap = Tmap)

    ## A hommel object for 145872 hypotheses.
    ## Simes inequality is assumed.
    ## Use p.adjust(), discoveries() or localtest() to access this object.
    ##
    ## With 0.95 confidence: at least 10857 discoveries.
    ## 3387 hypotheses with adjusted p-values below 0.05.
    ##
    ##      Size FalseNull TrueNull ActiveProp dim1 dim2 dim3 Stat
    ## cl16 7228 5179 2049 0.71651909 17 57 38 7.826075
    ## cl15 4748 3409 1339 0.71798652 76 53 39 7.512461
    ## cl14 410 0 410 0.00000000 75 71 52 4.536705
    ## cl13 257 15 242 0.05836576 20 65 63 4.882405
    ## cl12 194 0 194 0.00000000 55 60 32 4.590014
    ## cl11 119 0 119 0.00000000 67 78 36 4.653047
    ## cl10 66 0 66 0.00000000 39 51 34 4.069620
    ## cl9 39 0 39 0.00000000 51 57 41 3.572852
    ## cl8 38 0 38 0.00000000 46 92 31 3.532471
    ## cl7 35 0 35 0.00000000 71 59 61 3.958764
    ## cl6 21 0 21 0.00000000 52 50 35 3.909603
    ## cl5 16 0 16 0.00000000 15 36 46 3.762685
    ## cl4 9 0 9 0.00000000 46 53 40 3.504706
    ## cl3 3 0 3 0.00000000 19 87 41 3.237725
    ## cl2 2 0 2 0.00000000 46 44 35 3.330204
    ## cl1 1 0 1 0.00000000 59 70 26 3.154989
    ## cl0 132686 0 132686 0.00000000 31 71 52 3.133518
https://tex.stackexchange.com/questions/361477/calling-acronym-in-nomenclature-package
# calling acronym in nomenclature package

Just like in the acronym package, where we can call an acronym by \ac and a glossary entry by \gls, can someone please guide me how to call an acronym while using the package nomencl?

    \documentclass[a4paper,12pt,times,numbered,print,index,custommargin]{Classes/PhDThesisPSnPDF}
    \usepackage{nomencl}
    \makenomenclature

Here are the commands I used to define the acronyms (the Z prefix is for acronym definitions):

    \nomenclature[Z]{$WDM$}{Wavelength Division Multiplexing}
    \nomenclature[Z]{$EDFA$}{Erbium Doped Fiber Amplifiers}

I don't know how to call them after the definition if they also appear in later sections.

• Welcome! nomencl isn't developed for such a usage. It's a simple method to produce a nomenclature. Why don't you use glossaries? – Marco Daniel Mar 31 '17 at 17:21
• Marco, as one of the authors of nomencl, I disagree :) – Boris Mar 31 '17 at 20:25

What's wrong with

    \nomenclature{WDM}{Wavelength Division Multiplexing}
    \nomenclature{EDFA}{Erbium Doped Fiber Amplifiers}

Note the absence of dollar signs: this is not math.

    \documentclass{article}
    \usepackage[norefeq]{nomencl}
    \renewcommand{\nomname}{Abbreviations}
    \pagestyle{empty}
    \makenomenclature
    \begin{document}
    In the text we will discuss WDM and EDFA.
    \nomenclature{WDM}{Wavelength Division Multiplexing}%
    \nomenclature{EDFA}{Erbium Doped Fiber Amplifiers}%
    \printnomenclature
    \end{document}

• Thanks Boris. Can you please confirm whether this package is only for lists? Actually I want to use it such that on the first usage the name is written in full, but on subsequent uses it is replaced by the acronym. Thanks – Shoaib Apr 1 '17 at 9:58
• Yes, this package makes lists of items, optionally accompanied by the reference to the page or equation where the item was first mentioned – Boris Apr 1 '17 at 13:49
http://www.physicsforums.com/showpost.php?p=4246089&postcount=1
View Single Post
P: 5

So I noticed that when learning about Liouville's Theorem in class, it was described in terms of an ensemble: i.e., you have a 6N-dimensional phase space for N particles moving in 3 dimensions, each system of the ensemble has a representative system point, and the density function in Liouville's theorem refers to the density of ensemble systems in a given microstate. Since these points follow a global conservation law and move in phase space, they are subject to the continuity equation. This all makes sense, but I was wondering if we can equivalently reformulate Liouville's theorem without considering an ensemble of systems.

What I mean is this: imagine you have a system of N particles, and for simplicity that they are capable of moving in only one dimension, so they each have a spatial coordinate x and a momentum p. Let's consider now a 2-dimensional phase space whose axes are x and p (instead of a 2N-dimensional phase space whose axes are the xi and pi of all of the N particles). And now instead of plotting ensembles in the phase space, we plot the momentum and position of each particle (even though this is practically impossible for a very large system such as a liter of gas, we could do this in theory, and that's all that matters, I think). This gives us a space with N particles rather than N system points, and this still obeys a global conservation law in that the particles of the system can be neither created nor destroyed.

So it seems that a form of Liouville's theorem would hold for a new density function which, rather than representing the density of system points in phase space, now represents the number of particles per unit area of phase space (in this 1D example) whose coordinates and momenta lie in the interval (x, p, x+dx, p+dp), much like the molecular distribution function. Is this indeed true?
The reason that I ask is that if this holds, then with the density function $d = d(x,p)$ and the "velocity" in the continuity equation given by $\left(\frac{dx}{dt},\frac{dp}{dt}\right) = (v, F)$, we get:

$$0 = \frac{\partial d}{\partial t} + \frac{\partial d}{\partial x}\,v + \frac{\partial d}{\partial p}\,F, \qquad F = F(x),$$

which follows from $\frac{\partial d}{\partial t} + \operatorname{div}(d\vec{v}) = 0$, where the divergence is taken in phase space, $\nabla\cdot(d\vec{v}) = \frac{\partial (dv)}{\partial x} + \frac{\partial (dF)}{\partial p}$; since $v$ does not depend on $x$ and $F$ does not depend on $p$, both terms reduce to derivatives of $d$ alone.

In the case where we have thermodynamic equilibrium and $d$ does not depend explicitly on time, we have a separable first-order partial differential equation whose solutions give the exponential dependence on the potential associated with $F(x)$ and on $v^2$, i.e., the Boltzmann distribution. I thought this was more than a coincidence but I didn't find sources on this. Thoughts would be greatly appreciated. Thanks!
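The separable equilibrium solution sketched above can be carried out explicitly, assuming a conservative force $F = -\frac{dU}{dx}$ and writing $p = mv$:

```latex
% Stationary continuity equation in the (x,p) phase plane:
%   (p/m) \partial_x d + F(x) \partial_p d = 0.
% Product ansatz d(x,p) = X(x) P(p):
\frac{p}{m}\,X'(x)\,P(p) + F(x)\,X(x)\,P'(p) = 0
\;\Longrightarrow\;
\frac{X'(x)}{F(x)\,X(x)} = -\frac{m}{p}\,\frac{P'(p)}{P(p)} \equiv \beta .
% The left side depends only on x, the right side only on p,
% so both equal a separation constant beta. With F = -dU/dx:
X(x) \propto e^{-\beta U(x)}, \qquad
P(p) \propto e^{-\beta p^{2}/2m} = e^{-\beta m v^{2}/2}.
```

With $\beta = 1/kT$ this is exactly the Boltzmann distribution the post refers to: exponential in the potential and in $v^2$.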
2014-03-09 18:12:40
http://book.caltech.edu/bookforum/showpost.php?s=184d96ca7afacba241bca2c29d33d64e&p=11301&postcount=3
View Single Post
#3 07-28-2013, 05:42 PM
hsolo (Member, Join Date: Jul 2013, Posts: 12)

Re: computing w, b for soft margin SVM

Quote: Originally Posted by yaser
1. All of them for computing $w$, since any vector with $\alpha_n > 0$ will contribute to the derived solution for $w$.
2. Only margin SV's for computing $b$, since we need an equation, not an inequality, to solve for $b$ after knowing $w$.

Is the 'heuristic' number of parameters (the VC dimension proxy) to be used while reasoning about generalization then the number of margin support vectors << the number of all support vectors?

When we use kernel functions with soft SVMs (problem 2 etc.), where there is no explicit $w$, does the above translate to:
* 1 ==> Use all support vectors to compute the summation term in the hypothesis function $g()$
* 2 ==> Use only margin support vectors for $b$ (which is also used in $g()$)

I was wondering if this aspect was covered in the lecture or any of the additional material -- I seem to have missed it.
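The distinction in the quoted answer can be made concrete with a small numeric sketch (toy data and all function names are mine; `alphas` is taken as an already-solved dual solution rather than computed here): $w$ is accumulated over every vector with $\alpha_n > 0$, while $b$ is averaged only over margin support vectors with $0 < \alpha_n < C$, where $y_s(w \cdot x_s + b) = 1$ holds with equality.

```python
def recover_w_b(alphas, X, y, C):
    """Recover (w, b) of a soft-margin linear SVM from its dual solution.

    w uses every support vector (alpha > 0); b uses only the margin
    support vectors (0 < alpha < C), for which y_s (w.x_s + b) = 1 exactly.
    """
    dim = len(X[0])
    w = [0.0] * dim
    for a, xs, ys in zip(alphas, X, y):
        if a > 0:  # all support vectors contribute to w
            for i in range(dim):
                w[i] += a * ys * xs[i]
    margin = [(xs, ys) for a, xs, ys in zip(alphas, X, y) if 0 < a < C]
    # b from the margin condition y_s = w.x_s + b, averaged for robustness
    b = sum(ys - sum(wi * xi for wi, xi in zip(w, xs))
            for xs, ys in margin) / len(margin)
    return w, b

# Toy 1-D example: x = -1 labeled -1, x = +1 labeled +1. The dual
# solution is alpha = (0.5, 0.5), giving w = [1.0] and b = 0.0.
w, b = recover_w_b([0.5, 0.5], [[-1.0], [1.0]], [-1, 1], C=10.0)
print(w, b)  # [1.0] 0.0
```

In the kernelized case the same split applies with $w \cdot x$ replaced by the kernel expansion, matching the two bullet points in the post.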
2020-02-17 03:40:44
http://evaneschneider.github.io/site/2018/Xray-Profiles/
# X-ray Profiles ## January 31, 2018 In the last post, I showed soft x-ray (0.3 - 2 keV) surface brightness maps for two of my outflow simulations at $512\times512\times1024$ (20 pc) resolution. I’ve now made the same maps for the full $2048\times2048\times4096$ (5 pc) resolution simulations. Here they are at 60 Myr for the isotropic feedback model (adiabatic sim), and for the clustered feedback model (radiative sim): As compared to the 20 pc maps, these two plots are not so different! There’s less collimation in the clustered model, primarily because the scale height of the disk is much smaller. In the adiabatic sim, there’s no way for the disk gas to cool as shocks propagate through it, so it tends to get more and more puffed up as the simulation goes on. Both these sims show a similar vertical extent of soft x-ray emission, and filamentary and large-scale features are visible in both. In addition, the total soft x-ray luminosity is similar, $L_x = 1.9\times10^{40}\,\mathrm{erg}\,\mathrm{s}^{-1}$ for the adiabatic sim, and $L_x = 1.42\times10^{40}\,\mathrm{erg}\,\mathrm{s}^{-1}$ for the cluster sim. The observed soft x-ray luminosity of M82 is $L_x = 4.3\times10^{40}\,\mathrm{erg}\,\mathrm{s}^{-1}$ (Strickland et al. 2004, Table 9). In addition to looking at the surface brightness maps, I can also compare directly with the minor-axis surface brightness profiles derived by Strickland et al. in their 2004 paper. To create the profile, I integrate the surface brightness in slices with $\Delta x = 10\,\mathrm{kpc}$ and $\Delta z = 0.15625\,\mathrm{kpc}$, then divide out the area (so $\Sigma = \int n^2 \Lambda \,\mathrm{d}x \mathrm{d}y \mathrm{d}z \,/ \int \mathrm{d}x \mathrm{d}z$). The resulting profiles (in $\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{arcmin}^{-2}$) can be directly compared with Strickland’s exponential fit for M82. 
In Table 6, he gives $\Sigma_0 = 103\times10^{-9}\,\mathrm{photons}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{arcsec}^{-2}$ with an exponential scale height of $H_\mathrm{eff} = 0.73\,\mathrm{kpc}$. The following plots show my calculated surface brightness profiles for each of the above simulations, along with Strickland’s best fit exponential (plotted in red). Note that I’ve converted Strickland’s data by assuming 1 photon in the 0.3 - 2 keV band is approximately $2\times10^{-9}$ erg, as suggested in Section 4.3.2. Note also that Strickland’s exponential fit was made using only the 0.3 - 1.0 keV band data, which appear to be slightly less peaked than the 1.0 - 2.0 keV data (while my profile covers the full 0.3 - 2.0 keV band). For example, below is one of Strickland’s M82 datasets with the 0.3 - 1.0 keV data in blue and the 1.0 - 2.0 keV data in orange. Given that, I’d say both models are looking pretty good. This has some interesting implications for the origin of the soft x-ray emission in M82, which I’ll get to in a later post.
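The unit conversion behind the red curve can be sketched as follows (assuming the stated $2\times10^{-9}$ erg per 0.3–2 keV photon and the Table 6 fit values; the variable names are mine):

```python
import math

PHOTON_ERG = 2e-9            # approx. energy per 0.3-2 keV photon [erg]
ARCSEC2_PER_ARCMIN2 = 3600   # (60 arcsec per arcmin)^2

# Strickland et al. (2004), Table 6: normalization and scale height
sigma0_photons = 103e-9      # photons / s / cm^2 / arcsec^2
H_eff = 0.73                 # exponential scale height [kpc]

# Convert the normalization to erg / s / cm^2 / arcmin^2
sigma0_erg = sigma0_photons * PHOTON_ERG * ARCSEC2_PER_ARCMIN2

def surface_brightness(z_kpc):
    """Exponential minor-axis profile in erg / s / cm^2 / arcmin^2."""
    return sigma0_erg * math.exp(-abs(z_kpc) / H_eff)

print(f"{sigma0_erg:.3e}")                # 7.416e-13
print(f"{surface_brightness(0.73):.3e}")  # one scale height out: sigma0 / e
```

This is the curve the simulated profiles are compared against; the simulation side of the comparison requires the full $n^2 \Lambda$ integration described above.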
2019-05-22 02:17:34
http://encyclopedia.kids.net.au/page/sp/Spritzer
# Spritzer

A spritzer is a tall, chilled drink, usually made with white wine and soda water. The word comes from the German spritzen, "to sprinkle": adding water dilutes the wine so that it can be consumed in larger, thirst-quenching amounts without the negative effects of alcohol.

All Wikipedia text is available under the terms of the GNU Free Documentation License
2020-11-29 14:44:16
https://team.inria.fr/grace/seminar/
## December 19, Alice Pellet-Mary (KU Leuven)

Place: LIX, Salle Philippe Flajolet

Title: An LLL Algorithm for Module Lattices

Abstract: A lattice is a discrete subgroup (i.e., a ZZ-module) of RR^n (where ZZ and RR are the sets of integers and real numbers). The LLL algorithm is a central algorithm for manipulating lattice bases. It takes as input a basis of a Euclidean lattice and, within a polynomial number of operations, outputs another basis of the same lattice consisting of rather short vectors.

Lattices are used in post-quantum cryptography, as finding a shortest vector of a lattice, given an arbitrary basis, is believed to be intractable even with a quantum computer. For practical reasons, many lattice-based constructions use structured lattices in order to improve the efficiency of the schemes. Most of the time, these structured lattices are R-modules of small rank, where R is the ring of integers of some number field. As an example, 11 of the 12 lattice-based candidates in the NIST post-quantum standardization process use module lattices, with modules of small rank.

It is then tempting to try and adapt the LLL algorithm, which works over lattices (i.e., ZZ-modules), to these R-modules, as this would allow us to find rather short vectors in these module lattices. Previous works trying to extend the LLL algorithm to other rings of integers R focused on rings R that are Euclidean, as the LLL algorithm over ZZ crucially relies on Euclidean division. In this talk, I will describe an LLL algorithm which works in any ring of integers R. This algorithm is heuristic and runs in quantum polynomial time if given access to an oracle solving the closest vector problem in a fixed lattice, depending only on the ring of integers R.

This is joint work with Changmin Lee, Damien Stehlé and Alexandre Wallet.

## December 3, 2019.
Alessandro Neri (Grace, Marie Curie Fellowship)

Place: LIX, room Grace Hopper

Title: 3-Tensors: a natural way of representing rank-metric codes

Abstract: In coding theory, two important issues concern the encoding and the storage of a code. In the most general case, given a finite set A and a positive integer n, a block code C is a subset of A^n, endowed with a distance function (usually the Hamming one). The space of messages M is then embedded in A^n via an injective encoding map E, such that E(M) = C. The need for fast encoding and efficient representation of codes led to imposing algebraic structures on all the defining objects (the alphabet, the space of messages and the code) and on the encoding map E. For this reason, one usually considers the alphabet as a field of q elements, the space of messages as the vector space F_q^k, and the encoding map as a linear function into F_q^n, which leads to the study of linear codes.

In this framework, we can locate the generator matrix of an [n, k] code C, which serves as a representation for storing C and also as an encoding map. Moreover, one can read off the parameters of the code from its algebraic structure. A generator matrix can also be constructed for a vector rank-metric code, and in an analogous way one can extract information about the code from it.

In this talk we explain an analogous concept for rank-metric codes, which are considered as spaces of m × n matrices over a finite field F_q. This is the generator tensor, a similar object that one can use for storing, encoding and reading off the parameters of a rank-metric code. Moreover, the tensor representation leads to the investigation of a new parameter, namely the tensor rank of an [n × m, k] code C, which gives a measure of the storage and encoding complexity of C. This also produces an interesting relation between rank-metric codes and algebraic complexity theory.
It is joint work with Eimear Byrne, Alberto Ravagnani and John Sheekey.

## November 26, 2019.

Isabella Panaccione (Grace)

Place: LIX, room Rosalind Franklin.

Title: The Power Error Locating Pairs algorithm

Abstract: In this talk, we will focus on particular decoding algorithms for Reed-Solomon codes. It is known that for these codes, it is possible to correct a number of errors equal to or even greater than half the minimum distance. Beyond this bound, though, the uniqueness of the solution is in general no longer guaranteed. Hence, given a vector y ∈ (F_q)^n, one can either use list decoding algorithms, like the Sudan algorithm and the Guruswami-Sudan algorithm, which return the list of all the closest codewords to y, or unique decoding algorithms, like the Power Decoding algorithm, which return the closest codeword to y if it is unique, at the price of some failure cases. In this scenario, I will present a new unique decoding algorithm, the Power Error Locating Pairs algorithm. Based on Pellikaan's Error Correcting Pairs algorithm, it has for Reed-Solomon codes the same decoding radius as the Sudan algorithm and the Power Decoding algorithm, but with an improvement in terms of complexity. Moreover, like the Error Correcting Pairs algorithm, it can be applied to all codes equipped with a certain special structure (among them, Reed-Solomon codes, algebraic geometry codes and cyclic codes).

## November 5, 2019 (with a follow-up on November 19).

Maxime Romeas (Grace)

Place: LIX, room Nicole-Reine Lepaute.

Title: Constructive Cryptography: presentation of the model and application to the remote-storage setting.

Abstract: The "Constructive Cryptography" (CC) theory is a new paradigm, introduced by Maurer et al. in 2011, for defining the security of cryptographic primitives (encryption, signatures, key exchange, etc.) and for proving the security of the protocols that use these primitives.
In this model, a cryptographic scheme is defined as constructing a certain resource (a channel, a shared key, a client/server) with certain security guarantees out of a resource of the same type that lacks those guarantees. A distinctive feature of this model is that it is composable, in the sense that a protocol obtained by composing several secure constructions is itself secure. This composability property simplifies both the description and the security proofs of protocols. Although this constructive approach already exists in other models such as Universal Composability, Maurer's model is very generic and abstract, which gives it many advantages. In this talk, we will first present the CC paradigm and then revisit the examples of secure communication and of securing remote data, to illustrate the gains obtained by translating them into CC.

## October 8, 2019.

Kazuhiro Yokoyama (Rikkyo University)

Place: LIX, Salle Henri Poincaré.

Title: Symbolic computation of isogenies by Velu's formula.

## November 12, 2019.

Simon Abelard (LIX).

Place: LIX, Salle Nicole-Reine Lepaute.

Title: Counting points on hyperelliptic curves defined over finite fields of large characteristic: algorithms and complexity.

Abstract: Counting points on algebraic curves has drawn a lot of attention due to its many applications, from number theory and arithmetic geometry to cryptography and coding theory. In this talk, we focus on counting points on hyperelliptic curves over finite fields of large characteristic p. In this setting, the most suitable algorithms are currently those of Schoof and Pila, because their complexities are polynomial in log p. However, their dependency on the genus g of the curve is exponential, and this is already painful even in genus 3. Our contributions mainly consist of establishing new complexity bounds with a smaller dependency on g in the exponent of log p.
For hyperelliptic curves, previous work showed that it was quasi-quadratic, and we reduced it to a linear dependency. Restricting to more special families of hyperelliptic curves with explicit real multiplication (RM), we obtained a constant bound for this exponent. In genus 3, we proposed an algorithm based on those of Schoof and Gaudry-Harley-Schost whose complexity is prohibitive in general, but turns out to be reasonable when the input curves have explicit RM. In this more favorable case, we were able to count points on a hyperelliptic curve defined over a 64-bit prime field. In this talk, we will carefully reduce the problem of counting points to that of solving polynomial systems. More precisely, we will see how our results are obtained by considering either smaller or structured systems. Contains joint work with P. Gaudry and P.-J. Spaenlehauer.

## October 17, 13h30, Thomas Debris (Royal Holloway university)

Place: LIX, Salle Henri Poincaré

Title: Wave: A New Family of Trapdoor One-Way Preimage Sampleable Functions Based on Codes

Abstract: We present here a new family of trapdoor one-way functions that are Preimage Sampleable on Average (PSA) based on codes: the Wave-PSA family. Our trapdoor function is one-way under two computational assumptions: the hardness of generic decoding for high weights and the indistinguishability of generalized (U, U+V)-codes. Our proof follows the GPV strategy [GPV08]. By including rejection sampling, we ensure the proper distribution for the trapdoor inverse output. The domain sampling property of our family is ensured by using and proving a variant of the leftover hash lemma. We instantiate the new Wave-PSA family with ternary generalized (U, U+V)-codes to design a "hash-and-sign" signature scheme which achieves existential unforgeability under adaptive chosen message attacks (EUF-CMA) in the random oracle model.
For 128 bits of classical security, signature sizes are on the order of 13 thousand bits, the public key size on the order of 3 megabytes, and the rejection rate is limited to one rejection every 100 signatures.

## May 21, 13h30, Adrien Koutsos, LSV (ENS Cachan)

Place: LIX, salle Henri Poincaré

Title: The 5G-AKA Authentication Protocol Privacy

Abstract: We study the 5G-AKA authentication protocol described in the 5G mobile communication standards. This version of AKA tries to achieve better privacy than the 3G and 4G versions through the use of asymmetric randomized encryption. Nonetheless, we show that except for the IMSI-catcher attack, all known attacks against 5G-AKA privacy still apply. Next, we modify the 5G-AKA protocol to prevent these attacks, while satisfying 5G-AKA efficiency constraints as much as possible. We then formally prove that our protocol is $\sigma$-unlinkable. This is a new security notion, which allows for a fine-grained quantification of a protocol's privacy. Our security proof is carried out in the Bana-Comon indistinguishability logic. We also prove mutual authentication as a secondary result.

## April 9, 13h30, Yann Rotella, Radboud University (Nijmegen)

Place: LIX, salle Henri Poincaré

Title: Invariant attacks: how to protect against them

Abstract: In 2011, Gregor Leander and his co-authors described a new type of attack on block ciphers that exploits the existence of a vector space left invariant by the components used in the cipher. These attacks were later generalized in 2015 and are called nonlinear invariant attacks. Since then, such attacks have exposed new vulnerabilities in a large number of block ciphers, notably SPN (Substitution-Permutation Network) ciphers in which the round keys are equal to the master key added to an often arbitrarily chosen round constant.
In this talk, we explain why these attacks are feasible on certain ciphers, and we derive from this a new design criterion for block ciphers. We will see how to choose the linear layer and the round constants so as to guarantee the absence of invariants. This work comes at a crucial moment, as it helps cipher designers while NIST is currently standardizing so-called lightweight encryption algorithms. It was published at CRYPTO 2017, in collaboration with Christof Beierle, Anne Canteaut and Gregor Leander: Proving Resistance against Invariant Attacks: How to Choose the Round Constants. https://eprint.iacr.org/2017/463

## March 14, 2019. 13h30. Vincent Neiger (Univ. Limoges)

Place: LIX, salle Henri Poincaré

Title: On the complexity of modular composition of generic polynomials

Abstract: This talk is about algorithms for modular composition of univariate polynomials, and for computing minimal polynomials. For two univariate polynomials a and g over a commutative field, modular composition asks to compute h(a) mod g for some given h, while the minimal polynomial problem is to compute h of minimal degree such that h(a) = 0 mod g. For generic g and a, we propose algorithms whose complexity bound improves upon previous algorithms and in particular upon Brent and Kung's approach (1978); the new complexity bound is subquadratic in the degree of g and a even when using cubic-time matrix multiplication. Our improvement comes from the fast computation of specific bases of bivariate ideals, and from efficient operations with these bases thanks to fast univariate polynomial matrix algorithms. Contains joint work with Seung Gyu Hyun, Bruno Salvy, Eric Schost, Gilles Villard.
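As a point of reference for the problem Neiger's talk addresses, here is a naive modular composition by Horner's rule over a small prime field (a sketch of the quadratic baseline, not of the talk's subquadratic algorithm; all function names are mine):

```python
def polymod(f, g, p):
    """Reduce polynomial f modulo g over GF(p); coefficients low-to-high."""
    f = [c % p for c in f]
    d = len(g) - 1
    inv_lead = pow(g[-1], -1, p)  # modular inverse; needs Python 3.8+
    while len(f) > d:
        c = (f[-1] * inv_lead) % p
        shift = len(f) - len(g)
        for i, gc in enumerate(g):
            f[shift + i] = (f[shift + i] - c * gc) % p
        while len(f) > 1 and f[-1] == 0:
            f.pop()
    return f

def polymul(f, h, p):
    """Schoolbook product of two polynomials over GF(p)."""
    out = [0] * (len(f) + len(h) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(h):
            out[i + j] = (out[i + j] + a * b) % p
    return out

def modular_composition(h, a, g, p):
    """Compute h(a) mod g over GF(p) by Horner's rule: deg(h) products mod g."""
    result = [0]
    for c in reversed(h):
        result = polymod(polymul(result, a, p), g, p)
        result[0] = (result[0] + c) % p
    return result

# h(x) = x^2 + 1, a(x) = x + 2, g(x) = x^2 + 1 over GF(5):
# h(a) = (x+2)^2 + 1 = x^2 + 4x + 5 ≡ 4x + 4  (mod x^2 + 1, coefficients mod 5)
print(modular_composition([1, 0, 1], [2, 1], [1, 0, 1], 5))  # [4, 4]
```

Brent and Kung's baby-step/giant-step method, and the bivariate-ideal approach of the talk, both beat this loop's cost precisely by avoiding one full product mod g per coefficient of h.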
2019-12-12 05:35:49
https://forum.math.toronto.edu/index.php?PHPSESSID=k441o32850lkospornhudrbhs7&topic=273.0;wap2
MAT244-2013S > Ch 9

9.3 problem 18 (1/1)

Brian Bi: I'm having some trouble getting this problem to work out. There are four critical points: (0,0), (2, 1), (-2, 1), and (-2, -4). At the critical point (-2, -4), the Jacobian is \begin{pmatrix} 10 & -5 \\ 6 & 0 \end{pmatrix} with eigenvalues $5 \pm i\sqrt{5}$. Therefore it looks like it should be an unstable spiral point. However, when I plotted it, it looked like a node. Has anyone else done this problem? http://www.math.psu.edu/melvin/phase/newphase.html

Victor Ivrii:
--- Quote from: Brian Bi on March 25, 2013, 12:32:43 AM ---I'm having some trouble getting this problem to work out. [...]
--- End quote ---
Explanation: http://weyl.math.toronto.edu/MAT244-2011S-forum/index.php?topic=48.msg159#msg159

Brian Bi: So it is a spiral point but I didn't zoom in closely enough?

Victor Ivrii:
--- Quote from: Brian Bi on March 25, 2013, 01:28:57 PM ---So it is a spiral point but I didn't zoom in closely enough?
--- End quote ---
No, the standard spiral remains the same under any zoom. However, your spiral rotates rather slowly in comparison with how fast it moves away: as it makes one rotation ($\theta$ increases by $2\pi$), the exponent increases by $5 \times 2\pi/\sqrt{5}\approx 14$, so the radius increases $e^{14}\approx 1.2 \cdot 10^6$ times. If the initial distance was 1 mm, then after one rotation it becomes 1.2 km. Try plotting $x'=a x- y$, $y'=x+ ay$ for $a=.001, .1, .5, 1, 1.5, 2$ to observe that for some $a$ you just cannot observe the rotation.
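The rotation-versus-growth point can be quantified: for $x' = ax - y$, $y' = x + ay$ the polar form is $r' = ar$, $\theta' = 1$, so each full turn multiplies the radius by $e^{2\pi a}$. A quick check of the suggested values of $a$ (script and names are mine):

```python
import math

def growth_per_rotation(a):
    """Radius multiplier per full rotation of x' = a x - y, y' = x + a y."""
    # In polar coordinates r' = a r and theta' = 1, so one turn takes
    # time 2*pi and scales r by exp(2*pi*a).
    return math.exp(2 * math.pi * a)

for a in [0.001, 0.1, 0.5, 1, 1.5, 2]:
    print(f"a = {a}: radius grows by a factor {growth_per_rotation(a):.4g} per turn")
# Already at a = 2 the factor is around 3e5: the spiral structure is
# invisible at any fixed zoom, just as at the critical point (-2, -4),
# where the effective factor is exp(2*pi*5/sqrt(5)) ≈ 1.2e6.
```

For $a = 0.001$ the factor is essentially 1, so the plot looks like closed circles instead; both extremes hide the spiral for opposite reasons.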
2022-05-23 18:36:31
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-the-central-science-13th-edition/chapter-6-electronic-structure-of-atoms-exercises-page-252/6-75c
## Chemistry: The Central Science (13th Edition)

The condensed electron configuration of $Se$ is $$Se: [Ar]4s^23d^{10}4p^4$$

*Strategy:
1) Find the nearest noble gas element of lower atomic number.
2) Find out which shell is the outer shell and how many electrons it holds (by looking at the periodic table).
3) Put the outer-shell electrons in the orbitals and subshells according to the aufbau filling order (with Hund's rule governing the filling within a subshell).

1) The nearest noble gas element of lower atomic number than selenium ($Se$) is argon ($Ar$). Therefore, we use $Ar$ in the condensed electron configuration of $Se$.

2) Looking at the periodic table:
- $Se$ is in the 4th row, so the outer shell is the 4th shell.
- The atomic number of $Se$ is 34, which means it has 34 electrons. Its argon-like core accounts for 18 inner-shell electrons, leaving 34 - 18 = 16 electrons outside the core. (The core holds 2 electrons in the 1st shell, 8 in the 2nd, and 8 in the 3rd, namely the filled 3s and 3p subshells; since electrons fill the lowest-energy subshells first, these core subshells are fully occupied, giving $2+8+8=18$ inner-shell electrons.)

3) As mentioned in the previous part, the first 2 remaining electrons occupy the s-subshell of the 4th shell ($4s^2$). Then the electrons occupy the d-subshell of the 3rd shell. There are 5 orbitals in the d-subshell, and each orbital can hold at most 2 electrons, so at most 10 electrons can occupy the d-subshell of the 3rd shell ($3d^{10}$). After these 10 electrons, 4 electrons remain; they occupy the next lowest-energy subshell, which is the p-subshell of the 4th shell ($4p^4$).

In conclusion, the condensed electron configuration of $Se$ is $$Se: [Ar]4s^23d^{10}4p^4$$
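The filling order used in the strategy above can be automated. A small sketch with the Madelung (aufbau) ordering hard-coded (my own helper, not from the textbook; it reproduces ground states for main-group elements like Se, though a few transition metals such as Cr and Cu are well-known exceptions):

```python
# Subshells in Madelung (aufbau) order with their letter capacities
ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p",
         "5s", "4d", "5p", "6s", "4f", "5d", "6p"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

def electron_configuration(z):
    """Fill subshells in aufbau order for an atom with z electrons."""
    config = []
    for sub in ORDER:
        if z == 0:
            break
        n = min(z, CAPACITY[sub[-1]])  # fill up to the subshell capacity
        config.append(f"{sub}{n}")
        z -= n
    return " ".join(config)

print(electron_configuration(34))
# 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p4   -> condensed: [Ar] 4s2 3d10 4p4
```

Note how the 4s subshell is filled before 3d, exactly as in step 3) of the worked solution.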
2018-06-18 13:46:05
https://zhengzangw.com/notes/quantum-computation/math-foundation/
## Math Foundation

2019-09-05 08:00 CST 2019-12-22 15:33 CST CC BY-NC 4.0

## Linear Algebra

- Vector: $|\psi\rangle$
- Row vector: $\langle\psi|=(\overline{|\psi\rangle})^T$
- Inner product: $\langle\psi|\varphi\rangle=\sum_i\overline{u}_iv_i$
  - $\langle\varphi|A|\psi\rangle$: inner product between $|\varphi\rangle$ and $A|\psi\rangle$
  - $\lVert\psi\rVert=\sqrt{\langle\psi|\psi\rangle}$
- Tensor product: $A=(a_{ij}),B=(b_{ij}),A\otimes B=(c_{(i_1,i_2),(j_1,j_2)})$
  - $H^{\otimes n}=H\otimes H\otimes \cdots \otimes H$
  - $|\psi\rangle|\varphi\rangle=|\psi\rangle\otimes|\varphi\rangle=|\psi\varphi\rangle=|\psi,\varphi\rangle$
  - $|0^n\rangle=|00\cdots0\rangle$
  - $(A\otimes B)(C \otimes D)=(AC)\otimes(BD)$
- $M^\dagger=\overline{M}^T$
- Unitary matrix $U$: $U^\dagger U=I$
- Hermitian matrix: $H=U^\dagger DU$, where $U$ is unitary and $D$ is a real diagonal matrix (the eigenvalues of $H$)
- Positive semidefinite: $H$ is Hermitian and its eigenvalues are nonnegative
- Orthonormal basis $\{|v_1\rangle,\cdots,|v_d\rangle\}$
  - Completeness relation: $\sum_i|v_i\rangle\langle v_i|=I$
  - Computational basis in $\mathbb{C}^d$: $|i\rangle=(0,\cdots,1,0,\cdots,0)^T$
- Normal operator: $MM^\dagger=M^\dagger M$
- Spectral/eigenvalue decomposition: $M=\sum_{i=1}^d\lambda_i|v_i\rangle\langle v_i|=U\Lambda U^*$
  - $M$ is Hermitian iff the $\lambda_i$ are real
  - $M$ is a projector if $M$ is Hermitian and $M^2=M$ (equivalently, $\lambda_i\in\{0,1\}$)
  - $f:\mathbb{C}^d\rightarrow\mathbb{C},f(M)=\sum_{i=1}^df(\lambda_i)|v_i\rangle\langle v_i|$
- $Tr(A)=\sum_i A_{ii}=\sum_i \langle i|A|i\rangle$
  - $Tr(AB)=Tr(BA)$
  - $Tr(M)=\sum_{i}\lambda_i$

## Fourier Transform

- $\forall v\in\mathbb{C}^N$, view it as $v:\{0,1,\cdots,N-1\}\rightarrow\mathbb{C},v(i)=v_i$
- Inner product: $\langle u,v\rangle=\sum_i\overline{u(i)}v(i)$
- Orthonormal basis $(\chi_j)_{0\leq j\leq N-1},\chi_j(k)=\frac{1}{\sqrt{N}}\omega_N^{jk},\omega_N=e^{\frac{2\pi i}{N}}$
- $\forall v\in\mathbb{C}^N,v=\sum_{j=0}^{N-1}\hat v(j)\chi_j$
- $F_N=(\chi_k(j))_{N\times N}=(\chi_0,\cdots,\chi_{N-1})$
  - Unitary and symmetric
- Fourier transform
  - Naïve way: $O(N^2)$ steps
  - $\hat v=F_N v$
  - $v = F_N^\dagger\hat v$, $v(j)=\sum_k\hat v(N-k)\chi_j(k)$
  - $F_2=H$
- Convolution: $c_l=(a*b)_l=\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}a_jb_{(l-j)\bmod N}$
  - $(\widehat{a*b})_l=\hat a_l\cdot \hat b_l$
- Fast Fourier Transform: $T(N)=2T(\frac{N}{2})+O(N)\Rightarrow T(N)=O(N\log N)$
  - $\hat v_j=\frac{1}{\sqrt{2}}(\hat v_{\text{even } j}+\omega_N^j\hat v_{\text{odd } j})$
- Multiplying two polynomials: $p(x)=\sum_{j=0}^d a_jx^j,q(x)=\sum_{k=0}^d b_kx^k$
  - Naïve algorithm: $O(d^2)$
  - FFT: $O(d\log d)$
    - $O(d\log d)$: $\hat a,\hat b$
    - $O(d)$: $\widehat{a*b}$
    - $O(d\log d)$: $a*b$
- Quantum Fourier transform, $N=2^n$
  - $F_N|k\rangle=\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}e^{2\pi ijk/N}|j\rangle$
  - $F_N|k_1k_2\cdots k_n\rangle=\bigotimes_{l=1}^n\frac{1}{\sqrt{2}}(|0\rangle+e^{2\pi ik/2^l}|1\rangle)=\bigotimes_{l=1}^n\frac{1}{\sqrt{2}}(|0\rangle+e^{2\pi i0.k_{n-l+1}\cdots k_n}|1\rangle)$
  - $F_N|0^n\rangle=H^{\otimes n}|0^n\rangle$
- Differences between the classical and quantum transforms:
  - Classical: $\hat v=F_N v$
  - Quantum: $|\hat v\rangle=F_N|v\rangle$
  - The time to prepare the quantum state is not counted
  - One cannot read out all of $|\hat v\rangle$

## Number Theory and Group Theory

- $Z_n=\{0,\cdots,n-1\},Z_n^*=\{x\in Z_n:\gcd(x,n)=1\}$
- $Z_{p^\alpha}^*$ is cyclic if $p$ is an odd prime and $\alpha$ is a positive integer
- Continued fractions: $x=[a_0,a_1,\cdots],[a_0,\cdots,a_n]=\frac{p_n}{q_n}$, then $|x-\frac{p_n}{q_n}|\leq\frac{1}{q_n^2}$
  - $p_0=a_0,p_1=a_1a_0+1,p_n=a_np_{n-1}+p_{n-2}$
  - $q_0=1,q_1=a_1,q_n=a_nq_{n-1}+q_{n-2}$
- Homomorphism: $\rho:G\rightarrow H$ is a homomorphism if $\rho(g\cdot h)=\rho(g)\cdot\rho(h),\forall g,h\in G$
- $\omega_N=e^{\frac{2\pi i}{N}}$
- Representation: a homomorphism $\rho: G\rightarrow GL(n,\mathbb{C})$; the character is $\chi_\rho(g)=\text{Tr}(\rho(g))$
- Basis theorem: every finite Abelian group is isomorphic to $\mathbb{Z}_{N_1}\times\cdots\times\mathbb{Z}_{N_t}$
- Representation of $\mathbb{Z}_{N}$: $\{\chi_k\}_{0\leq k\leq N-1},\chi_k(j)=\omega_N^{jk}$
- Representation of $\mathbb{Z}_{N_1}\times\cdots\times\mathbb{Z}_{N_t}$: $\{\chi_{k_1}\cdots\chi_{k_t}\}_{0\leq k_i\leq N_i-1}$
- Dual group: $\hat G=\{\text{all characters of } G\}$
  - $|\chi\rangle =\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}\chi(j)|j\rangle$
  - $F_N|k\rangle=|\chi_k\rangle$
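The FFT recurrence noted above, $T(N)=2T(N/2)+O(N)$ with the unitary butterfly $\hat v_j=\frac{1}{\sqrt 2}(\hat v_{\text{even }j}+\omega_N^j\hat v_{\text{odd }j})$, can be sketched as follows (a hypothetical illustration in Python, not from the original notes; it uses the $\frac{1}{\sqrt N}$-normalized convention with $\omega_N=e^{2\pi i/N}$):

```python
import cmath

def dft(v):
    """Naive O(N^2) unitary DFT: v_hat(j) = (1/sqrt(N)) sum_k w^(jk) v(k)."""
    n = len(v)
    w = cmath.exp(2j * cmath.pi / n)
    return [sum(v[k] * w**(j * k) for k in range(n)) / n**0.5
            for j in range(n)]

def fft(v):
    """Recursive O(N log N) FFT; len(v) must be a power of two."""
    n = len(v)
    if n == 1:
        return [v[0] * 1.0]
    even = fft(v[0::2])   # transform of even-indexed samples
    odd = fft(v[1::2])    # transform of odd-indexed samples
    w = cmath.exp(2j * cmath.pi / n)
    out = [0j] * n
    for j in range(n // 2):
        t = w**j * odd[j]
        # The 1/sqrt(2) keeps each merge unitary, matching the butterfly
        # v_hat_j = (v_hat_even + w^j v_hat_odd) / sqrt(2); the second half
        # uses w^(j + n/2) = -w^j.
        out[j] = (even[j] + t) / 2**0.5
        out[j + n // 2] = (even[j] - t) / 2**0.5
    return out
```

Because every merge step is unitary, the recursion preserves norms (Parseval), and its output agrees with the naive $O(N^2)$ transform.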
https://short-question.com/why-is-a-portion-of-profit-on-incomplete-contracts-transferred-to-the-profit-and-loss-account/
# Why is a portion of profit on incomplete contracts transferred to the profit and loss account?

(i) Incomplete Contracts with Little Progress: The entire amount of notional profit on such contracts is kept by way of provision for contingencies. It is transferred to the credit of the work-in-progress account so as to bring it down to its actual cost.

How do you find the profit and loss of an incomplete contract?

The estimated total profit on the contract can be calculated by deducting the total cost from the contract price. The profit and loss account should be credited with the proportion of the total estimated profit, on a cash basis, which the value of work certified bears to the total contract price.

### When there is loss on an incomplete contract, what is transferred to the profit and loss account?

When there is a loss on an incomplete contract, it is fully transferred to the profit and loss account. When the contract account of an incomplete contract shows a profit, it should not be treated as profit earned but only as 'notional profit'.

How is profit on an incomplete contract brought into account?

Different proportions of profit may be transferred to the P&L A/c by different people. Of course, the actual profit will be ascertained by debiting the Contract Account with all the relevant expenditure and crediting it with the value of work-in-progress (certified and uncertified) and materials, plant etc. at site.

#### What is profit on incomplete contracts?

At the end of an accounting period it may be found that certain contracts have been completed while others are still in process and will be completed in the coming years. The total profit made on completed contracts may be safely taken to the credit of the Profit and Loss Account.

When a contract is 45% complete, what amount of profit is to be credited?
If the contract is 60% complete, 2/3 of the profit is credited to the profit and loss account. If the contract is 45% complete and there is a notional loss, the entire loss is transferred to the profit and loss account.

## When there is loss on an incomplete contract, the amount transferred to the P&L A/c is?

Profit and loss account: it represents the profit which is transferred to the P&L A/c. In the case of an incomplete contract, part of the profit is transferred to the profit and loss account and the remaining profit is kept as a reserve.

What do you mean by profit in an incomplete contract?

### When a contract is 60% complete, the amount of profit to be credited to the Profit and Loss Account is?

2/3. If the contract is 60% complete, 2/3 of the profit is credited to the profit and loss account.

#### Which process loss should be transferred to the costing profit and loss account?

Solution (by Examveda Team): Abnormal process loss can be transferred to the costing profit and loss a/c. The process account is to be credited by the abnormal loss account with the cost of material, labour and overhead equivalent to the good units, and the loss due to abnormal causes is transferred to the Costing Profit and Loss Account.

Can a profit and loss account be used for incomplete contracts?

At the end of an accounting period it may be found that certain contracts have been completed while others are still in process and will be completed in the coming years. The total profit made on completed contracts may be safely taken to the credit of the Profit and Loss Account. But the same cannot be done in the case of incomplete contracts.

## When is profit credited to the profit and loss account?

In contract accounts there would be no difficulty in dealing with profits if the contracts were completed in the course of the financial year. Then the profit would be credited to the Profit and Loss Account.
But difficulty arises when we have to deal with profits arising on contracts which are not complete at the end of the year.

When does a contract show profit or loss?

When there is a loss on an incomplete contract, it is fully transferred to the profit and loss account. When the contract account of an incomplete contract shows a profit, it should not be treated as profit earned but only as 'notional profit'.

### When to take no profit on a contract?

No profit should be taken in respect of contracts which have just commenced, as it is impossible to foresee the future position clearly. Generally, if the work completed (i.e., the completion stage) is ¼th or less of the total work, no profit shall be transferred to the Profit and Loss Account.
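The rules quoted above can be collected into one small sketch. Note this is an illustration of one common textbook convention (no profit below one-quarter completion, 1/3 of notional profit up to one-half, 2/3 at or beyond one-half, losses transferred in full, profit scaled by the cash ratio); the function name and exact thresholds are assumptions, and conventions vary between texts:

```python
def pl_transfer(notional_profit, work_certified, contract_price, cash_received):
    """Amount of notional profit/loss credited to the P&L account."""
    # Losses on incomplete contracts are transferred in full.
    if notional_profit < 0:
        return notional_profit
    completion = work_certified / contract_price
    if completion < 0.25:        # contract has just commenced: take no profit
        fraction = 0.0
    elif completion < 0.5:       # between 1/4 and 1/2 complete
        fraction = 1.0 / 3.0
    else:                        # 1/2 or more complete (e.g. the 60% case)
        fraction = 2.0 / 3.0
    # Credit the P&L on a cash basis: scale by cash received / work certified.
    return notional_profit * fraction * cash_received / work_certified
```

For example, a contract with notional profit of 30,000 that is 60% certified (work certified 60,000 on a contract price of 100,000) and 80% cash received transfers 30,000 × 2/3 × 0.8 = 16,000 to the P&L account.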
http://mathhelpforum.com/differential-geometry/187843-complex-analysis-maximal-modulus-principle-question.html
# Thread: Complex Analysis: Maximal Modulus Principle question

1. ## Complex Analysis: Maximal Modulus Principle question

Hi all, I am stuck with this problem. It is suggested that the Maximal Modulus Principle would help but, in fact, I couldn't find a place to apply it! Can anyone suggest how to approach this problem? I am really lost now. Thanks a lot.

2. ## Re: Complex Analysis: Maximal Modulus Principle question

I've been thinking of this problem but I just can't complete a proof; here's a possible route (hopefully) towards a solution: By a simple continuity argument there is an $r>0$ such that on $0\leq Im(z) \leq r$ we have that $f$ does tend to zero. Now assume the following is true. Claim: If $f\to 0$ on the line $Im(z)=s$ with $0<s<1$ then $f\to 0$ on $0\leq Im(z) \leq s$. Then by an argument identical to the first, $f\to 0$ on $0\leq Im(z) \leq s+r_1$, so the set on which $f\to 0$ has to be $D$. I'm having a little trouble with the claim though (particularly estimating $f$ on the vertical boundary of the set $0\leq Im(z) \leq s, Re(z)>R$ for some $R>0$; this is enough by the MMP). Sorry I can't be of more help; if you find a proof please post it here.

3. ## Re: Complex Analysis: Maximal Modulus Principle question

There's probably a nicer way than this (i.e. a way that invokes maximum modulus at least). But for now I don't see why this doesn't work. Take the sequence of points $\left\{a_n = n+i\left(\frac{1}{n}\right)\right\}_{n=2}^\infty$. This sequence is in D.
By continuity of f we can pass the limit in and out of the function: $\lim_{n \to \infty} f(a_n) = f\left(\lim_{n \to \infty} a_n\right) = f\left(\lim_{n \to \infty} n + \lim_{n \to \infty} i\left(\frac{1}{n}\right)\right) = f\left(\lim_{n \to \infty} n\right)= f\left(\lim_{x \to \infty} x\right) = \lim_{x \to \infty} f(x) = A.$ Now f is holomorphic and bounded on D, so it has no singularity at infinity (in particular, no erratic essential singularity behavior). So $\lim_{z \to \infty}f(z)$ exists. So all unbounded sequences in D tend to this limit. We just showed one such sequence tends to A. (So A is what all unbounded sequences in D tend to. In particular, any unbounded sequence with fixed imaginary component between 0 and 1 tends to A.)

4. ## Re: Complex Analysis: Maximal Modulus Principle question

Originally Posted by gosuman: Now f is holomorphic and bounded on D, so it has no singularity at infinity (in particular, no erratic essential singularity behavior). So $\lim_{z \to \infty}f(z)$ exists.

This I don't get; isn't this even stronger than the claim to be proved? We know that the function tends to a limit along the reals, so by these two lines the problem becomes trivial. More to the point, can you prove your statement above?
https://www.biostars.org/p/370221/
Correct formatting for IDs in OMA standalone .splice files to properly identify splice variants

1

3.2 years ago eschang1 ▴ 10

I see that for the input .splice files, OMA standalone requires that the individual IDs are unique prefixes of your FASTA headers, and proteins that are splice variants of the same gene should be listed in the splice file like "ENSP00000384207; ENSP00000263741; ENSP00000353094". It looks like NCBI and Ensembl have annotation tables that can be downloaded that will make associating proteins with genes fairly straightforward. In the annotation tables, proteins are usually identified by the shortest version of their name, something like NP_001027594.1. To keep it brief, will OMA be able to recognize a splice file line like "NP_001027594.1;NP_001027593.1" if the actual FASTA headers are more complicated, i.e. something like:

NP_001027594.1 homeobox transcription factor Pax1/9 [Ciona intestinalis]

Is this what the manual means by "unique prefixes of FASTA headers"? Just wanted to make sure that I didn't need to reformat my FASTA headers before diving in. And secondly, does the All vs All step use the splice variant information? Or is it possible to do the All vs. All and then try running the OMA orthology algorithms with and without this information?

Thank you so much! Running OMA standalone on some of my own test data sets so far has been super smooth.

Cheers, Sally

OMA orthologs orthology

0

Okay great, this is all in line with what I had gathered from the manual, but wanted to clarify before I started to put together those .splice files. Thanks so much!

Cheers, Sally

2

3.2 years ago

Hi, yes, this is exactly what is meant by "unique prefixes of FASTA headers". You don't have to specify the full fasta headers; the protein ID (or even part of it) is sufficient if it identifies the protein uniquely. The All-vs-All step does not make use of the splicing information.
We still compute the all-vs-all for all proteins and will only select the best variant (based on the total number of homologous hits with all other genomes) in the later step. You can turn the UseOnlyOneSplicingVariant option on or off and check the different output that gets produced. But changing the *.splice files will not work - the internal database will not be updated, or, if no splicing file had previously been defined, it will be invalidated. In case you realize there is some problem with the splicing variants after the AllAll, it might still be possible to update it manually. It might become a feature in a future release of OMA standalone.

0

Realized I just responded to my own post:

Okay great, this is all in line with what I had gathered from the manual, but wanted to clarify before I started to put together those .splice files. Thanks so much!

Cheers, Sally
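The "unique prefixes of FASTA headers" rule discussed above can be checked with a few lines of code before running OMA. This is a hypothetical sketch (the function name and the second header are invented for illustration): each ID in a .splice line should match exactly one FASTA header by prefix.

```python
def check_splice_ids(splice_lines, fasta_headers):
    """Return (id, hit_count) pairs for IDs that do NOT resolve uniquely."""
    problems = []
    for line in splice_lines:
        for pid in (p.strip() for p in line.split(";") if p.strip()):
            # An ID is valid if it is a prefix of exactly one header.
            hits = [h for h in fasta_headers if h.startswith(pid)]
            if len(hits) != 1:
                problems.append((pid, len(hits)))
    return problems  # an empty list means every ID resolves uniquely

headers = [
    "NP_001027594.1 homeobox transcription factor Pax1/9 [Ciona intestinalis]",
    "NP_001027593.1 hypothetical second isoform [Ciona intestinalis]",
]
splice = ["NP_001027594.1;NP_001027593.1"]
assert check_splice_ids(splice, headers) == []
```

An ID that is too short, e.g. `NP_00102759`, would match both headers above and be flagged, which is exactly the ambiguity the "unique prefix" requirement rules out.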
https://www.physicsforums.com/threads/envelope-paradox.245146/
1. Jul 15, 2008

### colin9876

Maths can sometimes be nearly as interesting as physics... There are two envelopes on a table. Both have cash in; you don't know how much, but one has twice as much as the other. You can pick just one envelope, and you are allowed at most one swap. You choose envelope A, say, and see that it has £100 in it. Are you better off on average to swap and choose envelope B??? Envelope B could have £50 or £200 in it - 50% chance of each. So the expected value if you swap is 0.5*50 + 0.5*200 = £125. So on average you are better off swapping - how can that be????

2. Jul 15, 2008

### WarPhalange

Exactly how you calculated it. Look at it this way: 200 is 100 more than what you have, whereas 50 is 50 less than what you have. So basically, if you lose, you lose 50. If you win, you win 100. So on average you will win more money than you lose, even if you lose and win the same number of times. Say 4 wins and 4 losses: 4 wins = 800, 4 losses = 200, that's 1000, divided by 8 = 125 per game. The only reason this works is because you already have a benchmark by looking at the first envelope.

3. Jul 15, 2008

### matt grime

No, that is not the explanation. The analysis - that you're best to switch - in the paradox is independent of looking in the envelope. The real problem is that the paradox assumes that there is a uniform distribution on the infinitely many values that may be in the envelopes. This is not possible.

4. Jul 15, 2008

Right, a different version of that paradox which might clarify the issue is to allow unlimited swaps, but without looking inside the envelopes. If swapping envelopes really yields a 25% expected increase, then swapping again should yield even more increase, and so on without bound. I wonder if this paradox has any implications for improper priors that are sometimes used in Bayesian statistics? The same thing is often done there, trying to assign a uniform distribution on infinite support.

5.
Jul 15, 2008

### DaveC426913

I don't follow the logic here. Are you saying it is better to switch, or it is not?

6. Jul 15, 2008

The whole point of a paradox is that there is a valid line of reasoning supporting either conclusion.

7. Jul 15, 2008

### matt grime

I don't believe that is the case, quadraphonic. The only way that the reasoning is valid to swap is if there is such a thing as the uniform distribution on the natural numbers. There isn't. Let's put it this way. Suppose that this is a game show; then the monies in the envelope must be less than the budget of the show. So if you open the envelope and see that the amount is more than half the show's budget, then you stick; otherwise you change. But here we have to observe the money in the envelope (and make a reasonable assumption about the budget of the game show) in order to know what to do. So, to answer Dave's question: since you know neither the priors nor the posteriors, in the original case switching doesn't do you any good (or harm).

8. Jul 15, 2008

### DaveC426913

Ok, we know this intuitively. It seems that the key is to find the flaw in the logic of the OP's analysis. Why does his math seem to show that switching will on average yield a better result?

9. Jul 15, 2008

### WarPhalange

Are you assuming you have an infinite number of envelopes or what? Your rules aren't making any sense to me. If you have $100 in your envelope and switch to the other one and see that it has $200, switching again you would switch back to the first one, which had $100, right? You only have 2 envelopes to pick from.

10. Jul 15, 2008

### matt grime

For the 4th time: because you have assumed a prior distribution that is uniform on the natural numbers. You have no idea what the correct distributions are, so you can't do any reasoning. If you had some idea you could decide what to do, as I explained above. E.g. if you know there is at most 1000 dollars in there, and you open to find 800, don't swap!
Dually, if you open to find 1 dollar, and know that the amount is an integer number of dollars, then do swap.

11. Jul 15, 2008

### matt grime

The point of the paradox is that the reasoning is independent of the envelope's contents, so you can assume that you didn't open the first envelope. So you swap, but now you're in the same situation, so swap back; do this arbitrarily many times and you have an infinite expected gain. This is an utterly bog standard paradox explained in hundreds of places all over the place. Google for an explanation by Devlin, say, from the MAA.

12. Jul 15, 2008

### DaveC426913

This is too abstract an answer to be meaningful (to me at least). You might as well simply say "The OP's argument is flawed because it contains an error." Again, I know it's wrong but I can't say where the OP's argument contains a flaw. I suspect this is an incorrect conclusion: "50% chance of each". Is it because the doubling/halving rule means that the distribution is geometric rather than linear? i.e. a 50% chance of each would actually only be the case if the other envelope had £50 or £150 (evenly spaced, not multiplied)? The more I ponder, the more I think that's what you're saying, but my brain is hurting.

Last edited: Jul 15, 2008

13. Jul 15, 2008

### matt grime

I have explained it to you by example twice, though you may not have seen the second time as it was an edit. Sorry that you don't get the explanation, but that isn't my fault. Let me repeat the example. Suppose you know that there is *at most* 1000 dollars in the envelopes ('cos that is all that the offerer can afford, say). Then clearly opening and discovering 800 dollars means you don't swap. The posterior (loosely, the guess at what the monies might be) has changed 'cos of this information. The only way you can say, in the original version, that both possibilities of there being more or less in the other envelope are the same is if you assume that *all* possible dollar amounts might be in the envelopes.
Since we can assume that there really are only a finite number of dollars in the world, this is clearly nonsense (and mathematically impossible anyway, since there is no such thing as a probability measure on the natural numbers that assigns equal probability to all of them).

14. Jul 15, 2008

### mrandersdk

15. Jul 15, 2008

Right, of course my statement was in the context of the assumptions, which are, as you say, problematic. But I don't think it provides a very satisfying resolution to say that "since you don't know the priors, switching doesn't help you." Because, indeed, it's our very ignorance of the priors that leads us to want a uniform prior on the amounts in the first place. On top of that, using an improper prior still leaves us with perfectly sensible posteriors, so we lack the usual easy way of removing them. My understanding is that this paradox is still an open question. Perhaps that's not the case...

16. Jul 15, 2008

### Hurkyl Staff Emeritus

No, it doesn't! Lack of evidence for an alternate prior distribution does not constitute evidence for a uniform prior distribution! And even worse, what would ostensibly be "uniform" here isn't even well-defined! What would be uniform in one parametrization of the sample space is non-uniform in other parametrizations.

Last edited: Jul 15, 2008

17. Jul 15, 2008

### matt grime

Perhaps I could have chosen my words better, but under no prior distribution is it always preferable to switch irrespective of the amount in the envelope, and that is provable. With some priors switching is preferable if you know the amount in the first envelope; under others it is not. Under many the expected gain is zero. Such a thing can't exist, so this doesn't help. Open in what sense? There is a perfectly good explanation for it.

18. Jul 15, 2008

### colin9876

I find it the most intriguing paradox - many sites give poor and differing explanations.
For the record, the point about the upper budget is not relevant - I could have said a genie with infinite resources filled the envelope. (i) So either it's better to swap... or it makes no difference, and therefore (ii) the probability that envelope B only has £50 in it is 2/3 (and not 1/2 as we assume)... well, (ii) is unlikely to be correct, so IT IS better to swap if you find your envelope has £100 in it!!

19. Jul 15, 2008

### matt grime

But the genie could not have put a uniform distribution on the possible values, which is what the paradox requires. If you believe the 'budget' is the important point then you've not grasped the theory of probability. The main point of using a budget is to demonstrate that a non-uniform (and mathematically possible) prior shows that in some cases switching is bad, and in others is good.

20. Jul 15, 2008
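The point made above about proper priors can be checked numerically. Here is a quick Monte Carlo sketch (an illustration, not from the thread): with a proper prior on the smaller amount, here uniform on 1..1000 dollars, blindly switching envelopes yields the same expected value as sticking.

```python
import random

random.seed(1)  # fixed seed for reproducibility

def trial():
    x = random.randint(1, 1000)          # smaller amount, drawn from a proper prior
    a, b = random.sample([x, 2 * x], 2)  # shuffle the two amounts into envelopes
    return a, b                          # "stick" gets a, "switch" gets b

n = 200_000
stick = switch = 0
for _ in range(n):
    a, b = trial()
    stick += a
    switch += b

# Both sample means approach 1.5 * E[x] = 750.75; switching gains nothing.
print(stick / n, switch / n)
```

The naive "0.5·(x/2) + 0.5·(2x) = 1.25x" argument fails here because, once the prior is proper, the other envelope is not equally likely to hold half or double for every observed amount.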
https://www.physicsforums.com/threads/many-body-bogoliubov-transformation.896136/
# Many Body Bogoliubov transformation

Tags:

1. Dec 6, 2016

### Hl3

1. The problem statement, all variables and given/known data

The occupation of each single-particle state with wave vector k =/= 0 in the ground state is given by $n_k = \langle 0|b_k^\dagger b_k|0\rangle$, where $b$ and $b^\dagger$ are the Bogoliubov-transformed operators. Find an expression for $n_k$.

$b_k = \cosh(\theta)a_k - \sinh(\theta)a_{-k}^\dagger$

$b_k^\dagger = \cosh(\theta)a_k^\dagger - \sinh(\theta)a_{-k}$

2. Relevant equations

3. The attempt at a solution

I don't fully understand the notation with the zeros. I believe $n_k$ would equal 0; however, the question asks for an expression for $n_k$. Thanks in advance.

2. Dec 6, 2016

### eys_physics

No, it is not. The state $|0>$ is the vacuum of the annihilation operators $a_k$. That is, $$a_k|0>=0$$ for all $k$, but $$b_k|0>\neq 0$$ (unless $\theta =0$).
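For reference, assuming the standard form of the transformation, $b_k=\cosh\theta\,a_k-\sinh\theta\,a_{-k}^\dagger$, the hint above ($a_k|0\rangle=0$, together with the commutator $[a_k,a_{k'}^\dagger]=\delta_{kk'}$) leads to the answer in a few lines:

```latex
\begin{aligned}
n_k &= \langle 0|b_k^\dagger b_k|0\rangle \\
    &= \langle 0|\left(\cosh\theta\, a_k^\dagger - \sinh\theta\, a_{-k}\right)
                \left(\cosh\theta\, a_k - \sinh\theta\, a_{-k}^\dagger\right)|0\rangle \\
    &= \sinh^2\theta\,\langle 0|a_{-k}\, a_{-k}^\dagger|0\rangle
       \qquad \text{(the other three terms annihilate the vacuum)} \\
    &= \sinh^2\theta .
\end{aligned}
```

Only the $a_{-k}a_{-k}^\dagger$ term survives between vacuum states, so each mode with $k\neq 0$ is occupied with $n_k=\sinh^2\theta$, which is nonzero whenever $\theta\neq 0$, consistent with $b_k|0\rangle\neq 0$.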
https://ssconlineexam.com/forum/980/Siphon-will-fail-to-work-if-%E2%80%93
# Siphon will fail to work if –

[ A ] the densities of the liquids in the two vessels are not equal
[ B ] the levels of the liquids in the two vessels are at the same height
[ C ] both its limbs are of unequal length
[ D ] the temperatures of the liquids in the two vessels are the same

Answer : Option B

Explanation : A siphon is driven by the difference in height between the liquid levels in the two vessels. If the levels are at the same height, there is no net pressure difference to drive the flow, so the siphon fails to work.
2020-02-17 12:41:40
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8446040153503418, "perplexity": 1537.8557060201508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875142323.84/warc/CC-MAIN-20200217115308-20200217145308-00495.warc.gz"}
http://physics.stackexchange.com/tags/models/hot?filter=year
# Tag Info

## Hot answers tagged models

9  Actually a paper recently came out, and was highlighted in Popular Science, discussing using fermionic field concepts to model crowd avoidance at Netflix. You can imagine that the same concept could be applied in any situation where there are large numbers of people competing for limited preferred items. Update Now that we have a few minutes, ...

7  No. There is nothing wrong with perturbation theory, or with theories with known, restricted accuracy. The point of theory is to explain the results of observation from as simple an initial theoretical standpoint as possible. Therefore: Since experiment always has a finite uncertainty, one can only ask that theory match the experimental value within its ...

5  Here's my quantitative attempt at $4.$ and $1.$: The Coandă effect here is the tendency of the airflow to adhere to the surface of the ball. This means that near the surface of the ball, the streamlines are curved with a radius of curvature approximately equal to the radius of the ball $R$; this curvature results in a pressure gradient just as it does in ...

4  I can't see how a negatively charged electron can stay in "orbit" around a positively charged nucleus. Even if the electron actually orbits the nucleus, wouldn't that orbit eventually decay? Yes. What you've given is a proof that the classical, planetary model of the atom fails. I can't reconcile the rapidly moving electrons required by the ...

4  Let me give the naivest possible estimate, so that people have something to criticize. Assuming that most of the jet interacts with the ball and is deflected at a substantial angle, the force on the ball is roughly the momentum flow through the pen. In your units this is $\rho_{air} Q^2/(\pi d^2)$. Saying the force to levitate a ball is $1\times ...

3  Ok, I'm still not sure on what level you want to do this, but I will start you off with some basics.
The most important factor is probably the solar elevation angle, $\theta$. As described on the wiki-page, it can be calculated using this formula: $$\sin\theta=\cos h\cos\delta\cos\Phi+\sin\delta\sin\Phi$$ where $h$ is the hour angle, $\delta$ is the solar ...

3  Mathematics is just a systematic way of stating facts about the world. It is only useful inasmuch as it is internally self-consistent. The latter fact means that there is nothing to "assume" about mathematics. It is a relationship between axioms and conclusions that enables one to succinctly summarize many observations. Something like Galileo's ...

2  I'm a first year physics student, so my answer might not be satisfactory, but I hope it will give some insight into the problem. 1) From what I know we need to consider: drag, which I will address, and turbulence, which I know next to nothing about and therefore will ignore in the hope someone will be able to expand. We need the drag force to be equal to ...

1  There will always be solutions that can't be analytical. For example, any model of more than two bodies without any special constraints cannot be solved analytically, from the gravitational interactions between three planets to three particles interacting (electromagnetically or otherwise) in quantum theory. To have mathematically analytical solutions, ...

1  Short answer: that depends on your definition of sound theory. For instance, it is possible to find peer-reviewed papers considering such possibilities. The idea that antimatter can be gravitationally repelled from ordinary matter is definitely not the most popular one. Nevertheless, some people do try to apply it in an astrophysical context. Let us have a ...

1  Presumably, the analytical solution is using $$P(x,y,z) = \lim_{T\to \infty} \frac{1}{T} \int_{-T/2}^{T/2} P(x,y,z,t)\, dt$$ Note the limit that takes $T$ to infinity.
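The time-average definition in the last answer above is easy to sanity-check numerically: for a periodic signal, averaging over one period already gives the long-time limit. The helper function and the test signal $P(t) = 2 + \cos t$ below are illustrative assumptions, not part of the original answer:

```python
import math

def time_average(f, T, n=100_000):
    """Midpoint-rule approximation of (1/T) * integral_{-T/2}^{T/2} f(t) dt."""
    dt = T / n
    return sum(f(-T / 2 + (i + 0.5) * dt) for i in range(n)) * dt / T

# Assumed example signal: P(t) = 2 + cos(t), periodic with period 2*pi.
f = lambda t: 2.0 + math.cos(t)

one_period = time_average(f, 2 * math.pi)     # average over a single period
long_time  = time_average(f, 1000 * math.pi)  # average over 500 periods

# Both come out ~2.0: the cosine averages to zero over complete periods,
# so the long-time limit equals the one-period average.
```

This is exactly why, for a periodic solution with period $T$, the $\lim_{T\to\infty}$ in the definition can be replaced by an integral over a single period.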
If the solution is periodic with period $T$, then this is precisely equivalent to writing $$P(x,y,z) = \frac{1}{T} \int_{-T/2}^{T/2} P(x,y,z,t)\, dt$$ ...

1  I'm surprised the "real" schematic doesn't include a transformer turns ratio (actually, a model for the high voltage transformer, which has a primary winding with $N_p$ turns connected to the power oscillatory circuit and a secondary winding with $N_s$ turns (with $N_s > N_p$) connected to the discharge gap). Maybe everything is referenced to its ...

1  I'm no expert, but ... MIT's "Magnetic Circuits and Transformers" discusses eddy current losses in chapter V.2. An approximate formula for a magnetic sine wave at frequency $f$ and peak amplitude $B_{max}$ (in Tesla, mks units throughout) is: $$P_e = k_e f^2 t^2 B_{max}^2 V$$ where $k_e = \pi^2/(6 \rho)$ theoretically (but in practice is often ...

1  Yes. For photons in vacuum, the energy per photon is proportional to the photon's classical, electromagnetic frequency, as $E = \hbar \omega = h f$. Here, we see a connection between two classical properties of light: the energy and frequency. What is surprising is that the relation holds for matter, where there is no classical equivalent of the frequency. ...

1  No. When you hit the wall, the bicycle rotates around the front axis. The angular momentum L that you create for an arbitrary number of mass particles is $$L=\sum_i(r_i \times m_i v_i).$$ If you split location $r = R + r_i$ and $v = V + v_i$ with $R$ and $V$ being the center of mass location and velocity, respectively, and $r_i$ and $v_i$ the deviations from it, then it can be shown ...

1  OK, now we have a better idea what you're trying to do. If we can assume elastic collisions, then the answer should be independent of the mass of the bike (though it will depend on the relative distribution of the mass, i.e. center of gravity and moments of inertia). However you will need to think how to scale the velocity of your model. One way to think of ...
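The quoted eddy-loss formula is straightforward to evaluate. The material values below (silicon-steel laminations at 50 Hz) are assumptions for illustration, not from the answer:

```python
import math

def eddy_loss(f, t, B_max, V, rho):
    """Eddy-current power loss P_e = (pi^2 / (6*rho)) * f^2 * t^2 * B_max^2 * V.
    f: frequency (Hz), t: lamination thickness (m), B_max: peak flux density (T),
    V: core volume (m^3), rho: resistivity (ohm*m). SI units throughout."""
    k_e = math.pi ** 2 / (6.0 * rho)
    return k_e * f ** 2 * t ** 2 * B_max ** 2 * V

# Assumed example: 50 Hz, 0.35 mm laminations, 1.5 T peak,
# 1e-3 m^3 of silicon steel (rho ~ 4.7e-7 ohm*m) -> a few watts.
loss = eddy_loss(50.0, 0.35e-3, 1.5, 1e-3, 4.7e-7)
```

The $t^2$ dependence is the practical point: halving the lamination thickness cuts the eddy loss by a factor of four, which is why cores are laminated at all.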
1  There are many possible examples of this, and you may need to be more specific in what you want. Here are two that immediately come to mind: 1) A bead in a harmonic trap (or a bending cantilever) that is undergoing thermal kicks from Brownian motion. The strength of these fluctuations depends on temperature; if the temperature of the system changes over ...
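The first example above (a bead in a harmonic trap undergoing thermal kicks) can be illustrated with a minimal overdamped Langevin simulation. Everything below, including the parameter values and function name, is an assumed sketch rather than anything from the answer:

```python
import math
import random

def simulate_bead(k=1.0, gamma=1.0, kBT=1.0, dt=1e-3, steps=200_000, seed=42):
    """Euler-Maruyama integration of an overdamped bead in a harmonic trap:
    gamma * dx = -k * x * dt + sqrt(2 * gamma * kBT) * dW  (thermal kicks)."""
    rng = random.Random(seed)
    x = 0.0
    samples = []
    kick = math.sqrt(2.0 * kBT * dt / gamma)  # noise amplitude per step
    for i in range(steps):
        x += -(k / gamma) * x * dt + kick * rng.gauss(0.0, 1.0)
        if i > steps // 10:  # discard the initial transient
            samples.append(x)
    return samples

xs = simulate_bead()
var = sum(v * v for v in xs) / len(xs)
# Equipartition predicts <x^2> = kBT / k = 1.0, up to statistical noise;
# raising kBT (the temperature) strengthens the fluctuations, as the answer notes.
```

The point of the example is exactly the answer's: the fluctuation strength is set by temperature, so a time-varying temperature would make the variance of `x` drift over time.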
2013-12-22 07:50:21
https://subasah.wordpress.com/
# Magoosh issue1 practice test writing

Technology, while apparently aimed to simplify our lives, only makes our lives more complicated.

Write a response in which you discuss the extent to which you agree or disagree with the statement and explain your reasoning for the position you take. In developing and supporting your position, you should consider ways in which the statement might or might not hold true and explain how these considerations shape your position.

…………………………………………………………………………………………………………………………………………..

I see it entirely as a matter of how we interact with and use technology: that is what determines whether it benefits our lives or makes them more complicated than ever. The luxury we have of enjoying global news and the information necessary to keep body and soul together is due simply to the easy accessibility of technology. Going further, the lives of people around the globe have been drastically improved by the contribution of technology to individual lives. Poverty levels around the globe have declined below anything recorded before, and this is largely thanks to technology, as people have started selling their hand-made products on Facebook by creating pages. People who were not especially skilled are making money by selling their vegetables on Facebook, thereby increasing the overall prosperity of their communities; they have enrolled their children in good schools, which was not the case before, and they now get a good night's sleep and good meals to stay healthy. At the same time, some people simply waste their time hanging out on Facebook and chatting with their friends, which has reduced their performance in their studies; a competent contributor to this world becomes incompetent, wasting time on Facebook and getting drawn into libertine activities, perhaps instigated by one of his or her friends. Another point is that we can of course do whatever we want with technology; the chances of it fostering responsible people versus irresponsible people are almost 50-50.
In most cases, we see that even the construction of buildings and roads is crafted using computers, while at the same time one can simulate an atomic bomb to threaten the existence of humanity. One person works toward making this world a better place to live while another works toward its destruction. This school of thought has given us a lot to worry about in the way people use computers. So individual intention matters a great deal in whether life is made better or more complicated than ever. Even a voting system, employed to make human life easier, can be misused in different ways, such as hacking into the system; one vivid example is the reported Russian interference in the USA election. Technologies exist to make life easier, but it comes down to the way they are used: they can be fruitful and catastrophic at the same time.

This essay has only a 407 word count, which needs to be increased to 450 to 500 words for the writing section in TOEFL to get a perfect score, 27-30/30, in the writing section.

# discretion:n:

If you have the freedom to decide something on your own, the decision is left to your discretion. You’re in charge. Discretion traces back to the Latin verb discernere “to separate, to discern” from the prefix dis- “off, away” plus cernere “separate, sift.” If you use discretion, you sift away what is not desirable, keeping only the good. If you have the freedom to choose, something is “at your discretion.” Watch out when you hear the phrase “viewer discretion advised” on TV or at the movies; you will be watching something quite violent or explicitly sexual.

# dissertation:n:

A dissertation is a long piece of writing that uses research to bring to light an original idea. Don’t go to grad school unless you’re prepared to write, say, a 300-page dissertation on some topic. In everyday speech, we sometimes accuse people of delivering dissertations when they overload us with dull information.
If you’re annoyed with a long memo from your office manager about keeping the kitchen clean, you could mutter to a coworker, “How’d you like that dissertation Felix posted about rinsing out our mugs?”

# sift:nv:

Sift was used above in the case of discretion. To bake a cake, you sift the flour to get out the lumps. When you sift, you separate out one thing from another. When you sort through the mail looking for the bills or go through your photos to find that shot of your dog, that’s sifting, too. Detectives sift through piles of evidence when investigating crimes, and you might sift through the hundred applications you get from drummers eager to join your band, to find Ms. Right. When you’re at the beach, you can sift sand through your fingers, and you might see big machines that sift the sand to clean it.

# moot:nv: debatable, consider

When a point is moot, it’s too trivial to think about. If your basketball team loses by 40 points, the bad call by the official in the first quarter is moot: it isn’t important. Though moot can mean to debate endlessly without any clear decision or to think about something carefully, it most often describes ideas and arguments that don’t really matter. If your plane is crashing, whether or not your socks match is a moot point. When someone accuses you of making a moot point, he’s basically saying, “Come on! Let’s talk about what’s important.” As with so many things, people don’t always agree on what’s moot and what’s not.

# retroactive:adj:

The adjective retroactive refers to something happening now that affects the past. For example, a retroactive tax is one that is passed at one time, but payable back to a time before the tax was passed. The Latin word retroagere, an ancestor of the adjective retroactive, means “drive or turn back,” and goes along with the meaning of the word. Sometimes governments pass rulings that are set as if they were in effect before the ruling was even made, and that means they are retroactive.
On the bright side, you might be awarded a salary raise that is retroactive, meaning you’ll get paid more for work you did in the past. And, retroactive fads in clothing keep vintage clothing stores in business.

# meander:nv: sinuous, wander

To meander means to wander aimlessly on a winding roundabout course. If you want some time to yourself after school, you might meander home, taking the time to window shop and look around. Meander comes from a river in modern-day Turkey, the Maiandros, which winds and wanders on its course. Today, a stream or a path meanders, as does a person who walks somewhere in a roundabout fashion. If your speech meanders, you don’t keep to the point. It’s hard to understand what your teacher is trying to impart if he keeps meandering off with anecdotes and digressions. Pronounce meander with three syllables not two — me-AN-der.

# conciliatory:adj:

If you’re in a fight with a friend and you want to end it, you should make a conciliatory gesture, such as inviting her to a party you’re having. Conciliatory describes things that make other people less angry. The context is often a situation in which a dispute is settled by compromise. A synonym is propitiatory, though this adjective usually refers to avoiding the anger of someone who has the power to harm. In the word conciliatory, the –ory suffix means “relating to or doing,” and the root is from Latin conciliatus, from conciliare “to bring together, win over,” from concilium “council.”

# inclement:adj:

Inclement usually refers to severe or harsh weather that is cold and wet. When packing for a trip to the Caribbean bring tank tops and shorts, but don’t forget a raincoat in case of inclement weather. This adjective can also refer to a person or action that is harsh and unmerciful. Inclement is from a Latin root formed from the prefix in- “not” plus clemens “clement.” This English adjective clement can mean either mild or merciful; the more commonly used noun clemency can mean mildness or mercy.
A desperate search for a Maine elementary school teacher missing since Sunday has so far resulted in more questions than clues. — Look at the first part of the sentence above: it has no tense in it, so it is not bad to include such a construction in your own writing.

# surreal:adj:

If you see a goldfish fly out of a melting clock and offer you tango lessons, you’re having a surreal experience! Either that or you’re asleep and dreaming. Things that are surreal combine unrelated elements to create a bizarre scene. The adjective surreal comes from Surrealism, a movement that produced films, writing, painting, and other art forms that often contained irrational, disjointed images. So, surreal describes something that’s a bizarre mix of elements, often jarring and seemingly nonsensical. Images can be surreal, like the melting clocks in Salvador Dali’s paintings, but so can strange, dream-like moments in everyday life.

# enact:v:

You often hear that Congress is going to enact a new statute, which means that they will make it into a law. But enact also means to perform, like in a play. (Makes you wonder if the lawmakers are actors!) Inside the word enact is that little word act, meaning “to do.” That makes sense, because when you enact something, you make it happen. And of course, we know that to act also means to perform, and so enact means “to act out,” like on stage. Now that the new rules have been enacted, you’ll have to stop wearing your gorilla suit to work. Even after Labor Day.

# ordain:v:

To ordain is to make someone a minister, priest, monk, or other member of the clergy. In the Catholic church, for example, a bishop ordains new priests. When you say that people have been ordained, you usually mean that they’ve been invested with special religion-related powers. In many Buddhist traditions, senior monks ordain new monks and, increasingly, female monks (or nuns) as well.
Occasionally, this chiefly religious verb is used to mean “officially declare” or “decree” in a secular matter, as when a court ordains desegregation.

# sartorial:adj: tailor, one who patches and mends

If it’s the day before a big event and you have no idea what to wear and nothing in your closet is going to cut it, you are facing a sartorial dilemma — one that pertains to clothing, fashion, or dressing. Sartorial comes from the Modern Latin word sartor which means “tailor,” literally “one who patches and mends.” In English the adjectives sartorial and sartorially are used to refer to any matter pertaining to the consideration of clothing or fashion. The root word sartor has also made its way into the field of biology. The sartorius — a muscle in the leg and the longest muscle in the human body — gets its name because it is used when crossing the legs, also known as the “tailor’s position.”

# dumbfound:v: = dumb+confound, nonplus

The verb dumbfound means to puzzle, mystify, or amaze. If people never expected you to amount to much in high school, but you grew up to be a rocket scientist, you will surely dumbfound your former classmates at your next reunion. The word dumbfound is a combination of the words dumb and confound. Dumb, in the original sense, means unable to speak. Confound is from the Latin word confundere, which means to mix together as well as to confuse. Thus the blended word dumbfound has the sense of to confuse to the point of speechlessness. If you see a solar eclipse for the first time, it might dumbfound you.

# nonplus:v:

To nonplus is to baffle or confuse someone to the point that they have nothing to say. Something weird and mysterious can nonplus you, like a play that is performed entirely by chickens. If you know a little French or Latin, you’ll recognize that “non plus” means “no more.” When something bewildering nonpluses you, there’s no more you can say or do about it.
A goal of getting poor grades, running with a bad crowd, and refusing to eat would leave your parents nonplussed. Sometimes people misuse nonplus to mean “unimpressed,” but that’s not correct: to nonplus is to puzzle, confuse, and dumbfound.

# treacly:adj:

Use the adjective treacly to describe something that has a sticky, sweet flavor. Your dad’s chocolate pecan pie might be a little too treacly for your taste. Something that’s way too sugary is treacly. Your little brother might love treats like fudge and caramels and syrupy soft drinks that just taste treacly to you. You can also use the word in a more figurative way, to talk about overly sweet talk or behavior, like the treacly language on a sentimental greeting card. Treacly comes from treacle — a British term for molasses — originally “an antidote to poison,” from the Greek root theriake, “antidote for poisonous wild animals.”

# enjoin:v: official order

To enjoin is to issue an urgent and official order. If the government tells loggers to stop cutting down trees, they are enjoining the loggers to stop. Enjoin looks like it should mean bring together, and at one time, it did have that meaning. But in current usage, the only thing enjoin brings together is a command and the person on the receiving end of that order. If your doctor enjoins you to stop smoking, he is suggesting strongly that you quit.

# besotted:adj:

besotted /bɪˈsɒtɪd/ 1. strongly infatuated.
“he became besotted with a local barmaid” synonyms: infatuated with, smitten with, in love with, love-struck by, head over heels in love with, hopelessly in love with, obsessed with, passionate about, consumed with desire for, devoted to, doting on, greatly enamoured of, very attracted to, very taken with, charmed by, captivated by, enchanted by, enthralled by, bewitched by, beguiled by, under someone’s spell, hypnotized by; informal: bowled over by, swept off one’s feet by, struck on, crazy about, mad about, wild about, potty about, nuts about, very keen on, gone on, really into, hung up on, carrying a torch for; informal: twitterpated by; literary: ensorcelled by “she won’t listen to me—she’s besotted with him” 2. archaic intoxicated; drunk.

# Math logs, 23% correct

An algebraic solution to this problem is quite complicated and unwieldy. A much simpler solution method is backsolving. As always, start with (C). Let’s say P = 50%. That means 50% gets paid as rent, so K = 0.5(5000) = 2500. Then $\dfrac{1}{2}$ of this gets paid for groceries, so L = 1250. Then $\dfrac{1}{3}\text{L}=\bigg(\dfrac{1}{3}\bigg)(1250)\approx416$ and $\dfrac{2}{5}\text{L}=\bigg(\dfrac{2}{5}\bigg)(1250)=500$. Since 416 + 500 < 1000, we have 1250 – (416 + 500) > 1250 – 1000 = 250. With this choice, more than $250 would be left. This is too high. First of all, we know that (C) is not the right answer, but the tricky question is: in which direction should we eliminate answers? Recall that P is the percent paid to rent & other fixed bills: as P goes up, the amount left over goes down. If we want to get a smaller amount at the end, we need P to go up. Thus, we eliminate (A), (B), and (C). We could pick either of the remaining answers. Pick (E), P = 70%. Now, 70% goes to rent & other fixed expenses, so what’s left is 30%. We know 10% of $5000 is $500, so 30% is three times this, $1500. Thus, K = 1500. Then $\dfrac{1}{2}$ of this gets paid for groceries, so L = 750.
Then $\dfrac{1}{3}\text{L}=\bigg(\dfrac{1}{3}\bigg)(750)=250$ and $\dfrac{2}{5}\text{L}=\bigg(\dfrac{2}{5}\bigg)(750)=300$, so 750 – 250 – 300 = 200. Bingo! This leaves exactly the right amount. Answer = (E)

# Math Q logs

The fastest way to solve this problem doesn’t involve any arithmetic. We can use this shortcut because the problem asks us for a percentage. It doesn’t actually matter how many birds there really are. Pretend that there were only 10 birds. If 20% are tagged, then two are tagged. If we want half of the birds to be tagged, that would mean we would need to tag three more to get a total of five tagged birds. Here’s the tricky part. The problem asks us what percentage needs to be tagged of the remaining untagged birds. That means that we need to tag three of the remaining eight (because two were already tagged, leaving eight untagged). That’s three over eight, or 3/8 = 37.5%.

If A, B, C and D are positive integers such that 4A = 9B, 17C = 11D, and 5C = 12A, then the arrangement of the four numbers from greatest to least is: Remark: worked it out well on scratch paper but again wrote the wrong inequality; (b).

If AB = BD, and AB is 3/5 of AC, what is the ratio of the circumference of the larger semicircle to the combined circumference of the two semicircles? In this case taking the ratio as a value is perfectly fine, as we have to find a ratio.

Since square ABCD has area 25, each side of the square must have length 5. Let’s take the smaller shaded square and label one side as x. And we’ll label one side of the larger shaded square as y. Our goal is to find x. Since each side of the entire square has length 5, we know that x + y = 5. The question tells us that the area of the larger shaded square is 9 times the area of the smaller shaded square. In other words, $y^2$ is 9 times $x^2$. So: $9x^2 = y^2$. We can now take the square root of both sides of the equation to get: 3x = y. Now, we can solve for x.
Let’s take our first equation, x + y = 5, and replace y with 3x (because we know that 3x = y): x + y = 5, so x + 3x = 5, so 4x = 5, and x = $\dfrac{5}{4}$.

If we simply add or subtract these equations as is, we don’t get equal coefficients on A and B. Notice, though, we could multiply the top equation by 2 then subtract the bottom equation: That procedure led directly to the answer. The cost of 1 apple and 1 banana is \$1.30.

# Question

When should I implement the two different percent increase strategies? When should I use the formula of taking the difference between the two numbers, then dividing by the original? And when should I take a number and divide it by the other number to see the percent increase (as in this example)?

It all comes down to the wording of these problems, which can be admittedly confusing. There are a lot of different ways to word the questions, but we can break them down into two categories:

Category 1. The original amount is contained in the result: look for the phrase “is what percent of“. Here, you simply use new/old * 100. Example: Car sales increased from 500 in July to 600 in August. August car sales are what percent of July car sales? 600/500 * 100 = 120%

The second category is probably a bit more common.

Category 2. Comparing the difference to the original: look for the phrase “is what percent greater/less than”. Here, use the formula (new-old)/old * 100. Example: Car sales increased from 500 in July to 600 in August. August car sales are what percent greater than July car sales? (600-500)/500 * 100 = 20%

Reference: https://magoosh.zendesk.com/hc/en-us/articles/204308745–Student-Activities-at-Two-High-Schools-When-should-I-implement-the-two-different-percent-increase-strategies-

Melpomene High School has 400 students, and Thalia High School has 700 students. The following table shows the percentage breakdown for various groups in each school.
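The two categories reduce to two one-line formulas. Here is a quick sketch using the car-sales numbers from the text (the function names are my own, chosen for illustration):

```python
def percent_of(new, old):
    """Category 1: 'new is what percent of old' -> new/old * 100."""
    return new / old * 100.0

def percent_change(new, old):
    """Category 2: 'new is what percent greater/less than old'
    -> (new - old)/old * 100."""
    return (new - old) / old * 100.0

# Car sales: 500 in July, 600 in August.
cat1 = percent_of(600, 500)      # 120.0 -> August is 120% of July sales
cat2 = percent_change(600, 500)  # 20.0  -> August is 20% greater than July sales
```

The two answers always differ by exactly 100 percentage points for the same pair of numbers, which is a handy sanity check when deciding which wording a problem is using.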
The total number of people in honor society at Melpomene High School, regardless of other activities, is approximately what percent higher than the total number of people in honor society at Thalia High School, regardless of other activities?

Let’s think of it this way: 4 is what percent higher than 2? (4-2)/2 * 100% = 100%. The question above can be solved in similar fashion. In this question, we need to find the two numbers, and then find how much bigger one is than the other. At Melpomene, the four categories that include the honor society add up to 16% + 26% + 10% + 4% = 56%, and 56% of 400 = (0.56)*400 = 224 students. At Thalia, the four categories that include the honor society add up to 2% + 15% + 8% + 2% = 27%, and 27% of 700 = (0.27)*700 = 189 students. The question really is: 224 is what percent higher than 189? Estimate: 10% of 189 is approximately 19. 189 + 19 = 208, a 10% increase. 208 + 19 = 227, a 20% increase, and that’s just over 224, so 224 should be very close to, maybe slightly less than, a 20% increase. This leads us to the answer of (C).

Melpomene High School has 400 students, and Thalia High School has 700 students. The following table shows the percentage breakdown for various groups in each school. How many non-band members at Melpomene, regardless of other activities, would have to join the band so that they had the same number of band members as does Thalia High School? The answer is 25.

This question is just asking us to figure out the number in the band at Thalia and at Melpomene, and then subtract them. At Melpomene, the four categories that include the band add up to 11% + 14% + 26% + 4% = 55%, and 55% of 400 = (0.55)*400 = 220 students. At Thalia, the four categories that include the band add up to 8% + 10% + 15% + 2% = 35%, and 35% of 700 = (0.35)*700 = 245 students. The difference is 25 students. Answer = (B).

FAQ: Q: I thought the question was asking about non-band members. Why are we adding up band members?
A: There are two important phrases in the question that will help explain. 🙂 First, “regardless of other activities” means that we should only pay attention to whether or not someone is in the band. So, whether someone is in “band only,” “honor society & band only,” “band & athletic team only,” (and so on) we will count them as band members. Second, “would have to join the band” means that we want to know how many people who are not in the band (non-band members) would need to join the band. So, we need to find out how many people are in the band in both schools. Then we can see how many band members there are at Thalia and at Melpomene. Once we find out that there are 245 at Thalia and 220 at Melpomene, we know that 25 students would have to join band at Melpomene. That way there are the same number of band members at both schools.

The diagram shows the 44 nations that occupy the continent of Europe. (The diagram excludes Russia, which occupies both Europe & Asia.) Every dot is a smaller nation, with a national population less than 500,000; the circles are nations each with more than half a million people. Those nations in the “NATO” circle, as of 2013, are members of the NATO military alliance. Those nations in the “euro” circle, as of 2013, use the euro as their primary currency. Of the nations with national populations more than half a million people, approximately what percent of European nations are neither members of NATO nor primary users of the euro?

It took Ellen 6 hours to ride her bike a total distance of 120 miles. For the first part of the trip, her speed was constantly 25 miles per hour. For the second part of her trip, her speed was constantly 15 miles per hour. For how many miles did Ellen travel at 25 miles per hour? Think of how 2 can be made into 4: the percent increase needed to get 4 is 100%. A similar idea is enough for this question.

# exemplar:n:

A high school valedictorian is an exemplar of dedication and hard work.
Most parents would love for their children to emulate a student with such excellent grades. Notice the similarity between the words exemplar and example. This word can mean both “perfect example” and “typical example.” A fireman can be an exemplar of courage, and a building can be an exemplar of the architecture from a certain period. The definition of an exemplar is a person or thing that is considered as a pattern to be copied. An example of an exemplar is a person that others try to imitate, such as Michael Jackson. An example of an exemplar is a copy of a manuscript.

# precursor:n:

You’ve heard the old saying “Pride comes before the fall?” Well, you could just as easily say pride is a precursor to the fall. A precursor is something that happens before something else. You don’t have to be a dead languages scholar to guess that this word springs from a Latin source — praecursor, “to run before.” A precursor is usually related to what it precedes. It’s a catalyst or a harbinger, leading to what follows or providing a clue that it’s going to happen. Binging on holiday candy is a precursor to tummy aches and promises to exercise more. Draconian policies in unstable nations are often a precursor to rebellion.

# impunity:n: exemption, freedom

If doing something usually results in punishment, but you do it with impunity, you will not be punished for the deed. Students are not allowed to chew gum in school, but teachers do it with impunity. Not fair! The noun impunity comes from the Latin roots im- “not” plus poena “punishment,” a root which has also produced the word pain. Impunity, then, is the freedom from punishment or pain. If someone has committed a punishable offense but does not have to fear punishment, he or she does it “with impunity.” Cybercriminals operate with impunity from some Eastern European countries.

# chaste:adj: celibate, continent

If you belong to a chastity club, you might have to take a pledge to be chaste until marriage.
Chaste can be defined as "pure and virtuous," but basically it means "not having sex." This word is related to the Latin source of the verb castrate, "to remove a man's testicles," so it's definitely related to sex. And chaste is from the same Latin source as the noun caste, "a Hindu social class separated from other classes." Don't mix the two up: chaste means pure and virtuous, while caste names a social class.

# erotic:adj: Use erotic to describe a sexy, sexy person. What makes that person so sexy? Maybe his or her erotic attitude or looks, meaning "arousing." The word erotic came into English from French — of course! — and can be traced back to the Greek word erōtikos, from erōs or erōt-, meaning "sexual love." The adjective erotic is often used to describe a person's carnal desires, but it can be used to characterize anything that's sexual in nature or that arouses sexual desires, such as the erotic themes in a racy movie, an erotic dancer in a club, or erotic images in a painting.

# clamber:nv: scramble. To clamber is to climb awkwardly. Hamlet's Ophelia was said to have been clambering on a weak branch of a willow when she met her "muddy death." It's never a good idea to clamber, let alone on weak willow branches. We associate the word clamber far more often with toddlers than with Shakespearean tragedy. Toddlers are known for naturally clumsy, ill-coordinated movements we deem cute, not foolish. Suitably enough, the word comes from the delightful and long-obsolete Middle English word clamb, the past tense of climb, a word that has all the happy logic of a toddler's imagination.

# clamor:nv: To clamor is to make a demand — LOUDLY. It's usually a group that clamors — like Americans might clamor for comprehensive health care coverage. The noun clamor is often used specifically to describe a noisy outcry from a group of people, but more generally, the word means any loud, harsh sound.
You could describe the clamor of sirens in the night or the clamor of the approaching subway in the tunnel.

# circumspect:adj: If you are circumspect, you think carefully before doing or saying anything. A good quality in someone entrusted with responsibility, though sometimes boring in a friend. The word circumspect was borrowed from Latin circumspectus, from circumspicere, "to be cautious." The basic meaning of Latin circumspicere is "to look around." Near synonyms are prudent and cautious, though circumspect implies a careful consideration of all circumstances and a desire to avoid mistakes and bad consequences.

# circumvent:v: dodge
1. find a way around (an obstacle). "if you come to an obstruction in a road you can seek to circumvent it"

# reconnaissance:n: Reconnaissance is checking out a situation before taking action. Often it's used as a military term, but you could also do reconnaissance on a new employee before you hire her, or on a resort before you take a vacation. Reconnaissance is a noun, and it technically means "the act of reconnoitering." Whoa. Never heard that word before? Reconnoitering is just a fancy way of saying that you're checking something out — sometimes in a sneaky way. If you like a girl in your Spanish class, you might ask a friend to do some reconnaissance to find out what she's like. The word comes from the French reconnaître, which means "recognize."

# Renaissance:n: The Renaissance was the period in Europe between the 14th and 17th centuries when there was a surge of interest in and production of art and literature. "Renaissance art" describes the style of art that came out of this period. When you see the word Renaissance spelled with a capital R, you can be sure it's referring to the European cultural movement, or the art, literature, and architecture it inspired. The Renaissance began in Italy, largely as an outgrowth of interest in classical art and ideas.
The word itself comes from the French phrase renaissance des lettres, used by the 19th-century historian Jules Michelet. In Old French, renaissance means "rebirth."

# chide:v: To chide someone is to ride them or get on their case, without really getting in their face. People have been nagging since well before the 12th century, when the word chide came along as a new way to say "complain" or "rail." If you want to remind someone of a flaw they have or an error they keep repeating, you might chide them with sarcasm, humor, or some seriousness. Where a sharp elbow in the ribs lets you know "Stop it, right now!," a chide is more like a gentle elbow in the belly, saying "Come on, you're late; did you forget your watch again?"

# complimentary:adj: If you say something complimentary, like "Grandma, that plastic flower looks so pretty in your hair," you are flattering, praising, or admiring someone. "Resembling a compliment" is one way to define the word complimentary, when you use it in the sense of giving praise. A second meaning of complimentary is "free." If your hotel includes breakfast with the price of your room, they may call it a complimentary breakfast. It's easy to get complimentary confused with complementary, which sounds exactly the same but means "filling in or completing." If something is complementary, then it somehow completes or enhances the qualities of something else. If your beautiful voice is completely complementary to your brother's songwriting skills, you should form a family band! You've probably heard of "complementary colors," colors that are opposite in hue on the color wheel but actually go well together. When combined, they make a harmonious palette. People's personalities can also be complementary, as can certain food pairings. But be careful not to confuse this adjective with the closely spelled complimentary, which means "supplied free of charge."

# languorous:adj: Lacking spirit or liveliness.
The related noun is languor. To be languorous is to be dreamy, lackadaisical, and languid. When someone is languorous, she's lying around, daydreaming, possibly fanning herself lazily. It's a little self-indulgent. Languorous refers to a certain kind of mood everyone gets in sometimes — when you'd rather lie around thinking than doing work or having fun. When you're languorous, you're tired and maybe a little depressed. Things can be languorous, too — like a hot, languorous summer afternoon or a languorous song that's slow and mournful. If you've ever lounged in bed for an hour after you were supposed to get up, you're familiar with feeling languorous.

# precipitate:v,adj: Precipitate usually means "bringing something on" or "making it happen" — and not always in a good way. An unpopular verdict might "precipitate violence," or one false step at the Grand Canyon could precipitate you down into the gorge. Precipitate, as a verb, can also mean, specifically, "to fall from clouds," as rain, snow, or other forms of precipitation do. When used as an adjective, precipitate means "hasty" or "acting suddenly." If you decide to throw your class project in a trash masher just because someone in your class had a similar idea, then your actions might be described as precipitate. Or if you do that sort of thing regularly, you may be a precipitate person.

# precipitous:adj: A sharp, steep drop — whether it's in a stock price, a roller coaster, or a star's popularity — could be described as a precipitous one. Put simply, precipitous means perilously steep. Look closely and you'll spot most of the word precipice (a sheer, almost vertical cliff) in precipitous. Now imagine how you'd feel standing at the edge peering over, and you'll grasp the sense of impending danger that precipitous tends to imply. Precipitous declines in sales lead to bankruptcy. Precipitous mountainside hiking trails are not for the acrophobic. It can describe an ascent, but precipitous is most often used for things going literally or figuratively downhill.
# voodoo:nv: Voodoo is a set of religious beliefs mainly followed by people in Caribbean countries and the Southern United States. People who practice voodoo believe that death is a transition from the visible to the invisible world. You may see voodoo portrayed on TV and in movies as a scary, violent cult that uses black magic and voodoo dolls to torment people. In reality, voodoo is a varied, community-centered religion with deep African influences, which often incorporates Catholic saint imagery. Voodoo comes from the Louisiana French voudou, ultimately from a West African language.

# wont:n: A wont is a custom or habit, like my wont to drink at least ten cups of coffee a day. (In this particular example, some people might call my wont an addiction.) Wont is a tricky word in terms of pronunciation; some people argue it sounds like want, while others insist it's pronounced like won't. Perhaps the confusion over pronunciation explains why this word is used relatively infrequently in everyday speech. It's most people's wont to use a synonym like custom or habit.

# covet:v: If you covet something, you eagerly desire something that someone else has. If it's 95 degrees out and humid, you may find yourself coveting your neighbor's air conditioner. If the word covet sounds familiar, you're thinking of the Tenth Commandment: "Thou shalt not covet thy neighbor's house, thou shalt not covet thy neighbor's wife, nor his manservant, nor his maidservant, nor his ox, nor his ass, nor any thing that is thy neighbor's." Basically this means you should be happy with your electronic gadgets and not be jealous when a friend gets something better.

# atypical:adj: Typical means what you would expect — a typical suburban town has lots of neat little houses and people. Atypical means outside of type — an atypical suburban town might be populated by zombies in damp caves.
Atypical is a synonym of "unusual," but it carries a more objective feel — scientific studies might mention atypical results, suggesting that there is a clear definition of what is typical and what is not. "Unusual" is more of a casual observation that one might make in a non-scientific context.

# lapse:nv: A lapse is a temporary slip, failure, or break in continuity. Eating a second helping of cake when you're otherwise doing well on your diet is a lapse. Eating the whole cake in one sitting is a serious lapse in judgment. First used to imply a "slip of the memory," the noun lapse evolved in the sixteenth century from the Latin lapsus, meaning "a slipping and falling, falling into error." The connotation of "a moral slip" developed later, and the verb form came into existence even later than that. Behaving badly one day when you're usually on your best behavior is a lapse; behaving badly again after a short stint of being well-mannered means you're lapsing back into nasty old habits.

# relapse:nv: A relapse is a decline, especially of someone's health. If your grandmother survived cancer only to have it return two years later, you could say she suffered a relapse. Relapse implies that someone has recovered from an illness and slid back into a worse state, like when you are getting over a cold but then suddenly feel bad all over again. Unwanted behavior can also be described this way; if you find yourself biting your nails again, it's a nail-biting relapse. The Latin word relabi, "to slip back," is the root of relapse.

# corollary:n: Corollary describes a result that is the natural consequence of something else. You could say that your weight gain is a corollary of the recent arrival of a bakery across the street from your house. The noun corollary describes an action's consequence, such as having to study more, a corollary to getting a bad grade.
The word is often seen with the prepositions "to" or "of," as in "a corollary to fortune is fame." Math enthusiasts may already be familiar with the word corollary, which can be used more formally to describe a new proof or proposition that follows naturally from an established one.

# crusader:n: A crusader is a person who works hard or campaigns forcefully for a cause. Most crusaders advocate dramatic social or political change. You can call a fierce champion for a cancer cure a crusader, and another kind of crusader could be an activist who works for school reform. Crusaders tend to be radical or at least progressive, embracing some kind of change. Crusader comes from crusade, which meant "campaign against a public evil" in the 18th century, but which earlier referred mainly to the religious-based military Crusades of the Middle Ages.

# onslaught:n: Onslaught is a military term that refers to an attack against an enemy. It's safe to say that no one wants to be caught on the receiving end of an onslaught, because there will be lots of danger, destruction, and probably death. One way to help you remember the brutal meaning of onslaught is through the word's English origin, slaught, meaning "slaughter." But onslaught can be used in non-military ways, too. It can mean a barrage of written or spoken communication, like an onslaught of emailed birthday wishes. Taken individually, the birthday wishes are nice, but an onslaught is too many, too fast, all at once. Onslaught can also mean a sudden and severe start of trouble. For example, if your office is unprepared for the onslaught of flu season, the entire sales force will be home sick at the same time.

# outset:n:
### n: the time at which something is supposed to begin

# quasi:adj: Use quasi when you want to say something is almost but not quite what it describes. A quasi mathematician can add and subtract adequately, but has trouble figuring out fractions. The adjective quasi is often hyphenated with the word it resembles.
Quasi-scientific ideas are ideas that resemble real science, but haven't been backed up with any real evidence. A quasi-religious person may attend church services, but he doesn't take much interest in what's being said. Get the idea? It's a great alternative for "kind of."

# peripatetic:adj,n: If you're reading this on a treadmill or while taking a walk, you may know about the peripatetic, or walking, philosopher Aristotle, who taught while strolling with his students. Or maybe you just like being a peripatetic, a walking wanderer. Peri- is the Greek word for "around," and peripatetic is an adjective that describes someone who likes to walk or travel around. Peripatetic is also a noun for a person who travels from one place to another or moves around a lot. If you walk in a circle, you are peripatetic, or walking, but you aren't a peripatetic, or wanderer, unless you actually go somewhere.

# itinerant:n: An itinerant is a person who moves from place to place, typically for work, like the itinerant preacher who moves to a new community every few years. Itinerant is pronounced "eye-TIN-er-ant." It might remind you of itinerary, the traveler's schedule that lists flights, hotel check-in times, and other plans. It's no surprise that both words come from the Latin word itinerare, meaning "to travel." Itinerant was first used in the 16th century to describe circuit judges who traveled to faraway courtrooms. Today, almost anyone can be an itinerant.

# botch:nv: If you botch something, you make a mess of it or you ruin it. If you totally botch your lines in the school play, you stammer and stutter your way through the whole thing. Interestingly, the word botch originally meant the opposite of what it means today. The Middle English word bocchen meant to mend or repair. As a noun, botch means an embarrassing mistake or something that is done poorly, especially due to lack of skill.
If they've never painted before, your friends working on set design might make a complete botch of the scenery for the play, which might involve repainting the whole thing.

# studied:adj: Studied describes a result achieved not spontaneously, but by calculated and deliberate effort. It will probably take a studied effort to not appear nervous when you give an oral presentation. Leaders often do not respond immediately to important events. They get a little background information first so they can give a studied response. When stars have to stand around on the red carpet before the Oscars to have their pictures taken, their smiles become less spontaneous and more studied. Even if you walk past a group of girls with studied nonchalance, they still know that you have noticed them.

# eclipse:nv: (occultation, occult) Have you ever seen an eclipse? That's when the sun, earth, or moon cross paths and cover each other up temporarily. A solar eclipse happens when the moon blocks our view of the sun for a bit. A lunar eclipse happens when the moon is on one side of the earth and the sun directly opposite, so the moon disappears. A TV eclipse, perhaps the most serious of all, is when your dad walks in at the most crucial part of the movie and blocks your view of the TV while he lectures about taking out the trash.

# marred:adj: Having a blemish or flaw; impaired in appearance or quality by imperfections.

# spate:n: A spate is a large number. If a spate of new coffee shops open in your neighborhood, it'll be easy for you to stay wide awake. You'll have easy access to plenty of caffeine. Though it's now used to describe a large number or unusually large amount of something, the word spate originally described a sudden flood of water, such as a river overflowing after a downpour.
Thinking about being overwhelmed by a sudden rush of water will help you remember to use spate when you encounter an unexpected overflow of anything, whether it's books, robberies, celebrity break-ups, or corporate mergers.

# proverbial:adj: If something is proverbial, it's referred to in a familiar saying. If your little brother knocks over his milk and starts crying, you might think of the proverbial spilled milk. Proverb is the root of proverbial, and it comes from the Latin word proverbium, "a common saying." Proverbs are little stories or expressions that usually teach a lesson, like "Don't cry over spilled milk," which means "It's a waste of time to be upset about something that can't be helped." You could say to your dog, "Well, aren't you the proverbial best friend?" or tell your sister, who's dyed her hair purple, "You stick out like the proverbial sore thumb."

# nerve:nv: A nerve is a group of fibers that send sensation or physical feeling to the brain. Back pain can sometimes be caused by a damaged or pinched nerve. Your body depends on your nerves for sensing pain, heat, and cold — not to mention making it possible for you to move your muscles. You can also use the word nerve to mean bravery or daring: "She didn't know if she'd have the nerve to skydive when she was finally up in the plane." In the 1500s, to nerve was "to ornament with threads." All of these come from a Latin root, nervus, "sinew, tendon, cord, or bowstring."

# unnerving:adj: (word family: unnerve, nervous) Use the adjective unnerving to describe situations and experiences that cause you to lose your courage. No matter how brave you are, a walk alone through a cemetery at night is bound to be a little unnerving. You might find it unnerving to get a flat tire on a deserted country road at sunset, or to find yourself onstage mid-play having completely forgotten your lines.
In the 1620s, the root word unnerve meant "to destroy the strength of," but by the early 1700s it came to mean "to deprive of courage."

# redoubtable:adj: Redoubtable means honorable, maybe even intimidatingly so. If your grandmother worked tirelessly to raise four kids on her own, started her own taxi cab business, and to this day keeps all of her cabbies in line, she is without a doubt redoubtable. The adjective redoubtable traces back to the French word redouter, meaning "to dread," a combination of the prefix re-, which adds emphasis, and douter, which means "to doubt." But it isn't the redoubtable person that you doubt — it's yourself or your ability to compete against or be compared to him or her. That's where the dread comes in. But you can learn a lot from and be inspired by redoubtable people, if you can just get over being afraid of them.

# redoubt:n: fortification. A redoubt is a fort or retreat, like a temporary military shelter. Want to see a redoubt? Go to the US Military Academy at West Point, where there are redoubts from the Revolutionary War. It's also spelled "redout." Redoubts were often built around existing fortifications out of earth or stone to protect the most vulnerable soldiers outside the main area. Redoubt means "place of retreat," and a figurative redoubt might be the comfort you get from your group of friends or even your own certainty about the truth of your beliefs.

# forte:n: Forte means an area in which you are strong or good. If you have two left feet and no sense of rhythm, dancing would not be considered your forte. Better to impress people with card tricks, if that's your area of expertise, or your forte. Your forte is what you would focus on if you decided to enter a talent show. The word forte actually comes from the similar-sounding Latin word fortis, which means "strong." Romans (and countless groups since) called the big, barricaded structures they built "forts" because they were supposed to stay strong and keep out the hordes of invading barbarians.
In music, playing forte means playing loud.

# retiring:adj: unassuming, modest, shy, self-effacing, reticent. If you are a retiring person, you avoid being at the center of attention. You can often be found in the library and other quiet places, and if someone compliments you, you're likely to blush and change the subject. If you call someone retiring, it isn't necessarily clear whether you mean it as a compliment or something closer to a put-down. Usually, the word is used to describe someone who is shy or modest to a fault. But it can also be used to suggest that someone isn't arrogant, which is usually a good thing. And, of course, retiring can also refer to someone who stepped down from their last job and doesn't intend to work anymore.

# contrived:adj: If you see something that seems fake because it was too perfectly planned out, call it contrived. If you can easily predict the final minutes of a made-for-TV movie, then call it contrived. The adjective contrived describes something that is artificially planned, especially in an obvious way, so it comes across as faked or forced. It's not just drama that can come off as contrived. Someone's speech habits, wardrobe, or even personality can seem contrived. Whenever someone appears as if he or she is "trying too hard," they might seem contrived, or the opposite of "natural."

# unassuming:adj: The word unassuming means modest, lacking in arrogance, pleasant, or polite. You'll find that some of the most unassuming people are actually the most interesting and powerful of all. They're just decent enough not to display it all the time. It's been said that when you assume, you make an ass of you and me: that's because when you assume, you draw conclusions that you shouldn't. If you're unassuming, you don't make that mistake. Even though he was a rock star, I found Jason to be unassuming and delightful. He treated everyone like a friend.
It's the height of irony that the real Wizard of Oz turns out to be an unassuming country gentleman, when the image he projected was of fearsome, raw, tyrannical power.

# proscribe:v: veto, interdict, forbid, disallow. To proscribe something is to forbid or prohibit it, as a school principal might proscribe the use of cell phones in class. Proscribe sounds similar to the word prescribe, but be careful: these words are essentially opposite in meaning. While proscribe means forbid, prescribe is used when a doctor recommends a medicine or remedy. Of course, if you want an excuse for not following your doctor's orders, you could say you were confused about the meaning of these two words — but that would be lying, which is proscribed by most people's value systems. And it would also be bad for your health.

# confer:v: If you gab, chat, and talk it up with someone, you have a conversation, but if you're looking for input from each other as you talk, you confer, or consult, together. They had a family meeting to confer about a schedule for sharing the new laptop. Many uses of the verb confer involve consulting with another person or as a group. Confer has a second use meaning "bestow," which means to award or hand over something. You can confer a medal on a winner or hero, or you can confer status through a promotion or assignment. Each year the teacher would confer the special honor of summer hamster-sitter on one responsible student.
### v: present
"The university conferred a degree on its most famous former student, who never graduated"
### v: have a conference in order to talk something over
"We conferred about a plan of action"

# wield:v: If you wield a tool or a weapon, you handle it effectively. Picture a gallant knight wielding a sword or a skillful chef wielding a whisk. You don't just have to wield something physical; you can also wield, or exert, influence or authority. Wield is frequently followed by the word power.
If you were a king, you could wield great power in your kingdom — exerting your influence over everything from food rations to castle upkeep. As it is, though, you might just wield power over your pet goldfish. Note: wield follows the "i before e, except after c" spelling rule.
### v: handle effectively
"The burglar wielded an axe"
Synonyms: handle, manage
### v: have and exercise
"wield power and authority"
Synonyms: exert, maintain

# gallant:adj: If you volunteer to remove a huge, hairy spider from your bathroom ceiling, your whole family will be grateful for your gallant actions. The adjective gallant means "heroic or brave." In the past, gallant was used to describe a man's behavior toward a woman, especially if he saved her from something or helped her with something she was unable to do on her own. It can still be used that way, but more often it describes any kind of bravery, and it is just as correct to describe a woman's bravery as gallant as it is a man's.

# intimation:n: inkling, glimmering. The noun intimation means a hint or an indirect suggestion. Your teacher's intimation that there could be a quiz the next day might send you into a panic, while your friend sitting beside you might not even notice. Intimation comes from the Latin word intimationem, which means an announcement. In English, intimation refers to a less direct form of communication. It's a suggestion or hint, rather than a blatant statement of fact. Your first intimation that your brother had a girlfriend was the amount of time he spent whispering into the phone. The second intimation was when he asked your parents for money for two movie tickets.

# impudent:adj: An impudent person is bold, sassy, and shameless. If you want to get into a fancy nightclub and you tell the bouncer, "Let me in, I'm much more beautiful than all these ugly losers in line," that's impudent behavior. Impudent comes from the Latin combination of im, meaning "without," and pudens, meaning "shame."
We often call someone impudent if they're disrespectful, snotty, or inappropriate in a way that makes someone feel bad. If you know someone has just lost all their money on the stock market, don't be impudent and ask them how they're going to afford gas money for their yacht.

# gratuitous:adj: Gratuitous means "without cause" or "unnecessary." Telling ridiculous jokes at a somber occasion would be a display of gratuitous humor. Gratuitous can be used to refer to something that's unnecessary and mildly annoying. If a friend frequently gives you fashion tips, even though you've expressed no interest in receiving them, you'd be correct in labeling her advice as gratuitous. In addition, gratuitous can be used to indicate that something is not only unnecessary but also inappropriate. Some people claim that some films and video games contain gratuitous violence — that is, violence that is excessive and offensive.

# ineluctable:adj: Huh? Are you scratching your head at this word? The ineluctable conclusion is that you haven't the faintest idea what it means. Ineluctable means impossible to avoid. A five-syllable beauty like ineluctable is obviously not the kind of word you throw around in daily speech. It's far more often used as a written word, as in the common phrase "ineluctable conclusion." It is used interchangeably with the more common unavoidable, though ineluctable implies an unsuccessful attempt to battle against whatever is ineluctable: after all, it comes from a Latin word meaning "to struggle."
https://math.stackexchange.com/questions/1213530/which-of-these-two-factorizations-of-5-in-mathcalo-mathbbq-sqrt29
# Which of these two factorizations of $5$ in $\mathcal{O}_{\mathbb{Q}(\sqrt{29})}$ is more valid?

$$5 = (-1) \left( \frac{3 - \sqrt{29}}{2} \right) \left( \frac{3 + \sqrt{29}}{2} \right)$$ or $$5 = \left( \frac{7 - \sqrt{29}}{2} \right) \left( \frac{7 + \sqrt{29}}{2} \right)?$$

$\mathcal{O}_{\mathbb{Q}(\sqrt{29})}$ is supposed to be a unique factorization domain, so the two above factorizations are not distinct. But I can divide factors in both of them by units to obtain yet more seemingly different factorizations. The presence of the unit $-1$ in the first factorization does not trouble me, since for example on this page http://userpages.umbc.edu/~rcampbel/Math413Spr05/Notes/QuadNumbFlds.html the factorization of $3$ in $\mathcal{O}_{\mathbb{Q}(\sqrt{13})}$ is given as $(-1)(7 - 2 \sqrt{13})(7 + 2 \sqrt{13})$. I honestly find rings of complex numbers far easier to understand!

• I don't understand what you mean by "more valid." Mar 30, 2015 at 22:07

• I was tempted to answer the one with the $-1$, but it's easy to accomplish the sign change with a judicious choice of units to multiply by. And heck, given that the fundamental unit in this ring is a so-called "half-integer," you could even argue that a factorization like $(11 - 2\sqrt{29})(11 + 2\sqrt{29})$ is also valid. Mar 31, 2015 at 0:30

• @Robert: Yes, because $\frac{3+\sqrt{29}}2\cdot\frac{5+\sqrt{29}}2=11+2\sqrt{29}$, so that's still the same prime. Mar 31, 2015 at 16:28

• @Henning: Alright, what do you make of this factorization of 4 in $\mathcal{O}_{\mathbb{Q}(\sqrt{17})}$? $(-1)(8 - 2\sqrt{17})(8 + 2\sqrt{17})$ Isn't that less "fundamental" than this other one? $$\left(\frac{5}{2} - \frac{\sqrt{17}}{2}\right)^2 \left(\frac{5}{2} + \frac{\sqrt{17}}{2}\right)^2$$ Bob, you too feel free to chime in on this one. Apr 1, 2015 at 0:43

• @RobertSoupe: Yes, because $8\pm2\sqrt{17}$ is not irreducible.
Apr 1, 2015 at 2:14

Note that $$\frac{7-\sqrt{29}}2 \cdot \frac{5+\sqrt{29}}2 = \frac{3+\sqrt{29}}2$$ and $$\frac{5+\sqrt{29}}2 \cdot \frac{-5+\sqrt{29}}2 = 1$$ So the two factorizations are equally good, just related by unit factors. (Remember that even in a UFD, factorizations are only unique up to associatedness.)

• Good. It's really hard doing arithmetic in a real quadratic number field, 'cause the omnipresence of units thoroughly obscures the situation. Mar 30, 2015 at 23:06

• These two factorizations are related by multiplication by a fundamental unit. Though I'm not sure I yet fully understand what makes a fundamental unit fundamental. But I could also be wrong about $\frac{5 + \sqrt{29}}{2}$ being a fundamental unit. – user153918 Apr 1, 2015 at 14:30

• @AlonsodelArte: I'm not sure it matters that the unit is fundamental. Unique factorizations are unique up to multiplication with any unit, not restricted to fundamental ones. (MathWorld agrees with you that this particular unit is fundamental, though.) Apr 1, 2015 at 14:42

• @AlonsodelArte: If someone's trying to single out one factorization as more valid or canonical than another (among ones related by unit factors), they're doing it wrong. Apr 1, 2015 at 17:49

• They could be doing it wrong. Let's suppose one of the factorizations in the OP's question is the canonical one. Let's also suppose someone has somehow obtained a ridiculous but technically good factorization with $2373704925213782176 \pm 440785938820106883 \sqrt{29}$. Multiply that by the wrong unit and they'd get themselves further away from the canonical factorization. Fortunately there are only two directions to go here. Assuming there is a factorization that can be called more canonical, what would be the more efficient way of getting there? Apr 1, 2015 at 21:13

This question is a real humdinger.
In $\frac{a + b \sqrt{29}}{2}$ it's worthwhile to make the absolute values of its $a$ and $b$ as small as possible, which makes the $3/2$ factorization look much better. But I feel the unit in that factorization is misleading because $5$ is a positive number, not a negative number. I think if a unit is included in a factorization, it should give you an idea as to what side of $0$ (or in what quadrant, in the case of an imaginary ring) the composite number is. So I'm going to have to say the $7/2$ factorization is more valid, in my opinion. Fundamental units provide an important clue to this vexing question, but they also provide a red herring. If $u$ is the fundamental unit in $\mathbb{Z}\left[\frac{1 + \sqrt{29}}{2}\right]$ and $p$ is a prime such that $|N(p)| = 5$, then it is $\pm u^n p$ that will lead us to all the other factorizations of $5$, not $\pm up^n$ (this is the red herring). Then we must look to see if there is another property of $u$ that we can try to carry over to $p$. There is. If $u$ is the fundamental unit, then it is the smallest unit greater than $1$. Verify this numerically: $$u = \frac{5 + \sqrt{29}}{2} \approx 5.192582403567252.$$ See then that $$p = \frac{3 + \sqrt{29}}{2} \approx 4.192582403567252$$ but $$p = \frac{7 + \sqrt{29}}{2} \approx 6.192582403567252.$$ The first $p$ is closer to $1$ than the second $p$. Therefore the more valid factorization is $$(-1) \left( \frac{3 - \sqrt{29}}{2} \right) \left( \frac{3 + \sqrt{29}}{2} \right) = 5.$$ • I don't think proximity to $1$ is that important. But still a thumbs up from me. Apr 11, 2015 at 18:57 In $\mathbb{Z}^+$, if $n > 1$ and $n = ab$, we expect that $n > a$ or $n > b$, maybe both. Our choices for $a$ and $b$ are considerably limited if $n$ is prime. With $n$ composite, we can choose $a$ and $b$ such that both $n > a$ and $n > b$ are true. 
When we broaden our view to all of $\mathbb{Z}$, we can make similar statements: if $|n| > 1$ and $n = ab$, then $|n| > |a|$ or $|n| > |b|$, maybe both. In a ring like $\mathcal{O}_{\mathbb{Q}(\sqrt{29})}$, we can make $a$ or $b$ arbitrarily large. One of the comments to another answer alludes to the formula $$\left(\frac{5}{2} + \frac{\sqrt{29}}{2}\right)^k \left(\frac{3}{2} + \frac{\sqrt{29}}{2}\right),$$ with $k \in \mathbb{Z}$. But is it possible to choose $ab = 5$ such that both $|a| < 5$ and $|b| < 5$ hold true? In fact, ignoring multiplication by $-1$, there is only one possible choice: $$\left(-\frac{3}{2} + \frac{\sqrt{29}}{2}\right) \left(\frac{3}{2} + \frac{\sqrt{29}}{2}\right).$$ Any other choice can make $a$ very small, but $b$ concomitantly much larger than $5$. From this it is much easier to arrive at $$5 = (-1)\left(\frac{3}{2} - \frac{\sqrt{29}}{2}\right) \left(\frac{3}{2} + \frac{\sqrt{29}}{2}\right)$$ than it is to get to the factorization involving $\frac{7}{2}$. For this reason, I think the factorization with $\frac{3}{2}$ is more valid. • This might not necessarily work in a ring like $\mathbb{Z}[\sqrt{6}]$, but it is the best effort I have seen among these answers in carrying over from $\mathbb{Z}$ what we feel is right about a factorization. Apr 11, 2015 at 18:55 I hesitated to answer because I thought I'd have to invent a scale with which to weigh factorizations by validity, and you might disagree with my priorities for the scale. Plus I have to define a bunch of terms, like "unnecessary," and you might not agree with my definitions. But then I thought the worst that could happen is that you and a few others downvote my answer, so here goes. Given multiple factorizations that differ by multiplication by units, weigh them according to the following priorities: 1. In the highest priority, the factorization contains no unnecessary multiplication by units. 
For example, $-6 = (-1)^{756478254} \times 2 \times 3$ has unnecessary units, but $-6 = (-1) \times 2 \times 3$ does not. Also, no unnecessary divisions that get canceled out by unnecessary factors. For example, do $(\sqrt{29})^2$ but not $$\left(\frac{0}{2} + \frac{2\sqrt{29}}{2}\right)^2.$$ 2. If it can be done while also satisfying the highest priority, the next priority is that the factorization conveys as much information as possible about the forms algebraic integers may take in the ring at hand, including, but not limited to, the presence of integers with negative norms and the presence of integers of the form $$\sum_{i = 1}^n \frac{a_i \theta^{i - 1}}{n},$$ where $n$ is the algebraic degree of $\theta$, and each $a_i \in \mathbb{Z}$. 3. With the above priorities satisfied, the absolute value of each $a_i$ is the smallest possible. Maybe you think I have my priorities backwards. Maybe you completely disagree with my concept and you hope an answer much better than the ones you've gotten so far comes along. But if you agree with these priorities, then $$(-1)\left(\frac{3}{2} - \frac{\sqrt{29}}{2}\right)\left(\frac{3}{2} + \frac{\sqrt{29}}{2}\right) = 5$$ is the more valid factorization, by a nose. 1. Both of the factorizations you presented satisfy the first priority. 2. Only the first factorization you presented conveys the fact that numbers in $\mathcal{O}_{\mathbb{Q}(\sqrt{29})}$ can have negative norms, though both convey the fact that numbers in this domain can be of the form $$\frac{a_1}{2} + \frac{a_2\sqrt{29}}{2}.$$ 3. In both factorizations $|a_2| = 1$, but $|a_1|$ in the first factorization is smaller. • First, wouldn't $(-2)\times 3$ be a better factorization than $(-1)\times 2\times 3$? A better example would be factoring $-4$ in the usual integers, where arguably $(-1)\times 2^2$ is better than $(-2)\times 2$ because the latter contains different factors related by units. 
Apr 2, 2015 at 14:02 • Second, $2^2\left(\frac{\sqrt{29}}2\right)^2$ is not a valid factorization at all, because $\frac{\sqrt{29}}2$ is not an algebraic integer. $\frac{a+b\sqrt{29}}2$ is an algebraic integer if and only if $a$ and $b$ are both odd (or both even, in which case the division is redundant). Apr 2, 2015 at 14:04 • Third, it is not clear what you mean by "conveying information" in your second criterion. Are you trying to say that a factorization is better if the factors look pedagogically instructive? That would be somewhat hard to make precise, I think. Apr 2, 2015 at 14:11 • @HenningMakholm I think it's a good answer but not worthy of the bounty. He doesn't make it clear enough that the invalid factorizations he presents are meant to be invalid. My answer won't be worth the bounty either and I look forward to you ripping it apart. – user155234 Apr 2, 2015 at 21:53 • Yes, it does seem rather unnecessary, kind of like raising $-1$ to an odd exponent, or 1 to any exponent. Apr 3, 2015 at 19:51
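The unit relations and norms discussed in this thread can be checked mechanically with exact integer arithmetic. Below is a minimal sketch, not from the thread itself: an element $(a + b\sqrt{29})/2$ with $a \equiv b \pmod 2$ is stored as the pair `(a, b)`, and the helper names `mul` and `norm` are our own.

```python
def mul(x, y):
    """Exact product of (a + b*sqrt(29))/2 and (c + d*sqrt(29))/2.

    Since a = b (mod 2) and c = d (mod 2), both numerators below are
    even, so the integer divisions are exact.
    """
    a, b = x
    c, d = y
    return ((a * c + 29 * b * d) // 2, (a * d + b * c) // 2)

def norm(x):
    """Field norm N((a + b*sqrt(29))/2) = (a^2 - 29*b^2)/4."""
    a, b = x
    return (a * a - 29 * b * b) // 4

# (7 - sqrt29)/2 * (5 + sqrt29)/2 = (3 + sqrt29)/2
assert mul((7, -1), (5, 1)) == (3, 1)
# (5 + sqrt29)/2 is a unit: times (-5 + sqrt29)/2 it gives 2/2 = 1
assert mul((5, 1), (-5, 1)) == (2, 0)
# the two factorizations of 5, with the extra unit -1 in the first
assert mul((3, -1), (3, 1)) == (-10, 0)   # i.e. -5, so (-1)*(-5) = 5
assert mul((7, -1), (7, 1)) == (10, 0)    # i.e. 5
# the factors are irreducible: their norms are (up to sign) the prime 5
assert (norm((3, 1)), norm((7, 1))) == (-5, 5)
print("all identities check out")
```

Repeatedly applying `mul` with the fundamental unit $(5 + \sqrt{29})/2$ walks from either factor of $5$ to all of its associates, which is exactly why no one associate is canonically "the" factor.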
https://wiki.math.ntnu.no/ma3150/2019v/start?rev=1547468479&do=diff
====== MA3150 Analytic Number Theory, Spring 2019 ======

Analytic number theory studies the distribution of the prime numbers, based on methods from mathematical analysis. Of central importance is the study of the Riemann zeta function, which embodies both the additive and the multiplicative structure of the integers. It turns out that the location of the zeros of this meromorphic function is intimately linked to the distribution of the primes. At the end of the nineteenth century, this insight led to the celebrated prime number theorem. The zeta function has been subject to intensive research ever since, but many fundamental questions remain open, of which the Riemann hypothesis undoubtedly is the most famous.

Key words for the course: Arithmetic and multiplicative functions, Abel summation and Möbius inversion, Dirichlet series and Euler products, the Riemann zeta function, the functional equation for the zeta function, the gamma function, the Mellin transformation and Perron's formula, the prime number theorem, the Riemann hypothesis, Dirichlet characters, Dirichlet's theorem on primes in arithmetic progressions.

The syllabus for the course is as defined by the lectures described in detail below. I expect you to be able to present the basic concepts and ideas discussed during the lectures. The exercises should be viewed as an integral part of the syllabus.

The final lecture took place on March 19. You are supposed to work on the topic for your oral presentation during the four remaining weeks of the semester.

===== Reference Group =====

[[larsmagnus.overlier@gmail.com|Lars Magnus Øverlier]]

  * {{ :ma3150:2019v:report_refgroup_jan14-2019.pdf |Minutes}} from the first meeting January 14.

===== Contents of the lectures =====

  * Lecture 1, January 8: The Poisson summation formula with the example of the Gaussian function. Definition of the Riemann zeta function $\zeta(s)$. Euler product for $\zeta(s)$ and Euler's proof of the divergence of the series of reciprocals of the primes (from Davenport Ch. 1; see also Ch. Vanden Eynden, "Proofs that $\sum 1/p$ diverges", Amer. Math. Monthly **87** (1980), 394--397). See {{ :ma3150:2019v:euler.pdf |this note}} for a more precise consequence of Euler's argument.
  * Lecture 2, January 14: Apostol, sections 2.1 - 2.9. The Möbius function $\mu(n)$, Euler's totient function $\varphi(n)$; basic properties and the relation between these two functions, Dirichlet convolution, Möbius inversion, the von Mangoldt function $\Lambda(n)$, multiplicative functions.
  * Lecture 3, January 15: Apostol, sections 2.10 - 2.12, 3.1 - 3.4 (see also Thm. 4.2 in 4.3, which implies (6) on page 54). More on multiplicative functions, big oh notation, Abel summation and the Euler--Maclaurin summation formula, some asymptotic formulas, meromorphic continuation of $\zeta(s)$. (See {{ :ma3150:2019v:continuation_zeta.pdf |this note}} for both Abel summation and the Euler--Maclaurin formula applied to $\zeta(s)$.)
  * Lecture 4, January 21: Apostol, sections 3.5, 3.7, 3.11, 4.1 - 4.3. Further applications of Abel summation and the Euler--Maclaurin formula (more on the analytic continuation of $\zeta(s)$, relation between $\pi(x)$ and $\psi(x)$, a weak version of Stirling's formula), average order of $d(n)$; for the average order of $\varphi(n)$, see Exercise 1, problems 3.4, 3.5 in Apostol.
  * Lecture 5, January 22: Apostol 3.10 - 3.11, 4.5 - 4.8; see also Ch. 7 in Davenport. Chebyshev's bounds on $\pi(x)$ and Mertens's theorem on the asymptotics of $\sum_{p\le x} p^{-1}$ (see {{ :ma3150:2019v:chebyshev_mertens.pdf |this note}} for a short route to these results; see also [[https://arxiv.org/pdf/math/0504289v3.pdf|here]] for an interesting account of Mertens's theorems); the sum $\sum_{n\le x} \mu(n)/n$; the number of square-free numbers less than or equal to $x$ (see Ex. 2.6 in Apostol for the "groundwork").
  * **Special lecture** related to this course: Christian Skau, [[https://www.math.ntnu.no/seminarer/perler/|"A century of Brun's sieve - a momentous event in prime number history"]], January 25, 12:15 - 13:00, 1329 SB2. Coffee and cakes are served during the lecture.
  * Lecture 6, January 28: Preparation for our study of $\zeta(s)$: the basics of the Gamma function; functional equation, infinite product representation, reflection formula, Stirling's formula. See {{ :ma3150:2019v:gamma_function_notes.pdf |Note on the Gamma function}}.
  * Lecture 7, January 29: The gamma function continued; proof of Stirling's formula and some consequences, Legendre's duplication formula. Riemann's memoir and the functional equation for $\zeta(s)$ (see Ch. 8 in Davenport).
  * Lecture 8, February 4: Entire functions of order 1 and product representation of $\xi(s)$ (see Ch. 11 and 12 in Davenport). (Notice our usage of Cauchy estimates and Jensen's formula from complex analysis.)
  * Lecture 9, February 5: We now have two representations of $\zeta'(s)/\zeta(s)$: 1) in terms of $\Lambda(n)$ via the Euler product of $\zeta(s)$ and 2) in terms of the pole and the zeros of $\zeta(s)$. In this and the next lectures, we will see what this relation leads to. We begin by deducing the facts that $N(T+1)-N(T)=O(\log T)$ and that $\zeta'(s)/\zeta(s)$ for large $t$ is $\sum_{|\gamma-t|\le 1} \frac{1}{s-\rho} + O(\log |t|)$ (see Ch. 15 of Davenport).
  * Lecture 10, February 11: We establish the argument principle and apply it to deduce the Riemann--von Mangoldt formula for $N(T)$. We introduce and discuss the function $S(T)$, which can be viewed as the argument of $\zeta(s)$ on the critical line. We show that $S(T)=O(\log T)$ (see Ch. 15 of Davenport). (The lecture is given by Kamalakshya Mahatab.)
  * Lecture 11, February 12: We discuss the most interesting points of Exercise 1 and Exercise 2. (The lecture is given by Kamalakshya Mahatab.)
  * **NB! The lecture on February 18 is cancelled** because of sickness absence of the lecturer.
  * Lecture 12, February 19: We establish Perron's formula (this is done in a special case in Ch. 17 of Davenport and in the general case in 11.12 of Apostol).
  * Lecture 13, February 22, 14:15-16, Room 734 SB2 (**This is an extra lecture**): We now apply Perron's formula with remainder term, our estimates for $\zeta'(s)/\zeta(s)$, and our knowledge about the nontrivial zeros of $\zeta(s)$ to deduce von Mangoldt's explicit formula for the Chebyshev function $\psi(x)$ (see Ch. 17 in Davenport). We finish the lecture by observing what the Riemann hypothesis implies about the prime counting function $\pi(x)$.
  * Lecture 14, February 25: We establish the zero-free region of de la Vallée Poussin (Ch. 13 of Davenport) and use it to prove the prime number theorem (Ch. 18 of Davenport).
  * Lecture 15, February 26: We start preparing for Dirichlet's theorem on primes in arithmetic progressions by considering some facts about finite Abelian groups (see Ch. 6 in Apostol and Ch. 1 in Davenport), including the character group and orthogonality of the characters. Finally, we define Dirichlet characters and consider the simplest examples (for $k \le 5$).
  * Lecture 16, March 4: We start discussing Dirichlet's theorem. We will mostly follow Apostol's proof (see Ch. 7 of Apostol), but will use some arguments from Davenport's Ch. 4 as well. We may interpret "Dirichlet's theorem" either as the easier "Euler formula" $\sum_{p\in A(h,k)} p^{-\sigma}=\frac{1}{\phi(k)}\log\frac{1}{\sigma-1}+O(1), \sigma>1$ or the "Mertens formula" $\sum_{p\le x, p\in A(h,k)}\frac{\log p}{p}=\frac{1}{\phi(k)}\log x+O(1)$, where $A(h,k)$ is the arithmetic progression $\{m: m=h+nk, n\ge 0\}$ and $(h,k)=1$.
  * Lecture 17, March 5: We finish the proof of Dirichlet's theorem, stressing that the key point is to show that $L(1,\chi)\neq 0$ for all nonprincipal characters $\chi$.
  * Lecture 18, March 11: We will in the remaining lectures discuss the ideas that eventually lead to the prime number theorem for arithmetic progressions (see Ch. 20 and 22 in Davenport). We begin by introducing the notion of a primitive character, see Ch. 5 of Davenport and Ch. 8 of Apostol. We will mostly follow Apostol, relying on finite Fourier transforms (Gauss sums). This lecture will roughly cover 8.1 - 8.2, 8.5 - 8.7 in Apostol. (Be aware that Theorem 8.11 in Apostol is just Parseval's relation for the Fourier transform on $\mathbb{Z}_k$.)
  * Lecture 19, March 12: We continue our discussion of primitive characters and Gauss sums, covering roughly 8.8 - 8.12 in Apostol.
  * Lecture 20, March 18: We establish the functional equation for $L(s,\chi)$ when $\chi$ is a primitive character, following Ch. 9 in Davenport.
  * Lecture 21, March 19 (FINAL LECTURE): We recall our route to the prime number theorem and discuss how to proceed to obtain the corresponding result for arithmetic progressions. The lecture will include some words on zero-free regions and Siegel zeros (see Ch. 14 in Davenport).

===== Exercises =====

You are welcome to work on the exercises in room 1329 SB2 on Friday 14:00--15:00, under my guidance, as indicated below. Solutions to the problems will be provided in due course.

  * Exercise 1, from Apostol: 2.1, 2.2, 2.6, 2.21, 2.26, 3.1, 3.2, 3.4, 3.5, 4.7, 4.18, 4.19 (a) (guidance offered on January 18 and 25). {{ :ma3150:2019v:solution_exercise1_2019.pdf |Solution}}.
  * {{ :ma3150:2019v:problems2_2019.pdf |Exercise 2}} (guidance offered on February 1 and February 8).
  * Exercise 3, from Apostol: 11.1 (a)--(d), 11.3, 13.2 (observing that $A(x)=\pi(x)+\pi(x^{1/2})/2+\pi(x^{1/3})/3+\cdots$, you may prove the stronger result that $A(x)=\pi(x)+O(\sqrt{x})$), 13.3, 13.4 (guidance offered on March 1).
  * Exercise 4, from Apostol: 6.14, 6.15, 7.1, 7.2, 7.3, 7.4, 7.6 plus {{ :ma3150:2019v:problem_march15_19.pdf |this problem}} that was left as a challenge in the lecture on March 5 (guidance offered on March 8 and March 15).
  * Exercise 5, from Apostol: 8.5, 8.6, 8.7, 8.8, 8.9 (guidance offered on March 15 and March 22).

===== Oral presentations =====

As part of the oral exam, the students should give short presentations of topics assigned to them. These are topics that are not covered by the lectures. You may choose a topic yourself (to be approved by me) or choose one from the following list:

  - Mertens's theorems and Mertens's constant (chosen by ''Tor Kringeland'')
  - The Bertrand--Chebyshev theorem, including Ramanujan and Erdős's work on it (chosen by ''Claudia Wohlgemuth'')
  - Ramanujan primes
  - Skewes's number and sign changes in $\pi(x)-\operatorname{li}(x)$
  - General distribution of nontrivial zeros of $\zeta(s)$
  - Zeros on the critical line, including density results (chosen by ''Terje Bull Karlsen'')
  - The error term in the prime number theorem and zero-free regions
  - The Lindelöf hypothesis and the density hypothesis
  - Mean value theorems - results and conjectures
  - Zeta functions for which RH fails
  - Dirichlet's divisor problem, including Voronoi's summation formula (chosen by ''Lars Magnus Øverlier'')
  - Elementary sieve methods and Brun's theorem on twin primes (chosen by ''Daniel Olaisen'')
  - Voronin's universality theorem and value distribution of the Riemann zeta function
  - Lagarias's version of Guy Robin's criterion (chosen by ''Morten Ravnemyr'')
  - The Beurling--Nyman condition for RH
  - Li's criterion for RH (chosen by ''William Tell'')
  - The Bohr--Cahen formulas for abscissas of convergence and the growth of $\sum_{n\le x} \mu(n)$.
  - Alternate proofs of the functional equation for $\zeta(s)$ (Titchmarsh gives 7 proofs; take a look and make your own selection of some of them)
  - Approximations of $\zeta(s)$, including the approximate functional equation
  - The Riemann--Weil explicit formula
  - Siegel zeros.

The aim of the presentations is to convey to your peers what the topic is about and the most important and interesting problems and results associated with it. You are not expected to study proofs of deep theorems, but you should be able to say a little more than what we can find on the Wikipedia. You may choose to give a blackboard lecture or a Beamer/Powerpoint presentation. Each presentation should last for about 15--20 minutes.

Please let me know your choice of topic before **March 29**. There should be only one student per topic. Once a topic is chosen, I will make a note of it in the list above. Accordingly, we follow the principle of "first-come, first-served" when assigning topics.

===== Exam, dates and location =====

The oral presentations will be given on May 8. Oral examinations will take place on May 9. Both events will take place in Room 656 SB2. A detailed schedule will be announced in due course.

===== Guidance and consultation before the exam =====

Before the Easter break, I will be available for consultation until April 3. I will be traveling April 4 -- 11, and will again be available on April 12. After the Easter break, I will be available only May 6 -- 7.

You may in principle come at "any" time during the days I am in my office, but I would recommend that you contact me in advance to make an appointment.
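Lecture 5 above concerns Mertens's theorem, $\sum_{p\le x} p^{-1} = \log\log x + M + o(1)$, where $M \approx 0.2615$ is Mertens's constant. A quick numerical sanity check of this asymptotic is easy to run; the sieve helper and variable names below are our own and not part of the course material.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # cross out multiples of p starting at p*p
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

M = 0.2614972128  # Mertens's constant, truncated
for x in (10**3, 10**4, 10**5):
    s = sum(1.0 / p for p in primes_up_to(x))
    # the two printed columns agree to a couple of decimal places
    print(x, round(s, 4), round(math.log(math.log(x)) + M, 4))
```

Even at these modest heights the agreement is striking, which reflects the fact that the error term in Mertens's theorem decays much faster than the $\log\log x$ main term grows.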
http://math.stackexchange.com/questions/331121/number-of-prime-ideals-of-a-ring
# Number of prime ideals of a ring Could anyone tell me how to find the number of distinct prime ideals of the ring $$\mathbb{Q}[x]/\langle x^m-1\rangle,$$ where $m$ is a positive integer, say $4$ or $5$? What result/results do I need to apply to solve this problem? Thank you for your help. - This is obtained by stringing together several basic facts. 1) Let $P(x) = x^n-1$. Then $P(x) = \prod_{d \mid n} \Phi_d(x)$ is a product of cyclotomic polynomials. The product extends over all positive integers $d$ dividing $n$. It is known that each $\Phi_d$ is irreducible over $\mathbb{Q}$: see e.g. $\S$ 10.1.2 of these notes. Since $\operatorname{gcd}(P(x),P'(x)) = 1$, $P(x)$ is a squarefree polynomial: it has no repeated irreducible factors. (Or look directly at the roots of $\Phi_d$ -- they are the primitive $d$th roots of unity -- to see that the $\Phi_d$'s are distinct monic polynomials.) 2) If $P(x) \in \mathbb{Q}[x]$ is a product of distinct irreducible polynomials $P_1(x) \cdots P_r(x)$, then by the Chinese Remainder Theorem, $\mathbb{Q}[x]/(P(x)) \cong \prod_{i=1}^r \mathbb{Q}[x]/(P_i(x))$. Since each $P_i$ is irreducible, this is a product of $r$ fields. 3) The prime ideals in a finite product $\prod_{i=1}^r R_i$ of commutative rings are precisely those of the form $R_1 \times \ldots \times R_{i-1} \times \mathfrak{p}_i \times R_{i+1} \times \ldots \times R_r$, with $\mathfrak{p}_i$ a prime ideal of $R_i$. In particular a product of $r$ fields has precisely $r$ prime ideals: we must take each $\mathfrak{p}_i = 0$. Putting these together we get that the number of prime ideals in $\mathbb{Q}[x]/(x^n-1)$ is $d(n)$, the number of (positive) divisors of $n$. - The prime ideals of $\mathbb Q[x]/(x^m-1)$ correspond to prime ideals in $\mathbb Q[x]$ containing $x^m-1$, by taking the preimage under the surjection $\mathbb Q[x] \to \mathbb Q[x]/(x^m-1)$. Since $\mathbb Q[x]$ is a PID, those ideals are $(f)$ for $f$ an irreducible factor of $x^m-1$. 
We have the factorization $$x^m-1 = \prod_{d|m} \Phi_d$$ where $\Phi_d$ is the $d$th cyclotomic polynomial, which is irreducible. So the number of irreducible factors of $x^m-1$ is $\tau(m)$, the number of positive divisors of $m$. -

• @City and so more generally, $x^m-1$ can be replaced with any polynomial, and the same discussion about prime ideals corresponding to prime factors of the polynomial applies. – rschwieb Mar 15 '13 at 13:10
• $\sigma(m)$ usually means the sum of the positive divisors of $m$. – M Turgeon Jun 21 '13 at 18:47
• Either use $\sigma_0(m)$ or $\tau(m)$. – lhf Jun 21 '13 at 18:51
• Thanks, I've corrected that. – marlu Jun 25 '13 at 0:07

A small addendum to marlu's answer: The Chinese Remainder Theorem implies $$\mathbb{Q}[x]/(x^m-1) \cong \prod_{d|m} \mathbb{Q}(\zeta_d),$$ which is a finite direct product of fields. So the (prime) ideal structure is quite easy.
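For small $m$ the count can be checked directly. The sketch below (pure Python; the helper names are mine, not from the thread) builds each cyclotomic polynomial by exact division of $x^n-1$ by the $\Phi_d$ for proper divisors $d$, and confirms that the number of prime ideals of $\mathbb{Q}[x]/(x^m-1)$ equals the number of divisors of $m$.

```python
from functools import lru_cache

def poly_div(num, den):
    """Exact division of polynomials given as coefficient lists
    (lowest degree first); assumes den divides num exactly."""
    num = list(num)
    q = [0] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1] // den[-1]
        for j, c in enumerate(den):
            num[i + j] -= q[i] * c
    return q

@lru_cache(maxsize=None)
def cyclotomic(n):
    """Phi_n(x), via Phi_n = (x^n - 1) / prod_{d|n, d<n} Phi_d."""
    p = [-1] + [0] * (n - 1) + [1]          # x^n - 1
    for d in range(1, n):
        if n % d == 0:
            p = poly_div(p, cyclotomic(d))
    return tuple(p)

def num_prime_ideals(m):
    """Number of prime ideals of Q[x]/(x^m - 1): one per divisor of m."""
    return sum(1 for d in range(1, m + 1) if m % d == 0)

# e.g. m = 4 has divisors 1, 2, 4, so three prime ideals,
# matching the three irreducible factors Phi_1, Phi_2, Phi_4 of x^4 - 1.
```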
https://brilliant.org/problems/14-angry-numbers/
# 14 Angry Numbers!

Number Theory Level 4

$\large a_1^4+a_2^4+a_3^4+a_4^4+\ldots +a_{14}^4=49999$

Let $a_1,a_2,a_3,\ldots,a_{14}$ be integers. How many $14$-tuples $(a_1,a_2,a_3,\ldots,a_{14})$ satisfy the above equation?
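A standard first step for equations in fourth powers (not part of the original page) is reduction modulo 16: every fourth power is congruent to 0 or 1 mod 16, so a sum of fourteen fourth powers can only hit residues 0 through 14. A quick Python check:

```python
# Fourth powers modulo 16: even bases give 0, odd bases give 1.
residues = {pow(a, 4, 16) for a in range(16)}

# A sum of 14 fourth powers is therefore congruent to some value in
# 0..14 modulo 16, but 49999 leaves remainder 15, so no 14-tuple exists.
target_residue = 49999 % 16
num_tuples = 0
```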
https://research-information.bris.ac.uk/en/publications/on-the-structure-of-random-graphs-with-constant-r-balls
# On the structure of random graphs with constant $r$-balls

David Ellis, Itai Benjamini

Research output: Contribution to journal, Article (Academic Journal), peer-review

## Abstract

We continue the study of the properties of graphs in which the ball of radius $r$ around each vertex induces a graph isomorphic to the ball of radius $r$ in some fixed vertex-transitive graph $F$, for various choices of $F$ and $r$. This is a natural extension of the study of regular graphs. More precisely, if $F$ is a vertex-transitive graph and $r \in \mathbb{N}$, we say a graph $G$ is {\em $r$-locally $F$} if the ball of radius $r$ around each vertex of $G$ induces a graph isomorphic to the graph induced by the ball of radius $r$ around any vertex of $F$. We consider the following random graph model: for each $n \in \mathbb{N}$, we let $G_n = G_n(F,r)$ be a graph chosen uniformly at random from the set of all unlabelled, $n$-vertex graphs that are $r$-locally $F$. We investigate the properties possessed by the random graph $G_n$ with high probability, for various natural choices of $F$ and $r$. We prove that if $F$ is a Cayley graph of a torsion-free group of polynomial growth, and $r$ is sufficiently large depending on $F$, then the random graph $G_n = G_n(F,r)$ has largest component of order at most $n^{5/6}$ with high probability, and has at least $\exp(n^{\delta})$ automorphisms with high probability, where $\delta > 0$ depends upon $F$ alone. Both properties are in stark contrast to random $d$-regular graphs, which correspond to the case where $F$ is the infinite $d$-regular tree. We also show that, under the same hypotheses, the number of unlabelled, $n$-vertex graphs that are $r$-locally $F$ grows like a stretched exponential in $n$, again in contrast with $d$-regular graphs. In the case where $F$ is the standard Cayley graph of $\mathbb{Z}^d$, we obtain a much more precise enumeration result, and more precise results on the properties of the random graph $G_n(F,r)$.
Our proofs use a mixture of results and techniques from geometry, group theory and combinatorics.

Original language: English
Journal: Journal of the European Mathematical Society
Status: Accepted/In press - 30 Dec 2020
http://mathhelpforum.com/advanced-math-topics/165214-pulse-response-phase-function-image-processing-wavelets-print.html
# Pulse response and Phase function (image processing-Wavelets)?

• December 3rd 2010, 05:17 PM nasil122002

Hello, how can I calculate the Fourier transform of the impulse response, $\hat{h}(\omega)$, and the phase function $\psi(\omega)$ for the impulse response $h = (h_k)$ defined by $h_0 = 1/2$, $h_1 = 1$, $h_2 = 1/2$, and $h_k = 0$ otherwise? Thanks for your help.

• December 3rd 2010, 11:09 PM chisigma

The [discrete] 3-point Fourier transform of $h(n)$ is given by...

$\displaystyle H(k)= \sum_{n=0}^{2} h(n)\ e^{- j 2 \pi \frac{k n}{3}}$ , $k=0,1,2$ (1)

Kind regards $\chi$ $\sigma$

• December 4th 2010, 01:59 AM nasil122002

Thanks for the answer; how can I now calculate the pulse response $\hat{h}(\omega)$ and $\psi(\omega)$ from this formula? I really need a complete solution.

• December 6th 2010, 02:14 AM chisigma

The Z-transform of the finite 3-point sequence $h(n)$ is by definition...

$\displaystyle H(z)= \sum_{n=0}^{2} h(n)\ z^{-n}$ (1)

The complex transfer function [FIR filter...] is easily obtained from (1) by setting $z=e^{j \omega}$. A more complex alternative is the derivation of $H(z)$ from the $H(k)$ I defined in my previous post, using the inverse discrete Fourier transform...

$\displaystyle H(z)= \frac{1}{3}\ \sum_{k=0}^{2} H(k)\ \frac{1-z^{-3}}{1- e^{-j 2 \pi \frac{k}{3}}\ z^{-1}}$ (2)

... and again you can find the complex transfer function by setting $z=e^{j \omega}$ in (2)...

Kind regards $\chi$ $\sigma$
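For this particular $h$ the transfer function has a simple closed form, which a few lines of Python can verify (my own sketch, not from the forum). Using $H(e^{j\omega})=\sum_n h(n)\,e^{-j\omega n}$ with the symmetric taps $h = (1/2, 1, 1/2)$, the response factors as $e^{-j\omega}(1+\cos\omega)$: magnitude $1+\cos\omega$ and linear phase $\psi(\omega) = -\omega$.

```python
import cmath
import math

h = [0.5, 1.0, 0.5]   # impulse response from the thread

def H(w):
    """Frequency response H(e^{jw}) = sum_n h[n] e^{-jwn}."""
    return sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))

# Symmetric taps => H(e^{jw}) = e^{-jw} (1 + cos w):
# magnitude 1 + cos(w), linear phase psi(w) = -w.
for w in (0.3, 1.0, 2.0):
    z = H(w)
    assert abs(abs(z) - (1 + math.cos(w))) < 1e-12
    assert abs(cmath.phase(z) + w) < 1e-12

# chisigma's 3-point DFT values are samples of this response at w = 2*pi*k/3.
Hk = [H(2 * math.pi * k / 3) for k in range(3)]
```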
https://datascience.stackexchange.com/questions/24543/convolution-neural-network-loss-and-performance
# Convolution Neural Network Loss and performance

I have a set of about ~100,000 training examples. The ratio of positive to negative examples is roughly 1:2. The true ratio is more like 1:100, so this represents a major downsampling of the negative class. It is also a very noisy dataset - the examples were automatically generated through distant supervision. Each example represents a set of sentences and has 700 columns. The number of rows may vary from 10 to 100 (maybe even more). I used a Convolutional Neural Network in TensorFlow to train my model (model architecture similar to the one described here) with only 2 epochs and stored the loss, f-score, precision and recall every 10 steps. I evaluated the model on a validation set (which too was generated automatically through distant supervision with negative-class downsampling, resulting in a pos:neg ratio of ~1:2) every 100 steps. Here are the hyperparameters:

• batch size: 60 for train, 100 for validation
• epochs: 2
• convolution filter sizes: 700x5, 700x6, 700x7, 700x8, 700x9, 700x10
• number of convolution filters per filter size: 125 (so a total of 750 filters)
• dropout: 0.5
• l2reg: 0.001
• lr: 0.001

I'm seeing some strange behavior with the model and I don't understand why. My training precision, recall and f-score go over 0.95 in about 100 steps (6,000 examples) and then plateau. The loss falls from 0.8 to 0.2 in about 200 steps and then fluctuates between 0.1 and 0.4. On the validation set my precision, recall and f-score are over 0.95 starting from the first time I evaluate it on the 100th step. The loss falls slightly from 0.3 to 0.2. When I evaluated on a real-world test set (without downsampling the negative class, so it has the true ratio of pos:neg), the actual precision and recall were 0.37 and 0.85. My results are not making any sense to me. I use TensorFlow metrics for calculating training precision, recall and f-score and scikit-learn metrics for calculating validation precision, recall and f-score.
I can't find anything wrong in the code but I don't understand why I should have such results unless there is a bug. I would have understood having low precision and recall all through - the class imbalance favors the negative class and my set is noisy. However, I am very confused about why I'm having such misleadingly high scores all through. Given that my dev dataset is also noisy and generated in the same manner as the train set, the dev results might just be useless and it is possible that the model is overfitting the noisy set. But I still don't understand why the scores are so high so soon. Also, if overfitting is the issue, do you think I should make the dropout even higher? I've attached a screenshot of the graphs and would really appreciate your thoughts on this. Blue is train and red is dev. Thanks a lot!

• Couple of questions: What is the positive/negative balance in your cv set? When used in the "real world" set (I would call that the test set in your scenario) are you re-balancing the data to match how you trained the classifier, or are these the scores now made against the "true ratio" (i.e. "more like 1:100")? Nov 9 '17 at 21:54
• I edited the question to address your comment. The cv set has the same positive:negative balance as the train set and was generated through distant supervision as well. The test set has the true ratio of pos:neg. Thanks! – ltt Nov 9 '17 at 22:15
• "number of filters: 125" does this mean your convnet is 125 layers deep? Nov 9 '17 at 23:03
• I edited the question. Model architecture is similar to the one described in wildml.com/2015/12/…. Filters refer to the convolution filters. Thanks! – ltt Nov 10 '17 at 0:26

### Precision

Your step change in precision looks to be almost entirely explained by the change in positive class frequency. It is reasonable to expect the proportion of false positives to increase when increasing the proportion of negative examples.
Even if you assume your cv results were perfect, then you would see some increase. As an example, assume you have cv results representative of test results - which means the same distribution before random under-sampling, and no over-fit to the cv set. Say you measured precision at 0.97 with a t:f ratio of 1:2, and for the sake of simplicity that this was due to the following confusion table:

              Predicted T   Predicted F
    Real T             97             3
    Real F              3           197

What precision should you expect when going to the real distribution? That is the same as multiplying the bottom row of the confusion table by 50. Precision is $\frac{TP}{TP+FP}$, so your expected precision would be $\frac{97}{97+150} \approx 0.39$.

### Recall

The same effect does not impact recall, because it is about the ratio between true positives and false negatives. So when you change the ratio of positive to negative classes, in theory recall should be unaffected. In your case, recall has been affected, but a lot less than precision. That is promising. A drop from 0.95 to 0.85 between cv and test is not great perhaps, but it doesn't point to a really major problem, just room for improvement. There are a few possible causes. The ones that I can think of are:

• Your test set might be too small, so estimates of precision and recall have large error. So in fact there is no problem...
• Test distribution might be different to the train and cv sets.
• The train/CV set split might allow some data leakage (e.g. they share some common features such as data about the same person, and should be split by that common feature). In which case CV estimates could be too high.
• Your mechanism for under-sampling the negative class may be biased.

### What to do?

First of all, these results are unlikely to be anything directly to do with faults in the model, and are not informed much by the training curves. They are also not that bad out of context (i.e.
they are much better than simply guessing which items are in the positive class) - the question is more whether you could improve on them, and what the costs are to you for the different types of error. It might be worth actually assigning real-world comparable costs to each type of error, to help decide whether your model is successful/useful and to pick the best model later on.

One thing from the training curves is that your cv and training loss look pretty close. It implies you are not over-fitting to the training data (or you should check for a train/cv data leak). You may have room to add more parameters and improve the model in general. It is possible you could make the model even better with different hyper-parameter choices, feature engineering etc., and that would improve the scores. There is no general advice for that though, it depends on what you have available.

It might be worth experimenting with training on the unbalanced training set (take the raw data without undersampling) and instead weighting the loss function, so that costs are larger for inaccurate classification of the positive class. This is not guaranteed to fix your problem, but will increase the amount of data you use for training. Otherwise, you should investigate whether any of the possible causes listed above is likely and try to apply fixes.

Finally, in this situation, it is not unheard of to have a four-way data split:

• A ratio-adjusted set, split two ways:
  • Training data
  • CV or "Dev" set A
• A set with the same distribution as production, split two ways:
  • CV or "Dev" set B
  • Test set

CV set A is used to perform early stopping and low-level model selection. CV set B is used to perform high-level model selection against the production metric. The test set is used to assess the chosen "best" model without bias.

• Thank you so much for your detailed response! This makes a lot of sense. I'll follow through with your suggestions. They are very, very helpful.
I couldn't upvote the answer since my reputation isn't high enough here but will return and upvote when I can – ltt Nov 10 '17 at 23:45
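The precision shift worked through in the answer can be reproduced numerically (the function name below is mine, for illustration):

```python
def rebalanced_precision(tp, fp, neg_scale):
    """Precision after scaling the negative-class row of a confusion
    table by neg_scale (e.g. 50 takes a 1:2 set to the true 1:100)."""
    return tp / (tp + fp * neg_scale)

# The answer's example: cv precision 0.97 at 1:2 drops to ~0.39 at 1:100,
# since the 3 false positives become an expected 150.
p = rebalanced_precision(tp=97, fp=3, neg_scale=50)
```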
https://math.libretexts.org/Bookshelves/Calculus/Book%3A_Calculus_(OpenStax)/01%3A_Functions_and_Graphs/1R%3A_Chapter_1_Review_Exercises
# 1R: Chapter 1 Review Exercises

True or False? Justify your answer with a proof or a counterexample. 1) A function is always one-to-one. 2) $$f∘g=g∘f$$, assuming $$f$$ and $$g$$ are functions. False 3) A relation that passes the horizontal and vertical line tests is a one-to-one function. 4) A relation passing the horizontal line test is a function. False State the domain and range of the given functions: $$f=x^2+2x−3$$, $$g=\ln(x−5)$$, $$h=\dfrac{1}{x+4}$$ 5) h 6) g Domain: $$x>5$$, Range: all real numbers 7) $$h∘f$$ 8) $$g∘f$$ Domain: $$x>2$$ or $$x<−4$$, Range: all real numbers Find the degree, $$y$$-intercept, and zeros for the following polynomial functions. 9) $$f(x)=2x^2+9x−5$$ 10) $$f(x)=x^3+2x^2−2x$$ Degree of 3, $$y$$-intercept: $$(0,0),$$  Zeros: $$0, \,\sqrt{3}−1,\, −1−\sqrt{3}$$ Simplify the following trigonometric expressions. 11) $$\dfrac{\tan^2x}{\sec^2x}+{\cos^2x}$$ 12) $$\cos^2x-\sin^2x$$ $$\cos(2x)$$ Solve the following trigonometric equations on the interval $$θ=[−2π,2π]$$ exactly. 13) $$6\cos 2x−3=0$$ 14) $$\sec^2x−2\sec x+1=0$$ $$0,±2π$$ Solve the following logarithmic equations. 15) $$5^x=16$$ 16) $$\log_2(x+4)=3$$ $$4$$ Are the following functions one-to-one over their domain of existence? Does the function have an inverse? If so, find the inverse $$f^{−1}(x)$$ of the function. Justify your answer. 17) $$f(x)=x^2+2x+1$$ 18) $$f(x)=\dfrac{1}{x}$$ One-to-one; yes, the function has an inverse; inverse: $$f^{−1}(x)=\dfrac{1}{x}$$ For the following problems, determine the largest domain on which the function is one-to-one and find the inverse on that domain. 19) $$f(x)=\sqrt{9−x}$$ 20) $$f(x)=x^2+3x+4$$ $$x≥−\frac{3}{2},\quad f^{−1}(x)=−\frac{3}{2}+\frac{1}{2}\sqrt{4x−7}$$ 21) A car is racing along a circular track with diameter of 1 mi. A trainer standing in the center of the circle marks his progress every 5 sec. After 5 sec, the trainer has to turn 55° to keep up with the car. How fast is the car traveling?
For the following problems, consider a restaurant owner who wants to sell T-shirts advertising his brand. He recalls that there is a fixed cost and variable cost, although he does not remember the values. He does know that the T-shirt printing company charges $440 for 20 shirts and $1000 for 100 shirts.

22) a. Find the equation $$C=f(x)$$ that describes the total cost as a function of number of shirts and b. determine how many shirts he must sell to break even if he sells the shirts for $10 each. Answer: a. $$C(x)=300+7x$$ b. $$100$$ shirts

23) a. Find the inverse function $$x=f^{−1}(C)$$ and describe the meaning of this function. b. Determine how many shirts the owner can buy if he has $8000 to spend.

For the following problems, consider the population of Ocean City, New Jersey, which is cyclical by season.

24) The population can be modeled by $$P(t)=82.5−67.5\cos[(π/6)t]$$, where $$t$$ is time in months ($$t=0$$ represents January 1) and $$P$$ is population (in thousands). During a year, in what intervals is the population less than 20,000? During what intervals is the population more than 140,000? The population is less than 20,000 from December 8 through January 23 and more than 140,000 from May 29 through August 2

25) In reality, the overall population is most likely increasing or decreasing throughout each year. Let's reformulate the model as $$P(t)=82.5−67.5\cos[(π/6)t]+t$$, where $$t$$ is time in months ($$t=0$$ represents January 1) and $$P$$ is population (in thousands). When is the first time the population reaches 200,000?

For the following problems, consider radioactive dating. A human skeleton is found in an archeological dig. Carbon dating is implemented to determine how old the skeleton is by using the equation $$y=e^{rt}$$, where $$y$$ is the percentage of radiocarbon still present in the material, $$t$$ is the number of years passed, and $$r=−0.0001210$$ is the decay rate of radiocarbon.
26) If the skeleton is expected to be 2000 years old, what percentage of radiocarbon should be present? 78.51% 27) Find the inverse of the carbon-dating equation. What does it mean? If there is 25% radiocarbon, how old is the skeleton? ## Contributors 1R: Chapter 1 Review Exercises is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by OpenStax via source content that was edited to conform to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
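As a numerical companion to problems 26 and 27 (a sketch using the decay rate given in the exercise, not part of the printed answer key):

```python
import math

r = -0.0001210                  # decay rate from the exercise

def age(y):
    """Inverse of y = e^{r t}: the age, in years, given the
    fraction y of radiocarbon remaining."""
    return math.log(y) / r

# Problem 26: fraction remaining after 2000 years (~0.7851, i.e. 78.51%).
frac_2000 = math.exp(r * 2000)

# Problem 27: age when 25% of the radiocarbon remains (~11,500 years).
t_25 = age(0.25)
```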
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-15-differentiation-in-several-variables-15-3-partial-derivatives-exercises-page-781/33
## Calculus (3rd Edition) $$U_r =(\frac{-1}{r^2}-\frac{t}{r})e^{-rt},\quad U_t= -e^{-rt}.$$ Recall the product rule: $(uv)'=u'v+uv'$ Recall that $(e^x)'=e^x$ Since $U=e^{-rt}/r=r^{-1}e^{-rt}$, then by using the product rule, we have $$U_r=-r^{-2}e^{-rt}+r^{-1}e^{-rt}(-t)=(\frac{-1}{r^2}-\frac{t}{r})e^{-rt},\\ U_t= r^{-1}e^{-rt}(-r)=-e^{-rt}.$$
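A quick finite-difference check of these partial derivatives (my own sketch, not part of the printed solution): central differences on $U(r,t)=e^{-rt}/r$ should agree with the analytic formulas above.

```python
import math

def U(r, t):
    """U(r, t) = e^{-rt} / r."""
    return math.exp(-r * t) / r

def U_r(r, t):
    """Analytic partial with respect to r, from the solution."""
    return (-1 / r**2 - t / r) * math.exp(-r * t)

def U_t(r, t):
    """Analytic partial with respect to t, from the solution."""
    return -math.exp(-r * t)

# Central finite differences at an arbitrary point (r0, t0).
r0, t0, h = 1.3, 0.7, 1e-6
dUr = (U(r0 + h, t0) - U(r0 - h, t0)) / (2 * h)
dUt = (U(r0, t0 + h) - U(r0, t0 - h)) / (2 * h)
```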
https://pacviz.sriley.dev/introduction-to-the-package.html
# Chapter 1 Introduction to the package

Provides a broad-view perspective on data via linear mapping of data onto a radial coordinate system. The package contains functions to visualize the residual values of linear regression and Cartesian data in the defined radial scheme. See the 'pacviz' documentation page for more information: https://spencerriley.me/pacviz/book/. The primary functions that are enclosed in this package include: • pac.plot • pac.resid • pac.lsvm (In development) Some secondary functions include: • svm.partition • deg2rad • rad2deg • linMap ## 1.1 Installation Guide For the most up-to-date version of the package, install the package directly from GitHub. devtools::install_github("PharaohCola13/pacviz") For official releases, install the package through CRAN: install.packages('pacviz') ## 1.2 Package Dependencies R (>= 4.0.0) Packages: circlize, e1071, graphics, plotrix, stats, utils ## 1.3 Recommendations The discussions in this section will revolve around preferred color schemes and helpful character codes for UTF-8 symbols that can be used as units. ### 1.3.1 Color Scheme Since one of the two colors in the visualization is white, the other is a user input with the default being gold. The following colors are predefined in R, with the whole list available here, and are a good fit in terms of contrast and readability. • lightskyblue • lightsteelblue • darksalmon • palegreen • gray86 • plum ### 1.3.2 Characters • Angstrom: \uc5
https://www.encyclopediaofmath.org/index.php?title=Acyclic_group&oldid=35367
# Acyclic group

2010 Mathematics Subject Classification: Primary: 20J05 [MSN][ZBL]

A group having the same constant coefficient homology as the trivial group (cf. also Homology). This means that its classifying space is an acyclic space. In the literature the earliest examples are Higman's four-generator four-relator group [Hi] $$\langle x_0, x_1, x_2, x_3 : x_{i+1}x_ix_{i+1}^{-1} = x_i^2, i\in \mathbb{Z}/4\rangle$$ and others found in combinatorial group theory [BaGr], [BaDyHe], [BeMi]. Further examples arise in geometry ([Ep], [Ma], [Se], [SaVa], [GrSe]) or as automorphism groups of large objects ([HaMc]; for example, the group of all bijections of an infinite set). Algebraically closed groups are acyclic. Many proofs of acyclicity of infinitely generated groups rely on the property that all binate groups are acyclic [Be3] (cf. also Binate group). An important result in the plus-construction approach to the higher algebraic $K$-theory of rings and operator algebras is that the infinite general linear group of the cone of a ring is acyclic [Wa], [Be]. Topologically, the plus-construction of a topological space is completely determined by a certain perfect, locally free, and hence acyclic, group [BeCa]. Ubiquity results for acyclic groups include the following: Every perfect group is the homomorphic image of an acyclic group [He]. Every group is a normal subgroup of a normal subgroup of an acyclic group. This result has applications to algebraic topology [KaTh]. Every Abelian group is the centre of an acyclic group [BaDyHe], [Be2]. In contrast to the above are results indicating that acyclic groups have "few" normal subgroups.
Thus, the following acyclic groups admit no non-trivial finite-dimensional linear representations over any field: algebraically closed groups; Higman's group [Hi]; torsion-generated acyclic groups [Be4]; binate groups [AlBe]; the automorphisms groups of [HaMc], see [Be5], [Be6]. Moreover, many of the above groups are simple modulo the centre. #### References [AlBe] R.C. Alperin, A.J. Berrick, "Linear representations of binate groups" J. Pure Appl. Algebra, 94 (1994) pp. 17–23 MR1277521 Zbl 0813.20060 [BaDyHe] G. Baumslag, E. Dyer, A. Heller, "The topology of discrete groups" J. Pure Appl. Algebra, 16 (1980) pp. 1–47 MR0549702 Zbl 0419.20026 [BaGr] G. Baumslag, K.W. Gruenberg, "Some reflections on cohomological dimension and freeness" J. Algebra, 6 (1967) pp. 394–409 MR0232827 [Be] A.J. Berrick, "An approach to algebraic $K$-theory", Pitman (1982) MR649409 [Be2] A.J. Berrick, "Two functors from abelian groups to perfect groups" J. Pure Appl. Algebra, 44 (1987) pp. 35–43 MR0885094 [Be3] A.J. Berrick, "Universal groups, binate groups and acyclicity", Proc. 1987 Singapore Group Theory Conf., W. de Gruyter (1989) MR0981847 Zbl 0663.20053 [Be4] A.J. Berrick, "Remarks on the structure of acyclic groups" Bull. London Math. Soc., 22 (1990) pp. 227–232 MR1041135 Zbl 0749.20001 [Be5] A.J. Berrick, "Groups with no nontrivial linear representations" Bull. Austral. Math. Soc., 50 (1994) pp. 1–11 MR1285653 Zbl 0815.20026 [Be6] A.J. Berrick, "Corrigenda: Groups with no nontrivial linear representations" Bull. Austral. Math. Soc., 52 (1995) pp. 345–346 MR1348495 [BeCa] A.J. Berrick, C. Casacuberta, "A universal space for plus-constructions" Topology (to appear) MR1670384 Zbl 0933.55016 [BeMi] A.J. Berrick, C.F. Miller, III, "Strongly torsion generated groups" Proc. Cambridge Philos. Soc., 111 (1992) pp. 219–229 MR1142741 Zbl 0762.20017 [Ep] D.B.A. Epstein, "A group with zero homology" Proc. Cambridge Philos. Soc., 68 (1968) pp. 599–601 MR0229692 Zbl 0162.27502 Zbl 0157.30703 [GrSe] P.
Greenberg, V. Sergiescu, "An acyclic extension of the braid group" Comment. Math. Helv., 66 (1991) pp. 109–138 MR1090167 Zbl 0736.20020 [HaMc] P. de la Harpe, D. McDuff, "Acyclic groups of automorphisms" Comment. Math. Helv., 58 (1983) pp. 48–71 Zbl 0522.20034 [He] A. Heller, "On the homotopy theory of topogenic groups and groupoids" Ill. Math. J., 24 (1980) pp. 576–605 MR0586797 Zbl 0458.18006 [Hi] G. Higman, "A finitely generated infinite simple group" J. London Math. Soc., 26 (1951) pp. 61–64 MR0038348 Zbl 0042.02201 [KaTh] D.M. Kan, W.P. Thurston, "Every connected space has the homology of a " Topology, 15 (1976) pp. 253–258 MR0413089 Zbl 0355.55004 [Ma] J.N. Mather, "The vanishing of the homology of certain groups of homeomorphisms" Topology, 10 (1971) pp. 297–298 MR0288777 Zbl 0207.21903 [SaVa] P. Sankaran, K. Varadarajan, "Acyclicity of certain homeomorphism groups" Canad. J. Math., 42 (1990) pp. 80–94 MR1043512 Zbl 0711.57022 [Se] G.B. Segal, "Classifying spaces related to foliations" Topology, 17 (1978) pp. 367–382 MR0516216 Zbl 0398.57018 [Wa] J.B. Wagoner, "Developping classifying spaces in algebraic -theory" Topology, 11 (1972) pp. 349–370 How to Cite This Entry: Acyclic group. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Acyclic_group&oldid=35367 This article was adapted from an original article by A.J. Berrick (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
http://math.stackexchange.com/questions/67863/a-criterion-for-a-group-to-be-abelian/67907
# A criterion for a group to be abelian

I noted a discussion on groups being abelian under a certain restriction on powers of elements, e.g. http://tiny.cc/chs45. Maybe this result (probably not too well-known) concludes it all.

Let $m$ and $n$ be coprime natural numbers. Assume that $G$ is a group such that $m$-th powers commute and $n$-th powers commute (that is, for all $g, h \in G$: $g^mh^m=h^mg^m$ and $g^nh^n=h^ng^n$). Then $G$ is abelian.

- So what is your question? – Derek Holt Sep 27 '11 at 9:00
- @Derek: Let $G$ be a finite group. Assume that there are two coprime integers $m$ and $n$ such that for all $g, h \in G$ holds (1) $g^mh^m = h^mg^m$ and (2) $g^nh^n = h^ng^n$. How do you prove that $G$ is abelian? – Someone Sep 27 '11 at 14:06

$(m,n)=1\implies pn+qm=1$.

$(g^nh^m)^{np}=g^n(((h^mg^n)^p)^n(h^mg^n)^{-1})h^m=(h^mg^n)^{pn}g^n(h^mg^n)^{-1}h^m=(h^mg^n)^{pn}$.

$(g^nh^m)^{mq}=g^n((h^mg^n)^{mq}(h^mg^n)^{-1})h^m=g^n((h^mg^n)^{-1}(h^mg^n)^{mq})h^m=(h^mg^n)^{qm}$.

$(g^nh^m)^{np}(g^nh^m)^{mq}=(h^mg^n)^{np}(h^mg^n)^{qm}\implies g^nh^m=h^mg^n$.

$gh=(gh)^{pn+mq}=(hg)^{pn+mq}=hg$.

-

This is false. In any nonabelian group of exponent $m$, $m$th powers and $n$th powers commute for any $n$, in particular any $n$ coprime to $m$.

-

- Maybe I was not clear enough: I did not mean the $m$-powers commuting with the $n$-powers, but the $m$-powers among each other and the $n$-powers among each other. – Nicky Hekster Sep 27 '11 at 7:59

$G$ does not have to be finite. Let $M \subset G$ be the subgroup generated by all $m$-th powers and let $N \subset G$ be the subgroup generated by all $n$-th powers. These subgroups are clearly abelian normal subgroups. Since $m$ and $n$ are coprime, $G = MN$ and $M \cap N$ is contained in the center $Z(G)$ of $G$. To prove that $G$ is abelian it suffices to show that $M$ and $N$ commute, that is, $[M,N]=1$. Note that $[M,N] \subset (M \cap N)$. Let $a \in M$ and $b \in N$. Then $[a, b] = a^{-1}b^{-1}ab \in M \cap N$. Hence $[a, b] = z$ with $z \in Z(G)$.
Hence $b^{-1}ab = za$, whence $b^{-1}a^nb=z^na^n$. Since $a^n \in N$ it commutes with $b$, so $z^n=1$. Similarly $z^m=1$. Since $m$ and $n$ are relatively prime, we conclude $z=1$.

-

- Why do you tag the question finite-groups, if you don't assume your groups to be finite? – j.p. Sep 27 '11 at 19:40
- What question? As far as I can see, no question was asked. – Derek Holt Sep 27 '11 at 22:00
- I'm confused - were you looking for help, or just posting a result for all to see? The latter isn't really the intended use of this site. – Alon Amit Sep 27 '11 at 22:37

I am also confused with the question! Isn't the term "abelian" synonymous with "commutative"? Meaning that a group $G$ is "abelian" if $ab=ba$ for all $a,b \in G$.

-

- The hypothesis is not "$ab=ba$ for all $a,b\in G$." Only special cases of elements commuting are assumed. – anon Sep 27 '11 at 11:31

Assuming that $G$ is finite (looking at the tags), you can prove that for any fixed prime $p$ all $p$-elements commute ($p$ does not divide $m$ or $n$). Then conclude that the $p$-Sylow subgroups are normal and abelian for all $p$.

-
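As a brute-force illustration of the hypothesis (an added sketch, not part of the original thread): in the nonabelian group $S_3$, squares all commute with one another (they lie in the cyclic subgroup $A_3$), while cubes do not, since the cubes include the transpositions. So for $S_3$ no coprime pair $m, n$ can satisfy both conditions, consistent with the theorem.

```python
from itertools import permutations, product

# Permutations represented as tuples p, where p[i] is the image of i.
def compose(p, q):
    """(p ∘ q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def power(p, m):
    """m-th power of p by repeated composition."""
    r = tuple(range(len(p)))  # identity
    for _ in range(m):
        r = compose(r, p)
    return r

def mth_powers_commute(G, m):
    """Check whether all m-th powers of elements of G commute pairwise."""
    P = [power(g, m) for g in G]
    return all(compose(a, b) == compose(b, a) for a, b in product(P, P))

S3 = list(permutations(range(3)))
print(mth_powers_commute(S3, 2))  # True: the squares all lie in the cyclic A3
print(mth_powers_commute(S3, 3))  # False: the cubes include non-commuting transpositions
```

So $m = 2$ alone is not enough to force commutativity; it is the second, coprime exponent that does the work.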
http://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-4-section-4-1-introduction-to-fractions-and-mixed-numbers-practice-page-215/11
## Prealgebra (7th Edition)

One big triangle is made of 4 small triangles, so the denominator is 4. There are 5 small triangles shaded. The improper fraction is $\frac{5}{4}$; the mixed number is $1\frac{1}{4}$.
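As an added aside (not part of the textbook solution), the same improper-to-mixed conversion can be sketched in Python with `divmod`:

```python
from fractions import Fraction

def to_mixed(improper):
    """Split an improper fraction into a whole part and a proper fractional part."""
    whole, remainder = divmod(improper.numerator, improper.denominator)
    return whole, Fraction(remainder, improper.denominator)

# Five shaded quarters: 5/4 = 1 and 1/4
print(to_mixed(Fraction(5, 4)))  # (1, Fraction(1, 4))
```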
https://solvedlib.com/if-3035x104-j-of-heat-are-added-to-264-g-of,240203
# If 3.035×10⁴ J of heat are added to 264 g of mashed potatoes at 32°C. What...

###### Question:

If 3.035×10⁴ J of heat are added to 264 g of mashed potatoes at 32°C, what is the final temperature of the potatoes if the specific heat of mashed potatoes is 3.35 J/(g·°C)?
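A short sketch of the calorimetry arithmetic behind this question (an illustrative solution, assuming all the heat goes into the potatoes), using $Q = mc\,\Delta T$:

```python
Q = 3.035e4       # heat added (J)
m = 264.0         # mass of potatoes (g)
c = 3.35          # specific heat (J/(g*degC))
T_initial = 32.0  # degC

# Q = m * c * (T_final - T_initial)  =>  T_final = T_initial + Q / (m * c)
T_final = T_initial + Q / (m * c)
print(round(T_final, 1))  # 66.3
```

So the potatoes end up at roughly 66.3 °C.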
https://byjus.com/free-ias-prep/upsc-exam-comprehensive-news-analysis-feb01-2019/
# UPSC Exam Comprehensive News Analysis Feb 01, 2019

A. GS1 Related — GEOGRAPHY
B. GS2 Related — POLITY AND GOVERNANCE
C. GS3 Related — ECONOMICS
D. GS4 Related
E. Editorials — INTERNAL SECURITY (DEFENCE PREPAREDNESS), SOCIAL JUSTICE, ETHICS AND INTEGRITY
F. Tidbits
G. Prelims Facts
H. UPSC Prelims Practice Questions
I. UPSC Mains Practice Questions

A. GS1 Related

1. Polar Vortex

Context

• Many parts of the northern United States are experiencing record cold temperatures this week, attributed to the polar vortex.
• The polar vortex has broken into 'two swirling blobs of cold air', bringing the most frigid conditions in decades to the midwest.

What is the polar vortex?

• As its name suggests, the polar vortex is found around the north pole. It's a band of strong winds, high up in the atmosphere, that keeps bitterly cold air locked around the Arctic region. This circulation isn't considered a single storm, or even a weather pattern as such.
• Occasionally, the vortex can become distorted and meander far further south than normal. The phenomenon became widely known to Americans during a particularly frigid spell in 2014, when the media first started using the term "polar vortex". It was also a factor in the "bomb cyclone" that battered the US east coast last year.
• Studies have pointed to a recent increase in instances where the polar vortex has bulged down into heavily populated areas. Scientists are gaining a better understanding of why this is happening, with many identifying climate change as an influence.
• There's some evidence that the jet stream, a meandering air current that flows over North America and Europe, is slowing and becoming wavier as the planet warms. The jet stream interacts with the polar vortex, helping bring numbing temperatures further south.
• Scientists also point to a complex sequence of events involving sea ice, which is rapidly diminishing in the Arctic. As the ice retreats, summertime heat is absorbed by the dark ocean that lies underneath.
This heat is released into the atmosphere during winter, spurring winds that can disrupt the polar vortex.

B. GS2 Related

1. SC to hear Sabarimala pleas on Feb. 6

Context

• A five-judge Constitution Bench, led by Chief Justice of India Ranjan Gogoi, will hear the Sabarimala review petitions from February 6.

Review Petition

• A binding decision of the Supreme Court/High Court can be reviewed in a review petition.
• The parties aggrieved by any order of the Supreme Court can file a review petition on any apparent error.
• Article 137 of the Constitution provides that, subject to provisions of any law and rules made under Article 145, the Supreme Court of India has the power to review any judgement pronounced (or order made) by it.
• Under the Supreme Court Rules, 1966, such a petition needs to be filed within 30 days from the date of the judgement or order.
• It is also recommended that the petition be circulated, without oral arguments, to the same bench of judges that delivered the judgement (or order) sought to be reviewed.
• Furthermore, even after dismissal of a review petition, the SC may consider a curative petition in order to prevent abuse of its process and to cure gross miscarriage of justice.

Curative petition

• The concept of the curative petition was evolved by the Supreme Court of India in the matter of Rupa Ashok Hurra vs. Ashok Hurra and Anr (2002), where the question was whether an aggrieved person is entitled to any relief against the final judgement/order of the Supreme Court after dismissal of a review petition.
• The Supreme Court in the said case held that, in order to prevent abuse of its process and to cure gross miscarriage of justice, it may reconsider its judgements in exercise of its inherent powers.
• For this purpose the Court has devised what has been termed a "curative" petition.
In the curative petition, the petitioner is required to aver specifically that the grounds mentioned therein had been taken in the review petition filed earlier and that it was dismissed by circulation.

• This has to be certified by a senior advocate. The curative petition is then circulated to the three senior-most judges and the judges who delivered the impugned judgement, if available. No time limit is given for filing a curative petition.

Details of the issue

• Justice Malhotra is part of the Constitution Bench hearing the Sabarimala petitions against the September 28 judgment of the Supreme Court revoking the bar on women of menstrual age from entering the Sabarimala temple for worship.
• The Sabarimala review hearing, which was earlier scheduled for January 22, had to be postponed as Justice Malhotra was on medical leave. Responding to an oral mention by advocate Mathew Nedumpara to fix a date of hearing for the case, Justice Gogoi had conveyed the court's decision to wait for Justice Malhotra's return from leave.
• Justice Malhotra had delivered the lone dissent in the majority judgment of the five-judge Constitution Bench on September 28.
• Justice Malhotra, in her dissent, had declared the prohibition on women aged between 10 and 50 years to be an "essential practice". The judge had held that imposing the court's morality on religion would negate the freedom to practise religion according to one's faith and beliefs.
• Justice Malhotra's judgment has since then become a rallying point for the review petitioners.
• Review petitions were filed by a range of persons, from the Sabarimala temple's chief priest to individuals and Ayyappa organisations, including women devotees' groups. They urged the court that 'reform' does not mean rendering a religious practice out of existence on the basis of a PIL petition filed by "third parties" who do not believe in the Sabarimala deity.
• The review petitioners have argued that the right to move the Supreme Court for violation of fundamental rights must be reserved for those whose personal rights to worship have been violated.

C. GS3 Related

1. Centre firm on FDI rules deadline

Context

• The government has recently said it would not be extending the deadline (February 1, 2019) for implementation of the new rules governing FDI in e-commerce.
• Large e-commerce firms such as Amazon and Flipkart have repeatedly approached the Centre seeking either dilution of the rules or extension of the deadline.

New rules for e-commerce

• Vendors that have any stake owned by an e-commerce company cannot sell their products on that e-commerce company's portal.
• A vendor who purchases 25% or more of its inventory from an e-commerce group company will be considered to be controlled by that e-commerce company, and thereby barred from selling on its portal.
• This provision aims to ensure that vendors in which marketplaces, such as Amazon, have a stake do not sell the bulk of their items to a third-party vendor who then goes on to sell those items on the e-commerce marketplace.
• In other words, the provision seeks to deny control by the marketplace entity over vendors. The third major provision says the e-commerce firm will not be allowed to influence the price of a product sold on its portal by giving incentives to particular vendors.

Some basics to understand the new rules

• E-commerce companies can operate under two different models in India.
• The first is the marketplace model, where the e-commerce firm simply acts as a platform that connects buyers and sellers. FDI is allowed in e-commerce companies in this model.
• The second model is inventory-based, where the inventory of goods sold on the portal is owned or controlled by the e-commerce company. FDI is not allowed under this model.
• What has been happening is that large e-commerce companies such as Amazon and Flipkart, while not owning inventory themselves, have been providing a platform for their group companies such as CloudTail and WS Retail respectively.
• Some see this as skewing the playing field, especially if these vendors enjoyed special incentives from the e-commerce firm over others. These controlled or owned vendors may then be able to offer discounts to customers that competitors may not be able to match.

2. Govt. revises up GDP growth to 7.2%

Context

• The government had recently revised its GDP growth forecast for 2017-18 to 7.2% from the earlier estimate of 6.7%. It revised the actual growth rate in 2016-17 to 8.2% from 7.1%.

GDP Calculation – Recent changes

• In 2015, the government adopted a new method for the calculation of the gross domestic product of the country, and also adopted the Gross Value Added measure to better estimate economic activity.
• Further, the change involved bringing forward the base year used for calculations to 2011-12 from the previous 2004-05.
• According to the Ministry of Statistics and Programme Implementation, the method for preparing the back series is largely the same as what is used to calculate the data using the new base, which is how all national accounts calculations will be made going forward.
• While doing the exercise, the government adopted the recommendations of the United Nations System of National Accounts, which included measuring the GVA, Net Value Added (NVA), and the use of new data sources wherever available. One of these data sources is the Ministry of Corporate Affairs MCA-21 database, which became available from 2011-12.
• One problem encountered was in finding data for the older series that matched what the MCA-21 database provided.
• The key difference between the two was that the old method measured volumes — actual physical output in the manufacturing sector, crop production, and employment for the services sector. The MCA-21 database allows for a more granular approach, looking at the balance sheet data of each company and aggregating the performance of the sector from that, after adjusting for inflation.
• For most sectors, simply changing the price vectors from a 2004-05 to a 2011-12 base was enough, but others required splicing of new and old data in the relevant proportions to arrive at the closest approximation.
• The new method is also statistically more robust as it tries to relate the estimates to more indicators such as consumption, employment, and the performance of enterprises, and also incorporates factors that are more responsive to current changes, unlike the old series, which usually took 2-3 years to register an underlying change.

3. Only an interim Budget, says PM

Context

• Prime Minister Narendra Modi had assured the Opposition that his government would be presenting an interim budget and not a full one.

Interim Budget and Vote on Account

• An Interim Budget is not the same as a 'Vote on Account'. While a 'Vote on Account' deals only with the expenditure side of the government's budget, an Interim Budget is a complete set of accounts, including both expenditure and receipts. An Interim Budget gives the complete financial statement, very similar to a full Budget.
• A full Budget deals with both the expenditure and revenue sides, but a Vote-on-Account deals only with the expenditure side of the government's budget.
• The Vote-on-Account is normally valid for two months, but a full Budget is valid for 12 months (a financial year).
• As a convention, a Vote-on-Account is treated as a formal matter and passed by the Lok Sabha without discussion. But passing of the Budget happens only after discussions and voting on demands for grants.

D. GS4 Related

Nothing here today!!!

E. Editorials

Editorial Analysis:

• In late 2018, the government decided to set up three new agencies — the Defence Cyber Agency, the Defence Space Agency and the Special Operations Division — in order to address the new-age challenges to national security.
• While this is indeed a useful step in the right direction, it is also important to note that the constitution of these agencies is a far cry from the crucial recommendations given by the Naresh Chandra Task Force and the Chiefs of Staff Committee, both of which had suggested the formation of three separate joint commands to deal with new challenges to India's national security in the cyber, space and special operations domains.
• Critics have opined that this rather lacklustre response to major 'futuristic' challenges to India's national security raises a larger question: is India adequately prepared for the new-age wars in general, or is it still preparing for the last war it fought, and won?

High-tech innovations

• There is a revolution in military affairs that seems to have attracted the attention of strategic analysts and policy planners across the world.
• The current focus in military thinking across the world is increasingly moving away from traditional heavy-duty military hardware to high-tech innovations such as artificial intelligence (AI), big data analytics, satellite jammers, hypersonic strike technology, advanced cyber capabilities and spectrum denial, and high-energy lasers.
• In the light of the unprecedented capabilities that these systems offer, there is also an increased focus on developing suitable command and control as well as doctrinal concepts to accommodate and calibrate them.
• The arrival of these technologies might deeply frustrate strategic stability as we know it, given their disruptive nature.
• Strategic stability in the contemporary international system, especially among the nuclear weapon states, depends on several age-old certainties, the most important being the issue of survivability of a state’s nuclear arsenal and its ability to carry out a second strike after a first attack. • Once accuracies get better, hypersonic glide vehicles replace conventional delivery systems, real-time tracking and surveillance make major strides, and AI-enabled systems take over, survivability of the nuclear arsenal, which lies at the heart of great power stability, could take a severe beating. There was, for instance, an assumption that the naval leg of a nuclear triad is the most survivable part since it is hidden away in the depths of the ocean away from the adversary’s gaze. • However, the potential ability of deep-sea drones to detect ballistic-missile-armed nuclear submarines, or SSBNs, may make this assurance a thing of the past, thereby frustrating traditional calculations. • Further, it is important that we add the arrival of these new technologies to the emerging strategic competition among great powers. • For instance, experts have opined that the U.S.’s withdrawal from the Intermediate-Range Nuclear Forces treaty is perhaps an indication of a potential arms race in the offing. • As a matter of fact, in a January 2018 article, the Economist put it succinctly: “Disruptive new technologies, worsening relations between Russia and America and a less cautious Russian leadership than in the cold war have raised fears that a new era of strategic instability may be approaching.” Fears of a Possible Conflict: • There is an inherent paradox vis-à-vis high technology-enabled military systems. While on the one hand, it is imperative for states to redesign their systems in the light of these new technologies, especially the digital and cyber components, this also makes the cyber- and digital-enabled systems vulnerable to covert cyberattacks.
• More so, given that such surreptitious attacks might take place in the early stages of a conflict, the ensuing confusion and panic might lead to uncontrolled escalation with little time for assessment and judgement. • The biggest fear about these technologies, the implications of which we don’t fully understand yet, is their potential to increase the risks of intentional and inadvertent nuclear use. Such scenarios may be unlikely, but they are not impossible. • Here’s what the Economist had to say on precisely such a scenario: “Both China and Russia fear that new American long-range non-nuclear strike capabilities could be used to deliver a disarming attack on a substantial part of their strategic forces or decapitate their nuclear command and control. Although they would still launch their surviving nuclear missiles, improved missile-defence systems of the U.S. would mop up most of the remainder before their warheads could do any damage.” • The fear of a bolt-from-the-blue attack against one’s command and control systems or a disabling strike against strategic arsenal using new technological solutions is likely to dominate the strategic mindspace of great powers in the days ahead, thereby further deepening mistrust and creating instability. • Therefore, the possibility of emerging military technologies prompting inadvertent escalation and conflict cannot and should not be ruled out. Chinese capabilities: • China has emerged as a key actor in the field of emerging military technologies. This is something that will concern New Delhi in the days ahead. Some analysts believe that Beijing is in the lead position in emerging technologies with potential military applications such as quantum computing, 3D printing, hypersonic missiles and AI. • If indeed Beijing continues to develop hypersonic systems, for instance, it could potentially strike a range of targets in the U.S. • While the Chinese focus is evidently on U.S.
capabilities, which China interprets as a potential threat, this is not without latent concerns for New Delhi. India might, in turn, consider developing some of these technologies, which will create dilemmas for Islamabad. The cascading strategic competition then looks unavoidable at this point, and that is worrisome. And yet, it might be difficult to avoid some of these developments given their dual use. • However, there is a need to ask how survivable India’s naval platforms are given the feverish development of advanced sensor capabilities in the neighbourhood. • Questions arise: Is it sufficiently prepared to face the new age wars? Has the urgency associated with these technological developments dawned on the security planners in New Delhi? • It is in this context that we must revisit the government’s decision to set up the agencies to address cyber and space challenges. • Clearly, it is timely that the government has finally decided to set them up, though they are not yet in place. It is unfortunate that, unlike what was envisioned earlier, these agencies will be reduced in their powers and their standing in the pecking order of defence planning in the country. • Moreover, reports indicate that the Space Command will be headed by the Air Force, the Army will head the Special Operations Command, and the Navy will be given the responsibility of the Cyber Command. If indeed that happens, their effectiveness in terms of tri-service synergy will be much less than anticipated. • Even more so, given that the higher defence decision-making in the country is still civil services-dominated, despite the recent attempts to correct it, the effectiveness of these agencies will remain weak. 1. Is there a case for reservation for the forward classes? Context: This is an analysis-based article which presents different perspectives to the reader on whether or not there exists a case for reservation for the forward classes.
UPSC aspirants would benefit from these points as they can use much of this content to present answers on the topic in case it is asked in the UPSC Mains Examination, be it as a part of the General Studies- II paper or the Essay paper. Editorial Analysis: Analysis- I: The points mentioned here, take the view that social justice is not possible if we exclude the economically backward sections of our society. • Social justice is inclusive in nature. It means ensuring that no marker of backwardness is left untouched. Poverty is one such marker of backwardness, and a very strong one, which denies certain basic rights and equality in society to individuals affected by it. • The Preamble, which is the soul of the Constitution, promises to all citizens social, economic and political justice. • The economic status of citizens constitutes one of the three tests of backwardness. • Hence, the ends of social justice cannot be truly met if we exclude the economically backward sections of society from availing the fruits of development in an equal manner. A move to help the poor • Poverty denies equality of opportunity to individuals in education and employment. It denies them the opportunity of a decent and sustainable livelihood. • Reservation, by the prevalent logic, ensures participation of the disadvantaged sections in employment through positive discrimination. Hence, there was a strong case for making a provision for reservation for the economically backward in the general category in education and employment to ensure that they also get reasonable opportunities to advance in life. • The present provision of 10% reservation for the economically backward in the general category is being referred to as reservation for the ‘savarnas’, or upper castes. • However, reservation under this category is not limited to upper caste Hindus; it is available to the poor in all general categories, who were not eligible for reservation under any other category hitherto. 
As for the upper caste Hindus, a significant proportion of the population live in the villages and in remote areas with limited economic opportunities. • They face disadvantages in the matter of getting access to education and employment. Hence, it was necessary to lend a helping hand to them as well. The test of constitutionality: • To those who point to the Supreme Court’s capping of reservation at 50% in the famous Indira Sawhney case, some experts mention that this ceiling is applicable only for reservation for the socially and educationally backward category, i.e. to the Scheduled Castes/Scheduled Tribes (SCs/STs) and the Other Backward Classes (OBC) categories under Articles 15(4) and 16(4) of the Constitution. • It does not apply to the present case of reservation, which has been provided as a special provision through a Constitution amendment. • Further, to those who mistake the provision of reservation under the Constitution to be applicable only to the SCs/STs and OBCs, some experts make the point that the present quota, introduced through the 124th Constitution Amendment Bill, is provided through adequate amendments in Articles 15 and 16 of the Constitution, which allow for making “special provision for the advancement of any economically weaker sections of the citizens”. Hence, it can stand the test of constitutionality in the Supreme Court. • Social justice is a dynamic concept which has evolved over time in accordance with the changing needs and circumstances of our society. The concept has not been defined in our Constitution. • It has rightly been left to the wisdom of the lawmakers to increase its ambit from time to time, according to the needs of the time. A quota for poor citizens was a crying need of our times. • Some experts are of the view that the present government at the Centre realised this and, under the true spirit of ‘Sabka Saath Sabka Vikas’, made the dream of 10% reservation a reality. 
For other political parties, this had been nothing more than an electoral gambit all along. Analysis- II: The points mentioned here, take the view that when you allow reservation for the advanced classes, it changes the meaning of reservation. • During the Lok Sabha debate on the 124th Constitution Amendment Bill, to provide reservation in jobs and education for the economically weaker sections in the general category, an opinion was expressed that 50% of the States have to approve it. But that is not the case. Under Article 368(2), Parliament can amend the Constitution by passing the Bill in each House by a majority of the total membership of that House present and voting. Thereafter, the President shall give his assent to the Bill and the Constitution will stand amended. • But amendments which seek to make a change in certain specific provisions, including Articles 54, 55, 73, Chapter IV of Part V, Chapter V of Part VI or Chapter I of Part XI, or any of the Lists in the Seventh Schedule, or the representation of States in Parliament, shall require to be ratified by the Legislatures of not less than one-half of the States. Providing the context • Article 15 guarantees the fundamental right of prohibition of discrimination on grounds of religion, race, caste, sex, or place of birth. Article 15(1) and (2) broadly state that the “State” shall not discriminate against “any citizen” on grounds only of religion, race, caste, sex, place of birth or any of them. Article 15(3) onwards, the Constitution lays down provisions relating to protective discrimination — the policy of granting special privileges to underprivileged sections. Articles 15(3) and 15(4) are the foundations for reservations in education and employment in the country. • Article 15(5) was introduced by the Constitution (93rd Amendment) Act, 2005. 
It is an enabling clause that empowers the State to make such provision for the advancement of SCs, STs and socially and educationally backward classes of citizens in relation to a specific subject, namely, admission to educational institutions including private educational institutions, whether aided or unaided by the state, notwithstanding the provisions of Article 19(1)(g). This was challenged in the court. In 2008, a five-judge Bench headed by the then Chief Justice of India, K.G. Balakrishnan, upheld the law providing 27% quota for OBCs in IITs, IIMs and other central educational institutions, but said it would not apply to the creamy layer. The Supreme Court upheld the validity of the Constitution (93rd Amendment) Act, 2005. It also held that the amendment does not violate the basic structure of the Constitution. • It is in this context that the reservation for the economically weaker sections is to be considered. A nine-judge Bench of the Supreme Court had ruled that reservation is a remedy for historical discrimination and its continuing ill-effects. The court had also said that reservation is not aimed at economic uplift or poverty alleviation. Economic weakness is on account of social backwardness. The economic criteria will lead, in effect, to the virtual deletion of Article 16(4) from the Constitution. Is this the new poverty line? • Since the new amendment talks of economic criteria and addresses the grievances of Brahmins, Baniyas, Patels, Marathas, Gujjars, Thakurs and even Muslims and Christians for the first time, many think it will be broad-based. It is the responsibility of the state to uplift the poor. • Traditionally marginalised sections need affirmative action. But the current policy says those households earning less than Rs. 8 lakh annually or owning less than 5 acres of land can avail of the quota. • That is a salary of Rs. 66,000 a month. If so, is this the new poverty line of India? And if so, why are those earning more than Rs. 
25,000 a month being taxed? • The moment you make reservation for the advanced classes, it changes the meaning of reservation altogether. Reservation is not an anti-poverty programme. Analysis- III: The points mentioned here, take the view that nothing stopped the government from providing jobs or scholarships to the poor. • Critics take the view that the 124th Constitution Amendment Bill, proposed and promulgated in just a few days, is a gross and wilful subversion of the principle of social justice, which the Supreme Court has held to be the part of the basic structure of the Constitution. • They further allege that it is hard to understand as to how the government, which has all the legal resources and counsel at its disposal, chose to characterise reservations mandated by the Constitution as a job guarantee or a poverty alleviation programme. • Reservations for students in public institutions of higher education and jobs in the public sector were envisioned to bring about adequate representation to those sections of society that are oppressed by caste discrimination. • Reservations along with legal protections against discrimination form the juridical structure for social upliftment of the backward classes of Indian society. Constitutionally invalid • The Constituent Assembly amended Article 15 by inserting Clause (4), which states: “Nothing in this article or in Clause (2) of Article 29 shall prevent the State from making any special provision for the advancement of any socially and educationally backward classes of citizens or for the Scheduled Castes and the Scheduled Tribes.” The use of income or economic criteria for providing reservation for those not included in the backward classes, or for those belonging to the general sections, is thus constitutionally invalid. 
• Critics point out that if the present government wished to benefit the poorer sections of those not included in the backward classes, Scheduled Castes and Tribes, there was nothing that stopped it from creating jobs along the lines of the Mahatma Gandhi National Rural Employment Guarantee Act, which also created rural infrastructure. Nothing stopped it from instituting new universities and colleges and providing need-based scholarships for poor students. • Granting 10% reservation in government jobs and education institutions to households in the general category with an income of less than Rs. 8 lakh per annum will make little difference to their poverty levels as corporate-led jobless growth has increased income inequality exponentially. Concluding Remarks: • In conclusion, critics take the view that the present government at the Centre chose not to increase the size of the pie, but to cut away another slice from the already shrinking pie of public sector institutions. • The promise of existing reservations is nowhere near to being fully realised. • Public spending for scholarships for students in the SC/ST/OBC categories (and minority students) has come to a near halt. • Critics further take the view that the move reverses the progress made in India over decades. It was perhaps put in place as the government was unable to provide any relief from the economic distress felt by small farmers, manufacturers, entrepreneurs, traders and the working class. In fact, this distress was worsened by the impact of the rash decision called demonetisation and the poor implementation of the Goods and Services Tax. 1. Not kosher (ICICI Bank: Conflict of Interest Issues) Editorial Analysis: • The inquiry by former Supreme Court judge Justice B.N. Srikrishna into the allegations against former ICICI Bank CEO Chanda Kochhar has taken eight long months to confirm what seems apparent – that she did not conduct herself as she should have in relation to conflict-of-interest issues. 
• Recently, the Central Bureau of Investigation (CBI) filed an FIR against Ms. Kochhar, her husband Deepak Kochhar, head of the Videocon group Venugopal Dhoot and ICICI Bank executives for sanction of credit facilities in violation of rules, that caused a loss of Rs. 1,730 crore to the bank. • The investigating agency has a long way to go before it establishes whether the loans were given in return for financial favours, a charge that is at the heart of booking them for criminal conspiracy, cheating and corruption. The crux of what happened: • Experts opine that clearly, Ms. Kochhar erred in not disclosing to the bank’s board her husband’s business connections with the Videocon group, which was a client of the bank. • Worse, she failed to display the correctness expected of her by sitting on committees that sanctioned credit facilities to Videocon when she ought to have recused herself. • As a matter of fact, just a day after a Rs. 300-crore loan was disbursed to Videocon International Electronics in 2009, Mr. Kochhar’s NuPower Renewables received Rs. 64 crore from the Videocon group. • Whether this was a quid pro quo for the loan, as the CBI suggests, needs to be proved. But there is no denying that it made for poor, even suspicious, optics. • The inquiry report holds her guilty of violation of the bank’s “code of conduct, its framework for dealing with conflict of interest and fiduciary duties, and in terms of applicable Indian laws, rules and regulations.” • The bank’s board has accepted the report and decided to treat her voluntary resignation from the bank in October 2018 as “termination for cause”, also deciding to claw back all bonuses paid to her since April 2009, hold back unpaid amounts and divest her of her stock option entitlements. Concluding Remarks: • These are strong penalties, but the question is: how did the board give her a clean chit as recently as March last year (2018)? • It had then reposed its full confidence and faith in Ms. 
Kochhar and commended her and the management team for their “hard work and dedication”. • It is impossible to believe the board was not aware of the allegations against the CEO given that a whistle-blower had made them public in October 2016. • Questions arise: Was the board then influenced by Ms. Kochhar into giving her a good conduct certificate? These are uncomfortable questions that raise doubts over the standards of corporate governance at one of India’s largest banks. • In conclusion, the ICICI Bank episode is only one among several instances of governance lapses in corporate India in recent times. • Clearly, regulators need to up their game. F. Tidbits 1. Celestial billboards spark debate on who owns the sky • StartRocket is a Russian start-up aiming to put billboards in space; the firm plans to turn hundreds of tiny satellites into a massive display visible from the earth — something its CEO, Vlad Sitnikov, said would make him the first man to draw in space since the ancient Greeks grouped stars into constellations. • “New ages demand new gods,” the advertising expert said, adding that the world is no longer ruled by Greek deities but by brands and events. • Sitnikov said he came up with the space billboard idea last year after U.S.-New Zealand rocket propulsion company Rocket Lab launched a shiny disco ball called Humanity Star into orbit, where it remained visible to the human eye for months. • To work out the technical details, he teamed up with experts from Skoltech, a private Moscow university, he said. The team aims to put 200 tiny satellites, known as CubeSats, at an altitude of about 500 kilometres in low Earth orbit by 2021. • The satellites, each equipped with a sun-reflecting sail, would fly close together to comprise the pixels of a giant screen that could be switched on and off to display short words or logos. • Production costs alone are expected to be more than $150 million, he said.
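A back-of-envelope check helps explain why such a display would only be visible "for six minutes at a time": a satellite at the quoted ~500 km altitude circles the earth roughly every hour and a half, so it passes quickly over any one city. This sketch assumes a simple circular orbit (ignoring drag and perturbations) and uses standard physical constants; only the 500 km altitude comes from the article.

```python
import math

# Standard values for Earth's gravitational parameter and mean radius
MU_EARTH = 3.986004418e14   # m^3 / s^2
R_EARTH = 6.371e6           # m

def orbital_period_minutes(altitude_m: float) -> float:
    """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3 / mu)."""
    a = R_EARTH + altitude_m  # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

# CubeSats at the ~500 km altitude quoted in the article
print(round(orbital_period_minutes(500e3), 1))  # ~94.5 minutes per orbit
```

At that period the constellation completes roughly 15 orbits a day, which is why the firm would target cities only at dusk or dawn, when the ground is dark but the satellites' sails are still sunlit.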
The firm plans to charge about $200,000 for every eight hours of advertising. • The display would look roughly the size of a half moon and be visible for six minutes at a time, potentially from anywhere. • But to make ads worthwhile the firm intends to target mostly big cities — where light pollution is already high — at dusk or dawn, when people are out in the streets, Mr. Sitnikov said. • But since it was announced in January, the initiative has angered astronomers and raised questions about the need to better regulate who owns the skies — and what is in them. G. Prelims Facts Nothing here today!!! H. UPSC Prelims Practice Questions Question 1. Consider the following statements: 1. An Interim Budget is a complete set of accounts, including both expenditure and receipts. 2. It is made by the government just before an election. Select the correct statement(s): 1. 1 only 2. 2 only 3. Both 1 and 2 4. Neither 1 nor 2 Question 2. Consider the following statements regarding the India Water Impact Summit 2018: 1. It was organised jointly by the National Mission for Clean Ganga (NMCG) and the UNEP. 2. The 2018 Summit introduced the inaugural Ganga Financing Forum that will bring a number of institutions to a common knowledge, information and partnership platform. Which of the above statement(s) is/are correct? 1. 1 only 2. 2 only 3. Both 1 and 2 4. Neither 1 nor 2 Question 3. The Sumatran, Amur (Siberian), Malayan, Amoy (South China) and Indochinese species, recently in the news, belong to the category of: 1. Subspecies of tiger 2. Migratory birds 3. Turtles 4. Tribal groups of the Indonesian islands Question 4. Consider the following statements regarding Namdapha National Park: 1. It is India’s only reserve to have four big cat species — the tiger, the leopard and the severely endangered clouded and snow leopards. 2. The Noa-Dihing River, a tributary of the Brahmaputra, flows through Namdapha National Park. Which of the above statement(s) is/are correct? 1. 1 only 2. 2 only 3.
Both 1 and 2 4. Neither 1 nor 2