url: stringlengths 14 to 2.42k
text: stringlengths 100 to 1.02M
date: stringlengths 19 to 19
metadata: stringlengths 1.06k to 1.1k
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-5-review-page-330/54
Intermediate Algebra (6th Edition), published by Pearson. Chapter 5 - Review: Exercise 54. Answer: 8. Work Step by Step: Substitute 0 for x in the equation: $9(0)^{2} - 7(0) + 8 = 0 - 0 + 8 = 8$.
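A quick numeric check of the substitution (a minimal Python sketch; the polynomial 9x^2 - 7x + 8 is the one from the exercise):

```python
# Evaluate 9x^2 - 7x + 8 at x = 0 to confirm the answer of 8.
def p(x):
    return 9 * x**2 - 7 * x + 8

print(p(0))  # prints 8
```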
2017-06-24 21:20:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39127612113952637, "perplexity": 1479.129457366712}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320338.89/warc/CC-MAIN-20170624203022-20170624223022-00406.warc.gz"}
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:0808.76011
Zbl 0808.76011
Camassa, Roberto; Holm, Darryl D.; Hyman, James M.
A new integrable shallow water equation. (English) [A] Adv. Appl. Mech. 31, 1-33 (1994). ISBN 0-12-002031-9/hbk
From the introduction: We discuss a newly discovered completely integrable dispersive shallow water equation $$u_t + 2\kappa u_x - u_{xxt} + 3uu_x = 2u_x u_{xx} + uu_{xxx}, \tag 1$$ where $u$ is the fluid velocity in the $x$ direction (or equivalently, the height of the water's free surface above a flat bottom) and $\kappa$ is a constant related to the critical shallow-water wave speed.
After briefly discussing the Boussinesq class of small-amplitude dispersive shallow water equations, in Section II we derive the one-dimensional Green-Naghdi equations. In Section III, we use Hamiltonian methods to obtain equation (1) for unidirectional waves. In Section IV, we analyze the behavior of the solutions of (1) and show that certain initial conditions develop a vertical slope in finite time. We also show that there exist stable multisoliton solutions and derive the phase shift that occurs when two of these solitons collide. Section V demonstrates the existence of an infinite number of conservation laws for equation (1) that follow from its bi-Hamiltonian property. Section VI uses this property to derive the isospectral problem for this equation and others in its hierarchy.
MSC 2000: *76B15 Wave motions (fluid mechanics); 76B25 Solitary waves, etc. (inviscid fluids); 35Q51 Solitons; 37N10 Dynamical systems in fluid mechanics, oceanography and meteorology
Keywords: Boussinesq class of equations; one-dimensional Green-Naghdi equations; Hamiltonian methods; vertical slope; stable multisoliton solutions; phase shift; conservation laws; bi-Hamiltonian property; isospectral problem
Cited in: Zbl 0940.35177 Zbl 0952.35114
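A standard fact about equation (1), not stated in this review but useful context: in the dispersionless limit $\kappa \to 0$ the equation admits peaked solitary waves ("peakons"), the simplest traveling-wave solution being

```latex
% Single peakon of the kappa = 0 Camassa-Holm equation, moving at speed c:
\begin{equation*}
  u(x,t) = c\, e^{-|x - ct|}
\end{equation*}
```

and the multisoliton solutions mentioned above are superpositions of such peaks with time-dependent amplitudes and positions.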
2013-05-20 16:40:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6054151058197021, "perplexity": 2169.085565584012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699113041/warc/CC-MAIN-20130516101153-00068-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.mapleprimes.com/users/nielsle/replies
## Thank you for all the answers. They were very helpful.

Thank you for all the answers. They were very helpful. Here are some more ideas.

It would be great to have 2-dimensional and 3-dimensional vectors in the expression menu. Sometimes my students try to use the binomial coefficient as a vector, and that leads to trouble.

It would be nice to have helpful error messages such as "Did you mean X instead of x". Similarly, it would be great to get a helpful warning if they forget the arrow over a vector. (The problem is that \vec{a} and a are interpreted as two separate variables.)

It could be useful to have some kind of document-wide setting that caused all variables to be scoped within an exercise. In other words, the variables should be reset after each section header. Alternatively, Maple could warn the user whenever a variable is shadowing another variable. (I tell my students to write "restart; with(Gym)" at the beginning of each exercise, but sometimes they forget and that leads to trouble.)

It would be great to have a document-wide setting where all variables are assumed to be real and all angles are assumed to be in degrees until further notice. (This would probably require the trigonometric functions to be redefined.)

If the students write "With(Gym):" then Maple should report an error because "With" is upper case, but the ":" causes Maple to return no output. This error is difficult for new users to track down. It would be nice to have a way to get no output on success but an error message if the line failed. Is there a simple way to do this?

Kind regards
Niels
2023-01-28 19:50:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8155859708786011, "perplexity": 866.6278935242894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499654.54/warc/CC-MAIN-20230128184907-20230128214907-00546.warc.gz"}
https://math.gatech.edu/seminars-and-colloquia-by-series?series_tid=39&page=18
Seminars and Colloquia by Series

Topological K-Theory. Series: Geometry Topology Student Seminar. Time: Wednesday, October 16, 2013 - 14:05 for 1 hour (actually 50 minutes). Location: Skiles 006. Speaker: Shane Scott, Georgia Tech. To any compact Hausdorff space we can assign the ring of (classes of) vector bundles under the operations of direct sum and tensor product. This assignment allows the construction of an extraordinary cohomology theory for which the long exact sequence of a pair is 6-periodic.

Exotic 7-Spheres. Series: Geometry Topology Student Seminar. Time: Wednesday, October 9, 2013 - 14:00 for 1 hour (actually 50 minutes). Location: Skiles 006. Speaker: Jamie Conway, Georgia Tech. We will discuss Milnor's classic proof of the existence of exotic smooth structures on the 7-sphere.

Symplectic fillings of the 3-torus. Series: Geometry Topology Student Seminar. Time: Wednesday, September 25, 2013 - 14:05 for 1 hour (actually 50 minutes). Location: Skiles 006. Speaker: Amey Kaloti, Georgia Tech. The aim of this talk is to give a fairly self-contained proof of the following result due to Eliashberg: there is exactly one holomorphically fillable contact structure on $T^3$. If time permits we will try to indicate different notions of fillability of contact manifolds in dimension 3.

Today's Seminar will be a Geometry/Topology Research Seminar. Series: Geometry Topology Student Seminar. Time: Wednesday, September 4, 2013 - 14:00 for 1 hour (actually 50 minutes). Location: Skiles 006. Speaker: none listed.

Today's Seminar will be a Geometry/Topology Research Seminar. Series: Geometry Topology Student Seminar. Time: Wednesday, April 10, 2013 - 13:00 for 1 hour (actually 50 minutes). Location: Skiles 006. Speaker: none listed.

Teichmuller polynomials for a fibered face of the Thurston norm ball. Series: Geometry Topology Student Seminar. Time: Wednesday, March 13, 2013 - 13:00 for 1 hour (actually 50 minutes). Location: Skiles 006. Speaker: Hyunshik Shin, Georgia Tech. We will briefly introduce the Thurston norm and fibered face theory. Then we will discuss polynomial invariants for fibered 3-manifolds, the so-called Teichmuller polynomials. I will give an example of a Teichmuller polynomial and, using it, determine the stretch factors (dilatations) of a family of pseudo-Anosov homeomorphisms.

"Transverse knots and Khovanov homology". Series: Geometry Topology Student Seminar. Time: Wednesday, March 6, 2013 - 13:00 for 1 hour (actually 50 minutes). Location: Skiles 006. Speaker: Alan Diaz, Georgia Tech. I'll discuss Plamenevskaya's invariant of transverse knots, how it can be used to determine tightness of contact structures on some 3-manifolds, and efforts to understand more about this invariant. This is an Oral Comprehensive Exam; the talk will last about 40 minutes.

Higher Prym Representations of the Mapping Class Group. Series: Geometry Topology Student Seminar. Time: Wednesday, February 20, 2013 - 11:05 for 1 hour (actually 50 minutes). Location: Skiles 006. Speaker: Becca Winarski, Georgia Tech. A conjecture of Ivanov asserts that finite index subgroups of the mapping class group of higher genus surfaces have trivial rational homology. Putman and Wieland use what they call higher Prym representations, which are extensions of the representation induced by the action of the mapping class group on homology, to better understand the conjecture. In particular, they prove that if Ivanov's conjecture is true for some genus g surface, it is true for all higher genus surfaces. On the other hand, they also prove that if there is a counterexample to Ivanov's conjecture, it is of a specific form.

Hyperbolicity of the Arc and Curve Complex. Series: Geometry Topology Student Seminar. Time: Wednesday, February 13, 2013 - 13:00 for 1 hour (actually 50 minutes). Location: Skiles 005. Speaker: Jamie Conway, Georgia Tech. Given any surface, we can construct its curve complex by considering isotopy classes of curves on the surface. If the surface has boundary, we can construct its arc complex similarly, with isotopy classes of arcs with endpoints on the boundary. In 1999, Masur and Minsky proved that these complexes are hyperbolic, but the proof is long and involved. This talk will discuss a short proof of the hyperbolicity of the curve and arc complex recently given by Hensel, Przytycki, and Webb.

TBA by Meredith Casey. Series: Geometry Topology Student Seminar. Time: Wednesday, January 30, 2013 - 13:00 for 1 hour (actually 50 minutes). Location: Skiles 006. Speaker: Meredith Casey, Georgia Tech. TBA
2021-09-23 14:09:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4611821174621582, "perplexity": 1486.246984120373}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00411.warc.gz"}
https://deepspeech.readthedocs.io/en/r0.9/Scorer.html
# External scorer scripts

DeepSpeech pre-trained models include an external scorer. This document explains how to reproduce our external scorer, as well as how to adapt the scripts to create your own.

The scorer is composed of two sub-components, a KenLM language model and a trie data structure containing all words in the vocabulary. To create the scorer package, first we must create a KenLM language model (using data/lm/generate_lm.py), and then use generate_scorer_package to create the final package file including the trie data structure.

The generate_scorer_package binary is part of the native client package that is included with official releases. You can find the appropriate archive for your platform in the GitHub release downloads. The native client package is named native_client.{arch}.{config}.{plat}.tar.xz, where {arch} is the architecture the binary was built for, for example amd64 or arm64, {config} is the build configuration, which for building decoder packages does not matter, and {plat} is the platform the binary was built for, for example linux or osx. If you wanted to run the generate_scorer_package binary on a Linux desktop, you would download native_client.amd64.cpu.linux.tar.xz.

## Reproducing our external scorer

Our KenLM language model was generated from the LibriSpeech normalized LM training text, available here. It is created with KenLM.

    cd data/lm
    wget http://www.openslr.org/resources/11/librispeech-lm-norm.txt.gz

Then use the generate_lm.py script to generate lm.binary and vocab-500000.txt. As input you can use a plain text (e.g. file.txt) or gzipped (e.g. file.txt.gz) text file with one sentence per line. If you are using a container created from Dockerfile.build, you can use --kenlm_bins /DeepSpeech/native_client/kenlm/build/bin/. Otherwise you have to build KenLM first and then pass the build directory to the script.

    cd data/lm
    python3 generate_lm.py --input_txt librispeech-lm-norm.txt.gz --output_dir . \
      --top_k 500000 --kenlm_bins path/to/kenlm/build/bin/ \
      --arpa_order 5 --max_arpa_memory "85%" --arpa_prune "0|0|1" \
      --binary_a_bits 255 --binary_q_bits 8 --binary_type trie

Afterwards you can use generate_scorer_package to generate the scorer package using the lm.binary and vocab-500000.txt files:

    cd data/lm
    curl -LO http://github.com/mozilla/DeepSpeech/releases/...
    tar xvf native_client.*.tar.xz
    ./generate_scorer_package --alphabet ../alphabet.txt --lm lm.binary --vocab vocab-500000.txt \
      --package kenlm.scorer --default_alpha 0.931289039105002 --default_beta 1.1834137581510284

The generate_scorer_package binary is part of the released native_client.tar.xz. If for some reason you need to rebuild it, please refer to how to Compile generate_scorer_package.

With a text corpus in hand, you can then re-use generate_lm.py and generate_scorer_package to create your own scorer that is compatible with DeepSpeech clients and language bindings. Before building the language model, you must first familiarize yourself with the KenLM toolkit. Most of the options exposed by the generate_lm.py script are simply forwarded to KenLM options of the same name, so you must read the KenLM documentation in order to fully understand their behavior.

After using generate_lm.py to create a KenLM language model binary file, you can use generate_scorer_package to create a scorer package as described in the previous section. Note that we have a lm_optimizer.py script which can be used to find good default values for alpha and beta.
To use it, you must first generate a package with any value set for default alpha and beta flags. For this step, it doesn’t matter what values you use, as they’ll be overridden by lm_optimizer.py later. Then, use lm_optimizer.py with this scorer file to find good alpha and beta values. Finally, use generate_scorer_package again, this time with the new values.
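Once kenlm.scorer exists, it is consumed through the client bindings. The snippet below is a minimal sketch, assuming the deepspeech 0.9 Python package, a released acoustic model file (the filename is illustrative), and a 16 kHz 16-bit mono WAV file; the alpha/beta values are the defaults baked into the package above.

```python
import wave
import numpy as np
from deepspeech import Model

ds = Model("deepspeech-0.9.3-models.pbmm")   # acoustic model (illustrative filename)
ds.enableExternalScorer("kenlm.scorer")      # attach the custom scorer package
ds.setScorerAlphaBeta(0.931289039105002, 1.1834137581510284)  # e.g. values from lm_optimizer.py

# Read a 16 kHz, 16-bit mono WAV file and run inference with the scorer enabled.
with wave.open("audio.wav", "rb") as w:
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
print(ds.stt(audio))
```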
2022-01-17 01:11:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25760313868522644, "perplexity": 5039.927377522907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300253.51/warc/CC-MAIN-20220117000754-20220117030754-00680.warc.gz"}
http://www.jessebett.com/Radial-Basis-Function-USRA/
## Interpolating Scattered Data in N-Dimensions

This project explores the use of Radial Basis Functions (RBFs) in the interpolation of scattered data in N-dimensions. It was completed Summer 2014 by Jesse Bettencourt as an NSERC-USRA* student under the supervision of Dr. Kevlahan in the Department of Mathematics and Statistics at McMaster University, Hamilton, Ontario, Canada.

*Undergraduate Student Research Awards (USRA) are granted by the Natural Sciences and Engineering Research Council of Canada to 'stimulate interest in research in the natural sciences and engineering' and to encourage graduate studies and the pursuit of research careers in these fields.

This repository contains resources and working documents associated with the project. The early stages of the project focused on reviewing the published literature on RBF interpolation. This was summarized in a presentation given at the Canadian Undergraduate Mathematics Conference (CUMC) 2014 at Carleton University. The pdf, LaTeX, and figures, as well as the python script to generate the figures from this presentation, can be found in the CUMC Presentation folder. Following the presentation, the project shifted focus to demonstrating the SciPy implementation of RBF interpolation. The Interpolation Demonstration folder contains the python files associated with this exploration. Of note, the file SphericalHarmonicInterpolation.py demonstrates how RBFs can be used to interpolate spherical harmonics given data sites and measurements on the surface of a sphere. This folder also contains iPython notebooks from early experimentation with SciPy's RBF and a Mathematica notebook from a preliminary assessment of the Mathematica implementation of RBF interpolation.

## What is interpolation?

Given a set of measurements $\{f_i\}_{i=1}^N$ taken at corresponding data sites $\{x_i\}_{i=1}^N$, we want to find an interpolation function $s(x)$ that informs us about our system at locations different from our data sites. Further, we want our function $s(x)$ to satisfy what's called the interpolation condition, which is that the interpolation function exactly matches our measurements at our data sites.

Interpolation Condition: $$s(x_i)=f_i \quad \forall i\in\{1, \dots, N\}$$

This is how interpolation differs from approximation, where approximation does not necessitate that our function exactly equals our measurements at the data sites. This can be achieved through different methods, e.g., Least Squares approximation. Sometimes, when accuracy at data sites is not necessary, approximation is preferred over interpolation because it can provide a 'nicer' function which could better illustrate the relationship among the data sites and measurements. For instance, approximation is heavily utilized in experimental science, where measurements can contain a measurement error associated with experimental procedures. In this environment, the interpolation condition may be undesirable because it forces the interpolation to match potential measurement error exactly, where approximation may alleviate the influence of error and illustrate measured correlations better. For the purposes of this project, we focus on interpolation only.

## Interpolation Assumption

Many interpolation methods rely on the convenient assumption that our interpolation function, $s(x)$, can be found through a linear combination of basis functions, $\psi_i(x)$.
Linear Combination Assumption: \begin{equation*} s(x)=\sum_{i=1}^N \lambda_i \psi_i(x) \end{equation*}

This assumption is convenient as it allows us to utilize solving methods for systems of linear equations from linear algebra to find our interpolation function. As such, we can express our interpolation problem as a linear system.

Interpolation as a Linear System: \begin{equation*} A\boldsymbol{\lambda}=\boldsymbol{f} \end{equation*}

where $\boldsymbol{f}$ is the vector of data site measurements $\left[ f_1, ..., f_N \right]^T$ and $\boldsymbol{\lambda}$ is the vector of linear combination coefficients $\left[ \lambda_1, ..., \lambda_N \right]^T$. For a system with N measurement data sites, $A$ is an $N \times N$ matrix called the interpolation matrix or Vandermonde matrix. The elements of $A$ are given by the basis functions $\psi_j$ evaluated at each data site $x_i$.

Elements of $A$: \begin{equation*} a_{ij}=\psi_j(x_i) \end{equation*}

By using numerical methods and solving this linear system, we will have our interpolation function as a linear combination of our basis functions.

#### Familiar Example of Interpolation Basis

A choice of basis functions, $\psi_i$, which may be familiar to undergraduate students is the basis of (N-1)-degree polynomials. If we wish to find a 1-dimensional interpolation function from N distinct data sites, we can find an (N-1)-degree polynomial which goes exactly through all sites. In other words, by choosing our basis functions to be successive powers of x up to (N-1), we can solve our interpolation system for our function.

Polynomial Interpolation Basis: \begin{equation*} \{\psi_i\}_{i=1}^N=\{1,x,x^2,x^3, ..., x^{N-1}\} \end{equation*}

An example of this interpolation with 6 data sites can be seen in Figure 3. Here the interpolation function, as a linear combination, is $s(x)=-0.02988 x^5 + 0.417 x^4 - 2.018 x^3 + 3.694 x^2 - 1.722 x - 5.511\times10^{-14}$.

However, while a polynomial basis is simple for 1-dimensional interpolation, this method is not ideal for higher dimensions. To accommodate higher dimensional interpolation, we must choose our basis differently.

## Well-Posedness in Higher Dimensional Interpolation

When defining our linear system we must consider whether our system is well-posed. That is, does there exist a solution to our interpolation problem, and if so, is that solution unique?

Well-Posedness in Linear Systems: Our system will be well-posed if and only if $A$ is non-singular, i.e. $\det(A)\neq0$.

For 1-D interpolation, many choices of basis functions will guarantee a well-posed system. In our example of polynomial interpolation, for instance, it was guaranteed that for N distinct data sites a unique (N-1)-degree polynomial will interpolate the measurements. So without predetermining any information about our data sites (other than that they are distinct from each other), or their measurements, we can define our basis functions independently of our data and expect a unique, well-posed solution.

However, for n dimensions where $n\geq2$ this is never guaranteed! That is, no matter what we choose for our set of basis functions, there will always be data sites which produce ill-posed systems. The implication of this is that we cannot define our basis functions independently of our data and expect a well-posed system. This results from the Haar-Mairhuber-Curtis Theorem.
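To make the linear-system view concrete before moving on, here is a minimal numpy sketch of the 1-D polynomial case above (the six data sites and measurements are made up for illustration, not the data from Figure 3): build the Vandermonde matrix $a_{ij}=\psi_j(x_i)=x_i^{j-1}$, solve $A\boldsymbol{\lambda}=\boldsymbol{f}$, and check the interpolation condition.

```python
import numpy as np

# Six illustrative data sites and measurements.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
f = np.sin(x)

# Vandermonde interpolation matrix: columns 1, x, x^2, ..., x^(N-1).
A = np.vander(x, increasing=True)
lam = np.linalg.solve(A, f)

# s(x) = sum_j lam_j x^(j-1); np.polyval expects the highest-degree coefficient first.
s = lambda t: np.polyval(lam[::-1], t)
print(np.allclose(s(x), f))  # True: s matches the measurements at the data sites
```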
### Haar-Mairhuber-Curtis Theorem

Through the work of Alfréd Haar and his description of Haar spaces we gain the negative result that well-posedness is not guaranteed in higher dimensional linear systems with independently chosen basis functions. To state the theorem we first define Haar spaces.

Definition of Haar Space: Let $\Omega \subset \mathbb{R}^N$ be a set with at least $N$ sites in it. Let $V \subset C(\Omega)$ be an $N$-dimensional subspace of continuous functions. Then, we say that $V$ is a Haar space if for any collection of $N$ sites $\{x_1,...,x_N\}$ with any corresponding set of values $\{f_1,...,f_N\}$, we can find a unique function $s \in V$ such that $s(x_k)=f_k$.

From this definition we have the following lemma.

Lemma: Let $\Omega \subset \mathbb{R}^N$ be a set with at least $N$ sites in it and $V \subset C(\Omega)$ be a subspace. Then, $V$ is an $N$-dimensional Haar space if and only if for any distinct sites $\{x_1,...,x_N\} \in \Omega$ and any basis of functions $\{\psi_1,...,\psi_N\} \in V$, we have $\det(\psi_j(x_i))\neq0$. In other words: $V$ is a Haar space $\iff$ any basis of functions produces a well-posed system for any set of distinct data sites.

For the purposes of interpolation, then, interpolating within a Haar space is ideal, because then we can choose our basis independently of our data and, as per the lemma, we are guaranteed a well-posed system and a unique solution. However, by the negative result of the Mairhuber-Curtis Theorem, there can be no Haar spaces in $N$ dimensions for $N \geq 2$.

Mairhuber-Curtis Theorem: Let $\Omega \subset \mathbb{R}^N$, $N \geq 2$, contain an interior site. Then, there is no Haar space of dimension $N \geq 2$ for $\Omega$.

So, if we are interpolating scattered data in higher dimensions, by the Haar-Mairhuber-Curtis Theorem we cannot choose our basis functions independently from our data sites. However, this does not mean we cannot interpolate in higher dimensions using our interpolation assumption. If we can't guarantee well-posedness with independently chosen basis functions, we must choose our basis functions depending on our data sites.

## Basis Functions for Higher Dimension Interpolation

One method for defining basis functions depending on our data sites is to take a single function and translate it for each site. That is, our basis functions will be translates of a single function for each data site.

### Translates of the Basic Function

If our basis functions are translates of a function, which function should we translate? By answering this question we will arrive at the definition of Radial Basis Functions, but first let's consider a preliminary function: the basic function.

The Basic Function: \begin{equation*} \psi_i(x)=||x-x_i|| \end{equation*}

The basic function (Figure 5) is the absolute-value-like function given by the Euclidean distance from a center point $x_i \in \mathbb{R}^N$. The basic function has the feature that it is radially symmetric about this center point. We can define our set of basis functions, $\{\psi_i(x)\}_{i=1}^N$, as translates of our basic function such that the center points are located at our data sites.

Set of Basis Functions: \begin{equation*} \{\psi_i(x)=||x-x_i||\}_{i=1}^N \end{equation*}

In other words, our set of basis functions is composed of basic functions centered at each of our data sites. We can visualize one of these centered basic functions in Figure 6. Now that we have chosen our basis functions, we can look at the linear system which it produces.
For instance, our interpolation matrix, $A$, now becomes: \begin{equation*} A= \begin{bmatrix} ||x_1-x_1|| & ||x_1-x_2|| & \cdots & ||x_1-x_N||\\ ||x_2-x_1|| & ||x_2-x_2||& \cdots & ||x_2-x_N||\\ \vdots & \vdots & \ddots & \vdots\\ ||x_N-x_1|| & ||x_N-x_2||& \cdots & ||x_N-x_N|| \end{bmatrix} \end{equation*}

This matrix is known as the distance matrix with Euclidean distance. But if we're not interpolating in 1-D, then we know we're not in a Haar space. How do we know that our linear interpolation system with the distance matrix is well-posed?

Lemma from Linear Algebra: Distance matrices with Euclidean distance, for distinct points in $\mathbb{R}^n$, are always non-singular.

From the above lemma we know that our interpolation matrix is non-singular. Therefore, we know our system is well-posed and that there exists a unique interpolation function!

However, the choice of $\psi_i(x)=||x-x_i||$ as our basic function is not ideal. As we can see from our above plot of the basic function centered at $x_i$, the first derivative of the basic function is discontinuous at our center point, $x_i$. This has the consequence that, at each of our data sites, the first derivative of our interpolation function will be discontinuous. This is problematic because, ideally, we would like to have a $C^\infty$ smooth interpolation function so we can use methods from calculus to analyze our function. How can we remedy the derivative discontinuities in our interpolation function?

### Building a Better Basic Function

In 1968, R.L. Hardy suggested that by using a $C^\infty$ smooth function as our basic function, we can produce smooth interpolation functions. These functions are called kernels. The kernel suggested by Hardy was the Multiquadric Kernel.

Hardy's Multiquadric Kernel: \begin{align*} \psi(x)=\sqrt{c^2 + x^2} \end{align*} where $c \neq 0$.

Notice that if we allow $c=0$ in the multiquadric kernel then we are actually describing the basic function used above. So, in other words, Hardy's multiquadric kernel is like the basic function but smoothed with a parameter $c$. By looking at a plot of the multiquadric kernel, we can see that the discontinuity from the basic function has been addressed. In fact, the multiquadric function is, as desired, $C^\infty$ smooth.

As before, we will define our basis functions, $\{\psi_i\}_{i=1}^{N}$, as a set of multiquadric kernels translated such that they are centered at our data sites, $x_i$.

Basis of Multiquadric Kernels: \begin{equation*} \{\psi_i(x)=\sqrt{c^2 + (||x-x_i||)^2}\}_{i=1}^N \end{equation*}

We can visualize one of these translated multiquadric kernels in the figure below. Notice that the multiquadric kernel is also radially symmetric about its center, $x_i$. Because of this radial symmetry, the multiquadric kernel can be described as a Radial Basis Function. In other words, it is a basis function which depends only on the radial distance from its center. Since our basis functions $\psi_i(x)$ depend only on distance, we can re-express them as such.
Radial Basis Functions: \begin{equation*} \psi(||x-x_i||)= \phi(r) \end{equation*} where $r=||x-x_i||$.

With our interpolation assumption, we can express our interpolation function as a linear combination of these functions, as before:

Interpolation as a Linear Combination of Radial Basis Functions: \begin{equation*} s(x)=\sum_{i=1}^N \lambda_i \psi(||x-x_i||)=\sum_{i=1}^N \lambda_i \phi(r) \end{equation*}

There are a few commonly used radial basis function kernels:

• Multiquadric: $\phi(r)=\sqrt{1+(\epsilon r)^2}$
• Inverse Multiquadric: $\phi(r)=\frac{1}{\sqrt{1+(\epsilon r)^2}}$
• Inverse Quadratic: $\phi(r)=\frac{1}{1+(\epsilon r)^2}$
• Gaussian: $\phi(r)=e^{-(\epsilon r)^2}$

As before, we can use translates of these functions centered on our data sites as a basis for our interpolation linear system. Further, notice that the multiquadric kernel has been rearranged to replace $c$ with a shape parameter, $\epsilon$, consistent with the other kernels. However, by using the radial basis kernels as our basis, we change the interpolation matrix so that it is no longer the distance matrix as before.

Interpolation matrix with RBF kernels: \begin{equation*} A= \begin{bmatrix} \phi_1(r_1) & \phi_1(r_2) & \cdots & \phi_1(r_N)\\ \phi_2(r_1) & \phi_2(r_2)& \cdots & \phi_2(r_N)\\ \vdots & \vdots & \ddots & \vdots\\ \phi_N(r_1) & \phi_N(r_2)& \cdots & \phi_N(r_N) \end{bmatrix} \end{equation*}

If $N\geq2$ then we are still not interpolating in a Haar space, and since we are no longer using a distance matrix, can we expect well-posedness? To answer this question we determine if our interpolation matrix is positive-definite. A matrix, $A$, is positive-definite if \begin{align*} & t^TAt>0 & \forall t=\left[ t_1, t_2, ..., t_n\right]\neq 0 \in \mathbb{R}^n \end{align*}

Using this definition we have the following condition: if the interpolation matrix, $A$, is symmetric positive-definite, then $A$ is nonsingular and our system is well-posed. So we can guarantee the existence of a unique solution if we choose our kernels such that $A$ will be positive-definite. In fact, we can produce positive-definite interpolation matrices by using positive-definite kernels. A function, $\phi: \mathbb{R}^n\times \mathbb{R}^n \rightarrow \mathbb{R}$, is said to be positive definite if \begin{align*} &\sum_{i=1}^N \sum_{j=1}^N \phi(||x_i-x_j||)t_i\bar{t_j}>0 &\forall t=\left[ t_1, t_2, ..., t_n\right]\neq 0 \in \mathbb{C}^n \end{align*}

Of the common RBF kernels described above, all are positive-definite except Hardy's multiquadric kernel. However, the multiquadric kernel is guaranteed to produce well-posed systems for other, similar reasons (it is conditionally negative-definite). With the exception of Hardy's multiquadric kernel, by using positive-definite kernels we can produce positive-definite interpolation matrices which guarantee well-posed systems!

So, by using radial basis kernels for interpolation, we have shown that there exists a unique interpolation function $s(x)$ which interpolates scattered data in N-dimensions.

Radial Basis Interpolation: \begin{equation*} s(x)=\sum_{i=1}^N \lambda_i \psi(||x-x_i||)=\sum_{i=1}^N \lambda_i \phi(r) \end{equation*}

### Well-posed vs. Well-conditioned

In the discussion above we have shown that radial basis interpolation is well-posed, so there exists a unique solution for the interpolation problem. However, because these systems are solved using numerical methods on computers, they are subject to computational limitations.
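A minimal numpy sketch (made-up data sites; the Gaussian kernel from the list above) makes the construction concrete: assemble $a_{ij}=\phi(||x_i-x_j||)$, solve $A\boldsymbol{\lambda}=\boldsymbol{f}$, and watch how the conditioning of $A$ depends on the shape parameter.

```python
import numpy as np

def gaussian(r, eps):
    """Gaussian RBF kernel phi(r) = exp(-(eps*r)**2) from the kernel list above."""
    return np.exp(-(eps * r) ** 2)

# Illustrative scattered sites in R^2 and measurements (made up for this sketch).
rng = np.random.default_rng(0)
sites = rng.uniform(-1.0, 1.0, size=(30, 2))
f = np.sin(sites[:, 0]) * np.cos(sites[:, 1])

# Pairwise distances r_ij = ||x_i - x_j|| and the interpolation matrix a_ij = phi(r_ij).
r = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
A = gaussian(r, eps=2.0)

lam = np.linalg.solve(A, f)     # coefficients lambda of the interpolant
print(np.allclose(A @ lam, f))  # True: the interpolation condition holds at the sites

# Flatter kernels give nearly identical matrix rows, hence much worse conditioning.
for eps in (2.0, 0.5, 0.1):
    print(eps, np.linalg.cond(gaussian(r, eps)))
```

(Note that SciPy's Rbf parametrizes the kernels as $\phi(r/\epsilon)$ rather than $\phi(\epsilon r)$, so in the SciPy convention used later on this page it is a larger epsilon that produces the flatter, worse-conditioned system.)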
By using computational methods we introduce a complication: just because a solution exists doesn't mean that it is accessible through numerical methods. A common example of the limitations that can cause a solution to be inaccessible is the accumulation of rounding errors. If our solution exists, and the system behaves 'nicely' with computational solving methods, then we say the solution is well-conditioned.

Radial basis interpolation problems, although well-posed, have the propensity to be very ill-conditioned. This is in part due to the choice of shape parameter, $\epsilon$. For some systems, small changes in $\epsilon$ may have a significant influence on the system. In the two figures below we can see how increasing the value of epsilon changes the shape of the individual kernel basis functions.

For $\epsilon=0.4$

For $\epsilon=1$

In the three figures below, we can see how increasing the value of epsilon will cause the interpolation system to become ill-conditioned. Keep in mind that the interpolation solution for each $\epsilon$ value still exists, but the computational methods create noise and are unable to find the function. So we can see that in order to use radial basis function interpolation we must choose epsilon in such a way that the system does not become ill-conditioned.

Another limitation of radial basis function interpolation is that any error that occurs, as with ill-conditioning, occurs to a greater extent near the boundaries. This can be seen in the above three figures: as the solution becomes more noisy, the noise is greater at the boundaries. This is because radial basis function interpolation relies on the radial symmetry of the basis functions. Basis functions centered at data sites on or close to the boundaries of the interpolation space become asymmetric. Of course, this can be avoided entirely by using radial basis function interpolation to interpolate functions in spaces without boundaries, e.g. the surface of a sphere.

## Demonstrating Radial Basis Interpolation on Surface of Sphere

As part of this project I demonstrate how SciPy's implementation of Radial Basis Function interpolation can be used to interpolate spherical harmonic functions on the surface of a sphere. The complete code for this demonstration can be found in this repository under the Interpolation Demonstration folder in the file SphericalHarmonicInterpolation.py.

### Dependencies

To fully use the python code you need the following libraries:

### Setting Up Coordinates

There are two sets of points used throughout the code: the data sites, which are used to train the RBF interpolation, called coarse coordinates, and the interpolation sites, where the function is being interpolated, called fine coordinates. I define the function which produces both sets of points,

    def coordinates(n_fine, n_coarse):

where n_fine and n_coarse are parameters given to define the resolution of the interpolation space and the number of data sites respectively. Because of the way the interpolation space grid is defined, n_fine is given as a complex number.

#### Fine Coordinate Grid

The interpolation space grid, produced by the function make_coor(n) called with n_fine, is a grid of points on a sphere corresponding to latitude-longitude style grid points. In other words, the function creates an (n x n)-sized grid of points in ($\phi, \theta$)-space. Then the function converts those points to Cartesian (x,y,z)-coordinates.
    def make_coor(n):
        '''Creates points on the surface of sphere using lat-lon grid points'''
        phi, theta = np.mgrid[0:pi:n, 0:2 * pi:n]
        Coor = namedtuple('Coor', 'r phi theta x y z')
        r = 1
        x = r * sin(phi) * cos(theta)
        y = r * sin(phi) * sin(theta)
        z = r * cos(phi)
        return Coor(r, phi, theta, x, y, z)

The resulting points and spherical mesh can be seen in the figure below for n_fine = 20j. From the figure we can see that the fine grid produces points at a much higher density at the poles than along the 'equator'. For this reason, we cannot use these points to train our radial basis function. Instead we must use a different method to produce our data sites.

#### Coarse Coordinate (Data Sites) Grid

For the RBF interpolation of an arbitrary function on the surface of a sphere we want to choose our points so they are equally spaced from each other. As it happens, the problem of uniformly distributing n points on the surface of a sphere is an open problem. For a large ($N \geq 10$) number of points, a sufficient method for pseudo-uniformly distributing $N$ points on the surface of the sphere is the Golden Section Spiral method. This algorithm places the points according to the golden spiral and can be visualized in the video below. The algorithm was implemented in my code as the following function:

    def uniform_spherical_distribution(N):
        """N points distributed evenly on the surface of a unit sphere"""
        pts = []
        r = 1
        inc = pi * (3 - sqrt(5))
        off = 2 / float(N)
        for k in range(0, int(N)):
            y = k * off - 1 + (off / 2)
            r = sqrt(1 - y * y)
            phi = k * inc
            pts.append([cos(phi) * r, y, sin(phi) * r])
        return np.array(pts)

However, since this only produces the Cartesian coordinates for these points and we will also need the spherical coordinates, we append the spherical coordinates using the function:

    def appendSpherical_np(xyz):
        '''Appends spherical coordinates to array of Cartesian coordinates'''
        ptsnew = np.hstack((xyz, np.zeros(xyz.shape)))
        xy = xyz[:, 0] ** 2 + xyz[:, 1] ** 2
        ptsnew[:, 3] = np.sqrt(xy + xyz[:, 2] ** 2)
        # elevation angle defined from Z-axis down
        ptsnew[:, 4] = np.arctan2(np.sqrt(xy), xyz[:, 2])
        # ptsnew[:, 4] = np.arctan2(xyz[:, 2], np.sqrt(xy))  # elevation angle defined from XY-plane up
        ptsnew[:, 5] = np.arctan2(xyz[:, 1], xyz[:, 0])
        return ptsnew

where the parameter xyz is an array of Cartesian coordinates. To produce our uniformly spaced coarse coordinates for interpolation, we call the function

    def make_uni_coor(n):
        '''Make named tuple of uniformly distributed points on sphere'''
        Coor = namedtuple('Coor', 'theta phi x y z')
        pts = uniform_spherical_distribution(n)
        pts = appendSpherical_np(pts)
        return Coor(pts[:, 5], pts[:, 4], pts[:, 0], pts[:, 1], pts[:, 2])

However, we've only generated the points of the sphere without providing information on how these points relate to each other. If we wish to plot these points as though they all belong on the surface of the sphere, we need to define a mesh. For this we use Delaunay triangulation to produce triangles between three adjacent points on the sphere. To do this, we use DiPy's sphere object, which allows us to define a sphere using our Cartesian coordinates. The object provides faces, an array of triangles for our Delaunay mesh. We can visualize our pseudo-uniformly distributed points and their Delaunay mesh in the figure below for n_coarse = 100.

### Using Named Tuples

The coordinate systems produced above use python's namedtuples as a variable naming convention.
For instance, when defining our fine spherical grid we first define a namedtuple:

    Coor = namedtuple('Coor', 'r phi theta x y z')

Then, when the function is returning the coordinates, it stores them as a named tuple as follows:

    return Coor(r, phi, theta, x, y, z)

As such we can address the coordinates by selecting named elements of the tuple. The parent function producing the coordinates stores the fine and coarse coordinates as named tuples.

    def coordinates(n_fine, n_coarse):
        ...
        Coordinates = namedtuple('Coordinates', 'fine coarse')
        return Coordinates(make_coor(n_fine), make_uni_coor(n_coarse))

Now, we generate our coordinates by calling this function and naming the output Coor:

    Coor = coordinates(n_fine, n_coarse)

We can access all our coordinate information by addressing named elements inside the tuple. For instance, if we wish to address the fine coordinates' $\phi$ component, we can do so as follows:

    Coor.fine.phi

Using named tuples for variable names allows for flexible and readable python code.

### Interpolating function

Now that we have our coordinates we can define a function on the surface of the sphere at those coordinates. For an example function on a sphere's surface we use the real part of spherical harmonics. Specifically, we use the real part of the spherical harmonic $Y^3_4$. To implement this, we use SciPy's spherical harmonic function and define:

    def harmonic(m, l, coor):
        '''Produce m,l spherical harmonic at coarse and fine coordinates'''
        Harmonic = namedtuple('Harmonic', 'fine coarse')
        return Harmonic(
            special.sph_harm(m, l, coor.fine.theta, coor.fine.phi).real,
            special.sph_harm(m, l, coor.coarse.theta, coor.coarse.phi).real
        )

Again, we make use of the namedtuple to define a fine and a coarse harmonic. Since we are using RBF to interpolate the spherical harmonic from the coarse sites, we technically only need to evaluate the harmonic at the coarse coordinates. However, we are also interested in comparing the interpolated result to the actual function; to do this we also find the values of the function at the fine coordinates. We call our function using our coordinates:

    function = harmonic(3, 4, coordinates)

### Implementing the Radial Basis Function Interpolation

We use SciPy's implementation of RBF interpolation to define a function:

    def rbf_interpolate(coor, coarse_function, epsilon=None):
        '''Radial Basis Function interpolation from coarse sites to fine coordinates'''
        # Train the interpolation using the coarse coordinates
        rbf = Rbf(coor.coarse.x, coor.coarse.y, coor.coarse.z, coarse_function,
                  norm=angle, epsilon=epsilon)
        # The result of the interpolation on the fine coordinates
        return rbf(coor.fine.x, coor.fine.y, coor.fine.z)

We call our RBF interpolation using our coarse Cartesian coordinates and the value of the harmonic at those coordinates:

    rbf = Rbf(coor.coarse.x, coor.coarse.y, coor.coarse.z, coarse_function, norm=angle, epsilon=epsilon)

Notice that we can provide a value for epsilon (if given None, SciPy will compute a default value). Further, note that we define a norm. This is the distance metric used to determine the radial distance from the data sites. By default, SciPy will use the Euclidean distance as the distance norm. However, since we are training our function using Cartesian coordinates on the surface of the unit sphere, we must use a distance metric for points on the surface of that sphere.
For the $S^2$ distance norm we define a function to be called from our Rbf:

    def angle(x1, x2):
        '''Distance metric on the surface of the unit sphere'''
        xx = np.arccos((x1 * x2).sum(axis=0))
        xx[np.isnan(xx)] = 0
        return xx

Now once we have trained our radial basis function, rbf(), we can use it to interpolate the spherical harmonic on our fine coordinates:

    return rbf(coor.fine.x, coor.fine.y, coor.fine.z)

### Optimizing our Choice of Epsilon

We can define the errors of our interpolation to be the difference between the interpolated function and the actual spherical harmonic function at each of the fine coordinates. We define a python function to give us these values:

    def interp_error(fine_function, interp_results):
        '''Error between interpolated function and actual function'''
        Error = namedtuple('Error', 'errors max')
        errors = fine_function - interp_results
        error_max = np.max(np.abs(errors))
        return Error(errors, error_max)

Further, we can assess the overall error of the interpolation by using the maximum difference between the interpolation and the function, calling this value error_max. If we perform multiple RBF interpolations, each with a different value of the shape parameter, $\epsilon$, we can see how epsilon affects the maximum error of the interpolation. Further, we can use this to choose the epsilon which minimizes this error. I plot the maximum error for increasing values of epsilon, colouring the optimal choice red. By RBF interpolating our function with the optimal value of epsilon, we can minimize the interpolation error.

### Visualizing the Results of the Interpolation

Using the MayaVi scientific data visualization library we can visualize the results of this interpolation. Note: the following images are stills from the MayaVi visualization environment, which is interactive. I highly recommend downloading and playing with these figures yourself, as you can rotate around the sphere. First, we plot the spherical harmonic function on the sphere. We also add small 'warts' which indicate where the data sites being used for interpolation are, coloured by the value of the function at those sites.

    mlab.figure()
    vmax, vmin = np.max(fun.fine), np.min(fun.fine)
    mlab.mesh(coor.fine.x, coor.fine.y, coor.fine.z, scalars=fun.fine, vmax=vmax, vmin=vmin)
    mlab.points3d(coor.coarse.x, coor.coarse.y, coor.coarse.z, fun.coarse,
                  scale_factor=0.1, scale_mode='none', vmax=vmax, vmin=vmin)
    mlab.colorbar(title='Spherical Harmonic', orientation='vertical')
    mlab.savefig('Figures/functionsphere.png')

Then, we can see the interpolated function. Finally, we can see where the error occurs on our sphere by visualizing the error.

Note that the above interpolation uses a relatively high number of data sites (N=350). We can see how the interpolation worsens with a smaller number of sites (N=100). Again, here is the spherical harmonic we are interpolating, now with the 100 data sites, and here is the interpolation trained with the fewer sites. Predictably, this causes the error of the interpolation to increase.

### Conclusion

Radial Basis Interpolation is an effective method to interpolate high dimensional scattered data, especially if the interpolation space has no boundaries. The method solves the problem of well-posedness for higher dimensional scattered data, but still has the propensity to be ill-conditioned. However, like other interpolation methods, there are adjustments to RBF methods not discussed in this project that can be used to improve the conditioning of the interpolation problem.
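As a compact, self-contained variant of the workflow above (illustrative 2-D scattered data rather than the project's spherical set-up, but the same SciPy Rbf interface):

```python
import numpy as np
from scipy.interpolate import Rbf

# Illustrative scattered data sites and measurements (not the project's data).
rng = np.random.default_rng(1)
x, y = rng.uniform(-2.0, 2.0, size=(2, 50))
f = np.exp(-(x**2 + y**2))

# Train a multiquadric RBF interpolant on the scattered sites.
rbf = Rbf(x, y, f, function='multiquadric', epsilon=1.0)

# Evaluate on a fine grid and report the maximum error against the true function.
xi, yi = np.meshgrid(np.linspace(-2, 2, 100), np.linspace(-2, 2, 100))
fi = rbf(xi, yi)
print(np.max(np.abs(fi - np.exp(-(xi**2 + yi**2)))))
```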
This project represented my introduction to interpolation theory, and as a summer project it was very rewarding to explore the theoretical aspects of Radial Basis Function interpolation while also applying the theory using computational methods. The project allowed me to develop knowledge of the Python language, including the use of SciPy and other common libraries. I also found great support from the Python community, especially through StackExchange, which supported a secondary goal of this project in exploring open-source community resources. This exploration also led me to learn Git version control and to keep my entire project stored on GitHub, which was invaluable in managing my progress, and also allowed me to easily communicate code and troubleshoot problems with peers and my supervisor. GitHub proved to be an excellent resource for this project, and allowed me to easily generate a website detailing my project through GitHub Pages, which links to the original repository and is an easily accessible and digestible overview of my project. Through this project I was also given the opportunity to present my research at the Canadian Undergraduate Mathematics Conference 2014 in Ottawa. This experience was extremely rewarding, and developed skills in presenting mathematical information that I had previously not explored. Overall, the project was very successful in providing insight into mathematics research, and in developing my skills as a mathematician. I also gained many experiences outside of the mathematical focus, including Python literacy, version control, presentation methods, and access to the online support community. This project was extremely important to my undergraduate education, so thank you to NSERC for funding the USRA and to Dr. Nicholas Kevlahan for his supervision and guidance in the project.
2021-05-17 22:17:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.6686689853668213, "perplexity": 852.775592032414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00083.warc.gz"}
http://techgtholpadi.blogspot.com/2014/10/using-latex-in-blogger.html
## Wednesday, October 1, 2014 ### Using LaTeX in Blogger I found this answer on tex.stackexchange about how to include LaTeX formulas on Blogger. It worked for me! Thanks to MathJax and Matthew!
2017-11-19 14:15:17
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9689443111419678, "perplexity": 6796.479393098598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805649.7/warc/CC-MAIN-20171119134146-20171119154146-00056.warc.gz"}
https://tex.stackexchange.com/questions/527909/prettyref-bug-when-used-with-babel-french
prettyref bug when used with babel-french

I'm writing in French so I use babel. I'd like to use prettyref to keep from writing "Section" and co. The problem is that I cannot use it with French babel. It throws:

    ! Paragraph ended before \@prettyref was complete.
    \par
    l.36
    ?

Here is an MWE:

    \documentclass{article}
    \usepackage{prettyref}

    % From package, just for info.
    \makeatletter
    \def\prettyref#1{\@prettyref#1:}
    \def\@prettyref#1:#2:{%
      \expandafter\ifx\csname pr@#1\endcsname\relax%
      \PackageWarning{prettyref}{Reference format #1\space undefined}%
      \ref{#1:#2}%
      \else%
      \csname pr@#1\endcsname{#1:#2}%
      \fi%
    }
    \makeatother

    \usepackage[french]{babel}
    \usepackage[T1]{fontenc}

    \begin{document}
    \section{Introduction}\label{sec:intro}
    See \prettyref{fig:defs}
    \section{Definitions}\label{sec:defs}
    See \prettyref{sec:conclusion}
    \section{Conclusion}\label{sec:conclusion}
    See \prettyref{sec:intro} and \prettyref{fig:defs}.
    With prettyref we simply write \verb|\prettyref{sec:intro}| to get \prettyref{sec:intro}.
    \end{document}

Conclusion: OK! For all French people looking here, you can't define labels like fig:a. This causes problems as soon as an automatic reference package is used. The reason is that the : "letter" becomes an active character in French babel, which leads to all these problems :s What I did was to use cleveref with labels like fig-a. Thanks everyone!

• prettyref is simply not compatible with babel-french. – egreg Feb 11 at 11:06
• What egreg says, it is probably because of the active : – daleif Feb 11 at 11:07
• @Mico IIRC, the current LaTeX kernel copes with : in labels, even under babel-french. I recommend against using : with this setup though, precisely because of packages like cleveref that don't like it when active. – frougon Feb 11 at 12:02
• You should probably post that cleveref question as a separate question as it will never be found by others under the current title (which you should not edit as it suits the original question). Besides, the cleveref manual explicitly mentions that it is not compatible with active chars. – daleif Feb 11 at 13:08
• I added an example with varioref and cleveref. – egreg Feb 11 at 14:51

You can make prettyref compatible with babel-french (but the fix will only work with it).

    \documentclass{article}
    \usepackage[T1]{fontenc}
    \usepackage[french]{babel}
    \usepackage{prettyref}

    % fix \prettyref to first detokenize its argument
    % kudos to daleif for proposing the simplification
    \makeatletter
    \def\prettyref#1{\expandafter\@prettyref\detokenize{#1:}}
    \makeatother

    \begin{document}
    \section{Introduction}\label{sec:intro}
    See \prettyref{fig:defs}
    \section{Definitions}\label{sec:defs}
    See \prettyref{sec:conclusion}
    \section{Conclusion}\label{sec:conclusion}
    See \prettyref{sec:intro} and \prettyref{fig:defs}.
    With prettyref we simply write \verb|\prettyref{sec:intro}| to get \prettyref{sec:intro}.

    \begin{figure}[htp]
    \caption{caption}\label{fig:defs}
    \end{figure}
    \end{document}

On the other hand, a combination of varioref and cleveref would be more robust. cleveref doesn't like active colons, but it doesn't rely on a precise label naming scheme, so you can use any other separator (avoid French special punctuation, though). The advantage over prettyref is that you can decide whether to use \ref (just the number), \cref (with the type) or \vref (with type and reference to the page).
    \documentclass{article}
    \usepackage[T1]{fontenc}
    \usepackage[french]{babel}
    \usepackage{varioref,cleveref}

    \crefname{figure}{figure}{Figure}

    \begin{document}
    \section{Introduction}\label{sec-intro}
    See \vref{fig-defs}
    \section{Definitions}\label{sec-defs}
    See \cref{sec-conclusion}
    \section{Conclusion}\label{sec-conclusion}
    See \cref{sec-intro} and \cref{fig-defs}.
    With cleveref we simply write \verb|\vref{sec-intro}| to get \vref{sec-intro}.
    \clearpage
    \begin{figure}[htp]
    \caption{caption}\label{fig-defs}
    \end{figure}
    \end{document}

• I have my own personal unreleased reimplementation of fancyref, which also uses : as the separator. AFAIR I detokenize the label and that seems to work fine with french babel. Not sure if that helps here. – daleif Feb 11 at 11:17
• @daleif Seems to work, thanks. – egreg Feb 11 at 11:28
2020-09-22 20:20:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7216110825538635, "perplexity": 6440.160878071124}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206763.24/warc/CC-MAIN-20200922192512-20200922222512-00129.warc.gz"}
https://chem.libretexts.org/Bookshelves/Introductory_Chemistry/Book%3A_Introductory_Chemistry_(CK-12)/13%3A_States_of_Matter/13.11%3A_Vapor_Pressure_Curves
# 13.11: Vapor Pressure Curves One of the first lessons in cooking is how to boil water. Yes, it sounds simple, but there are a couple of hints that speed things up. One hint is to put a lid on the pot. The picture above shows water boiling uncovered, with the steam escaping to the atmosphere. If the lid is on the pot, less water will be boiled off and the water will boil faster. The buildup of pressure inside the pot helps speed up the boiling process. ### Vapor Pressure Curves The boiling points of various liquids can be illustrated in a vapor pressure curve (figure below). A vapor pressure curve is a graph of vapor pressure as a function of temperature. To find the normal boiling point of a liquid, a horizontal line is drawn from the left at a pressure equal to standard pressure. The temperature at which that line intersects the vapor pressure curve of a liquid is the boiling point of that liquid. Figure 13.11.1: Vapor pressure curves. The boiling points of liquids also correlate with the strength of the intermolecular forces. Recall that diethyl ether has weak dispersion forces, which means that the liquid has a high vapor pressure. The weak forces also mean that it does not require a large input of energy to make diethyl ether boil, and so it has a relatively low normal boiling point of $$34.6^\text{o} \text{C}$$. Water, with its much stronger hydrogen bonding, has a low vapor pressure and a higher normal boiling point of $$100^\text{o} \text{C}$$. As stated earlier, boiling points are affected by external pressure. At higher altitudes, the atmospheric pressure is lower. With less pressure pushing down on the surface of the liquid, it boils at a lower temperature. This can also be seen from the vapor pressure curves. If one draws a horizontal line at a lower vapor pressure, it intersects each curve at a lower temperature. The boiling point of water is $$100^\text{o} \text{C}$$ at sea level, where the atmospheric pressure is standard. In Denver, Colorado, at $$1600 \: \text{m}$$ above sea level, the atmospheric pressure is about $$640 \: \text{mm} \: \ce{Hg}$$ and water boils at about $$95^\text{o} \text{C}$$. On the summit of Mt. Everest the atmospheric pressure is about $$255 \: \text{mm} \: \ce{Hg}$$ and water boils at only $$70^\text{o} \text{C}$$. On the other hand, water boils at greater than $$100^\text{o} \text{C}$$ if the external pressure is higher than standard. Pressure cookers do not allow the vapor to escape, so the vapor pressure increases. Since water now boils at a temperature above $$100^\text{o} \text{C}$$, the food cooks more quickly. Figure 13.11.2: Pressure cooker. The effect of decreased air pressure can be demonstrated by placing a beaker of water in a vacuum chamber. At a low enough pressure, about $$20 \: \text{mm} \: \ce{Hg}$$, water will boil at room temperature. ### Summary • A vapor pressure curve is a graph of vapor pressure as a function of temperature. • Boiling points are affected by external pressure. ### Contributors • CK-12 Foundation by Sharon Bewick, Richard Parsons, Therese Forsythe, Shonna Robinson, and Jean Dupon.
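The pressure and boiling point figures quoted above can be checked numerically. The short sketch below is not part of the original lesson: it assumes the Antoine equation for water with commonly tabulated coefficients (A = 8.07131, B = 1730.63, C = 233.426 for pressure in mm Hg and temperature in °C, roughly valid between 1 °C and 100 °C), which are an outside assumption rather than data from this page.

import math

# Antoine equation for water (assumed coefficients; P in mm Hg, T in deg C):
#   log10(P) = A - B / (C + T)
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_celsius(pressure_mmHg):
    # Invert the Antoine equation: the temperature at which the vapor
    # pressure equals the given external pressure.
    return B / (A - math.log10(pressure_mmHg)) - C

for label, p in [("sea level", 760), ("Denver", 640), ("Mt. Everest", 255), ("vacuum chamber", 20)]:
    print(label, round(boiling_point_celsius(p)), "degrees C at", p, "mm Hg")

With these assumed coefficients the sketch reproduces the figures in the text: about 100 °C at 760 mm Hg, about 95 °C at 640 mm Hg, roughly 70 °C at 255 mm Hg, and close to room temperature at 20 mm Hg.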
2019-04-22 12:04:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5357370376586914, "perplexity": 435.6502899419459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578553595.61/warc/CC-MAIN-20190422115451-20190422141451-00491.warc.gz"}
https://lexique.netmath.ca/en/lemma/
# Lemma ## Lemma In the context of formal argumentation, a proposition deduced from one or more axioms, the demonstration of which paves the way for the demonstration of a theorem that will follow. ### Etymological Note The word lemma comes from the Greek word lêmma (λημμα), which means "result" or "received" or even, by extension, "consequence". In a mathematical context, the lemma is added to the statement of one or more axioms and thereby completes the argumentation, making it possible to demonstrate a new proposition (theorem).
2022-07-04 11:19:44
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9359355568885803, "perplexity": 1182.5785893670695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104375714.75/warc/CC-MAIN-20220704111005-20220704141005-00364.warc.gz"}
http://clay6.com/qa/25655/a-given-sample-of-ideal-gas-is-made-to-undergo-a-process-in-which-the-tempe
# A given sample of ideal gas is made to undergo a process in which the temperature decreases by 10% and the pressure decreases by 8%. By what % does the volume change? (A) Increases by 22% (B) Decreases by 22% (C) Increases by 2.2% (D) Decreases by 2.2% Using the ideal gas equation, $\frac{P_1 V_1}{T_1} = \frac{P_2 V_2}{T_2}$. With $P_2 = 0.92\,P_1$ and $T_2 = 0.90\,T_1$: $\frac{P_1 V_1}{T_1} = \frac{0.92\, P_1 V_2}{0.90\, T_1} \Rightarrow V_2 = \frac{0.90}{0.92}V_1 = 0.978\, V_1$, so the volume decreases by about 2.2%. The correct answer is (D).
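A quick numerical check of the rearrangement above (a sketch with arbitrary starting values, since only the ratios matter):

# Combined gas law: P1*V1/T1 = P2*V2/T2, with P2 = 0.92*P1 and T2 = 0.90*T1
P1, V1, T1 = 1.0, 1.0, 1.0
P2, T2 = 0.92 * P1, 0.90 * T1
V2 = (P1 * V1 / T1) * T2 / P2
print(V2)                   # ~0.978, so the volume is about 97.8% of V1
print((1 - V2 / V1) * 100)  # ~2.2 percent decrease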
2017-01-17 11:07:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8453294634819031, "perplexity": 4320.770818492459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00251-ip-10-171-10-70.ec2.internal.warc.gz"}
http://www.jstor.org/stable/30063686
# Iron-Stained Sands and Clays G. R. MacCarthy The Journal of Geology Vol. 34, No. 4 (May - Jun., 1926), pp. 352-360 Stable URL: http://www.jstor.org/stable/30063686 Page Count: 9 ## Abstract It is pointed out that the distribution of color in a sediment is as important in determining the general color effect as is the total amount of coloring matter present. It is shown that quartz will become iron-stained only in the absence of more active adsorbants, that orthoclase acquires iron stain more readily than quartz, and that while Al(OH)₃ is a good adsorbant of iron, pure kaolin will adsorb but little unless activated by some such substance as the alkali carbonates. The iron content of clays up to about 5.0 per cent Fe₂O₃ is shown to be a linear function of the alkali content, the equation $A=\frac{10 B-4}{7}$ where A = percentage of Fe₂O₃ present and B = percentage of Na₂O+K₂O present holds true for averages of several analyses, but not for individual clays, within these limits. It is suggested that the deep red color developed by so many tropical soils is a result of the retention of iron by the hydroxides which occur plentifully in such soils.
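As an illustration that is not part of the original article, the linear relation quoted in the abstract can be evaluated directly (the function name below is ours):

def fe2o3_percent(alkali_percent):
    # A = (10*B - 4) / 7, the average relation between Fe2O3 (%) and
    # Na2O + K2O (%) reported in the abstract; it holds for averages of
    # several analyses, up to about 5.0 per cent Fe2O3.
    return (10 * alkali_percent - 4) / 7

for B in (1.0, 2.0, 3.0):
    print(B, round(fe2o3_percent(B), 2))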
2016-12-06 02:21:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48505014181137085, "perplexity": 4873.762492497887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541864.44/warc/CC-MAIN-20161202170901-00005-ip-10-31-129-80.ec2.internal.warc.gz"}
https://www.brightstorm.com/math/algebra-2/roots-and-radicals/introduction-to-radicals/
# Introduction to Radicals - Concept When simplifying square roots, we give the positive and negative answer if solving an equation that did not originally have a square root. Otherwise, we give only the principal root. Square roots of negative numbers have non-real answers, which is why square roots of variables sometimes include the condition that the variable is greater than zero. Knowledge of math radicals is important when solving quadratic equation problems. Hopefully by now you have seen the square root symbol that I have behind me, and what I want to talk about right now is just some common mistakes that people make when they use the square root. Okay, so what I want to talk about is the difference between the square root of 25 and the equation x squared is equal to 25. Okay, the square root is what we call the principal root, and what that means is the main thing that comes out of it, and typically that is going to be positive when you're dealing with the square root, so the square root of 25 is just 5. Okay, a lot of students like to put plus or minus in front of it; that's not the case, it's just the positive 5. Okay, the difference is dealing with x squared is equal to 25. In order to solve this we want to take the square root, so what we do is we put a square root in, and then in this case whenever you take the square root you need to end up with plus or minus. Okay, so the difference is that whenever there is a square root already in the problem, it's the principal root, so it's just going to be positive. When you are solving something and you put in the square root yourself, you then need to include plus or minus. Okay, so when you're including a square root as a tool to solve something you need to think plus or minus; when the square root is already there it's just positive.
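The distinction made in the video, the principal square root versus the two solutions of an equation, can be illustrated with a short sketch (Python is used here purely for illustration and is not part of the original lesson):

import math

# The principal (positive) square root: sqrt(25) is just 5, never -5.
print(math.sqrt(25))                      # 5.0

# Solving x**2 = 25: the square root is introduced as a tool,
# so both signs must be kept.
print([math.sqrt(25), -math.sqrt(25)])    # [5.0, -5.0]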
2014-04-25 04:04:01
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8019213676452637, "perplexity": 163.15627317536146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/48320/trying-to-code-for-calculating-cosine-series
# Trying to code for calculating cosine series [closed] I was trying to calculate the cosine series: cos x = 1 - x^2/2! + x^4/4! - x^6/6! + ..., where x is in radians:

#include <iostream>
#include <math.h>
using namespace std;

long factorial (long num) {
    if (num > 1)
        return num * factorial(num - 1);
    else
        return 1;
}

int main() {
    int X, sum = 0;
    cout << "Enter value of x in radians : ";
    cin >> X;
    for (int i = 0; i <= 4; ++i) {
        int z = pow(-1, i);
        int p = (pow(X, 2 * i) * z) / factorial(2 * i);
        sum += p;
    }
    cout << sum;
    return 0;
}

On putting x as 1.57 (90 degrees), I am getting 1 instead of 0. Can anyone explain why?

## closed as off-topic by Yuval Filmus, David Richerby, Raphael♦ Oct 16 '15 at 10:31 This question appears to be off-topic. The users who voted to close gave this specific reason: • "Questions about software development or programming tools are off-topic here, but can be asked on Stack Overflow." – Yuval Filmus, Raphael If this question can be reworded to fit the rules in the help center, please edit the question. • Take a look at your question. Would you be able to read your code? If not, how do you expect us to? – Yuval Filmus Oct 16 '15 at 9:55 • @YuvalFilmus That's not very constructive. I'm pretty sure the asker would have formatted it legibly if they knew how to do that. However, since the question is off-topic (I know you know that, Yuval; I'm pointing it out for the asker's sake), the formatting doesn't make a lot of difference. – David Richerby Oct 16 '15 at 10:17 • @DavidRicherby I disagree. There are quite a few questions sporting this kind of illegible code, which for me shows a lack of respect for our community. The OP should try a bit harder. – Yuval Filmus Oct 16 '15 at 10:26 • @YuvalFilmus The policy is vote to close and comment in a constructive way. Please try to adhere to that. Even if you perceive a lack of respect (I'd recommend Hanlon's razor instead), it's a bad idea to respond in kind. – Raphael Oct 16 '15 at 10:30 • This question is offtopic here, as are all programming questions. If this is actually an algorithms question, please get rid of the source code, use pseudo code instead and explain your ideas, what you've tried to isolate the issue, and what questions remain. Regarding formatting, note the little question mark above the text box; it sends you to a detailed introduction to Markdown. – Raphael Oct 16 '15 at 10:31
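The page records no accepted answer, but the reported output is consistent with integer arithmetic: X, p and sum are declared int, so the input 1.57 is truncated to 1 and every series term after the first is truncated to 0, leaving sum = 1. A hedged sketch of the same truncated series computed with floating-point values (written in Python rather than C++ purely for illustration):

from math import factorial

def cos_series(x, terms=5):
    # Truncated Taylor series: cos(x) = sum_i (-1)^i * x^(2i) / (2i)!
    return sum((-1) ** i * x ** (2 * i) / factorial(2 * i) for i in range(terms))

print(cos_series(1.57))  # about 0.0008, close to 0 as expected for 90 degrees

# Mimicking the integer truncation in the question's code:
X = int(1.57)            # X becomes 1
print(sum(int((-1) ** i * X ** (2 * i) / factorial(2 * i)) for i in range(5)))  # 1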
2019-08-22 22:23:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5304260849952698, "perplexity": 1574.4335243954797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317516.88/warc/CC-MAIN-20190822215308-20190823001308-00140.warc.gz"}
https://www.r-econometrics.com/methods/ols/
# An Introduction to Ordinary Least Squares (OLS) in R Formulated at the beginning of the 19th century by Legendre and Gauss, the method of least squares is a standard tool in econometrics to assess the relationships between different variables. This site gives a short introduction to the basic idea behind the method and describes how to estimate simple linear models with OLS in R. ## The method of least squares To understand the basic idea of the method of least squares, imagine you were an astronomer at the beginning of the 19th century, who faced the challenge of combining a series of observations, which were made with imperfect instruments and at different points in time.1 One day you draw a scatter plot, which looks similar to the following: As you look at the plot, you notice a clear pattern in the data: The higher the value of variable $$x$$, the higher the value of variable $$y$$. But the points do not lie on a single line, although we would expect that behaviour from an astronomical law of nature, because such a law should be invariant to any unrelated factors such as when, where, or how we look at it. But since we made our observations under imperfect conditions, measurement errors prevent the points from lying on the expected straight line. Instead, they seem to be scattered around an imaginary straight line, which goes from the bottom-left to the top-right of the plot. From an astronomical perspective, our main interest in the graph is the slope of that imaginary line, because it describes the strength of the relationship between the variables. If the slope is steep, $$y$$ will change considerably after a change in $$x$$. If the slope is rather flat, $$y$$ will change only moderately. Under perfect conditions with no measurement errors we could just connect the points in the graph and directly measure the slope of the resulting line. But since there are errors in the data, this approach is not feasible. This is where the method of least squares comes in. Basically, this method is nothing more than a mathematical tool, which helps in finding the imaginary line through the point cloud. You could think of it as trying out all possible ways to draw a line through the scatter plot until you have found the line which describes the data in the best way. But what does best mean in this context? You would need some kind of normative criterion to decide which line fits the data better than another. A quite intuitive approach to this problem would be to search for the line which minimises the measurement errors in our data. So, we could draw a random line through the point cloud and calculate the sum of squared errors for it – i.e. the sum over the squared differences between the points and the line. After we have done this for all possible choices, we would choose the line that produces the smallest sum of squared errors. This is also what gives the method its name, least squares. The only problem with this approach is that there are infinitely many ways to draw a line through the plot. So, in practice, we would not be able to find the best line just by trial and error. Luckily, there is an elegant mathematical way to do it, which Legendre and Gauss proposed independently of each other at the beginning of the 19th century. They reduced the challenge of drawing infinitely many lines and calculating their errors to a relatively simple mathematical problem2 that can be solved with basic algebra. 
Their least squares approach has become a basic tool for data analysis in different scientific disciplines. It is now so common that it is simply called ordinary least squares (OLS) and should be implemented in every modern statistical software package, including R. ## Creating an artificial sample Before we apply OLS in R, we need a sample. For convenience, I use the artificial sample from above, which consists of 50 observations from the following relationship: $y_i = 40 + 0.5 x_i + e_i,$ where $$e_i$$ is normally distributed with zero mean and variance 4, i.e. $$e_i \sim N(0, 4)$$, and $$x_i$$ is simulated from a uniform distribution between 1 and 40, which can be written as $$x_i \sim U(1, 40)$$. So, if there were no measurement errors, variable $$y$$ would lie on a straight line and increase by 0.5 when $$x$$ increases by 1 and take the value 40 if $$x$$ was zero. The following R code produces the sample # Reset random number generator for reproducibility set.seed(1234) # Total number of observations N <- 50 # Simulate variable x from a uniform distribution x <- runif(N, 1, 40) # Simulate y y <- 40 + .5 * x + rnorm(N, 0, 2) # Note that the 'rnorm' function requires the standard deviation instead # of the variance of the distribution. So, we have to enter 2 in order # to draw from a normal distribution with variance 4. # Store data in a data frame ols_data <- data.frame(x, y) If you want to be sure, execute plot(ols_data) to check whether the data is really the same as above. ## OLS regression in R The standard function for regression analysis in R is lm. Its first argument is the estimation formula, which starts with the name of the dependent variable – in our case y – followed by the tilde sign ~. Every variable name which follows the tilde is used as an explanatory variable and has to be separated from the other predictors with a plus sign +. But since we just have one explanatory variable, we just use x after the tilde. Next, we have to specify which data R should use. This is done by adding data = ols_data as a further argument to the function. After that, we can estimate the model, save its results in object ols, and print the results in the console. # Estimate the model and save the results in object "ols" ols <- lm(y ~ x, data = ols_data) # Print the result in the console ols ## ## Call: ## lm(formula = y ~ x, data = ols_data) ## ## Coefficients: ## (Intercept) x ## 39.3411 0.5107 As you can see, the estimated coefficients are quite close to their true values. Also note that we did not have to specify an intercept term in the formula, which describes the expected value of $$y$$ when $$x$$ is zero. The inclusion of such a term is so usual that R adds it to every equation by default unless specified otherwise. However, the output of lm might not be enough for a researcher who is interested in test statistics to decide whether to keep a variable in a model or not. In order to get further information like this, we use the summary function, which provides a series of test statistics when printed in the console: summary(ols) ## ## Call: ## lm(formula = y ~ x, data = ols_data) ## ## Residuals: ## Min 1Q Median 3Q Max ## -3.9384 -1.6412 -0.4262 1.4503 5.6626 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 39.34110 0.66670 59.01 <2e-16 *** ## x 0.51066 0.03054 16.72 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 
0.1 ' ' 1 ## ## Residual standard error: 2.2 on 48 degrees of freedom ## Multiple R-squared: 0.8535, Adjusted R-squared: 0.8504 ## F-statistic: 279.6 on 1 and 48 DF, p-value: < 2.2e-16 Please refer to an econometrics textbook for a precise explanation of the information shown by the summary function for the output of lm. A more in-depth treatment of it would be beyond the scope of this introduction. Finally, we can also draw the line, which results from the estimation of our model, into the graph from above. Given the method of least squares, which we used to calculate the slope and position of the line, this is our best estimate of the relation between $$y$$ and $$x$$. Neat, isn't it? library(ggplot2) ggplot(ols_data, aes(x = x, y = y)) + geom_point() + geom_smooth(method = "lm", se = FALSE) + scale_x_continuous(limits = c(0, 45), expand = c(0, 0)) + theme_bw() ## Annex: Excluding the intercept term If you wanted to estimate the above model without an intercept term, you have to add the term -1 or 0 to the formula: # Estimate the model ols_no_intercept <- lm(y ~ -1 + x, data = ols_data) # Look at the summary results summary(ols_no_intercept) ## ## Call: ## lm(formula = y ~ -1 + x, data = ols_data) ## ## Residuals: ## Min 1Q Median 3Q Max ## -25.025 -4.318 8.595 22.068 36.850 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## x 2.1044 0.1209 17.4 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 18.67 on 49 degrees of freedom ## Multiple R-squared: 0.8607, Adjusted R-squared: 0.8579 ## F-statistic: 302.7 on 1 and 49 DF, p-value: < 2.2e-16 As you can see, the estimated coefficient value for $$x$$ differs significantly from its true value. Hence, it is important to be careful with restricting the intercept term, unless there is a good reason to assume that it has to be zero. 1. This was the actual problem for which Legendre proposed the method of least squares in 1805. 2. The mathematical problem consists in a set of $$N$$ equations with $$n$$ unknown variables, where the number of equations must be larger than the number of unknown variables.
2018-12-15 12:47:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6929776072502136, "perplexity": 367.87796808523984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826856.55/warc/CC-MAIN-20181215105142-20181215131142-00607.warc.gz"}
https://scikit-rf.readthedocs.io/en/stable/api/calibration/generated/skrf.calibration.calibration.UnknownThru.coefs_ntwks.html
# skrf.calibration.calibration.UnknownThru.coefs_ntwks UnknownThru.coefs_ntwks Dictionary of error coefficients in the form of Network objects
2019-11-21 00:47:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9628878235816956, "perplexity": 9118.118728466585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670643.58/warc/CC-MAIN-20191121000300-20191121024300-00161.warc.gz"}
https://www.physicsforums.com/threads/find-components-of-vectors-on-f.464733/
# Find components of vectors on F 1. Problem is pictured here. http://engineeringhomework.net/statics/hw1p14.html [Broken] I have already found the x and y components, but I don't know how to get the u and v as shown. Last edited by a moderator: You would find them in exactly the same way you found x and y. To find the x component you can consider the right triangle between the force vector, the x component and the y component. You know that the cos of the angle equals the adjacent component (the x comp in this case) divided by the hypotenuse (the magnitude of the force), and in this way you can solve for the x comp cos Θ = A/H = x/F Well, for the u and v, you do the same thing. You consider a right triangle, where the hypotenuse is F, the adjacent side is the u component, and the other side is a component perpendicular to u (not v). Θ is the angle between the force vector and u, in this case 21, and you can solve for u cos Θ = A/H = u/F Solving for v is a similar problem
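A brief numerical sketch of the approach described in the thread; the magnitude and the second angle are made-up values, since the linked figure is broken, and only the 21-degree angle comes from the post:

import math

F = 250.0        # assumed force magnitude
angle_u = 21.0   # angle between F and the u axis, in degrees (from the thread)
angle_x = 35.0   # hypothetical angle between F and the x axis

# cos(theta) = adjacent / hypotenuse, so component = F * cos(theta)
u = F * math.cos(math.radians(angle_u))
x = F * math.cos(math.radians(angle_x))
print(round(u, 1), round(x, 1))

The v component follows the same pattern once the angle between F and the v axis is known.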
2021-10-16 10:15:12
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8203510642051697, "perplexity": 303.7909189108689}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584554.98/warc/CC-MAIN-20211016074500-20211016104500-00212.warc.gz"}
http://clay6.com/qa/2435/the-derivative-of-cos-2x-2-1-w-r-t-cos-x-is
# The derivative of $\cos^{-1}(2x^2-1)$ w.r.t $\cos^{-1}x$ is $\begin{array}{ll}(A)\;2 & (B)\;\frac{-1}{2\sqrt{1-x^2}}\\(C)\;\frac{2}{x} & (D)\;1-x^2\end{array}$ Toolbox: • Let $u=f(x)$ and $v=g(x)$ be two functions of $x$. The derivative of $f(x)$ w.r.t $g(x)$, i.e. $\frac{du}{dv}$, is given by $\large\frac{du}{dv}=\large\frac{\Large\frac{du}{dx}}{\Large\frac{dv}{dx}}$ Step 1: Let $u=\cos^{-1}(2x^2-1)$ and $v=\cos^{-1}x$ Put $x=\cos \theta\Rightarrow \theta=\cos^{-1}x$ $u=\cos^{-1}(2\cos^2\theta-1)$ But $2\cos^2\theta-1=\cos 2\theta$ Therefore $u=\cos^{-1}(\cos 2\theta)$ $\Rightarrow u=2\theta$ $\qquad=2\cos^{-1}x$ $\large\frac{du}{dx}=2\Big(\large\frac{-1}{\sqrt{1-x^2}}\Big)=\large\frac{-2}{\sqrt{1-x^2}}$ Step 2: Consider $v=\cos^{-1}x$ $\large\frac{dv}{dx}=\large\frac{-1}{\sqrt{1-x^2}}$ Therefore $\large\frac{\Large\frac{du}{dx}}{\Large\frac{dv}{dx}}=\frac{\Large\frac{-2}{\sqrt{1-x^2}}}{\Large\frac{-1}{\sqrt{1-x^2}}}$ $\Rightarrow 2$ Hence the correct option is $A$
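A quick numerical check of this result with SymPy (not part of the original solution; it simply confirms that the ratio of the derivatives equals 2 on the interval 0 < x < 1):

import sympy as sp

x = sp.symbols('x')
u = sp.acos(2 * x**2 - 1)   # cos^{-1}(2x^2 - 1)
v = sp.acos(x)              # cos^{-1}(x)

du_dv = sp.diff(u, x) / sp.diff(v, x)
for val in (0.2, 0.5, 0.9):
    print(sp.N(du_dv.subs(x, val)))   # each evaluates to 2.0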
2017-01-22 20:37:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987929463386536, "perplexity": 1070.997341904306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00315-ip-10-171-10-70.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/241504/central-limit-theorem-for-square-roots-of-sums-of-i-i-d-random-variables
# Central Limit Theorem for square roots of sums of i.i.d. random variables Intrigued by a question at math.stackexchange, and investigating it empirically, I am wondering about the following statement on the square-root of sums of i.i.d. random variables. Suppose $X_1, X_2, \ldots, X_n$ are i.i.d. random variables with finite non-zero mean $\mu$ and variance $\sigma^2$, and $\displaystyle Y=\sum_{i=1}^n X_i$. The central limit theorem says $\displaystyle \dfrac{Y - n\mu}{\sqrt{n\sigma^2}} \ \xrightarrow{d}\ N(0,1)$ as $n$ increases. If $Z=\sqrt{|Y|}$, can I also say something like $\displaystyle \dfrac{Z - \sqrt{n |\mu|-\tfrac{\sigma^2}{4|\mu|}}}{\sqrt{\tfrac{\sigma^2}{4|\mu|}}}\ \xrightarrow{d}\ N(0,1)$ as $n$ increases? For example, suppose the $X_i$ are Bernoulli with mean $p$ and variance $p(1-p)$, then $Y$ is binomial and I can simulate this in R, say with $p=\frac13$: set.seed(1) cases <- 100000 n <- 1000 p <- 1/3 Y <- rbinom(cases, size=n, prob=p) Z <- sqrt(abs(Y)) which gives approximately the hoped-for mean and variance for $Z$ > c(mean(Z), sqrt(n*p - (1-p)/4)) [1] 18.25229 18.25285 > c(var(Z), (1-p)/4) [1] 0.1680012 0.1666667 and a Q-Q plot which looks close to Gaussian qqnorm(Z) • @MichaelM: Thanks for those comments. I had started with the $X_i$ non-negative, but I thought the intuitive asymptotic behaviour you describe allowed a generalisation to more distributions. My surprises were (a) the variance of the square root of the sum apparently tending to a constant not depending on $n$ and (b) the appearance of a distribution which looks very close to Gaussian. A counter-example would be welcome, but when I tried other cases which initially seemed non-Gaussian, increasing $n$ further seemed to bring the distribution back to a CLT-type result. – Henry Oct 21 '16 at 12:00 • A corollary of this is the root-mean-square (or quadratic mean) of i.i.d. random variables suitably scaled (multiply by $\sqrt{n}$ as with an arithmetic mean) also converges to a Gaussian distribution provided that the $4$th moment of the underlying distribution is finite. – Henry Oct 26 '16 at 20:13 • Just a short comment: the claim is a special case of the Delta method, see Theorem 5.5.24 in the book "Statistical inference" by Casella & Berger. – Michael M Oct 30 '16 at 10:34 • @Michael: Perhaps you see something that I am not at the moment, but I do not think this particular problem fits within the assumptions of the classical Delta method (e.g., as stated in the theorem you reference). Note that $Y$ does not converge in distribution (nontrivially on $\mathbb R$) and so "applying the Delta method with $g(y) = \sqrt{|y|}$" does not satisfy the requisite requirements. However, as S. Catterall's answer demonstrates, it provides a useful heuristic which leads to the correct answer. – cardinal Jul 22 '17 at 21:13 • (I believe you could adapt the proof of the Delta method to cases similar to the above in order to make fully rigorous the aforementioned heuristic.) – cardinal Jul 22 '17 at 21:15 Suppose that $X_1,X_2,X_3,...$ are IID random variables with mean $\mu\gt 0$ and variance $\sigma^2$, and define the sums $Y_n=\sum_{i=1}^n X_i$. Fix a number $\alpha$. The usual Central Limit Theorem tells us that $P(\frac{Y_n-n\mu}{\sigma\sqrt n}\leq \alpha)\to\Phi(\alpha)$ as $n\to\infty$, where $\Phi$ is the standard normal cdf. 
However, the continuity of the limiting cdf implies that we also have $$P\Big(\frac{Y_n-n\mu}{\sigma\sqrt n}\leq \alpha+\frac{\alpha^2 \sigma^2}{4\mu\sigma\sqrt n}\Big)\to\Phi(\alpha)$$ because the additional term on the right hand side of the inequality tends to zero. Rearranging this expression leads to $$P\Big(Y_n\leq (\frac{\alpha\sigma}{2\sqrt \mu}+\sqrt{n\mu})^2\Big)\to\Phi(\alpha)$$ Taking square roots, and noting that $\mu\gt 0$ implies that $P(Y_n\lt 0)\to 0$, we obtain $$P\Big(\sqrt{|Y_n|}\leq \frac{\alpha\sigma}{2\sqrt \mu}+\sqrt{n\mu}\Big)\to\Phi(\alpha)$$ In other words, $\frac{\sqrt{|Y_n|}-\sqrt{n\mu}}{\sigma/{2\sqrt\mu}}\xrightarrow{d}N(0,1)$. This result demonstrates convergence to a Gaussian in the limit as $n\to\infty$. Does this mean that $\sqrt{n\mu}$ is a good approximation to $E[\sqrt{|Y_n|}]$ for large $n$? Well, we can do better than this. As @Henry notes, assuming everything is positive, we can use $E[\sqrt{Y_n}]=\sqrt{E[Y_n]-\text{Var}(\sqrt{Y_n})}$, together with $E[Y_n]=n\mu$ and the approximation $\text{Var}(\sqrt{Y_n})\approx \frac{\sigma^2}{4\mu}$, to obtain the improved approximation $E[\sqrt{|Y_n|}]\approx\sqrt{n\mu- \dfrac{\sigma^2}{4\mu}}$ as stated in the question above. Note also that we still have $$\frac{\sqrt{|Y_n|}-\sqrt{n\mu-\frac{\sigma^2}{4\mu}}}{\sigma/{2\sqrt\mu}}\xrightarrow{d}N(0,1)$$ because $\sqrt{n\mu-\frac{\sigma^2}{4\mu}}-\sqrt{n\mu}\to 0$ as $n\to\infty$. • You may need to add $\sqrt{n \mu}-\sqrt{n \mu-\tfrac{\sigma^2}{4\mu}} \to 0$ as ${n \to \infty}$ to get my result – Henry Oct 24 '16 at 21:44 • @Henry You can replace $\sqrt{n\mu}$ with $\sqrt{n\mu-k}$ for any constant $k$ and this won't change the limiting distribution, but it may change the degree to which $\frac{\sqrt{|Y_n|}-\sqrt{n\mu-k}}{\sigma/{2\sqrt\mu}}$ is a good approximation to $N(0,1)$ for specific large $n$ . How did you come up with $\sqrt{n \mu-\tfrac{\sigma^2}{4\mu}}$? – S. Catterall Reinstate Monica Oct 25 '16 at 9:44 • We have $\text{Var}(Z)=E[Z^2]-(E[Z])^2$ so $E[Z]=\sqrt{E[Z^2]-\text{Var}(Z)}$. Assuming everything is positive, $E[Z^2]=E[Y]=n\mu$ while the denominator of $\frac{\sqrt{|Y_n|}-\sqrt{n\mu}}{\sigma/{2\sqrt\mu}}$ suggests $\text{Var}(Z) \approx \dfrac{\sigma^2}{4\mu}$, and combining these leads to $E[Z] \approx \sqrt{n\mu- \dfrac{\sigma^2}{4\mu}}$. – Henry Oct 25 '16 at 14:51
2020-01-21 11:11:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9221125841140747, "perplexity": 212.44440304015205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250603761.28/warc/CC-MAIN-20200121103642-20200121132642-00539.warc.gz"}
https://www.studysmarter.us/explanations/math/geometry/geometric-mean/
Suggested languages for you: | | ## All-in-one learning app • Flashcards • NotesNotes • ExplanationsExplanations • Study Planner • Textbook solutions # Geometric Mean Save Print Edit By this point, you have probably heard the term 'mean' used many times in math. But what do we actually mean? (pardon the pun). There are different types of mean, and one that you have probably come across is called the arithmetic mean. This is where we take a set of numbers, add them up and divide this number by however many numbers you have to find an "average" of the numbers. For example, you might wish to know the mean score in a class test to help determine whether you performed on average, below average or above average. In this article, however, we will be looking at a different type of mean called the geometric mean. So, what do we mean by geometric mean? ## Definition of Geometric Mean Geometric mean is defined as the average rate of return of set of values which is calculated using the products of its terms. Suppose we have a set of n numbers. The geometric mean is where we multiply together the set of numbers and then take the positive nth root. So, if we have two numbers we would multiply them and then take the positive square root, if we had three numbers we would multiply them and then take the positive cube root, if we had four numbers we would multiply them and then take the positive fourth root and so on. ## Geometric Mean Formula ### Geometric Mean Definition For the set of n numbers, , the formula for the geometric mean is given by the following: ### Geometric Mean Examples Suppose we have the set of two numbers 9 and 4. To find the geometric mean, we would first multiply together 9 and 4 to get 36 and since we have two numbers we would take the square root of 36 to get six. Mathematically, we can write . Thus, the geometric mean is 6. Suppose we have the set of numbers 4, 8 and 16. To calculate the geometric mean we first multiply together 4, 8 and 16 to obtain 512. Since there are three numbers we then take the cube root. Mathematically, we can write . Thus 8 is the geometric mean of our numbers. Suppose we have the set of numbers 1, 2, 3, 4 and 5. To find the geometric mean we first multiply together 1, 2, 3, 4, and 5 to obtain 120. Since we have five numbers, we take the fifth root of 120 which is 2.61 to 2 decimal places. Mathematically, we can write . Thus, the geometric mean is 2.61. ## The Geometric Mean in a Triangle Calculating the geometric mean can be particularly useful in geometry. Consider the below triangle ABCD: Geometric Mean Triangle, Jordan Madge- StudySmarter Originals The altitude of a triangle is a line drawn from the particular vertex of a triangle which forms a perpendicular line to the base of the triangle. So in this triangle, the altitude is the line AC. We also have the left side of BD, which is BC, as well as the right side of BD, which is CD. Now, notice that if we "pull apart" the above triangle, we get two smaller triangles. We also notice that if we rotate the triangle on the left, we simply have a smaller version of the triangle on the right. This is shown in the diagram below. Explaining the geometric mean, Jordan Madge- StudySmarter Originals Now, notice that the two triangles BAC and ADC are mathematically similar, so we can use ratios to find missing lengths. Naming the left side a, the right side b, and the altitude x, we have the following: Therefore, the altitude, x can be calculated by finding the geometric mean of a and b. 
This is known as the geometric means theorem for triangles. Geometric mean example, Jordan Madge- StudySmarter Originals In the triangle ABCD, BC=6 cm, CD=19 cm and AC= x cm as shown above. Find the value of the altitude x. Solution: Using results from the geometric means theorem for triangles, we find that cm (1. d.p) Geometric mean example, Jordan Madge- StudySmarter Originals In the triangle ABCD, BC=4 cm, CD=9 cm and AC= x cm as shown above. Find the value of the altitude x. Solution: Using results from the geometric means theorem for triangles, we obtain that cm ## Geometric vs Arithmetic mean When we refer to the mean of a set of numbers, we usually are referring to the arithmetic mean. The arithmetic mean is when we take the sum of the set of numbers and then divide it by how many numbers we have. ### Arithmetic Mean Formula The formula for the arithmetic mean is given by the following: Here, A is defined as the value of the arithmetic mean, n is how many values there are in the set, and are the numbers in the set. Find both the arithmetic and geometric mean of the numbers 3, 5 and 7. Solution: To obtain the arithmetic mean, we would first add together 3, 5 and 7 to obtain 15. Then, since we have three numbers in our set, we would divide 15 by 3 to get 5. Mathematically, we can write: To obtain the geometric mean, we would first multiply together 3, 5 and 7 to get 105 and then take the cube root of 105 (since we have three numbers in our set). The cube root of 105 is 4.72 to 2. d.p and thus the geometric mean of the numbers is 4.72. Mathematically, we can write: Notice that the arithmetic mean of 5 is quite close to the geometric mean of 4.72. We will now explore the different reasons we may use the geometric mean as opposed to the arithmetic mean. ### Geometric and Arithmetic Mean Differences There are several key differences between both the geometric and arithmetic mean. The first, most obvious difference is the fact that they are calculated using two different formulae. In the previous example, we obtained an arithmetic mean of 5 and a geometric mean of 4.72. It is important to note that the geometric mean is always less than or equal to the arithmetic mean. For example, if we take the singleton set ,since there is only one number in this set, the geometric mean is 2 and the arithmetic mean is also 2. Moreover, the arithmetic mean can be used for both positive and negative numbers. However, this is not the case for the geometric mean; the geometric mean can only be used for a set of positive numbers. This is due to the fact that error may arise in eventualities such as taking the square root of negative numbers. Further, we use the geometric and arithmetic mean for different reasons. The arithmetic mean has a plethora of everyday uses, however, the geometric mean is more commonly used when there is some sort of correlation between the set of numbers. For example, in finance, the geometric mean is used when calculating interest rates. The arithmetic mean may be useful when finding the average temperature over a week. There is actually a third type of mean called the harmonic mean. The harmonic mean is calculated by squaring the geometric mean and dividing it by the arithmetic mean. This type of mean is commonly used in machine learning. ## Geometric Mean - Key takeaways • The geometric mean is where we multiply together the set of numbers and then take the positive nth root. • It can be represented by the formula . 
• Calculating the geometric mean can be particularly useful in geometry. • The altitude of a triangle is a line drawn from the particular vertex of a triangle which forms a perpendicular line to the base of the triangle. • The geometric mean theorem for triangles can be used to calculate the altitude of a triangle. • The geometric mean is always less than or equal to the arithmetic mean. • The arithmetic mean is represented by the formula . • The geometric mean is more commonly used when there is some sort of correlation between the set of numbers. For example, when calculating interest rates. Suppose we have a set of n numbers. The geometric mean is where we multiply together the set of numbers and then take the positive nth root. Multiply together the two numbers and take the square root. The geometric mean is commonly used when there is some sort of correlation between the set of numbers. For example, in finance, the geometric mean is used when calculating interest rates ## Final Geometric Mean Quiz Question How do you calculate the geometric mean for a set of numbers? Multiply together the set of numbers and then take the positive nth root. Show question Question Find the geometric mean of the numbers 3 and 27. If we multiply 3 and 27 we get 81 and the square root of 81 is 9. Thus, the geometric mean of 3 and 27 is 9. Show question Question What is the geometric mean of the numbers 7, 8 and 12? If we multiply together 7, 8 and 12 we obtain 672 and the cube root of 672 is 8.76 (2.d.p). Thus the geometric mean is 8.76. Show question Question Calculate the geometric mean of the numbers 1, 4, 8 and 10. If we multiply together the numbers we get 320 and the fourth root of 320 is 4.23. Thus, the geometric mean is 4.23. Show question Question What is the altitude of a triangle? The altitude of a triangle is a line drawn from the particular vertex of a triangle which forms a perpendicular line to the base of the triangle. Show question Question What is the geometric means theorem for triangles? This theorem allows us to find the altitude of the geometric mean triangle. Show question Question What is the arithmetic mean of a set of numbers? The arithmetic mean is when we take the sum of the set of numbers and then divide it by how many numbers we have. Show question Question True or false: the arithmetic mean is always bigger than the geometric mean. False. The geometric mean is always greater than or equal to the arithmetic mean. Show question Question True or false: the geometric mean can be used for negative numbers. False. The geometric mean can be used for only positive numbers. Show question Question True or false: the arithmetic mean can be used for negative numbers. True. The arithmetic mean can be used for both positive and negative numbers. Show question Question When will the geometric mean always equal the arithmetic mean? When there is only a single number in the set Show question 60% of the users don't pass the Geometric Mean quiz! Will you pass the quiz? Start Quiz ## Study Plan Be perfectly prepared on time with an individual plan. ## Quizzes Test your knowledge with gamified quizzes. ## Flashcards Create and find flashcards in record time. ## Notes Create beautiful notes faster than ever before. ## Study Sets Have all your study materials in one place. ## Documents Upload unlimited documents and save them online. ## Study Analytics Identify your study strength and weaknesses. ## Weekly Goals Set individual study goals and earn points reaching them. 
## Smart Reminders Stop procrastinating with our study reminders. ## Rewards Earn points, unlock badges and level up while studying. ## Magic Marker Create flashcards in notes completely automatically. ## Smart Formatting Create the most beautiful study materials using our templates.
2022-10-02 06:35:24
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8388901352882385, "perplexity": 325.59608920697934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00778.warc.gz"}
https://bodheeprep.com/average-speed-time-speed-and-distance
# Average Speed: Time Speed and Distance The average speed of an object is defined as the ratio of the total distance covered to the total time taken. Mainly there are two different types of questions based on the concept of average speed. The first type is when a body travels the same distance at different speeds, i.e. the distance is constant. The second is when a body travels at different speeds, each speed for an equal duration, i.e. the time is constant. ### Average speed when the distance is constant When a body travels the same distance at two different speeds, say X and Y, the average speed for the entire journey is the harmonic mean of X and Y. Derivation Refer to the figure given below. Let us assume that a person moves from A to B with a speed of x and from B to A with a speed of y. Also, let the distance between A and B be D. The total distance traveled by him = 2D. Time taken by him to travel from A to B = D/x and from B to A = D/y. Therefore, the total time taken by him during his entire journey = D/x + D/y. Now, the average speed = total distance/total time = 2D/(D/x + D/y) = 2xy/(x + y). Let us understand the application of the formula with the help of a few examples. Example: If Aakash goes from Raipur to Bilaspur at a speed of 30 km per hour and comes back at a speed of 70 km per hour, then what is his average speed during the entire journey? Explanation Here we can see that Aakash is traveling from Raipur to Bilaspur and back to Raipur. In this case, we can say that he is traveling the same distance at two different speeds, that is, 30 km per hour and 70 km per hour. Therefore, the average speed would be the harmonic mean of the two speeds, or we can say that: The average speed = 2×30×70/(30+70) = 42 km per hour. Let's take one more example. Example: A person goes around an equilateral triangle shaped field at speeds of X, Y and Z kilometers per hour on the first, second and third side. Find his average speed during the journey. Explanation Now, this question is not different from the previous one. The only difference is that here the person is traveling 3 equal distances, each at a different speed. In this case also, the average speed would be the harmonic mean of all the three speeds given. Let the measure of each side of the triangle be D kilometers. The person traveled the distance from A to B at X kilometers per hour, B to C at Y kilometers per hour and C to A at Z kilometers per hour. Time taken by him to travel from A to B = D/X. Similarly, the time taken to travel from B to C and C to A is D/Y and D/Z respectively. So, the total time taken by him = D/X + D/Y + D/Z, and the total distance traveled is 3D. His average speed = 3D/(D/X + D/Y + D/Z) = 3XYZ/(XY + YZ + ZX). Example: A bird flying 400 km covers the first 100 km @ 100 km per hour, the second 100 km @ 200 km per hour, the third 100 km @ 300 km per hour and the last 100 km @ 400 km per hour. Determine the average speed of the bird over the entire journey. Explanation Total time taken = 100/100 + 100/200 + 100/300 + 100/400 = 25/12 hours. Therefore, average speed = 400/(25/12) = 192 kmph. I hope the above examples clear the concept of average speed in the case of the same distance traveled at different speeds. ### Average speed when the time is constant When a body moves at different speeds, but for an equal duration at each speed, then the average speed of the entire journey is the arithmetic mean of all the speeds. Example: Rahul travels at a speed of 20 km per hour for the first 2 hours. For the next two hours, he travels at a speed of 30 km per hour. Find the average speed of his entire journey. 
Explanation Total distance travelled = 20×2+30×2=100 kilometers Total time taken =2+2 = 4 hours Therefore, the average speed=100/4=25 kmph.
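Both cases, constant distance (harmonic mean of the speeds) and constant time (arithmetic mean of the speeds), can be checked with a short sketch that reuses the numbers from the examples above (Python is used purely for illustration):

# Case 1: same distance at two speeds -> harmonic mean
x, y = 30, 70                        # Raipur-Bilaspur example, km per hour
print(2 * x * y / (x + y))           # 42.0

# Case 2: equal time at each speed -> arithmetic mean
speeds, hours = [20, 30], 2          # Rahul's example: 2 hours at each speed
total_distance = sum(s * hours for s in speeds)
total_time = hours * len(speeds)
print(total_distance / total_time)   # 25.0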
2020-09-21 20:18:01
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8027569651603699, "perplexity": 471.1724924323918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400202007.15/warc/CC-MAIN-20200921175057-20200921205057-00333.warc.gz"}
http://www.g3journal.org/highwire/markup/487503/expansion?width=1000&height=500&iframe=true&postprocessors=highwire_tables%2Chighwire_reclass%2Chighwire_figures%2Chighwire_math%2Chighwire_inline_linked_media%2Chighwire_embed
Table 1 Statistical power (Lea-Coulson model)

| p × 10^8 | 1.25 | 1.5 | 1.75 | 2.0 | 2.5 | 3.0 |
| --- | --- | --- | --- | --- | --- | --- |
| LR test | 11.0 | 25.1 | 45.2 | 62.8 | 88.8 | 97.6 |
| C.I. overlap | 10.5 | 24.7 | 44.9 | 62.4 | 88.7 | 97.5 |
| Normality | 7.93 | 18.9 | 35.6 | 52.8 | 80.8 | 93.9 |

• Six groups of simulated experiments were compared with a baseline group. The mutation rate in the baseline group, 1.0 × 10^-8, is smaller than the mutation rates in the six other groups. The final cell population size in the baseline group is twice as large as in the other groups. Three comparison methods, namely, the LRT, the method of checking C.I. overlapping, and the asymptotic normality method, were used to test for equality of mutation rates between experiments in the baseline group and experiments from one of the other six groups. Each entry in the table is the percentage of tests that are significant at the 0.05 level. Hence, each entry is an estimate of statistical power at the 0.05 level.
2018-07-23 15:30:44
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.890162467956543, "perplexity": 677.8057355951184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676596542.97/warc/CC-MAIN-20180723145409-20180723165409-00117.warc.gz"}
https://www.sarthaks.com/1231945/piece-having-resistance-equal-parts-resistance-each-part-combination-compare-original
# A piece of wire having resistance R is cut into four equal parts. (i) How will the resistance of each part compare with the original wire? (ii) How will the resistance of the parallel combination of the parts compare with the original wire?

36 views
in Physics
closed
A piece of wire having resistance R is cut into four equal parts. (i) How will the resistance of each part compare with the original wire? (ii) How will the resistance of the parallel combination of the parts compare with the original wire?
by (71.7k points)
selected by
(i) R/4 (ii) R/16.
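A short worked check of the quoted answer, added here for clarity (assuming part (ii) refers to the four pieces joined in parallel):

$$R_{\text{piece}} = \frac{R}{4}, \qquad \frac{1}{R_{\text{eq}}} = 4\cdot\frac{1}{R/4} = \frac{16}{R} \;\Rightarrow\; R_{\text{eq}} = \frac{R}{16}.$$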
2023-03-26 22:14:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17618662118911743, "perplexity": 2157.8395312147704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946535.82/warc/CC-MAIN-20230326204136-20230326234136-00592.warc.gz"}
https://wikieducator.org/User:KKamal/SANDBOX
# User:KKamal/SANDBOX

Dr. Kamal Karunananda > Employer: Open University of Sri Lanka Occupation: Senior Lecturer Other roles: Learning4Content Facilitator Nationality: Sri Lanka Country: Sri Lanka

This user was certified a WikiBuddy by Pschlicht. This user is a WikiNeighbour for WikiEducator. This user is a WikiAmbassador for WikiEducator.

Course Title - Reinforced Concrete Design
Course Code - CEX4231
Target Group - Level 4 Engineering Technology (Civil) Undergraduates

Aims:
1. To introduce the theory and applications of analysis and design in reinforced concrete.
2. To develop an understanding of the behaviour and design of reinforced concrete structures.
3. To prepare designers for the effective use of design codes and standard formulas in the design of reinforced concrete members.

Learning Outcomes: At the end of the lessons, the student should be able to understand the theoretical background and practical application of concrete design. He should also be able to apply the knowledge in designing safe and economical elements/structures in concrete. The following summarises the learning outcomes of the subject:
(1) Basic concepts of structural concrete design: loads, strengths, design codes, limit states
(2) Design of superstructure components: slabs, beams, columns and staircases
(3) Design of substructure components: foundations
(4) Reinforcement detailing of components
(5) Different analysis techniques

You can see the following videos:
3. Different methods of design: http://www.youtube.com/watch?v=ba3mZhOpsTM

## My Profile

Hi all, I am working in the Open University of Sri Lanka.
2021-04-21 12:33:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27148276567459106, "perplexity": 10131.913010284112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039536858.83/warc/CC-MAIN-20210421100029-20210421130029-00232.warc.gz"}
https://puzzling.meta.stackexchange.com/tags/questions/hot
# Tag Info ## Hot answers tagged questions 39 votes Accepted ### What is our reason for wanting bounties on questions? Increase the value of an upvote for a question. Currently, an upvote on a question provides only half the reputation that an upvote on an answer does. (5 reputation vs. 10 reputation.) This would ... 36 votes Accepted ### Puzzles whose creators have asked them not to be shared (This is one of multiple answers I'm posting so that they can be voted on. It does not necessarily reflect my own opinion.) Such questions should be forbidden. In cases where the creator of the ... • 111k 29 votes ### Should we require that puzzles remain apolitical? Provided they don't violate the Be Nice policy, what's the problem? Just as there's a difference between discussing politics disrespectfully and discussing politics at all, there's also a difference ... • 114k 26 votes ### Are math-textbook-style problems on topic? Math puzzles are on topic, math problems are not Let me first give some examples to illustrate the distinction I mean. Math problems: Solve for $x$: $2x+3=7$. My friend gave me a riddle: ... • 23.6k 25 votes Accepted ### Has the scope of the site changed? No. I don't think the site's scope has changed, and I disagree with the closing of that question - especially the reason behind the closing. Questions about puzzles are still perfectly acceptable here.... • 136k 21 votes ### A policy on plagiarism To start with the easy bit: copyright notices are red herrings. Take, as an example, the now deleted Air Crash Dilemma. A search for the key phrase: bury the survivors? turns up "About 94,700 ... • 539 21 votes ### What is our reason for wanting bounties on questions? Implement "bounties" for questions that can be awarded instantaneously. This would be a method where one user could transfer some reputation to another user to reward them for an exemplary question. ... 18 votes ### Why was my question removed from the HNQs? I propose a simpler solution: hide the image behind a link. That way unsuspecting users from this site or others will not be confronted with unwanted nudity. The problem isn't the Hot Questions list; ... • 539 18 votes ### Is it appropriate to pose a puzzle where I don't know the answer? Good question. Seems to me that just so long as you include the info that you don't know what the solution is and there might not be a solution (if that's the case), there wouldn't be much for anyone ... • 8,712 17 votes Accepted ### Puzzle hidden in revision history? This puzzle Now that the puzzle in question has been solved, it seems to me that it really was just a gimmick without much value. As far as I can tell this puzzle could exist entirely as-is on the ... • 15.8k 16 votes ### Can we get +10 reputation for upvotes on questions? As of yesterday, November 13th 2019, this is now in effect network wide: We’re changing the reputation earned from getting a question upvote to ten points, making it equal to the reputation earned ... • 27k 15 votes ### Is it appropriate to pose a puzzle where I don't know the answer? Not that I disagree with the other answer, but I wanted to chime in - I think this is one of the best kinds of questions for this site. Finding a puzzle and being unable to solve it would lead ... • 1,664 15 votes Accepted ### Sharing and rewarding what went into making a good puzzle An outline for wrap-up posts can be found at Wrap-up posts: What should the formal part of it contain? Please copy the outline’s header, being sure to include the words ... 
• 17.7k 14 votes ### Difficulty rating on questions And who's supposed to judge the "difficulty rating" of a puzzle? If it's the original poster of the question, that's almost certainly going to be subject to lots of bias. "Oh, I don't want beginners ... • 4,537 14 votes Accepted ### What's with all the spaghetti? Yesterday I asked What's the password, again? because it felt like a nice follow-on to the first password puzzle (I think it's the first, anyhow). It seems that some other people saw the number ... • 5,705 14 votes ### Google Earth Challenges - Should we keep them? Without commenting on any of the specific examples you've included, I don't think a simple satellite photo with "where is this place?" constitutes a puzzle. This type of thing has much more in common ... • 36.2k 14 votes Accepted ### Re-asking a question when an unintended answer is given Okay, since it's been a day I'll post my thoughts for voting. I think that re-asking a question with a different phrasing is good, provided that the new phrasing removes ambiguity and invalidates the ... • 9,085 13 votes ### Questions from on-going contests I can agree with the policy above albeit it might be difficult to "police" this really. However, isn't posting a question from a contest already defying existing copyright policy and can be acted upon ... • 17.7k 13 votes Accepted ### Should we have weekly/fortnightly topic challenges? Many in the community still have a sour taste in their mouths after the recent disagreements over policy that went nuts. The community is still in disagreement over a lot of the basics and many like ... • 3,785 12 votes ### Should we require that puzzles remain apolitical? There's nothing inherently wrong with political-themed puzzles, so they should be allowed. Can there be bad political questions? Absolutely, just like bad riddles and ciphers. Suggesting the ... • 6,619 11 votes ### What's with all the spaghetti? Believe it's called 'enthusiasm' and possibly even 'community'. • 8,712 11 votes Accepted ### Questions from on-going contests I propose we take Math's policy on contest questions: Why do we have a policy? First and foremost: we believe that the responsibility for the integrity of an exam, contest, competition, etc. ... • 2,780 11 votes ### What is our reason for wanting bounties on questions? Implement bounties for questions that work the same as the current "exemplary answer" bounties for answers. We would use the current bounty facilities, but when placing a bounty, you would be given ... 11 votes ### Sample commentaries on puzzle creation This is an answer that would potentially be posted on my most-popular puzzle, Hearken, now, and listen close! Meta-note (this is in response to the question above, and not part of my actual answer): I ... 10 votes ### Should we be referencing/using specific users in puzzles? I don't think there's any need for a ruling on using other peoples names in your puzzle - it just adds a little twist to the puzzle and for the most part I don't see people minding. I wouldn't be ... • 5,705 10 votes Accepted No. • 136k 10 votes Accepted ### What do we do when a question relies on an image that's no longer available? Gareth rescued the example you mentioned, but to try and answer the general case, I'd suggest*: Check Google's cache - If the content went missing fairly recently then Google will likely have a ... • 36.2k 10 votes Accepted ### Is it OK to use specific users as part of a puzzle mechanism? 
I would highly recommend that you don't use specific users as part of a puzzle, for two main reasons: Preservation: One of the many goals of this site is to be an archive of high-quality puzzles that ... • 136k 10 votes Accepted ### Are full crosswords allowed here? "Too broad" as a close reason doesn't exactly apply here, in the same way it does across the rest of the network. Some puzzles have many parts leading to a single solution: this is okay. For example, ... • 136k
2022-05-27 04:00:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4769423007965088, "perplexity": 2236.2594144492364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662631064.64/warc/CC-MAIN-20220527015812-20220527045812-00620.warc.gz"}
http://www.googoodolls.com/blog/category?page=216
• Posted by Goo Goo Dolls |   February 21, 2001 Good Afternoon. . . .to one and all. . . .and Welcome to the Daily Goo. . . .edition # 236. . . we're here for ya folks. . . thanks for joinin' us. . . and here's your Daily. . . . This Round 5 of the Whose Stuff is Whose Contest is at it's halfway point. . . the online communities are buzzin' about this one. . . without a doubt the most difficult of all 5. . . .every day we show you another band name. . . .5 in total. . . .3 of them are unsigned bands that Robby, mike and Johnny were members of in their pasts. . . .so. • Posted by Goo Goo Dolls |   February 20, 2001 Hey folks. . . it's Tuesday.. . and it's the Daily Goo. . . thanks for joinin' us. . . . .it's gonna be a short one today. . . . gotta catch a flight. . . . racin' against the clock. . . . . you know. . . . .so, welcome welcome welcome. . . it's edition # 235. . . and. . . on we go. . . . . . OK. . . real quicklike. . . . .entry #2 in round # 5 of the Whose Stuff is Whose contest. . . . . (the final round). . . .it lay beneath the button marked picture #1. . . . .the WSIW? contest runs like this. . . .we'll show you 5 band names. . . .these band names are former bands of the boys of goo. • Posted by Goo Goo Dolls |   February 19, 2001 Happy Presidents' Day to all of you. . . .hope you're one of the lucky ones that has this day to yourself. . .5 former presidents still with us. . .that's happened 3 times in the past 25 years. . . the last time that happened before then was during Lincoln's presidency. . . .factoids. . . .ok. . . I'll turn the CNN off. . . .Welcome to the Daily Goo. . . .it's all in here. . . .this is edition # 234. . . .and we're glad you came by for a visit. . .lots of things to address this mornin. . . .so enough gumflappin'. . . let's get to your Daily. . . . . The Daily GooQuickie for yesterday was a • Posted by Goo Goo Dolls |   February 18, 2001 Shoot. . .thanks for stoppin' in. . . doin' some bicoastal travelin' today. . . Welcome to the Daily Goo. . it's edition #233. . . and it's about time. . . .we'll talk about round #5 of the WSIW? contest. . . the final round. . .talk smack for a bit. . . you know. . . . and. . . we'll deal y'all a game of 5 card DGQ stud. . . .no wild cards . . .uh huh. . . . so. . . . what are ya waitin' for ?. . . . get scrollin'. . . . .here's your Daily. . . . And a DGQ for ya.. . . for you Daily virgins. . .your first Daily Goo Quickie. . . let us explain. . . . we ask a question. • Posted by Goo Goo Dolls |   February 17, 2001 Hello?. . . hello?. . . . is this thing on?. . .ok. . . . Welcome to the Daily Goo. . . 232 consecutive days of information, giveaways and an uncountable number of little dots. . .today we announce the winner of round #4 of the Whose Stuff is Whose ? contest. . .big day here at the office. . . .so. . . enough of the windup. . . . time for the pitch. . . . straight and fast. . . .right over the inside corner. . . here's your Daily. . . . . Before we announce the winner. . . .we'd like to show you some more of the new gear in our store. • Posted by Goo Goo Dolls |   February 16, 2001 Hey you. . . .don't watch that. . . . watch this. . . it's the Daily Goo. . . edition # 231. . . and it's Friday. . . .and that means the world to alot of folks. . . . you know ?. . . .well. . . today here at the Daily it means another day of voting for the WSIW? contest. . . . and we'll show you some new stuff from the store. . . . so. . . it 's that time. . . .aaaaaaaaaand. . . . .let's Goo. . . . As we mentioned yesterday. . 
. the store is up and runnin' again. . . under new management. . . ours. . . .so you'll get the goodies your lookin' for. • Hope you've all sobered up from the intoxicating vibes of Valentines Day. . . . snap out of it. . . .unless of course you're one of those folks who live it on a Daily basis. . . . and in that case. . . you're probably pretty happy and inspiring to be around. . . . or just annoying . . . . either way. . . . hope you had a great holiday. . . .we've got another Daily for ya. . . . edition # 230. . . . (tooth hurty. . . .that's the punchline to one of our favorite jokes). . . anyways. . .thanks for stoppin' in. . . . . big day here at the Daily. . . . lots and lots to deal with. • Posted by Goo Goo Dolls |   February 14, 2001 Happy Valentines Day to all of ya. . . . hope you've got someone special to share this V-Day with . . . if not. . . . keep an eye out. . . . with a little bit of effort (and fate) this just might be your lucky day. . . . .either way. . . . welcome to the Daily Goo. . . . edition # 229. . . .so let's begin this one with a big box of chocolates and a buncha flowers the size of. . . oh. . . . what the hell. . . . TEXAS. . .. (everybody's doin' it). . . . . so. . . . we proceed. . . . here's that Daily. . . . One more day of album entries in the WSIW? • Posted by Goo Goo Dolls |   February 13, 2001 Another rainy day here in LA. . . . you tend to get spoiled in a place like this. . . . .well. . . . it's just natures way of keepin' you in check. . . .and this is natures way of keepin' you up on the Daily activities of the people of goo. . . . .Welcome to the Daily Goo. . . . edition # 229.. . . thanks for comin'. . . . and. . . here's your Daily. . . And today. . . is the second to last day of entries for the WSIW? contest round #4. . . . .under the button marked picture 1 is a picture of our 3 dashing young men, along with their good friend Lance Diamond. • Posted by Goo Goo Dolls |   February 12, 2001 Its rainin in sunny southern California today. . . . but its all right. . . .its warm and dry inside. . . .and were gettin into another edition of the Daily Goo. . . . edition # 228 as a matter of fact. . .thanks for givin us a few minutes outta your busy day. . . .well do the same for you . . . . daily. . . . and on that note. . . lets get to it. . . . Lets get right to day #3 of the WSIW contest Round #4. . . .appropriately titled the Whose Rock is Whose round. . . . click on picture 1 for a picture of the boys in the band along with our friend Mr. Diamond.
2016-02-12 05:41:13
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.852033793926239, "perplexity": 3910.9174201895808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163438.83/warc/CC-MAIN-20160205193923-00230-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/calculus-3-question.151561/
# Calculus 3 question 1. ### feelau 61 1. The problem statement, all variables and given/known data So we're suppose to show that the intervals connecting vertices of a tetrahedron with centers of gravity of opposite sides interesect at one point, namely the center of gravity of a normal tetrahedron. The center of gravity is (P+Q+R+S)/4. 2. Relevant equations All I know is we have to somehow work with vectors so we at the end, we end up with result. 2. ### Tom1992 112 you have to use triple integration. it's not really a proof, but a calculation. 3. ### feelau 61 That can't be right, we just barely started this class and we're only learning about vectors and haven't done anything with integration 4. ### Tom1992 112 you said center of gravity of a normal tetrahedron. that must involve triple integration, unless the density is uniform throughout the tetrahedron, which you did not state. if the density is uniform, simply intersect two lines, each of which go from one vertex to the center of the opposite triangle, which is at one third the height of the triangle. Last edited: Jan 15, 2007 5. ### feelau 61 Well we show that the lines going from one vertice to the center of gravity of opposite sides will end up crossing each other right in the middle of the tetrahedron(the very center). The problem says that at that point, we know the answer is (P+Q+R+S)/4 where the letters correspond to vertices. We're suppose to use vectors to show that, at that middle point, we'll get (P+Q+R+S)/4. I think you're interpreting it another way, perhaps this made more sense? Last edited: Jan 15, 2007 6. ### Tom1992 112 you can think along the lines of use the center of gravity formula for 4 masses: cg = [m1(P) + m2(Q) +m3(R) + m4(S)]/(m1+m2+m3+m4) think of what m1,m2,m3,m4 would be in your special case. but the symmetry argument using the interesection of two lines should also work too. 7. ### feelau 61 well there would be four lines all crossing at the center of gravity and i'm suppose to show that with vectors though cuz this is a vector problem. I know the physics concept of it(learned it last semester) but I need to show it with vectors. Do you have any ideas? 8. ### Tom1992 112 using nothing but vectors? how about place the center of gravity at the origin, by symmetry (which we can use due to the simple conditions of the object) the four vertices must be equidistant from the origin. you can figure out the other symmetry properties to argue that the sum of the four vectors must be zero. i don't like this solution very much, but i haven't taken vector courses since i was 11. Last edited: Jan 15, 2007 9. ### feelau 61 hm....ok...i'll look into it. thanks
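For reference, one way to finish the vector argument sketched in the thread (added here as an illustration, writing P, Q, R, S for the position vectors of the vertices): the centre of gravity of the face QRS is G_P = (Q+R+S)/3, and the point three quarters of the way along the segment from P to G_P is

$$P + \tfrac{3}{4}\bigl(G_P - P\bigr) = \tfrac{1}{4}P + \tfrac{3}{4}\cdot\tfrac{1}{3}(Q+R+S) = \tfrac{1}{4}(P+Q+R+S).$$

By symmetry, the same computation starting from Q, R or S gives the same point, so all four segments pass through (P+Q+R+S)/4.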
2015-07-01 15:21:48
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8196390271186829, "perplexity": 604.048400394189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375094957.74/warc/CC-MAIN-20150627031814-00067-ip-10-179-60-89.ec2.internal.warc.gz"}
https://research.tue.nl/en/publications/optimal-subgraph-structures-in-scale-free-configuration-models
Optimal subgraph structures in scale-free configuration models

Abstract

Subgraphs reveal information about the geometry and functionalities of complex networks. For scale-free networks with unbounded degree fluctuations, we count the number of times a small connected graph occurs as a subgraph (motif counting) or as an induced subgraph (graphlet counting). We obtain these results by analyzing the configuration model with degree exponent $\tau\in(2,3)$ and introducing a novel class of optimization problems. For any given subgraph, the unique optimizer describes the degrees of the nodes that together span the subgraph. We find that every subgraph occurs typically between vertices with specific degree ranges. In this way, we can count and characterize {\it all} subgraphs. We refrain from double counting in the case of multi-edges, essentially counting the subgraphs in the {\it erased} configuration model.

Authors: R.W. van der Hofstad, J.S.H. van Leeuwaarden, C. Stegehuis
Original language: English
Journal: arXiv, No. 1709.03466
Number of pages: 26
Published: 11 Sep 2017
Keywords: math.PR; counting; configurations; apexes; exponents; optimization; geometry
Bibliographical note: 26 pages, 6 figures
2020-02-26 19:31:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8578444123268127, "perplexity": 1602.8487037883124}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146485.15/warc/CC-MAIN-20200226181001-20200226211001-00534.warc.gz"}
https://math.stackexchange.com/questions/829898/what-is-the-probability-of-transmission-between-two-nodes-in-a-neural-network
# What is the Probability of Transmission Between Two Nodes in a Neural Network? I have a network which is an Erdős–Rényi graph. It is a simple neural network with degree 0.7N where N is the number of nodes. Each weight between neurons is 1/N, meaning that if node n has fired the probability that any connected node k will fire in the next time step is 1/N (there is no temporal integration of inputs to any neuron). My question is as follows. If node n fires at t=0 what is the probability that a specific node m will fire at t=d ( d time steps later)? I know that if the weighted adjacency matrix W was a stochastic matrix then the probability would be ${W}_{mn}^{d}$. However the matrix is not stochastic (since the rows do not sum to 1). Furthermore this calculation fails after direct experimentation. This problem is related to the question: What is the probability of any path of length n between the two nodes in a random graph where the existence of any edge has probability 0.7/N? It is most useful if I could do this in terms of W. I think the answer may be related to Schur decomposition, as this is alluded to in "Networks: An Introduction" by M.E.J. Newman.
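Since firings along different paths are not independent, simply raising W to the d-th power over-counts, so a hedged way to sanity-check any closed-form attempt is direct simulation. The sketch below is illustrative only: the network size, node labels and edge probability are assumptions, not values fixed by the question.

```python
import random

def estimate_fire_probability(N, d, src, dst, p_edge=0.7, w=None, trials=4000):
    """Monte Carlo estimate of P(dst fires at time step d | src fires at t=0).

    Each trial samples an undirected Erdos-Renyi graph on N nodes (edge
    probability p_edge) and propagates firings: a node fires at t+1 if at
    least one fired neighbour independently triggers it with probability w.
    """
    w = 1.0 / N if w is None else w
    hits = 0
    for _ in range(trials):
        # sample the random graph as symmetric adjacency lists
        adj = [[] for _ in range(N)]
        for i in range(N):
            for j in range(i + 1, N):
                if random.random() < p_edge:
                    adj[i].append(j)
                    adj[j].append(i)
        fired = {src}
        for _ in range(d):
            nxt = set()
            for n in fired:
                for k in adj[n]:
                    if random.random() < w:
                        nxt.add(k)
            fired = nxt
        hits += dst in fired
    return hits / trials

print(estimate_fire_probability(N=50, d=3, src=0, dst=1))
```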
2019-07-23 01:05:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8371721506118774, "perplexity": 295.8306569876623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528635.94/warc/CC-MAIN-20190723002417-20190723024417-00163.warc.gz"}
http://mfleck.cs.illinois.edu/study-problems/inequality-induction/inequality-induction-1-hints.html
# Inequality Induction problem 1 ### Hints for inductive step You know that $$2^{k}k! < (2k)!$$ and you need to show that $$2^{k+1} (k+1)! < (2(k+1))!$$ Do a compare and contrast between corresponding parts of your known information and your goal. That is, compare $$2^{k}k!$$ to $$2^{k+1} (k+1)!$$ and compare $$(2k)!$$ to $$(2(k+1))!$$. In each case, can you express the larger quantity as the smaller quantity plus some extra factors? Now, write down just the parts that are different in the two equations. Why are the extra factors in $$(2(k+1))!$$ larger than the extra factors in $$2^{k+1} (k+1)!$$? Remember that k isn't just some random integer. We have a lower bound on its size. Look back at the outline if you don't remember what the bound was. If that isn't working, double-check how you expressed $$(2(k+1))!$$ in terms of $$(2k)!$$. How many extra factors does $$(2(k+1))!$$ have?
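If it helps, the factor bookkeeping the hints point at can be written compactly (added here as an illustration, not part of the original hint page):

$$2^{k+1}(k+1)! = 2^{k}\,k!\cdot 2(k+1), \qquad \bigl(2(k+1)\bigr)! = (2k)!\cdot(2k+1)(2k+2).$$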
2018-01-23 19:23:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8272930979728699, "perplexity": 295.65740204152263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892238.78/warc/CC-MAIN-20180123191341-20180123211341-00765.warc.gz"}
https://www.techwhiff.com/learn/oracle-sql-many-of-the-pages-in-brewbeans-needs/193246
Oracle SQL Many of the pages in Brewbean’s needs to display the product type as coffee...

Question:

Oracle SQL Many of the pages in Brewbean’s needs to display the product type as coffee or equipment. Type value for these product categories is a ‘C’ or ‘E’. Create a procedure that accepts the type C or E and returns the full description as ‘Coffee’ or ‘Equipment’. Use only 1 parameter in the procedure for this task.
2022-12-01 00:06:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3774130642414093, "perplexity": 1814.3611472804898}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710777.20/warc/CC-MAIN-20221130225142-20221201015142-00064.warc.gz"}
https://gamedev.stackexchange.com/questions/193899/how-to-create-something-similar-to-warcraft-3-local-avoidance
How to create something similar to Warcraft 3 local avoidance?

I want to write a script that can lead to something like https://gyazo.com/90d3a0ec82a41f53c831b00c403dc7df (to surround the enemy through local avoidance). I am using Unity's NavMesh; to solve this problem I am using this code:

//targetPosition.position - this is the enemy.
//transforms[i].position - my agents.
//stoppingDistance = 5.
if ((targetPosition.position - transforms[i].position).sqrMagnitude < Mathf.Pow(navMeshAgents[i].stoppingDistance, 2) )
{
    // agent is within stopping distance of the enemy:
    // stop treating it as a moving agent and carve it as an obstacle
    agents[i].GetComponent<NavMeshObstacle>().enabled = true;
    agents[i].GetComponent<NavMeshAgent>().enabled = false;
}
else
{
    // agent is still far from the enemy: keep it moving as a normal NavMeshAgent
    agents[i].GetComponent<NavMeshObstacle>().enabled = false;
    agents[i].GetComponent<NavMeshAgent>().enabled = true;
}

It looks like this: https://gyazo.com/d3c4e401d424cdd42b80c40fe5d3998f. Please give ideas for implementing something similar to the local avoidance from Warcraft 3.

I hope I was able to clearly describe my problem, thanks.
2021-06-13 02:47:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7482107281684875, "perplexity": 3555.2563030565743}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487598213.5/warc/CC-MAIN-20210613012009-20210613042009-00480.warc.gz"}
https://plainmath.net/1363/solve-differential-equation-dy-dx-equal-e-4x-y-3
Question # Solve differential equation dy/dx = e^(4x)(y-3) First order differential equations Solve the differential equation $$\frac{dy}{dx} = e^{4x}(y-3)$$
2020-10-26
$$\frac{dy}{dx} = e^{4x}(y-3)$$
$$\frac{dy}{y-3} = e^{4x}\,dx$$
$$\int \frac{1}{y-3}\,dy = \int e^{4x}\,dx$$
$$\ln|y-3| = \frac{e^{4x}}{4}+c$$
Apply the exponential on both sides:
$$e^{\ln|y-3|} = e^{e^{4x}/4+c}$$
Thus, the solution of the given first order differential equation is
$$y = e^{e^{4x}/4+c}+3$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7502436637878418, "perplexity": 2613.957327850495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058415.93/warc/CC-MAIN-20210927090448-20210927120448-00135.warc.gz"}
https://judge.u-aizu.ac.jp/onlinejudge/description.jsp?id=1132
Time Limit : sec, Memory Limit : KB # Problem D: Circle and Points You are given N points in the xy-plane. You have a circle of radius one and move it on the xy-plane, so as to enclose as many of the points as possible. Find how many points can be simultaneously enclosed at the maximum. A point is considered enclosed by a circle when it is inside or on the circle. Fig 1. Circle and Points ## Input The input consists of a series of data sets, followed by a single line only containing a single character '0', which indicates the end of the input. Each data set begins with a line containing an integer N, which indicates the number of points in the data set. It is followed by N lines describing the coordinates of the points. Each of the N lines has two decimal fractions X and Y, describing the x- and y-coordinates of a point, respectively. They are given with five digits after the decimal point. You may assume 1 <= N <= 300, 0.0 <= X <= 10.0, and 0.0 <= Y <= 10.0. No two points are closer than 0.0001. No two points in a data set are approximately at a distance of 2.0. More precisely, for any two points in a data set, the distance d between the two never satisfies 1.9999 <= d <= 2.0001. Finally, no three points in a data set are simultaneously very close to a single circle of radius one. More precisely, let P1, P2, and P3 be any three points in a data set, and d1, d2, and d3 the distances from an arbitrarily selected point in the xy-plane to each of them respectively. Then it never simultaneously holds that 0.9999 <= di <= 1.0001 (i = 1, 2, 3). ## Output For each data set, print a single line containing the maximum number of points in the data set that can be simultaneously enclosed by a circle of radius one. No other characters including leading and trailing spaces should be printed. ## Sample Input 3 6.47634 7.69628 5.16828 4.79915 6.69533 6.20378 6 7.15296 4.08328 6.50827 2.69466 5.91219 3.86661 5.29853 4.16097 6.10838 3.46039 6.34060 2.41599 8 7.90650 4.01746 4.10998 4.18354 4.67289 4.01887 6.33885 4.28388 4.98106 3.82728 5.12379 5.16473 7.84664 4.67693 4.02776 3.87990 20 6.65128 5.47490 6.42743 6.26189 6.35864 4.61611 6.59020 4.54228 4.43967 5.70059 4.38226 5.70536 5.50755 6.18163 7.41971 6.13668 6.71936 3.04496 5.61832 4.23857 5.99424 4.29328 5.60961 4.32998 6.82242 5.79683 5.44693 3.82724 6.70906 3.65736 7.89087 5.68000 6.23300 4.59530 5.92401 4.92329 6.24168 3.81389 6.22671 3.62210 0 ## Output for the Sample Input 2 5 5 11
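One standard way to attack this problem (offered as an illustrative sketch, not the official solution): an optimal unit circle can be assumed either to enclose a single point or to pass through two of the given points, so it suffices to test the two candidate centres for every pair of points at distance at most 2. The Python below is a rough sketch; the epsilon and the I/O handling are assumptions, and an O(N^3) loop like this may need optimisation to meet the time limit.

```python
import math, sys

def max_enclosed(points, r=1.0, eps=1e-6):
    best = 1  # a circle around any single point encloses at least that point
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        for j in range(i + 1, n):
            x2, y2 = points[j]
            dx, dy = x2 - x1, y2 - y1
            d2 = dx * dx + dy * dy
            if d2 > 4 * r * r:
                continue  # these two points cannot both lie on one unit circle
            # the two candidate centres sit on the perpendicular bisector
            h = math.sqrt(r * r - d2 / 4)
            mx, my = (x1 + x2) / 2, (y1 + y2) / 2
            ux, uy = -dy / math.sqrt(d2), dx / math.sqrt(d2)
            for sign in (1, -1):
                cx, cy = mx + sign * h * ux, my + sign * h * uy
                cnt = sum((px - cx) ** 2 + (py - cy) ** 2 <= (r + eps) ** 2
                          for px, py in points)
                best = max(best, cnt)
    return best

def main():
    data = sys.stdin.read().split()
    idx, out = 0, []
    while True:
        n = int(data[idx]); idx += 1
        if n == 0:
            break
        pts = [(float(data[idx + 2 * k]), float(data[idx + 2 * k + 1])) for k in range(n)]
        idx += 2 * n
        out.append(str(max_enclosed(pts)))
    print("\n".join(out))

if __name__ == "__main__":
    main()
```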
2021-05-10 22:11:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.50984787940979, "perplexity": 269.3964081302919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989749.3/warc/CC-MAIN-20210510204511-20210510234511-00291.warc.gz"}
https://cameramath.com/es/expert-q&a/Algebra/13-Are-the-given-functions-inverses-Your-work-for-this-problem-will
Still have math questions? Ask our expert tutors Algebra Question 13. Are the given functions inverses? (Your work for this problem will need $$4$$ points to be submitted at the end of the test) * $$f ( x ) = \frac { 8 - x } { 2 }$$ , $$g ( x ) = - 2 x + 8$$ Yes, they are inverses No, they are not inverses
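A quick check, added here for illustration: composing the two functions in both orders gives the identity, so the intended answer appears to be that they are inverses.

$$f(g(x)) = \frac{8-(-2x+8)}{2} = \frac{2x}{2} = x, \qquad g(f(x)) = -2\cdot\frac{8-x}{2} + 8 = -(8-x)+8 = x.$$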
2022-05-21 16:28:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5596058368682861, "perplexity": 3756.901522263341}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539131.21/warc/CC-MAIN-20220521143241-20220521173241-00086.warc.gz"}
http://ict4m.org/not-work/word-2010-hyperlink-within-document-not-working.php
# Word 2010 Hyperlink Within Document Not Working

In the File name box, select index as the default name for your home page, and then click Save. You give the illustration of "Select a place in the document". Katrina Lunceford: I really don't understand this hyperlink stuff at all. Select the text or picture that you want visitors to your page to click to open the external file.

## Hyperlink In Word Document Does Not Work

Do one of the following: To follow a hyperlink from your Web publication before you publish it to the Web, hold down CTRL while you click the linked text or picture. Thanks in advance for your help. You can also check out previously reviewed guides on How to change default font settings in Word 2010 & Fill font with Gradient Color Pattern in Word 2010. Dattta: Ditto on this question. If you select (or even just click in) a recognizable email address, URL, or file path and click this button, Word will convert the text to a hyperlink. Then, in your Publisher Web publication, check for the following: The destination might have moved or might not exist anymore. Right-click the hyperlink, and click Edit Hyperlink.

## Hyperlinks Not Working In Word 2013

In Word 2000 and earlier, both these dialogs are accessed via Tools | AutoCorrect. In that case, upload the subfolder manually. You can use an icon or picture as a hyperlink. What do you want to do? It is shown in Figure 5.

## Why Won't My Hyperlink Work In Word

Sometimes a hyperlink will link to a different section of the same page. You can change the way a hyperlink looks, and you can change where it goes (its destination).

## Hyperlinks Not Working In Word 2007

This safety feature, introduced in Word 2002, was intended to make it easier to edit the display text of hyperlinks. Removing a hyperlink: After you create a hyperlink, you should test it. Change the underlying HYPERLINK field code. If the bookmark has been deleted, click Cancel. Paul DeBrino has reminded me of another issue that causes Microsoft Word to change and perhaps break your hyperlinks, by altering the link from an absolute to a relative path or vice versa. Word's Help topic "Create a hyperlink" includes detailed instructions for creating hyperlinks to a variety of targets using this dialog. Click OK to continue. You are viewing the field code (see Figure 5) instead of the field result. I have a table that needs to be filled out, and in the instructions about the table, I want to link to it.
2018-04-21 22:52:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3506244122982025, "perplexity": 3119.121681256678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945459.17/warc/CC-MAIN-20180421223015-20180422003015-00624.warc.gz"}
http://sioc-journal.cn/Jwk_hxxb/CN/10.6023/A14090659
### Iodine-free dye-sensitized solar cells based on an organic disulfide redox couple

1. State Key Laboratory for Mesoscopic Physics and Department of Physics, Peking University, Beijing 100871
• Received: 2014-09-24 Published: 2014-11-21
• Corresponding author: Xiao Lixin E-mail: lxxiao@pku.edu.cn
• Funding: Supported by the National Natural Science Foundation of China (Nos. 61177020, 11121091).

### A Novel Organic Disulfide/Thiolate Redox Mediator for Iodine-free Dye-sensitized Solar Cells

Ma Yingzhuang, Zheng Lingling, Zhang Lipei, Chen Zhijian, Wang Shufeng, Qu Bo, Xiao Lixin, Gong Qihuang

1. State Key Laboratory for Mesoscopic Physics and Department of Physics, Peking University, Beijing, 100871
• Received: 2014-09-24 Published: 2014-11-21
• Supported by: Project supported by the National Natural Science Foundation of China (Nos. 61177020, 11121091).

Over the last 20 years, much attention has been paid to renewable energy technology. Photovoltaics is a promising alternative to conventional fossil fuels. Dye-sensitized solar cells (DSCs) attract notable interest, not only due to their high efficiency and environmentally friendly nature, but also their easy fabrication and relatively low manufacturing costs. Despite the high efficiencies, iodine/triiodide electrolytes have some disadvantages, such as the corrosion of the metallic electrodes and the sealing materials. The electrolyte also absorbs visible light around 430 nm. Therefore, it is important to exploit iodine-free redox couples in DSCs. An organic disulfide material, 2,5-dimercapto-1,3,4-thiadiazole (DMcT), is shown here to reduce and oxidize independently via homopolymerization and depolymerization. DMcT has been applied as a cathode active material for lithium rechargeable batteries. Meanwhile, its self-redox property could be used as a redox mediator in lieu of iodine/triiodide electrolytes. DMcT can be oxidized by self-polymerizing into PDMcT, which can be reduced by depolymerizing back to DMcT. In contrast to conventional redox couples consisting of two different materials, DMcT can independently act as the redox mediator, which is the main difference between DMcT and the redox couples reported previously. Dye-sensitized solar cells consisting of mesoporous TiO2, N719 dye, and this novel electrolyte achieved a power conversion efficiency of 1.6% under 100 mW·cm-2 simulated sunlight (AM 1.5G) and a higher efficiency of 2.6% at weak illumination (13 mW·cm-2), implying a promising application prospect. Although the conversion efficiency is relatively low compared to the iodine/triiodide-based DSCs, this novel single self-redox mediator provides a promising new route to iodine-free dye-sensitized solar cells.
2022-07-05 19:22:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17431169748306274, "perplexity": 9043.704909083066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104597905.85/warc/CC-MAIN-20220705174927-20220705204927-00071.warc.gz"}
https://mathoverflow.net/questions/281560/idea-of-base-change-for-division-algebras-over-local-field
# Idea of base change for division algebras over a local field Let $F$ be a non-Archimedean local field of characteristic $0$ and $K/F$ be a finite extension. Let $D_F$ be a central division algebra of dimension $n^2$ over $F$. Write $D_K=D_F\otimes_FK$, which is again a central division algebra over $K$ of dimension $n^2$. Does there exist a notion of base change for division algebras? In the case of $GL(n)$, the following diagram is commutative: $$\begin{array}{ccc} \widehat{W_F} & \xrightarrow{\ \mathrm{LLoc}\ } & \widehat{GL(n,F)} \\ \downarrow{\scriptstyle \mathrm{res}_{K/F}} & & \downarrow{\scriptstyle \mathrm{BC}_{K/F}} \\ \widehat{W_K} & \xrightarrow{\ \mathrm{LLoc}\ } & \widehat{GL(n,K)} \end{array}$$ where LLoc is the Local Langlands correspondence, $\mathrm{res}_{K/F}$ is the restriction map, and $\mathrm{BC}_{K/F}$ is the base change map. Do we have a similar diagram in the case of division algebras? If yes, what does the base change map look like? More generally, let $A_F$ be a finite-dimensional central simple algebra over $F$, which is isomorphic to $M_n(D)$ for some division algebra $D$ (unique up to isomorphism) over $F$ of index $d$. Set $A_K=A_F\otimes_FK$. Do we have a similar commutative diagram in the case of central simple algebras as above? If the answer is affirmative, could you suggest some references? Thank you. • I think your formulation is a bit clumsy. You first have to fix a (finite dimensional and central) division algebra $D_F$ over $F$ (it is in general not unique even up to isomorphism) and then set $D_K = D_F \otimes_F K$. You may make your question more general by considering $A_F$ and $A_K = A_F\otimes_F K$, where $A_F$ is a central finite dimensional simple algebra. – Paul Broussous Sep 20 '17 at 7:53 An obvious idea to get a commutative square diagram, where $GL(n,F)$ and $GL(n,K)$ are respectively replaced by $D_F^\times$ and $D_K^\times$, is to use the Jacquet-Langlands transfer between representations of $D_F^\times$ (resp. $D_K^\times$) and square-integrable representations of $GL(n,F)$ (resp. $GL(n,K)$) (cf. works of Deligne-Kazhdan-Vignéras, Rogawski, Badulescu). Of course this is not satisfactory, for base change does not preserve square-integrability in general. But you could maybe use the extension of the Jacquet-Langlands transfer to all representations, written by Badulescu.
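For what it's worth, here is a sketch (added purely for illustration) of the square the answer seems to have in mind, with the groups $GL(n,F)$ and $GL(n,K)$ replaced by the unit groups of the division algebras. The horizontal arrows are the $GL(n)$ Local Langlands correspondence composed with the (inverse) Jacquet-Langlands transfer, so a priori they only make sense on the part of $\widehat{W_F}$, $\widehat{W_K}$ corresponding to square-integrable representations, and making the right-hand vertical arrow well defined is exactly where Badulescu's extension is needed:

$$\begin{array}{ccc} \widehat{W_F} & \longrightarrow & \widehat{D_F^\times} \\ \downarrow{\scriptstyle \mathrm{res}_{K/F}} & & \downarrow{\scriptstyle \mathrm{BC}_{K/F}} \\ \widehat{W_K} & \longrightarrow & \widehat{D_K^\times} \end{array}$$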
2019-04-21 05:13:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9100894331932068, "perplexity": 224.16424047609124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530176.6/warc/CC-MAIN-20190421040427-20190421062427-00089.warc.gz"}
https://www.physicsforums.com/threads/locate-absolute-extrema.649412/page-2
# Locate Absolute Extrema

Mark44, Mentor:

> I am having trouble with the exercise f(x) = 3x^2/3 - 2x ; (-1,1)

You need parentheses around the exponent. This is what you wrote: f(x) = $\frac{3x^2}{3} - 2x$. This is what I think you meant: f(x) = $3x^{2/3} - 2x$. Without using LaTeX or the HTML tags that I used, you can write it this way: f(x) = 3x^(2/3) - 2x

> I already found (-1,5) and (1,1) by plugging the intervals back into the function.

But -1 and 1 aren't in the domain.

> But I have f'(x) = 2x^-1/3 -2 = 0

Again, you need parentheses. This is f'(x) = 2x^(-1/3) - 2

> Now, I am having trouble finding the answer. I found 1 which would give me (1,1). However, the answer should be MIN (0,0) and MAX (-1,5). And I don't understand it.

From an earlier post: Maxima or minima can occur at these places:
1. Numbers in the domain at which the derivative is zero.
2. Numbers in the domain at which the original function is defined, but the derivative is undefined.
3. Endpoints of the domain.
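As an illustrative check (not part of the original thread) of the three kinds of candidate points listed above: the derivative 2x^(-1/3) - 2 is zero at x = 1, it is undefined at x = 0 where f itself is still defined, and the endpoints are -1 and 1 (treating the interval as closed, as the textbook answer does). A tiny Python sketch evaluating f at those candidates, using |x|^(2/3) for the real cube root:

```python
def f(x):
    # f(x) = 3*x**(2/3) - 2*x, written with |x|**(2/3) so negative x uses the real cube root
    return 3.0 * abs(x) ** (2.0 / 3.0) - 2.0 * x

for x in (-1.0, 0.0, 1.0):          # endpoint, derivative undefined, derivative zero / endpoint
    print(f"f({x:+.0f}) = {f(x):.2f}")
# prints f(-1) = 5.00, f(+0) = 0.00, f(+1) = 1.00,
# so the absolute maximum is (-1, 5) and the absolute minimum is (0, 0).
```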
2020-02-28 19:43:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7073898315429688, "perplexity": 984.6402231772804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147628.27/warc/CC-MAIN-20200228170007-20200228200007-00424.warc.gz"}
https://www.gamedev.net/forums/topic/413501-site-search-form-usability/
# [web] Site search form usability

## Recommended Posts

Sorry if the subject line isn't too useful. I'm developing a web site based fundamentally around a search algorithm. The search page needs to be able to accommodate some averagely complex queries, which I will demonstrate with reference to AD&D statistics. [smile]

"find all characters with dexterity > 15"
"find all characters with strength > 16 OR constitution > 17"
"find all characters with (strength > 10 AND intelligence > 10) OR (wisdom > 15)"
"find all characters with strength > 10 AND (intelligence > 10 OR wisdom > 15)"

As you can see, this will map onto SQL simply enough, so the actual data retrieval is not a problem. What is a problem is how to formulate a clean interface for this. My first thought is to have 2 dropdowns, one for attribute (e.g. strength, intelligence) and another for the required value. This works fine for example 1 above. Next to that would be a '+' button, which adds another 2 identical dropdowns below, to query a second attribute, with a button you can toggle from AND to OR and back again. That addresses example 2 above. However, examples 3 and 4 are more awkward - the AND operator would take precedence in the first and the OR operator takes precedence in the 2nd, yet my proposed UI has no good way of accommodating that. My users are not going to be comfortable with entering a boolean search, no matter how simple it is. On the other hand though, users will perform very few searches and therefore it doesn't matter if it takes a little longer to formulate than a Google search does. However, it mustn't be intimidating or complex. Any ideas how I can comfortably represent all the above 4 examples visually? Wacky and off-the-wall suggestions welcomed, as long as they're practical - this isn't a formal business site by any means. I'm also happy to only target IE6 and Firefox 1.5 upwards, so assume javascript, CSS, and AJAX/JSON are all available.

##### Share on other sites

Condition 1: [FIELD] [OPERATOR] [VALUE]
Conjunction A: [AND|OR] [LINK|UNLINK]
Condition 2: [FIELD] [OPERATOR] [VALUE]
Conjunction B: [AND|OR] [LINK|UNLINK]
Condition 3: [FIELD] [OPERATOR] [VALUE]

When "Link" is clicked at Conj. A, C1 and C2 both have a border around them; now if "Link" is clicked at Conj. B, there will be a border around (C1+C2) and (C3), maybe of a different color to differentiate between the two. Of course, this will be difficult to turn into code, but seeing as you are the modding forums moderator, you could probably use a JS implementation of this. If not, I might give it a try later tonight, just let me know if you want me to give it a try.

##### Share on other sites

The last two statements are basically two searches in one:

3. (S > 10 && I > 10) combined with (W > 15)
4. (S > 10 && I > 10) combined with (S > 10 && W > 15)

So, an alternative would be to let users search twice, and append results to previous results. Alternatively this can be done in the search menu by allowing not only additional requirements, but additional queries as well. Grouping them inside boxes is then essentially the same as using parentheses in queries. So, a (+) button as well, and in each block, a + button. :)

##### Share on other sites

Is there a limit of three statements or is it unbounded? If you're limited to 3 statements you could visually show grouping using checkboxes on the left side of the statements. The statements that have the checkbox checked would be considered grouped with parentheses.
If you allow more than three statements per query it gets quite a bit more complicated, because you could easily have nested statements which become quite difficult to group on. If you can provide some more information on that I'll try working up an example of what I mean.

##### Share on other sites

Another variation: if you are not going to have nested lists more than a depth of 1, then you could convert it into 2 dimensions like so:

Condition 1: [FIELD] [OPERATOR] [VALUE] [AND|OR] [FIELD] [OPERATOR] [VALUE] [+]
Conjunction: [AND|OR]
Condition 2: [FIELD] [OPERATOR] [VALUE] [AND|OR] [FIELD] [OPERATOR] [VALUE] [+]
Conjunction: [AND|OR]
Condition 3: [FIELD] [OPERATOR] [VALUE] [AND|OR] [FIELD] [OPERATOR] [VALUE] [+]
[++]

where pressing [+] on the right of a line would add another set of Field/Op/Value with a conjunction, and clicking on [++] at the bottom would add another row. Here, each row represents a condition in parentheses. So for your example 4 you could have:

Condition 1: [strength] [>] [10]
Conjunction: [AND]
Condition 2: [intelligence] [>] [10] [OR] [wisdom] [>] [15]

##### Share on other sites

Hmm. Verminox - An explicit 'link' option to set the precedence would work, but I don't think it would be intuitive to the users. Your second suggestion looks better though; I'll give it some thought. Captain P - Yeah, I suppose there is always the way of reducing any set of AND and OR expressions into an ORed list of AND expressions, or vice versa. I would really prefer to be able to do it in one search, but an 'append results' option might not be a bad idea, thanks. tstrimp - practically, I doubt they're going to want to supply more than 3 conditions to be ORed together, but each one could perhaps be up to 3 conditions ANDed together. However I'd rather not impose a hard limit if possible. The checkbox would work but, as with Verminox's link suggestion, I don't think it would be immediately obvious to users, who have no concept of grouping expressions or boolean logic. I don't expect the expression tree to be more than 2 operators deep - e.g. (A and B) or (C and D), though I would like it to be arbitrarily wide.

##### Share on other sites

I would probably supply a form with 6 fields (one per attribute). Before each field is a drop-down: "ANY, <, >", and a numeric field. The six fields are tied with an AND clause. Next to the form is a javascript "more" button. Clicking it will create a new, identical form. The two forms are tied with an OR clause. More forms can be created this way. The disjunctive normal form theorem dictates that this is equivalent to arbitrary choices. So, for your examples:

"find all characters with dexterity > 15"
form1: DEX > 15

"find all characters with strength > 16 OR constitution > 17"
form1: STR > 16
form2: CON > 17

"find all characters with (strength > 10 AND intelligence > 10) OR (wisdom > 15)"
form1: STR > 10, INT > 10
form2: WIS > 15

"find all characters with strength > 10 AND (intelligence > 10 OR wisdom > 15)"
form1: STR > 10, INT > 10
form2: STR > 10, WIS > 15
##### Share on other sites

Perhaps you can draw some inspiration from bugzilla's search interface (link). Not the large top form, but the boolean charts they have at the bottom. I think that's a nice approach. [Edited by - Sander on September 8, 2006 11:14:15 AM]

##### Share on other sites

Yeah, that's conjunctive normal form, isn't it? I think disjunctive makes more sense for my application but the layout looks reasonable.
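Since the discussion settles on disjunctive normal form (an ORed list of ANDed "person specification" forms), here is one hypothetical way to map that structure onto a parameterized SQL WHERE clause. This is an illustrative Python sketch only; the characters table, the attribute whitelist, and the %s placeholder style are assumptions for the example, not something from the thread.

```python
# Each form is a list of (field, op, value) conditions ANDed together;
# the forms themselves are ORed, giving disjunctive normal form.
ALLOWED_FIELDS = {"strength", "dexterity", "constitution", "intelligence", "wisdom", "charisma"}
ALLOWED_OPS = {"<", ">", "<=", ">=", "="}

def build_where(forms):
    clauses, params = [], []
    for conditions in forms:
        parts = []
        for field, op, value in conditions:
            if field not in ALLOWED_FIELDS or op not in ALLOWED_OPS:
                raise ValueError(f"rejected condition: {field!r} {op!r}")
            parts.append(f"{field} {op} %s")   # placeholders keep the query injection-safe
            params.append(value)
        clauses.append("(" + " AND ".join(parts) + ")")
    return " OR ".join(clauses), params

# Example 4 from the opening post, rewritten in disjunctive normal form:
# strength > 10 AND (intelligence > 10 OR wisdom > 15)
where, params = build_where([
    [("strength", ">", 10), ("intelligence", ">", 10)],
    [("strength", ">", 10), ("wisdom", ">", 15)],
])
print("SELECT * FROM characters WHERE " + where)
print(params)
```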
2017-11-22 20:41:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2844997048377991, "perplexity": 2299.16313860203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806660.82/warc/CC-MAIN-20171122194844-20171122214844-00185.warc.gz"}
https://advances.sciencemag.org/content/5/7/eaau1156.full
Research Article | SOCIAL SCIENCES # It's not just how the game is played, it's whether you win or lose Vol. 5, no. 7, eaau1156 ## Abstract Growing disparities of income and wealth have prompted extensive survey research to measure the effects on public beliefs about the causes and fairness of economic inequality. However, observational data confound responses to unequal outcomes with highly correlated inequality of opportunity. This study uses a novel experiment to disentangle the effects of unequal outcomes and unequal opportunities on cognitive, normative, and affective responses. Participants were randomly assigned to positions with unequal opportunities for success. Results showed that both winners and losers were less likely to view the outcomes as fair or attributable to skill as the level of redistribution increased, but this effect of redistribution was stronger for winners. Moreover, winners were generally more likely to believe that the game was fair, even when the playing field was most heavily tilted in their favor. In short, it's not just how the game is played, it's also whether you win or lose. ## INTRODUCTION The steep increase in economic inequality has raised growing concerns about the effects on political polarization, support for policies designed to promote economic redistribution and economic growth (1-5), social mobility (6-8), equality of opportunity (9-12), and social cohesion (13). Survey research has shown that economic inequality is deemed unacceptable when it is perceived as the outcome of an uneven playing field (9, 12, 14, 15). While previous research has convincingly demonstrated a close association among normative conceptions of fairness, cognitive explanatory beliefs, and individual attainment, the causal dynamics are difficult to parse using observational data, for two reasons. First, previous research has operationalized fairness using respondents' beliefs about the causes of inequality. For example, in a 2011 study, Alesina and Giuliano (4) measured "a (possibly vague) sense of what is 'fair' and 'unfair' by whether people feel that there is a difference between wealth accumulated, for instance, by playing the roulette tables in Las Vegas and wealth accumulated by working one's way up from an entry-level job to a higher-level one with effort, long days at the office and short hours of sleep." In another study, Alesina and La Ferrara (2) measured "'fair' versus 'unfair' differences in opportunities (e.g., whether family wealth matters, or it matters whom you know, etc.)." Similarly, Isaksson and Lindskog (16) measured "an input based concept of fairness [...] captured by the effect of beliefs about the causes of income differences." Second, the causal direction runs both ways: disproportionate shares of income and wealth also confer unequal opportunities that amplify further differences in rewards and resources, a phenomenon referred to as "cumulative advantage" (17, 18) and crystallized as the "Great Gatsby curve" (6). This circularity confounds observational studies of the effects of unequal opportunity on beliefs about the fairness of unequal outcomes. The conundrum is deepened because information about the processes that generate unequal outcomes is complex and difficult to access, with the consequence that people instead rely on biased perceptions of personal experience and ideologically motivated partisan narratives (19, 20). 
We unpacked the causal puzzle using an experiment that fixes the level of outcome inequality while exogenously manipulating the distribution of opportunities that lead to unequal outcomes. The experiment complements findings from survey research by prioritizing internal over external validity, and we therefore caution against generalizing the results to actual economic inequality. However, randomized trials make it possible to tease out the effects of unequal opportunity and unequal outcomes, which is not possible in observational studies. The results contribute to our understanding of differences in perceptions of fairness and attributions of success between those who stand to gain and those who stand to lose from social policies that alter economic opportunities (21-24). We are not the first to use experiments to study the effects of inequality on perceptions of fairness. Previous experiments show that respondents are more likely to accept unequal economic outcomes if they perceive those outcomes as the consequence of talent and effort (21, 25-27). We also extend previous experimental research on the psychological determinants of fairness perceptions. Research on attribution error demonstrates a tendency to explain personal success as the result of intrinsic properties such as ability and effort while pointing to external factors (e.g., unequal opportunity or misfortune) to make sense of failures (28-31). Research in economics has documented the effect of self-serving bias on fairness perceptions (32, 33) and on preferences for redistribution (26, 33, 34). Studies of social preferences in distributional games show how the economically successful tend to overstate the role of talent, while unsuccessful individuals point to external circumstances (25, 26, 35). Our study also compares the responses of winners and losers but with the innovation that we manipulate equality of opportunity independently of outcomes, skill, and effort, thereby shifting the focus from psychological to structural influences on perceptions of fairness. We use a novel experimental design to disentangle the causal effect of two sources of inequality on (i) cognitive beliefs about attributions of success to internal versus external factors; (ii) normative beliefs about fairness of the outcomes; and (iii) affective responses regarding satisfaction with the outcome. We find that winners are more likely than losers to deem outcome inequality as fair, perceive talent as its most important factor, and express positive emotions, regardless of the underlying rules that govern the distribution of outcomes. The structure of the paper is as follows. We first detail our research design by introducing the Swap Game, an online experiment, and then explain our key manipulations of opportunity and luck, present our results, and conclude. ### Research design The experiment consists of a simple two-person seven-round card game, "the Swap Game" (see the "Online implementation" section in the Supplementary Materials for details). Following a short training session (described in the next section), two participants are randomly assigned to structurally unequal positions as player 1 (P1) or player 2 (P2). P1 is randomly assigned at the beginning of the game, and that player remains P1 for all seven rounds. At the beginning of each round, each player is dealt nine cards that cannot be seen by the other player. P1 starts each round by playing any card, which is then removed from P1's hand. 
P2 must then play a card that is higher than the one played by P1, where all four suits have equal value. If P2 has no higher card, then P2 "passes." P1 then plays another card. If P2 has a higher card, then P2 plays that card and it is removed from P2's hand. P1 must then play a card that is higher than that of P2 and so on, until one player has no more cards and becomes the winner of that round. Round 2 then begins and the players are again dealt nine cards and play proceeds exactly as in round 1, except that the players must first swap up to two cards. In the progressive exchange (PR), the winner of the previous round must exchange the strongest card(s) while the loser exchanges the weakest. In the regressive exchange (RE), winners of the previous round swap their weakest card(s) and losers exchange their strongest. RE creates a Matthew effect, making previous winners more likely to win again. PR is compensatory redistribution, making winners less likely to win again, similar to rules used to promote league parity in some professional sports. In the baseline exchange condition (RA), the players exchange cards that are randomly chosen. This "placebo" maintains procedural equivalence with the PR and RE conditions but has zero redistributive effect on the strength of the hand. Following the exchange, players are shown the cards that they sent and received. The game continues for a total of seven rounds, after which the player who won the most rounds is declared the winner. The players see a short stack of coins for the loser and a large stack for the winner, labeled with the amounts ($2.50 and $7.50, respectively). After completing the game, the players were administered a short post-treatment survey consisting of three items. These items (listed in Materials and Methods) measure normative beliefs about the fairness of the game, cognitive beliefs about the causes of inequality, and affective responses related to satisfaction with the outcome. The cognitive item asked participants to indicate the most and the least important factors that determine the game outcome: luck, skill, or the rules of the game. The choices were presented as mutually exclusive so that participants who chose "rules of game" over "luck" could only be referring to the rules governing the exchange of cards and not to advantages conferred by luck (i.e., being chosen as P1 or P2 or the "luck of the draw" in the cards that were dealt). The normative item asked participants to judge the outcome as fair or unfair, and the affective item asked participants to indicate their satisfaction with the outcome, choosing between emoticons for happiness, indifference, sadness, and anger. The survey concluded with a series of standard sociodemographic items. The normative and cognitive items and their choices were randomly ordered to avoid learning effects, but the affective item was always asked after the first two to avoid confounding whether the participant's response (e.g., "happy face" or "sad face") indicated that they enjoyed (or were bored by) the game or were satisfied/dissatisfied with the outcomes of the exchange rules. By placing the affective item after the cognitive and normative items, the item is framed as a response to the experimental manipulations. ### Manipulation of opportunity and luck The Swap Game minimizes the intrinsic effects of skill and effort by removing the need for strategic decision-making. 
The only choice that players make is which card to play, and players learn in the training session before the experiment that they should always lead with their highest card (which maximizes the odds that the opponent will have to pass, allowing the player to then play their next highest card, and so on). The training session consisted of three rounds played against a simulated opponent, using rules identical to those in effect for the actual game. This simulated opponent always played the highest card. All players were given the same cards to play in the three training rounds and played against the same simulated opponent. Thus, any differences in performance in the training rounds could only reflect differences in skill, attention, and effort. However, no differences were found nor did we find that winners in the training session were more likely to win in the actual game, indicating that winning in the training session did not confer a performance advantage that might confound the effects of our experimental manipulations (see fig. S3 and the “The role of individual skills” section in the Supplementary Materials). Removing dependence on skill is necessary to rule out the possibility that a participant’s attribution to skill has a basis in fact. The experiment tests whether participants will attribute unequal outcomes to differences in skill even in a contest in which skill plays little or no role. Instead of skill, the game is heavily dependent on the effects of luck and the rules of exchange (see the “Luck and the redistribution of opportunity” section in the Supplementary Materials). Luck affects the opportunity to win by conferring both a structural advantage and a material advantage. The structural advantage is the random designation as P1, who enjoys the opportunity to initiate each of the seven rounds by discarding any card, without the need to play only a card that is higher than the opponent’s (see the “The player ID effect” section in the Supplementary Materials). The material advantage is the luck of the draw that gives one player a stronger hand. The higher the values of a player’s cards, the greater the opportunity to win—a literal implementation of the familiar aphorism that a successful individual was “dealt a great hand” because of the advantages of birth, gender, or ethnicity. In addition to luck, outcomes also depend on the “redistribution of opportunity.” Although the term “redistribution” is generally used to refer to the leveling of economic outcomes via transfers and taxes, redistribution in the Swap Game operates not on outcomes but on opportunities to obtain those outcomes. Opportunity is redistributed by requiring players to exchange one or two cards at the beginning of each round (except the first). In the RA condition, opportunity depends maximally on luck in the assignment of player order (structural advantage) and the luck of the draw (material advantage). Relative to the level playing field in the baseline condition, RE and PR tilt the playing field to favor either the winner (RE) or loser (PR) of the previous round. The card exchange manipulates the level of redistribution and the direction. The level is manipulated by increasing the number of cards that must be exchanged in the PR and RE conditions. The more cards that are exchanged (either one or two), the fewer the cards in the hand that depend only on the luck of the draw and the more that depend on the outcome of the previous round. 
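To make the mechanics above concrete, here is a minimal illustrative sketch (not the authors' implementation) of a single round in Python; it assumes a standard 52-card deck with ace high and the always-play-your-highest-card strategy taught in the training session.

```python
import random

def play_round(hand1, hand2):
    """One round of the Swap Game: players alternate, each card must beat the
    last card played (suits ignored); a player with no higher card passes and
    the opponent leads again. The first player to empty their hand wins."""
    hands = [list(hand1), list(hand2)]
    turn, last = 0, 0                  # P1 leads; last = 0 means any card may be played
    while hands[0] and hands[1]:
        playable = [c for c in hands[turn] if c > last]
        if playable:
            card = max(playable)       # dominant strategy: always play your highest card
            hands[turn].remove(card)
            last = card
        else:
            last = 0                   # pass: the lead goes back to the other player
        turn = 1 - turn
    return 1 if not hands[0] else 2    # number of the player who emptied their hand

# Deal nine cards each from a 52-card deck (values 2-14, suits irrelevant).
deck = [value for value in range(2, 15) for _ in range(4)]
random.shuffle(deck)
print("Round winner: P", play_round(deck[:9], deck[9:18]), sep="")
```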
In summary, the Swap Game uses a fully crossed design to test for differences between winners and losers in the effect of the redistribution of opportunity on normative beliefs about the fairness of the game, cognitive beliefs about the causes of inequality, and affective responses related to satisfaction with the outcome. ## RESULTS The experiment tests differences between winners and losers in normative (Fig. 1), cognitive (Fig. 2), and emotional responses (Fig. 3) to unequal outcomes, broken down by the direction and level of redistribution of opportunity. Each figure displays results for PR on the left and RE on the right, with the level of redistribution ranging from 0 to 2 on the x axis. In the baseline condition (RA), cards are randomly chosen for exchange with no redistributive effect, indicated by 0 redistribution on the x axis. As expected, the number of cards exchanged was irrelevant in the random exchange (RA), hence we combined results from the one- and two-card baseline exchanges (see the "Baseline exchange conditions" section in the Supplementary Materials), labeled as 0 redistribution in the figures. All figures show the baseline condition twice, at the lower left of each panel, to facilitate comparisons between conditions. Figures 1 to 3 report 95% credible intervals using Bayesian logistic regression with uninformative priors. Confidence intervals estimated using frequentist statistics confirm that the results are robust to different estimation methods (see the "Credible and confidence intervals" section in the Supplementary Materials). Normative responses are shown in Fig. 1. The figure reports declining perceptions of fairness as the level of redistribution increases from 0 to 2 in both PR and RE conditions. Winners in the regressive one-card exchange are less likely to regard the outcomes as fair compared to their counterparts in the PR. Compared to losers, winners are twice as likely to perceive the outcome as fair, regardless of the level or direction of redistribution. There is no condition in which losers are more likely than winners to regard outcomes as fair. Losers' perceptions of fairness in the absence of redistribution (the RA) are similar to winners' perceptions of fairness in the two-card RE. Nevertheless, normative responses indicate a "Warren Buffett effect" as regressive redistribution increases from zero (RA) to two. Although winners have a higher baseline, perceptions of fairness decline more sharply for winners than for losers, indicating that winners' perceptions are more sensitive than those of losers to a system that is rigged in their favor. In the PR, perceptions of fairness in the one-card exchange are similar to those in the baseline condition but drop to levels observed in the regressive conditions when two cards are exchanged, indicating participants' awareness of the increased importance of the rules of the game (see fig. S7). Participants regard a compensatory one-card exchange as fair, but the two-card exchange tilts the playing field too far, even when it is to the benefit of the disadvantaged player. Cognitive responses are shown in Fig. 2. The figure reports perceptions of the declining importance of talent (Fig. 2A) and luck (Fig. 2B) as the level of redistribution increases from 0 to 2 in both PR and RE conditions, in parallel with the decline in perceptions of fairness in Fig. 1 (see also table S2). 
Winners generally attribute outcomes to talent more than to luck, while losers are the opposite, but this difference is attenuated when exchanges are progressive. Comparing Figs. 1 and 2, cognitive differences in the responses of winners and losers are not as large as the normative, and their cognitive attributions converge as the level of redistribution increases. In the baseline RA, winners see talent and luck as equally important, while losers attribute outcomes largely to luck. In short, the results indicate that both the level and the direction of redistribution of opportunity are important in shaping participants’ cognitive responses. On a level playing field, causal attributions to talent versus luck reflect who won, but as the playing field is tilted, participants’ responses reflect the tilt more than the outcome, especially in the two-card RE. Positive and negative affective responses are reported in Fig. 3 (A and B, respectively). Comparison across conditions shows that winners’ affective responses are nearly always positive (“happy”), even when they perceive the game as unfair and unrelated to personal talent. Conversely, losers’ responses are consistently negative (“sad” and “angry”), whether exchanges are progressive or regressive. In summary, the Swap Game experiment reveals notable differences in the cognitive, normative, and affective responses of winners and losers. These differences depend, in part, on whether the rules of the game favor winners (in RE) or losers (in PR). However, cognitive and normative differences generally attenuate as redistribution increases from zero to two. In the two-card exchanges, winners and losers differ markedly in normative and emotional responses to unequal outcomes but not in their beliefs about the causes. ## DISCUSSION Survey responses to unequal opportunity and unequal outcomes are generally confounded in studies of attitudes about inequality and distributive justice. This study introduced a novel experimental design that resembles real-life stratification processes in which the distribution of opportunity matters for the distribution of outcomes. We used randomized trials to measure cognitive, normative, and emotional responses to unequal outcomes in a card game in which unequal opportunity could be manipulated independently of other determinants of outcome inequality. Results showed the largest differences between winners and losers in the RA condition in which opportunity was not redistributed, consistent with research in psychology on attribution error and research in economics on self-serving bias. Winners were more likely than losers to attribute unequal outcomes to talent instead of luck, to see the outcomes as fair, and to express personal satisfaction. As the level of redistribution increased with the number of cards that were exchanged, the normative and affective differences persisted but the cognitive differences disappeared. The differences between winners and losers were attenuated (but not eliminated) when redistribution tilted the playing field in either direction via PR or RE. Both winners and losers were less likely to view the outcomes as fair or attributable to skill, regardless of who benefited. This result is consistent with other studies showing that higher inequality triggers concerns about equal opportunities (12, 36) and suggests that perceptions of fairness are not entirely motivated by self-interest or by a need to rationalize or justify success and failure. 
Winners appear to be especially sensitive to regressive redistribution in their perceptions of fairness, reminiscent of repeated calls by Warren Buffett and Bill Gates for higher taxes on the wealthy to level the playing field. Without further study, we urge caution about generalizing to actual socioeconomic inequality from the results of a card game, for five reasons. First, the redistribution of opportunity through the exchange of high or low cards likely violates implicit norms that games should be played on a level playing field. Further research is needed to test whether these norms extend to real-world economic competition outside the context of a parlor game. By contrast, durable income inequality in real life may dictate fairness norms that should not be impartial and instead directly benefit the materially disadvantaged. Second, we use redistribution to refer to redistribution of opportunities, not of outcomes (via transfers and taxes). Therefore, our findings do not directly generalize to the effects of redistribution of outcomes on beliefs about inequality and fairness. Third, the Swap Game was designed to minimize the effects of skill and effort, and participants were given complete information regarding the rules that govern the distribution of opportunities. In empirical settings, the role of luck is far less transparent and easier to rationalize away (31). Fourth, the PR rules differ from real-life progressive conditions such as affirmative action and need-based scholarships, in that progressive redistribution in the experiment does not preclude the first-mover advantage. Last, affective responses may reflect the phrasing of the question, which asks about "your results" rather than "the" results of the game (see Materials and Methods), thereby focusing attention on personal loss or gain, without taking into account the fate of the partner. These differences notwithstanding, the findings have two potentially important—and contradictory—implications for public support for policies to equalize opportunity, such as affirmative action, investments in early childhood education, and job training. In real-life situations, unequal opportunities often operate in inconspicuous ways—as when a person's income results from an opaque combination of family background, talent, effort, and luck. The increase in the conspicuousness of redistribution as the level of transfer increased from zero (random) to two cards points to what may happen to public opinion as the tilt of the playing field becomes blatantly obvious. As the level of redistribution increased from zero to two, both winners and losers became increasingly likely to see unequal outcomes as unfair and unjustified by differences in talent, whether the rules were regressive or progressive. In the regressive condition, the responses of winners indicate opposition to excessively unequal opportunity, although they are the ones who benefit (the Warren Buffett effect). In the progressive condition, the responses of losers indicate opposition to excessively redistributive interventions, although they are the ones with the most to gain. In short, the two-card exchanges suggest that responses to unequal opportunity depend on how far the playing field is tilted more than on who benefits from the tilt. 
On the other hand, in almost all other conditions (with fewer than two cards exchanged), winners were more likely than losers to see outcomes as attributable to talent, although talent played no role in the game, and across all conditions, including the two-card exchanges, winners were more likely than losers to regard unequal outcomes as fair. Winners were also more likely to express personal satisfaction with the outcomes, even when they perceived the game as unfair and unrelated to talent. In short, beliefs about inequality and fairness seem to reflect “how the game is played” when the rules go too far, but otherwise, what matters most is “whether you win or lose.” ## MATERIALS AND METHODS ### Human participant approval Our research was reviewed and approved by the Institutional Review Board at Cornell University. The protocol ID no. for this project was 1607006465. Informed consents were collected from all participants. ### Sample characteristics We conducted our experiment on Amazon Mechanical Turk between 23 August and 25 August 2016. The distribution of participants’ sociodemographics is presented in table S8 in the Supplementary Materials. ## SUPPLEMENTARY MATERIALS Supplementary Text Table S1. Sample by outcome, exchange condition, and intensity. Table S2. Inequality of opportunity by exchange condition. Table S3. Logistic regression on winning the Swap Game. Table S4. Proportion of winners by player ID. Table S5. Proportion of respondents that evaluate the results of the game as fair. Table S6. Proportion of respondents that mention luck/talent/rules as the most important factor to explain the results of the game. Table S7. Proportion of respondents that mention positive/negative/indifferent feelings about his/her results in the game. Table S8. Sociodemographic characteristics. Table S9. Sociodemographic characteristics by exchange condition. Fig. S1. Sample instructions for the one-card RE exchange condition. Fig. S2. Individual hand strength across all rounds for winners (blue) and losers (red) by intensity (columns) and direction (rows) of the exchange condition. Fig. S3. Predicted probability of winning the game. Fig. S4. Proportions of normative responses by exchange condition, level of redistribution (0 = random, one-card, and two-card exchange), and outcome as winner (blue) or loser (red). Fig. S5. Proportions of talent responses as the most important factor by exchange condition, level of redistribution (0 = random, one-card, and two-card exchange), and outcome as winner (blue) or loser (red). Fig. S6. Proportions of luck responses as the most important factor by exchange condition, level of redistribution (0 = random, one-card, and two-card exchange), and outcome as winner (blue) or loser (red). Fig. S7. Proportions of rules of the game responses as the most important factor by exchange condition, level of redistribution (0 = random, one-card, and two-card exchange), and outcome as winner (blue) or loser (red). Fig. S8. Decomposition of the one-card and two-card RA exchange conditions. Fig. S9. Proportions of the least important factor by winning status, exchange condition, and intensity of redistribution. Reference (37) This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited. 
## REFERENCES AND NOTES Acknowledgments: We thank the Social Dynamics Lab, the Center for the Study of Inequality, and the Center for the Study of Economy & Society at Cornell University, and the 2016 Conference on Digital Experimentation at MIT for helpful feedback from presentations. We also thank two anonymous reviewers for their careful reading and thoughtful comments. Funding: This research was supported by the Center for the Study of Inequality at Cornell University, the U.S. National Science Foundation (SES 1226483), the Minerva Initiative (FA9550-15-1-0162), the Ministry of Education of the Republic of Korea, and the National Research Foundation of Korea (NRF-2016S1A3A2925033). Author contributions: M.D.M., M.B., and M.W.M. designed the experiment and wrote the manuscript. M.D.M. and M.B. built the experimental platform, conducted the experiment, and analyzed the data. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data and a replication package will be available on https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/BCOZ6N.
2021-01-19 16:21:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43016892671585083, "perplexity": 2292.4393577636274}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519395.23/warc/CC-MAIN-20210119135001-20210119165001-00711.warc.gz"}
https://labs.tib.eu/arxiv/?author=Kelly%20Holley-Bockelmann
• ### Cosmological Hydrodynamic Simulations of Preferential Accretion in the SMBH of Milky Way Size Galaxies(1610.01155) June 5, 2018 astro-ph.GA Using a new, high-resolution cosmological hydrodynamic simulation of a Milky Way-type (MW-type) galaxy, we explore how a merger-rich assembly history affects the mass budget of the central supermassive black hole (SMBH). We examine a MW-mass halo at the present epoch whose evolution is characterized by several major mergers to isolate the importance of merger history on black hole accretion. This study is an extension of Bellovary et al. 2013, which analyzed the accretion of high mass, high redshift galaxies and their central black holes, and found that the gas content of the central black hole reflects what is accreted by the host galaxy halo. In this study, we find that a merger-rich galaxy will have a central SMBH preferentially fed by merger gas. Moreover, we find that nearly 30% of the accreted mass budget of the SMBH enters the galaxy through the two major mergers in its history, which may account for the increase of merger-gas fueling the SMBH. Through an investigation of the angular momentum of the gas entering the host and its SMBH, we determine that merger gas enters the galaxy with lower angular momentum compared to smooth accretion, partially accounting for the preferential fueling witnessed in the SMBH. In addition, the presence of mergers, particularly major mergers, also helps funnel low angular momentum gas more readily to the center of the galaxy. Our results imply that galaxy mergers play an important role in feeding the SMBH in MW-type galaxies with merger-rich histories. • ### The Stellar Orbital Structure in Axisymmetric Galaxy Models with Supermassive Black Hole Binaries(1805.03828) May 10, 2018 astro-ph.GA It has been well-established that particular centrophilic orbital families in non-spherical galaxies can, in principle, drive a black hole binary to shrink its orbit through three-body scattering until the black holes are close enough to strongly emit gravitational waves. Most of these studies rely on orbital analysis of a static SMBH-embedded galaxy potential to support this view; it is not clear, however, how these orbits transform as the second SMBH enters the center, so our understanding of which orbits actually interact with a SMBH binary is not ironclad. Here, we analyze two flattened galaxy models, one with a single SMBH and one with a binary, to determine which orbits actually do interact with the SMBH binary and how they compare with the set predicted in single SMBH-embedded models. We find close correspondence between the centrophilic orbits predicted to interact with the binary and those that are actually scattered by the binary, in terms of energy and Lz distribution, where Lz is the z component of a stellar particle's angular momentum. Of minor note: because of the larger mass, the binary SMBH has a radius of influence about 4 times larger than in the single SMBH model, which allows the binary to draw from a larger reservoir of orbits to scatter. Of the prediction particles and scattered particles, nearly half have chaotic orbits, 40% have fx:fy=1:1 orbits, and 10% have other resonant orbits. • The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since July 2014. 
This paper describes the second data release from this phase, and the fourteenth from SDSS overall (making this Data Release Fourteen, or DR14). This release makes public data taken by SDSS-IV in its first two years of operation (July 2014-2016). Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey (eBOSS); the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data driven machine learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS website (www.sdss.org) has been updated for this release, and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020, and will be followed by SDSS-V. • ### Sowing black hole seeds: Direct collapse black hole formation with realistic Lyman-Werner radiation in cosmological simulations(1803.01007) March 2, 2018 astro-ph.GA We study the birth of supermassive black holes from the direct collapse process and characterize the sites where these black hole seeds form. In the pre-reionization epoch, molecular hydrogen (H$_2$) is an efficient coolant, causing gas to fragment and form Population III stars, but Lyman-Werner radiation can suppress H$_2$ formation and allow gas to collapse directly into a massive black hole. The critical flux required to inhibit H$_2$ formation, $J_{\rm crit}$, is hotly debated, largely due to the uncertainties in the source radiation spectrum, H$_2$ self-shielding, and collisional dissociation rates. Here, we test the power of the direct collapse model in a self-consistent, time-dependent, non-uniform Lyman-Werner radiation field -- the first time such has been done in a cosmological volume -- using an updated version of the SPH+N-body tree code Gasoline with H$_2$ non-equilibrium abundance tracking, H$_2$ cooling, and a modern SPH implementation. We vary $J_{\rm crit}$ from $30$ to $10^3$ in units of $J_{21}$ to study how this parameter impacts the number of seed black holes and the type of galaxies which host them. We focus on black hole formation as a function of environment, halo mass, metallicity, and proximity of the Lyman-Werner source. Massive black hole seeds form more abundantly with lower $J_{\rm crit}$ thresholds, but regardless of $J_{\rm crit}$, these seeds typically form in halos that have recently begun star formation. Our results do not confirm the proposed atomic cooling halo pair scenario; rather black hole seeds predominantly form in low-metallicity pockets of halos which already host star formation. • ### Chandra and XMM-Newton Observations of the Abell 3391/Abell 3395 Intercluster Filament(1802.08688) Feb. 23, 2018 astro-ph.GA, astro-ph.HE We present Chandra and XMM-Newton X-ray observations of the Abell 3391/Abell 3395 intercluster filament. 
It has been suggested that the galaxy clusters Abell 3395, Abell 3391, and the galaxy group ESO-161 located between the two clusters, are in alignment along a large-scale intercluster filament. We find that the filament is aligned close to the plane of the sky, in contrast to previous results. We find a global projected filament temperature kT = $4.45_{-0.55}^{+0.89}$~keV, electron density $n_e=1.08^{+0.06}_{-0.05} \times 10^{-4}$~cm$^{-3}$, and $M_{\rm gas} = 2.7^{+0.2}_{-0.1} \times 10^{13}$~M$_\odot$. The thermodynamic properties of the filament are consistent with that of intracluster medium (ICM) of Abell 3395 and Abell 3391, suggesting that the filament emission is dominated by ICM gas that has been tidally disrupted during an early stage merger between these two clusters. We present temperature, density, entropy, and abundance profiles across the filament. We find that the galaxy group ESO-161 may be undergoing ram pressure stripping in the low density environment at or near the virial radius of both clusters due to its rapid motion through the filament. • The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) began observations in July 2014. It pursues three core programs: APOGEE-2, MaNGA, and eBOSS. In addition, eBOSS contains two major subprograms: TDSS and SPIDERS. This paper describes the first data release from SDSS-IV, Data Release 13 (DR13), which contains new data, reanalysis of existing data sets and, like all SDSS data releases, is inclusive of previously released data. DR13 makes publicly available 1390 spatially resolved integral field unit observations of nearby galaxies from MaNGA, the first data released from this survey. It includes new observations from eBOSS, completing SEQUELS. In addition to targeting galaxies and quasars, SEQUELS also targeted variability-selected objects from TDSS and X-ray selected objects from SPIDERS. DR13 includes new reductions of the SDSS-III BOSS data, improving the spectrophotometric calibration and redshift classification. DR13 releases new reductions of the APOGEE-1 data from SDSS-III, with abundances of elements not previously included and improved stellar parameters for dwarf stars and cooler stars. For the SDSS imaging data, DR13 provides new, more robust and precise photometric calibrations. Several value-added catalogs are being released in tandem with DR13, in particular target catalogs relevant for eBOSS, TDSS, and SPIDERS, and an updated red-clump catalog for APOGEE. This paper describes the location and format of the data now publicly available, as well as providing references to the important technical papers that describe the targeting, observing, and data reduction. The SDSS website, http://www.sdss.org, provides links to the data, tutorials and examples of data access, and extensive documentation of the reduction and analysis procedures. DR13 is the first of a scheduled set that will contain new data and analyses from the planned ~6-year operations of SDSS-IV. • We describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratio in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially-resolved spectroscopy for thousands of nearby galaxies (median redshift of z = 0.03). 
The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between redshifts z = 0.6 and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGN and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5-meter Sloan Foundation Telescope at Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5-meter du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in July 2016. • ### Galaxy Rotation and Supermassive Black Hole Binary Evolution(1704.03490) May 19, 2017 astro-ph.GA Supermassive black hole (SMBH) binaries residing at the core of merging galaxies are recently found to be strongly affected by the rotation of their host galaxies. The highly eccentric orbits that form when the host is counterrotating emit strong bursts of gravitational waves that propel rapid SMBH binary coalescence. Most prior work, however, focused on planar orbits and a uniform rotation profile, an unlikely interaction configuration. However, the coupling between rotation and SMBH binary evolution appears to be such a strong dynamical process that it warrants further investigation. This study uses direct N-body simulations to isolate the effect of galaxy rotation in more realistic interactions. In particular, we systematically vary the SMBH orbital plane with respect to the galaxy rotation axis, the radial extent of the rotating component, and the initial eccentricity of the SMBH binary orbit. We find that the initial orbital plane orientation and eccentricity alone can change the inspiral time by an order of magnitude. Because SMBH binary inspiral and merger is such a loud gravitational wave source, these studies are critical for the future gravitational wave detector, LISA, an ESA/NASA mission currently set to launch by 2034. • Following the selection of The Gravitational Universe by ESA, and the successful flight of LISA Pathfinder, the LISA Consortium now proposes a 4 year mission in response to ESA's call for missions for L3. The observatory will be based on three arms with six active laser links, between three identical spacecraft in a triangular formation separated by 2.5 million km. LISA is an all-sky monitor and will offer a wide view of a dynamic cosmos using Gravitational Waves as new and unique messengers to unveil The Gravitational Universe. It provides the closest ever view of the infant Universe at TeV energy scales, has known sources in the form of verification binaries in the Milky Way, and can probe the entire Universe, from its smallest scales near the horizons of black holes, all the way to cosmological scales. The LISA mission will scan the entire sky as it follows behind the Earth in its orbit, obtaining both polarisations of the Gravitational Waves simultaneously, and will measure source parameters with astrophysically relevant sensitivity in a band from below $10^{-4}\,$Hz to above $10^{-1}\,$Hz. 
• ### Probing Galactic Structure with the Spatial Correlation Function of SEGUE G-dwarf Stars(1507.01593) July 6, 2015 astro-ph.GA We measure the two-point correlation function of G-dwarf stars within 1-3 kpc of the Sun in multiple lines-of-sight using the Schlesinger et al. G-dwarf sample from the SDSS SEGUE survey. The shapes of the correlation functions along individual SEGUE lines-of-sight depend sensitively on both the stellar-density gradients and the survey geometry. We fit smooth disk galaxy models to our SEGUE clustering measurements, and obtain strong constraints on the thin- and thick-disk components of the Milky Way. Specifically, we constrain the values of the thin- and thick-disk scale heights with 3% and 2% precision, respectively, and the values of the thin- and thick-disk scale lengths with 20% and 8% precision, respectively. Moreover, we find that a two-disk model is unable to fully explain our clustering measurements, which exhibit an excess of clustering at small scales (< 50 pc). This suggests the presence of small-scale substructure in the disk system of the Milky Way. • ### A First Look at Galaxy Flyby Interactions. II. Do Flybys matter?(1505.07910) May 29, 2015 astro-ph.GA In the second paper of this series, we present results from cosmological simulations on the demographics of flyby interactions to gauge their potential impact on galaxy evolution. In a previous paper, we demonstrated that flybys -- an interaction where two independent halos inter-penetrate but detach at a later time and do not merge -- occur much more frequently than previously believed. In particular, we found that the frequency of flybys increases at low redshift and is comparable to or even greater than the frequency of mergers for halos $\gtrsim 10^{11} M_\odot/h$. In this paper, we classify flybys according to their orbits and the level of perturbation exacted on both the halos involved. We find that the majority of flybys penetrate deeper than $\sim R_{half}$ of the primary and have an initial relative speed $\sim 1.6\times V_{vir}$ of the primary. The typical flyby mass-ratio is $\sim 0.1$ at high $z$ for all halos, while at low $z$, massive primary halos undergo flybys with small secondary halos. We estimate the perturbation from the flyby on both the primary and the secondary and find that a typical flyby is mostly non-perturbative for the primary halo. However, since a massive primary experiences so many flybys at any given time, they are nearly continually a victim of a perturbative event. In particular, we find flybys that cause $\sim 1\%$ change in the binding energy of a primary halo occurs $\gtrsim 1$ Gyr$^{-1}$ for halos $> 10^{10} M_\odot/h$ for $z \lesssim 4$. Secondary halos, on the other hand, are highly perturbed by the typical encounter, experiencing a change in binding energy of nearly order unity. Our results imply that flybys can drive a significant part of galaxy transformation at moderate to lower redshifts ($z \lesssim 4$). We touch on implications for observational surveys, mass-to-light ratios, and galaxy assembly bias. • ### Galaxy Rotation and Rapid Supermassive Black Hole Binary Coalescence(1505.06203) May 22, 2015 astro-ph.GA During a galaxy merger, the supermassive black hole (SMBH) in each galaxy is thought to sink to the center of the potential and form a supermassive black hole binary; this binary can eject stars via 3-body scattering, bringing the SMBHs ever closer. 
In a static spherical galaxy model, the binary stalls at a separation of about a parsec after ejecting all the stars in its loss cone -- this is the well-known final parsec problem. However it has been shown that SMBH binaries in non-spherical galactic nuclei harden at a nearly constant rate until reaching the gravitational wave regime. Here we use a suite of direct N-body simulations to follow SMBH binary evolution in both corotating and counterrotating flattened galaxy models. For N larger than 500K, we find that the evolution of the SMBH binary is convergent, and is independent of the particle number. Rotation in general increases the hardening rate of SMBH binaries even more effectively than galaxy geometry alone. SMBH binary hardening rates are similar for co- and counterrotating galaxies. In the corotating case, the center of mass of SMBH binary settles into an orbit that is in a corotation resonance with the background rotating model, and the coalescence time is roughly few hundred Myr faster than a non-rotating flattened model. We find that counterrotation drives SMBHs to coalesce on a nearly radial orbit promptly after forming a hard binary. We discuss the implications for gravitational wave astronomy, hypervelocity star production, and the effect on the structure of the host galaxy. • ### Voronoi Tessellation and Non-parametric Halo Concentration(1504.04307) April 16, 2015 astro-ph.GA, astro-ph.IM We present and test TesseRACt, a non-parametric technique for recovering the concentration of simulated dark matter halos using Voronoi tessellation. TesseRACt is tested on idealized N-body halos that are axisymmetric, triaxial, and contain substructure and compared to traditional least-squares fitting as well as two non-parametric techniques that assume spherical symmetry. TesseRACt recovers halo concentrations within 0.3% of the true value regardless of whether the halo is spherical, axisymmetric, or triaxial. Traditional fitting and non-parametric techniques that assume spherical symmetry can return concentrations that are systematically off by as much as 10% from the true value for non-spherical halos. TesseRACt also performs significantly better when there is substructure present outside $0.5R_{200}$. Given that cosmological halos are rarely spherical and often contain substructure, we discuss implications for studies of halo concentration in cosmological N-body simulations including how choice of technique for measuring concentration might bias scaling relations. • ### Massive Black Hole Science with eLISA(1410.2907) Jan. 9, 2015 hep-th, hep-ph, gr-qc, astro-ph.HE The evolving Laser Interferometer Space Antenna (eLISA) will revolutionize our understanding of the formation and evolution of massive black holes along cosmic history by probing massive black hole binaries in the $10^3-10^7$ solar mass range out to redshift $z\gtrsim 10$. High signal-to-noise ratio detections of $\sim 10-100$ binary coalescences per year will allow accurate measurements of the parameters of individual binaries (such as their masses, spins and luminosity distance), and a deep understanding of the underlying cosmic massive black hole parent population. This wealth of unprecedented information can lead to breakthroughs in many areas of physics, including astrophysics, cosmology and fundamental physics. We review the current status of the field, recent progress and future challenges. • ### Early Growth in a Perturbed Universe: Exploring Dark Matter Halo Populations in 2LPT and ZA Simulations(1412.4815) Dec. 
15, 2014 astro-ph.CO, astro-ph.GA We study the structure and evolution of dark matter halos from z = 300 to z = 6 for two cosmological N-body simulation initialization techniques. While the second order Lagrangian perturbation theory (2LPT) and the Zel'dovich approximation (ZA) both produce accurate present day halo mass functions, earlier collapse of dense regions in 2LPT can result in larger mass halos at high redshift. We explore the differences in dark matter halo mass and concentration due to initialization method through three 2LPT and three ZA initialized cosmological simulations. We find that 2LPT induces more rapid halo growth, resulting in more massive halos compared to ZA. This effect is most pronounced for high mass halos and at high redshift. Halo concentration is, on average, largely similar between 2LPT and ZA, but retains differences when viewed as a function of halo mass. For both mass and concentration, the difference between typical individual halos can be very large, highlighting the shortcomings of ZA-initialized simulations for high-z halo population studies. • ### Classification of Stellar Orbits in Axisymmetric Galaxies(1412.2134) Dec. 5, 2014 astro-ph.GA It is known that two supermassive black holes (SMBHs) cannot merge in a spherical galaxy within a Hubble time; an emerging picture is that galaxy geometry, rotation, and large potential perturbations may usher the SMBH binary through the critical three-body scattering phase and ultimately drive the SMBH to coalesce. We explore the orbital content within an N-body model of a mildly- flattened, non-rotating, SMBH-embedded elliptical galaxy. When used as the foundation for a study on the SMBH binary coalescence, the black holes bypassed the binary stalling often seen within spherical galaxies and merged on Gyr timescales (Khan et al. 2013). Using both frequency-mapping and angular momentum criteria, we identify a wealth of resonant orbits in the axisymmetric model, including saucers, that are absent from an otherwise identical spherical system and that can potentially interact with the binary. We quantified the set of orbits that could be scattered by the SMBH binary, and found that the axisymmetric model contained nearly seven times the number of these potential loss cone orbits compared to our equivalent spherical model. In this flattened model, the mass of these orbits is roughly 3 times of that of the SMBH, which is consistent with what the SMBH binary needs to scatter to transition into the gravitational wave regime. • ### Effects of Inclination on Measuring Velocity Dispersion and Implications for Black Holes(1405.0286) Sept. 23, 2014 astro-ph.GA The relation of central black hole mass and stellar spheroid velocity dispersion (the M-$\sigma$ relation) is one of the best-known and tightest correlations linking black holes and their host galaxies. There has been much scrutiny concerning the difficulty of obtaining accurate black hole measurements, and rightly so; however, it has been taken for granted that measurements of velocity dispersion are essentially straightforward. We examine five disk galaxies from cosmological SPH simulations and find that line-of-sight effects due to galaxy orientation can affect the measured $\sigma$ by 30%, and consequently black hole mass predictions by up to 1.0 dex. 
Face-on orientations correspond to systematically lower velocity dispersion measurements, while more edge-on orientations give higher velocity dispersions, due to contamination by disk stars when measuring line of sight quantities. We caution observers that the uncertainty of velocity dispersion measurements is at least 20 km/s, and can be much larger for moderate inclinations. This effect may account for some of the scatter in the locally measured M-$\sigma$ relation, particularly at the low-mass end. We provide a method for correcting observed $\sigma_{\rm los}$ values for inclination effects based on observable quantities. • ### Expansion Techniques for Collisionless Stellar Dynamical Simulations(1406.4254) June 17, 2014 astro-ph.IM We present GPU implementations of two fast force calculation methods, based on series expansions of the Poisson equation. One is the Self-Consistent Field (SCF) method, which is a Fourier-like expansion of the density field in some basis set; the other is the Multipole Expansion (MEX) method, which is a Taylor-like expansion of the Green's function. MEX, which has been advocated in the past, has not gained as much popularity as SCF. Both are particle-field methods optimized for collisionless galactic dynamics, but while SCF is a "pure" expansion, MEX is an expansion in just the angular part; it is thus capable of capturing radial structure easily, where SCF needs a large number of radial terms. We show that despite the expansion bias, these methods are more accurate than direct techniques for the same number of particles. The performance of our GPU code, which we call ETICS, is profiled and compared to a CPU implementation. On the tested GPU hardware, a full force calculation for one million particles took ~ 0.1 seconds (depending on expansion cutoff), making simulations with as many as $10^8$ particles fast on a comparatively small number of nodes. • ### Ultramassive Black Hole Coalescence(1405.6425) May 25, 2014 astro-ph.GA Although supermassive black holes (SMBHs) correlate well with their host galaxies, there is an emerging view that outliers exist. Henize 2-10, NGC 4889, and NGC1277 are examples of SMBHs at least an order of magnitude more massive than their host galaxy suggests. The dynamical effects of such ultramassive central black holes are unclear. Here, we perform direct N-body simulations of mergers of galactic nuclei where one black hole is ultramassive to study the evolution of the remnant and the black hole dynamics in this extreme regime. We find that the merger remnant is axisymmetric near the center, while near the large SMBH influence radius, the galaxy is triaxial. The SMBH separation shrinks rapidly due to dynamical friction, and quickly forms a binary black hole; if we scale our model to the most massive estimate for the NGC1277 black hole, for example, the timescale for the SMBH separation to shrink from nearly a kiloparsec to less than a parsec is roughly 10 Myr. By the time the SMBHs form a hard binary, gravitational wave emission dominates, and the black holes coalesce in a mere few Myr. Curiously, these extremely massive binaries appear to nearly bypass the 3-body scattering evolutionary phase. Our study suggests that in this extreme case, SMBH coalescence is governed by dynamical friction followed nearly directly by gravitational wave emission, resulting in a rapid and efficient SMBH coalescence timescale. We discuss the implications for gravitational wave event rates and hypervelocity star production.
• ### Bar Formation from Galaxy Flybys(1405.5832) May 22, 2014 astro-ph.GA Recently, both simulations and observations have revealed that flybys - fast, one-time interactions between two galaxy halos - are surprisingly common, with a frequency nearing or even comparable to that of galaxy mergers. Since these are rapid, transient events with the closest approach well outside the galaxy disk, it is unclear if flybys can transform the galaxy in a lasting way. We conduct collisionless N-body simulations of three co-planar flyby interactions between pure-disk galaxies to take a first look at the effects flybys have on disk structure, with particular focus on stellar bar formation. We find that some flybys are capable of inciting a bar, with bars forming in both galaxies during our 1:1 interaction and in the secondary during our 10:1 interaction. The bars formed have ellipticities >0.5, sizes on the order of the host disk's scale length, and persist to the end of our simulations, ~5 Gyr after pericenter. The ability of flybys to incite bar formation implies that many processes associated with secular bar evolution may be more closely tied with interactions than previously thought. • ### Hypervelocity Star Candidates in the SEGUE G & K Dwarf Sample(1308.3495) Jan. 3, 2014 astro-ph.GA We present 20 candidate hypervelocity stars from the Sloan Extension for Galactic Understanding and Exploration (SEGUE) G and K dwarf samples. Previous searches for hypervelocity stars have only focused on large radial velocities; in this study we also use proper motions to select the candidates. We determine the hypervelocity likelihood of each candidate by means of Monte Carlo simulations, considering the significant errors often associated with high proper motion stars. We find that nearly half of the candidates exceed their escape velocities with at least 98% probability. Every candidate also has less than a 25% chance of being a high-velocity fluke within the SEGUE sample. Based on orbits calculated using the observed six-dimensional positions and velocities, few, if any, of these candidates originate from the Galactic center. If these candidates are truly hypervelocity stars, they were not ejected by interactions with the Milky Way's supermassive black hole. This calls for a more serious examination of alternative hypervelocity-star ejection scenarios. • ### A First Look at Galaxy Flyby Interactions: Characterizing the Frequency of Flybys in a Cosmological Context(1306.1548) June 6, 2013 astro-ph.CO, astro-ph.GA Hierarchical structure formation theory is based on the notion that mergers drive galaxy evolution, so a considerable framework of semi-analytic models and N-body simulations has been constructed to calculate how mergers transform a growing galaxy. However, galaxy mergers are only one type of major dynamical interaction between halos -- another class of encounter, a close flyby, has been largely ignored. We analyze a 50 Mpc/h, $1024^3$ collisionless cosmological simulation and find that the number of close flyby interactions is comparable to, or even surpasses, the number of mergers for halo masses $\ga 10^{11}\,{h^{-1} M_\odot}$ at $z \la 2$. Halo flybys occur so frequently to high mass halos that they are continually perturbed, unable to reach a dynamical equilibrium. We also find tentative evidence that at high redshift, $z \ga 14$, flybys are as frequent as mergers.
Our results suggest that close halo flybys can play an important role in the evolution of the earliest dark matter halos and their galaxies, and can still influence galaxy evolution at the present epoch. • ### Supermassive Black Hole Binary Evolution in Axisymmetric Galaxies: the final parsec problem is not a problem(1302.1871) Feb. 7, 2013 astro-ph.GA During a galaxy merger, the supermassive black hole (SMBH) in each galaxy is thought to sink to the center of the potential and form a supermassive black hole binary; this binary can eject stars via 3-body scattering, bringing the SMBHs ever closer. In a static spherical galaxy model, the binary stalls at a separation of about a parsec after ejecting all the stars in its loss cone -- this is the well-known final parsec problem. Earlier work has shown that the centrophilic orbits in triaxial galaxy models are key in refilling the loss cone at a high enough rate to prevent the black holes from stalling. However, the evolution of binary SMBHs has never been explored in axisymmetric galaxies, so it is not clear if the final parsec problem persists in these systems. Here we use a suite of direct N-body simulations to follow SMBH binary evolution in galaxy models with a range of ellipticity. For the first time, we show that mere axisymmetry can solve the final parsec problem; we find that the SMBH evolution is independent of N for an axis ratio of c/a=0.8, and that the SMBH binary separation reaches the gravitational radiation regime for c/a=0.75. • ### Can a Satellite Galaxy Merger Explain the Active Past of the Galactic Center?(1107.2923) Feb. 6, 2013 astro-ph.CO, astro-ph.GA Observations of the Galactic Center (GC) have accumulated a multitude of "forensic" evidence indicating that several million years ago the center of the Milky Way galaxy was teeming with star-forming and accretion-powered activity -- this paints a rather different picture from the GC as we understand it today. We examine the possibility that this epoch of activity could have been triggered by the infall of a satellite galaxy into the Milky Way, which began at a redshift of 10 and ended a few million years ago with a merger of the Galactic supermassive black hole with an intermediate mass black hole brought in by the inspiralling satellite. • ### The HI environment of the M101 group(1210.8333) Nov. 27, 2012 astro-ph.CO We present a wide (8.5x6.7 degree, 1050x825 kpc), deep (sigma(N_HI)=10^(16.8-17.5) cm^-2) neutral hydrogen (HI) map of the M101 galaxy group. We identify two new HI sources in the group environment, one an extremely low surface brightness (and hitherto unknown) dwarf galaxy, and the other a starless HI cloud, possibly primordial in origin. Our data show that M101's extended HI envelope (Huchtmeier & Witzel 1979) takes the form of a ~100 kpc long tidal loop or plume of HI extending to the southwest of the galaxy. The plume has an HI mass ~ 10^8 Msun and a peak column density of N_HI=5x10^17 cm^-2, and while it rotates with the main body of M101, it shows kinematic peculiarities suggestive of a warp or flaring out of the rotation plane of the galaxy. We also find two new HI clouds near the plume with masses ~ 10^7 Msun, similar to HI clouds seen in the M81/M82 group, and likely also tidal in nature. Comparing to deep optical imaging of the M101 group, neither the plume nor the clouds have any extended optical counterparts down to a limiting surface brightness of mu_B = 29.5.
We also trace HI at intermediate velocities between M101 and NGC 5474, strengthening the case for a recent interaction between the two galaxies. The kinematically complex HI structure in the M101 group, coupled with the optical morphology of M101 and its companions, suggests that the group is in a dynamically active state that is likely common for galaxies in group environments.
2020-11-28 17:21:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5258016586303711, "perplexity": 2279.717562063187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195687.51/warc/CC-MAIN-20201128155305-20201128185305-00138.warc.gz"}
https://www.flyingcoloursmaths.co.uk/sum-five-squares-never-square/
# Why is the sum of five consecutive squares never a square?

In a recent Wrong But Useful podcast, @reflectivemaths (who is Dave Gale in real life) asked the audience to: Prove that the sum of five consecutive square numbers is never a square.

This one's a bit easier than it looks: I chose to call the middle number of the five $n$. That makes my sum: $(n-2)^2 + (n-1)^2 + n^2 + (n+1)^2 + (n+2)^2$. The reason I've done it like that is that I get a lot of cancelling if I do! In fact, when I expand the brackets, all of the linear $n$ terms vanish and I'm left with $5n^2 + 10 = 5(n^2 + 2)$. Because the lone factor of 5 would need to appear to an even power in any perfect square, that can only be a square if $n^2 + 2$ is itself a multiple of five - so $n^2$ must end in 3 or 8. Sadly, all squares end in either 0, 1, 4, 5, 6, or 9, so there's no such $n$.

## Colin

Colin is a Weymouth maths tutor, author of several Maths For Dummies books and A-level maths guides. He started Flying Colours Maths in 2008. He lives with an espresso pot and nothing to prove.

### 3 comments on "Why is the sum of five consecutive squares never a square?"

• ##### Cav Bit shorter than your usual posts!
• ##### Colin Never mind the length, feel the quality!
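As a quick numerical sanity check of the algebra above (an editorial illustration, not part of Colin's original post), one can confirm that $5(n^2 + 2)$ is never a perfect square over a large range of $n$:

```python
from math import isqrt

def is_square(m):
    r = isqrt(m)
    return r * r == m

# The sum of five consecutive squares centred on n simplifies to 5*(n**2 + 2).
counterexamples = [n for n in range(2, 1_000_000) if is_square(5 * (n * n + 2))]
print(counterexamples)  # expected output: []
```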
2020-02-18 12:02:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47358158230781555, "perplexity": 1713.3546452363794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143695.67/warc/CC-MAIN-20200218120100-20200218150100-00556.warc.gz"}
https://codereview.stackexchange.com/questions/106231/random-forest-code-optimization
# Random Forest Code Optimization

I am new to Python. I have built a model with random forest in Python, but I think my code is not optimized. Please look into my code and suggest if I have deviated from best practices.

Overview of the data I have: The data has response columns and predictor columns. Also there is a column 'TestOrTrainingDataRandom' which specifies the test and training data. (There are also columns like index, Timestamp, etc. which have to be removed.) The predictor columns start at '3000' and end at '3680' with a step increase of 5 (i.e. in total there are 137 predictor columns), but some predictor columns are missing, so the missing predictor columns have been interpolated.

Here is my code. Please give feedback.

    from sklearn.ensemble import RandomForestClassifier  # Package for random forest classification
    import pandas as pd
    from sklearn.metrics import classification_report, confusion_matrix

    # Creating response column
    data["response"] = None
    s = pd.Series(["verylow", "low", "medium", "high", "veryhigh"], dtype="category")
    n = 0
    for row in data["H2S"]:
        if row < 120:
            data['response'].iloc[n] = s[0]  # Assign 'verylow' if H2S concentration is less than 120 and do indexing
            n = n + 1
        elif row >= 120 and row < 500:
            data['response'].iloc[n] = s[1]  # Assign 'low' if H2S concentration is between 120 and 500 and also do indexing
            n = n + 1
        elif row >= 500 and row < 1000:
            data['response'].iloc[n] = s[2]  # Assign 'medium' if H2S concentration is between 500 and 1000 and also do indexing
            n = n + 1
        elif row >= 1000 and row < 1300:
            data['response'].iloc[n] = s[3]  # Assign 'high' if H2S concentration is between 1000 and 1300 and also do indexing
            n = n + 1
        else:
            data['response'].iloc[n] = s[4]  # Assign 'veryhigh' if H2S concentration is greater than 1300 and do indexing
            n = n + 1

    # Create the training & test sets
    b = len(data)
    a = data.TestOrTrainingDataRandom[data.TestOrTrainingDataRandom == 1].count()  # Count the number of training data
    new_data = data.drop(data.columns[[0, 1, 2, 3, 4, 5, 6, 7, 8, 121, 120, 119, 118]], axis=1)
    colnames = list(new_data)
    len_column = len(new_data.columns)
    len_iteration = len_column - 1
    j = 3000; i = 0; k = 0
    new_col = pd.DataFrame(index=range(0, b), columns=['temp'])

    # To insert missing columns
    while i < len_iteration:
        if int(colnames[i]) == j:
            i = i + 1
            j = j + 5
        else:
            for k in range(0, b):
                new_col.iloc[k] = (new_data.iloc[k, i - 1] + new_data.iloc[k, i + 1]) / 2
            new_data.insert(i, str(j), new_col)
            colnames = list(new_data)
            j = j + 5
            i = i + 1
            len_iteration = len_iteration + 1

    new_data.insert(1, "dataselection_col", data['TestOrTrainingDataRandom'])
    new_data.insert(1, 'response', data['response'])
    train = pd.DataFrame(index=range(0, a), columns=list(new_data))          # Creating dataframe for training data
    test = pd.DataFrame(index=range(0, abs(a - b)), columns=list(new_data))  # Creating dataframe for test data
    m = 0; n = 0; p = 0
    for value in new_data['dataselection_col']:
        if value == 1:
            train.iloc[m] = new_data.irow(n)  # If 'TestOrTrainingDataRandom' column has 1, then append that row data to train and also do indexing
            n = n + 1
            m = m + 1
        else:
            test.iloc[p] = new_data.irow(n)   # If 'TestOrTrainingDataRandom' column has other than 1, then append that row data to test and also do indexing
            p = p + 1
            n = n + 1

    trainRes = train['response']  # Response column
    Actuals = test['response']    # Actuals
    new_train = train.drop(train.columns[[1, 2]], axis=1)
    colnames = list(new_train)
    trainArr = train.as_matrix(colnames)  # Convert dataframe into array matrix representation

    # For random forest
    rf = RandomForestClassifier(n_estimators=100)  # Random forest generation for Classification
    rf.fit(trainArr, trainRes)  # Fit the random forest model
    testArr = test.as_matrix(colnames)
    results = rf.predict(testArr)
    print 'confusion matrix\n', confusion_matrix(Actuals, results)

• Welcome to Code Review! By optimised, do you mean you want to improve performance in particular? – SuperBiasedMan Oct 1 '15 at 10:10
• @SuperBiasedMan since I'm new to Python I would like to know whether I have deviated from the best practices that are normally followed. Also please suggest possible ways to improve the performance. – Lurch Oct 1 '15 at 10:16

I have notes on style and redundancies, but you should read the Python Style Guide: PEP0008. It has a lot of good info on how to format your code to be clear and readable for yourself and others. I'll miss pointing out some of its recommendations, so do read it too.

You have a lot of unnecessary inline comments. from package import Class makes it pretty clear that you're importing Class from package. Python is designed to be readable so that you don't need to explicitly tell people these things. I'd remove most of them.

In particular, with your long if and elif chains it'd be better to cut down on repetition, both by having only one comment at the top, and by putting n = n + 1 at the end. Though, you can rewrite that as n += 1, and you should put spaces around operators; it makes them easier to read:

    for row in data["H2S"]:  # Assign response based on H2S concentration and do indexing
        if row < 120:
            data['response'].iloc[n] = s[0]
        elif row >= 120 and row < 500:
            data['response'].iloc[n] = s[1]
        elif row >= 500 and row < 1000:
            data['response'].iloc[n] = s[2]
        elif row >= 1000 and row < 1300:
            data['response'].iloc[n] = s[3]
        else:
            data['response'].iloc[n] = s[4]
        n += 1

But actually you could just use enumerate for your for loop instead. It allows you to do the same as what you're using n for: it contains the number of the iteration you're on. So you can save a line and just do this:

    for n, row in enumerate(data["H2S"]):  # Assign response based on H2S concentration and do indexing
        if row < 120:
            data['response'].iloc[n] = s[0]
        ...
        else:
            data['response'].iloc[n] = s[4]

No need for the n += 1 any more.

You never use len_column except to assign to len_iteration, so just assign directly to len_iteration:

    len_iteration = len(new_data.columns) - 1

You shouldn't use the semicolon line separator, it's rarely a good idea and often just looks unPythonic. Anyway, Python lets you assign multiple values at once by comma separating them:

    j, i, k = 3000, 0, 0

Having them out of alphabetical order is a bit confusing though. You'd be better giving these meaningful names since their usage isn't simple.

In your while loop you increment i and j in both conditions, so you should just do that at the end and just run the middle block if your condition is False:

    while i < len_iteration:
        if int(colnames[i]) != j:
            for k in range(0, b):
                new_col.iloc[k] = (new_data.iloc[k, i - 1] + new_data.iloc[k, i + 1]) / 2
            new_data.insert(i, str(j), new_col)
            colnames = list(new_data)
            len_iteration += 1
        j += 5
        i += 1

That block could really do with some comments though. Especially this line:

    new_col.iloc[k] = (new_data.iloc[k, i - 1] + new_data.iloc[k, i + 1]) / 2

I have no idea what that's doing.

You also use m, n and p for indexing. If you're not going to give them more meaningful names, you can at least re-use i, j and k which are more commonly used for looping over indices.
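An editorial addition, not part of the original answer: the response-binning loop and the manual train/test split could also be vectorised with pandas. The column names (data, H2S, new_data, dataselection_col) and the thresholds are taken from the question; the rest is a sketch under the assumption that the data is an ordinary DataFrame.

```python
import pandas as pd

# Bin the H2S concentration in one step instead of looping row by row.
# right=False gives half-open intervals [low, high), matching the original if/elif chain.
bins = [float("-inf"), 120, 500, 1000, 1300, float("inf")]
labels = ["verylow", "low", "medium", "high", "veryhigh"]
data["response"] = pd.cut(data["H2S"], bins=bins, labels=labels, right=False)

# Boolean indexing replaces the manual row-copying loop for the train/test split.
train = new_data[new_data["dataselection_col"] == 1]
test = new_data[new_data["dataselection_col"] != 1]
```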
2019-11-19 01:40:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2790193259716034, "perplexity": 4126.787106878697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669868.3/warc/CC-MAIN-20191118232526-20191119020526-00509.warc.gz"}
http://clay6.com/qa/49643/6-boys-and-6-girls-sit-in-a-row-at-random-the-probability-that-all-the-girl
# 6 boys and 6 girls sit in a row at random. The probability that all the girls sit together is

(A) $\large\frac{1}{432}$ (B) $\large\frac{12}{432}$ (C) $\large\frac{1}{132}$ (D) None of these

Step 1: Treat the 6 girls as a single block. The block together with the 6 boys gives 7 units, which can be arranged in a row in 7! ways, and the girls can be ordered within the block in 6! ways. Altogether, the 12 people can be seated in 12! equally likely ways.

Step 2: $\therefore$ Required probability $=\large\frac{7!\times 6!}{12!}$ $\Rightarrow \large\frac{1}{132}$

Hence (C) is the correct answer.
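A quick brute-force check of this result (an editorial illustration, not part of the original solution): since every seating is equally likely, it is enough to look at which 6 of the 12 seats the girls occupy.

```python
from itertools import combinations
from fractions import Fraction

favourable = total = 0
for seats in combinations(range(12), 6):  # every equally likely set of seats for the girls
    total += 1
    # six distinct seats are consecutive exactly when max - min == 5
    favourable += (max(seats) - min(seats) == 5)
print(Fraction(favourable, total))  # prints 1/132
```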
2016-12-11 02:20:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6133461594581604, "perplexity": 608.6268062035259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698543782.28/warc/CC-MAIN-20161202170903-00378-ip-10-31-129-80.ec2.internal.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/probability-density-function-random-variable-x-given-following-k-constant-determine-probab-q4155257
Need help solving these problems. Need step by step process. Image text transcribed for accessibility: The probability density function of a random variable X is given by the following: where k; is a constant. Determine the probability that X is between 0 and 1/2 given that it is between 1/4 and 1. A certain measured voltage is a Gaussian random variable with mean = 50 V and standard deviation = 5 V. Determine the probability that the measured voltage will be between 42 V and 52 V. A box contains balls that are identical except for color. Initially the box contains three blue and two green balls. A ball is drawn at random from the box and its color is observed. This first ball is then put back in the box, and then four balls of the other color are added to the box. After the extra balls are added, a ball is drawn at random from the box. Given that this ball is blue, determine the probability that the first ball drawn from the box was green. The probability density function of a random variable X is given by the following: Determine the algebraic formula or formulas (in closed form) for the cumulative distribution function of X, and sketch a graph of the cdf for -1 < x < 3. A coin is tossed until two "heads" have been obtained or four tosses have been made, whichever comes first. List the elements of the sample space of this experiment. A box contains balls that are identical except for color. Initially the box contains one red and two green balls. A ball is drawn at random from the box and then put back in the box. If the chosen ball was red, three red balls, one green ball, and two blue balls are added to the box. If the chosen ball was green, three green balls and one blue ball are added to the box. After the extra balls are added, a ball is drawn at random from the box. Determine the probability that this ball is red.
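For the Gaussian-voltage item in this problem set, one way to evaluate the probability numerically is through the error function. The sketch below is an editorial illustration rather than a supplied solution; it uses only the stated mean of 50 V and standard deviation of 5 V.

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # CDF of a normal distribution written in terms of the error function
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

mu, sigma = 50.0, 5.0
p = normal_cdf(52.0, mu, sigma) - normal_cdf(42.0, mu, sigma)
print(round(p, 4))  # approximately 0.6006
```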
2014-12-22 16:20:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8812494277954102, "perplexity": 87.80784808518202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775404.88/warc/CC-MAIN-20141217075255-00071-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.gamedev.net/forums/topic/666311-source-code-with-errors-httpideonecomeswkwf/
# Source code with errors: http://ideone.com/eSWKwf

## Recommended Posts

How can I fix these errors? I tried to comment the offending lines out with //, but that only produced different errors.

A few more things which don't cause errors but caught my eye: position could be set to -1 once before the loop instead of every time within the loop; "<= SIZE - 1" is the same as "< SIZE"; and I personally don't like to open namespaces, so instead of "using namespace std" go with std::cout, etc. This is not a problem right now, but will be later on.

Eh people, that looks like homework.

I was about to say the same thing. Also, you should avoid this: using namespace std; and instead, just pull in the symbols you want or reference them directly, e.g. std::cout. Using entire namespaces can introduce variable name clashes. Edit: Ninja'd by Epi... Edited by braindigitalis
2018-01-17 22:38:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22101590037345886, "perplexity": 2921.3435595062556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886979.9/warc/CC-MAIN-20180117212700-20180117232700-00257.warc.gz"}
http://norakvitou.ligeracademyblog.org/
## Modeling & Texturing I first start 3D modeling in 2017 and now is 2019 it been 2 year for me. For that 2 year I learn and explore how to make this make that until 2019 January I start to make some thing our of what I know. I first try to make something small like a Cup Something like that not really look interesting because I’m just a beginner. Now I try to make something look more interesting like this This house I didn’t finish the texture it yet because I just finish modeling. This is Call a Toon shader is texture for cartoon example like Spider man in to spider verse texture. A good texture also need lighting if our light is not look good our model also not look good. Photorealistic is a lot of work because you need to work on texturing, a realistic texture need normal map, bump map, rough, displasment map, spec map and more that I’m not sure. I’m still learning how to texture for 3d model. ## FLL Competition On 12 march 2019 my team been to Singapore to Join FLL Competition. Is my first time on a plane, after we arrived Singapore at 4 pm which is 3 pm in Cambodia, our teacher drop us at our host family. My host family was very friendly, they have 1 son and 1 daughter, they was 7 and 5 year old. On 13 march we (my team) went to the school that where we join the competition. The school name is CIA, the school was big, 13 is not our day to compete so we were look around what are they doing how do they compete the robot and to know the school well because we can get lost in that school cause is so big. On 14 march is our compete day, our time to compete is around 2:00 to 2:40 pm. We were all nervous because there is a lot of team that we need to compete with and we need to present to the Judge, but it went really well on our presentation the Judge love it, on the robot performance my team robot doesn’t work, we only get about 16 to 15 which is very low the point when we practice at school is 166 to 170. It was very disappointed I was so sad on that day. After the competition I didn’t do anything else just go back home get dinner and sleep. On 15 march, our teacher took us to downtown to just to release our stress, we saw a lot of thing, we saw a giant tree but not a real tree, we saw a floating baby statue, and more. In the afternoon Bopha host family invite us to go to his work, he work in Cloudera, Cloudera is just google drive, it keep data in cloud. One thing we wouldn’t believe  that we been to the final both team. On the final day they they lead us upstairs to their basketball court, there were 120 team from different school include us. One fact, we are the only team from Cambodia. They give us two practice round and three round for the final, but only one round is used to compare with other teams. That round is the one that has the most points out of the three final rounds we were given. However on the first two round, my robot didn’t work, but luckily it work on the third round. After the competition round we for 3 hour for the result. When the result come my get one award it call ” Oversea Team Award best performance team” ## Python To understand programming language is not easy. I wouldn’t understand if I’m not practice the programming on something. To make the code work you need to write a lot of function into it such as, for loop, variable, list, dictionary, and etc. Sometime the code is not work properly so you need to find an error and fix that error. 
Some of the things I listed above are from Python, but there are more programming languages, such as Java, JavaScript, C++, HTML, CSS, etc. Different programming languages have different functions. I have learned two versions of Python, Python 2 and Python 3. The difference is that this is how the Python 3 print function looks:

    print("Hello earth")

and this is how the Python 2 print function looks:

    print "Hello earth"

It is a bit confusing sometimes, but the more you practice, the better you get at it. I haven't tried to make a game on my own yet, but first I am following the instructions for some Python games on this website: https://trinket.io/

## ប្រវត្តិអក្សរសាស្រ្តខ្មែរ (History of the Khmer script)

I learned a lot about the Khmer script and where its history comes from. My group researched on the Internet and found that many people give different accounts of the history of the Khmer script, for example:

• The Venerable Im Tieng explains that the Khmer script has its origins in Brahmanism and in Pali and Sanskrit, and also in Mon-Khmer: "Khmer writing has its history, first, from Brahmi, the Brahmi of Brahmanism, which existed before the Angkor era, and our Khmer script also arose partly from Pali and Sanskrit. And another origin is from Mon-Khmer."
• The Venerable Kin Nara, a professor of Pali at Preah Suramarit Buddhist High School, says that the Khmer script was created by our own ancestors, while also being connected in places to Pali and Sanskrit: "Our Khmer script was really created by our own Khmer scholars; we only borrowed the Pali and Sanskrit languages to use."

## Percentage

Percentage means "%", in case you did not know. I found out that percentages are a useful way to talk about numbers: saying that 10% of 10,000,000 is real money and 90% is fake money is a different way of putting it than saying that 1,000,000 is real money and 9,000,000 is fake money out of 10,000,000; see the difference. Percentages are useful but also really hard when you solve percentage word problems.

## PH

This round I learned a lot about pH. pH is the balance of acid and base. An acid is an ionic compound that produces positive hydrogen ions (H+) when dissolved in water. A base is an ionic compound that produces negative hydroxide ions (OH-) when dissolved in water. For example, when dissolved in water: Acid: HCl → H+ + Cl-. Base: NaOH → Na+ + OH-. pH matters to many living things: for example, many plants grow best in soil that has a pH between 6 and 7, and fish also need a pH close to 7. Neutral pH is around 7 to 7.5; water is neutral and has a pH of 7.

## Khmer Round 3

In Khmer class this round our teacher let us research Khmer grammar. In Khmer grammar there are a lot of things to know, so the teacher divided the group to research the subjects: nouns, pronouns, adjectives, verbs, and more.

## Robotic

The robotics here is not an advanced kind; it is Lego robotics, the FIRST Lego League. This year's project is Into Orbit, which is about space problems. The competition works like this: there are two sections, one where we present the space problem and our solution, and one where we complete the missions on the board. There are 15 missions on the board; we need to build our robot to solve as many of the missions as we can, and there are only 2:30 minutes.

## English Literacy round 3

This round my teacher Hannah let us choose a book to read. There are four choices: Looking for Alaska, Wonder, Monster, and The Outsiders. She also assigned roles for an activity while we read: summarizer, visualizer, inference, symbolizer, and word detector.
Within our team, which reads the same book, we rotate through those roles every week. I chose The Outsiders to read because it looked interesting (it is about greasers).

## Stem round 3

In STEM this round we learned about carbon. The topic is part of physical science; in physical science we studied the chemistry of carbon. I learned about the properties of carbon and about hydrocarbons. We learned what carbon is and where it is found (fact: most things are made out of carbon, and most things on Earth have carbon in them).
2019-10-16 19:31:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17129328846931458, "perplexity": 2623.4896836470284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669546.24/warc/CC-MAIN-20191016190431-20191016213931-00337.warc.gz"}
http://www.hpmuseum.org/forum/post-27691.html
Faster inverse gamma and factorial for the WP 34S 02-08-2015, 05:27 PM (This post was last modified: 02-08-2015 05:29 PM by Dieter.) Post: #21 Dieter Senior Member Posts: 1,666 Joined: Dec 2013 RE: Faster inverse gamma and factorial for the WP 34S (02-08-2015 05:04 PM)BarryMead Wrote:  Wow Dieter that is nice. I am going to compile and experiment with this one. Thanks! I just noticed an error in the original listing (now corrected): Code: ... #1/2 RCL*03 - RCL-02    // this line was missing RCL 00 RCL 01 Γ - ... Dieter 02-08-2015, 05:39 PM Post: #22 Bit Member Posts: 265 Joined: Jan 2014 RE: Faster inverse gamma and factorial for the WP 34S Awesome. I can't work on this right now but I'm glad the real experts have taken on the issue. 02-08-2015, 07:10 PM (This post was last modified: 02-08-2015 11:05 PM by Dieter.) Post: #23 Dieter Senior Member Posts: 1,666 Joined: Dec 2013 RE: Faster inverse gamma and factorial for the WP 34S (02-08-2015 05:39 PM)Bit Wrote: (02-08-2015 05:27 PM)Dieter Wrote: Awesome. I can't work on this right now but I'm glad the real experts have taken on the issue. Real experts would have included a perfect solution for arguments at or close to the Gamma minimum. ;-) There was a similar problem when Lambert's W was implemented in XROM, but back then I could provide a solution. Let's see if the community can come up with something similar here. Test cases: Code:    0,88560 31944 10888 70027 88159 00582 5887 => error    0,88560 31944 10888 70027 88159 00582 5888 =>  1,4616 32144 96836 23537 47806 24573 3288 Otherwise it's not perfect. ;-) Dieter Edited to correct the last value 1,4616... 02-08-2015, 08:56 PM Post: #24 rprosperi Senior Member Posts: 2,248 Joined: Dec 2013 RE: Faster inverse gamma and factorial for the WP 34S (02-08-2015 07:10 PM)Dieter Wrote: (02-08-2015 05:39 PM)Bit Wrote:  Awesome. I can't work on this right now but I'm glad the real experts have taken on the issue. Real experts would have included a perfect solution for arguments at or close to the Gamma minimum. ;-) There was a similar problem when Lambert's W was implemented in XROM, but back then I could provide a solution. Let's see if the community can come up with something similar here. Test cases: Code:    0,88560 31944 10888 70027 88159 00582 5887 => error    0,88560 31944 10888 70027 88159 00582 5888 =>  1,4616 32155 73196 15209 53068 05276 6163 Otherwise it's not perfect. ;-) Dieter I'm curious - and I'll be up-front here, I really know little about this topic, other than the barest familiarity of gamma, inverse gamma, digamma, etc. - how is accurarcy for such values important? I am seriously not trying to minimze it; as many people are busily testing and pushing the envelope, it clearly must have some practical application, I (and I suspect other lurkers) would like to know how it applies, and why such precision can have practical use or impact. --Bob Prosperi 02-08-2015, 09:22 PM (This post was last modified: 02-09-2015 06:15 AM by BarryMead.) Post: #25 BarryMead Senior Member Posts: 361 Joined: Feb 2014 RE: Faster inverse gamma and factorial for the WP 34S (02-08-2015 08:56 PM)rprosperi Wrote:  I (and I suspect other lurkers) would like to know how it applies, and why such precision can have practical use or impact. From a mathematical perspective the "algorithm designer" does not know how the algorithm will be used, or how accurate the requirements of it's application will be. 
So he strives to achieve the "Maximum Possible" accuracy given the limits of the floating point number system used in the implementation. Many algorithms exhibit strange behaviors when their results approach minimums, maximums, zero, or infinity, so it is a challenge to keep errors to a minimum over the algorithm's full operational range. Dieter was illustrating how his algorithm performed at the bottom end of this operational range (where the gamma function reaches its minimum value in the positive domain). He demonstrated how the algorithm gracefully gave an error when it reached this lower limit. This is why he needed to show so many significant digits. 02-08-2015, 09:44 PM Post: #26 rprosperi Senior Member Posts: 2,248 Joined: Dec 2013 RE: Faster inverse gamma and factorial for the WP 34S (02-08-2015 09:22 PM)BarryMead Wrote: (02-08-2015 08:56 PM)rprosperi Wrote:  I (and I suspect other lurkers) would like to know how it applies, and why such precision can have practical use or impact. From a mathematical perspective the "Algorithm" designer does not know how the algorithm will be used, or how accurate the requirements of it's application will be. So he strives to achieve the "Maximum Possible" accuracy given the limits of the floating point number system used in the implementation. Sure, this is certainly correct and admirable when the balance of effort and resources are justified by the need. I'm just asking about the need, as I don't see the practical applications driving this need. As truly impressive as the 34S is in its functional breadth, depth and accuracy, there are probably thousands of areas, which if examined with enough close scrutiny, also could need similar tweaking. I am curious why it's warranted here. Note my comments are due to simply not knowing how a function's innaccurcy in the 34th digit matters to any real-world application. Maybe the answer is to simply fix it because we now know it's broken - which is a perfectly fine explanation; it just seems like there is more to it. Thanks for your patience with my curiosity. --Bob Prosperi 02-08-2015, 10:13 PM (This post was last modified: 02-23-2015 02:26 AM by BarryMead.) Post: #27 BarryMead Senior Member Posts: 361 Joined: Feb 2014 RE: Faster inverse gamma and factorial for the WP 34S (02-08-2015 09:44 PM)rprosperi Wrote:  Sure, this is certainly correct and admirable when the balance of effort and resources are justified by the need. I'm just asking about the need, as I don't see the practical applications driving this need. Not everyone "Needs" to put the calculator into Double Precision mode, but when one does s/he expects the results to be more accurate. It is very rare indeed that one NEEDS 34 digits of accuracy, but when you perform an operation in Double Precision mode wouldn't you expect the 34 digit result of any function to be reasonably correct? My point is why ask a person why he NEEDS something? Isn't it better to just assume that he has a valid need, and deliver the results he expects. The alternative would be to inject arbitrary errors assuming that his need for accuracy is not valid. (In this case one would usually add a disclaimer explaining the accuracy limits of the algorithm so that people don't rely on it.) There are rarely "PERFECT" algorithms, so researchers are always STRIVING to improve them balancing factors such as program size, computation speed, and accuracy. Some find these challenges interesting, competitive, and fun, and others may not. 
02-08-2015, 10:54 PM Post: #28 rprosperi Senior Member Posts: 2,248 Joined: Dec 2013 RE: Faster inverse gamma and factorial for the WP 34S (02-08-2015 10:13 PM)BarryMead Wrote: (02-08-2015 09:44 PM)rprosperi Wrote:  Sure, this is certainly correct and admirable when the balance of effort and resources are justified by the need. I'm just asking about the need, as I don't see the practical applications driving this need. Not everyone "Needs" to put the calculator into Double Precision mode, but when one does he expects the results to be more accurate. It is very rare indeed that one NEEDS 34 digits of accuracy, but when you perform an operation in Double Precision mode wouldn't you expect the 34 digit result of any function to reasonably correct? My point is why ask a person why he NEEDS something? Isn't it better to just assume that he has a valid need, and deliver the results he expects. The alternative would be to inject arbitrary errors assuming that his need for accuracy is not valid. (In this case you would need to add a disclaimer explaining the accuracy limits of the algorithm so that people don't rely on it.) There are rarely "PERFECT" algorithms, so researchers are always STRIVING to improve them balancing factors such as program size, computation speed, and accuracy. Some find these challenges interesting, competitive, and fun, and others may not. Thanks for explaining that viewpoint Barry, it helps. I think I asked my question the wrong way. It seems it came out sounding like "why bother doing this?", which was not my intent. I guess I meant to ask closer to "what application benefits from these functions having such precise capabilities?" I realize that this thread is more about building accurate and precise tools to be used by others for who knows what purpose, I'm just fishing for some of those purposes, if anyone here knows. Maybe not. Even if that's the case, it is rewarding to see such interesting collaboration and contributions to continually sharpen the tools. --Bob Prosperi 02-08-2015, 11:06 PM (This post was last modified: 02-09-2015 06:16 AM by BarryMead.) Post: #29 BarryMead Senior Member Posts: 361 Joined: Feb 2014 RE: Faster inverse gamma and factorial for the WP 34S (02-08-2015 10:54 PM)rprosperi Wrote:  Thanks for explaining that viewpoint Barry, it helps. I think I asked my question the wrong way. It seems it came out sounding like "why bother doing this?", which was not my intent. I guess I meant to ask closer to "what application benefits from these functions having such precise capabilities?" I realize that this thread is more about building accurate and precise tools to be used by others for who knows what purpose, I'm just fishing for some of those purposes, if anyone here knows. Maybe not. Even if that's the case, it is rewarding to see such interesting collaboration and contributions to continually sharpen the tools. To be honest I don't know what one would use a high accuracy inverse gamma function for. I have never needed one. I am sure that in some obscure corner of science people use this function and need it to be accurate, but I couldn't tell you off the top of my head how it would relate to a real-world application or need. As a nuclear physicist Walter probably uses more of the functions in the WP-34S for his everyday computations than just about anyone in this forum. Perhaps he or someone else in this forum can answer your question. 
Even though I don't personally know how/why the function is needed, I can appreciate the elegance of it's implementation like looking at fine painting or sculpture. I am relatively new to "Algorithm Design" myself. I did help fix the WP34S's complex hyperbolic tangent function, and Polar to Rectangular function, and helped Torsten Manz improve several algorithms in his HP-15C simulator including the Gamma function, and most of the complex trig, and hyperbolic trig functions here, but I am by no means an expert. I could not have written the algorithms that Bit and Dieter submitted. I know just enough to begin to appreciate some of the techniques used to keep the errors small, but not enough to optimize them for program size or speed. There is a real science and art to "Algorithm Design". I own several books on the subject and it gets deep pretty fast. 02-09-2015, 06:50 AM (This post was last modified: 02-09-2015 06:51 AM by Dieter.) Post: #30 Dieter Senior Member Posts: 1,666 Joined: Dec 2013 RE: Faster inverse gamma and factorial for the WP 34S (02-08-2015 11:06 PM)BarryMead Wrote:  To be honest I don't know what one would use a high accuracy inverse gamma function for. I have never needed one. Squeezing out the best possible, most accurate result is the fun of it. Who needs a real world application? ;-) (02-08-2015 11:06 PM)BarryMead Wrote:  I could not have written the algorithms that Bit and Dieter submitted. I know just enough to begin to appreciate some of the techniques used to keep the errors small, That's about the level I'm working on, as far as the Gamma/Digamma function is concerned. ;-) The tricky part still remains unsolved: getting accurate results for arguments extremely close to the Gamma minimum at 1,46163... etc. The current algorithm could do this, provided the Digamma function is sufficiently accurate and the working precision is about 53 (!) digits (required for Gamma of a 34-digit argument). So there still is a lot to do. Dieter 02-16-2015, 07:07 AM Post: #31 Dieter Senior Member Posts: 1,666 Joined: Dec 2013 RE: Faster inverse gamma and factorial for the WP 34S (02-08-2015 07:02 AM)Paul Dale Wrote:  If the digamma route is interesting and custom images are acceptable, I've implemented this function both in xrom (trunk/xrom/digamma.wp34s) and native C (trunk/unused/digamma.c). I would support this suggestion of a high-precision digamma function. Looking at the current code that comes with the emulator (both in the lib and the source in digamma.wp34s) the implementation seems to be a "based on Jean-Marc Baillard's HP-41 digamma code". Here the chosen method (use an asymptotic series expansion up to x8 for all x>8) is fine because it returns approx. 10 valid digits, i.e. the 41's working precision. On a 34s we should expect more. ;-) Example: Ψ(1) = –γ = –0,57721 56649 01532 9. The current code returns –0,57721 56648 94767 0. Ψ(2) =1–γ = –0,42278 43350 98467 1. The current code returns –0,42278 43351 05233 0 So an accurate digamma function still is missing. On the other hand the one or other 34s function is, hmmm... "optional". Consider the inverse of Lambert's W: a simple ex RCLx L does the trick just as well. Dieter 02-16-2015, 05:46 PM Post: #32 Bit Member Posts: 265 Joined: Jan 2014 RE: Faster inverse gamma and factorial for the WP 34S (02-16-2015 07:07 AM)Dieter Wrote:  So an accurate digamma function still is missing. On the other hand the one or other 34s function is, hmmm... "optional". 
Consider the inverse of Lambert's W: a simple ex RCLx L does the trick just as well. The built-in digamma function that Pauli mentioned, and which isn't available by default, is more accurate. Perhaps up to 33 digits in double precision mode if I see it correctly. If you don't have a build environment but would like to play with the built-in function, I can send you some binaries that have it enabled. 02-16-2015, 07:09 PM (This post was last modified: 02-16-2015 07:11 PM by Dieter.) Post: #33 Dieter Senior Member Posts: 1,666 Joined: Dec 2013 RE: Faster inverse gamma and factorial for the WP 34S (02-16-2015 05:46 PM)Bit Wrote:  The built-in digamma function that Pauli mentioned, and which isn't available by default, is more accurate. Perhaps up to 33 digits in double precision mode if I see it correctly. That's what should be expected with the internal 39-digit precision of C-coded functions. ;-) Back to the current solution: The Ψ code in the user code library was designed for 10-digit accuracy on a 41 series calculator. It uses the asymptotic method given in Abramowitz & Stegun (1972) 6.3.18 for x > 8 and four terms of the series. Smaller arguments are evaluated via Ψ(x+1) = Ψ(x)+1/x. For the 34s I suggest using x > 16 (in SP) and six terms, which – when evaluated with sufficient precision – should provide an absolute error order of 10–18. The results are even better with a slight tweak: instead of B12 = –691/2730 I simply use –1/4. ;-) Here is some experimental code that should work for positive arguments. It is not yet complete and cannot replace the current Ψ routine, but I think it points into the right direction: Code: LBL"PSI" #016 DBL? x² STO 01 STO-01 x<> Y 1/x STO+01 x<> L INC X x<? Y BACK 005 STO 00 1/x x² FILL #004 +/- / #011 1/x + x #020 1/x - x #021 1/x + x #010 1/x - x INC X x #012 / RCL 00 STO+X 1/x + RCL 00 LN RCL-01 x<> Y - RTN Here are some results: Code: SP mode:   1            XEQ"PSI"  -0,5772156649015329   exact   2            XEQ"PSI"   0,4227843350984671   exact  20            XEQ"PSI"   2,970523992242149    exact 1,46163211449  XEQ"PSI"  -2,949306481 E-8      7 digits correct (abs. err ~ 1E-15) DP mode:   1            XEQ"PSI"  -0,5772156649015328606065120900824037   32 digits correct   2            XEQ"PSI"   0,4227843350984671393934879099175963   32 digits correct  20            XEQ"PSI"   2,970523992242149050877256978825982    33 digits correct 1,46163211449  XEQ"PSI"  -2,9493065735632104820662566586 E-8     25 digits correct (abs. err ~ 3E-33) (02-16-2015 05:46 PM)Bit Wrote:  If you don't have a build environment but would like to play with the built-in function, I can send you some binaries that have it enabled. Thank you very much. But let me try first what can be done with user code. The mentioned results might give a first impression, now let's see what happens if the results move very close to zero, i.e. near the Gamma minimum at 1,4616... I wonder what the internal digamma function might return for Ψ(1,4616321449768362) and ...363? Can you post the results with all 34 digits? Dieter 02-16-2015, 07:30 PM Post: #34 Bit Member Posts: 265 Joined: Jan 2014 RE: Faster inverse gamma and factorial for the WP 34S (02-16-2015 07:09 PM)Dieter Wrote: (02-16-2015 05:46 PM)Bit Wrote:  The built-in digamma function that Pauli mentioned, and which isn't available by default, is more accurate. Perhaps up to 33 digits in double precision mode if I see it correctly. That's what should be expected with the internal 39-digit precision of C-coded functions. 
;-) I should've clarified that I tried the XROM code (trunk/xrom/digamma.wp34s), not the C version. (02-16-2015 07:09 PM)Dieter Wrote:  I wonder what the internal digamma function might return for Ψ(1,4616321449768362) and ...363? Can you post the results with all 34 digits? Here you go, only 21 correct digits in these cases (the omitted digits are zeros): 8.199 917 911 936 391 396 113 e-12 8.200 014 679 160 935 407 842 e-12 02-16-2015, 08:46 PM Post: #35 Paul Dale Senior Member Posts: 1,237 Joined: Dec 2013 RE: Faster inverse gamma and factorial for the WP 34S (02-16-2015 07:09 PM)Dieter Wrote:  I wonder what the internal digamma function might return for Ψ(1,4616321449768362) and ...363? Can you post the results with all 34 digits? I don't think I ever tuned up the C version of the digamma routine. The series expansion there is relatively short and aimed for single precision. The XROM version uses significantly more terms (if you are in double precision). I also don't remember why I stored the series constants as reciprocals in XROM -- probably saving a few bytes but slower. - Pauli 02-16-2015, 09:00 PM (This post was last modified: 02-16-2015 09:15 PM by Dieter.) Post: #36 Dieter Senior Member Posts: 1,666 Joined: Dec 2013 RE: Faster inverse gamma and factorial for the WP 34S (02-16-2015 07:30 PM)Bit Wrote: (02-16-2015 07:09 PM)Dieter Wrote:  I wonder what the internal digamma function might return for Ψ(1,4616321449768362) and ...363? Can you post the results with all 34 digits? Here you go, only 21 correct digits in these cases (the omitted digits are zeros): 8.199 917 911 936 391 396 113 e-12 8.200 014 679 160 935 407 842 e-12 Oh, I see there is a typo in the argument, there is one digit too much. It should have been Ψ(1,461632144968362) resp. ...363. This should return two results close to zero, but with opposite signs. Anyway, I tried the same cases with my digamma version and indeed I got three more nonzero digits. But not more the same 21 correct ones as in digamma.wp34s. #-) Regarding the latter version, please see my reply to Pauli's post. Dieter 02-16-2015, 09:09 PM (This post was last modified: 02-16-2015 09:22 PM by Dieter.) Post: #37 Dieter Senior Member Posts: 1,666 Joined: Dec 2013 RE: Faster inverse gamma and factorial for the WP 34S (02-16-2015 08:46 PM)Paul Dale Wrote:  I don't think I ever tuned up the C version of the digamma routine. The series expansion there is relatively short and aimed for single precision. The XROM version uses significantly more terms (if you are in double precision). Yes, I just looked at the code in digamma.wp34s. Generally there is a tradeoff between the number of terms used and the threshold for the minimum x used for the series. The current version has a constant threshold of 10 (SP) resp. 20 (DP) and a fixed number of terms in the series. This number of terms can be substantially reduced if the threshold is increased. I am using 16 in SP resp. 256 (!) in DP, combined with merely six terms (up to x^12) with good results and a similar accuracy level (cf. my reply to Bit's post). IMHO fine-tuning this relation (threshold vs. number of terms) is the clue to adequate accuracy. The major relevant problem that still exists is accuracy for results very close to zero. Some additional digits can be squeezed out by carefully rearranging the order intermediate results are added together. But much is lost due to digit cancelling, so a certain absolute error remains. Which reduces the number of valid digits in results close to zero. 
(02-16-2015 08:46 PM)Paul Dale Wrote:  I also don't remember why I stored the series constants as reciprocals in XROM -- probably saving a few bytes but slower. I wonder where these constants (recalled by CNST→J) are stored. Is it possible to generate custom constants in XROM code? Dieter 02-16-2015, 09:43 PM Post: #38 Paul Dale Senior Member Posts: 1,237 Joined: Dec 2013 RE: Faster inverse gamma and factorial for the WP 34S (02-16-2015 09:09 PM)Dieter Wrote:  I wonder where these constants (recalled by CNST→J) are stored. They are in compile_consts.c: Code: #ifdef INCLUDE_XROM_DIGAMMA     SYSCONST("DG02",    "DG02",        "-12"),     SYSCONST("DG04",    "DG04",        "120"),     SYSCONST("DG06",    "DG06",        "-252"),     SYSCONST("DG08",    "DG08",        "240"),     SYSCONST("DG10",    "DG10",        "-132"),     SYSCONST("DG12",    "DG12",        "47.40955137481910274963820549927641099855282199710564"),     SYSCONST("DG14",    "DG14",        "-12"),     SYSCONST("DG16",    "DG16",        "2.25601327066629803704727674868675698092341719657174"),     SYSCONST("DG18",    "DG18",        "-0.32744432033191237148653885608771969817858526910889"),     SYSCONST("DG20",    "DG20",        "0.03779830594865157407036211922502018773158621163615"), #ifdef XROM_DIGAMMA_DOUBLE_PRECISION     SYSCONST("DG22",    "DG22",        "-0.00355290089208707181751477157164373157576303695789"),     SYSCONST("DG24",    "DG24",        "0.00027719946681748709451809242885375511629640900063"),     SYSCONST("DG26",    "DG26",        "-0.000018238994666613976237629781846424625074665884416450965222"),     SYSCONST("DG28",    "DG28",        "0.000001025707487435377285588673305803802205519095192937062420"),     SYSCONST("DG30",    "DG30",        "-0.000000049868606702005666912020069073686577470166427786528917"),     SYSCONST("DG32",    "DG32",        "0.0000000021169179377466567237109532498135231238180401887003845789189798"),     SYSCONST("DG34",    "DG34",        "-0.0000000000791406916620372916230853867700251482720064744876389233715498"), #endif #endif Quote:Is it possible to generate custom constants in XROM code? As in constants from the CNST function? Then yes. There are a number of such constants in compile_consts.c -- look for most of the SYSCONST entries. Due to some thoughtful code by Marcus, these are stored in either single or double precision -- the smaller format is used if possible. - Pauli « Next Oldest | Next Newest » User(s) browsing this thread: 1 Guest(s)
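The approach discussed in the posts above (push the argument upward with Ψ(x) = Ψ(x+1) - 1/x until it passes a threshold, then apply the asymptotic series of Abramowitz & Stegun 6.3.18) can be sketched as follows. This is only an illustration in ordinary double-precision Python, not the 34s's decimal arithmetic; the threshold of 16 and the six series terms follow Dieter's single-precision suggestion, and results very close to zero still lose digits to cancellation, as noted in the thread.

```python
import math

def psi(x, threshold=16.0):
    """Digamma for x > 0: apply psi(x) = psi(x+1) - 1/x until x >= threshold,
    then use the asymptotic series of Abramowitz & Stegun 6.3.18."""
    acc = 0.0
    while x < threshold:
        acc -= 1.0 / x          # climb up with the recurrence
        x += 1.0
    t = 1.0 / (x * x)
    # six terms of the series (up to x**-12); coefficients are B_2k / (2k)
    series = t * (1.0/12 - t * (1.0/120 - t * (1.0/252 - t * (1.0/240
             - t * (1.0/132 - t * (691.0/32760))))))
    return acc + math.log(x) - 0.5 / x - series

print(psi(1.0))            # approx. -0.5772156649015  (= -gamma)
print(psi(2.0))            # approx.  0.4227843350985  (= 1 - gamma)
print(psi(1.46163211449))  # approx. -2.9493e-8, near the Gamma minimum
```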
2017-12-16 14:56:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.506991982460022, "perplexity": 1836.4222449852616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588251.76/warc/CC-MAIN-20171216143011-20171216165011-00760.warc.gz"}
https://email.esm.psu.edu/pipermail/macosx-tex/2008-February/033938.html
[OS X TeX] Voting on feature requests Alain Schremmer schremmer.alain at gmail.com Fri Feb 8 13:27:54 EST 2008 On Feb 8, 2008, at 9:45 AM, Herbert Schulz wrote: > > On Feb 7, 2008, at 3:22 PM, rhomunu-list at yahoo.fr wrote: > >> Hello, >> I use TeXShop for years now, so that I recommend it to some people >> who switched to the Mac recently... >> But some of them (coming from Kile on Linux) ask for things I have >> 1- is there a way to show the line number in the source file window? > > Howdy, > > This can't be done although you can go to a particular line using > the Edit->Line Number... command (Cmd-L). I believe this is a > feature request. Speaking of which, it seems to me it might be useful to have some kind of straw vote on this list on the desirability of certain features in TeXShop. After all, most TeXShop users probably read this list. In any case, on a scale from 0 to 5, here are my wishes which I am letter letter coding for simplicity of reference, if any. A—line number in the source file window (0) (I usually can figure out, one way or another where the error is but, yes, it would be nice if the Console could point at a line number. On the other hand, I mostly work with included files and the Console can't even point to the right file! So, while I think that the Console needs a lot of work, on the other hand, I don't think it's anywhere near being worth it.) B—font size in the source window (3) (I came very near last Summer to needing font size 24 or probably even more. On the other hand, by the time one gets to that point …) C—change Command I from meaning Source > Font > Italic to a keyboard command for \emph (5) (This on the basis that I use \emph infinitely more than Source > Font > Italic with which, by the way, there is an interesting bug I encounter whenever I hit Command I inadvertently, i.e. when I forget that I am in TeXShop.) D— change Command B from meaning Source > Font > Bold to a keyboard command for \emph (1) (Mostly, though, on the basis of consistency with C.) E—a way to collapse one or more sections' text (5) (For example collapse Section1 title - text, Section 2 title - text, Section 3 title - text to, for instance, Section 1 title - text, Section 2 title, Section 3 title - text.) F—a Typeset button that does NOT save at the same time (5) (See "Save date-time,not Print date" thread.) G—Let the bottom of the source window be standard, with a horizontal scroll bar. (2) (If only to facilitate scroll down with a fully extended window. I am using a patch of TeXShop that someone—I am ashamed to say I can't recall who—wrote and mentioned on this list and sent me last summer. But also, occasionally to be able to scroll horizontally to edit, say, a long formula.) Well, that's it. (For the time being.) I would be curious to see what the people thinks. Hopeful regards --schremmer
2020-09-18 23:24:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8170676231384277, "perplexity": 3470.64843341535}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400189264.5/warc/CC-MAIN-20200918221856-20200919011856-00591.warc.gz"}
https://www.math.su.se/english/education/phd-studies/research-projects/possible-topics-for-phd-theses-in-mathematics-1.129749?cache=%2Fcourses-in-cinema-studies%2Fswedish-film-and-tv-culture
For information about ongoing research at the department, please see the webpages of the research groups and the personal homepages of our researchers. The department of mathematics has three research group in pure mathematics: Algebra, Geometry and Combinatorics, Analysis and Logic. Current and/or potential PhD advisors are Algebra, Geometry and Combinatorics: Gregory Arone, Jörgen Backelin, Alexander Berglund, Jonas Bergström, Rikard Bøgvad, Wushi Goldring, Samuel Lundqvist, Dan Petersen, Boris Shapiro. A few suggestions for PhD topics are presented below. ### Calculus of functors and applications Main supervisor: Gregory Arone The goal of the project is to use calculus of functors, operads, moduli spaces of graphs, and other techniques from algebraic topology, to study spaces of smooth embeddings, and other important spaces. High-dimensional long knots constitute an important family of spaces that I am currently interested in. But it is by no means the only example. Let $\mathbb{R}^m, \mathbb{R}^{m+i}$ be Euclidean spaces. An $m$-dimensional long knot in $\mathbb{R}^{m+i}$ is a smooth embedding $\mathbb{R}^m \hookrightarrow \mathbb{R}^{m+i}$ that agrees with the inclusion outside a compact set. Let $\text{Emb}_c(\mathbb{R}^m, \mathbb{R}^{m+i})$ be the space of all such knots. The overarching goal of this project is to understand the dependence of the space $\text{Emb}_c(\mathbb{R}^m, \mathbb{R}^{m+i})$ on $m$ and $i$. The framework for doing this is provided by orthogonal calculus of functors, that was developed by Michael Weiss. The following are some of the specific objectives of this project. • To analyse the structure to polynomial functors in orthogonal calculus. • To show that it is possible to use orthogonal calculus to study the space of long knots $\text{Emb}_c(\mathbb{R}^m, \mathbb{R}^{m+i})$. • To describe explicitly the derivatives (in the sense of orthogonal calculus) of functors such as $\text{Emb}_c(\mathbb{R}^m, \mathbb{R}^{m+i})$ in terms of moduli spaces of graphs similar to ones introduced by Culler and Vogtmann. The $n$-th derivative functor should be closely related to the moduli space of graphs that are homotopy equivalent to a wedge of $n$ circles. • To show that rationally these derivatives are equivalent to hairy graph complexes that have been shown by Arone-Turchin and Turchin-Willwacher to calculate the rational homotopy of spaces of long knots. • To compare two known $m+1$-fold deloopings of the space $\text{Emb}_c(\mathbb{R}^m, \mathbb{R}^{m+i})$. One of these deloopings, due to Dwyer-Hess, is homotopy-theoretic in nature and is given in terms of mapping spaces between operads. The other one is geometric in nature, and is given in terms of "topological Stiefel manifolds" $TOP(m+i)/TOP(i,m)$. I would like to show that the two deloopings are equivalent when $i\ge 3$, and also, by contrast, that they are not even rationally equivalent when $i=0$. The reason for this is that one delooping has the Pontryagin classes in its rational homotopy, while the other one does not. • To clarify the connection between the delooping of $\text{Emb}_c(\mathbb{R}^m, \mathbb{R}^{m+i})$, and $G_i$ -- the group of self-homotopy equivalences of the sphere $S^{i-1}$. More precisely I want to show that $G_i$ is the limit of the delooping, as $m$ goes to infinity. • To clarify the relationship of the first derivative to topological cyclic homology and to Waldhausen's algebraic K-theory. 
### Algebraic models for spaces and manifolds Main supervisor: Alexander Berglund Algebraic topology studies continuous objects such as spaces or manifolds by attaching discrete invariants to them, e.g., the Euler characteristic, the fundamental group, cohomology groups, etc. As the invariants are refined by adding more algebraic structure, complete classification becomes possible in favorable situations. For example, for closed surfaces the fundamental group is a complete algebraic invariant, for simply connected manifolds the de Rham complex with its wedge product is a complete invariant of the real homotopy type, and for simply connected topological spaces the singular cochain complex with its E-infinity algebra structure is a complete invariant of the integral homotopy type. My research revolves around algebraic models for spaces and their applications. Here are some topics for possible PhD projects within this area: • #### Automorphisms of manifolds The cohomology ring of the automorphism group of a manifold M is the ring of characteristic classes for fiber bundles with fiber M, which is an important tool for classification. Tractable differential graded Lie algebra models can be constructed for certain of these automorphism groups. A possible PhD project here is to further develop these algebraic models, and in particular to further investigate a newfound connection to Kontsevich graph complexes. This will involve a wide variety of tools from algebraic and differential topology as well as representation theory and homological algebra. • #### String topology and free loop spaces The space of strings in a manifold carries important information, e.g., about geodesics. Its homology carries interesting algebraic structure such as the Chas-Sullivan loop product. Tools such as Koszul duality theory, A-infinity algebras and Hochschild cohomology can be used to construct tractable algebraic models for free loop spaces. A possible PhD project is to further develop these models, in particular to endow them with more algebraic structure, and use them to make new computations. ### Moduli spaces, varieties over finite fields and Galois representations Main supervisor: Jonas Bergström Moduli spaces are spaces that parametrize some set of geometric objects. These spaces have become central objects of study in modern algebraic geometry. One way of getting a better understanding of a space is to find information about its cohomology. In my research I have tried to extend the knowledge about the cohomology of moduli spaces when the objects parametrized are curves or abelian varieties. The main tool has been the so called Lefschetz fixed point theorem which connects the cohomology to counts over finite fields. That is, counting isomorphism classes of, say, curves defined over finite fields gives information about the cohomology (by comparison theorems also in characteristic zero) of the corresponding moduli space. I have often used concrete counts over small finite fields using the computer to find such information. The cohomology of an algebraic variety (that is defined over the integers) comes with an action of the absolute Galois group of the rational numbers. Such Galois representations are in themselves very interesting objects. A count over finite fields also gives aritmethic information about the Galois representations that appear. 
In the case of Shimura varieties (at least according to a general conjecture which is part of the so called Langlands program) one has a good idea of which Galois representations that should appear, namely ones coming from the corresponding modular (and more generally, automorphic) forms. If one is not considering a Shimura variety, as for example the moduli space of curves with genus greater than one, it is much less clear what Galois representations to expect even though they are still believed to come from automorphic forms. Key words: algebraic geometry, moduli spaces, curves, abelian varieties, finite fields, modular forms ### Asymptotic properties of zero-sets of polynomials in higher dimensions, currents and holonomic systems of differential equations Consider the polynomials $p_n = x^n-1$. As the parameter $n$ increases, the n complex zeroes cluster in a very regular manner on a curve, the unit circle. This kind of behaviour is common in many other examples of sequences of polynomials, that, as here, are solutions to parameter dependent differential equations. The sequences occur in different areas, such as combinatorics, or special functions in Lie theory and algebraic geometry, and it is useful and interesting to understand the asymptotic properties of the polynomials through their zeroes.  A large amount of work has been done on this, in particular to determine what kind of curves in the complex plane that arise as asymptotic zero-sets. The main  idea in these papers is often to consider the zero set as a measure and then use harmonic analysis, related to an algebraic curve, the so-called characteristic curve of the equation. There are as yet few papers that consider the corresponding problem in higher dimensions, and this is the suggested topic, and one that I have just started with. It is then natural to use the differential-geometric concept of currents, instead of measures, and connected complex algebraic geometry. Instead of having just one parameter dependent differential equation, one would consider holonomic systems of differential equations, such as GKZ-systems, that are important in some parts of algebraic geometry and algebraic topology. Holonomic systems come from the algebraic study of systems of differential equations, so-called D-module theory, and is a nice mixture of commutative algebra and analysis. In particular I am interested in understanding the relation to the characteristic variety better, since I expect this to also give a better understanding of the one-variable case. Key words: complex algebraic geometry, D-module theory, varieties, hyper-geometric functions, harmonic analysis ### The quest for algebraicity: The Langlands Program and beyond Main supervisor: Wushi Goldring In 1967, R. P. Langlands wrote a letter to A. Weil. It would revolutionize mathematics. It launched the now-famous Langlands Program. For almost half-a-century, this program has been a driving force in several areas of mathematics, particularly harmonic analysis, representation theory, algebraic geometry, number theory and mathematical physics. It has seen spectacular progress and varied applications, such as Wiles' proof of Fermat's Last Theorem and Ngô's proof of the Fundamental Lemma. At the same time, most instances of Langlands' conjectures remain unsolved. Fifty years later, all agree that the Langlands Program is indispensable for the unification of abstract mathematics. But many -- including perhaps Langlands himself -- grapple with the ultimate raison dêtre of the program. 
So what is really at the heart of the Langlands Program? The prevailing common view has been that large swaths of the Langlands Program are inherently analysis-bound, that an algebraic understanding of them is impossible. Under this view, the Langlands Program is seen as injecting analytic methods to solve classical problems in number theory and algebraic geometry. My research is focused on inverting the common view: My working algebraicity thesis is that, on the contrary, the Langlands Program is deeply algebraic and unveiling its algebraic nature leads to new results, both within it and in the myriad of areas it impinges upon. In pursuit of my algebraicity theme, building on joint work with Jean-Stefan Koskivirta and other collaborators, I have begun a program to make simultaneous progress in the following four seemingly unrelated areas, by developing the connections between them: (A) Algebraicity of automorphic representations (B) The Deligne-Serre interchange of characteristic'' approach to algebraicity, concerning both (A) but also a variety of other questions (C) The geometry of stacks of G-Zips, the Ekedahl-Oort (EO) stratification of Shimura varieties and their Hasse invariants. (D)  Algebraicity of Griffiths-Schmid manifolds. ### Discrete and continuous quantum graphs Main supervisor: Pavel Kurasov Credit: Erlend Davidson (Thomas Young Centre) Quantum graphs - differential operators on metric graphs - is a rapidly growing branch of mathematical physics lying on the border between differential equations, spectral geometry and operator theory. The goal of the project is to compare dynamics given by discrete equations associated with (discrete) graphs with the evolution governed by quantum graphs. Discrete models can be successfully used to describe complex systems where the geometry of the connections between the nodes can be neglected. It is more realistic to use instead metric graphs with edges having lengths. The corresponding (continuous) dynamics is described by differential equations coupled at the vertices. Such models are used for example in modern physics of nano-structures and microwave cavities. Understanding the relation between discrete and continuous quantum graphs is a challenging task leaving a lot of freedom, since this area has not been studied systematically yet. In special cases such relations are straightforward, sometimes methods originally developed for discrete graphs can be generalized, but often studies lead to new unexpected results. To find explicit connections between the geometry and topology of such graphs on one side and spectral properties of corresponding differential equations on the other is one of the most exciting directions in this research area. As an example one may mention an explicit formula connecting the asymptotics of eigenvalues to the number of cycles in the graph, or the estimate for the spectral gap (the difference between the two lowest eigenvalues) proved using a classical Euler theorem dated to 1736! Possible directions of the research project are •  Study spectral properties of continuous quantum graphs in relation to their connectivity and complexity (this question is well-understood for discrete graphs); •  Investigate relations between quantum graphs and quasicrystals; •  Develop new models combining features of discrete and continuous graphs and  study their properties; •  Transport properties of networks and their complexity. 
It is expected that the candidate will take an active part in the work of the Analysis group at Stockholm University, Cooperation group "Continuous Models in the Theory of Networks" (ZIF, Bielefeld, http://www.uni-bielefeld.de/ZIF/KG/2012Models/) and Research and Training Network "QGRAPH" joining 15 research teams from all over the world. ### Herglotz-Nevanlinna functions Main supervisor: Annemarie Luger When complex analysis and functional analysis meet, many interesting things can happen. One such topic are questions involving so-called Herglotz-Nevanlinna functions, these are functions mapping the complex upper half plane analytically onto itself. They appear in surprisingly many situations, both in pure mathematics as well as in applications, for example, in connection with both ordinary and partial differential operators, in perturbation theory and extension theory, as transfer functions for passive systems, or as Fouriertransform of certain distributions, just to name a few occasions. One typical example of how to utilize such connections is e.g. that eigenvalues of a differential operator are given as the zeros of a related Herglotz-Nevanlinna function. In this project the PhD-student will work with such functions and in particular their generalisations. It will also be possible to connect the research questions to concrete applications in electro-engineering. ### Mathematical Logic - constructive and category-theoretic foundations for mathematics Main supervisor:  Erik Palmgren Project description: Constructive or algorithmic methods are important in mathematics, and are often the nal result when a mathematical theory is to be applied. In this wide-ranging research project, general logical and categorytheoretic methods are developed and studied in order to ensure constructive content of mathematical theorems. This covers for instance the study of constructive type theories and set theories with the help of models or properties of formal proofs. It could also include case studies where a limited area of mathematics is constructivized. Apart from the purely mathematical-logical questions there are also interesting philosophical aspects, and also applications in computer science, for example extraction of computer programs from mathematical proofs. For further information contact Erik Palmgren. ### Geometry and topology of moduli spaces Main supervisor: Dan Petersen I am interested broadly in the interface between algebraic geometry and algebraic topology. For example, the cohomology of algebraic varieties carries all kinds of "extra" structures that has no counterpart in the cohomology of, say, a manifold or a cell complex. I have been particularly interested in questions concerning moduli spaces in algebraic geometry. Some ideas for projects are: • Partially compactified moduli spaces of genus zero curves, operads, multiple zeta values. In work with Johan Alm, we studied a partial compactification of the moduli space of marked genus zero curves from an operad-theoretic perspective. There are still several open questions from that project. Optimistically, these kinds of results could be useful for the study of period integrals over these spaces; after the work of Francis Brown, these period integrals are known to be linear combinations of multiple zeta values. • Cohomology of moduli spaces of principally polarized abelian varieties. Let A(g) be the moduli space of principally polarized abelian varieties of genus g. 
Recent work in the theory of automorphic representations of Chenevier, Renard, Lannes, Taïbi and others has made it possible to calculate the intersection cohomology of the Satake compactification of A(g) with twisted coefficients in a large range. It would be interesting to try to leverage these calculations to understand the cohomology of A(g) itself with twisted coefficients, by trying to understand the "perverse" Leray spectral sequence associated to the inclusion of A(g) into its Satake compactification. • Tautological classes with twisted coefficients. The tautological ring of the moduli space of curves is an interesting subring of its cohomology ring/Chow ring. In work with Qizheng Yin and Mehdi Tavakol, we defined a notion of tautological classes inside the cohomology with twisted coefficients of the uncompactified moduli space. It would be useful to have a similar theory also on the compactified moduli space, but it is not clear how to set everything up. This will require working with intersection cohomology and perverse sheaves. Optimistically, it should be possible to do this "motivically". ### Analysis of Rough Linear and Multilinear Pseudodifferential Operators In the study of Partial Differential Equations and in Harmonic Analysis, an important role is played by the so-called pseudodifferential operators. For instance, for equations that describe electric potential and steady-state heat flow (elliptic equations) one can construct explicit solutions using pseudodifferential operators. Roughly speaking, these operators act on functions (or signals) by filtering (attenuating or amplifying) specific frequencies of those. For equations that describe wave propagation (hyperbolic equations), a similar role is played by Fourier integral operators. These tools allow us to obtain a priori estimates for the solutions, and study their behaviour and properties. Therefore, being able to estimate these operators in different function spaces is important for measuring the size and regularity of the solutions of PDEs in those spaces. In controlling height and width of a solution, the most important example of such spaces are the Lebesgue spaces Lp. Due to their rearrangement-invariant nature, these spaces are blind to the description of where solutions are concentrated, and thus the consideration of Lebesgue spaces with weights appears naturally. An important role is played by the so-called Muckenhoupt Ap weights. For nonlinear PDEs , the multilinear counterpart of pseudodifferential and Fourier integral operators play a crucial role. My recent research interests have been dealing with questions regarding both linear and multilinear operators of those described above, and in particular with those of rough type. To get involved in a project in these areas requires a strong background and interest in Harmonic Analysis and PDEs. Some examples of lines of research that one could pursue: • To develop an Ap-weighted theory for some classes of rough and mildly regular pseudodifferential operators, and find up-to-end-point improvements of existing results in the literature. • To investigate the validity of corresponding end-point estimates for such operators. • To develop the theory of spectral properties of rough pseudodifferential operators. • Study multilinear end-point results and results of minimal regularity assumptions, for paraproducts and their application to the study of boundedness properties of multilinear pseudodifferential and Fourier integral operators. 
### Spectral properties of differential operators on domains and graphs Main supervisor: Jonathan Rohleder Eigenvalues and, more generally, spectra of differential operators appear naturally in numerous physical models, for instance as frequencies of vibrating strings and membranes, or as energies of quantum systems. Their investigation is an important and lively field in mathematical physics. Since the spectra of most models cannot be calculated explicitly, there is a strong need for qualitative and quantitative estimates. In my current research I focus on the spectral investigation of Laplacian and Schrödinger operators on domains in the Euclidean space and on metric graphs. Possible research areas for a PhD include: • Eigenvalue inequalities for Schrödinger operators on domains with mixed boundary conditions and their dependence on the geometry of the boundary. • Spectral estimates for Sturm-Liouville operators on metric graphs in relation to the vertex conditions and the geometry and topology of the graph. • Estimation of non-real spectra for Schrödinger operators with non-self-adjoint boundary conditions or non-real potentials. • Spectral properties of infinite quantum graphs. • Connections between the spectra of domains and graphs. ### Operator theory and function theory in polydisks Main supervisor: Alan Sola Coordinate shifts acting on Banach spaces of analytic functions represent a concrete and compelling incarnation of operator theory. At first glance, considering the action of a such simple operators on specific function spaces may appear to lose much of the generality that makes operator theory a powerful and flexible tool in mathematics. One can show, however, that many Hilbert space contractions are unitarily equivalent to the shift acting on model subspaces of the Hardy space. The class of analytic 2-isometries has the Dirichlet shift as a natural realization, and there are many other such models. Thus, understanding the invariant subspaces and cyclic vectors of coordinate shifts is important, and leads to a better understanding of more general operators. Moreover, by working with analytic functions, we are able to connect operator-theoretic questions to deep problems in complex function theory such as boundary behavior, vanishing properties, and so on. For example, answering the question of whether a specific analytic function is cyclic with respect to shifts acting on a function space frequently amounts to analyzing the size and properties of the zero set of the function. These types of questions have been studied by many mathematicians in the one-variable setting of the unit disk, and many remarkable results have been obtained. However, the higher-dimensional analogs of coordinate shifts acting on function spaces in polydisks have received somewhat less attention, especially for function spaces beyond the Hardy spaces. In recent years, I have been particularly interested in weighted Dirichlet spaces, which can be defined in terms of area-integrability of partial derivatives of an analytic function. These spaces are challenging due to the relative smoothness of their elements, yet are rich enough to allow for an interesting subspace structure. In a recent series of papers, my coauthors and I have started making headway on the problem of identifying cyclic vectors in weighted Dirichlet spaces in the bidisk, and we have found techniques for checking membership in such spaces of functions having singularities on the boundary of the bidisk. 
Possible directions for a PhD project might be to: • Study the multiplier algebra of weighted Dirichlet spaces on polydisks. • Develop a machinery to analyze integrability and regularity of rational functions by examining the geometry of their singularities. • Develop an understanding of the structure of invariant subspaces for the coordinate shifts. • Study zero sets and boundary zero sets for function spaces in polydisks.
2018-03-23 13:00:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7400239109992981, "perplexity": 437.23518392086777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648226.72/warc/CC-MAIN-20180323122312-20180323142312-00575.warc.gz"}
https://www.physicsforums.com/threads/determining-temperature-of-heat-bath.157664/
# Determining temperature of heat bath 1. Feb 22, 2007 ### fizziks I have a bunch of degenerate particles in a heat bath, for which I can "measure" some properties to approximate the temperature of the heat bath, like the energies of the ground state, 2nd state, etc., and their degeneracies. But I was told that the exact temperature of the heat bath cannot be measured and only an approximation can be made. I was reading my stats mechanics book and all I can find is something about thermal equilibrium requiring a Boltzmann distribution and a formulation to find the temperature of a heat bath of non-degenerate particles. I tried searching the internet for some more information but I keep ending up with research papers that have nothing to do with my question. 2. Feb 22, 2007 ### tim_lou Do some counting and calculate the multiplicity of the system in terms of some known quantities. Then find the entropy; the temperature is given by: $$T=\left ( {\frac{\partial S}{\partial U}}\right )^{-1}_{V,N}$$ Last edited: Feb 22, 2007
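For the specific situation asked about, measured populations of degenerate energy levels, the standard textbook route (assumed here; it is not spelled out in the thread) is Boltzmann occupation $n_i \propto g_i e^{-E_i/k_B T}$, so the ratio of two measured level populations gives an estimate of the bath temperature: $$\frac{n_2}{n_1}=\frac{g_2}{g_1}\,e^{-(E_2-E_1)/k_B T} \quad\Rightarrow\quad T=\frac{E_2-E_1}{k_B\,\ln\!\left(\frac{g_2\,n_1}{g_1\,n_2}\right)}$$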
2017-01-18 03:58:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.636999249458313, "perplexity": 495.7637753261658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00148-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.clutchprep.com/chemistry/practice-problems/76380/for-the-reaction-n2-g-3h2-g-2nh3-g-what-is-the-value-of-kc-at-500-c-if-the-equil
# Problem: For the reaction N2 (g) + 3H2 (g) ⇌ 2NH3 (g) what is the value of Kc at 500°C if the equilibrium concentrations are as follows: [H2] = 0.30 M, [N2] = 0.40 M, and [NH3] = 2.2 M? Express the equilibrium constant to two significant figures.
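For reference, the law of mass action gives the equilibrium constant directly from the stated concentrations: $$K_c=\frac{[\mathrm{NH_3}]^2}{[\mathrm{N_2}][\mathrm{H_2}]^3}=\frac{(2.2)^2}{(0.40)(0.30)^3}=\frac{4.84}{0.0108}\approx 4.5\times 10^{2}$$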
2021-01-25 18:16:00
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9333164691925049, "perplexity": 3652.1785752412234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703587074.70/warc/CC-MAIN-20210125154534-20210125184534-00530.warc.gz"}
https://ricerca.sns.it/handle/11384/59793
Measurement of the $pp \to ZZ$ production cross section and constraints on anomalous triple gauge couplings in four-lepton final states at $\sqrt s=$8 TeV
2021-10-19 13:04:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9777132868766785, "perplexity": 3235.121543468682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585265.67/warc/CC-MAIN-20211019105138-20211019135138-00621.warc.gz"}
https://www.lmfdb.org/knowledge/show/content.how_to_cite
content.how_to_cite If you would like to acknowledge the LMFDB, please use a citation in the following form, alphabetized under LMFDB. [42] The LMFDB Collaboration, The L-functions and modular forms database, http://www.lmfdb.org, 2020, [Online; accessed 31 July 2020]. Of course you should change the year and date accessed to match your usage (the same comment applies to all the examples below). The BibTeX entry is: @misc{lmfdb, shorthand = {LMFDB}, author = {The {LMFDB Collaboration}}, title = {The {L}-functions and modular forms database}, howpublished = {\url{http://www.lmfdb.org}}, year = {2020}, note = {[Online; accessed 31 July 2020]}, } Then you can cite a certain object much like quoting a specific theorem from a paper. For example, you may refer to the elliptic curve with LMFDB label 11a.2 by \cite[\href{http://www.lmfdb.org/EllipticCurve/Q/11.a2}{Elliptic Curve 11.a2}]{lmfdb} In order to make the \url and \href commands to work, one should use the hyperref package. To cite a specific page in the LMFDB, such as the home page of the L-function of the first rank 4 elliptic curve/$\Q$ shown in the example below, you can cite it as [42] The LMFDB Collaboration, The L-functions and modular forms database, home page of the L-function $L(s,E)$ for elliptic curve isogeny class $\texttt{234446.a}$, http://www.lmfdb.org/L/EllipticCurve/Q/234446.a/, 2020, [Online; accessed 31 July 2020]. The BibTeX entry is: @misc{lmfdb:234446.a, shorthand = {LMFDB E234446.a}, author = {The {LMFDB Collaboration}}, title = {The {L}-functions and modular forms database, Home page of the L-function $L(s,E)$ for elliptic curve isogeny class \texttt{234446.a}}, howpublished = {\mbox{\url{http://www.lmfdb.org/L/EllipticCurve/Q/234446.a/}}}, year = {2020}, note = {[Online; accessed 31 July 2020]}, }
For example, you may refer to the elliptic curve with LMFDB label 11a.2 by\r\n\r\n\\cite[\\href{http://www.lmfdb.org/EllipticCurve/Q/11.a2}{Elliptic Curve 11.a2}]{lmfdb}\r\n\r\nIn order to make the \\url and \\href commands to work, one should use the [hyperref](http://ctan.org/pkg/hyperref) package.\r\n\r\n----------------\r\n\r\nTo cite a specific page in the LMFDB, such as the home page of the L-function of the first\r\nrank 4 elliptic curve/$\\Q$ shown in the example below, you can cite it as\r\n\r\n[42] The LMFDB Collaboration, _The L-functions and modular forms database, \r\nhome page of the L-function $L(s,E)$ for elliptic curve isogeny class $\\texttt{234446.a}$_,\r\n[http://www.lmfdb.org/L/EllipticCurve/Q/234446.a/](http://www.lmfdb.org/L/EllipticCurve/Q/234446.a/), 2020 , [Online; accessed 31 July 2020].\r\n\r\nThe BibTeX entry is:\r\n\r\n @misc{lmfdb:234446.a,\r\n shorthand = {LMFDB E234446.a},\r\n author = {The {LMFDB Collaboration}},\r\n title = {The {L}-functions and modular forms database, \r\n Home page of the L-function $L(s,E)$ for elliptic curve isogeny class \\texttt{234446.a}},\r\n howpublished = {\\mbox{\\url{http://www.lmfdb.org/L/EllipticCurve/Q/234446.a/}}},\r\n year = {2020},\r\n note = {[Online; accessed 31 July 2020]},\r\n} \r\n\r\n", }; $(function() {$("body").on("click", ".diff_button", function (evt) { evt.preventDefault(); $('#differences').toggle('slow'); var topselect =$('#lhsselect').offset().top var topoption = $('#lhsselect').find("option[value="+$('#lhsselect').val() +"]").offset().top; $('#lhsselect').scrollTop(topoption - topselect - 10);$('#compare').resize(); }); }); Differences
2022-07-06 07:32:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6038388609886169, "perplexity": 5940.62207394082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104668059.88/warc/CC-MAIN-20220706060502-20220706090502-00627.warc.gz"}
https://stats.stackexchange.com/questions/309256/mar-assumption-missing-data
# MAR Assumption missing data

I want to predict missing data under the MAR assumption. I run simulations by generating normal data with one y and several x variables, and then I remove data so that the proportion of missing values ranges from 5% and 10% up to 50%. I impute with mice. I would like to ask: how many imputations should I use for each of these missing-data proportions? I have not found literature about this. For MCAR I have only found the guideline that the number of imputations should be roughly as large as the percentage of missing data.
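Since mice is an R package, the sketch below is only a rough Python analogue of the simulation described above, using scikit-learn's IterativeImputer. The particular MAR mechanism, the quantity being pooled, and the rule of thumb of using at least as many imputations as the percentage of incomplete data are illustrative assumptions, not a definitive answer to the question.

    # Generate normal data, punch MAR-style holes at several proportions, and run
    # m imputations per proportion by re-seeding the imputer.
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(0)
    n, p = 500, 4
    X = rng.normal(size=(n, p))
    y = X @ np.array([1.0, -0.5, 0.25, 0.0]) + rng.normal(size=n)
    data = np.column_stack([y, X])          # column 0 is y, columns 1..4 are x

    for prop in [0.05, 0.10, 0.20, 0.30, 0.40, 0.50]:
        missing = data.copy()
        # MAR-ish mechanism: the chance that x1 (column 1) is missing depends on y,
        # which stays observed; the average missing rate is roughly `prop`.
        p_miss = prop * 2 / (1 + np.exp(-y))
        missing[rng.random(n) < p_miss, 1] = np.nan

        m = max(5, round(100 * prop))       # rule-of-thumb number of imputations
        estimates = []
        for seed in range(m):
            imputer = IterativeImputer(random_state=seed, sample_posterior=True)
            completed = imputer.fit_transform(missing)
            estimates.append(completed[:, 1].mean())   # any quantity of interest

        print(f"{prop:.0%}: m={m}, pooled mean of x1 = {np.mean(estimates):.3f}")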
2022-08-15 08:38:13
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8039642572402954, "perplexity": 710.1451788757743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572161.46/warc/CC-MAIN-20220815054743-20220815084743-00574.warc.gz"}
https://cstheory.stackexchange.com/questions/40439/is-agda-sound-as-a-proof-system
# Is Agda sound as a proof system? [closed]

I've asked the same question on CSSE with no luck (https://cs.stackexchange.com/questions/89611/is-agda-sound-as-a-proof-system), so I am asking it again here on cstheory in the hope that a more research-oriented audience can give more insight.

I was browsing Agda's stdlib source code, since I was trying to get into it seriously and wanted to know more. I was amazed that Agda is far more developed than I thought, and significantly closer to Haskell than Coq. However, I panicked a bit when I saw code like the following:

    toList∘fromList : ∀ s → toList (fromList s) ≡ s
    toList∘fromList s = trustMe

It seems there is an observable hole in the system, which means Agda is not entirely built from the ground up by axiomatization. Then I saw this, https://github.com/agda/agda-stdlib/blob/master/src/Data/Colist.agda

    data Colist {a} (A : Set a) : Set a where
      [] : Colist A
      _∷_ : (x : A) (xs : ∞ (Colist A)) → Colist A

I took Colist to be the same as List in Haskell, allowing optionally infinite length, and from the documentation https://agda.readthedocs.io/en/v2.5.3/language/coinduction.html

> The type constructor ∞ can be used to prove absurdity!

Just as I suspected, optional infinity introduces absurdity. At this point I felt more scared than amazed. I understand that being practical must come with some trade-off. However, Agda is more or less considered a proof system, arguably more than a programming language, and lots of papers these days are based on Agda. Yet a quick code scan has shown holes in many disguises. (Sure, Coq also has them, but they are considerably easier to discover: just grepping for axiom and admit tells a lot, and Coq supports printing the axioms used by each lemma.)

Since I am trying to get into Agda, I have no idea what I should expect from it. So the title says it all: are the system, and the results based on it, sound?

## closed as off-topic by D.W., Emil Jeřábek, Lev Reyzin♦ Mar 23 '18 at 15:03

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "Our site policy prohibits simultaneous crossposting: it duplicates effort and fractures discussion. Crossposting is permitted after a week has passed without a satisfying answer elsewhere. When crossposting please summarize the relevant discussions from other sites in your question and link between the copies in both directions." – D.W., Emil Jeřábek, Lev Reyzin

If this question can be reworded to fit the rules in the help center, please edit the question.

• Cross-posted: cstheory.stackexchange.com/q/40439/5038, cs.stackexchange.com/q/89611/755. Please do not post the same question on multiple sites. Each community should have an honest shot at answering without anybody's time being wasted. – D.W. Mar 23 '18 at 0:54
• @D.W. if my question was read, nobody's time was wasted. I've explained why I posted here again. – Jason Hu Mar 23 '18 at 0:56
• I understand why it seems to make sense to you, but that's not how our site policies work. See the link I gave for explanation of why and details. (If you want the short version, just imagine what would happen if everyone did this -- but don't debate it here. See the link, and if you'd like to propose changing the policy, you can post on Theoretical Computer Science Meta. In the meantime, I hope you'll respect the site's policies.) – D.W. Mar 23 '18 at 0:59
2019-06-19 23:34:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4838966131210327, "perplexity": 1799.0255524257934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999066.12/warc/CC-MAIN-20190619224436-20190620010436-00189.warc.gz"}
http://gmatclub.com/blog/2013/09/gmat-question-of-the-day-sept-9-arithmetic-and-sentence-correction/
# GMAT Question of the Day (Sept 9): Arithmetic and Sentence Correction - Sep 9, 02:00 AM Comments [0]

Math (PS)

What is the least $N$ such that $N!$ is divisible by 1000?

(A) 8
(B) 10
(C) 15
(D) 20
(E) 25

Question Discussion & Explanation
Correct Answer - C - (click and drag your mouse to see the answer)

Verbal (SC)

In its most recent approach, the comet Crommelin passed the Earth at about the same distance and in about the same position, some 25 degrees above the horizon, that Halley’s comet will pass the next time it appears.

(A) that Halley’s comet will pass
(B) that Halley’s comet is to be passing
(C) as Halley’s comet
(D) as will Halley’s comet
(E) as Halley’s comet will do

Question Discussion & Explanation
Correct Answer - D - (click and drag your mouse to see the answer)
2016-06-29 18:11:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20706941187381744, "perplexity": 10453.697849732278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00049-ip-10-164-35-72.ec2.internal.warc.gz"}
https://wade.ulisboa.pt/seminars.php
# Planned seminars ## 28/01/2021, Thursday, 14:00–15:00 Europe/Lisbon — Online Enrico Serra, Politecnico di Torino We consider the minimization of the NLS energy on a metric tree, either rooted or unrooted, subject to a mass constraint. With respect to the same problem on other types of metric graphs, several new features appear, such as the existence of minimizers with positive energy, and the emergence of unexpected threshold phenomena. We also study the problem with a radial symmetry constraint that is in principle different from the free problem due to the failure of the Polya-Szego inequality for radial rearrangements. A key role is played by a new Poincaré inequality with remainder. ## 04/02/2021, Thursday, 14:00–15:00 Europe/Lisbon — Online Yvan Martel, École Polytechnique The talk concerns stability properties of kinks for (1+1)-dimensional nonlinear scalar field models of the form $\partial_t^2 \phi - \partial_x^2 \phi + W'(\phi) = 0 \quad (t,x) \in {\bf \mathbb R}\times {\mathbb R}.$ We establish a simple and explicit sufficient condition on the potential $W$ for the asymptotic stability of a given moving or standing kink. We present applications of the criterion to some models of the Physics literature. Work in collaboration with Michał Kowalczyk, Claudio Muñoz and Hanne Van Den Bosch. See also the related work with Michał Kowalczyk and Claudio Muñoz. ## 11/02/2021, Thursday, 14:00–15:00 Europe/Lisbon — Online Harbir Antil, George Mason University Fractional calculus and its application to anomalous diffusion has recently received a tremendous amount of attention. In complex/heterogeneous material mediums, the long-range correlations or hereditary material properties are presumed to be the cause of such anomalous behavior. Owing to the revival of fractional calculus, these effects are now conveniently modeled by fractional-order differential operators and the governing equations are reformulated accordingly. Similarly, the potential of fractional operators has been harnessed in various scientific domains like geophysical electromagnetics, imaging science, deep learning, etc. In this talk, fractional operators will be introduced and both linear and nonlinear, fractional-order differential equations will be discussed. New notions of optimal control and optimization under uncertainty will be presented. Several applications from geophysics, imaging science, and deep learning will be presented. ## 25/02/2021, Thursday, 14:00–15:00 Europe/Lisbon — Online Boyan Sirakov, PUC - Rio To be announced ## 04/03/2021, Thursday, 14:00–15:00 Europe/Lisbon — Online Bozhidar Velichkov, Università di Pisa To be announced ## 04/03/2021, Thursday, 15:00–16:00 Europe/Lisbon — Online Dario Mazzoleni, Università di Pavia To be announced ## 18/03/2021, Thursday, 14:00–15:00 Europe/Lisbon — Online Gabriele Benomio, Princeton University To be announced ## 18/03/2021, Thursday, 15:00–16:00 Europe/Lisbon — Online Shrish Parmeshwar, University of Bath To be announced
2021-01-27 15:41:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5023120641708374, "perplexity": 3798.743885202131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704828358.86/warc/CC-MAIN-20210127152334-20210127182334-00713.warc.gz"}
https://tex.stackexchange.com/questions/361887/embed-gif-to-latex-beamer
# embed gif to latex-beamer? [closed]

I'm trying to embed this GIF https://en.wikipedia.org/wiki/File:Bernstein_Approximation.gif in beamer, but I can't get it to work. I converted the GIF to PNG frames, but every website I tried converts the GIF incorrectly (for example, it comes out as half a picture), so code like the following doesn't work:

    \animategraphics[loop,controls,width=\linewidth]{12}{something-}{0}{16}

I also tried embedding it as MP4, but that didn't work either. What can I do, and how can I embed this GIF in beamer? Could you help me please?

## closed as off-topic by user36296, Stefan Pinnow, CarLaTeX, gernot says Reinstate Monica, Werner Apr 6 '17 at 11:23

• This question does not fall within the scope of TeX, LaTeX or related typesetting systems as defined in the help center. If this question can be reworded to fit the rules in the help center, please edit the question.
• You could use imagemagick to convert the gif into png, see for example tex.stackexchange.com/a/240247/36296 – user36296 Apr 3 '17 at 11:39
• For aesthetic reasons, I'd rebuild this kind of plot as a vector graphics, using TikZ or pgfplots. – AlexG Apr 3 '17 at 11:54
• I'm voting to close this question as off-topic because the problem is the conversion of the .gif – user36296 Apr 6 '17 at 10:01
2019-11-21 21:54:38
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8285388350486755, "perplexity": 2517.6242075076793}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670987.78/warc/CC-MAIN-20191121204227-20191121232227-00016.warc.gz"}
https://warwick.ac.uk/fac/sci/masdoc/current/msc-modules/ma916/sgm/2supercritical/
Supercritical Case

The supercritical case is where $\alpha<1$. In this case longer edges are preferred over shorter ones; in particular, $t^{-1}\mathcal{B}_t$ can extend outside $B_1$. Furthermore, since any vertex $\mathbf{x}$ has probability $p$ of being reached in the optimal time $||\mathbf{x}||^{\alpha}$, we will not have the same result as in the critical case. In this case our simulations suggested that the limiting shape of $t^{-\frac{1}{\alpha}}\mathcal{B}^{(\alpha)}_t$ should coincide approximately with the $l^1$ ball $B_1$.

The figure below shows one quadrant from a simulation where the edge weights have distribution $$f(x)=p\chi_{\{1\}}(x)+(1-p)e^{1-x}\chi_{\{(1,\infty)\}}(x)$$ for $p=0.7$. This is a distribution with an atom at $\{1\}$ and an exponential tail. The white region shows the unoccupied space, the black regions show the standard ball $B_1$ and the scaled ball $t^{\frac{1}{\alpha}}B_{1}$, the dark grey region shows the active vertices under the LRFPP model, and the light grey points are those which are active in both the nearest neighbour and long-range models. For this simulation the nearest neighbour model has been coupled with the long-range one so that the nearest neighbour growth set cannot extend further than the corresponding long-range one. The growth set for the long-range model has small fluctuations around a flat piece which extends over the entire edge of the ball. This suggests that the limiting shape of $t^{-\frac{1}{\alpha}}\mathcal{B}^{(\alpha)}_t$ exists and is $B_1$.

Our main result for the supercritical case is indeed exactly what the simulations suggested, in that the limiting shape of the scaled growth set is the unit $l^1$ ball.

Theorem 2: If $\alpha<1$ and $p \in (0,1]$, then for $\mu \in \mathcal{M}_p$ and any $\varepsilon>0$: $$\mathbb{P}\left[B_{1-\varepsilon} \subseteq t^{-\frac{1}{\alpha}}\mathcal{B}^{(\alpha)}_t \subseteq B_{1+\varepsilon} \;\text{ for large } t\right]=1$$
2022-06-27 00:42:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 21, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7925203442573547, "perplexity": 235.7500758510497}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103322581.16/warc/CC-MAIN-20220626222503-20220627012503-00750.warc.gz"}
https://psamm.readthedocs.io/en/stable/api/metabolicmodel.html
# psamm.metabolicmodel – Metabolic model representation

Representation of metabolic network models.

class psamm.metabolicmodel.FlipableFluxBounds(view, reaction)
FluxBounds object for a FlipableModelView. This object is used internally in the FlipableModelView to represent the bounds of flux on a reaction that can be flipped.
lower: Lower bound
upper: Upper bound

class psamm.metabolicmodel.FlipableLimitsView(view)
Provides a limits view that flips with the flipable model view. This object is used internally in FlipableModelView to expose a limits view that flips the bounds of all flipped reactions.

class psamm.metabolicmodel.FlipableModelView(model, flipped=set([]))
Proxy wrapper of model objects allowing a flipped set of reactions. The proxy will forward all properties normally except that flipped reactions will appear to have stoichiometric values negated in the matrix property, and have bounds in the limits property flipped. This view is needed for some algorithms.

class psamm.metabolicmodel.FlipableStoichiometricMatrixView(view)
Provides a matrix view that flips with the flipable model view. This object is used internally in FlipableModelView to expose a matrix view that negates the stoichiometric values of flipped reactions.

class psamm.metabolicmodel.FluxBounds(model, reaction)
Represents lower and upper bounds of flux as a mutable object. This object is used internally in the model representation. Changing the state of the object will change the underlying model parameters. Deleting a value will reset that value to the defaults.
bounds: Bounds as a tuple
lower: Lower bound
upper: Upper bound

class psamm.metabolicmodel.LimitsView(model)
Provides a view of the flux bounds defined in the model. This object is used internally in MetabolicModel to expose a dictionary view of the FluxBounds associated with the model reactions.

class psamm.metabolicmodel.MetabolicModel(database, v_max=1000)
Represents a metabolic model containing a set of reactions. The model contains a list of reactions referencing the reactions in the associated database.
add_reaction(reaction_id)
copy(): Return copy of model
get_compound_reactions(compound_id): Iterate over all reaction ids that include the given compound
get_reaction_values(reaction_id): Return stoichiometric values of reaction as a dictionary
is_exchange(reaction_id): Whether the given reaction is an exchange reaction.
is_reversible(reaction_id): Whether the given reaction is reversible
classmethod load_model(database, reaction_iter=None, exchange=None, limits=None, v_max=None): Get model from reaction name iterator. The model will contain all reactions of the iterator.
remove_reaction(reaction): Remove reaction from model

psamm.metabolicmodel.create_exchange_id(existing_ids, compound): Create unique ID for exchange of compound.

psamm.metabolicmodel.create_transport_id(existing_ids, compound_1, compound_2): Create unique ID for transport reaction of compounds.
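As a rough usage sketch of the API listed above: the `database` argument is assumed to be a psamm reaction database built elsewhere (its construction is not part of this module), the reaction IDs are made up, and only methods documented on this page are used.

    from psamm.metabolicmodel import MetabolicModel

    def summarize(database, reaction_ids):
        # Build a model containing only the given reactions of the database.
        model = MetabolicModel.load_model(database, reaction_iter=reaction_ids, v_max=1000)

        for reaction_id in reaction_ids:
            values = model.get_reaction_values(reaction_id)   # {compound: stoichiometry}
            print(reaction_id,
                  "reversible" if model.is_reversible(reaction_id) else "irreversible",
                  "exchange" if model.is_exchange(reaction_id) else "internal",
                  dict(values))

        # A copy can be pruned without touching the original model
        # (remove_reaction is listed above; the reaction id is passed here).
        pruned = model.copy()
        pruned.remove_reaction(reaction_ids[0])
        return model, pruned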
2018-04-21 01:54:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3119105100631714, "perplexity": 6710.35788670409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944851.23/warc/CC-MAIN-20180421012725-20180421032725-00048.warc.gz"}
https://crypto.stackexchange.com/questions/53219/how-to-encrypt-plain-message-with-rsa
# How to encrypt plain message with RSA

Let's say I want to use RSA to encrypt the message 'Rocket will be launched at 2am', which has 30 characters. I use the keys from this example: https://etherhack.co.uk/asymmetric/docs/rsa_key_breakdown.html, so the modulus is 129 bytes long (a 1032-bit number). My message is therefore shorter than the modulus and I can encrypt it. (Let's assume I don't use a padding scheme.)

What do I have to do now with my message to encrypt it? I have to convert it to a number $m$, so do I use some encoding to convert each letter to a byte and concatenate the bytes? (How do I choose the encoding? Is there a convention? Will the encoding and endianness be written into the ASN.1 keys?)

According to RSA I now have to compute: $$c(m)=m^{65537} \pmod{\text{...1032_bits_long_number...}}$$ and the message is a 240-bit number. Is this the procedure?

• Your RSA modulus is really 1024 bits. RSA math works for any size, but it is conventional to use sizes that are powers of 2 or small multiples like 1024 1536=512x3 2048 3072=1024x3. Your link shows a key as stored in ASN.1 DER, and ASN.1 DER/BER represents integers in two's-complement, so a positive number of exactly 8n bits (like an RSA modulus) requires an extra '00' octet to contain the sign, thus storing 1032 bits for a 1024-bit value. Your actual raw-and-horribly-weak RSA values must be < N, which is not even the full range of 1024 bits. – dave_thompson_085 Nov 17 '17 at 23:33
• Also RSA 1024 has not been considered to provide an adequate margin of safety since 2011, almost 7 years ago. The standard now is 2048, although in principle 1536 could be safe. – dave_thompson_085 Nov 17 '17 at 23:34

The straight-forward way to do this is:

• Convert the ASCII string into an array of bytes.
• Convert that byte array into a large integer. For that you need a library with support for arbitrarily large integers (e.g. BigInteger).
• Endianness does not matter for this.
• For encryption, the library should already offer some modular exponentiation method, because otherwise you will have to write square-and-multiply yourself, where you apply the modulus after every step.
• Decryption is the same as encryption, it's just modular exponentiation again.

Of course this is just for this basic exercise. In an actual application, you definitely need:

• A proper padding scheme, e.g. RSA-OAEP.
• You usually don't encrypt the message directly. Instead, hybrid encryption is used in all cases, so that it doesn't matter how long the message is.
• Follow the standards, e.g. PKCS#1, section 4 for the data conversion from integers to byte strings (thanks to @MaartenBodewes for pointing that out).
• And to be honest: You should just use an implementation of RSA (and other encryption schemes) provided by a cryptographic library.

• You may want to mention the OS2IP and I2OSP algorithms in PKCS#1 to convert (or interpret) integers to bytes and back. I've made an implementation in Java here (of course without the stupidity of implementing them using the math that describes how the numbers are represented by the bytes). – Maarten Bodewes Nov 17 '17 at 15:58
• Is OS2IP talking about converting the array of message bytes to an integer (bullet 2 from this answer)? So that according to 4.2 from the PKCS: x=R*256^29 + o*256^28 + c*256^27 + ... – croraf Nov 17 '17 at 16:38
• How do I know I have to use ASCII encoding? Is this part of PKCS or another standard (or maybe encoded in ASN.1), or do we have to communicate the encoding separately?
– croraf Nov 17 '17 at 16:46 • @croraf: PKCS>1< doesn't specify (anything about) encoding or other semantics of the bytes. Standards for encrypting messages (like PKCS 7 and S/MIME) generally RSA-encrypt only a key (which is bytes to start with), and for the symmetrically-encrypted data they don't specify encoding but sometimes enable users to do so. In practice most things this century either use ASCII or Unicode -- and they usually encode Unicode in UTF-8 which by design is compatible with ASCII. – dave_thompson_085 Nov 17 '17 at 23:48 • @croraf You don't have to use ASCII encoding. It is just one possible way to do this, and it is quite convenient as there are usually built-in functions for conversion. You could also use arithmetic encoding, but you probably have to write that yourself. But usually, encodings and format are outside of scope of cryptosystems - they work for anything that can be expressed as message with a bounded number of bits - but how you interpret those bits is completely up to you. – tylo Nov 20 '17 at 12:28 It's too late—you've already revealed your message to the world! ‘But no,’ you say. ‘That was just an example message. The real messages aren't that.’ In that case, what is the distribution on real messages? Your job, in fitting it into RSA, is to map the distribution on real messages into a uniform distribution on elements of $\mathbb Z/n\mathbb Z$. Why? The RSA trapdoor permutation is good at concealing a uniform distribution on $\mathbb Z/n\mathbb Z$, but terrible at concealing other distributions. For example, if all your messages were under 256 bits long, and the exponent were $e = 3$ (which is a completely sensible choice for sensible RSA-based encryption schemes), then anyone could take a ciphertext $c$ as an integer and compute the real number cube root to recover the plaintext. So do you have $n$ different messages, where $2^{1031} < n < 2^{1032}$, or something very near it? If not, then because of the modulo bias, you may find it difficult to shoehorn your message distribution into a near-uniform distribution on $\mathbb Z/n\mathbb Z$. That is why sensible RSA-based encryption schemes do not attempt to shoehorn messages themselves into elements of $\mathbb Z/n\mathbb Z$ for the RSA trapdoor permutation. For example, RSA-KEM simply picks an element $x \in \mathbb Z/n\mathbb Z$ uniformly at random, independent of your message; conceals it as $y \equiv x^3 \pmod n$; and uses the hash $H(x)$ as a secret key for a standard AEAD scheme such as AES-GCM to hide your message. Unlike the RSA trapdoor permutation, AES-GCM is really good at concealing messages of arbitrary lengths with arbitrary distributions. Other kludgier RSA-based encryption schemes such as RSAES-OAEP try hard to shoehorn certain classes of messages, like up to 256-bit keys, into $\mathbb Z/n\mathbb Z$, which are the ‘padding schemes’ you sometimes hear of. These are much more complicated to work with and understand, so I don't bother with the details, but they are perhaps more widely used because of the historical mistake of focusing on using the RSA trapdoor permutation as a public-key encryption scheme rather than a public-key key encapsulation method. • Whaaaat? :D. This is way too advanced for me to understand. – croraf Nov 17 '17 at 17:09 • Is there a part of this message you find particularly confusing on which you would like elaboration? 
– Squeamish Ossifrage Nov 17 '17 at 17:12
• chat.stackexchange.com/rooms/68896/… – croraf Nov 17 '17 at 17:15
• The RSA trapdoor permutation is good at concealing a uniform distribution on $\mathbb Z/n\mathbb Z$, but terrible at concealing other distributions: do you have some more resources about that please? – gusto2 Nov 26 '17 at 22:51

> (Let's assume I don't use a padding scheme.)

Padding is a very important part; never do RSA without padding in reality. Now let's assume you are doing schoolbook RSA as a learning exercise.

> What do I have to do now with my message to encrypt it?

The most practical way nowadays is hybrid encryption, but I believe you want to do pen-and-pencil RSA (is that so?)

> I have to convert it to a number m, so do I use some encoding to convert each letter to a byte and concatenate the bytes?

Indeed, all operations are over an array of bytes in the end. It's up to you how you make the byte array (or rather, bit array for RSA) from your message. For schoolbook text you can just concatenate the ASCII codes of the message characters.

> Is there a convention? Will the encoding and endianness be written into the ASN.1 keys?

You see, I always use out-of-the-box libraries to read and parse the keys, so I don't really recall how the ASN.1 format stores the numbers. I strongly believe the first bit is the most significant (big-endian), however I really may be wrong (if someone knows, please correct). With RSA itself, however, it doesn't matter in this case.

> According to RSA I now have to compute: c(m)=m^65537 (mod ...1032_bits_long_number...)

Yes, you need to compute $c = M^e\ mod\ n$. You could use exponentiation by squaring, which is just shifting $e$ and multiplying. Though for a 1024-bit key you will do it... many times if you are doing it by hand.
2019-05-21 01:02:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4519585967063904, "perplexity": 1315.096065447105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256184.17/warc/CC-MAIN-20190521002106-20190521024106-00492.warc.gz"}
https://dr.ntu.edu.sg/handle/10356/68898
Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/68898

Title: Efficient index structures for reachability and shortest path queries
Authors: Wang, Sibo
Keywords: DRNTU::Engineering::Computer science and engineering::Information systems::Database management
Issue Date: 2016
Source: Wang, S. (2016). Efficient index structures for reachability and shortest path queries. Doctoral thesis, Nanyang Technological University, Singapore.

Abstract: Graphs are a fundamental data structure to represent objects and their relations in various domains, e.g., social science, citation analysis, web link analysis, and navigation systems. Reachability and shortest path queries are two types of primitive and well-studied graph queries. In this thesis, we investigate how to build efficient index structures to answer reachability queries and two variants of the shortest path query. The first variant considers shortest paths in timetable graphs, which requires the consideration of spatiotemporal constraints between different edges. The second variant aims to provide the shortest path under an additional cost constraint. These problems find many real world applications, and it is challenging to achieve high efficiency on sizable graphs.

First, we study reachability queries on directed graphs. Given a directed graph $G$, a source $s$, and a target $t$, a reachability query asks whether there exists a path from $s$ to $t$ in $G$. The state-of-the-art solutions for answering reachability queries are a class of labelling schemes referred to as _Hierarchical Hub Labelings (HHL)_. One distinct characteristic of _HHL_ is that any _HHL_ index implicitly or explicitly defines a total order of vertices in the graph, and the performance of the index is solely decided by the ordering. Several heuristic approaches have been proposed for vertex ordering in _HHL_, but none of them provides any worst-case guarantee. We present a novel study of vertex ordering in _HHL_, and devise a polynomial-time algorithm for vertex ordering that yields an _HHL_ index whose size is at most $O(\sqrt{n})$ times the optimal one. Motivated by our theoretical study, we design a new heuristic for vertex ordering, and show that it leads to an _HHL_ index with superior practical performance for reachability queries.

Besides, identifying fast routes in public transportation networks is an important problem with applications in map services and navigation systems. In public transportation networks, any traversal between different network locations relies on transportation services (e.g., buses, subways, and ferries) that run on fixed routes with pre-defined schedules. A public transportation network can often be modeled as a timetable graph where (i) each node represents a station; and (ii) each directed edge $\langle u, v\rangle$ is associated with a timetable that records the departure (resp. arrival) time of each vehicle at station $u$ (resp. $v$). There are several types of shortest path queries on timetable graphs. We study three types of shortest path queries for route planning that have been extensively studied on timetable graphs: _earliest arrival path (EAP)_, _latest departure path (LDP)_, and _shortest duration path (SDP)_ queries. These three types of queries aim to find a path that arrives at a place as early as possible, departs as late as possible but arrives on time, and has the shortest travel time, respectively.

We present _Timetable Labelling (TTL)_, an efficient indexing technique for shortest path queries on timetable graphs. The basic idea of _TTL_ is to associate each node $u$ with a set of labels, each of which records the shortest travel time from $u$ to some other node $v$ given a certain departure time from $u$; such labels are then used during query processing to improve efficiency. In addition, we propose query algorithms that enable _TTL_ to support EAP, LDP, and SDP queries, and investigate how to reduce the space consumption of _TTL_ with advanced preprocessing and label compression methods. By conducting an extensive set of experiments on real world datasets, we demonstrate that _TTL_ significantly outperforms the state of the art in terms of query efficiency, while incurring moderate preprocessing and space overheads.

Finally, _constrained shortest path (CSP)_ is a variant of the shortest path problem with an additional total cost constraint. Specifically, in a CSP query, each edge in the road network is associated with both a length and a cost, e.g., travel distance and time. Given an origin $s$, a destination $t$, and a cost constraint $\theta$, the goal is to find the path from $s$ to $t$ that minimizes its total length, while satisfying that its total cost does not exceed $\theta$. Deriving the exact answer for a CSP query has been proven to be NP-hard, and the majority of previous work focuses on approximate solutions. We propose _COLA_, the first practical solution for approximate CSP processing on large road networks. _COLA_ exploits the facts that a road network can be effectively partitioned, and that there exists a relatively small set of landmark vertices that commonly appear in CSP results. Meanwhile, _COLA_ includes both an effective indexing scheme for partition boundary vertices, and an efficient on-the-fly algorithm called $\alpha$-Dijk for path computation within a partition. Extensive experiments demonstrate that on continent-sized networks with tens of millions of vertices, _COLA_ answers an approximate CSP query in sub-second time, whereas existing methods take hours. Interestingly, even without an index, the $\alpha$-Dijk algorithm in _COLA_ still outperforms previous solutions by more than an order of magnitude.

URI: https://hdl.handle.net/10356/68898
DOI: 10.32657/10356/68898
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Theses
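As an illustration of the hub-labelling idea underlying HHL-style reachability indexes, here is a tiny hand-made Python sketch. The example graph (edges a→h, b→h, h→c), the label sets, and the function name are all invented for illustration; they are not produced by the ordering algorithms studied in the thesis, whose contribution is precisely how to choose the ordering so that the labels stay small.

    # Every vertex v stores an "out" label Lout(v) (hubs v can reach) and an
    # "in" label Lin(v) (hubs that can reach v); s reaches t iff the sets intersect.
    Lout = {
        "a": {"a", "h"},      # a can reach hub h
        "b": {"b", "h"},
        "c": {"c"},
        "h": {"h"},
    }
    Lin = {
        "a": {"a"},
        "b": {"b"},
        "c": {"c", "h"},      # hub h can reach c
        "h": {"h"},
    }

    def reachable(s: str, t: str) -> bool:
        # 2-hop cover property: s ~> t iff some hub lies on a path from s to t.
        return bool(Lout[s] & Lin[t])

    print(reachable("a", "c"))   # True  (a -> h -> c)
    print(reachable("c", "a"))   # False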
2021-05-06 22:10:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.872519850730896, "perplexity": 1088.608033155423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988763.83/warc/CC-MAIN-20210506205251-20210506235251-00520.warc.gz"}
http://math.stackexchange.com/questions/267032/determine-the-remainder-when-21930-is-divided-by-840
Determine the remainder when $2^{1930}$ is divided by $840$ Determine $\Phi(840)$. Hence, determine the remainder when $2^{1930}$ is divided by $840$. I determined $\Phi(840) = \Phi(2^3*3*5*7) = 480$, however I don't know how I can use this to solve the problem. Since $\operatorname{gcd}(2, 840)$ is not $1$, how can I apply Euler's theorem which states if $n \geq 1$ and $\operatorname{gcd}(a, n) = 1$ then $a^{\Phi(n)}$ is congruent to $1 (\operatorname{mod} n)$? Or is there a different approach to solving a problem like this? - Before you run, learn to walk (yes, even though running is fun). Translation: Before you apply Euler's Theorem, apply Modulo Arithmetic (yes, even though Euler's Theorem seems all powerful to deal with these scenarios). Like you mentioned, we know that $2^{1930} \equiv 0 \pmod{8}$. Hence, it remains to calculate the value of $2^{1930} \pmod{105}$. Once you have this value, use Chinese remainder theorem to calculate the value of $2^{1930} \pmod {840}$. - We can not apply Euler's Totient Theorem directly as $2,840$ are not relatively prime. Let $2^{1930}=840r+s\implies s=8(2^{1927}-105r)\implies 8\mid s,s=8t$(say) So, $2^{1927}\equiv t\pmod {105}$ Now we can safely apply Totient Theorem. As $\phi(105)=\phi(3)\phi(5)\phi(7)=2\cdot4\cdot6=48\implies 2^{48}\equiv1\pmod{105}--->(1)$ In fact,we can potentially find some lower exponent $(e)$ of $2$ such that $2^e\equiv1\pmod{105}$ by applying Carmichael Function. As $\lambda(105)=lcm(\lambda(3),\lambda(5),\lambda(7))$ $=lcm(\phi(3),\phi(5),\phi(7))=lcm(2,4,6)=12 \implies 2^{12}\equiv1\pmod {105}--->(2)$ So, $2^{1920}=(2^{48})^{40}\equiv1^{40}\pmod{105}=1$ by $(1)$ or $2^{1920}=(2^{12})^{160}\equiv1^{160}\pmod{105}=1$ by $(2)$ So in any case, $2^{1920}=1+105u$ for some integer $u$ $2^{1930}=2^{10}\cdot2^{1920}=2^{10}(1+105u)$ $\equiv2^{10}\pmod{2^{10}\cdot105}\equiv1024\pmod{2^{10}\cdot105}\equiv 1024\pmod{840}\equiv184$ Alternatively, $1927\equiv7\pmod {48}\equiv7\pmod {12}$ So, $2^{1927}\equiv 2^7\pmod {105}\equiv128\equiv 23$ So, $2^{1930}\equiv 2^3\cdot 23\pmod {840}\equiv 184$ - Note: You didn't need to apply Carmichael, since Euler gives us $\phi(105) = 2 \times 4 \times 6 = 48$ and $1927 = 40 \times 48 + 7$. –  Calvin Lin Dec 29 '12 at 13:12 @CalvinLin, in general it's always better to use Carmichael function as it allows us to play with smaller exponents. Admittedly, here it did not make any difference. –  lab bhattacharjee Dec 29 '12 at 13:35 I agree with you in principle. My point was to flaming, that he should think of the basic ways to approach this problem, since he doesn't have it down solidly. Instead of trying to learn even more high powered different approaches. –  Calvin Lin Dec 29 '12 at 13:49 @CalvinLin: Since $840=2^3\cdot 3\cdot 5\cdot 7$, we have $a^2\equiv 1\pmod{3}$ and so on. So $a^{12}\equiv 1\pmod{840}$. Perhaps Carmichael could have been mentioned after solution was given. –  André Nicolas Dec 29 '12 at 15:41 @AndréNicolas Precisely! My point was that he should have learnt how to apply Modulo Arithmetic properly first. By being unable to solve this problem, he demonstrates that he doesn't think of modulo arithmetic in the prime power sense, which is (one of) the first thing you should be doing. –  Calvin Lin Dec 29 '12 at 15:52
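The congruences used in the answers above are easy to sanity-check numerically; Python's built-in three-argument pow performs modular exponentiation directly, so no extra libraries are needed.

    print(pow(2, 12, 105))     # 1   -> 2^12 = 1 (mod 105), the Carmichael exponent
    print(pow(2, 48, 105))     # 1   -> 2^48 = 1 (mod 105), the Euler exponent
    print(pow(2, 1927, 105))   # 23  -> matches 2^1927 = 2^7 = 23 (mod 105)
    print(pow(2, 1930, 840))   # 184 -> the remainder asked for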
2014-03-07 11:06:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7837609648704529, "perplexity": 285.19969854758506}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999642170/warc/CC-MAIN-20140305060722-00095-ip-10-183-142-35.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/582920/continuous-function-from-compact-metric-space-to-metric-space-is-uniformly-conti
# Continuous function from Compact Metric Space to Metric Space is Uniformly Continuous

In Rudin 4.10 he wants us to show that a continuous function from a compact metric space to a metric space is uniformly continuous by deriving that if $$f$$ is not uniformly continuous then there is an $$\epsilon >0$$ such that there are two sequences $$(p_n)$$ and $$(q_n)$$ with $$d(p_n, q_n) \to 0$$ and $$d(f(p_n), f(q_n))>\epsilon$$ and then, using the fact that every infinite subset of a compact metric space has a limit point in that space, get a contradiction.

### Proof

$$f$$ not uniformly continuous on $$X$$ implies that $$\exists \epsilon >0 : \forall \delta >0, x \in X: \exists x'\in X : d(x,x')<\delta$$ but $$d(f(x),f(x'))>\epsilon$$

So let $$p_{n}\rightarrow x$$ and $$q_{n}\rightarrow x'$$ (Need to show existence?)

Since $$\{p_n\}$$ and $$\{q_n\}$$ are infinite subsets of $$f(X)$$, a compact set by continuity, both of them have their limits in $$f(X)$$, but that would mean $$f$$ is discontinuous at $$x$$, as two sequences are converging to different values at $$x$$. Is this the contradiction that they're looking for? Thanks.

You are correct. To make it precise, first assume that $f$ is not uniformly continuous; then there is a fixed $\epsilon >0$ and $p_n, q_n$ such that $$d(p_n, q_n) \leq 1/n \ \text{ and } d(f(p_n), f(q_n)) > \epsilon$$ By passing to a subsequence if necessary, we can assume that $p_n \to x$ as $X$ is compact. As $d(p_n, q_n) \to 0$, we also have $q_n \to x$. By continuity of $f$, $$\lim_{n\to \infty} f(p_n) = f(x) = \lim_{n\to \infty} f(q_n) .$$ But this is impossible as $d(f(p_n), f(q_n)) > \epsilon$. Thus no such two sequences could be found and $f$ is uniformly continuous.
2021-01-23 01:51:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9637451767921448, "perplexity": 45.48412420121869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531702.36/warc/CC-MAIN-20210123001629-20210123031629-00294.warc.gz"}
https://math.stackexchange.com/questions/930478/remainder-of-polynomials
# Remainder of Polynomials

A polynomial $P(x)$ of degree $n \geq 2$ has a remainder of $9$ when it is divided by $(x+2)$ and a remainder of $-1$ when it is divided by $(x-3)$. Find the remainder of $P(x)$ when it is divided by $(x^2 -x-6)$.

Since $x^2-x-6=(x+2)(x-3)$ is quadratic, the remainder is linear; writing it as $a(x+2)+9$ guarantees the remainder $9$ on division by $(x+2)$, and writing it as $a(x-3)-1$ guarantees the remainder $-1$ on division by $(x-3)$. So we must have $$p(x)=(x^2-x-6)q(x)+a(x+2)+9=(x^2-x-6)q(x)+a(x-3)-1$$ Equating the two forms gives $2a+9=-3a-1$; solving for $a$ we get $a=-2$, so the remainder is $$-2(x+2)+9=-2x+5$$
2019-11-13 19:39:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4891185760498047, "perplexity": 21.814417391376274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667333.2/warc/CC-MAIN-20191113191653-20191113215653-00168.warc.gz"}
https://forum.allaboutcircuits.com/threads/lm324-op-amp-non-inverting-amplifier-to-measure-current-from-solar-panel-on-arduino.133356/
# LM324 - Op-Amp Non-Inverting Amplifier to measure current from solar panel on Arduino #### Nuno Tavares Joined Mar 18, 2017 7 Hi everyone, I'm designing a circuit to measure the current from a mini-solar panel (0.5 W) which normally reaches 80-100 mA. My circuit is attached to this post. I have a Gain of 6 with the resistors R2=5k and R1=1k because G=1+(R2/R1). Normally the input voltage on op-amp (LM324) is 0.2-0.4 V and always the output voltage is under 3.7-3.8 V. I don't understand why this happens. If 0.3 V is the input voltage and with gain of 6, the output voltage would be 1.8 V (0.3*6=1.8). The power supply of op-amp is the +5V and GND from Arduino. Can you help me with this ? #### Attachments • 25.4 KB Views: 35 #### hp1729 Joined Nov 23, 2015 2,304 Hi everyone, I'm designing a circuit to measure the current from a mini-solar panel (0.5 W) which normally reaches 80-100 mA. My circuit is attached to this post. I have a Gain of 6 with the resistors R2=5k and R1=1k because G=1+(R2/R1). Normally the input voltage on op-amp (LM324) is 0.2-0.4 V and always the output voltage is under 3.7-3.8 V. I don't understand why this happens. If 0.3 V is the input voltage and with gain of 6, the output voltage would be 1.8 V (0.3*6=1.8). The power supply of op-amp is the +5V and GND from Arduino. Can you help me with this ? 0.3 V in across 0.5 Ohms is 0.6 Amps. With 5 V VCC the output is maxed out. If the current should be < 100 mA the voltage across the 0.5 ohm resistor should be < 50 mV. Is your 56 ohm resistor 5.6 ohms? Last edited: #### crutschow Joined Mar 14, 2008 24,376 Have you measured the op amp (+) input voltage? Where is this 0.2-0.4V coming from? 100mA through 0.5Ω is only 50mV. #### Nuno Tavares Joined Mar 18, 2017 7 0.3 V in across 0.5 Ohms is 0.6 Amps. With 5 V VCC the output s maxed out. Sorry didn't understand what you mean, can you explain it better please ? #### ericgibbs Joined Jan 29, 2010 9,563 hi NT, Why have you connected Vcc to the wiper of the pot. If you said one half if the pot is Ra and the other half is Rb, this means Ra and Rb are in parallel, so the gain is R5/Ra||Rb E #### Nuno Tavares Joined Mar 18, 2017 7 Have you measured the op amp (+) input voltage? Where is this 0.2-0.4V coming from? 100mA through 0.5Ω is only 50mV. The op amp (+) input voltage is normally 0.2 V - 0.4 V #### dl324 Joined Mar 30, 2015 10,072 I have a Gain of 6 with the resistors R2=5k and R1=1k The way you have drawn the schematic isn't very useful. It is easier to discern desired function if you draw the schematic as shown below. The part designators in your description don't match the schematic: What is the panel voltage? #### Nuno Tavares Joined Mar 18, 2017 7 The way you have drawn the schematic isn't very useful. It is easier to discern desired function if you draw the schematic as shown below. The part designators in your description don't match the schematic: View attachment 122758 What is the panel voltage? The panel voltage is around 5-6 V #### dl324 Joined Mar 30, 2015 10,072 The panel voltage is around 5-6 V The voltage divider on the panel is going to give you essentially no voltage: $$\small V_{div} = V_{panel}*\frac{0.5}{56.5}=5*0.009=0.04V$$ #### Nuno Tavares Joined Mar 18, 2017 7 The voltage divider on the panel is going to give you essentially no voltage: $$\small V_{div} = V_{panel}*\frac{0.5}{56.5}=5*0.009=0.04V$$ The op amp serves to amplify that voltage to be possible read on analog pin the voltage coming from that voltage divider. 
#### dl324 Joined Mar 30, 2015 10,072 The op amp serves to amplify that voltage to be possible read on analog pin the voltage coming from that voltage divider. By choosing such a low voltage, the non-ideal attributes of the opamp can become a significant percentage of of the input signal. #### Nuno Tavares Joined Mar 18, 2017 7 By choosing such a low voltage, the non-ideal attributes of the opamp can become a significant percentage of of the input signal. So what's the best way to measure the current by Arduino ? And I can't use a current sensor, need to develop one. #### crutschow Joined Mar 14, 2008 24,376 The op amp (+) input voltage is normally 0.2 V - 0.4 V Where is the 0.2V-0.4V "normal" input voltage coming from? Op amp inputs don't generate voltage (unless they are an open circuit) and there is nothing on your schematic that would generate such a voltage. #### dl324 Joined Mar 30, 2015 10,072 So what's the best way to measure the current by Arduino ? And I can't use a current sensor, need to develop one. The usual method is to use a low value series resistor. A voltage divider across the solar panel tells you nothing about the current being drawn by a load. #### Alec_t Joined Sep 17, 2013 10,913 I'm designing a circuit to measure the current from a mini-solar panel Are you trying to measure the short-circuit current of the panel, or the current through some load driven by the panel? #### Nuno Tavares Joined Mar 18, 2017 7 Are you trying to measure the short-circuit current of the panel, or the current through some load driven by the panel? I'm trying to mesure the current through the Load (which is my RL) driven by the panel. #### dl324 Joined Mar 30, 2015 10,072 I'm trying to mesure the current through the Load (which is my RL) driven by the panel. Then use more gain in the opamp to get a more meaningful number. #### hp1729 Joined Nov 23, 2015 2,304 Sorry didn't understand what you mean, can you explain it better please ? You measure about 0.3 V at the input? it should only be 50 mV. It acts like your 56 ohm load resistor is 5.6 ohms. Check the other resistor values also. That 3.8 ish volts out is about all you will get out of an LM324 when it is maxed out so your gain calculations are meaningless. What you describe for symptoms sounds like wrong resistor values. Your schematic looks workable as it is designed. #### KeepItSimpleStupid Joined Mar 4, 2014 3,905 Having been in the solar cell research biz, that's probably a bad way of doing it, but it depends on what you need to do. Our research cells were less than 25 mA. I did design a 4-terminal I-V converter using the feedback ammeter approach that could handle +-100 mA. Although it was optimized for AC measurements, in DC it had about < 1 mV drop and 40 pA of offset. #### elnona Joined Nov 16, 2017 2 Hi! I already made my own "USB charge Doctor" to display current draw from usb. Using: arduino 1802 display, lm324, 47k resistor, 2k2 resistor as R2 and r1 (gain of 22) shunt resistor of 0.05 ohms. using your circuit and it works well. at 1 amp i read 50mv at the input of the opamp and 1150mv at the output.
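To make the numbers running through this thread concrete, here is a rough Python sketch of the same arithmetic: shunt drop, non-inverting gain, the LM324 swing limit, and converting an Arduino ADC reading back to current. The 0.5 ohm shunt, the 5k/1k gain resistors and the roughly 3.8 V swing limit come from the posts above; the 10-bit ADC, the 5 V reference and all function names are assumptions made only for this illustration.

```python
# Rough sketch of the measurement chain discussed above; component values from the thread
# (0.5 ohm shunt, gain 1 + R2/R1 with R2 = 5k and R1 = 1k, LM324 on a 5 V supply),
# plus assumed Arduino details (10-bit ADC, 5 V reference).
R_SHUNT = 0.5            # ohms, current-sense resistor
GAIN = 1 + 5000 / 1000   # non-inverting gain = 6
V_SWING_MAX = 3.8        # volts, rough LM324 output limit on a 5 V single supply
ADC_COUNTS = 1023        # 10-bit Arduino ADC full scale
V_REF = 5.0              # volts, assumed ADC reference

def expected_output(i_load_a):
    """Ideal amplifier output for a given load current, clipped at the swing limit."""
    v_shunt = i_load_a * R_SHUNT          # e.g. 0.1 A * 0.5 ohm = 50 mV
    return min(v_shunt * GAIN, V_SWING_MAX)

def adc_to_current(adc_reading):
    """Convert an analogRead()-style value back to load current in amps."""
    v_out = adc_reading * V_REF / ADC_COUNTS
    return v_out / (GAIN * R_SHUNT)

for i in (0.05, 0.10, 0.20):
    print(f"I = {i*1000:4.0f} mA  ->  Vout = {expected_output(i):.2f} V")

# Clipping starts near V_SWING_MAX / (GAIN * R_SHUNT) = 3.8 / 3 ~ 1.3 A, so an output
# stuck at 3.7-3.8 V with a supposed 0.2-0.4 V input means the real input voltage or the
# real gain is larger than assumed (the thread suggests checking the resistor values).
```

With these values a 100 mA load only produces about 0.3 V at the amplifier output, so a larger gain, as suggested above, gives better ADC resolution; an output pinned at 3.7-3.8 V simply means the amplifier is saturated.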
2020-04-05 21:18:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3604612946510315, "perplexity": 3458.5105286989633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371609067.62/warc/CC-MAIN-20200405181743-20200405212243-00293.warc.gz"}
https://stats.stackexchange.com/questions/331943/how-to-compare-mixed-effect-model-with-one-random-effect-to-one-without-random-e?noredirect=1
# How to compare mixed effect model with one random effect to one without random effect in R [duplicate] Essentially comparing: glm1 = glmer(Mortality ~ CCI + PatientRace + PatientSex + age_cat + (CCI | FacilityIdentifier), data = tmp, family = binomial, control = glmerControl(optimizer = "bobyqa"), nAGQ = 1) to m1 = glm(Mortality ~ CCI + PatientRace + PatientSex + age_cat, family=binomial, data = tmp) To determine if the random effect is a significant contributor, hopefully to show that each facility doesn't have varying practices in measuring CCI that may affect interpretation of mortality. Would appreciate any advice. ## marked as duplicate by amoeba, kjetil b halvorsen, Peter Flom♦Mar 6 '18 at 12:08 • Why not just bootstrap confidence intervals and see if that of the random effect's variance includes zero (non-significant) or not (significant)? You can simply do this with confint(glm1, method = "boot"). – Frans Rodenburg Mar 6 '18 at 5:45
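The workflow suggested in the comment is R-specific (bootstrap confidence intervals via confint). Another common way to judge whether the random effect matters is a likelihood-ratio test between the two fitted models, with the null distribution corrected because the variance sits on its boundary. The sketch below is a generic Python illustration of that correction only; the deviance values are hypothetical placeholders for whatever the fitted R models report, and the 50:50 chi-square mixture is the textbook result for a single variance component, so it is only approximate for a random slope plus intercept like (CCI | FacilityIdentifier).

```python
# Hypothetical sketch of a boundary-corrected likelihood-ratio test between a GLM
# (no random effect) and a GLMM (one extra variance component). The deviances below
# are made-up numbers standing in for the values the fitted models would report.
from scipy.stats import chi2

deviance_glm = 2412.7    # hypothetical deviance of the glm() fit
deviance_glmm = 2398.4   # hypothetical deviance of the glmer() fit

lrt_stat = deviance_glm - deviance_glmm

# Testing a variance on the boundary (sigma^2 >= 0): the plain chi2(1) reference is
# conservative; the standard correction uses an equal mixture of chi2(0) and chi2(1).
p_naive = chi2.sf(lrt_stat, df=1)
p_mixture = 0.5 * chi2.sf(lrt_stat, df=1)   # chi2(0) contributes mass only at zero

print(f"LRT statistic: {lrt_stat:.2f}")
print(f"naive chi2(1) p-value: {p_naive:.4g}")
print(f"boundary-corrected p-value: {p_mixture:.4g}")
```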
2019-08-23 22:23:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3722103238105774, "perplexity": 9064.610373468922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319082.81/warc/CC-MAIN-20190823214536-20190824000536-00432.warc.gz"}
https://computergraphics.stackexchange.com/questions/10958/how-compute-new-camera-parameters-given-a-velocity-vector
# How compute new camera parameters given a velocity vector? My goal is to update camera parameters given a velocity vector so that the camera points in the direction of the velocity vector. How should one compute the update matrix for the camera parameters? • when you say camera points, you mean to change the look at direction to conform with the velocity vector or just the position of the camera? – gallickgunner Jun 10 at 22:37 • @gallickgunner Thank you for the question. I meant to say I need both the position and the look direction should conform with the velocity vector – Amir Jun 11 at 1:06 • @gallickgunner Let me give you a better picture of what I actually need to do. In the framework I am using I need to set the camera's look direction (look_dir), the distance (dis) from the camera to look_dir, azimuth and elevation manually. Initially I was thinking I should first compute the rotation matrix replace the camera rotation matrix with the updated one, but it seems that this is not possible and I need to manually compute these camera parameters. I'm not entirely sure how to obtain these parameters starting from a given velocity vector. Do you know how I can do this? – Amir Jun 11 at 3:10 • I still don't understand why you want to change the look_dir to that of the velocity vector. Also azimuth and elevation, are you using trackball style camera defined using spherical coordinates? A normal camera usually is defined by side, up, look_at and the position vectors. For the camera's position you could do something as simple as divide the velocity vector by 60 if let's say the app is running at 60 FPS then add it to the position, so after 1 complete second it's where it should be. If you wanna change the look_at as well you could lerp between the original and the velocity? – gallickgunner Jun 11 at 5:36 • @gallickgunner I'm not sure what the camera type is but it's possible that it's a trackball style camera. What you suggested in terms of dividing the velocity vector by a number and adding it to the previous one is what I am going to use temporarily. However, this only works well if I assume that velocity vector does not rotate the camera and only moves the camera in space. I'm also not entirely sure if this would work for a trackball camera. – Amir Jun 15 at 17:01 The velocity vector becomes your forward vector, just normalize it. The up direction doesn't change, use whatever version up the camera usually uses. The last vector (call it the "right hand" vector) is the cross product of up, and forward. Then, do a second cross with the forward vector and the right hand vector to guarantee an orthonormal basis. The last bit needed is the translation which will have to come from somewhere. The exact ordering of all the operations is going to depend on the setup you are using. Is up direction y or z ect. Once the matrix if formed, find its inverse. (Since you went through the trouble of making sure it is orthonormal, the inverse is equivalent to the transpose.) This is the new view matrix. Edit: When dealing with Azimuth and Elevation you can extract the angles from a vector using the geometric asin and acos functions. For example: vec3d fwd = target_point - camera_point; // generate a vector somehow fwd = normalize(fwd); // normalize it float azimuth = acos( fwd.x ); float elevation = asin( fwd.y ); // extract angles if( fwd.z < 0 ) { elevation = -elevation; } // fix quadrant based sgn(z) The above assumes that z is the up axis, and that azimuth/elevation are given in radians. 
Now that the look direction is decided. Save it off to the side, and begin turning the camera. A good approach to turning the camera is to have a "radians per a second" turn rate. Get the time delta between updates from the system. And use those values to compute how much the camera should turn each frame just by computing the total number of radians that is needed for each direction. float time_delta_in_seconds = elapsed_seconds_since_last_frame(); const float radians_per_second = 0.1; // slowish float turn_amount = time_delta_in_seconds * radians_per_second; float turn_amount = min(radians_to_turn, turn_amount); // don't overlook radians_to_turn -= turn_amount; // update - converges on zero So each frame: 1. update the velocity vector 2. compute the new camera settings but don't use them, save them 3. compute your camera turn amount and update the camera with the new values Make the 3 steps independent and the camera will just keep turning, chasing the velocity vector. Handling user intervention seems self evident, if the user turns the camera disable the auto camera turning. If you want the camera to start turning after some preset amount of time, then just set a timer that enables auto turning and reset it every time the user moves the camera. While this isn't the best system for auto camera following, it works and should (hopefully) get you going. The camera turning will be very mechanical since the turn rate is fixed. But once its all set its not to hard to add acceleration using the same ideas. It just occurred to me that this is like 3 separate questions in one, really these should have been asked separately. At any rate, this should help some I hope. • Thank you for your response. The issue is I am not sure how to use the View matrix due to the setup of the camera in the framework I am using. In this framework, I can only set the camera's look direction (look_dir -- has components x, y, z), the distance (dis) from the camera to look_dir, azimuth and elevation manually. In some simple scenarios you can imagine look_dir being equivalent to velocity_vec but there are two issues with this: 1) I don't want to camera to have a sudden shift like 2) Velocity vector does not always determine where the camera looks. – Amir Jun 15 at 16:44 • However, in my case it is okay if look_dir becomes equivalent to the velocity vector after some time (say 1 second -- assuming each second has 30 frames or so). However, I am not sure how to update all of the parameters such as look_dir, dis, azimuth and elevation given a velocity vector. Maybe I should use the View Matrix in some way for this? I would appreciate if you can help me with this. – Amir Jun 15 at 16:49 • Thats the beauty of using a vector and the approach I outlined above. The velocity vector has that info built into it. Once you have the matrix built you can extract all the other information from the matrix itself. (I can add an edit showing that if you need, but I won't probably get to it today sorry) – pmw1234 Jun 15 at 17:15 • I'd highly appreciate it if you can update your answer and include the new solution. Also, would you mind looking through the comments I posted above that were exchanged with another user before posting your updated solution? I think those comments might also provide you some more information on what I need. Really appreciate it – Amir Jun 15 at 17:41 • This documentation for the framework I am using might be helpful as well: mujoco.org/book/haptix.html#uiCamera – Amir Jun 16 at 4:24
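A compact numpy version of the recipe in this answer may help tie the pieces together: normalize the velocity to get the forward axis, build the other two axes with cross products, use the transposed orthonormal basis as the rotation part of the view matrix, and read azimuth/elevation off the forward vector. The y-up convention, the angle definitions and every name below are assumptions made for the sketch, not part of the MuJoCo HAPTIX camera API mentioned in the comments.

```python
# Sketch of the orthonormal-basis construction described above, assuming a y-up,
# right-handed convention. Names and conventions are illustrative only.
import numpy as np

def camera_basis_from_velocity(velocity, world_up=np.array([0.0, 1.0, 0.0])):
    """Forward = normalized velocity; right = up x forward; up = forward x right."""
    fwd = velocity / np.linalg.norm(velocity)
    right = np.cross(world_up, fwd)
    right /= np.linalg.norm(right)     # degenerate if velocity is (anti)parallel to world_up
    up = np.cross(fwd, right)
    return right, up, fwd

def view_matrix(position, right, up, fwd):
    """The rotation part is orthonormal, so its inverse is its transpose."""
    rot = np.stack([right, up, fwd])   # rows = camera axes -> world-to-camera rotation
    view = np.eye(4)
    view[:3, :3] = rot
    view[:3, 3] = -rot @ position      # translate world origin into camera space
    return view

def azimuth_elevation(fwd):
    """Angles from a unit forward vector (y up): elevation from y, azimuth in the x-z plane."""
    elevation = np.arcsin(np.clip(fwd[1], -1.0, 1.0))
    azimuth = np.arctan2(fwd[2], fwd[0])
    return azimuth, elevation

velocity = np.array([1.0, 0.3, -0.5])
position = np.array([0.0, 2.0, 5.0])
right, up, fwd = camera_basis_from_velocity(velocity)
print(view_matrix(position, right, up, fwd))
print(azimuth_elevation(fwd))
```

The turn-rate limiting described in the answer can then be applied to the azimuth/elevation pair (or to the forward vector directly) before the new values are handed to the framework's camera.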
2021-06-21 16:13:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39071550965309143, "perplexity": 697.953987215287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488286726.71/warc/CC-MAIN-20210621151134-20210621181134-00575.warc.gz"}
https://ask.libreoffice.org/en/question/175977/how-do-i-prevent-libreoffice-from-capitalising-a-particular-word/?sort=votes
# How do I prevent LibreOffice from capitalising a particular word I am writing a document that has a table listing usernames and passwords. When I type a password and then tab to the next column it changes the first letter to a capital. How do I tell it "don't alter this word"? I know I can undo the change using Ctrl-Z, but it does it again when I edit the document. I can turn off the AutoCorrect option "Capitalise first letter of every sentence", but that is a global option which affects not only the whole document, but all new documents I type. What I was hoping for is a way to turn it off for a specific part of the document, maybe just for the word alone, or perhaps for the password column or for the whole table. The same question applies to URL recognition. If the username contains an @ symbol, it gets treated as a hyperlink, and I would like to turn that feature off for entries in the username column. edit retag close merge delete Sort by » oldest newest most voted See this question. Based on that, I've implemented that in tdf#121779 for version 6.3, due to release in summer 2019. Wrt @ in a particular column - it's not possible. You either may disable URL recognition totally, or have it active everywhere. more Not quite the same thing. I would end up with an auto-correct exception list containing everyone's password! I was hoping for a way to flag a section of text as "don't mess with this", in the same way you can highlight some text and specify Language / For Selection / None (Do not check spelling). Any other workarounds? ( 2018-12-12 13:47:22 +0100 )edit Well - as I said, there's no way to mark parts of text as "do not apply autocorrection". But you have written "How do I tell it "don't alter this word"?", and "What I was hoping for is a way to turn it off for a specific part of the document, maybe just for the word alone". Well - how do you expect to disable something for a single word when it isn't yet typed, if not by placing to some exception list? And when you already have typed it, you are done with autocorrection. Simply define a hotkey to toggle autocorrection (look for While Typing in the Format category of Customize dialog), and use it before/after passwords. Don't think there will be other possible workflow. ( 2018-12-12 14:01:00 +0100 )edit "Well - how do you expect to disable something for a single word when it isn't yet typed, if not by placing to some exception list?" - I thought it might be possible by a mechanism similar to highlighting a word and selecting "Do not check spelling" as I mentioned above. Thanks for the suggestion of toggling "While Typing" ( 2018-12-12 14:40:00 +0100 )edit
2019-12-06 10:37:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21177423000335693, "perplexity": 1678.8066174286885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540487789.39/warc/CC-MAIN-20191206095914-20191206123914-00231.warc.gz"}
http://forum.allaboutcircuits.com/threads/practice-diode-problem-from-old-test.39937/
# Practice Diode Problem From Old Test Discussion in 'Homework Help' started by Sleepcakez, Jun 26, 2010. 1. ### Sleepcakez Thread Starter New Member Jun 26, 2010 13 0 So the circuit has an input Vi = 1.4 + 0.1cos(wt) V. Both Diodes have a turn on value of .7V There is an initial current of 17mA running through Diode 1. n = 1.2 I am supposed to find the DC current through Diode 2 and then I have to find the small signal resistance r1 and r2. I do not have the answer to the problem, and I do not believe I'm going quite in the right direction on this. At first I went and solved out that the current running through the 1k resistor was .7mA, and the current running through D2 was 16.3mA. I don't have my work in front of me right now but I know I used the genero rd equation to try and solve for r1 and r2. Thanks for any help. File size: 9.2 KB Views: 38 2. ### t_n_k AAC Fanatic! Mar 6, 2009 5,448 782 If the DC current in D1 is 17mA then what will be the DC current in D2? Note that the 1k resistor across D2 will take 0.7mA - so how much is left from 17mA to flow in D2? Once you know the DC currents in each diode you calculate their effective dynamic resistances. $r_{dynamic}=\frac{nV_T}{I_{diode$$DC$$}}$ where $V_T=\frac{kT}{q}=0.026 \ at \ room \ temp$ 3. ### Sleepcakez Thread Starter New Member Jun 26, 2010 13 0 Where did you get the equation Rdynamic? Did you manipulate some equations to obtain that or what? I was unable to find that in my book. Because I knew the question wanted me to use (n), I figured I needed to use the ideal diode equation since I saw no other equations in my book that involved (n). That being said: So the dc current running through D2 is 16.3mA. Using that equation to solve for Rd1 and Rd2 I obtain the following: Rd1 = (1.2)(.026v)/(.017A) Rd2 = (1.2)(.026v)/(.0163A) Solve for each of those and I'm done? I really didn't take this problem for being this easy after working on it for so long, although in my first few attempts I did get pretty close to going in that direction. 4. ### Georacer Moderator Nov 25, 2009 5,142 1,266 The formula for the diode dynamic resistance is pretty much given in any textbook. The steps for getting to it are the following: We know that $i_{\small{D}}(t)=I_{\small{D}}e^{\small{v_{\tiny{d}}/nV_{\tiny{T}}}}$ The small signal approach supposes that $\frac{v_{\small{d}}}{n{\cdot}V_{\small{T}}}<<1$ and the above equations can be rewritten as its Laplace Transform as $ i_{\small{D}}(t)=I_{\small{D}}\left(1+\frac{v_{\small{d}}}{n{\cdot}V_{\small{T}}}\right)$ wich can be groupped as follows:$i_{\small{D}}(t)=I_{\small{D}}+\frac{I_{\small{D}}}{n{\cdot}V_{\small{T}}}{\cdot}v_{\small{d}}$ Now we can see that the total current is composed by a DC current and an AC component wich has the following formula: $i_{\small{d}}=\frac{I_{\small{D}}}{n{\cdot}V_{\small{T}}}{\cdot}v_{\small{d}}$ (note: in the denominator it says Vt, I don't know why TEX keeps skewing it) As a result, the dynamic resistance is $\frac{v_{\small{d}}}{i_{\small{d}}}=r_{\small{d}}=\frac{n{\cdot}V_{\small{T}}}{I_{\small{D}}}$ I hope this is clear enough. 5. ### Sleepcakez Thread Starter New Member Jun 26, 2010 13 0 Thank you very much for the reply. It's about 5:30 a.m. here right now, so when I wake up I'll check out all that derivation. 6. ### Ron H AAC Fanatic! Apr 14, 2005 7,050 656 What does the (t) refer to in $i_{\small{D}}(t)$? 
The equation I am familiar with is $i_{\small{d}}=I_{\small{s}}e^{\small{v_{\tiny{d}}/nV_{\tiny{T}}}}$ I would have just solved for Vd and taken the derivative with respect to Id. 7. ### Georacer Moderator Nov 25, 2009 5,142 1,266 I' m glad you asked! This is a point that really got me thinking when I started studying circuits. What I 'm about to say is an exact transfer of what Sedra/Smith teaches: The (t) is just meant to show that the diode current is a function of time and not a constant. But the real juice of the analysis is coming up next: When describing a quantity (voltage or current) we will be using the following assumptions (d refers to diode and can be substituted with any letter): $x_{\small{d}}$ refers to a time-variable quantity (AC signal) $X_{\small{D}}$ refers to a constant quantity (DC signal) $x_{\small{D}}$ refers to the sum of the above, the overlay of the AC and the DC signal. For our example, the AC signal is the small signal applied to the diode and the DC is the bias signal. The equation you are familiar with is written, with respect of the above: $i_{\small{D}} =I_{\small{S}}{\cdot}e^{v_{\tiny{D}}/n{\cdot}V_{\tiny{T}}$ wich can be rewritten by distinguishing the DC and the AC components: $i_{\small{D}}=I_{\small{S}}{\cdot}e^{(V_{\tiny{D}}+v_{\tiny{d}})/n{\cdot}V_{\tiny{T}}} {\Leftrightarrow}$ $i_{\small{D}}=I_{\small{S}}{\cdot}e^{V_{\tiny{D}}/n{\cdot}V_{\tiny{T}}}\ {\cdot}\ e^{v_{\tiny{d}}/n{\cdot}V_{\tiny{T}}}$ and replacing $I_{\small{D}}=I_{\small{S}}{\cdot}e^{V_{\tiny{D}}/n{\cdot}V_{\tiny{T}}}$ in the above equation, we get: $i_{\small{D}}=I_{\small{D}}{\cdot}e^{v_{\tiny{d}}/n{\cdot}V_{\tiny{T}}}$ This equation conveniently relates the dynamic response of a diode in a small signal, when the diode is biased by a given (or known) dc current. Last edited: Jun 27, 2010 8. ### Ron H AAC Fanatic! Apr 14, 2005 7,050 656 I still think solving for Vd and differentiating WRT Id is simpler. Nov 25, 2009 5,142 1,266 10. ### t_n_k AAC Fanatic! Mar 6, 2009 5,448 782 Hopefully that question is resolved in your mind. Seems that was all that the question was asking. So yes you are probably done. However, I'm not sure why in the posted circuit, the source has a small signal ac component superimposed on the DC. Was there no other question in relation to the ac voltage across the diode or something of that nature? 11. ### Georacer Moderator Nov 25, 2009 5,142 1,266 Maybe the teacher has my formula in mind, wich can only be applied if the AC component of voltage on the diode is under 10mV (sufficiently small). 12. ### Sleepcakez Thread Starter New Member Jun 26, 2010 13 0 I have no clue as to why the circuit was presented in that form. Literally all that was asked was to find the current through Diode 2 and the small signal resistance of Diode 1 and 2. Edit: I used LTSpice to draw that circuit up. On the problem itself, it was shown as a single sinusoidal source *1.4 + 0.1cos(wt)* Last edited: Jun 28, 2010
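As a quick numerical check of the final answer, the thread's values can be plugged into $r_d = nV_T/I_D$; the short Python sketch below uses $V_T = 0.026$ V at room temperature, as in the posts above.

```python
# Numerical check of r_d = n * V_T / I_D using the values discussed in the thread.
n = 1.2            # emission coefficient given in the problem
V_T = 0.026        # thermal voltage at room temperature, volts
I_D1 = 17e-3       # DC current through D1, amps (given)
I_R = 0.7 / 1e3    # 0.7 V across the 1 k resistor in parallel with D2
I_D2 = I_D1 - I_R  # 16.3 mA left for D2

r_d1 = n * V_T / I_D1
r_d2 = n * V_T / I_D2
print(f"I_D2 = {I_D2*1e3:.1f} mA")
print(f"r_d1 = {r_d1:.2f} ohms")   # about 1.84 ohms
print(f"r_d2 = {r_d2:.2f} ohms")   # about 1.91 ohms
```

This matches the expressions Sleepcakez wrote down, giving dynamic resistances of roughly 1.8 and 1.9 ohms.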
2016-12-04 16:22:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 24, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8308483362197876, "perplexity": 1383.2036213566087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541324.73/warc/CC-MAIN-20161202170901-00479-ip-10-31-129-80.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1172617/finding-r-for-a-power-series
# Finding R for a power series Let $\sum_2^\infty a_nx^n$ be a power series. Find the radius of convergence when $\lim\limits_{n \to \infty} \frac{a_n}{n^3} = 1$. I've tried using the root test but that gets messy, and I can't find a way to use the ratio test. The root test is not that messy. As $\lim a_n/n^3 = 1$ you have for sufficiently large $n$ that $1/2 \le a_n/n^3 \le 3/2$, so $n^3/2 \le a_n \le 3n^3/2$. And the $n$-th root of both sides of the inequality is easily seen to approach $1$. • Okay great. So R=1 then? – Ronique Hossain Mar 2 '15 at 23:39 • Yes, the radius of convergence is $1$. – quid Mar 2 '15 at 23:52 hint: compare it with the series $\displaystyle \sum_{n=1}^\infty n^3x^n$ whose radius of convergence is $1$. For all natural $n$ big enough, we have that $$\frac{n^3}2<a_n<\frac{3n^3}2\;\;\;\text{and}\;\;\;a_n\ge 0$$ (just apply the definition of limit with $\epsilon=\frac12$), and from here $$\sqrt[n]\frac{n^3}2\le\sqrt[n]{a_n}\le\sqrt[n]\frac{3n^3}2$$ Apply now the squeeze theorem to get $\sqrt[n]{a_n}\xrightarrow[n\to\infty]{}1$
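A quick numerical illustration of the root-test argument above (nothing more than a sanity check): since $a_n$ behaves like $n^3$ for large $n$, the $n$-th root of $a_n$ tends to $1$, so the radius of convergence is $1$.

```python
# Numerical illustration of the root test for a representative sequence with a_n/n^3 -> 1.
# (n^3)^(1/n) -> 1, so the radius of convergence is 1/1 = 1.
for n in (10, 100, 1000, 10_000, 100_000):
    a_n = n**3
    print(n, a_n ** (1.0 / n))   # approaches 1 as n grows
```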
2019-08-19 10:09:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9676917195320129, "perplexity": 254.23146950204742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314721.74/warc/CC-MAIN-20190819093231-20190819115231-00213.warc.gz"}
https://brilliant.org/discussions/thread/current/
# current When current flows from the - to the + terminal, why is it shown as moving from + to -? Note by Amritesh Anand 4 years, 3 months ago This direction is chosen by convention. When the concept of current was first discovered by physicists, they did not know about the negatively charged electrons, and hence they thought that current must flow in the direction of positive charge. So, even after the discovery of electrons, scientists chose not to change the convention. Hence it is shown that current moves from + to - in the direction of conventional current (the flow of positive charge). - 4 years, 3 months ago Magnetic flux goes from the south pole to the north pole, but why doesn't it go from the north pole to the south pole? Please help me. - 4 years, 3 months ago
2017-10-18 18:43:56
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8362114429473877, "perplexity": 862.6566656176292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823067.51/warc/CC-MAIN-20171018180631-20171018200631-00313.warc.gz"}
https://marocstreetfood.com/quebec/2d-divergence-theorem-in-the-plane-example.php
1 Statement of Stokes' theorem uni-osnabrueck.de. Lecture 33 classical integration theorems in the plane green's theorem and the divergence theorem, examples: 1. the vector field f, what is the geometric meaning of divergence and curl? imagine a 2d plane with a river that the divergence theorem says that the total "spreading out". Notes on Green's Theorem and Related Topics UMass Lowell The Divergence Theorem Math24. Lecture 33 classical integration theorems in the plane green's theorem and the divergence theorem, examples: 1. the vector field f, example 16.9.2 let ${\bf f}$ we compute the two integrals of the divergence theorem. ex 16.9.8 let $e$ be the solid cone above the $x$-$y$ plane and inside $z=1$. The divergence theorem by a double integral in 2d. example 1 let s be the surface bounded by the paraboloid: z = 4 24/07/2004 · divergence and stoke's theorems in 2d jul 17, (for example, a subset u of r^2 which is the divergence theorem in 2d, and that: Fact it is shown in §7 that the boundary of every open set in the plane has the the gauss-green theorem 45 2d divergence theorem: question on the integral over the boundary curve. then those integrals are zero regardless of orientation. for example, to find $\int_{rq}$ Math 241 - calculus iii spring 2012, section cl1 §16.8. stokes' theorem in these notes, we illustrate stokes' theorem by a few examples, and highlight the fact that 24/07/2004 · divergence and stoke's theorems in 2d jul 17, (for example, a subset u of r^2 which is the divergence theorem in 2d, and that: The theorem is also called gauss' theorem. example 1. integrals are inherently 2d. hence, the divergence theorem essen- projection of c onto the xy-plane, and the divergence theorem; example 16.8.2 let ${\bf f}$ the plane $z=2x+2y-1$ and the paraboloid $z=x^2+y^2$ intersect in a closed curve. Ee2: green's, divergence & stokes' theorems plus maxwell's equations green's theorem in a plane: let p(x,y) and q(x,y) be arbitrary functions in the x,y 2d divergence theorem: question on the integral over the boundary curve. understanding this very generic divergence theorem where the open set have border $c^k$ 1. 12/10/2014 · the divergence theorem in complex coordinates, (for example), and is useful in general in 2d cft/string and using the green's theorem in the plane. iii.f flux and the divergence theorem plane. d s now the surface s is example 4. use the divergence theorem to calculate $\iiint_D 1\,dV$ where v is the Math 241 - calculus iii spring 2012, section cl1 §16.8. stokes' theorem in these notes, we illustrate stokes' theorem by a few examples, and highlight the fact that examples illustrating how to use stokes' theorem. (the $xz$-plane for above example). for stokes' theorem, the idea behind the divergence theorem; math 2374.
example 1. evaluate the circulation of around the curve c where c is the circle $x^2 + y^2 = 4$ that lies in the plane z= -3, tor calculus are gauss's divergence theorem (projected down into the 2d tangent plane to the surface d) theorem worked: ~0 =~0. example #3. iii.f flux and the divergence theorem plane. d s now the surface s is example 4. use the divergence theorem to calculate $\iiint_D 1\,dV$ where v is the 12/10/2014 · the divergence theorem in complex coordinates, (for example), and is useful in general in 2d cft/string and using the green's theorem in the plane. Green's theorem, stokes' theorem, and the divergence theorem 338 on physical plane, for example, stokes' theorem, and the divergence theorem v10.2 the divergence theorem the closed surface s projects into a region r in the xy-plane. we assume s is vertically simple, cylinder would be an example.) Example 16.9.2 let ${\bf f}$ we compute the two integrals of the divergence theorem. ex 16.9.8 let $e$ be the solid cone above the $x$-$y$ plane and inside $z=1$ green's theorem is the second and last integral theorem in the two dimensional plane. $\langle x+y,\ yx\rangle$ for example is no gradient field curl and divergence Lecture 33 classical integration theorems in the plane green's theorem and the divergence theorem the 2d divergence theorem is to divergence what green's {2d-curl} so any of the actual computations in an example using this theorem would be Tor calculus are gauss's divergence theorem (projected down into the 2d tangent plane to the surface d) theorem worked: ~0 =~0. example #3. math 241 - calculus iii spring 2012, section cl1 §16.8. stokes' theorem in these notes, we illustrate stokes' theorem by a few examples, and highlight the fact that Notes on green's theorem and related topics divergence theorem which is a special case of gauss's theorem in the plane, the by green's theorem in the plane, $\iint_A \operatorname{div}\vec{v}\,dx\,dy = \int_{\partial A}\dots$ we can show that the divergence theorem in two dimensions can for example, then the unit
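For reference, the planar (2D) divergence theorem that these fragments keep citing is the flux form of Green's theorem. For a plane region $D$ with positively oriented, piecewise-smooth boundary curve $C$, outward unit normal $\mathbf{n}$, and a continuously differentiable field $\mathbf{F}=(P,Q)$, it states $$\oint_{C} \mathbf{F}\cdot\mathbf{n}\,ds \;=\; \iint_{D} \nabla\cdot\mathbf{F}\,dA \;=\; \iint_{D}\left(\frac{\partial P}{\partial x}+\frac{\partial Q}{\partial y}\right)dA.$$ As a quick worked check, take $\mathbf{F}=(x,y)$ on the unit disk: the divergence is $2$ over an area of $\pi$, giving $2\pi$ on the right, while on the unit circle $\mathbf{F}\cdot\mathbf{n}=1$ integrated over a circumference of $2\pi$ gives $2\pi$ on the left.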
2020-10-24 17:42:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.926712691783905, "perplexity": 1219.325546210091}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107884322.44/warc/CC-MAIN-20201024164841-20201024194841-00405.warc.gz"}
http://www.cs.iit.edu/~dbgroup/bibliography/G10a.html
## IIT Database Group ### Abstract In many application areas like scientific computing, data-warehousing, and data integration detailed information about the origin of data is required. This kind of information is often referred to as data provenance. The provenance of a piece of data, a so-called data item, includes information about the source data from which it is derived and the transformations that lead to its creation and current representation. In the context of relational databases, provenance has been studied both from a theoretical and algorithmic perspective. Yet, in spite of the advances made, there are very few practical systems available that support generating, querying and storing provenance information (We refer to such systems as provenance management systems or PMS). These systems support only a subset of SQL, a severe limitation in practice since most of the application domains that benefit from provenance information use complex queries. Such queries typically involve nested sub-queries, aggregation and/or user defined functions. Without support for these constructs, a provenance management system is of limited use. Furthermore, existing approaches use different data models to represent provenance and the data for which provenance is computed (normal data). This has the intrinsic disadvantage that a new query language has to be developed for querying provenance information. Naturally, such a query language is not as powerful and mature as, e.g., SQL. In this thesis we present Perm, a novel relational provenance management system that addresses the shortcoming of existing approaches discussed above. The underlying idea of Perm is to represent provenance information as standard relations and to generate and query it using standard SQL queries; ”Use SQL to compute and query the provenance of SQL queries”. Perm is implemented on top of PostgreSQL extending its SQL dialect with provenance features that are implemented as query rewrites. This approach enables the system to take full benefit from the advanced query optimizer of PostgreSQL and provide full SQL query support for provenance information. Several important steps were necessary to realize our vision of a ”purely relational” provenance management system that is capable of generating provenance information for complex SQL queries. We developed new notions of provenance that handle SQL constructs not covered by the standard definitions of provenance. Based on these provenance definitions rewrite rules for relational algebra expressions are defined for transforming an algebra expression q into an algebra expression that computes the provenance of q (These rewrites rules are proven to produce correct and complete results). The implementation of Perm, based on this solid theoretical foundation, applies a variety of novel optimization techniques that reduce the cost of some intrinsically expensive provenance operations. By applying the Perm system to schema mapping debugging - a prominent use case for provenance - and extensive performance measurements we confirm the feasibility of our approach and the superiority of Perm over alternative approaches. 
### bibtex @phdthesis{G10a, author = {Glavic, Boris}, date-modified = {2012-12-14 18:55:49 +0000}, keywords = {Provenance; Perm}, pdfurl = {http://cs.iit.edu/%7edbgroup/assets/pdfpubls/G10a.pdf}, projects = {Perm}, school = {University of Zurich}, title = {{Perm: Efficient Provenance Support for Relational Databases}}, venueshort = {PhD Thesis}, year = {2010}, bdsk-url-1 = {http://cs.iit.edu/%7edbgroup/assets/pdfpubls/G10a.pdf} } ### Reference Perm: Efficient Provenance Support for Relational Databases Boris Glavic University of Zurich.
2020-08-04 08:50:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24397416412830353, "perplexity": 2544.3346255483452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735867.23/warc/CC-MAIN-20200804073038-20200804103038-00020.warc.gz"}
http://lingayasuniversity.edu.in/bvoc/assessment-examination/
Assessment / Examination Lingaya’s Vidyapeeth shall conduct the examinations at the University Campus and evaluate them as per university rules and regulations. However, for the assessment of the skill development components, the university may consult the respective Sector Skill Council empanelled with NSDC to design the examination and assessment pattern for those components. The university may also consider using the designated assessors of Sector Skill Councils/industry associations to conduct the practical assessment.
2019-07-23 21:01:35
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8261412978172302, "perplexity": 8601.346515202875}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529664.96/warc/CC-MAIN-20190723193455-20190723215455-00069.warc.gz"}
https://chemistry.stackexchange.com/questions/98122/can-hydrogen-peroxide-be-oxygenated
# Can hydrogen peroxide be oxygenated? As per the heading, could oxygen be dissolved in hydrogen peroxide by some method, so that free oxygen and hydrogen peroxide end up mixed (perhaps under pressure), similarly to how $\ce{CO2}$ or $\ce{O2}$ dissolve in water? I understand peroxide tends to break down and would produce dissolved oxygen over time anyway, though with a loss of peroxide. The question is just for the sake of knowing, no pressing need. Thanks... • In general, all gases have some non-zero solubility in any solvent. It is generally a small number. The amount of gas dissolved also depends on the partial pressure of that gas. So, sure, why not? – Eashaan Godbole Jun 10 '18 at 14:01 If we assume a hydrogen peroxide solution (often 3%) is quite similar to water in its ability to dissolve oxygen and carbon dioxide, then yes, some oxygen is always dissolved in the solution at STP. However, the amount of carbon dioxide that can dissolve in water is much higher than the amount of oxygen. Carbon dioxide reacts with water to form carbonic acid, and that drives water to dissolve something like 25 times more carbon dioxide than oxygen (according to a cursory internet search) under similar conditions. Secondly, the addition of hydrogen peroxide to the water, which is constantly decomposing to produce free oxygen, would lead the solution to dissolve even less atmospheric oxygen than normal water. So, yes, it dissolves some. Nowhere near as much as carbon dioxide. I'm not sure it would be useful in any context where it might compare to soda water =) This is really what hydrogen peroxide already is, if it weren't for the brown bottle slowing down the decomposition. You would see slow bubbles of "fizz" as the peroxide gradually goes "flat." Higher concentrations of peroxide would release more oxygen and can be quite dangerous. Lab-quality 30% solutions are corrosive, strong oxidizers, and can be an explosion risk. But the oxygen still would not remain dissolved in the water for long, hence the danger. They have special handling precautions and even special expandable "accordion" containers for storage.
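To put rough numbers on "some oxygen is always dissolved" and the "about 25 times more" figure, a Henry's-law estimate can be sketched. The constants below are approximate literature values for water at 25 °C and are an assumption of this sketch, as is treating dilute 3% peroxide as water-like (the same simplification the answer makes).

```python
# Rough Henry's-law estimate of dissolved O2 and CO2 in water at 25 C.
# kH values are approximate literature numbers in mol per litre per atm.
KH = {"O2": 1.3e-3, "CO2": 3.4e-2}        # mol / (L atm), approximate, 25 C
PARTIAL_P = {"O2": 0.21, "CO2": 1.0}      # atm: O2 from air; CO2 as pure gas at 1 atm
MOLAR_MASS = {"O2": 32.0, "CO2": 44.0}    # g/mol

for gas in ("O2", "CO2"):
    c = KH[gas] * PARTIAL_P[gas]          # Henry's law: c = kH * p
    print(f"{gas}: {c*1e3:.2f} mmol/L = {c*MOLAR_MASS[gas]*1000:.0f} mg/L")

# At equal partial pressures the ratio kH(CO2)/kH(O2) is roughly 3.4e-2 / 1.3e-3 ~ 26,
# in line with the "about 25 times more" figure quoted in the answer above.
```

The oxygen figure of roughly 9 mg/L under air is consistent with the familiar dissolved-oxygen saturation level of surface water, which supports the answer's point that only a small amount of extra oxygen can be held in solution.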
2019-03-26 16:25:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.727821409702301, "perplexity": 1577.3975165807192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912205597.84/warc/CC-MAIN-20190326160044-20190326182044-00489.warc.gz"}
http://www.uow.edu.au/informatics/maths/research/seminar/index.html
School of Mathematics & Applied Statistics (SMAS) # Seminars ### Applied mathematics/general seminars Speaker: Ray Withers (ANU) Time and Date: Friday 26 April 2013, 3:30pm Room: 24-203 Title: Order and ‘disorder’, a chemist's view: what we know, what we don’t know and what we (often wrongly!) assume. Abstract: In 1912 von Laue, Friedrich & Knipping first exposed a crystal to a beam of X-rays. The experiment was initially carried out in order to understand the nature of the radiation itself; instead, its real importance was the discovery of X-ray diffraction. In the same year, 1912, WL Bragg developed his famous law thereby making it possible to calculate the positions of atoms within crystals from the intensities of diffracted beams. The “diffraction” of X‑rays thus changed from the status of being a physical phenomenon to that of a tool for exploring the arrangement of atoms within crystals. The extraordinary success of X-ray crystallography ever since has led to the now largely “mature” science of crystallography. Like all successful disciplines, however, its very success inevitably led to the imposition of rigid “rules” as to what constitutes crystalline order and what doesn’t e.g. “ .. A unit cell of a crystal is a .. parallel-sided region .. from which the entire crystal can be built up by purely translational displacements ..” Shriver and Atkins, P66, 2009!! Wrong, as demonstrated by the (eventual) widespread acceptance of aperiodic/quasicrystalline order! Likewise, the direct observation of curved graphite planes in Transmission Electron Microscope (TEM) images of carbon support films in the 1960’s was ignored because “ .. lattice planes can’t curve ..”. Bye-bye the chance to discover bucky balls and bucky tubes much earlier than they were! In this contribution, a range of other fundamentalist type structural notions will be discussed ranging from the strange use of “nodal planes” when describing the molecular orbitals of 1-D “crystalline”, periodic ring molecules such as benzene or cyclopentadienyl to the question of why there is extensive notation describing “real”, but not “reciprocal”, space to the notion that a crystal structure refined from ISIS or synchrotron data with a good R‑factor is necessarily correct. The need to always keep thinking and to extend our ideas of what constitutes order to encompass whatever we experimentally encounter is still with us and continues to separate thoughtful structural chemists from handle‑turners. ‘Ordered’ crystalline materials are often far more subtle than the straitjackets imposed by crystallographic or chemical fundamentalism. Functionally useful materials (piezoelectrics, relaxor ferroelectrics, ionic conductors, solid solutions etc.), for example, are often modulated and frequently inherently flexible [1-3]. A detailed understanding of the structure, both average as well as local (on the relevant length and time scales) of such materials, is essential for an understanding of their properties and of methods to optimize and manipulate them. In this contribution, the results obtained from several such systems will be described including inherently Pb-free polar functional materials and the Li3xLn2/3‑xTiO3, 0.047 < x < 0.147, family of Li ion conductors. The local crystal chemistry underlying the inherent structural flexibility of these materials will be discussed along with the characteristic diffraction signatures of such behaviour. 
_____________________ Speaker: Anne Thomas (University of Sydney) Time and Date: Friday 19 April 2013, 3:30pm Room: 24-203 Title: 3-manifolds, cube complexes and lattices Abstract: A recent spectacular result in low-dimensional topology is that every closed 3-manifold has a finite cover which fibres over a circle. This was conjectured by Thurston in the 1970s, and proved by Agol in 2012, using geometric group theory, in particular group actions on cube complexes. I will explain some of the key ideas, and then give some applications to the study of lattices in locally compact groups. _____________________ Speaker: Zhiyuan Liu (Monash University) Time and Date: Friday 8 March 2013, 3:30pm Room: 24.G03 Title: Optimal Toll Design Problem of Urban Congestion Pricing Abstract: In a road transport network, the drivers’ route choice behaviour is un-cooperative, which would lead to an unwise use of the network and severe traffic congestions in some areas. Congesting pricing is one of the few instruments used by the transport authorities to properly adjust drivers’ route choice decisions. Based on given pricing locations, the Toll Design Problem aims to obtain the optimal toll pricing rate such that the total level of congestions in the network can mitigated. This presentation will first briefly review some congestion pricing practices in Singapore, London and Scandinavia. Then discuss about the modelling skills for the optimal toll deign problem. Subsequently, modelling for the Toll Design Problem for some newly proposed pricing schemes will be covered. _____________________ Speaker: Lisa Clark (University of Otago) Time and Date: Friday 22 February 2013, 3:30pm Room: 24.204 Title: Spectral properties of C*-algebras associated to groupoids Abstract: Groupoids appear in a number of different branches of pure mathematics. In operator algebras, we associate a C*-algebra to a grouoid so that properties of the algebra can be seen in propoerties of the groupoid. In this talk, I will begin by describing how a groupoid is a generalisation of the action of a group on a set. Then, I will describe how to associate a C*-algebra to a groupoid and demonstrate how spectral properties of the algebra correspond with topological properties of the groupoid. This talk should be accessible to a general math audience. ________________________ Speaker: Dana Williams (Dartmouth College) Time and Date: 3:30pm, Friday 30 November 2012 Room: 24-103 Title: The Equivariant Brauer Group Abstract: In algebraic topology, we learn to associate groups $H^{n}(T)$ to locally compact spaces which count the $n$-dimensional holes in T''. In this talk, I want to describe how to realize $H^{3}(T)$ as a set $\mathop{\rm Br}(T)$ of equivalence classes of certain well-behaved $C^*$-algebras. The group structure imposed on $\mathop{\rm Br}(T)$ via its identification with $H^{3}(T)$ is very natural in its $C^*$-setting. With this group structure, $\mathop{\rm Br}(T)$ is called the \emph{Brauer group} of $T$. Depending on your point of view, this result can be viewed either as a concrete realization of $H^{3}(T)$ or as a classification result for a class of $C^*$-algebras. In the last part of the talk, I want to describe an equivariant version of $\mathop{\rm Br}(T)$ developed jointly with David Crocker, Alex Kumjian and Iain Raeburn. No prior knowledge of $C^{*}$-algebras or operator algebras will be assumed. 
___________________________ Speaker: Xiang Xu (Michigan State University) Time and Date: 3:30pm, Friday 23 November 2012 Room: 24.103 Title: Mathematical analysis for fractional diffusion equations: modeling, forward problems and inverse problems Abstract: Time-fractional diffusion equations are of practical interest and importance, since they well describe the power law decay for the diffusion in porous media. In this talk, recent progresses on time-fractional diffusion equations are discussed, especially on some typical inverse problems, including backward problem, inverse source problem, inverse boundary problem, inverse coefficient problem etc. _________________________ Speaker: Andrew Francis (University of Western Sydney) Room: 15.206 Time and Date: 3:30pm, Friday 2 November 2012 Title: Bacterial genome evolution with algebra!? Abstract: The genome of a bacterial organism consists of a single circular chromosome that can undergo changes at several different levels. There is the very local level of errors that are introduced through the replication process, giving rise to changes in the nucleotide sequence (A,C,G,T); there are larger scale sequence changes occurring during the lifetime of the cell that are able to insert whole segments of foreign DNA, delete segments, or invert segments (among other things); and there are even topological changes that give rise to knotting in DNA. Algebra might be defined as the study of sets with structure", and has been used over the past century to describe the symmetries of nature, most especially in areas like physics and crystallography, but it also plays a role in technological problems such a cryptography. In this talk I will describe how algebraic ideas can be used to model some bacterial evolutionary processes. In particular I will give an example in which modelling the inversion process gives rise to new algebraic questions, and show how algebraic results about the affine symmetric group can be used to calculate the inversion distance" between bacterial genomes. This has applications to phylogeny reconstruction. _________ Speaker: Scott Morrison (Australian National University) Time and Date: 3:30pm, Friday 26 October 2012 Room: 15.206 Title: Knots and quantum computation Abstract: I'll begin with the Jones polynomial, a knot invariant discovered 30 years ago that radically changed our view of topology. From there, we'll visit the complexity of evaluating the Jones polynomial, the topological quantum field theories related to the Jones polynomial, and how all these ideas come together to offer an unorthodox model for quantum computation. ______________ Speaker: Peter Kim Room: 15.206 Time and Date: 3:30pm on 19 October 2012 Title: Modelling protective anti-tumour immunity using a hybrid agent-based and delay differential equation approach Abstract: Although cancers seem to consistently evade current medical treatments, the body’s immune defences seem quite effective at controlling incipient tumours. Understanding how our immune systems provide such protection against early-stage tumours and how this protection could be lost will provide insight into designing next-generation immune therapies against cancer. To engage this problem, we formulate a mathematical model of the immune response against small, incipient tumours. The model considers the initial stimulation of the immune response in lymph nodes and the resulting immune attack on the tumour and is formulated as a hybrid agent-based and delay differential equation model. 
____________
Speakers: Ngamta Thamwattana and Alexander Gerhardt-Bourke (abstracts below).
Time and Date: 3:30pm Friday, 5 September 2012
Location: Room 15.206
Talk 1: Ngamta Thamwattana
Title: Modelling peptide nanotubes for artificial ion channels
Abstract: We investigate the van der Waals interaction of D,L-Ala cyclo peptide nanotubes and various ions, ion–water clusters and C60 fullerenes, using the Lennard-Jones potential and a continuum approach. Our results predict that Li+, Na+, Rb+ and Cl− ions and ion–water clusters are accepted into peptide nanotubes of 8 amino acid residues whereas the fullerene C60 molecule is rejected. The model indicates that the C60 molecule is accepted into peptide nanotubes of 12 amino acid residues, suggesting that the interaction energy depends on the size of the molecule and the internal diameter of the peptide nanotube. This result may be useful in the size-selective molecular delivery of pharmacologically active agents. Further, we also find that the ions prefer a position inside the peptide ring where the energy is minimum. In contrast, Li–water clusters prefer to be in the space between each peptide ring.
Talk 2: Alexander Gerhardt-Bourke
Title: Continuous Logic and Operator Algebras
Abstract: How can we define a continuous analogue of logic, and why would we want to? We will answer both of these questions by defining continuous logic, and then seeing how we can use continuous logic to classify some special C*-algebras.
____________________________
Day: Friday 3 August 2012
Location: Rm15.206
Time: 3:30pm
Talk: Directed graphs and their higher dimensional analogues
Speaker: Sam Webster
Abstract: Last colloquium, Aidan Sims spoke about how we study C*-algebras associated to directed graphs. There is a higher-dimensional version of a directed graph called a higher-rank graph that we also like to associate C*-algebras to, but they can be a little tricky to picture and understand. I'll speak about some recent work of Aidan Sims, Iain Raeburn, Robbie Hazlewood and myself. I'll show how we can think of higher-rank graphs as directed graphs with different coloured edges and some additional hypotheses.
Talk: Vortices in bounded soft media
Speaker: Luke Sciberras
Abstract: Background information on, and reasons for, using polarised light beams (lasers) to form solitons (or optical waves) in a liquid crystal will be initially discussed in this seminar. Extending from this, there will be an examination of a specific type of optical wave called an optical vortex and a brief discussion of its formation. Using this background knowledge along with variational techniques and Lagrangian methods in a nonlinear system of PDEs, the discussion will be directed towards a study of the evolution of an optical vortex in a finite nematic liquid crystal cell. Indeed, this study requires linearised stability analysis about the steady state for the given system to determine a relationship between the instability of an optical vortex and the minimum distance of approach to the boundary. Results from the mathematical study show that the simple asymptotic approximations capture the amplitude of the optical vortex and its path towards its final steady state within a finite cell. The variational analysis results are compared to the full numerical solution for the nonlinear system. Good agreement is shown with all results.
Day: Friday 1 June 2012 Location: Rm15.206 Time:  3:30pm Talk : tba Speaker: Aidan Sims and Andrew Holder Abstract: Day: Friday  18 May 2012 Location: Rm15.206 Time:  3:30pm Talk : Fancy semigroups and nice C*-algebras Speaker: Dr Nathan Brownlowe Abstract: A semigroup is a group without inverses. A C*-algebra is… well, more complicated!  There is a natural way to construct a C*-algebra from a semigroup. We will describe a class of semigroups with a fancy name that give rise to nice C*-algebras . Day: Friday  4 May 2012 Location: Rm15.206 Time:  3:30pm Talk 1: Preparing to Write a Mathematics/Statistics Thesis – MATH407/907 Research Methods Speaker: Carole Birrell and Michael McCrae Abstract: Michael McCrae will introduce the issues and challenges associated with designing and writing mathematical and statistical theses that are covered in MATH907. Carole Birrell will then lead an inter-active discussion about what other issues they might want training in. Talk 2: Becoming relevant at a local area: small area estimates from a state health survey Speaker: Diane Hindmarsh Abstract: Obtaining estimates of health risk factors at the local area is becoming more important than ever. NSW Health has collected data on health risk factors and health status across the state through a continuous population health survey since 2002, but it was designed to provide estimates for the state and for the health administrative units into which the state is split. The sample size of about 1000 observations per year from each administrative area is not sufficient to produce estimates at the local level. This talk will compare various model-based estimates obtained from applying small area estimation methods to the NSW population health survey data. It will focus on some of the issues faced when applying SAE methods to an ongoing survey. Day: Friday 20 April 2012 Location: Rm15.206 Time:  3:30pm Talk 1:Comparing evolving hypersurfaces Speaker: James McCoy Abstract: We give a modern proof using the so-called "double coordinate" method that initially disjoint hypersurfaces remain disjoint during their common interval of existence when evolving by a given curvature flow. No special background is required for this talk. Talk 2: Fully nonlinear curvature flow of axially symmetric surfaces Speaker: Fatemah Mofarreh Abstract: The deformation of surfaces by speeds dependent on their curvature has a variety of mathematical and practical applications.  I will briefly outline some of these applications before discussing some key ingredients in the analysis of the evolution of axially symmetric surfaces by fully nonlinear curvature-dependent speeds Day: Friday 23 March 2012 Location: Rm15.206 Time:  3:30pm Talk 1: The fractal dual of the pinwheel tiling Speaker: Michael Whittaker Abstract: A tiling of the plane refers to a covering of the euclidean plane by euclidean motions of a finite set of polygons that only intersect on their borders. I will introduce a selection of interesting tilings culminating in the pinwheel tiling, discovered by Conway and Radin. I will discuss a selection of interesting properties of the pinwheel tiling. I will then present fractals we discovered in the Pinwheel tiling along with a method for connecting the fractals to obtain a new tiling. This is joint work with Natalie Priebe Frank. Talk 2: The notions of directed graphs and topological graphs Speaker: Hui Li Abstract: First of all we define directed graphs and induce the C*-algebras of each graph. 
We will give some examples about graphs and algebras. Then we introduce topological graphs and try to "draw" one.
Day: Friday 9 March 2012
Location: Rm15.206
Time: 3:30pm
Talk 1: Groups, trees, and operator algebras
Speaker: Jacqui Ramagge
Talk 2: Elaborate Distribution Semiparametric Regression via Mean Field Variational Bayes
Speaker: Sarah Neville
Speaker: Emeritus Professor Carl Chiarella, School of Finance and Economics, UTS
Title: The evaluation of American compound option prices under stochastic volatility and stochastic interest rates
Day: Wed 27 April 2011
Location: Access Grid Room (15.113)
Time: 12:30pm
Abstract: A compound option (the mother option) gives the holder the right, but not the obligation, to buy (long) or sell (short) the underlying option (the daughter option). In this paper, we consider the problem of pricing American-type compound options when the underlying dynamics follow Heston's stochastic volatility process with a stochastic interest rate driven by a Cox-Ingersoll-Ross (CIR) process. We use a partial differential equation (PDE) approach to obtain a numerical solution. The problem is formulated as the solution to a two-pass free boundary PDE problem which is solved via a sparse grid approach and is found to be accurate and efficient compared with the results from a benchmark solution based on a least-squares Monte Carlo simulation combined with the PSOR method.
Last reviewed: 26 April, 2013
2013-05-18 18:51:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44415873289108276, "perplexity": 1509.7710954797285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382705/warc/CC-MAIN-20130516092622-00025-ip-10-60-113-184.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-solve-this-linear-equation-x-5-3-2
How do you solve this linear equation x+5/3 =2? Apr 29, 2018 $x = \frac{1}{3}$ Explanation: You need to find the value of $x$. Using basic arithmetic where $\frac{5}{3} = 1 \frac{2}{3}$ What must be added to $1 \frac{2}{3}$ to give an answer of $2$? $\frac{1}{3}$ is needed to make a whole number. Using an equation: isolate $x$ Subtract $1 \frac{2}{3}$ from both sides: $x + 1 \frac{2}{3} - 1 \frac{2}{3} = 2 - 1 \frac{2}{3}$ $x = \frac{1}{3}$
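For readers who want to check the arithmetic mechanically, here is a short verification using SymPy, reading the equation as $x + \frac{5}{3} = 2$ exactly as the answer above does; the variable names are only illustrative.

```python
from sympy import Rational, Eq, solve, symbols

x = symbols('x')
# Reading the question as x + 5/3 = 2 (the interpretation used in the answer)
solution = solve(Eq(x + Rational(5, 3), 2), x)
print(solution)  # [1/3]
```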
2019-11-15 07:04:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7669944763183594, "perplexity": 385.1797771650436}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668594.81/warc/CC-MAIN-20191115065903-20191115093903-00129.warc.gz"}
http://math.eretrandre.org/tetrationforum/printthread.php?tid=776
Permeable Shell-Thron-Boundary? - Printable Version
+- Tetration Forum (https://math.eretrandre.org/tetrationforum)
+-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=3)
+--- Thread: Permeable Shell-Thron-Boundary? (/showthread.php?tid=776)

Permeable Shell-Thron-Boundary? - bo198214 - 02/24/2013

It seems possible to flawlessly (i.e. with base-holomorphic tetration) transform a real fixpoint pair (one attracting below $\eta$, one repelling above $\eta$) into a conjugated fixpoint pair with the same absolute value of the repelling multiplier. There seems to be no disturbance in passing the attractive fixpoint through the STB, turning it into a repelling fixpoint. From that I conclude that there is no disturbance in the sickles between the fixpoints while transforming. (Well, this is all really far from being conclusive; it's just some good guessing.) To visualize this I took our famous (2,4) fixpoint pair.

Just as clarification, some symbols and formulas. We consider the function $f(z)=\exp(cz)$ (i.e. $b=e^c$ is our base). $L=\exp(A)$ is our fixpoint; then we have the relations
$c = A\exp(-A)$
$A = -W_k(-c)$
$f'(L) = A$ (which is called the multiplier)
So for each fixpoint with $|A|=\log(4)$, we can calculate "the other" fixpoint by $A_2=-W_k(-A\exp(-A))$ by choosing $k$ suitably. And this is what I did in the next picture. At the left side I go through $A=\log(4)\,e^{i\phi}$ via the red line, and connect it with the corresponding $A_2$ (blue line) via straight lines. On the right side I apply $f$ to the straight line on the left side. So the left picture shows the left sides of the sickles and the right picture shows the right sides of the sickles. The black curve is the fixpoints with $|A|=1$ (their base corresponds to the STB).
[attachment=998][attachment=999]
And on the next two pictures, this is shown with the values $L$ instead of $A$.
[attachment=1001][attachment=1000]
For a given value of the repelling real fixpoint we can calculate the base for which the pair is transformed into a conjugate fixpoint pair with the same absolute value of the multiplier. For this we are seeking two fixpoints which are conjugate, having the same base corresponding to $c$ and the same $|A|=r$. Say the upper fixpoint is $A=r e^{i\phi}=a+ib$. Same base:
$r e^{i\phi} e^{-a-ib} = r e^{-i\phi} e^{-a+ib}$
$e^{i2\phi} = e^{i2b}$
$\phi = b + \pi k$
$\phi = r\sin(\phi) + \pi k$
In our case, for $r=\log(4)$ we get $\phi\approx 1.354$. So the base is:
$c = r e^{i\phi} e^{-a-i\phi} = r e^{-a}$, with $a=r\cos(\phi)$, so
$c = r e^{-r\cos(\phi)}$
$b = \exp(r e^{-r\cos(\phi)}) \approx 2.794$
In the brown line we follow the base $b$ corresponding to our red fixpoints (black is STB):
[attachment=1002]
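To make the last computation concrete, here is a small numerical sketch in plain Python (no external libraries). It solves the fixed-point equation $\phi = r\sin(\phi)$ for $r=\log 4$ by simple iteration and then evaluates the corresponding base; the variable names and the iteration scheme are mine, not from the post, and the printed base agrees with the quoted value to about two decimal places (the small difference comes from rounding $\phi$).

```python
import math

# r = |A| = log(4): the absolute value of the multiplier for the (2,4) fixpoint pair.
r = math.log(4)

# Solve phi = r*sin(phi) (the k = 0 branch) by fixed-point iteration.
# The iteration converges because |r*cos(phi)| < 1 near the nonzero solution.
phi = 1.0
for _ in range(200):
    phi = r * math.sin(phi)

a = r * math.cos(phi)      # real part of the upper fixpoint A = a + i*phi
c = r * math.exp(-a)       # c = A*exp(-A), which is real for this phi
base = math.exp(c)         # the base b = e^c

print("phi  =", round(phi, 4))    # ~ 1.354, as in the post
print("c    =", round(c, 4))
print("base =", round(base, 4))   # ~ 2.80, close to the b ~ 2.794 quoted above
```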
2019-10-16 13:12:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 30, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7796315550804138, "perplexity": 961.3647556209171}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668569.22/warc/CC-MAIN-20191016113040-20191016140540-00214.warc.gz"}
https://cs.stackexchange.com/questions/4797/fast-k-mismatch-string-matching-algorithm?noredirect=1
# Fast k mismatch string matching algorithm I am looking for a fast k-mismatch string matching algorithm. Given a pattern string P of length m, and a text string T of length n, I need a fast (linear time) algorithm to find all positions where P matches a substring of T with at most k mismatches. This is different from the k-differences problem (edit distance). A mismatch implies the substring and the pattern have a different letter in at most k positions. I really only require k=1 (at most 1 mismatch), so a fast algorithm for the specific case of k=1 will also suffice. The alphabet size is 26 (case-insensitive english text), so space requirement should not grow too fast with the size of the alphabet (eg., the FAAST algorithm, I believe, takes space exponential in the size of the alphabet, and so is suitable only for protein and gene sequences). A dynamic programming based approach will tend to be O(mn) in the worst case, which will be too slow. I believe there are modifications of the Boyer-Moore algorithm for this, but I am not able to get my hands on such papers. I do not have subscription to access academic journals or publications, so any references will have to be in the public domain. I would greatly appreciate any pointers, or references to freely available documents, or the algorithm itself for this problem. • If the pattern is fixed (but the text to match varies), you could potentially create a finite automaton and run the text through that. There are also algorithms using suffix trees (usually good if the text is constant and pattern varies, but also applicable if both vary), you might be able to find some references on the web. (Not adding an answer yet as I am not very sure of the suffix tree based algorithms, if some one knows please feel free to ignore this comment). – Aryabhata Sep 29 '12 at 21:34 • @Aryabhata Thanks! Both the pattern and the text change. In that context, building a finite automaton would be too expensive, especially when including the scope for 1 mismatch. As for suffix trees/suffix arrays, I have never used them, and know little about them, but was under the impression that they are slow to build and efficient mainly for exact matching. But I will explore this option further. Any pointers in this direction, or in any other direction would be most useful! – Paresh Sep 29 '12 at 22:33 • No, suffix trees can be used for approximate matches too. Atleast the wiki claims so: en.wikipedia.org/wiki/Suffix_tree – Aryabhata Sep 30 '12 at 1:43 Suffix arrays can be used for this problem. They contain the starting positions of each suffix of the string sorted in lexicographic order. Even though they can be constructed naively in $O(n\log n)$ complexity, there are methods to construct them in $\Theta(n)$ complexity. See for example this and this. Let us call this suffix array SA. Once the suffix array has been constructed, we need to construct a Longest Common Prefix (LCP) array for the suffix array. The LCP array stores the length of the longest common prefix between two consecutive prefixes in the suffix array (lexicographic consecutive suffixes). Thus, LCP[i] contains the length of the longest common prefix between SA[i] and SA[i+1]. This array can also be constructed in linear time: see here, here and here for some good references. Now, to compute the length of the longest prefix common to any two suffixes in the suffix tree (instead of consecutive suffixes), we need to use some RMQ data structure. 
It has been shown in the references above (and can be seen easily if the array is visualized as a suffix tree), that the length of the longest common prefix between two suffixes having positions $u$ and $v$ ($u < v$) in the suffix array, can be obtained as $min_{u<=k<=v-1}{LCP[k]}$. A good RMQ can pre-process the $LCP$ array in $O(n)$ or $O(n\log n)$ time and respond to queries of the form $LCP[u, v]$ in $O(1)$ time. See here for a succint RMQ algorithm, and here for a good tutorial on RMQ's, and the relationship (and reductions) between LCA and RMQs. This has another nice alternative approach. With this information, we construct the suffix array and associated arrays (as described above) for the concatenation of the two strings with a delimiter in between (such as T#P, where '#' does not occur in either string). Then, we can perform k mismatch string matching using the "kangaroo" method. This and this explain the kangaroo method in the context of suffix trees, but can be directly applied to suffix arrays too. For every index $i$ of the text $T$, find the $LCP$ of the suffix of $T$ starting at $i$ and the suffix of $P$ starting at 0. This gives the location after which the first mismatch occurs when matching $P$ with $T[i]$. Let this length be $l_0$. Skip the mismatching character in both $T$ and $P$ and try to match the remaining strings. That is, again find the $LCP$ of $T[i + l_0 + 1]$ and $P[l_0 + 1]$. Repeat this till you obtain $k$ mismatches, or either string finishes. Each $LCP$ is $O(1)$. There are $O(k)$ $LCP$'s for each index $i$ of $T$, giving this a total complexity of $O(nk)$. I used an easier to implement RMQ giving a total complexity of $O(nk + (n+m)\log(n+m))$, or $O(nk + n\log n)$ if $m = O(n)$, but it can be done in $O(nk)$ too as described above. There may be other direct methods for this problem, but this is a powerful and generic approach that can be applied to a lot of similar problems. Below is an expected $\mathcal{O}(n + m )$ algorithm (which can be extended to other $k$, making it $\mathcal{O}(nk +m )$). (I haven't done the calculations to prove it is so, though). The idea is similar to the Rabin-Karp rolling hash algorithm for exact substring matches. The idea is to separate each string of length $m$ into $2k$ blocks of $m/2k$ size each and compute the rolling hash for each block (giving $2k$ hash values) and compare those $2k$ hash values against the one from the pattern. We allow at most $k$ mismatches in those values. If more than $k$ mismatches occur, we reject and move on. Otherwise we try and confirm an approximate match. I expect (caveat: haven't tried it myself) this will probably be faster in practice, and perhaps easier to code/maintain, than using a suffix tree based approach. • Just need a clarification. By "..separate each string of length m into 2k blocks of m/2k size each...", you mean that separate each substring of length m in T (of length n) into 2k blocks. And this hash can be calculated in O(n) by the rolling hash method. Then, the pattern string will also be divided into 2k blocks, and the corresponding hashes will be compared, giving allowance for atmost k blocks to mismatch. If so, then we would be able to potentially discard all cases where the number of mismatches is more than k. Did I understand right? – Paresh Sep 30 '12 at 15:42 • @Paresh: Yes, you got it right, except because there are $k$ hashes, it is $\Omega(nk)$, rather than $O(n)$. – Aryabhata Sep 30 '12 at 17:49 • I like this approach! 
Yet, this approach is fast in general, but degrades to O(mnk) if the number of matches is high (O(n) matches). Keeping this in mind, I maintained two rolling hashes, under the assumption that both cannot have a collision for the same input (I did not do this mathematically since I wanted to see the speed). This way, we don't have to verify a match char-by-char if the two hashes agree. This is pretty fast in general, but this too is slow if the number of matches are large. With this and with the way you suggested, it was slow for large matches. – Paresh Oct 1 '12 at 14:10 • This could be made faster in the worst case if we divide the text into $\sqrt{m}$ sized blocks instead of $m/2k$ blocks. The pattern will also be divided into $\sqrt{m}$ (+1 if not perfect square) blocks, and we compare each of the blocks. This will be slower than your approach if the number of mismatches are small, but I think it should only be $O(nk\sqrt{m})$ in the worst case (I haven't checked this properly though). I have not tried this, but I will first explore suffix trees/arrays as you suggested. They seem to offer good bounds. Thanks! – Paresh Oct 1 '12 at 14:20 • @Paresh: You cannot see it (in the revision history), but I initially had the $\sqrt{m}$ approach, but changed it to the current one. I think using $m/2k$ is better. You are unnecessarily computing too many hash values. Of course, which is better depends on your data. Of course, instead of $2k$, you could also try $k+1$ or $k+c$ etc. btw, worst case is $\Omega(nm)$ for both $\sqrt{m}$ and $m/2k$ approaches... – Aryabhata Oct 1 '12 at 15:04
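To make the kangaroo approach above concrete, here is a small self-contained Python sketch. It keeps the same jump structure (extend by the longest common prefix, then skip one mismatching character), but, to stay short, each LCP is computed by binary search over polynomial prefix hashes instead of a suffix array with LCP/RMQ tables, so one jump costs O(log m) rather than O(1) and equality checks are probabilistic (hash-based). All names are mine; this is an illustrative sketch of the idea, not the exact algorithm from the answers.

```python
# Sketch of the "kangaroo" k-mismatch search, using binary search over
# polynomial prefix hashes in place of a suffix array + LCP + RMQ.
# Expected time O(n * k * log m); hash collisions are possible in principle.

MOD = (1 << 61) - 1
BASE = 131

def prefix_hashes(s):
    """h[i] = hash of s[:i]; pw[i] = BASE**i mod MOD."""
    h = [0] * (len(s) + 1)
    pw = [1] * (len(s) + 1)
    for i, ch in enumerate(s):
        h[i + 1] = (h[i] * BASE + ord(ch)) % MOD
        pw[i + 1] = (pw[i] * BASE) % MOD
    return h, pw

def substring_hash(h, pw, i, j):
    """Hash of s[i:j], given the prefix hashes of s."""
    return (h[j] - h[i] * pw[j - i]) % MOD

def k_mismatch_positions(text, pattern, k):
    n, m = len(text), len(pattern)
    ht, pwt = prefix_hashes(text)
    hp, pwp = prefix_hashes(pattern)

    def lcp(ti, pi):
        """Length of the longest common prefix of text[ti:] and pattern[pi:]."""
        lo, hi = 0, min(n - ti, m - pi)
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if substring_hash(ht, pwt, ti, ti + mid) == \
               substring_hash(hp, pwp, pi, pi + mid):
                lo = mid
            else:
                hi = mid - 1
        return lo

    matches = []
    for start in range(n - m + 1):
        mismatches = 0
        pos = 0                      # how much of the pattern is already matched
        while pos < m:
            pos += lcp(start + pos, pos)
            if pos >= m:
                break
            mismatches += 1          # skip over one mismatching character
            if mismatches > k:
                break
            pos += 1
        if pos >= m and mismatches <= k:
            matches.append(start)
    return matches

# Example: all positions where "abcd" matches with at most 1 mismatch.
print(k_mismatch_positions("abcdxabzdxxabcd", "abcd", 1))  # [0, 5, 11]
```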
2020-07-03 14:52:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6457837820053101, "perplexity": 451.6049847448781}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655882051.19/warc/CC-MAIN-20200703122347-20200703152347-00206.warc.gz"}
https://zenodo.org/record/3533099/export/csl
Journal article Open Access

# Inverse engineering of shortcut pulses for high fidelity initialization on qubits closely spaced in frequency

Yan, Ying; Li, Yichao; Kinos, Adam; Walther, Andreas; Shi, Chunyan; Rippe, Lars; Moser, Joel; Kröll, Stefan; Chen, Xi

### Citation Style Language JSON Export

{
  "DOI": "10.1364/OE.27.008267",
  "author": [
    { "family": "Yan, Ying" },
    { "family": "Li, Yichao" },
    { "family": "Kinos, Adam" },
    { "family": "Walther, Andreas" },
    { "family": "Shi, Chunyan" },
    { "family": "Rippe, Lars" },
    { "family": "Moser, Joel" },
    { "family": "Kr\u00f6ll, Stefan" },
    { "family": "Chen, Xi" }
  ],
  "issued": { "date-parts": [ [ 2019, 3, 18 ] ] },
  "abstract": "<p>High-fidelity qubit initialization is of significance for efficient error correction in fault tolerant quantum algorithms. Combining two best worlds, speed and robustness, to achieve high-fidelity state preparation and manipulation is challenging in quantum systems, where qubits are closely spaced in frequency. Motivated by the concept of shortcut to adiabaticity, we theoretically propose the shortcut pulses via inverse engineering and further optimize the pulses with respect to systematic errors in frequency detuning and Rabi frequency. Such protocol, relevant to frequency selectivity, is applied to rare-earth ions qubit system, where the excitation of frequency-neighboring qubits should be prevented as well. Furthermore, comparison with adiabatic complex hyperbolic secant pulses shows that these dedicated initialization pulses can reduce the time that ions spend in the excited state by a factor of 6, which is important in coherence time limited systems to approach an error rate manageable by quantum error correction. The approach may also be applicable to superconducting qubits, and any other systems where qubits are addressed in frequency.</p>",
  "title": "Inverse engineering of shortcut pulses for high fidelity initialization on qubits closely spaced in frequency",
  "type": "article-journal",
  "id": "3533099"
}
2023-04-02 00:12:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4665432572364807, "perplexity": 9319.777900026262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950363.89/warc/CC-MAIN-20230401221921-20230402011921-00366.warc.gz"}
https://www.gamedev.net/forums/topic/413674-64-bit-compiling-again/
# Unity 64-bit compiling (again)

## Recommended Posts

I was going to post a reply to my last thread, but it's been retired. I'm still having issues with 64-bit compiling, this time on a much simpler project (I gave up with x64 on my larger project). I get the exact same error message:

DruinkEdit.obj : error LNK2001: unresolved external symbol __CxxFrameHandler3

I can't disable exceptions, since then I get a bunch of errors about exceptions being disabled coming from the STL. Even this simple program doesn't link:

#include <windows.h>
#include <string>

static void ErrorBox(const std::wstring& strError)
{
    MessageBox(NULL, strError.c_str(), L"Error", MB_OK | MB_ICONERROR);
}

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    ErrorBox(L"Test");
}

With the following errors:

------ Build started: Project: bar, Configuration: Release Win32 ------
Compiling...
Main.cpp
Main.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) protected: wchar_t const * __cdecl std::basic_string<wchar_t,struct std::char_traits<wchar_t>,class std::allocator<wchar_t> >::_Myptr(void)const " (__imp_?_Myptr@?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@IEBAPEB_WXZ)
Main.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) public: wchar_t const * __cdecl std::basic_string<wchar_t,struct std::char_traits<wchar_t>,class std::allocator<wchar_t> >::c_str(void)const " (__imp_?c_str@?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@QEBAPEB_WXZ)
Main.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) public: __cdecl std::basic_string<wchar_t,struct std::char_traits<wchar_t>,class std::allocator<wchar_t> >::~basic_string<wchar_t,struct std::char_traits<wchar_t>,class std::allocator<wchar_t> >(void)" (__imp_??1?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@QEAA@XZ)
Main.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) public: __cdecl std::basic_string<wchar_t,struct std::char_traits<wchar_t>,class std::allocator<wchar_t> >::basic_string<wchar_t,struct std::char_traits<wchar_t>,class std::allocator<wchar_t> >(wchar_t const *)" (__imp_??0?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@QEAA@PEB_W@Z)
c:\foo\bar\Release\bar.exe : fatal error LNK1120: 4 unresolved externals
Build log was saved at "file://c:\foo\bar\bar\Release\BuildLog.htm"
bar - 5 error(s), 0 warning(s)
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

Anyone got any [more] ideas?

EDIT: It links fine if I remove the STL stuff and don't use any exceptions. But that's not very useful...

##### Share on other sites

It looks to me like it simply isn't linking the libs into the rest of the code. Which seems strange if it can include standard libs. I am in no way meaning to insult you but just thought I'd try a memory test, did you check and ensure the libs you're using are included in the project? Since STL deals with templates perhaps that has something to do with it. Have you tried compiling using a library that doesn't use templates? Not that this is a long term solution but it may help isolate or define the problem.

##### Share on other sites

Yup, it's only linking in the standard libs. I tried creating an application not using the STL, and that works fine. An empty application with exceptions (try, catch, throw) in it also won't link.

##### Share on other sites

Bump.
Anyone had any problems with x64 and exception stuff? ##### Share on other sites Try including string before windows.h? ##### Share on other sites That won't affect anything, it's a link problem rather than a compile problem. Tried it, no difference. ##### Share on other sites Quote: Original post by Evil SteveThat won't affect anything, it's a link problem rather than a compile problem.Tried it, no difference. Stupid question, but are you including the x64 versions of the standard libs? I'm new to the x64 platform myself, however I do know that a x64 application cannot link to an x86 dll (and vice versa). If I understand it correctly, it is because the 32 bit applications must be executed by the WOW64 service. In order to provide linking between the two, it would be quite a costly operation. I would imagine that an x64 app could not link to previously compiled x86 lib's because of the differences in code generation between the two. Not sure if this helps, you may even have looked into this already. ##### Share on other sites can you please post you complete compilation and ofcourse linking line(s) ... may be he links to the rwong libraries? im just guessing that the standard setup links to the 32bit version and thus its crashing. (and yes i have no visual studio 2005 avail, to try it myself) did you set up the correct library and include directories in your settings ? ( i remember, that there are some subdirectorys with "x64" in it.) ... why is there a penguin at the title (without this penguin i probably would have never whatched this one :lol:) ##### Share on other sites All the compiler directories are set to the /win64/amd64 folders where possible (E.g. E:\Microsoft Platform SDK\Bin\win64\AMD64). My compiler line is: /Od /I "E:\Source Control\DruinkClient\/" /D "WIN32" /D "_DEBUG" /D "_WINDOWS" /D "_CRT_SECURE_NO_DEPRECATE" /D "_CRT_NON_CONFORMING_SWPRINTFS" /D "_UNICODE" /D "UNICODE" /FD /EHsc /MDd /GS- /GR- /Fo"Debug x64\\" /Fd"Debug x64\vc80.pdb" /W4 /WX /nologo /c /Wp64 /Zi /TP /errorReport:prompt /OUT:"E:\Source Control\DruinkClient\Debug x64\DruinkClient.exe" /INCREMENTAL /NOLOGO /MANIFEST /MANIFESTFILE:"Debug x64\DruinkClient.exe.intermediate.manifest" /DEBUG /PDB:"e:\Source Control\DruinkClient\Debug x64\DruinkClient.pdb" /SUBSYSTEM:WINDOWS /MACHINE:X64 /ERRORREPORT:PROMPT kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib I'm using VC2005 Professional if it makes a difference (Which does support x64). ##### Share on other sites well i dont understand much of this, but i doubt that this is everything ... e.g. where is the linking of this crt stuff? can you enable verbose output or sth like that? or what does you build log say? (post it) and did you try it without unicode? (at least i think that this "w"-stuff is unicode) e.g. 
my gcc (amd64 ubuntu) says: -------------------------------- #include <string> int main( int argc, char **argv ) { std::string s("asdf"); return 0; } --------------------------------------------- --------Compile------------------------------ --------------------------------------------- $g++ -v -c test.cpp Ziel: x86_64-linux-gnu Konfiguriert mit: ../src/configure -v --enable-languages=c,c++,java,f95,objc,ada,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --program-suffix=-4.0 --enable-__cxa_atexit --enable-clocale=gnu --enable-libstdcxx-debug --enable-java-awt=gtk-default --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-4.0-1.4.2.0/jre --enable-mpfr --disable-werror --enable-checking=release x86_64-linux-gnu Thread-Modell: posix gcc-Version 4.0.3 (Ubuntu 4.0.3-1ubuntu5) /usr/lib/gcc/x86_64-linux-gnu/4.0.3/cc1plus -quiet -v -D_GNU_SOURCE test.cpp -quiet -dumpbase test.cpp -mtune=k8 -auxbase test -version -o /tmp/ccPTwlkQ.s nicht vorhandenes Verzeichnis »/usr/local/include/x86_64-linux-gnu« wird ignoriert nicht vorhandenes Verzeichnis »/usr/include/x86_64-linux-gnu« wird ignoriert #include "..." - Suche beginnt hier: #include <...> - Suche beginnt hier: /usr/lib/gcc/x86_64-linux-gnu/4.0.3/../../../../include/c++/4.0.3 /usr/lib/gcc/x86_64-linux-gnu/4.0.3/../../../../include/c++/4.0.3/x86_64-linux-gnu /usr/lib/gcc/x86_64-linux-gnu/4.0.3/../../../../include/c++/4.0.3/backward /usr/local/include /usr/lib/gcc/x86_64-linux-gnu/4.0.3/include /usr/include Ende der Suchliste. GNU C++ version 4.0.3 (Ubuntu 4.0.3-1ubuntu5) (x86_64-linux-gnu) compiled by GNU C version 4.0.3 (Ubuntu 4.0.3-1ubuntu5). GGC-Heuristik: --param ggc-min-expand=100 --param ggc-min-heapsize=131072 as -V -Qy --64 -o test.o /tmp/ccPTwlkQ.s GNU assembler version 2.16.91 (x86_64-linux-gnu) using BFD version 2.16.91 20060118 Debian GNU/Linux --------------------------------------------- --------Link--------------------------------- ---------------------------------------------$ g++ -v -o test test.o Ziel: x86_64-linux-gnu Konfiguriert mit: ../src/configure -v --enable-languages=c,c++,java,f95,objc,ada,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --program-suffix=-4.0 --enable-__cxa_atexit --enable-clocale=gnu --enable-libstdcxx-debug --enable-java-awt=gtk-default --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-4.0-1.4.2.0/jre --enable-mpfr --disable-werror --enable-checking=release x86_64-linux-gnu gcc-Version 4.0.3 (Ubuntu 4.0.3-1ubuntu5) /usr/lib/gcc/x86_64-linux-gnu/4.0.3/collect2 --eh-frame-hdr -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o test /usr/lib/gcc/x86_64-linux-gnu/4.0.3/../../../../lib64/crt1.o /usr/lib/gcc/x86_64-linux-gnu/4.0.3/../../../../lib64/crti.o /usr/lib/gcc/x86_64-linux-gnu/4.0.3/crtbegin.o -L/usr/lib/gcc/x86_64-linux-gnu/4.0.3 -L/usr/lib/gcc/x86_64-linux-gnu/4.0.3 -L/usr/lib/gcc/x86_64-linux-gnu/4.0.3/../../../../lib64 -L/usr/lib/gcc/x86_64-linux-gnu/4.0.3/../../.. -L/lib/../lib64 -L/usr/lib/../lib64 test.o -lstdc++ -lm -lgcc_s -lgcc -lc -lgcc_s -lgcc /usr/lib/gcc/x86_64-linux-gnu/4.0.3/crtend.o /usr/lib/gcc/x86_64-linux-gnu/4.0.3/../../../../lib64/crtn.o ##### Share on other sites Is the project set as Win32? If so, have you tried setting it as x64 in the configuration manager? 
Look around page 5 for the correct VS 2005 settings: This may be useful to you

##### Share on other sites

Quote: Original post by Billr17: Is the project set as Win32? If so, have you tried setting it as x64 in the configuration manager? Look around page 5 for the correct VS 2005 settings: This may be useful to you

Ah ha! I didn't think of changing the configuration from Win32, it seems to be working great now, thanks [smile]

##### Share on other sites

I have a different question. what is this 64-bit compiling? and what is 64x and 86x.

##### Share on other sites

64-bit compiling is compiling applications for 64-bit operating systems like Windows XP x64. x64 refers to 64-bit CPUs and OSs, x86 refers to the 32-bit 386, 486, 586, etc family of CPUs.

##### Share on other sites

Quote: Original post by Evil Steve: 64-bit compiling is compiling applications for 64-bit operating systems like Windows XP x64. x64 refers to 64-bit CPUs and OSs, x86 refers to the 32-bit 386, 486, 586, etc family of CPUs.

To be correct, x86 refers to the "86" line of Intel products (+ clones and derivative products, such as the 8088). The list includes:
* 8086, 8088 (first generation, 16 bit processor)
* 80186, 80188 (never really hit the shelves)
* 80286
* 80386 (first 32 bit CPU in this line)
* 80486
* Pentium (80586)
* Pentium II (80686)
* Pentium III
* Pentium IV
* AMD processors (Duron, Athlon, ...)
Current dual core processors and 64 bits processors are also part of this product line. Regards,

##### Share on other sites

when you say 32 bit and 64 bit, are you referring to the register size of the cpu?

##### Share on other sites

It's a reference to the native word size of the platform. IA64 and IA32 have very different register mappings. In fact, IA32 technically has provision for registers of many sizes which are not 32 bits (floating point registers, in particular, as well as stuff for SIMD instruction sets).

### Similar Content

• Hello fellow devs! Once again I started working on a 2D adventure game and right now I'm doing the character-movement/animation. I'm not a big math guy and I was happy about my solution, but soon I realized that it's flawed. My player has 5 walking-animations, mirrored for the left side: up, upright, right, downright, down. With the atan2 function I get the angle between player and destination. To get an index from 0 to 4, I divide PI by 5 and see how many times it goes into the player-destination angle. In Pseudo-Code:
angle = atan2(destination.x - player.x, destination.y - player.y) //swapped y and x to get mirrored angle around the y axis
index = (int) (angle / (PI / 5));
PlayAnimation(index); //0 = up, 1 = up_right, 2 = right, 3 = down_right, 4 = down
Besides the fact that when angle is equal to PI it produces an index of 5, this works like a charm. Or at least I thought so at first. When I tested it, I realized that the up and down animation is playing more often than the others, which is pretty logical, since they have double the angle. What I'm trying to achieve is something like this, but with equal angles, so that up and down has the same range as all other directions. I can't get my head around it. Any suggestions? Is the whole approach doomed? Thank you in advance for any input! (See the sketch after this list for one possible fix.)
• By devbyskc
Hi Everyone, Like most here, I'm a newbie but have been dabbling with game development for a few years. I am currently working full-time overseas and learning the craft in my spare time. It's been a long but highly rewarding adventure.
Much of my time has been spent working through tutorials. In all of them, as well as my own attempts at development, I used the audio files supplied by the tutorial author, or obtained from one of the numerous sites online. I am working solo, and will be for a while, so I don't want to get too wrapped up with any one skill set. Regarding audio, the files I've found and used are good for what I was doing at the time. However I would now like to try my hand at customizing the audio more. My game engine of choice is Unity and it has an audio mixer built in that I have experimented with following their tutorials. I have obtained a great book called Game Audio Development with Unity 5.x that I am working through. Half way through the book it introduces using FMOD to supplement the Unity Audio Mixer. Later in the book, the author introduces Reaper (a very popular DAW) as an external program to compose and mix music to be integrated with Unity. I did some research on DAWs and quickly became overwhelmed. Much of what I found was geared toward professional sound engineers and sound designers. I am in no way trying or even thinking about getting to that level. All I want to be able to do is take a music file, and tweak it some to get the sound I want for my game. I've played with Audacity as well, but it didn't seem to fit the bill. So that is why I am looking at a better quality DAW. Since being solo, I am also under a budget contraint. So of all the DAW software out there, I am considering Reaper or Presonus Studio One due to their pricing. My question is, is investing the time to learn about using a DAW to tweak a sound file worth it? Are there any solo developers currently using a DAW as part of their overall workflow? If so, which one? I've also come across Fabric which is a Unity plug-in that enhances the built-in audio mixer. Would that be a better alternative? I know this is long, and maybe I haven't communicated well in trying to be brief. But any advice from the gurus/vets would be greatly appreciated. I've leaned so much and had a lot of fun in the process. BTW, I am also a senior citizen (I cut my programming teeth back using punch cards and Structured Basic when it first came out). If anyone needs more clarification of what I am trying to accomplish please let me know.  Thanks in advance for any assistance/advice. • Hi , I was considering this start up http://adshir.com/, for investment and i would like a little bit of feedback on what the developers community think about the technology. So far what they have is a demo that runs in real time on a Tablet at over 60FPS, it runs locally on the  integrated GPU of the i7 . They have a 20 000 triangles  dinosaur that looks impressive,  better than anything i saw on a mobile device, with reflections and shadows looking very close to what they would look in the real world. They achieved this thanks to a  new algorithm of a rendering technique called Path tracing/Ray tracing, that  is very demanding and so far it is done mostly for static images. From what i checked around there is no real option for real time ray tracing (60 FPS on consumer devices). There was imagination technologies that were supposed to release a chip that supports real time ray tracing, but i did not found they had a product in the market or even if the technology is finished as their last demo  i found was with a PC.  The other one is OTOY with their brigade engine that is still not released and if i understand well is more a cloud solution than in hardware solution . 
Would there  be a sizable  interest in the developers community in having such a product as a plug-in for existing game engines?  How important  is Ray tracing to the  future of high end real time graphics? • Good day, I just wanted to share our casual game that is available for android. Description: Fight your way from the ravenous plant monster for survival through flips. The rules are simple, drag and release your phone screen. Improve your skills and show it to your friends with the games quirky ranks. Select an array of characters using the orb you acquire throughout the game.
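Returning to the direction-animation question earlier in this thread: a common fix is to treat the five animation directions as the centres of equal 45-degree sectors and round to the nearest centre, rather than flooring the angle over PI/5-wide bins. Below is a small sketch of that idea in Python (the Unity code would be C#, but the arithmetic is the same); the function name, the coordinate convention and the index order 0 = up ... 4 = down are my assumptions based on the post.

```python
import math

def direction_index(player, destination):
    """Return (index, mirrored): index 0=up, 1=up-right, 2=right,
    3=down-right, 4=down; mirrored=True means play the flipped (left-side)
    animation. Each of the eight on-screen directions gets a 45-degree slice."""
    dx = destination[0] - player[0]
    dy = destination[1] - player[1]
    # Angle measured from "up", in [-pi, pi]; the sign tells left/right of the player.
    angle = math.atan2(dx, dy)
    mirrored = angle < 0
    # Round to the nearest multiple of 45 degrees instead of flooring over pi/5 bins,
    # so "up" and "down" no longer get twice the angular range of the other directions.
    index = int(round(abs(angle) / (math.pi / 4)))
    return min(index, 4), mirrored

# Example: a destination straight above, and one up-and-to-the-left of the player.
print(direction_index((0, 0), (0, 10)))   # (0, False) -> up
print(direction_index((0, 0), (-7, 7)))   # (1, True)  -> up-right animation, mirrored
```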
2018-02-24 07:37:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2806771993637085, "perplexity": 3963.892673136936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815500.61/warc/CC-MAIN-20180224073111-20180224093111-00420.warc.gz"}
https://hrj.episciences.org/124
## Lou Shituo ; Yao Qi - A Chebychev's type of prime number theorem in a short interval II.

hrj:124 - Hardy-Ramanujan Journal, January 1, 1992, Volume 15 - 1992 - https://doi.org/10.46298/hrj.1992.124

A Chebychev's type of prime number theorem in a short interval II.
Authors: Lou Shituo ; Yao Qi

In this paper, we show that $0.969\frac{y}{\log x}\leq\pi(x)-\pi(x-y)\leq1.031\frac{y}{\log x}$, where $y=x^{\theta}$, $\frac{6}{11}<\theta\leq 1$, with $x$ large enough. In particular, it follows that $p_{n+1}-p_n\ll p_n^{6/11+\varepsilon}$ for any $\varepsilon>0$, where $p_n$ denotes the $n$th prime.

Volume: Volume 15 - 1992
Published on: January 1, 1992
Imported on: March 3, 2015
Keywords: complementary sum, number of primes in short intervals, [MATH] Mathematics [math]
2022-01-27 10:51:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8987176418304443, "perplexity": 2411.0773491611417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305260.61/warc/CC-MAIN-20220127103059-20220127133059-00313.warc.gz"}
https://physics.stackexchange.com/questions/560562/could-cosmological-cold-dark-matter-be-a-neutrino-condensate
# Could cosmological cold dark matter be a neutrino condensate?

What is wrong with the following? (Note that the question is not about galactic dark matter, but about cosmological dark matter.)
1. Neutrinos are dark matter.
2. A neutrino condensate would be cold. (Often, neutrinos are dismissed as being automatically hot dark matter.)
3. Cold neutrinos could be generated continuously by the horizon. Their number would increase with time. So their density could be significant.
4. Their temperature would be much lower than the cosmological neutrino background. The neutrino condensate would be a separate neutrino bath, much colder than the 1.95 K of the CNB.
5. A condensate (in Cooper pairs) would not encounter any density limit (in contrast to free fermions, such as warm or hot neutrinos).
6. And a cold condensate would not wash out early fluctuations (in contrast to hot neutrinos).
(7. They could also form galactic dark matter. - No, they could not, as several answers and comments pointed out.)
• Possible duplicates: physics.stackexchange.com/q/17227/2451 , physics.stackexchange.com/q/158319/2451 and links therein. Jun 20, 2020 at 11:31
• I looked at those questions (and a third thread): they do not discuss neutrino condensates. In fact, a neutrino condensate seems to invalidate the answers given in both threads. Thus I made a new one. Jun 20, 2020 at 11:35
• has not caught the mainstream arxiv.org/abs/0911.5012 , only 20 citations Jun 20, 2020 at 14:18
• see hyperphysics.phy-astr.gsu.edu/hbase/Particles/spinc.html#c4 . To make a boson, two neutrinos would have to be in a bound state: in the standard model the weak interaction cannot bind two neutrinos. Gravity is very very weak, plus the masses of the neutrinos are very small, so another mechanism is needed. All I can find is new models beyond the standard model. Jun 22, 2020 at 10:48
• "A condensate (in Cooper pairs) would not encounter any density limit" - This is incorrect. The Pauli exclusion principle still applies to constituent particles of the pair. A superfluid is not compressible. Thus your condensate argument is moot and no different from individual neutrinos, but +1 for trying :) Jun 26, 2020 at 4:44

Aside from this - what is the long-range pair-forming interaction that can work between particles that only interact via the short-range weak force? Gravity? Two neutrinos separated by $10^{-5}$ m (from the number density of ~0.1 eV neutrinos needed to explain galaxy halos) have a gravitational potential energy of $\sim 10^{-23}$ eV. So they would have to be colder than $10^{-19}$ K to avoid the pairs being thermally broken. If the neutrinos are just meant to contribute to some general background rather than to galaxy halos then the question arises - why wouldn't they concentrate in galaxy halos in the same way? In any case the universal average density of dark matter would suggest a neutrino number density (assuming the same rest mass) about 3-4 orders of magnitude lower than the concentrations in galactic halos. This changes the average separation to $10^{-4}$ m and thus the pairs would have to be colder than $10^{-18}$ K.
• This is true. But the question was about cosmological dark matter, not about galactic dark matter. And cosmological dark matter/neutrinos could be as cold as $10^{-30}$ K. Nov 24, 2020 at 14:21
2022-05-22 21:00:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5789069533348083, "perplexity": 1099.2462529885208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662546071.13/warc/CC-MAIN-20220522190453-20220522220453-00563.warc.gz"}
https://msp.org/index/ail.php?jpath=pjm&l=G
Volume 306 Number 2 Download current issue For Screen For Printing Recent Issues Vol. 306: 1  2 Vol. 305: 1  2 Vol. 304: 1  2 Vol. 303: 1  2 Vol. 302: 1  2 Vol. 301: 1  2 Vol. 300: 1  2 Vol. 299: 1  2 Online Archive Volume: Issue: The Journal Editorial Board Subscriptions Officers Special Issues Submission Guidelines Submission Form Contacts ISSN: 1945-5844 (e-only) ISSN: 0030-8730 (print) Author Index To Appear Other MSP Journals Author Index – G Gabe, James A note on non-unital absorbing extensions Pacific Journal of Mathematics 284 (2016) 383–393 Gabel, Michael Lower bounds on the stable range of polynomial rings Pacific Journal of Mathematics 61 (1975) 117–120 Gaddum, Jerry Linear inequalities and quadratic forms Pacific Journal of Mathematics 8 (1958) 411–414 Gadea, Pedro Spaces of constant para-holomorphic sectional curvature Pacific Journal of Mathematics 136 (1989) 85–101 Gadgil, Siddhartha Equivariant framings, lens spaces and contact structures Pacific Journal of Mathematics 208 (2003) 73–84 Gaffey, William A real inversion formula for a class of bilateral Laplace transforms Pacific Journal of Mathematics 7 (1957) 879–883 Gagliardi, C. Crystallisation moves Pacific Journal of Mathematics 100 (1982) 85–103 Gagliardo, Emilio Fixed points for orientation preserving homeomorphisms of the plane which interchange two points Pacific Journal of Mathematics 59 (1975) 27–32 Gagola, Stephen Characters fully ramified over a normal subgroup Pacific Journal of Mathematics 55 (1974) 107–126 Characters vanishing on all but two conjugacy classes Pacific Journal of Mathematics 109 (1983) 363–385 Gagrat, Mani Proximity approach to semi-metric and developable spaces Pacific Journal of Mathematics 44 (1973) 93–105 Gaier, Dieter On conformal mapping of nearly circular regions Pacific Journal of Mathematics 12 (1962) 149–162 On the change of index for summable series Pacific Journal of Mathematics 5 (1955) 529–539 Gaifman, Haim Concerning measures on Boolean algebras Pacific Journal of Mathematics 14 (1964) 61–73 Gaines, Fergus Kato-Taussky-Wielandt commutator relations and characteristic curves Pacific Journal of Mathematics 61 (1975) 121–128 Gaines, Robert Continuous dependence for two-point boundary value problems Pacific Journal of Mathematics 28 (1969) 327–336 Gajer, Pawel Concordances of metrics of positive scalar curvature Pacific Journal of Mathematics 157 (1993) 257–268 Gál, I. S. 
On the theory of $(m,\,n)$-compact topological spaces Pacific Journal of Mathematics 8 (1958) 721–734 Uniformizable spaces with a unique structure Pacific Journal of Mathematics 9 (1959) 1053–1060 Galazka, Piotr The straightening theorem for tangent-like maps Pacific Journal of Mathematics 237 (2008) 77–85 Galdi, Giovanni Asymptotic structure of a Leray solution to the Navier–Stokes flow around a rotating body Pacific Journal of Mathematics 253 (2011) 367–382 Existence of time-periodic solutions to the Navier-Stokes equations around a moving body Pacific Journal of Mathematics 223 (2006) 251–267 On the best conditions on the gradient of pressure for uniqueness of viscous flows in the whole space Pacific Journal of Mathematics 104 (1983) 77–83 On the two-dimensional steady-state problem of a viscous gas in an exterior domain Pacific Journal of Mathematics 179 (1997) 65–100 Gale, David A note on polynomial and separable games Pacific Journal of Mathematics 8 (1958) 735–741 A theorem on flows in networks Pacific Journal of Mathematics 7 (1957) 1073–1082 Galé, José Gel'fand theory in algebras of differentiable functions on Banach spaces Pacific Journal of Mathematics 119 (1985) 303–315 Galego, Elói An Amir-Cambern theorem for quasi-isometries of $C_{0}(K, X)$ spaces Pacific Journal of Mathematics 297 (2018) 87–100 Complemented copies of $c_0(\tau)$ in tensor products of $L_p[0,1]$ Pacific Journal of Mathematics 301 (2019) 67–88 The Hilbert vector-valued Banach-Stone theorem via isomorphisms with the large distortion $\sqrt{2}$ Pacific Journal of Mathematics 290 (2017) 321–332 Galichon, Alfred Variational representations for $N$-cyclically monotone vector fields Pacific Journal of Mathematics 269 (2014) 323–340 Galindo Martínez, César Braid group representations from braiding gapped boundaries of Dijkgraaf-Witten theories Pacific Journal of Mathematics 300 (2019) 1–16 Galindo Pastor, Carlos On the structure of the value semigroup of a valuation Pacific Journal of Mathematics 205 (2002) 325–338 Gallaugher, Michael Non-Kähler expanding Ricci solitons, Einstein metrics, and exotic cone structures Pacific Journal of Mathematics 273 (2015) 369–394 Galli, Matteo On the classification of complete area-stationary and stable surfaces in the subriemannian Sol manifold Pacific Journal of Mathematics 271 (2014) 143–157 Galloway, Gregory A note on the fundamental group of a compact minimal hypersurface Pacific Journal of Mathematics 126 (1987) 243–251 Galvin, Fred Completeness in semimetric spaces Pacific Journal of Mathematics 113 (1984) 67–75 Gamara, Najoua CR Yamabe conjecture -- the conformally flat case Pacific Journal of Mathematics 201 (2001) 121–175 Gamboa Mutuberría, José Manuel A note on projections of real algebraic varieties Pacific Journal of Mathematics 115 (1984) 1–11 On projections of real algebraic varieties Pacific Journal of Mathematics 121 (1986) 281–291 Gamelin, Theodore Bounded approximation by rational functions Pacific Journal of Mathematics 45 (1973) 129–150 Decomposition theorems for Fredholm operators Pacific Journal of Mathematics 15 (1965) 97–106 Iversen's theorem and fiber algebras Pacific Journal of Mathematics 46 (1973) 389–414 Localization of the corona problem Pacific Journal of Mathematics 34 (1970) 73–81 The essential spectrum of a class of ordinary differential operators Pacific Journal of Mathematics 14 (1964) 755–776 The polynomial hulls of certain subsets of $C^{2}$ Pacific Journal of Mathematics 61 (1975) 129–142 Uniform algebras spanned by Hartogs series Pacific 
Journal of Mathematics 62 (1976) 401–417 Weak compactness of representing measures for $R(K)$ Pacific Journal of Mathematics 114 (1984) 95–107 Gammella, Angela An approach to the tangential Poisson cohomology based on examples in duals of Lie algebras Pacific Journal of Mathematics 203 (2002) 283–320 Gammella, Angela Chevalley cohomology for Kontsevich's graphs Pacific Journal of Mathematics 218 (2005) 201–239 Gan, Wee Liang Batalin-Vilkovisky coalgebra of string topology Pacific Journal of Mathematics 247 (2010) 27–45 Gangolli, Ramesh Sample functions of certain differential processes on symmetric spaces Pacific Journal of Mathematics 15 (1965) 477–496 Gannot, Oran From quasimodes to resonances: exponentially decaying perturbations Pacific Journal of Mathematics 277 (2015) 77–97 Gansner, Emden Matrix correspondences of plane partitions Pacific Journal of Mathematics 92 (1981) 295–315 Gantos, Richard Completely injective semigroups Pacific Journal of Mathematics 31 (1969) 359–366 Gao, Fan Distinguished theta representations for certain covering groups Pacific Journal of Mathematics 290 (2017) 333–379 Gao, Hongzhu The rational cohomology Hopf algebra of a generic Kac-Moody group Pacific Journal of Mathematics 305 (2020) 757–766 Gao, Laiyuan Evolving convex curves to constant-width ones by a perimeter-preserving flow Pacific Journal of Mathematics 272 (2014) 131–145 Gao, Xing Free Rota-Baxter family algebras and (tri)dendriform family algebras Pacific Journal of Mathematics 301 (2019) 741–766 Weighted infinitesimal unitary bialgebras on rooted forests and weighted cocycles Pacific Journal of Mathematics 302 (2019) 741–766 Gao, Yun Realizations of $BC_r$-graded intersection matrix algebras with grading subalgebras of type $B_r$, $r \geq 3$ Pacific Journal of Mathematics 263 (2013) 257–281 Gao, Yun New invariants for complex manifolds and rational singularities Pacific Journal of Mathematics 269 (2014) 73–97 Gao, Zhiyong On the compactness of a class of Riemannian manifolds Pacific Journal of Mathematics 166 (1994) 23–42 Garabedian, Paul A partial differential equation arising in conformal mapping Pacific Journal of Mathematics 1 (1951) 485–524 Calculation of axially symmetric cavities and jets Pacific Journal of Mathematics 6 (1956) 611–684 Orthogonal harmonic polynomials Pacific Journal of Mathematics 3 (1953) 585–603 Garay, O. J. Willmore--Chen tubes on homogeneous \\spaces in warped product spaces Pacific Journal of Mathematics 188 (1999) 201–207 Garay, Oscar J. 
A classification of certain $3$-dimensional conformally flat Euclidean hypersurfaces Pacific Journal of Mathematics 162 (1994) 13–25 Garay López, Cristhian The fundamental theorem of tropical differential algebraic geometry Pacific Journal of Mathematics 283 (2016) 257–270 Garbagnati, Alice Calabi-Yau 4-folds of Borcea–Voisin type from F-theory Pacific Journal of Mathematics 299 (2019) 1–31 Garbanati, Dennis Classes of circulants over the $p$-adic and rational integers Pacific Journal of Mathematics 50 (1974) 435–447 Classes of unimodular abelian group matrices Pacific Journal of Mathematics 43 (1972) 633–646 García, Gastón On pointed Hopf algebras over dihedral groups Pacific Journal of Mathematics 252 (2011) 69–91 García, Isaac On the existence of limit cycles for real quadratic differential systems with an invariant cubic Pacific Journal of Mathematics 223 (2006) 201–218 García Iglesias, Agustín Representations of the category of modules over pointed Hopf algebras over $S_3$ and $S_4$ Pacific Journal of Mathematics 252 (2011) 343–378 García Lara, René Three dimensional SOL manifolds and complex Kleinian groups Pacific Journal of Mathematics 294 (2018) 1–18 García-Martínez, Sandra Biharmonic hypersurfaces in complete Riemannian manifolds Pacific Journal of Mathematics 263 (2013) 1–12 García-Máynez Cervantes, Adalberto Unicoherent plane Peano sets are $\sigma$-unicoherent Pacific Journal of Mathematics 82 (1979) 493–497 García-Río, Eduardo Bach-flat isotropic gradient Ricci solitons Pacific Journal of Mathematics 293 (2018) 75–99 Complete locally conformally flat manifolds of negative curvature Pacific Journal of Mathematics 226 (2006) 201–219 Four-dimensional Osserman metrics of neutral signature Pacific Journal of Mathematics 244 (2010) 21–36 García-Sánchez, Pedro Correction to:\\Modular Diophantine inequalities\\and numerical semigroups Pacific Journal of Mathematics 220 (2005) 199–199 Modular diophantine inequalities and numerical semigroups Pacific Journal of Mathematics 218 (2005) 379–398 Numerical semigroups generated by intervals Pacific Journal of Mathematics 191 (1999) 75–83 On Buchsbaum simplicial affine semigroups Pacific Journal of Mathematics 202 (2002) 329–339 Uniquely presented finitely generated commutative monoids Pacific Journal of Mathematics 248 (2010) 91–105 Gardiner, Stephen Removable singularities for subharmonic functions Pacific Journal of Mathematics 147 (1991) 71–80 Gardner, Barry A going down'' theorem for certain reflected radicals Pacific Journal of Mathematics 55 (1974) 381–389 Radical classes of regular rings with Artinian primitive images Pacific Journal of Mathematics 99 (1982) 337–349 Radicals of supplementary semilattice sums of associative rings Pacific Journal of Mathematics 58 (1975) 387–392 Semi-simple radical classes of algebras and attainability of identities Pacific Journal of Mathematics 61 (1975) 401–416 Some aspects of $T$-nilpotence Pacific Journal of Mathematics 53 (1974) 117–130 Some aspects of $T$-nilpotence. 
II: Lifting properties over $T$-nilpotent ideals Pacific Journal of Mathematics 59 (1975) 445–453 Some closure properties for torsion classes of abelian groups Pacific Journal of Mathematics 42 (1972) 45–61 Torsion classes and pure subgroups Pacific Journal of Mathematics 33 (1970) 109–116 Gardner, Richard Relative width measures and the plank problem Pacific Journal of Mathematics 135 (1988) 299–312 Garfinkel, Boris Singularities in a variational problem with an inequality Pacific Journal of Mathematics 16 (1966) 273–283 Garfinkel, Gerald Generic splitting algebras for $\mathrm{Pic}$ Pacific Journal of Mathematics 35 (1970) 369–380 Garge, Shripad M. Maximal tori determining the algebraic groups Pacific Journal of Mathematics 220 (2005) 69–85 Garibaldi, Ryan Rost invariant of the center, revisited Pacific Journal of Mathematics 291 (2017) 369–397 Garibay, F. The geometry of sum-preserving permutations Pacific Journal of Mathematics 135 (1988) 313–322 Gariepy, Ronald Geometric properties of Sobolev mappings Pacific Journal of Mathematics 47 (1973) 427–433 Multiplicity and the area of an $(n$ $-$ $1)$ continuous mapping Pacific Journal of Mathematics 44 (1973) 509–513 Garimella, Gayatri Weak Paley--Wiener property\\for completely solvable Lie groups Pacific Journal of Mathematics 187 (1999) 51–63 Garity, Dennis Homogeneity groups of ends of open $3$-manifolds Pacific Journal of Mathematics 269 (2014) 99–112 Intrinsically $(n-2)$-dimensional cellular decompositions of $E^{n}$ Pacific Journal of Mathematics 102 (1982) 275–283 Menger spaces and inverse limits Pacific Journal of Mathematics 131 (1988) 249–259 Uncountably many inequivalentLipschitz homogeneous Cantor sets in $\mathbb{R}^{3}$ Pacific Journal of Mathematics 222 (2005) 287–299 Garnett, John A topological characterization of Gleason parts Pacific Journal of Mathematics 20 (1967) 59–63 BMO from dyadic BMO Pacific Journal of Mathematics 99 (1982) 351–371 Bounded approximation by rational functions Pacific Journal of Mathematics 45 (1973) 129–150 Harmonic measures supported on curves Pacific Journal of Mathematics 138 (1989) 233–236 Interpolating Blaschke products generate $H^\infty$ Pacific Journal of Mathematics 173 (1996) 501–510 On a theorem of Mergelyan Pacific Journal of Mathematics 26 (1968) 461–467 Sobolev approximation by a sum of subalgebras on the circle Pacific Journal of Mathematics 65 (1976) 55–63 Weights and $L\,\mathrm{log}\,L$ Pacific Journal of Mathematics 120 (1985) 33–45 Garoutte, Dennis A note on extremal properties characterizing weakly $\lambda$-valent principal functions Pacific Journal of Mathematics 25 (1968) 109–115 Garrido, Danilo Entire sign changing solutions with finite energy to the fractional Yamabe equation Pacific Journal of Mathematics 283 (2016) 85–114 Garrity, Thomas The equivalence problem for higher-codimensional CR structures Pacific Journal of Mathematics 177 (1997) 211–235 Garsia, Adriano An embedding of Riemann surfaces of genus one Pacific Journal of Mathematics 11 (1961) 193–204 Entropy and singularity of infinite convolutions Pacific Journal of Mathematics 13 (1963) 1159–1169 The calculation of conformal parameters for some imbedded Riemann surfaces Pacific Journal of Mathematics 10 (1960) 121–165 Gary, John Higher dimensional cyclic elements Pacific Journal of Mathematics 9 (1959) 1061–1070 Gaskill, Herbert Distributive lattices with finite projective covers Pacific Journal of Mathematics 81 (1979) 45–59 Gasper, George Products of terminating $_{3}F_{2}(1)$ series Pacific Journal of 
Mathematics 56 (1975) 87–95 Gasull, Armengol Chebyshev property\\ of complete elliptic integrals\\ and its application to abelian integrals Pacific Journal of Mathematics 202 (2002) 341–361 Upper bounds for the number of limit cycles through linear differential equations Pacific Journal of Mathematics 226 (2006) 277–296 Gatica, Juan A fixed point theorem for $k$-set-contractions defined in a cone Pacific Journal of Mathematics 53 (1974) 131–136 Gatterdam, Ronald The word problem and power problem in $1$-relator groups are primitive recursive Pacific Journal of Mathematics 61 (1975) 351–359 Gaudry, Garth Bad behavior and inclusion results for multipliers of type $(p,\,q)$ Pacific Journal of Mathematics 35 (1970) 83–94 Multipliers of type $(p,\,q)$ Pacific Journal of Mathematics 18 (1966) 477–488 Quasimeasures and operators commuting with convolution Pacific Journal of Mathematics 18 (1966) 461–476 Gauger, Michael A. Some remarks on the center of the universal enveloping algebra of a classical simple Lie algebra Pacific Journal of Mathematics 62 (1976) 93–97 Gaussent, Stéphane Iwahori-Hecke algebras for Kac-Moody groups over local fields Pacific Journal of Mathematics 285 (2016) 1–61 Gavarini, Fabio A new equivalence between super Harish-Chandra pairs and Lie supergroups Pacific Journal of Mathematics 306 (2020) 451–485 Quantization of Poisson groups Pacific Journal of Mathematics 186 (1998) 217–266 Gay, David On the degree of the splitting field of an irreducible binomial Pacific Journal of Mathematics 78 (1978) 117–120 Partially normal radical extensions of the rationals Pacific Journal of Mathematics 72 (1977) 403–417 The torsion group of a radical extension Pacific Journal of Mathematics 92 (1981) 317–327 Gay, David Diagrams for relative trisections Pacific Journal of Mathematics 294 (2018) 275–305 Gazik, R. J. Coarse uniform convergence spaces Pacific Journal of Mathematics 61 (1975) 143–150 Convergence in spaces of subsets Pacific Journal of Mathematics 43 (1972) 81–92 Non-Hausdorff convergence spaces Pacific Journal of Mathematics 106 (1983) 257–264 Ge, Huabin 3-dimensional discrete curvature flows and discrete Einstein metrics Pacific Journal of Mathematics 287 (2017) 49–70 Ge, Jianquan A proof of the DDVV conjecture and its equality case Pacific Journal of Mathematics 237 (2008) 87–95 Ge, Zhong Horizontal path spaces and Carnot-Carathéodory metrics Pacific Journal of Mathematics 161 (1993) 255–286 On a constrained variational problem and the spaces of horizontal paths Pacific Journal of Mathematics 149 (1991) 61–94 Geatti, Laura Univalence of equivariant Riemann domains over the complexification of rank-one Riemannian symmetric spaces Pacific Journal of Mathematics 238 (2008) 275–330 Gee, Alice Cubic singular moduli, Ramanujan's class invariants $\lambda_n$ and the explicit Shimura Reciprocity Law Pacific Journal of Mathematics 208 (2003) 23–37 Geer, Nathan An invariant supertrace for the category of representations of Lie superalgebras Pacific Journal of Mathematics 238 (2008) 331–348 Kuperberg and Turaev-Viro invariants in unimodular categories Pacific Journal of Mathematics 306 (2020) 421–450 Gehret, Allen A tale of two Liouville closures Pacific Journal of Mathematics 290 (2017) 41–76 Gehring, Frederick A note on a paper by L. C. 
Young Pacific Journal of Mathematics 5 (1955) 67–72 Geiger, David Closed systems of functions and predicates Pacific Journal of Mathematics 27 (1968) 95–100 Geiges, Hansjörg Contact structures on $(n-1)$-connected $(2n+1)$-manifolds Pacific Journal of Mathematics 161 (1993) 129–137 Geitz, Robert Vector-valued functions as families of scalar-valued functions Pacific Journal of Mathematics 95 (1981) 75–83 Gekhtman, Dmitri Cyclic pursuit on compact manifolds Pacific Journal of Mathematics 289 (2017) 153–168 Gelbaum, Bernard Banach algebra bundles Pacific Journal of Mathematics 28 (1969) 337–349 Bases of tensor products of Banach spaces Pacific Journal of Mathematics 11 (1961) 1281–1286 Some theorems in probability theory Pacific Journal of Mathematics 118 (1985) 383–391 Tensor products of group algebras Pacific Journal of Mathematics 22 (1967) 241–250 Geline, Michael On Tate duality and a projective scalar property for symmetric algebras Pacific Journal of Mathematics 293 (2018) 277–300 Gellar, Ralph A new look at some familiar spaces of intertwining operators Pacific Journal of Mathematics 47 (1973) 435–441 Genet, Gwenaëlle On the commutator formula of a split BN-pair Pacific Journal of Mathematics 207 (2002) 177–181 Gentili, Graziano Landau--Toeplitz theorems for slice regular functions over quaternions Pacific Journal of Mathematics 265 (2013) 381–404 Gentry, Roosevelt New diagram proofs of the Hausdorff-Young theorem and Young's inequality Pacific Journal of Mathematics 97 (1981) 97–104 Geoghegan, Ross Splitting homotopy idempotents which have essential fixed points Pacific Journal of Mathematics 95 (1981) 95–103 The homomorphism on fundamental group induced by a homotopy idempotent having essential fixed points Pacific Journal of Mathematics 95 (1981) 85–93 Geramita, Joan Automorphisms on cylindrical semigroups Pacific Journal of Mathematics 43 (1972) 93–105 Gérardin, Paul Asymptotic behaviour of eigenfunctions on semi-homogeneous tree Pacific Journal of Mathematics 196 (2000) 415–427 Gerber, Leon The orthocentric simplex as an extreme simplex Pacific Journal of Mathematics 56 (1975) 97–111 The volume cut off a simplex by a half-space Pacific Journal of Mathematics 94 (1981) 311–314 Gerber, Thomas Generalized Mullineux involution and perverse equivalences Pacific Journal of Mathematics 306 (2020) 487–517 Gerek, Aydin On Isoperimetric Surfaces in General Relativity Pacific Journal of Mathematics 231 (2007) 63–84 Gergen, J. J. 
Convergence of extended Bernstein polynomials in the complex plane Pacific Journal of Mathematics 13 (1963) 1171–1180 Gerhard, James Subdirectly irreducible idempotent semigroups Pacific Journal of Mathematics 39 (1971) 669–676 Word problems for free objects in certain varieties of completely regular semigroups Pacific Journal of Mathematics 104 (1983) 351–359 Gerhardt, Claus $L^{p}$-estimates for solutions to the instationary Navier-Stokes equations in dimension two Pacific Journal of Mathematics 79 (1978) 375–398 A free boundary value problem for capillary surfaces Pacific Journal of Mathematics 88 (1980) 285–295 A note on inverse curvature flows in ARW spacetimes Pacific Journal of Mathematics 256 (2012) 309–316 On the CMC foliation of future ends of a spacetime Pacific Journal of Mathematics 226 (2006) 297–308 Geroldinger, Alfred Monoids of modules and arithmetic of direct-sum decompositions Pacific Journal of Mathematics 271 (2014) 257–319 Gershenson, Hillel A problem in compact Lie groups and framed cobordism Pacific Journal of Mathematics 51 (1974) 121–129 Gersting, Judith A rate of growth criterion for universality of regressive isols Pacific Journal of Mathematics 31 (1969) 669–677 Gerth, Frank The Iwasawa invariant $\mu$ for quadratic fields Pacific Journal of Mathematics 80 (1979) 131–136 Gervais, Jean-Jacques Stability of unfoldings in the context of equivariant contact-equivalence Pacific Journal of Mathematics 132 (1988) 283–291 Sufficiency of jets Pacific Journal of Mathematics 72 (1977) 419–422 Gerver, Joseph Long walks in the plane with few collinear points Pacific Journal of Mathematics 83 (1979) 349–355 On certain sequences of lattice points Pacific Journal of Mathematics 83 (1979) 357–363 Getoor, Ronald Additive functionals of a Markov process Pacific Journal of Mathematics 7 (1957) 1577–1591 Infinitely divisible probabilities on the hyperbolic plane Pacific Journal of Mathematics 11 (1961) 1287–1308 Markov operators and their associated semi-groups Pacific Journal of Mathematics 9 (1959) 449–472 On characteristic functions of Banach space valued random variables Pacific Journal of Mathematics 7 (1957) 885–896 The asymptotic distribution of the eigenvalues for a class of Markov operators Pacific Journal of Mathematics 9 (1959) 399–408 Getz, Jayce A general simple relative trace formula Pacific Journal of Mathematics 277 (2015) 99–118 Nonabelian Fourier transforms for spherical representations Pacific Journal of Mathematics 294 (2018) 351–373 Geue, Andrew S. Precompact and collectively semi-precompact sets of semi-precompact continuous linear operators Pacific Journal of Mathematics 52 (1974) 377–401 Ghaemi, Mohammad Bagher Boolean algebras of projections \& algebras of spectral operators Pacific Journal of Mathematics 209 (2003) 1–16 Ghahramani, Fereidoun Compact elements of weighted group algebras Pacific Journal of Mathematics 113 (1984) 77–84 The $L^p$ theory of standard homomorphisms Pacific Journal of Mathematics 168 (1995) 49–60 Ghate, Eknath (p,p)-Galois representations attached to automorphic forms on GLn Pacific Journal of Mathematics 252 (2011) 379–406 Gheorghe, Dana Spectral Theory for Linear Relations via Linear Operators Pacific Journal of Mathematics 255 (2012) 349–372 Ghez, P. 
$W^\ast$-categories Pacific Journal of Mathematics 120 (1985) 79–109 Ghezzi, Laura Homology multipliers and the relation type of parameter ideals Pacific Journal of Mathematics 226 (2006) 1–39 Ghimenti, Marco On Yamabe type problems on Riemannian manifolds with boundary Pacific Journal of Mathematics 284 (2016) 79–102 Ghioca, Dragos A variant of a theorem by Ailon-Rudnick for elliptic curves Pacific Journal of Mathematics 295 (2018) 1–15 Ghomi, Mohammad Centers of disks in Riemannian manifolds Pacific Journal of Mathematics 304 (2020) 401–418 Ghorpade, Sudhir Hilbert series of certain jet schemes of determinantal varieties Pacific Journal of Mathematics 272 (2014) 147–175 Ghosh, Shamindra Kumar Higher exchange relations Pacific Journal of Mathematics 210 (2003) 299–316 Ghoussoub, Nassif Variational representations for $N$-cyclically monotone vector fields Pacific Journal of Mathematics 269 (2014) 323–340 Weak compactness in spaces of Bochner integrable functions and the Radon-Nikodým property Pacific Journal of Mathematics 110 (1984) 65–70 Ghrist, Robert Flowlines transverse to knot and link fibrations Pacific Journal of Mathematics 217 (2004) 61–86 Giacomini, Hector Upper bounds for the number of limit cycles through linear differential equations Pacific Journal of Mathematics 226 (2006) 277–296 Giambruno, Antonio A commutativity theorem for rings with derivations Pacific Journal of Mathematics 102 (1982) 41–45 Derivations with invertible values in rings with involution Pacific Journal of Mathematics 123 (1986) 47–54 On rings which are sumsof two PI-subrings: a combinatorial approach Pacific Journal of Mathematics 209 (2003) 17–31 Giaquinta, Mariano On the regularity up to the boundary for second order nonlinear elliptic systems Pacific Journal of Mathematics 99 (1982) 1–17 Gibson, Archie Triples of operator-valued functions related to the unit circle Pacific Journal of Mathematics 28 (1969) 503–531 Gibson, Christopher On stratifying pairs of linear mappings Pacific Journal of Mathematics 102 (1982) 329–345 Giertz, Magnus On generalized elements with respect to linear operators Pacific Journal of Mathematics 23 (1967) 47–67 Gierz, Gerhard A duality principle for rational approximation Pacific Journal of Mathematics 125 (1986) 79–92 Injective Banach lattices with strong order units Pacific Journal of Mathematics 110 (1984) 297–305 On the Dunford-Pettis property of function modules of abstract $L$-spaces Pacific Journal of Mathematics 121 (1986) 73–82 Gigante, Giuliana $M$-hyperbolic real subsets of complex spaces Pacific Journal of Mathematics 172 (1996) 101–115 Gil De Lamadrid, Jesus Bases of tensor products of Banach spaces Pacific Journal of Mathematics 11 (1961) 1281–1286 Gilbert, J. E. 
Characterization of Hardy spaces by singular integrals and Divergence-Free' wavelets Pacific Journal of Mathematics 193 (2000) 79–105 Gilbert, John Convolution operators on $L^{p}(G)$ and properties of locally compact groups Pacific Journal of Mathematics 24 (1968) 257–268 On the ideal structure of some algebras of analytic functions Pacific Journal of Mathematics 35 (1970) 625–634 Gilbert, Richard Extremal spectral functions of a symmetric operator Pacific Journal of Mathematics 14 (1964) 75–84 The deficiency index of a third order operator Pacific Journal of Mathematics 68 (1977) 369–392 Gilbert, Robert On a class of elliptic partial differential equations in four variables Pacific Journal of Mathematics 14 (1964) 1223–1236 On harmonic functions of four variables with rational $p_{4}$-associates Pacific Journal of Mathematics 13 (1963) 79–96 Singularities of three-dimensional harmonic functions Pacific Journal of Mathematics 10 (1960) 1243–1255 Gilbert, Simcha Generalized splines on arbitrary graphs Pacific Journal of Mathematics 281 (2016) 333–364 Gilbert, Walter Completely monotonic functions on cones Pacific Journal of Mathematics 6 (1956) 685–689 Giles, John Generic differentiability of convex functions on the dual of a Banach space Pacific Journal of Mathematics 172 (1996) 413–431 Geometrical implications of upper semi-continuity of the duality mapping on a Banach space Pacific Journal of Mathematics 79 (1978) 99–109 On numerical ranges of elements of locally $m$-convex algebras Pacific Journal of Mathematics 49 (1973) 79–91 Gilfeather, Frank Isomorphisms modulo the compact operators of nest algebras Pacific Journal of Mathematics 122 (1986) 263–286 Operator valued roots of abelian analytic functions Pacific Journal of Mathematics 55 (1974) 127–148 Gilkey, Peter B. Four-dimensional Osserman metrics of neutral signature Pacific Journal of Mathematics 244 (2010) 21–36 Invariants of the heat equation Pacific Journal of Mathematics 117 (1985) 233–254 The eta invariant, $\mathrm{Pin}^c$ bordism, and equivariant $\mathrm{Spin}^c$ bordism for cyclic $2$-groups Pacific Journal of Mathematics 128 (1987) 1–24 The local index formula for a Hermitian manifold Pacific Journal of Mathematics 180 (1997) 51–56 Gill, Nick Cherlin's conjecture for sporadic simple groups Pacific Journal of Mathematics 297 (2018) 47–66 Gillam, William Cleanliness of geodesics in hyperbolic 3-manifolds Pacific Journal of Mathematics 213 (2004) 201–212 Gillaspy, Elizabeth $K$-theory and homotopies of 2-cocycles on higher-rank graphs Pacific Journal of Mathematics 278 (2015) 407–426 Gille, Philippe On maximal tori of algebraic groups of type $G_2$ Pacific Journal of Mathematics 279 (2015) 101–134 Gillespie, Thomas An invariant subspace theorem of J. Feldman Pacific Journal of Mathematics 26 (1968) 67–72 Analyticity and spectral decompositions of $L^p$ for compact abelian groups Pacific Journal of Mathematics 127 (1987) 247–260 Invariant subspaces and harmonic conjugation on compact abelian groups Pacific Journal of Mathematics 155 (1992) 201–213 The generalized M. 
Riesz theorem and transference Pacific Journal of Mathematics 120 (1985) 279–288 Gilliam, David Geometry and the Radon-Nikodym theorem in strict Mackey convergence spaces Pacific Journal of Mathematics 65 (1976) 353–364 Gillman, David Free curves in $E^{3}$ Pacific Journal of Mathematics 28 (1969) 533–542 Gilman, Robert Finite groups with small unbalancing $2$-components Pacific Journal of Mathematics 83 (1979) 55–106 Gilmer, Patrick Arf invariants of real algebraic curves Pacific Journal of Mathematics 230 (2007) 297–313 Integral TQFT for a one-holed torus Pacific Journal of Mathematics 252 (2011) 93–112 On surgery curves for genus-one slice knots Pacific Journal of Mathematics 265 (2013) 405–425 Real algebraic curves and link cobordism Pacific Journal of Mathematics 153 (1992) 31–69 Topological proof of the $G$-signature theorem for $G$ finite Pacific Journal of Mathematics 97 (1981) 105–114 Gilmer, Robert A characterization of Prüfer domains in terms of polynomials Pacific Journal of Mathematics 60 (1975) 81–85 Jónsson $\omega_0$-generated algebraic field extensions Pacific Journal of Mathematics 128 (1987) 81–116 On a classical theorem of Noether in ideal theory Pacific Journal of Mathematics 13 (1963) 579–583 On the divisors of monic polynomials over a commutative ring Pacific Journal of Mathematics 78 (1978) 121–131 Power series rings over a Krull domain Pacific Journal of Mathematics 29 (1969) 543–549 Rings in which semi-primary ideals are primary Pacific Journal of Mathematics 12 (1962) 1273–1276 Some containment relations between classes of ideals of a commutative ring Pacific Journal of Mathematics 15 (1965) 497–502 The group of units of a commutative semigroup ring Pacific Journal of Mathematics 85 (1979) 49–64 The pseudo-radical of a commutative ring Pacific Journal of Mathematics 19 (1966) 275–284 Gilmore, Maurice Three-dimensional open books constructed from the identity map Pacific Journal of Mathematics 80 (1979) 137–139 Gilmour, C. R. A. 
On the congruence lattice of a frame Pacific Journal of Mathematics 130 (1987) 209–213 Giménez, Fernando Volume estimates for real hypersurfaces of a Kaehler manifold with strictly positive holomorphic sectional and antiholomorphic Ricci curvatures Pacific Journal of Mathematics 142 (1990) 23–39 Gindikin, Simon The geometry of flag manifold and holomorphic extension of Szeg\"{o} kernels for $\mathbf{SU}(p,q)$ Pacific Journal of Mathematics 179 (1997) 201–220 Giné, Jaume A family of isochronous foci with Darboux\\first integral Pacific Journal of Mathematics 218 (2005) 343–355 Gingold, Harry On the location of zeroes of oscillatory solutions of $y^{(n)}=c(x)y$ Pacific Journal of Mathematics 119 (1985) 317–336 Uniqueness of linear boundary value problems for differential systems Pacific Journal of Mathematics 75 (1978) 107–136 Ginoux, Nicolas A new upper bound for the Dirac operator on hypersurfaces Pacific Journal of Mathematics 278 (2015) 79–101 Ginsburg, John $S$-spaces in countably compact spaces using Ostaszewski's method Pacific Journal of Mathematics 68 (1977) 393–397 A note on the cardinality of infinite partially ordered sets Pacific Journal of Mathematics 106 (1983) 265–270 Cardinal inequalities for topological spaces involving the weak Lindelof number Pacific Journal of Mathematics 79 (1978) 37–45 Some applications of ultrafilters in topology Pacific Journal of Mathematics 57 (1975) 403–418 Ginsburg, Seymour On mappings from the family of well ordered subsets of a set Pacific Journal of Mathematics 6 (1956) 583–589 Semigroups, Presburger formulas, and languages Pacific Journal of Mathematics 16 (1966) 285–296 Ginzburg, David Generalized modular symbols and relative Lie algebra cohomology Pacific Journal of Mathematics 175 (1996) 337–355 Ginzburg, Viktor On the Maslov class rigidity for coisotropic submanifolds Pacific Journal of Mathematics 250 (2011) 139–161 Periodic orbits ofHamiltonian flows near symplectic extrema Pacific Journal of Mathematics 206 (2002) 69–91 Gioia, Anthony Convolutions of arithmetic functions over cohesive basic sequences Pacific Journal of Mathematics 38 (1971) 391–399 Girão, Darlan Rank gradient of small covers Pacific Journal of Mathematics 266 (2013) 23–29 Girão, Frederico The Gauss-Bonnet-Chern mass of higher codimension graphical manifolds Pacific Journal of Mathematics 298 (2019) 201–216 Girard, Dennis The asymptotic behavior of norms of powers of absolutely convergent Fourier series Pacific Journal of Mathematics 37 (1971) 357–381 The behavior of the norm of an automorphism of the unit disk Pacific Journal of Mathematics 47 (1973) 443–456 Girardi, Maria Dentability, trees, and Dunford-Pettis operators on $L_1$ Pacific Journal of Mathematics 148 (1991) 59–79 Errata: Dentability, trees, and Dunford-Pettis operators on $L_1$'' Pacific Journal of Mathematics 157 (1993) 389–394 Girolo, Jack Approximating compact sets in normed linear spaces Pacific Journal of Mathematics 98 (1982) 81–89 Gislason, Gary On the existence question for a family of products Pacific Journal of Mathematics 34 (1970) 385–388 Gitik, Moti Some results on Specker's problem Pacific Journal of Mathematics 134 (1988) 227–249 Gitler Hammer, Samuel Composition properties of projective homotopy classes Pacific Journal of Mathematics 68 (1977) 47–61 Gittings, Raymond F. 
Finite-to-one open maps of generalized metric spaces Pacific Journal of Mathematics 59 (1975) 33–41 Giuffrida, Salvatore On the postulation of $0$-dimensional subschemes on a smooth quadric Pacific Journal of Mathematics 155 (1992) 251–282 Giulini, Saverio Singular characters and their $L^{p}$ norms on classical Lie groups Pacific Journal of Mathematics 109 (1983) 387–398 Giusti, Enrico Generalized solutions for the mean curvature equation Pacific Journal of Mathematics 88 (1980) 297–321 Glaser, Leslie A proof of the most general polyhedral Schoenflies conjecture possible Pacific Journal of Mathematics 38 (1971) 401–417 On tame Cantor sets in spheres having the same projection in each direction Pacific Journal of Mathematics 60 (1975) 87–102 Uncountably many almost polyhedral wild $(k-2)$-cells in $E^{k}$ for $k\geqq 4$ Pacific Journal of Mathematics 27 (1968) 267–273 Glasner, Eli Relatively invariant measures Pacific Journal of Mathematics 58 (1975) 393–410 Glasner, Moses Atoms on the Royden boundary Pacific Journal of Mathematics 49 (1973) 339–347 Bisection into small annuli Pacific Journal of Mathematics 24 (1968) 457–461 Correction to: Function-theoretic degeneracy criteria for Riemannian manifolds'' Pacific Journal of Mathematics 31 (1969) 834–834c Function-theoretic degeneracy criteria for Riemannian manifolds Pacific Journal of Mathematics 28 (1969) 351–356 Surjective extension of the reduction operator Pacific Journal of Mathematics 104 (1983) 361–369 Glass, Andrew $a^{\ast}$-closures of completely distributive lattice-ordered groups Pacific Journal of Mathematics 59 (1975) 43–67 Archimedean extensions of directed interpolation groups Pacific Journal of Mathematics 44 (1973) 515–521 Correction to: $a^*$-closures to completely distributive lattice-ordered groups'' Pacific Journal of Mathematics 61 (1975) 606–606 Glass, Darren Embedding problems and finite quotients Pacific Journal of Mathematics 205 (2002) 31–41 Glassco, Donald Irreducible sums of simple multivectors Pacific Journal of Mathematics 49 (1973) 13–32 Glassman, Neal Cohomology of nonassociative algebras Pacific Journal of Mathematics 33 (1970) 617–634 Glauberman, George A characteristic subgroup of a group of odd order Pacific Journal of Mathematics 56 (1975) 305–319 Correction to: A characteristic subgroup of a group of odd order'' Pacific Journal of Mathematics 61 (1975) 607–607b Normalizers of $p$-subgroups in finite groups Pacific Journal of Mathematics 29 (1969) 137–144 On Burnside's other $p^{a}q^{b}$ theorem Pacific Journal of Mathematics 56 (1975) 469–476 Weakly closed direct factors of Sylow subgroups Pacific Journal of Mathematics 26 (1968) 73–83 Gleason, Andrew Distribution of round-off errors for running averages Pacific Journal of Mathematics 3 (1953) 605–611 The abstract theorem of Cauchy-Weil Pacific Journal of Mathematics 12 (1962) 511–525 The extension of linear functionals defined on $H^{\infty}$ Pacific Journal of Mathematics 12 (1962) 163–182 Gleit, Alan On the structure topology of simplex spaces Pacific Journal of Mathematics 34 (1970) 389–405 Glenn, Laura Ann Model rigid CR submanifolds of CR dimension 1 Pacific Journal of Mathematics 184 (1998) 43–74 Glennie, Charles Some identities valid in special Jordan algebras but not valid in all Jordan algebras Pacific Journal of Mathematics 16 (1966) 47–59 Glicksberg, Irving A Phragmén-Lindelöf theorem for function algebras Pacific Journal of Mathematics 22 (1967) 401–406 A remark on analyticity of function algebras Pacific Journal of Mathematics 13 (1963) 
1181–1185 An analogue of Liapounoff's convexity theorem for Birnbaum-Orlicz spaces and the extreme points of their unit balls Pacific Journal of Mathematics 116 (1985) 265–283 An application of Wermer's subharmonicity theorem Pacific Journal of Mathematics 94 (1981) 315–326 Boundary continuity of some holomorphic functions Pacific Journal of Mathematics 80 (1979) 425–434 Convolution semigroups of measures Pacific Journal of Mathematics 9 (1959) 51–67 Correction to: Maximal algebras and a theorem of Radó'' Pacific Journal of Mathematics 19 (1966) 587–587 Homomorphisms of certain algebras of measures Pacific Journal of Mathematics 10 (1960) 167–191 Indicator functions with large Fourier transforms Pacific Journal of Mathematics 105 (1983) 11–20 Maps preserving translates of a function Pacific Journal of Mathematics 87 (1980) 323–334 Maximal algebras and a theorem of Radó Pacific Journal of Mathematics 14 (1964) 919–941 More on Phragmén-Lindelöf for function algebras Pacific Journal of Mathematics 54 (1974) 25–38 Multipliers of quotients of $L_{1}$ Pacific Journal of Mathematics 38 (1971) 619–624 On convex hulls of translates Pacific Journal of Mathematics 13 (1963) 97–113 Removable discontinuities of $A$-holomorphic functions Pacific Journal of Mathematics 61 (1975) 417–426 Semi-square-summable Fourier-Stieltjes transforms Pacific Journal of Mathematics 31 (1969) 367–372 Weak compactness and separate continuity Pacific Journal of Mathematics 11 (1961) 205–214 Glimm, James Families of induced representations Pacific Journal of Mathematics 12 (1962) 885–911 Unitary operators in $C^{\ast}$-algebras Pacific Journal of Mathematics 10 (1960) 547–556 Globevnik, Josip Analytic extensions of vector-valued functions Pacific Journal of Mathematics 63 (1976) 389–395 Discs and the Morera property Pacific Journal of Mathematics 192 (2000) 65–91 Fourier coefficients of the Rudin-Carleson extensions Pacific Journal of Mathematics 88 (1980) 69–79 The range of analytic extensions Pacific Journal of Mathematics 69 (1977) 365–384 Glöckner, Helge Real and $p$-adicLie algebra functors on the category of topological groups Pacific Journal of Mathematics 203 (2002) 321–368 Glorieux, Olivier Entropy of embedded surfaces in quasi-fuchsian manifolds Pacific Journal of Mathematics 294 (2018) 375–400 Glover, Henry Fixed points on flag manifolds Pacific Journal of Mathematics 101 (1982) 303–306 Gluck, David Character value estimates for groups of Lie type Pacific Journal of Mathematics 150 (1991) 279–307 Glutsyuk, Alexey On commuting billiards in higher-dimensional spaces of constant curvature Pacific Journal of Mathematics 305 (2020) 577–595 Gnepp, Andrei Cost-minimizing networks\\among immiscible fluids in $\mathbb R^2$ Pacific Journal of Mathematics 196 (2000) 395–415 Göbel, Rüdiger On radicals and products Pacific Journal of Mathematics 118 (1985) 79–104 Products of conjugate permutations Pacific Journal of Mathematics 94 (1981) 47–60 Goberstein, Simon On orthodox semigroups determined by their bundles of correspondences Pacific Journal of Mathematics 153 (1992) 71–84 Godefroy, Gilles Compacts de Rosenthal Pacific Journal of Mathematics 91 (1980) 293–306 Godelle, Eddy Parabolic subgroups ofArtin groups\\ of type FC Pacific Journal of Mathematics 208 (2003) 243–254 Godoy, Tomás $L^2$ spectral decomposition on the Heisenberg group associated to the action of $U( p,q)$ Pacific Journal of Mathematics 193 (2000) 327–353 Goebel, Kazimierz On the minimal displacement of points under Lipschitzian mappings Pacific Journal of 
Mathematics 45 (1973) 151–163 Goel, D. S. Best approximation by a saturation class of polynomial operators Pacific Journal of Mathematics 55 (1974) 149–155 Goggins, Robert Cobordism of manifolds with strong almost tangent structures Pacific Journal of Mathematics 115 (1984) 361–371 Golan, Jonathan Topologies on the torsion-theoretic spectrum of a noncommutative ring Pacific Journal of Mathematics 51 (1974) 439–450 Golasiński, Marek Postnikov towers and Gottlieb groups of orbit spaces Pacific Journal of Mathematics 197 (2001) 291–300 Golber, David The cohomological description of a torus action Pacific Journal of Mathematics 46 (1973) 149–154 Gold, Robert $\Gamma$-extensions of imaginary quadratic fields Pacific Journal of Mathematics 40 (1972) 83–88 Genera in normal extensions Pacific Journal of Mathematics 63 (1976) 397–400 Goldberg, David $R$-groups and elliptic representations for $\mathrm{SL}_n$ Pacific Journal of Mathematics 165 (1994) 77–92 R-groups and parameters Pacific Journal of Mathematics 255 (2012) 281–303 Goldberg, Jack Polynomials orthogonal over a denumerable set Pacific Journal of Mathematics 15 (1965) 1171–1186 Goldberg, Michael Matrix $A_p$ weights via maximal functions Pacific Journal of Mathematics 211 (2003) 201–220 Goldberg, Moshe On characterizations and integrals of generalized numerical ranges Pacific Journal of Mathematics 69 (1977) 45–54 Stable subnorms revisited Pacific Journal of Mathematics 215 (2004) 15–27 Goldberg, Myron Cycles in $k$-strong tournaments Pacific Journal of Mathematics 40 (1972) 89–96 Goldberg, Richard An inversion of the Stieltjes transform Pacific Journal of Mathematics 8 (1958) 213–217 Averages of Fourier coefficients Pacific Journal of Mathematics 9 (1959) 695–699 Goldberg, Samuel Integrability of almost cosymplectic structures Pacific Journal of Mathematics 31 (1969) 373–382 Totally geodesic hypersurfaces of Kaehler manifolds Pacific Journal of Mathematics 27 (1968) 275–281 Goldberg, Seymour Closed linear operators and associated continuous linear opeators Pacific Journal of Mathematics 12 (1962) 183–186 Linear operators and their conjugates Pacific Journal of Mathematics 9 (1959) 69–79 Ranges and inverses of perturbed linear operators Pacific Journal of Mathematics 9 (1959) 701–706 Goldbring, Isaac Games and elementary equivalence of II_1 factors Pacific Journal of Mathematics 278 (2015) 103–118 Goldman, André Mesures cylindriques, mesures vectorielles et questions de concentration cylindrique Pacific Journal of Mathematics 69 (1977) 385–413 Goldman, William Polynomial forms on affine manifolds Pacific Journal of Mathematics 101 (1982) 115–121 Two examples of affine manifolds Pacific Journal of Mathematics 94 (1981) 327–330 Goldschmidt, David Classical link invariants and the Burau representation Pacific Journal of Mathematics 144 (1990) 277–292 Goldsmith, Donald Convolutions of arithmetic functions over cohesive basic sequences Pacific Journal of Mathematics 38 (1971) 391–399 On the density of certain cohesive basic sequences Pacific Journal of Mathematics 42 (1972) 323–327 On the multiplicative properties of arithmetic functions Pacific Journal of Mathematics 27 (1968) 283–304 Goldstein, Allen A finite algorithm for the solution of consistent linear equations and inequalities and for the Tchebycheff approximation of inconsistent linear equations Pacific Journal of Mathematics 8 (1958) 415–427 Goldstein, Daniel Inversion invariant additive subgroups of division rings Pacific Journal of Mathematics 227 (2006) 287–294 Norms in central 
simple algebras Pacific Journal of Mathematics 292 (2018) 373–388 Goldstein, Jerome Groups of isometries on Orlicz spaces Pacific Journal of Mathematics 48 (1973) 387–393 One parameter groups of isometries on certain Banach spaces Pacific Journal of Mathematics 64 (1976) 145–151 Goldstein, Myron $K$- and $L$-kernels on an arbitrary Riemann surface Pacific Journal of Mathematics 19 (1966) 449–459 On a paper of Rao Pacific Journal of Mathematics 27 (1968) 497–500 Goldstein, Norman Ampleness in complex homogeneous spaces and a second Lefschetz theorem Pacific Journal of Mathematics 106 (1983) 271–291 Degenerate secant varieties and a problem on matrices Pacific Journal of Mathematics 119 (1985) 115–124 Golightly, George Graph-dense linear transformations Pacific Journal of Mathematics 82 (1979) 371–377 Shadow and inverse-shadow inner products for a class of linear transformations Pacific Journal of Mathematics 103 (1982) 389–399 Golodets, Valentin Ya. Regularization of actions of groups and groupoids on measured equivalence relations Pacific Journal of Mathematics 137 (1989) 145–154 Golomb, Solomon A function-theoretic approach to the study of nonlinear recurring sequences Pacific Journal of Mathematics 56 (1975) 455–468 Formulas for the next prime Pacific Journal of Mathematics 63 (1976) 401–404 Golovko, Roman A note on Lagrangian cobordisms between Legendrian submanifolds of R^{2n+1} Pacific Journal of Mathematics 261 (2013) 101–116 The cylindrical contact homology of universally tight sutured contact solid tori Pacific Journal of Mathematics 274 (2015) 73–96 Golubitsky, Martin Primitive subalgebras of exceptional Lie algebras Pacific Journal of Mathematics 39 (1971) 371–393 Gomes, Andre Energy and volume of vector fields on spherical domains Pacific Journal of Mathematics 257 (2012) 1–7 Gómez Gil, Javier On local convexity of bounded weak topologies on Banach spaces Pacific Journal of Mathematics 110 (1984) 71–76 Gómez-Molleda, Maria Dihedral Galois groups of even degree polynomials Pacific Journal of Mathematics 229 (2007) 185–197 Gomi, Kensaku Finite groups with a standard subgroup isomorphic to $PSU(4,2)$ Pacific Journal of Mathematics 79 (1978) 399–462 Goncalves, Adilson Structural constants. II Pacific Journal of Mathematics 54 (1974) 39–51 Gonçalves, Daciberg Fixed points of $S^1$-fibrations Pacific Journal of Mathematics 129 (1987) 297–306 Groups of PL-homeomorphisms admitting non-trivial invariant characters Pacific Journal of Mathematics 287 (2017) 101–158 Inclusion of configuration spaces in Cartesian products, and the virtual cohomological dimension of the braid groups of S^2 and RP^2 Pacific Journal of Mathematics 287 (2017) 71–99 Postnikov towers and Gottlieb groups of orbit spaces Pacific Journal of Mathematics 197 (2001) 291–300 Sigma theory and twisted conjugacy classes Pacific Journal of Mathematics 247 (2010) 335–352 Sigma theory and twisted conjugacy-II: Hougton groups and pure symmetric automorphism groups Pacific Journal of Mathematics 280 (2016) 349–369 Twisted conjugacy classes in R. 
Thompson's Group F Pacific Journal of Mathematics 238 (2008) 1–6 Gonek, Steven Explicit formulae and discrepancy estimates for $a$-points of the Riemann zeta-function Pacific Journal of Mathematics 303 (2019) 47–71 Gong, Guihua Determinant rank of $C^*$-algebras Pacific Journal of Mathematics 274 (2015) 405–436 Gong, Sheng Biholomorphic convex mappings of ball in $\mathbb{C}^n$ Pacific Journal of Mathematics 161 (1993) 287–306 The growth and $1/4$-theorems for starlike mappings in $\mathbf{C}^n$ Pacific Journal of Mathematics 150 (1991) 13–22 Gong, Xianghong Divergence of the normalization for real Lagrangian surfaces near complex tangents Pacific Journal of Mathematics 176 (1996) 311–324 Normal forms for CR singular codimension-two Levi-flat submanifolds Pacific Journal of Mathematics 275 (2015) 115–165 Gonshor, Harry On abstract affine near-rings Pacific Journal of Mathematics 14 (1964) 1237–1240 Remarks on the Dedekind completion of a nonstandard model of the reals Pacific Journal of Mathematics 118 (1985) 117–132 Gonzalez, E. Existence and regularity for the problem of a pendent liquid drop Pacific Journal of Mathematics 88 (1980) 399–420 González, Manuel Local reflexivity of dual Banach spaces Pacific Journal of Mathematics 189 (1999) 263–278 Gonzalez, Maria $L^2$ estimates on chord-arc curves Pacific Journal of Mathematics 190 (1999) 225–233 González Nogueras, María del Mar Asymptotic behavior of Palais-Smale sequences associated with fractional Yamabe type equations Pacific Journal of Mathematics 278 (2015) 369–405 Classification of singularities for a subcritical fully non-linear problem Pacific Journal of Mathematics 226 (2006) 83–102 Renormalized weighted volume for the conformal fractional Laplacian Pacific Journal of Mathematics 257 (2012) 379–394 González-Espino-Barros, Jesús Classification of the stable homotopy types of stunted lens spaces for an odd prime Pacific Journal of Mathematics 176 (1996) 325–343 González-Meneses López, Juan Ordering pure braid groups on compact, connected surfaces Pacific Journal of Mathematics 203 (2002) 369–378 Gonzalez-Velasco, Enrique On the range of an unbounded partly atomic vector-valued measure Pacific Journal of Mathematics 154 (1992) 245–251 Goodearl, Kenneth $K_1$ of separative exchange rings and C*-algebras with real rank zero Pacific Journal of Mathematics 195 (2000) 261–275 Cancellation of low-rank vector bundles Pacific Journal of Mathematics 113 (1984) 289–302 Centers of regular self-injective rings Pacific Journal of Mathematics 76 (1978) 381–395 Choquet simplexes and $\sigma$-convex faces Pacific Journal of Mathematics 66 (1976) 119–124 Commutation relations for arbitrary quantum minors Pacific Journal of Mathematics 228 (2006) 63–102 Completions of regular rings. 
II Pacific Journal of Mathematics 72 (1977) 423–459 Direct limit groups and the Keesling-Mardešić shape fibration Pacific Journal of Mathematics 86 (1980) 471–476 Directly finite aleph-nought-continuous regular rings Pacific Journal of Mathematics 100 (1982) 105–122 Distributing tensor product over direct product Pacific Journal of Mathematics 43 (1972) 107–110 Essential products of nonsingular rings Pacific Journal of Mathematics 45 (1973) 493–505 Homogenization of regular rings of bounded index Pacific Journal of Mathematics 84 (1979) 63–78 Idealizers and nonsingular rings Pacific Journal of Mathematics 48 (1973) 395–402 Krull dimension of skew-Laurent extensions Pacific Journal of Mathematics 114 (1984) 109–147 Localization and splitting in hereditary noetherian prime rings Pacific Journal of Mathematics 53 (1974) 137–151 Patch-continuity of normalized ranks of modules over one-sided Noetherian rings Pacific Journal of Mathematics 122 (1986) 83–94 Periodic flat modules, and flat modules for finite groups Pacific Journal of Mathematics 196 (2000) 45–67 Power-cancellation of groups and modules Pacific Journal of Mathematics 64 (1976) 387–411 Rings over which certain modules are injective Pacific Journal of Mathematics 58 (1975) 43–53 Uniform rank over differential operator rings and Poincaré-Birkhoff-Witt extensions Pacific Journal of Mathematics 131 (1988) 13–37 Goodey, Paul A note on starshaped sets Pacific Journal of Mathematics 61 (1975) 151–152 Goodman, Frederick The Temperley-Lieb algebra at roots of unity Pacific Journal of Mathematics 161 (1993) 307–334 Translation invariant closed $\ast$ derivations Pacific Journal of Mathematics 97 (1981) 403–413 Goodman, Roe Positive-definite distributions and intertwining operators Pacific Journal of Mathematics 48 (1973) 83–91 Goodman, Ruth $K$-polar polynomials Pacific Journal of Mathematics 12 (1962) 1277–1288 A certain class of polynomials Pacific Journal of Mathematics 17 (1966) 57–69 Goodman, Victor Norms of random matrices Pacific Journal of Mathematics 59 (1975) 359–365 Goodrick, Richard Two bridge knots are alternating knots Pacific Journal of Mathematics 40 (1972) 561–564 Goodwin, Simon Counting conjugacy classes in the unipotent radical of parabolic subgroups of GLn(q) Pacific Journal of Mathematics 245 (2010) 47–56 Representation theory of type B and C standard Levi $W$-algebras Pacific Journal of Mathematics 269 (2014) 31–71 Goodykoontz, Jack Aposyndetic properties of hyperspaces Pacific Journal of Mathematics 47 (1973) 91–98 Connectedness im kleinen and local connectedness in $2^X$ and $C(X)$ Pacific Journal of Mathematics 53 (1974) 387–397 Gootman, Elliot The type of some $C^{\ast}$ and $W^{\ast}$-algebras associated with transformation groups Pacific Journal of Mathematics 48 (1973) 93–106 Gopala Krishna, J. Maximum term of a power series in one and several complex variables Pacific Journal of Mathematics 29 (1969) 609–622 Gopalakrishnan, N. S. Homological dimension of Ore-extensions Pacific Journal of Mathematics 19 (1966) 67–75 Gopalsamy, K. 
Oscillatory properties of systems of first order linear delay differential inequalities Pacific Journal of Mathematics 128 (1987) 299–305 Gordeev, Nikolai Intersection of conjugacy classes with Bruhat cells in Chevalley groups Pacific Journal of Mathematics 214 (2004) 245–261 Gordh, George Characterizing local connectedness in inverse limits Pacific Journal of Mathematics 58 (1975) 411–417 Monotone decompositions of irreducible Hausdorff continua Pacific Journal of Mathematics 36 (1971) 647–658 Terminal subcontinua of hereditarily unicoherent continua Pacific Journal of Mathematics 47 (1973) 457–464 Gordon, B. Algebraically defined subspaces in the cohomology of a Kuga fiber variety Pacific Journal of Mathematics 131 (1988) 261–276 Gordon, Basil A generalization of the coset decomposition of a finite group Pacific Journal of Mathematics 15 (1965) 503–509 A proof of the Bender-Knuth conjecture Pacific Journal of Mathematics 108 (1983) 99–113 Binomial coefficients whose products are perfect $k$th powers Pacific Journal of Mathematics 118 (1985) 393–400 Correction to: A generalization of the coset decomposition of a finite group Pacific Journal of Mathematics 15 (1965) 1474–1474c Ernst G. Straus, 1922--1983 Pacific Journal of Mathematics 118 (1985) i–xx On a theorem of Delaunay and some related results Pacific Journal of Mathematics 68 (1977) 399–409 On the determination of sets by the sets of sums of a certain order Pacific Journal of Mathematics 12 (1962) 187–196 Partitions of groups and complete mappings Pacific Journal of Mathematics 92 (1981) 283–293 Sequences in groups with distinct partial products Pacific Journal of Mathematics 11 (1961) 1309–1313 Gordon, Cameron On a theorem of Murasugi Pacific Journal of Mathematics 82 (1979) 69–74 Gordon, Gerald On the degeneracy of a spectral sequence associated to normal crossings Pacific Journal of Mathematics 90 (1980) 389–396 Gordon, Hugh Measure defined by abstract $L_{p}$ spaces Pacific Journal of Mathematics 10 (1960) 557–562 Rings of functions determined by zero-sets Pacific Journal of Mathematics 36 (1971) 133–157 Gordon, James Approximating annular capillary surfaces with equal contact angles Pacific Journal of Mathematics 247 (2010) 371–387 Properties of annular capillary surfaces with equal contact angles Pacific Journal of Mathematics 247 (2010) 353–370 Gordon, Julia Elliptic curves, random matrices and orbital integrals Pacific Journal of Mathematics 286 (2017) 1–24 Gordon, Manfred Determinants of Petrie matrices Pacific Journal of Mathematics 51 (1974) 451–453 Gordon, Robert Rings in which minimal left ideals are projective Pacific Journal of Mathematics 31 (1969) 679–692 Gordon, Samuel Associators in simple algebras Pacific Journal of Mathematics 51 (1974) 131–141 Gordon, William An analysis of equality in certain matrix inequalities. I Pacific Journal of Mathematics 34 (1970) 407–413 Gordon, Y. 
Unconditional Schauder decompositions of normed ideals of operators between some $l_{p}$-spaces Pacific Journal of Mathematics 60 (1975) 71–82 Gorenstein, Daniel On a theorem of Philip Hall Pacific Journal of Mathematics 19 (1966) 77–80 On almost-commuting permutations Pacific Journal of Mathematics 12 (1962) 913–923 Goresky, Robert Ordinary points mod p of GL(n,R) locally symmetric spaces Pacific Journal of Mathematics 303 (2019) 165–215 Real structures on polarized Dieudonne modules Pacific Journal of Mathematics 303 (2019) 217–241 Gorkin, Pamela Division theorems and the\\Shilov property for ${\rm H}^{\infty} + {\rm C}$ Pacific Journal of Mathematics 189 (1999) 279–292 Essentially commuting Toeplitz operators Pacific Journal of Mathematics 190 (1999) 87–109 Gorman, Howard Zero divisors in differential rings Pacific Journal of Mathematics 39 (1971) 163–171 Gorman, Howard The Brandt condition and invertibility of modules Pacific Journal of Mathematics 32 (1970) 351–371 Gorodski, Claudio The classification of simply-connected contact sub-riemannian symmetric spaces Pacific Journal of Mathematics 188 (1999) 65–82 Górski, Jerzy The Sochocki-Plemelj formula for the functions of two complex variables Pacific Journal of Mathematics 11 (1961) 897–907 Goseki, Zensiro On semigroups in which $x=xyx=xzx$ if and only if $x=xyzx$ Pacific Journal of Mathematics 60 (1975) 103–110 Goss, David On a new type of $L$-function for algebraic curves over finite fields Pacific Journal of Mathematics 105 (1983) 143–181 Goss, G. $C$-compact and functionally compact spaces Pacific Journal of Mathematics 37 (1971) 677–681 Some topological properties weaker than compactness Pacific Journal of Mathematics 35 (1970) 635–638 Gosselin, Richard Closure theorems for affine transformation groups Pacific Journal of Mathematics 54 (1974) 53–57 On Diophantine approximation and trigonometric polynomials Pacific Journal of Mathematics 9 (1959) 1071–1081 On the convergence behaviour of trigonometric interpolating polynomials Pacific Journal of Mathematics 5 (1955) 915–922 Gossez, Jean-Pierre Some geometric properties related to the fixed point theory for nonexpansive mappings Pacific Journal of Mathematics 40 (1972) 565–573 Goswami, Debashish Dilation of Markovian cocycleson \\ a von Neumann algebra Pacific Journal of Mathematics 211 (2003) 221–247 Gotoh, Yasuhiro On composition operators which preserve BMO Pacific Journal of Mathematics 201 (2001) 289–307 Gottlieb, Daniel The Lefschetz number and Borsuk-Ulam theorems Pacific Journal of Mathematics 103 (1982) 29–37 The total space of universal fibrations Pacific Journal of Mathematics 46 (1973) 415–417 Gottlieb, Sarah Algebraic automorphisms of algebraic groups with stable maximal tori Pacific Journal of Mathematics 72 (1977) 461–470 Gould, Matthew Automorphism groups retracting onto symmetric groups Pacific Journal of Mathematics 81 (1979) 93–100 Endomorphism and automorphism structure of direct squares of universal algebras Pacific Journal of Mathematics 59 (1975) 69–84 Multiplicity type and subalgebra structure in universal algebras Pacific Journal of Mathematics 26 (1968) 469–485 Goulding, Thomas Structure of semiprime $(p,\,q)$ radicals Pacific Journal of Mathematics 37 (1971) 97–99 Gouli-Andreou, Florence On the concircular curvature of a $(\kappa, \mu, \nu)$-manifold Pacific Journal of Mathematics 269 (2014) 113–132 Three classes of pseudosymmetric contact metric 3-manifolds Pacific Journal of Mathematics 245 (2010) 57–77 Two classes of pseudosymmetric contact metric 
3-manifolds Pacific Journal of Mathematics 239 (2009) 17–37 Goullet de Rugy, Alain Un théorème du genre Andô-Edwards'' pour les Fréchet ordonnés normaux Pacific Journal of Mathematics 46 (1973) 155–166 Gourevitch, Dmitry Uniqueness of Shalika functionals: the archimedean case Pacific Journal of Mathematics 243 (2009) 201–212 Govaerts, Willy Locally convex spaces of non-Archimedean valued continuous functions Pacific Journal of Mathematics 109 (1983) 399–410 Gover, Ashwin Conformal Holonomy Equals Ambient Holonomy Pacific Journal of Mathematics 285 (2016) 303–318 Conformally invariant non-local operators Pacific Journal of Mathematics 201 (2001) 19–60 The ambient obstruction tensor and the conformal deformation complex Pacific Journal of Mathematics 226 (2006) 309–351 Gover, Eugene Increasing sequences of Betti numbers Pacific Journal of Mathematics 87 (1980) 65–68 Gow, R. Groups whose irreducible character degrees are ordered by divisibility Pacific Journal of Mathematics 57 (1975) 135–139 Goyal, Vinod Bounds for the solution of a certain class of nonlinear partial differential equations Pacific Journal of Mathematics 33 (1970) 117–138 Goze, Michel Z2xZ2-symmetric spaces Pacific Journal of Mathematics 236 (2008) 1–21 Grabiner, Sandy The $L^p$ theory of standard homomorphisms Pacific Journal of Mathematics 168 (1995) 49–60 Grabinsky, Guillermo Poisson process over $\sigma$-finite Markov chains Pacific Journal of Mathematics 111 (1984) 301–315 Grabowski, Jan A triple construction for Lie bialgebras Pacific Journal of Mathematics 221 (2005) 281–301 Grace, Edward Cut points in totally non-semi-locally-connected continua Pacific Journal of Mathematics 14 (1964) 1241–1244 On local properties and $G_{\delta}$ sets Pacific Journal of Mathematics 14 (1964) 1245–1248 Graczyk, Piotr The product formula for the spherical functions on symmetric spaces inthe complex case Pacific Journal of Mathematics 204 (2002) 377–393 Transition density estimates for stable processes on symmetric spaces Pacific Journal of Mathematics 217 (2004) 87–100 Graef, John Limit circle type results for sublinear equations Pacific Journal of Mathematics 104 (1983) 85–94 On the nonoscillation of perturbed functional-differential equations Pacific Journal of Mathematics 83 (1979) 365–373 Some nonoscillation criteria for higher order nonlinear differential equations Pacific Journal of Mathematics 66 (1976) 125–129 Graf, Siegfried On the existence of strong liftings in second countable topological spaces Pacific Journal of Mathematics 58 (1975) 419–426 Realizing automorphisms of quotients of product $\sigma$-fields Pacific Journal of Mathematics 99 (1982) 19–30 Grafakos, Loukas A Selberg integral formula and applications Pacific Journal of Mathematics 191 (1999) 85–94 Graham, C. 
Conformal Holonomy Equals Ambient Holonomy Pacific Journal of Mathematics 285 (2016) 303–318 Graham, Colin Bilinear operators on $L^\infty(G)$ of locally compact groups Pacific Journal of Mathematics 158 (1993) 157–176 Bimeasure algebras on LCA groups Pacific Journal of Mathematics 115 (1984) 91–127 Nonfactorization in commutative, weakly selfadjoint Banach algebras Pacific Journal of Mathematics 80 (1979) 117–125 Planar sidonicity and quasi independence for multiplicative subgroups of the roots of unity Pacific Journal of Mathematics 225 (2006) 325–360 Graham, Ian Bloch constants in one and several variables Pacific Journal of Mathematics 174 (1996) 347–357 Intrinsic measures and holomorphic retracts Pacific Journal of Mathematics 130 (1987) 299–311 Graham, Ronald On finite sums of reciprocals of distinct $n$th powers Pacific Journal of Mathematics 14 (1964) 85–92 On tightest packings in the Minkowski plane Pacific Journal of Mathematics 41 (1972) 699–715 The Radon transform on $\mathbf{Z}^k_2$ Pacific Journal of Mathematics 118 (1985) 323–345 Graham, William The forgetful map in rational K-theory Pacific Journal of Mathematics 236 (2008) 45–55 Graham-Squire, Adam Calculation of local formal Mellin transforms Pacific Journal of Mathematics 283 (2016) 115–137 Grainger, Arthur D. Invariant subspaces of compact operators on topological vector spaces Pacific Journal of Mathematics 56 (1975) 477–493 Granas, Andrzej Applications of topological transversality to differential equations. I. Some nonlinear diffusion problems Pacific Journal of Mathematics 89 (1980) 53–67 On a theorem of S. Bernstein Pacific Journal of Mathematics 74 (1978) 67–82 Topological transversality. II. Applications to the Neumann problem for $y^{\prime\prime}=f(t,\,y,\,y^{\prime})$ Pacific Journal of Mathematics 104 (1983) 95–109 Granata, Antonio A geometric characterization of $n$th order convex functions Pacific Journal of Mathematics 98 (1982) 91–98 Grandjean, F. 
A note on Taylor's Brauer group Pacific Journal of Mathematics 186 (1998) 13–27 Granirer, Edmond On the invariant mean on topological semigroups and on topological groups Pacific Journal of Mathematics 15 (1965) 107–140 Granja, Angel Apéry basis and polar invariants of plane curve singularities Pacific Journal of Mathematics 140 (1989) 85–96 Granlund, Seppo Note on the PWB-method in the nonlinear case Pacific Journal of Mathematics 125 (1986) 381–395 Grant, Caroline Hyperbolicity of surfaces modulo rational and elliptic curves Pacific Journal of Mathematics 139 (1989) 241–249 Metrics for singular analytic spaces Pacific Journal of Mathematics 168 (1995) 61–156 Grant, Douglas Topological groups which satisfy an open mapping theorem Pacific Journal of Mathematics 68 (1977) 411–423 Grant, John Automorphisms definable by formulas Pacific Journal of Mathematics 44 (1973) 107–115 Corrections to: Automorphisms definable by formulas'' Pacific Journal of Mathematics 55 (1974) 639–639 Grant, Joseph Braid groups and quiver mutation Pacific Journal of Mathematics 290 (2017) 77–116 Granville, Andrew Values of Bernoulli polynomials Pacific Journal of Mathematics 172 (1996) 117–137 Grassl, Richard Multisectioned partitions of integers Pacific Journal of Mathematics 69 (1977) 415–424 Polynomials in denumerable indeterminates Pacific Journal of Mathematics 97 (1981) 415–423 Grätzer, George A nonassociative extension of the class of distributive lattices Pacific Journal of Mathematics 49 (1973) 59–78 Chain conditions in free products of lattices with infinitary operations Pacific Journal of Mathematics 83 (1979) 107–115 Embedding lattices into lattices of ideals Pacific Journal of Mathematics 85 (1979) 65–75 On the endomorphism semigroup (and category) of bounded lattices Pacific Journal of Mathematics 35 (1970) 639–647 On the number of polynomials of an idempotent algebra. I Pacific Journal of Mathematics 32 (1970) 697–709 On the number of polynomials of an idempotent algebra. II Pacific Journal of Mathematics 47 (1973) 99–113 Symmetric difference in abelian groups Pacific Journal of Mathematics 74 (1978) 339–347 The amalgamation property in equational classes of modular lattices Pacific Journal of Mathematics 45 (1973) 507–524 Uniform representations of congruence schemes Pacific Journal of Mathematics 76 (1978) 301–311 Gray, Alfred Asymptotic values of a holomorphic function with respect to its maximum term Pacific Journal of Mathematics 18 (1966) 111–120 Sphere transitive structures and the triality automorphism Pacific Journal of Mathematics 34 (1970) 83–96 Gray, Allan Normal subgroups of monomial groups Pacific Journal of Mathematics 12 (1962) 527–532 Gray, Jack D. Local analytic extensions of the resolvent Pacific Journal of Mathematics 27 (1968) 305–324 Gray, John Extensions of sheaves of associative algebras by non-trivial kernels Pacific Journal of Mathematics 11 (1961) 909–917 Gray, Mary Abelian objects Pacific Journal of Mathematics 23 (1967) 69–78 Radical subcategories Pacific Journal of Mathematics 23 (1967) 79–89 Gray, Neil Unstable points in the hyperspace of connected subsets Pacific Journal of Mathematics 23 (1967) 515–520 Gray, William A note on topological transformation groups with a fixed end point Pacific Journal of Mathematics 19 (1966) 441–447 Grbac, Neven Endoscopic transfer for unitary groups and holomorphy of Asai L-functions Pacific Journal of Mathematics 276 (2015) 185–211 Greco, Gabriele H. 
Compactoid and compact filters Pacific Journal of Mathematics 117 (1985) 69–98 Greco, Silvio The Lüroth semigroup of plane algebraic curves Pacific Journal of Mathematics 151 (1991) 43–56 Green, David G. The lattice of congruences on an inverse semigroup Pacific Journal of Mathematics 57 (1975) 141–152 Green, Edward Diagonalizable derivations of \\finite-dimensional algebras II Pacific Journal of Mathematics 196 (2000) 341–352 On the representation theory of rings in matrix form Pacific Journal of Mathematics 100 (1982) 123–138 Green, Euline On a Radon-Nikodym theorem for finitely additive set functions Pacific Journal of Mathematics 27 (1968) 255–259 Green, John W. Length and area of a convex curve under affine transformation Pacific Journal of Mathematics 3 (1953) 393–402 Mean values of harmonic functions on homothetic curves Pacific Journal of Mathematics 6 (1956) 279–282 On the level surfaces of potentials of masses with fixed center of gravity Pacific Journal of Mathematics 2 (1952) 147–152 Green, John Completion and semicompletion of Moore spaces Pacific Journal of Mathematics 57 (1975) 153–165 Separating certain plane-like spaces by Peano continua Pacific Journal of Mathematics 38 (1971) 625–634 Green, Leon A sphere characterization related to Blaschke's conjecture Pacific Journal of Mathematics 10 (1960) 837–841 Green, Marvin A locally convex topology on a preordered space Pacific Journal of Mathematics 26 (1968) 487–491 Green, Paul Sphere transitive structures and the triality automorphism Pacific Journal of Mathematics 34 (1970) 83–96 Green, Philip $C^*$-algebras of transformation groups with smooth orbit space Pacific Journal of Mathematics 72 (1977) 71–97 Stable isomorphism and strong Morita equivalence of $C^*$-algebras Pacific Journal of Mathematics 71 (1977) 349–363 Green, Richard On representations of affine Temperley--Lieb algebras, II Pacific Journal of Mathematics 191 (1999) 243–273 Greenberg, Marvin Strictly local solutions of Diophantine equations Pacific Journal of Mathematics 51 (1974) 143–153 Greenberg, Michael On Isoperimetric Surfaces in General Relativity Pacific Journal of Mathematics 231 (2007) 63–84 Greenberg, Peter Pseudogroups of $C^1$ piecewise projective homeomorphisms Pacific Journal of Mathematics 129 (1987) 67–75 The Euler class for piecewise'' groups Pacific Journal of Mathematics 155 (1992) 283–293 The geometry of sum-preserving permutations Pacific Journal of Mathematics 135 (1988) 313–322 Greene, John Lagrange inversion over finite fields Pacific Journal of Mathematics 130 (1987) 313–325 Greene, M. T. Ordering the braid groups Pacific Journal of Mathematics 191 (1999) 49–74 Greene, Robert Addendum to: Lipschitz convergence of Riemannian manifolds'' Pacific Journal of Mathematics 140 (1989) 398–398 Lipschitz convergence of Riemannian manifolds Pacific Journal of Mathematics 131 (1988) 119–141 Semicontinuity of automorphism groups of strongly pseudoconvex domains: The low differentiability case Pacific Journal of Mathematics 262 (2013) 365–395 Greenfield, Gary R. 
Uniform distribution in subgroups of the Brauer group of an algebraic number field Pacific Journal of Mathematics 107 (1983) 369–381 Greenleaf, Frederick Characterization of group algebras in terms of their translation operators Pacific Journal of Mathematics 18 (1966) 243–276 Cyclic vectors for representations associated with positive definite measures: nonseparable groups Pacific Journal of Mathematics 45 (1973) 165–186 Existence of Borel transversals in groups Pacific Journal of Mathematics 25 (1968) 455–461 Groups of automorphisms of Lie groups: density properties, bounded orbits, and homogeneous spaces of finite volume Pacific Journal of Mathematics 86 (1980) 59–87 Norm decreasing homomorphisms of group algebras Pacific Journal of Mathematics 15 (1965) 1187–1219 Spectrum and multiplicities for restrictions of unitary representations in nilpotent Lie groups Pacific Journal of Mathematics 135 (1988) 233–267 Greenleaf, Newcomb Analytic sheaves on Klein surfaces Pacific Journal of Mathematics 37 (1971) 671–675 Positive holomorphic differentials on Klein surfaces Pacific Journal of Mathematics 32 (1970) 711–713 Greenspan, Bernard A bound for the orders of the components of a system of algebraic difference equations Pacific Journal of Mathematics 9 (1959) 473–486 Greenstein, Jacob Primitively generated Hall algebras Pacific Journal of Mathematics 281 (2016) 287–331 Greenwald, Harvey Lipschitz spaces of distributions on the surface of unit sphere in Euclidean $n$-space Pacific Journal of Mathematics 70 (1977) 163–176 Lipschitz spaces on the surface of the unit sphere in Euclidean $n$-space Pacific Journal of Mathematics 50 (1974) 63–80 On the theory of homogeneous Lipschitz spaces and Campanato spaces Pacific Journal of Mathematics 106 (1983) 87–93 Greenwood, R. E. 
Distribution of round-off errors for running averages Pacific Journal of Mathematics 3 (1953) 605–611 Grefsrud, Gary Oscillatory properties of solutions of certain $n$th order functional differential equations Pacific Journal of Mathematics 60 (1975) 83–93 Gregorac, Robert On realizing $\mathrm{HNN}$ groups in $3$-manifolds Pacific Journal of Mathematics 46 (1973) 381–387 Gregory, David Geometrical implications of upper semi-continuity of the duality mapping on a Banach space Pacific Journal of Mathematics 79 (1978) 99–109 Gregory, John An approximation theory for elliptic quadratic forms on Hilbert spaces: Application to the eigenvalue problem for compact quadratic forms Pacific Journal of Mathematics 37 (1971) 383–395 Numerical algorithms for oscillation vectors of second order differential equations including the Euler-Lagrange equation for symmetric tridiagonal matrices Pacific Journal of Mathematics 76 (1978) 397–406 Greither, Cornelius On constructions similar to the Burnside ring for commutative rings and profinite groups Pacific Journal of Mathematics 138 (1989) 57–71 Grenier, Douglas Fundamental domains for the general linear group Pacific Journal of Mathematics 132 (1988) 293–317 On the shape of fundamental domains in $\mathrm{GL}(n,\mathbf{R})/\mathrm{O}(n)$ Pacific Journal of Mathematics 160 (1993) 53–66 Griego, Richard Asymptotics for certain Wiener integrals associated with higher order differential operators Pacific Journal of Mathematics 142 (1990) 41–48 Grieser, Daniel Asymptotics of eigenfunctions on plane domains Pacific Journal of Mathematics 240 (2009) 109–133 Griess, Robert A remark about groups of characteristic $2$-type and $p$-type Pacific Journal of Mathematics 74 (1978) 349–355 Automorphisms of extra special groups and nonvanishing degree $2$ cohomology Pacific Journal of Mathematics 48 (1973) 403–422 Maximal subgroups and automorphisms of Chevalley groups Pacific Journal of Mathematics 71 (1977) 365–403 The splitting of extensions of $\mathrm{SL}(3,3)$ by the vector space $\mathbf{F}^3_3.$ Pacific Journal of Mathematics 63 (1976) 405–409 Griffeth, Stephen Systems of parameters and holonomicity of A-hypergeometric systems Pacific Journal of Mathematics 276 (2015) 281–286 Griffin, Ernest Everywhere defined linear transformations affiliated with rings of operators Pacific Journal of Mathematics 18 (1966) 489–493 Griffin, John Multipliers and the group $L_{p}$-algebras Pacific Journal of Mathematics 49 (1973) 365–370 Griffith, Phillip A note on a theorem of Hill Pacific Journal of Mathematics 29 (1969) 279–284 Binomial behavior of Betti numbers for modules of finite length Pacific Journal of Mathematics 133 (1988) 267–276 The structure of serial rings Pacific Journal of Mathematics 36 (1971) 109–121 Transitive and fully transitive primary abelian groups Pacific Journal of Mathematics 25 (1968) 249–254 Griffon, Richard On the arithmetic of a family of twisted constant elliptic curves Pacific Journal of Mathematics 305 (2020) 597–640 Grillet, Pierre Ideal extensions of semigroups Pacific Journal of Mathematics 26 (1968) 493–508 On subdirectly irreducible commutative semigroups Pacific Journal of Mathematics 69 (1977) 55–71 Grimaldi, Ralph Baer and $\mathrm{UT}$-modules over domains Pacific Journal of Mathematics 54 (1974) 59–72 Grimm, David Sums of squares in algebraic function fields over a complete discretely valued field Pacific Journal of Mathematics 267 (2014) 257–276 Grimmer, Ronald On the asymptotic behavior of solutions of $x''+a(t)f(x)=e(t)$ Pacific Journal 
of Mathematics 41 (1972) 43–55 Grinberg, Eric Analytic continuation of convex bodies and Funk's characterization of the sphere Pacific Journal of Mathematics 201 (2001) 309–322 Grishkov, Alexandre Normal enveloping algebras Pacific Journal of Mathematics 257 (2012) 131–141 Grivaux, Sophie Light groups of isomorphisms of Banach spaces and invariant LUR renormings Pacific Journal of Mathematics 301 (2019) 31–54 Grobner, Harald A cohomological injectivity result for the residual automorphic spectrum of $\mathrm{GL}_n$ Pacific Journal of Mathematics 268 (2014) 33–46 Groemer, H. On coverings of Euclidean space by convex sets Pacific Journal of Mathematics 75 (1978) 77–86 On some mean values associated with a randomly selected simplex in a convex set Pacific Journal of Mathematics 45 (1973) 525–533 On the extension of additive functionals on classes of convex sets Pacific Journal of Mathematics 75 (1978) 397–410 Space coverings by translates of convex sets Pacific Journal of Mathematics 82 (1979) 379–386 Gromadzki, Grzegorz On non-orientable Riemann surfaces with $2p$ or $2p+2$ automorphisms Pacific Journal of Mathematics 201 (2001) 267–288 Gronbaek, Niels Amenability of discrete convolution algebras, the commutative case Pacific Journal of Mathematics 143 (1990) 243–249 Grønbæk, Niels An imprimitivity theorem for representations of locallycompact groups on arbitrary Banach spaces Pacific Journal of Mathematics 184 (1998) 121–148 Gronski, Jan Classification of closed sets of attainability in the plane Pacific Journal of Mathematics 77 (1978) 117–129 Grosjean, Jean-Francois Upper boundsfor the first eigenvalue of the Laplacian on compact submanifolds Pacific Journal of Mathematics 206 (2002) 93–112 Gross, Benedict Embeddings into the integral octonions Pacific Journal of Mathematics 181 (1997) 147–158 Gross, Daniel Compact quotients by $\mathbf{C}^{\ast}$-actions Pacific Journal of Mathematics 114 (1984) 149–164 Gross, Fletcher Fixed-point-free operator groups of order $8$ Pacific Journal of Mathematics 28 (1969) 357–361 Groups admitting a fixed-point-free automorphism of order $2^{n}$ Pacific Journal of Mathematics 24 (1968) 269–275 The $2$-length of a finite solvable group Pacific Journal of Mathematics 15 (1965) 1221–1237 Gross, Fred Entire functions of several variables with algebraic derivatives at certain algebraic points Pacific Journal of Mathematics 31 (1969) 693–701 Gross, Jonathan Quotients of complete graphs: revisiting the Heawood map-coloring problem Pacific Journal of Mathematics 55 (1974) 391–402 Gross, Kenneth Ramanujan's master theorem for symmetric cones Pacific Journal of Mathematics 175 (1996) 447–490 Gross, Leonard Hankel operators over complex manifolds Pacific Journal of Mathematics 205 (2002) 43–97 Gross, Oliver A note on polynomial and separable games Pacific Journal of Mathematics 8 (1958) 735–741 Incidence matrices and interval graphs Pacific Journal of Mathematics 15 (1965) 835–855 Gross, Robert $S$-integer points on elliptic curves Pacific Journal of Mathematics 167 (1995) 263–288 Große-Brauckmann, Karsten Stable constant mean curvature surfaces minimize area Pacific Journal of Mathematics 175 (1996) 527–534 Grossman, Edward On the prime ideal divisors of $(a^{n}-b^{n})$ Pacific Journal of Mathematics 54 (1974) 73–83 Grossman, Jerrold On groups with specified lower central series quotients Pacific Journal of Mathematics 74 (1978) 83–90 Grossman, Nathaniel The volume of a totally-geodesic hypersurface in a pinched manifold Pacific Journal of Mathematics 23 (1967) 
257–262 Grosswald, Emil A class of modified $\zeta$ and $L$-functions Pacific Journal of Mathematics 74 (1978) 357–364 Rational valued series of exponentials and divisor functions Pacific Journal of Mathematics 60 (1975) 111–114 Grove, Larry Tensor products over $H^{\ast}$-algebras Pacific Journal of Mathematics 15 (1965) 857–863 Grover, John Covering groups of groups of Lie type Pacific Journal of Mathematics 30 (1969) 645–655 Gruenhage, Gary Metrization of spaces with countable large basis dimension Pacific Journal of Mathematics 59 (1975) 455–460 Spaces determined by point-countable covers Pacific Journal of Mathematics 113 (1984) 303–332 Sup-characterization of stratifiable spaces Pacific Journal of Mathematics 105 (1983) 279–284 Grunau, Hans-Christoph Nonexistence of local minima of supersolutions for the circular clamped plate Pacific Journal of Mathematics 198 (2001) 437–442 Grünbaum, Branko Corrections to: Isotoxal tilings'' Pacific Journal of Mathematics 79 (1978) 563–563a Isotoxal tilings Pacific Journal of Mathematics 76 (1978) 407–430 On a conjecture of H. Hadwiger Pacific Journal of Mathematics 11 (1961) 215–219 On a theorem of L. A. Santaló Pacific Journal of Mathematics 5 (1955) 351–359 On some covering and intersection properties in Minkowski spaces Pacific Journal of Mathematics 9 (1959) 487–494 On the number of invariant straight lines \\for polynomial differential systems Pacific Journal of Mathematics 184 (1998) 207–230 Partitions of mass-distributions and of convex bodies by hyperplanes Pacific Journal of Mathematics 10 (1960) 1257–1261 Preassigning the shape of a face Pacific Journal of Mathematics 32 (1970) 299–306 Some applications of expansion constants Pacific Journal of Mathematics 10 (1960) 193–201 The dimension of intersections of convex sets Pacific Journal of Mathematics 12 (1962) 197–202 Grünbaum, Francisco Discrete bispectral Darboux transformations from Jacobi operators Pacific Journal of Mathematics 204 (2002) 395–431 Grundmeier, Dusty Sums of CR functions from competing CR structures Pacific Journal of Mathematics 293 (2018) 257–275 Grünenfelder, Luzius Ascent and descent for finite sequences\\ of commuting endomorphisms Pacific Journal of Mathematics 191 (1999) 95–121 Jordan analogs of the Burnside and Jacobson density theorems Pacific Journal of Mathematics 161 (1993) 335–346 Permutability of characters on algebras Pacific Journal of Mathematics 178 (1997) 63–70 Gu, Caixing Products of block Toeplitz operators Pacific Journal of Mathematics 185 (1998) 115–148 Guadalupe, Irwen Valle Normal curvature of surfaces in space forms Pacific Journal of Mathematics 106 (1983) 95–103 Guan, Bo The Monge--Amp\ere equation with \\ infinite boundary value Pacific Journal of Mathematics 216 (2004) 77–94 Guan, Zhuang-dan Extremal-solitons and the C∞ exponential convergence of modified Calabi flows on certain completion of C* bundles Pacific Journal of Mathematics 233 (2007) 91–124 On Bisectional Negatively Curved Compact K\"ahler-Einstein Surfaces Pacific Journal of Mathematics 288 (2017) 343–353 On the nonexistence of $S^6$ type complex threefolds in any compact homogeneous complex manifolds with the compact lie group $G_2$ as the base manifold Pacific Journal of Mathematics 305 (2020) 641–644 Type I almost homogeneous manifolds of cohomogeneity one---III Pacific Journal of Mathematics 261 (2013) 369–388 Type II almost homogeneous manifolds of cohomogeneity one Pacific Journal of Mathematics 253 (2011) 383–422 Guan, Qi'an On some properties of squeezing functions on 
bounded domains Pacific Journal of Mathematics 257 (2012) 319–341 Guàrdia i Rúbies, Jordi A family of arithmetic surfaces of genus 3 Pacific Journal of Mathematics 212 (2003) 71–91 Guaschi, John Inclusion of configuration spaces in Cartesian products, and the virtual cohomological dimension of the braid groups of S^2 and RP^2 Pacific Journal of Mathematics 287 (2017) 71–99 Guay, Nicolas Double affine Lie algebras and finite groups Pacific Journal of Mathematics 243 (2009) 1–41 Gubeladze, Joseph The poset of rational cones Pacific Journal of Mathematics 292 (2018) 103–115 Gudder, Stanley A note on proposition observables Pacific Journal of Mathematics 28 (1969) 101–104 A Radon-Nikodým theorem for $\ast$-algebras Pacific Journal of Mathematics 80 (1979) 141–149 Erratum: Uniqueness and existence properties of bounded observables'' Pacific Journal of Mathematics 19 (1966) 588–590 Orthogonally additive and orthogonally increasing functions on vector spaces Pacific Journal of Mathematics 58 (1975) 427–436 Partial algebraic structures associated with orthomodular posets Pacific Journal of Mathematics 41 (1972) 717–730 The center of a poset Pacific Journal of Mathematics 52 (1974) 85–89 Unbounded representations of $\ast$-algebras Pacific Journal of Mathematics 70 (1977) 369–382 Uniqueness and existence properties of bounded observables Pacific Journal of Mathematics 19 (1966) 81–93 Guenther, Ronald Applications of topological transversality to differential equations. I. Some nonlinear diffusion problems Pacific Journal of Mathematics 89 (1980) 53–67 On a theorem of S. Bernstein Pacific Journal of Mathematics 74 (1978) 67–82 Topological transversality. II. Applications to the Neumann problem for $y^{\prime\prime}=f(t,\,y,\,y^{\prime})$ Pacific Journal of Mathematics 104 (1983) 95–109 Guerberoff, Lucio Semi-stable deformation rings in even Hodge–Tate weights Pacific Journal of Mathematics 298 (2019) 299–374 Guerin, E. E. A convolution related to Golomb's root function Pacific Journal of Mathematics 79 (1978) 463–467 Guéritaud, François Compact anti-de Sitter 3-manifolds and folded hyperbolic structures on surfaces Pacific Journal of Mathematics 275 (2015) 325–359 Guggenheimer, Heinrich Approximation of curves Pacific Journal of Mathematics 40 (1972) 301–303 Guha, U. C. 
$(\gamma, k)$-summability of series Pacific Journal of Mathematics 7 (1957) 1593–1602 Gui, Changfeng On a sharp Moser--Aubin--Onofri inequality for functions on $S^2$ with symmetry Pacific Journal of Mathematics 194 (2000) 349–358 Guido, Daniele An asymptotic dimension for metric spaces, and the $0$-th Novikov--Shubin invariant Pacific Journal of Mathematics 204 (2002) 43–59 Guijarro, Luis On the metric structure ofopen manifolds with nonnegative curvature Pacific Journal of Mathematics 196 (2000) 429–444 Guijarro, Luis Bundles with spherical Euler class Pacific Journal of Mathematics 207 (2002) 377–392 Guilbault, Craig Compact contractible $n$-manifolds have arc spines $(n\geq 5)$ Pacific Journal of Mathematics 168 (1995) 1–10 Noncompact manifolds that are inward tame Pacific Journal of Mathematics 288 (2017) 87–128 On the fundamental groups of trees of manifolds Pacific Journal of Mathematics 221 (2005) 49–79 Guillemin, Victor Kähler metrics on singular toric varieties Pacific Journal of Mathematics 238 (2008) 27–40 Guinand, Paul Between the unitary and similarity orbits of normal operators Pacific Journal of Mathematics 159 (1993) 299–335 Guirardel, Vincent Scott and Swarup’s regular neighborhood as a tree of cylinders Pacific Journal of Mathematics 245 (2010) 79–98 Gulick, Frances F. Actions of functions in Banach algebras Pacific Journal of Mathematics 34 (1970) 657–673 Derivations and actions Pacific Journal of Mathematics 35 (1970) 95–116 Gulick, Sidney Commutativity and ideals in the biduals of topological algebras Pacific Journal of Mathematics 18 (1966) 121–137 The bidual of a locally multiplicatively-convex algebra Pacific Journal of Mathematics 17 (1966) 71–96 Gulliver, Robert Finiteness of the ramified set for branched immersions of surfaces Pacific Journal of Mathematics 64 (1976) 153–165 Total Curvature of Graphs after Milnor and Euler Pacific Journal of Mathematics 256 (2012) 317–357 Gun, Sanoli Lifting of elliptic curves Pacific Journal of Mathematics 301 (2019) 101–106 Gunawan, Hendra A generalization of maximal functions on compact semisimple Lie groups Pacific Journal of Mathematics 156 (1992) 119–134 Gundersen, Gary Meromorphic functions that share two finite values with their derivative Pacific Journal of Mathematics 105 (1983) 299–309 Guo, Bin Two Kazdan-Warner type identities for the renormalized volume coefficients and the the Gauss-Bonnet curvatures of a Riemannian metric Pacific Journal of Mathematics 251 (2011) 257–268 Guo, Enli Novikov-type inequalities for vector fields with non-isolated zero points Pacific Journal of Mathematics 201 (2001) 107–120 Guo, Hongxin Entropy and lowest eigenvalue on evolving manifolds Pacific Journal of Mathematics 264 (2013) 61–81 Guo, Jiandong Uniqueness of generalized Waldspurger model for $GL(2n)$ Pacific Journal of Mathematics 180 (1997) 273–289 Guo, Jianhan Fixed points of surface diffeomorphisms Pacific Journal of Mathematics 160 (1993) 67–89 Guo, JinFeng The measures of asymmetry for coproducts of convex bodies Pacific Journal of Mathematics 276 (2015) 401–418 Guo, Li O-operators on associative algebras and associative Yang-Baxter equations Pacific Journal of Mathematics 256 (2012) 257–289 Rota-Baxter operators on the polynomial algebras, integration and averaging operators Pacific Journal of Mathematics 275 (2015) 481–507 Guo, Qi Dual mean Minkowski measures and the Grunbaum conjecture for affine diameters Pacific Journal of Mathematics 292 (2018) 117–137 The measures of asymmetry for coproducts of convex bodies Pacific 
Journal of Mathematics 276 (2015) 401–418 Guo, Qilong The Heegaard distances cover all nonnegative integers Pacific Journal of Mathematics 275 (2015) 231–255 Guo, Ren Cell decompositions of Teichmüller spaces of surfaces with boundary Pacific Journal of Mathematics 253 (2011) 423–438 Guo, Yuxia Multi-bump bound state solutions for the quasilinear Schr\"{o}dinger equation with critical frequency Pacific Journal of Mathematics 270 (2014) 49–77 Guo, Zhen The Möbius characterizations of Willmore tori and Veronese submanifolds in the unit sphere Pacific Journal of Mathematics 241 (2009) 227–242 Guo, Zongming Existence of singular positive solutions for some semilinear elliptic equations Pacific Journal of Mathematics 236 (2008) 57–71 On non-radial singular solutions of supercritical bi-harmonic equations Pacific Journal of Mathematics 284 (2016) 395–430 Partial regularity for weak solutions of semilinear elliptic equations with supercritical exponents Pacific Journal of Mathematics 214 (2004) 89–107 Gupta, Anjan A criterion for modules over Gorenstein local rings to have rational Poincar\'e series Pacific Journal of Mathematics 305 (2020) 165–187 Gupta, Arjun Generalisation of a square'' functional equation Pacific Journal of Mathematics 57 (1975) 419–422 On a square'' functional equation Pacific Journal of Mathematics 50 (1974) 449–454 Gupta, R. N. Shirshov's theorem and representations of semigroups Pacific Journal of Mathematics 181 (1997) 159–176 Gupta, Radhika Loxodromics for the cyclic splitting complex and their centralizers Pacific Journal of Mathematics 301 (2019) 107–142 Gupta, Sanjiv On norms of trigonometric polynomials on $\mathrm{SU}(2)$ Pacific Journal of Mathematics 175 (1996) 491–505 Gurak, Stanley Explicit construction of certain split extensions of number fields and constructing cyclic classfields Pacific Journal of Mathematics 157 (1993) 269–294 Minimal polynomials for circular numbers Pacific Journal of Mathematics 112 (1984) 313–331 Minimal polynomials for Gauss circulants and cyclotomic units Pacific Journal of Mathematics 102 (1982) 347–353 Guralnick, Robert Equations in prime powers Pacific Journal of Mathematics 118 (1985) 359–367 Inversion invariant additive subgroups of division rings Pacific Journal of Mathematics 227 (2006) 287–294 Permutability of characters on algebras Pacific Journal of Mathematics 178 (1997) 63–70 Power cancellation of modules Pacific Journal of Mathematics 124 (1986) 131–144 Shirshov's theorem and representations of semigroups Pacific Journal of Mathematics 181 (1997) 159–176 Gürel, Başak Periodic orbits of Hamiltonian systems linear and hyperbolic at infinity Pacific Journal of Mathematics 271 (2014) 159–182 Gurevich, D. Braided groups of Hopf algebras obtained by twisting Pacific Journal of Mathematics 162 (1994) 27–44 Gursky, Matthew Conformal invariants associated to a measure: conformally covariant operators Pacific Journal of Mathematics 253 (2011) 37–56 Guruprasad, K. Flat connections, geometric invariants and the symplectic nature of the fundamental group of surfaces Pacific Journal of Mathematics 162 (1994) 45–55 Gustafson, Grant Green's function inequalities for two-point boundary value problems Pacific Journal of Mathematics 59 (1975) 327–343 Gustafson, Karl A note on left multiplication of semigroup generators Pacific Journal of Mathematics 24 (1968) 463–465 Multiplicative perturbation of semigroup generators Pacific Journal of Mathematics 41 (1972) 731–742 Gustafson, Richard F. 
A simple genus one knot with incompressible spanning surfaces of arbitrarily high genus Pacific Journal of Mathematics 96 (1981) 81–98 Gustavsen, Trond An elementary, explicit, proof of the existence of Quot schemes of points Pacific Journal of Mathematics 231 (2007) 401–415 Gustin, William An isoperimetric minimax Pacific Journal of Mathematics 3 (1953) 403–405 Guthrie, Joe Mapping spaces and $cs$-networks Pacific Journal of Mathematics 47 (1973) 465–471 Gutiérrez, Mauricio Concordance and homotopy. I. Fundamental group Pacific Journal of Mathematics 82 (1979) 75–91 Gutiérrez-Rodríguez, Ixchel Bach-flat isotropic gradient Ricci solitons Pacific Journal of Mathematics 293 (2018) 75–99 Guttman, Louis Some inequalities between latent roots and minimax (maximin) elements of real matrices Pacific Journal of Mathematics 7 (1957) 897–902 Gutú, Olivia Fibrations on Banach manifolds Pacific Journal of Mathematics 215 (2004) 313–329 Guyker, James Commuting hyponormal operators Pacific Journal of Mathematics 91 (1980) 307–325 On partial isometries with no isometric part Pacific Journal of Mathematics 62 (1976) 419–433
2020-08-05 13:35:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5026465654373169, "perplexity": 520.9098604484712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735958.84/warc/CC-MAIN-20200805124104-20200805154104-00080.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/choose-the-correct-option-from-the-given-alternative-if-the-a-drv-x-has-the-following-probability-distribution-xx-1-2-3-4-5-6-7-p-x-x-k-2k-2k-3k-k2-2k2-7k2-k-k-types-of-random-variables_153724
# Choose the correct option from the given alternative: If a d.r.v. X has the following probability distribution, then k = ___ - Mathematics and Statistics

MCQ / Fill in the Blanks

Choose the correct option from the given alternative: If a d.r.v. X has the following probability distribution:

X        | 1 | 2  | 3  | 4  | 5  | 6   | 7
P(X = x) | k | 2k | 2k | 3k | k² | 2k² | 7k² + k

then k =

• 1/7
• 1/8
• 1/9
• 1/10

#### Solution

k = 1/10

Concept: Types of Random Variables

#### APPEARS IN

Balbharati Mathematics and Statistics 2 (Arts and Science) 12th Standard HSC Maharashtra State Board, Chapter 7 Probability Distributions, Miscellaneous Exercise 1 | Q 9 | Page 242
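The value of k follows from the requirement that the probabilities must sum to 1:

$$\sum_{x=1}^{7} P(X=x) = k + 2k + 2k + 3k + k^2 + 2k^2 + (7k^2 + k) = 10k^2 + 9k = 1$$

$$10k^2 + 9k - 1 = (10k - 1)(k + 1) = 0 \;\Rightarrow\; k = \tfrac{1}{10}\quad(\text{the root } k = -1 \text{ is rejected, since probabilities cannot be negative})$$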
2022-12-05 17:00:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5914513468742371, "perplexity": 3203.414760607505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711042.33/warc/CC-MAIN-20221205164659-20221205194659-00066.warc.gz"}
https://www.skepticalcommunity.com/viewtopic.php?p=947641
## Fukushima one year on We are the Borg. Rob Lister Posts: 20769 Joined: Sun Jul 18, 2004 7:15 pm Title: Incipient toppler Location: Swimming in Lake Ed ### Re: Fukushima one year on sparks wrote:It might just as easily be in the concrete below or the dirt below that Rob. Besides, those dribbles of corium hardly represent all of the core material. Wherever it is it is under the reactor. Oh, and fuck you! That's a $1.05. sparks Posts: 15237 Joined: Fri Oct 26, 2007 4:13 pm Location: Friar McWallclocks Bar -- Where time stands still while you lean over! ### Re: Fukushima one year on BWAHAHA... You can lead them to knowledge, but you can't make them think. robinson Posts: 4400 Joined: Sat Aug 12, 2006 2:01 am Title: Be good to each other Location: USA ### Re: Fukushima one year on Well, it took almost four years to figure out and photograph Three Mile island, and that was a partial meltdown (not melt through like the three Fukushima reactors), with all the fuel still in the reactor. So 6 years to get a video of reactor 3 fuel is pretty good. You never know what's going to happen, then some shit happens nobody saw coming, then later somebody says they knew it was coming, then some new shit happens nobody saw coming, rinse and repeat robinson Posts: 4400 Joined: Sat Aug 12, 2006 2:01 am Title: Be good to each other Location: USA ### Re: Fukushima one year on Unlike Chernobyl, at least they are planning on doing something about it. You never know what's going to happen, then some shit happens nobody saw coming, then later somebody says they knew it was coming, then some new shit happens nobody saw coming, rinse and repeat Witness Posts: 21262 Joined: Thu Sep 19, 2013 5:50 pm ### Re: Fukushima one year on Japan Government Not Responsible For Fukushima: Court Tokyo: A Japanese court ruled Friday that the plant operator not the government was responsible for the 2011 Fukushima nuclear disaster, ordering the former to pay damages. The district court in Chiba near Tokyo said the government "was able to foresee" but "may not have been able to avoid the accident" caused by the tsunami that smashed into the Fukushima Daiichi power plant. […] Chiba court judge Masaru Sakamoto turned down the demand of 42 plaintiffs for the government to pay compensation. However, the court ordered operator Tokyo Electric Power Co (TEPCO) to pay a total of 376 million yen ($3.3 million), much less than the the 2.8 billion yen plaintiffs had sought. Around 12,000 people who fled over radiation fears have filed various group lawsuits against the government and TEPCO. https://www.ndtv.com/world-news/japan-g ... rt-1753740 Peanuts (for now). Anaxagoras Posts: 24675 Joined: Wed Mar 19, 2008 5:45 am Location: Yokohama/Tokyo, Japan ### Re: Fukushima one year on That's for just 42 plaintiffs. Works out to something like 9 million yen per plaintiff. Maybe after lawyers fees about 6 million each? Or in dollars about \$50,000. Overall I'm sure they've already had some other kinds of compensation from the government. It would be nice for TEPCO to pay them more but realistically, TEPCO is essentially bankrupt already. And they aren't the only victims, just 42 out of thousands. To give them more than all the others wouldn't be fair. Say we multiply that by 12,000 (for all of them). That would come to 108 billion, say 1 billion dollars. I guess that's not actually so big, is it. A fool thinks himself to be wise, but a wise man knows himself to be a fool. 
William Shakespeare Witness Posts: 21262 Joined: Thu Sep 19, 2013 5:50 pm ### Re: Fukushima one year on Tepco makes contact with melted fuel in unit 2 Tokyo Electric Power Company (Tepco) has released photos of fuel debris within the primary containment vessel (PCV) of unit 2 at the damaged Fukushima Daiichi nuclear power plant in Japan. The company determined that the debris can be lifted, which will help in removing the material. Details: http://www.world-nuclear-news.org/Artic ... -in-unit-2 sparks Posts: 15237 Joined: Fri Oct 26, 2007 4:13 pm Location: Friar McWallclocks Bar -- Where time stands still while you lean over! ### Re: Fukushima one year on Proof that the 'impossible' melt through was in fact possible. There's a bit of a problem here however. If a robot can lift part of the corium, what does that say of the rest of the ... tons ... of core material? We are doomed. But not by Fuckyoushima. That'll take too long. No. My money's on Globalistic Warming. You can lead them to knowledge, but you can't make them think. Witness Posts: 21262 Joined: Thu Sep 19, 2013 5:50 pm ### Re: Fukushima one year on sparks wrote: Sat Feb 16, 2019 4:46 pm My money's on Globalistic Warming. After a "winter" practically without snow we had 17 °C today. sparks Posts: 15237 Joined: Fri Oct 26, 2007 4:13 pm Location: Friar McWallclocks Bar -- Where time stands still while you lean over! ### Re: Fukushima one year on Yep. That's how it always begins... You can lead them to knowledge, but you can't make them think. Abdul Alhazred Posts: 76108 Joined: Mon Jun 07, 2004 1:33 pm Title: Yes, that one. Location: Chicago ### Re: Fukushima one year on "Globalistic" warming. Caused by globalism, then? Any man writes a mission statement spends a night in the box. -- our mission statement plappendale sparks Posts: 15237 Joined: Fri Oct 26, 2007 4:13 pm Location: Friar McWallclocks Bar -- Where time stands still while you lean over! ### Re: Fukushima one year on Caused by Trump of course. And in its proper form: Globalistical. Because I said so damnit! You can lead them to knowledge, but you can't make them think. xouper Posts: 9473 Joined: Fri Jun 11, 2004 4:52 am Location: has left the building ### Re: Fukushima one year on sparks wrote: Sun Feb 17, 2019 3:33 pm Caused by Trump of course. And in its proper form: Globalistical. Because I said so damnit! What exactly does it mean to be globalisticated? sparks Posts: 15237 Joined: Fri Oct 26, 2007 4:13 pm Location: Friar McWallclocks Bar -- Where time stands still while you lean over! ### Re: Fukushima one year on I have no idea what it means. Here's yur purfect opportinity xoup. Claim it as your own now. Or, perhaps, you and WC can split the difference? You can lead them to knowledge, but you can't make them think. xouper Posts: 9473 Joined: Fri Jun 11, 2004 4:52 am Location: has left the building ### Re: Fukushima one year on sparks wrote: Sun Feb 17, 2019 4:41 pm I have no idea what it means. Here's yur purfect opportinity xoup. Claim it as your own now. Or, perhaps, you and WC can split the difference? Sorry, I was not complaining. I think it is a perfectly cromulent word. Furthermore, I would not think of taking credit away from where credit is due. I had assumed you made it up, and I was playing along, lobbing you a softball question so you could have even more fun with it. Perhaps I should have known better than to do that while you are in a pissy mood. 
sparks Posts: 15237 Joined: Fri Oct 26, 2007 4:13 pm Location: Friar McWallclocks Bar -- Where time stands still while you lean over! ### Re: Fukushima one year on Pissy mood. Indeed. You can lead them to knowledge, but you can't make them think. robinson Posts: 4400 Joined: Sat Aug 12, 2006 2:01 am Title: Be good to each other Location: USA ### Re: Fukushima one year on The Internet can make you cranky You never know what's going to happen, then some shit happens nobody saw coming, then later somebody says they knew it was coming, then some new shit happens nobody saw coming, rinse and repeat sparks Posts: 15237 Joined: Fri Oct 26, 2007 4:13 pm Location: Friar McWallclocks Bar -- Where time stands still while you lean over! ### Re: Fukushima one year on Can? You can lead them to knowledge, but you can't make them think. ed Posts: 35298 Joined: Tue Jun 08, 2004 11:52 pm Title: The Hero of Sukhbataar ### Re: Fukushima one year on Witness wrote: Sun Feb 17, 2019 12:08 am sparks wrote: Sat Feb 16, 2019 4:46 pm My money's on Globalistic Warming. After a "winter" practically without snow we had 17 °C today. "C"????????????? WTF is that in American Wenn ich Kultur höre, entsichere ich meinen Browning! Abdul Alhazred Posts: 76108 Joined: Mon Jun 07, 2004 1:33 pm Title: Yes, that one. Location: Chicago ### Re: Fukushima one year on ed wrote: Thu Feb 28, 2019 9:57 pm Witness wrote: Sun Feb 17, 2019 12:08 am sparks wrote: Sat Feb 16, 2019 4:46 pm My money's on Globalistic Warming. After a "winter" practically without snow we had 17 °C today. "C"????????????? WTF is that in American
2019-08-18 03:57:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47469305992126465, "perplexity": 12216.205933194777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313589.19/warc/CC-MAIN-20190818022816-20190818044816-00529.warc.gz"}
https://www.techwhiff.com/issue/cutting-down-forests-changes-the-populations-of-more--170138
# Cutting down forests changes the populations of more than trees. Imagine the organisms that lived in this forest. After the trees have been cut, fewer animals can survive here. What are MOST LIKELY the limiting factors in this case? ###### Question: Cutting down forests changes the populations of more than trees. Imagine the organisms that lived in this forest. After the trees have been cut, fewer animals can survive here. What are MOST LIKELY the limiting factors in this case? ### How does Hughes characterize the speaker in "Dream Variations"? The speaker is portrayed as someone who lives a carefree and light-hearted existence. He portrays him as a man who lives in a dream state and refuses to see reality. He portrays him as a man who wishes he could freely express himself and rest safely. The speaker is portrayed as a man who loves to dance until he is exhausted. How does Hughes characterize the speaker in "Dream Variations"? The speaker is portrayed as someone who lives a carefree and light-hearted existence. He portrays him as a man who lives in a dream state and refuses to see reality. He portrays him as a man who wishes he could freely express himself an... ### URGENT HELP NEEDED! Which one is it? URGENT HELP NEEDED! Which one is it?... ### CAN SOMEONE PLEASE ANSWER THIS ASAP !!!!!! ILL GIVE BRAINLIEST!!! How does a university’s academic, social and cultural environments will prepare anyone for success? CAN SOMEONE PLEASE ANSWER THIS ASAP !!!!!! ILL GIVE BRAINLIEST!!! How does a university’s academic, social and cultural environments will prepare anyone for success?... ### Which of these best describes the function of a constitution? which of these best describes the function of a constitution?... ### Help pls <3333333333 help pls <3333333333... ### Analyze ways cultures have joined to creat a new , unique culture.List 2 influences and how they have changed) Analyze ways cultures have joined to creat a new , unique culture.List 2 influences and how they have changed)... ### A given line has the equation 10x + 2y = −2. What is the equation, in slope-intercept form, of the line that is parallel to the given line and passes through the point (0, 12)? A given line has the equation 10x + 2y = −2. What is the equation, in slope-intercept form, of the line that is parallel to the given line and passes through the point (0, 12)?... ### Which expression is equivalent to 2x4y+5xj2 - 3x373 ? O xy(2x+3y)(x - y) O xy(x+3y)(2 - y) O 1(2x - 3(x2+x2) O x23(2x - y)(x + 3y) Which expression is equivalent to 2x4y+5xj2 - 3x373 ? O xy(2x+3y)(x - y) O xy(x+3y)(2 - y) O 1(2x - 3(x2+x2) O x23(2x - y)(x + 3y)... ### I truly need help with the last math problem. what is the step by step solution to this math problem (-9+8)^2? I truly need help with the last math problem. what is the step by step solution to this math problem (-9+8)^2?... ### Why does lady capulet doubt benvolios description of the fight? Why does lady capulet doubt benvolios description of the fight?... ### Combine Felix's and Tyler's data. What is the mean of the combined data? Include your calculations as part of your answer. (Hint: Use a shortcut to find the answer.) Combine Felix's and Tyler's data. What is the mean of the combined data? Include your calculations as part of your answer. (Hint: Use a shortcut to find the answer.)... ### Suppose a population of 250 crickets doubles in size every 6 months. how many crickets will there be after 2 years? a. 2,000 crickets b. 6,000 crickets c. 1,000 crickets d. 
4,000 crickets suppose a population of 250 crickets doubles in size every 6 months. how many crickets will there be after 2 years? a. 2,000 crickets b. 6,000 crickets c. 1,000 crickets d. 4,000 crickets... ### Question 1(Multiple Choice Worth 5 points)(06.03 MC)A 2.5-liter sample of a gas has 0.30 mole of the gas, If 0.15 mole of the gas is added, what is the final volume of the gas? Temperature and pressure remainconstant3.4 liters• 3.8 liters4.2 liters4.7 liters​ Question 1(Multiple Choice Worth 5 points)(06.03 MC)A 2.5-liter sample of a gas has 0.30 mole of the gas, If 0.15 mole of the gas is added, what is the final volume of the gas? Temperature and pressure remainconstant3.4 liters• 3.8 liters4.2 liters4.7 liters​... ### Can you solve -4(x+2)=8x+16 Can you solve -4(x+2)=8x+16... ### DeWayne has decided to seek psychotherapy for some personal difficulties he has been having. While on the telephone with one possible clinician, he asks her to describe the kind of treatment approach that she uses with clients. "I don't limit myself to a single theory or approach," the therapist answers. "Instead I operate in a(n) ______________ fashion, integrating various treatment approaches based on the specific needs of each client." a. eclectic b. supra-theoretical c. atheoretical d. Gest DeWayne has decided to seek psychotherapy for some personal difficulties he has been having. While on the telephone with one possible clinician, he asks her to describe the kind of treatment approach that she uses with clients. "I don't limit myself to a single theory or approach," the therapist ans... ### Which action by the u.s. federal government has extended the voting rights of citizens Which action by the u.s. federal government has extended the voting rights of citizens... ### How do the planets farther from the Sun differ from the planets that are closer to the Sun? A. They orbit the Sun. B. They are colder. C. They have moons. D. They are spherical in shape. How do the planets farther from the Sun differ from the planets that are closer to the Sun? A. They orbit the Sun. B. They are colder. C. They have moons. D. They are spherical in shape....
2022-12-03 18:21:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2986595630645752, "perplexity": 2817.097940266382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710936.10/warc/CC-MAIN-20221203175958-20221203205958-00796.warc.gz"}
https://web2.0calc.com/questions/domain_1843
# Domain

What is the domain of the real-valued function $f(x)=\dfrac{2x-7}{\sqrt{x^2-5x+4}}$?

Jun 7, 2022

I believe it's $$(-\infty,1)\cup(4,\infty)$$.
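The answer follows from requiring the expression under the square root (which is in the denominator) to be strictly positive:

$$x^2 - 5x + 4 = (x-1)(x-4) > 0 \;\Longleftrightarrow\; x < 1 \ \text{or}\ x > 4,$$

so the domain is $(-\infty,1)\cup(4,\infty)$.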
2022-07-05 15:08:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999507665634155, "perplexity": 7812.164920659967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104585887.84/warc/CC-MAIN-20220705144321-20220705174321-00744.warc.gz"}
https://www.semanticscholar.org/paper/On-the-Small-Ball-Inequality-in-Three-Dimensions-Bilyk-LACEY/8696f212390f33ff4bad907d660d7c6b7228382d
# On the Small Ball Inequality in Three Dimensions

#### Abstract

$$\sum_{|R|=2^{-n}} |\alpha(R)| \lesssim n^{1-\eta}\, \Big\| \sum_{|R|=2^{-n}} \alpha(R)\, h_R \Big\|_{\infty}$$

This is an improvement over the 'trivial' estimate by an amount of $n^{-\eta}$, and the optimal value of $\eta$ (which we do not prove) would be $\eta = 1/2$. There is a corresponding lower bound on the $L^\infty$ norm of the Discrepancy function of an arbitrary distribution of a finite number of points in the unit cube in three dimensions. The prior result, in dimension 3, is that of József Beck [1], in which the improvement over the trivial estimate was logarithmic in $n$. We find several simplifications and extensions of Beck's argument to prove the result above.

### Cite this paper

@inproceedings{Bilyk2006OnTS,
  title={On the Small Ball Inequality in Three Dimensions},
  author={Dmitriy Bilyk and Michael T. Lacey},
  year={2006}
}
2017-12-16 03:48:52
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9336307048797607, "perplexity": 1307.8402414786233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948581053.56/warc/CC-MAIN-20171216030243-20171216052243-00342.warc.gz"}
http://support.fletcherpenney.net/discussions/problems/799-how-to-escape-an-asterisk-inside-an-emphasis-when-exporting-to-latex
# How to escape an asterisk inside an emphasis when exporting to Latex?

#### dudido, 24 Feb, 2017 02:26 PM

Hi, I'm using MMD 5.4.0 via Scrivener to produce Latex. I'm trying to escape an asterisk inside an emphasis. I was trying to use the standard Markdown backslash escape, but it does not work when exporting to Latex: \*text and *text* work as expected, but *\*text* comes out as \emph{\textbackslash{}}text* instead of \emph{*text}. It was suggested to use the HTML code &ast; as a workaround, but it does not translate to Latex. I assume this is a faulty or incomplete implementation, but maybe there is another workaround I couldn't think of? Thanks for the support.

1. Support Staff Posted by fletcher on 25 Feb, 2017 09:48 PM Thanks for writing in. As you may have heard, MMD-6 is now in public alpha (really more beta at this point). I'm not developing MMD-5 further. https://github.com/fletcher/MultiMarkdown-6/wiki/Note-for-those-with-MMD-5-questions-or-bugs Your specific example should work fine in MMD-6, but let me know if you find problems. Thanks! Fletcher
2. Posted by dudido on 24 Mar, 2017 06:37 PM I can confirm that MMD 6 does solve that issue (plus others!). Thanks
3. dudido closed this discussion on 24 Mar, 2017 06:37 PM.
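For readers who want to reproduce the check outside Scrivener, here is a minimal sketch of driving the conversion from a script. It assumes a MultiMarkdown 6 binary named multimarkdown is on the PATH and that -t latex selects LaTeX output; treat both the binary name and the flag as assumptions about the local install rather than a verified recipe from this thread.

```python
import subprocess

# A snippet with an escaped asterisk inside an emphasis span.
snippet = r"*\*text*"

# Ask MultiMarkdown for LaTeX output and capture it
# (binary name and the -t latex flag are assumed, not taken from the thread).
result = subprocess.run(
    ["multimarkdown", "-t", "latex"],
    input=snippet,
    capture_output=True,
    text=True,
    check=True,
)

# With MMD 6 the output should contain \emph{*text};
# MMD 5.4.0 produced \emph{\textbackslash{}}text* instead.
print(result.stdout)
```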
2019-03-26 21:30:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6036643981933594, "perplexity": 7056.231825520588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912206016.98/warc/CC-MAIN-20190326200359-20190326222359-00351.warc.gz"}
https://forum.bebac.at/forum_entry.php?id=18402
## The mysterious ρ within studies [General Statistics]

Dear Astea!

❝ The intermediate question concerns the mentioned situation about a parent drug and its metabolite - logically, the correlation for the parameters of the parent drug and its metabolite should be stronger than for two different drugs tested on the same population?

Oh dear, yes! But that’s yet another story.

1e+06 simulations. Time consumed (secs)    user  system elapsed 979.75   95.18 1115.72

Get a faster machine! My almost three years old tin-can:

1e+06 simulations. Time consumed (secs)    user  system elapsed  208.64   31.17  240.75

❝ Do I understand correctly that in your latest code you employed all AUC and Cmax individual data for each study in order to estimate "within rho"?

Yes. Apples and oranges. See there for a breakdown by analyte.

Dif-tor heh smusma 🖖🏼 Long live Ukraine! Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked.
2023-03-29 06:02:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.376996248960495, "perplexity": 12575.745094188916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948951.4/warc/CC-MAIN-20230329054547-20230329084547-00512.warc.gz"}
https://zbmath.org/?q=an:1351.28003
## Bilinear integration and applications to operator and scattering theory. (English) Zbl 1351.28003

The author studies the integration of operator valued functions with respect to a spectral or orthogonally scattered measure, and many examples are given to illustrate the results. In particular, a simple version of Fubini’s theorem in the operator context is given in Theorem 2.3; Proposition 3.4 considers the problem of approximation by operator valued simple functions; Theorem 3.5 characterizes Pettis’ measurability, and Theorem 3.7 gives several characterizations, in the case of a finite measure space $$(\Omega,\mu)$$ and for Banach spaces $$X$$ (which is separable) and $$Y$$, of when $$f:\Omega\rightarrow {\mathcal L}_s(X,Y)$$ is strongly $$\mu$$-measurable. Finally, it is proved in Theorem 4.5 (among other results) that if $$\mathcal H$$ is a separable Hilbert space, $$A: {\mathcal D}(A) \rightarrow {\mathcal H}$$ is a selfadjoint operator with spectral measure $$P_A$$, $$E={\mathcal L}({\mathcal D}(A),{\mathcal H})\widehat\otimes_\tau{\mathcal D}(A)$$, $$(\Gamma,{\mathcal E},\mu)$$ is a $$\sigma$$-finite measure space, $$u:\mathbb R\times \Gamma\rightarrow\mathbb C$$ has the property that, for every $$h\in{\mathcal D}(A)$$, the function $$u(\cdot,\gamma)$$ is $$P_Ah$$-integrable in $${\mathcal D}(A)$$ and $$f:\Gamma\rightarrow {\mathcal L}({\mathcal D}(A),{\mathcal H})$$ is a suitable strongly $$\mu$$-measurable function, then the $${\mathcal L}({\mathcal D}(A),{\mathcal H})$$-valued function $\sigma\rightarrow\int_Tu(\sigma,\gamma)f(\gamma)d\mu(\gamma),\qquad\sigma\in\mathbb R$ is strongly measurable in $${\mathcal L}({\mathcal D}(A),{\mathcal H})$$ and $$(P_Ah)$$-integrable in $$E$$ with respect to the $${\mathcal D}(A)$$-valued measure $$P_Ah$$, for each set $$T\in{\mathcal E}$$.

### MSC:

28A25 Integration with respect to measures and other set functions
46A32 Spaces of linear operators; topological tensor products; approximation properties
46N50 Applications of functional analysis in quantum physics
2022-05-24 15:33:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.914762020111084, "perplexity": 1057.2318539500225}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662573053.67/warc/CC-MAIN-20220524142617-20220524172617-00760.warc.gz"}
http://arxitics.com/articles/2104.03306
## arXiv Analytics

### arXiv:2104.03306 [astro-ph.GA]

#### A Parametric Galactic Model toward the Galactic Bulge Based on Gaia and Microlensing Data

Published 2021-04-07, Version 1

We developed a parametric Galactic model toward the Galactic bulge by fitting to spatial distributions of the Gaia DR2 disk velocity, VVV proper motion, BRAVA radial velocity, OGLE-III red clump star count, and OGLE-IV star count and microlens rate, optimized for use in microlensing studies. We include the asymmetric drift of Galactic disk stars and the dependence of velocity dispersion on Galactic location in the kinematic model, which has been ignored in most previous models used for microlensing studies. We show that our model predicts a microlensing parameter distribution that is significantly different from the one with a typically used model in the previous studies. Through our modeling, we estimate various fundamental model parameters for our Galaxy, including the initial mass function (IMF) in the inner Galaxy. Combined constraints from star counts and the microlensing event timescale distribution from the OGLE-IV survey, in addition to a prior on the bulge stellar mass, enable us to successfully measure IMF slopes using a broken power law form over a broad mass range, $\alpha_{\rm bd} = 0.22^{+0.20}_{-0.55}$ for $M < 0.08 \, M_{\odot}$, $\alpha_{\rm ms} = 1.16^{+0.08}_{-0.15}$ for $0.08 \, M_{\odot} \leq M < M_{\rm br}$, and $\alpha_{\rm hm} = 2.32^{+0.14}_{-0.10}$ for $M \geq M_{\rm br}$, as well as a break mass at $M_{\rm br} = 0.90^{+0.05}_{-0.14} \, M_{\odot}$. This is significantly different from the Kroupa IMF for local stars, but similar to the Zoccali IMF measured from a bulge luminosity function. We also estimate the dark matter mass fraction in the bulge region of $28 \pm 7$%, which could be larger than a previous estimate. Because our model is purely parametric, it can be universally applied using the parameters provided in this paper.

Comments: 46 pages, 13 figures, 7 tables, submitted
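To make the quoted broken power law concrete, here is a minimal sketch (not taken from the paper) of a three-segment IMF $dN/dM \propto M^{-\alpha}$ using the central values of the slopes and break masses above; the overall normalization and the continuity conditions at the breaks are assumptions made purely for illustration.

```python
import numpy as np

# Central values of the slopes and break masses quoted in the abstract (solar masses).
ALPHA_BD, ALPHA_MS, ALPHA_HM = 0.22, 1.16, 2.32
M_BD, M_BR = 0.08, 0.90

def imf(m):
    """Unnormalized broken power-law IMF dN/dM ~ m**(-alpha), continuous at the breaks."""
    m = np.asarray(m, dtype=float)
    # Matching constants so adjacent segments agree at M_BD and M_BR.
    c_ms = M_BD ** (ALPHA_MS - ALPHA_BD)
    c_hm = c_ms * M_BR ** (ALPHA_HM - ALPHA_MS)
    return np.where(m < M_BD, m ** -ALPHA_BD,
                    np.where(m < M_BR, c_ms * m ** -ALPHA_MS, c_hm * m ** -ALPHA_HM))

masses = np.logspace(-2, 1, 7)  # 0.01 to 10 solar masses
for m, dn in zip(masses, imf(masses)):
    print(f"M = {m:7.3f} Msun   dN/dM (relative) = {dn:10.3f}")
```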
2021-04-10 21:16:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7709701061248779, "perplexity": 2851.9720411986095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038059348.9/warc/CC-MAIN-20210410210053-20210411000053-00633.warc.gz"}
https://www.physicsforums.com/threads/tangent-plane.468111/
# Tangent plane quietrain ## Homework Statement find eqn of tangent plane to surface S at point P(2,1,3) the curves r = (2+3t, 1-t2 , 3-4t+t2) r = (1+u2 ,2u3-1 , 2u+1) both lie on S. ## The Attempt at a Solution i don't really know how to start. am i suppose to find eqn of S first? then use the formula n.(r - P) with the partial differientials fx , fy etc? but how do ifind S? with those 2 curves? thanks! Homework Helper hi quietrain! each curve has a tangent line … and two lines define a plane Homework Helper The tangent plane is a 2 dimensional vector space, and you know that the tangent space is defined by the tangent vectors of curves, and to find a tangent vector, just differentiate the curves w.r.t to their parameter and compare the vectors. quietrain but how do we know that those 2 curves will intersect at the point P? are we suppose to let the parameter t be = 0 and u = 1 so that they all become point P? then i differientiate to get r = (3, -2t, 2t-4) r = (2u, 6u2, 2) ? so these are the tangent vectors at the point P? when i sub t = 0 and u= 1? so r= (3, 0, -4) , r = (2 , 6 , 2) so i cross multiply them to get normal vector and then use n.(r-p) = 0 to get the tangent plane equation? btw, what if there issn't a common value for t or u such that they will become point P? Homework Helper hi quietrain! yes, that's perfect. btw, what if there issn't a common value for t or u such that they will become point P? then the question couldn't start "find eqn of tangent plane to surface S at point P(2,1,3)" quietrain ah i see thanks everyone! quietrain The tangent plane is a 2 dimensional vector space, and you know that the tangent space is defined by the tangent vectors of curves, and to find a tangent vector, just differentiate the curves w.r.t to their parameter and compare the vectors. oh i just wondered, if the equation is not parameterized, then i must do a df/dx df/dy df/dz partial derivatives and then compare the x with x y with y zwithz values at the point P? quietrain anyone? Homework Helper If what equation is "not parameterized"? A curve in three dimensions cannot be written as a single equation. That would be a surface. If you are asking about finding the tangent plane to a surface, given by f(x,y,z)= constant, then the normal vector is just the gradient $\nabla f$. And, by the way, once you have the two tangent vectors to the two curves, the normal vector to the surface they lie in is the cross product of the two vectors. quietrain oh , so if i do df/dx ,df/dy,df/dz and sub in the x,y,z of point P , i get the normal vector pointing outwards from that point P that is perpendicular to the tangent plane at that surface point? but i thought the gradient of f gives me tangent vector and not normal vector? also, if i sub in the values of point P into df/dx , df/dy, df/dz, i get 1 vector only? how do i get a second vector that also pass through point p and lie in the same plane? so that i can do the cross product to get the normal vector to surface? Homework Helper hi quietrain! please remember that you changed the question … originally, you asked about a surface defined by curves defined by parametrisation, but then you changed it … oh i just wondered, if the equation is not parameterized, then i must do a df/dx df/dy df/dz partial derivatives and then compare the x with x y with y zwithz values at the point P? … you asked about a surface not defined by parametrisation, and HallsofIvy correctly replied … If what equation is "not parameterized"? 
A curve in three dimensions cannot be written as a single equation. That would be a surface. If you are asking about finding the tangent plane to a surface, given by f(x,y,z)= constant, then the normal vector is just the gradient $\nabla f$.

oh , so if i do df/dx ,df/dy,df/dz and sub in the x,y,z of point P , i get the normal vector pointing outwards from that point P that is perpendicular to the tangent plane at that surface point? but i thought the gradient of f gives me tangent vector and not normal vector?

if the surface is defined by parameters that increase along the surface, then the gradient obviously is a tangent
if the surface is defined by a function that is constant over the surface, then the gradient obviously is a normal

quietrain

if the surface is defined by parameters that increase along the surface, then the gradient obviously is a tangent
if the surface is defined by a function that is constant over the surface, then the gradient obviously is a normal

erm i understand the first line, its talking about the question i asked at my 1st post right? the 2nd line is the one i don't understand :( lets say the plane x+y+z = 6 , it is a surface that has a constant function over all surface right? (means it is x+y+z = 6 everywhere?) so the gradient is 0 if i do df/dx + df/dy + df/dz = 0 so what is a normal? i thought normal means the vector pointing perpendicularly outwards of the plane? so it should be 1,1,1 in this case?

Homework Helper

hi quietrain!

lets say the plane x+y+z = 6 , it is a surface that has a constant function over all surface right? (means it is x+y+z = 6 everywhere?) so the gradient is 0 if i do df/dx + df/dy + df/dz = 0

yes, if f(x,y,z) = x + y + z, then that plane is the surface f(x,y,z) = 6 but the gradient is a vector, (∂f/∂x,∂f/∂y,∂f/∂z), which in this case is (1,1,1)

Homework Helper

The gradient of a function of several variables is a vector, not a number. The gradient of the plane x+ y+ z= 6 is the vector <1 , 1, 1>, not the number 0. (In Britain, the term "gradient" is also used for the derivative of a function of a single variable. In the United States that is not often done. I know of no case in which a sum of partial derivatives is called a "gradient".)

so what is a normal? i thought normal means the vector pointing perpendicularly outwards of the plane? so it should be 1,1,1 in this case?

Yes, that is the normal. It is your understanding of "gradient" that is wrong.

Last edited by a moderator:

quietrain

oh.. i went to read up on surface normal and from wiki, it says "If a surface S is given implicitly as the set of points (x,y,z) satisfying F(x,y,z) = 0, then, a normal at a point (x,y,z) on the surface is given by the gradient $\nabla F$, since the gradient at any point is perpendicular to the level set, and F(x,y,z) = 0 (the surface) is a level set of F." are they talking about a plane here? and does F(x,y,z) has to be = 0? or a constant will do like what tiny-tim says? or are these 2 things totally different stuff :( so now if i let f(x,y,z) = x^2+y+z then if i find gradient i would get (2x, 1, 1)?
so this is the normal at whatever value x is regardless of y and z ?

Last edited by a moderator:

Homework Helper

Yes, but it wouldn't be a unit normal. F(x,y,z) always =0 because I could define a G(x,y,z)=F(x,y,z)-constant and work with that instead.

Homework Helper

hi quietrain!

"If a surface S is given implicitly as the set of points (x,y,z) satisfying F(x,y,z) = 0, then, a normal at a point (x,y,z) on the surface is given by the gradient $\nabla F$, since the gradient at any point is perpendicular to the level set, and F(x,y,z) = 0 (the surface) is a level set of F." are they talking about a plane here?

no, F(x,y,z) = 0 can be any surface

and does F(x,y,z) has to be = 0? or a constant will do like what tiny-tim says? or are these 2 things totally different stuff :(

any constant will do … the different surfaces F(x,y,z) = k are like Russian dolls, they fit snugly inside each other, they never cross each other, and they are crossed by a system of curves which everywhere are normal to each surface, with each normal curve having gradient ∇F, ie (∂f/∂x,∂f/∂y,∂f/∂z)

so now if i let f(x,y,z) = x^2+y+z then if i find gradient i would get (2x, 1, 1)? so this is the normal at whatever value x is regardless of y and z ?

yes

Last edited by a moderator:

quietrain

thanks everyone!

Homework Helper

hi quietrain!

no, F(x,y,z) = 0 can be any surface

any constant will do … the different surfaces F(x,y,z) = k are like Russian dolls,

I like the "Russian dolls" analogy!

they fit snugly inside each other, they never cross each other, and they are crossed by a system of curves which everywhere are normal to each surface, with each normal curve having gradient ∇F, ie (∂f/∂x,∂f/∂y,∂f/∂z)

yes
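Tying the thread back to the original problem, here is a minimal numerical sketch (using numpy) of the recipe agreed on above: differentiate both curves, evaluate the tangent vectors at t = 0 and u = 1 (the parameter values that give P), take their cross product as the normal, and write the plane as n · (r − P) = 0. The concrete numbers below follow from the thread's own derivatives.

```python
import numpy as np

P = np.array([2.0, 1.0, 3.0])

# r1(t) = (2+3t, 1-t^2, 3-4t+t^2)  ->  r1'(t) = (3, -2t, -4+2t), at t = 0
# r2(u) = (1+u^2, 2u^3-1, 2u+1)    ->  r2'(u) = (2u, 6u^2, 2),   at u = 1
t1 = np.array([3.0, 0.0, -4.0])
t2 = np.array([2.0, 6.0, 2.0])

n = np.cross(t1, t2)      # normal vector to the tangent plane: (24, -14, 18)
d = float(np.dot(n, P))   # n . P = 88

print("normal:", n)
print(f"tangent plane: {n[0]:.0f}x + ({n[1]:.0f})y + {n[2]:.0f}z = {d:.0f}")
# Equivalently, dividing by 2: 12x - 7y + 9z = 44.
```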
2022-09-25 05:08:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7793779373168945, "perplexity": 653.8341355655091}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00490.warc.gz"}
https://ans.disi.unitn.it/redmine/projects/internet-on-fire/repository/revisions/5cef0f13e18f7902bbb9f6e766f4143dfb58d698/entry/networkxMiCe/networkx-master/networkx/algorithms/connectivity/cuts.py
Statistics | Branch: | Revision: ## iof-tools / networkxMiCe / networkx-master / networkx / algorithms / connectivity / cuts.py @ 5cef0f13 History | View | Annotate | Download (22.5 KB) 1 # -*- coding: utf-8 -*- """ Flow based cut algorithms """ import itertools import networkx as nx # Define the default maximum flow function to use in all flow based # cut algorithms. from networkx.algorithms.flow import edmonds_karp from networkx.algorithms.flow import build_residual_network default_flow_func = edmonds_karp from .utils import (build_auxiliary_node_connectivity, build_auxiliary_edge_connectivity) __author__ = '\n'.join(['Jordi Torrents ']) __all__ = ['minimum_st_node_cut', 'minimum_node_cut', 'minimum_st_edge_cut', 'minimum_edge_cut'] def minimum_st_edge_cut(G, s, t, flow_func=None, auxiliary=None, residual=None): """Returns the edges of the cut-set of a minimum (s, t)-cut. This function returns the set of edges of minimum cardinality that, if removed, would destroy all paths among source and target in G. Edge weights are not considered. See :meth:minimum_cut for computing minimum cuts considering edge weights. Parameters ---------- G : NetworkX graph s : node Source node for the flow. t : node Sink node for the flow. auxiliary : NetworkX DiGraph Auxiliary digraph to compute flow based node connectivity. It has to have a graph attribute called mapping with a dictionary mapping node names in G and in the auxiliary digraph. If provided it will be reused instead of recreated. Default value: None. flow_func : function A function for computing the maximum flow among a pair of nodes. The function has to accept at least three parameters: a Digraph, a source node, and a target node. And return a residual network that follows NetworkX conventions (see :meth:maximum_flow for details). If flow_func is None, the default maximum flow function (:meth:edmonds_karp) is used. See :meth:node_connectivity for details. The choice of the default function may change from version to version and should not be relied on. Default value: None. residual : NetworkX DiGraph Residual network to compute maximum flow. If provided it will be reused instead of recreated. Default value: None. Returns ------- cutset : set Set of edges that, if removed from the graph, will disconnect it. See also -------- :meth:minimum_cut :meth:minimum_node_cut :meth:minimum_edge_cut :meth:stoer_wagner :meth:node_connectivity :meth:edge_connectivity :meth:maximum_flow :meth:edmonds_karp :meth:preflow_push :meth:shortest_augmenting_path Examples -------- This function is not imported in the base NetworkX namespace, so you have to explicitly import it from the connectivity package: >>> from networkx.algorithms.connectivity import minimum_st_edge_cut We use in this example the platonic icosahedral graph, which has edge connectivity 5. >>> G = nx.icosahedral_graph() >>> len(minimum_st_edge_cut(G, 0, 6)) 5 If you need to compute local edge cuts on several pairs of nodes in the same graph, it is recommended that you reuse the data structures that NetworkX uses in the computation: the auxiliary digraph for edge connectivity, and the residual network for the underlying maximum flow computation. Example of how to compute local edge cuts among all pairs of nodes of the platonic icosahedral graph reusing the data structures. >>> import itertools >>> # You also have to explicitly import the function for >>> # building the auxiliary digraph from the connectivity package >>> from networkx.algorithms.connectivity import ( ... 
build_auxiliary_edge_connectivity) >>> H = build_auxiliary_edge_connectivity(G) >>> # And the function for building the residual network from the >>> # flow package >>> from networkx.algorithms.flow import build_residual_network >>> # Note that the auxiliary digraph has an edge attribute named capacity >>> R = build_residual_network(H, 'capacity') >>> result = dict.fromkeys(G, dict()) >>> # Reuse the auxiliary digraph and the residual network by passing them >>> # as parameters >>> for u, v in itertools.combinations(G, 2): ... k = len(minimum_st_edge_cut(G, u, v, auxiliary=H, residual=R)) ... result[u][v] = k >>> all(result[u][v] == 5 for u, v in itertools.combinations(G, 2)) True You can also use alternative flow algorithms for computing edge cuts. For instance, in dense networks the algorithm :meth:shortest_augmenting_path will usually perform better than the default :meth:edmonds_karp which is faster for sparse networks with highly skewed degree distributions. Alternative flow functions have to be explicitly imported from the flow package. >>> from networkx.algorithms.flow import shortest_augmenting_path >>> len(minimum_st_edge_cut(G, 0, 6, flow_func=shortest_augmenting_path)) 5 """ if flow_func is None: flow_func = default_flow_func if auxiliary is None: H = build_auxiliary_edge_connectivity(G) else: H = auxiliary kwargs = dict(capacity='capacity', flow_func=flow_func, residual=residual) cut_value, partition = nx.minimum_cut(H, s, t, **kwargs) reachable, non_reachable = partition # Any edge in the original graph linking the two sets in the # partition is part of the edge cutset cutset = set() for u, nbrs in ((n, G[n]) for n in reachable): cutset.update((u, v) for v in nbrs if v in non_reachable) return cutset def minimum_st_node_cut(G, s, t, flow_func=None, auxiliary=None, residual=None): r"""Returns a set of nodes of minimum cardinality that disconnect source from target in G. This function returns the set of nodes of minimum cardinality that, if removed, would destroy all paths among source and target in G. Parameters ---------- G : NetworkX graph s : node Source node. t : node Target node. flow_func : function A function for computing the maximum flow among a pair of nodes. The function has to accept at least three parameters: a Digraph, a source node, and a target node. And return a residual network that follows NetworkX conventions (see :meth:maximum_flow for details). If flow_func is None, the default maximum flow function (:meth:edmonds_karp) is used. See below for details. The choice of the default function may change from version to version and should not be relied on. Default value: None. auxiliary : NetworkX DiGraph Auxiliary digraph to compute flow based node connectivity. It has to have a graph attribute called mapping with a dictionary mapping node names in G and in the auxiliary digraph. If provided it will be reused instead of recreated. Default value: None. residual : NetworkX DiGraph Residual network to compute maximum flow. If provided it will be reused instead of recreated. Default value: None. Returns ------- cutset : set Set of nodes that, if removed, would destroy all paths between source and target in G. Examples -------- This function is not imported in the base NetworkX namespace, so you have to explicitly import it from the connectivity package: >>> from networkx.algorithms.connectivity import minimum_st_node_cut We use in this example the platonic icosahedral graph, which has node connectivity 5. 
>>> G = nx.icosahedral_graph() >>> len(minimum_st_node_cut(G, 0, 6)) 5 If you need to compute local st cuts between several pairs of nodes in the same graph, it is recommended that you reuse the data structures that NetworkX uses in the computation: the auxiliary digraph for node connectivity and node cuts, and the residual network for the underlying maximum flow computation. Example of how to compute local st node cuts reusing the data structures: >>> # You also have to explicitly import the function for >>> # building the auxiliary digraph from the connectivity package >>> from networkx.algorithms.connectivity import ( ... build_auxiliary_node_connectivity) >>> H = build_auxiliary_node_connectivity(G) >>> # And the function for building the residual network from the >>> # flow package >>> from networkx.algorithms.flow import build_residual_network >>> # Note that the auxiliary digraph has an edge attribute named capacity >>> R = build_residual_network(H, 'capacity') >>> # Reuse the auxiliary digraph and the residual network by passing them >>> # as parameters >>> len(minimum_st_node_cut(G, 0, 6, auxiliary=H, residual=R)) 5 You can also use alternative flow algorithms for computing minimum st node cuts. For instance, in dense networks the algorithm :meth:shortest_augmenting_path will usually perform better than the default :meth:edmonds_karp which is faster for sparse networks with highly skewed degree distributions. Alternative flow functions have to be explicitly imported from the flow package. >>> from networkx.algorithms.flow import shortest_augmenting_path >>> len(minimum_st_node_cut(G, 0, 6, flow_func=shortest_augmenting_path)) 5 Notes ----- This is a flow based implementation of minimum node cut. The algorithm is based in solving a number of maximum flow computations to determine the capacity of the minimum cut on an auxiliary directed network that corresponds to the minimum node cut of G. It handles both directed and undirected graphs. This implementation is based on algorithm 11 in [1]_. See also -------- :meth:minimum_node_cut :meth:minimum_edge_cut :meth:stoer_wagner :meth:node_connectivity :meth:edge_connectivity :meth:maximum_flow :meth:edmonds_karp :meth:preflow_push :meth:shortest_augmenting_path References ---------- .. [1] Abdol-Hossein Esfahanian. Connectivity Algorithms. http://www.cse.msu.edu/~cse835/Papers/Graph_connectivity_revised.pdf """ if auxiliary is None: H = build_auxiliary_node_connectivity(G) else: H = auxiliary mapping = H.graph.get('mapping', None) if mapping is None: raise nx.NetworkXError('Invalid auxiliary digraph.') if G.has_edge(s, t) or G.has_edge(t, s): return [] kwargs = dict(flow_func=flow_func, residual=residual, auxiliary=H) # The edge cut in the auxiliary digraph corresponds to the node cut in the # original graph. edge_cut = minimum_st_edge_cut(H, '%sB' % mapping[s], '%sA' % mapping[t], **kwargs) # Each node in the original graph maps to two nodes of the auxiliary graph node_cut = set(H.nodes[node]['id'] for edge in edge_cut for node in edge) return node_cut - set([s, t]) def minimum_node_cut(G, s=None, t=None, flow_func=None): r"""Returns a set of nodes of minimum cardinality that disconnects G. If source and target nodes are provided, this function returns the set of nodes of minimum cardinality that, if removed, would destroy all paths among source and target in G. If not, it returns a set of nodes of minimum cardinality that disconnects G. Parameters ---------- G : NetworkX graph s : node Source node. Optional. Default value: None. 
t : node Target node. Optional. Default value: None. flow_func : function A function for computing the maximum flow among a pair of nodes. The function has to accept at least three parameters: a Digraph, a source node, and a target node. And return a residual network that follows NetworkX conventions (see :meth:maximum_flow for details). If flow_func is None, the default maximum flow function (:meth:edmonds_karp) is used. See below for details. The choice of the default function may change from version to version and should not be relied on. Default value: None. Returns ------- cutset : set Set of nodes that, if removed, would disconnect G. If source and target nodes are provided, the set contains the nodes that if removed, would destroy all paths between source and target. Examples -------- >>> # Platonic icosahedral graph has node connectivity 5 >>> G = nx.icosahedral_graph() >>> node_cut = nx.minimum_node_cut(G) >>> len(node_cut) 5 You can use alternative flow algorithms for the underlying maximum flow computation. In dense networks the algorithm :meth:shortest_augmenting_path will usually perform better than the default :meth:edmonds_karp, which is faster for sparse networks with highly skewed degree distributions. Alternative flow functions have to be explicitly imported from the flow package. >>> from networkx.algorithms.flow import shortest_augmenting_path >>> node_cut == nx.minimum_node_cut(G, flow_func=shortest_augmenting_path) True If you specify a pair of nodes (source and target) as parameters, this function returns a local st node cut. >>> len(nx.minimum_node_cut(G, 3, 7)) 5 If you need to perform several local st cuts among different pairs of nodes on the same graph, it is recommended that you reuse the data structures used in the maximum flow computations. See :meth:minimum_st_node_cut for details. Notes ----- This is a flow based implementation of minimum node cut. The algorithm is based in solving a number of maximum flow computations to determine the capacity of the minimum cut on an auxiliary directed network that corresponds to the minimum node cut of G. It handles both directed and undirected graphs. This implementation is based on algorithm 11 in [1]_. See also -------- :meth:minimum_st_node_cut :meth:minimum_cut :meth:minimum_edge_cut :meth:stoer_wagner :meth:node_connectivity :meth:edge_connectivity :meth:maximum_flow :meth:edmonds_karp :meth:preflow_push :meth:shortest_augmenting_path References ---------- .. [1] Abdol-Hossein Esfahanian. Connectivity Algorithms. http://www.cse.msu.edu/~cse835/Papers/Graph_connectivity_revised.pdf """ if (s is not None and t is None) or (s is None and t is not None): raise nx.NetworkXError('Both source and target must be specified.') # Local minimum node cut. if s is not None and t is not None: if s not in G: raise nx.NetworkXError('node %s not in graph' % s) if t not in G: raise nx.NetworkXError('node %s not in graph' % t) return minimum_st_node_cut(G, s, t, flow_func=flow_func) # Global minimum node cut. # Analog to the algorithm 11 for global node connectivity in [1]. if G.is_directed(): if not nx.is_weakly_connected(G): raise nx.NetworkXError('Input graph is not connected') iter_func = itertools.permutations def neighbors(v): return itertools.chain.from_iterable([G.predecessors(v), G.successors(v)]) else: if not nx.is_connected(G): raise nx.NetworkXError('Input graph is not connected') iter_func = itertools.combinations neighbors = G.neighbors # Reuse the auxiliary digraph and the residual network. 
H = build_auxiliary_node_connectivity(G) R = build_residual_network(H, 'capacity') kwargs = dict(flow_func=flow_func, auxiliary=H, residual=R) # Choose a node with minimum degree. v = min(G, key=G.degree) # Initial node cutset is all neighbors of the node with minimum degree. min_cut = set(G[v]) # Compute st node cuts between v and all its non-neighbors nodes in G. for w in set(G) - set(neighbors(v)) - set([v]): this_cut = minimum_st_node_cut(G, v, w, **kwargs) if len(min_cut) >= len(this_cut): min_cut = this_cut # Also for non adjacent pairs of neighbors of v. for x, y in iter_func(neighbors(v), 2): if y in G[x]: continue this_cut = minimum_st_node_cut(G, x, y, **kwargs) if len(min_cut) >= len(this_cut): min_cut = this_cut return min_cut def minimum_edge_cut(G, s=None, t=None, flow_func=None): r"""Returns a set of edges of minimum cardinality that disconnects G. If source and target nodes are provided, this function returns the set of edges of minimum cardinality that, if removed, would break all paths among source and target in G. If not, it returns a set of edges of minimum cardinality that disconnects G. Parameters ---------- G : NetworkX graph s : node Source node. Optional. Default value: None. t : node Target node. Optional. Default value: None. flow_func : function A function for computing the maximum flow among a pair of nodes. The function has to accept at least three parameters: a Digraph, a source node, and a target node. And return a residual network that follows NetworkX conventions (see :meth:maximum_flow for details). If flow_func is None, the default maximum flow function (:meth:edmonds_karp) is used. See below for details. The choice of the default function may change from version to version and should not be relied on. Default value: None. Returns ------- cutset : set Set of edges that, if removed, would disconnect G. If source and target nodes are provided, the set contains the edges that if removed, would destroy all paths between source and target. Examples -------- >>> # Platonic icosahedral graph has edge connectivity 5 >>> G = nx.icosahedral_graph() >>> len(nx.minimum_edge_cut(G)) 5 You can use alternative flow algorithms for the underlying maximum flow computation. In dense networks the algorithm :meth:shortest_augmenting_path will usually perform better than the default :meth:edmonds_karp, which is faster for sparse networks with highly skewed degree distributions. Alternative flow functions have to be explicitly imported from the flow package. >>> from networkx.algorithms.flow import shortest_augmenting_path >>> len(nx.minimum_edge_cut(G, flow_func=shortest_augmenting_path)) 5 If you specify a pair of nodes (source and target) as parameters, this function returns the value of local edge connectivity. >>> nx.edge_connectivity(G, 3, 7) 5 If you need to perform several local computations among different pairs of nodes on the same graph, it is recommended that you reuse the data structures used in the maximum flow computations. See :meth:local_edge_connectivity for details. Notes ----- This is a flow based implementation of minimum edge cut. For undirected graphs the algorithm works by finding a 'small' dominating set of nodes of G (see algorithm 7 in [1]_) and computing the maximum flow between an arbitrary node in the dominating set and the rest of nodes in it. This is an implementation of algorithm 6 in [1]_. For directed graphs, the algorithm does n calls to the max flow function. 
The function raises an error if the directed graph is not weakly connected and returns an empty set if it is weakly connected. It is an implementation of algorithm 8 in [1]_. See also -------- :meth:minimum_st_edge_cut :meth:minimum_node_cut :meth:stoer_wagner :meth:node_connectivity :meth:edge_connectivity :meth:maximum_flow :meth:edmonds_karp :meth:preflow_push :meth:shortest_augmenting_path References ---------- .. [1] Abdol-Hossein Esfahanian. Connectivity Algorithms. http://www.cse.msu.edu/~cse835/Papers/Graph_connectivity_revised.pdf """ if (s is not None and t is None) or (s is None and t is not None): raise nx.NetworkXError('Both source and target must be specified.') # reuse auxiliary digraph and residual network H = build_auxiliary_edge_connectivity(G) R = build_residual_network(H, 'capacity') kwargs = dict(flow_func=flow_func, residual=R, auxiliary=H) # Local minimum edge cut if s and t are not None if s is not None and t is not None: if s not in G: raise nx.NetworkXError('node %s not in graph' % s) if t not in G: raise nx.NetworkXError('node %s not in graph' % t) return minimum_st_edge_cut(H, s, t, **kwargs) # Global minimum edge cut # Analog to the algorithm for global edge connectivity if G.is_directed(): # Based on algorithm 8 in [1] if not nx.is_weakly_connected(G): raise nx.NetworkXError('Input graph is not connected') # Initial cutset is all edges of a node with minimum degree node = min(G, key=G.degree) min_cut = set(G.edges(node)) nodes = list(G) n = len(nodes) for i in range(n): try: this_cut = minimum_st_edge_cut(H, nodes[i], nodes[i + 1], **kwargs) if len(this_cut) <= len(min_cut): min_cut = this_cut except IndexError: # Last node! this_cut = minimum_st_edge_cut(H, nodes[i], nodes[0], **kwargs) if len(this_cut) <= len(min_cut): min_cut = this_cut return min_cut else: # undirected # Based on algorithm 6 in [1] if not nx.is_connected(G): raise nx.NetworkXError('Input graph is not connected') # Initial cutset is all edges of a node with minimum degree node = min(G, key=G.degree) min_cut = set(G.edges(node)) # A dominating set is \lambda-covering # We need a dominating set with at least two nodes for node in G: D = nx.dominating_set(G, start_with=node) v = D.pop() if D: break else: # in complete graphs the dominating set will always be of one node # thus we return min_cut, which now contains the edges of a node # with minimum degree return min_cut for w in D: this_cut = minimum_st_edge_cut(H, v, w, **kwargs) if len(this_cut) <= len(min_cut): min_cut = this_cut return min_cut
2021-12-02 20:29:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48225778341293335, "perplexity": 3759.296365996365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362287.26/warc/CC-MAIN-20211202175510-20211202205510-00243.warc.gz"}
http://codeforces.com/blog/entry/55878
### TooNewbie's blog

By TooNewbie, 13 months ago,

Let's discuss problems. How to solve 2, 3, 6, 7, 11? +11

» 13 months ago, # |   +10 And 4, please :)

• » » 13 months ago, # ^ |   +5 dp[L][R] = cost to join stations of interval [L, R] at point (x[L], y[R]). Calculate it as min value dp[L][K] + dp[K+1][R] + (x[K+1] - x[L]) + (y[K] - y[R]). Answer is dp[0][n-1] + path from (x[0], y[n - 1]) to (0, 0)

» 13 months ago, # |   +6 Hey-ho, the contest is still running!

» 13 months ago, # |   0 9 ?

• » » 13 months ago, # ^ | ← Rev. 2 →   +20 You need to find the number of primes (let P be a prime) 2 <= P <= 10^7 such that N is the smallest number so Q ^ N % P = 1. From Fermat's little theorem, Q^(P - 1) % P = 1, since P is prime, so N < P. Also, since Q ^ N % P = 1, Q ^ (2 * N) % P = 1 and Q ^ (P - 1) % P = 1, we know that N is a divisor of P - 1. If N is not the smallest number so Q^N % P = 1, there must be a number D, so D is a divisor of N (like N is a divisor of P - 1). For each divisor D of N, check if Q ^ D % P = 1; if that's true, N is not the first number that respects the property. If N is ok and all divisors fail, P is part of the solution for N

• » » » 13 months ago, # ^ |   +5 One can only check divisors of the form N/p, where p is a prime divisor of N

• » » » » 13 months ago, # ^ | ← Rev. 2 →   0 You can also check a divisor if it's a divisor of both N and P - 1

• » » » 13 months ago, # ^ |   0 In fact, you can get a very fast O(T) solution (T = 10^7) without even doing fast exponentiation (after you precalc primes, of course). Since you search only primes P = k * N + 1, there are only T / N candidates. So you can check each of them in O(N) time with trivial exponentiation, i.e. take 1 and multiply it N times.

» 13 months ago, # | ← Rev. 2 →   0 Is 6 like a typical "Write the most lazy segment tree you can"?

• » » 13 months ago, # ^ |   0 I think you can solve 6 with cascading on segment tree. You also must keep a strange thing in each node. For the lazy part, on each node it will amortize at the end at O(number of els on that node). The lazy part is like: "Ok, so on this node every value bigger than X will become X". When doing updates on the first array, some values may become less than X. You will keep how many values are bigger than X and the product of elements lower than X. The way I implement lazy is like this: if you pass through a node, you propagate the lazy to the children and the current node has no lazy. Here, every node will have an X value, and when propagating the lazy, the value X will only increase, so the overall complexity will be O(sizeofnode). The lazy part comes from how you compute an array c[i] = max(initial_swag, b[1] .. b[i - 1]). I didn't implement this, but it seems good. O(NlogN) time and memory.

• » » 13 months ago, # ^ |   0 Don't know if this will help, but it's the last line of the input. It's the only place where i saw this information "It is guaranteed that with each change the older parameter value is strictly smaller than the new value."

• » » » 13 months ago, # ^ |   0 Not only "Each change either raises ... or raises ....". Raising means increasing
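As a concrete illustration of the order-checking idea described for problem 9 in the thread above, here is a minimal sketch. It assumes the task is exactly as the commenter states it (given Q and N, count primes P <= 10^7 for which N is the smallest exponent with Q^N mod P = 1), scans only candidates of the form P = k*N + 1, and rules out smaller orders via the prime divisors of N; sympy is used here only for primality testing and factoring.

```python
from sympy import isprime, primefactors

LIMIT = 10 ** 7

def count_primes_with_order(Q, N):
    """Count primes P <= LIMIT whose multiplicative order of Q modulo P is exactly N."""
    prime_divs = primefactors(N)
    count = 0
    # The order of Q divides P - 1, so N | P - 1 and only P = k*N + 1 can qualify.
    for P in range(N + 1, LIMIT + 1, N):
        if not isprime(P) or Q % P == 0:
            continue
        if pow(Q, N, P) != 1:
            continue
        # The order divides N; it equals N iff Q^(N/p) != 1 (mod P) for every prime p | N.
        if all(pow(Q, N // p, P) != 1 for p in prime_divs):
            count += 1
    return count

print(count_primes_with_order(2, 4))  # only P = 5 has ord_P(2) = 4, so this prints 1
```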
Can you do better than using regex implementation for every string?PS: this gets TLE • » » 13 months ago, # ^ |   0 This is the indended solution: check each regex of length < 16 against all strings.However, your success largely depends on how many candidate regexes you have (there are various ways to prune regexes), and how you check a string against a regex (again, a lot of options here). • » » » 13 months ago, # ^ |   0 I had around 110K regexes, Which is a lot. Also, is C++'s std::regex_match enough to pass the test-data? • » » » » 13 months ago, # ^ |   0 Java's java.util.Pattern is enough, but std::regex is very slow. • » » » » 13 months ago, # ^ | ← Rev. 3 →   0 In this article, std::regex_match has exponential time complexity. In this problem, (((a*)*)t) and aa..aag takes extremely long time.I don't know std::regex_match could get AC, but it must be used carefully. • » » » » » 13 months ago, # ^ |   +10 These sometimes called "evil" regular expressions. Some examples of slow working regexes constructions to avoid are given here. • » » » » 13 months ago, # ^ |   0 I'm pretty sure that std::regex was not enough while java Pattern was enough. Meet the problem where C++ library sucks, while Java library works =) The right way to match regexes is to build nondeterministic automaton.Also, it is rather easy to decrease number of regexes to 40K by removing 0, pruning closing or "or"-ing a closure, ensuring A < B for (A|B), and removing cases like (A(BC)) in favor of ((AB)C). • » » » » » 12 months ago, # ^ | ← Rev. 3 →   +10 Was interested in this problem and solved it in around a week.Used to generate all possible regexes, excluding "duplicates" and longer equivalents if there are shorter ones. Got TLE.Here are some ideas which I used to get AC: SpoilerFound minimal number of Kleene stars regex can contain. Found minimal cardinality of sets consisting of different letters, otherwise regex /[]*/ (in Simon's form) can be used as the shortest one.Later used "abstract" regular expressions consisted of dots (dot = match any letter). Earlier algorithm generated only few hundreds of such regexes. So an input data meets only some hundred "abstract" regular expressions, and often only some decades of them survive. Later I feed "abstract" regex with permutated letters and got some thousand normal regexes. This increased a speed or program significantly and still brought TLE.Finally, remembered that one should pre-compile all regexes (qr operator) before use them against plenty of test sets, otherwise they compile each time. Got AC 2.5s. » 13 months ago, # |   0 Link to the problems? » 13 months ago, # |   0 How to solve 8? • » » 13 months ago, # ^ | ← Rev. 2 →   +10 Lets make a list c so that c[i]=a[i]-d[i].for any [l,r], you need to make (r-l+1)/3 players with the lowest c forwards, (r-l+1)/3 with the highest c defenders, and other (r-l+1)/3 midfielders.It can be easily done with persistent segment tree in O(q*logn*logn). • » » » 13 months ago, # ^ |   +10 Isn't it an overkill? Binary indexed tree + parallel binary search is enough. • » » » » 13 months ago, # ^ |   +5 Could you please describe how parallel binary search works? • » » » » » 13 months ago, # ^ |   0 • » » » 13 months ago, # ^ |   0 can you please elaborate on your persistent segment tree? • » » » » 13 months ago, # ^ |   +5 We want to find k-th smallest ellement in [l,r] segment. We construct Persistent tree with values c[i]. Then We use binary search for know index k-th minimum element and for checking this we use persistent tree. 
code https://ideone.com/bk5Qag

• » » 13 months ago, # ^ |   0 There is also a solution, although implementing it during the contest was probably not the best idea. Define the value of a player to be the difference of his attack and defense. Now sort players by value (and by index in case of ties). From this point build a segment tree on them, and in each node of the tree store all its players sorted by index, and prefix sums of their values alongside. Now you can perform queries like "how many players with indices in [L..R) have values in [Vmin..Vmax)" in O(log^2 n) time using a standard segment tree operation (you have to run binary search by L and R in each node of the tree). So you can use another bin.search to find values V1 and V2, such that the number of players with indices [L..R) is equal for the value ranges [Vmin..V1), [V1..V2), [V2..Vmax). You can use prefix sums to find the sum of values in each of the value ranges, so this gives an O(log^3 n) per query solution. Now recall that in a standard RSQ segment tree we can do an operation like "which maximal prefix of the array has sum less than X" in O(log n) time, which is better than doing binary search by position and calculating the sum on a prefix inside (which is O(log^2 n)). Now apply this idea to the segment tree above, and you don't need the outer binary search by value. So the solution becomes O(log^2 n) per query. Finally, you can use fractional cascading to avoid running binary search by L and R in each node, so that you need only one binary search at the root node. This gives an O(log n) per query solution.

» 13 months ago, # |   +16 2: Keep going forward. At the right moment turn right. Keep going forward again. At the right moment stop and dig. When you keep going forward, you get a sequence of 'c' (closer) and 'f' (farther or same). If you analyze various patterns, you can notice that the run lengths of this sequence have a period of 8. You should stop when you just get 'c', you expect to get 'f' next, and the run length of the current 'c' is maximum possible.

• » » 13 months ago, # ^ |   +5 I haven't solved it (fail on TC #2) and tried another approach: go forward if the answer was "c", otherwise go randomly to L or R, until you stay in a cell from which you can get only "f" answers for all 4 directions. This means that you stay in the correct cell, or in a cell which is on the mirror side of the cube. Then you go 1 circle around the cube with two stops at the c/f boundary, and dig depending on the lengths of the last "f" and "c" sequences.

• » » » 13 months ago, # ^ |   0 I solved with the same approach

» 13 months ago, # |   +13 11?

• » » 13 months ago, # ^ | ← Rev. 2 →   +11 Binary search over the answer. Let's check the distance d, i.e. check if n segments is enough. It should be obvious that the greedy strategy should work, i.e. each segment should be as long as possible. It is easy to see that the best way to approximate some interval [x0, x1] is a line that passes through the points (x0, f(x0) + d), (x1, f(x1) + d); let's say it's a line y = kx + b, then the maximum distance to f(x) will be at the point with x = c / k or at the ends of the segment. So we can use one more binary search to find the maximum length of each segment, and build them one by one, per each test.

• » » 13 months ago, # ^ |   0 There is a solution of O(1).
» 13 months ago, # |   +5 3?

» 13 months ago, # |   -10 I tried to access these problems on yandex but it said I need "coordinator login & pass". How can I look at these problems?

» 13 months ago, # |   +10 We made this cube to solve 2 in contest, and this is very helpful.
2018-12-10 18:57:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3601807951927185, "perplexity": 1713.9747802052148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823382.1/warc/CC-MAIN-20181210170024-20181210191524-00293.warc.gz"}
http://mathhelpforum.com/pre-calculus/104373-finding-equation-curve.html
# Thread: Finding equation of curve

1. ## Finding equation of curve

Find the equation of the curve COD assuming it is of the form y=ax^2n. C(-60,18) O(0,0), D(40,8). How on earth do I even do this question???

2. Originally Posted by xwrathbringerx
Find the equation of the curve COD assuming it is of the form y=ax^2n. C(-60,18) O(0,0), D(40,8). How on earth do I even do this question???
Plug in your points. This will give you three equations. Then solve the system of these equations for the numbers a and n.

3. hmmm could you please show me how to simultaneously solve these... I've got 18 = a * 3600^n AND 8 = a * 1600^n which becomes 10 = (3600^n - 1600^n) * a

4. Originally Posted by xwrathbringerx
hmmm could you please show me how to simultaneously solve these... I've got 18 = a * 3600^n AND 8 = a * 1600^n which becomes 10 = (3600^n - 1600^n) * a
try dividing the two equations instead of subtracting them.

5. I've got 18/3600^n = 8/1600^n but I don't know how to go about solving n .... I tried cross multiplying
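Finishing the step the thread stops at: dividing the two equations eliminates a, both sides reduce to 9/4, so n can be read off directly, and a then follows from either point. A worked version of that last step (no cross multiplying needed):

$$\frac{18}{8}=\frac{a\cdot 3600^{n}}{a\cdot 1600^{n}}=\left(\frac{3600}{1600}\right)^{n}=\left(\frac{9}{4}\right)^{n},\qquad \frac{18}{8}=\frac{9}{4}\ \Rightarrow\ \left(\frac{9}{4}\right)^{n}=\frac{9}{4}\ \Rightarrow\ n=1,$$

$$a=\frac{8}{1600}=\frac{1}{200},\qquad\text{so } y=\frac{x^{2}}{200},$$

which indeed passes through C(-60,18), O(0,0) and D(40,8).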
2017-03-26 18:18:57
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8368033766746521, "perplexity": 2038.775495713281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189244.95/warc/CC-MAIN-20170322212949-00085-ip-10-233-31-227.ec2.internal.warc.gz"}