url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://nl.overleaf.com/articles/symmetries-in-quantum-mechanics/mwcqyrttsswd
|
AbstractQuantum Mechanics was first conceived at the turn of the twentieth century, and since has shook the foundations of modern physics. It is a radically different viewpoint from classical physics, which works on the macroscopic scale, in contrast to quantum mechanics' microscopic domain. Though at first it was heavily debated by members of the scientific communit, it is and has been both theoretically and experimentally verified by the likes of Einstein, Heisenberg, Shr\"{o}dinger, to name but a few. This being said, it is still an incomplete theory, and has yet not been concretely proved, despite strong experimental evidence for its truth. The aim of this report is to introduce the field of quantum mechanics, and to investigate the notions of conservation/symmetry, familiar from classical mechanics. The transformations we consider here are parity/space-inversion, lattice translation and time reversal. We will build a knowledge base by analysng the operators that represent these transformation within a quantum mechanical framework. This paper is presented for an audience that has completed a mathematics degree course up to and including second year. The specific feilds we draw upon include differential equations (MA1OD1, MA2OD2, MA2PD1), linear algebra (MA2LIN), and dynamics (MA2DY). These modules are assumed to be prior knowledge. The main sources of information for this project are: An Introduction To Quantum Mechanics, D.J. Griffiths (1995), Second edition, Pearson Education ltd., 2005 Modern Quantum Mechanics, J.J. Sakurai (1994), First edition, Addison-Wesley Publishing Company inc. 1994 which are referenced throughout. For specific pages, see the bibliography, which is found in section 6.
|
2021-10-22 00:34:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 96, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5306233763694763, "perplexity": 594.0525397739593}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585449.31/warc/CC-MAIN-20211021230549-20211022020549-00301.warc.gz"}
|
http://www.cfd-online.com/W/index.php?title=ICASE/LaRC_workshop_on_benchmark_problems_in_computational_aeroacoustics,_category_1,_problem_1&diff=7127&oldid=7126
|
# ICASE/LaRC workshop on benchmark problems in computational aeroacoustics, category 1, problem 1
(Difference between revisions)
Revision as of 22:00, 11 February 2007 (view source)Harishg (Talk | contribs)← Older edit Revision as of 22:01, 11 February 2007 (view source)Harishg (Talk | contribs) Newer edit → Line 1: Line 1: Solve the initial value problem Solve the initial value problem :$\frac{\partial u}{\partial t} +\frac {\partial u}{\partial x} =0$ :$\frac{\partial u}{\partial t} +\frac {\partial u}{\partial x} =0$ + + Give numerical solution at t=100,200,300 and 400 over
## Revision as of 22:01, 11 February 2007
Solve the initial value problem
$\frac{\partial u}{\partial t} +\frac {\partial u}{\partial x} =0$
Give numerical solution at t=100,200,300 and 400 over
|
2016-09-27 19:03:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9517584443092346, "perplexity": 9707.545713783113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661155.56/warc/CC-MAIN-20160924173741-00134-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://zbmath.org/?q=an:1236.11060
|
## Parametric geometry of numbers and applications.(English)Zbl 1236.11060
This article deals with classical geometry of numbers. Let $$\mu_1,\dots,\mu_n$$ be reals with $$\mu_1+\dots+\mu_n=0$$. For $$Q>1$$ let $$T_Q:\mathbb R^n\to\mathbb R^n$$ be the linear map with $\mathbf p:=(p_1,\dots, p_n)\to (Q^{\mu_1}p_1,\dots, Q^{\mu_n} p_n).$ A symmetric convex body $$K$$ gives rise to the bodies $$K(Q)=T_Q(K)$$ parametrized by $$Q$$.
The authors study the successive minima $$\lambda_1(Q),\dots,\lambda_n(Q)$$ with respect to $$K(Q)$$, $$\Lambda$$ seen as a function of $$Q$$. Each $$\lambda_i(Q)$$ is a continuous function of $$Q$$ since $$Q$$ is closed. They wonder whether for given $$s, 1\leq s\leq n$$, there are arbitrarily large values of $$Q$$ with $$\lambda_s(Q)=\lambda_{s+1}(Q)$$. When $$A=\{i_1<\dots<i_s\}\subset \{1,\dots,n\}$$, set $$\mu_A=\sum_{i\in A}\mu_i$$ and let $$\pi_A: \mathbb R^n\to \mathbb R^s$$ be the map with $$\pi_A(\mathbf p)=(p_{i_1},\dots,p_{i_s})\in\mathbb R^s$$.
The authors prove: Suppose that for every $$s$$-dimensional space $$S$$ spanned by lattice points (i.e. points of $$\Lambda$$), there is some $$A$$ of cardinality $$s$$ with $$\mu_A<0$$ and $$\pi_A(S)=\mathbb R^s$$. Then there are arbitrary large values of $$Q$$ with $$\lambda_s(Q)=\lambda_{s+1}(Q)$$.
In general there is a nonzero lattice point $$\mathbf p$$ in $$\lambda_1(Q)K(Q)$$. Next the authors define $$\psi_i(Q)$$ for $$Q>1$$ by $\lambda_i(Q)=Q^{\psi_i(Q)}\text{\;for\;} i=1,\dots,n.$ The $$\psi_i(Q)$$ are again continuous, one has $$0<\psi_1(Q)\leq \dots\leq \psi_n(Q)$$ and from Minkowski’s theorem $|\psi_1(Q)+\dots+\psi_n(Q)|\leq c(K,\Lambda)/\log Q,$ for a certain constant depending on $$\Lambda$$ and $$K$$. The authors show that the quantities $\overline{\psi_i}=\lim \sup_{Q\to\infty}\psi_i(Q) \text{ and } \underline{\psi_i}=\lim \inf_{Q\to\infty}\psi_i(Q)$ are finite and satisfy the inequalities $$\overline{\psi_1}\leq \dots\leq \overline{\psi_n}$$ and $$\underline{\psi_1}\leq \dots\leq \underline{\psi_n}$$ and also $$\overline{\psi_i}\geq \underline{\psi_i},\;i=1,\dots, n$$.
They prove the following theorem: For $$1\leq i\leq n$$ one has \begin{aligned} &\overline{\psi}_1+\dots+\overline{\psi}_{i-1}+\underline{\psi}_i+\overline{\psi}_{i+1}+\dots+\overline{\psi}_n\geq 0,\\ &\underline{\psi}_1+\dots+\underline{\psi}_{i-1}+\overline{\psi}_i+\underline{\psi}_{i+1}+\dots+\underline{\psi}_n\leq 0. \end{aligned} They also give some application for the case $$n=3$$ corresponding to the dimension two in V. Jarník [ Trav. Inst. Math. Tbilissi 3, 193–212 (1938; Zbl 0019.10602)] and M. Laurent [Can. J. Math. 61, No. 1, 165–189 (2009; Zbl 1229.11101)] papers in the context of Dirichlet simultaneous approximation. When $$n=3$$, then \begin{aligned} &\overline{\psi}_1+\underline{\psi}_3+2\overline{\psi}_1\underline{\psi}_3=0,\\ &2\underline{\psi}_1+\overline{\psi}_3\leq -\underline{\psi}_3(3+2\underline{\psi}_1+4\overline{\psi}_3),\\ &2\overline{\psi}_3+\underline{\psi}_1\geq -\overline{\psi}_1(3+2\overline{\psi}_3+4\underline{\psi}_1).\end{aligned}
### MSC:
11H06 Lattices and convex bodies (number-theoretic aspects) 11J13 Simultaneous homogeneous approximation, linear forms
### Keywords:
lattices; successive minima; simultaneous approximation
### Citations:
Zbl 0019.10602; Zbl 1229.11101
Full Text:
|
2022-12-07 13:52:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9723456501960754, "perplexity": 315.8052427922883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711162.52/warc/CC-MAIN-20221207121241-20221207151241-00555.warc.gz"}
|
https://giscience-fsu.github.io/sperrorest/reference/partition_tiles.html
|
partition_tiles divides the study area into a specified number of rectangular tiles. Optionally small partitions can be merged with adjacent tiles to achieve a minimum number or percentage of samples in each tile.
partition_tiles(
data,
coords = c("x", "y"),
dsplit = NULL,
nsplit = NULL,
rotation = c("none", "random", "user"),
user_rotation,
offset = c("none", "random", "user"),
user_offset,
reassign = TRUE,
min_frac = 0.025,
min_n = 5,
iterate = 1,
return_factor = FALSE,
repetition = 1,
seed1 = NULL
)
## Arguments
data data.frame containing at least the columns specified by coords vector of length 2 defining the variables in data that contain the x and y coordinates of sample locations optional vector of length 2: equidistance of splits in (possibly rotated) x direction (dsplit[1]) and y direction (dsplit[2]) used to define tiles. If dsplit is of length 1, its value is recycled. Either dsplit or nsplit must be specified. optional vector of length 2: number of splits in (possibly rotated) x direction (nsplit[1]) and y direction (nsplit[2]) used to define tiles. If nsplit is of length 1, its value is recycled. indicates whether and how the rectangular grid should be rotated; random rotation is only between -45 and +45 degrees. if rotation='user', angles (in degrees) by which the rectangular grid is to be rotated in each repetition. Either a vector of same length as repetition, or a single number that will be replicated length(repetition) times. indicates whether and how the rectangular grid should be shifted by an offset. if offset='user', a list (or vector) of two components specifying a shift of the rectangular grid in (possibly rotated) x and y direction. The offset values are relative values, a value of 0.5 resulting in a one-half tile shift towards the left, or upward. If this is a list, its first (second) component refers to the rotated x (y) direction, and both components must have same length as repetition (or length 1). If a vector of length 2 (or list components have length 1), the two values will be interpreted as relative shifts in (rotated) x and y direction, respectively, and will therefore be recycled as needed (length(repetition) times each). logical (default TRUE): if TRUE, 'small' tiles (as per min_frac and min_n arguments and get_small_tiles) are merged with (smallest) adjacent tiles. If FALSE, small tiles are 'eliminated', i.e. set to NA. numeric >=0, <1: minimum relative size of partition as percentage of sample; argument passed to get_small_tiles. Will be ignored if NULL. integer >=0: minimum number of samples per partition; argument passed to get_small_tiles. Will be ignored if NULL. argument to be passed to tile_neighbors if FALSE (default), return a represampling object; if TRUE (used internally by other sperrorest functions), return a list containing factor vectors (see Value) numeric vector: cross-validation repetitions to be generated. Note that this is not the number of repetitions, but the indices of these repetitions. E.g., use repetition = c(1:100) to obtain (the 'first') 100 repetitions, and repetition = c(101:200) to obtain a different set of 100 repetitions. seed1+i is the random seed that will be used by set.seed in repetition i (i in repetition) to initialize the random number generator before sampling from the data set.
## Value
A represampling object. Contains length(repetition) resampling objects as repetitions. The exact number of folds / test-set tiles within each resampling objects depends on the spatial configuration of the data set and possible cleaning steps (see min_frac, min_n).
## Note
Default parameter settings may change in future releases. This function, especially the rotation and shifting part of it and the algorithm for cleaning up small tiles is still a bit experimental. Use with caution. For non-zero offsets (offset!='none')), the number of tiles may actually be greater than nsplit[1]*nsplit[2] because of fractional tiles lurking into the study region. reassign=TRUE with suitable thresholds is therefore recommended for non-zero (including random) offsets.
## Examples
data(ecuador)
set.seed(42)
parti <- partition_tiles(ecuador, nsplit = c(4, 3), reassign = FALSE)
# tile A4 has only 55 samples
# same partitioning, but now merge tiles with less than 100 samples to
#> $1 #> n.train n.test #> X1:Y3 600 151 #> X2:Y2 626 125 #> X3:Y1 584 167 #> X3:Y2 574 177 #> X3:Y3 620 131 #> # tile B4 (in 'parti') was smaller than A3, therefore A4 was merged with B4, # not with A3 # now with random rotation and offset, and tiles of 2000 m length: parti3 <- partition_tiles(ecuador, dsplit = 2000, offset = "random", rotation = "random", reassign = TRUE, min_n = 100 ) # plot(parti3, ecuador) summary(parti3) #>$1
#>
|
2021-10-22 03:25:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4138351380825043, "perplexity": 3101.9053830713297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585450.39/warc/CC-MAIN-20211022021705-20211022051705-00162.warc.gz"}
|
http://www.vallis.org/blogspace/preprints/2007/12/
|
## [0712.3737] Comparison between numerical relativity and a new class of post-Newtonian gravitational-wave phase evolutions: the non-spinning equal-mass case
Authors: Achamveedu Gopakumar, Mark Hannam, Sascha Husa, Bernd Brügmann
Date: 21 Dec 2007
Abstract: We compare the phase evolution of equal-mass nonspinning black-hole binaries from numerical relativity (NR) simulations with post-Newtonian (PN) results obtained from three PN approximants: the TaylorT1 and T4 approximants, for which NR-PN comparisons have already been performed in the literature, and the recently proposed approximant TaylorEt. The accumulated phase disagreement between NR and PN results over the frequency range $M\omega = 0.0455$ to $M\omega = 0.1$ is greater for TaylorEt than either T1 or T4, but has the attractive property of decreasing monotonically as the PN order is increased.
#### Dec 24, 2007
0712.3737 (/preprints)
2007-12-24, 13:03
## [0712.3787] Comparison between numerical-relativity and post-Newtonian waveforms from spinning binaries: the orbital hang-up case
Authors: Mark Hannam, Sascha Husa, Bernd Brügmann, Achamveedu Gopakumar
Date: 21 Dec 2007
Abstract: We compare results from numerical simulations of spinning binaries in the ‘orbital hangup’ case, where the binary completes at least nine orbits before merger, with post-Newtonian results using the approximants TaylorT1, T4 and Et. We find that, over the ten cycles before the gravitational-wave frequency reaches $M\omega = 0.1$, the accumulated phase disagreement between NR and 2.5PN results is less than three radians, and is less than 2.5 radians when using 3.5PN results. The amplitude disagreement between NR and restricted PN results increases with the black holes' spin, from about 6% in the equal-mass case to 12% when the black holes' spins are $S_i/M_iˆ2 = 0.85$. Finally, our results suggest that the merger waveform will play an important role in estimating the spin from such inspiral waveforms.
#### Dec 24, 2007
0712.3787 (/preprints)
2007-12-24, 13:03
## [0712.3541] On the final spin from the coalescence of two black holes
Authors: Luciano Rezzolla, Enrico Barausse, Ernst Nils Dorband, Denis Pollney, Christian Reisswig, Jennifer Seiler, Sascha Husa
Date: 20 Dec 2007
Abstract: We provide a compact analytic formula to compute the spin of the black hole produced by the coalescence of two black holes. The expression, which uses an analytic fit of numerical-relativity data and relies on four assumptions, aims at modelling generic initial spin configurations and mass ratios. A comparison with numerical-relativity simulations already shows very accurate agreements with all of the numerical data available to date, but we also suggest a number of ways in which our predictions can be further improved.
#### Dec 21, 2007
0712.3541 (/preprints)
2007-12-21, 12:27
## [0712.3419] Measurement of Quantum Fluctuations in Geometry
Authors: Craig J. Hogan
Date: 20 Dec 2007
Abstract: A phenomenological calculation is presented of the effect of quantum fluctuations in the spacetime metric, or holographic noise, on interferometeric measurement of the relative positions of freely falling proof masses, in theories where spacetime satisfies covariant entropy bounds and can be represented as a quantum theory on 2+1D null surfaces. The quantum behavior of the 3+1D metric, represented by a commutation relation expressing quantum complementarity between orthogonal position operators, leads to a parameter-free prediction of quantum noise in orthogonal position measurements of freely falling masses. A particular quantum weirdness of this holographic noise is that it only appears in measurements that compare transverse positions, and does not appear at all in purely radial position measurements. The effect on phase signal in an interferometer that continuously measures the difference in the length of orthogonal arms resembles that of a classical random Brownian motion of the beamsplitter with a Planck length step in orthogonal position difference every Planck time. This predicted holographic noise is comparable in magnitude with currently measured system noise, and should be detectable in the currently operating interferometer GEO600. Because of its transverse character, holographic noise is reduced relative to gravitational wave effects in some interferometer designs, such as LIGO, where beam power is much less in the beamsplitter than in the arms.
#### Dec 21, 2007
0712.3419 (/preprints)
2007-12-21, 12:27
## [0712.3539] Laser Ranging for Gravitational, Lunar, and Planetary Science
Authors: Stephen M. Merkowitz, Philip W. Dabney, Jeffrey C. Livas, Jan F. McGarry, Gregory A. Neumann, Thomas W. Zagwodzki
Date: 20 Dec 2007
Abstract: More precise lunar and Martian ranging will enable unprecedented tests of Einstein's theory of General Relativity and well as lunar and planetary science. NASA is currently planning several missions to return to the Moon, and it is natural to consider if precision laser ranging instruments should be included. New advanced retroreflector arrays at carefully chosen landing sites would have an immediate positive impact on lunar and gravitational studies. Laser transponders are currently being developed that may offer an advantage over passive ranging, and could be adapted for use on Mars and other distant objects. Precision ranging capability can also be combined with optical communications for an extremely versatile instrument. In this paper we discuss the science that can be gained by improved lunar and Martian ranging along with several technologies that can be used for this purpose.
#### Dec 21, 2007
0712.3539 (/preprints)
2007-12-21, 12:27
## [0712.3236] New Class of Gravitational Wave Templates for Inspiralling Compact Binaries
Authors: Achamveedu Gopakumar
Date: 19 Dec 2007
Abstract: Compact binaries inspiralling along quasi-circular orbits are the most plausible gravitational wave (GW) sources for the operational, planned and proposed laser interferometers. We provide new class of restricted post-Newtonian accurate GW templates for non-spinning compact binaries inspiralling along PN accurate quasi-circular orbits. Arguments based on data analysis, theoretical and astrophysical considerations are invoked to show why these time-domain Taylor approximants should be interesting to various GW data analysis communities.
#### Dec 19, 2007
0712.3236 (/preprints)
2007-12-19, 18:27
## [0712.3199] Gravitational waves from compact binaries inspiralling along post-Newtonian accurate eccentric orbits: Data analysis implications
Authors: M. Tessmer, A. Gopakumar
Date: 19 Dec 2007
Abstract: Compact binaries inspiralling along eccentric orbits are plausible gravitational wave (GW) sources for ground-based laser interferometers. We explore losses in event rates incurred when searching for GWs from compact binaries inspiralling along post-Newtonian accurate eccentric orbits with certain obvious non-optimal search templates. For the present analysis, GW signals having 2.5 post-Newtonian accurate orbital evolution are modeled following the phasing formalism, presented in [T. Damour, A. Gopakumar, and B. R. Iyer, Phys. Rev. D 70, 064028 (2004)]. The associated search templates are the usual time domain Taylor approximants for compact binaries in quasi-circular orbits, also having 2.5PN accurate non-stationary orbital phase evolution. We observe that these templates are highly inefficient in capturing our realistic GW signals having tiny residual eccentricities. We present reasons for our observations and provide certain possible remedies.
#### Dec 19, 2007
0712.3199 (/preprints)
2007-12-19, 18:27
## [0712.2598] Linking optical and infrared observations with gravitational wave sources through variability
Authors: Christopher W. Stubbs
Date: 16 Dec 2007
Abstract: Optical and infrared observations have thus far detected more celestial cataclysms than have been seen in gravity waves (GW). This argues that we should search for gravity wave signatures that correspond to flux variability seen at optical wavelengths, at precisely known positions. There is an unknown time delay between the optical and gravitational transient, but knowing the source location precisely specifies the corresponding time delays across the gravitational antenna network as a function of the GW-to-optical arrival time difference. Optical searches should detect virtually all supernovae that are plausible gravitational radiation sources. The transient optical signature expected from merging compact objects is not as well understood, but there are good reasons to expect detectable transient optical/IR emission from most of these sources as well. The next generation of deep wide-field surveys (for example PanSTARRS and LSST) will be sensitive to subtle optical variability, but we need to fill the ‘blind spots’ that exist in the Galactic plane, and for optically bright transient sources. In particular, a Galactic plane variability survey at 2 microns seems worthwhile. Science would benefit from closer coordination between the various optical survey projects and the gravity wave community.
#### Dec 19, 2007
0712.2598 (/preprints)
2007-12-19, 12:14
## [0712.2822] Classical Effective Field Theory and Caged Black Holes
Authors: Barak Kol, Michael Smolkin
Date: 18 Dec 2007
Abstract: Matched Asymptotic Expansion (MAE) is a useful technique in General Relativity and other fields whenever interaction takes place between physics at two different length scales. Here MAE is argued to be equivalent quite generally to Classical Effective Field Theory (ClEFT) where one (or more) of the zones is replaced by an effective theory whose terms are organized in order of increasing irrelevancy, as demonstrated by Goldberger and Rothstein in a certain gravitational context. The ClEFT perspective has advantages as the procedure is clearer, it allows a representation via Feynman diagrams, and divergences can be regularized and renormalized in standard field theoretic methods. As a side product we obtain a wide class of classical examples of regularization and renormalization, concepts which are usually associated with Quantum Field Theories.
We demonstrate these ideas through the thermodynamics of caged black holes, both simplifying the non-rotating case, and computing the rotating case. In particular we are able to replace the computation of six two-loop diagrams by a single factorizable two-loop diagram, as well as compute certain new three-loop diagrams. The results generalize to arbitrary compactification manifolds. For caged rotating black holes we obtain the leading correction for all thermodynamic quantities. The angular momentum is found to non-renormalize at leading order.
#### Dec 19, 2007
0712.2822 (/preprints)
2007-12-19, 10:49
## [0712.2542] A gravitational-wave probe of effective quantum gravity
Authors: Stephon Alexander, Lee Samuel Finn, Nicolas Yunes
Date: 15 Dec 2007
Abstract: The Green-Schwarz anomaly-cancelling mechanism in string theories requires a Chern-Simons term in the Einstein-Hilbert action, which leads to an amplitude birefringence of spacetime for the propagation of gravitational waves. While the degree of birefringence may be intrinsically small, its effects on a gravitational wave will accumulate as the wave propagates. The proposed Laser Interferometer Space Antenna (LISA) will be sensitive enough to observe the gravitational waves from sources at cosmological distances great enough that interesting bounds on the Chern-Simons may be found. Here we evaluate the effect of a Chern-Simons induced spacetime birefringence to the propagation of gravitational waves from such systems. We find that gravitational waves from in coalescing binary black hole system are imprinted with a signature of Chern-Simons gravity. This signature appears as a time-dependent change in the apparent orientation of the binary's orbital angular momentum with respect to the observer line-of-sight, with the change magnitude reflecting the integrated history of the Chern-Simons coupling over the worldline of a radiation wavefront. While spin-orbit coupling in the binary system will also lead to an evolution of the system's orbital angular momentum, the time dependence and other details of this \emph{real} effect are different than the \emph{apparent} effect produced by Chern-Simons birefringence, allowing the two effects to be separately identified.
#### Dec 17, 2007
0712.2542 (/preprints)
2007-12-17, 20:21
## [0712.2523] Searches for Gravitational Waves from Binary Neutron Stars: A Review
Authors: Warren G. Anderson, Jolien D. E. Creighton
Date: 15 Dec 2007
Abstract: A new generation of observatories is looking for gravitational waves. These waves, emitted by highly relativistic systems, will open a new window for ob- servation of the cosmos when they are detected. Among the most promising sources of gravitational waves for these observatories are compact binaries in the final min- utes before coalescence. In this article, we review in brief interferometric searches for gravitational waves emitted by neutron star binaries, including the theory, instru- mentation and methods. No detections have been made to date. However, the best direct observational limits on coalescence rates have been set, and instrumentation and analysis methods continue to be refined toward the ultimate goal of defining the new field of gravitational wave astronomy.
#### Dec 17, 2007
0712.2523 (/preprints)
2007-12-17, 20:21
## [0712.2050] Search of S3 LIGO data for gravitational wave signals from spinning black hole and neutron star binary inspirals
Authors: The LIGO Scientific Collaboration: B. Abbott, et al
Date: 12 Dec 2007
Abstract: We report on the first dedicated search for gravitational waves emitted during the inspiral of compact binaries with spinning component bodies. We analyze 788 hours of data collected during the third science run (S3) of the LIGO detectors. We searched for binary systems using a detection template family designed specially to capture the effects of spin-induced precession. The template bank we employed was found to yield high matches with our spin-modulated target waveform for binaries with masses in the asymmetric range 1.0 M_{\odot} < m_1 < 3.0 M_{\odot} and 12.0 M_{\odot} < m_{2} < 20.0 M_{\odot} which is where we would expect the spin of the binary's components to have significant effect. We find that our search of S3 LIGO data had good sensitivity to binaries in the Milky Way and to a small fraction of binaries in M31 and M33 with masses in the range 1.0 M_{\odot} < m_{1}, m_{2} < 20.0 M_{\odot}. No gravitational wave signals were identified during this search. Assuming a binary population with a Gaussian distribution of component body masses of a prototypical neutron star - black hole system with m_1 \simeq 1.35 M_{\odot} and m_2 \simeq 5 M_{\odot}, we calculate the 90% confidence upper limit on the rate of coalescence of these systems to be 15.9 yrˆ-1 L_10 ˆ-1, where L_10 is 10ˆ10 times the blue light luminosity of the Sun.
#### Dec 16, 2007
0712.2050 (/preprints)
2007-12-16, 20:29
## [0712.2032] Comment on `On the next-to-leading order gravitational spin(1)-spin(2) dynamics' by J. Steinhoff et al
Authors: Rafael A. Porto (UCSB), Ira Z. Rothstein (CMU)
Date: 12 Dec 2007
Abstract: In this comment we explain the discrepancy found between the results in arXiv:0712.1716v1 for the 3PN spin-spin potential and those previously derived in gr-qc/0604099. We point out that to compare one must include sub-leading lower order spin-orbit effects which contribute to the spin-spin potential once one transforms to the PN frame. When these effects are included the results in arXiv:0712.1716v1 do indeed reproduce those found in gr-qc/0604099.
#### Dec 14, 2007
0712.2032 (/preprints)
2007-12-14, 05:52
## [0712.1144] Pre-Merger Localization of Gravitational-Wave Standard Sirens With LISA: Triggered Search for an Electromagnetic Counterpart
Authors: Bence Kocsis (Harvard), Zoltan Haiman (Columbia), Kristen Menou (Columbia)
Date: 7 Dec 2007
Abstract: Electromagnetic (EM) counterparts to supermassive black hole binary mergers observed by LISA can be localized to within the field of view of astronomical instruments ~10 degˆ2 hours to weeks prior to coalescence. The temporal coincidence of any prompt EM counterpart with a gravitationally-timed merger may offer the best chance of identifying a unique host galaxy. We discuss the challenges posed by searches for prompt EM counterparts and propose novel observational strategies to address them. In particular, we discuss the size and shape evolution of the LISA localization error ellipses on the sky, and quantify the requirements for dedicated EM surveys of the area prior to coalescence. A triggered EM counterpart search campaign will require monitoring a several-square degree area. It could aim for variability at the 24-27 mag level in optical bands, for example, which corresponds to 1-10% of the Eddington luminosity of the prime LISA sources of 10ˆ6-10ˆ7 Msun BHs at z=1-2, on time-scales of minutes to hours, the orbital time-scale of the binary in the last 2-4 weeks. A cross-correlation of the period of any variable EM signal with the quasi-periodic gravitational waveform over 10-1000 cycles may aid the detection. Alternatively, EM searches can detect a transient signal accompanying the coalescence. We highlight the measurement of differences in the arrival times of photons and gravitons from the same cosmological source as a valuable independent test of the massive character of gravity, and of possible violations of Lorentz invariance in the gravity sector.
#### Dec 13, 2007
0712.1144 (/preprints)
2007-12-13, 08:54
## [0712.1716] On the next-to-leading order gravitational spin(1)-spin(2) dynamics
Authors: Jan Steinhoff, Steven Hergt, Gerhard Schäfer
Date: 11 Dec 2007
Abstract: Based on recents developments by the authors it is shown that the next-to-leading order spin(1)-spin(2) coupling potential recently derived by Porto and Rothstein cannot be regarded as correct if their variables, as claimed, belong to canonical ones in standard manner.
#### Dec 11, 2007
0712.1716 (/preprints)
2007-12-11, 20:20
## [0708.0414] Modeling of Emission Signatures of Massive Black Hole Binaries: I Methods
Authors: Tamara Bogdanovic (1,2), Britton D. Smith (1), Steinn Sigurdsson (1), Michael Eracleous (1) ((1) Pennsylvania State University,(2) University of Maryland)
Date: 2 Aug 2007
Abstract: We model the electromagnetic signatures of massive black hole binaries (MBHBs) with an associated gas component. The method comprises numerical simulations of relativistic binaries and gas coupled with calculations of the physical properties of the emitting gas. We calculate the UV/X-ray and the Halpha light curves and the Halpha emission profiles. The simulations are carried out with a modified version of the parallel tree SPH code Gadget. The heating, cooling, and radiative processes are calculated for two different physical scenarios, where the gas is approximated as a black-body or a solar metallicity gas. The calculation for the solar metallicity scenario is carried out with the photoionization code Cloudy. We focus on sub-parsec binaries which have not yet entered the gravitational radiation phase. The results from the first set of calculations, carried out for a coplanar binary and gas disk, suggest that there are pronounced outbursts in the X-ray light curve during pericentric passages. If such outbursts persist for a large fraction of the lifetime of the system, they can serve as an indicator of this type of binary. The predicted Halpha emission line profiles may be used as a criterion for selection of MBHB candidates from existing archival data. The orbital period and mass ratio of a binary may be inferred after carefully monitoring the evolution of the Halpha profiles of the candidates. The discovery of sub-parsec binaries is an important step in understanding of the merger rates of MBHBs and their evolution towards the detectable gravitational wave window.
#### Dec 11, 2007
0708.0414 (/preprints)
2007-12-11, 20:20
## [0712.1575] Fundamental properties and applications of quasi-local black hole horizons
Date: 10 Dec 2007
Abstract: The traditional description of black holes in terms of event horizons is inadequate for many physical applications, especially when studying black holes in non-stationary spacetimes. In these cases, it is often more useful to use the quasi-local notions of trapped and marginally trapped surfaces, which lead naturally to the framework of trapping, isolated, and dynamical horizons. This framework allows us to analyze diverse facets of black holes in a unified manner and to significantly generalize several results in black hole physics. It also leads to a number of applications in mathematical general relativity, numerical relativity, astrophysics, and quantum gravity. In this review, I will discuss the basic ideas and recent developments in this framework, and summarize some of its applications with an emphasis on numerical relativity.
#### Dec 10, 2007
0712.1575 (/preprints)
2007-12-10, 18:06
## [0712.1578] The cross-correlation search for periodic gravitational waves
Date: 10 Dec 2007
Abstract: In this paper we study the use of cross-correlations between multiple gravitational wave (GW) data streams for detecting long-lived periodic signals. Cross-correlation searches between data from multiple detectors have traditionally been used to search for stochastic GW signals, but recently they have also been used in directed searches for periodic GWs. Here we further adapt the cross-correlation statistic for periodic GW searches by taking into account both the non-stationarity and the long term-phase coherence of the signal. We study the statistical properties and sensitivity of this search, its relation to existing periodic wave searches, and describe the precise way in which the cross-correlation statistic interpolates between semi-coherent and fully-coherent methods. Depending on the maximum duration over we wish to preserve phase coherence, the cross-correlation statistic can be tuned to go from a standard cross-correlation statistic using data from distinct detectors, to the semi-coherent time-frequency methods with increasing coherent time baselines, and all the way to a full coherent search. This leads to a unified framework for studying periodic wave searches and can be used to make informed trade-offs between computational cost, sensitivity, and robustness against signal uncertainties.
#### Dec 10, 2007
0712.1578 (/preprints)
2007-12-10, 18:06
## [0712.1030] Bayesian comparison of Post-Newtonian approximations of gravitational wave chirp signals
Authors: R. Umstaetter, M. Tinto
Date: 6 Dec 2007
Abstract: We estimate the probability of detecting a gravitational wave signal from coalescing compact binaries in simulated data from a ground-based interferometer detector of gravitational radiation using Bayesian model selection. The simulated waveform of the chirp signal is assumed to be a spin-less Post-Newtonian (PN) waveform of a given expansion order, while the searching template is assumed to be either of the same Post-Newtonian family as the simulated signal or one level below its Post-Newtonian expansion order. Within the Bayesian framework, and by applying a reversible jump Markov chain Monte Carlo simulation algorithm, we compare PN1.5 vs. PN2.0 and PN3.0 vs. PN3.5 wave forms by deriving the detection probabilities, the statistical uncertainties due to noise as a function of the SNR, and the posterior distributions of the parameters. Our analysis indicates that the detection probabilities are not compromised when simplified models are used for the comparison, while the accuracies in the determination of the parameters characterizing these signals can be significantly worsened, no matter what the considered Post-Newtonian order expansion comparison is.
#### Dec 06, 2007
0712.1030 (/preprints)
2007-12-06, 19:53
## [0712.0196] Robust Bayesian detection of unmodelled bursts
Authors: Antony C Searle, Patrick J Sutton, Massimo Tinto, Graham Woan
Date: 2 Dec 2007
Abstract: A Bayesian treatment of the problem of detecting an unmodelled gravitational wave burst with a global network of gravitational wave observatories reveals that several previously proposed statistics have implicit biases that render them sub-optimal for realistic signal populations.
#### Dec 04, 2007
0712.0196 (/preprints)
2007-12-04, 08:58
## [0712.0343] Gravitational-wave data analysis using binary black-hole waveforms
Authors: P. Ajith
Date: 3 Dec 2007
Abstract: Coalescing binary black-hole systems are among the most promising sources of gravitational waves for ground-based interferometers. While the \emph{inspiral} and \emph{ring-down} stages of the binary black-hole coalescence are well-modelled by analytical approximation methods in general relativity, the recent progress in numerical relativity has enabled us to compute accurate waveforms from the \emph{merger} stage also. This has an important impact on the search for gravitational waves from binary black holes. In particular, while the current gravitational-wave searches look for each stage of the coalescence separately, combining the results from analytical and numerical relativity enables us to \emph{coherently} search for all three stages using a single template family. ‘Complete’ binary black-hole waveforms can now be produced by matching post-Newtonian waveforms with those computed by numerical relativity. These waveforms can be parametrised to produce analytical waveform templates. The ‘complete’ waveforms can also be used to estimate the efficiency of different search methods aiming to detect signals from black-hole coalescences. This paper summarises some recent efforts in this direction.
#### Dec 04, 2007
0712.0343 (/preprints)
2007-12-04, 08:58
|
2018-04-26 20:47:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5588551759719849, "perplexity": 2375.3653522298277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948549.21/warc/CC-MAIN-20180426203132-20180426223132-00273.warc.gz"}
|
https://www.gamedev.net/forums/topic/327390-getting-the-position-after-rotation/
|
# Getting the position after rotation
This topic is 4747 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
How would I get the position after I rotate? In my last post I got my line to rotate around a point, now I need to know where the two points are when I rotate so I can test if they collide with another line. I suppose the math would be calculated with where the rotation point is.
##### Share on other sites
I have an idea but its only part of the solution. I would get the distance of the center of rotation to the two end points. From there I would get the next position of rotation with Sin and Cos. Since it is a radius, I tried using my circle formula and now my line is way out of wack. How would I get the angle? Lets say I have a vertical line with points 100, 100, to 100, 200, and the center of rotation is 100, 200. So it would rotate around the bottom point. If I rotated it to the right a point, the bottom would stay the same because there is no distance, what about the top point? If anyone could help me out it would be greatly appreciated. thanks.
##### Share on other sites
Simply multiply the points' locations by the transformation matrix used to rotate them. The results are the rotated locations. If you use OpenGL, then you have to compute the transformation matrix yourself. If you use D3D, then you already have the transformation matrix.
##### Share on other sites
How would you do that.
new_X1 = x1 * RotationMatrix
new_Y1 = y1 * RotationMatrix
new_X2 = x2 * RotationMatrix
new_Y2 = y2 * RotationMatrix
Or do i do it with the MatrixMultiply function?
##### Share on other sites
How about D3DXVec3TransformCoord (if you're using d3d) or there should be an operator overload or function in your math library that does similar.
##### Share on other sites
It gives me incorrect positions when I use that, maybe I'm doing it wrong.
public override void OnRotate(){ Vector3 temp1, temp2, temp3, temp4; temp2 = new Vector3(); temp4 = new Vector3(); temp2.X = m_x1; temp2.Y = m_y1; temp4.X = m_x2; temp4.Y = m_y2; temp1 = Vector3.TransformCoordinate(temp2, mRotation); temp3 = Vector3.TransformCoordinate(temp4, mRotation); SetPoints(temp1.X, temp1.Y, temp3.X, temp3.Y);}
[Edited by - cptrnet on June 22, 2005 5:04:40 PM]
##### Share on other sites
does anyone see what Im doing wrong?
• 11
• 18
• 12
• 22
• 9
• ### Forum Statistics
• Total Topics
631397
• Total Posts
2999822
×
|
2018-06-21 20:32:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23322288691997528, "perplexity": 1301.2607335093678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864257.17/warc/CC-MAIN-20180621192119-20180621212119-00611.warc.gz"}
|
https://labs.tib.eu/arxiv/?author=Luis%20del%20Peral
|
• We measure the energy emitted by extensive air showers in the form of radio emission in the frequency range from 30 to 80 MHz. Exploiting the accurate energy scale of the Pierre Auger Observatory, we obtain a radiation energy of 15.8 \pm 0.7 (stat) \pm 6.7 (sys) MeV for cosmic rays with an energy of 1 EeV arriving perpendicularly to a geomagnetic field of 0.24 G, scaling quadratically with the cosmic-ray energy. A comparison with predictions from state-of-the-art first-principle calculations shows agreement with our measurement. The radiation energy provides direct access to the calorimetric energy in the electromagnetic cascade of extensive air showers. Comparison with our result thus allows the direct calibration of any cosmic-ray radio detector against the well-established energy scale of the Pierre Auger Observatory.
• Neutrinos in the cosmic ray flux with energies near 1 EeV and above are detectable with the Surface Detector array of the Pierre Auger Observatory. We report here on searches through Auger data from 1 January 2004 until 20 June 2013. No neutrino candidates were found, yielding a limit to the diffuse flux of ultra-high energy neutrinos that challenges the Waxman-Bahcall bound predictions. Neutrino identification is attempted using the broad time-structure of the signals expected in the SD stations, and is efficiently done for neutrinos of all flavors interacting in the atmosphere at large zenith angles, as well as for "Earth-skimming" neutrino interactions in the case of tau neutrinos. In this paper the searches for downward-going neutrinos in the zenith angle bins $60^\circ-75^\circ$ and $75^\circ-90^\circ$ as well as for upward-going neutrinos, are combined to give a single limit. The $90\%$ C.L. single-flavor limit to the diffuse flux of ultra-high energy neutrinos with an $E^{-2}$ spectrum in the energy range $1.0 \times 10^{17}$ eV - $2.5 \times 10^{19}$ eV is $E_\nu^2 dN_\nu/dE_\nu < 6.4 \times 10^{-9}~ {\rm GeV~ cm^{-2}~ s^{-1}~ sr^{-1}}$.
• A measurement of the cosmic-ray spectrum for energies exceeding $4{\times}10^{18}$ eV is presented, which is based on the analysis of showers with zenith angles greater than $60^{\circ}$ detected with the Pierre Auger Observatory between 1 January 2004 and 31 December 2013. The measured spectrum confirms a flux suppression at the highest energies. Above $5.3{\times}10^{18}$ eV, the "ankle", the flux can be described by a power law $E^{-\gamma}$ with index $\gamma=2.70 \pm 0.02 \,\text{(stat)} \pm 0.1\,\text{(sys)}$ followed by a smooth suppression region. For the energy ($E_\text{s}$) at which the spectral flux has fallen to one-half of its extrapolated value in the absence of suppression, we find $E_\text{s}=(5.12\pm0.25\,\text{(stat)}^{+1.0}_{-1.2}\,\text{(sys)}){\times}10^{19}$ eV.
The Japanese Experiment Module (JEM) Extreme Universe Space Observatory (EUSO) will be launched and attached to the Japanese module of the International Space Station (ISS). Its aim is to observe UV photon tracks produced by ultra-high energy cosmic rays developing in the atmosphere and producing extensive air showers. The key element of the instrument is a very wide-field, very fast, large-lense telescope that can detect extreme energy particles with energy above $10^{19}$ eV. The Atmospheric Monitoring System (AMS), comprising, among others, the Infrared Camera (IRCAM), which is the Spanish contribution, plays a fundamental role in the understanding of the atmospheric conditions in the Field of View (FoV) of the telescope. It is used to detect the temperature of clouds and to obtain the cloud coverage and cloud top altitude during the observation period of the JEM-EUSO main instrument. SENER is responsible for the preliminary design of the Front End Electronics (FEE) of the Infrared Camera, based on an uncooled microbolometer, and the manufacturing and verification of the prototype model. This paper describes the flight design drivers and key factors to achieve the target features, namely, detector biasing with electrical noise better than $100 \mu$V from $1$ Hz to $10$ MHz, temperature control of the microbolometer, from $10^{\circ}$C to $40^{\circ}$C with stability better than $10$ mK over $4.8$ hours, low noise high bandwidth amplifier adaptation of the microbolometer output to differential input before analog to digital conversion, housekeeping generation, microbolometer control, and image accumulation for noise reduction.
|
2019-12-12 13:16:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5494470000267029, "perplexity": 1226.2219018725605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540543850.90/warc/CC-MAIN-20191212130009-20191212154009-00340.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-p-section-p-1-algebraic-expressions-mathematical-models-and-real-numbers-exercise-set-page-17/17
|
## Precalculus (6th Edition) Blitzer
$10°C$
Start with formula: $C=\frac{5}{9}(F-32)$. Plug in $50$ for $F$: $C=\frac{5}{9}(50-32)$. Simplify to solve for C: $C=\frac{5}{9}(18)=10$.
|
2018-04-24 21:32:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9841548204421997, "perplexity": 1912.2374720531157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947328.78/warc/CC-MAIN-20180424202213-20180424222213-00061.warc.gz"}
|
https://socratic.org/questions/57ddcb6d7c0149718679451d
|
# Question #9451d
Given that the vectors have same length, it is meant that they have same magnitude. Let this be a and angle between them is $\theta$. By the problem the resultant of the two being twice of either i.e. 2a, we can write
${\left(2 a\right)}^{2} = {a}^{2} + {a}^{2} + 2 \times a \times a \cos \theta$
$\implies \cos \theta = 1 \implies \theta = {0}^{\circ}$
|
2020-07-10 12:36:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9169014096260071, "perplexity": 300.63868052189315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655908294.32/warc/CC-MAIN-20200710113143-20200710143143-00450.warc.gz"}
|
https://tex.stackexchange.com/questions/367070/beamer-custom-colors-not-working-as-expected
|
# Beamer custom colors not working as expected
I'm trying to set a custom color theme in Beamer, and it's not turning out how I thought it should.
Here's my color theme:
\definecolor{NASAred}{RGB}{252, 61, 33}
\definecolor{NASAblue}{RGB}{11, 61, 145}
\setbeamercolor{title}{bg=NASAblue, fg=white} % title block on first slide
\setbeamercolor{palette primary}{bg=NASAblue, fg=white} %right-hand side of bottom
\setbeamercolor{palette secondary}{bg=NASAred, fg=white} % center bottom
\setbeamercolor{palette tertiary}{bg=NASAblue,fg=white} % left bottom
\setbeamercolor{palette quaternary}{bg=NASAred,fg=white} %
\setbeamercolor{section in toc}{fg=NASAblue} % TOC sections
\setbeamercolor{item projected}{fg=NASAblue, bg=white} %
\setbeamercolor{frametitle}{fg=NASAblue,bg=white} %
\setbeamercolor{local structure}{fg=NASAblue} %
\setbeamercolor{item projected}{fg=NASAblue,bg=white} %
\setbeamertemplate{itemize item}{\color{NASAblue}$\bullet$} %
\setbeamertemplate{itemize subitem}{\color{NASAblue}\scriptsize{$\bullet$}}%
And here's my results:
Zoomed in:
Neither of the colors (which I got the RGB for from the official NASA style guide) look like the colors in the actual meatball (or like the samples in the style guide, when I put them next to each other). In particular, the blue looks too dark and the red looks too orange. I've tested this on three different monitors, so now I suspect that there are some darkening or lightening rules built into the custom colors in Beamer, instead of just using them as-is, but I can't find any documentation about it.
Does anyone know what's going on?
If you look at p. 12 of the design guide, the colours in the "Full color insignia" seems to be darker then in the logo on your slides - or on the logos which one finds on the internet.
## Workaround:
I took the .svg of the meatball logo from Wikipedia and opened it with inkscape. For example for the blue area, I see the following values:
Either use these values for defining the beamer colours or take the values from the style guide and adjust the image accordingly.
\documentclass{beamer}
\useoutertheme{infolines}
\definecolor{NASAred}{RGB}{238, 41, 61}
\definecolor{NASAblue}{RGB}{26, 93, 173}
\setbeamercolor{title}{bg=NASAblue, fg=white} % title block on first slide
\setbeamercolor{palette primary}{bg=NASAblue, fg=white} %right-hand side of bottom
\setbeamercolor{palette secondary}{bg=NASAred, fg=white} % center bottom
\setbeamercolor{palette tertiary}{bg=NASAblue,fg=white} % left bottom
\setbeamercolor{palette quaternary}{bg=NASAred,fg=white} %
\setbeamercolor{section in toc}{fg=NASAblue} % TOC sections
\setbeamercolor{item projected}{fg=NASAblue, bg=white} %
\setbeamercolor{frametitle}{fg=NASAblue,bg=white} %
\setbeamercolor{local structure}{fg=NASAblue} %
\setbeamercolor{item projected}{fg=NASAblue,bg=white} %
\setbeamertemplate{itemize item}{\color{NASAblue}$\bullet$} %
\setbeamertemplate{itemize subitem}{\color{NASAblue}\scriptsize{$\bullet$}}%
\title{Text}
\begin{document}
\begin{frame}
\titlepage
\centering
\includegraphics[width=.3\textwidth]{2000px-NASA_logo}
\end{frame}
\end{document}
• @MissMonicaE If you can't alter the logo, why don't you try with \definecolor{NASAred}{RGB}{238, 41, 61} \definecolor{NASAblue}{RGB}{26, 93, 173}? – user36296 Apr 28 '17 at 13:11
|
2019-10-16 16:46:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5618659257888794, "perplexity": 9495.242056879724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669057.0/warc/CC-MAIN-20191016163146-20191016190646-00443.warc.gz"}
|
https://gmatclub.com/forum/two-pieces-of-fruit-are-selected-out-of-a-group-of-8-pieces-142146.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 19 Jun 2018, 03:08
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Two pieces of fruit are selected out of a group of 8 pieces
Author Message
TAGS:
### Hide Tags
Manager
Status: Fighting hard
Joined: 04 Jul 2011
Posts: 68
GMAT Date: 10-01-2012
Two pieces of fruit are selected out of a group of 8 pieces [#permalink]
### Show Tags
09 Nov 2012, 01:34
3
2
00:00
Difficulty:
(N/A)
Question Stats:
39% (02:05) correct 61% (02:25) wrong based on 246 sessions
### HideShow timer Statistics
Two pieces of fruit are selected out of a group of 8 pieces of fruit consisting only of apples and bananas. What is the probability of selecting exactly 2 bananas?
(1) The probability of selecting exactly 2 apples is greater than 1/2.
(2) The probability of selecting 1 apple and 1 banana in either order is greater than 1/3.
_________________
I will rather do nothing than be busy doing nothing - Zen saying
VP
Joined: 02 Jul 2012
Posts: 1194
Location: India
Concentration: Strategy
GMAT 1: 740 Q49 V42
GPA: 3.8
WE: Engineering (Energy and Utilities)
Re: Two pieces of fruit are selected out of a group of 8 pieces [#permalink]
### Show Tags
09 Nov 2012, 03:27
3
1
Pansi wrote:
Two pieces of fruit are selected out of a group of 8 pieces of fruit consisting only of apples and bananas. What is the probability of selecting exactly 2 bananas?
(1) The probability of selecting exactly 2 apples is greater than ½.
(2) The probability of selecting 1 apple and 1 banana in either order is greater than 1/3
Total No. of ways of selecting = 8C2 = 28
1)No. of apples = 7, No. of bananas = 1, Probability : $$\frac{7C2}{8C2} = \frac{21}{28}$$
No. of apples = 6, No. of bananas = 2, Probability : $$\frac{6C2}{8C2} = \frac{15}{28}$$
No. of apples = 5, No. of bananas = 3, Probability : $$\frac{5C2}{8C2} = \frac{10}{28}$$
So, No. of apples can be 6 or 7. Insufficient
2)No. of apples = 7, No. of bananas = 1, Probability : $$\frac{7C1*1C1}{8C2} = \frac{7}{28}$$
No. of apples = 6, No. of bananas = 2, Probability : $$\frac{6C1*2C1}{8C2} = \frac{12}{28}$$
No. of apples = 5, No. of bananas = 3, Probability : $$\frac{5C1*3C1}{8C2} = \frac{15}{28}$$
So, No. of apples can be 5 or 6. ( More values other than 7 are also possible, but two values are enough to make the statement insufficient.)Insufficient
1 & 2 together. No. Of apples = 6, No. of bananas = 2. Enough info to find what is required. Sufficient.
Although, I'm not very strong at combinatronics and hence I'm not 100% sure of my method. Also since the question states that there ARE bananas, I'm assuming that no. of apples cannot be 8.
Kudos Please... If my post helped.
_________________
Did you find this post helpful?... Please let me know through the Kudos button.
Thanks To The Almighty - My GMAT Debrief
GMAT Reading Comprehension: 7 Most Common Passage Types
Math Expert
Joined: 02 Sep 2009
Posts: 46162
Re: Two pieces of fruit are selected out of a group of 8 pieces [#permalink]
### Show Tags
09 Nov 2012, 03:43
6
3
Two pieces of fruit are selected out of a group of 8 pieces of fruit consisting only of apples and bananas. What is the probability of selecting exactly 2 bananas?
Say there are $$x$$ bananas and $$y$$ ($$y=8-x$$) apples. The question is $$P(bb)=\frac{x}{8}*\frac{x-1}{7}=?$$. Basically we need to find how many bananas are there.
(1) The probability of selecting exactly 2 apples is greater than 1/2 --> $$\frac{y}{8}*\frac{y-1}{7}>\frac{1}{2}$$ --> $$y(y-1)>28$$ --> $$y$$ can be 6, 7, or 8, thus $$x$$ can be 2, 1, or 0. not sufficient.
(2) The probability of selecting 1 apple and 1 banana in either order is greater than 1/3. $$2*\frac{x}{8}*\frac{8-x}{7}>\frac{1}{3}$$ --> $$x(8-x)>\frac{28}{3}=9\frac{1}{3}$$, thus $$x$$ can be 2, 3, 4, 5, or 6. Not sufficient.
(1)+(2) From above x can only be 2. Sufficient.
Hope it's clear.
_________________
VP
Joined: 02 Jul 2012
Posts: 1194
Location: India
Concentration: Strategy
GMAT 1: 740 Q49 V42
GPA: 3.8
WE: Engineering (Energy and Utilities)
Re: Two pieces of fruit are selected out of a group of 8 pieces [#permalink]
### Show Tags
09 Nov 2012, 03:54
Bunuel wrote:
Two pieces of fruit are selected out of a group of 8 pieces of fruit consisting only of apples and bananas. What is the probability of selecting exactly 2 bananas?
Say there are $$x$$ bananas and $$y$$ ($$y=8-x$$) apples. The question is $$P(bb)=\frac{x}{8}*\frac{x-1}{7}=?$$. Basically we need to find how many bananas are there.
(1) The probability of selecting exactly 2 apples is greater than 1/2 --> $$\frac{y}{8}*\frac{y-1}{7}>\frac{1}{2}$$ --> $$y(y-1)>28$$ --> $$y$$ can be 6, 7, or 8, thus $$x$$ can be 2, 1, or 0. not sufficient.
(2) The probability of selecting 1 apple and 1 banana in either order is greater than 1/3. $$2*\frac{x}{8}*\frac{8-x}{7}>\frac{1}{3}$$ --> $$x(8-x)>\frac{28}{3}=9\frac{1}{3}$$, thus $$x$$ can be 2, 3, 4, 5, or 6. Not sufficient.
(1)+(2) From above x can only be 2. Sufficient.
Hope it's clear.
Just concerned. When the question statement says that there are bananas AND apples, do we need to consider situations in which there are only apples or only bananas??? I'm asking this not for just this question but for the GMAT on the whole.
_________________
Did you find this post helpful?... Please let me know through the Kudos button.
Thanks To The Almighty - My GMAT Debrief
GMAT Reading Comprehension: 7 Most Common Passage Types
Manager
Joined: 02 Nov 2012
Posts: 95
Location: India
Concentration: Entrepreneurship, Strategy
WE: Other (Computer Software)
Re: Two pieces of fruit are selected out of a group of 8 pieces [#permalink]
### Show Tags
09 Nov 2012, 04:04
MacFauz wrote:
Bunuel wrote:
Two pieces of fruit are selected out of a group of 8 pieces of fruit consisting only of apples and bananas. What is the probability of selecting exactly 2 bananas?
Say there are $$x$$ bananas and $$y$$ ($$y=8-x$$) apples. The question is $$P(bb)=\frac{x}{8}*\frac{x-1}{7}=?$$. Basically we need to find how many bananas are there.
(1) The probability of selecting exactly 2 apples is greater than 1/2 --> $$\frac{y}{8}*\frac{y-1}{7}>\frac{1}{2}$$ --> $$y(y-1)>28$$ --> $$y$$ can be 6, 7, or 8, thus $$x$$ can be 2, 1, or 0. not sufficient.
(2) The probability of selecting 1 apple and 1 banana in either order is greater than 1/3. $$2*\frac{x}{8}*\frac{8-x}{7}>\frac{1}{3}$$ --> $$x(8-x)>\frac{28}{3}=9\frac{1}{3}$$, thus $$x$$ can be 2, 3, 4, 5, or 6. Not sufficient.
(1)+(2) From above x can only be 2. Sufficient.
Hope it's clear.
Just concerned. When the question statement says that there are bananas AND apples, do we need to consider situations in which there are only apples or only bananas??? I'm asking this not for just this question but for the GMAT on the whole.
Choice (2) makes it clear that there is banana in the group of fruits, doesn't it? And yeah, it's always bad to assume ANYTHING on gmat, especially for Data Sufficiency and CR questions! So, when considering choice (1) by itself, no. of bananas=0 should also be one of the options.
_________________
TH
Give me +1 Kudos if my post helped!
Manager
Joined: 28 Feb 2012
Posts: 112
GPA: 3.9
WE: Marketing (Other)
Re: Two pieces of fruit are selected out of a group of 8 pieces [#permalink]
### Show Tags
11 Nov 2012, 02:53
Bunuel wrote:
Two pieces of fruit are selected out of a group of 8 pieces of fruit consisting only of apples and bananas. What is the probability of selecting exactly 2 bananas?
Say there are $$x$$ bananas and $$y$$ ($$y=8-x$$) apples. The question is $$P(bb)=\frac{x}{8}*\frac{x-1}{7}=?$$. Basically we need to find how many bananas are there.
(1) The probability of selecting exactly 2 apples is greater than 1/2 --> $$\frac{y}{8}*\frac{y-1}{7}>\frac{1}{2}$$ --> $$y(y-1)>28$$ --> $$y$$ can be 6, 7, or 8, thus $$x$$ can be 2, 1, or 0. not sufficient.
(2) The probability of selecting 1 apple and 1 banana in either order is greater than 1/3. $$2*\frac{x}{8}*\frac{8-x}{7}>\frac{1}{3}$$ --> $$x(8-x)>\frac{28}{3}=9\frac{1}{3}$$, thus $$x$$ can be 2, 3, 4, 5, or 6. Not sufficient.
(1)+(2) From above x can only be 2. Sufficient.
Hope it's clear.
I have solved this question with similar logic, but answered E because i understoon the 2nd statement as no matter what is the order the probability will be greater than 1/3, but in your solution i see that "in either order" means in both ways. Could you please clarify that?
_________________
If you found my post useful and/or interesting - you are welcome to give kudos!
Math Expert
Joined: 02 Sep 2009
Posts: 46162
Re: Two pieces of fruit are selected out of a group of 8 pieces [#permalink]
### Show Tags
12 Nov 2012, 11:00
ziko wrote:
Bunuel wrote:
Two pieces of fruit are selected out of a group of 8 pieces of fruit consisting only of apples and bananas. What is the probability of selecting exactly 2 bananas?
Say there are $$x$$ bananas and $$y$$ ($$y=8-x$$) apples. The question is $$P(bb)=\frac{x}{8}*\frac{x-1}{7}=?$$. Basically we need to find how many bananas are there.
(1) The probability of selecting exactly 2 apples is greater than 1/2 --> $$\frac{y}{8}*\frac{y-1}{7}>\frac{1}{2}$$ --> $$y(y-1)>28$$ --> $$y$$ can be 6, 7, or 8, thus $$x$$ can be 2, 1, or 0. not sufficient.
(2) The probability of selecting 1 apple and 1 banana in either order is greater than 1/3. $$2*\frac{x}{8}*\frac{8-x}{7}>\frac{1}{3}$$ --> $$x(8-x)>\frac{28}{3}=9\frac{1}{3}$$, thus $$x$$ can be 2, 3, 4, 5, or 6. Not sufficient.
(1)+(2) From above x can only be 2. Sufficient.
Hope it's clear.
I have solved this question with similar logic, but answered E because i understoon the 2nd statement as no matter what is the order the probability will be greater than 1/3, but in your solution i see that "in either order" means in both ways. Could you please clarify that?
The probability of selecting 1 apple and 1 banana in either order equals to the probability of selecting an apple and then a banana (x/8*(8-x)/7) PLUS the probability of selecting a banana and then an apple ((x-8)/8*x/7) --> x/8*(8-x)/7+(8-x)/8*x/7=2*x/8*(8-x)/7.
Hope it's clear.
_________________
Non-Human User
Joined: 09 Sep 2013
Posts: 6997
Re: Two pieces of fruit are selected out of a group of 8 pieces [#permalink]
### Show Tags
24 Jan 2018, 12:01
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
Re: Two pieces of fruit are selected out of a group of 8 pieces [#permalink] 24 Jan 2018, 12:01
Display posts from previous: Sort by
|
2018-06-19 10:08:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5403347015380859, "perplexity": 791.336563106654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267862248.4/warc/CC-MAIN-20180619095641-20180619115641-00525.warc.gz"}
|
https://fides.fe.uni-lj.si/pyopus/download/0.10/docsrc/_build/html/gui.introduction.values.html
|
# 11.1.4. Specifying field values and identifiers¶
Fields for entering text (e.g. names, source code) are tajken literary. Fields for entering variables, parameters, options, and settings accept several data types
• integer and real numbers are treated as Python scalars of type int and float
• strings True and False are treated as scalar boolean values
• all other strings not containing whitespace are treated as strings
• space separated values are treated as lists
This has several limitations. For instance, you cannot specify a string containing whitespace. You also cannot specify an empty list or a list with no members. For such cases you can use the hash notation
#<Pythonic expression>
One would specify an empty list and a list with one element as
#[]
#['element']
The expression will be evaluated when the data in the GUI are dumped to a file, unless otherwise noted. The variables which were defined in the project are available in the evaluation environment.
The following hashed entries in the GUI result in identical dumped values
100
#100
#50+50
True
#True
#1==1
hello 3 4 5
#['hello', 3, 4, 5]
Identifiers are strings conforming to some simple rules.
• An identifier comprises only numbers, English letters, and underscores.
• It never starts with a number.
They are used for naming things in PyOPUS and the GUI.
|
2022-09-25 21:56:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29250970482826233, "perplexity": 2807.5544746738756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00433.warc.gz"}
|
https://blog.ropnop.com/hosting-clr-in-golang/
|
# Intro
A while back, I was nerd sniped on Twitter when someone asked if Go could be used to run .NET assemblies (DLLs or EXEs). Although I’ve been writing a lot of Go recently (and love it), I had never done much with Go on Windows, and certainly nothing advanced like interacting with syscalls or working with CGo before. This sounded like a really fun challenge and I decided to spend some time digging into it. After reading the amazing Black Hat Go book, I thought maybe I knew enough to make it work and that it couldn’t be that complicated. I was wrong. It was really hard. But in the end I got a PoC working, learned a ton, and decided to share my journey.
Before I jump in, I want to state that I was (and still am) a complete n00b when it comes to .NET. I’ve never written anything more advanced than a Hello World. I also don’t really know C/C++, and even less when it comes to C/C++ on Windows. With that being said, there are probably better ways to do this, and I probably have things wrong. But I wanted to share for others. Nobody is immediately an expert on anything, and code doesn’t come together magically the first time. I went from knowing nothing to having something working in a few weekends just by Googling and playing around. So this is the story of how I stumbled, bumbled, and Stack Overflow’d my way into running .NET assemblies in Golang.
## tl;dr
I’m releasing my PoC code on Github: go-clr. This package let’s you host the CLR and execute DLLs from disk or managed assemblies from memory. In short, you can run something like:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 import ( clr "github.com/ropnop/go-clr" log fmt ) func main() { var exeBytes = []byte{0xa1 ....etc} retCode, err := clr.ExecuteByteArray(exeBytes) if err != nil { log.Fatal(err) } fmt.Printf(["[+] Exit code: %d\n", retCode) }
Check out the examples for more code and the GoDocs for exposed structs and functions.
# Background - Syscalls vs CGo
There are two main ways within Go you can interact with Windows APIs: syscalls or CGo. Syscalls require identifying and loading DLLs from the system and finding the exported function you want to call, where CGo lets you write C to do the “heavy lifting” and then call it from Go. For example, here’s two ways to pop a Windows message box in Golang. First, using CGo to write the function in C and then calling it from Go:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 package main /* #include void SayHello() { MessageBox(0, "Hello World", "Helo", MB_OK); } */ import "C" func main() { C.SayHello() }
One of the problems with this approach is CGo requires a GNU Compiler, which means simply running go build out of the box won’t work. You need to set up a build system with something like msys2. Go is also very opinionated about where you can include header files from, and I encountered some errors importing more advanced things I needed. The other problem is you need to write C.
The other way to call the Windows API is using exported functions from DLLs. For example:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 // +build windows package main import ( "fmt" "syscall" "unsafe" ) func main() { user32 := syscall.MustLoadDLL("user32.dll") messageBox := user32.MustFindProc("MessageBoxW") text, _ := syscall.UTF16PtrFromString("Hello World!") caption, _ := syscall.UTF16PtrFromString("Hello") MB_OK := 0 ret, _, _ := messageBox.Call( uintptr(0), uintptr(unsafe.Pointer(text)), uintptr(unsafe.Pointer(caption)), uintptr(MB_OK)) fmt.Printf("Returned: %d\n", ret) }
This example requires quite a lot more Go code, but is much easier to build (a simple go build on Windows will work). It finds user32.dll and the exported MessageBoxW function, then makes a syscall to the function with necessary pointers. It requires converting strings to UTF16 pointers and makes use of unsafe.Pointer. I know, it looks really messy and confusing - but IMO this is the better way to do it. It let’s us write “pure” Go with no C dependencies. This is how I decided to write go-clr.
For a great introduction to calling the Windows API from Go and using unsafe, I found this article to be an amazing resource: https://medium.com/jettech/breaking-all-the-rules-using-go-to-call-windows-api-2cbfd8c79724
# Background - Calling Managed Code from Unmanaged Code
Like I mentioned, when I started this journey I had barely any knowledge of .NET and how it worked. I started by reading a few articles to understand what was necessary and the concept of “managed” vs “unmanaged” code came up. “Unmanaged” code usually refers to low level C/C++ code that is compiled and linked and “just works” if you execute the instructions. “Managed” code refers to code that is written to target .NET and will not “just work” without the CLR. I came to think of it conceptually as requiring an interpreter - e.g. you can’t just run Python code without Python being installed and calling the interpreter. You can’t just run .NET code without calling the CLR.
The Common Language Runtime, or CLR, is what is used by .NET to “interpret” the code. I still don’t fully understand it, but I conceptually came to think of it as Microsoft’s version of the Java Virtual Machine (JVM). When you write Java, it compiles to an intermediate bytecode and a JVM is required to execute it. Similarly then, when you write .NET, it compiles to an intermediate language and requires a CLR to execute it. I’m probably wrong and missing a lot, but but in the end all I needed to know was you need to create or attach to a CLR before you can execute .NET assemblies.
Knowing these basic terms and concepts helped greatly when Googling. I was able to Google things like “hosting CLR” and “running managed code from unmanaged” and get very good results.
Two of those results had excellent examples of running managed code from C/C++:
These two articles had examples of launching .NET assemblies from C. Since I knew how to open a message box in Go by calling the Windows API, I figured I had all the skills I needed to recreate their code in Go. How hard could it be? Yeah…I just had to draw the rest of the owl:
# Part 1 - Loading a Managed DLL from Disk
I set about trying to basically re-write the two examples I found into pure Go using syscalls. xpn’s code was only 70 lines, and after reading it over several times, I got the basic steps down that were necessary:
1. Create a “MetaHost” instance
2. Enumerate the runtimes
3. Get an “Interface” to that runtime
4. “Start” the Interface
5. Call “ExecuteInDefaultAppDomain” and pass in the DLL and arguments
What was extremely helpful to me was to have xpn’s code open in Visual Studio. This way I could right click on functions/constants and “View Definition” to see where they were coming from.
## Calling CLRCreateInstance
The first step in the whole process is to create a MetaHost instance by calling a native function. It looked like this in the sample code:
1 CLRCreateInstance(CLSID_CLRMetaHost, IID_ICLRMetaHost, (LPVOID*)&metaHost)
The MSDN docs on that function let me know that it is part of MSCorEE.dll. So to load in Go:
1 2 3 4 var ( modMSCoree = syscall.NewLazyDLL("mscoree.dll") procCLRCreateInstance = modMSCoree.NewProc("CLRCreateInstance") )
It takes 3 arguments: clsid, riid and ppInterface. The first two arguments are pointers to GUIDs, and the third is a pointer to a a pointer that will point to the new MetaInstance created. How to create the GUIDs in Go? First I right clicked in Visual Studio to see the definition for the constant CLSID_CLRMetaHost. It is defined in metahost.h and looks like this:
1 EXTERN_GUID(CLSID_CLRMetaHost, 0x9280188d, 0xe8e, 0x4867, 0xb3, 0xc, 0x7f, 0xa8, 0x38, 0x84, 0xe8, 0xde);
This took me several hours of Googling. I went down the wrong path at first of trying to convert the GUID to a string. But then I discovered the syscall package has a GUID type already. Fortunately, the metahost.h definitions matched up exactly with the parameters the struct expected, and I could re-create the GUID in Go by copying the values in like this:
1 2 import "golang.org/x/sys/windows" CLSID_CLRMetaHost = windows.GUID{0x9280188d, 0xe8e, 0x4867, [8]byte{0xb3, 0xc, 0x7f, 0xa8, 0x38, 0x84, 0xe8, 0xde}}
To get the pointer argument, I used an empty uintptr variable. The final call to the CLRCreateInstance function looked like this:
1 2 3 4 5 6 7 var pMetaHost uintptr hr, _, _ := procCLRCreateInstance.Call( uintptr(unsafe.Pointer(&CLSID_CLRMetaHost)), uintptr(unsafe.Pointer(&IID_ICLRMetaHost)), uintptr(unsafe.Pointer(&pMetaHost)) ) checkOK(hr, "procCLRCreateInstance")
I passed in unsafe pointers to the GUID structs and an unsafe pointer to a null pointer (essentially). If the function returns successfully, the value of pMetaHost should be populated with the actual memory address of the new CLR MetaHost instance.
The function returns an HRESULT. If the value is equal to 0, it was successful. So I wrote a helper function to compare an hr to zero and panic if it failed:
1 2 3 4 5 func checkOK(hr uintptr, caller string) { if hr != 0x0 { log.Fatalf("%s returned 0x%08x", caller, hr) } }
And it worked! I could see the value of pMetaHost was populated with an address. Now what to do with it?
## Recreating Interfaces in Go
This is where everything got really hard, really fast. Calling exported functions from DLLs was fairly straightforward, but now I was dealing with pointers which pointed to interfaces which pointed to other functions. I knew the next step in the chain was to call metaHost->EnumerateInstalledRuntimes, but all I had was a pointer to a metaHost object in memory.
Again, I am terrible at C/C++. But I knew enough to know that if memory layout all matched up, I could “cast” a pointer to an object. Fortunately, the same holds true for Go using the unsafe package. If I recreated the ICLRMetaHost interface as a struct in Go, it would be possible to convert my pointer to it. Once again, right clicking and “Viewing Definition” on the ICLRMetaHost interface let me see it defined in metahost.c. It appeared to be defined twice: once for C++ and once for C. I focused on the C interface:
Looked like it actually defined two interfaces: ICLRMetaHostVtbl and then ICLRMetaHost, which only has a pointer the Vtbl. I assumed that STDMETHODCALLTYPE would just be a pointer to a function, so a uintptr in Go would be fine. I followed along with the “C Structs & Go Structs” section on the post I referenced earlier and ended up with this defined in Go:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 //ICLRMetaHost Interface from metahost.h type ICLRMetaHost struct { vtbl *ICLRMetaHostVtbl } type ICLRMetaHostVtbl struct { QueryInterface uintptr AddRef uintptr Release uintptr GetRuntime uintptr GetVersionFromFile uintptr EnumerateInstalledRuntimes uintptr EnumerateLoadedRuntimes uintptr RequestRuntimeLoadedNotification uintptr QueryLegacyV2RuntimeBinding uintptr ExitProcess uintptr } func NewICLRMetaHostFromPtr(ppv uintptr) *ICLRMetaHost { return (*ICLRMetaHost)(unsafe.Pointer(ppv)) }
The NewICLRMetaHostFromPtr function takes the pointer I get from CLRCreateInstance and returns an ICLRMetaHost object. This all appeared to work fine. I even did a fmt.Printf("+%v", metaHost) on the object and could see that every struct field was populated with a pointer, so it looked like it was working.
Now to call a function like EnumerateInstalledRuntimes, I could use syscall.Syscall with the address of the function stored in ICLRMetaHost.vtbl.EnumerateInstalledRuntimes. Right?
## Calling Interface Methods
The EnumerateInstalledRuntimes method only takes in one parameter: a pointer to a pointer for the return value (an enumerator). So I implemented the call like this:
1 2 3 4 5 6 7 8 9 10 var pInstalledRuntimes uintptr hr, _, _ := syscall.Syscall( metaHost.vtbl.EnumerateInstalledRuntimes, 1, uintptr(unsafe.Pointer(&pInstalledRuntimes)), 0, 0, 0 ) checkOK(hr, "metaHost.EnumerateInstalledRuntimes")
Note: The 0s are necessary since syscall.Syscall requires 6 arguments, but only uses whatever is necessary.
Except…it didn’t work. No matter what I tried or tinkered with, I never got a good return value (i.e. 0). At this point I was well beyond my comfort level and had no idea how to troubleshoot, so I considered giving up and scrapping the whole idea. However one night I kept Googling certain terms I found in the C interface, like “vtbl” and “IUnknown” along with “golang”, and stumbled on a goldmine.
Turns out I can’t just treat these functions as native functions. They are methods of a COM object. I’ll be honest - I had heard of Component Object Model (COM), but knew nothing about it and didn’t have the time or patience to learn it. I still don’t understand it. But I found this amazing Stack Overflow answer about implementing COM methods in Golang: https://stackoverflow.com/questions/37781676/how-to-use-com-component-object-model-in-golang
Apparently, when calling a COM method, I have to make sure AddRef and Release are implemented, and pass a pointer to the object itself as the first argument. I essentially copied the code from that SO answer and boom - I got a good return code for the EnumerateInstalledRuntimes function:
1 2 3 4 5 6 7 8 9 func (obj *ICLRMetaHost) EnumerateInstalledRuntimes(pInstalledRuntimes *uintptr) uintptr { ret, _, _ := syscall.Syscall( obj.vtbl.EnumerateInstalledRuntimes, 2, uintptr(unsafe.Pointer(obj)), uintptr(unsafe.Pointer(pInstalledRuntimes)), 0) return ret }
## Implementing Additional Interfaces
That one SO answer cracked this all open for me. Now I knew how to implement the C style interfaces I was finding in header files in Visual Studio and call the functions I needed. Next up was implementing the IEnumUnknown interface, since that’s what EnumerateInstalledRuntimes points to. I got really good at quickly copying the interfaces from header files to Go with Vim macros, and I just copied/pasted the standard function implementations. One thing I did was not bother implementing functions I was never going to call - it didn’t matter as long as they were included in the struct definition for padding.
After the IEnumUnknown interface, I also needed the ICLRRuntimeInfo interface from metahost.h as well.
To enumerate installed runtime versions as strings, I had to do a little memory hackery. GetVersionString doesn’t actually return a string, but writes a UTF16 string to a spot in memory. So in Go, I allocated a 20 byte buffer array and passed the pointer to that as the spot to write the string to. Then I convert the buffer array to a UTF16 string that’s Go friendly. One thing I learned was to ensure I pass the pointer to the first element of the array - not the array itself. Ultimately the loop for enumerating runtimes looked like this:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 var rutimes []string var pRuntimeInfo uintptr var fetched = uint32(0) var versionString string versionStringBytes := make([]uint16, 20) versionStringSize := uint32(len(versionStringBytes)) var runtimeInfo *ICLRRuntimeInfo for { hr = installedRuntimes.Next(1, &pRuntimeInfo, &fetched) if hr != 0x0 { break } runtimeInfo = NewICLRRuntimeInfoFromPtr(pRuntimeInfo) if ret := runtimeInfo.GetVersionString(&versionStringBytes[0], &versionStringSize); ret != 0x0 { log.Fatalf("GetVersionString returned 0x%08x", ret) } versionString = syscall.UTF16ToString(versionStringBytes) runtimes = append(runtimes, versionString) } fmt.Printf("[+] Installed runtimes: %s\n", runtimes)
Amazingly, this actually worked the first time I ran it and I couldn’t believe it. Maybe I was finally starting to get the hang of writing C in Go ;)
## Executing a DLL From Disk
I won’t go in to all the other interfaces I created, but I essentially just went one by one converting the C from xpn and this blog. I ended up implementing:
Finally I got to the point where I was ready to try out my implementation of ICLRRuntimeHost’s ExecuteInDefaultAppDomain method. I wrote a dead simple C# program to test (which I still had to look up online how to do - told you..I don’t know C#…):
1 2 3 4 5 6 7 8 9 10 11 12 13 14 using System; using System.Windows.Forms; namespace TestDLL { public class HelloWorld { public static int SayHello(string foobar) { MessageBox.Show("Hello from a C# DLL!"); return 0; } } }
And I compiled it to a DLL with: csc -target:library -out:TestDLL.dll TestDLL.cs. Then in Go I converted the filename, type name, method name and argument to UTF16 string pointers and called the method:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 pDLLPath, _ := syscall.UTF16PtrFromString("TestDLL.dll") pTypeName, _ := syscall.UTF16PtrFromString("TestDLL.HelloWorld") pMethodName, _ := syscall.UTF16PtrFromString("SayHello") pArgument, _ := syscall.UTF16PtrFromString("foobar") var pReturnVal *uint16 hr = runtimeHost.ExecuteInDefaultAppDomain( pDLLPath, pTypeName, pMethodName, pArgument, pReturnVal ) checkOK(hr, "runtimeHost.ExecuteInDefaultAppDomain") fmt.Printf("[+] Assembly returned: 0x%x\n", pReturnVal)
And it worked!!
After getting it working, I cleaned up the code a little bit and added some helper functions. You can see the full example from start to finish here: DLLFromDisk.go
I also added a wrapper function that basically automates the entire process, which you can call with just:
1 2 3 4 5 ret, err := clr.ExecuteDLLFromDisk( "TestDLL.dll", "TestDLL.HelloWorld", "SayHello", "foobar")
So that was cool and all…but what I really wanted to be able to do was load the assembly from memory. This required the DLL existing on disk, and I had dreams and visions of downloading a DLL or embedding it inside a Go binary and just having it execute. I thought the hard part was done and this would be easy….I was wrong.
# Part 2 - Executing Assemblies from Memory
My initial thought was to leverage virtual filesystem in Go, like vfs or packr2 and keep the DLL in memory. Of course I soon realized that was impossible, since the path to the DLL was being passed to a native function I had no control over and that function would always look on disk.
I also looked through the MSDN documents for ICLRRuntimeHost and couldn’t find any reference to loading or executing things from memory. But I remembered hearing and seeing others do this in some offensive tools, so I turned back to Google and found two tools that were executing .NET assemblies from memory using native code:
Looking at that example code, I realized they had to use the deprecated CLR methods to achieve in memory execution. So I needed to rewrite and re-implement a lot more in Go, specifically the ICORRuntimeHost.
The new high level flow looked like this:
1. Create a MetaHost instance
2. Enumerate the installed runtimes
3. Get RuntimeInfo to latest installed version
4. Run BindAsLegacyV2Runtime()
5. Get ICORRuntimeHost interface
6. Get default app domain from interface
7. Load assembly into app domain
8. Find entrypoint to loaded assembly
9. Call entrypoint
Yeah…quite a bit more complicated than just running a DLL from disk.
After implementing ICORRuntimeHost and AppDomain in Go, I realized looking at Donut and GrayFrost’s code that in order to call the Load_3 method inside an AppDomain, the bytecode had to be in a specific format: namely a SafeArray.
This was a fun rabbit hole to go down (sarcasm). I needed to figure out how to convert a byte array in Go to a Safe Array in memory. First, I created a Go struct based on the definition in OAld.h:
1 2 3 4 5 6 7 8 9 10 11 12 13 type SafeArray struct { cDims uint16 fFeatures uint16 cbElements uint32 cLocks uint32 pvData uintptr rgsabound [1]SafeArrayBound } type SafeArrayBound struct { cElements uint32 lLbound int32 }
After a few more nights of Googling and reading public C/C++ code on Github that implemented safe arrays, I realized I could create a SafeArray through a native function (SafeArrayCreate), and then use a raw memory copy to put bytes in the correct place. First, to create the SafeArray:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 var rawBytes = []byte{0xaa....} // my executable loaded in a byte array modOleAuto, err := syscall.LoadDLL("OleAut32.dll") must(err) procSafeArrayCreate, err := modOleAuto.FindProc("SafeArrayCreate") must(err) size := len(rawBytes) sab := SafeArrayBound{ cElements: uint32(size), lLbound: 0, } runtime.KeepAlive(sab) vt := uint16(0x11) // VT_UI1 ret, _, _ := procSafeArrayCreate.Call( uintptr(vt), uintptr(1), uintptr(unsafe.Pointer(&sab))) sa := (*SafeArray)(unsafe.Pointer(ret))
I still couldn’t figure out what the hell vt is and should be. I honestly just copied the value from Donut (0x11) which corresponds to a VT_UI1, and it worked so I stuck with it.
The procedure returns a pointer to a created SafeArray. Now our actual data (the bytes) need to be copied into memory where safeArray.pvData points to. I couldn’t figure out a way to do this in native Go, so I imported and used RtlCopyMemory from ntdll.dll to perform a raw memory copy:
1 2 3 4 5 6 7 8 9 modNtDll, err := syscall.LoadDLL("ntdll.dll") must(err) procRtlCopyMemory, err := modNtDll.FindProc("RtlCopyMemory") must(err) ret, _, err = procRtlCopyMemory.Call( sa.pvData, uintptr(unsafe.Pointer(&rawBytes[0])), uintptr(size))
Since SafeArrayCreate allocates the memory based on the cElements value (which is equal to the size of our byte array), we can just copy directly to that point in memory. Surprisingly, this worked.
I ultimately ended up wrapping this in a helper function, CreateSafeArray, that I would take in a byte array and return a pointer to a SafeArray in memory.
## Finding and Calling the Entry Point
Once a SafeArray was created, it could be loaded into an AppDomain with the Load_3 method:
1 2 3 4 5 6 7 8 9 func (obj *AppDomain) Load_3(pRawAssembly uintptr, asmbly *uintptr) uintptr { ret, _, _ := syscall.Syscall( obj.vtbl.Load_3, 3, uintptr(unsafe.Pointer(obj)), uintptr(unsafe.Pointer(pRawAssembly)), uintptr(unsafe.Pointer(asmbly))) return ret }
This gave me a pointer to an Assembly object. I needed to next implement the Assembly interface in Go. Looking at in Visual Studio (from mscorlib.tlh) it looked like any other interface I had implemented:
But at this point, not matter what I tried, I kept getting memory out of bound exceptions when calling the get_EntryPoint and Invoke_3 methods. Something was really wrong and I couldn’t figure it out. Since I had passed my comfort zone wayyy back - I was again almost ready to just give up. For probably 6 straight nights I kept re-reading and re-writing my code over and over but couldn’t figure it out.
I eventually started using debuggers to read memory and compare hex dumps between a working C++ program and my Go program, but still couldn’t spot the difference.
I started searching Github for other Go projects that were doing Windows API calls and found w32 from James Hovious. I actually knew of this project before, but decided to really start reading his source code. I came across his implementation of IDispatch. I suddenly remembered that the interface definition for Assembly mentioned IDispatch. I saw in his code that the vtbl struct included some additional methods I didn’t have:
1 2 3 4 5 6 7 8 9 type pIDispatchVtbl struct { pQueryInterface uintptr pAddRef uintptr pRelease uintptr pGetTypeInfoCount uintptr pGetTypeInfo uintptr pGetIDsOfNames uintptr pInvoke uintptr }
I don’t know what an IDispatch is and WTF I have no idea what these other functions are or what they do, but it dawned on me that if they are missing from my struct definition, then the memory won’t line up correctly. I added them in to the start of the Assembly struct and everything started working! I wasted nearly 2 weeks chasing this bug. Sometimes I hate computers.
### Calling the Entry Point
After finding the method entry point and creating a MethodInfo object, the last step is to just invoke the function with the Invoke_3 method. This function is defined in mscorlib.tlh as:
My initial thought when I saw that was “great…what’s a VARIANT”. Looking at the definition of VARIANT in OAldl.h, my heart sank even further. It was some crazy struct with UNION definitions accounting for all different types of values. To simplify things, I decided to not bother trying to implement passing arguments to the method so I could just use a null variant and null pointer as the parameters to Invoke_3. This is what Dount and GrayFrost do as well.I searched for “variant” and “golang” and found some implementations in the go-ole project that I could use.
Putting it together:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 safeArray, err := CreateSafeArray(exebytes) must(err) var pAssembly uintptr hr = appDomain.Load_3(uintptr(unsafe.Pointer(&safeArray)), &pAssembly) checkOK(hr) assembly := NewAssemblyFromPtr(pAssembly) var pEntryPointInfo uintptr hr = assembly.GetEntryPoint(&pEntryPointInfo) checkOK(hr) methodInfo := NewMethodInfoFromPtr(pEntryPointInfo) var pRetCode uintptr nullVariant := Variant{ VT: 1, Val: uintptr(0), } hr = methodInfo.Invoke_3( nullVariant, uintptr(0), &pRetCode) checkOK(hr) fmt.Printf("[+] Executable returned code %d\n", pRetCode)
To test it, I created a Hello World C# EXE by Googling “Hello World C# EXE”:
1 2 3 4 5 6 7 8 9 10 11 12 using System; namespace TestExe { class HelloWorld { static void Main() { Console.WriteLine("hello fom a c# exe!"); } } }
I built it with csc TestExe.cs, then loaded it in to a byte array in Go:
1 2 exebytes, err := ioutil.ReadFile("TestExe.exe") must(err)
And then created a SafeArray from it and went through the magic incantations necessary to get everything into place, and….
IT WORKED!! After several weeks of reading, Googling, copying/pasting code, trying things, building, crashing, re-building, re-crashing, debugging, giving up, coming back, feeling stupid, feeling smart, feeling stupid again and then feeling accomplished - I finally got it working. What a journey.
You can see the full example for loading an EXE from memory here.
# Conclusion
This was a really challenging but very rewarding experiment. I went from knowing next to nothing about the CLR, COM, and Go syscalls to building something pretty cool. That being said - I don’t think this is “production” ready and probably never will be. To be honest, I don’t understand enough of what it’s doing to really be able to troubleshoot, and I’ve noticed it can be pretty unstable.
But my hope in releasing the code is to show how others how it’s possible and give building blocks to expand upon for writing custom tooling. I also wanted to write up this post about my journey to hopefully inspire others that you can be a complete noob on a topic and still figure things out - there’s so much good information out there in the depths of Google, Github, and Stack Overflow.
Looking forward to seeing how other’s use/build upon this research! Let me know if you have any questions (and I’ll try my best to answer). Also, if you understand the stuff better than me - please let me know where I’m wrong and where I could improve the code! For example I seem to be hitting garbage collection issues in Go where randomly things fail if I add to may “fmt.Printf” calls. I don’t get it. Sometimes I hate computers :)
-ropnop
## References and Acknowledgements
I found the following pages very helpful. In no particular order:
|
2020-03-29 14:24:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1870574653148651, "perplexity": 2658.3193219331088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494349.3/warc/CC-MAIN-20200329140021-20200329170021-00444.warc.gz"}
|
https://www.electro-tech-online.com/threads/stm32-and-st-library-gpio-and-possibly-clock-problems.125769/
|
# STM32 and ST library, GPIO and possibly clock problems
#### wattrod
##### New Member
I'm new to ARM development and trying to do some fundamental beginner stuff with STM32. I have a VL Discovery board, and I'm just trying to send serial data. I set up the port, relying largely on sample code (though at this point I think I mostly understand what it's trying to do), and it definitely sends something, but it comes out as garbage on the other end. This happens for a variety of different baud rates, even 4800 baud. So I figure the baud rate is inaccurate for some reason (but I'm unsure how -- I call a function in the ST library and hand it a baud rate, it does the calculations).
So I try to measure the system clock. I try to set up the MCO on A8 so I can put a scope on it, and I end up getting a very faint signal, +/-40mV, that looks like a very messy clock, with waves that aren't very square. Most importantly, the period is not consistent. Many of the waves are sort of the same length, but frequently there are really short ones, and much longer ones. I couldn't even get a viewable image on my old analog scope; I had to use a DSO to capture a screenful so I could look at it.
So then I try to just toggle a GPIO pin to see how consistent that is, flipping it back and forth with the BSRR and BRR registers in a while(1) loop, mostly following example code to do so. And I'm getting no output at all.
So, I'm really confused at this point.
Code:
RCC_APB2PeriphClockCmd(RCC_APB2Periph_GPIOB, ENABLE);
GPIO_InitStructure.GPIO_Mode = GPIO_Mode_Out_PP;
GPIO_InitStructure.GPIO_Pin = GPIO_Pin_1;
GPIO_InitStructure.GPIO_Speed = GPIO_Speed_50MHz;
GPIO_Init(GPIOB, &GPIO_InitStructure);
Simplest question first -- should the above be adequate for setting up the GPIO pin B1 for output? Is there anything else I need to do? I've looked at several examples online and in the ST library example code, and I can't see that they're doing anything else.
To toggle the bit, I'm just doing this:
Code:
while (1) {
bit = !bit;
if (bit) GPIOB->BSRR = 1; else GPIOB->BRR = 1;
}
I put a scope on the pin, and see nothing. I've tried several different pins on different ports, no difference.
FWIW, regarding the clock stuff, the chip is using an 8Mhz crystal with a PLL set for 24Mhz.
Edit -- I should probably add that I'm using KEIL for development, but I'm trying to stay away from KEIL-related code, eg. STM32_Init, etc. I'm using the ST library v3.5.0. I've deleted KEIL's ancient ST library to prevent it from using that, as most people seem to recommend.
Last edited:
#### wattrod
##### New Member
Sigh.
Ok, even if it's embarrassing, I should update this to point out the should-have-been-obvious mistake I made. I didn't bit-shift. This seemed a little peculiar in the back of my mind at the time I copied and pasted the code from the ST library examples, but I guess not enough to get me to pay more attention. They were keeping the values in an array, so I didn't clue in to what they were actually doing.
The assignment "GPIOB->BSRR = 1" should be either "GPIOB->BSRR = 1 << 1" or "GPIOB->BSRR = GPIO_Pin_1".
So, that works, and the output is very consistent, about 1.2μS per pulse. Still working out the MCO problem, and then on to figure out why the serial timing is [apparently] off.
#### dangnl
##### New Member
STM32 Primer2
hey guys, so Im also working with an STM32 chip here. I attached it to a serial COM port which is connected to a laptop so I can use to send/receive signal from the STM32 Primer2. So there are 20 pins on the chip which I used to connect to the serial port and I used 3 pins as Ground, Usart_TX, Usart_Rx (the latter 2 are used for sending signal into the laptop). There is a pin #11 which can either be used as either a standard GPIO prot or an analog ADC12_in14 function. Im trying to used this port to send out a signal "1" or "0". So here's my sample code for rerouting the port.
GPIOC->BSRR = 1<<11;
Is this correct?
#### پروژه های الکترو
##### Member
I'm new to ARM development and trying to do some fundamental beginner stuff with STM32. I have a VL Discovery board, and I'm just trying to send serial data. I set up the port, relying largely on sample code (though at this point I think I mostly understand what it's trying to do), and it definitely sends something, but it comes out as garbage on the other end. This happens for a variety of different baud rates, even 4800 baud. So I figure the baud rate is inaccurate for some reason (but I'm unsure how -- I call a function in the ST library and hand it a baud rate, it does the calculations).
So I try to measure the system clock. I try to set up the MCO on A8 so I can put a scope on it, and I end up getting a very faint signal, +/-40mV, that looks like a very messy clock, with waves that aren't very square. Most importantly, the period is not consistent. Many of the waves are sort of the same length, but frequently there are really short ones, and much longer ones. I couldn't even get a viewable image on my old analog scope; I had to use a DSO to capture a screenful so I could look at it.
So then I try to just toggle a GPIO pin to see how consistent that is, flipping it back and forth with the BSRR and BRR registers in a while(1) loop, mostly following example code to do so. And I'm getting no output at all.
So, I'm really confused at this point.[size=-10][/size]
Code:
RCC_APB2PeriphClockCmd(RCC_APB2Periph_GPIOB, ENABLE);
GPIO_InitStructure.GPIO_Mode = GPIO_Mode_Out_PP;
GPIO_InitStructure.GPIO_Pin = GPIO_Pin_1;
GPIO_InitStructure.GPIO_Speed = GPIO_Speed_50MHz;
GPIO_Init(GPIOB, &GPIO_InitStructure);
Simplest question first -- should the above be adequate for setting up the GPIO pin B1 for output? Is there anything else I need to do? I've looked at several examples online and in the ST library example code, and I can't see that they're doing anything else.
To toggle the bit, I'm just doing this:
Code:
while (1) {
bit = !bit;
if (bit) GPIOB->BSRR = 1; else GPIOB->BRR = 1;
}
I put a scope on the pin, and see nothing. I've tried several different pins on different ports, no difference.
FWIW, regarding the clock stuff, the chip is using an 8Mhz crystal with a PLL set for 24Mhz.
Edit -- I should probably add that I'm using KEIL for development, but I'm trying to stay away from KEIL-related code, eg. STM32_Init, etc. I'm using the ST library v3.5.0. I've deleted KEIL's ancient ST library to prevent it from using that, as most people seem to recommend.
Hi there,
To change a PIN on a PORT, you can use GPIO's ODR register or BSRR/BRR for atomic bit set/reset.
As you figured it out, to atomic set PINx on PORTy you must write:
Code:
GPIOy->BSRR = ( 1 << x );
and to atomic reset PINx on PORTy you must write:
Code:
GPIOy->BRR = ( 1 << x );
but for toggling a bit you can simply use XOR operand on ODR register:
Code:
GPIOy->ODR ^= ( 1 << x );
for example to toggle bit 8 of GPIOB you can write:
Code:
GPIOB->ODR ^= ( 1 << 8 );
##### New Member
Hi guys, i'm hoping one of you can help me on the way.
I'm playing with STM32F030 nukleo board. Trying to toggle a pin as "bare metal" as possible ( = fast).
What i'm doing is this:
RCC -> AHBENR |= (1 << 17);
GPIOA -> MODER |= (1 << 10); // pin 5
GPIOA -> OSPEEDR |= ((3UL << 2*5) );
while(1)
{
GPIOA->BSRR = (1<<5); //a
GPIOA->BSRR = (1<<21); //b
}
The SystemCoreClock shows 48000000 but the toggling is only like 3 MHz, with shortest pulse 100nS
from a to b of course. This suggest only 10 MIPS.
My question is: isn't this too slow?
I have checked that i'm actually running on the PLL by changing the multiplier variable, and this does
actually change the toggle frequency.
Thanks in advance for any clues....
##### New Member
If is add i = 0, then it takes 200 nS, can this be correct?
while(1)
{
GPIOA->BSRR = (1<<5); //a
i = 0;
GPIOA->BSRR = (1<<21); //b
}
200nS from a to b
|
2019-01-23 19:56:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2918514311313629, "perplexity": 2131.959425748863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584350539.86/warc/CC-MAIN-20190123193004-20190123215004-00008.warc.gz"}
|
http://www.statsblogs.com/page/111/
|
## The joy of joining data.tables
June 10, 2014
By
The example I present here is a little silly, yet it illustrates how to join tables with data.table in R. Mapping old data to new dataCategories in general are never fixed, they always change at some point. And then the trouble starts with the data. Fo...
## Mathematical and Applied Statistics Lesson of the Day – The Central Limit Theorem Can Apply to the Sum
$Mathematical and Applied Statistics Lesson of the Day – The Central Limit Theorem Can Apply to the Sum$
The central limit theorem (CLT) is often stated in terms of the sample mean of independent and identically distributed random variables. An often unnoticed or forgotten aspect of the CLT is its applicability to the sample sum of those variables, too. Since , the sample size, is just a constant, it can be multiplied to to obtain […]
## Prévision de séries chronologiques
June 9, 2014
By
Dans la seconde partie du cours de modèles de prévision, on quittera (un peu) les données individuelles pour parler de données chronologiques. Les slides de cette semaine (et probablement la semaine prochaine) sont en ligne. J’ai mis en ligne, en parallèle, quelques notes de cours sur les séries temporelles, qui pourront peut être servir de complément. Comme le disait Doug Martin “Time series is the worst subject to teach. First,…
## “The medical press must become irrelevant to publication of clinical trials.”
June 9, 2014
By
“The medical press must become irrelevant to publication of clinical trials.” So said Stephen Senn at a recent meeting of the Medical Journalists’ Association with the title: “Is the current system of publishing clinical trials fit for purpose?” Senn has thrown a few stones in the direction of medical journals in guest posts on this […]
## At the Copa
June 9, 2014
By
This is the first post of a fairly regular series (at least I'll try to keep it this way!), dedicated to the impending FIFA World Cup (you may think I've gone all Barry Manilow, like Peter & co \$-\$ but I can reassure you I haven't).Marta, Virgilio ...
## I hate polynomials
June 9, 2014
By
A recent discussion with Mark Palko [scroll down to the comments at this link] reminds me that I think that polynomials are way way overrated, and I think a lot of damage has arisen from the old-time approach of introducing polynomial functions as a canonical example of linear regressions (for example). There are very few […] The post I hate polynomials appeared first on Statistical Modeling, Causal Inference, and Social…
## On deck this week
June 9, 2014
By
Mon: I hate polynomials Tues: Spring forward, fall back, drop dead? Wed: Bayes in the research conversation Thurs: The health policy innovation center: how best to move from pilot studies to large-scale practice? Fri: Stroopy names Sat: He’s not so great in math but wants to do statistics and machine learning Sun: Comparing the full […] The post On deck this week appeared first on Statistical Modeling, Causal Inference, and…
## Piketty’s Empirics Are as Bad as His Theory
June 9, 2014
By
In my earlier Piketty post, I wrote, "If much of its "reasoning" is little more than neo-Marxist drivel, much of its underlying measurement is nevertheless marvelous." The next day, recognizing the general possibility of a Reinhart-Rogoff error, b...
## A reader submits a Type DV analysis
June 9, 2014
By
Darin Myers at PGi was kind enough to send over an analysis of a chart using the Trifecta Checkup framework. I'm reproducing the critique in full, with a comment at the end. *** At first glance this looks like a...
## The Ways Probability Distributions Are Wrong.
June 9, 2014
By
Suppose there’s some aspect of our universe we’d like to know; perhaps it’s a physical measurement taken next week, or an unknown population average that exists today. Whatever it is, we use information to create a distribution which ...
## How to generate a grid of points in SAS
June 9, 2014
By
In many areas of statistics, it is convenient to be able to easily construct a uniform grid of points. You can use a grid of parameter values to visualize functions and to get a rough feel for how an objective function in an optimization problem depends on the parameters. And […]
## Tuning particle MCMC algorithms
June 8, 2014
By
Several papers have appeared recently discussing the issue of how to tune the number of particles used in the particle filter within a particle MCMC algorithm such as particle marginal Metropolis Hastings (PMMH). Three such papers are: Doucet, Arnaud, Michael Pitt, and Robert Kohn. Efficient implementation of Markov chain Monte Carlo when using an unbiased […]
## Tuning particle MCMC algorithms
June 8, 2014
By
Several papers have appeared recently discussing the issue of how to tune the number of particles used in the particle filter within a particle MCMC algorithm such as particle marginal Metropolis Hastings (PMMH). Three such papers are: Doucet, Arnaud, Michael Pitt, and Robert Kohn. Efficient implementation of Markov chain Monte Carlo when using an unbiased … Continue reading Tuning particle MCMC algorithms
## Enjoy the silence
June 8, 2014
By
I've been quite silent on the blog in the past few weeks \$-\$ a combination of exam-marking, conference-organisation and other few (some more, some less interesting) things...As for Bayes Pharma, we're nearly there \$-\$ the conference is this week Wednes...
## Regression and causality and variable ordering
June 8, 2014
By
Bill Harris wrote in with a question: David Hogg points out in one of his general articles on data modeling that regression assumptions require one to put the variable with the highest variance in the ‘y’ position and the variable you know best (lowest variance) in the ‘x’ position. As he points out, others speak […] The post Regression and causality and variable ordering appeared first on Statistical Modeling, Causal…
## New Award for David Hendry
June 7, 2014
By
It's difficult to imagine what our modern econometrics world would be like if it weren't for the numerous, seminal, contributions that Sir David Hendry has made over the course of his distinguished career.So, it was wonderful to see this announcement t...
## “Does researching casual marijuana use cause brain abnormalities?”
June 7, 2014
By
David Austin points me to a wonderfully-titled post by Lior Pachter criticizing a recent paper on the purported effects of cannabis use. Not the paper criticized here. Someone should send this all to David Brooks. I’ve heard he’s intereste...
## stone flakes
June 6, 2014
By
I browsed through UC Irvine Machine Learning Repository! the other day and noticed a nice data set regarding stone flakes produced by our ancestors, the prehistoric men. To quote the dataset owners:'The data set concerns the earliest history ...
## Frequentist vs. Bayesian Analysis
June 6, 2014
By
"Statisticians should readily use both Bayesian and frequentist ideas."So begins a 2004 paper by Bayarri and Berger, "The Interplay of Bayesian and Frequentist Analysis", Statistical Science, 19(1), 58-80.Let's re-phrase that opening sentence: "Econome...
## Hurricanes vs. Himmicanes
June 6, 2014
By
The story’s on the sister blog and I quote liberally from Jeremy Freese, who wrote: The authors have issued a statement that argues against some criticisms of their study that others have offered. These are irrelevant to the above observations, as I [Freese] am taking everything about the measurement and model specification at their word–my […] The post Hurricanes vs. Himmicanes appeared first on Statistical Modeling, Causal Inference, and Social…
## R style tip: prefer functions that return data frames
June 6, 2014
By
While following up on Nina Zumel’s excellent Trimming the Fat from glm() Models in R I got to thinking about code style in R. And I realized: you can make your code much prettier by designing more of your functions to return data.frames. That may seem needlessly heavy-weight, but it has a lot of down-stream […] Related posts: Prefer = for assignment in R Your Data is Never the Right…
## Statistically savvy journalism
June 6, 2014
By
Roy Mendelssohn points me to this excellent bit of statistics reporting by Matt Novak. I have no comment, I just think it’s good to see this sort of high-quality Felix Salmon-style statistically savvy journalism. The post Statistically savvy jou...
## The Real Reason Reproducible Research is Important
June 6, 2014
By
Reproducible research has been on my mind a bit these days, partly because it has been in the news with the Piketty stuff, and also perhaps because I just published a book on it and I'm teaching a class on … Continue reading →
|
2015-07-30 23:02:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3371078073978424, "perplexity": 4048.595858408005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987775.70/warc/CC-MAIN-20150728002307-00261-ip-10-236-191-2.ec2.internal.warc.gz"}
|
https://math.yfei.page/posts/2020-06-16-multiplicity-loci/
|
# Multiplicity Loci
Algebraic Geometry
Bundles of differential operators and their application to multiplicity loci will be discussed in this post.
Author
Published
June 16, 2020
## 1 Introduction
Let $$X$$ be a smooth projective variety and $$D$$ an integral ample divisor on $$X$$. Consider the graded algebra $R:=\bigoplus_{k=0}^\infty H^0(X,\O_X(kD)).$ A grade linear series $$A$$ is a graded subalgebra of a graded algebra $$R$$. For example, let $$V$$ be a subvariety of $$X$$ and $A_k^\alpha:=\{s\in H^0(X,\O_X(kD))\mid \mult_x(s)\ge k\alpha~\text{for all}~ x\in V\}.$ Then $A^\alpha(V, X):=\bigotimes_{k=0}^\infty A_k^\alpha$ is a grade linear series.
The volume of a grade linear series $$A$$ is defined to be $\Vol_X(A)=\limsup_{k\to \infty}\frac{\dim A_k}{k^n/n!}.$ We simply denote $$\Vol_X(D)$$ for $$\Vol_X(R)$$.
The volume $$\Vol_X(A)$$ measures the degree of freedom in choosing a divisor $$E_k$$ in $$A_k$$ for $$k\gg 0$$.
By asymptotic Riemann-Roch, we know that $$\Vol_X(D)=D^n$$. We also have $$\Vol_X(A^\alpha(x, X))\ge D^n-\alpha^n$$, where $$x$$ is a point in $$X$$.
In general, to estimate the volume $$\Vol_X(A^\alpha(V, X))$$, we use the short exact sequence and pass the estimation to that of the restricted volume $$\Vol_{X|V}(D)$$ of $$D$$ along $$V$$ defined as $\Vol_{X|V}(D):=\limsup_{k\to\infty}\frac{\dim H^0(X|V, \O_X(kD))}{k/d!},$ where $$d=\dim V$$ and $H^0(X|V, \O_X(kD)):=\Im(H^0(X, \O_X(kD))\to H^0(V, \O_V(kD))).$
By Corollary 2.15, the $$\limsup_{x\to \infty}$$ can be replace by $$\lim_{x\to \infty}$$ and $$D$$ maybe replaced by any $$\QQ$$–divisor.
By Theorem B, the volume $$\Vol_{X|V}(A)$$ computes the rate of growth of number of intersection points $$d$$ general divisors in $$A_k$$ which are away from the base locus of $$A_k$$ for $$k\gg 0$$.
In this notes, we will study multiplicity loci and their applications to volume calculations.
## 2 Multiplicity Loci
For each natural number $$k$$, let $$E_k\in A_k$$ be a general divisor. For any rational number $$\sigma>0$$, we define $Z_\sigma(E_k):=\{x\in X\mid \mult_x(E_k)\ge k\sigma\}$ which is called a multiplicity locus.
By Proposition 2.1.20, there exists a $$m_0$$ such that the base loci $$|km_0D|$$ are all the same for $$k\ge 1$$. However, it is not in general possible to take $$m_0 = 1$$.
Similarly, but with difference, multiplicity loci stabilize for $$k$$ sufficiently large.
:::{#lem-stable-multiplicity-locus},
## 3(Ein, Lazarsfeld, and Nakamaye 1996) Lemma 3.4
For a fixed positive rational number $$\sigma$$, there exists a positive integer $$k_0$$ such that $Z_\sigma(E_{k_1})=Z_\sigma(E_{k_2})\quad \text{for all}\quad k_1, k_2\ge k_0.$
:::
Proof. We will write $$Z(k)$$ for $$Z_\sigma(E_k)$$.
It suffices to show that for any positive integer $$a$$, there is a positive integer $$k(a)$$ such that $Z(c)\subset Z(a) \quad \text{for all}\quad c\ge k(a).$ Otherwise, there will be an infinite chain of subvarieties where the inclusions are strict.
Suppose that $$x\not\in Z(a)$$. We will show that $$x\not\in Z(c)$$ for all sufficiently large $$c$$. Let $$n$$ be the minimal positive integer such that $$nE_a$$ is an integral divisor and $$\eta=\frac1n$$. Then $$\mult_xE_a\le a\sigma-\eta$$. Let $$b$$ be a positive integer coprime with $$a$$. Then for any integer $$c\ge ab$$, there exist nonnegative integers $$\alpha$$ and $$\beta$$ such that $$c=\alpha a+\beta b$$. We may assume that $$\beta\le \alpha$$. Then the divisor $$F_c:= \alpha E_a +\beta E_b\in |A_c|$$ has the multiplicity \begin{aligned} \mult_xF_c=&\alpha\mult_xE_a+\beta\mult_xE_b\\ \le & a\alpha\sigma-\alpha\eta+\beta\mult_xE_b \end{aligned}
By Bertini’s Theorem, we know that for a general divisor $$E_c\in A_c$$ the following holds \begin{aligned} \mult_xE_c\le & \mult_xF_c+1\\ \le & c\sigma - b\beta\sigma-\alpha\eta+\beta\mult_xE_b+1\\ = & c\left(\sigma -\frac{\eta(1-\frac{\beta b}{c})}{a} + \frac{1+\beta\mult_xE_b-b\sigma}{c}\right) \end{aligned} Since $$\eta$$, $$\beta$$ and $$b$$ are bounded and independent of $$c$$, it follows that $$\mult_xF_c<c\sigma$$ for all sufficiently large $$c$$. Therefore, $$x\not\in Z(c)$$.
The above proof is adapted from proof of Lemma 2.3.1.
The original proof of the lemma uses upper semicontinuity lemma which is a corollary of Proposition 3.1 (see also Corollary 3.5).
Lemma 1 (Upper semicontinuity) Let $$f: X\to S$$ be a morphism with equidimensional fibers. Give an divisor $$D$$ and a point $$x\in X$$, for any subvariety $$Z\subset X$$ such that $$x\in X$$, we have $\mult_ZD\le\mult_x D,$ where $$\mult_ZD$$ is defined at the generic point of $$Z$$.
For a graded linear series $$A$$ and any positive rational number $$\sigma$$, we define the multiplicity locus of $$A$$ by $Z_\sigma(A)=Z_\sigma(E_k) \quad \text{for} \quad k\gg 1.$
For dimension reasons, there is an irreducible subvariety $$V$$ shared by two multiplicity loci. More precisely, we have the following “gap” lemma from . A version that works for a family of divisors can be found in Lemma 2.3.2.
Lemma 2 ( Lemma 1.5 and 1.6) Let $$X$$ be a smooth irreducible variety of dimension $$n$$ and $$A$$ a graded linear series. For a sequence of $$n+1$$ numbers $0\le\beta_1\le\beta_1\le\cdots\le\beta_{n+1},$ there is $$0\le i\le n$$ such that $$Z_i$$ and $$Z_{i+1}$$ share an irreducible component $$V$$ of codimension $$i$$ and passing through $$x$$.
We will call the irreducible component shared by two multiplicity loci a multiplicity jumping locus.
## 4 Multiplicity Loci vs Base Loci
In previous section, we’ve learned that there are differences between multiplicity loci and base loci. In this section, we will show that a multiplicity locus may be a base locus for another linear series. This result is from Theorem 3.9, see also Proposition 2.4.1.
## 5(Ein, Lazarsfeld, and Nakamaye 1996) Theorem 3.9, see also (Küchle and Steffens 1999) Proposition 2.4.1
Let $$X$$ be a smooth projective variety of dimension $$n$$, $$L$$ an integral ample divisor, and $$\delta$$ a rational number. Assume that the sheaf $$\D_{mL}^l\otimes\O_X(l\delta L)$$ of differential operators of order $$\le l$$ is generated by its sections for sufficiently large integers $$m$$ and $$l$$ such that $$l\delta$$ is a positive integer. If $$V$$ is a multiplicity jumping locus of $$Z_\sigma(A)$$ and $$Z_{\sigma+\varepsilon}(A)$$, where $$A\subset \bigoplus\limits_{k=0}^\infty H^0(X, \O_X(kL))$$ is a graded liner series, then $$V$$ is also an irreducible component of the base locus of the linear series $$|I_V^{(k\varepsilon)}\otimes \O_X(k(1+\delta\sigma)L)|$$, where $$k$$ is a sufficiently larger and sufficiently divisible integer, and $I_V^{(k\varepsilon)}=\{f \mid \mult_x(f)\ge k\varepsilon \quad\text{for all}\quad x\in V\}$ is the symbolic power.
Proof. By ?@lem-stable-multiplicity-locus and the assumption, we may assume that for all sufficiently large $$k$$, $$Z_\sigma(A)=Z_\sigma(E_k)$$, $$Z_{\sigma+\varepsilon}(A)=Z_{\sigma+\varepsilon}(E_k)$$ and the sheaf $$\D_{kL}^{k\sigma}\otimes\O_X(k\sigma\delta L)$$ is globally generated.
Because $$I_V^{(k\varepsilon)}\subset I_V$$. It is clear that $V\subset \Bs(|I_V^{(k\varepsilon)}\otimes \O_X(k(1+\delta\sigma)L)|).$
Let $$s_k\in H^0(X,\O_X(kL))$$ be the section whose zeroes is the divisor $$E_k$$.
Set $\Sigma_{k\sigma-1}=\{x\in X\mid\mult_x(E_k)>k\sigma-1\}$ and denote its ideal by $$I_{\Sigma_{k\sigma-1}}$$. Because $$\D_{kL}^{k\sigma}\otimes\O_X(k\sigma\delta L)$$ is globally generated, the image $$I_{\Sigma_{k\sigma-1}}\otimes \O_X(k\sigma\delta L))$$ of the morphism $\D_{kL}^{k\sigma}\otimes\O_X(k\sigma\delta L)\to \O_X(k\sigma\delta L))$ is also globally generated. Indeed, \begin{aligned} H= &H^0(X, I_{\Sigma_{k\sigma-1}}\otimes \O_X(k\sigma\delta L)))\\ =&\{D(s_k)\mid D\in H^0(X, \D_{kL}^{k\sigma}\otimes\O_X(k\sigma\delta L))\}. \end{aligned}
We first show that $$V\subset \Sigma_{k\sigma-1}$$. Note that if $$x\not\in \Sigma_{k\sigma-1}$$, then $$\mult_x(E_k)M<k\sigma$$. Thus, there exsits a differential operator $$D\in H^0(X, \D_{kL}^{k\sigma}\otimes\O_X(k\sigma\delta L))$$ such that $$D(s_k)=\mult_x(s_k)-k\sigma<0$$. Therefore, $$x\not\in V$$ and $$V\subset\Sigma_{k\sigma-1}$$.
Now we show that $\Bs(|I_V^{(k\varepsilon)}\otimes \O_X(k(1+\sigma\delta)L|)\subset\Sigma_{k\sigma-1}.$ Because $$\mult_V(s_k)\ge k(\sigma+\varepsilon)$$. For any differential operator $$D\in H^0(X, \D_{kL}^{k\sigma}\otimes\O_X(k\sigma\delta L))$$, we have $\mult_V(D(s_k))\ge k(\sigma+\varepsilon)-k\sigma=k\varepsilon.$ Therefore, $H\subset H^0(X, I_V^{(k\varepsilon)}\otimes \O_X(k(1+\sigma\delta)L)).$ It follows that $\Bs(|I_V^{(k\varepsilon)}\otimes \O_X(k(1+\sigma\delta)L)|)\subset \Bs(|H|)=\Sigma_{k\sigma-1},$ where the equality follows from the fact that $$I_{\Sigma_{k\sigma-1}}\otimes \O_X(k\sigma\delta L))$$ is globally generated.
By the construction of $$\Sigma_{k\sigma-1}$$, we know that $$\Sigma_{k\sigma-1}\subset Z_\sigma(A)$$. If $$W\supset V$$ is an irreducible component of $$\Sigma_{k\sigma-1}$$, then $$\mult_x(s_k)\ge k\sigma$$ for any $$x\in W$$. It follows that $$W\subset Z_\sigma(A)$$. Consequently, $$W=V$$ is also an irreducible component of $$Z_\sigma(A)$$. Otherwise, write $$Z_\sigma(A)=V\cup V'$$ where $$V$$ and $$V'$$ have no common irreducible components, we will see that $$W\subset V'$$ and end with a contradiction that $$V$$ become an irreducible component of $$V'$$.
Let $$U$$ be an irreducible component of $$\Bs(|I_V^{(k\varepsilon)}\otimes \O_X(k(1+\sigma\delta)L)|)$$ that contains $$V$$. Then $$V\subset U\subset W=V$$ which implies that $$V=U$$ is an irreducible component of $$\Bs(|I_V^{(k\varepsilon)}\otimes \O_X(k(1+\sigma\delta)L)|)$$.
As an application, we end this survey with the following result.
Proposition 1 ( Proposition 2.5.6) Let $$L$$ be an ample divisor such that $$L^n>\alpha^n$$. The $$L$$-degree of $$V$$ satisfies the following inequality $\varepsilon^c\mathrm{deg}_LV\leq L^n-(L^n-\alpha^n)^{\frac{c}{n}}\cdot(L^n)^{1-\frac{c}{n}},$ where $$V$$ and $$\varepsilon$$ are the ones defined in Proposition @ref(prp:multiplicity-loci), and $$c=\mathrm{codim} V$$.
## References
Ein, Lawrence, Robert Lazarsfeld, Mircea Mustaţă, Michael Nakamaye, and Mihnea Popa. 2009. “Restricted Volumes and Base Loci of Linear Series.” American Journal of Mathematics 131 (3): 607–51. http://www.jstor.org/stable/40263793.
Ein, Lawrence, Robert Lazarsfeld, and Michael Nakamaye. 1996. “Zero-Estimates, Intersection Theory, and a Theorem of Demailly.” In Higher Dimensional Complex Varieties. Proceedings of the International Conference, Trento, Italy, June 15–24, 1994, 183–207. Berlin: Walter de Gruyter.
Küchle, Oliver, and Andreas Steffens. 1999. “Bounds for Seshadri Constants.” In New Trends in Algebraic Geometry (Warwick, 1996), 264:235–54. London Math. Soc. Lecture Note Ser. Cambridge Univ. Press, Cambridge. https://doi.org/10.1017/CBO9780511721540.009.
Lazarsfeld, Robert. 2004. Positivity in Algebraic Geometry. I. Vol. 48. Ergebnisse Der Mathematik Und Ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer-Verlag, Berlin. https://doi.org/10.1007/978-3-642-18808-4.
Lipman, Joseph. 1982. “Equimultiplicity, Reduction, and Blowing Up.” In Commutative Algebra (Fairfax, Va., 1979), 68:111–47. Lecture Notes in Pure and Appl. Math. Dekker, New York.
Smirnov, Ilya. 2019. “On Semicontinuity of Multiplicities in Families.” https://arxiv.org/abs/1902.07460.
|
2023-01-29 13:31:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9796723127365112, "perplexity": 185.7547622247157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499713.50/warc/CC-MAIN-20230129112153-20230129142153-00085.warc.gz"}
|
https://stanford.edu/~hamidi/publication/hamidi-2021-randomized/
|
# The Randomized Elliptical Potential Lemma with an Application to Linear Thompson Sampling
Type
Publication
arXiv preprint arXiv:2102.07987
In this note, we introduce a randomized version of the well-known elliptical potential lemma that is widely used in the analysis of algorithms in sequential learning and decision-making problems such as stochastic linear bandits. Our randomized elliptical potential lemma relaxes the Gaussian assumption on the observation noise and on the prior distribution of the problem parameters. We then use this generalization to prove an improved Bayesian regret bound for Thompson sampling for the linear stochastic bandits with changing action sets where prior and noise distributions are general. This bound is minimax optimal up to constants.
|
2021-06-21 22:39:25
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9844029545783997, "perplexity": 366.67445798605416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504838.98/warc/CC-MAIN-20210621212241-20210622002241-00375.warc.gz"}
|
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Pseudo-metric
|
# All Science Fair Projects
## Science Fair Project Encyclopedia for Schools!
Search Browse Forum Coach Links Editor Help Tell-a-Friend Encyclopedia Dictionary
# Science Fair Project Encyclopedia
For information on any area of science that interests you,
enter a keyword (eg. scientific method, molecule, cloud, carbohydrate etc.).
Or else, you can start by choosing any of the categories below.
# Pseudo-Riemannian manifold
(Redirected from Pseudo-metric)
In differential geometry, a pseudo-Riemannian manifold is a smooth manifold equipped with a smooth, symmetric, (0,2) tensor which is nondegenerate at each point on the manifold. This tensor is called a pseudo-Riemannian metric or, simply, a (pseudo-)metric tensor.
The key difference between a Riemannian metric and a pseudo-Riemannian metric is that a pseudo-Riemannian metric need not be positive-definite, merely nondegenerate. Since every positive-definite form is also nondegenerate a Riemannian metric is a special case of a pseudo-Riemannian one. Thus pseudo-Riemannian manifolds can be considered generalizations of Riemannian manifolds.
Every nondegenerate, symmetric, bilinear form has a fixed signature (p,q). Here p and q denote the number of positive and negative eigenvalues of the form. The signature of a pseudo-Riemannian manifold is just the signature of the metric (one should insist that the signature is the same on every connected component). Note that p + q = n is the dimension of the manifold. Riemannian manifolds are simply those with signature (n,0).
Pseudo-Riemannian metrics of signature (p,1) (or sometimes (1,q), see sign convention) are called Lorentzian metrics. A manifold equipped with a Lorentzian metric is naturally called a Lorentzian manifold. After Riemannian manifolds, Lorentzian manifolds form the most important subclass of pseudo-Riemannian manifolds. They are important because of their physical applications to the theory of general relativity. A principal assumption of general relativity is that spacetime can be modeled as a Lorentzian manifold of signature (3,1).
Just as Euclidean space $\mathbf R^n$ can be thought of as the model Riemannian manifold, Minkowski space $\mathbf R^{p,1}$ with the flat Minkowski metric is the model Lorentzian manifold. Likewise, the model space for a pseudo-Riemannian manifold of signature (p,q) is $\mathbf R^{p,q}$ with the metric
$g = dx_1^2 + \cdots + dx_p^2 - dx_{p+1}^2 - \cdots - dx_{p+q}^2$
Some basic theorems of Riemannian geometry can be generalized to the pseudo-Riemannian case. In particular, the fundamental theorem of Riemannian geometry is true of pseudo-Riemannian manifolds as well. This allows one to speak of the Levi-Civita connection on a pseudo-Riemannian manifold along with the associated curvature tensor. On the other hand, there are many theorems in Riemannian geometry which do not hold in the generalized case. For example, it is not true that every smooth manifold admits a pseudo-Riemannian metric of a given signature; there are certain topological obstructions.
03-10-2013 05:06:04
|
2013-06-18 05:05:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7717170119285583, "perplexity": 376.2057845044567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706934574/warc/CC-MAIN-20130516122214-00003-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://msp.org/gt/2001/5-2/b04.xhtml
|
#### Volume 5, issue 2 (2001)
1 E Arbarello, M Cornalba, The Picard groups of the moduli spaces of curves, Topology 26 (1987) 153 MR895568 2 D Auroux, Symplectic 4–manifolds as branched coverings of $\mathbb{CP}^2$, Invent. Math. 139 (2000) 551 MR1738061 3 D Auroux, L Katzarkov, Branched coverings of $\mathbb{C}\mathrm{P}^2$ and invariants of symplectic 4–manifolds, Invent. Math. 142 (2000) 631 MR1804164 4 W Barth, C Peters, A Van de Ven, Compact complex surfaces, Ergebnisse der Mathematik und ihrer Grenzgebiete (3) 4, Springer (1984) MR749574 5 S Donaldson, I Smith, Lefschetz pencils and the canonical class for symplectic four-manifolds, Topology 42 (2003) 743 MR1958528 6 S K Donaldson, Lefschetz pencils on symplectic manifolds, J. Differential Geom. 53 (1999) 205 MR1802722 7 H Endo, Meyer's signature cocycle and hyperelliptic fibrations, Math. Ann. 316 (2000) 237 MR1741270 8 R Fintushe, R Stern, Counterexamples to a symplectic analogue of a theorem of Arakelov and Parsin, preprint (1999) 9 R E Gompf, The topology of symplectic manifolds, Turkish J. Math. 25 (2001) 43 MR1829078 10 J Harer, The second homology group of the mapping class group of an orientable surface, Invent. Math. 72 (1983) 221 MR700769 11 J Harris, I Morrison, Moduli of curves, Graduate Texts in Mathematics 187, Springer (1998) MR1631825 12 A K Liu, Some new applications of general wall crossing formula, Gompf's conjecture and its applications, Math. Res. Lett. 3 (1996) 569 MR1418572 13 F Luo, A presentation of the mapping class groups, Math. Res. Lett. 4 (1997) 735 MR1484704 14 Y Matsumoto, Lefschetz fibrations of genus two – a topological approach, from: "Topology and Teichmüller spaces (Katinkulta, 1995)", World Sci. Publ., River Edge, NJ (1996) 123 MR1659687 15 D McDuff, D Salamon, A survey of symplectic 4–manifolds with $b^{+}=1$, Turkish J. Math. 20 (1996) 47 MR1392662 16 J W Morgan, Z Szabó, Homotopy $K3$ surfaces and mod 2 Seiberg–Witten invariants, Math. Res. Lett. 4 (1997) 17 MR1432806 17 B Ozbagci, Signatures of Lefschetz fibrations, Pacific J. Math. 202 (2002) 99 MR1883972 18 U Persson, Chern invariants of surfaces of general type, Compositio Math. 43 (1981) 3 MR631426 19 B Siebert, G Tian, On hyperelliptic $C^{\infty}$–Lefschetz fibrations of four-manifolds, Commun. Contemp. Math. 1 (1999) 255 MR1696101 20 I Smith, Gauge theory and symplectic linear systems, in preparation 21 I Smith, Geometric monodromy and the hyperbolic disc, Q. J. Math. 52 (2001) 217 MR1838364 22 I Smith, Serre-Taubes duality for pseudoholomorphic curves, Topology 42 (2003) 931 MR1978044 23 I Smith, Symplectic geometry of Lefschetz fibrations, DPhil thesis, University of Oxford (1998) 24 I Smith, Lefschetz fibrations and the Hodge bundle, Geom. Topol. 3 (1999) 211 MR1701812 25 A I Stipsicz, On the number of vanishing cycles in Lefschetz fibrations, Math. Res. Lett. 6 (1999) 449 MR1713143 26 A I Stipsicz, Indecomposability of certain Lefschetz fibrations, Proc. Amer. Math. Soc. 129 (2001) 1499 MR1712877 27 C H Taubes, The Seiberg-Witten and Gromov invariants, Math. Res. Lett. 2 (1995) 221 MR1324704
|
2020-08-13 00:40:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8763849139213562, "perplexity": 1838.6954007243282}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738950.31/warc/CC-MAIN-20200812225607-20200813015607-00441.warc.gz"}
|
https://www.projectrhea.org/rhea/index.php?title=Applications_of_Convolution:_Simple_Image_Blurring&curid=14994&diff=77284&oldid=77280
|
Applications of Convolution: Image Blurring
The convolution forms the backbone of signal processing, but what are some direct applications of it? In this page, we will explore the application of the convolution operation in image blurring.
Convolution
In continuous time, a convolution is defined by the following integral:
$(f*g)(t) = \int_{-\infty}^{\infty}f(t-\tau)g(\tau)d\tau$
In discrete time, a convolution is defined by the following summation:
$[f*g][n] = \sum_{n = -\infty}^{\infty}f[n-\tau]g[\tau]$
A convolution takes one function and applies it repeatedly over the range of another through multiplication. As such, at a high level, the convolution can be thought of as having one function "smoothed" into another, or having the two functions "blended". This simple intuition offers insights into its uses in image blurring.
Image Blurring
The core purpose of blurring is to reduce the image's clarity and detail, but still maintain its content. For the most part, we still want to be able to generally tell what's inside the image. Every pixel in the image must be altered in a way such that it is noticeably different from before, but not completely garbled to the point it renders the image's content indiscernible. How can this be accomplished? One method would be to edit each pixel based on the pixels surrounding it. Intuitively, that would alter the pixel while still maintaining some information about its place in the picture, and therefore not completely garble the image. This is where convolution comes in. As stated previously, the convolution can be thought of as "blending" two function. Therefore, by convolving a pixel with the pixels surrounding it, we can set every pixel equal to a "smooth mix" of pixel information surrounding it.
Box Blur
The box blur is a simple blur in which every pixel is set equal to the average of all the pixels surrounding it. It can be expressed as a discrete convolution of two functions f[n] and g[n], where f[n] represents the discrete pixel values in the image, and g[n] is our kernel, which is a matrix represented as
$\frac{1}{9}\begin{bmatrix} 1 & 1 & 1\\ 1 & 1 & 1\\ 1 & 1 & 1 \end{bmatrix}$
for a 3x3 kernel. The matrix is divided by the sum of all the elements (9 in this example) to normalize it. Defining h[n] as a function of blurred pixel values, "p" as the number of pixels in the image, "N" as the sum of the kernel values, and "d" as the kernel dimension, the box blur can be expressed as the following discrete convolution for a 3x3 kernel:
$h[n] = [f*g][n] = \sum_{n=0}^{p} \frac{f[n]g[n-\tau]}{N} = \sum_{n=0}^{p} \frac{\sum_{i=0}^{d}\sum_{j=0}^{d}f_{i,j}g_{i,j}}{N}$
Application
Image free for noncommercial reuse from Pixabay
This app uses a 5x5 kernel to blur the image. The image you upload will be resized to fit the window. The blurring process will take some time, so please be patient when using the app. This section will walk through certain areas of the code relevant to the direct application of the convolution.
public static BufferedImage convolve(BufferedImage image, int kernelDimension) {
float kernel = (float) (1.0 / Math.pow(kernelDimension, 2));
BufferedImage newImage = image;
int width = image.getWidth();
int height = image.getHeight();
A separate function is created to convolve the image that takes in the input image and a dimension for the kernel matrix.
public static BufferedImage convolve(BufferedImage image, int kernelDimension) {
BufferedImage newImage = image;
int width = image.getWidth();
int height = image.getHeight();
for (int i = startX; i < width; i++) {
for (int j = startY; j < height; j++) {
int indexX1 = i - (int) kernelDimension/2;
int indexY1 = j - (int) kernelDimension/2;
if (indexX1 < 0) {
continue;
}
if (indexY1 < 0) {
continue;
}
if (indexX1 + kernelDimension >= width) {
continue;
}
if (indexY1 + kernelDimension >= height) {
continue;
}
int newPixel = convolvePixel(image.getSubimage(indexX1, indexY1, w, h), kernelDimension);
newImage.setRGB(i, j, newPixel);
}
}
return newImage;
}
The bufferedimage is a 2-dimensional matrix of pixel values. Two for-loops are used to loop through every pixel in the image. If a pixel is too close to the edges of the image, the kernel matrix cannot be applied. The function simply skips pixels whose kernel matrix would extend outside the image. Every valid pixel is passed into a separate function that performs the actual convolution and returns a new blurred pixel.
public static int convolvePixel (BufferedImage subImage, float kernelDimension) {
float kernel = (float) (1.0 / Math.pow(kernelDimension, 2));
int newPixel = 0;
float convSum = 0;
int subWidth = subImage.getWidth();
int subHeight = subImage.getHeight();
int newRed = 0;
int newGreen = 0;
int newBlue = 0;
for (int i = 0; i < subWidth; i++) {
for (int j = 0; j < subHeight; j++) {
convSum += kernel;
int p = subImage.getRGB(i,j);
int Red = (p>>16) & 0xff;
int Green = (p>>8) & 0xff;
int Blue = p & 0xff;
Red *= kernel;
Green *= kernel;
Blue *= kernel;
newRed += Red;
newBlue += Blue;
newGreen += Green;
}
}
The convolvePixel function takes in a subset of the image that will be used for the convolution, as well as the kernel dimension. Typically, a separate 2D array would be created to represent the kernel, but since every value of the matrix is the same, we can have just one variable represent it. A color image stores pixels in a 4-byte integer, with each byte representing either the alpha, red, green, and blue hues. The alpha value, which represents the opacity of a pixel, will simply be set to 255, its maximum value. The red, green, and blue values are extracted through bit-shifting and convoluted separately.
if (newRed > 255) {
newRed = 255;
} else if (newRed < 0) {
newRed = 0;
}
if (newBlue > 255) {
newBlue = 255;
} else if (newBlue < 0) {
newBlue = 0;
}
if (newGreen > 255) {
newGreen = 255;
} else if (newGreen < 0) {
newGreen = 0;
}
newPixel = 0x000000FF<<24 | newRed<<16 | newGreen<<8 | newBlue;
return newPixel;
}
All RGB values are clamped between 0-255 since those are the minimum and maximum values to represent the colors. Bitwise operations are used to form the new blurred pixel.
Alumni Liaison
To all math majors: "Mathematics is a wonderfully rich subject."
Dr. Paul Garrett
|
2020-01-28 11:59:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41808244585990906, "perplexity": 2270.7331745792762}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251778168.77/warc/CC-MAIN-20200128091916-20200128121916-00277.warc.gz"}
|
https://www.physicsforums.com/threads/adiabatic-expansion.88083/
|
1. Sep 8, 2005
### physicsss
An ideal gas at 500 K is expanded adiabatically to 6.5 times its original volume. Determine its resulting temperature if the gas is as follows.
(a) monatomic
(b) diatomic (no vibrations)
(c) diatomic (molecules do vibrate)
Anything to get me started?
2. Sep 8, 2005
### Dr.Brain
Start with $PV^y$ = constant where $y = 1 + \frac{2}{f}$
where f= degrees of freedom
BJ
Last edited: Sep 8, 2005
3. Sep 8, 2005
### physicsss
But where does temperature come in the picture?
4. Sep 9, 2005
### Dr.Brain
Use PV=RT
Put the value of P from above equation into $PV^y$ , not try it out.
BJ
|
2017-07-27 06:52:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8584421873092651, "perplexity": 12579.077934333052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549427749.61/warc/CC-MAIN-20170727062229-20170727082229-00380.warc.gz"}
|
https://gateoverflow.in/blog/3968/iit-gandhinagar
|
2,477 views
Hey!!!
As per many requests, I am finally updating a post on IIT, Gandhinagar. I am Priyanka Gautam, a student at IITGN, M.TECH, CSE-2017
IIT Gandhinagar is a great college with a beautiful and well-organized campus. All the facilities available here is nice including the faculties, staff, Academics requirements, research opportunities and overall. As far as CSE branch is concerned it started last year, but the faculties and courses structure and all facilities provided here is up to the level. So, there is not much difference.
LAST-YEAR SCORE:- GEN - 600 and SC/ST - 400 OBC( not remember exactly) but approx. around 500
Placement:- Overall placement is good, m.tech electrical avg. pkg is 13 Lakh around so we can hope best for CSE. Even here the most of b.tech guys placed good or many get admitted to the foreign university.
( In my opinion, placement does not matter much as long as you are getting good learning environment, so in the field of Computer science there is plenty of jobs as long as you are updated with industry and technology, you anyway get in)
Admission is started you can apply online those who are in confusion, first you must apply after getting the college, you must decide which to take or which do not, as thinking to much waste lot of time and make you tense so be chill and apply if possible.
Written test followed by a coding test and Interview.
Last year, there was 20 question in written test with 1/4 negative marking on the wrong attempt. 5 question in coding with 2 internal choices you have to attempt 3. The interview is fine just brush up your basics of Gate ( focusing on ALGO, CO, OS, Data structures, TOC,).
All the best!!!! All
Feel free to ask question !!!....
Interview Experience by Digvijay Pandey.
$Procedure:$
a. Programming Test
b. Written Test
c. Interview
a. $Programming\ Test:$
Attempt three out of five questions:
1. Return Array sum
2. Find the number of combination such that sum of row = sum of column (i don't remember question exactly but it was something like this)
3. Some tree related question (write just function)
4. ......
b. $Written\ Test:$
$20$ questions were there. All are objective type. Numerical related to page size, cache lines, regular languages, serial schedule, probability.
Algo question: An array is row-wise as well as column-wise sorted. You have to find an element. How much time will it take? (Options were there)
Did $2$ programs (All test cases cleared) and in the written test I got $20/20$ (They didn't disclose written marks but at the time of interview they asked me to solve written exam question. Because there was an ambiguous question (graph related) and I ticked it correctly and got the mark. Luck :P)
$1^{st}$ shortlisting based on Programming + Written test.
After this you have to fill area:
1. Theoretical Computer Science
2. Systems
3. Intelligent System
I marked $Theoretical\ Computer\ Science.$
c. $Interview:$
Question-related to the written test. Just single question.
A binary tree with $n$ node and height $h$ along with two arbitrary nodes are given. Find the maximum distance between these two nodes. $(2∗h)$
What is BFS?
Any other thing BFS do except traversal? (i.e. application of BFS. I said Finding shortest path if given graph is unweighted)
How? (Explained on board)
An array is given. Find the number of pairs with sum $=k$. Complexity?
If given array is already sorted. Then?
Difference between Merge and insertion sort?
People did $1$ programming question also selected for the interview. So try to score as much as possible in the written test.
Programming platform HackerRank.
Good Luck :)
If your $GATE\ Score > 720$ check these (1), (2), (3), (4), (5), (6)
If your $GATE\ Score > 650$ check (1)
If your $GATE\ Score > 600\ and\ BTech\%> 80$ check these (1),(2), (3).
posted Mar 23 in Others
edited Mar 24 | 2,477 views
0
Like
0
Love
0
Haha
0
Wow
0
Angry
0
|
2018-10-15 13:22:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3014993667602539, "perplexity": 3752.0650883112285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509196.33/warc/CC-MAIN-20181015121848-20181015143348-00312.warc.gz"}
|
https://www.groundai.com/project/jackknife-empirical-likelihood-methods-for-gini-correlations-and-their-equality-testing/
|
Jackknife Empirical Likelihood Methods for Gini Correlations and their Equality Testing
# Jackknife Empirical Likelihood Methods for Gini Correlations and their Equality Testing
Yongli Sanga, Xin Dangb and Yichuan Zhaoc CONTACT Yongli Sang. Email: yongli.sang@louisiana.edu
aDepartment of Mathematics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA
bDepartment of Mathematics, University of Mississippi, University, MS 38677, USA
cDepartment of Mathematics and Statistics, Georgia State University, Atlanta, GA 30303, USA
###### Abstract
The Gini correlation plays an important role in measuring dependence of random variables with heavy tailed distributions, whose properties are a mixture of Pearson’s and Spearman’s correlations. Due to the structure of this dependence measure, there are two Gini correlations between each pair of random variables, which are not equal in general. Both the Gini correlation and the equality of the two Gini correlations play important roles in Economics. In the literature, there are limited papers focusing on the inference of the Gini correlations and their equality testing. In this paper, we develop the jackknife empirical likelihood (JEL) approach for the single Gini correlation, for testing the equality of the two Gini correlations, and for the Gini correlations’ differences of two independent samples. The standard limiting chi-square distributions of those jackknife empirical likelihood ratio statistics are established and used to construct confidence intervals, rejection regions, and to calculate -values of the tests. Simulation studies show that our methods are competitive to existing methods in terms of coverage accuracy and shortness of confidence intervals, as well as in terms of power of the tests. The proposed methods are illustrated in an application on a real data set from UCI Machine Learning Repository.
noindent
Keywords: Jackknife empirical likelihood; Gini correlation; -statistics; Wilks’ theorem; Test
MSC 2010 subject classification: 62G35, 62G20
## 1 Introduction
The Gini correlation has been used in a wide range of fields since proposed in 1987 ([17]). In the field of economic data analysis, the Gini correlation enables us to test whether an asset increases or decreases the risk of the portfolio ([18]), and can be used to build the relationship between the family income and components of income ([17]); in plant systems biology, the Gini correlation can compensate for the shortcomings of popular correlations in inferring regulatory relationships in transcriptome analysis ([11]); it has also been widely used in all branches of modern signal processing ([26]).
Let and be two non-degenerate random variables with continuous marginal distribution functions and , respectively, and a joint distribution function . Then two Gini correlations are defined as
γ(X,Y):=cov(X,G(Y))cov(X,F(X))and γ(Y,X):=cov(Y,F(X))cov(Y,G(Y)) (1)
to reflect different roles of and The representation of Gini correlation indicates that it has mixed properties of those of the Pearson and Spearman correlations, and thus complements these two correlations ([17], [18], [19]). The two Gini correlations in (1) are not symmetric in X and Y in general. The equality of the two Gini correlations can be involved in many procedures in Economics. For example, it can be applied to determine the similarity in two popular methodologies for constructing portfolios, the MV and MG ([20]), and the equality of the two Gini correlation between the return on each asset and the return on the portfolio is the necessary condition of the statement that all the Security Characteristic curves are linear ([20]), that is, a rejection of the hypothesis on the equality of Gini correlations is a rejection of the assumption that all the Security Characteristics curves are linear. Therefore, to understand the Gini correlation and to test the equality of the two Gini correlations are essential. In the paper, we develop a procedure to estimate the Gini correlation and to test the equality of the two Gini correlations. To the best of our knowledge, there is no nonparametric approaches to infer the Gini correlations.
As a nonparametric method, the empirical likelihood (EL) method introduced by Owen ([12], [13]) has been used heuristically for constructing confidence intervals. It combines the effectiveness of likelihood and the reliability of nonparametric approach. On the computational side, it involves a maximization of the nonparametric likelihood supported on data subject to some constraints. If these constraints are linear, the computation of the EL method is particularly easy. However, EL loses this efficiency when some nonlinear constraints are involved. To overcome this computational difficulty, Wood ([25]) proposed a sequential linearization method by linearizing the nonlinear constraints. However, they did not provide the Wilks’ theorem and stated that it was not easy to establish. Jing et al. ([4]) proposed the jackknife empirical likelihood (JEL) approach. The JEL method transforms the maximization problem of the EL with nonlinear constraints to the simple case of EL on the mean of jackknife pseudo-values, which is very effective in handling one and two-sample -statistics. Wilks’ theorems for one and two-sample -statistics are established. This approach has attracted statisticians’ strong interest in a wide range of fields due to its efficiency, and many papers are devoted to the investigation of the method, for example, [9], [14], [2], [22], [23], [7], [6] and so on. However, theorems derived in [4] are limited to a simple case of the -statistic but the Gini correlation cannot be estimated by a -statistic, which does not allow us to apply the results of [4] directly. However, it can be estimated by a functional of multiple -statistics ([17]). Due to this specific form of the Gini correlation, we propose a novel -statistic type functional and a JEL-based procedure with the -structured estimating function is applied for the Gini correlation. And this approach may work for making an inference about some difference functions of multiple -statistic structure with nuisance parameters involved.
In the test
H0:Δ=0vsHa:Δ≠0, (2)
where , the natural empirical estimator of is a function of 4 dependent -statistics. Based on -statistics theorem, , will, after appropriate normalization, have a limiting normal distribution. However, the asymptotic variance is complicated to calculate. In the present paper, by proposing a new -statistic type functional system, we avoid estimating the asymptotic variance to do the test. However, only a part of parameters are being interested. When only a part of parameters are of interest, Qin and Lawless ([15]) proposed to use a profile empirical likelihood method which is also an important tool to transform nonlinear constraints to some linear constraints by introducing link nuisance parameters. However, the profile EL could be computationally costly. Hjort, McKeague and Van Keilegom ([3]) proposed to reduce the computational complexity by allowing for plug-in estimates of nuisance parameters in estimating equations with the cost that the standard Wilks’ theorem may not hold. Li et al. ([6]) proposed a jackknife plug-in estimate in terms of a function of interested parameters so that their EL still have standard chi-square distributions. However, we cannot take advantage of their method since the parameters of interest in this paper are estimated by solving estimating functions with -statistics structure. We cannot apply theoretical results of the profile JEL method in [7], either. Li, Xu and Zhao ([7]) developed a JEL-based inferential procedure for general -structured estimating functions. It requires the condition that kernel functions are bounded both in the sample space and in the parameter space. Under merely second order moment assumptions, we establish the Wilks’ theorem for the jackknife empirical log-likelihood ratio for . The computation is also easy since a simple plug-in estimate of the nuisance parameter is used.
It is often of considerable interest to compare the Gini correlations from two independent populations. For instance, Lohweg ([10]) constructed adaptive wavelets for the analysis of different print patterns on a banknote and made it possible to use mobile devices for banknote authentication. After the wavelet transformations, there are four continuous variables: variance, skewness, kurtosis and entropy of wavelet transformed images. It is natural to ask what are correlations of each pair of the above variables. Are there any differences between the Genuine banknotes and Forgery banknotes? One of the main goals of this paper is to develop the JEL method for comparing the Gini correlations for independent data sets.
The remainder of the paper is organized as follows. In Section 2, we develop the JEL method for the Gini correlations. The JEL method for testing the equality of Gini correlations is proposed in Section 3. In Section 4, we consider the JEL method for comparing Gini correlations for two samples. Following the introduction of methods in each section, simulation studies are conducted to compare our JEL methods with some existing methods. A real data analysis is illustrated in Section 5. Section 6 concludes the paper with a brief summary. All proofs are reserved to the Appendix.
## 2 JEL for the Gini correlation
The Gini correlation has a mixed property of the Pearson correlation and the Spearman correlation: (1) If and are statistically independent then ; (2) is invariant under all strictly increasing transformations of or under changes of scale and location in ; (3) and (4) if is a monotonic increasing (decreasing) function of , then both and equal +1 (-1). From [17], in (1) can be written in the form as below
γ(X,Y)=Eh1((X1,Y1),(X2,Y2))Eh2((X1,Y1),(X2,Y2)), (3)
where and are independent copies of ,
h1((x1,y1),(x2,y2))=14[(x1−x2)I(y1>y2)+(x2−x1)I(y2>y1)] (4)
and
h2((x1,y1),(x2,y2))=14|x1−x2|. (5)
Then given an i.i.d. data set , with , the Gini correlation can be estimated by a ratio of two -statistics
^γ(X,Y)=U1U2=(n2)−1∑1≤iYj)+(Xj−Xi)I(Yj>Yi)](n2)−1∑1≤i
with the kernel of being and the kernel of being .
###### Remark 2.1
A direct computation of -statistics is time-intensive with complexity . Rewriting and as linear combinations of order statistics reduces the computation to . That is, and , where is the order statistics of and is the that belongs to ([17]).
By -statistic theory, the asymptotic normality of the estimator (6) for ([17], [16]) is:
√n(^γ(X,Y)−γ(X,Y))\lx@stackreld→N(0,vγ)asn→∞, (7)
with
vγ=(4/θ22)ζ1(θ1)+(4θ21/θ42)ζ2(θ2)−(8θ1/θ32)ζ3(θ1,θ2), (8)
where
θ1=cov(X,G(Y)),θ2=cov(X,F(X)),
ζ1(θ1)=Ez1{Ez2[h1(Z1,Z2)]}2−θ21,
ζ2(θ2)=Ez1{Ez2[h2(Z1,Z2)]}2−θ22
and
In particular, under a bivariate normal distribution with correlation , Xu ([26]) provided an explicit formula of , given by . However, the asymptotic variance is difficult to obtain for the non-normal distributions and an estimate of is needed either by a Monte Carlo simulation or based on the jackknife method. Let be the jackknife pseudo value of the Gini correlation estimator based on the sample with the observation deleted. Then the jackknife estimator of (8) is
^vγ=n−1nn∑i=1(^γ(−i)−¯^γ(⋅))2 (9)
where see [21].
In this section, we utilize the jackknife approach combining with the EL method to make inference on the Gini correlation.
### 2.1 JEL for the Gini correlation
Without loss of generality, we consider the case for , and the procedure for will be similar. For simplicity, we use to denote in this section. Define a novel -statistic type functional as
Un(γ)=(n2)−1∑1≤i
where
h((x1,y1),(x2,y2);γ)=h2((x1,y1),(x2,y2))⋅γ−h1((x1,y1),(x2,y2)) (11)
with and being given by (4) and (5), respectively. By (3), we have . To apply the JEL to , we define the jackknife pseudo sample as
^Vi(γ)=nUn(γ)−(n−1)U(−i)n−1(γ),
where is based on the sample with the observation being deleted. It can be easily shown that and
Un(γ)=1nn∑i=1^Vi(γ).
Let be nonnegative numbers such that Then following the standard empirical likelihood method for a univariate mean over the jackknife pseudo-values ([12], [13]), we define the JEL ratio at as
R(γ)=max{n∏i=1(npi):pi≥0,i=1,...,n;n∑i=1pi=1;n∑i=1pi^Vi(γ)=0}.
Utilizing the standard Lagrange multiplier technique, the jackknife empirical log-likelihood ratio at is
logR(γ)=−n∑i=1log[1+λ^Vi(γ)],
where satisfies
1nn∑i=1^Vi(γ)1+λ^Vi(γ)=0.
Define and . Then we have the following Wilks’ theorem with only the assumption of the existence of second moments:
###### Theorem 2.1
Assume , and . Then we have
−2logR(γ)\lx@stackreld→χ21,as n→∞.
Based on the theorem above, a jackknife empirical likelihood confidence interval for can be constructed as
Iα={~γ:−2log^R(~γ)≤χ21,1−α},
where denotes the quantile of the chi-square distribution with one degree of freedom, and is the observed empirical log-likelihood ratio at .
In application, an under-coverage problem may appear when the sample size is relatively small. In order to improve coverage probabilities, we utilize the adjusted empirical likelihood method ([1]) by adding one more pseudo-value
^Vn+1(γ)=−annn∑i=1^Vi(γ),
where Under the recommendation of [1], we take
### 2.2 Empirical performance
To evaluate the empirical performance of our JEL methods (denoted as ‘JEL’, ‘JEL’ for , , respectively), we conduct a simulation study. Another purpose is to examine whether the adjusted JEL methods (denoted as ‘AJEL’, ‘AJEL’ ) can make an improvement over the JEL method for small sample sizes. The interval estimators for the Gini correlations based on the asymptotical normality of (7) with variance calculated by (8) are denoted as ‘AV’ and ‘AV’, while ‘J’ and ‘J’ to denote the methods using (9) to estimate . Similar notions for the different method in the following sections will be used.
We also present the results for the Pearson’s correlation and denote it as ‘’. The limiting distribution of the regular sample Pearson correlation coefficient is normal: , where
vp=(1+ρ22)σ22σ20σ02+ρ24(σ40σ220+σ04σ202−4σ31σ11σ20−4σ13σ11σ02), (12)
and , see for example [24]. The Pearson correlation estimator requires a finite fourth moment on the distribution to evaluate its asymptotic variance. For bivariate normal distributions, the asymptotic variance simplifies to . For other distributions rather than the normal, the asymptotic variance may be estimated by a Monte Carlo simulation or by a jackknife variance method. We do not include another popular correlation Kendall in the simulation. Its performance is referred to [16].
We generate 3000 samples of two different sample sizes () from two different bivariate distributions, namely, normal and , with the scatter matrix . Without loss of generality, we consider only cases of with . For each simulated data set, and confidence intervals are calculated using different methods. We repeat this procedure 30 times. The average coverage probabilities and average lengths of confidence intervals as well as their standard deviations (in parenthesis) are presented in Tables 1, 2.
Under elliptical distributions including normal and distributions, the two Gini correlations and the Pearson correlation are equal to the linear correlation parameter , that is, ([16]). Thus, all the listed methods in Table 1 and Table 2 are for the same quantity .
From Table 1, we observe that under the bivariate normal distribution, all methods keep good coverage probabilities when sample size is large () but the JEL methods produce the shortest intervals for all values. It even behaves better than the method which is asymptotically optimal under normal distributions. AV (AV, AV) methods are slightly better than J (J, J) methods but not good as method. Note that the lengths of AV and AV methods are same, also standard deviations of confidence interval lengths for AV, J and methods are always 0. When the sample size is relatively small (), our JEL method always produces better coverage probabilities and shorter confidence intervals compared with J method, and performs better than AV method when is relatively large (). All the JEL, J and AV methods present slight under-coverage problems. However, the adjusted JEL method improves the under-coverage problems effectively and keeps shorter intervals.
Table 2 lists the results under the bivariate distribution. As expected, the method performs poorly for heavy-tailed distributions. It suffers a serious over-coverage problem for all cases. For the AV method, the asymptotic variance (8) is calculated by a Monte Carlo simulation with sample size of . In this sense, we say AV to be a parametric method and it yields good coverage probabilities. For the two nonparametric methods, JEL and J, both of them have slight under-coverage problems especially when is large and is small, but the JEL method produces better coverage probabilities and shorter confidence intervals than J. When the sample size is small () and is small (), JEL interval estimators are as short as half the length of J and AV interval estimators. Compared with AV methods, the JEL method always has shorter confidence intervals. Additionally the adjusted JEL methods improve the under-coverage problems.
## 3 JEL test for the equality of Gini correlations
The two Gini correlations in (1) are not equal generally. One sufficient condition for the equality of the two Gini correlations is that and are exchangeable up to a linear transformation. That is, there exist and () such that and are equally distributed. Particularly, if are elliptically distributed with linear correlation parameter , then and are exchangeable up to a linear transformation. Hence we have and they are equal to . More details are referred to ([17], [27]). Let . The hypotheses of interest are
H0:Δ=0vsHa:Δ≠0. (13)
The objective of this section is to test the equality of the two Gini correlations via the JEL method.
### 3.1 JEL test for the equality of the two Gini correlation
For simplicity, we use and to denote and , respectively. Let and we are interested in making inference of . Let and , where is defined in (11). We define a vector -statistic type functional as
\boldmath{U}n(\boldmath{θ})=(n2)−1∑1≤i
with kernels
\boldmath{G}((x1,y1),(x2,y2);\boldmath% {θ})=(g1((x1,y1),(x2,y2);Δ+γ2)g2((x1,y1),(x2,y2);γ2)).
It is easy to see that .
We do not apply the profile EL method since the computation of the profile EL could be very difficult even for equations without a -structure involved. For our case, since function does not depend on , it enables us to estimate by that solves
(n2)−1∑1≤i
It is easy to check that , which can be easily computed with complexity . We plug in and conduct the JEL method for . More specifically, let
Mn(Δ)=2n(n−1)∑1≤i
and
Mn,−i(Δ)=2(n−1)(n−2)∑1≤k
Then the jackknife pseudo samples are
^Vi(Δ)=nMn(Δ)−(n−1)Mn,−i(Δ),i=1,...,n (15)
and the jackknife empirical likelihood ratio at is
R(Δ)=max{n∏i=1(npi):pi≥0,i=1,...,n;n∑i=1pi=1;n∑i=1pi^Vi(Δ)=0}.
By the standard Lagrange multiplier method, we obtain the log empirical likelihood ratio as
logR(Δ)=−n∑i=1log{1+λ^Vi(Δ)},
where satisfies
f(λ)=1nn∑i=1^Vi(Δ)1+λ^Vi(Δ)=0. (16)
Define and . We have the following result.
###### Theorem 3.1
If , and , then
−2logR(Δ)\lx@stackreld→χ2(1),as n → ∞.
A proof of Theorem 3.1 needs to deal with an extra variation introduced by estimator , which is given in the Appendix.
###### Remark 3.1
Li ([7]) established the Wilks’ theorem for a general U-type profile empirical likelihood ratio under a strong condition that the kernel is uniformly bounded in both variables and parameters. Here we only assume the existence of the second moments of the kernel functions in the sample space.
###### Remark 3.2
The profile empirical likelihood ratio is usually computed through the ratio of the EL at the true value of the parameter and the EL at the maximal empirical likelihood estimate. In our case, because is independent with and is linear in and , our JEL ratio does not involve the maximal empirical likelihood estimate, enjoying the computational ease property.
We can obtain a jackknife empirical likelihood confidence interval for as
Iα={~Δ:−2log^R(~Δ)≤χ21,1−α},
where is the observed log-likelihood ratio at . If , we are able to reject at significance level. For the hypothesis test (13), under the null hypothesis, the -value can be calculated by
p-value=P(χ21>−2log^R(0)),
where is a random variable from a chi-square distribution with one degree of freedom. For instance, under elliptical distributions we are able to compute -values for the test. On the other hand, in the case that the true parameter , Theorem 3.1 holds rather under but under , and hence the power of the test under can be computed to be
power=1−P(Accept H0⏐Ha)=1−P(−2logR(Δ0)≤χ21,1−α)=P(−2logR(Δ0)>χ21,1−α).
In the next, we consider simulations of two cases. One is for the case and the other is for .
### 3.2 Empirical performance
In the simulation, 3000 samples of two different sample sizes () are drawn from bivariate normal and normal-lognormal distributions, that is, and are drawn from with the scatter matrix . Under the bivariate normal distribution, , the null hypothesis, , is true and values are provided in Table 3 along with averages and standard deviations of the coverage probabilities. Under the mixture of normal and lognormal distribution, and is not equal to for . Thus powers of the test are presented in Table 4.
Under bivariate normal distributions, both JEL and deltaJ methods have over coverage probability problems especially for large under small sample size. The JEL method performs relatively better than the deltaJ method for all sample size and all values. The -values in Table 3 are all greater than 0.5, which indicates that we cannot reject under a bivariate normal distribution. This implies no evidence to reject the exchangeability up to a linear transformation, which is a correct decision under bivariate normal distributions.
From Table 4, under normal-lognormal distributions we can observe that the powers of the test for all the listed methods are not high. This can be explained by the fact that the true values are very close to 0, making the procedures difficult to reject . However, the JEL method is more efficient with higher powers for all sample sizes than deltaJ. Among all the approaches in Table 4, deltaJ method produces good coverage probabilities when the sample size is large but have serious over-coverage problems when and is large. This is due to one characteristic of the lognormal distribution. The bias and variance of the sample correlation may be quite significant especially when the correlation coefficient is not close to zero ([24]). On the other hand, the JEL method has under-coverage problems when the sample size is small but these problems have been corrected effectively by the adjusted JEL method.
## 4 JEL for independent data
Let and , where for , be independent samples from distributions and with sample size and , respectively. Let , , and denote the Gini correlations between and , and for these two distributions, respectively. Let and , the hypotheses of interest are
H0:(δ1δ2)=(00)vsHa:(δ1δ2)≠(00). (17)
Our aim for this section is to derive a JEL method to test the above statements.
### 4.1 JEL for Gini correlation differences for independent data
Due to independence of for and , we have
δ1=E[h1(Z(1)1,Z(1)2)h2(Z(2)1,Z(2)2)−h2(Z(1)1,Z(1)2)h1(Z(2)1,Z(2)2)]Eh2(Z(1)1,Z(1)2)h2(Z(2)1,Z(2)2)
and
δ2=E[h′1(Z(1)1,Z(1)2)h′2(Z(2)1,Z(2)2)−h′2(Z(1)1,Z(1)2)h′1(Z(2)1,Z(2)2)]Eh′2(Z(1)1,Z(1)2)h′2(Z(2)1,Z(2)2),
where , This motivates us to define a two-sample -statistic type functional as
\boldmath{U}n1,n2(δ1,δ2)=(n12)−1(n22)−1∑1≤i1
with
\boldmath{H}(z(1)1,z(1)2,z(2)1,z(2)2;δ1,δ2)=(h2(z(1)1,z(1)2)h2(z(2)1,z(2)2)δ1−h1(z(1)1,z(1)2)h2(z(2)1,z(2)2)+h2(z(1)1,z(1)2)h1(z(2)1,z(2)2)h′2(z(1)1,z(1)2)h′2(z(2)1,z(2)2)δ2−h′1(z(1)1,z(1)2)h′2(z(2)1,z(2
|
2020-12-01 08:13:09
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8641678094863892, "perplexity": 713.9772645424005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141672314.55/warc/CC-MAIN-20201201074047-20201201104047-00635.warc.gz"}
|
https://brilliant.org/problems/a-geometry-problem-by-charuka-bandara/
|
# A geometry problem by Charuka Bandara
Geometry Level pending
$$E$$ and $$N$$ are points on the sides $$DC$$ and $$DA$$ of the square $$ABCD$$ such that $$AN : ND : DE = 2 : 3 : 4.$$ The line through $$N$$ perpendicular to $$BE$$ cuts $$BE$$ at $$P$$ and $$BC$$ at $$M$$. $$AC$$ cuts $$MN$$ at $$O$$ and $$BE$$ at point $$S$$. What fraction of the area of $$ABCD$$ is the area of triangle $$OPS$$?
Give your answer to the first decimal place.
×
|
2017-12-15 19:27:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6949501037597656, "perplexity": 79.67854556999879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948579564.61/warc/CC-MAIN-20171215192327-20171215214327-00238.warc.gz"}
|
https://itprospt.com/num/4941647/vel-ivv-imig-x27-kraphds-shown-below-whiich-acceleration
|
5
# Vel Ivv: (imig 'Kraphds shown below Whiich acceleration / . [irie graph conrespond to (he velocIty v . Ulme graph? VelocIyTime (6)10relerationTie (s)...
## Question
###### Vel Ivv: (imig 'Kraphds shown below Whiich acceleration / . [irie graph conrespond to (he velocIty v . Ulme graph? VelocIyTime (6)10relerationTie (s)
vel Ivv: (imig 'Kraphds shown below Whiich acceleration / . [irie graph conrespond to (he velocIty v . Ulme graph? VelocIy Time (6) 10 releration Tie (s)
#### Similar Solved Questions
##### Formlatc thc following problenmIciy Ronalris mrublen FindDOeuonup()=n Iyt + Ial that sntislic thic following coudlitions The vales p(U) givcn pints Uhe inlerval M, approximalcly MuN Givcn vales % p(L) Thc ul? Yvcl distinet (4 f j) The valu; arC Jso given_ Thc: dcrivalives of JLuI 4"(4) ~0Thc avcrge Tuc 05the iutrtvil [uHDDTOXIlclv coualLhc villucp(U)dt = H05):inantinlin coellicictlsstisly Ius "TrMct YTAMCAE(r)Xiw) ") (()? (p(1))' (' Mt)lt Ma5))"Giva such that E(r)
Formlatc thc following problenm Iciy Ronalris mrublen Find DOeuonu p()=n Iyt + Ial that sntislic thic following coudlitions The vales p(U) givcn pints Uhe inlerval M, approximalcly MuN Givcn vales % p(L) Thc ul? Yvcl distinet (4 f j) The valu; arC Jso given_ Thc: dcrivalives of JLuI 4 "(4) ~0 T...
##### Question No.2 Evaluate each limit by using the definition of the definite integral (with right Riemann sums) and the Fundamental Theorem of Calculus. limn-+o E+vz + 3 +_+ Wz+4 2T 3T (b) limn-+o sin + sin 2n + sin +-+sin 2n 2n 2n ()]
Question No.2 Evaluate each limit by using the definition of the definite integral (with right Riemann sums) and the Fundamental Theorem of Calculus. limn-+o E+vz + 3 +_+ Wz+4 2T 3T (b) limn-+o sin + sin 2n + sin +-+sin 2n 2n 2n ()]...
##### A swine producer has 98 sows in one barn and has elected to cull (discard) 23 of them. He has another barn with 107 sows. How many sowS does he need to cull to make the number equivalent in each barn?32 sowsSowS23 sows42 sows
A swine producer has 98 sows in one barn and has elected to cull (discard) 23 of them. He has another barn with 107 sows. How many sowS does he need to cull to make the number equivalent in each barn? 32 sows SowS 23 sows 42 sows...
##### (10 points) Three players and C are dividing among themselves set of common assets equally owned by the three of them The assets are divided into three shares 52, and s3} The table below shows the values of the shares to each player expressed as percent ofthe total value of the assets_25%( 34% 41%(Player Player Player 33 92 Which oftne following the best fair division ofthe assets using 51,52,and gets 52; gets s3; C gets 514 A gets 53; B gets 5z: C gets 514 Agets 51; B gets 5z; C gets 53- A gets
(10 points) Three players and C are dividing among themselves set of common assets equally owned by the three of them The assets are divided into three shares 52, and s3} The table below shows the values of the shares to each player expressed as percent ofthe total value of the assets_ 25%( 34% 41%(...
##### In each case, either the transfer function H(s) or the input function h(t) is given: Compute the forced responses if the function f (t) is the input function.H(s) (s +3)6 - 2)' f() =-2t(b) h() = e-3 f() =s-3) h() = 2c0s(2t), f(t) = % ()
In each case, either the transfer function H(s) or the input function h(t) is given: Compute the forced responses if the function f (t) is the input function. H(s) (s +3)6 - 2)' f() =-2t (b) h() = e-3 f() =s-3) h() = 2c0s(2t), f(t) = % ()...
##### Suppose the random variable Z follows a standard normal distribution: Then the value of P(Z <-3.39) is0 A 0.0003O.00000.99971.000
Suppose the random variable Z follows a standard normal distribution: Then the value of P(Z <-3.39) is 0 A 0.0003 O.0000 0.9997 1.000...
##### Problem 4 (20 points)Find an expression for the current through and the voltage cross RLin the circuit below. Note that R and Vz have been found (based on your School ID): Find an expression for the power bsorbed by RL Tabulate (find and put table) the power absorbed by RL for various values of RL (R = 0, R = R/4, R = R/2, R = R, RL = 2R, RL = 4RJ. Sketch power versus RL using the values you obtained in (III)(IV)R(2
Problem 4 (20 points) Find an expression for the current through and the voltage cross RLin the circuit below. Note that R and Vz have been found (based on your School ID): Find an expression for the power bsorbed by RL Tabulate (find and put table) the power absorbed by RL for various values of RL ...
##### Find vectors parallel to $\mathbf{v}$ of the given length. $$\mathbf{v}=\overrightarrow{P Q} \text { with } P(3,4,0) \text { and } Q(2,3,1) ; \text { length }=3$$
Find vectors parallel to $\mathbf{v}$ of the given length. $$\mathbf{v}=\overrightarrow{P Q} \text { with } P(3,4,0) \text { and } Q(2,3,1) ; \text { length }=3$$...
##### ONDateAssum e Y IN kave BlN , P)_ Jistributed N|^ hav e Pl) distr buted, An d Akaerta,B) d itributed 4u} to find Poo)un Fo itional distributivn
ON Date Assum e Y IN kave BlN , P)_ Jistributed N|^ hav e Pl) distr buted, An d Akaerta,B) d itributed 4u} to find Poo)un Fo itional distributivn...
##### Use the following information to answer Questions [1H4 The density curve of a continuous random variable X is shown in Figure[ 3 1 0 Figure 1: The density curve of a continuous random variable X1. Is this density curve symmetric at 0 and bimodal? (Please answer "Yes"No"What is the area under the density curve? According to the graph; what is the second quartile (median) of the random variable X? The third quartile of X is 1.348 and the first quartile is -1.348, what percentage of
Use the following information to answer Questions [1H4 The density curve of a continuous random variable X is shown in Figure[ 3 1 0 Figure 1: The density curve of a continuous random variable X 1. Is this density curve symmetric at 0 and bimodal? (Please answer "Yes "No" What is the...
##### Two objects required to distinguish them as 7. Which term describes the minimum distance between two separate objects? the ocular lenses of 10X, what = is the 8. If the total magnification of an image Is 4OOX and you are using magnifving power of the objective lens being used? The amount of light passing through the condenser needs to be decreased_ what microscope part should be adjusted? microscope that remains in focus when the objective lenses are changed? 10. What word describes
two objects required to distinguish them as 7. Which term describes the minimum distance between two separate objects? the ocular lenses of 10X, what = is the 8. If the total magnification of an image Is 4OOX and you are using magnifving power of the objective lens being used? The amount of light pa...
##### 1- Simplify the following Boolean expressions to a minimum number of literals: (XY' + W'Z) (WX' + YZ') ? 2- Reduce the following Boolean expressions to two literals: = WXY'Z W'XZ WXYZ? 3- Draw original and simplifiled circuit for function F in problem 2. ? 4- Find the complement of F = wx + yz; then show that FF'= 0 and F+F' = 1. ?
1- Simplify the following Boolean expressions to a minimum number of literals: (XY' + W'Z) (WX' + YZ') ? 2- Reduce the following Boolean expressions to two literals: = WXY'Z W'XZ WXYZ? 3- Draw original and simplifiled circuit for function F in problem 2. ? 4- Find the ...
##### The angle between the two planes * +Y= and 2x + y ~2z = 2isOptionOptionOpllon -Optlon ?
The angle between the two planes * +Y= and 2x + y ~2z = 2is Option Option Opllon - Optlon ?...
##### Find the differential of the function_6xY =dy6e6x
Find the differential of the function_ 6x Y = dy 6e6x...
##### 16. The Sanger = sequencing method is also known as the Roche 454 Illumina Dideoxysequencing method_17. A section of DNA is amplilied: that is; many copies are made via mechunism similar t0 cellular replication using recombinant DNA technology Sanger sequencing polymerase chain reaction (PCR)
16. The Sanger = sequencing method is also known as the Roche 454 Illumina Dideoxy sequencing method_ 17. A section of DNA is amplilied: that is; many copies are made via mechunism similar t0 cellular replication using recombinant DNA technology Sanger sequencing polymerase chain reaction (PCR)...
|
2022-09-28 15:14:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7900105714797974, "perplexity": 8683.43504205958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00228.warc.gz"}
|
https://geolis.math.tecnico.ulisboa.pt/seminars?action=show&id=6794
|
## 29/11/2022, Tuesday, 16:00–17:00 Europe/Lisbon Online
, Institut de Mathématiques de Jussieu - Paris Rive Gauche
In an influential article from the 1970s, Albert Fathi, having proven that the group of compactly supported volume-preserving homeomorphisms of the $n$-ball is simple for $n\geq 3$, asked if the same statement holds in dimension $2$. In a joint work with Cristofaro-Gardiner and Humilière, we proved that the group of compactly supported area-preserving homeomorphisms of the $2$-disc is not simple. This answers Fathi's question and settles what is known as "the simplicity conjecture" in the affirmative.
In fact, Fathi posed a more general question about all compact surfaces: is the group of "Hamiltonian homeomorphisms" (which I will define) simple? In my talk, I will review recent joint work with Cristofaro-Gardiner, Humilière, Mak and Smith answering this more general question of Fathi. The talk will be for the most part elementary and will only briefly touch on Floer homology which is a crucial ingredient of the solution.
|
2022-11-26 09:35:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6285814642906189, "perplexity": 1048.0180077591501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706285.92/warc/CC-MAIN-20221126080725-20221126110725-00397.warc.gz"}
|
https://www.esaral.com/q/8-taps-of-the-same-size-fill-a-tank-in-27-minutes-17271/
|
8 taps of the same size fill a tank in 27 minutes.
Question:
8 taps of the same size fill a tank in 27 minutes. If two taps go out of order, how long would the remaining taps take to fill the tank?
Solution:
Let x min be the required number of time. Then, we have:
No. of taps 8 6 Time (in min) 27 xx
Clearly, less number of taps will take more time to fill the tank .
So, it is a case of inverse proportion.
Now, $8 \times 27=6 \times x$
$\Rightarrow x=\frac{8 \times 27}{6}$
$\Rightarrow x=36$
Therefore, it will take 36 min to fill the tank.
|
2022-05-17 12:08:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6324046850204468, "perplexity": 574.5365297079896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517245.1/warc/CC-MAIN-20220517095022-20220517125022-00316.warc.gz"}
|
https://chemistry.stackexchange.com/questions/150266/ratio-of-carbon-monoxide-and-carbon-dioxide-when-carbon-and-oxygen-are-made-to-r
|
# Ratio of carbon monoxide and carbon dioxide when carbon and oxygen are made to react [duplicate]
$$\pu{12 g}$$ of $$\ce{C}$$ reacts with $$\pu{64 g}$$ of $$\ce{O2}$$ to give a mixture of $$\ce{CO}$$ and $$\ce{CO2}$$. Find amount of $$\ce{CO}$$ and $$\ce{CO2}$$ at the end of reaction.
What I tried was to let x moles of carbon react to give $$\ce{CO2}$$ and y moles react to give $$\ce{CO}$$.
$$\ce{ \underset{x}{C} + \underset{x}{O2} -> \underset{x}{CO2}}$$ $$\ce{\underset{y}{2C}} + \underset{y/2}{\ce{O2}}\ce{ -> \underset{y}{2CO}}$$
Solving for $$x + y = 1$$ and $$x + 0.5y = 2$$ , we get $$x=3$$ and $$y= -2$$ which is meaningless.
I followed the method used in this answer.
Can anyone tell me where I went wrong?
• Comments are not for extended discussion; this conversation has been moved to chat. Apr 20, 2021 at 8:42
• This is a bad question. The answer is indeterminate since the problem doesn't define the conditions for completion. Some mixture of CO, CO2 and O2 would seem likely. The only way to have a definite "solution" for the problem would be to make an explicit assumption that all the carbon goes to CO2 since O2 is in excess.
– MaxW
Apr 20, 2021 at 9:39
The answer that you got is meaningless because it is wrong.
You missed one reaction here. $$\require{cancel}\ce{CO}$$ can further react with $$\ce{O2}$$ to give $$\ce{CO2}$$
$$\ce{2CO + O2 -> 2CO2}$$
Now we have three reactions to consider,
$$\ce{2C + O2 -> 2CO} \tag{1}$$ $$\ce{2CO + O2 -> 2CO2}\tag{2}$$ $$\ce{C + O2 -> CO2} \tag{3}$$
(1) and (2) can be combined here to give:
$$\ce{2C + O2 + \cancel{2\ce{CO}} + O2 -> \cancel{\ce{2CO}} + 2CO2}$$
Which gives:
$$\ce{C +O2 ->CO2} \tag{3}$$
These reactions progress in the following manner, (1) + (2) = (3).
Now writing the three equations and using the above statement(which means we can find the value for just (1) first), we get that $$\pu{1 mol}$$ of $$\ce{C}$$ reacts with $$\pu{0.5 mol}$$ of $$\ce{O2}$$ to give $$\pu{1 mol}$$ of $$\ce{CO}$$. We still have $$\pu{1.5 mol}$$ of $$\ce{O2}$$ left, so the reaction goes further.
Now onto (2), $$\pu{1 mol}$$ of $$\ce{CO}$$ reacts with $$\pu{0.5 mol}$$ of $$\ce{O2}$$ to give $$\pu{1 mol}$$ of $$\ce{CO2}$$ and we still have $$\pu{1 mol}$$ of $$\ce{O2}$$ left which is in excess.
You can use this same method for this question as well
|
2022-05-19 06:08:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 41, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7184762954711914, "perplexity": 392.47592642802755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662525507.54/warc/CC-MAIN-20220519042059-20220519072059-00352.warc.gz"}
|
https://www.thejournal.club/c/paper/24414/
|
#### ProofFlow: Flow Diagrams for Proofs
##### Steven A. Kieffer
We present a light formalism for proofs that encodes their inferential structure, along with a system that transforms these representations into flow-chart diagrams. Such diagrams should improve the comprehensibility of proofs. We discuss language syntax, diagram semantics, and our goal of building a repository of diagrammatic representations of proofs from canonical mathematical literature. The repository will be available online in the form of a wiki at proofflow.org, where the flow chart drawing software will be deployable through the wiki editor. We also consider the possibility of a semantic tagging of the assertions in a proof, to permit data mining.
arrow_drop_up
|
2023-02-05 11:07:57
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8106240034103394, "perplexity": 2093.403639437933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500251.38/warc/CC-MAIN-20230205094841-20230205124841-00337.warc.gz"}
|
https://myelectrical.com/notes/entryid/221/capacitor-theory
|
# Capacitor Theory
By on
Capacitors are widely used in electrical engineering for functions such as energy storage, power factor correction, voltage compensation and many others. Capacitance is also inherent in any electrical distribution systems and can play a pivotal role in it's operation.
In order to fully understand capacitors and their use, it is essential that electrical practitioners have a good understanding of capacitor theory.
## Capacitance
Symbols Used
C - capacitor, with units of Farad (F)
R - resistor, with units of ohm (Ω)
V - d.c. source voltage in volts (V)
vc - capacitor voltage in volts (V)
I - peak charge or discharge current in amperes (A)
i - instantaneous current in amperes (A)
Q - electric charge (C)
E - electric field strength (V/m)
D - electric flux density (C/m2)
εo - permittivity of free space (f/m) - constant: 8.854 187 817... x 10−12
εr - relative permittivity of dielectric
Capacitors consist of conducting surfaces separated dielectric (insulator). The effect of this is that when a voltage is applied, charge flows into the capacitor and is stored. When an external circuit is connected to the capacitor, this stored charge will flow from the capacitor into the circuit.
Capacitance is a measure of amount of charge which can be stored within a capacitor. The SI unit of capacitance is the farad (F). The farad is the ratio of electrical charge stored by the capacitor to the voltage applied:
The amount of capacitance is defendant on the materials used and geometry of the capacitor.
Formally, capacitance is found by the solution of the Laplace equation ∇2φ = 0, where φ is a constant potential on conductor surface. Simpler geometries can also be solved using other methods (the example shows an example for a parallel plate capacitor).
### Example - Parallel Plate Capacitance
Parallel Plate Capacitor
(click for larger image)
Capacitor shown and assume the dielectric is a vacuum. Electrostatic theory suggests that the ratio of electric flux density to electric field strength is the permittivity of free space:
The electric flux density and electric field strength are given by:
With capacitance defined as :
The above equations can be combined and solved to give the capacitance of a parallel plate capacitor (with a free air dielectric) as:
farad
For more real dielectrics the capacitance will be increased directly in proportion to the the relative permittivity and given by:
farad
## Charging and Discharging of Capacitors
Charging (and discharging) of capacitors follows an exponential law. Consider the circuit which shows a capacitor connected to a d.c. source via a switch. The resistor represents the leakage resistance of the capacitor, resistance of external leads and connections and any deliberately introduced resistance.
Capacitor Charging Voltage
Capacitor Charging Voltage
When the switch is closed, the initial voltage across the capacitor (C) is zero and the current (i) is given by:
- from fundamental capacitor theory
The voltage across the resistor is the current multiplied by its value, giving:
From Kirchhoff's voltage law, the d.c. source voltage (V) equals the sum of the capacitor voltage (vc) and voltage across the resistor:
Which when rearranged gives:
and
By integrating both sides, we get:
at giving
By rearranging
which goes to
and
The voltage across the capacitor will increase from zero to that of the d.c. source as an exponential function.
### Capacitor Charging Current
Capacitor Charging &
Discharging
From the above:
Giving:
Letting the initial current (I), be the d.c source voltage divided by the resistance:
giving
### Time Constant
The product of resistance and capacitance (RC), has the units of seconds and is refereed to as the circuit time constant (denoted by the Greek letter Tau, τ).
Using this, the equations of voltage and charging current across the capacitor are written as:
Note: increasing the value of resistance R, will increase the time constant resulting in a slower charge (or discharge) of the capacitor.
### Capacitor Discharging
When discharging the current behaves the same as that for charging, but in the opposite direction. Voltage across the capacitor will decay exponentially to zero. Equations for both current and voltage discharge can be determined in a similar way to that shown above and are summarized as:
## Energy Storage
The greater the capacitance, the more energy it can store.
Current in the capacitor is given by:
Instantaneous power within the capacitor is the product of current and voltage:
watts
During an interval dt, the energy supplied is:
joules
By integrating the instantaneous energy as the capacitor voltage rises, we can find the total energy stored:
joules
It is worth noting that when connecting capacitors in series, the total capacitance reduces but the voltage rating increases. Connecting in parallel keeps the voltage rating the same, but increases the total capacitance. Either way the total energy storage of any combination is simply the sum of the storage capacity of each individual capacitor.
### Resistor Losses
In charging an ideal capacitor there are no losses. However, should a capacitor be charged via a resistor then it should be understood that half of the charging energy will be lost and dissipated as heat across the capacitor.
Consider the above circuit, with a charging current of:
The instantaneous power loss across the resistor is:
Consequentially the total power loss is:
Working through the solution gives:
$∫ 0 ∞ V 2 R e −2t RC dt = [ V 2 R ( −RC 2 ) e −2t RC ] 0 ∞ =[ 0 ]−[ − C V 2 2 ]$
$= 1 2 C V 2$ joules
It can be seen that the energy loss is the same as that stored within the capacitor. On discharging, there will also be half the store energy lost within the resistor.
## See Also
More interesting Notes:
Steven McFadyen
Steven has over twenty five years experience working on some of the largest construction projects. He has a deep technical understanding of electrical engineering and is keen to share this knowledge. About the author
myElectrical Engineering
comments powered by Disqus
#### View 1 Comments (old system)
1. Notes says:
6/27/2013 2:32 PM
Trackback from Notes
Capacitors have numerous applications in electrical and electronic applications. This note, examines the use of capacitors to store electrical energy. The sidebar shows details of a typical commercially available energy storage module. ...
Comments are closed for this post:
• have a question or need help, please use our Questions Section
• spotted an error or have additional info that you think should be in this post, feel free to Contact Us
Equipment Verification (to IEC Standards)
One of the requirements to ensuring that everything works is to have equipment selected, manufactured and verified [tested] to IEC standards. Not all equipment...
Motor Starting - Introduction
Motor starting and its associated problems are well-known to many people who have worked on large industrial processes. However, these things are, of course...
Post Editing Tips
If you at all familiar with programs like office and outlook, then adding and editing posts is pretty straightforward and intuitive. However, there are...
Electrical Engineering
Electrical engineering is a field that covers a wide variety of sub-fields, including electricity and electronics. It is a field that goes back to the...
Periodic Electrical Installation Inspection – What to Inspect?
This is the second post in a series of two on periodic electrical inspections. In the first post, I discussed how often inspections should be carried out...
UPS - Uninterruptible Power Supply
A UPS is an uninterruptible power supply. It is a device which maintains a continuous supply of electrical power, even in the event of failure of the...
Magicians of Engineering
The other day I was reading 'Night of the New Magicians' by Mary Pope Osborn with my son. The story is about a young boy and girl who travel back in time...
Our internet address and Vanity URLs
Visitors who like to type web address rather then click menus may be interested in how our URL structure works.
Induction Motor Equivalent Circuit
Induction motors are frequently used in both industrial and domestic applications. Within the induction motor, an electrical current in the rotor is induced...
How a Digital Substation Works
Traditionally substations have used circuit breakers, current transformers (CT), voltage transformers (VT) and protection relays all wired together using...
## Have some knowledge to share
If you have some expert knowledge or experience, why not consider sharing this with our community.
By writing an electrical note, you will be educating our users and at the same time promoting your expertise within the engineering community.
To get started and understand our policy, you can read our How to Write an Electrical Note
|
2020-01-21 00:46:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5028588175773621, "perplexity": 1657.367017526298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601040.47/warc/CC-MAIN-20200120224950-20200121013950-00550.warc.gz"}
|
https://kfp.bitbucket.io/fricas-ug/section-9.56.html
|
9.56 Octonion¶
The Octonions, also called the Cayley-Dixon algebra, defined over a commutative ring are an eight-dimensional non-associative algebra. Their construction from quaternions is similar to the construction of quaternions from complex numbers (see QuaternionXmpPage ).
As Octonion creates an eight-dimensional algebra, you have to give eight components to construct an octonion.
oci1 := octon(1,2,3,4,5,6,7,8)
1+2i+3j+4k+5E+6I+7J+8K
Type: Octonion Integer
oci2 := octon(7,2,3,-4,5,6,-7,0)
7+2i+3j-4k+5E+6I-7J
Type: Octonion Integer
Or you can use two quaternions to create an octonion.
oci3 := octon(quatern(-7,-12,3,-10), quatern(5,6,9,0))
-7-12i+3j-10k+5E+6I+9J
Type: Octonion Integer
You can easily demonstrate the non-associativity of multiplication.
(oci1 * oci2) * oci3 - oci1 * (oci2 * oci3)
2696i-2928j-4072k+16E-1192I+832J+2616K
Type: Octonion Integer
As with the quaternions, we have a real part, the imaginary parts i, j, k, and four additional imaginary parts E, I, J and K. These parts correspond to the canonical basis (1,i,j,k,E,I,J,K).
For each basis element there is a component operation to extract the coefficient of the basis element for a given octonion.
[real oci1, imagi oci1, imagj oci1, imagk oci1, imagE oci1, imagI oci1,
imagJ oci1, imagK oci1]
[1,2,3,4,5,6,7,8]
Type: List PositiveInteger
A basis with respect to the quaternions is given by (1,E). However, you might ask, what then are the commuting rules? To answer this, we create some generic elements.
We do this in FriCAS by simply changing the ground ring from Integer to Polynomial Integer.
q : Quaternion Polynomial Integer := quatern(q1, qi, qj, qk)
q1+qii+qjj+qkk
Type: Quaternion Polynomial Integer
E : Octonion Polynomial Integer:= octon(0,0,0,0,1,0,0,0)
E
Type: Octonion Polynomial Integer
Note that quaternions are automatically converted to octonions in the obvious way.
q * E
q1E+qiI+qjJ+qkK
Type: Octonion Polynomial Integer
E * q
q1E-qiI-qjJ-qkK
Type: Octonion Polynomial Integer
q * 1$(Octonion Polynomial Integer) q1+qii+qjj+qkk Type: Octonion Polynomial Integer 1$(Octonion Polynomial Integer) * q
q1+qii+qjj+qkk
Type: Octonion Polynomial Integer
Finally, we check that the normnormOctonion, defined as the sum of the squares of the coefficients, is a multiplicative map.
o : Octonion Polynomial Integer := octon(o1, oi, oj, ok, oE, oI, oJ, oK)
o1+oii+ojj+okk+oEE+oII+oJJ+oKK
Type: Octonion Polynomial Integer
norm o
ok2+oj2+oi2+oK2+oJ2+oI2+oE2+o12
Type: Polynomial Integer
p : Octonion Polynomial Integer := octon(p1, pi, pj, pk, pE, pI, pJ, pK)
p1+pii+pjj+pkk+pEE+pII+pJJ+pKK
Type: Octonion Polynomial Integer
Since the result is 0, the norm is multiplicative.
norm(o*p)-norm(p)*norm(o)
0
Type: Polynomial Integer
|
2019-02-18 02:50:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7151562571525574, "perplexity": 11872.354958769047}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484020.33/warc/CC-MAIN-20190218013525-20190218035525-00234.warc.gz"}
|
https://www.physicsforums.com/threads/pure-chance-question.843334/
|
# Pure chance question
Tags:
1. Nov 15, 2015
I recently had a discussion with someone about Quantum Mechanics. His story was confusing to me but I could detect that he made an error in his thinking
which I proceeded to explain :
You are trying to reason from the idea that the 'collapse of the wave-function', which precedes the measurement, is something you can reason about in the
first place. The wave-function allows us to determine the probability of detecting a particle in a certain place and time. It's a probability distribution
function which means the reason a particle appears, is measured, in a certain place and time is determined by pure chance only. It's just that the
chance can vary from place to place and in some places the chance might be zero. So reasoning about how the wave-function 'collapses' equates to reasoning
about something that per definition is determined by pure chance only. This is invalid, since pure chance cannot be defined. Hence, you end up with paradox galore.
My question is, doesn't that mean that physics, once it exposes this 'problem' of pure chance ultimately determining everything, has reached its philosophical
limit already at that moment, since once it reaches pure chance, it has basically reached undefinability.
Doesn't it just stop there? No matter which way you shake it, you always have to make the assumption that you can still 'get' something from pure chance,
which is invalid per definition. Or you could assume that it's not pure chance, but why the hell are you using a probability distribution function then?
Ideas?
2. Nov 15, 2015
There's also no way out of using a PDF! From Feynman, Lectures on Physics vol. III :
'The uncertainty principle 'protects' quantum mechanics. Heisenberg recognized that if it were possible to measure the momentum and the position simultaneously with a
greater accuracy, the quantum mechanics would collapse. So he proposed that is must be impossible. Then people sat down and tried to figure out ways of doing it,
and nobody could figure out a way to measure the position and the momentum of anything - a screen, an electron, a billiard ball, anything - with any greater accuracy.
Quantum mechanics maintains its perilous but still correct existence.'
3. Nov 15, 2015
### DrChinese
Welcome to PhysicsForums, Adversary!
You don't need to make an assumption when there is empirical evidence. That being our world exists and we are having this discussion! There is plenty of evidence for the laws of chance as being fundamental, not so much for the other side.
There is no known cause for the value of any quantum observable I chose to measure. That doesn't mean there isn't one, and that one won't ever be discovered. But there is no particular advantage to assuming one exists. And it is definitely a stretch to assume pure chance is "invalid per definition". That remains to be seen.
In fact, any other viewpoint would actually be circular reasoning: assuming that which you wish to prove.
Last edited: Nov 15, 2015
4. Nov 15, 2015
'Welcome to PhysicsForums, Adversary!'
Thanks.
'You don't need to make an assumption when there is empirical evidence.'
You're always making an implicit assumption. And then it's best to be pragmatic, which ultimately leads to the scientific method, indeed.
'That being our world exists and we are having this discussion!'
There it is!
'There is plenty of evidence for the laws of chance as being fundamental, not so much for the other side.'
I don't dispute this. But there is a problem with the concept of the 'laws of chance'.
'There is no known cause for the value of any quantum observable I chose to measure.'
There is, but it's random. That's why you use a PDF. And when you use a PDF, you're making the implicit, mathematical, assumption that it's random then, which means
it cannot be defined. That's the problem : Mathematically, you've already stated that it's undefinable.
'That doesn't mean there isn't one, and that one won't ever be discovered.'
Mathematically, you've already stated that definition of it is impossible. Reasonably, this means that there isn't one, and that one won't ever be discovered either.
'But there is no particular advantage to assuming one exists.'
Don't assume anything at all; It's undefinable per definition. Reason stops.
'And it is definitely a stretch to assume pure chance is "invalid per definition". That remains to be seen.'
I said that defining chance, as in complete unpredictability, is undefinable. If we could provide a definition in any way, it wouldn't be very unpredictable, would it?
'In fact, any other viewpoint would actually be circular reasoning: assuming that which you wish to prove.'
But we can already know that any assumption is invalid on this, for mathematical reasons. I think that that, in itself, is a better assumption.
5. Nov 16, 2015
### bhobba
Dr Chinese didn't say that.
The laws of chance are rigorously definable via the Kolmogerov axioms:
https://en.wikipedia.org/wiki/Probability_axioms
QM is actually the most reasonable extension of those axioms that allows continuous transformations between so called pure states:
http://arxiv.org/pdf/quant-ph/0101012.pdf
Thanks
Bill
6. Nov 16, 2015
When you try to formalize probability, you're always making the same implicit assumption that chance cannot be defined. If it could, then why are you
handling it in that way? Rigorous treatment contents itself with studying the behaviour of randomness, but makes no attempt to define it. If it would,
it would be immediately mathematically invalid.
Sometimes, this is useful in dealing with incomplete information about the world; A world that might on deeper analysis turn out to be not random.
Then it just works as a simplified model. Maybe that's where the misconception comes from.
Quantum Mechanics however would collapse if the behaviour turns out to be non-random in any way. The theory then cannot make any predictions any more.
Therefore, the undefinable pure chance concept is what you have left, when you talk about 'collapse of the wave-function' etc.
And that's invalid, because you're trespassing in the Pure Chance Zone, so to speak, beyond the math.
7. Nov 16, 2015
### bhobba
That's nonsense. I think you need to state your position with greater care.
Errrrr - because it works.
Thanks
Bill
8. Nov 16, 2015
### Demystifier
Let us suppose, for the sake of argument, that you are right that chance cannot be defined. Does it mean that it is invalid/inconsistent/paradoxical to have a theory in which chance plays a vital role? You are arguing that it is. But you are wrong. There is nothing invalid/inconsistent/paradoxical with dealing with a theory in which some elements cannot be defined.
In fact any theory (about anything) must eventually be reducible to something which cannot be defined. This should be clear even at the linguistic level: To define some word, you must use some other more fundamental words. And to define those more fundamental words, you must use some even more fundamental ones, etc. But you must stop at some point, as the number of words is not infinite. And when you stop, your most fundamental definition will contain some words which cannot be defined. Such words which cannot be defined by other words are called primitive words.
Take for example the Newton law $F=ma$. The quantities $F,m,a$ are defined as real numbers. Real numbers can be defined in terms of rational numbers (e.g. via a Dedekind cut), and rational numbers can be defined in terms of integer numbers. The integer numbers can be defined by Peano axioms, in terms of sets. But sets, according to modern mathematics, cannot be defined. A set is a primitive concept in mathematics. So Newton law is based on something which cannot be defined. But, my point is, that does not mean that Newton law is invalid/inconsistent/paradoxical.
Just as "set" is a primitive concept, it is possible that "chance" is also a primitive concept. But that does not mean that there is something invalid/inconsistent/paradoxical with a theory based on chance, just as there is nothing invalid/inconsistent/paradoxical with a theory based on sets.
9. Nov 16, 2015
### Demystifier
Suppose, for the sake of argument, that it is not a pure chance. Then why one uses a probability distribution function? For the same reason one uses a probability distribution function when flipping a coin: Because it's practical.
10. Nov 16, 2015
Ok, then, show me a formula for true random behaviour that I can call in a computer program. Like so :
int getRandom()
{
...
}
I don't mean pseudo random numbers, for obvious reasons, nor do I mean random numbers obtained by some physical process, like Linux does, since that also
ends up being without a definition then. I mean an algorithm, self-contained, that's purely random.
And because that's not possible, formal treatments don't attempt this. Hence, the implicit assumption that chance cannot be defined.
I'm not disputing that it works. But to get it to work, pure chance is required as the final 'decision maker'.
If this is anything but pure chance, QM is invalid; It relies on this assumption, to make predictions at all.
Basically, I'm just saying that QM works as math. Any interpretation always ends up trying to understand/define pure chance.
Therefore all interpretations of QM are nonsense, unreasonable. This is why Feynman took the 'shut up and calculate' approach : That, at least, works.
But to continue with this theory philosophically is nonsensical. You're not going to get anything any more once you reach the undefinable.
I'm also not disputing that everything turns out to be ultimately undefinable, therefore base, elementary, assumptions are required. Pure chance is one of them,
a base concept that cannot be broken up in simpler elements. It's just that, once you reach that, you can't reason any further, unless you maintain
that it's not purely random, which QM cannot do. Bayesians can do that, not Quantum Mechanics.
Practically, sure. If you stick to measurement and math, and stay the hell away from 'collapses of wave-functions', 'many worlds' etc.
Philosophically, it cannot be anything else than nonsense.
In my opinion, this has always been the big problem with it.
11. Nov 16, 2015
### bhobba
A program is deterministic. By definition random behaviour isn't. So you can't do it - obviously.
But interestingly there are pseudo random number generators that pass even the most sophisticated tests we have for randomness - but it is an evolving area as the tests get more sophisticated.
Its impossible, utterly impossible, to tell pseudo random behaviour from truly random behaviour.
Its also irrelevant to QM.
That's false.
Thanks
Bill
12. Nov 16, 2015
### Demystifier
The Adversary, if I understood you correctly, you are effectively saying the following:
Pure chance is either true on not true.
If pure chance is not true, then we should try to find out what is true.
If pure chance is true, then we cannot say anything more about that, in which case we should stop talking about it.
Am I correct?
13. Nov 16, 2015
I'm basically saying that when it comes to QM, we should shut up and calculate. I'd advise against any philosophical interpretation because that's
always going to be invalid, for the reasons I've been arguing.
It's a matter of whether or not you care about the philosophical void. It's probably why Feynman hated philosophy :)
14. Nov 16, 2015
### Demystifier
I thought you are arguing only against pure-chance interpretations. How can your arguments be used against interpretations which do not assume pure chance?
15. Nov 16, 2015
### haushofer
If Bell would have done just this he would never have found his inequalities. Physics is more than just bookkeeping, imho.
I have a feeling that it could well be that quantum gravity is not well understood because we don't understand the underlying principles of quantum mechanics, but that's just a gut feeling. In any case, i think the shut-up attitude is not very scientific, unless you see physics merely as a device to reproduce experimental results.
16. Nov 16, 2015
### gill1109
17. Nov 16, 2015
Bell : 'No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.'
In other words, pure chance cannot be defined. 'local hidden variables' is an attempt to define pure chance, which is impossible, as the theorem states.
'How can your arguments be used against interpretations which do not assume pure chance?'
QM has to assume pure chance. Remember the uncertainty principle? If that's false, so is the entirety of QM!
18. Nov 16, 2015
### Heinera
No, local hidden variables is not an attempt to define pure chance. In fact, most of the local hidden variable models that have been proposed use "pure chance", i.e. randomness, at the source.
19. Nov 16, 2015
### gill1109
Looking for local hidden variables is not, IMHO, an attempt to define pure chance. It's an attempt to avoid deciding whether or not "pure chance" is a fundamental physical feature of the universe.
In other words, if we could "explain" quantum mechanics through (preferably local) hidden variables theory, we wouldn't need to worry about whether or not pure chance exists.
20. Nov 16, 2015
### bhobba
Its got nothing to do with it.
Because they have nothing to do with it.
It simply has to assume the Kolmogorov axioms. How you interpret it is irrelevant ie if you assume the events defined in those axioms are random or psuedo random the axioms do not care.
Thanks
Bill
|
2017-08-21 10:59:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7168298959732056, "perplexity": 879.699911671111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886108264.79/warc/CC-MAIN-20170821095257-20170821115257-00462.warc.gz"}
|
https://www.physicsforums.com/threads/triangle-approximation-derivation.469699/
|
# Triangle Approximation Derivation
## Homework Statement
Here is a drawing with all the needed variables:
http://i.imgur.com/192GI.jpg
## The Attempt at a Solution
I have been trying to figure out how this approximation is derived for some time now and have no progress to show for it. Any help in figuring out the steps would be greatly appreciated.
## Answers and Replies
Related Calculus and Beyond Homework Help News on Phys.org
The law of cosines says that
b^2 - a^2 = c^2 - 2ac * cos(B),
so,
b - a = -2ac * cos(B) / (b+a) + c^2 / (b+a).
If one says that a ~ b because c << a, then the above becomes
b - a ~ -c * cos(B) + c^2 / (2a),
which is close but does not match and is way off for B approaching zero degrees.
The law of cosines says that
b^2 - a^2 = c^2 - 2ac * cos(B),
so,
b - a = -2ac * cos(B) / (b+a) + c^2 / (b+a).
If one says that a ~ b because c << a, then the above becomes
b - a ~ -c * cos(B) + c^2 / (2a),
which is close but does not match and is way off for B approaching zero degrees.
As I read the figure and understand your notation, B = $$\pi$$ - ($$\theta$$ + $$\gamma$$), thus what you got equals what's on the figure.
As I read the figure and understand your notation, B = $$\pi$$ - ($$\theta$$ + $$\gamma$$), thus what you got equals what's on the figure.
Almost but not quite. I cannot figure out where the sin^2(B) factor comes from.
Are you familiar with Taylor series? I could get the form they have given using a Taylor series approximation.
|
2020-05-31 14:42:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7091184854507446, "perplexity": 944.2531061972545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413406.70/warc/CC-MAIN-20200531120339-20200531150339-00396.warc.gz"}
|
https://proofwiki.org/wiki/Continuous_Strictly_Midpoint-Concave_Function_is_Strictly_Concave
|
# Continuous Strictly Midpoint-Concave Function is Strictly Concave
## Theorem
Let $f$ be a real function which is defined on a real interval $I$.
Let $f$ be strictly midpoint-concave and continuous on $I$.
Then $f$ is strictly concave.
|
2022-01-20 11:07:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9973703026771545, "perplexity": 334.7138523648969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301737.47/warc/CC-MAIN-20220120100127-20220120130127-00150.warc.gz"}
|
https://meridian.allenpress.com/radiation-research/article-abstract/50/3/504/45285/The-Relative-Effectiveness-of-Fission-Neutrons-for
|
Miniature pigs were bilaterally irradiated either in a neutron field (incident neutron to gamma ratio of 5) or in a gamma ray field (incident gamma to neutron ratio of 15) from the AFRRI-TRIGA reactor. For both fields the dose rate at the midline of the pigs was 250 rads/minute and uniform (class A) irradiations to the gastrointestinal (GI) tract were achieved. Midline tissue doses from the neutron field ranged from 360 to 1970 rads; survival times were from 3.6 to 11 days. Midline tissue doses from the gamma ray field ranged from 605 to 2630 rads and survival times from 4.4 to 11.1 days. Median lethal doses for GI death <tex-math>$({\rm LD}_{50(7.5)})$</tex-math> were calculated to be 430 and 870 rads for the neutron and gamma ray fields, respectively. Relative effectiveness of the neutrons was 2.0. The ratio of the doses corresponding to 7.5 days mean survival time, calculated by the least-squares method, was used as an additional estimate of the neutron relative effectiveness. Results were in agreement with those of the <tex-math>${\rm LD}_{50(7.5)}$</tex-math> method. Similar determinations for miniature pigs irradiated in the gamma ray field at 125 rads/minute showed no gamma ray dose rate dependence between 250 and 125 rads/minute. The relative segmental radiosensitivity of the GI tract and postirradiation bacterial proliferation in the miniature pigs are discussed.
This content is only available as a PDF.
|
2021-03-02 05:15:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7151246666908264, "perplexity": 3229.4023406005313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363217.42/warc/CC-MAIN-20210302034236-20210302064236-00433.warc.gz"}
|
https://www.eathyreading.website/2022/07/quadratic-equation-solver-and-calculator.html
|
# QUADRATIC EQUATION SOLVER AND CALCULATOR
A quadratic equation is usually in the form:
$$ax^2+bx+c$$
Result:
### What is a quadratic equation?
A quadratic equation is a second-degree polynomial equation.
A quadratic equation typically takes the following form:
$$ax^2+bx+c=0$$
In the above, $a$ is the coefficient of $x^2$.
Keep in mind that $a$ cannot be 0 because if a is 0, ax² would not exist, so the equation becomes a linear equation.
For emphasis, $a≠0$
b is the coefficient of x. b can be equal to 0, that is:
$b=0$
However, the quadratic equation will have a complex root if b equals zero.
c is the constant of the quadratic equation. One of the roots of the quadratic equation would be zero if c is equal to 0.
### What are roots of quadratic equations?
The roots of a quadratic equation are the two values of x that are obtained when solving c equations.
The roots of a quadratic equation are the real solution to the quadratic equations.
Because quadratic roots satisfy the equation, they are sometimes referred to as the solution of the quadratic equation.
As you will know in subsequent headings, a quadratic equation may have real and complex roots.
### How do you solve a quadratic equation?
There are three ways to solve a quadratic equation: by factorization, completing the square method, and by formula.
You must obtain two roots when solving quadratic equations, regardless of the method you use.
### What is a real quadratic root?
A real quadratic root is the root of a quadratic equation that is a real number.
A real number includes any positive number, negative number, zero and irrational numbers.
Therefore, a quadratic equation has a quadratic root if its root is a positive number, negative number, or irrational number.
### What is a complex quadratic root?
A quadratic equation has a complex root if the root(s) of the quadratic equation has an imaginary number.
A complex quadratic root is usually in the form:
$$a±ib$$
where a is a real number and ib is an imaginary number.
Generally, a quadratic equation will have a complex root if its discriminant is lesser than zero.
### What is the discriminant of a quadratic equation?
The discriminant of a quadratic equation is the part of the quadratic formula under the square root.
Recalled that the formula for solving a quadratic equation is :
$$x=\frac{-b±\sqrt{b^2-4ac}}{2a}$$
Under the square root is $b2-4ac$. Therefore, it follows that the discriminant. of a quadratic equation is $b2-4ac$.
Help us grow our readership by sharing this post
|
2022-12-09 22:14:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7941040992736816, "perplexity": 348.54240787739025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711552.8/warc/CC-MAIN-20221209213503-20221210003503-00196.warc.gz"}
|
https://study.com/academy/answer/suppose-a-ten-year-1-000-bond-with-an-8-1-coupon-rate-and-semi-annual-coupons-is-trading-for-1-034-32-a-what-is-the-bond-s-yield-to-maturity-expressed-as-an-apr-with-semi-annual-compounding.html
|
# Suppose a ten-year, $1,000 bond with an 8.1% coupon rate and semi annual coupons is trading for... ## Question: Suppose a ten-year,$1,000 bond with an 8.1% coupon rate and semi annual coupons is trading for \$1,034.32.
a. What is the bond's yield to maturity (expressed as an APR with semi annual compounding) ?
b. If the bond's yield to maturity changes to 9.5% APR, what will be the bond's price?
## YTM on Semiannual Coupon Bonds:
Most bonds offer semiannual coupon payments. The yield to maturity (YTM) on such bonds is the annualized yield. When annualizing the yield, the convention is to convert the semiannual yield to an effective annual rate.
a.
Let
• P be the bond price,
• r be the semiannual interest rate,
• n be the number of periods of the bond = 10 * 2 = 20.
• Coupon = semiannual coupon =...
Become a Study.com member to unlock this answer! Create your account
|
2019-07-20 19:43:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23375259339809418, "perplexity": 6565.703136768447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526670.1/warc/CC-MAIN-20190720194009-20190720220009-00477.warc.gz"}
|
http://dwow.auus.pw/colorimetric-determination-of-an-equilibrium-constant.html
|
0016 mmol of Fe and 0. Determination of appropriate indicators for various acid-base. Mar 19: Spring Break - NO CLASS : NO. A state of chemical equilibrium exists when the rate of the forward reaction is equal to the rate of the reverse reaction. Equilibrium moles of Fe = 0. Measure the absorbance of the 5 equilibrium solutions at wavelength, = 447 nm Determine the equilibrium [FeSCN 2+] from the calibration curve from Part A. A system is at equilibrium when the macroscopic variables describing it are constant with time. Environmental Protection Agency Duluth, Minnesota 55804. 00100 M KSCN are added to 4. 200 M Fe(NO[subscript 3])[subscript 3], and for the equilibrium solutions, 0. Experiment 6: Determination of the Equilibrium Constant for Iron Thiocyanate Complex The data for this lab will be taken as a class to get one data set for the entire class. For an individual weak acid or weak base component, see Buffering agent. Chemical Equilibrium: Le Ch®telier's Principles. The Spectrophotometer Substances are colored when they absorb a particular wavelength of light in the visible region and transmit the other wavelengths. The ideal concentration range for iron determination lies between 1 and 8 mg L-1. On this page you can read or download colorimetric determination of an equilibrium constant in aqueous solution pre lab answers in PDF format. The well-known colorimetric determination of the equilibrium constant of the iron(III)−thiocyanate complex is simplified by preparing solutions in a cuvette. Living things need phosphorus. Colorimetric Determination of an Equilibrium Constant in Aqueous Solution23. Armstrong Analytical Chemistry 1947 19 (2), 100-102. In many uses, a mixture of ions is exchanged onto the column. Ion-Exchange Separation and Spectrophotometric Determination of Nickel and Cobalt The ion-exchange method of separation is discussed in sections 16-1 and 16-2 of Quantiative Chemical Analysis 6th ed. The solution was then shaken for 10 minutes at room temperature to attain equilibrium state. 3-1 Experiment 3 Measurement of an Equilibrium Constant Introduction: Most chemical reactions (e. Safety Notes. Determination of Molar Absorptivity Constant, (, for Fe(Ph)32+: Use the molar concentration of the standard Fe solution and its absorbance at (max to calculate the molar absorptivity constant, (, for Fe(Ph)32+. * The take home lab is due in the laboratory class on the due date. The intermediate pH selected was pH = 7. In Part 1 of the procedure, you will be preparing 5 beakers containing differing amounts of Fe3+ and SCN-. A system is at equilibrium when the macroscopic variables describing it are constant with time. Barreto*, Terry A. The intensity of the blue color is greatly increased by the addition of aqueous NH3, which converts the hydrated copper(II) ions to ammonia (or. The measurement of the equilibrium-exchange rate constants was achieved by selective inversion of a distinctive signal from. Eugene, Jr. Titration Curves of Polyprotic Acids 27. Determination of an Equilibrium Constant for the Iron (III) thiocynate Reaction 3 Once your calibration curve has been prepared you will be able to prepare a series of equilibrium mixtures and determine the equilibrium constants for each trial, using your calibration graph to. Introduction. 11 Page(s). spectrophotometric determination of the equilibrium constant of a reaction / experiment 5 s. Referring to Lab Experiment 22: Colorimetric Determination of an Equilibrium Constant in Aqueous Solution, which chemical reaction for Fe3+ and SCN is correct A) Fe3+(aq) + SCN-(aq) ----><---- FeSCN2+ (aq). In Part 1 of the procedure, you will be preparing 5 beakers containing differing amounts of Fe3+ and SCN-. Problem 32: Determination of an Equilibrium Constant Equilibrium constant is an important property of a chemical reaction. A wide range of values for K w have been calculated using Quantum Cluster Equilibrium (QCE) theory with a variety of ab initio and density functional methods. ) To determine the equilibrium constant for the reaction of iron (III) and thiocyanate to form the thiocyanatoiron(iii) complex ion using spectrophotometric data. Since absorbance does not carry any units, the units for $$\epsilon$$ must cancel out the units of length and concentration. Three colorimetric methods for the determination of SO/sub 2/ were developed. Equilibrium moles of FeSCN = 10 x 0. Determination of the Dissociation Constant of a Weak Acid. Introduction. Get Your Custom Essay on Determination of Iron by Reaction with Permanganate-A Redox Titration Get custom paper This titration involves the oxidation of Fe2+ ions to Fe3+ ions by the permanganate ion, and is carried out in sulfuric acid solution to prevent the air oxidation of the ferrous ion. A colorimetric method was developed for the determination of protolysis constants in aprotic solvents of low dielectric constant without the use of special buffers. 00 standard buffer solution. axis should have a range of 0. Graphical Techniques for Equilibrium Calculations. 500 M NaOH; 1. While it is possible to use 2D methods to determine rates, accurate determination requires a priori knowledge of optimum mixing times and a lengthy experiment, and so, we elected to use the alternative 1D EXSY approach. Quantitative determination of the residual salicylic acid by spectrophotometry Preparation of the calibration curve A series of standard complex solutions will be prepared, and their absorbance measured at 540 nm in a spectrophotometer. K f can be calculated through an experimental determination of the [FeNCS+2] eq using a standard curve (week 2) and deduction of the [Fe+3] eq and [NCS –] eq. In many uses, a mixture of ions is exchanged onto the column. Den här utgåvan av Laboratory Experiments for Chemistry: Pearson New International Edition är slutsåld. Journal of Chemical Education, 2011, 88, p. 0016 mmol of SCN reacted in the experiment. - Colorimetric Determination for the Composition and Equilibrium Constant For the Formation of a Metal Complex Ion Abstract The object of this experiment was to determine properties for the formation of a metal complex ion, ferrothiocyanate by observing its colorimetric characteristics. Spectrophotometric Determination of Iron Purpose To become familiar with the principles of calorimetric analysis and to determine the iron content of an unknown sample. Despite the evidence that results given by the dynamic method are high, this technique is preferable for routine determination of free base in GB for the following reasons:. 4-nitro-1,2-diaminobenzene is useful in that no blank correction is needed, but with a loss of sensitivity compared to pararosaniline. Rate Laws from Rate Versus Concentration Data (Differential Rate Laws) A differential rate law is an equation of the form. Absorptiometric study of ammonium aurintricarboxylate as a and an equilibrium constant of 10 9. • use Beer's Law to measure the equilibrium concentration of a complex ion. Colorimetric 22 Pre-lab Questions Determination of an Equilibrium Constant in Aqueous Solution Before beginning this experiment in the laboratory, you should be able to answer the following questions. Equilibrium Calculations General Approaches. The path length is measured in centimeters. It indicates the direction of a reaction. The equilibrium constant for the monosubstitution reaction, β 1, can be calculated using equation (2). Finally, we were able to use DCC to determine the equilibrium constant of the iron(III)-thiocyanate equilibrium. Print Colorimetric Determination of an Equilibrium Constant in an Aqueous Solution flashcards and study them anytime, anywhere. At equilibrium, the reactants turn into product and the products decompose into reactants at the same rate. Samantha Davis Determination of Equilibrium Constant Abstract: The objective of this experiment was to first determine the equilibrium constant through finding absorbance of an Iron thiocyanate by following a procedure given in a two part dilution experiment. Titration Curves of Polyprotic Acids. Determination of an Equilibrium Constant. Hydrolysis of Salts and pH of Buffer Solutions. Experiment #9: Potentiometric Determination of an Equilibrium Constant : April 4 : Chapter 18: Electrochemistry: Experiment #10: Activity of Metals (OnLine Web lab) (In the DCI and Computer Lab Activities Workbook page 283. 0106 mmol Concentration of all species at equilibrium:. Characteristics: [species]= mol L-1 [solvent]= 1 K is dimensionless a) In the REVERSE DIRECTION (from right to left) the constant is K´ K´= 1/K c C + d D a A + b B direct reverse. Ferguson, Ph. Superscript h (h) indicates lab is or will be available as a hybrid. Equilibrium Constant Post Lab Answers. Experiment 6 Colorimetric Determination of [Co+2] 32 Experiment 7 Determination of the Equilibrium Constant 36 for the Formation of FeSCN+2 Experiment 8 Solubility, K sp, Common Ion Effect and ΔH soln 41 of Potassium Hydrogen Phthalate Experiment 9 pH Measurements and Determination of the 48 Equivalent Mass and K a. 2/22 Determination of a Rate Law 79 - 88 2/29 Colorimetric Determination of an Equilibrium Constant in Aqueous Solution 89 - 104 3/07 Chemical Equilibrium: Le Châtelier's Principle 105 - 116 3/14-20 €€€€€€€€ SPRING BREAK € 3/21 Acid Properties, pH, and Indicators Hydrolysis of Salts and pH of Buffer Solutions 117. The method for determination of thorium is particularly efficient for materials with a low thorium content, using extraction by a mixture of ethyl acetate and acetone as separation method and the Thomason's colorimetric method for final determination. Chemical Equilibrium: Le ChAtelier's Principles 24. On this page you can read or download colorimetric determination of an equilibrium constant in aqueous solution pre lab answers in PDF format. 2 Procedure. Dubetz, David W. Another way is to prepare iron standards and to generate a standard curve of absorbance versus iron concentration. Tolkien FILE ID e85415 Ebook Digtal Media Library tappi standard t 532 study colorimetric determination of an equilibrium constant in an aqueous solution flashcards play games take quizzes print and more with easy notecards the well known. A COLORIMETRIC METHOD FOR THE DETERMINATION OF THE pH OF CEREBROSPINAL FLUID. 00 241 242 Experiment 22 - Colorimetric Determination of an Equilibrium Constant in Aqueous Solution Meter Wavelength control Wavelength scale Sample holder Light control Zero. Chapter 14: Chemical Equilibrium (The Extent of Chemical Reactions) Exam #2, Wednesday, March 2, 2016 at 5:30 p. Techtutor - assignment and tutoring help: physics homework. A highly simple, rapid, sensitive and selective method is developed for spectrophotometric determination of gabapentin in pure form as well as in pharmaceutical formulations. Titration Curves of Polyprotic Acids. In each beaker, there is an extreme excess of Fe3+ which forces the equilibrium far enough to the right that the [SCN-] can be assumed to be near zero and the [Fe3+] as remained essentially unchanged. 27 Colorimetric Determination of an Equil. The SCN– here is the limiting reactant. - 6:20 pm : Experiment #6: Colorimetric Determination of Phosphate Ion in Water: Mar 7 : Chapter 15: Acids and Base : Experiment #7: Spectrophotometric Determination of an Equilibrium Constant. The method is based on the formation of a yellow Schiff base derived from the condensation of gabapentin drug (1-amino methyl. constant for the reaction; in this case, the formation constant K f, which is shown below. Equilibrium constant values are usually estimated initially by reference to data sources. If you don't see any interesting for you, use our search form on bottom ↓. Here, known data of K, K 1 and K 2 derived from the following equations (10)-(12) are examples of values which can be used as the above-described equilibrium constant K of hydrofluoric acid and the equilibrium constants K 1 and K 2 of iron fluoride complexes (cf. Set these aside so you don't use them by mistake. 33 x 103 L/ mol, respectively. Chemical Equilibrium: Le Ch®telier’s Principles. For the calibration plot, 0. - Colorimetric Determination for the Composition and Equilibrium Constant For the Formation of a Metal Complex Ion Abstract The object of this experiment was to determine properties for the formation of a metal complex ion, ferrothiocyanate by observing its colorimetric characteristics. Part II — Determination of the Equilibrium Constant 1. C° T278 represents the c(DIC) in equilibrium with the preindustrial atmospheric CO 2 molar fraction of 278 ppm and was calculated with the proper dissociation constants 28,29,30. You should now understand the relationship defined by the Beer-Lambert law, how to determine concentration from absorbance using a spectrophotometer, and how to calculate an equilibrium constant using equilibrium concentrations. Colorimetric 22 Pre-lab Questions Determination of an Equilibrium Constant in Aqueous Solution Before beginning this experiment in the laboratory, you should be able to answer the following questions. Atomic Emission Spectra of Gases: Evidence of Quantum Structure. Mixtures of Acids and Base Calculations for pH Determination. 0 mL volumetric flask. For a reaction of the type aA + bB ↔ cC + dD, the equilibrium constant, K eq, is given. 00200 M Fe. Colorimetric Determination of an Equilibrium Constant in Aqueous Solution ; Chemical Equilibrium: LeChâtelier's Principle ; Hydrolysis of Salts and pH of Buffer Solutions ; Determination of the Dissociation Constant of a Weak Acid ; Titration Curves of Polyprotic Acids ; Determination of the Solubility-Product Constant for a Sparingly Soluble Salt. For the calibration plot, 0. Acid dissociation constant is an important chemical characteristic of organic and inorganic compounds and it affects both chemical properties and biological activities of the molecules. However, the biggest concern is still focused on color intensity evaluation because this kind of measurements requires reliable photosensitive devices. This condition is expressed in the equilibrium constant Kc for the reaction. As the pH of a solution containing the indicator changes. Harris (and see attached). between titrant and polyanionic analytes. Experiment 8: CHEMICAL EQULIBRIUM: PART II: DETERMINATION OF AN EQUILIBRIUM CONSTANT 77 Purpose: The equilibrium constant for the formation of iron(III) thiocyanate complex ion is to be determined. The greater the value of K a, the stronger the acid, and the greater the amount of dissociation. SCH 304: Coordination Chemistry : 45: View Description: Coordination Chemistry Description. Gravimetric Analysis of a Chloride Salt Gravimetric Determination of Phosphorus in Plant Food Paper Chromatography: Separation of Cations and Dyes Molecular Geometries of Covalent Molecules: Lewis Structures and the VSEPR model Atomic Spectra and Atomic Structure Behavior of Gases: Molar Mass of a Vapor Determination of R The Gas-Law Constant. Metathesis Reactions and Net Ionic Equations Colorimetric Determination of an Equilibrium Constant in Aqueous Solution Chemical Equilibrium: LeChacirc;telierrsquo;s Principle Hydrolysis of Salts and pH of Buffer Solutions: Determination of the Dissociation Constant of a Weak Acid Titration Curves of Polyprotic Acids. Spectrophotometric determination of acidity constants of salicylaldoxime in aqueous solution at 25 °C and ionic strength of 0. Note: You will no longer need the 1. model, where the logarithms of the equilibrium constants for CuII-chloride formation reactions were fitted to the data using a non-linear least-squares approach. THE FORMATION CONSTANT FOR A COMPLEX ION BY COLORIMETRY INSTRUCTOR RESOURCES The CCLI Initiative Learning Objectives The objectives of this experiment are to. 0 milliliters of HCl, and this time, instead of using sodium hydroxide, we're going to use barium hydroxide, and it takes 27. Students will: Prepare and test standard solutions of FeSCN2+ in equilibrium. CHEM 130 Lab There is a host of information available at chemlab. KINETICS AND PREDICTIVE MODELING OF PATULIN DEGRADATION BY under equilibrium and non-equilibrium Ozone concentration determination by indigo colorimetric. 8 mM was fitted to Eq. The concentration of the products at equilibrium will be measure by observing the absorption of blue light. 50 mL increments of 0. Different experimental methods have been developed for such investigations, each usually tailored to a particular characteristic of the reaction type. Test solutions of SCN– of unknown molar concentration. The Spectrophotometer Substances are colored when they absorb a particular wavelength of light in the visible region and transmit the other wavelengths. The measure or matching of colour may frequently furnish a convenient method of quantitative determination of materials which are themselves coloured or can be made to form coloured compounds by suitable chemical reactions. As for interfering ions, only NO 2-, S 2 O 3 2-, C 2 O 4 2-, HPO 4 2-, H 2 PO 4-, Co 2+ and Cu 2+ showed some influence on iron determination through the proposed method. 270 Experiment 11: Determination of Appropriate Indicators for Various Acid-Base Titrations and. The Michaelis constant (or equilibrium constant of the dissociation of the E -S complex) is therefore equal to the substrate concentration for which velocity is half the maximum velocity (see fig. Determination of the Dissociation Constant of a Weak. Determine the molar concentrations of the ions present in an equilibrium system. 00200 M KSCN are added to 4. - Colorimetric Determination for the Composition and Equilibrium Constant For the Formation of a Metal Complex Ion Abstract The object of this experiment was to determine properties for the formation of a metal complex ion, ferrothiocyanate by observing its colorimetric characteristics. ) To determine the concentration of an unknown by evaluating the relationship. The concentration of the products at equilibrium will be measure by observing the absorption of blue light. There is an equilibrium between the concentration of reactants and products. The accuracy of the assayed method was confirmed by recovery experiments in spiked samples. Kp is called the. On this page you can read or download colorimetric determination of an equilibrium constant in aqueous solution pre lab answers in PDF format. Determination of an Equilibrium Constant Introduction When chemical substances react, the reaction typically does not go to completion. Test solutions of SCN− of unknown molar concentration. (2) The equilibrium is pH dependant, and as pH is raised (the solution becomes more alkaline), the higher complexes with more than one sulfosalicylic acid ligand become stabilised at the expense of the lower complexes and the free iron (iii) ion. SCH 304: Coordination Chemistry : 45: View Description: Coordination Chemistry Description. I am stuck in this step. not change with time. We will use this method in our iron determination. asked by Anonymous on October 29, 2016; Chemistry(Urgent, please check) I completed a lab to find the determination of Kc. 23‡43 Procedure for determination of dissociation constants The hydrazones in basic media behaved as weak acids [24], hence their. Convert your values of K1 and K2 to values based on molar concentrations. Titration Curves of Polyprotic Acids. So when we write our Ka expression, our equilibrium expression, Ka is equal to concentration of products over reactants. A student carries out an experiment to determine the equilibrium constant for a reaction by colorimetric (spectrophotometric) analysis. Hydrolysis of Salts and pH of Buffer Solutions. ) To determine the concentration of an unknown by evaluating the relationship. of the weak Since weak acid acids involved. Nelson, Theodore E. The equilibrium constants describing these equations and the obtained values are shown in Table 2. Nelson and Kenneth C. cancer, and the dependence of their behaviors on their pKa values. Furthermore, using these reliable kinetic constants, and assuming unchanged the total free cysteine level (CysSH + CysS‐SCys), it is easy to predict that, in serum, a 10% increase in oxidized cysteine is enough to cause an increased concentration of HSA ox at equilibrium similar to the one found in nephropathic patients before dialysis (65%. Since P is normally a limiting reagent for life, changes in its spatial distribution can lead to important shifts in populations. Ross Weatherman. 0069 mmol Equilibrium moles SCN = 0. 0, temperature 25° and ionic strength 0. The guidebook ends with discussions on molecular geometry, kinetics, and chemical equilibrium. Colorimetric Determination of an Equilibrium Constant in Aqueous Solution. This ratio is the equilibrium constant, K eq, which is determined by substituting molar concentrations (indicated by the square brackets) into the equilibrium constant equation. Dubetz, David W. , 88(3), pp. Free Online Library: Comparison of pyrocatechol violet and aluminon for the determination of `reactive' aluminium in the presence of organic acids. Determination of an Equilibrium Constant Introduction When chemical substances react, the reaction typically does not go to completion. 4-nitro-1,2-diaminobenzene is useful in that no blank correction is needed, but with a loss of sensitivity compared to pararosaniline. Experiment 10 B: Determination of the Equilibrium Constant, K sp, for a Chemical Reaction. For the calibration plot, 0. In this experiment we calculated Absorbance of 6 flask, 0. Two main methods exist for determining the pKa of a compound: potentiometric titration and spectrophoto-metric titration. The measure or matching of colour may frequently furnish a convenient method of quantitative determination of materials which are themselves coloured or can be made to form coloured compounds by suitable chemical reactions. Such a system is said to be in chemical equilibrium. Print Colorimetric Determination of an Equilibrium Constant in an Aqueous Solution flashcards and study them anytime, anywhere. Cu - AgNO3 Reaction 2 days 1st year 10 Determination of the equilibrium constant for a chemical reaction Colorimetric Determination of an Equilibrium Constant 1 day Ch 13 11 Determination of appropriate indicators for various acid-base titrations; pH determination Relative Strength of Acids. , the "generic" A + B→ 2C) are reversible, meaning they have a forward reaction (A + B forming 2C) and a backward reaction (2C. However, the biggest concern is still focused on color intensity evaluation because this kind of measurements requires reliable photosensitive devices. If two reactants are mixed, they will tend to react to form products until a state is. * BY IRVINE McQUARRIE AND A. Titration Curves of Polyprotic Acids. equilibrium expressions, perform equilibrium calculations involving the solubility product constant, predict if precipitates will form upon mixing ionic solutions of various concentrations, identify complex ions, write equilibrium formation and dissociation reactions, calculate equilibrium values for complex ion solutions,. By measuring the concentration of the product produced from five different reactant concentrations, you will determine which equilibrium above occurs for the iron thiocyanate reaction. Recognize the characteristics of chemical equilibrium. Aminopeptidase inhibition is detected via a standard colorimetric assay with an inexpensive commercially available substrate. Colorimetric Determination of an Equilibrium Constant in Aqueous Solution. PubMed:Equilibrium constant of the gamma-sultone and extraction constant of sodium and potassium ions from water into benzene in presence of 18-crown-6 and bromocresol green. A system is at equilibrium when the macroscopic variables describing it are constant with time. Determination of the equilibrium constant for a chemical reaction By determining the number of moles present in an equilibrium mixture (by titration or other quantitative method like using a spectrophotometer), and knowing the K expression, K can be determined. might be approaching equilibrium is the U(IV/VI) redox equilibrium for 6 Finnish ground-water samples from the work of Ahonen et al. ) Apr 11 : Chapter 18: Electrochemistry: Experiment #11: Electrochemical Cells Part II. Play games, take quizzes, print and more with Easy Notecards. axis should have a range of 0. However, the biggest concern is still focused on color intensity evaluation because this kind of measurements requires reliable photosensitive devices. 4x10^-5 for this reaction. Determination of Molar Absorptivity Constant, (, for Fe(Ph)32+: Use the molar concentration of the standard Fe solution and its absorbance at (max to calculate the molar absorptivity constant, (, for Fe(Ph)32+. for colorimetric analysis. Results and Discussion: In order to study absorbance and spectroscopy, we must construct a calibration curve of absorbance with concentration. Determination of Equilibrium Constants. - Colorimetric Determination for the Composition and Equilibrium Constant For the Formation of a Metal Complex Ion Abstract The object of this experiment was to determine properties for the formation of a metal complex ion, ferrothiocyanate by observing its colorimetric characteristics. ; Barlag, R. 8 was AAC as a reagent for the colorimetric determination of. 1 L/mole-cm at 505 nm, what is the concentration of a solution of Co(NO3)2 that willgive an absorbance of 0. Why isthe amount of ammonia added is same to copper sulphate solution in colorimetric estimation? Colorimetric Determination of Copper equilibrium constant K. As described earlier, multiple ligands are immobilized on an SPR sensor for high-throughput testing of a sample. Part II — Determination of the Equilibrium Constant 1. In particular, you should bookmark the Chemical Principles section, which contains the lab manual for the course. Chemical Equilibrium: Le Châtelier’s Principles. Determination of an Equilibrium Constant OUTCOMES After completing this experiment, the student should be able to: use absorbance data to find the concentration of a colored species. 00200 M Fe. the vapour pressure of the pure solvte. Bursten and H. The third method is to use Equation (8) solved for pKa: (17) where pH i is the intermediate value between the acid and alkaline extreme values. of this series of documents, collectively titled “Monitored Natural Attenuation of Inorganic Contaminants in Ground Water,” is to provide a technical resource for remedial site managers to define and assess the potential for use of site-specific natural processes to play a role in the design of an overall remedial ap-. calculate the equilibrium constant for a reaction. Its pH changes very little when a small amount of strong acid or base is added to it. Colorimetric Determination of an Equilibrium Constant in Aqueous Solution. Determination of the Dissociation Constant of a Weak Acid. While it is possible to use 2D methods to determine rates, accurate determination requires a priori knowledge of optimum mixing times and a lengthy experiment, and so, we elected to use the alternative 1D EXSY approach. Chemical equilibrium. Equilibrium Constant Post Lab Answers. Titration Curves of Polyprotic Acids. ; Barlag, R. Quantitative determination of the residual salicylic acid by spectrophotometry Preparation of the calibration curve A series of standard complex solutions will be prepared, and their absorbance measured at 540 nm in a spectrophotometer. Test solutions of SCN– of unknown molar concentration. Laboratory Goals • Compare different methods of concentration determination • Use titration as a quantitative analysis technique • Use spectrophotometry as a quantitative analysis technique. not change with time. Brown, and Patricia D. To this solution, add 25 mL of deionized water, again using a clean graduated cylinder. The order of elution of ions can be predicted by knowing the equilibrium constants for each of the ions in the sample. Lab 3: Concentration Determination of an Aqueous Solution. 10 mL increments of 0. Here, known data of K, K 1 and K 2 derived from the following equations (10)-(12) are examples of values which can be used as the above-described equilibrium constant K of hydrofluoric acid and the equilibrium constants K 1 and K 2 of iron fluoride complexes (cf. Determination of an Equilibrium Constant - laney. for colorimetric analysis. Colorimetric Determination of an Equilibrium Constant in Aqueous Solution REPORT SHEET EXPERIMENT Colorimetric 22 Determination of an Equilibrium Constant in Aqueous Solut Colorimetric Determination of an Equilibrium Constant. Determining the equilibrium constant from pH values of changing concentrations of ethanoic acid Design Research question: How will altering the concentrations of ethanoic acid affect the pH value, and, in-turn, the equilibrium constant? Background information: When weak acids react, the reaction typically does not go to completion. Good precision of the results was verified by suitable studies. Determination of an Equilibrium Constant. Start studying colorimetric determination of an equilibrium constant in aq solution. Chemical Equilibrium: Le Ch®telier's Principles. Experiment 6: Determination of the Equilibrium Constant for Iron Thiocyanate Complex The data for this lab will be taken as a class to get one data set for the entire class. Equilibrium What students are saying As a current student on this bumpy collegiate pathway, I stumbled upon Course Hero, where I can find study resources for nearly all my courses, get online help from tutors 24/7, and even share my old projects, papers, and lecture notes with other students. Search the history of over 384 billion web pages on the Internet. 500 M NaOH; 1. Chemical Equilibrium: Le Châtelier’s Principles. Determination of an Equilibrium Constant, Keq Equilibrium Equilibrium Constant Data Collection and Calculation Beer’s Law Calibration Curve/ Working curve. 0 mL volumetric flask. 5–DETERMINATION OF AN EQUILIBRIUM CONSTANT LABORATORY OBJECTIVES AND ASSESSMENTS 1. The present invention has the object of creating an improved method based on the known colorimetric cyanide-picric acid color reaction which permits the determination of the cyanide concentration of aqueous solutions in a range of approximately 0. FULL TEXT Abstract: The effects of light-limitation stress were investigated in natural stands of the seagrasses Zostera marina and Cymodocea nodosa in Ria Formosa. Colorimetric Determination of the Iron(III)−Thiocyanate Reaction Equilibrium Constant with Calibration and Equilibrium Solutions Prepared in a Cuvette by Sequential Additions of One Reagent to the Other. Concentrations of solutions must be ranged between 1 × 10-4 and 8 × 10-4 M. 7 by increasing the extractant from 0. Living things need phosphorus. Colorimetric Determination of an Equilibrium Constant in Aqueous Solution 23. 00 standard buffer solution. Colorimetric Determination of an Equilibrium Constant in Aqueous Solution. Atomic Emission Spectra of Gases: Evidence of Quantum Structure. Colorimetric Determination of the Iron(III)-Thiocyanate Reaction Equilibrium Constant with Calibration and Equilibrium Solutions Prepared in a Cuvette by Sequential Additions of One Reagent to the Other. Furthermore, using these reliable kinetic constants, and assuming unchanged the total free cysteine level (CysSH + CysS‐SCys), it is easy to predict that, in serum, a 10% increase in oxidized cysteine is enough to cause an increased concentration of HSA ox at equilibrium similar to the one found in nephropathic patients before dialysis (65%. The present paper reports a facile and selective colorimetric method for the detection of potential environmental and health hazardous metal ions using green synthesized silver nanoparticles (AgNPs). Study Colorimetric Determination of an Equilibrium Constant in an Aqueous Solution flashcards. Equilibrium What students are saying As a current student on this bumpy collegiate pathway, I stumbled upon Course Hero, where I can find study resources for nearly all my courses, get online help from tutors 24/7, and even share my old projects, papers, and lecture notes with other students. The accuracy of the assayed method was confirmed by recovery experiments in spiked samples. 0, and should indicate that these values are "x 10 −5. concentration equilibrium constant. Equilibrium Constant Post Lab Answers. Introduction: In the previous week, we qualitatively investigated how an equilibrium shifts in response to a stress to re-establish equilibrium. Acids & Bases. 10 MHNO3 solution O 2 M Concentration of NaSCN in 0. Problem Posted I am doing a colorimetric determination of an equilibrium constant in aqueous solution. Colorimetric Determination of the Iron(III)-Thiocyanate Reaction Equilibrium Constant with Calibration and Equilibrium Solutions Prepared in a Cuvette by Sequential Additions of One Reagent to the Other. As for interfering ions, only NO 2-, S 2 O 3 2-, C 2 O 4 2-, HPO 4 2-, H 2 PO 4-, Co 2+ and Cu 2+ showed some influence on iron determination through the proposed method. 00 M KSCN solutions used in Part I. SCH 304: Coordination Chemistry : 45: View Description: Coordination Chemistry Description. The greater the value of K a, the stronger the acid, and the greater the amount of dissociation. monovalent activity coefficient, and Fn1 to Fn6 are functions of pHx and equilibrium constants for the carbonate, acetate, phosphate, sulfide, sulfate and ammonium subsystems. Determination of an Equilibrium Constant. Constant 11-18 19-29. nauki, XXXII, 1-2 (2011), str. 8 can be used to approximate the Henry’s law constant for NH 3 (g) specifically at variable temperature (K) (Renard, Calidonna et al. Dubetz, David W. Experiments and illustrations of chemical reactions are presented. The intensity of the coloured species is measured using a Spectronic 301 spectrophotometer. Equilibrium Constant Determination INTRODUCTION Every chemical reaction has a characteristic condition of equilibrium at a given temperature. On this page you can read or download colorimetric determination of an equilibrium constant in aqueous solution pre lab answers in PDF format. Since absorbance does not carry any units, the units for $$\epsilon$$ must cancel out the units of length and concentration. Colorimetric Determination of an Equilibrium Constant in Aqueous Solution REPORT SHEET EXPERIMENT Colorimetric 22 Determination of an Equilibrium Constant in Aqueous Solut Colorimetric Determination of an Equilibrium Constant. Introduction A. In order to determine a rate law we need to find the values of the exponents n, m, and p, and the value of the rate constant, k. Titration Curves of Polyprotic Acids. DISCUSSION. Methods described in the literature for the determination of zirconium are generally designed for relatively large amounts of this element. Test solutions of SCN– of unknown molar concentration. The binding of bestatin follows first order binding kinetics with a rate constant k on of 59 ± 5 M −1 s −1. Chem Lab 113L Final. An internship for research and development in the area of Mathematics or pertaining to subjects of Mathematics and/or Chemistry at a university or college in the state of North Carolina. 50 mL increments of 0. Prelab: In each beaker, there is an extreme excess of Fe3+ which forces the equilibrium far enough to the right that the [SCN–] can be assumed to be near zero and the [Fe3+] as remained essentially unchanged. Boiling an acidified sample is suggested for sulfide removal by the APHA, AWWA, WEF (1999) Manual. In the course of an investigation on the ionization of calcium it became evident that test-tube experiments, involving a simple. By measuring the concentration of the product produced from five different reactant concentrations, you will determine which equilibrium above occurs for the iron thiocyanate reaction. Determination of concentration by acid-base titration, including a weak acid or weak base 4. 10 mL increments of 0. 25 x 10–4 M Fe(NO 3) 3 or the 1. To determine the three equilibrium constants of K 1, K 2 and K 3, the experimental data for the sample solutions with protein concentration higher than 2. From the value of the reaction quotient, Q, determine whether a reaction is at equilibrium, and if not, which direction the reaction will initially. La Biblioteca Virtual en Salud es una colección de fuentes de información científica y técnica en salud organizada y almacenada en formato electrónico en la Región de América Latina y el Caribe, accesible de forma universal en Internet de modo compatible con las bases internacionales. Meyer and C.
|
2020-01-21 07:47:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7156868577003479, "perplexity": 2828.426060162555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601628.36/warc/CC-MAIN-20200121074002-20200121103002-00207.warc.gz"}
|
https://www.zigya.com/study/book?class=12&board=mbse&subject=Physics&book=Physics+Part+II&chapter=Nuclei&q_type=&q_topic=Mass-Energy+and+Nuclear+Binding+Energy&q_category=&question_id=PHEN12059519
|
## Book Store
Currently only available for.
`CBSE` `Gujarat Board` `Haryana Board`
## Previous Year Papers
Download the PDF Question Papers Free for off line practice and view the Solutions online.
Currently only available for.
`Class 10` `Class 12`
The radionuclide ${}^{11}\mathrm{C}$ decays according to
The maximum energy of the emitted positron is 0.960 MeV. Given the mass values:
Calculate Q and compare it with the maximum energy of the positron emitted.
For the given reaction, mass defect is,
Now, Q-value is ,
which, is the maximum energy of the positron.
We have,
The daughter nucleus is too heavy compared to e+ and v. So, it carries negligible energy (Ed ≈ 0). If the kinetic energy (Ev) carried by the neutrino is minimum (i.e., zero), the positron carries maximum energy, and this is practically all energy Q.
Hence, maximum E
e≈ Q.
302 Views
The normal activity of living carbon-containing matter is found to be about 15 decays per minute for every gram of carbon. This activity arises from the small proportion of radioactive ${}_{6}{}^{14}\mathrm{C}$${}_{6}{}^{14}\mathrm{C}$ present with the stable carbon istope ${}_{6}{}^{12}\mathrm{C}.$ When the organism is dead, its interaction with the atmosphere (which maintains the above equilibrium activity) ceases and its activity begins to drop. From the known half-life (5730 years) of ${}_{6}{}^{14}\mathrm{C},$ and the measured activity, the age of the specimen can be approximately estimated. This is the principle of ${}_{6}{}^{14}\mathrm{C}$ dating used in archaeology.
Suppose a specimen from Mohenjodaro gives an activity of 9 decays per minute per gram of carbon. Estimate the approximate age of the Indus Valley Civilisation.
Given,
Normal activity, R
0 = 15 decays/min,
Present activity R = 9 decays/min,
Half life, T = 5730 years,
Age t =?
As, activity is proportional to the number of radioactive atoms, therefore,
But,
$\therefore$
$⇒$
$⇒$
But,
Therefore,
is the approximate age.
134 Views
Obtain approximately the ratio of the nuclear radii of the gold isotope ${}_{79}{}^{197}\mathrm{Au}$ and the silver isotope ${}_{47}{}^{107}\mathrm{Ag}.$
Using the relation between the radius of nucleus and atomic mass,
Atomic mass of gold, A1 = 197
Atomic mass of silver, A2 = 107
$\therefore$
Now, taking log on both sides,
$⇒$
$⇒$
$⇒$
= 1.23, which is the required ratio of the nucleii.
220 Views
# A given coin has a mass of 3.0 g. Calculate the nuclear energy that would be required to separate all the neutrons and protons from each other. For simplicity assume that the coin is entirely made of (of mass 62.92960 u).
Given, mass of the coin = 3.0 g
Mass of atom = 62.92960 u
Each atom of the copper contains 29 protons and 34 neutrons.
Mass of 29 electrons = 29 x 0.000548 u
= 0.015892 u
Mass of nucleus = (62.92960 - 0.015892) u
= 62.913708 u
Mass of 29 protons = 29 x 1.007825 u
= 29.226925 u
Mass of 34 neutrons = 34 x 1.008665 u
= 34.29461 u
Total mass of protons and neutrons = (29.226925 + 34.29461) u
= 63.521535 u
Binding energy = (63.521535 - 62.913708) x 931.5 MeV
= 0.607827 x 931.5 MeV
$\therefore$ Required energy =
1508 Views
The half-life of ${}_{38}{}^{90}\mathrm{Sr}$ is 28 years. What is the disintegration rate of 15 mg of this isotope?
Given,
Half- life of ${}_{38}{}^{90}\mathrm{Sr}$ = 28 years.
Using the formula,
$⇒$
90 g of Sr contains 6.023 x 1023 atoms.
$\therefore$ 15 mg of Sr contains,
Disintegration rate,
292 Views
|
2021-04-19 03:59:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 20, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5430036187171936, "perplexity": 4153.001091844772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038863420.65/warc/CC-MAIN-20210419015157-20210419045157-00464.warc.gz"}
|
https://www.ias.ac.in/describe/article/pram/069/06/1165-1169
|
• The E166 experiment: Development of an undulator-based polarized positron source for the international linear collider
• # Fulltext
https://www.ias.ac.in/article/fulltext/pram/069/06/1165-1169
• # Keywords
International linear collider; helical undulator; polarized positron; Compton transmission polarimeter.
• # Abstract
A longitudinal polarized positron beam is foreseen for the international linear collider (ILC). A proof-of-principle experiment has been performed in the final focus test beam at SLAC to demonstrate the production of polarized positrons for implementation at the ILC. The E166 experiment uses a 1 m long helical undulator in a 46.6 GeV electron beam to produce a few MeV photons with a high degree of circular polarization. These photons are then converted in a thin target to generate longitudinally polarized $e^{+}$ and $e^{-}$. The positron polarization is measured using a Compton transmission polarimeter. The data analysis has shown asymmetries in the expected vicinity of $3.4$% and $\sim 1$% for photons and positrons respectively and the expected positron longitudinal polarization is covering a range from $50$% to 90%.
• # Author Affiliations
1. RWTH Aachen, Germany
2. Cornell University, USA
3. CCLRC Daresbury, UK
4. University of Durham, UK
5. DESY, Hamburg, Germany
6. DESY, Zeuthen, Germany
7. Humboldt University, Germany
8. Princeton University, USA
9. SLAC, USA
10. University of Tel Aviv, Israel
11. University of Tennessee, USA
• # Pramana – Journal of Physics
Volume 94, 2020
All articles
Continuous Article Publishing mode
• # Editorial Note on Continuous Article Publication
Posted on July 25, 2019
|
2020-06-05 22:54:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31157588958740234, "perplexity": 9754.221128878986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348504341.78/warc/CC-MAIN-20200605205507-20200605235507-00440.warc.gz"}
|
http://mathhelpforum.com/trigonometry/50650-trigonometric-graph.html
|
# Math Help - trigonometric graph
1. ## trigonometric graph
1. The curve above is the graph of a sinusoidal function. It goes through the point and . Find a sinusoidal function that matches the given graph. If needed, you can enter =3.1416... as 'pi' in your answer, otherwise use at least 3 decimal digits.
2.The curve above is the graph of a sinusoidal function. It goes through the points and . Find a sinusoidal function that matches the given graph. If needed, you can enter =3.1416... as 'pi' in your answer, otherwise use at least 3 decimal digits.
2. 1. looks like a -sine curve
amplitude = 3
period = 6
phase shift = none
vertical shift = +3
$y = -3sin\left(\frac{\pi}{3}x\right) + 3$
2. I'd use a cosine curve for this one ...
amplitude = 4
period = 6
phase shift = left 2
vertical shift = none
you try it.
3. ## changing the sine function
You need to find a solution of the form
A + B sin (Cx - D)
where A shifts the wave up, here A = 3 as the wave moves around +3
B gives the size up and down, if B = 1 we have a size of 2, so we want B = 3 as we have a size of 6
C controls the period, a normal wave is period 2 pi, we want period 6, so C = pi/3
D controls the shift to the right, we actually want the shift to the left so that Cx - D = 0 when x = -3 hence D = -pi
$
y = 3 + 3sin\left(\frac{\pi}{3}x + \pi\right)
$
hope this helps you do the second question on your own.
cheers
Nobby
4. for question 2..i got 4cos(pi/3) but i dont know what to do for phase shift
5. $y = 4\cos\left[\frac{\pi}{3}(x + 2)\right]$
|
2015-03-01 03:15:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7729106545448303, "perplexity": 938.8263455708797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462202.69/warc/CC-MAIN-20150226074102-00006-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://www.vedantu.com/question-answer/protons-neutrons-and-electrons-are-in-cal-class-10-chemistry-cbse-5fd79b95aef1aa270c092592
|
# How many protons, neutrons and electrons are in calcium?
Verified
176.4k+ views
Hint: The atomic number of Ca is 20.
The mass number or atomic mass of Ca is 40.
So in the question it is asked to write the number of protons, electrons and number of neutrons present in the Ca atom. In the lower classes we have studied about the three main subatomic species i.e. protons, electrons and neutrons. Let’s brush up our memory about these subatomic particles and trace the relation between the atomic number and atomic mass with the number of protons, neutrons and electrons.
From the time we study about an atom, we always have heard of protons, electrons and neutrons, that an atom consists of these three subatomic particles and it contributes to the atomic mass of an element and its stability. Now let’s grab some basic idea of these three.
Protons: these are the positively charged subatomic species which is present in the atomic nucleus and they are represented as ${{p}^{+}}$.The protons possess a charge of $+1.602\times {{10}^{-19}}C$ and a mass of$1.67\times {{10}^{-27}}Kg$.
Electrons: these are the negatively charged species present in the atom in contrast to the protons and these are the species which revolve around the nucleus and possess much smaller size than the atomic nucleus. The electron is represented as ${{e}^{-}}$ and they possess a charge of $-1.602\times {{10}^{-19}}C$ and has a mass of about$9.11\times {{10}^{-31}}kg$.
Neutrons: these are the neutral species present in the nucleus of the atom and they do not have charge. Neutron is represented as ${{n}^{0}}$ and has a mass of$1.67\times {{10}^{-27}}Kg$.
Atomic number is the unique number or can be said as the identity of an element. Each element possesses a unique atomic number. Atomic number is the number of electrons or the number of protons present in an atom.
Atomic number is the sum of the number of protons and neutrons present in the atomic nucleus.
Now let’jkss move on to the solution:
Since we know that the atomic number of Ca is 20, it gives the number of protons and electrons present in the atom.
$Number\,of\,protons\,=20$
$Number\,of\,electrons\,=20$
Now we have to find the number of neutrons.
The atomic number of Ca is 40.
$\text{Atomic}\,\text{mass=no}\text{.}\,\text{of}\,{{\text{p}}^{\text{+}}}\text{+no}\text{.}\,\text{of}\,{{\text{n}}^{\text{0}}}$
$\text{no}\text{.}\,\text{of}\,{{\text{n}}^{\text{0}}}\text{=Atomic}\,\text{mass-no}\text{.}\,\text{of}\,{{\text{p}}^{\text{+}}}$
$\text{no}\text{.}\,\text{of}\,{{\text{n}}^{\text{0}}}\text{=40-20=20}$
There number of neutrons is 20.
Hence the number of protons, neutrons and electrons in a Ca atom is 20.
$Number\,of\,protons\,=20$
$Number\,of\,electrons\,=20$
$Number\,of\,neutrons=20$
Note:
There is always a chance of getting confused with the calculation of the atomic mass and the atomic number, many add up the number of protons and electrons for atomic mass. The atomic nucleus accounts for the mass of the element.
$\text{Atomic}\,\text{number=No}\text{.}\,\text{of}\,\text{protons=No}\text{.}\,\text{of}\,\text{electrons}$
The number of protons and electrons will be the same in an atom.
|
2022-08-14 07:02:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37392866611480713, "perplexity": 261.5419504931279}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571996.63/warc/CC-MAIN-20220814052950-20220814082950-00160.warc.gz"}
|
http://stackoverflow.com/questions/97850/version-control-on-a-2gb-usb-drive
|
# Version control on a 2GB USB drive
For my school work, I do a lot of switching computers (from labs to my laptop to the library). I'd kind of like to put this code under some kind of version control. Of course the problem is that I can't always install additional software on the computers I use. Is there any kind of version control system that I can keep on a thumb drive? I have a 2GB drive to put this on, but I can get a bigger one if necessary.
The projects I'm doing aren't especially big FYI.
EDIT: This needs to work under windows.
EDIT II: Bazaar ended up being what I chose. It's even better if you go with TortoiseBzr.
-
You could use Portable Python and Bazaar (Bazaar is a Python app). I like to use Bazaar for my own personal projects because of its extreme simplicity. Plus, it can be portable because Python can be portable. You will just need to install it's dependencies in your Portable Python installation as well.
-
I do this with Git. Simply, create a Git repository of your directory:
git-init
git commit -m "Done"
Insert the stick, cd to directory on it (I have a big ext2 file I mount with -o loop), and do:
git-clone --bare /path/to/my/dir
Then, I take the stick to other computer (home, etc.). I can work directly on stick, or clone once again. Go to some dir on the hard disk and:
git-clone /path/to/stick/repos
When I'm done with changes, I do 'git push' back to stick, and when I'm back at work, I 'git push' once again to move the changes from stick to work computer. Once you set this up, you can use 'git pull' to fetch the changes only (you don't need to clone anymore, just the first time) and 'git push' to push the changes the other way.
The beauty of this is that you can see all the changes with 'git log' and even keep some unrelated work in sync when it changes at both places in the meantime.
If you don't like the command line, you can use graphical tools like gitk and git-gui.
-
I'm using Windows unfortunately. I put that in the tags, but I suppose I should have also put it in the question. – Jason Baker Sep 18 '08 at 23:17
Git works on Windows quite good, even graphical tools like Git-GUI and Gitk work as good as on Linux. I recommend using msysgit port from code.google.com/p/msysgit – Milan Babuškov Feb 17 '09 at 18:25
Thanks Milan for giving brief yet meaningful steps. – Amit Vaghela Oct 28 '10 at 6:04
+1. Concise. I was looking for a confirmation of the steps to setup a repository on a USB drive :) – buffer Jun 7 '11 at 14:11
On the download page there is the portable version: code.google.com/p/msysgit/downloads/list Currently the latest version says "beta" but it is very stable. I am using it a lot with no issues. Can absolutely recommend it. – mit Aug 12 '11 at 2:35
Darcs is great for this purpose.
• I can't vouch for other platforms, but on Windows it's just a single executable file which you could keep on the drive.
• Most importantly, its interactive command line interface is fantastic and very quickly becomes intuitive (I now really miss interactive commits in any VCS which lacks them) - you don't need to memorise many commands as part of your normal workflow either. This is the main reason I use it over git for personal projects.
Setting up:
darcs init
darcs add -r *
darcs record -am "Initial commit"
Creating a repository on your lab machine:
darcs get E:\path\to\repos
Checking what you've changed:
darcs whatsnew # Show all changed hunks of code
darcs whatsnew -ls # List all modified & new files
Interactively creating a new patch from your changes:
darcs record
Interactively pushing patches to the repository on the drive:
darcs push
It's known to be slow for large projects, but I've never had any performance issues with the small to medium personal projects I've used it on.
Since there's no installation required you could even leave out the drive and just grab the darcs binary from the web - if I've forgotten my drive, I pull a copy of the repository I want to work on from the mirror I keep on my webspace, then create and email patches to myself as files:
darcs get http://example.com/repos/forum/
# Make changes and record patches
darcs send -o C:\changes.patch
-
The best answer for you is some sort of DVCS (popular ones being Git, Mercurial, Darcs, Bazaar...). The reason is that you have a full copy of the whole repository on any machine you are using. I haven't used these systems personally, so others will be best at recommending a DVCS with a small footprint and good cross platform compatibility.
-
I'd use git. Git repos are really small and don't require a daemon. You can probably install cygwin or msysgit on your flashdrive.
-
git on windows is to much of a hassle in comparison with Mercurial or Bazaar. I use git when on Linux and Mercurial when on windows. – Valentin Vasilyev Apr 27 '09 at 6:50
@Valentin: I would disagree. Git works just fine on Windows without any installation (i.e. from USB drive). Also note that msys Git recently started to include "PortableGit" build on their Google Code site (code.google.com/p/msysgit). – Milan Gardian May 1 '09 at 13:44
Just to add an extra resource Subversion on a Stick. I've just set this up on my 4GB USB Drive, pretty simple and painless.
Thought I am now very tempted to try Bazaar.
Update: I've setup PortablePython on my USB drive, simple, but getting bazaar on there ... I gave up, one dependency after another, and as I've got svn working.
If anyone knows of an easy portable installer, I'd be greatful.
-
I recommend Fossil http://www.fossil-scm.org/
includes
• command line
• dvcs
• cross platform (and easy to compile)
• 'autosync' command make the essential task of syncing to a backup easy.
• backup server configuration is a doddle.
• easy to learn/use
• very helpful community
• web ui with wiki and bugtracker included.
• 3.5Mb, single executable
• one sqlite database as the repository
-
I use subversion on my thumb drive, the official binaries will work right off the drive. The problem with this trick is you need to access a command line for this to work or be able to run batch files. Of course, I sync the files on my thumb drive to a server that I pay for. You could always host the repository on a desktop (use the file:/// protocol) if you don't want to get hosting space on the web.
-
While the tortise intergration wouldn't work without being installed. You may be able to use soem of the UIs off of a thumbdrive. Havn't tried this myself but it may be easier then SVN command lines. – Matthew Whited Jan 8 '10 at 20:33
I will get lynched for saying this answer, but it works under Windows: RCS.
You simply make an RCS directory in each of the directories with your code. When time comes to check things in, ci -u $FILE. (Binary files also require you to run rcs -i -kb$FILE before the first checkin.)
Inside the RCS directory are a bunch of ,v files, which are compatible with CVS, should you wish to "upgrade" to that one day (and from there to any of the other VCS systems other posters mentioned). :-)
-
You could put the subversion binaries on there - they're only 16ish megs, so you'll have plenty of room for some repositories too. You can use the official binaries from the command line, or point a graphical tool (like TortoiseSVN) to the repository directory. If you're feeling fancy then you could rig the drive to autorun the SVNSERVE application, making any computer into a lightweight subversion server the minute you plug in the drive.
I found some instructions for this process here.
-
Subversion would kinda work. See thread
Personally, I prefer to keep everything on a single machine and Remote Desktop into it.
-
Flash memory and version control doesn't seem like a good idea to my ears. I'm afraid that the memory will wear out pretty soon, especially if you take extensive use of various version control operations that make many small disk operations (merge, reverting to and fro, etc).
At the very least, make sure that you back up the repository as often as humanly possible, in case the drive would fail.
-
You would think so. However since version control happens on human scale (I don't check in many times a second), this isn't an issue in practice. – Brian Carlton Feb 24 '09 at 21:43
I wasn't referring to the performance, but the lifetime and quality of some poor Flash memory. – Henrik Paul Feb 25 '09 at 8:15
This isn't 1985. Newer flash drivers will move data around to get more cycles per macro cell. Also each cell typically has more then 10k cycles. I don't think a simple repository for homework causing that much of a problem. – Matthew Whited Jan 8 '10 at 20:36
I thought it was more like 100k write operations, but if we stick with the 10k, then if you do the above process twice a day, i.e. 2xpull and 2xpush, then you get almost seven years use out of a \$5 stick. – CyberFonic Feb 8 '10 at 7:49
I think H.P.'s warning has value. But version control can be considered scalable. If you have a small-scale app, and use common sense for back-ups as advised, a flash drive sounds great to me. – Smandoli Aug 12 '10 at 13:53
bitnami stack subversion it's easy to install. You can try to install so too xampp with portableapps.com and subversion.
-
I'm using GIT according to Milan Babuškov's answer:
(1) create repository and commit (on office PC)
mkdir /home/yoda/project && cd /home/yoda/project
git init
git commit -m "Done"
(2) insert USB stick and make a clone of the repository
cat /proc/partitions
mount -t ext3 /dev/sdc1 /mnt/usb
git clone --bare /home/yoda/project /mnt/usb/project
(3) take the USB stick home and make a clone of repository at home
cat /proc/partitions
mount -t ext3 /dev/sdc1 /mnt/usb
git clone /mnt/usb/project /home/yoda/project
(4) push commits from home PC back to USB stick
mount -t ext3 /dev/sdc1 /mnt/usb
cd /home/yoda/project
git push
(5) take USB stick to the office and push commits from stick to office PC
mount -t ext3 /dev/sdc1 /mnt/usb
cd /mnt/usb/project
git push
(6) pull commits from office PC to USB stick
mount -t ext3 /dev/sdc1 /mnt/usb
cd /mnt/usb/project
git pull
(7) pull commits from USB stick to home PC
mount -t ext3 /dev/sdc1 /mnt/usb
cd /home/yoda/project
git pull
-
|
2014-12-18 11:36:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35726553201675415, "perplexity": 4566.56673928373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802766267.61/warc/CC-MAIN-20141217075246-00123-ip-10-231-17-201.ec2.internal.warc.gz"}
|
http://library.kiwix.org/ell.stackexchange.com_en_all_2020-11/A/question/62415.html
|
How to inform other people of your real name?
9
I usually use a nickname (e.g. John, and other people know me as John). Lets say I want to inform another person about my real name (the name that is on my ID card), how do I say it?
• My full name is Neram Smith.
• My real name is Neram Smith.
• My legal name is Neram Smith.
• My ID name is Neram Smith.
• My ID card name is Neram Smith.
Are the above correct? Any other better suggestions?
You may use "my name" in contrast with "my nickname" as "my nickname is John but my name is Neram Smith", or "I am called John but my name is Neram Smith" – Ahmad – 2015-07-22T17:17:43.693
1How about "Actual name"? Implies that you might use another (John in this case), which you might not prefer. – Martijn – 2015-07-23T10:27:29.280
1Colloquially you might state "My name is Neram Smith, but I go by John" – Jake – 2015-07-23T14:39:34.427
The name's Bond. James Bond. – James Wirth – 2015-07-23T18:06:56.277
Might want to go ahead and accept one of these answers if they seem to do a good job of explaining things for you. I upvoted four of the other ones, by the way; there's a pretty good selection here. – Nathan Tuggy – 2015-12-18T18:47:53.427
15
TL/DR: Use "full name" in your case. Here's why:
• "Real name" (or "actual name") is a little strange if they know you personally in real life. Does that mean that you've been letting them call you something that's not your real name? Best not to get into that. The only other exception to this would be if they've only recently been introduced to you by that nickname and you're taking an early opportunity to clarify that that's not your only name, which avoids the potential social awkwardness.
as opposed to online friendships — no one is surprised when your screen name "Alex001" turns out not to reflect any name you've ever been called offline
• "ID name" and "ID card name", while understandable, are a bit unnatural to my ear, although that's more because ID cards aren't usually considered important/canonical enough to use as the touchstone for something as personal as a name. (The grammar is perfectly fine in a technical sense; it's a cultural thing that I suspect holds true for most English-speaking countries.)
• "Legal name" works, to get across "yeah, this is what random faceless bureaucracies know me as". So does "the name on my ID card", oddly enough, although that's a little bit stronger a statement that it's not really your preferred name.
• "Original name" (per comments) is good if you've informally changed your name in actual usage from what your official ID still says; otherwise it's somewhat ill-fitting. "[Nationality] name" (Chinese name, Indian name, Dutch name, etc) would work in a similar situation, if you use a different name in a different country to avoid having to deal with people there mispronouncing or misremembering your name.
• "Full name" is probably the tidiest in general for cases where you're fine with friends calling you by it as well as by your nickname. It carries the idea of expanding the name options they had, but without denigrating the nickname and making them feel a little bad for having used it.
This is good... – James Wirth – 2015-07-22T17:09:44.767
"Real name" is strange for this particular example ("Neram Smith" vs. "John"), but it can be appropriate if you're talking about screen names, handles or pseudonyms. e.g. "On the forum I'm 'chunkylover53', but my real name is Homer Simpson." - Basically, contexts where you know beforehand that the "name" is actually a pseudonym. – R.M. – 2015-07-22T17:58:29.010
@R.M.: Right; I was assuming a context similar to the question, where they know the person in real life by a nickname, not a pseudonym as such. – Nathan Tuggy – 2015-07-22T18:02:32.470
4There's always "actual name"... I feel this is more useful in situations where you might have been introduced by a nickname and want to let the person know what you're actually called. Most people don't go by their "full name", outside of legal things... – Catija – 2015-07-22T18:06:47.997
@Catija: Yeah, although as my edit, that works about as well as "real name" in cases where they've been using it for a while: i.e. not that well. Still, that did bring up another possible exception for both of those. – Nathan Tuggy – 2015-07-22T18:10:07.320
@NathanTuggy My husband's friend gives everyone nicknames and, when people get together with him for the first time, he introduces you to people you don't know by your nickname... so his wife has a habit of following his introductions with "and his actual name is...", which is always amusing. Him: This is Jruple His wife: His actual name is Jeremy. – Catija – 2015-07-22T18:14:21.583
1It is extremely common for foreigners in a country to adopt, informally, a name more common to the country in which they find themselves, if they find that too many people from that country have difficulty with their real name. Most of the Chinese students in my college, for instance, had some “American” name that they used in most contexts, and that they preferred Americans use (whether this was out of a polite desire to avoid embarrassing Americans who struggle, or out of a desire to avoid having Americans butcher their real name, I could never tell). – KRyan – 2015-07-22T18:36:32.170
This was never a legal or formal thing; their real name was still applied to things like exams and coursework, and so on. The “American” name was just to ease conversation and to limit how much being foreign created obstacles in socializing. This answer basically ignores that altogether, even stating “Best not to get into that,” and thus seems like a poor answer to the question. – KRyan – 2015-07-22T18:38:39.097
@KRyan: A fair point, although I still don't think "real name" is a very good choice even then. (The question doesn't really imply this is the situation, although it's possible, so I added a different choice I suspect would work rather better.) – Nathan Tuggy – 2015-07-22T18:43:59.260
5
My full name is Maulik Bipinbhai Vyas
works in most of the registers as far as my knowledge goes. Because, calling your first name, middle name and surname a 'full name' is universal.
full name: - your whole name, including your first name, middle name, and family name
However, there may be many other ways to tell this.
5
My preference of the options presented for this particular case (a “Neram Smith” who is informally known as “Johnny”) is real name. I’ll address each option:
• Full name – To call “John Smith” your full name when you’ve been going by “Johnny” is fine, but “Neram Smith” is more than just a completion of “Johnny.” So in this case, I probably would not use full name, and would find it odd. It also implies that what you are labeling as your full name is actually complete: you could not say “Neram” is your full name.
• Original nameOriginal name only makes sense if you’ve actually changed your name; I think most English speakers would prefer birth name but either is fine for that case.
• ID card name or ID name – I don’t think too many English speakers would refer to their ID name or ID card name; the name found on one’s ID card would be explicitly described as such: the name on my ID card or similar. But most speakers would not do so at all. That said, both ID name and ID card name would be understood.
• Legal name – This would be most speakers’ preference over ID card name or similar, and might be the best choice overall. It does imply, again, a certain level of formality and completeness, which means it would be awkward to describe “Neram” as your legal name (but better than full name).
• Real name – Ultimately, though, I favor this. That “Johnny” is just a nickname is understood, and that real here is to constrast with that is also clear.
I also want to suggest another option, which may apply when the nickname is chosen specifically to avoid a name that would be unfamiliar to those you are speaking with. For instance, “Neram” is an Indian name,1 while “Johnny” is English; among English-speakers, a “Neram” may prefer to go by “Johnny” because of the English-speakers’ difficulty with “Neram” (whether it be to save them embarrassment, or to avoid having to hear their awful pronunciation, or some other reason altogether).2
In this case, it would be fairly common, and completely understood, for this individual to refer to “Neram Smith” as his Indian name. I have heard this particular approach from Chinese residents in the USA, who will choose an “American” name for conversation with Americans, and refer to their actual name as their Chinese name. Since they are Chinese, it is understood that the Chinese name is the “right” one, but this also implies that it is not desired for Americans to (try to) use it.
1. I think. I looked the name up, and it is the name of an Indian movie, but it’s not the name of any of the characters in the movie, and I can’t find anything about the name that isn’t that movie, or this question. Ultimately, the ethnicity of the name is irrelevant to my point, but my apologies if I have gotten it wrong.
2. I personally think “Neram” ought to be able to use his real name, and expect others to learn to pronounce it correctly, but I can understand wanting to avoid all that.
2
I think just saying "My name is ..." would be fine, or you can insert "full" if you want, for instance, if you wanted to emphasise that you are saying your full name.
You could use the example given and say:
I'm John, but my full name is John Johnny.
You can mix and match really, but don't go into as much detail as you have done, just stick with the following phrases:
My (full) name is ...
I am called ...
Grammatically, the ones you listed after that are correct, however I'm not certain that they're used very much. It's best to just keep things simple and avoid confusion, and the two phrases above are all you'll ever need!
So in conclusion, I would recommend:
My full name is .........
1..., prepare to die. – David Lord – 2015-07-23T05:10:55.727
2
I'd say it depends on where you get your nickname from. For example...
...If your nickname is a shortened version of your full name (Jon to Jonathan), it would be proper to say "My full name is Jonathan, but I go by Jon".
...If your nickname is not a common shortening, such as if you go by your lesser known middle name (Such as if your name were Jon Jacob Jinglheimerschmidt), it would be prudent to say "My first name is actually Jon, but I prefer my middle name, Jacob"
...If you are an immigrant going with a more "naturalized" name to ease social interation (For instance the french "Guillaume" being modified to "William" in english), I would use "My actual name is Guillaume but I go by William". In this case, social cues typically point to why you would not go by your actual name, and do not typically need to be communicated.
I think using "My full name is Jonathan, but I go by Jon" is good. I would almost always recommend saying "I go by abc". When I was in Taiwan, people could clearly tell I was foreign, and they would often just say "My name is Fred" instead of even mentioning their Chinese name because they knew I would likely struggle with it. I was actually pleased when people would instead give their actual name. I like trying to get their name right, as long as they don't get upset or frustrated quickly if I make a mistake at first. – Dan – 2015-07-23T00:46:20.763
2
TL;DR: Generically, "full name" is completely appropriate for informing people of your "real" name. However, in this situation, I feel that "given name" more accurately reflects the circumstances.
For the specific situation you're asking about, I think "given name" is more appropriate, as in:
My given name is Neram Smith, but please call me John.
This means that the name was given to you at birth by whoever named you, and also implies a connotation of a "true name." "Given name" makes it clear that Neram Smith is not really related to the name John - rather, you chose to call yourself John, while another person chose to call you Neram. It also clearly defines the relationship between the name John, which you have chosen to be called, and the name Neram Smith, which is the name that you officially hold.
Note that this usage may sound archaic to some English speakers.
+1 but I don't agree that this sounds archaic. It is simply a phrase that you don't often encounter (because most people only have one name and those who have multiple names don't even need to make the distinction very often). – krowe – 2015-07-23T05:05:04.123
Given name normally means the parts of your name which aren't inherited from your parents. For someone named John Smith, John is their given name or forename whereas Smith is their family name or surname. Given name works well across cultures, as in some cultures the the family name is the first name, whereas in others it is the last name. – AndyT – 2015-07-23T11:14:10.630
1
In many situations, "given name" is a valid term for this. "Given name," in English, is the one given to you at birth, recorded on your birth certificate.
This is very similar in nature to the definition for "legal name." However, many have noted the level of awkward formlessness associated with it. Instead, "given name" references the act of your parents naming you. This generates less of a cold feeling.
This term may not be valid if you are from a culture where the rituals surrounding naming are more complicated. However, for cultures which focus mostly/entirely on the name given to you at birth, the term is sufficient.
Given name normally means the parts of your name which aren't inherited from your parents. For someone named John Smith, John is their given name or forename whereas Smith is their family name or surname. Given name works well across cultures, as in some cultures the the family name is the first name, whereas in others it is the last name. – AndyT – 2015-07-23T11:13:49.090
|
2021-03-05 20:09:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3032655417919159, "perplexity": 1938.8259209021444}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178373241.51/warc/CC-MAIN-20210305183324-20210305213324-00261.warc.gz"}
|
https://www.physicsforums.com/threads/why-is-there-a-dx-dx-in-the-implicit-differentiation-rule.582854/
|
# Why is there a dx/dx in the Implicit Differentiation rule?
1. Mar 1, 2012
### DrummingAtom
I tried deriving this one on my own and I'm just not understanding where the dx/dx term comes from. I'm looking dy/dx.
Starting with F(x,y) = 0:
$\frac{\partial{F}}{\partial{x}}\frac{dx}{dx} + \frac{\partial{F}}{\partial{y}}\frac{dy}{dx} = 0$
It seems redundant to say dx/dx when it turns out to be one anyway. Why does this step need to be done? Thanks
2. Mar 1, 2012
3. Mar 1, 2012
### HallsofIvy
It doesn't need to be done. What text did you see that in?
4. Mar 1, 2012
### DrummingAtom
Briggs Calculus. It just seemed kinda strange to say that x is a function of x so we need to take the derivative of it.
5. Mar 1, 2012
i do it!
|
2018-01-21 21:24:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43763604760169983, "perplexity": 1531.3638946397634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890874.84/warc/CC-MAIN-20180121195145-20180121215145-00228.warc.gz"}
|
https://aliquote.org/post/org-mode-literate/
|
I’ve been using Emacs for editing and evaluating R code with ESS for a long time now. I also like Emacs for editing statistical reports and compiling them using knitr (and before that, Sweave), using plain $\LaTeX$ or just RMarkdown. Now, I’m getting interested in org-mode as an alternative to noweb, which I previously used when looking for a way to integrate different programming languages (e.g., sh, sed, and R) into the same document.
## Literate programming
I’ve long been interested by literate programming (LP), and I wrote some posts about different aspects of LP. I’ve seen an increased interest in LP and reproducible research this year (see, e.g., Roger Peng’s blog posts or handouts on Coursera and elsewhere). At some point, it get even mixed up with the new kid on the block: Data Science, which really does mean nothing to me since I consider myself as a data scientist as soon as I have to clean and process data coming from medical research. (If this has to do with “big data” or the “computer scientist vs. statistician debate”, I’m not really interested in that: After all, we are data slaves and we must deal with that.)
## R and knitr
There are so many good tutorials and posts on R + knitr that I feel too lazy to search through my Twitter posts or web bookmarks. Here are two recent ones:
The updated RMarkdown (v2, brought to you by http://www.rstudio.com) now relies on Pandoc to build HTML, PDF, or even DOCX documents. They all are standalone documents, meaning that you don’t have to provide a separate folder with figures in the case of HTML, for example. This is pretty good. I’ve been using knitr for two years now, and I updated my statistical workflow to build custom reports based on knitr. I also wrote some custom Elisp file to edit/build RMarkdown files under Emacs. It should not be too difficult to update this code in order to make use of RStudio rmarkdown::render() instead of knitr::knit(). Furthermore, as discussed on GitHub, it is possible to define custom renderer, for example in, e.g., our ~/.Rprofile:
my_render <- function(input, encoding) {
rmarkdown::render(input, clean = FALSE, encoding = encoding)
}
or use YAML to set knitr opts and hooks.
## Org-mode
Organize your life in plain text offers a good overview of what is org-mode, and why people who cares about plain text workflow should care about Emacs/Org. I must admit I personally remain on very pragmatic approaches to programming (I need code to do something right now because I’m a medical statistician, not a programmer, but it has to be reproducible at some point–I don’t want to spend too much time rewriting it two years ago when a reviewer asks for further analyses) and editing (I like plain text because I can view and update it on almost every device: a laptop, my iPhone, etc., and I can use any of all the magic GNU utilities to process text streams).
## Org-mode for literate programming
I remember that people were already using org-mode to display live R code in their TeX document before the first release of knitr. Surely we could find several threads on Stack Overflow or Cross Validated around 2010-2012. As of Markdown vs. Org or LP through org-mode, here are some related threads, but see also Bernd Weiss’s Slides from the R Benutzer Treffen Köln / R user meeting in Cologne on “A brief introduction to reproducible research with Emacs’ Org mode and R”. There is also How to Use Emacs Org-Babel Mode to Write Literate Programming Document in R Language available on http://orgmode.org.
Two years ago, Eric Schulte and coll. published a paper entitled A Multi-Language Computing Environment for Literate Programming and Reproducible Research in the Journal of Statistical Software (46(3), 2012). This article provides a good review of exiting systems for LP, as well as some example of use of Org-mode with various programming languages, including R (p. 20 ff.). In fact, this is Babel (formerly, org-babel) that is responsible for processing chunks of code into a text file for matted using org directives. There are some user-submitted examples that helps to see how Babel can be used in interaction with R. It looks like ox-ravel.org, from orgmode-accessories is now considered as a potential alternative to Sweave or knitr for processing R code under Emacs. At least, it has been used to produce an R vignette, see Writing R vignettes in emacs org mode using ox-ravel.
Regarding Clojure, I found the following post, Using Clojure with org-babel and nREPL, where the author described how to configure Emacs to run Clojure code through nREPL. However, nREPL has been replaced by CIDER, so this is no longer applicable except for those who didn’t make the switch. In the meantime, Org-babel-clojure has been updated and can be used with the development version of org-mode.
|
2018-04-22 17:47:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4836007356643677, "perplexity": 2202.032839860189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945637.51/warc/CC-MAIN-20180422174026-20180422194026-00612.warc.gz"}
|
https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00288/43523/Massively-Multilingual-Sentence-Embeddings-for
|
## Abstract
We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different families and written in 28 different scripts. Our system uses a single BiLSTM encoder with a shared byte-pair encoding vocabulary for all languages, which is coupled with an auxiliary decoder and trained on publicly available parallel corpora. This enables us to learn a classifier on top of the resulting embeddings using English annotated data only, and transfer it to any of the 93 languages without any modification. Our experiments in cross-lingual natural language inference (XNLI data set), cross-lingual document classification (MLDoc data set), and parallel corpus mining (BUCC data set) show the effectiveness of our approach. We also introduce a new test set of aligned sentences in 112 languages, and show that our sentence embeddings obtain strong results in multilingual similarity search even for low- resource languages. Our implementation, the pre-trained encoder, and the multilingual test set are available at https://github.com/facebookresearch/LASER.
## 1 Introduction
While the recent advent of deep learning has led to impressive progress in natural language processing (NLP), these techniques are known to be particularly data-hungry, limiting their applicability in many practical scenarios. An increasingly popular approach to alleviate this issue is to first learn general language representations on unlabeled data, which are then integrated in task-specific downstream systems. This approach was first popularized by word embeddings (Mikolov et al., 2013b; Pennington et al., 2014), but has recently been superseded by sentence-level representations (Peters et al., 2018; Devlin et al., 2019). Nevertheless, all these works learn a separate model for each language and are thus unable to leverage information across different languages, greatly limiting their potential performance for low-resource languages.
In this work, we are interested in universal language agnostic sentence embeddings, that is, vector representations of sentences that are general with respect to two dimensions: the input language and the NLP task. The motivations for such representations are multiple: the hope that languages with limited resources benefit from joint training over many languages, the desire to perform zero-shot transfer of an NLP model from one language (typically English) to another, and the possibility to handle code-switching. To that end, we train a single encoder to handle multiple languages, so that semantically similar sentences in different languages are close in the embedding space.
Whereas previous work in multilingual NLP has been limited to either a few languages (Schwenk and Douze, 2017; Yu et al., 2018) or specific applications like typology prediction (Malaviya et al., 2017) or machine translation (Neubig and Hu, 2018), we learn general purpose sentence representations for 93 languages (see Table 1). Using a single pre-trained BiLSTM encoder for all 93 languages, we obtain very strong results in various scenarios without any fine-tuning, including cross-lingual natural language inference (XNLI data set), cross-lingual classification (MLDoc data set), bitext mining (BUCC data set), and a new multilingual similarity search data set we introduce covering 112 languages. To the best of our knowledge, this is the first exploration of general purpose massively multilingual sentence representations across a large variety of tasks.
Table 1:
List of the 93 languages along with their training size, the resulting similarity error rate on Tatoeba, and the number of sentences in it. Dashes denote language pairs excluded for containing fewer than 100 test sentences.
af am ar ay az be ber bg bn br bs ca cbk cs da de train sent. 67k 88k 8.2M 14k 254k 5k 62k 4.9M 913k 29k 4.2M 813k 1k 5.5M 7.9M 8.7M en→xx err. 11.20 60.71 8.30 n/a 44.10 31.20 29.80 4.50 10.80 83.50 3.95 4.00 24.20 3.10 3.90 0.90 xx→en err. 9.90 55.36 7.80 n/a 23.90 36.50 33.70 5.40 10.00 84.90 3.11 4.20 21.70 3.80 4.00 1.00 test sent. 1000 168 1000 – 1000 1000 1000 1000 1000 1000 354 1000 1000 1000 1000 1000 dtp dv el en eo es et eu fi fr ga gl ha he hi hr train sent. 1k 90k 6.5M 2.6M 397k 4.8M 5.3M 1.2M 7.9M 8.8M 732 349k 127k 4.1M 288k 4.0M en→xx err. 92.10 n/a 5.30 n/a 2.70 1.90 3.20 5.70 3.70 4.40 93.80 4.60 n/a 8.10 5.80 2.80 xx→en err. 93.50 n/a 4.80 n/a 2.80 2.10 3.40 5.00 3.70 4.30 95.80 4.40 n/a 7.60 4.80 2.70 test sent. 1000 – 1000 – 1000 1000 1000 1000 1000 1000 1000 1000 – 1000 1000 1000 hu hy ia id ie io is it ja ka kab kk km ko ku kw train sent. 5.3M 6k 9k 4.3M 3k 3k 2.0M 8.3M 3.2M 296k 15k 4k 625 1.4M 50k 2k en→xx err. 3.90 59.97 5.40 5.20 14.70 17.40 4.40 4.60 3.90 60.32 39.10 80.17 77.01 10.60 80.24 91.90 xx→en err. 4.00 67.79 4.10 5.80 12.80 15.20 4.40 4.80 5.40 67.83 44.70 82.61 81.72 11.50 85.37 93.20 test sent. 1000 742 1000 1000 1000 1000 1000 1000 1000 746 1000 575 722 1000 410 1000 kzj la lfn lt lv mg mhr mk ml mr ms my nb nds nl oc train sent. 560 19k 2k 3.2M 2.0M 355k 1k 4.2M 373k 31k 2.9M 2k 4.1M 12k 8.4M 3k en→xx err. 91.60 41.60 35.90 4.10 4.50 n/a 87.70 5.20 3.35 9.00 3.40 n/a 1.30 18.60 3.10 39.20 xx→en err. 94.10 41.50 35.10 3.40 4.70 n/a 91.50 5.40 2.91 8.00 3.80 n/a 1.10 15.60 4.30 38.40 test sent. 1000 1000 1000 1000 1000 – 1000 1000 687 1000 1000 – 1000 1000 1000 1000 pl ps pt ro ru sd si sk sl so sq sr sv sw ta te train sent. 5.5M 4.9M 8.3M 4.9M 9.3M 91k 796k 5.2M 5.2M 85k 3.2M 4.0M 7.8M 173k 42k 33k en→xx err. 2.00 7.20 4.70 2.50 4.90 n/a n/a 3.10 4.50 n/a 1.80 4.30 3.60 45.64 31.60 18.38 xx→en err. 2.40 6.00 4.90 2.70 5.90 n/a n/a 3.70 3.77 n/a 2.30 5.00 3.20 39.23 29.64 22.22 test sent. 1000 1000 1000 1000 1000 – – 1000 823 – 1000 1000 1000 390 307 234 tg th tl tr tt ug uk ur uz vi wuu yue zh train sent. 124k 4.1M 36k 5.7M 119k 88k 1.4M 746k 118k 4.0M 2k 4k 8.3M en→xx err. n/a 4.93 47.40 2.30 72.00 59.90 5.80 20.00 82.24 3.40 25.80 37.00 4.10 xx→en err. n/a 4.20 51.50 2.60 65.70 49.60 5.10 16.20 80.37 3.00 25.20 38.90 5.00 test sent. – 548 1000 1000 1000 1000 1000 1000 428 1000 1000 1000 1000
af am ar ay az be ber bg bn br bs ca cbk cs da de train sent. 67k 88k 8.2M 14k 254k 5k 62k 4.9M 913k 29k 4.2M 813k 1k 5.5M 7.9M 8.7M en→xx err. 11.20 60.71 8.30 n/a 44.10 31.20 29.80 4.50 10.80 83.50 3.95 4.00 24.20 3.10 3.90 0.90 xx→en err. 9.90 55.36 7.80 n/a 23.90 36.50 33.70 5.40 10.00 84.90 3.11 4.20 21.70 3.80 4.00 1.00 test sent. 1000 168 1000 – 1000 1000 1000 1000 1000 1000 354 1000 1000 1000 1000 1000 dtp dv el en eo es et eu fi fr ga gl ha he hi hr train sent. 1k 90k 6.5M 2.6M 397k 4.8M 5.3M 1.2M 7.9M 8.8M 732 349k 127k 4.1M 288k 4.0M en→xx err. 92.10 n/a 5.30 n/a 2.70 1.90 3.20 5.70 3.70 4.40 93.80 4.60 n/a 8.10 5.80 2.80 xx→en err. 93.50 n/a 4.80 n/a 2.80 2.10 3.40 5.00 3.70 4.30 95.80 4.40 n/a 7.60 4.80 2.70 test sent. 1000 – 1000 – 1000 1000 1000 1000 1000 1000 1000 1000 – 1000 1000 1000 hu hy ia id ie io is it ja ka kab kk km ko ku kw train sent. 5.3M 6k 9k 4.3M 3k 3k 2.0M 8.3M 3.2M 296k 15k 4k 625 1.4M 50k 2k en→xx err. 3.90 59.97 5.40 5.20 14.70 17.40 4.40 4.60 3.90 60.32 39.10 80.17 77.01 10.60 80.24 91.90 xx→en err. 4.00 67.79 4.10 5.80 12.80 15.20 4.40 4.80 5.40 67.83 44.70 82.61 81.72 11.50 85.37 93.20 test sent. 1000 742 1000 1000 1000 1000 1000 1000 1000 746 1000 575 722 1000 410 1000 kzj la lfn lt lv mg mhr mk ml mr ms my nb nds nl oc train sent. 560 19k 2k 3.2M 2.0M 355k 1k 4.2M 373k 31k 2.9M 2k 4.1M 12k 8.4M 3k en→xx err. 91.60 41.60 35.90 4.10 4.50 n/a 87.70 5.20 3.35 9.00 3.40 n/a 1.30 18.60 3.10 39.20 xx→en err. 94.10 41.50 35.10 3.40 4.70 n/a 91.50 5.40 2.91 8.00 3.80 n/a 1.10 15.60 4.30 38.40 test sent. 1000 1000 1000 1000 1000 – 1000 1000 687 1000 1000 – 1000 1000 1000 1000 pl ps pt ro ru sd si sk sl so sq sr sv sw ta te train sent. 5.5M 4.9M 8.3M 4.9M 9.3M 91k 796k 5.2M 5.2M 85k 3.2M 4.0M 7.8M 173k 42k 33k en→xx err. 2.00 7.20 4.70 2.50 4.90 n/a n/a 3.10 4.50 n/a 1.80 4.30 3.60 45.64 31.60 18.38 xx→en err. 2.40 6.00 4.90 2.70 5.90 n/a n/a 3.70 3.77 n/a 2.30 5.00 3.20 39.23 29.64 22.22 test sent. 1000 1000 1000 1000 1000 – – 1000 823 – 1000 1000 1000 390 307 234 tg th tl tr tt ug uk ur uz vi wuu yue zh train sent. 124k 4.1M 36k 5.7M 119k 88k 1.4M 746k 118k 4.0M 2k 4k 8.3M en→xx err. n/a 4.93 47.40 2.30 72.00 59.90 5.80 20.00 82.24 3.40 25.80 37.00 4.10 xx→en err. n/a 4.20 51.50 2.60 65.70 49.60 5.10 16.20 80.37 3.00 25.20 38.90 5.00 test sent. – 548 1000 1000 1000 1000 1000 1000 428 1000 1000 1000 1000
## 2 Related Work
Following the success of word embeddings (Mikolov et al., 2013b; Pennington et al., 2014), there has been an increasing interest in learning continuous vector representations of longer linguistic units like sentences (Le and Mikolov, 2014; Kiros et al., 2015). These sentence embeddings are commonly obtained using a recurrent neural network (RNN) encoder, which is typically trained in an unsupervised way over large collections of unlabeled corpora. For instance, the skip-thought model of Kiros et al. (2015) couples the encoder with an auxiliary decoder, and trains the entire system to predict the surrounding sentences over a collection of books. It was later shown that more competitive results could be obtained by training the encoder over labeled natural language inference (NLI) data (Conneau et al., 2017). This was later extended to multitask learning, combining different training objectives like that of skip-thought, NLI, and machine translation (Cer et al., 2018; Subramanian et al., 2018).
While the previous methods consider a single language at a time, multilingual representations have recently attracted a large attention. Most of this research focuses on cross-lingual word embeddings (Ruder et al., 2017), which are commonly learned jointly from parallel corpora (Gouws et al., 2015; Luong et al., 2015). An alternative approach that is becoming increasingly popular is to separately train word embeddings for each language, and map them to a shared space based on a bilingual dictionary (Mikolov et al., 2013a; Artetxe et al., 2018a) or even in a fully unsupervised manner (Conneau et al., 2018a; Artetxe et al., 2018b). Cross-lingual word embeddings are often used to build bag-of-word representations of longer linguistic units by taking their respective (IDF-weighted) average (Klementiev et al., 2012; Dufter et al., 2018). Although this approach has the advantage of requiring weak or no cross-lingual signal, it has been shown that the resulting sentence embeddings work poorly in practical cross-lingual transfer settings (Conneau et al., 2018b).
A more competitive approach that we follow here is to use a sequence-to-sequence encoder-decoder architecture (Schwenk and Douze, 2017; Hassan et al., 2018). The full system is trained end-to-end on parallel corpora akin to multilingual neural machine translation (Johnson et al., 2017): The encoder maps the source sequence into a fixed-length vector representation, which is used by the decoder to create the target sequence. This decoder is then discarded, and the encoder is kept to embed sentences in any of the training languages. While some proposals use a separate encoder for each language (Schwenk and Douze, 2017), sharing a single encoder for all languages also gives strong results (Schwenk, 2018).
Nevertheless, most existing work is either limited to a few, rather close languages (Schwenk and Douze, 2017; Yu et al., 2018) or, more commonly, consider pairwise joint embeddings with English and one foreign language (Espaa-Bonet et al., 2017; Guo et al., 2018). To the best of our knowledge, existing work on learning multilingual representations for a large number of languages is limited to word embeddings (Ammar et al., 2016; Dufter et al., 2018) specific applications like typology prediction (Malaviya et al., 2017) or machine translation (Neubig and Hu, 2018)—ours being the first paper exploring general purpose massively multilingual sentence representations.
All the previous approaches learn a fixed-length representation for each sentence. A recent research line has obtained very strong results using variable-length representations instead, consisting of contextualized embeddings of the words in the sentence (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2019). For that purpose, these methods train either an RNN or self-attentional encoder over unnanotated corpora using some form of language modeling. A classifier can then be learned on top of the resulting encoder, which is commonly further fine-tuned during this supervised training. Concurrent to our work, Lample and Conneau (2019) propose a cross-lingual extension of these models, and report strong results in cross-lingual NLI, machine translation, and language modeling. In contrast, our focus is on scaling to a large number of languages, for which we argue that fixed-length approaches provide a more versatile and compatible representation form.1 Also, our approach achieves strong results without task-specific fine-tuning, which makes it interesting for tasks with limited resources.
## 3 Proposed Method
We use a single, language-agnostic BiLSTM encoder to build our sentence embeddings, which is coupled with an auxiliary decoder and trained on parallel corpora. In Sections ec1ec3, we describe its architecture, our training strategy to scale to 93 languages, and the training data used for that purpose.
### 3.1 Architecture
Figure 1 illustrates the architecture of the proposed system, which is based on Schwenk (2018). As it can be seen, sentence embeddings are obtained by applying a max-pooling operation over the output of a BiLSTM encoder. These sentence embeddings are used to initialize the decoder LSTM through a linear transformation, and are also concatenated to its input embeddings at every time step. Note that there is no other connection between the encoder and the decoder, as we want all relevant information of the input sequence to be captured by the sentence embedding.
Figure 1:
Architecture of our system to learn multilingual sentence embeddings.
Figure 1:
Architecture of our system to learn multilingual sentence embeddings.
We use a single encoder and decoder in our system, which are shared by all languages involved. For that purpose, we build a joint byte-pair encoding (BPE) vocabulary with 50k operations, which is learned on the concatenation of all training corpora. This way, the encoder has no explicit signal on what the input language is, encouraging it to learn language independent representations. In contrast, the decoder takes a language ID embedding that specifies the language to generate, which is concatenated to the input and sentence embeddings at every time step.
Scaling up to almost one hundred languages calls for an encoder with sufficient capacity. In this paper, we limit our study to a stacked BiLSTM with 1 to 5 layers, each 512-dimensional. The resulting sentence representations (after concatenating both directions) are 1024-dimensional. The decoder has always one layer of dimension 2048. The input embedding size is set to 320, and the language ID embedding has 32 dimensions.
### 3.2 Training Strategy
In preceding work (Schwenk and Douze, 2017; Schwenk, 2018), each input sentence was jointly translated into all other languages. However, this approach has two obvious drawbacks when trying to scale to a large number of languages. First, it requires an N-way parallel corpus, which is difficult to obtain for all languages. Second, it has a quadratic cost with respect to the number of languages, making training prohibitively slow as the number of languages is increased. In our preliminary experiments, we observed that similar results can be obtained using only two target languages.2 At the same time, we relax the requirement for N-way parallel corpora by considering separate alignments for each language combination.
Training minimizes the cross-entropy loss on the training corpus, alternating over all combinations of the languages involved. For that purpose, we use Adam with a constant learning rate of 0.001 and dropout set to 0.1, and train for a fixed number of epochs. Our implementation is based on fairseq,3 and we make use of its multi-GPU support to train on 16 NVIDIA V100 GPUs with a total batch size of 128,000 tokens. Unless otherwise specified, we train our model for 17 epochs, which takes about 5 days. Stopping training earlier decreases the overall performance only slightly.
### 3.3 Training Data and Pre-processing
As described in Section 3.2, training requires bitexts aligned with two target languages. We choose English and Spanish for that purpose, as most of the data are aligned with these languages.4 We collect training corpora for 93 input languages by combining the Europarl, United Nations, OpenSubtitles2018, Global Voices, Tanzil, and Tatoeba corpuses, which are all publicly available on the OPUS Web site5 (Tiedemann, 2012). Appendix H provides a more detailed description of this training data, and Table 1 summarizes the list of all languages covered and the size of the bitexts. Our training data comprises a total of 223 million parallel sentences. All pre-processing is done with Moses tools:6 punctuation normalization, removing non-printing characters, and tokenization. As the only exception, Chinese and Japanese were segmented with Jieba7 and Mecab,8 respectively. All the languages are kept in their original script with the exception of Greek, which we romanize into the Latin alphabet. It is important to note that the joint encoder itself has no information on the language or writing script of the tokenized input texts. It is even possible to mix multiple languages in one sentence.
## 4 Experimental Evaluation
In contrast with the well-established evaluation frameworks for English sentence representations (Conneau et al., 2017; Wang et al., 2018), there is not yet a commonly accepted standard to evaluate multilingual sentence embeddings. The most notable effort in this regard is arguably the XNLI data set (Conneau et al., 2018b), which evaluates the transfer performance of an NLI model trained on English data over 14 additional test languages (Section 4.1). So as to obtain a more complete picture, we also evaluate our embeddings in cross-lingual document classification (MLDoc, Section 4.2), and bitext mining (BUCC, Section 4.3). However, all these data sets only cover a subset of our 93 languages, so we also introduce a new test set for multilingual similarity search in 112 languages, including several languages for which we have no training data but whose language family is covered (Section 4.4). We remark that we use the same pre-trained BiLSTM encoder for all tasks and languages without any fine-tuning.
### 4.1 XNLI: Cross-lingual NLI
NLI has become a widely used task to evaluate sentence representations (Bowman et al., 2015; Williams et al., 2018). Given two sentences, a premise and a hypothesis, the task consists in deciding whether there is an entailment, contradiction, or neutral relationship between them. XNLI is a recent effort to create a data set similar to the English MultiNLI for several languages (Conneau et al., 2018b). It consists of 2,500 development and 5,000 test instances translated from English into 14 languages by professional translators, making results across different languages directly comparable.
We train a classifier on top of our multilingual encoder using the usual combination of the two sentence embeddings: (p,h,ph,|ph|), where p and h are the premise and hypothesis. For that purpose, we use a feed-forward neural network with two hidden layers of size 512 and 384, trained with Adam. All hyperparameters were optimized on the English XNLI development corpus only, and then the same classifier was applied to all languages of the XNLI test set. As such, we did not use any training or development data in any of the foreign languages. Note, moreover, that the multilingual sentence embeddings are fixed and not fine-tuned on the task or the language.
We report our results in Table 2, along with several baselines from Conneau et al. (2018b) and the multilingual BERT model (Devlin et al., 2019).9 Our proposed method obtains the best results in zero-shot cross-lingual transfer for all languages but Spanish. Moreover, our transfer results are strong and homogeneous across all languages: For 11 of them, the zero-short performance is (at most) 5% lower than the one on English, including distant languages like Arabic, Chinese, and Vietnamese, and we also achieve remarkable good results on low-resource languages like Swahili. In contrast, BERT achieves excellent results on English, outperforming our system by 7.5 points, but its transfer performance is much weaker. For instance, the loss in accuracy for both Arabic and Chinese is 2.5 points for our system, compared with 19.3 and 17.6 points for BERT.10 Finally, we also outperform all baselines of Conneau et al. (2018b) by a substantial margin, with the additional advantage that we use a single pre-trained encoder, whereas X-BiLSTM learns a separate encoder for each language.
Table 2:
Test accuracies on the XNLI cross-lingual natural language inference data set. All results from Conneau et al. (2018b) correspond to max-pooling, which outperforms the last-state variant in all cases. Results involving machine translation do not use a multilingual model and are not directly comparable with zero-shot transfer. Overall best results are in bold, the best ones in each group are underlined. * Results for BERT (Devlin et al., 2019) are extracted from its GitHub README.9 Monolingual BERT model for Thai from https://github.com/ThAIKeras/bert.
EN EN→XX fr es de el bg ru tr ar vi th zh hi sw ur Zero-Shot Transfer, one NLI system for all languages: Conneau et al. (2018b) X-BiLSTM 73.7 67.7 68.7 67.7 68.9 67.9 65.4 64.2 64.8 66.4 64.1 65.8 64.1 55.7 58.4 X-CBOW 64.5 60.3 60.7 61.0 60.5 60.4 57.8 58.7 57.5 58.8 56.9 58.8 56.3 50.4 52.2 BERT uncased* Transformer 81.4 – 74.3 70.5 – – – – 62.1 – – 63.8 – – 58.3 Proposed method BiLSTM 73.9 71.9 72.9 72.6 72.8 74.2 72.1 69.7 71.4 72.0 69.2 71.4 65.5 62.2 61.0 Translate test, one English NLI system: Conneau et al. (2018b) BiLSTM 73.7 70.4 70.7 68.7 69.1 70.4 67.8 66.3 66.8 66.5 64.4 68.3 64.2 61.8 59.3 BERT uncased* Transformer 81.4 – 74.9 74.4 – – – – 70.4 – – 70.1 – – 62.1 Translate train, separate NLI systems for each language: Conneau et al. (2018b) BiLSTM 73.7 68.3 68.8 66.5 66.4 67.4 66.5 64.5 65.8 66.0 62.8 67.0 62.1 58.2 56.6 BERT cased* Transformer 81.9 – 77.8 75.9 – – – – 70.7 – 68.9† 76.6 – – 61.6
EN EN→XX fr es de el bg ru tr ar vi th zh hi sw ur Zero-Shot Transfer, one NLI system for all languages: Conneau et al. (2018b) X-BiLSTM 73.7 67.7 68.7 67.7 68.9 67.9 65.4 64.2 64.8 66.4 64.1 65.8 64.1 55.7 58.4 X-CBOW 64.5 60.3 60.7 61.0 60.5 60.4 57.8 58.7 57.5 58.8 56.9 58.8 56.3 50.4 52.2 BERT uncased* Transformer 81.4 – 74.3 70.5 – – – – 62.1 – – 63.8 – – 58.3 Proposed method BiLSTM 73.9 71.9 72.9 72.6 72.8 74.2 72.1 69.7 71.4 72.0 69.2 71.4 65.5 62.2 61.0 Translate test, one English NLI system: Conneau et al. (2018b) BiLSTM 73.7 70.4 70.7 68.7 69.1 70.4 67.8 66.3 66.8 66.5 64.4 68.3 64.2 61.8 59.3 BERT uncased* Transformer 81.4 – 74.9 74.4 – – – – 70.4 – – 70.1 – – 62.1 Translate train, separate NLI systems for each language: Conneau et al. (2018b) BiLSTM 73.7 68.3 68.8 66.5 66.4 67.4 66.5 64.5 65.8 66.0 62.8 67.0 62.1 58.2 56.6 BERT cased* Transformer 81.9 – 77.8 75.9 – – – – 70.7 – 68.9† 76.6 – – 61.6
We also provide results involving Machine Translation (MT) from Conneau et al. (2018b). This can be done in two ways: 1) translate the test data into English and apply the English NLI classifier, or 2) translate the English training data and train a separate NLI classifier for each language. Note that we are not evaluating multilingual sentence embeddings anymore, but rather the quality of the MT system and a monolingual model. Moreover, the use of MT incurs an important overhead with either strategy: Translating test makes inference substantially more expensive, whereas translating train results in a separate model for each language. As shown in Table 2, our approach outperforms all translation baselines of Conneau et al. (2018b). We also outperform MT BERT for Arabic and Thai, and are very close for Urdu. Thanks to its multilingual nature, our system can also handle premises and hypothesis in different languages. As reported in Appendix I, the proposed method obtains very strong results in these settings, even for distant language combinations like French–Chinese.
### 4.2 MLDoc: Cross-lingual Classification
Cross-lingual document classification is a typical application of multilingual representations. In order to evaluate our sentence embeddings in this task, we use the MLDoc data set of Schwenk and Li (2018), which is an improved version of the Reuters benchmark (Lewis et al., 2004; Klementiev et al., 2012) with uniform class priors and a wider language coverage. There are 1,000 training and development documents and 4,000 test documents for each language, divided in 4 different genres. Just as with the XNLI evaluation, we consider the zero-shot transfer scenario: We train a classifier on top of our multilingual encoder using the English training data, optimizing hyper-parameters on the English development set, and evaluating the resulting system in the remaining languages. We use a feed-forward neural network with one hidden layer of 10 units.
As shown in Table 3, our system obtains the best published results for 5 of the 7 transfer languages. We believe that our weaker performance on Japanese can be attributed to the domain and sentence length mismatch between MLDoc and the parallel corpus we use for this language.
Table 3:
Accuracies on the MLDoc zero-shot cross-lingual document classification task (test set).
EN EN→XX de es fr it ja ru zh Schwenk and Li (2018) MultiCCA + CNN 92.20 81.20 72.50 72.38 69.38 67.63 60.80 74.73 BiLSTM (Europarl) 88.40 71.83 66.65 72.83 60.73 - - - BiLSTM (UN) 88.83 - 69.50 74.52 - - 61.42 71.97 Proposed method 89.93 84.78 77.33 77.95 69.43 60.30 67.78 71.93
EN EN→XX de es fr it ja ru zh Schwenk and Li (2018) MultiCCA + CNN 92.20 81.20 72.50 72.38 69.38 67.63 60.80 74.73 BiLSTM (Europarl) 88.40 71.83 66.65 72.83 60.73 - - - BiLSTM (UN) 88.83 - 69.50 74.52 - - 61.42 71.97 Proposed method 89.93 84.78 77.33 77.95 69.43 60.30 67.78 71.93
### 4.3 BUCC: Bitext Mining
Bitext mining is another natural application for multilingual sentence embeddings. Given two comparable corpora in different languages, the task consists of identifying sentence pairs that are translations of each other. For that purpose, one would commonly score sentence pairs by taking the cosine similarity of their respective embeddings, so parallel sentences can be extracted through nearest neighbor retrieval and filtered by setting a fixed threshold over this score (Schwenk, 2018). However, it was recently shown that this approach suffers from scale inconsistency issues (Guo et al., 2018), and Artetxe and Schwenk (2018) proposed the following alternative score addressing it:
$score(x,y)=margin(cos(x,y),∑z∈NNk(x)cos(x,z)2k+∑z∈NNk(y)cos(y,z)2k)$
where x and y are the source and target sentences, and NNk(x) denotes the k nearest neighbors of x in the other language. The paper explores different margin functions, with ratio ($margin(a,b)=ab$) yielding the best results. This notion of margin is related to CSLS (Conneau et al., 2018a).
We use this method to evaluate our sentence embeddings on the BUCC mining task (Zweigenbaum et al., 2017, 2018), using exact same hyper-parameters as Artetxe and Schwenk (2018). The task consists in extracting parallel sentences from a comparable corpus between English and four foreign languages: German, French, Russian, and Chinese. The data set consists of 150 K to 1.2 M sentences for each language, split into a sample, training and test set, with about 2–3% of the sentences being parallel. As shown in Table 4, our system establishes a new state-of-the-art for all language pairs with the exception of English-Chinese test. We also outperform Artetxe and Schwenk (2018) themselves, who use two separate models covering 4 languages each. Not only are our results better, but our model also covers many more languages, so it can potentially be used to mine bitext for any combination of the 93 languages supported.
Table 4:
F1 scores on the BUCC mining task.
TRAIN TEST de-en fr-en ru-en zh-en de-en fr-en ru-en zh-en Azpeitia et al. (2017) 83.33 78.83 - - 83.74 79.46 - - Grégoire and Langlais (2017) - 20.67 - - - 20 - - Zhang and Zweigenbaum (2017) - - - 43.48 - - - 45.13 Azpeitia et al. (2018) 84.27 80.63 80.89 76.45 85.52 81.47 81.30 77.45 Bouamor and Sajjad (2018) - 75.2 - - - 76.0 - - Chongman Leong and Chao (2018) - - - 58.54 - - - 56 Schwenk (2018) 76.1 74.9 73.3 71.6 76.9 75.8 73.8 71.6 Artetxe and Schwenk (2018) 94.84 91.85 90.92 91.04 95.58 92.89 92.03 92.57 Proposed method 95.43 92.40 92.29 91.20 96.19 93.91 93.30 92.27
TRAIN TEST de-en fr-en ru-en zh-en de-en fr-en ru-en zh-en Azpeitia et al. (2017) 83.33 78.83 - - 83.74 79.46 - - Grégoire and Langlais (2017) - 20.67 - - - 20 - - Zhang and Zweigenbaum (2017) - - - 43.48 - - - 45.13 Azpeitia et al. (2018) 84.27 80.63 80.89 76.45 85.52 81.47 81.30 77.45 Bouamor and Sajjad (2018) - 75.2 - - - 76.0 - - Chongman Leong and Chao (2018) - - - 58.54 - - - 56 Schwenk (2018) 76.1 74.9 73.3 71.6 76.9 75.8 73.8 71.6 Artetxe and Schwenk (2018) 94.84 91.85 90.92 91.04 95.58 92.89 92.03 92.57 Proposed method 95.43 92.40 92.29 91.20 96.19 93.91 93.30 92.27
### 4.4 Tatoeba: Similarity Search
Although XNLI, MLDoc, and BUCC are well-established benchmarks with comparative results available, they only cover a small subset of our 93 languages. So as to better assess the performance of our model in all these languages, we introduce a new test set of similarity search for 112 languages based on the Tatoeba corpus. The data set consists of up to 1,000 English-aligned sentence pairs for each language. Appendix J describes how the data set was constructed in more details. Evaluation is done by finding the nearest neighbor for each sentence in the other language according to cosine similarity and computing the error rate.
We report our results in Table 1. Contrasting these results with those of XNLI, one would assume that similarity error rates below 5% are indicative of strong downstream performance.11 This is the case for 37 languages, there are 48 languages with an error rate below 10% and 55 with less than 20%. There are only 15 languages with error rates above 50%. Additional result analysis is given in Appendix K.
We believe that our competitive results for many low-resource languages are indicative of the benefits of joint training, which is also supported by our ablation results in Section 5.3. In relation to that, Appendix L reports similarity search results for 29 additional languages without any training data, showing that our encoder can also generalize to unseen languages to some extent as long as it was trained on related languages.
## 5 Ablation Experiments
In this section, we explore different variants of our approach and study the impact on the performance for all our evaluation tasks. We report average results across all languages. For XNLI, we also report the accuracy on English.
### 5.1 Encoder Depth
Table 5 reports the performance on the different tasks for encoders with 1, 3, or 5 layers. We were not able to achieve good convergence with deeper models. It can be seen that all tasks benefit from deeper models, in particular XNLI and Tatoeba, suggesting that a single-layer BiLSTM has not enough capacity to encode so many languages.
Table 5:
Impact of the depth of the BiLSTM encoder.
Depth Tatoeba BUCC MLDoc XNLI-en XNLI-xx Err [%] F1 Acc [%] Acc [%] Acc [%] 1 37.96 89.95 69.42 70.94 64.54 3 28.95 92.28 71.64 72.83 68.43 5 26.31 92.83 72.79 73.67 69.92
Depth Tatoeba BUCC MLDoc XNLI-en XNLI-xx Err [%] F1 Acc [%] Acc [%] Acc [%] 1 37.96 89.95 69.42 70.94 64.54 3 28.95 92.28 71.64 72.83 68.43 5 26.31 92.83 72.79 73.67 69.92
Multitask learning has been shown to be helpful to learn English sentence embeddings (Subramanian et al., 2018; Cer et al., 2018). The most important task in this approach is arguably NLI, so we explored adding an additional NLI objective to our system with different weighting schemes. As shown in Table 6, the NLI objective leads to a better performance on the English NLI test set, but this comes at the cost of a worse cross-lingual transfer performance in XNLI and Tatoeba. The effect in BUCC is negligible.
Table 6:
Multitask training with an NLI objective and different weightings.
NLI Tatoeba BUCC MLDoc XNLI-en XNLI-xx obj. Err [%] F1 Acc [%] Acc [%] Acc [%] − 26.31 92.83 72.79 73.67 69.92 ×1 26.89 93.01 74.51 73.71 69.10 ×2 28.52 93.06 71.90 74.65 67.75 ×3 27.83 92.98 73.11 75.23 61.86
NLI Tatoeba BUCC MLDoc XNLI-en XNLI-xx obj. Err [%] F1 Acc [%] Acc [%] Acc [%] − 26.31 92.83 72.79 73.67 69.92 ×1 26.89 93.01 74.51 73.71 69.10 ×2 28.52 93.06 71.90 74.65 67.75 ×3 27.83 92.98 73.11 75.23 61.86
### 5.3 Number of Training Languages
So as to better understand how our architecture scales to a large amount of languages, we train a separate model on a subset of 18 evaluation languages, and compare it to our main model trained on 93 languages. We replaced the Tatoeba corpus with the WMT 2014 test set to evaluate the multilingual similarity error rate. This covers English, Czech, French, German, and Spanish, so results between both models are directly comparable. As shown in Table 7, the full model equals or outperforms the one covering the evaluation languages only for all tasks but MLDoc. This suggests that the joint training also yields to overall better representations.
Table 7:
Comparison between training on 93 languages and training on the 18 evaluation languages only.
#langs WMT BUCC MLDoc XNLI-en XNLI-xx Err [%] F1 Acc [%] Acc [%] Acc [%] All (93) 0.54 92.83 72.79 73.67 69.92 Eval (18) 0.59 92.91 75.63 72.99 68.84
#langs WMT BUCC MLDoc XNLI-en XNLI-xx Err [%] F1 Acc [%] Acc [%] Acc [%] All (93) 0.54 92.83 72.79 73.67 69.92 Eval (18) 0.59 92.91 75.63 72.99 68.84
## 6 Conclusions
In this paper, we propose an architecture to learn multilingual fixed-length sentence embeddings for 93 languages. We use a single language-agnostic BiLSTM encoder for all languages, which is trained on publicly available parallel corpora and applied to different downstream tasks without any fine-tuning. Our experiments on cross-lingual natural language inference (XNLI), cross-lingual document classification (MLDoc), and bitext mining (BUCC) confirm the effectiveness of our approach. We also introduce a new test set of multilingual similarity search in 112 languages, and show that our approach is competitive even for low-resource languages. To the best of our knowledge, this is the first successful exploration of general purpose massively multilingual sentence representations.
In the future, we would like to explore alternative encoder architectures like self-attention (Vaswani et al., 2017). We would also like to explore strategies to exploit monolingual data, such as using pre-trained word embeddings, back-translation (Sennrich et al., 2016; Edunov et al., 2018), or other ideas from unsupervised MT (Artetxe et al., 2018c; Lample et al., 2018). Finally, we would like to replace our language-dependant pre-processing with a language-agnostic approach like SentencePiece.12
Our implementation, the pre-trained encoder, and the multilingual test set are freely available at https://github.com/facebookresearch/LASER.
## A Training Data
Our training data consists of the combination of the following publicly available parallel corpora:
• •
Europarl: 21 European languages. The size varies from 400k to 2 M sentences depending on the language pair.
• •
United Nations: We use the first 2 million sentences in Arabic, Russian, and Chinese.
• •
OpenSubtitles2018: A parallel corpus of movie subtitles in 57 languages. The corpus size varies from a few thousand sentences to more than 50 million. We keep at most 2 million entries for each language pair.
• •
Global Voices: News stories from the Global Voices Web site (38 languages). This is a rather small corpus with fewer than 100k sentence in most of the languages.
• •
Tanzil: Quran translations in 42 languages, average size of 135k sentences. The style and vocabulary is very different from news texts.
• •
Tatoeba: A community-supported collection of English sentences and translations into more than 300 languages. We use this corpus to extract a separate test set of up to 1,000 sentences (see Appendix C). For languages with more than 1,000 entries, we use the remaining ones for training.
Using all these corpora would provide parallel data for more languages, but we decided to keep 93 languages after discarding several constructed languages with little practical use (Klingon, Kotava, Lojban, Toki Pona, and Volapük). In our preliminary experiments, we observed that the domain of the training data played a key role in the performance of our sentence embeddings. Some tasks (BUCC, MLDoc) tend to perform better when the encoder is trained on long and formal sentences, whereas other tasks (XNLI, Tatoeba) benefit from training on shorter and more informal sentences. So as to obtain a good balance, we used at most 2 million sentences from OpenSubtitles, although more data are available for some languages. The size of the available training data varies largely for the considered languages (see Table 1). This favors high-resource languages when the joint BPE vocabulary is created and the training of the joint encoder. In this work, we did not try to counter this effect by over-sampling low-resource languages.
## B XNLI Results for All Language Combinations
Table 8 reports the accuracies of our system on the XNLI test set when the premises and hypothesis are in a different language. The numbers in the diagonal correspond to the main results reported in Table 2. Our approach obtains strong results when combining different languages. We do not have evidence that distant languages perform considerably worse. Instead, the combined performance seems mostly bounded by the accuracy of the language that performs worst when used alone. For instance, Greek–Russian achieves very similar results to Bulgarian–Russian, two Slavic languages. Similarly, combing French with Chinese, two totally different languages, is only 1.5 points worse than French–Spanish, two very close languages.
Table 8:
XNLI test accuracies for our approach when the premise and hypothesis are in different languages.
Hypothesis en ar bg de el es fr hi ru sw th tr ur vi zh avg Premise en 73.9 70.0 72.0 72.8 71.6 72.2 72.2 65.9 71.4 61.5 67.6 69.7 61.0 70.7 70.3 69.5 ar 70.5 71.4 71.1 70.1 69.6 70.6 70.0 64.9 69.9 60.1 67.1 68.2 60.6 69.5 70.1 68.2 bg 72.7 71.1 74.2 72.3 71.7 72.1 72.7 65.5 71.7 60.8 69.0 69.8 61.2 70.5 70.5 69.7 de 72.0 69.6 71.8 72.6 70.9 71.7 71.5 65.2 70.8 60.5 68.1 69.1 60.5 70.0 70.7 69.0 el 73.0 70.1 72.0 72.4 72.8 71.5 71.9 65.2 71.7 61.0 68.1 69.5 61.0 70.2 70.4 69.4 es 73.3 70.4 72.4 72.7 71.5 72.9 72.2 65.0 71.2 61.5 68.1 69.8 60.5 70.4 70.4 69.5 fr 73.2 70.4 72.2 72.5 71.1 72.1 71.9 65.9 71.3 61.4 68.1 70.0 60.9 70.9 70.4 69.5 hi 66.7 66.0 66.7 67.2 65.4 66.1 65.6 65.5 66.5 58.9 63.8 65.9 59.5 65.6 66.0 65.0 ru 71.3 70.0 72.3 71.4 70.5 71.2 71.3 64.4 72.1 60.8 67.9 68.7 60.5 69.9 70.1 68.8 sw 65.7 64.5 65.7 65.0 65.1 65.2 64.5 61.5 64.9 62.2 63.3 64.5 58.2 65.0 65.1 64.0 th 70.5 69.2 71.4 70.1 69.6 70.2 69.6 65.2 70.2 62.1 69.2 67.7 60.9 70.0 69.6 68.4 tr 70.6 69.1 70.4 70.3 69.6 70.6 69.8 64.0 69.1 61.3 67.3 69.7 60.6 69.8 69.0 68.1 ur 65.5 64.8 65.3 65.9 65.3 65.7 64.8 62.1 65.3 58.2 63.2 64.1 61.0 64.3 65.0 64.0 vi 71.7 69.7 72.2 71.1 70.7 71.3 70.5 65.4 71.0 61.3 69.0 69.3 60.6 72.0 70.3 69.1 zh 71.6 69.9 71.7 71.1 70.1 71.2 70.8 64.1 70.9 60.5 68.6 68.9 60.3 69.8 71.4 68.7 avg 70.8 69.1 70.8 70.5 69.7 70.3 70.0 64.7 69.8 60.8 67.2 68.3 60.5 69.2 69.3 68.1
Hypothesis en ar bg de el es fr hi ru sw th tr ur vi zh avg Premise en 73.9 70.0 72.0 72.8 71.6 72.2 72.2 65.9 71.4 61.5 67.6 69.7 61.0 70.7 70.3 69.5 ar 70.5 71.4 71.1 70.1 69.6 70.6 70.0 64.9 69.9 60.1 67.1 68.2 60.6 69.5 70.1 68.2 bg 72.7 71.1 74.2 72.3 71.7 72.1 72.7 65.5 71.7 60.8 69.0 69.8 61.2 70.5 70.5 69.7 de 72.0 69.6 71.8 72.6 70.9 71.7 71.5 65.2 70.8 60.5 68.1 69.1 60.5 70.0 70.7 69.0 el 73.0 70.1 72.0 72.4 72.8 71.5 71.9 65.2 71.7 61.0 68.1 69.5 61.0 70.2 70.4 69.4 es 73.3 70.4 72.4 72.7 71.5 72.9 72.2 65.0 71.2 61.5 68.1 69.8 60.5 70.4 70.4 69.5 fr 73.2 70.4 72.2 72.5 71.1 72.1 71.9 65.9 71.3 61.4 68.1 70.0 60.9 70.9 70.4 69.5 hi 66.7 66.0 66.7 67.2 65.4 66.1 65.6 65.5 66.5 58.9 63.8 65.9 59.5 65.6 66.0 65.0 ru 71.3 70.0 72.3 71.4 70.5 71.2 71.3 64.4 72.1 60.8 67.9 68.7 60.5 69.9 70.1 68.8 sw 65.7 64.5 65.7 65.0 65.1 65.2 64.5 61.5 64.9 62.2 63.3 64.5 58.2 65.0 65.1 64.0 th 70.5 69.2 71.4 70.1 69.6 70.2 69.6 65.2 70.2 62.1 69.2 67.7 60.9 70.0 69.6 68.4 tr 70.6 69.1 70.4 70.3 69.6 70.6 69.8 64.0 69.1 61.3 67.3 69.7 60.6 69.8 69.0 68.1 ur 65.5 64.8 65.3 65.9 65.3 65.7 64.8 62.1 65.3 58.2 63.2 64.1 61.0 64.3 65.0 64.0 vi 71.7 69.7 72.2 71.1 70.7 71.3 70.5 65.4 71.0 61.3 69.0 69.3 60.6 72.0 70.3 69.1 zh 71.6 69.9 71.7 71.1 70.1 71.2 70.8 64.1 70.9 60.5 68.6 68.9 60.3 69.8 71.4 68.7 avg 70.8 69.1 70.8 70.5 69.7 70.3 70.0 64.7 69.8 60.8 67.2 68.3 60.5 69.2 69.3 68.1
## C Tatoeba: Data Set
Tatoeba13 is an open collection of English sentences and high-quality translations into more than 300 languages. The number of available translations is updated every Saturday. We downloaded the snapshot on November 19, 2018, and performed the following processing: 1) removal of sentences containing “@” or “http”, as emails and web addresses are not language-specific; 2) removal of sentences with fewer than three words, as they usually have little semantic information; 3) removal of sentences that appear multiple times, either in the source or the target.
After filtering, we created test sets of up to 1,000 aligned sentences with English. This amount is available for 72 languages. Limiting the number of sentences to 500, we increase the coverage to 86 languages, and 112 languages with 100 parallel sentences. It should be stressed that, in general, the English sentences are not the same for different languages, so error rates are not directly comparable across languages.
## D Tatoeba: Result Analysis
In this section, we provide some analysis on the results given in Table 1. We have 48 languages with an error rate below 10% and 55 with less than 20%, respectively (English included). The languages with less than 20% error belong to 20 different families and use 12 different scripts, and include 6 languages for which we have only small amounts of bitexts (less than 400k), namely, Esperanto, Galician, Hindi, Interlingua, Malayam, and Marathi, which presumably benefit from the joint training with other related languages.
Overall, we observe low similarity error rates on the Indo-Aryan languages, namely, Hindi, Bengali, Marathi, and Urdu. The performance on Berber languages (“ber” and “kab”) is remarkable, although we have fewer than 100k sentences to train them. This is a typical example of languages that are spoken by several millions of people, but for which the amount of written resources is very limited. It is quite unlikely that we would be able to train a good sentence embedding with language specific corpora only, showing the benefit of joint training on many languages.
Only 15 languages have similarity error rates above 50%. Four of them are low-resource languages with their own script and which are alone in their family (Amharic, Armenian, Khmer, and Georgian), making it difficult to benefit from joint training. In any case, it is still remarkable that a language like Khmer performs much better than random with only 625 training examples. There are also several Turkic languages (Kazakh, Tatar, Uighur, and Uzbek) and Celtic languages (Breton and Cornish) with high error rates. We plan to further investigate its cause and possible solutions in the future.
## E Tatoeba: Results for Unseen Languages
We extend our experiments to 29 languages without any training data (see Table 9). Many of them are recognized minority languages spoken in specific regions (e.g., Asturian, Faroese, Frisian, Kashubian, North Moluccan Malay, Piemontese, Swabian, or Sorbian). All share some similarities, at various degrees, with other major languages that we cover, but also differ by their own grammar or specific vocabulary. This enables our encoder to perform reasonably well, even if it did not see these languages during training.
Table 9:
Performance on the Tatoeba test set for languages for which we have no training data.
ang arq arz ast awa ceb ch csb cy dsb fo fy gd gsw hsb en→xx err. 58.96 58.62 31.24 12.60 63.20 81.67 64.23 54.55 89.74 48.64 28.24 46.24 95.66 52.99 42.44 xx→en err. 65.67 62.46 31.03 14.96 64.50 87.00 77.37 58.89 93.04 55.32 28.63 50.29 96.98 58.12 48.65 test sent. 134 911 477 127 231 600 137 253 575 479 262 173 829 117 483 jv max mn nn nov orv pam pms swg tk tzl war xh yi en→xx err. 73.66 48.24 89.55 13.40 33.07 68.26 93.10 50.86 50.00 75.37 54.81 84.20 90.85 93.28 xx→en err. 80.49 50.00 94.09 10.00 35.02 75.45 95.00 49.90 58.04 83.25 55.77 88.60 92.25 95.40 test sent. 205 284 440 1000 257 835 1000 525 112 203 104 1000 142 848
ang arq arz ast awa ceb ch csb cy dsb fo fy gd gsw hsb en→xx err. 58.96 58.62 31.24 12.60 63.20 81.67 64.23 54.55 89.74 48.64 28.24 46.24 95.66 52.99 42.44 xx→en err. 65.67 62.46 31.03 14.96 64.50 87.00 77.37 58.89 93.04 55.32 28.63 50.29 96.98 58.12 48.65 test sent. 134 911 477 127 231 600 137 253 575 479 262 173 829 117 483 jv max mn nn nov orv pam pms swg tk tzl war xh yi en→xx err. 73.66 48.24 89.55 13.40 33.07 68.26 93.10 50.86 50.00 75.37 54.81 84.20 90.85 93.28 xx→en err. 80.49 50.00 94.09 10.00 35.02 75.45 95.00 49.90 58.04 83.25 55.77 88.60 92.25 95.40 test sent. 205 284 440 1000 257 835 1000 525 112 203 104 1000 142 848
## Notes
1
For instance, there is not always a one-to-one correspondence among words in different languages (e.g., a single word of a morphologically complex language might correspond to several words of a morphologically simple language), so having a separate vector for each word might not transfer as well across languages.
2
Note that, if we had a single target language, the only way to train the encoder for that language would be auto-encoding, which we observe to work poorly. Having two target languages avoids this problem.
4
Note that it is not necessary that all input languages are systematically aligned with both target languages. Once we have several languages with both alignments, the joint embedding is well conditioned, and we can add more languages with one alignment only, usually English.
9
Note that the multilingual variant of BERT is not discussed in its paper (Devlin et al., 2019). Instead, the reported results were extracted from the README of the official GitHub project at https://github.com/google-research/bert/blob/master/multilingual.md on July 5, 2019.
10
Concurrent to our work, Lample and Conneau (2019) report superior results using another variant of BERT, outperforming our method by 4.5 points in average. However, note that these results are not fully comparable because 1) their system uses development data in the foreign languages, whereas our approach is fully zero-shot, 2) their approach requires fine-tuning on the task, 3) our system handles a much larger number of languages, and 4) our transfer performance is substantially better (an average loss of 4 vs 10.6 points with respect to the respective English system).
11
We consider the average of en$→$xx and xx$→$en
## References
Waleed
Ammar
,
George
Mulcaire
,
Yulia
Tsvetkov
,
Guillaume
Lample
,
Chris
Dyer
, and
Noah A.
Smith
.
2016
.
Massively multilingual word embeddings
.
CoRR
,
abs/1602.01925
.
Mikel
Artetxe
,
Gorka
Labaka
, and
Eneko
Agirre
.
2018a
.
Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations
. In
Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence
, pages
5012
5019
.
Mikel
Artetxe
,
Gorka
Labaka
, and
Eneko
Agirre
.
2018b
.
A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings
. In
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
, pages
789
798
,
Melbourne
.
Mikel
Artetxe
,
Gorka
Labaka
, and
Eneko
Agirre
.
2018c
.
Unsupervised statistical machine translation
. In
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
, pages
3632
3642
,
Brussels
.
Mikel
Artetxe
and
Holger
Schwenk
.
2018
.
Margin-based parallel corpus mining with multilingual sentence embeddings
.
CoRR
,
abs/1811.01136
.
Andoni
Azpeitia
,
Thierry
Etchegoyhen
, and
Eva
Martínez Garcia
.
2017
.
Weighted set-theoretic alignment of comparable sentences
. In
Proceedings of the 10th Workshop on Building and Using Comparable Corpora
, pages
41
45
,
Vancouver
.
Andoni
Azpeitia
,
Thierry
Etchegoyhen
, and
Eva
Martínez Garcia
.
2018
.
Extracting parallel sentences from comparable corpora with STACC variants
. In
Proceedings of the 11th Workshop on Building and Using Comparable Corpora
.
Houda
Bouamor
and
Hassan
.
2018
.
H2@BUCC18: Parallel sentence extraction from comparable corpora using multilingual sentence embeddings
. In
Proceedings of the 11th Workshop on Building and Using Comparable Corpora
.
Samuel R.
Bowman
,
Gabor
Angeli
,
Christopher
Potts
, and
Christopher D.
Manning
.
2015
.
A large annotated corpus for learning natural language inference
. In
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
, pages
632
642
,
Lisbon
.
Daniel
Cer
,
Yinfei
Yang
,
Sheng-yi
Kong
,
Nan
Hua
,
Nicole
Limtiaco
,
Rhomni
St. John
,
Noah
Constant
,
Mario
Guajardo-Cespedes
,
Steve
Yuan
,
Chris
Tar
,
Brian
Strope
, and
Ray
Kurzweil
.
2018
.
Universal sentence encoder for English
. In
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
, pages
169
174
,
Brussels
.
Derek F.
Wong
,
Chongman
Leong
, and
Lidia S.
Chao
.
2018
.
UM-pAligner: Neural network-based parallel sentence identification model
. In
Proceedings of the 11th Workshop on Building and Using Comparable Corpora
.
Alexis
Conneau
,
Douwe
Kiela
,
Holger
Schwenk
,
Loïc
Barrault
, and
Antoine
Bordes
.
2017
.
Supervised learning of universal sentence representations from natural language inference data
. In
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
, pages
670
680
,
Copenhagen
.
Alexis
Conneau
,
Guillaume
Lample
,
Marc’Aurelio
Ranzato
,
Ludovic
Denoyer
, and
Herv
Jgou
.
2018a
.
Word translation without parallel data
. In
Proceedings of the 6th International Conference on Learning Representations (ICLR 2018)
.
Alexis
Conneau
,
Ruty
Rinott
,
Guillaume
Lample
,
Williams
,
Samuel
Bowman
,
Holger
Schwenk
, and
Veselin
Stoyanov
.
2018b
.
XNLI: Evaluating cross-lingual sentence representations
. In
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
, pages
2475
2485
,
Brussels
.
Andrew M.
Dai
and
Quoc V.
Le
.
2015
.
Semi-supervised sequence learning
. In
Advances in Neural Information Processing Systems 28
, pages
3079
3087
.
Jacob
Devlin
,
Ming-Wei
Chang
,
Kenton
Lee
, and
Kristina
Toutanova
.
2019
.
BERT: Pre-training of deep bidirectional transformers for language understanding
. In
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
, pages
4171
4186
,
Minneapolis, MN
.
Philipp
Dufter
,
Mengjie
Zhao
,
Martin
Schmitt
,
Alexander
Fraser
, and
Hinrich
Schütze
.
2018
.
Embedding learning through multilingual concept induction
. In
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
, pages
1520
1530
,
Melbourne
.
Sergey
Edunov
,
Myle
Ott
,
Michael
Auli
, and
David
Grangier
.
2018
.
Understanding back-translation at scale
. In
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
, pages
489
500
,
Brussels
.
Cristina
España-Bonet
,
Ádám Csaba
Varga
,
Alberto
Barrn-Cedeo
, and
Josef van
Genabith
.
2017
.
An empirical analysis of NMT-derived interlingual embeddings and their use in parallel sentence identification
.
IEEE Journal of Selected Topics in Signal Processing
,
1340
1348
.
Stephan
Gouws
,
Yoshua
Bengio
, and
Greg
.
2015
.
BilBOWA: Fast bilingual distributed representations without word alignments
. In
Proceedings of the 32nd International Conference on Machine Learning
,
volume 37
of Proceedings of Machine Learning Research, pages
748
756
,
Lille
.
Francis
Grégoire
and
Philippe
Langlais
.
2017
.
BUCC 2017 shared task: A first attempt toward a deep learning framework for identifying parallel sentences in comparable corpora
. In
Proceedings of the 10th Workshop on Building and Using Comparable Corpora
, pages
46
50
,
Vancouver
.
Mandy
Guo
,
Qinlan
Shen
,
Yinfei
Yang
,
Heming
Ge
,
Daniel
Cer
,
Gustavo
Hernandez Abrego
,
Keith
Stevens
,
Noah
Constant
,
Yun-Hsuan
Sung
,
Brian
Strope
, and
Ray
Kurzweil
.
2018
.
Effective parallel corpus mining using bilingual sentence embeddings
. In
Proceedings of the Third Conference on Machine Translation: Research Papers
, pages
165
176
,
Belgium
.
Hany
Hassan
,
Anthony
Aue
,
Chang
Chen
,
Vishal
Chowdhary
,
Jonathan
Clark
,
Christian
Federmann
,
Xuedong
Huang
,
Marcin
Junczys-Dowmunt
,
William
Lewis
,
Mu
Li
,
Shujie
Liu
,
Tie-Yan
Liu
,
Renqian
Luo
,
Arul
Menezes
,
Tao
Qin
,
Frank
Seide
,
Xu
Tan
,
Fei
Tian
,
Lijun
Wu
,
Shuangzhi
Wu
,
Yingce
Xia
,
Dongdong
Zhang
,
Zhirui
Zhang
, and
Ming
Zhou
.
2018
.
Achieving human parity on automatic Chinese to English news translation
.
CoRR
,
abs/1803.05567
.
Jeremy
Howard
and
Sebastian
Ruder
.
2018
.
Universal language model fine-tuning for text classification
. In
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
, pages
328
339
,
Melbourne
.
Melvin
Johnson
,
Mike
Schuster
,
Quoc V.
Le
,
Maxim
Krikun
,
Yonghui
Wu
,
Zhifeng
Chen
,
Nikhil
Thorat
,
Fernanda
Viégas
,
Martin
Wattenberg
,
Greg
,
Macduff
Hughes
, and
Jeffrey
Dean
.
2017
.
Google’s multilingual neural machine translation system: Enabling zero-shot translation
.
Transactions of the Association for Computational Linguistics
,
5
:
339
351
.
Ryan
Kiros
,
Yukun
Zhu
,
Ruslan R.
Salakhutdinov
,
Richard
Zemel
,
Raquel
Urtasun
,
Antonio
Torralba
, and
Sanja
Fidler
.
2015
.
Skip-thought vectors
. In
Advances in Neural Information Processing Systems 28
, pages
3294
3302
.
Alexandre
Klementiev
,
Ivan
Titov
, and
Binod
Bhattarai
.
2012
.
Inducing crosslingual distributed representations of words
. In
Proceedings of COLING 2012
, pages
1459
1474
,
Mumbai
.
Guillaume
Lample
and
Alexis
Conneau
.
2019
.
Cross-lingual language model pretraining
.
CoRR
,
abs/1901.07291
.
Guillaume
Lample
,
Myle
Ott
,
Alexis
Conneau
,
Ludovic
Denoyer
, and
Marc’Aurelio
Ranzato
.
2018
.
Phrase-based and neural unsupervised machine translation
. In
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
, pages
5039
5049
,
Brussels
.
Quoc
Le
and
Tomas
Mikolov
.
2014
.
Distributed representations of sentences and documents
. In
Proceedings of the 31st International Conference on Machine Learning
,
volume 32
of Proceedings of Machine Learning Research, pages
1188
1196
,
Bejing
.
David D.
Lewis
,
Yiming
Yang
,
Tony G.
Rose
, and
Fan
Li
.
2004
.
RCV1: A new benchmark collection for text categorization research
.
Journal of Machine Learning Research
,
5
:
361
397
.
Thang
Luong
,
Hieu
Pham
, and
Christopher D.
Manning
.
2015
.
Bilingual word representations with monolingual quality in mind
. In
Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing
, pages
151
159
,
Denver, CO
.
Chaitanya
Malaviya
,
Graham
Neubig
, and
Patrick
Littell
.
2017
.
Learning language representations for typology prediction
. In
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
, pages
2529
2535
,
Copenhagen
.
Tomas
Mikolov
,
Quoc V.
Le
, and
Ilya
Sutskever
.
2013a
.
Exploiting similarities among languages for machine translation
.
CoRR
,
abs/1309.4168
.
Tomas
Mikolov
,
Ilya
Sutskever
,
Kai
Chen
,
Greg S.
, and
Jeff
Dean
.
2013b
.
Distributed representations of words and phrases and their compositionality
. In
Advances in Neural Information Processing Systems 26
, pages
3111
3119
.
Graham
Neubig
and
Junjie
Hu
.
2018
.
Rapid adaptation of neural machine translation to new languages
. In
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
, pages
875
880
,
Brussels
.
Jeffrey
Pennington
,
Richard
Socher
, and
Christopher
Manning
.
2014
.
GloVe: Global vectors for word representation
. In
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)
, pages
1532
1543
,
Doha
.
Matthew
Peters
,
Mark
Neumann
,
Mohit
Iyyer
,
Matt
Gardner
,
Christopher
Clark
,
Kenton
Lee
, and
Luke
Zettlemoyer
.
2018
.
Deep contextualized word representations
. In
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
, pages
2227
2237
,
New Orleans, LA
.
Sebastian
Ruder
,
Ivan
Vulic
, and
Anders
Søgaard
.
2017
.
A survey of cross-lingual embedding models
.
CoRR
,
abs/1706.04902
.
Holger
Schwenk
.
2018
.
Filtering and mining parallel data in a joint multilingual space
. In
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
, pages
228
234
,
Melbourne
.
Holger
Schwenk
and
Matthijs
Douze
.
2017
.
Learning joint multilingual sentence representations with neural machine translation
. In
Proceedings of the 2nd Workshop on Representation Learning for NLP
, pages
157
167
,
Vancouver
.
Holger
Schwenk
and
Xian
Li
.
2018
.
A corpus for multilingual document classification in eight languages
. In
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)
,
Miyazaki
.
Rico
Sennrich
,
Barry
, and
Alexandra
Birch
.
2016
.
Improving neural machine translation models with monolingual data
. In
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
, pages
86
96
,
Berlin
.
Sandeep
Subramanian
,
Trischler
,
Yoshua
Bengio
, and
Christopher J.
Pal
.
2018
.
Learning general purpose distributed sentence representations via large scale multi-task learning
. In
Proceedings of the 6th International Conference on Learning Representations (ICLR 2018)
.
Jörg
Tiedemann
.
2012
.
Parallel data, tools and interfaces in OPUS
. In
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012)
, pages
2214
2218
,
Istanbul
.
Ashish
Vaswani
,
Noam
Shazeer
,
Niki
Parmar
,
Jakob
Uszkoreit
,
Llion
Jones
,
Aidan N.
Gomez
,
Łukasz
Kaiser
, and
Illia
Polosukhin
.
2017
.
Attention is all you need
. In
Advances in Neural Information Processing Systems 30
, pages
5998
6008
.
Alex
Wang
,
Amanpreet
Singh
,
Julian
Michael
,
Felix
Hill
,
Omer
Levy
, and
Samuel
Bowman
.
2018
.
GLUE: A multi-task benchmark and analysis platform for natural language understanding
. In
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
, pages
353
355
,
Brussels
.
Williams
,
Nikita
Nangia
, and
Samuel
Bowman
.
2018
.
A broad-coverage challenge corpus for sentence understanding through inference
. In
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
, pages
1112
1122
,
New Orleans, LA
.
Katherine
Yu
,
Haoran
Li
, and
Barlas
Oguz
.
2018
.
Multilingual seq2seq training with similarity loss for cross-lingual document classification
. In
Proceedings of The Third Workshop on Representation Learning for NLP
, pages
175
179
,
Melbourne
.
Zheng
Zhang
and
Pierre
Zweigenbaum
.
2017
.
zNLP: Identifying parallel sentences in Chinese-English comparable corpora
. In
Proceedings of the 10th Workshop on Building and Using Comparable Corpora
, pages
51
55
,
Vancouver
.
Pierre
Zweigenbaum
,
Serge
Sharoff
, and
Reinhard
Rapp
.
2017
.
Overview of the second BUCC shared task: Spotting parallel sentences in comparable corpora
. In
Proceedings of the 10th Workshop on Building and Using Comparable Corpora
, pages
60
67
,
Vancouver
.
Pierre
Zweigenbaum
,
Serge
Sharoff
, and
Reinhard
Rapp
.
2018
.
Overview of the third BUCC shared task: Spotting parallel sentences in comparable corpora
. In
Proceedings of the 11th Workshop on Building and Using Comparable Corpora
.
## Author notes
*
This work was performed during an internship at Facebook AI Research.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode
|
2021-05-18 11:27:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32032859325408936, "perplexity": 1642.0968988132627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989819.92/warc/CC-MAIN-20210518094809-20210518124809-00283.warc.gz"}
|
https://www.nature.com/articles/s41598-018-35307-5?error=cookies_not_supported&code=fbde722b-ce14-4c83-900e-1b80edf06ec3
|
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
Coordinated Turning Behaviour of Loitering Honeybees
Subjects
An Author Correction to this article was published on 24 May 2019
Abstract
Turning during flight is a complex behaviour that requires coordination to ensure that the resulting centrifugal force is never large enough to disrupt the intended turning trajectory. The centrifugal force during a turn increases with the curvature (sharpness) of the turn, as well as the speed of flight. Consequently, sharp turns would require lower flight speeds, in order to limit the centrifugal force to a manageable level and prevent unwanted sideslips. We have video-filmed honeybees flying near a hive entrance when the entrance is temporarily blocked. A 3D reconstruction and analysis of the flight trajectories executed during this loitering behaviour reveals that sharper turns are indeed executed at lower speeds. During a turn, the flight speed is matched to the curvature, moment to moment, in such a way as to maintain the centrifugal force at an approximately constant, low level of about 30% of the body weight, irrespective of the instantaneous speed or curvature of the turn. This ensures that turns are well coordinated, with few or no sideslips - as it is evident from analysis of other properties of the flight trajectories.
Introduction
It is a common experience that driving too fast around a corner can cause a car to skid or roll over; a passenger standing in a bus can tip over if the bus makes a turn at high speed; or an aircraft attempting to make a very tight turn can experience a sideslip. The reason is that the act of turning while simultaneously moving forward creates a centrifugal force that is directed away from the centre of curvature of the turn. Newtonian mechanics dictates that the magnitude of the centrifugal force is proportional to (a) the curvature (the reciprocal of the radius of the turn), and (b) the square of the speed1. Hence, the sharper the turn, and the higher the speed, the greater the danger of losing control. Clearly, therefore, it makes sense to reduce one’s speed before commencing a turn, and to ensure that sharper turns are executed at a slower speed, in order to limit the centrifugal force to a safe and manageable value. This behaviour is adopted not only by car drivers, motorcyclists, bicyclists and runners, but also by several terrestrial and flying animals. Qualitative evidence to support such behaviour has been documented in race horses2, quolls3, houseflies4, fruitflies5, and bats6. However, a quantitative analysis of the relationship between flight speed and curvature, and the implications for the resulting centrifugal force that is experienced during turns, has not yet been explored in any animal.
Fruitflies (Drosophila) flying in a contained environment display segments of straight flight, interspersed with saccadic turns7,8. These turns are executed by performing a pitch and a roll of the body axis, which together induce a rapid change in the direction of flight. Visually evoked escape maneuvers of fruit flies also include sharp turns9, which are much faster than the stereotyped body saccades. While these turns enable rapid, aggressive changes of flight direction, they are inevitably associated with sideslips arising from the high centrifugal force. It is of interest to enquire whether flying insects are also capable of performing turns that are coordinated in such a way as to prevent sideslips - for example, during loitering flight. In our study, we induce bees to loiter in front of a beehive by blocking the entrance to the hive, which causes returning foragers to cruise in the vicinity of the hive entrance while they await entry. The behaviour of the bees in this ‘bee cloud’ is filmed using stereo video cameras and reconstructed in 3D to analyse their turning characteristics. The results reveal that loitering bees perform turns that are fully coordinated, and free of sideslips.
Materials and Methods
A non-captive honeybee colony (Apis Mellifera) was maintained on a semi-outdoor terrace on the rooftop of a building on the campus of the University of Queensland (St. Lucia). The bees were allowed to forage freely from the surrounding vegetation, without any restrictions. The experiment was commenced by temporarily blocking the hive entrance with a wooden strip (Fig. 1a). The returning foraging bees were thus temporarily denied entry into the hive but flew near the vicinity of the hive entrance, making multiple attempts to gain access. The resulting ‘bee cloud’ was filmed using two synchronized digital cameras (Redlake), configured to obtain stereo data. The cameras recorded video at 60 fps with 500 × 500 pixel resolution.
Before commencing the experiment, stereo camera calibration was performed to obtain the cameras’ intrinsic and extrinsic parameters. The video streams acquired by the two cameras were subsequently analysed by digitising the bee’s head and tail positions manually in each frame, to obtain the bee’s position coordinates in each view. A triangulation routine was executed to obtain the three-dimensional positional coordinates of each bee. The 3D coordinates of a bee were computable only when it was within the FOV of both cameras. The recording duration was 5.8 seconds (349 frames). The frames in the video footage carried varying numbers of bees, as individual bees entered or departed from the fields of view (FOV) of the two cameras. The method used to compute the kinematic parameters of the flight were based on vector calculus1. The magnitudes of the tangential and normal accelerations at any time instant ‘t’ can be computed as
$${a}_{T}=\frac{r^{\prime} ({\boldsymbol{t}}).r^{\prime\prime} ({\boldsymbol{t}})}{\Vert r^{\prime} ({\boldsymbol{t}})\Vert }$$
(1)
$${a}_{N}=\frac{\Vert z^{\prime} ({\boldsymbol{t}})\times r^{\prime\prime} ({\boldsymbol{t}})\Vert }{\Vert r^{\prime} ({\boldsymbol{t}})\Vert }$$
(2)
where $${a}_{T}\,\triangleq$$ tangential acceleration (TA) magnitude; $${a}_{N}\,\triangleq$$ normal or centripetal acceleration (CA) magnitude; $${\boldsymbol{r}}({\boldsymbol{t}})\,\triangleq$$ position vector; $$r^{\prime} ({\boldsymbol{t}})\,\triangleq$$ velocity vector and $$r^{\prime\prime} ({\boldsymbol{t}})\,\triangleq$$ total acceleration vector as function of time.
Mathematically, the curvature can be expressed as the rate of change of the unit tangent vector at a point. Using this concept, one can compute the magnitude of the curvature as function of time using the following vector algebra:
$$curvature\,k(t)=\frac{\Vert {\boldsymbol{r}}^{\prime} ({\boldsymbol{t}})\times {\boldsymbol{r}}^{\prime\prime} ({\boldsymbol{t}})\Vert }{{\Vert {\boldsymbol{r}}^{\prime} ({\boldsymbol{t}})\Vert }^{3}}$$
(3)
The radius of curvature (ρ) is expressed as the reciprocal of the curvature:
$$\rho (t)=\frac{1}{k(t)}$$
(4)
The raw data was pre-processed as follows: (i) A 5-point moving average filter was used to smooth the 3D position data; (ii) A central differencing method was used to compute a bias-free estimate of the gradient of the position vectors to compute the velocity vector, and subsequently the acceleration vector. No further filtering of these vectors was found to be necessary.
Results
The bee cloud data contains 3D position coordinates of a total of 66 bees. Figure 1a shows a perspective view of the bee cloud at a particular instant of time. Figure 1b shows the reconstructed 3D trajectories of the 66 bees, where each colour represents the trajectory of an individual bee. We used techniques of vector calculus to examine the flight characteristics of bees maneuvering in the cloud, by computing the following parameters:
1. a)
The speed of each bee in the cloud, and its variation as a function of time;
2. b)
The acceleration of each bee in the cloud, and its tangential and centripetal components, and the variation of these parameters as a function of time;
3. c)
The curvature and radius of curvature (ROC) of the flight trajectory, and its variation with time.
A flight segment illustrating the successive head positions and body orientations of a bee (Bee 2) during a turn is shown in Fig. 1c. This figure includes vector representations of the acceleration, and of its tangential and centripetal components. The variation of each of the above parameters as a function of time is shown in Fig. 2 for a longer turning segment from a different bee (Bee 57).
General relationship between instantaneous speed, curvature and centripetal acceleration
In general, the speed of a bee varies continuously through its flight path, as shown in Fig. 2a for an individual bee. The mean speed of this particular bee is 0.68 m/s, measured over its entire flight. Certain bees exhibited high speeds, despite flying in close proximity to other bees. For example, one individual reached a top speed of 2.61 m/s, while flying amidst 33 other bees in the cloud.
The average speed, measured over all bees in the cloud, was found to be 0.66 m/sec and the curvature of the trajectories displayed an average magnitude of 18 m−1. Average histograms of the speed and the curvature magnitudes of the trajectories of all 66 bees are shown in Fig. 3a,b, respectively. These histograms were obtained by computing an area-normalized histogram for each bee, and then averaging the results across the 66 bees.
The averaged histograms reveal a large variation in speed (ranging from 0 to 2.61 m/s; Fig. 3a), as well as curvature magnitude (ranging from 0 to ~300 m−1; Fig. 3b). However, these histograms include the variations across all bees, which have different mean speeds and mean curvature magnitudes.
A more representative measure of the average variability of speed and curvature within the trajectory of an individual bee is conveyed by the coefficient of variation (CV). This displays a value of 0.32 for speed, and 1.5 for curvature magnitude, when computed separately for each bee, and then averaged across all the bees.
Next, we calculated the tangential and centripetal components of the acceleration and plotted their variation as a function of time (Fig. 2c,d). The normalised histograms of tangential acceleration and the magnitude of the centripetal acceleration are shown in Fig. 3c,d respectively. The histogram of tangential acceleration clearly reveals that the flight contains acceleration and deceleration components, distributed approximately symmetrically about a value of zero (which corresponds to a constant tangential speed). The mean tangential acceleration, averaged across all bees, is 0.42 m/s2 (Fig. 3c), which is not significantly different from zero (p = 0.07; two tailed t-test). The mean standard deviation of the tangential acceleration is 2.0 m/s2. For many bees, the mean value of the tangential acceleration measured over the entire flight is very close to zero. Consequently, the CV of the tangential acceleration can become very large, approaching infinity, and not provide a useful measure of the variability of the tangential acceleration. A more useful measure is the CV of the magnitude of the tangential acceleration, which has a mean value of 1.86 m/s2, and a mean CV of 0.75, when computed separately for each bee, and then averaged across all bees. The relatively high CV value is likely due to the large variations in the magnitude of the tangential acceleration, which is maximal at the beginning and end of a turn, and zero near the middle.
The magnitude of the centripetal acceleration, averaged over each bee’s entire flight, and across all bees, has a mean value of 2.80 m/s2 (Fig. 3d), which is significantly different from zero (p = 2.6 × 10−25; two tailed t-test). This is obviously consistent with the fact that the bees are not always flying in straight lines, and that turns constitute a significant proportion of their flight trajectories. The mean CV of the magnitude of the centripetal acceleration is 0.51. Interestingly, the relatively low CV of the centripetal acceleration, compared to the CV for the tangential acceleration, raises the possibility that the centripetal acceleration may be regulated or restricted to particular limits while executing turns. This is explored in greater detail in the following section.
Analysis of turning flights
We began by computing the overall mean speed of all bees during straight and turning segments.
Points along the flight trajectories at which the curvature magnitude was greater than 250 were not included in the plots, because such measurements could be dominated by the errors in the image digitization process. Points at which the curvature magnitude was lower than 5 were considered to represent flight in an approximately straight line. These curvature limits were used to select the turning parts of the flight trajectory, and exclude segments that corresponded to straight flight or very sharp turns.
The mean speed was 0.83 m/s (s.d. = 0.15 m/s) during straight flights and 0.49 m/s (s.d. = 0.12 m/s) during turning flights. These speeds are significantly different (p = 2.88 × 10−08; paired sample t-test), indicating that the bees fly at a significantly slower speed when they are executing turns.
We were interested to examine how the variables of speed, centripetal acceleration, tangential acceleration, and curvature, discussed in the previous section, vary during turning segments. By imposing a curvature threshold of 5 m−1–250 m−1 (ROC equivalent of 0.004 m–0.20 m) on the curvature data, we were able to extract the turning segments from the complete flight trajectory. We then estimated the temporal variation of curvature, speed, centripetal acceleration and tangential acceleration during these turning segments. Examples of this analysis for 3 different bees are shown in Fig. 4. These flights were recorded at 335fps in order to visualise the variations of the turning parameters with a higher temporal resolution. In each case, the magnitude of the curvature (dark curve, upper right-hand panels in Fig. 4a–c) is low at the beginning of the turn, reaches a maximum value at the middle of the turn, and then declines toward zero as the turn is completed. The flight speed, on the other hand, (magenta curve, upper right-hand panels in Fig. 4a–c) varies in the opposite sense. Each bee gradually decreases its speed while entering the turn, reaching a minimum speed close to the point of maximum curvature, and then subsequently speeds up. This behaviour is confirmed by the plots in the lower right-hand panels in Fig. 4a–c, which show that the tangential acceleration (blue curve) is negative during the first half of the turn and positive during the second half, indicating that the bee is decelerating while entering the turn and accelerating while exiting it. Thus, the tangential component of acceleration varies dramatically during the flight - even changing its polarity halfway through the turn.
On the other hand, the magnitude of the centripetal component of the acceleration is more or less constant throughout the turn, as illustrated by the red curve in the lower right-hand panels of Fig. 4a–c.
The CVs of the centripetal acceleration maintained by these three bees displayed relatively low values of 0.16, 0.13 and 0.09, respectively, as shown in Fig. 4, indicating that the centripetal acceleration remains more or less constant (relative to its mean value) during the turn. On the other hand, the variations of the curvature (CV = 0.48, 0.37, 0.23) and speed (CV = 0.18, 0.25, 0.11) are higher.
The relative constancy of the centripetal acceleration throughout the course of the turn suggests that bees may be orchestrating turns by varying the flight speed and the curvature, moment for moment, in such a way that the centripetal acceleration is held constant during the course of the turn.
Do bees really hold the CA constant during turns? To test this hypothesis, we examined the predictions of a constant-CA model as follows. We may write the centripetal acceleration as:
$$centripetal\,acceleration=\frac{{v}^{2}}{\rho }$$
(5)
where ‘ρ’ is the instantaneous radius of curvature of the trajectory, and ‘v’ is the instantaneous bee speed.
If the centripetal acceleration is constant, we have
$$\frac{\,{v}^{2}}{\rho }=constant$$
(6)
Therefore,
$${v}^{2}\propto \rho \,{\rm{or}}\,{v}^{2}\propto \frac{1}{k}$$
(7)
where k is the curvature of the trajectory, which is a measure of the sharpness of the turn.
If the bees are holding their centripetal acceleration constant (as hypothesised), then either of the following two (equivalent) predictions must hold:
1. (a)
a linear relationship between the radius of curvature and speed2;
2. (b)
an inverse relationship between curvature and speed2.
To test the hypothesis, we examined the variation of speed2 with the radius of curvature (ROC) of the trajectory for individual bees. We plotted the variation of speed2 versus ROC for the three example bees illustrated in Fig. 4, which maintained their centripetal acceleration more or less a constant value. These relationships are shown as scatterplots in Fig. 5. This data is plotted for a ROC range of 0.004 m−0.20 m, which corresponds to a curvature magnitude range of 5 m−1– 250 m−1, as explained at the beginning of this Section. As a result of this windowing process, the trajectories of five bees were removed from the total of 66 bee trajectories. Unless explicitly stated, the number of bees included in all of our subsequent analyses is 61.
For the three scatterplots of speed2 versus ROC in Fig. 5, we performed regression analysis on the data by forcing the ‘Y’ intercept to be zero and estimating the slope of the regression line. We used the ‘robust regression’ routine in Matlab, which removes outliers in the data. The correlation efficient values (r) computed for the plots in Fig. 5a–c indicate that the regression lines fits the data very well, demonstrating a strong positive, linear correlation between speed2 and ROC, as per our prediction. This suggests that each bee is indeed holding the centripetal acceleration constant during the course of its turn. Additional examples from other bees (57 bees in total) are given in Section I (Fig. S1) of the Supplementary Material (SM). The mean value of the slope of the linear regression, computed individually for each of 57 bees, is 2.78 (see Table S1, SM). The correlation coefficients of these linear regressions are consistently high, displaying a mean value of 0.81, with over 85% of the values exceeding 0.70 (see Table S1, SM).
The relationship between speed2 and ROC for all the bees is illustrated in the scatterplot of Fig. 6a. Each colour in the scatterplot represents a different bee. The relatively high degree of variation in this scatterplot is due to the fact that, although each bee tends to show a strong linear correlation between speed2 and ROC, the slope of this relationship varies from individual to individual, as can be seen from the plots for individual bees (see Section I of the SM). The overall slope of a linear regression, performed on the data in Fig. 6a, is 2.17. This implies that the magnitude of the centripetal acceleration during a turn, averaged over all the bees, is approximately 2.17 m/s2.
To further test our hypothesis, we plotted the log-log relationship between speed2 and radius of curvature (ROC). As per our hypothesis, if there exists a linear relationship between speed2 and ROC, then the relationship between log(speed2) and log(ROC) should be linear, with a slope of 1.0.
Figure 6b shows the relationship between log(speed2) and log(ROC), plotted as a scattergram for the data pooled from the 61 bees. This relationship is approximately linear, with a slope of 0.92. This value is close to the value of 1.0 predicted by the hypothesis. The Y-axis intercept of the regression line shown in Fig. 6b is 0.62, from which the average centripetal acceleration can be calculated to be e0.62 = 1.86 m/s2. This is similar to the value of 2.17 m/s2 estimated from the slope of the regression of the data in Fig. 6a, the slight difference arising probably because the scatterplot in Fig. 6a is transformed nonlinearly to obtain the scatterplots of Fig. 6b.
Our hypothesis, namely, that bees hold the centrifugal acceleration constant during turns, predicts that at each point in the turn the instantaneous radius of curvature, ρ, should be proportional to the square of the instantaneous speed, v. Another way to test this hypothesis critically would be to examine whether ρ is indeed proportional to the square of v – or to the cube of v, for example, or some other integer or fractional power of v. To do this test, we express equation (5) in a more general form as:
$${v}^{n}=c\ast \rho$$
(8)
where n is the power of v and c is the constant of proportionality
Taking logarithms on both sides,
$$\mathrm{log}({v}^{n})=\,\mathrm{log}(c\ast \rho )$$
which can be written
$$n\,\mathrm{log}(v)=\,\mathrm{log}(c)+\,\mathrm{log}(\rho )$$
(9)
or
$$\mathrm{log}\,v=\frac{\mathrm{log}(c)}{n}+\frac{\mathrm{log}(\rho )}{n}$$
(10)
Thus, the slope of the regression between log(v) and log(ρ), equal to $$\frac{1}{n}$$, would allow us to estimate the appropriate value of the power. The y-axis intercept of the regression, equal to $$\frac{\mathrm{log}(c)}{n}$$, would then enable us to estimate the value of c, the coefficient of proportionality.
The scatter plot of log(v) vs log(ρ) is shown in the SM (Fig. S2). The slope of the regression line is 0.46, yielding a value of 2.2 for n. This is very close to the predicted value of 2, additionally supporting our hypothesis of constant centripetal acceleration. The y-axis intercept of the regression line is 0.31. Using n = 2.2, we obtain $$\mathrm{log}({\rm{c}})=2.2\times 0.31=0.68$$, from which we estimate the value of c to be 1.97 m/s2. This value is similar to those obtained from evaluating the slope of the regressions of the data in Fig. 6a,b (see above).
Thus, broadly speaking, the data of Figs 46 and S2 support the hypothesis that bees maintain a more or less constant centripetal acceleration of approximately 2 m/s2 during their turns, irrespective of the instantaneous speed or curvature at each point along the turn.
Another way to test our prediction - that the bees are holding their CA constant during their turns - is to re-express equation 6 in terms of the speed, heading rate and CA of the bee as follows:
$$\alpha =v\omega$$
(11)
or, equivalently,
$$v=\alpha \,{\omega }^{-1}$$
(12)
where: α is the centripetal acceleration in m/s2, ω is the heading rate of the bee in rad/s, v is the flight speed of the bee in m/s.
According to equation (12), the heading rate should vary inversely with the speed, if the centripetal acceleration is held constant during the turn. In other words, one would then expect a linear relationship between speed and the reciprocal of the heading rate (heading rate−1). The slope of this relationship should represent the magnitude of the centripetal acceleration (α). These predictions are analysed and discussed in detail in Section-1 of the SM, under the subheading “Relationship between heading rate and speed”. The interpretation of the results of this analysis is discussed here briefly.
The time course of the variation of the speed and the heading rate are shown for 6 different bees in the left-hand panels of Fig. S3 of the SM. In each case, the heading rate (magenta curve) varies inversely with the speed of the bee (dark curve). This implies that when the bee’s heading rate goes up, its speed decreases in such a way that the variation in CA is small. This observation is supported by the low values of the coefficient of variation of the CA in the six examples (0.14 ± 0.04).
We also verified the constant-CA prediction in a more direct way by plotting the relationship between the instantaneous speed and the reciprocal of the instantaneous heading rate as a scattergram for the 6 examples (Fig. S3, right-hand panels) and performed a linear regression analysis on the data. For each example, the correlation coefficient (r) is greater than 85%, demonstrating a strong positive and linear correlation between the speed and the reciprocal of the heading rate, as per the prediction in equation 12. These findings reinforce our hypothesis that bees hold the centripetal acceleration more or less constant during turns, thereby facilitating coordinated turns.
Table 1 compares the coefficients of variation (CV) of the variables that characterise the trajectories. We observe that, although the CV of the curvature is relatively high, signifying relatively large variations in curvature magnitude, the CV of the centripetal acceleration magnitude is relatively low.
This is because the bees are tailoring the flight speed to the curvature in such a way that a potential increase in CA arising from an increase in curvature during the turn is compensated by reducing the speed, and vice-versa, so that the centripetal acceleration is maintained at a more or less constant value through the course of the turn. Thus, the variations in centripetal acceleration during a turn are always low, despite considerable variations in the instantaneous curvature and the speed of the bee. This is evidenced by the relatively low value of CV for the centripetal acceleration, compared to the CVs for the curvature and speed2 for all of the 61 bees (see Table 1). In quantitative terms, the centripetal acceleration depends upon the product of (speed2) and (curvature), in which both terms display relatively high coefficients of variation (0.54 for speed2 and 0.81 for curvature, see Table 1). Despite these high variations, the CV of the product is comparatively low (0.40, Table 1), indicating that changes in the curvature are compensated by changes in speed that are of the appropriate direction and magnitude to ensure that the product (the centripetal acceleration) is held at a more or less constant value.
Loitering versus close encounter flights
The above analysis includes flight trajectories in which bees make obligatory turns to avoid collisions with other bees, as well as ‘voluntary’ turns while they are loitering in the vicinity of the hive entrance. These can be broadly classified as ‘close encounter’ turns and ‘loitering’ turns. We were interested to compare the characteristics of the two types of turns – one might, for example, expect close encounter turns to be more aggressive, featuring tighter turns and perhaps larger CAs. We distinguished between loitering turns (LTs) and close encounter turns (CETs) by using the following criterion. A bee’s turn was considered to be a LT when there was no other bee within a radius of 100 mm, and a CET when another bee was within a radius of 30 mm. Using this criterion, we classified the turns of 34 bees. In total, there were 77 LTs and 68 CETs. The number of turns executed by each bee is given in Section III (Table S3) of the SM. A comparison of the characteristics of the LTs and the CETs is shown in Table 2 (also see SM Table S2 for data from each individual bee). There are no statistically significant differences between the minimum speed during the turn, the maximum curvature of the turn, or the mean centripetal acceleration during the turn. Thus, LTs and CETs display very similar characteristics.
Comparison of characteristics of left and right turns
We were interested to examine whether the bees showed any preferences for turning direction. If the bee’s rotation about its dorsoventral axis (Zn) is in the clockwise direction, then the bee turns to the right, and vice versa. In order to determine the turning direction, we computed the 3D rotation vector, which is given by the cross product between the unit velocity vector and the unit centripetal acceleration vector (see Section IV of the SM for explanation). The bee’s turning direction is then obtained by taking the dot product of the 3D rotation vector with the unit vector representing the dorsoventral axis of the bee. If the dot product is positive, the bee is turning right; and vice versa.
This procedure was used to classify the turning direction, and then to compare the curvatures and centripetal accelerations during left turns with those during right turns. The histogram in Fig. 7a compares the distributions of the curvatures of right turns with those of left turns. Positive curvatures represent right turns, and negative curvatures left turns. The histogram is nearly symmetrical. The mean curvature magnitudes during left (−19.3 m−1) and right (17.2 m−1) turns are more or less equal and not significantly different (p = 0.230; two tailed t-test). The overall mean curvature for all turns (−0.97 m−1) is very close to zero and is not significantly different from zero (p = 0.277; two tailed t-test). This indicates that turns in either direction are (a) equally likely, and (b) display the same distribution of curvature magnitudes. Thus, the bees flying in our experimental situation do not display any noticeable left-right biases in their turning behaviour.
We also looked for possible biases in the centripetal accelerations associated with left versus right turns. Figure 7b shows a histogram of the distribution of centripetal acceleration for all bees. The peak value of the centripetal acceleration is slightly higher for left turns than for right turns. Apart from this, the histograms for the left and right turns are nearly symmetrical - the mean centripetal acceleration for right turns (+2.5 m/s2) is not significantly different from that for the left turns (−2.7 m/s2; p = 0.460, two tailed t-test). The overall mean centripetal acceleration for all turns (−0.28 m/s2) is very close to zero and is not significantly different from zero (p = 0.683; two tailed t-test). Thus, as with the curvature magnitudes, there is no major overall bias in the distribution of the centripetal accelerations.
Body deviation angle analysis of turning bees
From the analysis presented above, we have hypothesized that bees keep their centripetal acceleration almost constant during turns. This strategy might help them perform coordinated turns, without deviating from the intended flight trajectory. Accordingly, we were interested to look for evidence of sideslip. This was done by examining the body deviation angle (BD angle) during turns. We define the BD angle as the angle, measured in the horizontal plane, between the instantaneous flight direction vector and the instantaneous bee’s body orientation vector. This angle is zero when the body axis is aligned with the flight direction. Its polarity is defined to be negative when the body axis points into the turn, and positive when the body axis points away from the turn.
We commenced the analysis by calculating the BD angles and plotting their histograms during left turns, right turns and straight flight. “Straight flights” were defined to be sections of the trajectory in which the curvature magnitude was lower than a threshold of 5.0, and turning flights were sections in which the curvature magnitude exceeded 5.0, with the polarity of the curvature defining the direction of the turn. The results are shown in Fig. 8, where each histogram has been fitted to a Gaussian distribution.
The mean and standard deviation of the body deviation angle after correction for estimated errors in the measurement of the direction of body orientation and flight direction from the video images, are given in Section V of the SM. The results (see SM Table S4) reveal that the BD histograms for left turns, right turns and straight flight display a mean value close to zero, but a broad standard deviation of about 50 deg. This implies that, although the body orientation can occasionally deviate substantially from the direction of flight, the deviations are more or less symmetrical, with roughly half of the deviations pointing into the turn and the other half pointing outward. This is true for all three conditions - left turns, right turns, and even in straight flight. This suggests that the observed BDs are not a reflection of uncontrolled turns that involve sideslips; rather, they are a natural characteristic of the loitering bees, in which the body does not point consistently in the flight direction. Sideslips, if present, would be reflected in the left and right-turn histograms by an increased frequency of negative BD angles (body pointing into the turn) - which is not the case. Instances where the magnitude of the BD angle exceeds 90 deg represent situations where the bee is moving temporarily backwards. In this condition, the BD angles are again positive or negative equally frequently.
To further explore the existence of sideslips, we computed the mean value of the magnitude of the centripetal acceleration in each bin of the body deviation angle histograms of Fig. 8. The results, shown in Fig. 9, indicate that the magnitude of the centripetal acceleration is more or less constant, independent of body deviation angle. This is true for right turns, left turns, and straight flights. BD angles greater than 90 deg are not included in the histograms of Fig. 9, since those instances are not turning flights, but rather situations in which the bee is moving temporarily backwards.
The mean value of the CA magnitude, computed from the histograms of Fig. 9, are −2.80 m/s2 for left turns, 2.57 m/s2 for right turns, and 1.70 m/s2 for close-to-straight flights. Secondly, the observation that the CA magnitudes are similar even at large negative and positive values of BD angle (i.e. irrespective of whether the body is pointing sharply into or away from the turn), makes it very unlikely that the large negative values of BD angles (when the body is pointing sharply into the turn) are associated with uncontrolled sideslips or skids. In summary, the data in Figs 8 and 9 and Table S4 suggest that the bees flying in the cloud are never overcome by the centrifugal forces that are encountered while executing these turns, which would result in uncontrolled sideslips.
Discussion
We have investigated the turning flight characteristics of loitering honeybees in a semi-outdoor environment comprising a number of bees flying in close proximity to each other, trying to enter a blocked hive. We commenced our analysis by studying how the kinematics of bees vary in a cloud. In general, the speed of the bee varies continuously through its flight path.
Bomphrey et al.10 measured the characteristics of the flight envelope of freely flying blowflies (Calliphora vicinia) in an ingeniously designed ‘corner cube’ arena, which enabled them to film and reconstruct the flies’ 3D trajectories using a single video camera. Their results, compared to ours, indicate that blowflies display flight manoeuvres that are generally more aggressive than those of honeybees in similar conditions, featuring higher mean tangential and centripetal accelerations, but shallower turns. However, their study did not explore the relationships between these variables during turning behaviour.
Our study, which focuses on turning behaviour, shows that the flight speed tends to decrease whilst entering a turn, and increase whilst exiting it. This general pattern of speed variation has been documented in a number of other aerial and terrestrial animal species, for example fruit flies5, bats6, horses2 and northern quolls3. However, none of these studies have quantitatively examined the relationship between speed and turning radius. Our study does this and finds that, during the course of a turn, flight speed varies with curvature in such a way that the centrifugal force is maintained at a more or less constant value, irrespective of the moment-to-moment variations in speed and curvature.
Our results also provide an estimate of this centrifugal force. The histogram of Fig. 7b indicates that the mean centripetal acceleration is −2.69 m/s2 during left turns, and 2.52 m/s2 during right turns. This is in good agreement with the data from Fig. 3d (2.80 m/s2), and from Fig. 9a,b, which indicate mean centripetal accelerations of −2.80 m/s2 for left turns, and 2.57 m/s2 for right turns. It is also in good agreement with the mean value of 2.78 m/s2 obtained from the individual slopes of the speed2 vs ROC regressions for 57 bees (Table S1, SM). All of these numbers are consistently slightly higher than those inferred from the analyses of the scatterplots of Fig. 6a (2.17 m/s2), Fig. 6b (1.86 m/s2) and Fig. S2 (1.97 m/s2). We believe that the reason for this slight discrepancy is that, in the scatterplots, data from the bees were pooled without accounting for the flight duration of each bee, which would mean that bees that flew longer trajectories would have made a greater contribution to the estimated parameters. Therefore, it is likely that the values obtained from Figs 3d, 7b, 9a,b and Table S1 are most representative of the true mean magnitude of the centripetal acceleration. The grand mean of these mean values is 2.69 m/s2, which is about 27% of the acceleration due to gravity (9.81 m/s2). This means that the average centrifugal force experienced by a turning bee is approximately 30% of the bee’s weight, which we propose is low enough to permit coordinated turns without incurring unwanted sideslips. Orchestrating turns in this way would ensure that the insect is never overcome by the centrifugal force during the turn, and always maintains the intended (curved) trajectory.
To probe our hypothesis further, we examined whether bees undergo sideslips during turns. If a bee is unable to resist the centrifugal force that it experiences during a turn, we would expect its body to point into the turn – analogous to a car that skids out of control while making a sharp turn. Our findings (Fig. 8) indicate that there is no systematic bias in the body deviation angle that is correlated with the direction of the turn – in other words, there is no evidence that the body points preferentially into the turn. Moreover, the finding that the width of the body deviation histogram is approximately the same for left turns, right turns and nearly straight flights (see SM Table S4), suggests that the variations in the body deviation angle are a normal feature of honeybee flight in these experimental conditions, and do not reflect sideslips. Additional evidence for the lack of sideslips comes from the plots of centripetal acceleration versus body deviation angle (Fig. 9), which reveal that the centripetal acceleration is roughly constant – it does not vary with the body deviation angle. If sideslips were to occur, one would expect large body deviations into the turn (negative body deviation angles) to be associated with larger centripetal accelerations. This is clearly not the case – there is no correlation between the body deviation angle and the centripetal acceleration (or, equivalently, centrifugal force) – which, again, suggests that the observed variation in the body deviation angles is not due to the presence of uncontrolled turns. Our data of course includes several instances of turning bees in which the axis of the body is not aligned with the flight direction – as is evidenced by the broad histograms in Fig. 8. However, such flight segments, where the bee’s translational motion contains a lateral component, are likely to be controlled lateral motions, rather than uncontrolled sideslips resulting from a capitulation to the centrifugal force.
Our observation that turning bees hold the centripetal acceleration constant is further supported by the finding that the magnitude of this acceleration is practically the same during loitering turns and turns that involve a close encounter with another bee. Thus, it appears bees flying in a cloud display the same turning dynamics, regardless of the context in which the turn occurs. Further investigation is currently under way to explore the nature of the sensory information that is used to guide collision avoidance manoeuvres during these close encounter turns.
Finally, our study also indicates that, under our experimental conditions, left and right turns display similar characteristics, when the data are pooled across the bees that were investigated. Thus, it appears that, as a whole, the group of bees flying in our bee cloud does not exhibit a preferred turning direction (left or right) -although we cannot rule out the possibility that individual bees have turning biases, which would be a topic for future investigation. On the other hand, army ants, fish11 and bats12 rotate in a particular direction displaying a collective turning behaviour that could promote collision avoidance. In summary, our study documents a turning strategy that is used by honeybees to execute controlled, side-slip free turns while they are in a loitering mode of flight in a bee cloud. It would be interesting to examine whether this strategy also applies to flight in other conditions.
Data Availability
The datasets generated during the current study are available from the corresponding author on reasonable request.
Change history
• 24 May 2019
A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper.
References
1. 1.
Anton, H. Calculus - A new horizon. (John Wiley & Sons, 1999).
2. 2.
Tan, H. & Wilson, A. M. Grip and limb force limits to turning performance in competition horses. Proc. Biol. Sci. 278, 2105–2111 (2011).
3. 3.
Wynn, M. L., Clemente, C., Nasir, A. F. A. A. & Wilson, R. S. Running faster causes disaster: trade-offs between speed, manoeuvrability and motor control when running around corners in northern quolls (Dasyurus hallucatus). J. Exp. Biol. 218, 433–439 (2015).
4. 4.
Wagner, H. Flight performance and visual control of flight of the free flying Housefly. I. Organization of the flight motor. Phil. Trans. R. Soc. 312, 527–551 (1986).
5. 5.
Mronz, M. & Lehmann, F.-O. The free-flight response of Drosophila to motion of the visual environment. J. Exp. Biol. 211, 2026–2045 (2008).
6. 6.
Aldridge, H. D. Turning flight of bats. J. Exp. Biol. 128, 419–425 (1987).
7. 7.
Fry, S. N. The aerodynamics of free-flight maneuvers in Drosophila. Science. 300, 495–498 (2003).
8. 8.
Muijres, F. T., Elzinga, M. J., Iwasaki, N. A. & Dickinson, M. H. Body saccades of Drosophila consist of stereotyped banked turns. J. Exp. Biol. 218, 864–875 (2015).
9. 9.
Muijres, F. T., Elzinga, M. J., Melis, J. M. & Dickinson, M. H. Flies evade looming targets by executing rapid visually directed banked turns. Science. 344, 172–177 (2014).
10. 10.
Bomphrey, R. J., Walker, S. M. & Taylor, G. K. The typical flight performance of blowflies: Measuring the normal performance envelope of Calliphora vicina using a novel corner-cube arena. PLoS One 4, e7852 (2009).
11. 11.
Vicsek, T. & Zafeiris, A. Collective motion. Phys. Rep. 517, 71–140 (2012).
12. 12.
Ren, J., Wang, X., Jin, X. & Manocha, D. Simulating flying insects using dynamics and data-driven noise modeling to generate diverse collective behaviors. PLoS One 11, e0155698 (2016).
Acknowledgements
We thank the anonymous reviewers for their helpful and constructive comments. We thank Prof. Thomas Stace from the Department of Physics, University of Queensland, for providing valuable suggestions for analysis of the data. Our special thanks to Mr. Peter Anderson, for the care of the bee colony used in this experiment. This study was supported by a UQ International student scholarship and a Boeing top-up scholarship awarded to M.M., and Australian Discovery Research Grant DP 140100914 to M.V.S.
Author information
Authors
Contributions
M.M. and M.V.S. designed the experiments. M.M. analysed the data. M.M. and M.V.S. wrote the manuscript.
Corresponding author
Correspondence to Mandyam V. Srinivasan.
Ethics declarations
Competing Interests
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Reprints and Permissions
Mahadeeswara, M.Y., Srinivasan, M.V. Coordinated Turning Behaviour of Loitering Honeybees. Sci Rep 8, 16942 (2018). https://doi.org/10.1038/s41598-018-35307-5
• Accepted:
• Published:
Keywords
• Flight Trajectory
• Flight Speed
• Instantaneous Speed
• Centripetal Acceleration (CA)
• Tangential Acceleration (TA)
• Markov network versus recurrent neural network in forming herd behavior based on sight and simple sound communication
• Urszula Markowska-Kaczmar
• & Tomasz Marcinkowski
Applied Soft Computing (2020)
• Two pursuit strategies for a single sensorimotor control task in blowfly
• Leandre Varennes
• , Holger G. Krapp
• & Stephane Viollet
Scientific Reports (2020)
• A novel setup for 3D chasing behavior analysis in free flying flies
• Léandre P. Varennes
• , Holger G. Krapp
• & Stéphane Viollet
Journal of Neuroscience Methods (2019)
|
2021-04-16 00:06:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7108580470085144, "perplexity": 1068.4127917918229}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088264.43/warc/CC-MAIN-20210415222106-20210416012106-00415.warc.gz"}
|
https://physics.stackexchange.com/questions/426666/what-happens-if-a-neutron-flies-towards-a-nucleus
|
# What happens if a neutron flies towards a nucleus?
Rutherford experiment shows that alpha-particles when they fly towards metal foil sometimes (in minority of cases) can bounce. An explanation proposed was that atoms in fact have positively charged nuclea and majority of space is covered by fields of negative charge caused by electrons. Indeed, these fields must have much smaller charge density so that they (almost) don't affect alpha particles.
However, according to this explanation a neutron when it flies towards a nucleus should not bounce because of electromagnetic force. This is because neutron is an uncharged particle. Gravity is too weak to have any significant effect between neutron and nucleus so they don't merge. Weak forcd probably also is too weak. Therefore we can only consider strong force.
So, when a neutron flies towards a nucleus with high speed, what happens? Does strong force come into effect? Or does it just pass through a nucleus?
• When neutrons travel inside a material, they will undergo scattering (elastic and inelastic) as well as other reactions, while interacting with the nuclei via the strong, nuclear force. Given a beam of neutron with intensity I0, when traveling through matter it will interact with the nuclei with a probability given by the total cross section σT ....can see-ocw.mit.edu/courses/nuclear-engineering/… – drvrm Sep 4 '18 at 11:11
• Did you know this happens in an atomic bomb? The chain reaction consists of free neutrons being absorbed by nuclei, which then fission and emit further neutrons. – Mitchell Porter Sep 4 '18 at 11:16
• @MitchellPorter, so, they are absorbed making some unstable (more and more unstable with each next neutron) atoms which then almost simultaneously decays causing their massive emission? – user168013 Sep 4 '18 at 11:35
• Both the strong and weak force can cause the neutron to interact with the neucleus. Also despite being neutral, the neutron has a nonzero magnetic moment though it's quite small and is more a consequence of being composed of quarks. – Triatticus Sep 4 '18 at 14:23
• Some on-site searches that might help a bit: physics.stackexchange.com/search?q=neutron+scattering and physics.stackexchange.com/search?q=neutron+capture. – dmckee Sep 5 '18 at 14:20
The place to go for neutron scattering data is The Evaluated Nuclear Data Files site, hosted in the US at Brookhaven National Laboratory. There one can get data for a wide range of neutron scattering possibilities, including the cross section vs energy.
Since fission was mentioned in the comments, lets look at U-235. Entering that target nucleus, asking for all neutron reactions, and requesting the cross section (sig = $\sigma$) looks like:
One gets back a long list of possible reactions starting with:
Line one is the total neutron cross section vs energy. Line 2 is the elastic scattering component, and line 3 is the inelastic component. Line 7 is the fission cross section. Things like line 8 are an inelastic neutron scattering through a particular level. Further down one find the (n,p), (n,$\alpha$), and other similar reactions.
Selecting a few of the boxes and hitting 'Plot' up above results in:
So, that is how you find out what neutrons will do.
• The ENDFs are the source for details, but I'm not sure they are a good source for someone who needs a "big picture" outline of neutron-matter interactions. Not that it is an easy subject to give a summary of, of course. – dmckee Sep 5 '18 at 14:16
• @dmckee - of course, but it is probably the fastest way to find "all" the known interactions. Plus I figure (assuming the Q isn't closed) that I can point to this answer in the future, given the number of past questions that were answerable with a look at ENDF (or ENSDF)... – Jon Custer Sep 5 '18 at 14:20
|
2019-08-18 21:08:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6832709312438965, "perplexity": 652.4705350911145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314130.7/warc/CC-MAIN-20190818205919-20190818231919-00458.warc.gz"}
|
https://nbi.ku.dk/english/research/theoretical-particle-physics-and-cosmology/calendar/2017/het-seminar-andrea-addazi/
|
Abstract: We revisit possible glimpses of black-hole formation by looking at ultra-planckian string-string collisions at very high final-state multiplicity. We compare, in particular, previous results using the optical theorem, the resummation of ladder diagrams at arbitrary loop order, and the AGK cutting rules, with the more recent study of $2\rightarrow N$ scattering at $N \sim sM_{Pl}^{-2}>>1$. We argue that some apparent tension between the two approaches disappears once a reinterpretation of the latter’s results in terms of suitably defined infrared-safe cross sections is adopted. Under that assumption, the typical final state produced in an ultra-planckian collision does indeed appear to share some properties with those expected from the evaporation of a black hole of mass $\sqrt{s}$, although no sign of thermalization is seen to emerge at this level of approximation.
|
2023-03-28 11:13:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8689998388290405, "perplexity": 717.5457872008824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948858.7/warc/CC-MAIN-20230328104523-20230328134523-00509.warc.gz"}
|
https://www.electricalexams.co/electric-line-switch-is-connected/
|
# In an electric line, a switch is connected to which of the following wire?
In an electric line, a switch is connected to which of the following wire?
|
2022-01-25 14:09:13
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9176839590072632, "perplexity": 636.7712044714586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304835.96/warc/CC-MAIN-20220125130117-20220125160117-00179.warc.gz"}
|
https://www.doubtnut.com/question-answer-chemistry/in-any-subshell-the-maimum-number-of-electrons-having-same-value-of-spin-quantum-number-is--642603735
|
# In any subshell, the maimum number of electrons having same value of spin quantum number is :
Updated On: 17-04-2022
Get Answer to any question, just click a photo and upload the photo
and get the answer completely free,
Text Solution
sqrt(l(l+1))l+22l+14l+2
Step by step solution by experts to help you in doubt clearance & scoring excellent marks in exams.
Transcript
hello student question is in any Sab shell the maximum number of electrons having same value of spin quantum number is this question we have to find out the maximum number of electrons in a subshell with have the same value of spin quantum number so first of all if we find out the number of orbitals in and subshell that will be equal to number of orbitals in any Sab shell that is calculated by the formula to L + 1 where l is the azimuthal quantum number so if tera number of orbitals are 12 + 1 and in one orbital
maximum two electrons can be accommodated one with opinion that is plus half and one with down spin that is minus aap or we can say one with pin in clockwise direction that is traffic and other in anticlockwise direction that is mine sunao the number of orbitals in any Sab Chala 12 plus one and in one orbital maximum two electrons can be accommodated so number of electrons in a subshell would be these many are the orbital and in one orbital two electrons total electrons would be to into 12 + 1
these are the total number of electrons now in these total number of electrons half hour present with plus half spin that is up spin and half are present with minus half spin it means half of the electron among the total electrons they have the same value of spin quantum number that is for for half equal to half + half that is for half electrons and this is also for the half electrons of the total number of electrons present in the success it mean the maximum number of electrons in any subject
victim value of spin quantum number are half of the total electrons protons electrons are 12 + 1 into two and this is half of this show the maximum electrons would be with same value of spin quantum number would be too well + so the option which is the correct answer is the option to A + but I hope you understood the solution thank you
|
2022-05-19 08:40:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5522230267524719, "perplexity": 495.8544128053829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662526009.35/warc/CC-MAIN-20220519074217-20220519104217-00588.warc.gz"}
|
https://www.askmehelpdesk.com/finance-accounting/relative-sales-price-223746.html
|
I have the basic format to figure out cost per lot for the question below but I do not see how in my book relative sales price is calculated. The best I could come up with is that total sales price is multiplied by .0001
Relative Sales Value Method
Group # of lots price per lot
1 9 $3,000 2 15$4,000
3 17 $2,400 Operating expenses for the year allocated to this project are$18,200. Lots unsold at year end were as follows:
Group1 5 lots
Group 2 7 lots
Group 3 2 lots
At fiscal year end what is net income?
I have: lots, number of lots,price per lot,total sale price but I don't see how to calculate relative sale price?
|
2018-05-28 05:29:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22486254572868347, "perplexity": 2272.011485320445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794871918.99/warc/CC-MAIN-20180528044215-20180528064215-00380.warc.gz"}
|
https://www.biostars.org/p/72020/#72037
|
Converting Solexa Reads To Fastq And Identifying Mismatched Bases
2
0
Entering edit mode
9.7 years ago
soosus ▴ 90
I would like to write a simple script or single line perl to convert the following two Solexa sequence reads (assume these reads in a text file "Solexa.export.txt") into fastq format. (Note: the quality scores are coded as ASCII offset of 64):
ILLUMINA-A93428 66 1 1 7848 1267 CGATGT 1 ATGTTGGCTGGCGGTGAAATAAATCTCAAACGTACCTGTTACAAAATTGTGTTAATCCTTTCAGATTCGCAG ggggggggggggggcfdffdgggfefffffcfcffcccacBeddddded_cffefgggggBBBBBBBBBBBB chrX.fa 122287709 R 40C21GAC7 269 Y
ILLUMINA-A93428 66 1 1 8782 1281 CGATGT 1 GACTCCGGGAAAGCAAGCTGAAGTGTTTTGTGGGACAGGCACATCATTTGGTCAACTCCTTTTCCCTGGTTG hhhhhhhhhhhhhhhhhhhhghfehhhhhhghfhf_ffffghhhhhhhhdaaghhghhhhgfeghfghhe] chrX.fa 144709339 F 72 335 Y
Also need help identifying mismatched bases in "Solexa.export.txt"
r fastq • 2.7k views
0
Entering edit mode
9.7 years ago
You might try this script to convert to SAM/BAM. Then, you can use tools like picard Sam2Fastq and samtools calmd to get the information you need.
https://github.com/samtools/samtools/blob/master/misc/export2sam.pl
0
Entering edit mode
9.7 years ago
Mitch Bekritsky ★ 1.3k
Try this:
awk '/./ {print "@"$1"\n"$9"\n+\n"$10}' Solexa.export.txt The '/./' in the awk command skips any blank lines. When I run this one-liner on your sample script, it produces the following output: @ILLUMINA-A93428 ATGTTGGCTGGCGGTGAAATAAATCTCAAACGTACCTGTTACAAAATTGTGTTAATCCTTTCAGATTCGCAG + ggggggggggggggcfdffdgggfefffffcfcffcccacBeddddded_cffefgggggBBBBBBBBBBBB @ILLUMINA-A93428 GACTCCGGGAAAGCAAGCTGAAGTGTTTTGTGGGACAGGCACATCATTTGGTCAACTCCTTTTCCCTGGTTG + hhhhhhhhhhhhhhhhhhhhghfehhhhhhghfhf_ffffghhhhhhhhdaaghhghhhhgfeghfghhe] When you say that the encoding is offset by 64, does that mean you want to change it, or is that just an FYI? Also, what do you mean that you want to identify mismatched bases? Do you just want to find reads that don't match perfectly? In that case, try this: awk '$14 !~ /^[0-9]+\$/ && /./' Solexa.export.txt
I'm assuming that the 14th field in Solexa.export.txt reports the alignment of the read to the reference genome. In that case, any entry where field 14 isn't only digits would imply that there are mismatched bases. In your second sequence, field 14 reports 72, the length of the read. In the first sequence, it reports 40C21GAC7, implying 40 matched bases, then a mismatched C, etc.
0
Entering edit mode
I mean the chromosomal position of locations where the reference base does not match the alternative base. Also, thanks for your reply!
0
Entering edit mode
Not sure about the exact math, but you can parse the 14th field and use the start position of the read to get the exact position of the mismatch in the reference genome. Not sure which field represents position in your output since it doesn't look like it's in SAM format, but you could take the start site, add 41, and that should be the position of the C mismatch. Should be a little perl script.
Happy to help!
|
2023-02-08 06:06:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2996773421764374, "perplexity": 5711.3195286175}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500719.31/warc/CC-MAIN-20230208060523-20230208090523-00618.warc.gz"}
|
http://stackoverflow.com/questions/6679388/challenge-simple-text-editing-with-vim-at-vimgolf-how-does-the-1-solution-wo
|
Challenge “Simple text editing with Vim” at VimGolf: How does the #1 solution work?
Simple text editing with Vim: http://vimgolf.com/challenges/4d1a34ccfa85f32065000004
I find it difficult to understand the #1 solution (Score 13).
Sorry, no solution is pasted in this post, because I don't know if it's appropriate to do that.
-
There is a problem with this question: nobody can even see the #1 solution (unless they're registered at vimgolf, like me). Meanwhile, why don't you TRY it step by step – sehe Jul 13 '11 at 13:09
The solution is centered on the :g command. From the help:
:g :global E147 E148
:[range]g[lobal]/{pattern}/[cmd]
Execute the Ex command [cmd] (default ":p") on the
lines within [range] where {pattern} matches.
So basically, the solution executes some ex commands on lines that have a "V", which are exactly the ones that need editing. You probably noticed that earlier solutions rely on duplicating the lines, rather than really changing them. This solution specially shows an interesting pattern:
3jYjVp3jYjVp3jYjVpZZ
^ ^ ^
Which could be reduced with a macro:
qa3jYjVpq3@aZZ
The solution using the :g command does basically the same thing. The first command executed is t.. From the help:
:t
:t Synonym for copy.
Copy the lines given by [range] to below the line
The address given was ., which means current line:
Line numbers may be specified with: :range E14 {address}
{number} an absolute line number
. the current line :.
$the last line in the file :$
% equal to 1,$(the entire file) :% 't position of mark t (lowercase) :' 'T position of mark T (uppercase); when the mark is in another file it cannot be used in a range /{pattern}[/] the next line where {pattern} matches :/ ?{pattern}[?] the previous line where {pattern} matches :? \/ the next line where the previously used search pattern matches \? the previous line where the previously used search pattern matches \& the next line where the previously used substitute pattern matches So the ex command t. means "copy current line to below the current line". Then, there is a bar which: :bar :\bar '|' can be used to separate commands, so you can give multiple commands in one line. If you want to use '|' in an argument, precede it with '\'. And the d command, which obviously deletes the line. It was given a range of +, meaning the "current line + 1". You could pass .+1 but + for short. These info can be read surrounding the help for :range: The default line specifier for most commands is the cursor position, but the commands ":write" and ":global" have the whole file (1,$) as default.
Each may be followed (several times) by '+' or '-' and an optional number.
This number is added or subtracted from the preceding line number. If the
number is omitted, 1 is used.
And that's it.
:g/V/t.|+d<CR>ZZ
On every line that has a "V", copy it down and delete the next line. Write and quit.
One thing that I didn't mention is why the :g commands are executed three times instead of 6 or even more (lines are duplicated along the process). The :g command starts positioning the cursor on line one, and goes down to \$. But if the commands change the current line, :g continues from there. So:
:g/V/
Current line is 4. Now:
t.
This moves the cursor to line 5. And then:
+d
Deletes line 6, the cursor remain in 5. So the next :g match will be in line 8.
-
+1 Nice explanation even if I can't read the problem. :) – Xavier T. Jul 13 '11 at 13:59
Thanks @Xavier T. The problem should be visible to every one, just follow the link provided in the question. – sidyll Jul 13 '11 at 14:06
You're right, I can actually see the starting/ending condition ! – Xavier T. Jul 13 '11 at 14:17
@sidyll It requires registration, and registration is in twitter. I am not ever going to do this. – ZyX Jul 13 '11 at 14:26
Another good solution is to use this: :g/#/ norm jYjVp – Magnun Leno Jul 13 '11 at 21:02
|
2016-06-30 23:27:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7992123365402222, "perplexity": 2878.39191485998}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399425.79/warc/CC-MAIN-20160624154959-00075-ip-10-164-35-72.ec2.internal.warc.gz"}
|
http://cvgmt.sns.it/paper/3365/
|
# Local spectral convergence in $RCD^*(K,N)$ spaces
created by ambrosio on 14 Mar 2017
[BibTeX]
Submitted Paper
Inserted: 14 mar 2017
Last Updated: 14 mar 2017
Year: 2017
Abstract:
In this note we give necessary and sufficient conditions for the validity of the local spectral convergence, in balls, on the $RCD^*$-setting.
|
2017-07-23 14:50:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2466598004102707, "perplexity": 2828.023951481384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424564.72/warc/CC-MAIN-20170723142634-20170723162634-00675.warc.gz"}
|
https://biomechanical.asmedigitalcollection.asme.org/tribology/article-abstract/123/2/380/464339/On-the-Thermal-Behavior-of-Giant-Magnetoresistance?redirectedFrom=fulltext
|
The magnetic/mechanical spacing between the transducer and the disk significantly decreases due to thermal expansion of pole tips at stressed high temperature and high humidity tests. The protruded pole tips and alumina overcoat can cause head/disk contacts, resulting in thermal asperities and pole tip damage. The damage at the head–disk interface due to protruded pole tips and alumina overcoat may degrade the drive mechanical performance when flying height is below 10 nm. In this study the change in pole tin recession (PTR) with temperature and current in the writer coil, are measured using an optical profiler and an atomic force microscope for heads having a stack design with single and dual layers of writer coils. The pole tips protrude above the ABS surface by 3–4 nm when the temperature of the head is raised by 50°C. Heads with a single layer of writer coils exhibit significantly lower thermal PTR than those with dual layers of coils. The ABS profiles at elevated temperature generated using the finite element modeling of the differential thermal expansion of various layers in the head stack are in close agreement with the measured profiles. The thermal PTR and alumina overcoat protrusion can be reduced by optimizing the thermal expansion coefficient of the alumina basecoat and overcoat, the height of the head stack, and by replacing alumina by $SiO2$ and SiC.
1.
Menon
,
A. K.
, and
Gupta
,
B. K.
,
1999
, “
Nanotechnology A Data Storage Perspective
,”
Datatech
,
2
, pp.
13
24
.
2.
Gupta
,
B. K.
, and
Menon
,
A. K.
,
1999
, “
Characterization of the Head-Disk Interface at Nanometer Dimensions
,”
IEEE Trans. Magn.
,
35
, pp.
764
769
.
3.
Li
,
Y.
, and
Talke
,
F.
,
1990
, “
Limitation and Correction of Optical Profilometry in Surface Characterization of Carbon Coated Magnetic Recording Disks
,”
ASME J. Tribol.
,
112
, pp.
670
677
.
4.
Smallen
,
M.
, and
Lee
,
J. J. K.
,
1993
, “
Pole Tip Recession Measurements on Thin Film Heads using Optical Profilometry with Phase Correction and Atomic Force Microscopy
,”
ASME J. Tribol.
,
115
, pp.
382
386
.
5.
Young
,
K. F.
,
1990
, “
Finite Element Analysis of Planar Stress Anisotropy and Thermal Behavior in Thin Films
,”
IBM J. Res. Dev.
,
34
, pp.
706
717
.
6.
Bhushan, B., and Gupta, B. K., 1991, Handbook of Tribology: Materials, Coatings and Surface Treatments, McGraw–Hill, New York.
|
2023-03-28 14:40:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3433385491371155, "perplexity": 7579.847301758308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948867.32/warc/CC-MAIN-20230328135732-20230328165732-00684.warc.gz"}
|
http://forum.allaboutcircuits.com/threads/rlc-problem.114505/
|
# rlc problem
Discussion in 'Homework Help' started by allOfMyDragons, Aug 16, 2015.
1. ### allOfMyDragons Thread Starter New Member
Aug 16, 2015
1
0
I have tried to solve this problem but there is a problem that i cant Find.
2. ### shteii01 AAC Fanatic!
Feb 19, 2010
3,297
482
was there a question somewhere?
3. ### WBahn Moderator
Mar 31, 2012
17,457
4,701
Just a cursory glance at your work shows several equations that are fundamentally impossible.
Consider:
$
i_1 \; = \; L \cdot di_2 \; = \; V
$
The first term is a current and the last term is a voltage and you are saying that they are equal. The middle term is neither a current nor a voltage.
And then:
$
V \; = \; \int $$i_1 \; + \; i_2$$
$
Aside from the fact that this isn't even an valid integral, you are claiming that the integral of a current yields a voltage.
Then you haven't even taken into account the current in the capacitor (which would probably be i3).
You really need to stop being so sloppy with your math.
4. ### MrAl Well-Known Member
Jun 17, 2014
2,224
437
Hello,
I cant read that dark, blurry drawing so i'll just offer a hint going by what it looks like you are after.
First, i assume that the switch was on for a long time relative to the longest time constant of the circuit, then at t=0 it is turned off (opened), and you want a solution such as the voltage across the capacitor which is the voltage across the lower RLC network.
If that is the case, then the only element that has energy in it at t=0 is the inductor so you can handle this circuit as a parallel RLC circuit with only initial energy in the inductor. So the inductor current at t=0 is the only thing that drives the solution (no other driving force at that point in time).
|
2016-10-26 21:13:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 2, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7975885272026062, "perplexity": 809.6448712048106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720973.64/warc/CC-MAIN-20161020183840-00494-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://groupprops.subwiki.org/wiki/Center_is_direct_projection-invariant
|
# Center is direct projection-invariant
Jump to: navigation, search
This article gives the statement, and possibly proof, of the fact that for any group, the subgroup obtained by applying a given subgroup-defining function (i.e., center) always satisfies a particular subgroup property (i.e., direct projection-invariant subgroup)}
View subgroup property satisfactions for subgroup-defining functions View subgroup property dissatisfactions for subgroup-defining functions
## Statement
The center of a group is a direct projection-invariant subgroup: any projection to a direct factor sends the center to within itself.
|
2014-08-30 16:11:07
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8130804896354675, "perplexity": 4400.787819852052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835505.87/warc/CC-MAIN-20140820021355-00245-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://jrogel.com/tag/computers/page/2/
|
iPython Notebook is now Jupyter… I knew it!
It is not really news… Jupyter is the new name of the loved iPython project, and it has been for a while and as they Jupiter projects puts it themselves
The language-agnostic parts of IPython are getting a new home in Project Jupyter
As announced in the python.org page, as of version 4.0, the The Big Split from the old iPython starts. I knew this and I even tweeted about it:
All, great, right? Well I still got surprised when after updating my Python installation and tried to start my ipython notebook I got an error that ended with:
File "importstring.py", line 31, in import_item
module = __import__(package, fromlist=[obj])
ImportError: No module named notebook.notebookapp
Then I remembered and to fix my problem I simply tried installing Jupyter (*I am using Anaconda) with the following command
conda install jupyter
Et voilà!
iPython Notebook is now Jupyter… I knew it!
Working with data and approaching data-based competitions
We are getting close to the end my 11-week Data Science class at General Assembly. As in the previous time, I had a whale of a time talking to people who are genuinely interested in data, analytics, science and models. Some of the projects this time have been Kaggle competitions. This has brought some advantages as the data is readily available, but other challenges do arise. It is effectively a game of whack-a-mole, right? Some times data is masked, or hashed, there my be too much of it or limited information.
In any case, the fact that you can submit your predictions and be ranked among other competitors does raise the question as to how (and more importantly why) do you gain and extra basis point in your score? In some cases this may indeed be important, but my view here is that since “all models are wrong” the truly important thing is to ask how comfortable are you with the score obtained and whether your business or application is resilient to that kind of error (think airplane safety versus ice-cream flavour choices). This discussion reminded me of a recent episode of Talking Machines, a podcast about machine learning that I recommended to you readers some time ago.
In episode 13 of Talking Machines Katherine Gorman and Ryan Adams interviewed Claudia Perlich, the Chief Data Scientist at DStillery. Claudia has won a number of competitions. She was trying to avoid talking about the subject, and I am glad the interviewers stirred the conversation that way. Her secret to winning a large number of competitions according to her is that she “finds something wrong with the data”. She explains that she likes getting intimately familiar with the data and often she comes across something that should not be there and can thus be exploited.
She talked about a particular breast cancer data model competition where they build the most predictive model “not because we understand medicine” she explained, “but because we realised that the patient identifier, which was just a random number, was by far the most predictive feature”. The story behind that dataset is that it was compiled from different data sources, some were from screening centres, others from treatment centres. As such, she explains “the base rate, i.e. the natural percentage of the population that was affected was very different and you could back this out from the patient identifier”. If they had been explicit about this, then the modelling would have been treated differently. I particularly like the fact that she highlights that these exploits are of importance in a competition environment but not in “real applications”.
When asked about her approach to finding these exploits, she explains that she looks at the data “in the screen, like the matrix, you have these things flashing down and what works very well for me is a certain expectation or intuition of what you should be seeing”. As an example Claudia mentions that things that should not be sorted and appear sorted in the dataset may be an indication of manipulation. Another example is, features that should be numeric but you see certain values that appear over and over again for no apparent reason typically means that someone for instance replaced a missing value with the average or the median. A practical tip she offers is that if a nearest-neighbour model performs better than other algorithms is an indication of potential duplicates in the dataset!
As I was explaining to one of the guys in the course, a lot of the times it is not just having tools and models at your disposal, but experience with their use and outcomes is very important too. I was glad to hear Claudia echo those thoughts! “There is no grand theory behind it, no recommended toolset” – she says. After all, she has been quoted as saying that:
There is no clean or dirty data, just data you don’t understand
Like me, Claudia dislikes when someone else cleans data on my behalf, as that creates sometimes more issues as in many cases assumptions about the data are made prior to its usage. That is not to say that you do not need to manipulate your data, but at least you know what transformations you have applied to it and the assumptions you have made.
I highly recommend that you listen to the podcast, it is a very good and informative episode. You can do so here.
Markup for Fast Data Science Publication – Reblog
I am an avid user of Markdown via Mou and R Markdown (with RStudio). The facility that the iPython Notebook offers in combining code and text to be rendered in an interactive webpage is the choice for a number of things, including the 11-week Data Science course I teach at General Assembly.
As for LaTeX, well, I could not have survived my PhD without it and I still use it heavily. I have even created some videos about how to use LaTeX, you can take a loot at them
My book “Essential Matlab and Octave” was written and formatted in its entirety using LaTeX. My new book “Data Science and Analytics with Python” is having the same treatment.
I was very pleased to see the following blog post by Benjamin Bengfort. This is a reblog of that post and the original can be found here.
Markup for Fast Data Science Publication
Benjamin Bengfort
A central lesson of science is that to understand complex issues (or even simple ones), we must try to free our minds of dogma and to guarantee the freedom to publish, to contradict, and to experiment. — Carl Sagan in Billions & Billions: Thoughts on Life and Death at the Brink of the Millennium
As data scientists, it’s easy to get bogged down in the details. We’re busy implementing Python and R code to extract valuable insights from data, train effective machine learning models, or put a distributed computation system together. Many of these tasks, especially those relating to data ingestion or wrangling, are time-consuming but are the bread and butter of the data scientist’s daily grind. What we often forget, however, is that we must not only be data engineers, but also contributors to the data science corpus of knowledge.
If a data product derives its value from data and generates more data in return, then a data scientist derives their value from previously published works and should generate more publications in return. Indeed, one of the reasons that Machine Learning has grown ubiquitous (see the many Python-tagged questions related to ML on Stack Overflow) is thanks to meticulous blog posts and tools from scientific research (e.g. Scikit-Learn) that enable the rapid implementation of a variety of algorithms. Google in particular has driven the growth of data products by publishing systems papers about their methodologies, enabling the creation of open source tools like Hadoop and Word2Vec.
By building on a firm base for both software and for modeling, we are able to achieve greater results, faster. Exploration, discussion, criticism, and experimentation all enable us to have new ideas, write better code, and implement better systems by tapping into the collective genius of a data community. Publishing is vitally important to keeping this data science gravy train on the tracks for the foreseeable future.
In academia, the phrase “publish or perish” describes the pressure to establish legitimacy through publications. Clearly, we don’t want to take our rule as authors that far, but the question remains, “How can we effectively build publishing into our workflow?” The answer is through markup languages – simple, streamlined markup that we can add to plain text documents that build into a publishing layout or format. For example, the following markup languages/platforms build into the accompanying publishable formats:
• Markdown → HTML
• iPython Notebook (JSON + Markdown) → Interactive Code
• reStructuredText + Sphinx → Python Documentation, ReadTheDocs.org
• AsciiDoc → ePub, Mobi, DocBook, PDF
• LaTeX → PDF
The great thing about markup languages is that they can be managed inline with your code workflow in the same software versioning repository. Github goes even further as to automatically render Markdown files! In this post, we’ll get you started with several markup and publication styles so that you can find what best fits into your workflow and deployment methodology.
Markdown
Markdown is the most ubiquitous of the markup languages we’ll describe in this post, and its simplicity means that it is often chosen for a variety of domains and applications, not just publishing. Markdown, originally created by John Gruber, is a text-to-HTML processor, where lightweight syntactic elements are used instead of the more heavyweight HTML tags. Markdown is intended for folks writing for the web, not designing for the web, and in some CMS systems, it is simply the way that you write, no fancy text editor required.
Markdown has seen special growth thanks to Github, which has an extended version of Markdown, usually referred to as “Github-Flavored Markdown.” This style of Markdown extends the basics of the original Markdown to include tables, syntax highlighting, and other inline formatting elements. If you create a Markdown file in Github, it is automatically rendered when viewing files on the web, and if you include a README.md in a directory, that file is rendered below the directory contents when browsing code. Github Issues are also expected to be in Markdown, further extended with tools like checkbox lists.
Markdown is used for so many applications it is difficult to name them all. Below are a select few that might prove useful to your publishing tasks.
• Jekyll allows you to create static websites that are built from posts and pages written in Markdown.
• Github Pages allows you to quickly publish Jekyll-generated static sites from a Github repository for free.
• Silvrback is a lightweight blogging platform that allows you to write in Markdown (this blog is hosted on Silvrback).
• Day One is a simple journaling app that allows you to write journal entries in Markdown.
• iPython Notebook expects Markdown to describe blocks of code.
• Stack Overflow expects questions, answers, and comments to be written in Markdown.
• MkDocs is a software documentation tool written in Markdown that can be hosted on ReadTheDocs.org.
• GitBook is a toolchain for publishing books written in Markdown to the web or as an eBook.
There are also a wide variety of editors, browser plugins, viewers, and tools available for Markdown. Both Sublime Text and Atom support Markdown and automatic preview, as well as most text editors you’ll use for coding. Mou is a desktop Markdown editor for Mac OSX and iA Writer is a distraction-free writing tool for Markdown for iOS. (Please comment your favorite tools for Windows and Android). For Chrome, extensions like Markdown Here make it easy to compose emails in Gmail via Markdown or Markdown Preview to view Markdown documents directly in the browser.
Clearly, Markdown enjoys a broad ecosystem and diverse usage. If you’re still writing HTML for anything other than templates, you’re definitely doing it wrong at this point! It’s also worth including Markdown rendering for your own projects if you have user submitted text (also great for text-processing).
Rendering Markdown can be accomplished with the Python Markdown library, usually combined with the Bleach library for sanitizing bad HTML and linkifying raw text. A simple demo of this is as follows:
First install markdown and bleach using pip:
$pip install markdown bleach Then create a markdown parsing function as follows: import bleach from markdown import markdown def htmlize(text): """ This helper method renders Markdown then uses Bleach to sanitize it as well as converting all links in text to actual anchor tags. """ text = bleach.clean(text, strip=True) # Clean the text by stripping bad HTML tags text = markdown(text) # Convert the markdown to HTML text = bleach.linkify(text) # Add links from the text and add nofollow to existing links return text Given a markdown file test.md whose contents are as follows: # My Markdown Document For more information, search on [Google](http://www.google.com). _Grocery List:_ 1. Apples 2. Bananas 3. Oranges The following code: >>> with open('test.md', 'r') as f: ... print htmlize(f.read()) Will produce the following HTML output: <h1>My Markdown Document</h1> For more information, search on <a href="http://www.google.com" rel="nofollow">Google</a>. <em>Grocery List:</em> <ol> <li>Apples</li> <li>Bananas</li> <li>Oranges</li> </ol> Hopefully this brief example has also served as a demonstration of how Markdown and other markup languages work to render much simpler text with lightweight markup constructs into a larger publishing framework. Markdown itself is most often used for web publishing, so if you need to write HTML, then this is the choice for you! To learn more about Markdown syntax, please see Markdown Basics. iPython Notebook iPython Notebook is an web-based, interactive environment that combines Python code execution, text (marked up with Markdown), mathematics, graphs, and media into a single document. The motivation for iPython Notebook was purely scientific: How do you demonstrate or present your results in a repeatable fashion where others can understand the work you’ve done? By creating an interactive environment where code, graphics, mathematical formulas, and rich text are unified and executable, iPython Notebook gives a presentation layer to otherwise unreadable or inscrutable code. Although Markdown is a big part of iPython Notebook, it deserves a special mention because of how critical it is to the data science community. iPython Notebook is interesting because it combines both the presentation layer as well as the markup layer. When run as a server, usually locally, the notebook is editable, explorable (a tree view will present multiple notebook files), and executable – any code written in Python in the notebook can be evaluated and run using an interactive kernel in the background. Math formula written in LaTeX are rendered using MathJax. To enhance the delivery and shareability of these notebooks, the NBViewer allows you to share static notebooks from a Github repository. iPython Notebook comes with most scientific distributions of Python like Anaconda or Canopy, but it is also easy to install iPython with pip: $ pip install ipython
iPython itself is an enhanced interactive Python shell or REPL that extends the basic Python REPL with many advanced features, primarily allowing for a decoupled two-process model that enables the notebook. This process model essentially runs Python as a background kernel that receives execution instructions from clients and returns responses back to them.
To start an iPython notebook execute the following command:
$ipython notebook This will start a local server at http://127.0.0.1:8888 and automatically open your default browser to it. You’ll start in the “dashboard view”, which shows all of the notebooks available in the current working directory. Here you can create new notebooks and start to edit them. Notebooks are saved as .ipynb files in the local directory, a format called “Jupyter” that is simple JSON with a specific structure for representing each cell in the notebook. The Jupyter notebook files are easily reversioned via Git and Github since they are also plain text. To learn more about iPython Notebook, please see the iPython Notebook documentation. reStructuredText reStructuredText is an easy-to-read plaintext markup syntax specifically designed for use in Python docstrings or to generate Python documentation. In fact, the reStructuredText parser is a component of Docutils, an open-source text processing system that is used by Sphinx to generate intelligent and beautiful software documentation, in particular the native Python documentation. Python software has a long history of good documentation, particularly because of the idea that batteries should come included. And documentation is a very strong battery! PyPi, the Python Package Index, ensures that third party packages provide documentation, and that the documentation can be easily hosted online through Python Hosted. Because of the ease of use and ubiquity of the tools, Python programmers are known for having very consistently documented code; sometimes it’s hard to tell the standard library from third party modules! In How to Develop Quality Python Code, I mentioned that you should use Sphinx to generate documentation for your apps and libraries in a docs directory at the top-level. Generating reStructuredText documentation in a docs directory is fairly easy: $ mkdir docs
$cd docs$ sphinx-quickstart
The quickstart utility will ask you many questions to configure your documentation. Aside from the project name, author, and version (which you have to type in yourself), the defaults are fine. However, I do like to change a few things:
...
> todo: write "todo" entries that can be shown or hidden on build (y/n) [n]: y
> coverage: checks for documentation coverage (y/n) [n]: y
...
> mathjax: include math, rendered in the browser by MathJax (y/n) [n]: y
Similar to iPython Notebook, reStructured text can render LaTeX syntax mathematical formulas. This utility will create a Makefile for you; to generate HTML documentation, simply run the following command in the docs directory:
$make html The output will be built in the folder _build/html where you can open the index.html in your browser. While hosting documentation on Python Hosted is a good choice, a better choice might be Read the Docs, a website that allows you to create, host, and browse documentation. One great part of Read the Docs is the stylesheet that they use; it’s more readable than older ones. Additionally, Read the Docs allows you to connect a Github repository so that whenever you push new code (and new documentation), it is automatically built and updated on the website. Read the Docs can even maintain different versions of documentation for different releases. Note that even if you aren’t interested in the overhead of learning reStructuredText, you should use your newly found Markdown skills to ensure that you have good documentation hosted on Read the Docs. See MkDocs for document generation in Markdown that Read the Docs will render. To learn more about reStructuredText syntax, please see the reStructuredText Primer. AsciiDoc When writing longer publications, you’ll need a more expressive tool that is just as lightweight as Markdown but able to handle constructs that go beyond simple HTML, for example cross-references, chapter compilation, or multi-document build chains. Longer publications should also move beyond the web and be renderable as an eBook (ePub or Mobi formats) or for print layout, e.g. PDF. These requirements add more overhead, but simplify workflows for larger media publication. Writing for O’Reilly, I discovered that I really enjoyed working in AsciiDoc – a lightweight markup syntax, very similar to Markdown, which renders to HTML or DocBook. DocBook is very important, because it can be post-processed into other presentation formats such as HTML, PDF, EPUB, DVI, MOBI, and more, making AsciiDoc an effective tool not only for web publishing but also print and book publishing. Most text editors have an AsciiDoc grammar for syntax highlighting, in particular sublime-asciidoc and Atom AsciiDoc Preview, which make writing AsciiDoc as easy as Markdown. AsciiDoctor is an AsciiDoc-specific toolchain for building books and websites from AsciiDoc. The project connects the various AsciiDoc tools and allows a simple command-line interface as well as preview tools. AsciiDoctor is primarily used for HTML and eBook formats, but at the time of this writing there is a PDF renderer, which is in beta. Another interesting project of O’Reilly’s is Atlas, a system for push-button publishing that manages AsciiDoc using a Git repository and wraps editorial build processes, comments, and automatic editing in a web platform. I’d be remiss not to mention GitBook which provides a similar toolchain for publishing larger books, though with Markdown. Editor’s Note: GitBook does support AsciiDoc. To learn more about AsciiDoc markup see AsciiDoc 101. LaTeX If you’ve done any graduate work in the STEM degrees then you are probably already familiar with LaTeX to write and publish articles, reports, conference and journal papers, and books. LaTeX is not a simple markup language, to say the least, but it is effective. It is able to handle almost any publishing scenario you can throw at it, including (and in particular) rendering complex mathematical formulas correctly from a text markup language. Most data scientists still use LaTeX, using MathJax or the Daum Equation Editor, if only for the math. If you’re going to be writing PDFs or reports, I can provide two primary tips for working with LaTeX. First consider cloud-based editing with Overleaf or ShareLaTeX, which allows you to collaborate and edit LaTeX documents similarly to Google Docs. Both of these systems have many of the classes and stylesheets already so that you don’t have to worry too much about the formatting, and instead just get down to writing. Additionally, they aggregate other tools like LaTeX templates and provide templates of their own for most document types. My personal favorite workflow, however, is to use the Atom editor with the LaTeX package and the LaTeX grammar. When using Atom, you get very nice Git and Github integration – perfect for collaboration on larger documents. If you have a TeX distribution installed (and you will need to do that on your local system, no matter what), then you can automatically build your documents within Atom and view them in PDF preview. A complete tutorial for learning LaTeX can be found at Text Formatting with LaTeX. Conclusion Software developers agree that testing and documentation is vital to the successful creation and deployment of applications. However, although Agile workflows are designed to ensure that documentation and testing are included in the software development lifecycle, too often testing and documentation is left to last, or forgotten. When managing a development project, team leads need to ensure that documentation and testing are part of the “definition of done.” In the same way, writing is vital to the successful creation and deployment of data products, and is similarly left to last or forgotten. Through publication of our work and ideas, we open ourselves up to criticism, an effective methodology for testing ideas and discovering new ones. Similarly, by explicitly sharing our methods, we make it easier for others to build systems rapidly, and in return, write tutorials that help us better build our systems. And if we translate scientific papers into practical guides, we help to push science along as well. Don’t get bogged down in the details of writing, however. Use simple, lightweight markup languages to include documentation alongside your projects. Collaborate with other authors and your team using version control systems, and use free tools to make your work widely available. All of this is possible becasue of lightweight markup languages, and the more profecient you are at including writing in your workflow, the easier it will be to share your ideas. Helpful Links This post is particularly link-heavy with many references to tools and languages. For reference, here are my preferred guides for each of the Markup languages discussed: Books to Read Special thanks to Rebecca Bilbro for editing and contributing to this post. Without her, this would certainly have been much less readable! As always, please follow @DistrictDataLab on Twitter and subscribe to this blog by clicking the Subscribe button on the blog home page. Benjamin Bengfort Using curl to download a shortened URL – Dropbox, bit.ly I was in the middle of an introductory workshop for Data Science at General Assembly and I was talking about using command line instructions to facilitate the manipulation of files and folders. We covered some of the usual ones such as ls, mv, mkdir, cat, more, less, etc. I was then going to demonstrate how easy it was to download a file from the command line using curl and I had prepared a small file uploaded to Dropbox and shortened its URL with bit.ly. “So far so good” – I thought – and then proceeded with the demonstration… Only to find out that the command I was using was indeed downloading a file, but it was the only downloading the wrapper html created by bit.ly for the re-directioning… I should have known better than that! Of course all this happening while various pairs of gazing eyes were upon me… I tried again using a different flag and… nothing! and again… nothing… Pressure mounting, I decided to cut the embarrassment short and apologised. Got them to download the file in the less glamorous way by using the browser… So, if you are ever in that predicament, here is the solution, use the -L flag with curl: $ curl -L -o newname.ext http://your.shortened.url
The -L deals with the redirectioning of the shortened URL and make sure that you use the -o flag to assign a new name to your file.
E voilà!
Failed Battery
I have had this 17-in MacBook Pro for a few years… perhaps about 8 years? Probably a bit more? In any case, I have it more as a memento than anything else as I have a more modern one these days. I still keep it updated and all the rest of it so I was rather surprised to get it out and see that the battery has effectively bursted!!! I hope the rest of the machine still works though :(
Enable NTFS read and write in your Mac
I was confronted with an old issue, that had not been an issue for a while: writing to an external hard drive that was formatted with Windows (NTFS) from my mac. I used to have NTFS-3G (together with MacFUSE) installed and that used to be fine. However, I guess something when a bit eerie with Mavericks as I was not able to get my old solution to work.
So, here is what I did (you will need superuser powers, so be prepared to type your password):
Open a Terminal (Terminal.app) and create a file called stab in the /etc folder. For instance you can type:
$sudo nano /etc/fstab You can now enter some information in your newly created file telling MacOS information about your device. If your external drive is called mydevice enter the following: LABEL=mydevice none ntfs rw,auto,nobrowse Use tabs between the fields listed above. Save your file and you are now ready to plug your device. There is a small caveat: Once you do this, your hard drive is not going to appear in your Desktop. But do not disappear, you can still use the terminal to access the drives mounted by going to /Volumes folder as follows: $ sudo ln -s /Volumes ~/Desktop/Volumes
et voilà!
An alternative way to reduce the size of PDF in a mac
I am sure you, like me, have had the need to reduce the file size of a PDF. Take for example the occasional need of sending a PDF by email just to find out that the size is such that the message is rejected. I have used Adobe Acrobat Pro to help, but recently I came across an alternative way of achieving this: Use Colorsync Utility in a mac. Here is how:
1. Right click the PDF that needs reducing and select “Open with…”
2. Select Colorsync Utility and wait for the application to open the file
3. At the bottom of the status bar in the application, you can now select one of the quartz filters available
4. Press “Apply”
5. and voilà
All Watched Over by Machines of Loving Grace
This is the title of the 1967 poem by Richard Brautigan and of course that is the name of the great three-part documentary by Adam Curtis. If you haven’t watched it, please do yourself a favour and take a look.
All Watched Over by Machines of Loving Grace
I’d like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.
I like to think
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.
I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal brothers and sisters,
and all watched over
by machines of loving grace.
RFU Podcast feed – one that can be used!
I have been waiting quite a bit for the RFU to make their podcasts available via iTunes or some other similar service. I used to listen to them but for one reason or another the feed changed to the extent that no submission was done to iTunes and the RSS of the RFU’s website is basically dead.
So, inspired by a post by Rolando Garza, I decided to hack an RSS that can actually be used to download the RFU podcast and get some information about Rugby. So I used a combination of Feedity which used HTML scraping to generate an RSS of almost any page. With the help op Yahoo Pipes I managed to use the magic of regular expressions to add appropriate dates and enclosures to the feed and the result is the RFU Podcast Feed.
So, as long as the RFU does not change the way they deal with their website and the posting of their mp3 content, then you can enjoy a bit of Rugby right in your mp3 devices.
|
2020-09-26 14:47:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20382525026798248, "perplexity": 2368.848331692567}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400244231.61/warc/CC-MAIN-20200926134026-20200926164026-00044.warc.gz"}
|
http://mathhelpforum.com/calculus/202536-limit-fraction-summation.html
|
# Math Help - limit of fraction with summation
1. ## limit of fraction with summation
hi,
it has been some time since i last saw calculus and I'm stuck at one excercise. The point is just to find limit of
$\frac{ln(x)}{\sum_{k=1}^n \frac{1}{k}}$
What I would do is just solving the summation as definite integral and then differentiating the fraction via l'Hospital rule. Is that an ok way to go?
2. ## Re: limit of fraction with summation
But the summation is in terms of k ,, and the numerator is in term of x !
3. ## Re: limit of fraction with summation
Originally Posted by pyjong
The point is just to find limit of
$\frac{ln(x)}{\sum_{k=1}^n \frac{1}{k}}$
What I would do is just solving the summation as definite integral and then differentiating the fraction via l'Hospital rule. Is that an ok way to go?
This question does not make sense. Where is the limit? $x\to?$ or $n\to\infty~?$
What is the exact wording of the question?
4. ## Re: limit of fraction with summation
oh .. sorry my bad, there is supposed to be n instead of x and the limit is in +inf
5. ## Re: limit of fraction with summation
Originally Posted by pyjong
oh .. sorry my bad, there is supposed to be n instead of x and the limit is in +inf
It is well known that the Harmonic series is divergent, so the denominator tends to \displaystyle \begin{align*} \infty \end{align*}, and the numerator also tends to \displaystyle \begin{align*} \infty \end{align*}. Since this goes to \displaystyle \begin{align*} \frac{\infty}{\infty} \end{align*}, you should be able to apply L'Hospital's Rule.
6. ## Re: limit of fraction with summation
Originally Posted by pyjong
oh .. sorry my bad, there is supposed to be n instead of x and the limit is in +inf
So it is $\lim _{n \to \infty }}\frac{{\ln (n)}}{{\sum\limits_{k = 1}^n {\frac{1}{k}} }}$
For the harmonic series let ${H_n} = \sum\limits_{k = 1}^n {\frac{1}{k}}$.
7. ## Re: limit of fraction with summation
There is another way to do it but it requires a piece of knowledge. The Euler-Mascheroni constant is defined as
$\gamma = \lim_{n\to \infty} (H_n - \ln n)$
where $H_n$ is the $n$-th Harmonic number. Now we have
$\lim_{n\to \infty} \frac{\ln n}{H_n} =\lim_{n\to \infty} \frac{\ln n - H_n + H_n}{H_n} = \lim_{n\to \infty} \frac{\frac{\ln n - H_n}{H_n} + 1}{1}$
What's the next step? If you want to see, click "show"
Spoiler:
Now, as $n\to \infty$ the numerator in the inner fraction tends to $-\gamma$ and the denominator tends to $+\infty$. Therefore, the limit is
$\lim_{n\to \infty} \frac{\ln n}{H_n} = 1$
|
2014-03-17 10:24:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9981135129928589, "perplexity": 643.0997546497591}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678705117/warc/CC-MAIN-20140313024505-00038-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://www.hulver.com/scoop/story/2005/8/10/125736/641
|
Whoops Apocalypse [FBC]
By TheophileEscargot (Wed Aug 10, 2005 at 07:57:36 AM EST) (all tags)
The myth of the Cleansing Apocalypse. Web. Me.
Poll: most likely to do well after the apocalypse?
The myth of the Cleansing Apocalypse
There's a certain narrative, that seems to have moved beyond the scope of libertarian-flavoured SF into the popular imagination, of a kind of Cleansing Apocalypse. The story goes that a great environmental or economic disaster sweeps the nation; the government collapses; but small groups of rugged rural survivors stay alive to sow the seeds of a new and better society. It's a compelling mixture of dystopian warning and Arcadian fantasy. Proponents advise that the best way to survive is to own rural property; to learn how to hunt, farm and survive independently; and be prepared to defend yourself against wandering Mad Max-style human predators.
I think the problem is it depends on a certain assumption: that disaster will cause the collapse of the State, which isn't really borne out by experience.
In his history Stalingrad, Antony Beevor reports of a scene where after the Soviet breakout, a German general is captured in a cellar by the Russians. "Where are your men?" demand the captors. Wearily, the general points to the handful of soldiers around him. What's relevant here is that even though virtually everyone has been wiped out, the structures and institutions of the army have survived.
This doesn't just apply to the military realm, but to the civilian. As the Russians pushed into German in WW2, the Nazi regime did not collapse; if anything it's control over the populace grew stronger. As sanctions bit into post-Gulf-War-One Iraq, Saddam Hussein only consolidated his strength and control. While the rulers of the government may change, the existence of a government remains. If anything, the State gains more power in a crisis.
Furthermore, in a disaster or crisis, the survivors tend to be people with lots of contacts, with large support networks, with friends in high places. It's not the rugged individualists who do best, but the schmoozers, the networkers, the party hacks and the social climbers.
Cleansing Apocalypse fantasies tend to involve valiant homeowners fighting off roaming bands of disorganized marauders: difficult but achievable. However, this assumes an ahistorical breakdown of authority. Given that the state is likely to survive, the people after the land are likely to be a hungry but organized army; well-armed and vastly outnumbering the rural farmers. This isn't really practical to defend against.
Another tenet of the Cleansing Apocalypse is that it will be necessary to know all kinds of survival skills; how to plough the land, milk a cow, gather the harvest, and be self-sufficient. Again, this assumes a breakdown in organization. A more likely scenario is that urban authorities will take control of agriculture; as in Stalinesque collective farms, or Mugabe's takeover of the white farms; or Pol Pot's forced ruralization. The pattern would be of unskilled labour directed by a few: following simple orders is all that's required of the new farm labourers. The best quality of life goes to those in a position of power when the apocalypse comes.
If you believe that the disaster is coming, the best way to prepare is not to buy land in the country and learn to farm; your property will probably be confiscated and your labour used in virtual slavery. The best strategy is to remain in the cities, where you can cultivate useful relationships, and work your way up the hierarchy of power. In the countryside, you will be too isolated to work a shifting power structure. Man is a political animal, a creature of the Polis; political skills rather than agricultural will remain the most useful.
To survive the coming apocalypse, start networking now.
Web
Tim Berners-Lee interview: OK's blogging, dislikes bomb-making.
Wired: factories get cleaner, the new Uranium boom.
Me
Wrote an analysis document today. Feel a strange sense of achievement.
FBC
Faster, Better, Cheaper is a reduced content, reduced editing, increased frequency diary concept.
Whoops Apocalypse [FBC] | 47 comments (47 topical, 0 hidden) | Trackback
why is the urban network.. by infinitera (2.00 / 0) #1 Wed Aug 10, 2005 at 08:06:25 AM EST
[…] a professional layabout. Which I aspire to be, but am not yet. — CheeseburgerBrown
Because it was stapled to the chicken by Rogerborg (4.00 / 8) #4 Wed Aug 10, 2005 at 08:38:20 AM EST
I didn't understand your question, but didn't want to appear rude.
-
Metus amatores matrum compescit, non clementia.
[ Parent ]
And a million points by Trip (2.00 / 0) #20 Wed Aug 10, 2005 at 10:11:58 AM EST
goes to Rogerbjörk.
[ Parent ]
I love you. Will you marry me? by greyrat (4.00 / 1) #28 Wed Aug 10, 2005 at 11:53:18 AM EST
[ Parent ]
nly in canada [nt] by monkeymind (4.00 / 2) #39 Wed Aug 10, 2005 at 11:37:39 PM EST
[ Parent ]
Cleansing Apocalypse by The Fool (2.00 / 0) #2 Wed Aug 10, 2005 at 08:10:47 AM EST
That sounds like my plans for the junk in my garage this weekend.
I AM CLEA-NOR, DESTROYER OF CLUTTER. ALL YOU CARDBOARD BOXES OF STUFF SHALL BOW BEFORE ME!!!
Good luck by TheophileEscargot (4.00 / 1) #3 Wed Aug 10, 2005 at 08:30:34 AM EST
I can practically hear the trumps sounding.
--
It is unlikely that the good of a snail should reside in its shell: so is it likely that the good of a man should?
[ Parent ]
For some reason I remember by MillMan (4.00 / 2) #5 Wed Aug 10, 2005 at 08:39:41 AM EST
what Scott Adams said prior to y2k (bonus: contains amusing y2k predictions):
I've heard that many people are hoarding cash and food just in case civilization collapses. My strategy is to hoard guns and ammo so I can take the cash and food from the people who didn't do a good job thinking through the "collapse of society" concept.
Whether or not the state survives depends on the extent of the damage, I think. At any rate though, whoever has the best army / militia is going to do what you said. I see no reason to believe otherwise.
"Just as there are no atheists in foxholes, there are no libertarians in financial crises." -Krugman
Er, didn't the Cleansing Apocolypse by Rogerborg (4.00 / 1) #6 Wed Aug 10, 2005 at 08:40:43 AM EST
Start somewhere in the gruntings and muttering of our filthy smelly* cave-dwelling ancestors? Just about every culture has a Flood analog where only the Good survive and the Undeserving are washed away. It's not a new idea.
* I believe this may have been pre-Lunix, perhaps around the time of System V.
-
Metus amatores matrum compescit, non clementia.
In a non-FBC diary by TheophileEscargot (4.00 / 1) #7 Wed Aug 10, 2005 at 08:56:25 AM EST
To make it relevant, I would have linked to that guy who was all over Salon and places explaining how there was no point going to college and everyone should learn how to use a horse-drawn plough and such.
--
It is unlikely that the good of a snail should reside in its shell: so is it likely that the good of a man should?
[ Parent ]
is joht titor a filthy smelly gruntling? by martingale (2.00 / 0) #33 Wed Aug 10, 2005 at 03:38:06 PM EST
Well, is he, punk?
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
gaaah s/joht/john/ [n/t] by martingale (2.00 / 0) #34 Wed Aug 10, 2005 at 03:38:47 PM EST
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
Isn't he a porn star? by Rogerborg (4.00 / 2) #41 Thu Aug 11, 2005 at 12:23:56 AM EST
I really only know about historical figures and porn stars.
-
Metus amatores matrum compescit, non clementia.
[ Parent ]
let's just say by martingale (4.00 / 1) #42 Thu Aug 11, 2005 at 01:11:05 AM EST
He would make a good movie (*)
(*) hollywood standards being what they are, and all, we're not talking about the BBC here.
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
God damn you. by blixco (4.00 / 1) #43 Thu Aug 11, 2005 at 04:19:00 AM EST
Hot coffee through the nose is PAINFUL.
You're on fire lately, man. You should have your own show.
---------------------------------
The farmers always win.
[ Parent ]
sanctions == apocalypse? by DesiredUsername (2.00 / 0) #8 Wed Aug 10, 2005 at 08:56:26 AM EST
Perhaps overstating their disastrousness a tad. Even losing a (modern) war, while politically, morally and economically a disaster, doesn't usually kill huge fractions of the populace. I mean, this kind of story usually kills 99% of people, which would leave the US government consisting of like 3 Congresspeople and a few judges.
---
Now accepting suggestions for a new sigline
The Germans at Stalingrad by TheophileEscargot (2.00 / 0) #10 Wed Aug 10, 2005 at 08:59:47 AM EST
Suffered those kinds of losses without their structure breaking down.
Plus it's Peak Oil and Foreign Debt and such that seem to be the apocalypses that people are talking about.
--
It is unlikely that the good of a snail should reside in its shell: so is it likely that the good of a man should?
[ Parent ]
The Germans at Stalingrad by DesiredUsername (4.00 / 1) #11 Wed Aug 10, 2005 at 09:06:32 AM EST
had a home base, not to mention a populous enemy, so their continued structure is explainable otherwise.
I've never read a post-apocalyptic story that started with a banking crisis, so I'll have to defer to your description of that.
---
Now accepting suggestions for a new sigline
[ Parent ]
Home base? by TheophileEscargot (2.00 / 0) #14 Wed Aug 10, 2005 at 09:14:32 AM EST
They were encircled in the Kessel after the Russian breakout... not the homeliest of places really.
--
It is unlikely that the good of a snail should reside in its shell: so is it likely that the good of a man should?
[ Parent ]
Military by ucblockhead (4.00 / 1) #15 Wed Aug 10, 2005 at 09:20:11 AM EST
As I said elsewhere, I think military structures are a special case, especially groups of soldiers stuck in a foreign land. I mean, it's not like a couple random germans could go find some fat cows, hole up in a bunker and expect to wait the war out.
Having no place to go is exactly why they stuck together!
---
[ Parent ]
Well, they could surrender by TheophileEscargot (2.00 / 0) #16 Wed Aug 10, 2005 at 09:23:00 AM EST
Though that wasn't exactly a picnic either.
--
It is unlikely that the good of a snail should reside in its shell: so is it likely that the good of a man should?
[ Parent ]
Besides by ucblockhead (2.00 / 0) #18 Wed Aug 10, 2005 at 09:31:32 AM EST
That's just exchanging one command structure for another.
---
[ Parent ]
I mean Germany still existed by DesiredUsername (2.00 / 0) #17 Wed Aug 10, 2005 at 09:26:26 AM EST
An apocalypse has to be more than the decimation of a military grouping. It has to be catastrophically severe on at least a nationwide basis, if not globally.
---
Now accepting suggestions for a new sigline
[ Parent ]
Economic collapse by ucblockhead (2.00 / 0) #13 Wed Aug 10, 2005 at 09:13:09 AM EST
The US government showed little sign of collapsing during the great depression. If anything, history shows that economic depressions result in tighter government control, even if there is a short period of anarchy/civil war.
The French Revolution is a great example of a government brought down by economics. It took, what, six years to get to Napoleon?
---
[ Parent ]
yep. by 256 (4.00 / 1) #9 Wed Aug 10, 2005 at 08:58:18 AM EST
i have always thought it was kind of silly to BUY land as a precaution in case of massive societal collapse.
my method has been to maintain an intimate knowledge of certain rural areas accessible from the city in which i live and know which land it will be feasible to seize when necessary.
I assume that a deed of ownership will have little clout after the apocalypse.
I guess that makes me one of the bad guys, eh?
---
I don't think anyone's ever really died from smoking. --ni
Do you think you are the only one by monkeymind (4.00 / 1) #38 Wed Aug 10, 2005 at 11:35:12 PM EST
looking at this land?
Do you think the people who are already on it are going to walk away when you turn up with a gun and say get off?
If you are a real survivalist and you think it is all going to go south you buy the land now so that you can prepare it for when the shit hits the fan.
This does not mean just planting a few fruit trees. It means that you build defencive structures on the land so anyone trying to take it away will regret it.
So you see a nice farmhouse you want to take over. You turn up with a few of your buddies packin guns. To your surprise there is no one there...
So you move in. You set up guards and shit and start living. A couple of weeks later, when you have relaxed a bit. The people who own the land, saw you coming and have been watching you from their hidie holes come out one night. They kill you all in your sleep and put you next to the lot who tried it 6 months ago.
the main problem with the mad max type movies is that they assume that the 'farmers' are all Amish and are not as ruthless about keeping their land and lives as the raiders are at taking it.
[ Parent ]
Cleansing apocolypse by ucblockhead (4.00 / 2) #12 Wed Aug 10, 2005 at 09:07:06 AM EST
I tend to agree. The dirty truth is that most fantasies of cleansing apocalypses (apocalypsi?) are just wishful thinking, hoping that all the bad people go so the good people can create utopia.
However, your examples are all political/military apocalypses. It's not surprising that command structures survive during war because militaries are designed to maintain command structure through decimation. I think there are even better examples than you give. A modern one being the collapse of the Soviet Union and an ancient one being the (slow) collapse of the Western Roman Empire. In both cases, you've certainly got increased anarchy, but overall yes, it's all "new boss, same as the old boss" stuff. (I suppose you could also call the USSR an economic collapse as well.)
Most of the literary apocalypses are not political/military. The only good example I can think of in history is perhaps the black death, but I don't know enough about that period to draw conclusions. Jared Diamond's book Collapse does have some info relating to how people deal with environmental collapse, but most of his examples are small.
A lot depends on how the apocalypse happens. Nukes targeting all major cities would tend to wipe out lines of command while horrible plagues would tend to leave some of them in place.
I'm often amused (in a sad, cynical way) by the way some environmentalists (including reasonable ones like Diamond) say, in essense "you rich bastards better watch out, because when things go to shit, you'll be up against the wall!!!" Thing is, history doesn't work like that. When things go to shit, the rich pay the middle class to help them build enclaves and use machine guns on the poor. If there is some environmental apocalypse, it'll be the poor that make the bulk of the dead. That's true of any crisis. But that's not something those who fantasize about the rich oil barrons getting theirs want to face.
---
apocalypsis by martingale (2.00 / 0) #35 Wed Aug 10, 2005 at 03:49:20 PM EST
Apocalypsis, apocalypsis, f (3rd decl.) I think, so the nominative plural should be apocalypses, however in your case you're talking about fantasies of ..., so I expect the ending should be apocalysibus
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
An American apocalypse by 606 (4.00 / 2) #19 Wed Aug 10, 2005 at 09:55:40 AM EST
It seems to me that the Mad Max apocalypse has two contributing sources:
One, the cold war, which suggested that the apocalypse would be nuclear, and were nuclear war to break out the obvious targets would be major political and economic centres which would be utterly decimated. Rural communities would be left alone, except that all the bankers, lawyers, and politicians would be dead.
And two, the American militia theory whereby corrupt governments are overthrown by groups of irate citizens who form local governments.
Consider Tyler Durden in Fight Club trying to destroy the credit histories of all people on earth. That's the kind of apocalypse that is popularily dreamt of, where the skill of self-preservation would be useful. The kind of apocalypse needed for a cleanse is one that creates a huge power vacuum. Imagine Iraq with all of the American troops withdrawn right after ousting Sadaam... that's a cleansing apocalypse, though, like Mad Max in actual fact it's very very dirty.
-----
imagine dancing banana here
Nope. by Breaker (4.00 / 1) #37 Wed Aug 10, 2005 at 04:27:33 PM EST
All previous control structures are gone.
No army, no police. No clean water to drink, food looking a little shortlived.
No rules.
Immediate landscape devastated.
How are you going to survive?
That's Apocalypse.
[ Parent ]
What has Stalingrad got to do with Apocalypse? by xth (4.00 / 2) #21 Wed Aug 10, 2005 at 10:36:34 AM EST
It was just a field battle, like thousands before it. Apocalypse implies there's no winner - with Stalingrad, the Allies were there to pick up the pieces.
[Thanks for your interest in xth's comments. This accont is now spent]
Part 2 - Sorry, PDA crashes with long comments by xth (2.00 / 0) #22 Wed Aug 10, 2005 at 10:46:49 AM EST
By their very nature, there are not many apocaIpses we have real data on. Roman Empire - that's it (I know nothllng about Chinese history).
.Everything else, either there was a 'winner', or there was nobody there to document it. Examples of the latter: Easter Island;Jamaica during Napoleonic wars (?); 2imbabwe.
[Thanks for your interest in xth's comments. This accont is now spent]
[ Parent ]
Literalist [nt] by TheophileEscargot (4.00 / 1) #23 Wed Aug 10, 2005 at 10:47:31 AM EST
--
It is unlikely that the good of a snail should reside in its shell: so is it likely that the good of a man should?
[ Parent ]
Literalistophobe [nt] by xth (4.00 / 2) #24 Wed Aug 10, 2005 at 10:49:25 AM EST
[Thanks for your interest in xth's comments. This accont is now spent]
[ Parent ]
Namecallers! by greyrat (2.00 / 0) #27 Wed Aug 10, 2005 at 11:51:46 AM EST
[ Parent ]
And by ucblockhead (2.00 / 0) #25 Wed Aug 10, 2005 at 11:18:32 AM EST
Rome was more of a long slide than a collapse.
---
[ Parent ]
Chinese History by Scrymarch (2.00 / 0) #44 Fri Aug 12, 2005 at 03:58:18 AM EST
Short version - don't go looking there for evidence of the disappearance of the centralised state.
Disclaimer: I am not qualified to make any longer statement about Chinese history.
The Political Science Department of the University of Woolloomooloo
[ Parent ]
Not even interludes? by xth (2.00 / 0) #45 Fri Aug 12, 2005 at 04:10:56 AM EST
[Thanks for your interest in xth's comments. This accont is now spent]
[ Parent ]
Well by Scrymarch (2.00 / 0) #46 Sat Aug 13, 2005 at 04:13:36 PM EST
At the end of dynasties you got periods where governance broke down, but there were alternative governments in the form of warlords, robber-barons, uprisings and invasions. I think the population density is probably a factor here. In the outlying provinces switching control to neighbouring empires. Periods of sort of falling out of the tax-collection / brigand-extortion loop for a while must have happened but that would be most pronounced on the marginal land where people were scraping a living as it was.
My guess is the American survivalist scenario is kind of an outgrowth of the white settler period of American history, where there were low population densities compared to that possible under settled agriculture. Maybe you could make it succeed by having a new political entity ready to go, like a city state with a good water catchment and a defensible position.
The Political Science Department of the University of Woolloomooloo
[ Parent ]
Networking? by blixco (2.00 / 0) #26 Wed Aug 10, 2005 at 11:30:18 AM EST
Or not.
I mean, you could take the easy way out.
---------------------------------
The farmers always win.
What? Floppies? by ad hoc (4.00 / 2) #32 Wed Aug 10, 2005 at 01:52:58 PM EST
--
[ Parent ]
No by Breaker (4.00 / 1) #36 Wed Aug 10, 2005 at 04:23:14 PM EST
Just a good DNS and DMZ policy would work.
[ Parent ]
As a good example of this .... by lm (4.00 / 1) #29 Wed Aug 10, 2005 at 12:27:09 PM EST
... Haiti?
There is no more degenerate kind of state than that in which the richest are supposed to be the best.
Cicero, The Republic
Apocalypse by thunderbee (2.00 / 0) #30 Wed Aug 10, 2005 at 12:58:27 PM EST
By definition it goes beyond being a mere crisis.
I've always understood Apocalypse as destructive enough that existing power structure collapses. It's a mere catastrophe otherwise.
And if this happens, only guns matter. Second comes your usefulness (being a doctor might get you some comfort & protection from the local warlord, if you don't get shot first that is). Less that that, you're fair game one way or another.
MillMan's post is dead on :)
No by ucblockhead (4.00 / 1) #31 Wed Aug 10, 2005 at 01:50:19 PM EST
What matters most is charisma.
All power in human society ultimately depends on your ability to convince other people to do what you want. Remember, the stereotypical view of a post-apocalyptical warlord isn't a big guy with a gun...it's someone with a bunch of big guys with guns doing his bidding.
---
[ Parent ]
Not entirely true by thunderbee (2.00 / 0) #40 Thu Aug 11, 2005 at 12:17:00 AM EST
You forget that there's nothing to stop one of your trigger-happy gun-totting underlings to shoot the leader "just because" he disagrees.
Charisma works alone only in a fanatical surrounding; otherwise, brute force still rules.
You need a certain amount of civilisation to maintain a power structure without permanent armed struggle. It's the mistake everybody makes. They assume that the artificial protections of society endures. It won't. If you disagree with someone, you just shoot him. That kind of puts a different light on politics ;)
[ Parent ]
Yeah but no but, by ToyChicken (2.00 / 0) #47 Tue Aug 16, 2005 at 12:44:24 AM EST
So, despite the implication of the word 'apocalypse' which I think is probably a bit strong (we'd all be dead, networkers and farmers alike). Under what circumstances would:
a) The state actually truly fail / cease to exist
b) People revert to a utterly barbarian state where the value of diversity is forgotten.
I appreciate that brute force might win in the short term, but it does have a historical tendency to fail if it's used as the sole means of control for a long period of time. i.e. The abusers often get overthrown by the abused, or the abusers turn on themselves once they've killed everyone else.
Personally speaking, I'm looking forward to the time when there's only me and a select group of individuals left, and we can build a world with air-conditioning, cyclepaths, and clean code...
I love Nebbish, he's the best...
Whoops Apocalypse [FBC] | 47 comments (47 topical, 0 hidden) | Trackback
|
2017-11-20 19:08:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17713995277881622, "perplexity": 5708.593039067992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806124.52/warc/CC-MAIN-20171120183849-20171120203849-00240.warc.gz"}
|
http://www.talkstats.com/threads/expected-value.25491/
|
# Expected value
#### Hans Rudel
##### New Member
Could someone please explain the following?
A guy and a girl play a game where they roll a dice. If the guy rolls an even number, the girl pays him that amount. If he rolls an odd number he pays her that amount.
The expected value is supposedly $0.5. ie mean of -1+2-3+4-5+6 but if i split the values like so girl: 1,3,5 total = 9 Average is 9/3 = 3 guy: 2,4,6 total = 12 Average = 12/3 = 4 dif = 1 and not 0.5 why can i not split up their winnings like the above? Thanks for your help #### Dason ##### Ambassador to the humans You can but you need to be more careful. Let X be the amount won on any given roll. $$E[X] = \sum XP(X)$$ $$= (-1)(1/6) + (2)(1/6) + (-3)(1/6) + (4)(1/6) + (-5)(1/6) + (6)(1/6)$$ $$= (1/6)(-1 +2 -3 +4 -5 +6)$$ $$= (1/6)(-1 -3 -5) + (1/6)(2 + 4 + 6)$$ [Here we separate the outcomes into either losing or winning] $$=(1/2)(-3) + (1/2)(4)$$ [Here I bring 1/3 into each of the sums to get the averages you have in your calculations] You need to account for the fact that you're losing$3 and that there is a probability of 1/2 of falling into either losing or winning. It's really that you didn't account for the probability of falling into losing or winning that messed you up here.
We can show something more general using a conditioning argument. Let $$I_W(X)$$ be an indicator of whether or not you win money on a given roll. We can use the fact that
E(X) = E[E(X|Y)] and what I showed up above is really just an application of this where $$Y=I_W(X)$$
#### Hans Rudel
##### New Member
Hey, thanks very much for replying to my post and clarifying my question.
|
2022-09-24 15:49:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.54552161693573, "perplexity": 1248.700330329634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00705.warc.gz"}
|
https://brilliant.org/discussions/thread/tt-rc-missiles/
|
# TT: RC Missiles
The RC missiles are in my opinion the most important power-up. You fire an RC Missile, and your tank freezes, i.e. you can't move it anymore. Instead, you're now controlling the missile. So you can make your missile track other players and destroy them.
TIPS AND INFO
• You are susceptible to being destroyed by your own missile. So watch out.
• RC missiles have awful steering. Because of this, don't try to be a masterful ninja. Although weaving in and out is fun, be direct and minimize the time it takes for the missile to reach its target. Don't show off.
• Before launching the missile, hide in a corner. You are an easy target when you can't move.
• Try to avoid walls with the missile. It bounces off of them in a weird way and throws your controls.
• When the missile is near an enemy, aim it straight at them. If it misses, it could hit a wall and you could pretty much lose control.
• You only have so much time before the missile disappears after launching it AND after you get it.
Use these tips to master the RC missile!
Note by Finn Hulse
5 years, 10 months ago
This discussion board is a place to discuss our Daily Challenges and the math and science related to those challenges. Explanations are more than just a solution — they should explain the steps and thinking strategies that you used to obtain the solution. Comments should further the discussion of math and science.
When posting on Brilliant:
• Use the emojis to react to an explanation, whether you're congratulating a job well done , or just really confused .
• Ask specific questions about the challenge or the steps in somebody's explanation. Well-posed questions can add a lot to the discussion, but posting "I don't understand!" doesn't help anyone.
• Try to contribute something new to the discussion, whether it is an extension, generalization or other idea related to the challenge.
MarkdownAppears as
*italics* or _italics_ italics
**bold** or __bold__ bold
- bulleted- list
• bulleted
• list
1. numbered2. list
1. numbered
2. list
Note: you must add a full line of space before and after lists for them to show up correctly
paragraph 1paragraph 2
paragraph 1
paragraph 2
[example link](https://brilliant.org)example link
> This is a quote
This is a quote
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
MathAppears as
Remember to wrap math in $$ ... $$ or $ ... $ to ensure proper formatting.
2 \times 3 $2 \times 3$
2^{34} $2^{34}$
a_{i-1} $a_{i-1}$
\frac{2}{3} $\frac{2}{3}$
\sqrt{2} $\sqrt{2}$
\sum_{i=1}^3 $\sum_{i=1}^3$
\sin \theta $\sin \theta$
\boxed{123} $\boxed{123}$
Sort by:
It's funny when you chase another person across the map trying to hit them and they are panicking and next thing you know they run into the missile.
- 5 years, 7 months ago
Yeah. :D
- 5 years, 7 months ago
It seems to me that death rays curve toward your opponents?
- 5 years, 7 months ago
Lasers? Yes. :D
- 5 years, 7 months ago
Could you also pick up a landmine and use it to guard yourself while you chase people with the missile? I've also noticed that the missile is faster than your opponents. THERE IS NO ESCAPE ;) (except for it timing out, but i tend to get a death ray right after and blow them up)
- 5 years, 7 months ago
|
2020-03-30 01:11:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 8, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9343805909156799, "perplexity": 3563.2582634673745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496330.1/warc/CC-MAIN-20200329232328-20200330022328-00452.warc.gz"}
|
https://chat.stackexchange.com/transcript/8595/2019/8/19
|
12:00 AM
RELOAD! There are 6248 unanswered questions (89.8380% answered)
1 hour later…
1:20 AM
possible answer invalidation by Jamal on question by Rex Overflow: codereview.stackexchange.com/posts/225464/revisions
2 hours later…
3:42 AM
0
I have problems when my output only update column file1. I want to check if column file1 has value then update file2, and the otherwise if file2 has value update columns file1. I hope you can guide me what's wrong with my code: function update_files($data,$id){ $this->db->where('id',$id)...
your code is not as optimal as it could be ... i would head over to codereview.stackexchange they are probably better at optimizing this stuff ... for starters i think if list1 was a generator that might be slightly faster ... you can probably also eliminate large chunks of the test set by some methods — Joran Beasley 1 min ago
You might want to post on Code Review, I would suggest to loop through elements in document.querySelectorAll('select') and assign the event listener to each element. In setWeather you could differentiate between the elements by introducing a parameter to setWeather(event). Also, you might want to look into jQuery which makes things a bit easier. — shrys 21 secs ago
4:21 AM
4:44 AM
0
This is from LeetCode: Missing Number Given an array containing n distinct numbers taken from 0, 1, 2, ..., n, find the one that is missing from the array. Example 1: Input: [3,0,1] Output: 2 Example 2: Input: [9,6,4,2,3,5,7,0,1] Output: 8 Note: Your algo...
Monking
If the code is working this question should rather go to codereview.stackexchange.comMarkus Deibel 21 secs ago
5:04 AM
0
The codes below get the rows of data until it reaches the total running quantity limit. Notice that the SQL uses sub-query. select *, sum(Quantity) over (order by id) as RunningQty from #PurchaseOrderProducts. I'm worried that the sub-query would get all the data from the table. Is there a way t...
5:42 AM
Monking
6:03 AM
0
I'm attempting to write an animation system on top of SFML, and I've come up with a design where I can wrap sf::Drawable objects into a RenderComponent class, and the RenderComponent class can be given an Animation object at any time, which will overwrite a previous animation if one exists. Here ...
6:15 AM
Perhaps you want codereview.SE instead of stackoverflow? This question looks a bit too broad. — SOFe 15 secs ago
I'm voting to close this question as off-topic because it belongs to codereview.stackexchange.comP0W 49 secs ago
6:55 AM
7:28 AM
> the code works, however
Either it does or it doesn't work.
@P0W If it works, yes. But it looks like a feature request the way it's written now and those are off-topic at Code Review. Please see their help center. — Mast 44 secs ago
1 hour later…
8:41 AM
-1
So I've been trying to simulate the trajectory of a frisbee in flight thrown with different angles of attack/initial velocities. The differential equations used for the code are: $$\frac{dv_y}{dt} = k_L v_y^2 - g$$ $$\frac{dv_x}{dt} = -k_D v_x^2$$ where k_L \text{ is the transformed coeffici...
9:41 AM
0
I wrote a stack class in C++ using arrays of fixed width. Could anyone review my code ? I didn't comment on any of the functions, because I thought class itself is self explanatory. Is it a wrong approach or which kind of comments can I write? #include <stdlib.h> #include <iostream> template <...
10:20 AM
-1
Here is my first attempt at creating a full Champions League Simulation, including drawing the groups and subsequent games and playing all matches. The code is very, very long and I am sure there must be a more concise way of writing it. One problem I could not get around was when drawing the g...
2 hours later…
12:07 PM
probably better suited for codereview.stackexchange.comMikeT 22 secs ago
12:25 PM
If you want help improving working code you should post this on CodeReview.SE. If you do decide to do so please delete the question here. — NathanOliver 9 secs ago
12:39 PM
0
QUESTION IS BEING EDITED+FIXED, PLEASE WAIT I'm working on a small logger class, and want to try to improve my ability to wrangle mastery of PHP OOPs. The logger itself is uninteresting, but makes for an ideal illustration of the kind of top-level, first-class system that might warrant Being ...
0
I have a code that calls many times a function returning a list of the dates between two dates formatted as ('0001-01-01 00:01:00'),('0001-01-02 00:01:00'), ... The current solution is: import numpy as np import time from datetime import datetime beg = datetime.strptime('01 01 0001', '%...
@Tom would it work if they are Double . This is the entire code [codereview.stackexchange.com/questions/226346/…Miriam List 57 secs ago
12:54 PM
Monking
If your code doesn't work then it isn't really a candidate for Code Review. A minimal reproducible example would give enough information for someone to run the code and see the same problem. You have given no useful information about Arr2 even though it is central to the problem. You are apparently trying to add 1 to a type for which +1 is undefined. Why you are doing that and what you should do instead we literally have no idea, because you have not given us anything to go on. — John Coleman 12 secs ago
@Toby Speight Looks like an API to me : codereview.stackexchange.com/questions/209827/…
1:18 PM
1
The question is: Write a program that prompts the user to input ten values between 80 and 85 and stores them in an array. Your program must be able to count the frequency of each value appears in the array. So, is there have any more efficient ways to present the count for frequency number betwe...
possible answer invalidation by shamalaia on question by shamalaia: codereview.stackexchange.com/posts/226417/revisions
@Duga Formatting fix.
@ John Coleman , @JNevill. I had the code by it was way to slow so people from community helped me modify it. However now I want to modify as I posted it in here [codereview.stackexchange.com/questions/226346/… . I was trying to find a way to do it and thought that using an increment would help me on solving the problem.Sorry for the inconvenience — Miriam List 42 secs ago
Well the ID's can expose infromation about the data size nothing more nothing less. You can better post the source code off the PHP and Javascript if you worry about security.. But then i think the question is better suited for codereview.stackexchange.comRaymond Nijland 13 secs ago
That being said, I'm not at all sure what you are trying to do here. Your description above doesn't foot with your ask in codereview. You can't increment a string, so I'm not sure why you are trying. Perhaps you are misunderstanding what increment means or misunderstanding exactly what you are trying to increment? A clearer description of what you are trying to achieve here would really help. — JNevill 29 secs ago
1:38 PM
0
This is my first time writing so much jQuery, and because of the amount of content on the page, I get a lot of the following console warning in Google Developer Tools as I'm changing filters: [Violation] Forced reflow while executing JavaScript took XXXms A working version of the full code ...
0
I'm new in oop programming pattern. I created this simple User Class and i'm wondering if it : Is following oop rules and logic ? Is maintaibable ? Is can be tested without any problems ? Can be expanded to a large user class (permissions,etc) ? Is secure ? this is my class <?php class Use...
Maybe @VLAZ pretty sure code reviews is out off topic there it more a thoery based website on small code parts.. — Raymond Nijland 36 secs ago
1:58 PM
1
I have created this bank ATM app mockup which I believe implement some Domain-Design Design layered architecture, repository pattern and uow pattern, if not fully. In my solution, I have these projects (layer): ATM.Model (Domain model entity layer) ATM.Persistence (Persistence Layer) ATM.Applic...
2:18 PM
-2
I have ifs with similar logic for querying the database. Is it possible to reduce the ifs into ternary expressions? protected override List<SimpleExpressionBase> GetAdditionalFilterExpressionsForGET( EntityQueryParametersForGet queryParameters) { var expre...
1
So I've just written a framework that is supposed to make it easier to create a network-based application (client-server) using the native java.net package. I was wondering if the structure of my code is well understandable and if there are ways to improve my code (in both, readability and perfor...
1
Let's assume that you need to filter customers based on 3 criteria: name tags assigned to customer customer's location For some reasons you have to do it directly in your program code, without database queries or PL/SQL for that matter. Which of these two solutions do you find better in any t...
2:38 PM
0
I have this function that takes a string which represents a html-color and returns the OLE value for it. If the string is null or empty or it can't be parsed, it uses a default value. However I have to write that one line containing the default value twice and I don't see how I could remove this ...
3:05 PM
I'm voting to close this question as off-topic because posts asking "What do you think of my code?" belong on codereview.stackexchange.com. — Peter B 40 secs ago
3:30 PM
my naive algorithm takes a lot of time. Then show your naïve algo. The question might be better suited to codereview.stackexchange.comTheMaster 38 secs ago
0
I would like to make this code more efficient, I have isolated the problem as being these two for loops: for i, device in enumerate(list_devices): for data_type_attr_name in data_types: result_list_element = { "device_reference": device.name, ...
@dfhwze doesn't really look like it, TBH
@Vogel612 that last block is the exact pattern of the other answer.
no actually it isn't....
and the code is really very much too simple to merit stealing
3:39 PM
I'll withdraw my comment then?
not really my call to make.
@Vogel612 Ok I won't make a point out of it :)
possible answer invalidation by Asadefa on question by Asadefa: codereview.stackexchange.com/posts/226352/revisions
@Duga no invalidation, but the question is a bit weird
posted on August 19, 2019 by CommitStrip
3:57 PM
0
I recently solved the Knight Dialer and the ugly numbers problems, but I'm not quite sure if I got the time complexity right; hence I'm requesting the following: 1) Time and space complexity review on both method and explanation on why so. Knight dialer: (movs map has all the possible movements...
Monking
4:14 PM
Greetings, Programs.
1
I'm using the matomo reports api to pull out some data in js. For some charts, I'm in the need to sort the array data based on inner property value ("val"), further to get only the 3 highest (Hong Kooong, Munich, Tokyo). Finally, this data should be used for the creation of a new key/val object (...
This might also be a good question for Code Review for help optimizing your current solution — G. Anderson 41 secs ago
Could someone explain to me why the lack of executable code on this question is ok? codereview.stackexchange.com/questions/226352/…
4:30 PM
@pacmaninbw I can't because I don't understand the question.
4:45 PM
If perf is important and you believe you have tests that demonstrate a significant difference between a dictionary-based approach and an explicitly-implemented state machine, then you should just use the latter. If you are curious about whether your dictionary-based approach could be improved you can post that to codereview.stackexchange.com. I don't plan to do any more searching for my post...archives of the old newsgroups seem to be incomplete, and there doesn't seem to be much point in digging it up anyway. — Peter Duniho 30 secs ago
@dfhwze The OP is asking which of 3 massive structures (they would be classes in most modern languages) would perform the best.
5:03 PM
@pacmaninbw I thought he asked about layout of this massive set of structures, rendering the question opinion-based
@dfhwze it is opinion based as well.
(2) closing votes now ;-)
this question rather belongs to codereview.stackexchange.comYour Common Sense 10 secs ago
Not the right tool or place in the process. Do you really want your code not to compile until you've documented it all? Not to debug? The right place for this is in code review, possibly with some help from an automated checkin script. — Gabe Sechan 38 secs ago
I would advise you to check out codereview.stackexchange.com, where questions about better ways to do code that is already working are more-on-topic. This site is for questions related to problems, like code that doesn't actually work. — rory.ap 38 secs ago
5:56 PM
0
Building upon a recent question I answered here Simple middleware pipeline builder (similar to asp.net-core) I came across a cross site question Stackoverflow: Adding middleware in Carter framework makes some URL to not trigger which looked like it could benefit from what I had learned from ...
6:16 PM
1
I've decided to rebuild the code with advices from previous topic C++ Least cost swapping 2 I've also decided to change input from passing the file name to passing the content of a file, so i am wondering if the input checking is still correct as in previous version. Please review my code. #incl...
0
I have implemented Token and FieldToken classes as part of my project and would like to here some suggestions for improvement. First, Token class represents logical operators, relational operators, parentheses, etc. It does not hold a value, only type of the token (e.g. AND, OR, etc.). Here is i...
6:32 PM
No offense Martin, but I think I can only agree with the point #6. Your answer does not really answer the question. It looks more like a code review. — Dharman 56 secs ago
0
I have this class on my project, which I am using to populate a dropdown on my form. I wonder if there is any way to dynamically generate a constant list like the one I have below, since it would be basically 'db' followed by index # frozen_string_literal: true class GatherTaskHost LIST = [...
0
I have rewritten my python code in cython in order to speed up it quite a bit. However after checking the code in jupyter it seems that part of it is still compiled as python code, therefore its not being sped up enough. My function is pretty basic, it gets start date, end date and creates array...
@Dharman oh yeah it is totally a code review! — Martin 23 secs ago
If your code is complete and work and you are looking for improvement suggestions, it should be on Code Review instead. This site is primarily for fixing broken code. — Carcigenicate 32 secs ago
6:55 PM
1
I recently reviewed a question here on Code Review. The problem statement is Write a program that prompts the user to input ten values between 80 and 85 and stores them in an array. Your program must be able to count the frequency of each value appears in the array. I coded my own solution ...
Hi all readers. I presume @Holli posted this in response to a comment I wrote on an earlier SO. My plan was to do a proper writeup of the race because I thought that would be helpful. I'll still do that tonight if folk don't kill this question. Hi Holli. I apologize. If I'd known you'd be posting code without evidence of a race I would have directed you to post a question at codereview. Hopefully it'll all work out. — raiph 19 secs ago
That is inherently the issue, Daniel. That is essentially a discussion, and SO is not designed for that; I hope that doesn't come off as anything but a statement. If you end up having a coding issue, e.g., you put something together and you ended up using a bunch of nested if-statements and they didn't work, SO would be the place to post to help resolve the specific issue. if you have working code and need to optimize, CodeReview would be the place for you. — Cyril 58 secs ago
7:23 PM
@CommitStrip ugh tell us about it
It makes sense that you tried (successfully as far as you know?) to eradicate races. And seek a review. But that means the end result, in theory, belongs on codereview not SO. I'm personally of the "whatever works" persuasion. If you get a better response here then that's cool. I'll try to give you what feedback I can when I can here and/or on codereview. It's a heck of a lot of code for SO. I enjoyed it, 'cuz it's well written, but still. I'm not sure why you got a downvote but whoever it was hasn't voted to close, so that's good. I've removed the ipsum text in case that was the problem. — raiph 27 secs ago
7:41 PM
In the meantime I've upvoted this question to help it along as a separate question even if it might end up doing better in terms of answers on the codereview site. — raiph 23 secs ago
Also if this is working and you would like a review, please go here. — Çöđěxěŕ 11 secs ago
In the meantime I've upvoted this question to help it along as a separate question. I can see it maybe getting closed (I note someone's just voted to close it as "too broad"). And I can see it doing better on the codereview site if you post it there and P6ers with concurrency chops know that the post is there. Hmm. In fact I really like that as an idea. If you post it, then we can mention it at other locations, eg r/perl6. — raiph 48 secs ago
1
I wrote a queue class in C++ using arrays of fixed width. Could anyone review my code ? I would appreciate any comment and recommendations. It works like a circular queue, so I handled back and front pointers in that manner. #include <stdlib.h> #include <iostream> template <class T> class Queue...
8:05 PM
I think you should split your questions. Question #1 is on topic for StackOverflow. Question #2 would be better at codereview.stackexchange.com after you have a working implementation. — Freiheit 24 secs ago
0
I wrote a backtracking Sudoku solving algorithm in Python. It solves a 2D array like this (zero means "empty field"): [ [7, 0, 0, 0, 0, 9, 0, 0, 3], [0, 9, 0, 1, 0, 0, 8, 0, 0], [0, 1, 0, 0, 0, 7, 0, 0, 0], [0, 3, 0, 4, 0, 0, 0, 8, 0], [6, 0, 0, 0, 8, 0, 0, 0, 1], [0, 7, 0, 0, 0, 2, ...
8:35 PM
@pacmaninbw It's not. Close as lacking context for not having any actual code in it.
DMGregory migrated it.
If this is working code and you're just looking for feedback on the structure I would suggest posting on codereview.stackexchange.com. as it stands this is primarily opinion-based and is likely to get closed here on SO. — wpercy 1 min ago
8:54 PM
0
This code adds the following functionality to the basic tkinter.Tk() class... settings-file interaction for persistent application settings fullscreen functionality using F11, can remember last configuration or be defaulted removes the need for multiple root.configure() statements removes the n...
9:14 PM
1
The exercise description: Given a ‘words.txt’ file (attached) containing a newline-delimited list of dictionary words, please implement a small command line application capable to return all the anagrams from ‘words.txt’ for a given word. You can use any Scala library of your choice, no...
10:00 PM
possible answer invalidation by emadboctor on question by emadboctor: codereview.stackexchange.com/posts/226363/revisions
10:13 PM
-1
So as I was in the process of revisiting my older SFML projects (2 -3 months old) to see if I can recompile them to get an updated executable, I noticed that each program that utilized sf::Texts was throwing a particular error. Once I closed the window, an exception was thrown and Visual Studio o...
0
I was wondering if I should bother with @Embedded or just use a String to represent a typical String value. Here's my code for a Localized lookup key @AllArgsConstructor @Data @Embeddable @NoArgsConstructor public class Localized implements Serializable { public static final String KEY_COL...
10:33 PM
You can post the slow part of your macro for review - it may be slow by design. Note here though. There is a site for code review :) — urdearboy 24 secs ago
0
In recent months I have been trying to figure out how in the world one can mimic the functionality of Excel's New Dynamic Arrays exclusively in VBA. There are tricky ways to do this using the window's API, (see this link), and I have also found that one can utilize ADO with Querytables (see this ...
11:23 PM
If you can reduce you code to only a few elements (2 or 3) and post it on the codereview stackoverfow. There is probably somebody who is willing to do a (harsh) review for you. — second 33 secs ago
0
I have a script in a Work Order Management System (Maximo) that performs a spatial query. Details: Takes the X&Y coordinates of a work order in Maximo Generates a URL from the coordinates (including other spatial query information) Executes the URL via a separate script/library (LIB_HTTPCLIENT...
11:52 PM
0
Here's a Quicksort I had fun writing and improving, so I thought I'd post it here. In my (brief) testing it's about 15% to 20% faster than Java's Arrays.sort(). The sort routine is a fairly vanilla Quicksort. The main improvements are to the pivot selection, and the Quicksort switches to an In...
|
2019-11-20 18:33:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25096139311790466, "perplexity": 1589.1579735813293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670597.74/warc/CC-MAIN-20191120162215-20191120190215-00068.warc.gz"}
|
http://books.duhnnae.com/2017/jul8/150143203551-Statistical-mechanics-and-dynamics-of-solvable-models-with-long-range-interactions-Condensed-Matter-Statistical-Mechanics.php
|
# Statistical mechanics and dynamics of solvable models with long-range interactions - Condensed Matter > Statistical Mechanics
Abstract: The two-body potential of systems with long-range interactions decays atlarge distances as $Vr\sim 1-r^\alpha$, with $\alpha\leq d$, where $d$ is thespace dimension. Examples are: gravitational systems, two-dimensionalhydrodynamics, two-dimensional elasticity, charged and dipolar systems.Although such systems can be made extensive, they are intrinsically nonadditive. Moreover, the space of accessible macroscopic thermodynamicparameters might be non convex. The violation of these two basic properties isat the origin of ensemble inequivalence, which implies that specific heat canbe negative in the microcanonical ensemble and temperature jumps can appear atmicrocanonical first order phase transitions. The lack of convexity impliesthat ergodicity may be generically broken. We present here a comprehensivereview of the recent advances on the statistical mechanics andout-of-equilibrium dynamics of systems with long-range interactions. The coreof the review consists in the detailed presentation of the concept of ensembleinequivalence, as exemplified by the exact solution, in the microcanonical andcanonical ensembles, of mean-field type models. Relaxation towardsthermodynamic equilibrium can be extremely slow and quasi-stationary states maybe present. The understanding of such unusual relaxation process is obtained bythe introduction of an appropriate kinetic theory based on the Vlasov equation.
Author: A. Campa 1, T. Dauxois 2, S. Ruffo 3 1 Complex Systems and Theoretical Physics Unit, ISS and INFN, Rome, Italy 2 Laboratoire de P
Source: https://arxiv.org/
|
2017-09-25 11:44:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6910083889961243, "perplexity": 2217.147261562192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818691476.47/warc/CC-MAIN-20170925111643-20170925131643-00224.warc.gz"}
|
https://www.physicsforums.com/threads/moebius-transformation-3-points.716863/
|
# Möbius transformation, 3 points
1. Oct 16, 2013
### usn7564
1. The problem statement, all variables and given/known data
Find the Möbius transformation that maps
0 -> -1
1 -> infinity
infinity -> 1
2. Relevant equations
$$w = f(z) = \frac{az + b}{cz+d}$$
3. The attempt at a solution
My first idea was to attempt to solve it as a normal system of eq's but that quickly falls apart due to infinity being there. Been toying with the idea of using the fact that lines will map to lines or circles but don't quite know how to apply it.
And yes I know there's a formula for these exact types of questions but it's in the next sub chapter, book figures it's possible to do without knowing that. Just can't for the life of me figure it out.
2. Oct 16, 2013
### camillio
Start with 0 -> -1. This gives you relation between b and d. Then simply think in limits, when f(z) -> inf, f(z) -> 1?
3. Oct 16, 2013
### usn7564
Got it to work, was a bit too caught up with the fact that infinity was defined as a point (which I need to read up on more) which just messed with me. Just looking at the limits it wasn't actually bad at all.
Thanks.
|
2017-11-22 04:38:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5716045498847961, "perplexity": 620.7477450538657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806455.2/warc/CC-MAIN-20171122031440-20171122051440-00180.warc.gz"}
|
http://clay6.com/qa/24364/a-person-carrying-a-plane-mirror-facing-his-back-is-moving-with-a-speed-of-
|
Browse Questions
# A person carrying a plane mirror facing his back is moving with a speed of V m/s. Another person is following the mirror at the speed of u m/s. What is the speed of approach of the person to the image in the mirrors.
$(a)\;u-2v \\ (b)\;v-2u \\ (c)\;v-u \\ (d)2(u-v)$
The speed with which the person following is approaching the mirror is
$(u-v) m/s$
$\therefore$ the speed with which he is approaching his image in the mirror is $2 (u-v) m/s$
Hence d is the correct answer.
|
2017-04-25 10:37:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.777190625667572, "perplexity": 536.8711209748867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120338.97/warc/CC-MAIN-20170423031200-00576-ip-10-145-167-34.ec2.internal.warc.gz"}
|
http://tex.stackexchange.com/questions/20128/writing-gauss-jordan-transformation-matrix
|
# Writing Gauss-Jordan Transformation Matrix
I am having trouble writing procedure for Gauss-Jordan transformation matrix in LaTeX.
I want to write something like
r2 - r1 r3-2r1
[A|I] -------> [A'|I'] ----->
and so on with the corresponding transformations (e.g., r2-r1) on top of the arrows. Could you give me an example code of how to do this? I only know that I need \bmatrix for matrices with [], but I do not know how to split them in between with a vertical line |.
Thanks a lot!
-
You might find the answers at this question useful as well: tex.stackexchange.com/q/2233/86 – Loop Space Jun 7 '11 at 7:01
With amsmath you can use
\xrightarrow{r_{2}-r_{1}}
that will produce an arrow wide enough to accommodate the thing on top.
In order to split matrices you need to use array
\left[\begin{array}{@{}cc|c@{}}
<entries>
\end{array}\right]
With the @{} at both sides you'll get output similar to that of bmatrix.
-
Thanks I got it now! – Erik Jun 6 '11 at 23:33
|
2014-07-22 09:39:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7608936429023743, "perplexity": 1387.147358762547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997857714.64/warc/CC-MAIN-20140722025737-00184-ip-10-33-131-23.ec2.internal.warc.gz"}
|
http://www.cpt.univ-mrs.fr/spip.php?page=jour&lang=fr&date=2017-03-24
|
# Vendredi 24 mars 2017
14h00 – 15h00, Amphi 5 du CPT
### Causality without events
#### Michal ECKSTEIN (Uniwersytet Jagiellonski, Krakow)
Einstein’s causality is one of the fundamental principles underlying modern physical theories. Whereas it is readily implemented in classical physics founded on Lorentzian geometry, its status in quantum theory has long been controversial. It is usually believed that the quantum nature of spacetime at small scales induces the breakdown of causality, although there is no empirical evidence that would support such a view. In my talk, I will argue that one can have a rigid causal structure even in a quantum spacetime’ without the local events. To this end I will draw from the mathematical richness of noncommutative geometry à la Connes and an operational viewpoint on physics. I will illustrate the general concept with an almost commutative’ toy-model and discuss the potential empirical consequences.
mars 2017 :
|
2018-04-22 04:53:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4966945946216583, "perplexity": 1390.419186322735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945493.69/warc/CC-MAIN-20180422041610-20180422061610-00203.warc.gz"}
|
https://www.eml-unitue.de/publication/Large-Loss-Matters-in-Weakly-Supervised-Multi-Label-Classification
|
Large Loss Matters in Weakly Supervised Multi-Label Classification
Youngwook Kim*, Jae Myung Kim*, Zeynep Akata, Jungwoo Lee
IEEE Conference on Computer Vision and Pattern Recognition, CVPR
2022
Abstract
Weakly supervised multi-label classification (WSML) task, which is to learn a multi-label classification using partially observed labels per image, is becoming increasingly important due to its huge annotation cost. In this work, we first regard unobserved labels as negative labels, casting the WSML task into noisy multi-label classification. From this point of view, we empirically observe that memorization effect, which was first discovered in a noisy multi-class setting, also occurs in a multi-label setting. That is, the model first learns the representation of clean labels, and then starts memorizing noisy labels. Based on this finding, we propose novel methods for WSML which reject or correct the large loss samples to prevent model from memorizing the noisy label. Without heavy and complex components, our proposed methods outperform previous state-of-the-art WSML methods on several partial label settings including Pascal VOC 2012, MS COCO, NUSWIDE, CUB, and OpenImages V3 datasets. Various analysis also show that our methodology actually works well, validating that treating large loss properly matters in a weakly supervised multi-label classification.
# Introduction
In a weakly supervised multi-label classification (WSML) task, labels are given as a form of partial label, which means only a small amount of categories is annotated per image. This setting reflects the recently released large-scale multi-label datasets, e.g. JFT-300M or InstagramNet-1B, which provide only partial label. Thus, it is becoming increasingly important to develop learning strategies with partial labels.
# Approach
## Target with Assume Negative
Let us define an input $x \in X$ and a target $y \in Y$ where $X$ and $Y$ compose a dataset $D$. In a weakly supervised multi-label learning for image classification task, $X$ is an image set and $Y = \{0,1,u\}^K$ where $u$ is an annotation of `unknown', i.e. unobserved label, and $K$ is the number of categories. For the target $y$, let $S^{p}=\{i|y_i=1\}$, $S^{n}=\{i|y_i=0\}$, and $S^{u}=\{i|y_i=u\}$. In a partial label setting, small amount of labels are known, thus $|S^{p}| + |S^{n}| < K$. We start our method with Assume Negative (AN) where all the unknown labels are regarded as negative. We call this modified target as $y^{AN}$,
$y^{AN}_i = \begin{cases} 1, & i \in S^{p}\\ 0, & i \in S^{n} \cup S^{u} , \end{cases}$
and the set of all $y^{AN}$ as $Y^{AN}$. $\{y_i^{AN} | i \in S^{p}\}$ and $\{y_i^{AN} | i \in S^{n}\}$ are the set where each element is true positive and true negative, respectively. $\{y_i^{AN} | i \in S^{u}\}$ contains both true negative and false negative. % and $\{y_i^{AN} | i \in S^{u}\}$ are true negative and false negative, respectively. % Therefore, $y_i^{AN} = 0$ would be either true negative or false negative. % Note that in WSML with AN target, there are no labels with false positive. The naive way of training the model $f$ with the dataset $D^{\prime} = (X, Y^{AN})$ is to minimize the loss function $L$,
$L = \frac{1}{|D^{\prime}|} \sum_{y^{AN} \in D^{\prime}} \frac{1}{K} \sum_{i=1}^{K} \mathrm{BCELoss} (f(x)_i, y_i^{AN}) ,$
where $f(\cdot) \in [0,1]^{K}$ and $\mathrm{BCELoss}(\cdot, \cdot)$ is the binary cross entropy loss between the function output and the target. We call this naive method as Naive AN.
## Memorization effect in WSML
We observe that a memorization effect occurs in WSML when the model is trained with the dataset with AN target. To confirm this, we make the following experimental setting. We convert Pascal VOC 2012 dataset into partial label one by randomly remaining only one positive label for each image and regard other labels as unknown (dataset $D$). These unknown labels are then assumed as negative (dataset $D^{\prime}$). We train ResNet-50 model with $D^{\prime}$ using the loss function $L$ in the equation above. We look at the trend of loss value corresponding to each label $y_i^{AN}$ in a training dataset while the model is trained. A single example for true negative label and false negative label is shown in the above figure. For a true negative label, the corresponding loss value keeps decreasing as the number of iteration increases (blue line). Meanwhile, the loss of a false negative label slightly increases in the initial learning phase, and then reaches the highest in the middle phase followed by decreasing to reach near $0$ at the end (red line). This implies that the model starts to memorize the wrong label from the middle phase.
## Method
In this section, we propose novel methods for WSML motivated from the ideas of noisy multi-class learning which ignores the large loss during training the model. Remind that in WSML with AN target, the model starts memorizing the false negative label in the middle of the training with having a large loss at that time. While we can only observe that the label in the set $\{y_i^{AN}| i \in S^{u}\}$ is negative and cannot explicitly discriminate whether it is false or true, we are able to implicitly distinguish between them. It is because the loss from false negative is likely to be larger than the loss from true negative before memorization starts. Therefore, we manipulate the label in the set $\{y_i^{AN}| i \in S^{u}\}$ that corresponds to the large loss value during the training process to prevent the model from memorizing false negative labels. We do not manipulate the known true labels, i.e. $\{y_i^{AN}| i \in S^{p}\cup S^{n}\}$, since they are all clean labels. Instead of using the original loss function, we further introduce the weight term $\lambda_i$ in the loss function,
$L = \frac{1}{|D^{\prime}|} \sum_{(x, y^{AN}) \in D^{\prime}} \frac{1}{K} \sum_{i=1}^{K} l_i \times \lambda_i .$
We define $l_i = \mathrm{BCELoss} \, (f(x)_i, y_i^{AN})$ where arguments of function $l_i$, that are $f(x)$ and $y^{AN}$, are omitted for convenience. The term $\lambda_i$ is defined as a function, $\lambda_i=\lambda(f(x)_i, y_i^{AN})$, where arguments are also omitted for convenience. $\lambda_i$ is the weighted value for how much the loss $l_i$ should be considered in the loss function. Intuitively, $\lambda_i$ should be small when $i \in S^{u}$ and the loss $l_i$ has high value in the middle of the training, that is, to ignore that loss since it is likely to be the loss from a false negative sample. We set $\lambda_i=1$ when $i \in S^{p}\cup S^{n}$ since the label $y_i^{AN}$ from these indices is a clean label. We present three different schemes of offering the weight $\lambda_i$ for $i\in S^{u}$. The schematic description is shown below.
Large Loss Rejection. This is to gradually increase the rejection rate during the training process. We set the function $\lambda_i$ as
$\lambda_i = \begin{cases} 0, & i\in S^{u} \mathrm{and} l_i > R(t) \\ 1, & \mathrm{otherwise} , \end{cases}$
where $t$ is the number of current epochs in the training process and $R(t)$ is the loss value that has $[(t-1) \cdot \Delta_{rel}]\%$ largest value in the loss set $\{ l_i | (x, y^{AN}) \in D^{\prime}, i\in S^{u}\}$. $\Delta_{rel}$ is a hyperparameter that determines the speed of increase of rejection rate. Defining $\lambda_i$ as above makes rejecting large loss samples in the loss function. We do not reject any loss values at the first epoch, $t=1$, since the model learns clean patterns in the initial phase. In practice, we use mini-batch in each iteration instead of full batch $D^{\prime}$ for composing the loss set. We call this method as LL-R. We also propose LL-Ct and LL-Cp which refer to large loss correction (temporary) and large loss correction (permanent), respectively. The readers can find these variants in detail in the paper.
## Results
The figure above shows the qualitative result of LL-R. The arrow indicates the change of categories with positive labels during training and GT indicates actual ground truth positive labels for a training image. We see that although not all ground truth positive labels are given, our proposed method progressively corrects the category of unannotated GT as positive. We also observe in the first three columns that a category that has been corrected once continues to be corrected in subsequent epochs, even though we perform correction temporarily for each epoch. This conveys that LL-R successfully keeps the model from memorizing false negatives. We also report the failure case of our method on the rightmost side where the model confuses the car as truck which is a similar category and misunderstands the absent category person as present. The quantitative comparison and more analysis of our method can be found in the paper.
(c) 2023 Explainable Machine Learning Tübingen Impressum
|
2023-03-30 05:53:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 66, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7108359932899475, "perplexity": 703.3932877846916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949097.61/warc/CC-MAIN-20230330035241-20230330065241-00752.warc.gz"}
|
https://einsteintoolkit.org/thornguide/WVUThorns/IllinoisGRMHD/documentation.html
|
## IllinoisGRMHD: A Compact, Dynamic-Spacetime General Relativistic Magnetohydrodynamics Code for Easy User Adoption
Date: 2015-10-12 12:00:00 -0600 (Mon, 12 Oct 2015)
Abstract
IllinoisGRMHD solves the equations of General Relativistic MagnetoHydroDynamics (GRMHD) using a high-resolution shock capturing scheme. It is a rewrite of the Illinois Numerical Relativity (ILNR) group’s GRMHD code, and generates results that agree to roundoff error with that original code. Its feature set coincides with the features of the ILNR group’s recent code (ca. 2009–2014), which was used in their modeling of the following systems:
1. Magnetized circumbinary disk accretion onto binary black holes
2. Magnetized black hole–neutron star mergers
3. Magnetized Bondi flow, Bondi-Hoyle-Littleton accretion
4. White dwarf–neutron star mergers
IllinoisGRMHD is particularly good at modeling GRMHD flows into black holes without the need for excision. Its HARM-based conservative-to-primitive solver has also been modified to check the physicality of conservative variables prior to primitive inversion, and move them into the physical range if they become unphysical.
### 1 Introduction
Currently IllinoisGRMHD consists of
1. the Piecewise Parabolic Method (PPM) for reconstruction,
2. the Harten, Lax, van Leer (HLL/HLLE) approximate Riemann solver, and
3. a modified HARM Conservative-to-Primitive solver (see REQUIRED CITATION #2 below).
IllinoisGRMHD evolves the vector potential ${A}_{\mu }$ (on staggered grids) instead of the magnetic fields (${B}^{i}$) directly, to guarantee that the magnetic fields will remain divergenceless even at AMR boundaries. On uniform resolution grids, this vector potential formulation produces results equivalent to those generated using the standard, staggered flux-CT scheme. This scheme is based on that of Del Zanna (2003, see below OPTIONAL CITATION #1).
For further information about motivations, basic equations, how IllinoisGRMHD works, as well as basic code test results, please see the IllinoisGRMHD code announcement paper, at
http://arxiv.org/abs/1501.07276. If you use IllinoisGRMHD for your research, you are asked to include the REQUIRED CITATIONS listed below in your citations.
For a quick “Guide to Getting Started”, please visit the IllinoisGRMHD web page:
http://math.wvu.edu/~zetienne/ILGRMHD/
===================
REQUIRED CITATIONS:
===================
1. IllinoisGRMHD code announcement paper: Class. Quantum Grav. 32 (2015) 175009,
(http://arxiv.org/abs/1501.07276)
2. Noble, S. C., Gammie, C. F., McKinney, J. C., & Del Zanna, L. 2006, Astrophysical Journal, 641, 626.
|
2018-11-16 07:30:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.419964998960495, "perplexity": 9134.448444033604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742981.53/warc/CC-MAIN-20181116070420-20181116092420-00360.warc.gz"}
|
http://mathhelpforum.com/advanced-statistics/163002-hypothesis-testing.html
|
# Math Help - hypothesis testing
1. ## hypothesis testing
(a) a sample of 14 children from parents without diabetes had a mean fasting glucose level of 82.2mg/dL with sample standard deviation 2.49mg/dL.
give a 95% confidence interval for the underlying standard deviation of fasting glucose level for children from parents without diabetes.
(b)a second sample of 14 children from parents both with type II diabetes had a mean fasting glucose level of 86.1mg/dL with sample standard deviation 2.09mg/dL. perform a hypothesis test to determine if the variability in glucose levels is the same for both groups of children. include a statement of $H_{0}$ and $H_{1}$, the test statistic and its distribution under $H_{0}$
(c) based on your answer to (b), perform a hypothesis test to determine if the mean glucose level is higher for children from parents with type II diabetes than from parents without diabetes. include a statement of $H_{0}$ and $H_{1}$, the test statistic and its distribution under $H_{0}$
2. Originally Posted by wik_chick88
(a) a sample of 14 children from parents without diabetes had a mean fasting glucose level of 82.2mg/dL with sample standard deviation 2.49mg/dL.
give a 95% confidence interval for the underlying standard deviation of fasting glucose level for children from parents without diabetes.
(b)a second sample of 14 children from parents both with type II diabetes had a mean fasting glucose level of 86.1mg/dL with sample standard deviation 2.09mg/dL. perform a hypothesis test to determine if the variability in glucose levels is the same for both groups of children. include a statement of $H_{0}$ and $H_{1}$, the test statistic and its distribution under $H_{0}$
(c) based on your answer to (b), perform a hypothesis test to determine if the mean glucose level is higher for children from parents with type II diabetes than from parents without diabetes. include a statement of $H_{0}$ and $H_{1}$, the test statistic and its distribution under $H_{0}$
What have you tried? What is the specific difficulty you are having here?
CB
3. for part (a) i think i have figured it out...
$\sigma$ = ?
n = 14
s = 2.49
from the chi-square-distribution table i got
$\chi^{2}_{13;0.025}$ = 5.01
$\chi^{2}_{13;0.975}$ = 24.7
so a 95% confidence interval for $\chi^{2}$ is
$(\frac{2.49^{2} * 13}{24.7},\frac{2.49^{2} * 13}{5.01})$
= (3.263, 16.088)
so a 95% confidence interval for $\chi$ is
(1.806, 4.011)
firstly, have i done this correctly? i was referring to the poorly set out example in our textbook! also, can u please explain why you use the values $\gamma$= 0.025 and 0.975 when referring to the chi-square-distribution table?
4. You want .95 probability between any two percentiles
so you can use those two, the .025 and the.975 percentiles, which most do
That puts equal probability in each tail
Or you can use one sided CIs where all the .05 is in one tail
or use .02 and .97 etc.........
And these are CIs for $\sigma^2$ and $\sigma$
$\chi^2$ is the underlying distribution
5. In part (b) you want to perfrom an F test comparing the two POPULATION variances
Then in (c) you will a t test comparing the two means.
NOW if you come away from (b) saying that the two st deviations are equal you
will perform a pooled t test, where you pool the sample st deviations
IF not you will use the non pooled test where you need to approximate the nasty degrees of freedom using satterwhite's approximation
Student's t-test - Wikipedia, the free encyclopedia
also http://www.bsos.umd.edu/socy/alan/ha..._two_means.pdf
|
2014-04-20 20:33:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5447307229042053, "perplexity": 1066.4756033853903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/science/physics/physics-for-scientists-and-engineers-a-strategic-approach-with-modern-physics-4th-edition/chapter-7-newton-s-third-law-exercises-and-problems-page-180/53
|
## Physics for Scientists and Engineers: A Strategic Approach with Modern Physics (4th Edition)
$a = 1.77~m/s^2$
Note that the 1.0-kg block has the same magnitude of acceleration as the 2.0-kg block. We can set up a force equation for the 1.0-kg block. Let $m_1$ be the mass of this block. $\sum F = m_1~a$ $T-F_f = m_1~a$ $T-m_1~g~\mu_k = m_1~a$ $T = m_1~g~\mu_k + m_1~a$ We can use the expression for tension $T$ in the force equation for the 2.0-kg block. Let $m_2$ be the mass of this block. $\sum F = m_2~a$ $F - T - F_{f1}- F_{f2} = m_2~a$ $F - (m_1~g~\mu_k + m_1~a) - m_1~g~\mu_k- (m_1+m_2)~g~\mu_k = m_2~a$ $F - 3m_1~g~\mu_k - m_2~g~\mu_k = (m_1+m_2)~a$ $a = \frac{F - 3m_1~g~\mu_k - m_2~g~\mu_k}{m_1+m_2}$ $a = \frac{20~N - (3)(1.0~kg)(9.80~m/s^2)(0.30) - (2.0~kg)(9.80~m/s^2)(0.30)}{1.0~kg+2.0~kg}$ $a = 1.77~m/s^2$
|
2018-07-19 17:59:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8266317844390869, "perplexity": 230.37024577631445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591150.71/warc/CC-MAIN-20180719164439-20180719184439-00284.warc.gz"}
|
http://www.whxb.pku.edu.cn/CN/10.3866/PKU.WHXB201904056
|
### 静电纺纳米纤维基超级电容器无粘合剂电极材料的研究进展
• 收稿日期:2019-04-12 录用日期:2019-05-16 发布日期:2019-05-30
• 通讯作者: 王策 E-mail:cwang@jlu.edu.cn
• 作者简介:王策,1982毕业于吉林大学化学系高分子专业,获得学士学位。1995年在奥地利维也纳技术大学获得博士学位。随后在美国爵硕大学开展博士后工作。现任吉林大学化学学院麦克德尔米德实验室教授,博士生导师。主要研究方向是静电纺复合纳米纤维的制备及其在电磁屏蔽、传感、环境、能源等领域的应用
• 基金资助:
国家自然科学基金(21875084);国家自然科学基金(51773075);吉林省科技厅(20190101013JH)
### Research on Electrospun Nanofiber-Based Binder-Free Electrode Materials for Supercapacitors
Di Tian,Xiaofeng Lu,Weimo Li,Yue Li,Ce Wang*()
• Received:2019-04-12 Accepted:2019-05-16 Published:2019-05-30
• Contact: Ce Wang E-mail:cwang@jlu.edu.cn
• Supported by:
the National Natural Science Foundation of China(21875084);the National Natural Science Foundation of China(51773075);the Project of Science and Technology Agency, Jilin Province, China(20190101013JH)
Abstract:
The increased demand for high-performance supercapacitors has fueled the development of electrode materials. As an important part of supercapacitors, the electrochemical performance of the supercapacitor is directly affected by the specific surface area, conductivity, electrochemical activity, and stability of electrode materials. In the traditional manufacturing method, a binder must be added to powdered electrode materials to enhance their combination with the current collector, which could lead to morphology damage, pore blockage, and reduced conductivity of active materials that will adversely affect their electrochemical behavior. Thus, research on binder-free electrode materials has attracted significant interest. Recently, electrospun nanofibers have been widely used as supercapacitor electrode materials because of their advantages such as large specific surface area, high porosity, and easy preparation. The attainable continuity and flexibility endow electrospun nanofiber membranes outstanding performance among large numbers of binder-free materials. This review considers recent studies on electrospun nanofiber-based binder-free electrode materials for supercapacitors, including carbon nanofibers, carbon-based composite nanofibers, conductive polymer-based composite nanofibers, and metal oxide nanofibers. These studies demonstrate that pore structure construction, activation treatment, and nitrogen doping can improve the specific surface area, electrochemical activity, wettability, and graphitization degree of carbon nanofibers to enhance their electrochemical properties. Moreover, combining carbon nanofibers with metal oxides, metal sulfides, metal carbides, and conductive polymers by methods such as blending, chemical deposition, electrochemical deposition, etc., can improve their capacitance, rate performance, and cycling stabilities, which complement the advantages of different materials and proves that the performance of multicomponent materials is better than that of single-component materials. In particular, conductive polymers based on composite nanofibers and metal oxide nanofibers can be used as binder-free materials by electrospinning technology, but their dependence on other substances as well as fragile fiber membrane limit their widespread application. Therefore, in order to ensure the continuity or flexibility of fiber membranes, carbon-based composite nanofibers with multicomponent and hierarchical structure could potentially be used/constructed as binder-free electrode materials. Combinations with new types of electrode materials such as metal-organic frameworks (MOF), covalent organic frameworks (COF), MXenes, metal nitride, metal phosphide, etc., and the preparation of materials with novel structures have also been attempted. In order to realize the practical application of eletrospun nanofiber-based binder-free electrode materials, more attention should be given to improving their mechanical properties, production efficiency, and research on the application of flexible devices. We hope that this review can broaden ideas for improving the development and application of electrospun nanofiber-based binder-free electrode materials for high-performance supercapacitors.
MSC2000:
• O646
|
2020-02-22 11:22:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20912501215934753, "perplexity": 10045.165711727292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145657.46/warc/CC-MAIN-20200222085018-20200222115018-00422.warc.gz"}
|
http://icsepapers.com/blog/tag/machine
|
# Blog Tag Machine
## Terms related to machines
Effort(E): The force applied on a machine to do some mechenical work. Load(L): The weight moved or the resistance overcome by a machine in doing some mechenical work. Fulcrum(F): A fixed point or an axil around which a machine turn i doing mechenical work. Effort arm: The perpendicular distance between the effort point and the fulcrum of a mach . . .
## Levers of the third order
In this types of levers the effort is situated between the load and the fulcrum. In the levers of third order, the effort arm is always smaller than the load arm. Thus more effort is required to lift a lighter load, This lever is also called speed multiplier lever as load moves through lerger distance as compared to the effort. For examples: Sugar . . .
Pages
|
2022-08-11 20:17:17
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8308312296867371, "perplexity": 2622.8828467681783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571502.25/warc/CC-MAIN-20220811194507-20220811224507-00509.warc.gz"}
|
https://masteryourtest.com/study/DC/Current
|
# What Is Current? Current Explained
fact
Current is measured in a unit called Amperes (shortened to Amps and written 'A')
fact
Current is represented with an uppercase or lowercase "I" in equations or on circuits.
fact
Current is the number of electrons moving past a single point in a circuit per second. 1A of current means 1 Coulomb (some unknown number of coulombs is written "Q" in equations) of electrons passes a point every second. 1 Coulomb is just a number and it equals $$6.2415091 \times 10^{18}$$.
law
The fact above is written algebraically as: $$I = \frac{Q}{t}$$ That's read as "the current equals the coulombs of electrons divided by the number of seconds"
fact
There are two different ways to show the direction of current. "Electron flow" and "Conventional flow". "Electron flow" is the direction that electrons actually move in the circuit however "Conventional flow" is the opposite and is the one we use for historical reasons. Even though it is "wrong" we still get all the correct answers. In "conventional flow" we show positive charges going from high voltages to low voltages or leaving the positive end of a battery and going to the negative end. Whenever I or anybody else mentions "current" we mean "Conventional flow" current.
fact
We measure current with a device called an Ammeter which has the following symbol:
fact
An ammeter must be connected in series with whatever you want to measure the current of so that the current flows through it. We'll cover more on what "in series" means in later sections.
example
Assemble a circuit to measure the current through a resistor connected to a battery
To measure the current flowing through the resistor R1 we assemble the circuit as follows: As you can see all of the current that flows through R1 will also flow through the ammeter and no current flows through the ammeter that didn't also flow through the resistor.
fact
An Ideal ammeter has zero resistance. This is of course impossible but it is the goal in ammeter design. This is so that adding the ammeter into the circuit doesn't change the current in that part of the circuit. If the ammeter had a large resistance it would lower the current from its "normal" value when we tried to measure it.
practice problems
|
2019-08-19 18:22:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6242725849151611, "perplexity": 626.1183279613743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314904.26/warc/CC-MAIN-20190819180710-20190819202710-00404.warc.gz"}
|
https://es.mathworks.com/help/simbio/ref/sbiofit.html
|
# sbiofit
Perform nonlinear least-squares regression
Statistics and Machine Learning Toolbox™, Optimization Toolbox™, and Global Optimization Toolbox are recommended for this function.
## Description
example
fitResults = sbiofit(sm,grpData,ResponseMap,estiminfo) estimates parameters of a SimBiology model sm using nonlinear least-squares regression.
grpData is a groupedData object specifying the data to fit. ResponseMap defines the mapping between the model components and response data in grpData. estimatedInfo is an EstimatedInfo object that defines the estimated parameters in the model sm. fitResults is a OptimResults object or NLINResults object or a vector of these objects.
sbiofit uses the first available estimation function among the following: lsqnonlin (Optimization Toolbox), nlinfit (Statistics and Machine Learning Toolbox), or fminsearch.
By default, each group in grpData is fit separately, resulting in group-specific parameter estimates. If the model contains active doses and variants, they are applied before the simulation.
example
fitResults = sbiofit(sm,grpData,ResponseMap,estiminfo,dosing) uses the dosing information specified by a matrix of SimBiology dose objects dosing instead of using the active doses of the model sm if there is any.
example
fitResults = sbiofit(sm,grpData,ResponseMap,estiminfo,dosing,functionName) uses the estimation function specified by functionName. If the specified function is unavailable, a warning is issued and the first available default function is used.
example
fitResults = sbiofit(sm,grpData,ResponseMap,estiminfo,dosing,functionName,options) uses the additional options specified by options for the function functionName.
example
fitResults = sbiofit(sm,grpData,ResponseMap,estiminfo,dosing,functionName,options,variants) applies variant objects specified as variants instead of using any active variants of the model.
example
fitResults = sbiofit(_,Name,Value) uses additional options specified by one or more Name,Value pair arguments.
example
[fitResults,simdata] = sbiofit(_) also returns a vector of SimData objects simdata using any of the input arguments in the previous syntaxes.
Note
• sbiofit unifies sbionlinfit and sbioparamestim estimation functions. Use sbiofit to perform nonlinear least-squares regression.
• sbiofit simulates the model using a SimFunction object, which automatically accelerates simulations by default. Hence it is not necessary to run sbioaccelerate before you call sbiofit.
## Examples
collapse all
Background
This example shows how to fit an individual's PK profile data to one-compartment model and estimate pharmacokinetic parameters.
Suppose you have drug plasma concentration data from an individual and want to estimate the volume of the central compartment and the clearance. Assume the drug concentration versus the time profile follows the monoexponential decline ${C}_{t}={C}_{0}{e}^{-{k}_{e}t}$, where ${C}_{t}$ is the drug concentration at time t, ${C}_{0}$ is the initial concentration, and ${k}_{e}$ is the elimination rate constant that depends on the clearance and volume of the central compartment ${k}_{e}=Cl/V$.
The synthetic data in this example was generated using the following model, parameters, and dose:
• One-compartment model with bolus dosing and first-order elimination
• Volume of the central compartment (Central) = 1.70 liter
• Clearance parameter (Cl_Central) = 0.55 liter/hour
• Constant error model
• Bolus dose of 10 mg
The data is stored as a table with variables Time and Conc that represent the time course of the plasma concentration of an individual after an intravenous bolus administration measured at 13 different time points. The variable units for Time and Conc are hour and milligram/liter, respectively.
plot(data.Time,data.Conc,'b+')
xlabel('Time (hour)');
ylabel('Drug Concentration (milligram/liter)');
Convert to groupedData Format
Convert the data set to a groupedData object, which is the required data format for the fitting function sbiofit for later use. A groupedData object also lets you set independent variable and group variable names (if they exist). Set the units of the Time and Conc variables. The units are optional and only required for the UnitConversion feature, which automatically converts matching physical quantities to one consistent unit system.
gData = groupedData(data);
gData.Properties.VariableUnits = {'hour','milligram/liter'};
gData.Properties
ans = struct with fields:
Description: ''
UserData: []
DimensionNames: {'Row' 'Variables'}
VariableNames: {'Time' 'Conc'}
VariableDescriptions: {}
VariableUnits: {'hour' 'milligram/liter'}
VariableContinuity: []
RowNames: {}
CustomProperties: [1x1 matlab.tabular.CustomProperties]
GroupVariableName: ''
IndependentVariableName: 'Time'
groupedData automatically set the name of the IndependentVariableName property to the Time variable of the data.
Construct a One-Compartment Model
Use the built-in PK library to construct a one-compartment model with bolus dosing and first-order elimination where the elimination rate depends on the clearance and volume of the central compartment. Use the configset object to turn on unit conversion.
pkmd = PKModelDesign;
pkc1.DosingType = 'Bolus';
pkc1.EliminationType = 'linear-clearance';
pkc1.HasResponseVariable = true;
model = construct(pkmd);
configset = getconfigset(model);
configset.CompileOptions.UnitConversion = true;
For details on creating compartmental PK models using the SimBiology® built-in library, see Create Pharmacokinetic Models.
Define Dosing
Define a single bolus dose of 10 milligram given at time = 0. For details on setting up different dosing schedules, see Doses in SimBiology Models.
dose = sbiodose('dose');
dose.TargetName = 'Drug_Central';
dose.StartTime = 0;
dose.Amount = 10;
dose.AmountUnits = 'milligram';
dose.TimeUnits = 'hour';
Map Response Data to the Corresponding Model Component
The data contains drug concentration data stored in the Conc variable. This data corresponds to the Drug_Central species in the model. Therefore, map the data to Drug_Central as follows.
responseMap = {'Drug_Central = Conc'};
Specify Parameters to Estimate
The parameters to fit in this model are the volume of the central compartment (Central) and the clearance rate (Cl_Central). In this case, specify log-transformation for these biological parameters since they are constrained to be positive. The estimatedInfo object lets you specify parameter transforms, initial values, and parameter bounds if needed.
paramsToEstimate = {'log(Central)','log(Cl_Central)'};
estimatedParams = estimatedInfo(paramsToEstimate,'InitialValue',[1 1],'Bounds',[1 5;0.5 2]);
Estimate Parameters
Now that you have defined one-compartment model, data to fit, mapped response data, parameters to estimate, and dosing, use sbiofit to estimate parameters. The default estimation function that sbiofit uses will change depending on which toolboxes are available. To see which function was used during fitting, check the EstimationFunction property of the corresponding results object.
fitConst = sbiofit(model,gData,responseMap,estimatedParams,dose);
Display Estimated Parameters and Plot Results
Notice the parameter estimates were not far off from the true values (1.70 and 0.55) that were used to generate the data. You may also try different error models to see if they could further improve the parameter estimates.
fitConst.ParameterEstimates
ans=2×4 table
Name Estimate StandardError Bounds
______________ ________ _____________ __________
{'Central' } 1.6993 0.034821 1 5
{'Cl_Central'} 0.53358 0.01968 0.5 2
s.Labels.XLabel = 'Time (hour)';
s.Labels.YLabel = 'Concentration (milligram/liter)';
plot(fitConst,'AxesStyle',s);
Use Different Error Models
Try three other supported error models (proportional, combination of constant and proportional error models, and exponential).
fitProp = sbiofit(model,gData,responseMap,estimatedParams,dose,...
'ErrorModel','proportional');
fitExp = sbiofit(model,gData,responseMap,estimatedParams,dose,...
'ErrorModel','exponential');
fitComb = sbiofit(model,gData,responseMap,estimatedParams,dose,...
'ErrorModel','combined');
Use Weights Instead of an Error Model
You can specify weights as a numeric matrix, where the number of columns corresponds to the number of responses. Setting all weights to 1 is equivalent to the constant error model.
weightsNumeric = ones(size(gData.Conc));
fitWeightsNumeric = sbiofit(model,gData,responseMap,estimatedParams,dose,'Weights',weightsNumeric);
Alternatively, you can use a function handle that accepts a vector of predicted response values and returns a vector of weights. In this example, use a function handle that is equivalent to the proportional error model.
weightsFunction = @(y) 1./y.^2;
fitWeightsFunction = sbiofit(model,gData,responseMap,estimatedParams,dose,'Weights',weightsFunction);
Compare Information Criteria for Model Selection
Compare the loglikelihood, AIC, and BIC values of each model to see which error model best fits the data. A larger likelihood value indicates the corresponding model fits the model better. For AIC and BIC, the smaller values are better.
allResults = [fitConst,fitWeightsNumeric,fitWeightsFunction,fitProp,fitExp,fitComb];
errorModelNames = {'constant error model','equal weights','proportional weights', ...
'proportional error model','exponential error model',...
'combined error model'};
LogLikelihood = [allResults.LogLikelihood]';
AIC = [allResults.AIC]';
BIC = [allResults.BIC]';
t = table(LogLikelihood,AIC,BIC);
t.Properties.RowNames = errorModelNames;
t
t=6×3 table
LogLikelihood AIC BIC
_____________ _______ _______
constant error model 3.9866 -3.9732 -2.8433
equal weights 3.9866 -3.9732 -2.8433
proportional weights -3.8472 11.694 12.824
proportional error model -3.8257 11.651 12.781
exponential error model 1.1984 1.6032 2.7331
combined error model 3.9163 -3.8326 -2.7027
Based on the information criteria, the constant error model (or equal weights) fits the data best since it has the largest loglikelihood value and the smallest AIC and BIC.
Display Estimated Parameter Values
Show the estimated parameter values of each model.
Estimated_Central = zeros(6,1);
Estimated_Cl_Central = zeros(6,1);
t2 = table(Estimated_Central,Estimated_Cl_Central);
t2.Properties.RowNames = errorModelNames;
for i = 1:height(t2)
t2{i,1} = allResults(i).ParameterEstimates.Estimate(1);
t2{i,2} = allResults(i).ParameterEstimates.Estimate(2);
end
t2
t2=6×2 table
Estimated_Central Estimated_Cl_Central
_________________ ____________________
constant error model 1.6993 0.53358
equal weights 1.6993 0.53358
proportional weights 1.9045 0.51734
proportional error model 1.8777 0.51147
exponential error model 1.7872 0.51701
combined error model 1.7008 0.53271
Conclusion
This example showed how to estimate PK parameters, namely the volume of the central compartment and clearance parameter of an individual, by fitting the PK profile data to one-compartment model. You compared the information criteria of each model and estimated parameter values of different error models to see which model best explained the data. Final fitted results suggested both the constant and combined error models provided the closest estimates to the parameter values used to generate the data. However, the constant error model is a better model as indicated by the loglikelihood, AIC, and BIC information criteria.
Suppose you have drug plasma concentration data from three individuals that you want to use to estimate corresponding pharmacokinetic parameters, namely the volume of central and peripheral compartment (Central, Peripheral), the clearance rate (Cl_Central), and intercompartmental clearance (Q12). Assume the drug concentration versus the time profile follows the biexponential decline ${C}_{t}=A{e}^{-at}+B{e}^{-bt}$, where Ct is the drug concentration at time t, and a and b are slopes for corresponding exponential declines.
The synthetic data set contains drug plasma concentration data measured in both central and peripheral compartments. The data was generated using a two-compartment model with an infusion dose and first-order elimination. These parameters were used for each individual.
CentralPeripheralQ12Cl_Central
Individual 11.900.680.240.57
Individual 22.106.050.360.95
Individual 31.704.210.460.95
The data is stored as a table with variables ID, Time, CentralConc, and PeripheralConc. It represents the time course of plasma concentrations measured at eight different time points for both central and peripheral compartments after an infusion dose.
Convert the data set to a groupedData object which is the required data format for the fitting function sbiofit for later use. A groupedData object also lets you set independent variable and group variable names (if they exist). Set the units of the ID, Time, CentralConc, and PeripheralConc variables. The units are optional and only required for the UnitConversion feature, which automatically converts matching physical quantities to one consistent unit system.
gData = groupedData(data);
gData.Properties.VariableUnits = {'','hour','milligram/liter','milligram/liter'};
gData.Properties
ans =
struct with fields:
Description: ''
UserData: []
DimensionNames: {'Row' 'Variables'}
VariableNames: {'ID' 'Time' 'CentralConc' 'PeripheralConc'}
VariableDescriptions: {}
VariableUnits: {1x4 cell}
VariableContinuity: []
RowNames: {}
CustomProperties: [1x1 matlab.tabular.CustomProperties]
GroupVariableName: 'ID'
IndependentVariableName: 'Time'
Create a trellis plot that shows the PK profiles of three individuals.
sbiotrellis(gData,'ID','Time',{'CentralConc','PeripheralConc'},...
'Marker','+','LineStyle','none');
Use the built-in PK library to construct a two-compartment model with infusion dosing and first-order elimination where the elimination rate depends on the clearance and volume of the central compartment. Use the configset object to turn on unit conversion.
pkmd = PKModelDesign;
pkc1.DosingType = 'Infusion';
pkc1.EliminationType = 'linear-clearance';
pkc1.HasResponseVariable = true;
model = construct(pkmd);
configset = getconfigset(model);
configset.CompileOptions.UnitConversion = true;
Assume every individual receives an infusion dose at time = 0, with a total infusion amount of 100 mg at a rate of 50 mg/hour. For details on setting up different dosing strategies, see Doses in SimBiology Models.
dose = sbiodose('dose','TargetName','Drug_Central');
dose.StartTime = 0;
dose.Amount = 100;
dose.Rate = 50;
dose.AmountUnits = 'milligram';
dose.TimeUnits = 'hour';
dose.RateUnits = 'milligram/hour';
The data contains measured plasma concentrations in the central and peripheral compartments. Map these variables to the appropriate model species, which are Drug_Central and Drug_Peripheral.
responseMap = {'Drug_Central = CentralConc','Drug_Peripheral = PeripheralConc'};
The parameters to estimate in this model are the volumes of central and peripheral compartments (Central and Peripheral), intercompartmental clearance Q12, and clearance rate Cl_Central. In this case, specify log-transform for Central and Peripheral since they are constrained to be positive. The estimatedInfo object lets you specify parameter transforms, initial values, and parameter bounds (optional).
paramsToEstimate = {'log(Central)','log(Peripheral)','Q12','Cl_Central'};
estimatedParam = estimatedInfo(paramsToEstimate,'InitialValue',[1 1 1 1]);
Fit the model to all of the data pooled together, that is, estimate one set of parameters for all individuals. The default estimation method that sbiofit uses will change depending on which toolboxes are available. To see which estimation function sbiofit used for the fitting, check the EstimationFunction property of the corresponding results object.
pooledFit = sbiofit(model,gData,responseMap,estimatedParam,dose,'Pooled',true)
pooledFit =
OptimResults with properties:
ExitFlag: 3
Output: [1x1 struct]
GroupName: []
Beta: [4x3 table]
ParameterEstimates: [4x3 table]
J: [24x4x2 double]
COVB: [4x4 double]
CovarianceMatrix: [4x4 double]
R: [24x2 double]
MSE: 6.6220
SSE: 291.3688
Weights: []
LogLikelihood: -111.3904
AIC: 230.7808
BIC: 238.2656
DFE: 44
DependentFiles: {1x3 cell}
EstimatedParameterNames: {'Central' 'Peripheral' 'Q12' 'Cl_Central'}
ErrorModelInfo: [1x3 table]
EstimationFunction: 'lsqnonlin'
Plot the fitted results versus the original data. Although three separate plots were generated, the data was fitted using the same set of parameters (that is, all three individuals had the same fitted line).
plot(pooledFit);
Estimate one set of parameters for each individual and see if there is any improvement in the parameter estimates. In this example, since there are three individuals, three sets of parameters are estimated.
unpooledFit = sbiofit(model,gData,responseMap,estimatedParam,dose,'Pooled',false);
Plot the fitted results versus the original data. Each individual was fitted differently (that is, each fitted line is unique to each individual) and each line appeared to fit well to individual data.
plot(unpooledFit);
Display the fitted results of the first individual. The MSE was lower than that of the pooled fit. This is also true for the other two individuals.
unpooledFit(1)
ans =
OptimResults with properties:
ExitFlag: 3
Output: [1x1 struct]
GroupName: 1
Beta: [4x3 table]
ParameterEstimates: [4x3 table]
J: [8x4x2 double]
COVB: [4x4 double]
CovarianceMatrix: [4x4 double]
R: [8x2 double]
MSE: 2.1380
SSE: 25.6559
Weights: []
LogLikelihood: -26.4805
AIC: 60.9610
BIC: 64.0514
DFE: 12
DependentFiles: {1x3 cell}
EstimatedParameterNames: {'Central' 'Peripheral' 'Q12' 'Cl_Central'}
ErrorModelInfo: [1x3 table]
EstimationFunction: 'lsqnonlin'
Generate a plot of the residuals over time to compare the pooled and unpooled fit results. The figure indicates unpooled fit residuals are smaller than those of pooled fit as expected. In addition to comparing residuals, other rigorous criteria can be used to compare the fitted results.
t = [gData.Time;gData.Time];
res_pooled = vertcat(pooledFit.R);
res_pooled = res_pooled(:);
res_unpooled = vertcat(unpooledFit.R);
res_unpooled = res_unpooled(:);
plot(t,res_pooled,'o','MarkerFaceColor','w','markerEdgeColor','b')
hold on
plot(t,res_unpooled,'o','MarkerFaceColor','b','markerEdgeColor','b')
refl = refline(0,0); % A reference line representing a zero residual
title('Residuals versus Time');
xlabel('Time');
ylabel('Residuals');
legend({'Pooled','Unpooled'});
This example showed how to perform pooled and unpooled estimations using sbiofit. As illustrated, the unpooled fit accounts for variations due to the specific subjects in the study, and, in this case, the model fits better to the data. However, the pooled fit returns population-wide parameters. If you want to estimate population-wide parameters while considering individual variations, use sbiofitmixed.
This example shows how to estimate category-specific (such as young versus old, male versus female), individual-specific, and population-wide parameters using PK profile data from multiple individuals.
Background
Suppose you have drug plasma concentration data from 30 individuals and want to estimate pharmacokinetic parameters, namely the volumes of central and peripheral compartment, the clearance, and intercompartmental clearance. Assume the drug concentration versus the time profile follows the biexponential decline ${C}_{t}=A{e}^{-at}+B{e}^{-bt}$, where ${C}_{t}$ is the drug concentration at time t, and $a$ and $b$ are slopes for corresponding exponential declines.
This synthetic data contains the time course of plasma concentrations of 30 individuals after a bolus dose (100 mg) measured at different times for both central and peripheral compartments. It also contains categorical variables, namely Sex and Age.
clear
Convert to groupedData Format
Convert the data set to a groupedData object, which is the required data format for the fitting function sbiofit. A groupedData object also allows you set independent variable and group variable names (if they exist). Set the units of the ID, Time, CentralConc, PeripheralConc, Age, and Sex variables. The units are optional and only required for the UnitConversion feature, which automatically converts matching physical quantities to one consistent unit system.
gData = groupedData(data);
gData.Properties.VariableUnits = {'','hour','milligram/liter','milligram/liter','',''};
gData.Properties
ans = struct with fields:
Description: ''
UserData: []
DimensionNames: {'Row' 'Variables'}
VariableNames: {1x6 cell}
VariableDescriptions: {}
VariableUnits: {1x6 cell}
VariableContinuity: []
RowNames: {}
CustomProperties: [1x1 matlab.tabular.CustomProperties]
GroupVariableName: 'ID'
IndependentVariableName: 'Time'
The IndependentVariableName and GroupVariableName properties have been automatically set to the Time and ID variables of the data.
Visualize Data
Display the response data for each individual.
t = sbiotrellis(gData,'ID','Time',{'CentralConc','PeripheralConc'},...
'Marker','+','LineStyle','none');
% Resize the figure.
t.hFig.Position(:) = [100 100 1280 800];
Set Up a Two-Compartment Model
Use the built-in PK library to construct a two-compartment model with infusion dosing and first-order elimination where the elimination rate depends on the clearance and volume of the central compartment. Use the configset object to turn on unit conversion.
pkmd = PKModelDesign;
pkc1.DosingType = 'Bolus';
pkc1.EliminationType = 'linear-clearance';
pkc1.HasResponseVariable = true;
model = construct(pkmd);
configset = getconfigset(model);
configset.CompileOptions.UnitConversion = true;
For details on creating compartmental PK models using the SimBiology® built-in library, see Create Pharmacokinetic Models.
Define Dosing
Assume every individual receives a bolus dose of 100 mg at time = 0. For details on setting up different dosing strategies, see Doses in SimBiology Models.
dose = sbiodose('dose','TargetName','Drug_Central');
dose.StartTime = 0;
dose.Amount = 100;
dose.AmountUnits = 'milligram';
dose.TimeUnits = 'hour';
Map the Response Data to Corresponding Model Components
The data contains measured plasma concentration in the central and peripheral compartments. Map these variables to the appropriate model components, which are Drug_Central and Drug_Peripheral.
responseMap = {'Drug_Central = CentralConc','Drug_Peripheral = PeripheralConc'};
Specify Parameters to Estimate
Specify the volumes of central and peripheral compartments Central and Peripheral, intercompartmental clearance Q12, and clearance Cl_Central as parameters to estimate. The estimatedInfo object lets you optionally specify parameter transforms, initial values, and parameter bounds. Since both Central and Peripheral are constrained to be positive, specify a log-transform for each parameter.
paramsToEstimate = {'log(Central)', 'log(Peripheral)', 'Q12', 'Cl_Central'};
estimatedParam = estimatedInfo(paramsToEstimate,'InitialValue',[1 1 1 1]);
Estimate Individual-Specific Parameters
Estimate one set of parameters for each individual by setting the 'Pooled' name-value pair argument to false.
unpooledFit = sbiofit(model,gData,responseMap,estimatedParam,dose,'Pooled',false);
Display Results
Plot the fitted results versus the original data for each individual (group).
plot(unpooledFit);
For an unpooled fit, sbiofit always returns one results object for each individual.
Examine Parameter Estimates for Category Dependencies
Explore the unpooled estimates to see if there is any category-specific parameters, that is, if some parameters are related to one or more categories. If there are any category dependencies, it might be possible to reduce the number of degrees of freedom by estimating just category-specific values for those parameters.
First extract the ID and category values for each ID
catParamValues = unique(gData(:,{'ID','Sex','Age'}));
Add variables to the table containing each parameter's estimate.
allParamValues = vertcat(unpooledFit.ParameterEstimates);
catParamValues.Central = allParamValues.Estimate(strcmp(allParamValues.Name, 'Central'));
catParamValues.Peripheral = allParamValues.Estimate(strcmp(allParamValues.Name, 'Peripheral'));
catParamValues.Q12 = allParamValues.Estimate(strcmp(allParamValues.Name, 'Q12'));
catParamValues.Cl_Central = allParamValues.Estimate(strcmp(allParamValues.Name, 'Cl_Central'));
Plot estimates of each parameter for each category. gscatter requires Statistics and Machine Learning Toolbox™. If you do not have it, use other alternative plotting functions such as plot.
h = figure;
ylabels = ["Central","Peripheral","Q12","Cl\_Central"];
plotNumber = 1;
for i = 1:4
thisParam = estimatedParam(i).Name;
% Plot for Sex category
subplot(4,2,plotNumber);
plotNumber = plotNumber + 1;
gscatter(double(catParamValues.Sex), catParamValues.(thisParam), catParamValues.Sex);
ax = gca;
ax.XTick = [];
ylabel(ylabels(i));
legend('Location','bestoutside')
% Plot for Age category
subplot(4,2,plotNumber);
plotNumber = plotNumber + 1;
gscatter(double(catParamValues.Age), catParamValues.(thisParam), catParamValues.Age);
ax = gca;
ax.XTick = [];
ylabel(ylabels(i));
legend('Location','bestoutside')
end
% Resize the figure.
h.Position(:) = [100 100 1280 800];
Based on the plot, it seems that young individuals tend to have higher volumes of central and peripheral compartments (Central, Peripheral) than old individuals (that is, the volumes seem to be age-specific). In addition, males tend to have lower clearance rates (Cl_Central) than females but the opposite for the Q12 parameter (that is, the clearance and Q12 seem to be sex-specific).
Estimate Category-Specific Parameters
Use the 'CategoryVariableName' property of the estimatedInfo object to specify which category to use during fitting. Use 'Sex' as the group to fit for the clearance Cl_Central and Q12 parameters. Use 'Age' as the group to fit for the Central and Peripheral parameters.
estimatedParam(1).CategoryVariableName = 'Age';
estimatedParam(2).CategoryVariableName = 'Age';
estimatedParam(3).CategoryVariableName = 'Sex';
estimatedParam(4).CategoryVariableName = 'Sex';
categoryFit = sbiofit(model,gData,responseMap,estimatedParam,dose)
categoryFit =
OptimResults with properties:
ExitFlag: 3
Output: [1x1 struct]
GroupName: []
Beta: [8x5 table]
ParameterEstimates: [120x6 table]
J: [240x8x2 double]
COVB: [8x8 double]
CovarianceMatrix: [8x8 double]
R: [240x2 double]
MSE: 0.4362
SSE: 205.8690
Weights: []
LogLikelihood: -477.9195
AIC: 971.8390
BIC: 1.0052e+03
DFE: 472
DependentFiles: {1x3 cell}
EstimatedParameterNames: {'Central' 'Peripheral' 'Q12' 'Cl_Central'}
ErrorModelInfo: [1x3 table]
EstimationFunction: 'lsqnonlin'
When fitting by category (or group), sbiofit always returns one results object, not one for each category level. This is because both male and female individuals are considered to be part of the same optimization using the same error model and error parameters, similarly for the young and old individuals.
Plot Results
Plot the category-specific estimated results.
plot(categoryFit);
For the Cl_Central and Q12 parameters, all males had the same estimates, and similarly for the females. For the Central and Peripheral parameters, all young individuals had the same estimates, and similarly for the old individuals.
Estimate Population-Wide Parameters
To better compare the results, fit the model to all of the data pooled together, that is, estimate one set of parameters for all individuals by setting the 'Pooled' name-value pair argument to true. The warning message tells you that this option will ignore any category-specific information (if they exist).
pooledFit = sbiofit(model,gData,responseMap,estimatedParam,dose,'Pooled',true);
Warning: CategoryVariableName property of the estimatedInfo object is ignored when using the 'Pooled' option.
Plot Results
Plot the fitted results versus the original data. Although a separate plot was generated for each individual, the data was fitted using the same set of parameters (that is, all individuals had the same fitted line).
plot(pooledFit);
Compare Residuals
Compare residuals of CentralConc and PeripheralConc responses for each fit.
t = gData.Time;
allResid(:,:,1) = pooledFit.R;
allResid(:,:,2) = categoryFit.R;
allResid(:,:,3) = vertcat(unpooledFit.R);
h = figure;
responseList = {'CentralConc', 'PeripheralConc'};
for i = 1:2
subplot(2,1,i);
oneResid = squeeze(allResid(:,i,:));
plot(t,oneResid,'o');
refline(0,0); % A reference line representing a zero residual
title(sprintf('Residuals (%s)', responseList{i}));
xlabel('Time');
ylabel('Residuals');
legend({'Pooled','Category-Specific','Unpooled'});
end
% Resize the figure.
h.Position(:) = [100 100 1280 800];
As shown in the plot, the unpooled fit produced the best fit to the data as it fit the data to each individual. This was expected since it used the most number of degrees of freedom. The category-fit reduced the number of degrees of freedom by fitting the data to two categories (sex and age). As a result, the residuals were larger than the unpooled fit, but still smaller than the population-fit, which estimated just one set of parameters for all individuals. The category-fit might be a good compromise between the unpooled and pooled fitting provided that any hierarchical model exists within your data.
This example uses the yeast heterotrimeric G protein model and experimental data reported by [1]. For details about the model, see the Background section in Parameter Scanning, Parameter Estimation, and Sensitivity Analysis in the Yeast Heterotrimeric G Protein Cycle.
Store the experimental data containing the time course for the fraction of active G protein.
time = [0 10 30 60 110 210 300 450 600]';
GaFracExpt = [0 0.35 0.4 0.36 0.39 0.33 0.24 0.17 0.2]';
Create a groupedData object based on the experimental data.
tbl = table(time,GaFracExpt);
grpData = groupedData(tbl);
Map the appropriate model component to the experimental data. In other words, indicate which species in the model corresponds to which response variable in the data. In this example, map the model parameter GaFrac to the experimental data variable GaFracExpt from grpData.
responseMap = 'GaFrac = GaFracExpt';
Use an estimatedInfo object to define the model parameter kGd as a parameter to be estimated.
estimatedParam = estimatedInfo('kGd');
Perform the parameter estimation.
fitResult = sbiofit(m1,grpData,responseMap,estimatedParam);
View the estimated parameter value of kGd.
fitResult.ParameterEstimates
ans=1×3 table
Name Estimate StandardError
_______ ________ _____________
{'kGd'} 0.11307 3.4439e-05
Suppose you want to plot the model simulation results using the estimated parameter value. You can either rerun the sbiofit function and specify to return the optional second output argument, which contains simulation results, or use the fitted method to retrieve the results without rerunning sbiofit.
[yfit,paramEstim] = fitted(fitResult);
Plot the simulation results.
sbioplot(yfit);
This example shows how to estimate the time lag before a bolus dose was administered and the duration of the dose using a one-compartment model.
Plot the data.
plot(data.Time,data.Conc,'x')
xlabel('Time (hour)')
ylabel('Conc (milligram/liter)')
Convert to groupedData.
gData = groupedData(data);
gData.Properties.VariableUnits = {'hour','milligram/liter'};
Create a one-compartment model.
pkmd = PKModelDesign;
pkc1.DosingType = 'Bolus';
pkc1.EliminationType = 'linear-clearance';
pkc1.HasResponseVariable = true;
model = construct(pkmd);
configset = getconfigset(model);
configset.CompileOptions.UnitConversion = true;
Add two parameters that represent the time lag and duration of a dose. The lag parameter specifies the time lag before the dose is administered. The duration parameter specifies the length of time it takes to administer a dose.
lagP.ValueUnits = 'hour';
durP.ValueUnits = 'hour';
Create a dose object. Set the LagParameterName and DurationParameterName properties of the dose to the names of the lag and duration parameters, respectively. Set the dose amount to 10 milligram which was the amount used to generate the data.
dose = sbiodose('dose');
dose.TargetName = 'Drug_Central';
dose.StartTime = 0;
dose.Amount = 10;
dose.AmountUnits = 'milligram';
dose.TimeUnits = 'hour';
dose.LagParameterName = 'lagP';
dose.DurationParameterName = 'durP';
Map the model species to the corresponding data.
responseMap = {'Drug_Central = Conc'};
Specify the lag and duration parameters as parameters to estimate. Log-transform the parameters. Initialize them to 2 and set the upper bound and lower bound.
paramsToEstimate = {'log(lagP)','log(durP)'};
estimatedParams = estimatedInfo(paramsToEstimate,'InitialValue',2,'Bounds',[1 5]);
Perform parameter estimation.
fitResults = sbiofit(model,gData,responseMap,estimatedParams,dose,'fminsearch')
fitResults =
OptimResults with properties:
ExitFlag: 1
Output: [1x1 struct]
GroupName: One group
Beta: [2x4 table]
ParameterEstimates: [2x4 table]
J: [11x2 double]
COVB: [2x2 double]
CovarianceMatrix: [2x2 double]
R: [11x1 double]
MSE: 0.0024
SSE: 0.0213
Weights: []
LogLikelihood: 18.7511
AIC: -33.5023
BIC: -32.7065
DFE: 9
DependentFiles: {1x2 cell}
EstimatedParameterNames: {'lagP' 'durP'}
ErrorModelInfo: [1x3 table]
EstimationFunction: 'fminsearch'
Display the result.
fitResults.ParameterEstimates
ans=2×4 table
Name Estimate StandardError Bounds
________ ________ _____________ ______
{'lagP'} 1.986 0.0051568 1 5
{'durP'} 1.527 0.012956 1 5
plot(fitResults)
## Input Arguments
collapse all
SimBiology model, specified as a SimBiology model object. The active configset object of the model contains solver settings for simulation. Any active doses and variants are applied to the model during simulation unless specified otherwise using the dosing and variants input arguments, respectively.
Data to fit, specified as a groupedData object.
The name of the time variable must be defined in the IndependentVariableName property of grpData. For instance, if the time variable name is 'TIME', then specify it as follows.
grpData.Properties.IndependentVariableName = 'TIME';
If the data contains more than one group of measurements, the grouping variable name must be defined in the GroupVariableName property of grpData. For example, if the grouping variable name is 'GROUP', then specify it as follows.
grpData.Properties.GroupVariableName = 'GROUP';
A group usually refers to a set of measurements that represent a single time course, often corresponding to a particular individual, or experimental condition.
Note
sbiofit uses the categorical function to identify groups. If any group values are converted to the same value by categorical, then those observations are treated as belonging to the same group. For instance, if some observations have no group information (that is, an empty character vector ''), then categorical converts empty character vectors to <undefined>, and these observations are treated as one group.
Mapping information of model components to grpData, specified as a character vector, string, string vector, or cell array of character vectors.
Each character vector or string is an equation-like expression, similar to assignment rules in SimBiology. It contains the name (or qualified name) of a quantity (species, compartment, or parameter) or an observable object in the model sm, followed by the character '=' and the name of a variable in grpData. For clarity, white spaces are allowed between names and '='.
For example, if you have the concentration data 'CONC' in grpData for a species 'Drug_Central', you can specify it as follows.
ResponseMap = 'Drug_Central = CONC';
To name a species unambiguously, use the qualified name, which includes the name of the compartment. To name a reaction-scoped parameter, use the reaction name to qualify the parameter.
If the model component name or grpData variable name is not a valid MATLAB® variable name, surround it by square brackets, such as:
ResponseMap = '[Central 1].Drug = [Central 1 Conc]';
If a variable name itself contains square brackets, you cannot use it in the expression to define the mapping information.
An error is issued if any (qualified) name matches two components of the same type. However, you can use a (qualified) name that matches two components of different types, and the function first finds the species with the given name, followed by compartments and then parameters.
Estimated parameters, specified as an EstimatedInfo object or vector of estimatedInfo objects that defines the estimated parameters in the model sm, and other optional information such as their initial estimates, transformations, bound constraints, and categories. Supported transforms are log, logit, and probit. For details, see Parameter Transformations.
You can specify bounds for all estimation methods. The lower bound must be less than the upper bound. For details, see Bounds.
When using scattersearch, you must specify finite transformed bounds for each estimated parameter.
When using fminsearch, nlinfit, or fminunc with bounds, the objective function returns Inf if bounds are exceeded. When you turn on options such as FunValCheck, the optimization might error if bounds are exceeded during estimation. If using nlinfit, it might report warnings about the Jacobian being ill-conditioned or not being able to estimate if the final result is too close to the bounds.
If you do not specify Pooled name-value pair argument, sbiofit uses CategoryVariableName property of estiminfo to decide if parameters must be estimated for each individual, group, category, or all individuals as a whole. Use the Pooled option to override any CategoryVariableName values. For details about CategoryVariableName property, see EstimatedInfo object.
Note
sbiofit uses the categorical function to identify groups or categories. If any group values are converted to the same value by categorical, then those observations are treated as belonging to the same group. For instance, if some observations have no group information (that is, an empty character vector '' as a group value), then categorical converts empty character vectors to <undefined>, and these observations are treated as one group.
Dosing information, specified as an empty array ([] or {}), 2-D matrix or cell vector of dose objects (ScheduleDose object or RepeatDose object).
If you omit the dosing input, the function applies the active doses of the model if there are any.
If you specify the input as empty [] or {}, no doses are applied during simulation, even if the model has active doses.
For a matrix of dose objects, it must have a single row or one row per group in the input data. If it has a single row, the same doses are applied to all groups during simulation. If it has multiple rows, each row is applied to a separate group, in the same order as the groups appear in the input data. Multiple columns are allowed so that you can apply multiple dose objects to each group.
Note
As of R2021b, doses in the columns are no longer required to have the same configuration. If you previously created default (dummy) doses to fill in the columns, these default doses have no effect and indicate no dosing.
For a cell vector of doses, it must have one element or one element per group in the input data. Each element must be [] or a vector of doses. Each element of the cell is applied to a separate group, in the same order as the groups appear in the input data.
In addition to manually constructing dose objects using sbiodose, if the input groupedData object has dosing information, you can use the createDoses method to construct doses.
Estimation function name, specified as a character vector or string. Choices are as follows.
• "fminsearch"
• "nlinfit" (Statistics and Machine Learning Toolbox is required.)
• "fminunc" (Optimization Toolbox is required.)
• "fmincon" (Optimization Toolbox is required.)
• "lsqcurvefit" (Optimization Toolbox is required.)
• "lsqnonlin" (Optimization Toolbox is required.)
• "patternsearch" (Global Optimization Toolbox is required.)
• "ga" (Global Optimization Toolbox is required.)
• "particleswarm" (Global Optimization Toolbox is required.)
• "scattersearch"
By default, sbiofit uses the first available estimation function among the following: lsqnonlin (Optimization Toolbox), nlinfit (Statistics and Machine Learning Toolbox), or fminsearch. The same priority applies to the default local solver choice for scattersearch.
For the summary of supported methods and fitting options, see Supported Methods for Parameter Estimation in SimBiology.
Options specific to the estimation function, specified as a struct or optimoptions object.
• statset struct for nlinfit
• optimset struct for fminsearch
• optimoptions object for lsqcurvefit, lsqnonlin, fmincon, fminunc, particleswarm, ga, and patternsearch
• struct for scattersearch
See Default Options for Estimation Algorithms for more details and default options associated with each estimation function.
Variants, specified as an empty array ([] or {}) or vector of variant objects.
If you
• Omit this input argument, the function applies the active variants of the model if there are any.
• Specify this input as empty, no variants are used even if the model has active variants.
• Specify this input as a vector of variants, the function applies the specified variants to all simulations, and the model active variants are not used.
• Specify this input as a vector of variants and also specify the Variants name-value argument, the function applies the variants specified in this input argument before applying the ones specified in the name-value argument.
### Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: 'ErrorModel','constant','UseParallel',true specifies a constant error model and to run simulations in parallel during parameter estimation.
Error models used for estimation, specified as a character vector, string, string vector, cell array of character vectors, categorical vector, or table.
If it is a table, it must contain a single variable that is a column vector of error model names. The names can be a cell array of character vectors, string vector, or a vector of categorical variables. If the table has more than one row, then the RowNames property must match the response variable names specified in the right hand side of ResponseMap. If the table does not use the RowNames property, the nth error is associated with the nth response.
If you specify only one error model, then sbiofit estimates one set of error parameters for all responses.
If you specify multiple error models using a categorical vector, string vector, or cell array of character vectors, the length of the vector or cell array must match the number of responses in ResponseMap.
You can specify multiple error models only if you are using these methods: lsqnonlin, lsqcurvefit, fmincon, fminunc, fminsearch, patternsearch, ga, and particleswarm.
Four built-in error models are available. Each model defines the error using a standard mean-zero and unit-variance (Gaussian) variable e, simulation results f, and one or two parameters a and b.
• "constant": $y=f+ae$
• "proportional": $y=f+b|f|e$
• "combined": $y=f+\left(a+b|f|\right)e$
• "exponential": $y=f\ast \mathrm{exp}\left(ae\right)$
Note
• If you specify an error model, you cannot specify weights except for the constant error model.
• If you use a proportional or combined error model during data fitting, avoid specifying data points at times where the solution (simulation result) is zero or the steady state is zero. Otherwise, you can run into division-by-zero problems. It is recommended that you remove those data points from the data set. For details on the error parameter estimation functions, see Maximum Likelihood Estimation.
Weights used for fitting, specified as an empty array [], matrix of real positive weights where the number of columns corresponds to the number of responses, or a function handle that accepts a vector of predicted response values and returns a vector of real positive weights.
If you specify an error model, you cannot use weights except for the constant error model. If neither the ErrorModel or Weights is specified, by default, the software uses the constant error model with equal weights.
Group-specific variants, specified as an empty array ([] or {}), 2-D matrix or cell vector of variant objects. These variants let you specify parameter values for specific groups during fitting. The software applies these group-specific variants after active variants or the variants input argument. If the value is empty ([] or {}), no group-specific variants are applied.
For a matrix of variant objects, the number of rows must be one or must match the number of groups in the input data. The ith row of variant objects is applied to the simulation of the ith group. The variants are applied in order from the first column to the last. If this matrix has only one row of variants, they are applied to all simulations.
For a cell vector of variant objects, the number of cells must be one or must match the number of groups in the input data. Each element must be [] or a vector of variants. If this cell vector has a single cell containing a vector of variants, they are applied to all simulations. If the cell vector has multiple cells, the variants in the ith cell are applied to the simulation of the ith group.
In addition to manually constructing variant objects using sbiovariant, if the input groupedData object has variant information, you can use createVariants to construct variants.
Fit option flag to fit each individual or pool all individual data, specified as a numeric or logical 1 (true) or 0 (false).
When true, the software performs fitting for all individuals or groups simultaneously using the same parameter estimates, and fitResults is a scalar results object.
When false, the software fits each group or individual separately using group- or individual-specific parameters, and fitResults is a vector of results objects with one result for each group.
Note
Use this option to override any CategoryVariableName values of estiminfo.
Flag to enable parallelization, specified as a numeric or logical 1 (true) or 0 (false). If true and Parallel Computing Toolbox™ is available, sbiofit supports several levels of parallelization, but only one level is used at a time.
For an unpooled fit (Pooled = false) for multiple groups, each fit is run in parallel.
For a pooled fit (Pooled = true), parallelization happens at the solver level. In other words, solver computations, such as objective function evaluations, are run in parallel.
For details, see Multiple Parameter Estimations in Parallel.
Flag to use parameter sensitivities to determine gradients of the objective function, specified as a numeric or logical 1 (true) or 0 (false). By default, it is true for fmincon, fminunc, lsqnonlin, lsqcurvefit, and scattersearch methods. Otherwise, it is false.
If it is true, the software always uses the sundials solver, regardless of what you have selected as the SolverType property in the Configset object.
The software uses the complex-step approximation method to calculate parameter sensitivities. Such calculated sensitivities can be used to determine gradients of the objective function during parameter estimation to improve fitting. The default behavior of sbiofit is to use such sensitivities to determine gradients whenever the algorithm is gradient-based and if the SimBiology model supports sensitivity analysis. For details about the model requirements and sensitivity analysis, see Sensitivity Analysis in SimBiology.
Flag to show the progress of parameter estimation, specified as a numeric or logical 1 (true) or 0 (false). If true, a new figure opens containing plots.
Plots show the log-likelihood, termination criteria, and estimated parameters for each iteration. This option is not supported for nlinfit.
If you are using the Optimization Toolbox functions (fminunc, fmincon, lsqcurvefit, lsqnonlin), the figure also shows the First Order Optimality (Optimization Toolbox) plot.
For an unpooled fit, each line on the plots represents an individual. For a pooled fit, a single line represents all individuals. The line becomes faded when the fit is complete. The plots also keep track of the progress when you run sbiofit (for both pooled and unpooled fits) in parallel using remote clusters. For details, see Progress Plot.
## Output Arguments
collapse all
Estimation results, returned as a OptimResults object or NLINResults object or a vector of these objects. Both results objects are subclasses of the LeastSquaresResults object.
If the function uses nlinfit (Statistics and Machine Learning Toolbox), then fitResults is a NLINResults object. Otherwise, fitResults is an OptimResults object.
For an unpooled fit, the function fits each group separately using group-specific parameters, and fitResults is a vector of results objects with one results object for each group.
For a pooled fit, the function performs fitting for all individuals or groups simultaneously using the same parameter estimates, and fitResults is a scalar results object.
When the pooled option is not specified, and CategoryVariableName values of estimatedInfo objects are all <none>, fitResults is a single results object. This is the same behavior as a pooled fit.
When the pooled option is not specified, and CategoryVariableName values of estimatedInfo objects are all <GroupVariableName>, fitResults is a vector of results objects. This is the same behavior as an unpooled fit.
In all other cases, fitResults is a scalar object containing estimated parameter values for different groups or categories specified by CategoryVariableName.
Simulation results, returned as a vector of SimData objects representing simulation results for each group or individual.
If the 'Pooled' option is false, then each simulation uses group-specific parameter values. If true, then all simulations use the same (population-wide) parameter values.
The states reported in simdata are the states that were included in the ResponseMap input argument and any other states listed in the StatesToLog property of the runtime options (RuntimeOptions) of the SimBiology model sm.
collapse all
### Maximum Likelihood Estimation
SimBiology® estimates parameters by the method of maximum likelihood. Rather than directly maximizing the likelihood function, SimBiology constructs an equivalent minimization problem. Whenever possible, the estimation is formulated as a weighted least squares (WLS) optimization that minimizes the sum of the squares of weighted residuals. Otherwise, the estimation is formulated as the minimization of the negative of the logarithm of the likelihood (NLL). The WLS formulation often converges better than the NLL formulation, and SimBiology can take advantage of specialized WLS algorithms, such as the Levenberg-Marquardt algorithm implemented in lsqnonlin and lsqcurvefit. SimBiology uses WLS when there is a single error model that is constant, proportional, or exponential. SimBiology uses NLL if you have a combined error model or a multiple-error model, that is, a model having an error model for each response.
sbiofit supports different optimization methods, and passes in the formulated WLS or NLL expression to the optimization method that minimizes it. For simplicity, each expression shown below assumes only one error model and one response. If there are multiple responses, SimBiology takes the sum of the expressions that correspond to error models of given responses.
Expression that is being minimized
Weighted Least Squares (WLS)For the constant error model, $\sum _{i}^{N}{\left({y}_{i}-{f}_{i}\right)}^{2}$
For the proportional error model, $\sum _{i}^{N}\frac{{\left({y}_{i}-{f}_{i}\right)}^{2}}{{f}_{i}^{2}/{f}_{gm}^{2}}$
For the exponential error model, $\sum _{i}^{N}{\left(\mathrm{ln}{y}_{i}-\mathrm{ln}{f}_{i}\right)}^{2}$
For numeric weights, $\sum _{i}^{N}\frac{{\left({y}_{i}-{f}_{i}\right)}^{2}}{{w}_{gm}/{w}_{i}}$
Negative Log-likelihood (NLL)For the combined error model and multiple-error model, $\sum _{i}^{N}\frac{{\left({y}_{i}-{f}_{i}\right)}^{2}}{2{\sigma }_{i}^{2}}+\sum _{i}^{N}\mathrm{ln}\sqrt{2\pi {\sigma }_{i}^{2}}$
The variables are defined as follows.
N Number of experimental observations yi The ith experimental observation ${f}_{i}$ Predicted value of the ith observation ${\sigma }_{i}$ Standard deviation of the ith observation. For the constant error model, ${\sigma }_{i}=a$For the proportional error model, ${\sigma }_{i}=b|{f}_{i}|$For the combined error model, ${\sigma }_{i}=a+b|{f}_{i}|$ ${f}_{gm}$ ${f}_{gm}={\left(\prod _{i}^{N}|{f}_{i}|\right)}^{1}{N}}$ ${w}_{i}$ The weight of the ith predicted value ${w}_{gm}$ ${w}_{gm}={\left(\prod _{i}^{N}{w}_{i}\right)}^{1}{N}}$
When you use numeric weights or the weight function, the weights are assumed to be inversely proportional to the variance of the error, that is, ${\sigma }_{i}^{2}=\frac{{a}^{2}}{{w}_{i}}$ where a is the constant error parameter. If you use weights, you cannot specify an error model except the constant error model.
Various optimization methods have different requirements on the function that is being minimized. For some methods, the estimation of model parameters is performed independently of the estimation of the error model parameters. The following table summarizes the error models and any separate formulas used for the estimation of error model parameters, where a and b are error model parameters and e is the standard mean-zero and unit-variance (Gaussian) variable.
Error ModelError Parameter Estimation Function
'constant': ${y}_{i}={f}_{i}+ae$${a}^{2}=\frac{1}{N}\sum _{i}^{N}{\left({y}_{i}-{f}_{i}\right)}^{2}$
'exponential': ${y}_{i}={f}_{i}\mathrm{exp}\left(ae\right)$${a}^{2}=\frac{1}{N}\sum _{i}^{N}{\left(\mathrm{ln}{y}_{i}-\mathrm{ln}{f}_{i}\right)}^{2}$
'proportional': ${y}_{i}={f}_{i}+b|{f}_{i}|e$${b}^{2}=\frac{1}{N}\sum _{i}^{N}{\left(\frac{{y}_{i}-{f}_{i}}{{f}_{i}}\right)}^{2}$
'combined': ${y}_{i}={f}_{i}+\left(a+b|{f}_{i}|\right)e$Error parameters are included in the minimization.
Weights${a}^{2}=\frac{1}{N}\sum _{i}^{N}{\left({y}_{i}-{f}_{i}\right)}^{2}{w}_{i}$
Note
nlinfit only support single error models, not multiple-error models, that is, response-specific error models. For a combined error model, it uses an iterative WLS algorithm. For other error models, it uses the WLS algorithm as described previously. For details, see nlinfit (Statistics and Machine Learning Toolbox).
### Default Options for Estimation Algorithms
The following table summarizes the default options for various estimation functions.
FunctionDefault Options
nlinfit (Statistics and Machine Learning Toolbox)
sbiofit uses the default options structure associated with nlinfit, except for:
FunValCheck = 'off' DerivStep = max(eps^(1/3), min(1e-4,SolverOptions.RelativeTolerance)), where the SolverOptions property corresponds to the model’s active configset object.
fmincon (Optimization Toolbox)
sbiofit uses the default options structure associated with fmincon, except for:
Display = 'off' FunctionTolerance = 1e-6*abs(f0), where f0 is the initial value of the objective function. OptimalityTolerance = 1e-6*abs(f0), where f0 is the initial value of the objective function. Algorithm = 'trust-region-reflective' when 'SensitivityAnalysis' is true, or 'interior-point' when 'SensitivityAnalysis' is false. FiniteDifferenceStepSize = max(eps^(1/3),min(1e-4,SolverOptions.RelativeTolerance)), where the SolverOptions property corresponds to the model active configset object. TypicalX = 1e-6*x0, where x0 is an array of transformed initial estimates.
fminunc (Optimization Toolbox)
sbiofit uses the default options structure associated with fminunc, except for:
Display = 'off' FunctionTolerance = 1e-6*abs(f0), where f0 is the initial value of the objective function. OptimalityTolerance = 1e-6*abs(f0), where f0 is the initial value of the objective function. Algorithm = 'trust-region' when 'SensitivityAnalysis' is true, or 'quasi-newton' when 'SensitivityAnalysis' is false. FiniteDifferenceStepSize = max(eps^(1/3),min(1e-4,SolverOptions.RelativeTolerance)), where the SolverOptions property corresponds to the model active configset object. TypicalX = 1e-6*x0, where x0 is an array of transformed initial estimates.
fminsearch
sbiofit uses the default options structure associated with fminsearch, except for:
Display = 'off' TolFun = 1e-6*abs(f0), where f0 is the initial value of the objective function.
lsqcurvefit (Optimization Toolbox), lsqnonlin (Optimization Toolbox)
Requires Optimization Toolbox.
sbiofit uses the default options structure associated with lsqcurvefit and lsqnonlin, except for:
Display = 'off' FunctionTolerance = 1e-6*norm(f0), where f0 is the initial value of the objective function. OptimalityTolerance = 1e-6*norm(f0), where f0 is the initial value of the objective function. FiniteDifferenceStepSize = max(eps^(1/3),min(1e-4,SolverOptions.RelativeTolerance)) , where the SolverOptions property corresponds to the model active configset object. TypicalX = 1e-6*x0, where x0 is an array of transformed initial estimates.
patternsearch (Global Optimization Toolbox)
Requires Global Optimization Toolbox.
sbiofit uses the default options object (optimoptions) associated with patternsearch, except for:
Display = 'off' FunctionTolerance = 1e-6*abs(f0), where f0 is the initial value of the objective function. MeshTolerance = 1.0e-3 AccelerateMesh = true
ga (Global Optimization Toolbox)
Requires Global Optimization Toolbox.
sbiofit uses the default options object (optimoptions) associated with ga, except for:
Display = 'off' FunctionTolerance = 1e-6*abs(f0), where f0 is the initial value of the objective function. MutationFcn = @mutationadaptfeasible
particleswarm (Global Optimization Toolbox)
Requires Global Optimization Toolbox.
sbiofit uses the following default options for the particleswarm algorithm, except for:
Display = 'off' FunctionTolerance = 1e-6*abs(f0), where f0 is the initial value of the objective function. InitialSwarmSpan = 2000 or 8; 2000 for estimated parameters with no transform; 8 for estimated parameters with log, logit, or probit transforms.
scattersearchSee Scatter Search Algorithm.
### Scatter Search Algorithm
The scattersearch method implements a global optimization algorithm [2] that addresses some challenges of parameter estimation in dynamic models, such as convergence to local minima.
Algorithm Overview
The algorithm selects a subset of points from an initial pool of points. In that subset, some points are the best in quality (that is, lowest function value) and some are randomly selected. The algorithm iteratively evaluates the points and explores different directions around various solutions to find better solutions. During this iteration step, the algorithm replaces any old solution with a new one of better quality. Iterations proceed until any stopping criteria are met. It then runs a local solver on the best point.
Initialization
To start the scatter search, the algorithm first decides the total number of points needed (NumInitialPoints). By default, the total is 10*N, where N is the number of estimated parameters. It selects NumInitialPoints points (rows) from InitialPointMatrix. If InitialPointMatrix does not have enough points, the algorithm calls the function defined in CreationFcn to generate the additional points needed. By default, Latin hypercube sampling is used to generate these additional points. The algorithm then selects a subset of NumTrialPoints points from NumInitialPoints points. A fraction (FractionInitialBest) of the subset contains the best points in terms of quality. The remaining points in the subset are randomly selected.
Iteration Steps
The algorithm iterates on the points in the subset as follows:
1. Define hyper-rectangles around each pair of points by using the relative qualities (that is, function values) of these points as a measure of bias to create these rectangles.
2. Evaluate a new solution inside each rectangle. If the new solution outperforms the original solution, replace the original with the new one.
3. Apply the go-beyond strategy to the improved solutions and exploit promising directions to find better solutions.
4. Run a local search at every LocalSearchInterval iteration. Use the LocalSelectBestProbability probability to select the best point as the starting point for a local search. By default, the decision is random, giving an equal chance to select the best point or a random point from the trial points. If the new solution outperforms the old solution, replace the old one with the new one.
5. Replace any stalled point that does not produce any new outperforming solution after MaxStallTime seconds with another point from the initial set.
6. Evaluate stopping criteria. Stop iterating if any criteria are met.
The algorithm then runs a local solver on the best point seen.
Stopping Criteria
The algorithm iterates until it reaches a stopping criterion.
Stopping OptionStopping Test
FunctionTolerance and MaxStallIterations
Relative change in best objective function value over the last MaxStallIterations is less than FunctionTolerance.
MaxIterations
Number of iterations reaches MaxIterations.
OutputFcn
OutputFcn can halt the iterations.
ObjectiveLimit
Best objective function value at an iteration is less than or equal to ObjectiveLimit.
MaxStallTime
Best objective function value did not change in the last MaxStallTime seconds.
MaxTime
Function run time exceeds MaxTime seconds.
Algorithm Options
You create the options for the algorithm using a struct.
OptionDescription
CreationFcn
Handle to the function that creates additional points needed for the algorithm. Default is the character vector 'auto', which uses Latin hypercube sampling.
The function signature is: points = CreationFcn(s,N,lb,ub), where s is the total number of sampled points, N is the number of estimated parameters, lb is the lower bound, and ub is the upper bound. If any output from the function exceeds bounds, these results are truncated to the bounds.
Display
Level of display returned to the command line.
• 'off' or 'none' (default) displays no output.
• 'iter' gives iterative display.
• 'final' displays just the final output.
FractionInitialBest
Numeric scalar from 0 through 1. Default is 0.5. This number is the fraction of the NumTrialPoints that are selected as the best points from the NumInitialPoints points.
FunctionTolerance
Numeric scalar from 0 through 1. Default is 1e-6. The solver stops if the relative change in best objective function value over the last MaxStallIterations is less than FunctionTolerance. This option is also used to remove duplicate local solutions. See XTolerance for details.
InitialPointMatrix
Initial (or partial) set of points. M-by-N real finite matrix, where M is the number of points and N is the number of estimated parameters.
If M < NumInitialPoints, then scattersearch creates more points so that the total number of rows is NumInitialPoints.
If M > NumInitialPoints, then scattersearch uses the first NumInitialPoints rows.
Default is the initial transformed values of estimated parameters stored in the InitialTransformedValue property of the EstimatedInfo object, that is, [estiminfo.InitialTransformedValue].
LocalOptions
Options for the local solver. It can be a struct (created with optimset or statset (Statistics and Machine Learning Toolbox)) or an optimoptions (Optimization Toolbox) object, depending on the local solver. Default is the character vector 'auto', which uses the default options of the selected solver with some exceptions. In addition to these exceptions, the following options limit the time spent in the local solver because it is called repeatedly:
• MaxFunEvals (maximum number of function evaluations allowed) = 300
• MaxIter (maximum number of iterations allowed) = 200
LocalSearchInterval
Positive integer. Default is 10. The scattersearch algorithm applies the local solver to one of the trial points after the first iteration and again every LocalSearchInterval iteration.
LocalSelectBestProbability
Numeric scalar from 0 through 1. Default is 0.5. It is the probability of selecting the best point as the starting point for a local search. In other cases, one of the trial points is selected at random.
LocalSolver
Character vector or string specifying the name of a local solver. Supported methods are 'fminsearch', 'lsqnonlin', 'lsqcurvefit', 'fmincon', 'fminunc', 'nlinfit'.
Default local solver is selected with the following priority:
• If Optimization Toolbox is available, the solver is lsqnonlin.
• If Statistics and Machine Learning Toolbox is available, the solver is nlinfit.
• Otherwise, the solver is fminsearch.
MaxIterations
Positive integer. Default is the character vector 'auto' representing 20*N, where N is the number of estimated parameters.
MaxStallIterations
Positive integer. Default is 50. The solver stops if the relative change in the best objective function value over the last MaxStallIterations iterations is less than FunctionTolerance.
MaxStallTime
Positive scalar. Default is Inf. The solver stops if MaxStallTime seconds have passed since the last improvement in the best-seen objective function value. Here, the time is the wall clock time as opposed to processor cycles.
MaxTime
Positive scalar. Default is Inf. The solver stops if MaxTime seconds have passed since the beginning of the search. The time here means the wall clock time as opposed to processor cycles.
NumInitialPoints
Positive integer that is >= NumTrialPoints. The solver generates NumInitialPoints points before selecting a subset of trial points (NumTrialPoints) for subsequent steps. Default is the character vector 'auto', which represents 10*N, where N is the number of estimated parameters.
NumTrialPoints
Positive integer that is >= 2 and <= NumInitialPoints. The solver generates NumInitialPoints initial points before selecting a subset of trial points (NumTrialPoints) for subsequent steps. Default is the character vector 'auto', which represents the first even number n for which ${n}^{2}-n\ge 10*N$, where N is the number of estimated parameters.
ObjectiveLimit
Scalar. Default is -Inf. The solver stops if the best objective function value at an iteration is less than or equal to ObjectiveLimit.
OutputFcn
Function handle or cell array of function handles. Output functions can read iterative data and stop the solver. Default is [].
Output function signature is stop = myfun(optimValues,state), where:
• stop is a logical scalar. Set to true to stop the solver.
• optimValues is a structure containing information about the trial points with fields.
• bestx is the best solution point found, corresponding to the function value bestfval.
• bestfval is the best (lowest) objective function value found.
• iteration is the iteration number.
• medianfval is the mean objective function value among all the current trial points.
• stalliterations is the number of iterations since the last change in bestfval.
• trialx is a matrix of the current trial points. Each row represents one point, and the number of rows is equal to NumTrialPoints.
• trialfvals is a vector of objective function values for trial points. It is a matrix for lsqcurvefit and lsqnonlin methods.
• state is a character vector giving the status of the current iteration.
• 'init' – The solver has not begun to iterate. Your output function can use this state to open files, or set up data structures or plots for subsequent iterations.
• 'iter' – The solver is proceeding with its iterations. Typically, this state is where your output function performs its work.
• 'done' – The solver reaches a stopping criterion. Your output function can use this state to clean up, such as closing any files it opened.
TrialStallLimit
Positive integer, with default value of 22. If a particular trial point does not improve after TrialStallLimit iterations, it is replaced with another point.
UseParallel
Logical flag to compute objective function in parallel. Default is false.
XTolerance
Numeric scalar from 0 through 1. Default is 1e-6. This option defines how close two points must be to consider them identical for creating the vector of local solutions. The solver calculates the distance between a pair of points with norm, the Euclidean distance. If two solutions are within XTolerance distance of each other and have objective function values within FunctionTolerance of each other, the solver considers them identical. If both conditions are not met, the solver reports the solutions as distinct.
To get a report of every potential local minimum, set XTolerance to 0. To get a report of fewer results, set XTolerance to a larger value.
### Multiple Parameter Estimations in Parallel
There are two ways to use parallel computing for parameter estimation.
Set 'UseParallel' to true
To enable parallelization for sbiofit, set the name-value pair 'UseParallel' to true. The function supports several levels of parallelization, but only one level is used at a time.
• For an unpooled fit for multiple groups (or individuals), each group runs in parallel.
• For a pooled fit with multiple groups using a solver that has the parallel option, parallelization happens at the solver level. That is, sbiofit sets the parallel option of the corresponding estimation method (solver) to true, and the objective function evaluations are performed in parallel. For instance, for a gradient-based method (such as lsqnonlin), the gradients might be computed in parallel. If the solver is not already executing in parallel, then simulations (of different groups) are run in parallel.
If you have a single group and are using a solver that does not have the parallel option, setting UseParallel=true will run nothing in parallel but still run simulations serially on workers. Thus, It might not be beneficial to enable parallelization because of the overhead time needed by the workers. It might be faster to run without parallelization instead.
Use parfeval or parfor
You can also call sbiofit inside a parfor loop or use a parfeval inside a for-loop to perform multiple parameter estimations in parallel. It is recommended that you use parfeval because these parallel estimations run asynchronously. If one fit produces an error, it does not affect the other fits.
If you are trying to find a global minimum, you can use global solvers, such as particleswarm (Global Optimization Toolbox) or ga (Global Optimization Toolbox) (Global Optimization Toolbox is required). However, if you want to define the initial conditions and run the fits in parallel, see the following example that shows how to use both parfor and parfeval.
Model and Data Setup
Store the experimental data containing the time course for the fraction of active G protein [1].
time = [0 10 30 60 110 210 300 450 600]';
GaFracExpt = [0 0.35 0.4 0.36 0.39 0.33 0.24 0.17 0.2]';
Create a groupedData object based on the experimental data.
tbl = table(time,GaFracExpt);
grpData = groupedData(tbl);
Map the appropriate model element to the experimental data.
responseMap = 'GaFrac = GaFracExpt';
Specify the parameter to estimate.
paramToEstimate = {'kGd'};
Generate initial parameter values for kGd.
rng('default');
iniVal = abs(normrnd(0.01,1,10,1));
fitResultPar = [];
Parallel Pool Setup
Start a parallel pool using the local profile.
poolObj = parpool('local');
Starting parallel pool (parpool) using the 'local' profile ...
Connected to the parallel pool (number of workers: 6).
Using parfeval (Recommended)
First, define a function handle that uses the local function sbiofitpar for estimation. Make sure the function sbiofitpar is defined at the end of the script.
optimfun = @(x) sbiofitpar(m1,grpData,responseMap,x);
Perform multiple parameter estimations in parallel via parfeval using different initial parameter values.
for i=1:length(iniVal)
f(i) = parfeval(optimfun,1,iniVal(i));
end
fitResultPar = fetchOutputs(f);
Summarize the results for each run.
allParValues = vertcat(fitResultPar.ParameterEstimates);
allParValues.LogLikelihood = [fitResultPar.LogLikelihood]';
allParValues.RunNumber = (1:length(iniVal))';
allParValues.Name = categorical(allParValues.Name);
allParValues.InitialValue = iniVal;
% Rearrange the columns.
allParValues = allParValues(:,[5 1 6 2 3 4]);
% Sort rows by LogLikelihood.
sortrows(allParValues,'LogLikelihood')
ans=10×6 table
RunNumber Name InitialValue Estimate StandardError LogLikelihood
_________ ____ ____________ ________ _____________ _____________
9 kGd 3.5884 3.022 0.127 -1.2843
10 kGd 2.7794 2.779 0.029701 -1.2319
3 kGd 2.2488 2.2488 0.096013 -1.0786
2 kGd 1.8439 1.844 0.28825 -0.90104
6 kGd 1.2977 1.2977 0.011344 -0.48209
4 kGd 0.87217 0.65951 0.003583 0.9279
1 kGd 0.54767 0.54776 0.0020424 1.5323
7 kGd 0.42359 0.42363 0.0024555 2.6097
8 kGd 0.35262 0.35291 0.00065289 3.6098
5 kGd 0.32877 0.32877 0.00042474 4.0604
Define the local function sbiofitpar that performs parameter estimation using sbiofit.
function fitresult = sbiofitpar(model,grpData,responseMap,initialValue)
estimatedParam = estimatedInfo('kGd');
estimatedParam.InitialValue = initialValue;
fitresult = sbiofit(model,grpData,responseMap,estimatedParam);
end
Using parfor
Alternatively, you can perform multiple parameter estimations in parallel via the parfor loop.
parfor i=1:length(iniVal)
estimatedParam = estimatedInfo(paramToEstimate,'InitialValue',iniVal(i));
fitResultTemp = sbiofit(m1,grpData,responseMap,estimatedParam);
fitResultPar = [fitResultPar;fitResultTemp];
end
Close the parallel pool.
delete(poolObj);
### Parameter Estimation with Hybrid Solvers
sbiofit supports global optimization methods, namely ga (Global Optimization Toolbox) and particleswarm (Global Optimization Toolbox) (Global Optimization Toolbox required). To improve optimization results, these methods lets you run a hybrid function after the global solver stops. The hybrid function uses the final point returned by the global solver as its initial point. Supported hybrid functions are:
Make sure that your hybrid function accepts your problem constraints. That is, if your parameters are bounded, use an appropriate function (such as fmincon or patternsearch) for a constrained optimization. If not bounded, use fminunc, fminsearch, or patternsearch. Otherwise, sbiofit throws an error.
For an illustrated example, see Perform Hybrid Optimization Using sbiofit.
## References
[1] Yi, T-M., Kitano, H., and Simon, M. (2003). A quantitative characterization of the yeast heterotrimeric G protein cycle. PNAS. 100, 10764–10769.
[2] Gábor, A., and Banga, J.R. (2015). Robust and efficient parameter estimation in dynamic models of biological systems. BMC Systems Biology. 9:74.
## Version History
Introduced in R2014a
|
2023-02-05 08:38:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 39, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5675726532936096, "perplexity": 3648.7510017553936}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500250.51/warc/CC-MAIN-20230205063441-20230205093441-00491.warc.gz"}
|
https://hal.archives-ouvertes.fr/hal-01163065
|
# Poisson statistics for matrix ensembles at large temperature
Abstract : In this article, we consider $\beta$-ensembles, i.e. collections of particles with random positions on the real line having joint distribution $\frac{1}{Z_N(\beta)}|\Delta(\lambda)|^\beta e^{- \frac{N\beta}{4}\sum_{i=1}^N\lambda_i^2}d \lambda,$ in the regime where $\beta\to 0$ as $N\to\infty$. We briefly describe the global regime and then consider the local regime. In the case where $N\beta$ stays bounded, we prove that the local eigenvalue statistics, in the vicinity of any real number, are asymptotically to those of a Poisson point process. In the case where $N\beta\to\infty$, we prove a partial result in this direction.
Type de document :
Article dans une revue
Journal of Statistical Physics, Springer Verlag, 2015, 161 (3), pp.633-656. 〈10.1007/s10955-015-1340-8〉
Domaine :
https://hal.archives-ouvertes.fr/hal-01163065
Contributeur : Florent Benaych-Georges <>
Soumis le : vendredi 12 juin 2015 - 07:06:58
Dernière modification le : jeudi 11 janvier 2018 - 06:19:45
### Citation
Florent Benaych-Georges, Sandrine Péché. Poisson statistics for matrix ensembles at large temperature. Journal of Statistical Physics, Springer Verlag, 2015, 161 (3), pp.633-656. 〈10.1007/s10955-015-1340-8〉. 〈hal-01163065〉
### Métriques
Consultations de la notice
|
2018-01-21 22:17:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5405641794204712, "perplexity": 1796.6966574604671}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890893.58/warc/CC-MAIN-20180121214857-20180121234857-00007.warc.gz"}
|
https://physics.stackexchange.com/questions/184543/equivalent-resistance?noredirect=1
|
# Equivalent Resistance [duplicate]
I've never had problems in finding the $R_{eq}$ of a circuit, but in this case I don't really know how to do it:
## marked as duplicate by ACuriousMind♦, John Rennie, Qmechanic♦May 17 '15 at 19:15
• – Phoenix87 May 17 '15 at 16:48
• You should put a test voltage $V$ across the circuit then solve for passing current $I$ using Kirchhoff's laws. Equivalent resistance will be $R_{eq}={V \over I}$ ($V$ will cancel out) – Azad May 17 '15 at 17:55
First, reformulating the loop rule, we get that since the potential difference between the top and bottom end is the same, all paths from the top to bottom end must end up with the same potential difference, lets say $V$. There are four paths, (1,3), (1,4,5), (2,4,3) and (2,5). Hence, assuming the current through 4 to go to the left (if value is negative, then current goes to right) and that current goes from top to bottom,
$I_1R_1 + I_3R_3 = V$, $I_2R_2 + I_5R_5 = V$, $I_1R_1 + I_4R_4 + I_5R_5 = V$ and $I_2R_2 - I_4R_4 + I_3R_3 = V$
$I_1 + I_2 = I_3 + I_5$, $I_1 = I_4 + I_3$ and $I_2 + I_4 = I_5$
With these equations, we need to solve for $V/I_{net} = V/I_{before} = \frac{V}{I_1+I_2}$. From here it is just mathematics.
|
2019-10-14 01:25:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6022918820381165, "perplexity": 318.7722850892984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986648481.7/warc/CC-MAIN-20191014003258-20191014030258-00498.warc.gz"}
|
https://eprint.iacr.org/2019/561
|
## Cryptology ePrint Archive: Report 2019/561
Faster Bootstrapping of FHE over the integers with large prime message space
Zhizhu Lian and Yupu Hu and Hu Chen and Baocang Wang
Abstract: Bootstrapping of FHE over the integer with large message is a open problem, which is to evaluate double modulo $(c ~\text{mod}~ p )~\mod~ Q$ arithmetic homomorphically for large $Q$. In this paper, we express this double modulo reduction circuit as a arithmetic circuit of degree at most $\theta^2 \log^2\theta/2$, with $O(\theta \log^2\theta)$ multiplication gates, where $\theta= \frac{\lambda}{\log \lambda}$ and $\lambda$ is the security parameter. The complexity of decryption circuit is independent of the message space size $Q$ with a constraint $Q> \theta \log^2\theta/2$.
Category / Keywords: public-key cryptography / Fully homomorphic encryption, Bootstrapping, Restricted depth-3 circuit
|
2019-09-23 14:00:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.780407190322876, "perplexity": 2212.3887392079323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576965.71/warc/CC-MAIN-20190923125729-20190923151729-00435.warc.gz"}
|
https://mechanismsrobotics.asmedigitalcollection.asme.org/mechanicaldesign/article/142/9/093301/1072728/Load-Displacement-Characterization-in-Three?searchresult=1
|
## Abstract
Lamina emergent torsion (LET) joints for use in origami-based applications enables folding of panels. Placing LET joints in series and parallel (formulating LET arrays) opens the design space to provide for tunable stiffness characteristics in other directions while maintaining the ability to fold. Analytical equations characterizing the elastic load–displacement for general serial–parallel formulations of LET arrays for three degrees-of-freedom are presented: rotation about the desired axis, in-plane rotation, and extension/compression. These equations enable the design of LET arrays for a variety of applications, including origami-based mechanisms. These general equations are verified using finite element analysis, and to show variability of the LET array design space, several verification plots over a range of parameters are provided.
## 1 Introduction
Lamina emergent mechanisms (LEMs) are compliant mechanisms formed from a planar material and have some or all of their motion out of the plane [18]. They can be fabricated using two-dimensional processes and rely on the compliance of flexible members formed from the planar material to gain their motion [9]. Apart from common benefits of compliant mechanisms such as high precision, the absence of backlash and wear, and the limited amount of parts [9], LEMs are characterized by a low manufacturing cost, a relatively simple topology, and compactness in the initial state. Characterization of the load–displacement behavior of LEMs is an important step to appropriately design mechanisms with desirable behaviors. They have been used in a variety of applications, including microelectromechanical systems [1,10] and origami-based mechanisms [11,12]. A particularly useful LEM for origami-based mechanisms is the lamina emergent torsion (LET) joint. The LET joint achieves high rotational compliance while minimizing the required footprint of the joint, allowing for origami-like folding of panels and localized joints [1315]. Recent studies have provided many folding and modeling techniques (e.g., elastic origami models [16], truss frameworks [17], topology optimization [18], and mechanical properties of paper folds [19]) and applications (e.g., kinetogami [20], deformable structures [21], and cylinders [22]) for origami-based mechanisms. Systems of panels and low-footprint joint designs are beneficial for applications where the function of the panels depends on the panel size. This feature can be used to create networks of monolithic panels and joints to obtain desired motions and functions (e.g., deployable devices [2325], printed circuit boards [26], and actuation origami [2729]).
LEMs have been placed in series and parallel to facilitate desired behaviors of systems, and work has been done to characterize the global bending stiffness in the desired degree-of-freedom (DOF) [30]. When LEMs are placed in series and parallel, they have been termed lamina emergent arrays (LEAs) or compliant arrays [31]. LEAs also have compliance in other DOFs, and motion in these other DOFs has generally been termed parasitic and undesirable. Specific topological changes of joints, known as surrogate folds for origami-based mechanisms, have been designed to reduce the parasitic motion [32] to behave more like the kinematics of smooth folding structures [33].
This paper describes the formulation of arrays of LET joints (LET arrays, a subset of LEAs) suitable for origami-based mechanisms in which panel-area conservation is desirable and develops a load–displacement relationship set of analytical equations to describe the arrays in three DOFs in the elastic region. The behavior predicted by the theoretical approach is then verified via the finite element analysis (FEA). An integrated simulation environment [34], in which a matlab parametric script guides the computer-aided design (CAD) geometry generations as well as the structural batch simulations, is used to automatically test several LET arrays for each DOF. The rest of the paper is organized as follows: Sec. 2 gives a description of LET array formulations to be studied in this paper, Sec. 3 describes the theoretical load-deflection laws of the LET arrays for each of the considered DOF, Sec. 4 provides a comparison between theoretical and FEA results, and Sec. 5 summarizes the work and presents concluding remarks.
## 2 General Lamina Emergent Torsion Arrays
Figure 1 shows the joint frame and motions considered in this paper. We will refer to the rotation about the x-axis ($Rx(γ)$) as folding (referring to the origami-based engineering nomenclature), translation along the y-axis (Ty(y)) as extension/compression (or for brevity, extension), and rotation about the z-axis ($Rz(β)$) as in-plane rotation. LET joints were introduced as joints well-suited for high rotation in the desired folding motion and where some compliance in other DOF were also observed [13]. A single LET joint was defined as a specific formulation of four compliant torsion segments in series and parallel. Two possible configurations were presented as the Outside LET and the Inside LET which had the same four torsion segments with slight topological differences resulting in different boundary conditions of the segments. The differing boundary conditions affected the off-axis motion, but had no significant effects on the desired folding motion.
Fig. 1
Fig. 1
Close modal
Researchers, to understand these motions, have developed models describing the load-displacement relationships for folding, extension, and in-plane rotation for single joints. A summary of their work is listed in Table 1. The table also shows the gaps in the literature that are important for origami-based design and this paper fills these gaps. Specifically, we give general equations which account for the various boundary conditions for the three DOF for general LET arrays with any combination of series, parallel, and configuration. Figure 2 illustrates motivations for filling these research gaps. Shown in the figure is a system of panels and joints that uses LET arrays to fold a large area into a small volume in the form of an origami pattern [23]. On the left is a close-up of these panels and joints, in the middle is the unfolded system, and on the right are details of the folded configuration. In the side view of the folded configuration, a close-up of a LET array is shown with displacements in the y-axis and about the x and z axes. It is this type of behavior which this paper characterizes to better understand how such systems of panels and joint perform.
Fig. 2
Fig. 2
Close modal
Table 1
State-of-the-art of LET joint equations
Motion
JointFolding (Rx)Extension/compression (Ty)In-plane rotation (Rz)
Outside LETEqs. (1)–(4) [13], Eqs. (1)–(2) [35]Eqs. (24)–(26) [13], Eqs. (1)–(3) [36]Eqs. (3)–(11) [35], Eqs. (6)–(9) [36]
Inside LETEqs. (5)–(7) [13]Eqs. (4)–(5) [36]Eqs. (10)–(12) [36]
Full joint arrayEqs. (5)–(7) [31]xx
General arrayxxx
Motion
JointFolding (Rx)Extension/compression (Ty)In-plane rotation (Rz)
Outside LETEqs. (1)–(4) [13], Eqs. (1)–(2) [35]Eqs. (24)–(26) [13], Eqs. (1)–(3) [36]Eqs. (3)–(11) [35], Eqs. (6)–(9) [36]
Inside LETEqs. (5)–(7) [13]Eqs. (4)–(5) [36]Eqs. (10)–(12) [36]
Full joint arrayEqs. (5)–(7) [31]xx
General arrayxxx
Standard LET joints assume four torsion segments: two in series and two in parallel. Other formulations are possible, which include odd numbers of torsion segments. To reduce confusion when discussing LET arrays made up of torsion segments, we will drop the term LET joints when describing the topology of the arrays. Instead, we will refer to LET array topologies using the following convention: SsPpc. S is the number of torsion segments in series, P is the number of torsion segments in parallel, and c is the configuration (whether the topology resembles an inside or outside LET joint, when applicable). For example, the designation 2s2pi is equivalent to an inside LET joint and 2s2po is the equivalent to an outside LET joint. A single torsion segment has the designation 1s1p and does not have a configuration c designation. Figure 3 shows possible formulations of LET arrays. The 2s2p location in the grid shows both configurations available to the 2s2p designation, with the outside LET on top and the inside LET on bottom. The outside/inside option is only available to designations with both even S and even P. The prototype shown in Fig. 2 indicates the LET array designations used in the left detail.
Fig. 3
Fig. 3
Close modal
General LET arrays can be used to tailor stiffness values in different DOF while maintaining the ability to fold. For example, assuming the geometry of individual segments remain the same, LET arrays of the same S will have the same range of motion before failure in folding but will have different folding, extension, and in-plane rotation stiffness values, as is the case for the LET arrays indicated in Fig. 2. Note that, for example, LET arrays with a P = 2 formulation are equivalent to two separate LET arrays with a P = 1 formulation in terms of folding but are not necessarily equivalent in extension and in-plane rotation as the boundary conditions of the torsion segments can be different for each case. There are unlimited variations of formulating the compliant segments. This paper focuses on the topologies depicted in Fig. 3 for any S and P.
Load–displacement relationships (stiffness rates) for each of the three DOFs considered in this paper are given in this section. It is desirable in the design of systems of panels and joints to obtain equivalent stiffness rates, expressed as scalars, in each of the DOF for each joint. Throughout this section, fundamental dimensions are used and are labeled in Fig. 4. Torsion segments and bending segments are also indicated in the figure, about which will be discussed throughout the paper. lT and wT are the length and width dimensions of torsion segments, respectively. lB and wB are the length and width dimensions of bending segments, respectively. L and W are the length and width dimensions of overall array, respectively, and are functions of the dimensions of the segments in bending and torsion. Throughout these analyses, we assume isotropic material properties. These relationships may also be applicable to symmetric and balanced laminate composites that exhibit isotropic behavior, but it is likely that other considerations, such as for coupling behavior, would be required.
Fig. 4
Fig. 4
Close modal
While LET arrays with varying dimensions are possible, we assume that for a particular LET array, the bending and torsion segments, separately, are of equal dimensions and therefore stiffness. This assumption allows for simplification of the predictive models. However, if LET arrays of varying dimensions are to be designed, the following guidelines are suggested: (1) to avoid unnecessary stress risers, ensure that dimensions of the torsion segments are the same along any particular row and (2) treat rows of identical geometry as unique LET arrays and solve the displacement of each LET array in series.
### 3.1 Folding.
We assume that the torsional stiffness kT is the same for each torsion bar. We assume that the bending stiffness for each segment in bending on the outside of the array is kB and that those on the inside of the array are twice as wide and therefore consists of two springs. A generalized analogous spring model is built by summing elements within rows in parallel and then the rows in series (Fig. 5). The resulting load–displacement relationship in folding for LET arrays is as follows:
$Mx=Keq,rxγ$
(1)
where $γ$ is the fold angle and Keq,rx is the stiffness rate found when adding the springs in parallel and in series:
$Keq,rx=PkTkBS(kT+kB)+kT$
(2)
A symmetric polar moment of inertia JT equation for beam torsional stiffness kT is given by [37]:
$kT=fJTGlT$
(3)
where f is a compensation function [37] to ensure accuracy of a symmetric torsional stiffness JT equation:
$f=1.167z5+29.49z4+30.9z3+100.9z2+30.38z+29.41z5+25.91z4+41.58z3+90.43z2+41.74z+25.21$
(4)
and
$JT=2t3wT37t2+7wT2$
(5)
where z = wT/t and G is the modulus of rigidity of the material. The bending stiffness of the segments (Euler–Bernoulli) in bending is
$kB=EIBlB$
(6)
where E is the elastic modulus of the material and IB = wBt3/12 is the moment of inertia of the segment in bending.
Fig. 5
Fig. 5
Close modal
### 3.2 Extension/Compression.
An outside LET torsion segment has a fixed-guided boundary condition, while inside LET torsion segment has a fixed-clamped condition. Here, we consider LET arrays that have a mixture of fixed-guided and fixed-clamped torsion segments, an example of which is shown in Fig. 6. The LET array has a 3s2p designation where four segments are fixed guided and two are fixed clamped. The differences between fixed guided and fixed clamped, in terms of loading conditions and mechanical responses, are discussed in Ref. [38].
Fig. 6
Fig. 6
Close modal
To model the equivalent extension/compression spring stiffness Ky,eq of LET arrays, we describe individual segment stiffness rates along the rows first (adding in parallel) and then across the rows (adding in series). Unlike the uniform distribution of stiffness rates for LET arrays in folding, LET arrays in extension/compression have a nonuniform distribution due to the mixed boundary conditions. Thus, additional row types are introduced as follows (see Fig. 7): a boundary row is a torsion row that contains a fixed-clamped segment, and if exists, it does so only at the transition from panel to joint (maximum of two in an array). A regular row is a torsion row that does not have fixed-clamped segments along the left and right edges of the array, and if exists, it may repeat (no maximum, but dependent on S). A bending segment row is composed of only bending segments. Figure 7 shows a LET array, which has two boundary rows, two regular rows, and five bending segment rows (as another example, the array in Fig. 6 has one boundary and two regular rows).
Fig. 7
Fig. 7
Close modal
The force–displacement relationship is given as follows:
$Fy=Ky,eqy$
(7)
where y is in the direction indicated in Fig. 1, and the spring stiffness Ky,eq for a LET array in extension/compression is as follows:
$Ky,eq=kboundkregkbendnregkboundkbend+nboundkregkbend+nbendkregkbound$
(8)
where kbend is the axial stiffness of bending segments:
$kbend=2PEtwBlB$
(9)
and the number of bending segment rows is as follows:
$nbend=S+1$
(10)
The number of boundary rows nbound is as follows:
(11)
For example, nbound of the LET array in Fig. 7 is 2 because S > 1 and P is odd. The number of regular rows nreg is as follows:
$nreg=S−nbound$
(12)
The boundary row force Fbound is
$Fbound=kboundybound$
(13)
where ybound is the displacement of the boundary row and the boundary row stiffness kbound is
$kbound=nbfgkfg|yfg=ybound+nbfckfc|yfc=ybound$
(14)
where yfg and yfc are the displacements of fixed-guided and fixed-clamped segments, respectively, and the regular row force Fr is
$Fr=kregyr$
(15)
where the regular row stiffness kreg is
$kreg=nrfgkfg|yfg=yr+nrfckfc|yfg=yr$
(16)
where nbfg is the number of fixed-guided segments in a boundary row:
(17)
For example, nbfg of the LET array in Fig. 7 is 1 because S > 1 and P is odd. nbfc is the number of fixed-clamped segments in a boundary row:
$nbfc=P−nbfg$
(18)
and where nrfg is the number of fixed-guided segments in a regular row:
$nrfg={1,P=12,otherwise$
(19)
For example, nrfg of the LET array in Fig. 7 is 2 because P > 1. nrfc is the number of fixed-clamped segments in a regular row:
$nrfc=P−nrfg$
(20)
The stiffness coefficient kfg of a fixed-guided beam assuming large deflections is nonlinear. Using equations for a fixed-guided beam from Ref. [9], a force–displacement relationship is found as follows:
$Ffg=4KTarcsinyfgγfglT3γfg2lT2−yfg2−lT(γfg−1)$
(21)
where $KT=2γfgKΘEIy/lT$ (the pseudo-rigid-body model (PRBM) spring constant for a fixed-guided beam) and $γfg$ and $KΘ$ are the characteristic radius factor and the stiffness coefficient and are often approximated as 0.85 and 2.65, respectively [9]. E is the material modulus of elasticity and Iy is the area moment of inertia for the torsion segment in bending in the y-direction of the array ($twT3/12$). The stiffness kfg is the derivative of Eq. (21) with respect to yfg:
$kfg=∂Ffg∂yfg$
(22)
The stiffness coefficient kfc of a fixed-clamped beam assuming large deflections is also nonlinear. The force–displacement relationship is as follows:
$Ffc=KAΔlT∂ΔlT∂yfc+2KTΘfc∂Θfc∂yfc1+LT2∂Θfc∂yfc$
(23)
where KA is the torsion segment axial stiffness:
$KA=EATγfclT+ΔlT$
(24)
where AT is the axial cross-sectional area of the torsion segment as a function of elongation and Poisson’s ratio $ν$:
$AT=twTlT(1+ΔlTlT(1−2ν))lT+ΔlT$
(25)
where ΔLT is the change in the length of the fixed-clamped beam:
$ΔLT=γfc2lT2+yfc2−γfclT$
(26)
and Θfc is the PRBM angle:
$Θfc=arctanyfcγfclT$
(27)
The kinematic coefficients are as follows:
$∂ΔlT∂yfc=yfcγfc2lT2+yfc2$
(28)
and
$∂Θfc∂yfc=γfclTγfc2lT2+yfc2$
(29)
The stiffness kfc value is the derivative of Eq. (23) with respect to yfc:
$kfc=∂Ffc∂yfc$
(30)
If there are both regular and boundary rows, respective displacements yr and yb are unknown because the stiffness values of the rows are nonlinear. Since the rows are assumed to be springs in series, the force is constant for each row:
$Fb=Fr$
(31)
and the regular row displacement can be parameterized as follows:
$yr=y−nbybnr$
(32)
such that the displacements yr and yb can be solved for using Eq. (31) as a function of joint displacement y. The evaluation of the force–displacement relationship is not straightforward. Algorithm 1 is presented for clarity.
#### Evaluating the force-displacement relationship F y
Algorithm 1
1: Evaluate integers (Eqs. (9)(12) and (17)(20)
2: for allydo
3: Formulate boundary row force $Fb$ equation (substitute (Eqs. (22) and (30) (using Eqs. (23) and (24)(26)) into Eq. (14)) into Eq. (13))
4: Formulate regular row force $Fr$ equation (substitute (Eqs. 22 and 30 (using Eqs. (23) and (24)(26)) into Eq. (16)) into Eq. (15))
5: Equate forces $Fb$ and $Fr$ (Eq. (31)), parameterize $yr$ (substitute Eq. (32) into Eq. (31)), and solve for $yb$
6: Evaluate row stiffness values $kb$ and $kr$ (Eqs. (14) and (16)) using updated $yb$ and $yr$
7: Evaluate equivalent spring stiffness $Ky,eq$ (Eq. (8)) by using updated row stiffness values $kb$ and $kr$
8: Evaluate force $Fy$ (Eq. (7)) by using updated equivalent spring stiffness $Ky,eq$
9: return$Fy$
### 3.3 In-Plane Rotation.
Different spring models are used for in-plane rotation of LET arrays for the cases when P = 1 and P > 1. For P = 1, moment-loaded beams are added in series. For P > 1, a proposed analogous spring model is developed. These cases are discussed below.
#### 3.3.1 In-Plane Rotation When P = 1.
For the case where P = 1, a moment–displacement relationship is found using a spring analogy of beams with moments applied at the end in series [9]:
$Mz=Keq,rzβ$
(33)
The equivalent spring stiffness rate is as follows:
$Keq,rz=γKΘEIylT1Scθ$
(34)
where $γ=0.7346$, KΘ = 2.0643, and cθ = 1.5164 for this case.
#### 3.3.2 In-Plane Rotation When P > 1.
A simplified spring model that assumes column extension/compression relative to a neutral axis is used to approximate the torque/displacement relationship of LET arrays of P > 1. Figure 8 illustrates how a particular LET array can be modeled as separate columns in extension and compression. Equivalent column spring stiffness values are found by adding rows of a column in series. A rotational displacement load is applied about the neutral axis, and the rotational displacement $β$ causes extension/compression displacements in the columns resulting in column forces Fi. We assume small angles such that the columns remain vertical. These forces are multiplied by their respective distances from the neutral axis ai and are summed to equate to the resultant moment Mz:
$Mz=∑i=1PFiai$
(35)
The equivalent stiffness is as follows:
$Keq,rz=∂Mz∂β$
(36)
Each LET array formulation has a left boundary column and a right boundary column and has nrc regular columns, where
$nrc=P−2$
(37)
The column forces are composed of combinations of a left boundary column force Flbc, regular column forces Frc,i, and a right boundary column force Frbc, resulting from displacements acting on the equivalent spring rates klbc, krc, and krbc, respectively. The left boundary column force Flbc is equal to the force from a fixed-guided segment Flbcfg or the force from a fixed-clamped segment Flbcfc. They are as follows:
$Flbcfg=kfg|yfg=ylbcfgylbcfg$
(38)
$Flbcfc=kfc|yfc=ylbcfcylbcfc$
(39)
By parameterizing ylbcfg as follows:
$ylbcfg=y1−nlbcfcylbcfcnlbcfg$
(40)
and by equating Eqs. (38) and (39), ylbcfc can be solved for, similar to steps 5–6 of Algorithm 1. This process is repeated for the right column force, where
$Frbcfg=kfg|yfg=yrbcfgyrbcfg$
(41)
and
$Frbcfc=kfc|yfc=yrbcfcyrbcfc$
(42)
and where
$yrbcfg=y1−nrbcfcyrbcfcnrbcfg$
(43)
Since the regular columns are composed of entirely of fixed-clamped segments, the resulting force can be evaluated directly as follows:
$Frcfc=1Skfc|yfc=y/Sy$
(44)
where ylbcfg and ylbcfc are the y displacement of fixed-guided and fixed-clamped segments, respectively, of the left boundary column; yrbcfg and yrbcfc are the y displacement of fixed-guided and fixed-clamped segments, respectively, of the right boundary column; and yrcfc is the y displacement of fixed-clamped segments of the regular column. nlbcfc is the number of fixed-clamped segments in the left boundary column:
(45)
nlbcfg is the number of fixed-guided segments in the left boundary column:
$nlbcfg=S−nlbcfc$
(46)
where nrbcfc is the number of fixed-clamped segments in the right boundary column:
(47)
and nrbcfg is the number of fixed-guided segments in the right boundary column:
$nrbcfg=S−nrbcfc$
(48)
Fig. 8
Fig. 8
Close modal
The moment arms ai are the distances from the neutral axis Cx to the forces Fi. The neutral axis is found using
$Cx=∑i=1PFixi∑i=1PFi$
(49)
where xi is the distance from one side of the array to a column force:
$xi=lT2+wB+(i−1)(lT+2wB)$
(50)
and ai is given as follows:
$ai=|xi−Cx|$
(51)
and the yi displacement for each column is given as follows:
$yi=aisinβ$
(52)
The instantaneous spring rates (kfc and kfg) are the same as before (Eqs. (30) and (22)). Due to the nonlinearity of the spring rates, the neutral axis shifts as a function of displacement. Algorithm 2 is a proposed numerical algorithm to determine the location of the neutral axis, which is required to calculate the moment–rotation relationship. The values for $γfg$ and $γfc$ are substituted in $γfgz$ and $γfcz$, respectively, for more accurate in-plane rotation results.
##### Locating the neutral axis C x
Algorithm 2
1: for all$β$do
2: $Cx←L2$ ▹ begin by assuming symmetry
3: Solve for $Flbc$, $Frc$, and $Frbc$
4: ifS is even and P is odd then
5: Calculate $Cx,new$
6: $δCx←|Cx−Cx,new|$
7: $Cx←(Cx,new+Cx)/2$
8: while$δCx>ϵ$do ▹ $ϵ$ is a predetermined convergence (i.e., $ϵ=1e−6m$)
9: Solve for $Flbc$, $Frc$, and $Frbc$ using $Cx$
10: Calculate $Cx,new$
11: $δCx←|Cx−Cx,new|$
12: $Cx←(Cx,new+Cx)/2$
13: return$Cx$
14: else
15: $Cx←Cx,new$
16: return$Cx$
## 4 Finite Element Analysis Comparison
To verify the analytical models, several LET arrays were tested using an integrated software framework [34], in which a matlab script manages the parametric study, a solidworks macro updates the geometry of the LET arrays, and ansys apdl provides the force/torque–deflection characteristics of each configuration via batch mode. A routine, overseen by matlab, was defined to automatically export the CAD files from solidworks and, subsequently, to perform the structural batch simulations of each candidate. In line with the proposed theoretical models, three DOFs were analyzed, namely $Rx(γ)$, Ty(y), and $Rz(β)$. For each of the DOF, a free mesh with second-order tetrahedral elements ansys solid187 was defined with a maximum element size of t. This element type is particularly suitable for bending-dominated problems, where the shear locking effect is undesired due to its effect on the bending stiffness. In all simulations, the nonlinear geometry (NLGEOM) option was turned on. As for the employed material, aluminum alloy 7075 (heat treated) is considered, owing to its high strength-to-modulus ratio [9]. The Young’s modulus and Poisson’s ratio are, respectively, E = 71.7 GPa and $ν=0.33$. Concerning the boundary conditions, each LET array was fixed to the ground at one end and guided in a pure translation (along the y-axis) or rotation (about the x or z axes) on the other end. Since SOLID187 elements do not have rotational DOF, MPC184 elements were used to apply kinematic constraints (spider web of beams) between the solid model’s nodes and a master node (as shown in Fig. 9), onto which rotational displacement loads can be applied. Referring to Figs. 2 and 4, the tested array parameters were lB = 2 mm; WB = 4 mm; WT = 2 mm; t = 1 mm; lT = 10 mm, lT = 20 mm, or lT = 30 mm; S (ranging from 1 to 3); P (ranging from 1 to 5); and c (either i or o, where applicable).
Fig. 9
Fig. 9
Close modal
Figure 10 shows the force/torque–deflection relationships for arrays with S = 2, lT = 10 mm and various P with the standard $γ=0.85$. The modeled and FEA results have significant differences, with the modeled values higher than the FEA in each case, as shown in Fig. 11. For design, this may not be undesirable when considering strength. However, when considering behavior of systems that implement these types of arrays, more accurate predictions of the stiffness values are desired. A possible explanation for the overestimated stiffness is the inaccurately assumed boundary conditions. The fixed-guided and fixed-clamped boundary conditions do not account for the compliance of the bending segments or of the regions not modeled. Figure 12 shows an FEA strain plot of a 2s3p LET array, indicating the regions where forces and moments are transmitted, but whose spring models are not considered even though strain is observed. A more accurate representation of the overall deflection would include the compliance of these segments. One way to model this compliance in the Ty and Mz DOF is to modify the characteristic radius factor $γ$, effectively softening the assumed rigid boundaries of these regions (hence, the designated $γfg$ or $γfc$ variables in Eqs. (21), (23), and others). In the case of the arrays in Fig. 10, increasing the effective length of the torsion segment would decrease the array stiffness, bringing them closer to the FEA results. A way to model the compliance of these segments for the Mx DOF is to effectively lengthen the bending segments so that they “extend” into these regions, which we have done by adding 1/2wT (1/4wT to both sides), making Eq. (14) equate to kB = E IB/(lB + 1/2wT). By making these modifications, the errors between the FEA and modeled results are reduced (see Fig. 11). Table 2 lists the modified characteristic radius factors (e.g., $γfg$) used for the associated lT and S values and their corresponding figures. In each row of the table, the only geometry variation was lT. The radius factors listed in the table were found using an optimization routine defined as follows:
$minimize∑Errorwith respect toγfg,γfc,γfgz,γfczsubject to0.1≤γfg,γfc,γfgz,γfcz≤2.0for eachlT∈{1.0,2.0,3.0}cm$
where Error is the absolute relative error of the load–displacement FEA and modeled results for each increment, DOF, and value of lT:
$Error=|Ty,M−Ty,FTy,F|$
(53)
where the subscripts M and F refer to modeled and FEA results, respectively, and Rz is substituted for Ty for the in-plane rotation DOF. The trend shows that for shorter torsion segments, the radius factors were increased from the standard 0.85 and that for longer torsion segments, the factors were decreased. By using the modified characteristic radius factors, the figures listed in Table 2 (i.e., Figs. 1321, representing the load–displacement relationships for all the tested configurations) suggest that accurate stiffness estimates are achieved by using the presented model for LET arrays.
Fig. 10
Fig. 10
Close modal
Fig. 11
Fig. 11
Close modal
Fig. 12
Fig. 12
Close modal
Fig. 13
Fig. 13
Close modal
Fig. 14
Fig. 14
Close modal
Fig. 15
Fig. 15
Close modal
Fig. 16
Fig. 16
Close modal
Fig. 17
Fig. 17
Close modal
Fig. 18
Fig. 18
Close modal
Fig. 19
Fig. 19
Close modal
Fig. 20
Fig. 20
Close modal
Fig. 21
Fig. 21
Close modal
Table 2
List of parameters, figures, and modified characteristic radius factors
lT (cm)$γfg$$γfc$$γfgz$$γfcz$SFigure
1.01.291.061.181.161, 2, 313, 14, 15
2.00.740.730.640.821, 2, 316, 17, 18
3.00.610.640.480.801, 2, 319, 20, 21
lT (cm)$γfg$$γfc$$γfgz$$γfcz$SFigure
1.01.291.061.181.161, 2, 313, 14, 15
2.00.740.730.640.821, 2, 316, 17, 18
3.00.610.640.480.801, 2, 319, 20, 21
One exception to the modified radius factors listed in Table 2 was the case when P = 2 for the Rz DOF. The spring model presented in Sec. 3.3.2 assumes that the columns remain vertical during loading with the global rotation occurring as the left and right columns extend in opposite directions. For P = 2, however, there are no other segments in parallel to enforce vertical displacement, and so the model overestimates the stiffness by artificially enforcing the vertical constraint. Therefore, a separate optimization was performed for the case when P = 2 for each of the lT values. The optimal radius factors were as follows: (for lT = 10 mm) $γfg=γfc=1.01$, (for lT = 20 mm) $γfg=0.53$ and $γfc=0.56$, and (for lT = 30 mm) $γfg=0.40$ and $γfc=0.49$. These values are especially useful when considering the traditional outside and inside LET joints (which are equivalent to 2s2po and 2s2pi LET arrays, respectively).
It can be seen that for the Ty and Rz DOF, the force/torque–displacement relationships exhibit nonlinear behavior when S is low and P is high (see Figs. 13, 16, and 19) for both the FEA and modeled results. This is to be expected when considering the nonlinear terms in Eqs. (21) and (23) for these high-stiffness array formulations. It can also be seen that the errors are higher for these cases, especially for the Rz DOF. A possible explanation for this error in the Rz model is that for these stiff cases, the simplifying spring models do not account for compliance at the boundaries and at the interactions between the bending and torsion segments. This inaccuracy in model assumptions is less apparent in the less-stiff and linear regimes of the arrays. Other cases (higher S) show excellent agreement between the analytical and FEA results.
The comparison between the FEA and modeled results verified that the models can accurately characterize general LET arrays for elastic load–displacement behaviors. These models represent an important tool for designers of systems that incorporate LET arrays to enable design of behaviors of such systems. The equations used assume the material to remain in the elastic region throughout the motion and are particularly useful in exploring the design space of the formulation of the LET arrays. When finalizing a design, consideration of the elastic limits is required. Equations of maximum stress of LET joints [13,39] can be applied to determine the stresses of LET arrays.
## 5 Conclusion
In this paper, LET arrays were defined as torsion segments in series and in parallel, which can be used for origami-based applications and others. The load–displacement relationships for the arrays were characterized in three DOFs enabling the design of arrays that consider off-axis motion.1 The proposed models were verified using FEA for several LET array configurations resulting from an automatic framework. By using the standard $γ=0.85$ value, conservative stiffness estimates are available. If more accurate stiffness estimated are desired, modified radius factors may be used. By using these modified values, load–displacement models had good agreement with the FEA results. Based on intuition and optimization results, shorter torsion segments may use higher $γ$ values and longer torsion segments may use lower $γ$ values. In some cases, the nonlinear behavior of the arrays were sources of error between the modeled and FEA results. Because the use of LET arrays is becoming more prevalent, the analytical models presented in this work enable their design. By using LET arrays, a designer can tailor the stiffness behavior for folding, extension, and in-plane rotation to realize folding and origami-based mechanisms. The provided stiffness values can be used for multibody dynamic or other system-modeling techniques in the design of such systems. By providing several plots of the force/torque–displacement relationships to show model verification, we have also showed the variability of the design space available to LET arrays.
## Footnote
1
To facilitate rapid implementation, a repository containing the proposed models for the LET array analysis can be downloaded from the source: http://dx.doi.org/10.17632/bvcjbcg3cs.1
## Acknowledgment
The authors gratefully acknowledge critical discussions on in-plane rotation stiffness with Jared Butler, Brandon Sargent, Philip Stevenson, and Collin Ynchausti and the finite element modeling assistance from Kenny Seymour.
## Funding Data
• NASA Space Technology Research Fellowship (Grant No. 80NSSC17K0145; Funder ID: 10.13039/100000104).
## References
1.
Jacobsen
,
J.
,
Winder
,
B.
,
Howell
,
L.
, and
Magleby
,
S.
,
2010
, “
Lamina Emergent Mechanisms and Their Basic Elements
,”
ASME J. Mech. Rob.
,
2
(
1
), pp.
1
9
. 10.1115/1.4000523
2.
Qiu
,
C.
,
Qi
,
P.
,
Liu
,
H.
,
Althoefer
,
K.
, and
Dai
,
J. S.
,
2016
, “
Six-Dimensional Compliance Analysis and Validation of Orthoplanar Springs
,”
ASME J. Mech. Des.
,
138
(
4
), p.
042301
. 10.1115/1.4032580
3.
Qiu
,
L.
,
Huang
,
G.
, and
Yin
,
S.
,
2017
, “
Design and Performance Analysis of Double C-Type Flexure Hinges
,”
ASME J. Mech. Rob.
,
9
(
4
), p.
044503
. 10.1115/1.4036609
4.
Qiu
,
L.
,
Yin
,
S.
, and
Xie
,
Z.
,
2016
, “
Failure Analysis and Performance Comparison of Triple-LET and LET Flexure Hinges
,”
Eng. Failure Anal.
,
66
, pp.
35
43
. 10.1016/j.engfailanal.2016.04.006
5.
Wang
,
R.
, and
Zhang
,
X.
,
2017
, “
Optimal Design of a Planar Parallel 3-DOF Nanopositioner With Multi-Objective
,”
Mech. Mach. Theory
,
112
, pp.
61
83
. 10.1016/j.mechmachtheory.2017.02.005
6.
Gollnick
,
P. S.
,
Magleby
,
S. P.
, and
Howell
,
L. L.
,
2011
, “
An Introduction to Multilayer Lamina Emergent Mechanisms
,”
ASME J. Mech. Des.
,
133
(
8
), p.
081006
. 10.1115/1.4004542
7.
Alfattani
,
R.
, and
Lusk
,
C.
,
2018
, “
A Lamina-Emergent Frustum Using a Bistable Collapsible Compliant Mechanism
,”
ASME J. Mech. Des.
,
140
(
12
), p.
125001
. 10.1115/1.4037621
8.
Lobontiu
,
N.
,
Gress
,
T.
,
Munteanu
,
M. G.
, and
Ilic
,
B.
,
2019
, “
Stiffness Design of Circular-Axis Hinge, Self-Similar Mechanism With Large Out-of-Plane Motion
,”
ASME J. Mech. Des.
,
141
(
9
), p.
092302
. 10.1115/1.4042792
9.
Howell
,
L. L.
,
2001
,
Compliant Mechanisms
,
John Wiley & Sons
,
New York
.
10.
Aten
,
Q. T.
,
Jensen
,
B. D.
, and
Howell
,
L. L.
,
2012
, “
Geometrically Non-Linear Analysis of Thin-Film Compliant Mems Via Shell and Solid Elements
,”
Finite Elements Anal. Des.
,
49
(
1
), pp.
70
77
. 10.1016/j.finel.2011.08.022
11.
Francis
,
K. C.
,
Blanch
,
J. E.
,
Magleby
,
S. P.
, and
Howell
,
L. L.
,
2013
, “
Origami-Like Creases in Sheet Materials for Compliant Mechanism Design
,”
Mech. Sci.
,
4
(
2
), pp.
371
380
. 10.5194/ms-4-371-2013
12.
Pehrson
,
N. A.
,
Magleby
,
S. P.
,
Lang
,
R. J.
, and
Howell
,
L. L.
,
2016
, “
Introduction of Monolithic Origami With Thick-Sheet Materials
,”
Proceedings of the International Association for Shell and Spatial Structures Annual Symposium
,
Tokyo, Japan
,
Sept. 26–30
, pp.
1
10
.
13.
Jacobsen
,
J. O.
,
Chen
,
G.
,
Howell
,
L. L.
, and
Magleby
,
S. P.
,
2009
, “
Lamina Emergent Torsional (LET) Joint
,”
Mech. Mach. Theory
,
44
(
11
), pp.
2098
2109
. 10.1016/j.mechmachtheory.2009.05.015
14.
Xie
,
Z.
,
Qiu
,
L.
, and
Yang
,
D.
,
2017
, “
Design and Analysis of Outside-Deployed Lamina Emergent Joint (OD-LEJ)
,”
Mech. Mach. Theory
,
114
, pp.
111
124
. 10.1016/j.mechmachtheory.2017.03.011
15.
Xie
,
Z.
,
Qiu
,
L.
, and
Yang
,
D.
,
2018
, “
Design and Analysis of a Variable Stiffness Inside-Deployed Lamina Emergent Joint
,”
Mech. Mach. Theory
,
120
, pp.
166
177
. 10.1016/j.mechmachtheory.2017.09.023
16.
Saito
,
K.
,
Tsukahara
,
A.
, and
Okabe
,
Y.
,
2015
, “
New Deployable Structures Based on An Elastic Origami Model
,”
ASME J. Mech. Des.
,
137
(
2
), p.
021402
. 10.1115/1.4029228
17.
Chen
,
Y.
,
Sareh
,
P.
,
Yan
,
J.
,
Fallah
,
A. S.
, and
Feng
,
J.
,
2019
, “
An Integrated Geometric-Graph-Theoretic Approach to Representing Origami Structures and Their Corresponding Truss Frameworks
,”
ASME J. Mech. Des.
,
141
(
9
), p.
091402
. 10.1115/1.4042791
18.
Gillman
,
A. S.
,
Fuchi
,
K.
, and
Buskohl
,
P. R.
,
2019
, “
Discovering Sequenced Origami Folding Through Nonlinear Mechanics and Topology Optimization
,”
ASME J. Mech. Des.
,
141
(
4
), p.
041401
. 10.1115/1.4041782
19.
,
C.
,
Cavoret
,
J.
,
Dureisseix
,
D.
,
Jean-Mistral
,
C.
, and
Ville
,
F.
,
2016
, “
An Experimental Study and Model Determination of the Mechanical Stiffness of Paper Folds
,”
ASME J. Mech. Des.
,
138
(
4
), p.
041401
. 10.1115/1.4032629
20.
Gao
,
W.
,
Ramani
,
K.
,
Cipra
,
R. J.
, and
Siegmund
,
T.
,
2013
, “
Kinetogami: A Reconfigurable, Combinatorial, and Printable Sheet Folding
,”
ASME J. Mech. Des.
,
135
(
111009
), p.
111009
. 10.1115/1.4025506
21.
Geiss
,
M. J.
,
Boddeti
,
N.
,
Weeger
,
O.
,
Maute
,
K.
, and
Dunn
,
M. L.
,
2019
, “
Combined Level-Set-XFEM-Density Topology Optimization of Four-Dimensional Printed Structures Undergoing Large Deformation
,”
ASME J. Mech. Des.
,
141
(
5
), p.
051405
. 10.1115/1.4041945
22.
Bös
,
F.
,
Wardetzky
,
M.
,
Vouga
,
E.
, and
Gottesman
,
O.
,
2016
, “
On the Incompressibility of Cylindrical Origami Patterns
,”
ASME J. Mech. Des.
,
139
(
2
), p.
021404
.
23.
Pehrson
,
N. A.
,
Smith
,
S. P.
,
Ames
,
D. C.
,
Magleby
,
S. P.
, and
Arya
,
M.
,
2019
, “
Self-Deployable, Self-Stiffening, and Retractable Origami-Based Arrays for Spacecraft
,”
AIAA Scitech 2019 Forum
,
San Diego, CA
,
Jan. 7–11
,
Paper No. AIAA 2019–0484
.
24.
Huang
,
H.
,
Li
,
B.
,
Zhang
,
T.
,
Zhang
,
Z.
,
Qi
,
X.
, and
Hu
,
Y.
,
2019
, “
Design of Large Single-Mobility Surface-Deployable Mechanism Using Irregularly Shaped Triangular Prismoid Modules
,”
ASME J. Mech. Des.
,
141
(
1
), p.
012301
. 10.1115/1.4041178
25.
Zirbel
,
S. A.
,
Lang
,
R. J.
,
Thomson
,
M. W.
,
Sigel
,
D. A.
,
Walkemeyer
,
P. E.
,
Trease
,
B. P.
,
Magleby
,
S. P.
, and
Howell
,
L. L.
,
2013
, “
Accommodating Thickness in Origami-Based Deployable Arrays
,”
ASME J. Mech. Des.
,
135
(
11
), p.
111005
. 10.1115/1.4025372
26.
Defigueiredo
,
B. P.
,
Zimmerman
,
T. K.
,
Russell
,
B. D.
, and
Howell
,
L. L.
,
2018
, “
Regional Stiffness Reduction Using Lamina Emergent Torsional Joints for Flexible Printed Circuit Board Design
,”
ASME J. Electron. Packag.
,
140
(
4
), p.
041001
. 10.1115/1.4040552
27.
Fuchi
,
K.
,
Buskohl
,
P. R.
,
Bazzan
,
G.
,
Durstock
,
M. F.
,
Reich
,
G. W.
,
Vaia
,
R. A.
, and
Joo
,
J. J.
,
2015
, “
Origami Actuator Design and Networking Through Crease Topology Optimization
,”
ASME J. Mech. Des.
,
137
(
9
), p.
091401
. 10.1115/1.4030876
28.
Guang
,
C.
, and
Yang
,
Y.
,
2018
, “
An Approach to Designing Deployable Mechanisms Based on Rigid Modified Origami Flashers
,”
ASME J. Mech. Des.
,
140
(
8
), p.
082301
. 10.1115/1.4040178
29.
Klett
,
Y.
,
2018
, “
PALEO: Plastically Annealed Lamina Emergent Origami
,”
ASME IDETC/CIE International Design Engineering Technical Conferences and Computers and Information in Engineering Conference
,
,
Aug. 26–29
, p. V05BT07A062.
30.
Nelson
,
T. G.
,
Bruton
,
J. T.
,
Rieske
,
N. E.
,
Walton
,
M. P.
,
Fullwood
,
D. T.
, and
Howell
,
L. L.
,
2016
, “
Material Selection Shape Factors for Compliant Arrays in Bending
,”
Mater. Des.
,
110
, pp.
865
877
. 10.1016/j.matdes.2016.08.056
31.
Nelson
,
T. G.
,
Lang
,
R. J.
,
Pehrson
,
N. A.
,
Magleby
,
S. P.
, and
Howell
,
L. L.
,
2016
, “
Facilitating Deployable Mechanisms and Structures Via Developable Lamina Emergent Arrays
,”
ASME J. Mech. Rob.
,
8
(
3
), p.
031006
. 10.1115/1.4031901
32.
Delimont
,
I. L.
,
Magleby
,
S. P.
, and
Howell
,
L. L.
,
2015
, “
A Family of Dual-Segment Compliant Joints Suitable for Use as Surrogate Folds
,”
ASME J. Mech. Des.
,
137
(
9
), p.
092302
. 10.1115/1.4030875
33.
Peraza Hernandez
,
E. A.
,
Hartl
,
D. J.
, and
Lagoudas
,
D. C.
,
2016
, “
Kinematics of Origami Structures With Smooth Folds
,”
ASME J. Mech. Rob.
,
8
(
6
), p.
061019
. 10.1115/1.4034299
34.
Bilancia
,
P.
,
Berselli
,
G.
,
Bruzzone
,
L.
, and
Fanghella
,
P.
,
2019
, “
A CAD/CAE Integration Framework for Analyzing and Designing Spatial Compliant Mechanisms Via Pseudo-Rigid-Body Methods
,”
Rob. Comput. Integrated Manuf.
,
56
, pp.
287
302
. 10.1016/j.rcim.2018.07.015
35.
Boehm
,
K. J.
,
Gibson
,
C. R.
,
Hollaway
,
J. R.
, and
Espinosa-Loza
,
F.
,
2016
, “
A Flexure-Based Mechanism for Precision Adjustment of National Ignition Facility Target Shrouds in Three Rotational Degrees of Freedom
,”
Fusion Sci. Technol.
,
70
(
2
), pp.
265
273
. 10.13182/FST15-217
36.
Chen
,
G.
,
Magleby
,
S. P.
, and
Howell
,
L. L.
,
2018
, “
Membrane-Enhanced Lamina Emergent Torsional Joints for Surrogate Folds
,”
ASME J. Mech. Des.
,
140
(
6
), p.
062303
. 10.1115/1.4039852
37.
Chen
,
G.
, and
Howell
,
L. L.
,
2009
, “
Two General Solutions of Torsional Compliance for Variable Rectangular Cross-Section Hinges in Compliant Mechanisms
,”
Precision Eng.
,
33
(
3
), pp.
268
274
. 10.1016/j.precisioneng.2008.08.001
38.
Howell
,
L. L.
,
DiBiasio
,
C. M.
,
Cullinan
,
M. A.
,
Panas
,
R. M.
, and
Culpepper
,
M. L.
,
2010
, “
A Pseudo-Rigid-Body Model for Large Deflections of Fixed-Clamped Carbon Nanotubes
,”
ASME J. Mech. Rob.
,
2
(
3
), p.
034501
. 10.1115/1.4001726
39.
Chen
,
G.
, and
Howell
,
L. L.
,
2018
, “
Symmetric Equations for Evaluating Maximum Torsion Stress of Rectangular Beams in Compliant Mechanisms
,”
Chin. J. Mech. Eng.
,
31
(
14
), pp.
1
1
. 10.3901/JME.2018.01.001
|
2022-05-24 09:04:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 127, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6637329459190369, "perplexity": 3832.3963075965253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662570051.62/warc/CC-MAIN-20220524075341-20220524105341-00343.warc.gz"}
|
https://zbmath.org/?q=an:1032.62063
|
## Existence conditions for the uniformly minimum risk unbiased estimators in a class of linear models.(English)Zbl 1032.62063
Summary: This paper studies the existence of uniformly minimum risk unbiased (UMRU) estimators of parameters in a class of linear models with an error vector having multivariate normal distribution or $$t$$-distribution, which includes the growth curve model, the extended growth curve model, the seemingly unrelated regression equations model, the variance components model, and so on. Necessary and sufficient existence conditions are established for UMRU estimators of the estimable linear functions of regression coefficients under convex losses and matrix losses, respectively. Under the (extended) growth curve model and the seemingly unrelated regression equations model with normality assumption, the conclusions given in the literature can be derived by applying the general results in this paper. For the variance components model, the necessary and sufficient existence conditions are reduced as terse forms.
### MSC:
62J05 Linear regression; mixed models 62H12 Estimation in multivariate analysis
Full Text:
### References:
[1] S. Geisser, Growth curve analysis, in: P.R. Krishnaih (Ed.), Handbook of Statistics, Vol. 1, North-Holland, Amsterdam, pp. 89-115. · Zbl 0483.62038 [2] Kubokawa, T., Double shrinkage estimation of common coefficients in two regression equations with heteroscedasticity, J. multivariate anal., 67, 169-189, (1998) · Zbl 0953.62008 [3] Lehmann, E.L., Theory of point estimation, (1983), Wiley New York · Zbl 0522.62020 [4] Potthoff, R.F.; Roy, S.N., A generalized multivariate analysis of variance model useful especially for growth curve problem, Biometrika, 51, 313-326, (1964) · Zbl 0138.14306 [5] Rao, C.R., Estimation of parameters in a linear model, Ann. statist., 4, 1023-1037, (1976) · Zbl 0336.62055 [6] Srivastava, V.K.; Giles, D.E.A., Seemingly unrelated regression equations models, (1987), Marcel Dekker New York · Zbl 0568.62066 [7] Sutradhar, B.C.; Ali, M.M., Estimation of the parameters of regression model with a multivariate t error variable, Commun. statist. theory methods, 15, 429-450, (1986) · Zbl 0608.62061 [8] Verbyla, A.P.; Venables, W.N., An extension of the growth curve model, Biometrika, 75, 129-138, (1988) · Zbl 0636.62073 [9] D. Von Roson, Maximum likelihood estimates in multivariate linear normal models with special references to the growth curve model, Research Report 135, Department of Mathematics and Statistics, University of Stockholm, Stockholm, Sweden, 1984. [10] Von Roson, D., Maximum likelihood estimators in multivariate linear normal model, J. multivariate anal., 31, 187-200, (1989) · Zbl 0686.62037 [11] Von Roson, D., Uniqueness conditions for maximum likelihood estimators in a multivariate linear model, J. statist. plann. inference, 36, 269-276, (1993) [12] Wu, Q.G., Necessary and sufficient conditions for the existence of the UMRU estimators in growth curve models, Chinese sci. bull., 39, 89-92, (1994) · Zbl 0795.62054 [13] Wu, Q.G., Existence of the uniformly minimum risk unbiased estimator in seemingly unrelated regression system, Acta math. sinica (N.S.), 11, 23-28, (1995) · Zbl 0832.62051 [14] Wu, Q.G., Existence of the UMRU estimator in SURE model with special covariance structures, Chinese sci. bull., 42, 1149-1151, (1997) · Zbl 0942.62508 [15] Wu, Q.G., Existence conditions of the uniformly minimum risk unbiased estimators in extended growth curve models, J. statist. plann. inference, 69, 101-114, (1998) · Zbl 0924.62057 [16] Zellner, A., An efficient method of estimating seemingly unrelated regression and tests for aggregation bias, J. amer. statist. assoc., 57, 348-368, (1962) · Zbl 0113.34902 [17] Zellner, A., Bayesian and non-Bayesian analysis of the regression model with multivariate student-t error terms, J. amer. statist. assoc., 71, 400-405, (1976) · Zbl 0348.62026
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
2022-09-30 18:49:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6693075895309448, "perplexity": 3788.4255051121586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00636.warc.gz"}
|
https://stats.stackexchange.com/questions/487907/what-are-the-differences-between-the-likelihood-functions-in-maximum-likelihood
|
# What are the differences between the likelihood functions in Maximum Likelihood Estimation and in the Bayes' Theorem? [duplicate]
I am wondering the differences between the likelihood function in Maximum Likelihood Estimation and the likelihood function in Bayes' Theorem. To me, the likelihood function in Bayes' Theorem depends on the prior probability distribution because values of parameters come from the prior distribution and used in the likelihood function. On the other hand, we do not have any prior probability distribution but we can still do Maximum Likelihood Estimation. It is quite confusing.
They are the same. The likelihood is $$p(X|\theta)$$ where $$X$$ is the data and $$\theta$$ is the parameter to be estimated, this term gives the probability of $$X$$ given $$\theta$$, so $$p(\theta)$$ (the prior) does not get involved.
However the posterior probability,$$\,\,p(\theta|X)$$, does depend on $$p(\theta)$$ because $$p(\theta|X) = \frac{p(X|\theta) p(\theta)}{p(X)} \propto p(X|\theta) p(\theta)$$.
Suppose there is one data point $$y$$ ans we assume $$y\sim N(\mu, \sigma^2)$$. Then the likelihood is: $$\dfrac{1}{\sigma {\sqrt {2\pi }}} e^{-{\frac {1}{2}}\left({\frac {y-\mu }{\sigma }}\right)^{2}}.$$ If any, there is one theoretical difference. That is the way we treat the parameters $$\mu$$ and $$\sigma$$. In the Bayesian framework, the parameters are random (that is why we need priors), whereas in the frequentist inference, they are fixed.
|
2023-03-26 18:36:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9574901461601257, "perplexity": 148.22875192039896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946445.46/warc/CC-MAIN-20230326173112-20230326203112-00060.warc.gz"}
|
https://mersenneforum.org/showthread.php?s=dcffe97ee803fb93f09b06922952c7e3&t=5758&page=139
|
Register FAQ Search Today's Posts Mark Forums Read
2018-05-16, 03:59 #1519
retina
Undefined
"The unspeakable one"
Jun 2006
My evil lair
5×11×107 Posts
Quote:
Originally Posted by Madpoo Y'all are just impatient. LOL
Hehe, maybe you are right.
I want it all, and I want it NOW!
Last fiddled with by retina on 2018-05-16 at 03:59
2018-05-16, 14:38 #1520
GP2
Sep 2003
2×1,291 Posts
Quote:
Originally Posted by Madpoo Since adding PRP to the site, I hope folks appreciate that adding a new feature like that and then going back through and fixing all kinds of reports behind the scenes is one of those things with lots of moving parts. Personally I didn't really mind thinking that it would be done here and there as time permitted. Y'all are just impatient. LOL
This might not be PRP related though.
It is undercounting the number of factored exponents in the 77M range by one (35812 instead of 35813). So that should be old SQL code, which is giving the wrong answer for some reason.
2018-05-16, 15:43 #1521
James Heinrich
"James Heinrich"
May 2004
ex-Northern Ontario
31·103 Posts
Quote:
Originally Posted by S485122 The Detailed report / Exponent status is not outputing correctly in text-only format : when pasted the lines are missing an end of line character. One can use the text-only report if one goes through "view source". I abandoned the idea of doing this 56 times :-(
You may be better served with the "simple" exponent report instead, which outputs pure text:
https://www.mersenne.org/report_expo...xp_hi=77000100
2018-05-16, 17:56 #1522
S485122
Sep 2006
Brussels, Belgium
32·181 Posts
Quote:
Originally Posted by James Heinrich You may be better served with the "simple" exponent report instead, which outputs pure text: https://www.mersenne.org/report_expo...xp_hi=77000100
Indeed that query works ! Thanks ! Since it is limited to 100 exponents, it will not help finding where the problems are.
The missing factored exponent is a fact and might be a clue to the cause of the problems..
Jacob
2018-05-16, 20:12 #1523
James Heinrich
"James Heinrich"
May 2004
ex-Northern Ontario
1100011110012 Posts
Quote:
Originally Posted by S485122 Indeed that query works ! Thanks ! Since it is limited to 100 exponents, it will not help finding where the problems are.
I have cleaned up the report code, and increased the report limit from 100 to 1000. I even added a modicum of documentation:
https://www.mersenne.org/report_exponent_simple/
2018-05-18, 01:38 #1524 GP2 Sep 2003 2×1,291 Posts Code: ----------=-----=-- | -----=-----=-----=-----=----- | -----=-----=-----=----- | -----=-----=-----=----- | Exponent Range | Composite | Status Unproven | Assigned | Available | Start Count P | F LL-D | LL LLERR NO-LL | TF P-1 LL LL-D | TF P-1 LL LL-D | ----------=-----=-- | -----=-----=-----=-----=----- | -----=-----=-----=----- | -----=-----=-----=----- | 77000000 55009 1 | 35813 613 18582 | 40 | 18542 | Problem solved, apparently, or at least it went away. Do we have any insight as to which exponent was stuck on NO-LL ?
2018-05-21, 03:43 #1525
Serpentine Vermin Jar
Jul 2014
1100110011012 Posts
Quote:
Originally Posted by GP2 Code: ----------=-----=-- | -----=-----=-----=-----=----- | -----=-----=-----=----- | -----=-----=-----=----- | Exponent Range | Composite | Status Unproven | Assigned | Available | Start Count P | F LL-D | LL LLERR NO-LL | TF P-1 LL LL-D | TF P-1 LL LL-D | ----------=-----=-- | -----=-----=-----=-----=----- | -----=-----=-----=----- | -----=-----=-----=----- | 77000000 55009 1 | 35813 613 18582 | 40 | 18542 | Problem solved, apparently, or at least it went away. Do we have any insight as to which exponent was stuck on NO-LL ?
I finally dug down into it. After chasing a few rabbit trails that went nowhere, the answer was surprisingly simple. The prime in that range was never removed from a table that holds the "factoring effort" of each exponent. Why that matters? Because that table is actually used to derive the # of factored exponents in a range. Apparently it's slightly quicker to do it that way than to actually look at the factor table.
Anyway, it threw it all off by one. Removed it and... fixed.
As a bonus, I did update that page to show the header lines every 50M instead of every 100M. On my 1920x1080 screen that means you're basically going to see a header row somewhere on the page when scrolling.
2018-05-30, 17:22 #1526 ixfd64 Bemusing Prompter "Danny" Dec 2002 California 11·211 Posts The server seems to be down. It's back up. Last fiddled with by ixfd64 on 2018-05-30 at 17:34
2018-06-04, 15:35 #1527
ramgeis
Apr 2013
11101002 Posts
Quote:
Originally Posted by ramgeis No, I manually submitted the results done by Prime95 from a machine with no connection to the outside world. Of course I can delete my assignment, but that doesn't make the problem go away. I just wanted to raise the awareness of an issue on the server side. I'll definitely have a look at it again when I submit the next results...
Another observation: they also don't show up in the recent cleared exponents report https://www.mersenne.org/report_recent_cleared/
I do get credit for the result though - although it does not show up in the summary table at the end of the page after the manual submission.
2018-06-05, 01:24 #1528
Prime95
P90 years forever!
Aug 2002
Yeehaw, FL
23·3·7·43 Posts
Quote:
Originally Posted by ramgeis Another observation: they also don't show up in the recent cleared exponents report https://www.mersenne.org/report_recent_cleared/ .
PRP testing now shows up in that report. It displays cofactor PRP testing too, maybe it shouldn't.
2018-06-05, 12:07 #1529 lycorn Sep 2002 Oeiras, Portugal 24·89 Posts That´s right. I think all those inconclusive PRP tests should appear in the "Recent Results" report instead.
Similar Threads Thread Thread Starter Forum Replies Last Post ewmayer Lounge 39 2015-05-19 01:08 ewmayer Science & Technology 41 2014-04-16 11:54 cheesehead Soap Box 56 2013-06-29 01:42 cheesehead Soap Box 61 2013-06-11 04:30 Dubslow Programming 19 2012-05-31 17:49
All times are UTC. The time now is 13:35.
Wed Dec 2 13:35:43 UTC 2020 up 83 days, 10:46, 2 users, load averages: 4.12, 4.45, 4.45
|
2020-12-02 13:35:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4455745220184326, "perplexity": 6591.981972665609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141708017.73/warc/CC-MAIN-20201202113815-20201202143815-00215.warc.gz"}
|
http://accesspediatrics.mhmedical.com/content.aspx?bookid=455§ionid=40310477
|
Chapter 194
Food allergies, defined as adverse immune responses to food proteins, are an increasingly common concern in the pediatric age group. Food allergy is not one disease, but a spectrum of clinicopathological disorders.1 As such, its manifestations differ significantly, depending on the immune mechanism involved and the affected target organ, ranging from the prototypical acute urticaria/angioedema to chronic conditions such as eczema or failure to thrive. Currently, there are no tests that can reliably predict the severity of a food allergic reaction, which may vary with similar exposures and even in the same individual. As a whole, fatalities are rare, but they do occur. Teenagers are particularly vulnerable because they undertake unnecessary risks and may ignore the warning signs of an impending severe reaction.
Because a diagnosis of food allergy entails a considerable nutritional and social burden for the affected children and their families, all efforts should be geared to ensure that a true food hypersensitivity is the cause of the patient’s complaints. This is not an easy task, given the protean clinical manifestations of these disorders and the recognized pitfalls of the routine laboratory tests. In some instances, a double-blind placebo controlled food challenge (DBPCFC) may be necessary. This costly and at times cumbersome procedure is at present the only gold standard for the diagnosis of food allergy.
Food allergies are often the first step in what has been termed the atopic march, involving the sequential development of different allergic diseases in the same child.2 While many children will outgrow their food allergies before their sixth birthday, for others this will remain a lifelong concern. For the vast majority of food hypersensitivities, there are presently no curative treatments. Current management of these conditions relies on careful avoidance of the offending food(s) and initiating therapy to curtail symptoms in case of accidental exposures. Yet, reinstructing the immune system to tolerate food allergens is an attainable goal, as demonstrated by the success of allergen-specific immunotherapy in the treatment of respiratory allergies. Growing evidence from a number of clinical trials suggests that the same can be achieved in food-allergic patients, which could radically change the way in which these patients are managed in the near future.3
Food allergies are far more prevalent in developed countries than in the developing world. In the United States, the overall prevalence of food allergy has been estimated in 3.5% of the general population, with roughly twice as many children than adults afflicted by these disorders.4 Like other allergic diseases, food allergies appear to be on the rise. The prevalence of peanut allergy, for instance, has more than doubled in the last 10 years, both in the United States and in Great Britain.5 The interaction of genetic, dietary, and environmental factors appears central to the recent increase in food allergies.
Food allergies have a strong genetic component. Studies in twins show that 7% of dizygotic and 64% of monozygotic twins share a peanut allergy, and siblings from a peanut-allergic ...
Sign in to your MyAccess profile while you are actively authenticated on this site via your institution (you will be able to verify this by looking at the top right corner of the screen - if you see your institution's name, you are authenticated). Once logged in to your MyAccess profile, you will be able to access your institution's subscription for 90 days from any location. You must be logged in while authenticated at least once every 90 days to maintain this remote access.
Ok
## Subscription Options
### AccessPediatrics Full Site: One-Year Subscription
Connect to the full suite of AccessPediatrics content and resources including 20+ textbooks such as Rudolph’s Pediatrics and The Pediatric Practice series, high-quality procedural videos, images, and animations, interactive board review, an integrated pediatric drug database, and more.
|
2017-02-27 04:26:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19154566526412964, "perplexity": 3300.8969946984553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00161-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://dsp.stackexchange.com/questions/24032/signal-processing-why-compute-imaginary-part
|
# Signal processing - why compute imaginary part?
I'm new to signal processing.
Could somebody explain to me what are the benefits and reasons for decomposing a signal into real and especially imaginary parts?
I'm looking at the hilbert transform in matlab. Matlal hilbert transform
The imaginary part is a version of the original real sequence with a 90° phase shift. What exactly is a phase shift of 90 degrees.
For example if I have a data sequence of [100, 100, 200, 100, 200], how are the imaginary parts calculated? How do you phase shift 90 degrees?
Thanks
• You are wrong. The real and imaginary part look similar, but they are not just shifted data, they are different. Jun 10 '15 at 12:56
• @AnderBiguri - thats what the matlab page says??
Jun 10 '15 at 13:05
• No it doesn't, you just can see it with the bare eye. Look e.g. to the second figure. Real part has 6 peaks while imaginary 4. " The analytic signal x = xr + i*xi has a real part, xr, which is the original data, and an imaginary part, xi, which contains the Hilbert transform" Jun 10 '15 at 13:09
• @AnderBiguri matlab reference correctly indicates that the imaginary part contains a phase shift, not time shifted data. The same phase shift for different frequency components results in different time shifts which alters the shape in figure 2. Jun 10 '15 at 13:47
• @Pi, flag for moderator attention then. Jun 10 '15 at 19:14
One use of Hilbert Transform is to recover the amplitude envelope of a signal.
Here is a practical example: Extracting Binary Magnetic-Strip Card Data from raw WAV
Often the shortest distance between two points is through complex analysis. Even if the start and destination don't appear to involve complex numbers.
Regarding "What is the phase shift of 90°?": If it is single cisoid, i.e. a single point making a circular orbit around the origin on the complex plane, then applying a phase shift of 90° obviously sets it backwards or forwards by a quarter-cycle. Now if you consider an arbitrary waveform to be a sum of cisoids with different amplitude, phase and frequency, (Fourier Theory shows that you can generally achieve this), you can just set each one back by a quarter cycle, and recombine. That's one way to accomplish the Hilbert Transform.
http://bl.ocks.org/jinroh/7524988 <-- this gives a visual example of combining cisoids into the target waveform.
This is really more than one question and for any answer to make sense, you have to come to terms (or be okay with) with Euler's formula: $$e^{i\theta}=\cos(\theta) +i\sin(\theta)$$
There's a good math.stackexchange with plenty of fodder to gain intuition. Then reasoning about imaginary parts, harmonic conjugates, Hilbert & its relationship to Fourier becomes more straightforward. Also, it makes reading the docs for any (MATLAB) implementation less "WTF?". Good luck.
A window of samples can be decomposed into an even vector (symmetric about the center) and an odd vector (anti-symmetric about the center). The even vector is the "real" or cosine component in the frequency domain, the odd vector is the "imaginary" or sine component in the frequency domain. These two vectors are orthogonal under certain operations.
If you only use the real component, then you can only analyze symmetric windows of data (or cosine compositions).
Combining the two components, cosine and sine, into a single complex data type representation allows writing many of the equations regarding signal processing using less pencil or chalk (half as many symbols to scribble).
|
2021-10-25 00:51:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5994859337806702, "perplexity": 856.102242715942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587608.86/warc/CC-MAIN-20211024235512-20211025025512-00617.warc.gz"}
|
https://www.physicsforums.com/threads/limit-of-sequence-lim-n-1-cos-2-n.673418/
|
# Limit of sequence lim n (1-cos(2/n))
1. Feb 21, 2013
### izen
1. The problem statement, all variables and given/known data
2. Relevant equations
3. The attempt at a solution
above picture !!
2. Feb 21, 2013
### jbunniii
You can write
\begin{align} 1 - \cos\left(\frac{2}{n}\right) = 2\left[\frac{1}{2} - \frac{1}{2}\cos\left(\frac{2}{n}\right)\right] &= 2\sin^2\left(\frac{1}{n}\right) \\ \end{align}
3. Feb 21, 2013
### LCKurtz
$$\frac 1 {\frac 1 n}$$doesn't go to 0 as $n\to\infty$.
Another method is to use L'Hospital's rule on$$\frac{1 - \cos{\frac 1 n}}{\frac 1 n}$$
4. Feb 21, 2013
### izen
where is 'n' gone?
5. Feb 21, 2013
### izen
thanks LCKurtz
6. Feb 21, 2013
### jbunniii
It didn't go anywhere. I was just suggesting how to rewrite the hard part. Now if we substitute back, we get
$$n\left[1 - \cos\left(\frac{2}{n}\right)\right] = 2n \sin^2\left(\frac{1}{n}\right)$$
Can you see what to do now?
7. Feb 21, 2013
### izen
Thanks jbunniii
|
2018-03-17 04:54:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.979388952255249, "perplexity": 9472.206717706258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257644271.19/warc/CC-MAIN-20180317035630-20180317055630-00725.warc.gz"}
|
https://www.unison-lang.org/learn/fundamentals/values-and-functions/delayed-computations/
|
# Delayed computations
Values in Unison are not, by default, lazily evaluated. But it's common to want to express that a value or calculation should be delayed until it's absolutely needed.
For example the termlongTextin the following snippet is evaluated strictly:
longText : Text
longText = "🐵 Imagine infinite monkeys on infinite typewriters 🙊…"
coinflip : Boolean -> Text
coinflip bool = if bool then longText else "Hi"
But you might not actually need to evaluatelongText--thereare circumstances where the calculation of a value might be very expensive or could introduce surprising behavior if runeagerly.
One way you might solve for this is to create a "thunk": a function with no arguments which returns the desired value or computation when it's called.
longText : () -> Text
longText _ = "🐵 Imagine infinite monkeys on infinite typewriters 🙊…"
Because this is a common pattern, Unison provides the single quote,',as syntactic sugar for representing a function with the form() -> a.
We can rewrite the type() -> Textas just'Text.
longText : 'Text
longText : 'Text
longText = '"🐵 Imagine infinite monkeys on infinite typewriters 🙊…"
Just as Unison provides syntax for representing delayed computations in a type signature, there are a few ways to delay an expression in Unison.
🧠
These are all valid ways to implement a function for the type'Text:
• myFunction = '"delayed with single quote"
• myFunction _ = "delayed with underscore argument"
• myFunction = do "delayed with do keyword"
All three are equivalent in meaning.
When we want to run the thunk, we could call the value with()to represent "I'm calling this with zero arguments", but Unison also provides syntactic sugar for "forcing" or calling a thunk with the!symbol.
Calling our functionlongTextlooks like:
coinflip : Boolean -> Text
coinflip : Boolean -> Text
coinflip bool = if bool then !longText else "Hi"
To review: single quote,',an underscore_,or thedokeyword introduce athunkand the bang symbol,!,executes it, but what if we want to callcoinflipwith an argumenttrueand then return the result of that function application in a thunk? Our final desired return type is'Text.To do that, we have to surround the function application forcoinflip truein parentheses before prepending the single quote symbol.
delayedText : 'Text
delayedText = '(coinflip true)
!delayedText⧨"🐵 Imagine infinite monkeys on infinite typewriters 🙊…"
🧠
It might be tempting to write'coinflip truewith no parentheses in an attempt to get a return type of'Textor() -> Text,but the single quote ordosyntax delays thefunctionin that circumstance, not the result of evaluating the function.'coinflip truehas the signature(() -> Boolean) -> Text)and we're trying to pass it an argument oftrue.🙅🏻♀️ That may not be what we're looking for. When we want to delay the result ofexecuting a function,we must surround the entire expression in parentheses.
|
2023-01-28 20:46:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39366084337234497, "perplexity": 7387.1825194181165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499654.54/warc/CC-MAIN-20230128184907-20230128214907-00258.warc.gz"}
|
http://dml.cz/dmlcz/140610
|
# Article
Full entry | PDF (0.1 MB)
Keywords:
finite fields; distribution of irreducible polynomials; residue
Summary:
In this paper we generalize the method used to prove the Prime Number Theorem to deal with finite fields, and prove the following theorem: $\pi (x)= \frac q{q - 1}\frac x{{\log _q x}}+ \frac q{(q - 1)^2}\frac x{{\log _q^2 x}}+O\Bigl (\frac {x}{{\log _q^3 x}}\Bigr ),\quad x=q^n\rightarrow \infty$ where $\pi (x)$ denotes the number of monic irreducible polynomials in $F_q [t]$ with norm $\le x$.
References:
[1] Kruse, M., Stichtenoth, H.: Ein Analogon zum Primzahlsatz fur algebraische Functionenkoper. Manuscripta Math. 69 (1990), 219-221 German. DOI 10.1007/BF02567920 | MR 1078353
[2] Davenport, H.: Multiplicative Number Theory. Springer-Verlag New York (1980). MR 0606931 | Zbl 0453.10002
Partner of
|
2016-10-28 02:44:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9804035425186157, "perplexity": 1755.8878714199257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721555.36/warc/CC-MAIN-20161020183841-00552-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://ax.dev/api/ax.html
|
# ax¶
class ax.Arm(parameters, name=None)[source]
Base class for defining arms.
Randomization in experiments assigns units to a given arm. Thus, the arm encapsulates the parametrization needed by the unit.
clone(clear_name=False)[source]
Create a copy of this arm.
Parameters
clear_name (bool) – whether this cloned copy should set its name to None instead of the name of the arm being cloned. Defaults to False.
Return type
Arm
property has_name
Return true if arm’s name is not None.
Return type
bool
static md5hash(parameters)[source]
Return unique identifier for arm’s parameters.
Parameters
parameters (Dict[str, Union[str, float, int, None]]) – Parameterization; mapping of param name to value.
Return type
str
Returns
Hash of arm’s parameters.
property name
Get arm name. Throws if name is None.
Return type
str
property name_or_short_signature
Returns arm name if exists; else last 4 characters of the hash.
Used for presentation of candidates (e.g. plotting and tables), where the candidates do not yet have names (since names are automatically set upon addition to a trial).
Return type
str
property parameters
Get mapping from parameter names to values.
Return type
Dict[str, Union[str, float, int, None]]
property signature
Get unique representation of a arm.
Return type
str
class ax.BatchTrial(experiment, generator_run=None, trial_type=None)[source]
property abandoned_arms
List of arms that have been abandoned within this trial
Return type
property arm_weights
The set of arms and associated weights for the trial.
These are constructed by merging the arms and weights from each generator run that is attached to the trial.
Return type
property arms
All arms contained in the trial.
Return type
property arms_by_name
Map from arm name to object for all arms in trial.
Return type
clone()[source]
Clone the trial.
Return type
BatchTrial
Returns
A new instance of the trial.
property experiment
The experiment this batch belongs to.
Return type
Experiment
property generator_run_structs
List of generator run structs attached to this trial.
Struct holds generator_run object and the weight with which it was added.
Return type
property index
The index of this batch within the experiment’s batch list.
Return type
int
property is_factorial
Return true if the trial’s arms are a factorial design with no linked factors.
Return type
bool
mark_arm_abandoned(arm_name, reason=None)[source]
Mark a arm abandoned.
Usually done after deployment when one arm causes issues but user wants to continue running other arms in the batch.
Parameters
Return type
BatchTrial
Returns
The batch instance.
normalized_arm_weights(total=1, trunc_digits=None)[source]
Returns arms with a new set of weights normalized to the given total.
This method is useful for many runners where we need to normalize weights to a certain total without mutating the weights attached to a trial.
Parameters
• total (float) – The total weight to which to normalize. Default is 1, in which case arm weights can be interpreted as probabilities.
• trunc_digits (Optional[int]) – The number of digits to keep. If the resulting total weight is not equal to total, re-allocate weight in such a way to maintain relative weights as best as possible.
Return type
Returns
Mapping from arms to the new set of weights.
run()[source]
Deploys the trial according to the behavior on the runner.
The runner returns a run_metadata dict containining metadata of the deployment process. It also returns a deployed_name of the trial within the system to which it was deployed. Both these fields are set on the trial.
Return type
BatchTrial
Returns
The trial instance.
property status_quo
The control arm for this batch.
Return type
property weights
Weights corresponding to arms contained in the trial.
Return type
class ax.ChoiceParameter(name, parameter_type, values, is_ordered=False, is_task=False, is_fidelity=False, target_value=None)[source]
Parameter object that specifies a discrete set of values.
add_values(values)[source]
Add input list to the set of allowed values for parameter.
Cast all input values to the parameter type.
Parameters
values (List[Union[str, float, int, None]]) – Values being added to the allowed list.
Return type
ChoiceParameter
set_values(values)[source]
Set the list of allowed values for parameter.
Cast all input values to the parameter type.
Parameters
values (List[Union[str, float, int, None]]) – New list of allowed values.
Return type
ChoiceParameter
validate(value)[source]
Checks that the input is in the list of allowed values.
Parameters
value (Union[str, float, int, None]) – Value being checked.
Return type
bool
Returns
True if valid, False otherwise.
class ax.ComparisonOp[source]
Class for enumerating comparison operations.
class ax.Data(df=None, description=None)[source]
Class storing data for an experiment.
The dataframe is retrieved via the df property. The data can be stored to gluster for future use by attaching it to an experiment using experiment.add_data() (this requires a description to be set.)
df
DataFrame with underlying data, and required columns.
description
static column_data_types()[source]
Type specification for all supported columns.
Return type
Dict[str, Type[+CT_co]]
property df_hash
Compute hash of pandas DataFrame.
This first serializes the DataFrame and computes the md5 hash on the resulting string. Note that this may cause performance issue for very large DataFrames.
Parameters
df – The DataFrame for which to compute the hash.
Returns
str: The hash of the DataFrame.
Return type
str
static from_evaluations(evaluations, trial_index, sample_sizes=None)[source]
Convert dict of evaluations to Ax data object.
Parameters
Return type
Data
Returns
Ax Data object.
static from_fidelity_evaluations(evaluations, trial_index, sample_sizes=None)[source]
Convert dict of fidelity evaluations to Ax data object.
Parameters
Return type
Data
Returns
Ax Data object.
static required_columns()[source]
Names of required columns.
Return type
class ax.Experiment(search_space, name=None, optimization_config=None, tracking_metrics=None, runner=None, status_quo=None, description=None, is_test=False, experiment_type=None)[source]
Base class for defining an experiment.
add_tracking_metric(metric)[source]
Add a new metric to the experiment.
Parameters
metric (Metric) – Metric to be added.
Return type
Experiment
property arms_by_name
The arms belonging to this experiment, by their name.
Return type
property arms_by_signature
The arms belonging to this experiment, by their signature.
Return type
attach_data(data)[source]
Attach data to experiment.
Parameters
data (Data) – Data object to store.
Return type
int
Returns
Timestamp of storage in millis.
property data_by_trial
Data stored on the experiment, indexed by trial index and storage time.
First key is trial index and second key is storage time in milliseconds. For a given trial, data is ordered by storage time, so first added data will appear first in the list.
Return type
property default_trial_type
Default trial type assigned to trials in this experiment.
In the base experiment class this is always None. For experiments with multiple trial types, use the MultiTypeExperiment class.
Return type
property experiment_type
The type of the experiment.
Return type
fetch_data(metrics=None, **kwargs)[source]
Fetches data for all metrics and trials on this experiment.
Parameters
Return type
Data
Returns
Data for the experiment.
property has_name
Return true if experiment’s name is not None.
Return type
bool
property is_simple_experiment
Whether this experiment is a regular Experiment or the subclassing SimpleExperiment.
lookup_data_for_trial(trial_index)[source]
Lookup stored data for a specific trial.
Returns latest data object present for this trial. Returns empty data if no data present.
Parameters
trial_index (int) – The index of the trial to lookup data for.
Return type
Data
Returns
Requested data object.
lookup_data_for_ts(timestamp)[source]
Collect data for all trials stored at this timestamp.
Useful when many trials’ data was fetched and stored simultaneously and user wants to retrieve same collection of data later.
Can also be used to lookup specific data for a single trial when storage time is known.
Parameters
timestamp (int) – Timestamp in millis at which data was stored.
Return type
Data
Returns
Data object with all data stored at the timestamp.
property metrics
The metrics attached to the experiment.
Return type
property name
Get experiment name. Throws if name is None.
Return type
str
new_batch_trial(generator_run=None, trial_type=None)[source]
Create a new batch trial associated with this experiment.
Return type
BatchTrial
new_trial(generator_run=None, trial_type=None)[source]
Create a new trial associated with this experiment.
Return type
Trial
property num_abandoned_arms
How many arms attached to this experiment are abandoned.
Return type
int
property num_trials
How many trials are associated with this experiment.
Return type
int
property optimization_config
The experiment’s optimization config.
Return type
property parameters
The parameters in the experiment’s search space.
Return type
remove_tracking_metric(metric_name)[source]
Remove a metric that already exists on the experiment.
Parameters
metric_name (str) – Unique name of metric to remove.
Return type
Experiment
runner_for_trial(trial)[source]
The default runner to use for a given trial.
In the base experiment class, this is always the default experiment runner. For experiments with multiple trial types, use the MultiTypeExperiment class.
Return type
property search_space
The search space for this experiment.
When setting a new search space, all parameter names and types must be preserved. However, if no trials have been created, all modifications are allowed.
Return type
SearchSpace
property status_quo
The existing arm that new arms will be compared against.
Return type
property sum_trial_sizes
Sum of numbers of arms attached to each trial in this experiment.
Return type
int
supports_trial_type(trial_type)[source]
Whether this experiment allows trials of the given type.
The base experiment class only supports None. For experiments with multiple trial types, use the MultiTypeExperiment class.
Return type
bool
property time_created
Creation time of the experiment.
Return type
datetime
property trials
The trials associated with the experiment.
Return type
property trials_expecting_data
the list of all trials for which data has arrived or is expected to arrive.
Type
List[BaseTrial]
Return type
update_tracking_metric(metric)[source]
Redefine a metric that already exists on the experiment.
Parameters
metric (Metric) – New metric definition.
Return type
Experiment
class ax.FixedParameter(name, parameter_type, value, is_fidelity=False, target_value=None)[source]
Parameter object that specifies a single fixed value.
validate(value)[source]
Checks that the input is equal to the fixed value.
Parameters
value (Union[str, float, int, None]) – Value being checked.
Return type
bool
Returns
True if valid, False otherwise.
class ax.GeneratorRun(arms, weights=None, optimization_config=None, search_space=None, model_predictions=None, best_arm_predictions=None, type=None, fit_time=None, gen_time=None, model_key=None, model_kwargs=None, bridge_kwargs=None)[source]
An object that represents a single run of a generator.
This object is created each time the gen method of a generator is called. It stores the arms and (optionally) weights that were generated by the run. When we add a generator run to a trial, its arms and weights will be merged with those from previous generator runs that were already attached to the trial.
property arm_weights
Mapping from arms to weights (order matches order in arms property).
Return type
property arms
Returns arms generated by this run.
Return type
clone()[source]
Return a deep copy of a GeneratorRun.
Return type
GeneratorRun
property generator_run_type
The type of the generator run.
Return type
property index
The index of this generator run within a trial’s list of generator run structs. This field is set when the generator run is added to a trial.
Return type
property optimization_config
The optimization config used during generation of this run.
Return type
property param_df
Constructs a Pandas dataframe with the parameter values for each arm.
Useful for inspecting the contents of a generator run.
Returns
a dataframe with the generator run’s arms.
Return type
pd.DataFrame
property search_space
The search used during generation of this run.
Return type
property time_created
Creation time of the batch.
Return type
datetime
property weights
Returns weights associated with arms generated by this run.
Return type
class ax.Metric(name, lower_is_better=None)[source]
Base class for representing metrics.
lower_is_better
Flag for metrics which should be minimized.
clone()[source]
Create a copy of this Metric.
Return type
Metric
fetch_experiment_data(experiment, **kwargs)[source]
Fetch this metric’s data for an experiment.
Default behavior is to fetch data from all trials expecting data and concatenate the results.
Return type
Data
classmethod fetch_experiment_data_multi(experiment, metrics, **kwargs)[source]
Fetch multiple metrics data for an experiment.
Default behavior calls fetch_trial_data_multi for each trial. Subclasses should override to batch data computation across trials + metrics.
Return type
Data
fetch_trial_data(trial, **kwargs)[source]
Fetch data for one trial.
Return type
Data
classmethod fetch_trial_data_multi(trial, metrics, **kwargs)[source]
Fetch multiple metrics data for one trial.
Default behavior calls fetch_trial_data for each metric. Subclasses should override this to trial data computation for multiple metrics.
Return type
Data
property name
Get name of metric.
Return type
str
class ax.Models[source]
Registry of available models.
Uses MODEL_KEY_TO_MODEL_SETUP to retrieve settings for model and model bridge, by the key stored in the enum value.
To instantiate a model in this enum, simply call an enum member like so: Models.SOBOL(search_space=search_space) or Models.GPEI(experiment=experiment, data=data]). Keyword arguments specified to the call will be passed into the model or the model bridge constructors according to their keyword.
For instance, Models.SOBOL(search_space=search_space, scramble=False) will instantiate a RandomModelBridge(search_space=search_space) with a SobolGenerator(scramble=False) underlying model.
view_defaults()[source]
Obtains the default keyword arguments for the model and the modelbridge specified through the Models enum, for ease of use in notebook environment, since models and bridges cannot be inspected directly through the enum.
Return type
Returns
A tuple of default keyword arguments for the model and the model bridge.
view_kwargs()[source]
Obtains annotated keyword arguments that the model and the modelbridge (corresponding to a given member of the Models enum) constructors expect.
Return type
Returns
A tuple of annotated keyword arguments for the model and the model bridge.
class ax.Objective(metric, minimize=False)[source]
Base class for representing an objective.
minimize
If True, minimize metric.
clone()[source]
Create a copy of the objective.
Return type
Objective
property metric
Get the objective metric.
Return type
Metric
property metrics
Get a list of objective metrics.
Return type
class ax.OptimizationConfig(objective, outcome_constraints=None)[source]
An optimization configuration, which comprises an objective and outcome constraints.
There is no minimum or maximum number of outcome constraints, but an individual metric can have at most two constraints–which is how we represent metrics with both upper and lower bounds.
clone()[source]
Make a copy of this optimization config.
Return type
OptimizationConfig
property objective
Get objective.
Return type
Objective
property outcome_constraints
Get outcome constraints.
Return type
class ax.OptimizationLoop(experiment, total_trials=20, arms_per_trial=1, random_seed=None, wait_time=0, run_async=False)[source]
Managed optimization loop, in which Ax oversees deployment of trials and gathering data.
full_run()[source]
Runs full optimization loop as defined in the provided optimization plan.
Return type
OptimizationLoop
get_best_point()[source]
Obtains the best point encountered in the course of this optimization.
Return type
get_current_model()[source]
Obtain the most recently used model in optimization.
Return type
run_trial()[source]
Run a single step of the optimization plan.
Return type
None
static with_evaluation_function(parameters, evaluation_function, experiment_name=None, objective_name=None, minimize=False, parameter_constraints=None, outcome_constraints=None, total_trials=20, arms_per_trial=1, wait_time=0, random_seed=None)[source]
Constructs a synchronous OptimizationLoop using an evaluation function.
Return type
OptimizationLoop
classmethod with_runners_and_metrics(parameters, path_to_runner, paths_to_metrics, experiment_name=None, objective_name=None, minimize=False, parameter_constraints=None, outcome_constraints=None, total_trials=20, arms_per_trial=1, wait_time=0, random_seed=None)[source]
Constructs an asynchronous OptimizationLoop using Ax runners and metrics.
Return type
OptimizationLoop
class ax.OrderConstraint(lower_parameter, upper_parameter)[source]
Constraint object for specifying one parameter to be smaller than another.
clone()[source]
Clone.
Return type
OrderConstraint
clone_with_transformed_parameters(transformed_parameters)[source]
Clone, but replace parameters with transformed versions.
Return type
OrderConstraint
property constraint_dict
Weights on parameters for linear constraint representation.
Return type
property lower_parameter
Parameter with lower value.
Return type
Parameter
property parameters
Parameters.
Return type
property upper_parameter
Parameter with higher value.
Return type
Parameter
class ax.OutcomeConstraint(metric, op, bound, relative=True)[source]
Base class for representing outcome constraints.
Outcome constraints may of the form metric >= bound or metric <= bound, where the bound can be expressed as an absolute measurement or relative to the status quo (if applicable).
metric
Metric to constrain.
op
Specifies whether metric should be greater or equal to, or less than or equal to, some bound.
bound
The bound in the constraint.
relative
Whether you want to bound on an absolute or relative scale. If relative, bound is the acceptable percent change.
clone()[source]
Create a copy of this OutcomeConstraint.
Return type
OutcomeConstraint
class ax.Parameter[source]
is_valid_type(value)[source]
Whether a given value’s type is allowed by this parameter.
Return type
bool
property python_type
The python type for the corresponding ParameterType enum.
Used primarily for casting values of unknown type to conform to that of the parameter.
Return type
class ax.ParameterConstraint(constraint_dict, bound)[source]
Base class for linear parameter constraints.
Constraints are expressed using a map from parameter name to weight followed by a bound.
The constraint is satisfied if w * v <= b where:
w is the vector of parameter weights. v is a vector of parameter values. b is the specified bound. * is the dot product operator.
property bound
Get bound of the inequality of the constraint.
Return type
float
check(parameter_dict)[source]
Whether or not the set of parameter values satisfies the constraint.
Does a weighted sum of the parameter values based on the constraint_dict and checks that the sum is less than the bound.
Parameters
parameter_dict (Dict[str, Union[float, int]]) – Map from parameter name to parameter value.
Return type
bool
Returns
Whether the constraint is satisfied.
clone()[source]
Clone.
Return type
ParameterConstraint
clone_with_transformed_parameters(transformed_parameters)[source]
Clone, but replaced parameters with transformed versions.
Return type
ParameterConstraint
property constraint_dict
Get mapping from parameter names to weights.
Return type
class ax.ParameterType[source]
An enumeration.
class ax.RangeParameter(name, parameter_type, lower, upper, log_scale=False, digits=None, is_fidelity=False, target_value=None)[source]
Parameter object that specifies a continuous numerical range of values.
property digits
Number of digits to round values to for float type.
Upper and lower bound are re-cast after this property is changed.
Return type
is_valid_type(value)[source]
Same as default except allows floats whose value is an int for Int parameters.
Return type
bool
property log_scale
Whether to sample in log space when drawing random values of the parameter.
Return type
bool
property lower
Lower bound of the parameter range.
Value is cast to parameter type upon set and also validated to ensure the bound is strictly less than upper bound.
Return type
float
update_range(lower=None, upper=None)[source]
Set the range to the given values.
If lower or upper is not provided, it will be left at its current value.
Parameters
Return type
RangeParameter
property upper
Upper bound of the parameter range.
Value is cast to parameter type upon set and also validated to ensure the bound is strictly greater than lower bound.
Return type
float
validate(value)[source]
Returns True if input is a valid value for the parameter.
Checks that value is of the right type and within the valid range for the parameter. Returns False if value is None.
Parameters
value (Union[str, float, int, None]) – Value being checked.
Return type
bool
Returns
True if valid, False otherwise.
class ax.Runner[source]
Abstract base class for custom runner classes
abstract run(trial)[source]
Deploys a trial based on custom runner subclass implementation.
Parameters
trial (BaseTrial) – The trial to deploy.
Return type
Returns
Dict of run metadata from the deployment process.
property staging_required
Whether the trial goes to staged or running state once deployed.
Return type
bool
stop(trial)[source]
Stop a trial based on custom runner subclass implementation.
Optional to implement
Parameters
trial (BaseTrial) – The trial to deploy.
Return type
None
class ax.SearchSpace(parameters, parameter_constraints=None)[source]
Base object for SearchSpace object.
Contains a set of Parameter objects, each of which have a name, type, and set of valid values. The search space also contains a set of ParameterConstraint objects, which can be used to define restrictions across parameters (e.g. p_a < p_b).
cast_arm(arm)[source]
Cast parameterization of given arm to the types in this SearchSpace.
For each parameter in given arm, cast it to the proper type specified in this search space. Throws if there is a mismatch in parameter names. This is mostly useful for int/float, which user can be sloppy with when hand written.
Parameters
arm (Arm) – Arm to cast.
Return type
Arm
Returns
New casted arm.
check_membership(parameterization, raise_error=False)[source]
Whether the given parameterization belongs in the search space.
Checks that the given parameter values have the same name/type as search space parameters, are contained in the search space domain, and satisfy the parameter constraints.
Parameters
Return type
bool
Returns
Whether the parameterization is contained in the search space.
check_types(parameterization, allow_none=True, raise_error=False)[source]
Checks that the given parameterization’s types match the search space.
Checks that the names of the parameterization match those specified in the search space, and the given values are of the correct type.
Parameters
Return type
bool
Returns
Whether the parameterization has valid types.
out_of_design_arm()[source]
Create a default out-of-design arm.
An out of design arm contains values for some parameters which are outside of the search space. In the modeling conversion, these parameters are all stripped down to an empty dictionary, since the point is already outside of the modeled space.
Return type
Arm
Returns
New arm w/ null parameter values.
class ax.SimpleExperiment(search_space, name=None, objective_name=None, evaluation_function=<function unimplemented_evaluation_function>, minimize=False, outcome_constraints=None, status_quo=None)[source]
Simplified experiment class with defaults.
Parameters
add_tracking_metric(metric)[source]
Add a new metric to the experiment.
Parameters
metric (Metric) – Metric to be added.
Return type
SimpleExperiment
eval()[source]
Evaluate all arms in the experiment with the evaluation function passed as argument to this SimpleExperiment.
Return type
Data
eval_trial(trial)[source]
Evaluate trial arms with the evaluation function of this experiment.
Parameters
trial (BaseTrial) – trial, whose arms to evaluate.
Return type
Data
property evaluation_function
Get the evaluation function.
Return type
fetch_data(metrics=None, **kwargs)[source]
Fetches data for all metrics and trials on this experiment.
Parameters
Return type
Data
Returns
Data for the experiment.
property has_evaluation_function
Whether this SimpleExperiment has a valid evaluation function attached.
Return type
bool
property is_simple_experiment
Whether this experiment is a regular Experiment or the subclassing SimpleExperiment.
update_tracking_metric(metric)[source]
Redefine a metric that already exists on the experiment.
Parameters
metric (Metric) – New metric definition.
Return type
SimpleExperiment
class ax.SumConstraint(parameters, is_upper_bound, bound)[source]
Constraint on the sum of parameters being greater or less than a bound.
clone()[source]
Clone.
Return type
SumConstraint
clone_with_transformed_parameters(transformed_parameters)[source]
Clone, but replace parameters with transformed versions.
Return type
SumConstraint
property constraint_dict
Weights on parameters for linear constraint representation.
Return type
property op
Whether the sum is constrained by a <= or >= inequality.
Return type
ComparisonOp
property parameters
Parameters.
Return type
class ax.Trial(experiment, generator_run=None, trial_type=None)[source]
Trial that only has one attached arm and no arm weights.
Parameters
• experiment (Experiment) – experiment, to which this trial is attached
• generator_run (Optional[GeneratorRun]) – generator_run associated with this trial. Trial has only one generator run (and thus arm) attached to it. This can also be set later through add_arm or add_generator_run, but a trial’s associated genetor run is immutable once set.
property abandoned_arms
Abandoned arms attached to this trial.
Return type
property arm
The arm associated with this batch.
Return type
property arms
All arms attached to this trial.
Returns
list of a single arm
attached to this trial if there is one, else None.
Return type
arms
property arms_by_name
Dictionary of all arms attached to this trial with their names as keys.
Returns
dictionary of a single
arm name to arm if one is attached to this trial, else None.
Return type
arms
property generator_run
Generator run attached to this trial.
Return type
property objective_mean
Objective mean for the arm attached to this trial.
Return type
ax.optimize(parameters, evaluation_function, experiment_name=None, objective_name=None, minimize=False, parameter_constraints=None, outcome_constraints=None, total_trials=20, arms_per_trial=1, random_seed=None)[source]
Construct and run a full optimization loop.
Return type
ax.save(experiment, filepath)
Save experiment to file.
1. Convert Ax experiment to JSON-serializable dictionary.
2. Write to file.
Return type
None
ax.load(filepath)
Experiment
|
2019-10-21 12:51:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23491331934928894, "perplexity": 11397.864884327919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987773711.75/warc/CC-MAIN-20191021120639-20191021144139-00460.warc.gz"}
|
https://calcresource.com/eval-arsinh.html
|
Inverse hyperbolic sine calculator
This tool evaluates the inverse hyperbolic sine of a number: arsinh(x). Enter the argument x below.
x = Result: arsinh(x) =
Definitions
General
The inverse hyperbolic sine function, in modern notation written as arsinh(t) or arcsinh(t) or sinh-1t, gives the value x (hyperbolic angle), so that:
The inverse hyperbolic sine function accepts arguments from the whole real space. Since the hyperbolic sine is defined through the natural exponential function , its inverse can be defined through the natural logarithm function, using the following formula, for any real x:
Properties
The derivative of the inverse hyperbolic sine function is:
The integral of the inverse hyperbolic sine function is given by:
|
2019-06-19 04:53:30
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9870820641517639, "perplexity": 715.0810914731451}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998913.66/warc/CC-MAIN-20190619043625-20190619065625-00416.warc.gz"}
|
https://lammps.sandia.gov/doc/pair_thole.html
|
# pair_style lj/cut/thole/long/omp command
## Syntax
pair_style style args
• style = thole or lj/cut/thole/long or lj/cut/thole/long/omp
• args = list of arguments for a particular style
thole args = damp cutoff
damp = global damping parameter
cutoff = global cutoff (distance units)
lj/cut/thole/long or lj/cut/thole/long/omp args = damp cutoff (cutoff2)
damp = global damping parameter
cutoff = global cutoff for LJ (and Thole if only 1 arg) (distance units)
cutoff2 = global cutoff for Thole (optional) (distance units)
## Examples
pair_style hybrid/overlay ... thole 2.6 12.0
pair_coeff 1 1 thole 1.0
pair_coeff 1 2 thole 1.0 2.6 10.0
pair_coeff * 2 thole 1.0 2.6
pair_style lj/cut/thole/long 2.6 12.0
## Description
The thole pair styles are meant to be used with force fields that include explicit polarization through Drude dipoles. This link describes how to use the thermalized Drude oscillator model in LAMMPS and polarizable models in LAMMPS are discussed on the Howto polarizable doc page.
The thole pair style should be used as a sub-style within in the pair_hybrid/overlay command, in conjunction with a main pair style including Coulomb interactions, i.e. any pair style containing coul/cut or coul/long in its style name.
The lj/cut/thole/long pair style is equivalent to, but more convenient that the frequent combination hybrid/overlay lj/cut/coul/long cutoff thole damp cutoff2. It is not only a shorthand for this pair_style combination, but it also allows for mixing pair coefficients instead of listing them all. The lj/cut/thole/long pair style is also a bit faster because it avoids an overlay and can benefit from OMP acceleration. Moreover, it uses a more precise approximation of the direct Coulomb interaction at short range similar to coul/long/cs, which stabilizes the temperature of Drude particles.
The thole pair styles compute the Coulomb interaction damped at short distances by a function
$$$T_{ij}(r_{ij}) = 1 - \left( 1 + \frac{s_{ij} r_{ij} }{2} \right) \exp \left( - s_{ij} r_{ij} \right)$$$
This function results from an adaptation to point charges (Noskov) of the dipole screening scheme originally proposed by Thole. The scaling coefficient $$s_{ij}$$ is determined by the polarizability of the atoms, $$\alpha_i$$, and by a Thole damping parameter $$a$$. This Thole damping parameter usually takes a value of 2.6, but in certain force fields the value can depend upon the atom types. The mixing rule for Thole damping parameters is the arithmetic average, and for polarizabilities the geometric average between the atom-specific values.
$$$s_{ij} = \frac{ a_{ij} }{ (\alpha_{ij})^{1/3} } = \frac{ (a_i + a_j)/2 }{ [(\alpha_i\alpha_j)^{1/2}]^{1/3} }$$$
The damping function is only applied to the interactions between the point charges representing the induced dipoles on polarizable sites, that is, charges on Drude particles, $$q_{D,i}$$, and opposite charges, $$-q_{D,i}$$, located on the respective core particles (to which each Drude particle is bonded). Therefore, Thole screening is not applied to the full charge of the core particle $$q_i$$, but only to the $$-q_{D,i}$$ part of it.
The interactions between core charges are subject to the weighting factors set by the special_bonds command. The interactions between Drude particles and core charges or non-polarizable atoms are also subject to these weighting factors. The Drude particles inherit the 1-2, 1-3 and 1-4 neighbor relations from their respective cores.
For pair_style thole, the following coefficients must be defined for each pair of atoms types via the pair_coeff command as in the example above.
• alpha (distance units^3)
• damp
• cutoff (distance units)
The last two coefficients are optional. If not specified the global Thole damping parameter or global cutoff specified in the pair_style command are used. In order to specify a cutoff (third argument) a damp parameter (second argument) must also be specified.
For pair style lj/cut/thole/long, the following coefficients must be defined for each pair of atoms types via the pair_coeff command.
• epsilon (energy units)
• sigma (length units)
• alpha (distance units^3)
• damps
• LJ cutoff (distance units)
The last two coefficients are optional and default to the global values from the pair_style command line.
Styles with a gpu, intel, kk, omp, or opt suffix are functionally the same as the corresponding style without the suffix. They have been optimized to run faster, depending on your available hardware, as discussed on the Speed packages doc page. The accelerated styles take the same arguments and should produce the same results, except for round-off and precision issues.
These accelerated styles are part of the GPU, USER-INTEL, KOKKOS, USER-OMP and OPT packages, respectively. They are only enabled if LAMMPS was built with those packages. See the Build package doc page for more info.
You can specify the accelerated styles explicitly in your input script by including their suffix, or you can use the -suffix command-line switch when you invoke LAMMPS, or you can use the suffix command in your input script.
See the Speed packages doc page for more instructions on how to use the accelerated styles effectively.
Mixing:
The thole pair style does not support mixing. Thus, coefficients for all I,J pairs must be specified explicitly.
The lj/cut/thole/long pair style does support mixing. Mixed coefficients are defined using
$$$\alpha_{ij} = \sqrt{\alpha_i\alpha_j}$$$
$$$a_{ij} = \frac 1 2 (a_i + a_j)$$$
## Restrictions
These pair styles are part of the USER-DRUDE package. They are only enabled if LAMMPS was built with that package. See the Build package doc page for more info.
This pair_style should currently not be used with the charmm dihedral style if the latter has non-zero 1-4 weighting factors. This is because the thole pair style does not know which pairs are 1-4 partners of which dihedrals.
The lj/cut/thole/long pair style should be used with a Kspace solver like PPPM or Ewald, which is only enabled if LAMMPS was built with the kspace package.
|
2019-06-27 04:18:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 4, "x-ck12": 0, "texerror": 0, "math_score": 0.3455137610435486, "perplexity": 9237.238399758056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000613.45/warc/CC-MAIN-20190627035307-20190627061307-00486.warc.gz"}
|
http://fhqn.coroilcontrappunto.it/thermal-pad-thickness.html
|
Kvadsheim, PH, LP Folkow and AS Blix. 5 mm to 10 mm thicknesses. One is a pad, the other is a paste. 2 Relationship between thermal resistance (R-value) and thickness of commercial isocyanurate foam sheathing (data obtained from a Canadian manufacturer) FIGURE 5. K + Check Stock & Lead Times 39 in stock for next day delivery (Liege stock): Order before 20:00(mainland UK) & 18. - Packed with 9 thermal pads, three sizes, each with 3 pieces: 20x67x0. 5mm Thermal Pads product list at Newark. The Thermal Conductivity Thanks in Advanced. 00 MATERIALS LABOR. 5mm Thick Silicone Thermal Pad for Laptop IC Chips Average rating: 0 out of 5 stars, based on 0 reviews Current Price $10. Recommended thermal pad thickness? By xnoobftw, February 13, 2016 in Graphics Cards · 20 replies. The part should stand off of. 0 4996 and an Arctic Accelero Xtreme III I want to slap on. Gap Fillers • Soft and compressible/stress relieving • Idealfor applications wherelarge gap tolerancesare present, typically150µm to 5 mm • Non-flowable • Limited adhesion. Regardless if the 1mm are a bit thick, they will be compressed and the difference in heat transmittance 0. Minimum thickness for laminated pads is 1 inch (two 14-inch layers). Rugged, comfortable, versatile and lightweight—hallmarks of a closed-cell design—the NEMO Switchback sleeping pad even has a heat-reflecting film to keep the chill from creeping through. The product is created by filling PGS Graphite Sheet into silicon resin. DK2, (FR4, thickness = 1 mm, pads with plugged thermal vias) DK6, (FR4, thickness = 0. You can find different size of thickness, 0,5mm, 1mm & 1,5mm. For a thin bond-line of 10-50 microns, it is a combination of the interfacial resistance between the joining surfaces that is typically comparable to the contribution due to the thickness and thermal resistance of the adhesive interface material. Thermal Pads Our dispensable thermal pad solutions enable you to quickly and precisely dispense or print a layer of thermally conductive silicone compound in controllable thicknesses on complex substrate shapes, helping to ensure excellent thermal management and lower cost of ownership compared to prefabricated thermal pads. We are a thermal interface material manufacturer, die cut converter and designer of thermal management materials for electronic cooling. Please note you may have to register before you can post: click the register link above to proceed. Thermal Pad, Filled Silicone Polymer, 300 mm x 210 mm x 2 mm, 1. A method for the measurement of the thermal properties of biological materials. Carpet Padding Categories Choosing the right carpet padding is essential to the lifetime of your carpet. Specifying a thicker pad that was properly sized to the heatsink plate was the order of the day, and a couple of other smaller heatsink pads were also replaced, all with Thermal Grizzly Minus Pad. I mean thermal pad for chipset and the ram of the graphics adapter. The latest generation of Microsoft's gaming console, the Xbox One X or Project Scorpio, finally released on November 7th, 2017. Where long-term effective construction, coupled with financial efficiency are expected by owners and specifiers, the inherent qualities of FOAMGLAS® ensure that it is the only choice. Our heatshield products help to improve the performance and efficiency of your vehicle, engine, or whatever type of project your are working on. It should be emphasized. Unfortunately, I did not save the thermal pads that came stock on the card. and thickness of the pad. Posted February 13, 2016 · Best Answer · Go To Post. The temperature on the steelwork on the warm side of the cladding system is 9. * The resultant thickness of the Gap Pad will determine the thermal resistance. Thickness —. Let's say that we have an unknown material where 30 Watts of heat flows through an area of 2 m 2 with a thickness of 1 m and a temperature difference, ΔT, of 50 K. 28 mm 2 for a solid via, resulting in a thermal resistance of 441 ºC/W per via compared to 96. It has a thermal conductivity of 6. thermal pad on the board provides reliable soldering, as well as heat transfer while the vias provide heat dissipating paths. 6:8573 doi: 10. Best Selling Hot Tub Installation Pad and Spa Foundation. Thermal pad 2. 6 x 10-3: 1. 008" Thickness, Natural Tack Both Sides, Sil-Pad TSP 1800ST Series / Also Known as Bergquist Sil-Pad 1500ST Series, IDH 2166373. Two grades, Dow Corning TC-4016 and TC-4026 Dispensable Thermal Pads, incorporate glass beads to offer improved control over bond line thickness. 2 mils, and the largest was 0. You can find a lot of info and reviews about it by searching on. For the SSD, about 1. I had the same issue as TaKeN once with some rather solid blue thermals pads I bought, the EK ones are soft and squishy so even if you get them too thick they'll press down and won't put up any resistance. The thermal conductive film is originally made of silica gel as the basic material, and related metal materials as auxiliary production. term "relative thermal conductivity" is appropriate because the thermal conductivity of these materials depends on the relative thickness of the layers and their orientation with respect to heat flow. Refer to the graph below to obtain the thermal resistance of the material. Fabreeka-TIM® structural thermal break, or thermal insulation material (TIM), is an energy saving, load bearing thermal break used between flanged steel connections. Technical Data Sheet BERGQUIST SIL PAD TSP 1800ST Known as BERGQUIST SIL-PAD 1500ST November -2018 PRODUCT DESCRIPTION Electrically Insulating, Thermally Conductive, Soft Tack Elastomeric Material. Sil-Pad 1500®) have a rough surface texture and will show a 15-20% decrease in thermal resistance over a 24 hour period. I mean thermal pad for chipset and the ram of the graphics adapter. Borgstede, MD,4 William G. A History of Innovation. When design procedures require a pad thickness greater than the maximum recommended thickness, investigate the use of FI FE bearings. I cut it down to a 35mm x 45mm and centered it. Failure to treat a condition b. The smallest observed range was 0. thermal conductivity is achieved compared to FR4 depending on filling material and quantity, but filling materials influence the cycle resistance of the board. 8g Gray Pad 2 : 60mm(W) X 1. The coefficient of linear thermal expansion (CLTE) of any material is the change of a material's dimension per unit change in temperature. Thermal resistance is 0. Stayfree ® Maxi Pads. You can also place filled and capped vias directly under the thermal solder pad for circuit board applications that have a thickness greater than 0. 5mm(H) X 20mm(D),3. I had the same issue as TaKeN once with some rather solid blue thermals pads I bought, the EK ones are soft and squishy so even if you get them too thick they'll press down and won't put up any resistance. Is that the recommendation, or could I use something thicker? Thanks. 5mm, purp + Check Stock & Lead Times 13 in stock for next day delivery (UK stock): Order before 20:00(mainland UK) & 18. 10 mm stencil with the enlarged aperture design that Samtec found to be optimal. brake pads overview After four decades at the forefront of disc brake rotor manufacturing, DBA was determined to develop its very own range of high-quality performance brake pads. K + Check Stock & Lead Times 196 in stock for same day shipping: Order before 8pm EST Standard Shipping (Mon – Fri. 5mm thick pads come in several W/mK thermal ratings for your cooling needs. The smallest observed range was 0. Thickness —. March 31, 2017 It's rare that one will offhandedly know which thermal interface material will work the best for their PCB thermal pad. All that in order to regulate the heat […]. 1Fluid-Filled—Copper Windings 55/65 °C Rise 1 Weights, gallons of fluid, and dimensions are for reference only and not for construction. List of symbols c Specific heat (J kg-1 K-1) d 1 Pad thickness (m) d 2 Disk thickness (m) dE_ Thermal energy per unit time (W) dP Friction power (W) E c Kinetic energy (J. 25mm Thermal Pads distributor. 88% Upvoted. If the manufacturer used Thermal Pad instead of paste is probably because there is a gap between the heat-sink and CPU/GPU surface in which Thermal Paste won't be able to. If the manufacturer used Thermal Pad instead of paste is probably because there is a gap between the heat-sink and CPU/GPU surface in which Thermal Paste won't be able to. 4 in² per watt dissipated for a 40°C rise in board temperature. Most pre-cured pads are soft and gel-like to minimize compression forces on the electronics and are available in a broad range of both thicknesses and thermal conductivity. Regardless if the 1mm are a bit thick, they will be compressed and the difference in heat transmittance 0. Thermal pads transfer heat from graphics card VRAM, VREG, MOSFET, etc. In order to dissipate 1 watt of power a good rule of thumb is that your board with need to have an area of 15. 001 inch or 0. 87") and is 1. QFN is a thermal efficient package and the exposed pad is a very cost-effective solution. 2 SSD (both sides with components), please use 0. I want to take it apart and replace the thermal paste. 6 x 10-3: 1. Second, while the DM6467 is designed for a heat sink, sounds like you're also trying to cool other chips that may not be. I'm currently looking all over the internet for information needed on the thickness and width needed for thermal padding replacement for my AMD Radeon R9 290X. The product is created by filling PGS Graphite Sheet into silicon resin. 2 mils, and the largest was 0. These very soft thermal gap filler materials are designed to conform and fill in gaps between a heat source and heat sink. the gap (thickness of the thermal pad) create bend-ing moments at the bolts - the thicker the thermal pad or the gap, the greater the bending moments at the bolts. Pricing is for ONE PAD each. May 10, 2020 #1 I was just wondering if how thick are the thermal pads for the smaller chips inside the gaming laptop?. 8 m high, has a thickness of 6. 88% Upvoted. The temperature on the steelwork on the warm side of the cladding system is 9. 0(b) 1, 2, 3 OO-75 (a) Measured via ISO 22007 Hot Disk method (b) Value reported on technical data sheet determined via ASTM D5470 from the supplied thermal pads and sandwiching them between the copper disks. We are a thermal interface material manufacturer, die cut converter and designer of thermal management materials for electronic cooling. Risk: Continued operation at or below Rotor Minimum Thickness can lead to Brake system failure. Armatherm™ FRR structural thermal break material provides a combination of low thermal conductivity and high compressive strength and has been used in hundreds of structural steel framing connections transferring load in moment and shear conditions. Ive read a lot of conflicting posts about thermal pads for the CPU and GPU on the Alienware m15x. Thermal Pad F 0,5mm - (120x16mm) - (EAN: 3830046996725) STEP 6 PLACING THERMAL PADS Once cut to size the thermal pads should be applied to the PCB as illustrated below. S ∆ T ∆ = ∆ bearings. The Intel stock HSF is designed for use with a thermal pad 2. I need to replace the thermal pads on my HP Pavilion dv5 1250us Entertainment PC but need to know the specifications for the thermal pads. Thermal pads. Pre-formed gap pads in standard or custom shapes, sizes, and thickness have at least one major attraction—they’re extremely easy to use compared to older material types such as thermal greases. An annular ring is the area on the pad that surrounds a through via. Safe practice would be to assume 1mm over 0. Thermal pads are used in a variety of electronic applications and industries including computers, laptops, tablet PCs, smart phones, routers, LEDs, solar, medical device, power supplies, wireless devices, and the automotive industry. The width of these rings can affect your design and. 008" Thickness, Natural Tack Both Sides, Sil-Pad TSP 1800ST Series / Also Known as Bergquist Sil-Pad 1500ST Series, IDH 2166373. Quick question, I have a T61 14 in. In this formula, δ Cu refers to the thickness of each copper layer; k Cu is the thermal conductivity of copper with a value of 388[W/(m•°C)]; k j is the thermal conductivity of each copper routing; δ F is the thickness of each FR-4 layer; k F is the thermal conductivity of FR-4 with a value of 0. 2 drives that. At higher plating thicknesses, a bump may be an issue, with an excessive bump creating issues with dielectric thickness. Power Stop Extreme Performance pads are best suited for high horsepower cars and big wheel upgrades. Because the 1mm thermal pad is too thick, it leaves a too big gap between the heat sink and the processor for CPU thermal paste to fill. A mask defined pad is when the mask relief is the same size or smaller than the copper pad it is exposing. A new brake pad will be around 12mm thick or 1/2 inch, and pads with sensors typically start to warn you when they get to 3mm or 1/8 inch, with a squeal or a warning light on the dash. This document provides information on pad sizes, stencil aperture, solder paste and soldering profile for d-PWER™ POL Converters. STEP 5 1: Thermal Pad E - 0. We offer an extensive line of standard thermal management materials as well as custom thermal material formulations, constructions and die cut designs targeted towards specific thermal application requirements. COVID-19 Update. It’s typified by MSI’s GE75 Raider which offers top-end, gaming-friendly components in a 17-inch chassis that, at less-than 3cm thickness, most certainly can be regarded as a portable PC. S ∆ T ∆ = ∆ bearings. Foam can be cut to custom sizes or glued together for thicker pieces. Graphite-PAD Thickness characteristic. 5mm thermal pad and the machine becomes stable. 0 mm - Silicone Based Thermal Pad (1mm Thickness) 1pc Blue GPU CPU Cooling Heatsink Silicone Thermal Conductive Pad$1. This blue colored thermal pad is available in 50mmx50mm and 145mmx145mm sizes, and thickness of 0. Thermal resistance Everyone knows that the existence of thermal conductive silicone pad is to be able to dissipate heat to the product. A full page of thermal pad! This is the cream of the crop for thermal pads. 51 psi Compression Set ASTM D3574 — 4% Flammability Rating — Passes Federal Flammability Standards DOC-FF-1-70 Smoke Density — Passes, 110 max. This new pad range is the result of an exhaustive research and development program, having put these pads through their paces on the street, off-road, at racetracks. of wire geometry, bond pad thickness, anddimensions of ‘toe’ and ‘heel’ on bond pads to optimize the thermal reliability of u wire wedgeA -bonds. ×Sorry to interrupt. The actual delta T is a linear function of the power transferred through the thermal element, with negative slope. 0254mm, and maximum surface wetting. Carpet Pad Thickness I see that the bassboatseats. Available Sizes 30mm X 30mm or 40 X40mm Graphite Thermal Pad. The fan stopped spinning at full speed all the time. LMT70 PCB Path of Heat Flow (Thermal Conductivity of Materials, k [W/(m×K)], Given in. Thermal Pad - Correct thickness. Thicker thermal break pads provide reduced heat flow Stainless-steel and FRP bolts reduce heat flow for a 0. Available Sizes 30mm X 30mm or 40 X40mm Graphite Thermal Pad. RTX 2070 XC Replacement Thermal Pad Thickness Hello, I was recently using an aftermarket AIO on my GPU which had a pump failure. Simple vias or via-in-pad can provide a large reduction in thermal resistance. Multiply the length and width, if they are given in inches, by the factor 2. Jual Thermal Pad TGF600M 6W/m. 2: ASTM D3801 V-0. Normally the thickness is determined by the gap between the two object squishing against each other, but since I'm just sticking heat sinks on the PCB, what thickness thermal pads should I use then? 0. However, there is no way you could avoid using the thermal pads on this heatsink, as there is way too much room in between the chips and the heatsink surface. K + Check Stock & Lead Times 155 available for 3 - 4 business days delivery: (UK stock) Order before 19:35 Mon-Fri (excluding National Holidays). Thermal Grizzly Minus Pad 8 100x100x2mm Thermal Pad. ×Sorry to interrupt. HELP - Do I need thermal pads? 07-03-2018 01:54 PM - edited 07-03-2018 01:56 PM Yes since the thermal pad spots are dry they will just lift off the chips when you remove the heatsink. 5mm: Net Weight and dimension: Gray Pad 1 : 60mm(W) X 1mm(H) X 20mm(D), 2. 20 mm (8 mil) long and covered with solder mask to prevent solder from flowing onto the probe pad. This is a silicone-based thermal pad with special filler material that is not disclosed by the company. Click here to obtain permission for Thermal Insulation Thickness Charts and Material Characteristics for Piping. Anisotropic in-plane thermal conductivity of black phosphorus nanoribbons at temperatures higher than 100K. 5mm, 1mm, 1. K + Check Stock & Lead Times 181 available for 4 - 6 business days delivery: (UK stock) Order before 18:30 Mon-Fri (excluding National Holidays). Indium Corporation is a leader in the development of both solder and metal-based thermal interface materials (TIM) for a wide variety of applications. The Barrier™ under concrete thermal insulation and vapor retarder product family is a high-performance, simple to use and labor saving product for all of your under-slab, under concrete, and radiant floor insulation projects. 0 4996 and an Arctic Accelero Xtreme III I want to slap on. Polymer matrix composites have a through-thickness-thermal conductivity whose value is realized in applications such as composite spaceborne electronics enclosures where heat dissipation is entirely dependent on thermal conduction to a heat sink. Here at Overclockers UK we sell a wide variety of Thermal Pads that cover nearly every application from graphics cards to motherboard chipsets. Create symmetry of the leads and pads so that the device can self--center on the pad layout. High Thermally Conductive Filler Pads TIF780 LED Light Application 2mm Thickness. 2 SSD (flat back without component), please use 1. Travis’ manager weighed in, reinforcing: “It (XPS 13) uses a cutting-edge thermal insulator (Silicon Aerogel, NASA technology) with allows us to reduce the skin temperature by 3 degrees. Advantages: Low cost Widely Available Reworkable Disadvantages: Uneven surfaces Not good for fine pitch Pb Thermal shock Solder Bridging Plugged or reduced PTH’s Process Flow. 0 mm - Thermal Compound for All Coolers, Efficient Thermal Conductivity, Gap Filler, Non-stick, Safe Handling, Easy to Apply - Blue 4. 5 out of 5 stars 483 £4. And i'll not use sided adhesive pads, just the common ones. At higher plating thicknesses, a bump may be an issue, with an excessive bump creating issues with dielectric thickness. Dell :: M/B Chipset Thermal Pad Thickness? Sep 8, 2009. Even further, increasing the copper area of pad to 2500mm 2 reduces the θ JA to 50°C/W. Safe practice would be to assume 1mm over 0. A good PS4 thermal pad works in the same way as a PS4 thermal paste. How does the heat transfer conduction calculator works? The heat transfer conduction calculator below is simple to use. Flirc Case Thermal pad. ** Includes two thermal pads with different thickness. Pre-formed gap pads in standard or custom shapes, sizes, and thickness have at least one major attraction—they’re extremely easy to use compared to older material types such as thermal greases. Thermal Pad 1 Thermal pad Company 1 2. thermal pad, 150mm x 150mm x 0. The standard bond thickness for both films is 0. In aesthetic clinics, 7 cm by 10 cm Bovie plates made of metal that do not bend easily are often used, because of cheap costs. What is the recommended thickness of thermal padding/tape to use on the ssd for this laptop? If possible can you recommend. The Z26 brake torque is consistently higher than OE pads with outstanding thermal stability. Thin, double-sided or multilayer printed circuit boards with a array of thermal vias (Via-in-Pad), which improve the heat conduction. In this paper, the theoretical influence of the skin thickness and the thermal contact resistance is studied on the thermal model describing the temperature evolution in skin and materials when they come in contact. 2D FEA simulation is runusing temperature dependent Au elastic -plastic. A History of Innovation. This does not affect your statutory rights. Some ENPIRION QFN packages have a large GND pad that is used to allow heat from the. Breakdown voltage: 9. Let's say that we have an unknown material where 30 Watts of heat flows through an area of 2 m 2 with a thickness of 1 m and a temperature difference, ΔT, of 50 K. Multiple Thickness Laptop/IC/GPU/PCB Thermally Conductive Solid Silicone Thermal Pad(id:10577638), View quality Multiple Thickness Thermal Pad, Silicone Thermal Pad, Thermally Conductive Pad details from ShenZhen 3KS Electronic Material Co. Graphite-PAD Thickness characteristic. Thermal Design with Exposed-Pad Packages August 01, 2016 by Robert Keim Most designers are by now quite familiar with integrated-circuit packages that incorporate an "exposed pad" or "thermal pad" in addition to the component's power, ground, and signal connections. Unfortunately, I did not save the thermal pads that came stock on the card. 5 mm 2: Thermal Pad F - 1. Subtracting the initial gap pad thickness by the deflection value, obtained above, will give the resultant thickness. Thickness: 1mm 8. and I wanted to know what thickness of the thermal pad that makes contact with the southbridge chip needs to be? Thermal Pad Thickness 2013-09-02, 20:26 PM. Check out some tips, review and buying guide for the best car break pads on the market this year and choose the high-quality one. Product Title 0. The Thermal Stitch crochet pattern has several uses – pretty much any pattern that requires lots of thickness and sturdiness. Its thermal conductivity is 0. Buy ARCTIC Thermal Pad 50 x 50 x 1. 53 50 50-60 Pad 104 Oct 2007 4 695 100 1. 35[W/(m•°C)]; δ PCB is the overall PCB. I put the Blue Artic pad on my MBA and with a few notes for install. This blue colored thermal pad is available in 50mmx50mm and 145mmx145mm sizes, and thickness of 0. $Q=-KA(\frac{dT}{dx})$ $k$is the thermal conductivity and can be expressed as: [math]k=\frac{Q}{A\frac{dT}{dx. Unfortunately, I did not save the thermal pads that came stock on the card. k) – and can fit M. When design procedures require a pad thickness greater than the maximum recommended thickness, investigate the use of FI FE bearings. Details Details. In my experience, 1. Flirc Case Thermal pad. Thermal Pad 1 Thermal pad Company 1 2. After removing the caliper body, unfasten the retaining bolts and remove the caliper mounting bracket, in order to remove the rotor. Thermal resistance is the reciprocal of thermal conductance. Correct description by the seller. Tflex 300, at pressures of 50psi, will deflect to over 50% the original thickness. Click here to obtain permission for Thermal Insulation Thickness Charts and Material Characteristics for Piping. Disc Brake Rotor Minimum Thickness (also known as Scrap Thickness) is the minimum safe working thickness of a rotor at which it must be replaced. The graphene unit cell, marked by dashed lines in Figure 1a , contains N = 2 carbon atoms. Check out some tips, review and buying guide for the best car break pads on the market this year and choose the high-quality one. If you are using 1. I still haven't ordered the thermal pads because the good stuff isn't cheap and I don't want to buy alot more than I need. I need to replace the thermal pads on my HP Pavilion dv5 1250us Entertainment PC but need to know the specifications for the thermal pads. Discussion Triton 500 PT515-51 thermal pad thickness?. * The resultant thickness of the Gap Pad will determine the thermal resistance. At the CPU surface - thermal past is used, but a the GPU end acer used a thermal pad. 3 GND Pad Thermal Vias. Specific thermal resistance or thermal resistivity R λ in kelvin metres per watt (K⋅m/W), is a material constant. The surface of the pad is slightly sticky and is equipped with a protective foil when shipped. THERMSTRATE® thermal pads to fit AMPSS® modules provide a clean and effective alterna-tive to thermal compound for interfacing the heatsink to the module. However, there is no way you could avoid using the thermal pads on this heatsink, as there is way too much room in between the chips and the heatsink surface. The smallest observed range was 0. There are few things more comforting than a cracking fire in a wood-burning stove on a chilly day. All that remains is to select a tie width that satisfies the two conflicting constraints, thermal and electrical. The graphene unit cell, marked by dashed lines in Figure 1a , contains N = 2 carbon atoms. In the actual circuit the thermal pad must always be electrically connected to the Source. For a comprehensive check, the brake pads should be removed and cleaned. Wathai Blue 100x100mm 1. • Starting at layout #5, the copper pad covers both thermal and Source pads. I'd say try to get a service or replacement first. The client develops respiratory. Thermaltake Announces the DistroCaseTM 350P Mid-Tower Chassis Taipei, Taiwan-June 2nd, 2020-Thermaltake, the leading PC DIY premium brand for Cooling, Gaming Gear, and Enthusiast Memory solutions, is thrilled to announce an advanced PC case- the DistroCaseTM 350P Mid-Tower Chassis. A new brake pad will be around 12mm thick or 1/2 inch, and pads with sensors typically start to warn you when they get to 3mm or 1/8 inch, with a squeal or a warning light on the dash. Thickness: 1mm 8. Optimizing PCB Thermal Performance for Cree thermal pads is a method to dissipate heat through an FR-4 PCB and into an where l is the layer thickness, k is the thermal conductivity, and A is the area normal to the heat source. Film based Sil-Pads (Sil-Pad K-4®, Sil-Pad K-6® and Sil-Pad K-10®) are smoother initially and show a 5% decrease over the same period of time. Thermal resistance (R) and thermal conductance (C) of the materials are reciprocals of one another and can be derived from thermal conductivity (k) and the thickness of the materials. Similarly, there also has a convertion for outer. * Add 9" for Bay-O-Net fusing. The amount of voiding post reflow in the thermal pad should not exceed 25% per IPC-A-610—Acceptability of Electronic Assemblies(6). Thermal Pad, Silicone, 101. Thermal stability up to 900C Flame retardant Colour: white RS 122-1773: filter media roll, 1m x 20m RS 122-1774: filter pad, 1m x 1. 25 inches, and from xxx W/m-K thermal conductivity all the way up to xx W/m-K, GAP PAD has your application covered. 46 55 55-60 Pad 106 Sep 2007 4 730 100 1. Unfortunately, I did not save the thermal pads that came stock on the card. OTC brake pad gauge (click image to buy) The materials used to construct brake pads include steel backing plates, shims, friction materials, rubberized coatings, and thermal insulation coatings. IC Graphite Thermal Pad – Preliminary. Even when not seen by the naked eye, microscopic air bubbles can be. A new brake pad will be around 12mm thick or 1/2 inch, and pads with sensors typically start to warn you when they get to 3mm or 1/8 inch, with a squeal or a warning light on the dash. Learn more about BERGQUIST SIL PAD TSP 3500 an high-performance, fiberglass-reinforced, silicone-based insulator. 45inch Surface Color Black Base Color Black Dimensions (L x W x H) 350 x 264 x 2 mm / 11. The actual delta T is a linear function of the power transferred through the thermal element, with negative slope. Gallery of thermal images. 5mm of thermal pad material is going to be minimal. I saw this thermal pads that are the same thickness of the original that Asus put on the heatsink 1,5mm : Alphacool Eisschicht Wärmeleitpad - 14W/mK 120x20x1,5mm - 2 Stück Alphacools ultimative Lösung (d)einen Computer mit einer Eisschicht zu kühlen. You can find a lot of info and reviews about it by searching on. Via Nicki’s Homemade Crafts. As metal components get smaller and tolerances get tighter, material selection and equipment capabilities become more important. For thermal pad 2 & 3, adjust and cut. In the PCB industry, 1OZ means the thickness of a 1OZ piece of copper uniformly spread over an area of 1 square foot (FT2). tained on the back and the pad of the same finger. The pad has a textured surface and a smooth bottom surface for easy welding to UltraPly TPO roof membrane. Design Guidelines for Cypress Quad Flat No-lead (QFN) Packaged Devices www. The solder tab serves as a thermal barrier, permitting a higher temperature for the larger wire solder joint without loss of degradation to the strain gage leads. List of symbols c Specific heat (J kg-1 K-1) d 1 Pad thickness (m) d 2 Disk thickness (m) dE_ Thermal energy per unit time (W) dP Friction power (W) E c Kinetic energy (J. A good example are power supply circuit of graphics cards and processors, so the choice of test subjects was unambiguous. Hi all, New member here. Doubling the pad thickness doubles the thermal resistance. A while back i burnt a few cards because this issue and i. Its special developed polymer results in its flexibility and its soft 'gap filler' properties. Input the cross-sectional area (m 2) Add your materials thickness (m) Enter the hot side temperature (°C). Discussion Triton 500 PT515-51 thermal pad thickness?. Search By Printer ≡ Menu Χ Menu. to the power IC and its exposed thermal pad by thermal vias (see Figure 8. Thermal pad 2. The thermal pad has some thickness (1/8th inch maybe) 3. 76 24 50-60 Pad 107 Sep 2007 4 700 100 1. 008" Thickness, Natural Tack Both Sides, Sil-Pad TSP 1800ST Series / Also Known as Bergquist Sil-Pad 1500ST Series, IDH 2166373. I cut it down to a 35mm x 45mm and centered it. Thermal transport in graphene is a thriving area of research, thanks to graphene's extraordinary heat conductivity properties and its potential for use in thermal management applications. This essentially turns the laptop's back panel into a giant heatsink for the pads. Insulators?. Best Thermal Pads, Thermal Tapes, Thermal Adhesives for CPU, GPU, Laptops, Notebooks, Motherboard Chipset, ICs, 3D Printer, Heatsinks, PCBs and DIY projects. Henkel corporate provides some information regarding Henkel and COVID-19. 203mm Application Thermal management, Thermally. The adhesive surface on both sides! In case your thermal pads for the Raspberry Pi 4 Model B or Raspberry Pi 3 Model B+ are getting bad or old, you can use these pads to replace it. A Thermal Pad works in the same way as a Thermal Paste - It transfers heat from hot areas of a circuit board into a heatsink. 87") and is 1. One method of evaluating the potential for condensation for a particular detail is by calculating its Temperature Index (TI). Thermal Interface Products Gap Pad, Silicone Thermal Interface, High Performance, 6 W/m-K Thermal Conductivity, Ultra Low Modulus, 8" x 16" Sheet, 0. 5mm thickness. thickness of approximately 600nm. Translation and Other Rights For information on how to request permission to translate our work and for any other rights related query please click here. CON-SERV Inc recommends an 11 ga. I am sure anyone would appreciate a handmade crocheted potholder as a present. The First Dedicated Advanced Patient Pad Specifically Designed to Protect Patients from Thermal Injuries During MRI. of wire geometry, bond pad thickness, anddimensions of ‘toe’ and ‘heel’ on bond pads to optimize the thermal reliability of u wire wedgeA -bonds. - not replacing them, I believe, overheated the GD-ROM's chip before twice on my other PAL Dreamcast) on my working Dremcasts. Most pre-cured pads are soft and gel-like to minimize compression forces on the electronics and are available in a broad range of both thicknesses and thermal conductivity. The thickness of the pad along. RTX 2070 XC Replacement Thermal Pad Thickness Hello, I was recently using an aftermarket AIO on my GPU which had a pump failure. I am using the heatsink & cpu from the dead mobo. 001-72845 Rev. Yes, thicker is worse in terms of heat transfer. 2 thermal pads are made of electrically non-conductive silicone - a material that offers decent thermal conductivity (up to 4 W/m. Hops Member Posts: 2 New User. Thermal Pad only. The field oxide was grown at 1000˚C to a thickness of approximately 600nm. • Starting at layout #5, the copper pad covers both thermal and Source pads. COOL-GAPFILL™ is yet another enhanced thermal gap-filling material that provides extreme thermal conductivity and thus unparalleled thermal interface resistance for board level multiple component thermal management. I need to replace the thermal pads on my HP Pavilion dv5 1250us Entertainment PC but need to know the specifications for the thermal pads. 5" I-3 CABINET/TANK DIMENSIONS Figure 2. BERGQUIST® GAP PAD® TGP 3000 is a soft gap filling material rated at a thermal conductivity of 3 W/m-K. Hardness (Shore). Thermal Pad, Silicone, Fibreglass, 100 mm x 100 mm x 0. As the rotor reaches its minimum thickness, the braking distance increases, sometimes up to 4 meters. 5mm(H) X 20mm(D),3. A good PS4 thermal pad works in the same way as a PS4 thermal paste. What is the recommended thickness of thermal padding/tape to use on the ssd for this laptop? If possible can you recommend. I have a IBM Thinkpad T20 and their stock thermal pad seems to work nicely. Specific thermal resistance or thermal resistivity R λ in kelvin metres per watt (K⋅m/W), is a material constant. It seems to be 1 to 2 millimeter. For metal fabricators, materials and machines are key to success A continuous wet-processing system from Metfab automatically sharpens, deburrs, and polishes metal parts. Thermal Grizzly Carbonaut high-tech carbon thermal pads can be used as an excellent alternative to mid-range thermal compounds. Understanding thermal gap filler pads, PCB deflection, and stress. Manufactured to meet QS and ISO Quality System Standards; Utilizes an Electrocoating finish that provides long lasting corrosion protection; Designed to withstand 400 hours of salt water exposure without rusting; Double disc grinding ensures parallelism, eliminates run out, and provides near perfect disc thickness; The double disc grinding also leaves a non-directional finish on the friction. Great replacement for traditional heat sink compound grease paste. 5 mil, and I was wondering if it was okay to stack the pads to actually make contact with the cooler. The thicknessSENSOR is a fully assembled system for non-contact thickness measurements of strip and plate material. My heatsink connects to both the CPU & what must be the gpu. For NSMD pads, the solder mask opening should be about 120 m to 150 m larger than the pad size, providing a 60 m to 75 m design clearance between the copper pad and solder mask. Most pre-cured pads are soft and gel-like to minimize compression forces on the electronics and are available in a broad range of both thicknesses and thermal conductivity. RTX 2070 XC Replacement Thermal Pad Thickness Hello, I was recently using an aftermarket AIO on my GPU which had a pump failure. Thermal Conductivity (W/mK). Some ENPIRION QFN packages have a large GND pad that is used to allow heat from the. If any creep deformation occurs in the pad over time, this creates extra stresses and increases Schöck Isokorb® S22 structural thermal breaks installed. Discussion Triton 500 PT515-51 thermal pad thickness?. 3M™ Thermally Conductive Silicone Interface Pad 5583S • Apply the thermal pad to one substrate at a modest angle with the use of a squeegee, rubber roller or finger Shore 00 results depend on test method and thickness of the sample tested. it odd they gave you a 1mm thickness that's a lot of thermal grease to put on a chip. WisDOT Bridge Manual. Thermal pads are an ideal solution to fill air gaps caused by imp…. - not replacing them, I believe, overheated the GD-ROM's chip before twice on my other PAL Dreamcast) on my working Dremcasts. This increases the heat energy and premature wearing away of the pad. Choose a thermal pad that is the same thickness as the air gap or a tiny bit thicker (see the pad manufacturer's recommendation). High Performance Thermal Interface Alternative to Thermal Compound for CPU's, GPU's and Gamestations. * Add 9" for Bay-O-Net fusing. Their flexibility and elasticity make them suited to the coating of the very uneven surfaces. 25mm~15mm Thickness Thermal Gap Filler Pad/Heat Transfer Pad. 6 Ω, which is modeled as a parallel resistance to laminate's thermal resistance of ~58 Ω. Gap Fillers • Soft and compressible/stress relieving • Idealfor applications wherelarge gap tolerancesare present, typically150µm to 5 mm • Non-flowable • Limited adhesion. It is recommended that presence and amount of voids are checked using X-rays post reflow. I had the same issue as TaKeN once with some rather solid blue thermals pads I bought, the EK ones are soft and squishy so even if you get them too thick they'll press down and won't put up any resistance. 5mm, 1mm, 1. The smallest observed range was 0. The thermal pad is placed between the mSATA SSD and the bottom of the chassis to improve the thermal performance. 0(b) 1, 2, 3 OO-45 Thermal Pad 2 Thermal pad Company 1 4. easy to apply desired thickness and long lasting. It is suitable for all thermal pad applications greater than 0. Some deep partial thickness burns eventually require skin grafts. Thermal Model Layout Copper (k=380 W/m-K; t = 2 oz or 0. 10 mm stencil with the enlarged aperture design that Samtec found to be optimal. 5mm, 1mm and 1. S ∆ T ∆ = ∆ bearings. 30: SARCON® GR14A: General purpose gap filler pad, UL94 V-0/V. Pad Thickness (mm) ASTM2 D5470 ASTM D5470 ASTM E1461 TCM Series 0. ARCTIC Thermal Pad 50 x 50 x 1. Thermal conductivity is an inherent or absolute property of the™material. THERMSTRATE® thermal pads to fit AMPSS® modules provide a clean and effective alterna-tive to thermal compound for interfacing the heatsink to the module. 2 Via Types and Solder Voiding Voids within the solder joints under the exposed pad can have an adverse effect on high-speed and RF applications, as well as, on the thermal performance. For more than 39 years, JARO has developed a broad range of unique, cutting-edge cooling technologies designed to extend the life of electronic components for many different industries. The purpose of this paper is to present our work on the effect of nickel (Ni) pad metallization thickness on fatigue failure of lead-free solder joints after thermal aging. Before understanding how much the thickness of the thermal gap pad of the notebook is suitable, in fact, we must first look at the relationship between the thermal conductivity and the thickness of the product. 0(b) 1, 2, 3 OO-75 (a) Measured via ISO 22007 Hot Disk method (b) Value reported on technical data sheet determined via ASTM D5470 from the supplied thermal pads and sandwiching them between the copper disks. 00(NI) (for re-reeled items 16:30 - mainland UK & NI) Mon-Fri (excluding National Holidays). 9 mm Total Length: 142. 5mm, 1mm and 1. Upon reaching the required melt temperature, the pad will fully change phase and attain minimum bond-line thickness (MBLT) • less than 0. Quality Thermal Insulation Materials manufacturers & exporter - buy 0. The thermal conductivity normally found in tables is the value valid for normal room temperature. For thickness, I agree with most everyone else: 3-1/2" thick (using 2x4s for forms) will be plenty. The coefficient of linear thermal expansion (CLTE) of any material is the change of a material's dimension per unit change in temperature. Just now, xnoobftw said: What thermal conductivity of the thermal pads would you recommend then? The best i can find is 3. I saw this thermal pads that are the same thickness of the original that Asus put on the heatsink 1,5mm : Alphacool Eisschicht Wärmeleitpad - 14W/mK 120x20x1,5mm - 2 Stück Alphacools ultimative Lösung (d)einen Computer mit einer Eisschicht zu kühlen. Recommended thermal pad thickness? By xnoobftw, February 13, 2016 in Graphics Cards · 20 replies. 88% Upvoted. Unfortunately, I did not save the thermal pads that came stock on the card. Description Part Number Description Part Number Thermal Pad, 80 size APA502-80-001 Thermal Pad, 60 size APA502-60-001 Note: These thermal pads do not provide. The conversion formula for ounces and grams (g) is: 1OZ ≈ 28. However, you would pay 0. Pad to Pad 100µm - 4µinch 100µm - 4µinch 125µm - 5µinch Thermal Spoke Width Thermal Leg Reduction Blue, Yellow, Black, White Min thickness 15-30 µm (0. Most of our vendors have developed methods to supply this thickness however, there was some grumbling from a few at first. 8°C and the heat loss (χ-value) is 1. Balakrishnana, M. Thermal Transistor Insulator Pad SIL-PAD® 400 Sil-Pad® is a thermally-conductive interface material developed for the electronics industry. Disc Brake Rotor Minimum Thickness (also known as Scrap Thickness) is the minimum safe working thickness of a rotor at which it must be replaced. FR4 material thermally in parallel with 24 vias). Henkel corporate provides some information regarding Henkel and COVID-19. In order to dissipate 1 watt of power a good rule of thumb is that your board with need to have an area of 15. Rotors has the min thickness stamped on the edge, and use a micrometer to measure the thickness. The Artic Silver 5 thermal paste you put on is far more effective at wicking away heat from the processor. Search By Printer ≡ Menu Χ Menu. Refer to numbering in previous icture when applying thermal pads of different sizes or thickness. Thermal Pad, Filled Silicone Polymer, 150 mm x 105 mm x 1 mm, 1. The coefficient of linear thermal expansion (CLTE) of any material is the change of a material's dimension per unit change in temperature. k - Z Axis) In the thermal resistance vs pressure vs deflection charts PC93 provides low thermal impedance. I'd like to replace the different thermal pads (it's dry and has crumbled after 15 years I suppose. When you put a heatsink on a cpu die, you have a thin layer of air in-between the metals, the purpose of paste is not to increase the distance of the metals, but to replace the air which has a 0. 2 m wide and 1. Its special developed polymer results in its flexibility and its soft 'gap filler' properties. The little black dudes are the things that are kinda nekkid. Two heated rollers create high-quality professional, bubble and wrinkle-free results. The thickness of the pad along. This pad has been added to the Intel® NUC chassis in manufacturing. 8g Gray Pad 2 : 60mm(W) X 1. List of symbols c Specific heat (J kg-1 K-1) d 1 Pad thickness (m) d 2 Disk thickness (m) dE_ Thermal energy per unit time (W) dP Friction power (W) E c Kinetic energy (J. My dad accidentally got the 1 mm thick pads instead of the 1. Add to Cart. In my experience, 1. Another inherent thermal property of a material is its thermal resistance, R, as defined in Equation 2. 3 x 10-3: 2. IC Graphite Features: Top tier thermal performance. McMaster-Carr is the complete source for your plant with over 595,000 products. For reliability in the manufacturing process, the drill diameter of the vias should be no less than 0. The pedal return springs may make a significant contribution to the overall pedal force. The Fujipoly System Builder Thermal Pad starts at 300mm x 200mm (11. 50: SARCON® GR-ae: Standard Gap Filler: General purpose gap filler pad, UL94 V-0/V-1 class: 3. Now with All-in-One multi-fluid absorption for periods, leaks and moisture. In this formula, δ Cu refers to the thickness of each copper layer; k Cu is the thermal conductivity of copper with a value of 388[W/(m•°C)]; k j is the thermal conductivity of each copper routing; δ F is the thickness of each FR-4 layer; k F is the thermal conductivity of FR-4 with a value of 0. 3: Designing the aperture opening of the thermal pad The large thermal pad being screened with excessive solder may result the device “floating”, thus causing a non-wetting between the QFN/DFN leads and PCB due to solder surface tension during reflow. One method of evaluating the potential for condensation for a particular detail is by calculating its Temperature Index (TI). 00 MATERIALS LABOR. 3 Minimum Operating Temp oC -51. 5mm, 1mm, 1. Timtronics is an advanced, present-day innovator of Thermal Interface Materials. tained on the back and the pad of the same finger. If there are thermal vias in the board exposed pad area, it is recommended that these are plugged. in the GS66 Stealth shows that this machine is a more forgiving thermal. * Add 9" for Bay-O-Net fusing. The thermal pad equalises the effect of irregularities from microscopic scale to visible scratches etc. 100" Thickness, Gap Pad TGP 6000ULM Series, IDH 2196912. Physical Properties: Durometer: 90 ± 5. Nowadays, most vehicles use a disc braking system to allow the driver to slow down and stop their vehicle. 5mm, 1mm and 1. 00 PRODUCT Thermal Impedance1 (C-cm2/W), No Shim Thermal Conduc-tivity(W/m˚C) Reliability (hrs), 150°C Baking Pad Thickness (mm) ASTM2 D5470 ASTM D5470 ASTM E1461 PTM “D” Series 0. In some cases the gaps are a function of tolerance variation and in other cases there is intentional spacing. 0 W/m-K of thermal conductivity. to the power IC and its exposed thermal pad by thermal vias (see Figure 8. - Packed with 9 thermal pads, three sizes, each with 3 pieces: 20x67x0. How thick are the thermal pads (is it just one thickness or different thickness are used all over the Dreamcast?). 98% of products ordered ship from stock and deliver same or next day. 6 x 10-3: 1. Also, the thermal pad itself is slippery, so make sure to lay down your tower horizontally before installing the cooler or better install the cooler outside of the chassis if using IC Graphite Thermal Pad and then install the motherboard inside the chassis. Gap Fillers versus Thermal Pads ( 07/19/2018 ) Written by: Eric Wyman In the quest to find the best thermal management material, battery manufacturers typically are drawn to one of two types of products: a cure-in-place, liquid-dispensable gap filler (gap filler) or a pre-cured thermal pad (thermal or gap pad). How do I know what thickness thermal pads to use on my ps4 pro model 7115b? 2 comments. K + Check Stock & Lead Times 206 available for 4 - 5 business days delivery: (UK stock) Order before 21:35 Mon-Fri (excluding National Holidays). 5mm or 1,0mm ? I want to replace them. For example, a characteristic of a heat sink. Epidermal thickness indicated by heat flow method Thermal conductivity (vo- lar surface of forearm) (cal/cm2 sec x 10-3 Dermis -0094. I had the same issue as TaKeN once with some rather solid blue thermals pads I bought, the EK ones are soft and squishy so even if you get them too thick they'll press down and won't put up any resistance. Problem is that pads are often not the spcified thickness, so better get a few in different thicknesses. Yes, thicker is worse in terms of heat transfer. For metal fabricators, materials and machines are key to success A continuous wet-processing system from Metfab automatically sharpens, deburrs, and polishes metal parts. 201700233 nature in optical, electrical, thermal, and mechanical properties, which has been proved experimentally and. Conceptual illustration of thermal test vehicle with embedded capacitive sensors. The first women’s suit to appear on our wetsuit thickness guide, the Billabong Salty Dayz 5/4mm may have a retro design, but its performance is truly modern. Unfortunately, I did not save the thermal pads that came stock on the card. 6ppm/ C) and PCB thickness 1000 1000 1000 1000 RDL pad UBM Silicon Passivation Layer Epoxy Cupost Solder Ball PCB Pad. The temperature on the steelwork on the warm side of the cladding system is 9. Head Thickness: 7. Graphite-PAD is a thermal interface material (TIM) that compatibly obtained excellent thermal conductivity in thickness direction (Z-axis direction) and high flexibility (deformable with a low load). 5 mm to 10 mm thicknesses. 5mm thick 6W thermal pads ontop of eachother on three of the hottest VRM chips, such that the thermal pads touch both the VRM and the aluminium back panel of the laptop. Another inherent thermal property of a material is its thermal resistance, R, as defined in Equation 2. So is thermal compound, thermal goop, thermal gunk, heat paste, that gooey stuff you put between your CPU and heat sink, hot ass gloop-a-doop. The width of these rings can affect your design and. The walkway pad or acceptable paver is required at all access points (ladders, hatches, doorways, etc), and is. Thermal Pad, Filled Silicone Polymer, 150 mm x 105 mm x 1 mm, 1. Since the thickness of the thermal interface material 15 will vary with the attaching pressure of the heat sink 13, therefore, the BLT of the thermal interface material 15 should be measured first, then the volume of the thermal interface material 15 and the parameter of the attaching pressure of the heat sink 13 can be determined. Thermal Pad only. RTX 2070 XC Replacement Thermal Pad Thickness Hello, I was recently using an aftermarket AIO on my GPU which had a pump failure. Thermal gap pads. Marmox Thermoblock is a block of load-bearing insulation material designed to be placed at the base of a masonry or timber–frame wall to address the thermal bridge. 5mm - Easy to install, you can select most suitable thickness and cut the required sizes. Polymer matrix composites have a through-thickness-thermal conductivity whose value is realized in applications such as composite spaceborne electronics enclosures where heat dissipation is entirely dependent on thermal conduction to a heat sink. 5mm thick pads come in several W/mK thermal ratings for your cooling needs. 3 shows the predicted temperature distribution through the penetration without a thermal break. 5 mm to 10 mm thicknesses. As thin as 0. 1 x 10-3: 1. Thermal stability up to 900C Flame retardant Colour: white RS 122-1773: filter media roll, 1m x 20m RS 122-1774: filter pad, 1m x 1. 3 x 10-3: 2. 10PCS 100*100*3. Thermal pads are used in a variety of electronic applications and industries including computers, laptops, tablet PCs, smart phones, routers, LEDs, solar, medical device, power supplies, wireless devices, and the automotive industry. LMT70 PCB Path of Heat Flow (Thermal Conductivity of Materials, k [W/(m×K)], Given in. 047W/mK which will result in a significant reduction in y values when used at wall to floor junctions. In order to dissipate 1 watt of power a good rule of thumb is that your board with need to have an area of 15. Optimizing PCB Thermal Performance for Cree thermal pads is a method to dissipate heat through an FR-4 PCB and into an where l is the layer thickness, k is the thermal conductivity, and A is the area normal to the heat source. 5mm(H) X 20mm(D),3. Is that the recommendation, or could I use something thicker? Thanks. OTC brake pad gauge (click image to buy) The materials used to construct brake pads include steel backing plates, shims, friction materials, rubberized coatings, and thermal insulation coatings. Suitable for CPU GPU south/north bridge ASICs ช้อป Thermalright ODYSSEY Thermal Pad thickness 1mm for CPU GPU PSU and ASICs. Cari produk Lainnya lainnya di Tokopedia. Graphite-PADs are composed of silicone resin and Pyrolytic Graphite Sheets. Figure 11 show the calculations for the thermal resistance of the overall PCB (i. plate be laminated every 2" in pads over 2" thick for strength and stability. Safe practice would be to assume 1mm over 0. Layout #1 through layout #4 have the thermal pad separated from the Source and these layouts are only for the thermal analysis. This is the cream of the crop for thermal pads. Description Part Number Description Part Number Thermal Pad, 80 size APA502-80-001 Thermal Pad, 60 size APA502-60-001 Note: These thermal pads do not provide. Two heated rollers create high-quality professional, bubble and wrinkle-free results. Thermal Pad: Thermal conductivity: 1. March 31, 2017 It's rare that one will offhandedly know which thermal interface material will work the best for their PCB thermal pad. 100" Thickness, Gap Pad TGP 6000ULM Series, IDH 2196912. The warmth of a pad is listed in one of two ways: R-value or a suggested degree rating. 2 drives that. For metal fabricators, materials and machines are key to success A continuous wet-processing system from Metfab automatically sharpens, deburrs, and polishes metal parts. 5mm(H) X 20mm(D),3. Antimicrobial Treatment — ultra•fresh™ Thermal Resistance — R-Value. It’s comprised of a hydro-powered mattress pad, thermal regulating control unit(s) and a remote making it perfect for one or two sleepers! Utilizing hydropower (water), this sleep system operates between 55-115°F (13-46°C), helping encourage quality, restorative sleep. It was less clear, however, what thickness of pads is most effective, if it is better to pay for more expensive ones, and what improvement to expect. This blue colored thermal pad is available in 50mmx50mm and 145mmx145mm sizes, and thickness of 0. 31 028501 View the article online for updates and enhancements. There are few things more comforting than a cracking fire in a wood-burning stove on a chilly day. There are a lot of pictures out there detailing all of the different sized thermal pads that you need. 5 no problem, but the FAQ states they normally use 1mm thermal pads. 2 Relationship between thermal resistance (R-value) and thickness of commercial isocyanurate foam sheathing (data obtained from a Canadian manufacturer) FIGURE 5. 00 PRE−GREASED MICA TOTAL \$130. Despite an emphasis on controlling wrap thickness, it was difficult to achieve. The width of these rings can affect your design and. 001 inch or 0. Thermal cycles, seismic activity, variable loading conditions, and long term material changes are some of the conditions that must be accommodated by the structural system. EK made sure to provide customers with more than adequate quantity of thermal pads to complete this step. If you are an aggressive driver that uses the brake often, they will not last long. For the application where thermal paste is too thin and thermal pads are too thick, K5 pro provides a balance. There will come a time when your brake pads will need to be replaced. 0(b) 1, 2, 3 OO-45 Thermal Pad 2 Thermal pad Company 1 4. 21 In/In/°Cx10-5: Thermal Conductivity: ASTM C177: Ca. 3” (without desktop thickness) Quiet motor max load 150kg (330 pounds) with 4 level profile programmable controller Cable management system Full surface RGB mouse pad included. Thermal cycles, seismic activity, variable loading conditions, and long term material changes are some of the conditions that must be accommodated by the structural system. One thing to know about chips that are not designed for heat sinks is that most of the heat flows through the balls of the chip rather than the top. Stefan1024. Wood stoves need a heat-resistant pad underneath to protect the. Temperature operating range -200 C to +400C. Hardness (Shore): 30~55 6. 6-mm thick star board approximately 270 mm2,. Via Nicki’s Homemade Crafts. With a typical thickness of 0. Thermal relief pads in most modern CAD systems are created by rules. One method of evaluating the potential for condensation for a particular detail is by calculating its Temperature Index (TI). Additionally, here I am going to list down the best thermal pads of different varieties for your cooling needs. Thermal impedance: 0. Design the center pad so that it is equivalent to maximum value of the exposed pad on the PQFN/QFN package. Pricing is for ONE PAD each. Design Guidelines for Cypress Quad Flat No-lead (QFN) Packaged Devices www. At Dunelm, we have a range of thermal curtains and thermal curtain linings to help keep the cold out on those blustery, wintry nights. Viswanathanb a Department of Mechanical Engineering, Nandha Engineering College, Erode-638052, Tamilnadu, India. The part should stand off of. Some ENPIRION QFN packages have a large GND pad that is used to allow heat from the. Doubling the pad thickness doubles the thermal resistance. Its thermal conductivity is 0. Today at Epec, the customer comes first, and everything we do must be put through that filter. Thanks to the combination of great performance and the fair pricing, the thermal pad was recommended as a "value for money winner". Even when not seen by the naked eye, microscopic air bubbles can be. The tolerance for mask placement is ±2mils so that is why we adjust mask apertures to be 2mils larger all the way around the copper feature helping to ensure the entire pad can be soldered. TS 6701 Thermal Insulation – Hot service TS 6702 Thermal Insulation – Cold service ASTM C-680 Standard Practice for Heat Loss or Gain and Surface Temp. I just purchased a used GV-N108TGAMINGOC BLACK-11GD, which is a GTX 1080 ti. 5mm Artic purchased from Amazon 140x140. Multiply the length and width, if they are given in inches, by the factor 2. 25 mm, and the center-line of the vias should remain 0. It is watertight, so only through-punctures and unsealed joins or gaps between pieces will allow transmission of moisture. with a thermal-electric-coupled physics solver to examine the effects of different wire parameters. The pads are the Standard Thickness of 0.
|
2020-11-26 12:09:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42159754037857056, "perplexity": 5617.576230550751}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188146.22/warc/CC-MAIN-20201126113736-20201126143736-00554.warc.gz"}
|
https://webwork.libretexts.org/webwork2/html2xml?answersSubmitted=0&sourceFilePath=Library/270/setDerivatives1_5Tangents/ur_dr_1_5_7.pg&problemSeed=123567&courseID=anonymous&userID=anonymous&course_password=anonymous&showSummary=1&displayMode=MathJax&problemIdentifierPrefix=102&language=en&outputformat=sticky
|
If find $f'( 5 )$.
$f'( 5 )$ =
Use this to find an equation of the tangent line to the curve $y = f(x)$ at the point $\left( 5 , {\textstyle\frac{25}{26}} \right)$.
An equation of the tangent line is $y$ = .
Your overall score for this problem is
|
2020-06-06 02:25:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 5, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8114207983016968, "perplexity": 110.43795875673744}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348509264.96/warc/CC-MAIN-20200606000537-20200606030537-00168.warc.gz"}
|
http://mathhelpforum.com/pre-calculus/92135-graphing-function-its-reciprocal-also-get-domain-range.html
|
# Math Help - Graphing function and its reciprocal, also get domain/range
1. ## Graphing function and its reciprocal, also get domain/range
Hello.
I have the function y=x^2 - 3
Would the reciprocal be: y= 1/(x^2-3)?
(I need to graph these by hand without a graphing calculator.)
Now how would I graph y=x^2-3?
The -3 tells me the min of the range is -3? So I start at -3 and go up, but how do I know how wide to make it or where to make the zeroes?
Same thing for the reciprocal, I have no idea to graph that and find the range/domains of it too.
Any help would be greatly appreciated.
2. Originally Posted by olen12
Hello.
I have the function y=x^2 - 3
Would the reciprocal be: y= 1/(x^2-3)?
(I need to graph these by hand without a graphing calculator.)
Now how would I graph y=x^2-3?
The -3 tells me the min of the range is -3? So I start at -3 and go up, but how do I know how wide to make it or where to make the zeroes?
Same thing for the reciprocal, I have no idea to graph that and find the range/domains of it too.
Any help would be greatly appreciated.
The function $y=x^2-3$ is simply the same thing as $y=x^2$, a parabola that has shifted down the y axis three units.
So just graph x^2 and slide it down.
The function $y=\frac{1}{x^2-3}$ is simply a hyperbola. Whenever dealing with hyperbolas, always look for vertical asymptotes. i.e. where the function is undefined. Theses will help you sketch the graph.
|
2015-07-30 09:24:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7032979726791382, "perplexity": 771.9838692287598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987155.85/warc/CC-MAIN-20150728002307-00152-ip-10-236-191-2.ec2.internal.warc.gz"}
|
https://stacks.math.columbia.edu/tag/0CTI
|
Lemma 38.28.7. In Situation 38.28.1 let $K$ be as in Lemma 38.28.2. Let $W \subset X$ be as in Lemma 38.28.6. Set $\mathcal{F} = H^0(K)|_ W$. Then, after possibly shrinking the open $W$, the support of $\mathcal{F}$ is proper over $A$.
Proof. Fix $n \geq 1$. Let $I_ n = \mathop{\mathrm{Ker}}(A \to A_ n)$. By More on Algebra, Lemma 15.11.3 the pair $(A, I_ n)$ is henselian. Let $Z \subset W$ be the support of $\mathcal{F}$. This is a closed subset as $\mathcal{F}$ is of finite presentation. By part (3) of Lemma 38.28.6 we see that $Z \times _{\mathop{\mathrm{Spec}}(A)} \mathop{\mathrm{Spec}}(A_ n)$ is equal to the support of $\mathcal{F}_ n$ and hence proper over $\mathop{\mathrm{Spec}}(A/I)$. By More on Morphisms, Lemma 37.52.9 we can write $Z = Z_1 \amalg Z_2$ with $Z_1, Z_2$ open and closed in $Z$, with $Z_1$ proper over $A$, and with $Z_1 \times _{\mathop{\mathrm{Spec}}(A)} \mathop{\mathrm{Spec}}(A/I_ n)$ equal to the support of $\mathcal{F}_ n$. In other words, $Z_2$ does not meet $X_0$. Hence after replacing $W$ by $W \setminus Z_2$ we obtain the lemma. $\square$
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
|
2022-08-07 16:40:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9917359352111816, "perplexity": 124.72469141039797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570651.49/warc/CC-MAIN-20220807150925-20220807180925-00687.warc.gz"}
|
https://socratic.org/questions/can-ph-levels-go-beyond-14
|
# Can pH levels go beyond 14?
Aug 30, 2017
yes. example NaOH with molarity 10M/L [pH=15].
#### Explanation:
pH levels can surely go beyong 14 . if you check out some lower level books , you'll find 'pH range 0-14' but that doesn't make any sense.
pH varies from 0-14 at 25°C .
While reading one of the experiments in my bogus'lab manual ', i found pH of water at temperature more than 25°C to be less than 7 . But those books don't print why that's so.
The pH decreases on imcreasing temperature but yet steam isn't acidic . That leads to a practical conclusion that pH isn't static or say it doesn't has fixed range.
We know , pure water is neutral which means
pH = pOH.
therfore on adding both these we get the pH range .
Those rotten books print 'increase in ${H}^{+}$ ions' as it's cause but don't really justify it .
Well, this was that pH range could contract and as per the question we gotta answer something different.
Now, pH could also be more than 7 (of pure water) but how?
That's the same answer as we have when we increase temperature .
with this i would conclude
pH is inversely proportional to $\delta t e m p e r a t u r e$
Now , this was all like postulates without any justification .
Lets move to another question 'why does temperature have any effect on pH?'
The answer is related to concentration of H^(+) & OH^(-) ions [note H3O^(+) & H^(+) are one and the same things.
Throught several experiments it was found that the concemtration of Hydronium ions and hydroxide ions both is ${10}^{- 7}$M each . (conclusion water is neutral)
but as soon as we increase the temperature the rate of forward reaction increases or say the equilibrium shifts $H 2 O$ breaks down into its respective conjucate base and acid [ions] thus there is an increase in concemtration of ions .
suppose the concentration increase from ${10}^{- 7}$ to ${10}^{- 5}$ .
this increase will be pf both cation amd anion thus solution stays neutral.
at 25°C
Kw =[$H 3 {O}^{+}$][$O {H}^{-}$]
Kw = ${10}^{- 7}$ × ${10}^{- 7}$ =
${10}^{- 14}$
Now if you want to find max pH its
-log (${10}^{- 14}$)=14
but as in the example we took
the concentration to be ${10}^{- 5}$ . so using the formula we get max pH as 10 . similarly we could also increase the ph range decreasing temperature.
Here's a link which includes some example and also explaination
https://chemistry.stackexchange.com/questions/6997/ph-range-outside-conventional-0-14
cheers!
|
2021-07-29 22:53:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 15, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5227312445640564, "perplexity": 2950.8539415677137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153897.89/warc/CC-MAIN-20210729203133-20210729233133-00405.warc.gz"}
|
https://math.stackexchange.com/questions/661439/prove-trig-identity-tanx-cotx-secx-cscx-wherever-defined
|
# Prove trig identity: $\tan(x) + \cot(x) = \sec(x) \csc(x)$ wherever defined
I appreciate the help.
My attempt:
\begin{align} \tan(x) + \cot(x) &= \frac{\sin(x)}{\cos(x)} + \frac{\cos(x)}{\sin(x)} \\ &= \frac{\sin^2(x)}{\cos(x) \sin(x)}+\frac{\cos^2(x)}{\cos(x) \sin(x)} \\ &= \frac{\sin^2(x)+\cos^2(x)}{\cos(x) \sin(x)}\\ &= \frac{1}{\cos(x) \sin(x)}\\ &= \frac{1}{\frac{1}{\sec(x)}\frac{1}{\csc(x)}}\\ &=\frac{1}{\frac{1}{\sec \csc}}\\ &=\frac{1}{1}\cdot \frac{\sec(x) \csc(x)}{1}\\ &= \sec(x) \csc(x) \end{align}
• OK! If you have to do this for an exam, however, I suggest you write in all of the " $\ \theta \$ "s (or whatever symbol you are using for angles). A grader may take points off for not writing the functions properly. (What you did is fine for your own "scrap work", of course.) – colormegone Feb 3 '14 at 0:48
• yup. It's quicker to go from $\frac1{cos\cdot{sin}}$ to $\frac1{cos}\frac1{sin}=sec\cdot{csc}$. – Eleven-Eleven Feb 3 '14 at 0:49
That is exactly correct! Just two things: First, $\tan,\sin,\cos,$ etc hold no meaning on their own, they need an argument. So just be sure to write $\tan x$, $\cos x$ etc rather than just $\tan$ or $\cos$.
Finally, you could save time on your proof by noticing on the fourth step that $$\frac{1}{\cos x\sin x}=\frac{1}{\cos x}\frac{1}{\sin x}=\sec x \csc x$$
Your steps are correct, but just keep in mind that robotically converting everying into $\sin$s and $\cos$s isn't the only option available to you.
Note that $$\cot\theta = \frac{\cos\theta}{\sin\theta}=\frac{\frac{1}{\sin\theta}}{\frac{1}{\cos\theta}}=\frac{\csc\theta}{\sec\theta}$$ that $$\cot\theta\tan\theta=\frac{1}{\tan\theta}\cdot\tan\theta=1$$ and that $$\sec^2\theta=\tan^2+1$$ then $$\begin{array}{lll} \tan\theta+\cot\theta&=&1\cdot(\tan\theta+\cot\theta)\\ &=&(\cot\theta\tan\theta)(\tan\theta+\cot\theta)\\ &=&(\cot\theta)(\tan\theta(\tan\theta+\cot\theta))\\ &=&\frac{\csc\theta}{\sec\theta}(\tan^2\theta+1)\\ &=&\frac{\csc\theta}{\sec\theta}\sec^2\theta\\ &=&\sec\theta\csc\theta \end{array}$$
An alternative approach writes $$t=\tan x/2$$ so $$\tan x+\cot x=\frac{2t}{1-t^2}+\frac{1-t^2}{2t}=\frac{1+t^2}{1-t^2}\frac{1+t^2}{2t}=\sec x\csc x.$$
Equations last but one, last but two and last but three can be deleted and you can straight away jump to last result because you already know $$\sec,\csc$$ are inverse trig functions of $$\cos, \sin$$.
For acute $$\theta$$, there's this trigonograph:
$$\sec\theta \cdot \csc\theta \;=\; 2\,|\triangle OPQ| \;=\; 1\cdot( \tan\theta +\cot\theta)$$
|
2021-07-25 07:13:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998142123222351, "perplexity": 1528.3893091625098}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151638.93/warc/CC-MAIN-20210725045638-20210725075638-00685.warc.gz"}
|
http://booksbw.com/index.php?id1=4&category=mathematical&author=boyce-we&book=2001&page=158
|
Books in black and white
Books Biology Business Chemistry Computers Culture Economics Fiction Games Guide History Management Mathematical Medicine Mental Fitnes Physics Psychology Scince Sport Technics
# Elementary differential equations 7th edition - Boyce W.E
Boyce W.E Elementary differential equations 7th edition - Wiley publishing , 2001. - 1310 p.
ISBN 0-471-31999-6
Previous << 1 .. 152 153 154 155 156 157 < 158 > 159 160 161 162 163 164 .. 486 >> Next
2x(1) — 3x(2) — x(3) = 0.
Frequently, it is useful to think of the columns (or rows) of a matrix A as vectors. These column (or row) vectors are linearly independent if and only if detA = 0. Further, if C = AB, then it can be shown that det C = (det A)(det B). Therefore, if the columns (or rows) of both A and B are linearly independent, then the columns (or rows) of C are also linearly independent.
Now let us extend the concepts of linear dependence and independence to a set of vector functions x(1)(t),..., x(k)(t) defined on an interval a < t < j3. The vectors x(1) (t),, x(k) (t) are said to be linearly dependent on a < t < j3 if there exists a set of constants c1t..., ck, not all of which are zero, such that c1x(1) (t) + ••• + ckx(k) (t) = 0 for all t in the interval. Otherwise, x(1)(t),..., x(k)(t) are said to be linearly independent. Note that if x(1) (t),..., x(k) (t) are linearly dependent on an interval, they are linearly dependent at each point in the interval. However, if x(1)(t),..., x(k)(t) are linearly independent on an interval, they may or may not be linearly independent at each point; they may, in fact, be linearly dependent at each point, but with different sets of constants at different points. See Problem 14 for an example.
Eigenvalues and Eigenvectors. The equation
Ax = y (24)
can be viewed as a linear transformation that maps (or transforms) a given vector x
into a new vector y. Vectors that are transformed into multiples of themselves are
important in many applications.4 To find such vectors we set y = kx, where k is a scalar proportionality factor, and seek solutions of the equations
Ax = kx, (25)
or
(A — kI)x = 0. (26)
4For example, this problem is encountered in finding the principal axes of stress or strain in an elastic body, and in finding the modes of free vibration in a conservative system with a finite number of degrees of freedom.
7.3 Systems of Linear Algebraic Equations; Linear Independence, Eigenvalues, Eigenvectors 363
EXAMPLE
4
The latter equation has nonzero solutions if and only if X is chosen so that
A(X) = det(A - XI) = 0. (27)
Values of X that satisfy Eq. (27) are called eigenvalues of the matrix A, and the nonzero solutions of Eq. (25) or (26) that are obtained by using such a value of X are called the eigenvectors corresponding to that eigenvalue.
If A is a 2 x 2 matrix, then Eq. (26) has the form
and Eq. (27) becomes
A(X) = (^ii — X)(a22 — X) — ^12^21 = 0. (29)
The following example illustrates how eigenvalues and eigenvectors are found.
Find the eigenvalues and eigenvectors of the matrix
A = U —2> • (30)
The eigenvalues X and eigenvectors x satisfy the equation (A — XI)x = 0, or
3 — x —i V*i\ = M
4 —2—v W W'
The eigenvalues are the roots of the equation
(31)
det(A — XI) =
3 — X —1 4 -2- X
= X2 — X — 2 = 0^ (32)
Thus the eigenvalues are X1 = 2 and X2 = — 1.
To find the eigenvectors we return to Eq. (31) and replace X by each of the eigenvalues in turn. For X = 2 we have
Hence each row of this vector equation leads to the condition x1 - x2 = 0, so x1 and x2 are equal, but their value is not determined. If x1 = c, then x2 = c also and the eigenvector x(1) is
x(1) = c ^^ , c = 0. (34)
Usually, we will drop the arbitrary constant c when finding eigenvectors; thus instead of Eq. (34) we write
x(1) = (1) , (35)
and remember that any nonzero multiple of this vector is also an eigenvector. We say that x(1) is the eigenvector corresponding to the eigenvalue X1 = 2.
364
Chapter 7. Systems of First Order Linear Equations
Now setting k = -1 in Eq. (31), we obtain
/4 -A /x,\ /0
, . . .nl. (36)
4 -1J \xj \0J v ’
Again we obtain a single condition on x1 and x2, namely, 4x1 - x2 = 0. Thus the eigenvector corresponding to the eigenvalue k2 = — 1 is
it-(2) — /' 1
= (;)' (37)
or any nonzero multiple of this vector.
As Example 4 illustrates, eigenvectors are determined only up to an arbitrary nonzero multiplicative constant; if this constant is specified in some way, then the eigenvectors are said to be normalized. In Example 4, we set the constant equal to 1, but any other nonzero value could also have been used. Sometimes it is convenient to normalize an eigenvector x by choosing the constant so that (x, x) = 1.
Equation (27) is a polynomial equation of degree n in k, so there are n eigenvalues k17... ,kn, some of which may be repeated. If a given eigenvalue appears m times as a root of Eq. (27), then that eigenvalue is said to have multiplicity m. Each eigenvalue has at least one associated eigenvector, and an eigenvalue of multiplicity m may have q linearly independent eigenvectors, where
1 < q < m. (38)
Examples show that q may be any integer in this interval. If all the eigenvalues of a matrix A are simple (have multiplicity one), then it is possible to show that the n eigenvectors of A, one for each eigenvalue, are linearly independent. On the other hand, if A has one or more repeated eigenvalues, then there may be fewer than n linearly independent eigenvectors associated with A, since for a repeated eigenvalue we may have q < m .As we will see in Section 7.8, this fact may lead to complications later on in the solution of systems of differential equations.
Previous << 1 .. 152 153 154 155 156 157 < 158 > 159 160 161 162 163 164 .. 486 >> Next
|
2018-10-18 04:45:37
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8270828723907471, "perplexity": 483.1039185475489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511703.70/warc/CC-MAIN-20181018042951-20181018064451-00169.warc.gz"}
|