| url (stringlengths 14–2.42k) | text (stringlengths 100–1.02M) | date (stringlengths 19–19) | metadata (stringlengths 1.06k–1.1k) |
|---|---|---|---|
https://cs.stackexchange.com/questions/106030/job-scheduling-approximation
|
# Job scheduling approximation
In the course notes for Stanford MS&E-319: https://web.stanford.edu/class/msande319/lec1.pdf
Lemma 5 is given as:
The approximation factor of the modified greedy [scheduling] algorithm is 4/3.
And gives the example:
Note that 4/3 is essentially tight. Consider an instance with $$m$$ machines, $$n = 2m+ 1$$ jobs, $$2m$$ jobs of length $$m + 1, m + 2, · · · , 2m − 1$$ and one job of length $$m$$.
Does the above example have an error as a proof of Lemma 5?
I have been thinking about it for over a day.
The instance in the example is "almost" the right one; however, you are right that, as given, it does not prove tightness. The instance is missing two more jobs of size $$m$$.
After we include these two jobs, we have 2 jobs of each size from $$2m-1$$ to $$m+1$$ and 3 jobs of size $$m$$ (for $$n=2m+1$$ jobs in total, as stated).
Under the modified greedy algorithm, the maximum load will be $$4m-1$$. However, one can instead pair each job of size $$2m-i$$ with a job of size $$m+i$$ (every such pair has total load exactly $$3m$$), which fills $$m-1$$ machines, and put the remaining 3 jobs of size $$m$$ together on the last machine. The maximum load is then $$3m$$, so the ratio $$(4m-1)/(3m)$$ tends to $$4/3$$ as $$m$$ grows, which is what tightness requires.
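To sanity-check these numbers, here is a small Python sketch (my own, not from the thread) of the modified greedy (LPT) rule: sort the jobs in decreasing order of size and always give the next job to the currently least-loaded machine, run on the corrected instance above.

```python
import heapq

def lpt_makespan(jobs, m):
    # Modified greedy (LPT): process jobs in decreasing order of size and
    # always assign the next job to the currently least-loaded machine.
    loads = [0] * m
    heapq.heapify(loads)
    for job in sorted(jobs, reverse=True):
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + job)
    return max(loads)

m = 10
# Corrected tight instance: two jobs of each size m+1, ..., 2m-1 and three jobs of size m.
jobs = [s for s in range(m + 1, 2 * m) for _ in range(2)] + [m] * 3
print(lpt_makespan(jobs, m))  # 39 = 4m - 1
print(sum(jobs) // m)         # 30 = 3m, achievable by the pairing described above
```

For $$m=10$$ this prints 39 and 30, matching the $$(4m-1)/(3m)$$ ratio.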
|
2019-11-13 07:40:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5779512524604797, "perplexity": 436.313683646064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496666229.84/warc/CC-MAIN-20191113063049-20191113091049-00262.warc.gz"}
|
https://econpapers.repec.org/article/sprannopr/v_3a248_3ay_3a2017_3ai_3a1_3ad_3a10.1007_5fs10479-016-2211-7.htm
|
# Endogenous interval games in oligopolies and the cores
Aymeric Lardon (University of Nice-Sophia Antipolis)
Annals of Operations Research, 2017, vol. 248, issue 1, 345-363
Abstract: In this article we study interval games in oligopolies following the $$\gamma$$-approach. First, we analyze their non-cooperative foundation and show that each coalition is associated with an endogenous real interval. Second, the Hurwicz criterion turns out to be a key concept in providing a necessary and sufficient condition for the non-emptiness of each of the induced core solution concepts: the interval and the standard $$\gamma$$-cores. The first condition permits us to ascertain that even for linear and symmetric industries the interval $$\gamma$$-core is empty. Moreover, by means of the approximation technique of quadratic Bézier curves we prove that the second condition always holds, hence the standard $$\gamma$$-core is non-empty, under natural properties of profit and cost functions.
Keywords: Interval game; Oligopoly; $$\gamma$$-Cores; Hurwicz criterion; Quadratic Bézier curve
JEL-codes: C71 D43
Date: 2017
Ordering information: This journal article can be ordered from
http://www.springer.com/journal/10479
|
2019-10-17 20:43:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3273361027240753, "perplexity": 3641.480717661331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986676227.57/warc/CC-MAIN-20191017200101-20191017223601-00270.warc.gz"}
|
https://stats.stackexchange.com/questions/67752/regression-techniques-similar-to-kriging-gaussian-process-regression
|
# Regression techniques similar to Kriging/Gaussian process regression
I am looking for regression techniques which are similar to Kriging/Gaussian process regression, in that no explicit model needs to be specified. (Discounting the prior over functions) I have three independent variables and one dependent variable to which I want to apply such a procedure. The independent variables specify coordinates (locations in 3D), while the dependent variable specifies Wi-Fi signal strength at the given coordinates. Since it is hard to appropriately visualize such high dimensional data, techniques without explicit model dependence are of primary interest. The only similar technique I found was the somewhat unsophisticated Nearest-neighbour interpolation.
Are the above-mentioned techniques the only choices for such a problem?
I would add a more detailed description of the problem you are trying to solve, as your question is very vague. What are you trying to attack with Kriging/Gaussian process regression? Any clue about the nature of your problem could really help you get a better answer. Anyway, you could basically use any of a wide array of non-parametric machine learning algorithms, for example CART, random forests, boosted regression trees, etc. Even though you don't want to specify an explicit model, you do have a problem of the form $Y \sim X_{1}+X_{2}+X_{3}$ (in R formula notation: predict $Y$ from $X_{1}, X_{2}, X_{3}$). You can fit any of these models to your data; for example, fitting a random forest in R would go something like this:
library(randomForest)
fittedmodel <- randomForest(Y ~ X1 + X2 + X3, data = yourdata, ntree = 1000)  # formula interface
fittedmodel <- randomForest(x = yourdata[, 2:4], y = yourdata[, 1], ntree = 1000)  # equivalent matrix interface
The random forest R implementation has more parameters to play with, but in general is a model that requires little tuning so just using the defaults as I am doing here can give acceptable results.
What are you doing this for? If you want to interpolate a map, you can then use this trained model on your unlabelled observations, which simply need to have values for the independent variables ($X_{1}, X_{2}, X_{3}$):
interpolation <- predict(fittedmodel,unlabeleddata)
Also, if you are interpolating a map, introducing the coordinates as independent variables is sometimes quite helpful.
• Thanks for the answer; I have added a line about my problem. It is basically about Wi-Fi strengths observed at different locations. – Comp_Warrior Aug 19 '13 at 21:38
• Then I definitely think this could be an approach you should follow. Just train one of these models on the locations where you know the Wi-Fi strength and predict for the rest of the locations. You might get a good model. I don't think it is all that hard to visualize this result: (X1,X2,X3) would be your 3d coordinates and Y could be a color, like a 3d contour plot. There should be some 3d kriging examples out there. In the worst case, something like this: stackoverflow.com/questions/3786189/r-4d-plot-x-y-z-colours. – JEquihua Aug 19 '13 at 23:25
• Thanks for the guidance. When I said visualization is hard, I meant to say that I found it hard to ascertain an appropriate model from the kind of visualization you suggested; the Wi-Fi signal strengths are quite close. – Comp_Warrior Aug 20 '13 at 8:25
You could try Locally Weighted Linear Regression (wiki article). In fact there is a connection between the two techniques (local regression and GP regression) as described in pg. 26 of the GPML book.
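Since the question is specifically about Gaussian process regression, here is a minimal Python sketch (mine, not from the answers above) of GP regression on 3-D coordinates versus signal strength using scikit-learn; the synthetic data, kernel choice, and variable names are only illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 3))            # measurement locations (x, y, z)
y = -40 - 2.0 * np.linalg.norm(X - 5.0, axis=1)  # fake Wi-Fi strength in dBm
y += rng.normal(scale=1.0, size=y.shape)         # measurement noise

# RBF kernel for smooth spatial variation plus a white-noise term.
kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = rng.uniform(0, 10, size=(5, 3))          # unlabelled locations to interpolate
mean, std = gpr.predict(X_new, return_std=True)  # predictions with uncertainty
print(mean, std)
```

As with the random forest, the fitted model is then evaluated at whatever unlabelled coordinates you want to interpolate, with the added benefit of an uncertainty estimate at each point.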
|
2021-05-18 04:37:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6549146771430969, "perplexity": 642.9881710209344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989820.78/warc/CC-MAIN-20210518033148-20210518063148-00605.warc.gz"}
|
https://www.physicsforums.com/threads/how-do-you-know-if-a-reaction-will-take-place-or-not.411043/
|
# How do you know if a reaction will take place or not
1. Jun 18, 2010
### usermanual
how do you know if a reaction will take place or not
Like I know that I have to use the activity series and all, but isn't that only when the reaction is single displacement, where there is 1 element in the first thing and then 2 elements in the second compound, like this (A + BC = AC + B)? So for example Zn(s) + 2HCl(aq) -> ZnCl2(aq) + H2(g): this reaction occurs because Zn is more to the left of the activity series compared to H.
But my question is: what if it's a double displacement reaction, so AB + CD -> CB + AD?
For example, if I have Na2S + H2O, how would I know if a reaction takes place or not?
Or do I always assume that when there are 2 elements in each reactant/compound, a reaction will always occur?
Sorry if this question is kind of confusing.
It's just that I don't know the correct chemistry terms to use.
Also, I'm in grade 11 chemistry, so please go easy with the chem terms.
2. Jun 19, 2010
### alxm
There are two factors in whether a reaction will occur or not: Thermodynamics and Kinetics.
By thermodynamics I mean whether or not the products of the reaction have lower energy than the reactants. You can calculate that using heats of formation. For redox reactions you can use electrochemical potentials. (since the energy of the reaction is $$\Delta G = -nFE$$)
By kinetics I mean the reaction rate. If the products have lower energy, it's energetically beneficial to react, but it doesn't say anything about the rate at which that occurs. For instance, graphite has lower energy at room temperature/pressure than diamond does. But you don't see diamonds spontaneously turning into lumps of coal! Because it happens so very very slowly. The kinetics of a reaction you really have to measure experimentally.
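To put a rough number on the thermodynamic part for the Zn(s) + 2HCl(aq) example from the question (this worked example is mine, not from the original post, and assumes standard conditions): the standard reduction potentials are about $$E^\circ(\mathrm{Zn^{2+}/Zn}) \approx -0.76\ \mathrm{V}, \qquad E^\circ(\mathrm{2H^+/H_2}) = 0\ \mathrm{V},$$ so the cell potential is roughly $$E^\circ \approx 0.76\ \mathrm{V}$$, and with $$n = 2$$ electrons transferred $$\Delta G = -nFE \approx -(2)(96485\ \mathrm{C/mol})(0.76\ \mathrm{V}) \approx -1.5\times 10^{5}\ \mathrm{J/mol} \approx -150\ \mathrm{kJ/mol}.$$ The large negative value is the thermodynamic reason the reaction is favorable; kinetics then determines how fast it actually happens.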
Now if you look at Na2S, what can happen when you put it in water?
Well you could have the dissociation Na2S --> 2Na+ + S2-. From experience one can predict this - simply because few sodium salts are insoluble, and indeed, sodium sulfide is water soluble.
But the sulfide ion, S2-, is actually too basic to exist in water. (It has a pKa > 14) So in water, it will react to form a less basic pair of one hydrosulfide ion and one hydroxide ion:
S2- + H2O --> HS- + OH-
|
2018-03-19 11:40:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5694730281829834, "perplexity": 1690.7667831493527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646875.28/warc/CC-MAIN-20180319101207-20180319121207-00743.warc.gz"}
|
https://brilliant.org/problems/a-classical-mechanics-problem-by-nawara-elhussein/
|
# A classical mechanics problem by Nawara Elhussein
Classical Mechanics Level pending
When the rate of change of velocity = 0, then the body ??????
|
2017-07-22 23:01:42
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9347580075263977, "perplexity": 8317.411567332429}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424154.4/warc/CC-MAIN-20170722222652-20170723002652-00677.warc.gz"}
|
https://forum.zkoss.org/answers/112734/revisions/
|
# Revision history
When using the built-in method sendRedirect(url) you're limited to HTTP-GET requests and their associated limitations. Usually web servers and Java application servers have configurable limits for the maximum URL length, which includes the URL parameters => hence the message (Request header is too large).
Technically it doesn't sound like there's a hard limit, so if your configuration can be safely adapted to what you need, then you might be set with just a few configuration changes.
If changing the server limits is not an option, you already looked in the right direction with HTTP-POST (which allows larger parameters by default, since they are sent in the request body instead of the header). However, POSTing the request from the server won't/can't update the client-side browser.
Instead you need to create a native HTML form (e.g. with hidden parameters) and post it via JavaScript in the browser.
Here is a similar question; maybe it already points you in the right direction: https://stackoverflow.com/questions/133925/javascript-post-request-like-a-form-submit
|
2021-05-16 09:52:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26140469312667847, "perplexity": 2103.770898474958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992516.56/warc/CC-MAIN-20210516075201-20210516105201-00526.warc.gz"}
|
https://cs.stanford.edu/~ppasupat/a9online/1243.html
|
The symbol $\nabla$ (nabla) represents the del operator.
# Usage
## Definition
$$\nabla = \nail{\fracp{}{x_1}, \dots, \fracp{}{x_n}}$$
For example, using the standard basis $\hat i, \hat j, \hat k$ of $\RR^3$, we get $$\nabla = \nail{\fracp{}{x}, \fracp{}{y}, \fracp{}{z}} = \hat i\fracp{}{x} + \hat j\fracp{}{y} + \hat k\fracp{}{z}$$
## Gradient
If $f:\RR^3\to\RR$ is a scalar field, $$\Mr{grad} f = \nabla f = \nail{\fracp{f}{x}, \fracp{f}{y}, \fracp{f}{z}} \in \RR^3$$ The gradient is the "slope" direction and magnitude.
## Divergence
If $f:\RR^3\to\RR^3$ is a vector field and $f(x,y,z) = (f_x,f_y,f_z)$, $$\Mr{div} f = \nabla\cdot f = \fracp{f_x}{x} + \fracp{f_y}{y} + \fracp{f_z}{z} \in \RR$$ The divergence measures how much field diverges from the given point.
## Curl
If $f:\RR^3\to\RR^3$ is a vector field and $f(x,y,z) = (f_x,f_y,f_z)$, $$\Mr{curl} f = \nabla\times f = \abs{\begin{matrix} \fracp{}{x} & \fracp{}{y} & \fracp{}{z} \\ f_x & f_y & f_z \\ \hat i & \hat j & \hat k \end{matrix}}\in\RR^3$$ The curl is the torque at a given point.
## Directional Derivative
If $f:\RR^3\to\RR$ is a scalar field and $a(x,y,z) = (a_x,a_y,a_z)$, $$a\cdot\Mr{grad} f = (a\cdot\nabla) f = a_x\fracp{f}{x} + a_y\fracp{f}{y} + a_z\fracp{f}{z} \in \RR$$
## Hessian / Laplacian
This one is confusing. ML people use $\nabla^2$ to denote the Hessian matrix. For $f:\RR^3\to\RR$, $$\nabla^2 f = \matx{ \fracp{^2}{x^2} f & \fracp{^2}{x\partial y} f & \fracp{^2}{x\partial z} f \\ \fracp{^2}{y\partial x} f & \fracp{^2}{y^2} f & \fracp{^2}{y\partial z} f \\ \fracp{^2}{z\partial x} f & \fracp{^2}{z\partial y} f & \fracp{^2}{z^2} f \\ }\in\RR^{3\times 3}$$
However, in physics, $\nabla^2$ denotes the Laplacian operator $$\Delta f = \nabla^2 f = \nabla\cdot\nabla f = \fracp{^2}{x^2} f + \fracp{^2}{y^2} f + \fracp{^2}{z^2} f \in \RR$$
Both operators can also be applied on $f:\RR^3\to\RR^\text{higher}$, but the results will have more dimensions.
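Here is a small sympy check (not part of the original notes) that applies the operators above to a concrete scalar field $f$ and vector field $F$ on $\RR^3$; the example fields are arbitrary choices of mine.

```python
from sympy import symbols, sin, Matrix

x, y, z = symbols('x y z')
f = x**2 * y + sin(z)                 # scalar field f : R^3 -> R
F = Matrix([x*y, y*z, z*x])           # vector field F = (f_x, f_y, f_z)

grad_f = Matrix([f.diff(v) for v in (x, y, z)])
div_F  = sum(F[i].diff(v) for i, v in enumerate((x, y, z)))
curl_F = Matrix([F[2].diff(y) - F[1].diff(z),
                 F[0].diff(z) - F[2].diff(x),
                 F[1].diff(x) - F[0].diff(y)])
laplacian_f = sum(f.diff(v, 2) for v in (x, y, z))        # physics-style nabla^2 f
hessian_f = Matrix(3, 3, lambda i, j: f.diff((x, y, z)[i]).diff((x, y, z)[j]))  # ML-style nabla^2 f

print(grad_f)        # (2xy, x**2, cos(z))
print(div_F)         # x + y + z
print(curl_F)        # (-y, -z, -x)
print(laplacian_f)   # 2y - sin(z)
print(hessian_f)     # 3x3 matrix of second partials
```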
|
2018-02-22 16:31:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9735497236251831, "perplexity": 603.7873824785153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814140.9/warc/CC-MAIN-20180222160706-20180222180706-00790.warc.gz"}
|
https://socratic.org/questions/what-is-is-the-ph-of-salt#435583
|
# What is is the pH of salt?
Jun 6, 2017
It Depends.
#### Explanation:
Salts are the products of any neutralization reaction.
Neutralization happens in basically three ways:
1. Reaction between a strong acid and a strong base.
2. Reaction between a strong base and a weak acid.
3. Reaction between a strong acid and a weak base.
On the basis of these reactions, salts are of three types.
1. Neutral salts: these are the salts whose pH is exactly equal to 7. For example,
$NaOH + HCl \rightarrow NaCl + H_2O$
Here NaCl is a neutral salt, and hence the pH is equal to 7.
2. Acidic salts $\rightarrow$ these salts have pH less than 7 (for example $NH_4Cl$, formed from a strong acid and a weak base).
3. Basic salts $\rightarrow$ these salts have pH greater than 7 (for example $CH_3COONa$, formed from a strong base and a weak acid).
|
2021-09-24 20:57:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5376852750778198, "perplexity": 12446.766857273356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057580.39/warc/CC-MAIN-20210924201616-20210924231616-00100.warc.gz"}
|
https://www.physicsforums.com/threads/derivation-of-phi-hat-wrt-phi-in-spherical-unit-vectors.737864/
|
# Derivation of Phi-Hat wrt Phi in Spherical Unit Vectors
1. Feb 11, 2014
### EarthDecon
1. The problem statement, all variables and given/known data
I just want to know how to get from this: $\partial\hat{\phi}/\partial\phi = -\hat{x}\cos\phi - \hat{y}\sin\phi$
to this: $\partial\hat{\phi}/\partial\phi = -(\hat{r}\sin\theta + \hat{\theta}\cos\theta)$
2. Relevant equations
All the equations found here in the Spherical Coordinates section: http://en.wikipedia.org/wiki/Unit_vector
3. The attempt at a solution
I've tried a bunch of ways of algebraically getting the answer but I seem to be getting nowhere. Maybe I'm missing an equation. I tried adding and subtracting z^cosθ to get -r^ but I'm still missing the other piece. Please help! Thanks so much.
2. Feb 11, 2014
### Hypersphere
Have you tried going the other way? It is easier (at least more natural) to prove that statement that way, inserting the formulas for $\hat{r}$ and $\hat{\theta}$.
3. Feb 12, 2014
### EarthDecon
I have not; however, it seems like such a process would be much too long. If someone asked you to solve this on a test, how would one solve it without taking so long? To be fair, if each unit vector derivation ($d\hat{r}/dt$ and $d\hat{\theta}/dt$) was incredibly short, why would this one partial derivative take 30 minutes to do (if it does)? I'll try, but I'd like to start from beginning to end to properly understand the process. I appreciate the reply though, thank you.
4. Feb 12, 2014
### Hypersphere
On a test, I'd probably use a geometrical method - essentially drawing and trying to see how each unit axis would change. This particular derivative is probably the trickiest one even using that method though.
However, you wanted an algebraic method. In one direction it looks like a trick to me, in the other way it comes out quite naturally (and that substitution doesn't take very long at all).
5. Feb 21, 2014
### EarthDecon
Oh yes. You were right. I tried the derivation backwards and it only took about half a page. However, I probably would never have thought of it, but there is a trick. The trick is to multiply the $\cos\phi$ and the $\sin\phi$ in the equation by $(\sin^2\theta + \cos^2\theta)$. Once that's done, you distribute, factoring a $\sin\theta$ out of one group of $\hat{x}$ and $\hat{y}$ terms and a $\cos\theta$ out of the other, along with the overall minus sign. Then add and subtract $\hat{z}\cos\theta\sin\theta$ and group so that you get the $\hat{r}$ terms on one side and the $\hat{\theta}$ terms on the other, and you should get the answer.
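For anyone who wants to double-check the identity symbolically, here is a short sympy sketch (mine, not from the thread) that verifies $\partial\hat{\phi}/\partial\phi = -(\hat{r}\sin\theta + \hat{\theta}\cos\theta)$ using the Cartesian components of the spherical unit vectors:

```python
from sympy import symbols, sin, cos, Matrix, simplify

theta, phi = symbols('theta phi')

# Spherical unit vectors written in Cartesian components (physics convention).
r_hat     = Matrix([sin(theta)*cos(phi), sin(theta)*sin(phi), cos(theta)])
theta_hat = Matrix([cos(theta)*cos(phi), cos(theta)*sin(phi), -sin(theta)])
phi_hat   = Matrix([-sin(phi), cos(phi), 0])

lhs = phi_hat.diff(phi)                          # (-cos(phi), -sin(phi), 0)
rhs = -(r_hat*sin(theta) + theta_hat*cos(theta))

print((lhs - rhs).applyfunc(simplify))           # prints the zero vector
```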
|
2018-01-24 11:48:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8220154047012329, "perplexity": 732.5732906993559}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084894125.99/warc/CC-MAIN-20180124105939-20180124125939-00279.warc.gz"}
|
http://mathoverflow.net/questions/2446/best-algebraic-geometry-text-book-other-than-hartshorne/2447
|
# Best Algebraic Geometry text book? (other than Hartshorne)
I think (almost) everyone agrees that Hartshorne's Algebraic Geometry is still the best.
Then what might be the 2nd best? It can be a book, preprint, online lecture note, webpage, etc.
One suggestion per answer please. Also, please include an explanation of why you like the book, or what makes it unique or useful.
-
Since I'm not an algebraic geometer, I don't know whether I'm qualified to comment. But if I am, I've got to disagree about Hartshorne. Every time I open my copy, I think "God, this makes algebraic geometry look unappetizing". Maybe if I worked through it systematically I'd like it. But as a reference for a non-expert, it's pretty off-putting, I find. – Tom Leinster Oct 25 '09 at 16:02
Let me present my perspective on "Hartshorne is best issue". It's certainly very systematic with lots of exercises and a wonderful reference book, but it's only useful to people who somehow got the motivation to study abstract algebraic geometry, not as the first book. – Ilya Nikokoshev Oct 25 '09 at 21:52
I can believe it's a wonderful reference, but I've found it unsatisfying at the conceptual level. Two examples: 1. He never mentions that the category of affine schemes is dual to the category of rings, as far as I can see. I'd expect to see that in huge letters near the definition of scheme. How could you miss that out? 2. He puts the condition "F(emptyset) is trivial" into the definition of presheaf, when really it belongs in the definition of sheaf. That's a small thing, but hinders the reader from getting a good understanding of these important concepts. – Tom Leinster Oct 27 '09 at 4:50
Even worse than that, his construction of the structure sheaf basically rigs it so the stalks are the localizations at the primes, and doesn't even try to explain what's going on. There's no motivation, and it's not even described in a theorem or definition or theorem/definition. The reduced induced closed subscheme is introduced in an example, etc. It's not a book that you can read, it's a book that you have to work through. – Harry Gindi Dec 17 '09 at 3:50
-1 for "I think (almost) everyone agrees that Hartshorne's Algebraic Geometry is still the best." It may be a decent reference that one takes with oneself on a journey for the case one should need some result, but as a textbook it is useless. – darij grinberg Jun 1 '10 at 20:54
I think Algebraic Geometry is too broad a subject to choose only one book. Maybe if one is a beginner then a clear introductory book is enough, or if algebraic geometry is not one's major field of study then a self-contained reference dealing with the important topics thoroughly is enough. But Algebraic Geometry nowadays has grown into such a deep and ample field of study that a graduate student has to focus heavily on one or two topics whereas at the same time must be able to use the fundamental results of other close subfields. Therefore I find the attempt to reduce his/her study to just one book (besides Hartshorne's) too hard and impractical. That is why I have collected what in my humble opinion are the best books for each stage and topic of study; my personal choices for the best books are then:
• CLASSICAL: Beltrametti et al. "Lectures on Curves, Surfaces and Projective Varieties" which starts from the very beginning with a classical geometric style. Very complete (proves Riemann-Roch for curves in an easy language) and concrete in classic constructions needed to understand the reasons about why things are done the way they are in advanced purely algebraic books. There are very few books like this and they should be a must to start learning the subject. (Check out Dolgachev's review.)
• HALF-WAY/UNDERGRADUATE: Shafarevich - "Basic Algebraic Geometry" vol. 1 and 2. They may be the most complete on foundations for varieties up to introducing schemes and complex geometry, so they are very useful before more abstract studies. But the problems are hard for many beginners. They do not prove Riemann-Roch (which is done classically without cohomology in the previous recommendation), so a modern, more orthodox course would be Perrin's "Algebraic Geometry, An Introduction", which in fact introduces cohomology and proves RR.
• ADVANCED UNDERGRADUATE: Holme - "A Royal Road to Algebraic Geometry". This new title is wonderful: it starts by introducing algebraic affine and projective curves and varieties and builds the theory up in the first half of the book as the perfect introduction to Hartshorne's chapter I. The second half then jumps into a categorical introduction to schemes, bits of cohomology and even glimpses of intersection theory.
• ONLINE NOTES: Gathmann - "Algebraic Geometry" which can be found here. Just amazing notes; short but very complete, dealing even with schemes and cohomology and proving Riemann-Roch and even hinting Hirzebruch-R-R. It is the best free course in my opinion, to get enough algebraic geometry background to understand the other more advanced and abstract titles. For an abstract algebraic approach, a freely available online course is available by the nicely done new long notes by R. Vakil.
• GRADUATE FOR ALGEBRISTS AND NUMBER THEORISTS: Liu Qing - "Algebraic Geometry and Arithmetic Curves". It is a very complete book even introducing some needed commutative algebra and preparing the reader to learn arithmetic geometry like Mordell's conjecture, Faltings' or even Fermat-Wiles Theorem.
• GRADUATE FOR GEOMETERS: Griffiths; Harris - "Principles of Algebraic Geometry". By far the best for a complex-geometry-oriented mind. Also useful coming from studies on several complex variables or differential geometry. It develops a lot of algebraic geometry without so much advanced commutative and homological algebra as the modern books tend to emphasize.
• BEST ON SCHEMES: Görtz; Wedhorn - Algebraic Geometry I, Schemes with Examples and Exercises. Tons of stuff on schemes; more complete than Mumford's Red Book (For an online free alternative check Mumfords' Algebraic Geometry II unpublished notes on schemes.). It does a great job complementing Hartshorne's treatment of schemes, above all because of the more solvable exercises.
• UNDERGRADUATE ON ALGEBRAIC CURVES: Fulton - "Algebraic Curves, an Introduction to Algebraic Geometry" which can be found here. It is a classic and although the flavor is clearly of typed concise notes, it is by far the shortest but thorough book on curves, which serves as a very nice introduction to the whole subject. It does everything that is needed to prove Riemann-Roch for curves and introduces many concepts useful to motivate more advanced courses.
• GRADUATE ON ALGEBRAIC CURVES: Arbarello; Cornalba; Griffiths; Harris - "Geometry of Algebraic Curves" vol 1 and 2. This one is focused on the reader, therefore many results are stated to be worked out. So some people find it the best way to really master the subject. Besides, the vol. 2 has finally appeared making the two huge volumes a complete reference on the subject.
• INTRODUCTORY ON ALGEBRAIC SURFACES: Beauville - "Complex Algebraic Surfaces". I have not found a quicker and simpler way to learn and classify algebraic surfaces. The background needed is minimal compared to other titles.
• ADVANCED ON ALGEBRAIC SURFACES: Badescu - "Algebraic Surfaces". Excellent complete and advanced reference for surfaces. Very well done and indispensable for those needing a companion, but above all an expansion, to Hartshorne's chapter.
• ON HODGE THEORY AND TOPOLOGY: Voisin - Hodge Theory and Complex Algebraic Geometry vols. I and II. The first volume can serve almost as an introduction to complex geometry and the second to its topology. They are becoming more and more the standard reference on these topics, fitting nicely between abstract algebraic geometry and complex differential geometry.
• INTRODUCTORY ON MODULI AND INVARIANTS: Mukai - An Introduction to Invariants and Moduli. Excellent but extremely expensive hardcover book. When a cheaper paperback edition is released by Cambridge Press any serious student of algebraic geometry should own a copy since, again, it is one of those titles that help motivate and give conceptual insights needed to make any sense of abstract monographs like the next ones.
• ON MODULI SPACES AND DEFORMATIONS: Hartshorne - "Deformation Theory". Just the perfect complement to Hartshorne's main book, since it did not deal with these matters, and other books approach the subject from a different point of view (e.g. geared to complex geometry or to physicists) than what a student of AG from Hartshorne's book may like to learn the subject.
• ON GEOMETRIC INVARIANT THEORY: Mumford; Fogarty; Kirwan - "Geometric Invariant Theory". Simply put, it is still the best and most complete. Besides, Mumford himself developed the subject. Alternatives are more introductory lectures by Dolgachev.
• ON INTERSECTION THEORY: Fulton - "Intersection Theory". It is the standard reference and is also cheap compared to others. It deals with all the material needed on intersections for a serious student going beyond Hartshorne's appendix; it is a good reference for the use of the language of characteristic classes in algebraic geometry, proving Hirzebruch-Riemann-Roch and Grothendieck-Riemann-Roch among many interesting results.
• ON SINGULARITIES: Kollár - Lectures on Resolution of Singularities. Great exposition, useful contents and examples on topics one has to deal with sooner or later. As a fundamental complement check Hauser's wonderful paper on the Hironaka theorem.
• ON POSITIVITY: Lazarsfeld - Positivity in Algebraic Geometry I: Classical Setting: Line Bundles and Linear Series and Positivity in Algebraic Geometry II: Positivity for Vector Bundles and Multiplier Ideals. Amazingly well written and unique on the topic, summarizing and bringing together lots of information, results, and many many examples.
• INTRODUCTORY ON HIGHER-DIMENSIONAL VARIETIES: Debarre - "Higher Dimensional Algebraic Geometry". The main alternative to this title is the new book by Hacon/Kovács, "Classification of Higher-dimensional Algebraic Varieties", which includes recent results on the classification problem and is intended as a graduate topics course.
• ADVANCED ON HIGHER-DIMENSIONAL VARIETIES: Kollár; Mori - Birational Geometry of Algebraic Varieties. Considered as harder to learn from by some students, it has become the standard reference on birational geometry.
-
I agree with your assessment of the Görtz + Wedhorn book - I'm really learning a lot from it. – Zev Chonoles Mar 2 '11 at 3:29
I agree with your point of view about Griffiths and Harris, its really beautiful. – George Jul 12 '11 at 11:24
Gathmann's lecture notes are indeed great. I had a certain phobia with algebraic geometry for a long time, and the introduction chapter in his notes is the only thing which made me realize that there was nothing to be scared of. His emphasis on the geometric picture (sometimes literally - there are lots of pictures!) rather than on the algebraic language really made me love algebraic geometry. I also like how he often compares the theorems and definitions with the analogous theorems or definitions in differential or complex geometry. – Mark Nov 9 '11 at 1:03
I think the best "textbook" is Ravi Vakil's notes:
http://math.stanford.edu/~vakil/0708-216/
http://math.stanford.edu/~vakil/0910-216/
-
Professor Vakil has informed people at his site that this year's version of the notes will be posted in September at his blog. I think these notes are quickly becoming legendary, like Mumford's notes were before publication. A super, 2-year-long graduate course using totally free materials could begin with Fulton and then move on to Vakil's notes. – The Mathemagician Jul 7 '10 at 5:38
I think it is important to have links to the newest version: math216.wordpress.com and actual PDFs at math.stanford.edu/~vakil/216blog. – Ilya Grigoriev Nov 3 '11 at 7:01
Liu wrote a nice book, which is a bit more oriented to arithmetic geometry. (The last few chapters contain some material which is very pretty but unusual for a basic text, such as reduction of algebraic curves.)
-
I actually love Liu's approach. – Barbara Jul 7 '10 at 5:56
Perhaps this is cliché, but I recommend EGA (links to full texts: I, II, III(1), III(2), IV(1), IV(2), IV(3), IV(4)).
I know it's a scary 1800 pages of French, but
1. It's really easy French. I would describe myself as not knowing any French, but I can read EGA without too much trouble.
2. It's extremely clear. The proofs are usually very short because the results are very well organized.
3. It's the canonical reference for algebraic geometry. I assure you it is not 1800 pages of fluff.
I've found it quite rewarding to familiarize myself with the contents of EGA. Many algebraic geometry students are able to say with confidence "that's one of the exercises in Hartshorne, chapter II, section 4." It's even more empowering to have that kind of command over a text like EGA, which covers much more material with fewer unnecessary hypotheses and with greater clarity. I've found this combined table of contents to be useful in this quest. [Edit: The combined table of contents unfortunately seems to be defunct. Here is a web version of Mark Haiman's EGA contents handout.]
-
Some time ago I had the idea of starting an EGA translation wiki project. The Berkeley math dept requires its grad students to pass a language exam which consists of translating a page of math in French, German, or Russian into English. I'm sure that many other schools have similar requirements. So every year, we have hundreds of grad students translating a page of math into English. Why not produce something useful with those man-hours? In lieu of a language exam, have the students translate a few pages of EGA. We'd be able to produce a translation of EGA and other works fairly quickly. – Kevin H. Lin Dec 17 '09 at 12:18
"The proofs are usually very short because the results are very well organized." This is only one half of the truth!! When I have to look up something in EGA, it's like an infinite tree of theorems which I have to walk up. Every step seems to be trivial, yeah. I don't get the point till I work it out by myself. I'm really envious of the people who learn directly from the master Grothendieck. – Martin Brandenburg Feb 2 '10 at 0:08
Excuse me Anton, but you have very perverse sense of what constitutes a textbook. EGA isn't any more textbook of algebraic geometry than Bourbaki is a textbook of mathematics. – Victor Protsak Jun 2 '10 at 1:04
@Victor: I don't understand your objection. Could you explain in what ways EGA does not constitute a textbook? You certainly don't need to already know algebraic geometry to read it. Reading it, you will certainly learn algebraic geometry. Is your objection that there aren't any exercises? Is it that EGA also covers a lot of commutative algebra, which you'd rather think of as a separate subject? Is it the length? Why is it any worse than Eisenbud's 800 page commutative algebra book plus Griffiths & Harris' 900 page algebraic geometry book? – Anton Geraschenko Jun 2 '10 at 16:17
It's a research monograph (and it's unfinished, by the way). It does build the subject from the ground up, just like Bourbaki's "Elements of mathematics" builds mathematics from the ground up, but it is less pedagogical by comparison (which is understandable). The fact that there are no exercises in it and the manner in which it was written are probably reflections of its function. Note that I don't object that it's a good reference on the foundations of algebraic geometry; but to call it a $\textit{textbook}$, and even nominate it as a best AG textbook, is simply preposterous. – Victor Protsak Jun 3 '10 at 17:59
I'm a fan of The Geometry of Schemes by Eisenbud and Harris. Its great for a conceptual introduction that won't turn people off as fast as Hartshorne. However, it barely even mentions the concept of a module of a scheme, and I believe it ignores sheaf cohomology entirely.
-
It does, but it also talks about representability of functors, and does a lot of basic constructions a lot more concretely and in more detail than Hartshorne. – Charles Siegel Oct 25 '09 at 14:53
Oh, I'm a big fan of the book. I'm just warning that if you read it all the way through, you still won't know the 'basics' of algebraic geometry. – Greg Muller Oct 25 '09 at 16:14
Too few textbooks motivate mathematical machinery (not just in AG), so this book really stands apart for that reason. I just wish they kept the original title, Why Schemes? – Thierry Zell Aug 18 '10 at 15:53
Shafarevich wrote a very basic introduction, it's used in undergraduate classes in algebraic geometry sometimes
Basic Algebraic Geometry 1: Varieties in Projective Space
also, for a more computational point of view
Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra
And the followup by the same authors
Using Algebraic Geometry
-
The Cox, Little, O'Shea books are what I use when introducing the subject to someone with less background, or more concrete interests. They tend to work very well (advising a freshman through IVA this semester, actually.) – Charles Siegel Oct 25 '09 at 14:25
Shafarevich also has a Volume 2, on schemes and advanced topics. I'd say that both books are suitable for a graduate-level introduction, and are my vote for best algebraic geometry textbook. – Alison Miller Oct 25 '09 at 20:27
Yes, it might be good idea to include volume 2 in the answer as well, the book is highly readable. – Ilya Nikokoshev Oct 25 '09 at 22:02
@ Alison I second your vote,Alison. – The Mathemagician Jun 1 '10 at 21:16
I totally, absolutely agree about Shafarevitch being the best textbook. – Claudio Gorodski Sep 29 '11 at 2:36
At a lower level than Hartshorne is the fantastic "Algebraic Curves" by Fulton. It's available on his website.
-
This is a terrific book from what I've read of it and it will be my first choice when I start seriously relearning this material. – The Mathemagician Jul 7 '10 at 5:31
I've been teaching an introductory course in algebraic geometry this semester and I've been looking at many sources. I've found that Milne's online book (jmilne.org) is excellent. He gives quite a thorough treatment of the theory of varieties over an algebraic closed field. The book is very complete and everything seems to be done "in the nicest way".
-
Kenji Ueno's three-volume "Algebraic Geometry" is well-written, clear, and has the perfect mix of text and diagrams. It's undoubtedly a real masterpiece- very user-friendly.
-
Yes, I think it is quite well-written and easy to proceed . . . and very thin. At least, I may get some basic notions fastly and also see some concrete examples. – kakalotte Nov 2 '11 at 17:46
Joe Harris's book Algebraic Geometry might be a good warm-up to Hartshorne.
-
I second Shafarevitch's two volumes on Basic Algebraic Geometry: the best overview of the subject I have ever read.
Another very nice book is Miranda's Algebraic Curves which manages to get a long way (Riemann-Roch etc) without doing sheaves and line bundles until the end. Of course, by then, you are really wanting sheaves and line bundles!
-
I've also heard very great things about Miranda's book. It clearly is a less advanced book, but I've heard it makes great preparation for understanding more modern algebraic geometry (e.g. Hartshorne). – David Corwin Jan 3 '10 at 22:26
The book An Invitation to Algebraic Geometry by Karen Smith et al. is excellent "for the working or the aspiring mathematician who is unfamiliar with algebraic geometry but wishes to gain an appreciation of its foundations and its goals with a minimum of prerequisites," to quote from the product description at amazon.com.
-
I liked Mumford's "Algebraic geometry I: Complex projective varieties" a lot, and also Griffiths' "Introduction to algebraic curves". Now I think I am falling in love with "Griffiths & Harris". For the record, I hate Hartshorne's.
-
I am SHOCKED that this book hasn't gotten more votes, it's very geometric and an easier read than Shafarevich (which I also like very much). Is it a symptom of groupthink or a tendency of each generation to pick their own idols? – Victor Protsak Jun 2 '10 at 0:57
Computer Scientists, me included, seem to prefer Ideals, Varieties, and Algorithms by David A. Cox, John B. Little, Don O'Shea (http://www.cs.amherst.edu/~dac/iva.html)
-
Dear Andrew L, Why? – Emerton Jul 9 '10 at 2:09
For people with an interest in practical aspects of AG, what about Abhyankar's Algebraic geometry for scientists and engineers? – Thierry Zell Aug 18 '10 at 15:57
I've tried learning algebraic geometry several times. I asked around and was told to read Hartshorne. I started reading it several times and each time put it away. I realized that I could work through the sections and solve some of the problems, but I gained absolutely no intuition for reading Hartshorne. Discussing this with other people, I found that it was a common occurrence for students to read Hartshorne and afterwards have no idea how to do algebraic geometry. (I imagine this was the motivation for asking this question.)
After more poking around, I discovered Mumford's "Red Book of Varieties and Schemes". While Mumford doesn't do cohomology, he motivates the definitions of schemes and many of their basic properties while providing the reader with geometric intuition. This book isn't easy to read and you have to work out a lot, but the rewards are great. Another great feature of this book is that Mumford bought the rights to the book back from Springer and the book is available for free online.
Another book was supposed to be written that built on the "Red Book", including cohomology. After many years, I think this is near completion; see Algebraic Geometry 2. While many of the above books are excellent, it's a surprise that these books aren't the standard.
-
I would think Algebraic Geometry 2 would be the successor to Algebraic Geometry 1 and not Red Book. Also any news on when Algebraic Geometry 2 will be published? – Najdorf Feb 5 '11 at 11:42
I enjoyed Griffiths-Harris a lot.
-
How about the wrong definition of a sheaf which survived all editions of Griffiths-Harris including the Russian translation (they extend from compatible pairs, not arbitrary compatible families of local sections: thus non-sheaf examples like the presheaf of bounded functions would do). – Zoran Skoda Oct 29 '10 at 21:15
Miles Reid's Undergraduate Algebraic Geometry is an excellent topical (meaning it does not intend to cover any substantial part of the whole subject) introduction. In particular, it's the only undergraduate textbook that isn't commutative algebra with a few pictures thrown in.
-
The only differences between the first and second editions of Mumford's Red Book are the numerous typographical errors introduced during its incompetent TeXing... – Andy Putman Jun 4 '10 at 3:36
Dear Andrew L, Regarding your first comment: when I was a student learning from Hartshorne, I had various complaints about it, but on the other hand, I also learned a vast amount from it. And I've grown more and more to appreciate its very beautiful (and not at all abstract) treatment of curves and surfaces in Chapters 4 and 5. On the other hand, as a student my complaint was that it was not abstract enough (didn't treat non-alg. closed fields, finite flat group schemes over integer rings, abelian schemes, flat descent, etc.). – Emerton Jul 9 '10 at 2:12
I believe the issue of "which book is best" is extremely sensitive to the path along which one is moving into the subject. If your background is in differential geometry, complex analysis, etc, then Huybrechts' Complex Geometry is a good bridge between those vantage points and a more algebraic geometric landscape. Obviously I'm taking liberties with the question, as I wouldn't advertise Huybrechts' book as an algebraic geometry text in the strict sense. However, I think it can, for certain people, help to ease the transition into one. It's also very well written, in my opinion. (I should also emphasize that I'm not saying this is the only purpose of the book: its content is extremely valuable for other reasons, with material on vector bundles, SUSY, deformations of complex structures, etc.)
As for dedicated algebraic geometry texts other than Hartshorne, I also vote for Ravi Vakil's notes. They're excellent.
-
If Griffiths-Harris is "algebraic geometry" then surely Huybrechts is as well! :) Even if your aim is to learn more abstract scheme theory, I think it's very important and helpful (at least it has been for me) to gain some intuition by learning about complex manifolds and varieties. It also provides some historical context. – Kevin H. Lin Dec 17 '09 at 11:59
I'm starting to like this book, by Görtz and Wedhorn. (and hoping in volume II soon...) Similarly to Qing Liu's wonderful book, it seems to me to be a good compromise between Hartshorne and EGA.
-
Also Eisenbud.
Every algebraic geometer needs to know at least some commutative algebra. And this is a very good introductory textbook, which teaches commutative algebra rigorously but at the same time provides a good geometric explanation.
-
Eisenbud's book is wonderfully written and a pleasure to read, but it's too damn long and has everything in the world in it, making it really tough to focus with. It joins Spivak and Lee's SMOOTH MANIFOLDS with the dubious distinction of being books everyone loves, but can't really use for coursework. – The Mathemagician Jul 7 '10 at 5:41
If you know french, you might enjoy David Harari's course notes. These are the notes for a basic course in schemes and cohomology of sheaves. He combines the best parts of Hartshorne with the best parts of Liu's book. Hartshorne doesn't always do things in the nicest possible way, and the same is of course true for Liu.
I agree that Vakil's notes are great, since they also contain a lot of motivation, ideas and examples. But does anyone know where to get the files with this year's notes? I only found the notes of previous years on the web.
-
He's not posting them online yet; he's been handing out chunks of notes on various topics, but he wants to edit them more before posting. – Rebecca Bellovin Oct 25 '09 at 21:25
Also lots of things on jmilne.org
-
Macdonald "Algebraic geometry: Introduction to schemes" (not only about noetherian schemes), Dieudonné's two booklets with focus on the motivation and history, the first chapter in Demazure, Gabriel "Groupes algebraique I", Mumford's "red book".
Mumford suggested in a letter to Grothendieck to publish a suitable edited selection of letters from Grothendieck to his friends, because the letters he received from him were "by far the most important things which explained your ideas and insights ... vivid and unencumbered by the customary style of formal french publications ... express(ing) succintly the essential ideas and motivations and often giv(ing) quite complete ideas about how to overcome the main technical problems ... a clear alternative (to the existing texts) for students who wish to gain access rapidly to the core of your ideas". (Found in the very beautifull 2nd collection - when I got it from the library I could not stop reading in it, which happens to me rarely with such collections, despite the associated saga)
-
Biased by my personal taste maybe, but I think Harder's two-volume book (with the third volume not yet completed), Lectures on Algebraic Geometry, is wonderful. The author develops the algebraic side of our subject carefully and always strikes a good balance between abstract and concrete. If you can tolerate the English written by a German, perhaps some parts of Harder's are more appealing than those of Shafarevich and Hartshorne!
-
Mukai's Introduction to Invariants and Moduli surely deserves to be on this list.
-
The Red Book by Mumford is nice, better than Hartshorne in my opinion (which is nice as well). At a far more abstract level, the EGAs are excellent; proofs are well detailed but intuition is completely absent. For a down-to-earth introduction, Milne's notes are nice (but they don't go to the scheme level; they give the taste of it).
-
I recently completed a book on algebraic geometry. The PDF file may be freely downloaded: Introduction to Algebraic Geometry
It is also available in paperback: Amazon listing
-
I've found something extraordinary and of equally extraordinary pedigree online recently. I mentioned it briefly in response to R. Vakil's question about the best way to introduce schemes to students. But this question is really where it belongs, and I hope word of it spreads far and wide from here.
Last fall at MIT, Michael Artin taught an introductory course in algebraic geometry that required only a year of basic algebra at the level of his textbook. The official text was William Fulton's Algebraic Curves, but Artin also wrote an extensive set of lecture notes and exercise sets. I found them quite wonderful and very much in the spirit of his classic textbook (by the way, I simply can't wait for the second edition).
Not only has he posted these notes for download, he's asked anyone working through them to email him any errors found and suggestions for improvements. All the course materials can be found at the MIT webpage. I've also posted the link at MathOnline, of course.
I don't know if most of the hardcore algebraic geometers here would recommend these materials for a beginning course. But for any student not looking to specialize in AG, I can't think of a better source to begin with. That's just my opinion. But it certainly belongs as a possible response to this question. Then again, it may be too softball for the experts, particularly those of the Grothendieck school.
Here's keeping our fingers crossed that this is the beginning of the gestation of a full blown text on the subject by Artin.
-
Dear Andrew, please put spaces after your punctuation. – Anweshi Jul 25 '10 at 7:52
@Anweshi: Andrew has stated before that this is due to some typesetting bug on his end. @Andrew: I took this class for most of the semester. The lecture notes were actually scribed by the students, so caveat emptor. – Qiaochu Yuan Jul 25 '10 at 9:14
For what it's worth, I don't really believe that there's a bug causing these issues. Looking at Andrew L's posts over time, there has been a gradual improvement in the use of correct punctuation. I find it hard to imagine a software issue with these effects. – Scott Morrison Jul 25 '10 at 18:36
@To All Above: To be honest, it's really a little of both. Sometimes the LaTeX doesn't cooperate and I'm still learning how to use it. Then again, I'm not really a stickler for grammatical etiquette. The second observation is really not professional on my part and I'm seriously trying to make an effort to improve. @Qiaochu This is the fundamental problem with all free lecture note sources. Artin's algebra book, however, began life that way and that worked out pretty well. Let's hope he finds time to edit them and post corrected versions. – The Mathemagician Jul 25 '10 at 19:19
@Andrew L: It is not true that I will throw your writings away if you miss a comma. I edited a good number of your posts which were almost impossible to decipher and even requested you to write better. For instance the comments left at mathoverflow.net/questions/32736 . If you want others to read your stuff, please at least put in some effort to reduce typographical unpleasantness. If you didn't know English and if you were a mathematical genius with linguistic difficulties, then this could be tolerated. But Scott's comment above indicates that you actually know how to write properly; – Anweshi Jul 26 '10 at 21:18
Although it is actually not quite a textbook, it is becoming a very popular reference. In recent talks it was even used almost exclusively as the reference!
And indeed, there are a lot of high quality 'articles', and often you can find alternative approaches to a theory or a problem, which are more suitable for you. In addition, you can actually ask questions (a feature thoroughly missed in e.g. Hartshorne's book).
-
Hodge, Pedoe, Methods of Algebraic Geometry.
-
Artie, that's exactly what I like about it. I am an old person, and find Hartshorne almost unreadable. – Alexandre Eremenko Jul 15 '13 at 21:14
|
2015-07-01 12:18:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5144860148429871, "perplexity": 977.88614086283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375094924.48/warc/CC-MAIN-20150627031814-00266-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://brilliant.org/problems/do-you-know-your-trig-graphs/
|
Do You Know Your Trig Graphs?
Geometry Level 2
What is the amplitude of the graph of $\Large f(x) = \sin\left(x + \frac{\pi}{3}\right) + \cos\left(x+\frac{\pi}{6}\right)?$
|
2020-11-28 11:16:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 5, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8180100917816162, "perplexity": 1678.5308697292778}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195417.37/warc/CC-MAIN-20201128095617-20201128125617-00392.warc.gz"}
|
https://cstheory.stackexchange.com/questions/2676/examples-of-hardness-phase-transitions/2680
|
# Examples of hardness phase transitions
Suppose we have a problem parameterized by a real-valued parameter $p$ which is "easy" to solve when $p=p_0$ and "hard" when $p=p_1$ for some values $p_0$, $p_1$.
One example is counting spin configurations on graphs. Counting weighted proper colorings, independent sets, and Eulerian subgraphs corresponds to evaluating the partition functions of the Potts, hardcore, and Ising models respectively, which are easy to approximate at "high temperature" and hard at "low temperature". For simple MCMC, the hardness phase transition corresponds to the point at which the mixing time jumps from polynomial to exponential (Martinelli, 2006).
Another example is inference in probabilistic models. We "simplify" a given model by taking a $(1-p, p)$ combination of it with an "all variables are independent" model. For $p=1$ the problem is trivial, for $p=0$ it is intractable, and the hardness threshold lies somewhere in between. For the most popular inference method, the problem becomes hard when the method fails to converge, and the point where this happens corresponds to the phase transition (in a physical sense) of a certain Gibbs distribution (Tatikonda, 2002).
What are other interesting examples of the hardness "jump" as some continuous parameter is varied?
Motivation: to see examples of another "dimension" of hardness besides graph type or logic type
In standard worst-case approximation, there are many sharp thresholds as the approximation factor varies.
For example, for 3LIN (satisfying as many as possible of a given set of Boolean linear equations on 3 variables each), there is a simple random-assignment algorithm achieving approximation factor 1/2, but any approximation factor better than some t = 1/2 + o(1) is already as hard as exact SAT (conjectured to require exponential time).
I'm not exactly sure if this is the type of problem you were looking for, but the phase transition of NP-Complete problems is a (by now) well known phenomenon. See Brian Hayes's articles "Can't Get No Satisfaction" about the 3-SAT phase transition and "The Easiest Hard Problem" about the Number Partition Phase transition, for some popular articles on the subject.
Selman and Kirkpatrick were the first to show numerically that the phase transition for 3-SAT occurs when the ratio of clauses to variables is around 4.3.
Gent and Walsh were the first to show numerically that the phase transition for the Number Partition Problem occurs when the ratio of bits to list length is about 1. Later this was proved analytically by Borgs, Chayes and Pittel.
Travelling Salesman, Graph Coloring, Hamiltonian Cycle, amongst others, also appear to have phase transitions for a suitable parameterization of problem instance creation. I think it's safe to say that it is a commonly held belief that all NP-Complete problems exhibit a phase transition for a suitable parameterization.
Associated to (some) noise models for quantum computation is a threshold value for the noise level, above which the noisy gates can be simulated by Clifford gates, such that the quantum computation process becomes efficiently simulable. As a start, see Plenio and Virmani, Upper bounds on fault tolerance thresholds of noisy Clifford-based quantum computers (arXiv:0810.4340v1).
Solvable models like this inform us regarding an ubiquitous practical problem: for a specified physical quantum system in contact with a thermal reservoir (possibly at zero temperature), are the noise levels associated to that thermal reservoir below or above the threshold for efficient simulation with classical resources? If the latter, what simulation algorithms are optimal?
A particularly striking example of a phase transition is the maximum degree bound for Exactly-$k$-SAT (X$k$SAT), in which each clause contains exactly $k$ distinct literals. The problem flips from being trivially easy (always satisfiable) to being NP-complete by adding one to the associated parameter.
Let $f(k)$ denote the largest number such that any X$k$SAT instance in which any variable occurs in at most $f(k)$ clauses is guaranteed to be satisfiable. If each variable only occurs in just one clause, then the instance is trivially satisfiable (just set each variable to the value that makes the corresponding literal true). On the other hand, the collection of all $2^k$ clauses on the same $k$ variables is unsatisfiable. So it follows that $1 \le f(k) < 2^k$.
An X$k$SAT instance has a natural (non-logic) meaning as asking whether there exists an $n$-bit message which avoids some specified $k$-bit submessages. One can also rescale the parameter in a natural way to $f(k)/2^k$, which then takes a real value in the interval from 0 to 1.
Instances in which variables can occur at most $f(k)$ times are all trivially satisfiable. However, the class of instances in which variables can occur at most $f(k)+1$ times is already NP-complete.
• Jan Kratochvíl, Petr Savický and Zsolt Tuza, One More Occurrence of Variables Makes Satisfiability Jump from Trivial to NP-Complete, SIAM J. Comput. 22(1) 203–210, 1993. doi:10.1137/0222015
It is also interesting that quite tight bounds are known for $f(k)$. The above paper derived a lower bound from the Lovász Local Lemma, and unsatisfiable instances have been explicitly constructed more recently for the upper bound. In short, $f(k) = \Theta(2^k/k)$.
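To get a rough feel for how these bounds grow with $k$, here is a small illustrative Python computation (the constant in the lower-bound scale is only indicative, not the exact constant from the cited paper):
import math

# Illustrative only: how the quoted bounds on f(k) scale with k.
# 2**k is the trivial upper bound (all 2^k clauses on the same k variables are unsatisfiable),
# and 2**k / k is the Theta(2^k / k) scale obtained via the Lovasz Local Lemma;
# the factor 1/e below is illustrative, not the exact constant from the literature.
for k in range(3, 11):
    trivial_upper = 2 ** k
    lll_scale = 2 ** k / (math.e * k)
    print(f"k={k:2d}   2^k={trivial_upper:5d}   ~2^k/(e*k)={lll_scale:8.1f}")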
|
2020-12-05 11:30:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7275314927101135, "perplexity": 490.41526550622035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747774.97/warc/CC-MAIN-20201205104937-20201205134937-00440.warc.gz"}
|
https://answers.opencv.org/questions/130792/revisions/
|
# Revision history [back]
### Camera calibration and pose estimation (OpenGL + OpenCV)
Hello, I'm fairly new to OpenCV. I'm trying to estimate the 3D pose of the camera in order to draw a 3D teapot using OpenGL. I have been testing for about a week and I partially understand the theory; I tried to replicate an example, but I cannot get the teapot to appear correctly. I get the keypoints using SIFT and they look right.
I use this function in order to obtain the intrinsic and extrinsic parameters:
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, im_size,None,None)
When I have these parameters I create the loop to draw the teapot:
def setup():
    pygame.init()
    pygame.display.set_mode((im_size[0], im_size[1]), pygame.OPENGL | pygame.DOUBLEBUF)
    pygame.display.set_caption('OpenGL AR demo')

setup()
S = 1  # Selected Image
while True:
    event = pygame.event.poll()
    if event.type in (pygame.QUIT, pygame.KEYDOWN):
        break
    draw_background(I[S - 1])
    set_projection_from_camera(K)
    set_modelview_from_camera(rvecs[S-1], tvecs[S-1])
    draw_teapot(100)
    pygame.display.flip()
    pygame.time.wait(50)
I estimate the projection using this function:
def set_projection_from_camera(K):
    glMatrixMode(GL_PROJECTION)
    fx = K[0,0]
    fy = K[1,1]
    fovy = 2*arctan(0.5*im_size[1]/fy)*180/pi
    aspect = (im_size[0]*fy)/(im_size[1]*fx)
    # define the near and far clipping planes
    near = 0.1
    far = 500000.0
    # set perspective
    gluPerspective(fovy, aspect, near, far)
    glViewport(0, 0, im_size[0], im_size[1])
I estimate the modelview using this function:
def set_modelview_from_camera(rvec, tvec):
    glMatrixMode(GL_MODELVIEW)
    Rx = array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])  # rotate the teapot
    M = eye(4)
    M[:3, :3] = dot(Rx, cv2.Rodrigues(rvec)[0])
    M[:3, 3] = tvec.T
    cv2GlMat = array([[1,0,0,0],[0,-1,0,0],[0,0,-1,0],[0,0,0,1]])  # OpenCV -> OpenGL matrix
    M = dot(cv2GlMat, M)
    m = M.T.flatten()
The function to draw the background (the real image)
def draw_background(I):
    bg_image = Image.fromarray(I)
    bg_data = bg_image.tobytes('raw', 'RGBX', 0, -1)
    glMatrixMode(GL_MODELVIEW)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    # bind the texture
    glEnable(GL_TEXTURE_2D)
    glBindTexture(GL_TEXTURE_2D, glGenTextures(1))
    glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,bg_image.size[0],bg_image.size[1],0,GL_RGBA,GL_UNSIGNED_BYTE,bg_data)
    glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST)
    glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST)
    # create quad to fill the whole window
    glTexCoord2f(0.0,0.0); glVertex3f(-1,-1,-1.0)
    glTexCoord2f(1.0,0.0); glVertex3f( 1,-1,-1.0)
    glTexCoord2f(1.0,1.0); glVertex3f( 1, 1,-1.0)
    glTexCoord2f(0.0,1.0); glVertex3f(-1, 1,-1.0)
    glEnd()
    # clear the texture
    glDeleteTextures(1)
I understand the OpenCV calibration concepts, but I cannot put it all together with OpenGL so that the object appears correctly on the screen.
|
2022-10-04 13:20:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34388044476509094, "perplexity": 12150.063368522182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00783.warc.gz"}
|
http://mathhelpforum.com/pre-calculus/226817-vectors-f-g.html
|
# Thread: Vectors F and G
1. ## Vectors F and G
The vectors F and G represent two forces acting on an object as indicated by the attached picture. Compute, to two decimal places, the magnitude and direction of the resultant. Give the direction of the resultant by specifying the angle theta between vector F and the resultant.
Note: Alpha = 60°.
2. ## Re: Vectors F and G
Originally Posted by nycmath
The vectors F and G represent two forces acting on an object as indicated by the attached picture. Compute, to two decimal places, the magnitude and direction of the resultant. Give the direction of the resultant by specifying the angle theta between vector F and the resultant.
Note: Alpha = 60°.
this one is pretty basic.
Let $\vec{G}=(10,0)$ then $F=(8\cos(60\deg),8\sin(60\deg))=(4,4\sqrt{3})$
$F+G=(14,4\sqrt{3})$
the length is given by $|F+G|=\sqrt{14^2+(4\sqrt{3})^2}=\sqrt{244}=2\sqrt {61}$
$\theta=\arctan\left(\dfrac{4\sqrt{3}}{14}\right)= 26.3\deg$
3. ## Re: Vectors F and G
Originally Posted by nycmath
The vectors F and G represent two forces acting on an object as indicated by the attached picture. Compute, to two decimal places, the magnitude and direction of the resultant. Give the direction of the resultant by specifying the angle theta between vector F and the resultant.
Note: Alpha = 60°.
Good morning!
1. Use the Cosine rule to determine the magnitude of the resultant force R. Use the angle and the lengths of the indicated triangle:
$\vec R^2 = \vec F^2 + \vec G^2 - 2 \cdot |\vec F| \cdot | \vec G | \cdot \cos(180^\circ - \alpha)$
2. To determine the value of $\theta$ use the Cosine rule again:
$\cos(\theta) = \frac{\vec G^2 - \vec R^2 - \vec F^2}{-2 \cdot |\vec R| \cdot |\vec F|}$
4. ## Re: Vectors F and G
Thank you so much.
5. ## Re: Vectors F and G
Originally Posted by romsek
this one is pretty basic.
Let $\vec{G}=(10,0)$ then $F=(8\cos(60\deg),8\sin(60\deg))=(4,4\sqrt{3})$
$F+G=(14,4\sqrt{3})$
the length is given by $|F+G|=\sqrt{14^2+(4\sqrt{3})^2}=\sqrt{244}=2\sqrt {61}$
$\theta=\arctan\left(\dfrac{4\sqrt{3}}{14}\right)= 26.3\deg$
I applied your steps to the following question and was able to determine the magnitude, but my answer for theta was not correct.
|F| = 5N, |G| = 4N and alpha = 80°.
I got |F+G| = 6.92 N.
For theta, I got 45.29° but the correct answer in the textbook is 34.67°.
Can you tell me how to get the correct answer for theta?
6. ## Re: Vectors F and G
I should warn you that finding the angle $\theta$ isn't always quite so straightforward. You have to examine the signs of the x and y components of the vector and determine what quadrant it lies in. Then you can adjust the arctan result as necessary to place it in the correct quadrant.
For example suppose $\vec{v}=(-1,-1)$
$\arctan\left(\dfrac{-1}{-1}\right)=\arctan(\dfrac{1}{1})=\dfrac{\pi}{4}$
but the angle of $\vec{v}$ is actually $\dfrac{5\pi}{4}\text{ or }-\dfrac{3\pi}{4}$
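For instance, a quick illustrative check in Python of the $(-1,-1)$ example above, using atan2, which keeps track of the quadrant automatically:
import math

# The quadrant problem for v = (-1, -1) described above.
x, y = -1.0, -1.0
naive = math.atan(y / x)     # pi/4, the value in the wrong quadrant
proper = math.atan2(y, x)    # -3*pi/4, the actual direction of v
print(naive, proper)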
7. ## Re: Vectors F and G
Originally Posted by earboth
Good morning!
1. Use the Cosine rule to determine the magnitude of the resultant force R. Use the angle and the lengths of the indicated triangle:
$\vec R^2 = \vec F^2 + \vec G^2 - 2 \cdot |\vec F| \cdot | \vec G | \cdot \cos(180^\circ - \alpha)$
2. To determine the value of $\theta$ use the Cosine rule again:
$\cos(\theta) = \frac{\vec G^2 - \vec R^2 - \vec F^2}{-2 \cdot |\vec R| \cdot |\vec F|}$
I love your picture replies. I have seen your work and love the geometry as part of each reply.
8. ## Re: Vectors F and G
Originally Posted by nycmath
I applied your steps to the following question and was able to determine the magnitude, but my answer for theta was not correct.
|F| = 5N, |G| = 4N and alpha = 80°.
I got |F+G| = 6.92 N.
For theta, I got 45.29° but the correct answer in the textbook is 34.67°.
Can you tell me how to get the correct answer for theta?
I get the same answer you do.
9. ## Re: Vectors F and G
Thank you, romsek. By the way, what is the meaning of your username? Look for one or two more questions later in the precalculus and calculus forums. Having so much fun learning math with you and the other tutors.
10. ## Re: Vectors F and G
Originally Posted by nycmath
Thank you, romsek. By the way, what is the meaning of your username? Look for one or two more questions later in the precalculus and calculus forums. Having so much fun learning math with you and the other tutors.
It means about as close to nothing as is possible. It originated from a randomly generated selection of pre-approved names for a game I used to play.
11. ## Re: Vectors F and G
Hi,
Here's another slightly different way to solve the problem:
|
2017-04-30 23:02:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6621291637420654, "perplexity": 852.1609857492617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125881.93/warc/CC-MAIN-20170423031205-00458-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://nbviewer.jupyter.org/github/psychemedia/showntell/blob/maths/OpenLearn_Geometry.ipynb
|
# OpenLearn Geometry¶
This notebook recreates some of the content featured in the OpenLearn course Geometry.
The notebook includes several hidden code cells that generate a range of geometric figures.
To render the images, go to the Cell menu and select Run All.
To view/hide the code used to generate the figures, click on the Hide/Reveal Code Cell Inputs button in the notebook toolbar.
To make changes to the diagrams, click in the appropriate code input cell, make your change, and then run the cell using the Run Cell ("Play") button in the toolbar or via the keyboard shortcut SHIFT-ENTER.
Entering Ctrl-Z (or CMD-Z) in the code cell will undo your edits...
## Angles¶
The Try some yourself activity includes the following shape.
In [1]:
%load_ext tikz_magic
In [2]:
%%tikz
\usetikzlibrary{positioning}
\coordinate (A) at (0,0) ;
\coordinate (B) at (5,0.3) ;
\coordinate (C) at (4.5,-2) ;
\coordinate (D) at (2,-3) ;
\draw (A) node[left]{A} -- (B)node[right]{B};
\draw (B) -- (C)node[right]{C};
\draw (C) -- (A) ;
\draw (A) -- (D) node[below]{D};
\draw (D) -- (C);
\begin{scope}
\clip (B) -- (A) -- (C);
\draw (A) circle[radius=1.1];
\end{scope}
\node at (A)[below right=-0.05 and 0.5 ] {$\alpha$};
\begin{scope}
\clip (C) -- (A) -- (D);
\draw (A) circle[radius=1.1];
\end{scope}
\node at (A)[below right=0.3 and 0.3 ] {$\gamma$};
\begin{scope}
\clip (B) -- (C) -- (A);
\draw (C) circle[radius=1];
\end{scope}
\node at (C)[above left=0.25 and 0.2 ] {$\beta$};
\begin{scope}
\clip (A) -- (C) -- (D);
\draw (C) circle[radius=1];
\end{scope}
\node at (C)[left=0.4 ] {$\delta$};
Out[2]:
## Geometric shapes – circles¶
Original link
All circles are the same shape – they can only have different sizes.
In a circle, all the points are the same distance from a point called the centre. The centre is often labelled with the letter O.
In [3]:
%%tikz
%https://tex.stackexchange.com/a/223219
\def\radius{2}% radius of the circle
\def\tilt{30}% angle for the arc
%origin
\coordinate (O) at (0,0) ;
% Circle
\draw (O) circle[radius=\radius] node[font=\tiny, right]{{\em{O}} (Centre)};
%Centre point
\draw[black,fill=black] (0,0) circle [radius=0.03];
%Diameter
%latex-latex defines the 'latex' arrow head at each end of the line
\draw[latex-latex ] (\tilt:\radius) -- (180+\tilt:\radius)
node[font=\tiny,above, midway,rotate=\tilt]{Diameter};
%Radius
\draw[latex-latex ] (O) -- (\tilt+270:\radius)
node[font=\tiny,above, midway, above right]{Radius};
%Circumference
\draw (\tilt+30:\radius) -- +(0.5,1) -- +(0.75,1) node [right]{\tiny{Circumference}} ;
Out[3]:
The outside edge of a circle is called the circumference. A straight line from the centre to a point on the circumference is called a radius of the circle (the plural of radius is radii).
A line with both ends on the circumference and passing through the centre is called a diameter. Any diameter cuts the circle into two halves called semicircles.
In [17]:
%%tikz
% Define radius
\def\radius{1.5}
%origin
\coordinate (O) at (0,0) ;
\path (O) +(\radius,0) coordinate (arcOrigin);
%Draw dashed semi-circle
\draw [dashed] (arcOrigin) arc(0:-180:\radius);
%Draw grey semicircle - the -- cycle component closes the shape
\draw [fill={black!30}] (arcOrigin) arc(0:180:\radius)
%Close the figure and add a diameter label
-- cycle node [font=\tiny,below,midway]{Diameter};
%Centre point
\draw[black,fill=black] (O) circle [radius=0.03];
%Circumference label - build a relative line to connect circumference and label
\draw (30:\radius) -- ++(0.5,1) -- +(0.25,0) node [font=\tiny,right,
align=left]{Half the \\ circumference} ;
Out[17]:
In the circle below, the lines labelled OA, OB, OC, OD and OE are all radii, and AD and BE are diameters. The points A, B, C, D and E all lie on the circumference.
In [5]:
%%tikz
\def\radius{2}% radius of the circle
%origin
\coordinate (O) at (0,0) ;
% Circle
\draw (O) circle[radius=\radius] node[font=\tiny, left]{\em{O}};
%Centre point
\draw[black,fill=black] (0,0) circle [radius=0.03];
\draw (O) -- (125:\radius) node[font=\tiny,above]{A};
\draw (O) -- (60:\radius) node[font=\tiny,above right]{B};
\draw (O) -- (0:\radius) node[font=\tiny,right]{C};
\draw (O) -- (-50:\radius) node[font=\tiny,right]{D};
\draw (O) -- (-140:\radius) node[font=\tiny,left]{E};
Out[5]:
Although the terms ‘radius’, ‘diameter’ and ‘circumference’ each denote a certain line, these words are also employed to mean the lengths of those lines. So it is common to say, for example, ‘Mark a point on the circumference’ and ‘The circumference of this circle is 7.3 cm’. It is obvious from the context whether the line itself or the length is being referred to.
### Extra Examples¶
Handy fragments from Stack Overflow..
In [6]:
%%tikz
%https://tex.stackexchange.com/a/197711
\path (120:3) coordinate (A) (0:3) coordinate (B) (0:0) coordinate (C);
\draw (A)
-- (B) node [at start, above left] {$A$} node [midway, above] {$c$}
-- (C) node [at start, right] {$B$} node [midway, below] {$a$}
-- (A) node [at start, below] {$C$} node [midway, below] {$b$}
-- cycle;
\draw [dashed] (A) |- (C) node [midway, below left] {$P$};
%The -- (60:.6) element adds the tick to the angle
\draw (0:.5) arc (0:120:.5) (60:.4) -- (60:.6);
Out[6]:
In [ ]:
In [4]:
%%tikz --no-wrap
%via https://tex.stackexchange.com/a/397793/151162
\usetikzlibrary{angles,quotes}
\begin{tikzpicture}[x=4cm, y=4cm, axes/.style={thin, gray, ->},
dot/.style={.. dot={#1:0:;}},
.. dot/.style args={#1:#2:#3;}{insert path={
coordinate (#1)
node [circle, fill, inner sep=0, minimum size=2pt,label=#2:#1]{}
}}]
\clip (-0.25, -0.25) rectangle (1.5,1.5);
\draw[axes] (-1.2,0) -- (1.2,0) node[right] {$x$};
\draw[axes] (0,-1.2) -- (0,1.2) node[above] {$y$};
\def\a{40}
\path
(0,0) [dot=O:225]
(0:1) [dot=A:315]
(\a:1) [dot=B:90]
(0:cos \a) [dot=C:270]
(\a:sec \a) [dot=D]
(1, cosec \a-cot \a) [dot=E];
\draw (O) circle[radius=1];
\draw (O) -- (B) -- (C);
\draw (B) -- (D) -- (A);
\draw [dashed] (B) -- (A);
\draw [dashed] (B) -- (E);
\pic ["$\theta$", draw, ->, angle radius=1cm] {angle=C--O--B};
\path (O) -- (B) node [midway, above] {$1$};
\path (O) -- (C) node [midway, below] {$\cos\theta$};
\end{tikzpicture}
Out[4]:
In [ ]:
|
2021-04-16 08:17:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8753321766853333, "perplexity": 7474.814479862487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088731.42/warc/CC-MAIN-20210416065116-20210416095116-00379.warc.gz"}
|
https://proofwiki.org/wiki/Value_of_Vandermonde_Determinant/Formulation_1/Proof_1
|
# Value of Vandermonde Determinant/Formulation 1/Proof 1
## Theorem
Let $V_n$ be the Vandermonde determinant of order $n$ defined as the following formulation:
$V_n = \begin {vmatrix} 1 & x_1 & {x_1}^2 & \cdots & {x_1}^{n - 2} & {x_1}^{n - 1} \\ 1 & x_2 & {x_2}^2 & \cdots & {x_2}^{n - 2} & {x_2}^{n - 1} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & x_n & {x_n}^2 & \cdots & {x_n}^{n - 2} & {x_n}^{n - 1} \end {vmatrix}$
Its value is given by:
$\ds V_n = \prod_{1 \mathop \le i \mathop < j \mathop \le n} \paren {x_j - x_i}$
## Proof
Let $V_n = \begin{vmatrix} 1 & x_1 & {x_1}^2 & \cdots & {x_1}^{n - 2} & {x_1}^{n - 1} \\ 1 & x_2 & {x_2}^2 & \cdots & {x_2}^{n - 2} & {x_2}^{n - 1} \\ 1 & x_3 & {x_3}^2 & \cdots & {x_3}^{n - 2} & {x_3}^{n - 1} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & x_{n - 1} & {x_{n - 1} }^2 & \cdots & {x_{n - 1} }^{n - 2} & {x_{n - 1} }^{n - 1} \\ 1 & x_n & {x_n}^2 & \cdots & {x_n}^{n - 2} & {x_n}^{n - 1} \end{vmatrix}$.
By Multiple of Row Added to Row of Determinant, we can subtract row 1 from each of the other rows and leave $V_n$ unchanged:
$V_n = \begin{vmatrix} 1 & x_1 & {x_1}^2 & \cdots & {x_1}^{n - 2} & {x_1}^{n - 1} \\ 0 & x_2 - x_1 & {x_2}^2 - {x_1}^2 & \cdots & {x_2}^{n - 2} - {x_1}^{n - 2} & {x_2}^{n - 1} - {x_1}^{n - 1} \\ 0 & x_3 - x_1 & {x_3}^2 - {x_1}^2 & \cdots & {x_3}^{n - 2} - {x_1}^{n - 2} & {x_3}^{n - 1} - {x_1}^{n - 1} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & x_{n-1} - x_1 & {x_{n - 1} }^2 - {x_1}^2 & \cdots & {x_{n - 1} }^{n - 2} - {x_1}^{n - 2} & {x_{n - 1} }^{n - 1} - {x_1}^{n - 1} \\ 0 & x_n - x_1 & {x_n}^2 - {x_1}^2 & \cdots & {x_n}^{n - 2} - {x_1}^{n - 2} & {x_n}^{n - 1} - {x_1}^{n - 1} \end{vmatrix}$
Similarly without changing the value of $V_n$, we can subtract, in order:
$x_1$ times column $n - 1$ from column $n$
$x_1$ times column $n - 2$ from column $n - 1$
and so on, till we subtract:
$x_1$ times column $1$ from column $2$.
The first row will vanish all apart from the first element $a_{11} = 1$.
On all the other rows, we get, with new $i$ and $j$:
$a_{i j} = \paren {x_i^{j - 1} - x_1^{j - 1} } - \paren {x_1 x_i^{j - 2} - x_1^{j - 1} } = \paren {x_i - x_1} x_i^{j - 2}$:
$V_n = \begin {vmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & x_2 - x_1 & \paren {x_2 - x_1} x_2 & \cdots & \paren {x_2 - x_1} {x_2}^{n - 3} & \paren {x_2 - x_1} {x_2}^{n - 2} \\ 0 & x_3 - x_1 & \paren {x_3 - x_1} x_3 & \cdots & \paren {x_3 - x_1} {x_3}^{n - 3} & \paren {x_3 - x_1} {x_3}^{n - 2} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & x_{n - 1} - x_1 & \paren {x_{n - 1} - x_1} x_{n - 1} & \cdots & \paren {x_{n - 1} - x_1} {x_{n - 1} }^{n - 3} & \paren {x_{n - 1} - x_1} {x_{n - 1} }^{n - 2} \\ 0 & x_n - x_1 & \paren {x_n - x_1} x_n & \cdots & \paren {x_n - x_1} {x_n}^{n - 3} & \paren {x_n - x_1} {x_n}^{n - 2} \end {vmatrix}$
For all rows apart from the first, the $k$th row has the constant factor $\paren {x_k - x_1}$.
So we can extract all these as factors, and from Determinant with Row Multiplied by Constant, we get:
$\ds V_n = \prod_{k \mathop = 2}^n \paren {x_k - x_1} \begin {vmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & x_2 & \cdots & {x_2}^{n - 3} & {x_2}^{n - 2} \\ 0 & 1 & x_3 & \cdots & {x_3}^{n - 3} & {x_3}^{n - 2} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 1 & x_{n - 1} & \cdots & {x_{n - 1} }^{n - 3} & {x_{n - 1} }^{n - 2} \\ 0 & 1 & x_n & \cdots & {x_n}^{n - 3} & {x_n}^{n - 2} \end {vmatrix}$
From Determinant with Unit Element in Otherwise Zero Row, we can see that this directly gives us:
$\ds V_n = \prod_{k \mathop = 2}^n \paren {x_k - x_1} \begin {vmatrix} 1 & x_2 & \cdots & {x_2}^{n - 3} & {x_2}^{n - 2} \\ 1 & x_3 & \cdots & {x_3}^{n - 3} & {x_3}^{n - 2} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & x_{n - 1} & \cdots & {x_{n - 1} }^{n - 3} & {x_{n - 1} }^{n - 2} \\ 1 & x_n & \cdots & {x_n}^{n - 3} & {x_n}^{n - 2} \end{vmatrix}$
and it can be seen that:
$\ds V_n = \prod_{k \mathop = 2}^n \paren {x_k - x_1} V_{n - 1}$
$V_2$, by the time we get to it (it will concern elements $x_{n - 1}$ and $x_n$), can be calculated directly using the formula for calculating a Determinant of Order 2:
$V_2 = \begin {vmatrix} 1 & x_{n - 1} \\ 1 & x_n \end {vmatrix} = x_n - x_{n - 1}$
The result follows.
$\blacksquare$
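As an illustrative sanity check (not part of the proof above), the $n = 3$ case can be verified symbolically, for example with SymPy:
import sympy as sp

# Check that det V_3 equals (x2 - x1)(x3 - x1)(x3 - x2).
x1, x2, x3 = sp.symbols('x1 x2 x3')
V3 = sp.Matrix([[1, x1, x1**2],
                [1, x2, x2**2],
                [1, x3, x3**2]])
lhs = sp.expand(V3.det())
rhs = sp.expand((x2 - x1) * (x3 - x1) * (x3 - x2))
print(sp.simplify(lhs - rhs))   # prints 0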
|
2022-01-19 23:34:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9598721861839294, "perplexity": 358.14519598273716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301592.29/warc/CC-MAIN-20220119215632-20220120005632-00569.warc.gz"}
|
https://electronics.stackexchange.com/questions/256071/calculating-base-current-and-resistor-required-from-transistor-datasheet-do-i-n
|
# Calculating base current and resistor required from transistor datasheet? Do I need a different transistor?
I am extremely new to circuits and using transistors, but after reading many other posts and articles I am still confused about terms and not sure if I am understanding this correctly.
I have been trying to use a Raspberry Pi as a switch to turn on a motor. However, I realized that I could not use a PN2222, as its current limit (IIUC) is a few hundred mA, much less than 1.5A. So I have been trying to get a new transistor, but have been very lost reading the datasheet. After much research, I think I have the basics, but wanted to double check with the experts here whether what I am doing is going to blow anything up.
My main guide has been this post: Need help calculating resistance for transistor base
(Tried to simulate this but I don't seem to be getting any useful data - getting 0 V everywhere or N/A)
simulate this circuit – Schematic created using CircuitLab
Datasheet: sheet
Questions
1. So the motor I have is 12V and takes a minimum of 1.5A. Using $I = V/R$, does this mean that the resistance is 8 Ohm, and so I should use 8 Ohm in my calculation?
2. If I understand correctly, to calculate how much current is required from base, I need to use $h_{FE}$. At 1.5A collector current, gain is approximately 30 (from the graph, although I am not sure how that $V_{CE}=4V$ actually affects anything). So to support 1.5A, does that mean $I_b$ needs to be $\frac{1.5A}{30}=50mA$? If the max the source can output is 16mA, is the max current going through collector->emitter $16mA * 30 = 480mA$?
3. So the base resistor required would be $R=\frac{V_{BaseResistor}}{I_B}=\frac{3.3V - V_{BE}}{50mA} = \frac{3.3V - 1V}{50mA} = 46 Ohm$. But since I can't do 50mA, do I just calculate this for 16mA?
4. From the calculation above, it seems like I can't actually use my current setup to drive the motor properly. Is this correct? In this case, is it better to get a new transistor (is there a way to find the right transistor more easily?), or is it possible to provide external power to increase the current supplied to the transistor?
Sorry if my questions seem all over the place; I'm still very confused about whether what I am doing is correct.
• For this you would be much better off using a MOSFET instead of a BJT. If you must use BJT then you want a Darlington Pair to give you a greater gain. – Majenko Sep 4 '16 at 20:37
• Don't forget a flyback diode across your motor to protect your switch. – efox29 Sep 4 '16 at 20:37
• @Majenko wait, I'm confused, isn't MOSFET also a transistor? From what I read I thought that no matter what type I go with, they all have to go through this calculation? – blah900 Sep 4 '16 at 20:45
• MOSFETs and BJTs are both forms of transistor, but they operate very very differently. And for just switching a motor on and off you will be in the saturation region, so there really isn't any calculating to do. As long as your threshold is well below your logic high, and the MOSFET can handle the current, and the on-resistance is low enough there's nothing else really to worry about. – Majenko Sep 4 '16 at 20:47
• I'd suggest reading through the following tutorial to help you get started. Might help tie a bunch of things together. learn.sparkfun.com/tutorials/transistors/… – Fuzzy_Bunnys Sep 4 '16 at 20:48
Well thought out and written question.
You are quite correct in your calculations. The 4V Vce is important: it means you need a 16V supply, or else your motor will only see 8V and run slow and under-powered. This lost power will heat up the transistor at 4V x 1.5A = 6W, so it requires a heat sink. The current will likely be a bit lower, as the motor is getting less than 12V, but not near 480mA.
To get the full current even with the 16V supply, the low hFE means that you need to find more gain; this is usually done with a logic buffer to give you the extra drive, or with an extra transistor to amplify the logic output.
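As a rough back-of-the-envelope check in Python (the numbers are the ones quoted in the question and above, for illustration only; real values depend on the datasheet):
# Values taken from the question/answer above, for illustration only.
V_supply = 12.0   # V, supply in the original setup
V_ce = 4.0        # V, collector-emitter drop assumed above
I_motor = 1.5     # A, motor current
hfe = 30          # approximate gain at 1.5 A read off the datasheet graph

print("voltage left for the motor:", V_supply - V_ce, "V")   # ~8 V on a 12 V supply
print("power dissipated in the BJT:", V_ce * I_motor, "W")   # ~6 W, hence the heat sink
print("base drive needed:", 1e3 * I_motor / hfe, "mA")       # ~50 mA, more than the 16 mA available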
As others have suggested, a logic-level MOSFET is a strong contender, as long as you are not trying to switch it at high speed to control the motor speed; that would cause heating in the MOSFET unless you use a more elaborate gate drive.
While you are starting out, in an effort to minimise the risk of having the motor voltage reach your controller (and for general galvanic isolation, for a lot of reasons), I would recommend a relay as well. You can usually drive the relay with a single transistor, and it would be selected to drive the motor with a safe margin.
Remember that your motor starting current may be much higher than the rated running current and if your transistor or relay contact are rated too low you may have regular failures. A 5A rating would be a happy margin.
As mentioned you want a fly-back or free-wheeling diode across the motor (or relay) coil to protect your transistor.
EDIT:
There are also Darlington transistors available; these can be used but will have the same high Vce saturation voltage. A load of 12V at 1.5A is these days often handled with a MOSFET or a relay when using microcontrollers.
Here is a picture search that may help find ideas. There are lots of alternatives that are worth considering to find what will suit you best.
|
2020-01-20 11:54:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4631901681423187, "perplexity": 848.1068904965737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598726.39/warc/CC-MAIN-20200120110422-20200120134422-00449.warc.gz"}
|
https://chemistry.stackexchange.com/questions/51587/density-functional-definition
|
Density Functional Definition
Is it wrong to say that "density functional" means that the electron density is a function of the orbitals (wave functions) of all electrons in 3 dimensions? If so, why?
• I don't really understand your question. The electron density $\varrho (\vec{r})$ is defined to be a function of the many-particle wave function, i.e. $\varrho (\vec{r}) = \langle \Psi | \hat{\varrho} (\vec{r}) | \Psi \rangle$, where $\hat{\varrho} (\vec{r})$ is the density operator, $\vec{r}$ is the position vector, and $\Psi$ is the many-particle wave-function. But that has nothing to do with the definition of a density functional. A density functional in its most general meaning is just a functional of the aforementioned electron density. Maybe you should provide a bit more context. – Philipp May 23 '16 at 8:20
• @Mr.Why The energy is a functional of the wavefunction, i.e. $E=E[\psi]$. – user23061 Jun 7 '16 at 9:24
• You are right . – M.ghorab Jun 7 '16 at 23:45
• An eigenvalue is not the same as a function. – M.ghorab Jun 10 '16 at 12:01
In general, a functional $F$ is a mapping from an arbitrary set $\mathcal{X}$ of functions to the set of complex numbers $\mathbb{C}$ or the set of real numbers $\mathbb{R}$: $$F : \mathcal{X} \mapsto \mathbb{R}.$$ or $$F : \mathcal{X} \mapsto \mathbb{C}.$$
For example, if you consider $\mathcal{X}$ as the set of polynomials with real coefficients, you can define a functional $F$ as $$F[f] = \int_0^1f(x)\,dx$$ i.e. your functional $F$ takes a polynomial function $f\in\mathcal{X}$ (for example $f(x)=3x+1/2$) as an argument and returns a scalar (2 for $f(x)=3x+1/2$, as you can easily verify).
A density functional is simply a functional $F[f]$ where the argument $f$ is the electron density $\rho(\vec{r})$ (i.e. a density functional is a functional of the electron density). For example Hohenberg and Kohn showed that the energy $\epsilon$ of a quantum system is a functional of the density $$\epsilon=E[\rho]$$ This means that when you plug the electron density of your system $\rho(\vec{r})$ into the energy functional $E[\rho]$ you get a number $\epsilon$, which is the energy of your system. The whole energy functional is not known explicitly, but some of its components are known. For example for the external potential energy we have $$V[\rho] = \int v(\vec{r})\rho(\vec{r})d\vec{r}$$ and for the Coulomb interaction between electrons we have $$J[\rho] = \frac{1}{2}\iint \frac{\rho(\vec{r})\rho(\vec{r}')}{|\vec{r}-\vec{r}'|}\,d\vec{r}d\vec{r}'$$ which are clearly functionals of the electron density.
• (i.e. a density functional is a functional of the electron density). So can it not be said that "density functional" means that the density is itself a functional of something else (orbitals = wavefunctions)? – M.ghorab Aug 2 '17 at 21:02
• The density functional is distinct from the density itself. The density maps from a position vector (3 real numbers) to a single real number $\rho:\;\mathbb{R}^3 \rightarrow \mathbb{R}$. The density functional maps from an entire function to a single real number $F:\;\mathcal{X} \rightarrow \mathbb{R}$. Mapping from a function to something else is what makes something a functional. – user213305 Aug 2 '17 at 22:36
• Also, the orbitals themselves are fictitious; there are infinitely many choices of orbitals for the same wavefunction but some are more sensible than others. Also, although the density can be expressed in terms of the wavefunction: $\rho(r) = |\psi(r)|^2$, both sides just equal a number and require us to input a position vector, so the density is still a function mapping from vectors to numbers, as is the wavefunction: $\rho : \mathbb{R}^3\rightarrow \mathbb{R}$, $\psi : \mathbb{R}^3\rightarrow \mathbb{C}$. – user213305 Aug 2 '17 at 22:42
From the first Hohenberg–Kohn theorem, it is known that the electronic energy is a functional of the electron density, $$E_\text{el} = E_\text{el}[\rho(\vec{r}_{1})] \, .$$ i.e. the electronic energy $E_\text{el}$ is a function that takes another function, namely the electron density $\rho(\vec{r}_{1})$, as its input argument and returns a scalar value (real number). So, a density functional is a functional of the electron density that returns the (possibly approximate) electronic energy or a part of it, if $E_\text{el}$ is subdivided into parts.
|
2019-05-22 18:53:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.868818998336792, "perplexity": 197.37988078966532}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256948.48/warc/CC-MAIN-20190522183240-20190522205240-00452.warc.gz"}
|
http://mathhelpforum.com/advanced-algebra/209764-exam-tomorrow-span-vs-subspace-linear-algebra.html
|
# Math Help - Exam Tomorrow: Span vs. Subspace in Linear Algebra
1. ## Exam Tomorrow: Span vs. Subspace in Linear Algebra
Hello,
I cannot decipher the difference between span and subspace with respect to linear algebra and their definitions. I have an exam tomorrow, would anybody be able to thoroughly describe the difference?
Thanks and take care,
Justin
2. ## Re: Exam Tomorrow: Span vs. Subspace in Linear Algebra
a subspace U of a vector space V is a subset U of the underlying set V that is itself a vector space with the same operations as V. this means 3 things:
1) the operation + of V when applied to vectors u,w in U always gives another vector in U (that is: u+w is in U whenever u and w are)
2) the scalar multiplication of V when applied to a vector u in U always gives another vector in U (for any scalar a in the field F, au is in U whenever u is).
3) U is non-empty: equivalently (and usually easier to check): the 0-vector of V lies in U.
the span span(S) of a set of vectors S = {v1,v2,...,vk} is the set of all linear combinations {a1v1+a2v2+...+akvk : aj in F, vj in S}.
the span of a set is always a subspace, and a subspace U is always the span of some smaller subset S, called a BASIS for U.
so a spanning set is a basis "with some extra vectors thrown in". for example {(1,0),(0,1),(3,4)} is a spanning set for the plane R2, but we don't need the vector (3,4); just the first 2 will suffice.
big picture:
you have a vector space V. in it are LOTS of vectors (usually infinitely many). instead of trying to catalog every single one of them, we want to study just a smaller amount of them.
for R2, a basis is {(1,0),(0,1)}. instead of studying every (x,y), we can just study (1,0) and (0,1), since (x,y) = (x,0) + (0,y) = x(1,0) + y(0,1). this means that most of what we know about points in the plane can be handled "one coordinate at a time".
of course, sometimes we are just given a handful of vectors, and we want to create the smallest subspace that contains all of them (to keep the space we're studying as simple as possible). that subspace is span(S).
subspaces are vector spaces, just "smaller ones" that live in "larger ones" (we start with V, and consider some smaller U).
spans are ALSO vector spaces, but "larger spaces" that contain some given set S (we start with S and expand it until it becomes a vector space, by considering all linear combinations of S).
so it really depends on what you're given to start with. usually, the "mommy space" V is given. then you are given a subset S.
if S satisfies the 3 rules above, then S itself is a vector space, and thus S is a subspace of V. if S is NOT a subspace, we can make it into one, by taking linear combinations of S.
{(1,0)} is not a subspace of R2. this is because a(1,0) = (a,0), which is not in {(1,0)} for every real number a (only for a = 1). span({(1,0)}) = {(a,0): a in R} IS a subspace of R2 (you might recognize this as "the x-axis").
short version: span is something we do to a SET, to make it INTO a vector space. subspaces are subsets that are ALREADY vector spaces (no extra effort required).
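as a small numerical illustration of the R2 example above (purely optional; the variable names are made up for the example):
import numpy as np

# (3,4) is already a linear combination of (1,0) and (0,1),
# so adding it to the spanning set gains nothing.
basis = np.column_stack([(1.0, 0.0), (0.0, 1.0)])   # columns are the basis vectors
v = np.array([3.0, 4.0])
print(np.linalg.solve(basis, v))   # [3. 4.]  ->  (3,4) = 3*(1,0) + 4*(0,1)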
|
2015-04-27 14:10:29
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8557049036026001, "perplexity": 628.4670139591649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246658376.88/warc/CC-MAIN-20150417045738-00173-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://digling.org/bdpa/faq.php
|
### Alignment Analyses
Alignment analyses are the most common way to compare sequences. Given that phonetic sequences are the basic comparanda in both historical linguistics and dialectology, it is straightforward to assume that alignment analyses play a crucial role in both disciplines. Without alignments, i.e. without the explicit matching of sounds, regular sound correspondences could not be detected, nor could cognacy between words or genetic relationship between languages be proven. However, although language comparison is always based on an implicit alignment of words, this alignment is rarely explicitly visualized or termed as such, and in the rare cases where scholars explicitly use alignments to visualize correspondence patterns in words, they merely serve illustrative purposes.
### Basic Formats for Alignments Analyses
In order to exchange, edit, and compare phonetic alignments, different formats are used in the BDPA. Basically, we distinguish between formats for pairwise alignments and for multiple alignments. For practical reasons, the BDPA uses the alignment formats generally employed in LingPy. All formats are text-based and can be edited with the help of simple text editors.
The basic format for the representation of multiple alignment analyses is the MSA-format. Files in this format have the extension "msa". The first line of an MSA file serves as an identifier for the dataset from which the alignment was taken. There are no further format restrictions and the user can freely decide what to use as an identifier, as long as it does not exceed the first line. In the BDPA, we use the names of our subsets as dataset identifiers. The second line is reserved as an identifier for the set of aligned sound sequences. The identifier can again be freely chosen by the user. In the BDPA, we generally use the meaning of the sound sequences as identifier, but we also add additional information, such as the ancestral form (in language families) or the orthography of the corresponding word in the standard variety (in dialect datasets). The following lines give the phonetic sequences in aligned form, separated by a tab-stop, and preceded by language identifiers (ISO-code, language name, dialect location) in the first column of the alignment matrix. The hash symbol ("#") is used as a comment character. When placed at the beginning of a line, it indicates that the line should be ignored when parsing the file. Inspired by alignment formats in bioinformatics, LingPy allows for specific additional lines which can be used to annotate the alignments. Instances of metathesis, for example, may be represented by adding a line which starts with the keyword "SWAPS", with a plus character ("+") marking the beginning of a swapped region, the dash character ("-") its center and another plus character the end. All sites which are not affected by swaps contain a dot ("."). In the BDPA, 66 out of 750 multiple alignments contain instances of metathesis and are regularly annotated in the way just described. As an example, consider the file harry_potter.msa:
1 Harry Potter Testset
2 Woldemort (in different languages)
3 English v o l - d e m o r t
4 German. w a l - d e m a r -
5 Russian v - l a d i m i r -
6 SWAPS.. . + - + . . . . . .
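To make the layout concrete, here is a minimal reading sketch in Python (illustrative only; this is not LingPy's own reader, and the function name read_msa is made up here):
def read_msa(path):
    with open(path, encoding="utf-8") as handle:
        lines = [line.rstrip("\n") for line in handle if line.strip()]
    dataset_id, sequence_id = lines[0], lines[1]
    alignment, annotations = {}, {}
    for line in lines[2:]:
        if line.startswith("#"):        # '#' at the start marks a line to be ignored
            continue
        label, *cells = line.split()    # the spec uses tab stops; split() also copes with spaces
        if label.rstrip(".").upper() == "SWAPS":
            annotations["SWAPS"] = cells    # the metathesis annotation row described above
        else:
            alignment[label] = cells        # language identifier -> aligned sound sequence
    return dataset_id, sequence_id, alignment, annotations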
Basically, the MSA-format can also be used to represent pairwise alignment analyses. However, since each MSA-file is a single text-file, we would need 7 197 different text-files to represent all sequence pairs of our master benchmark for pairwise alignment analyses. Using such a large number of text-files to represent the rather small amount of information available in pairwise alignments is not only impractical as a shared digital resource, but also very inefficient for computation.
In order to deal with large numbers of pairwise alignments in one and the same text-file, LingPy offers an additional format for pairwise alignment analyses. This format is called PSA-format, and files in this format have the extension "psa". As for the MSA-format, the first line of a PSA-file is reserved for an identifier that refers to the dataset from which the data was taken. The sequence pairs themselves are given in triplets, with a sequence identifier in the first line of a triplet (containing the meaning, or orthographical information), while the second and third lines contain the alignment matrix, with the language identifiers placed in the first column. All triplets (sequence pair identifier and two sequences) are separated by one empty line. As an example, consider the file harry_potter.psa:
Harry Potter Testset
Woldemort in German and Russian
German. w a l - d e m a r
Russian v - l a d i m i r

Woldemort in English and Russian
English w o l - d e m o r t
Russian v - l a d i m i r -

Woldemort in English and German
English w o l d e m o r t
German. w a l d e m a r -
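A PSA file can be read in the same spirit by splitting the file into blank-line-separated triplets. Again, this is a hedged sketch written for this document rather than the LingPy reader; the function name read_psa and the returned tuple layout are assumptions.

```python
def read_psa(path):
    """Parse a PSA file into (identifier, (taxonA, alignmentA), (taxonB, alignmentB)) triplets."""
    with open(path, encoding="utf-8") as handle:
        lines = [ln.rstrip("\n") for ln in handle]
    dataset, body = lines[0], lines[1:]
    triplets, block = [], []
    for line in body + [""]:                   # trailing "" flushes the final triplet
        if line.strip():
            block.append(line)
        elif block:
            ident = block[0]
            a = block[1].split("\t") if "\t" in block[1] else block[1].split()
            b = block[2].split("\t") if "\t" in block[2] else block[2].split()
            triplets.append((ident, (a[0], a[1:]), (b[0], b[1:])))
            block = []
    return dataset, triplets
```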
In the BDPA, the pairwise benchmarks, as described above, are provided in PSA-format. Additionally, we extracted all possible pairwise alignments inherent in our master set of 750 multiple alignments and offer them for download in PSA-format. You can download both MSA and PSA files for each subset from here.
### Citing BDPA
If you use this database, please cite the following paper:
• List, Johann-Mattis and Jelena Prokić. (2014). A benchmark database of phonetic alignments in historical linguistics and dialectology. In: Proceedings of the International Conference on Language Resources and Evaluation (LREC), 26-31 May 2014, Reykjavik. 288-294.
The paper can be downloaded from this link. Please make sure that you also cite all individual sources of BDPA which you are using. For example, if you use the alignments of the Bai dialects in BDPA, you should quote both original sources from which they were taken, namely:
• Wang, F. (2006): Comparison of languages in contact. The distillation method and the case of Bai. Taipei: Institute of Linguistics, Academia Sinica.
• Allen, B. (2007): Bai dialect survey. SIL International. URL: http://www.sil.org/silesr/2007/silesr2007-012.pdf
### Sources
All the sources we used to create the alignments can be found here.
### Contact
For technical questions regarding the data, please contact Johann-Mattis List (Philipps-Universität Marburg) or Jelena Prokić (Philipps-Universität Marburg).
|
2019-12-11 15:15:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.404583603143692, "perplexity": 2402.4879503279503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540531917.10/warc/CC-MAIN-20191211131640-20191211155640-00309.warc.gz"}
|
https://mathbabe.org/category/guest-post/page/2/
|
Archive
Archive for the ‘guest post’ Category
Guest post: Clustering and predicting NYC taxi activity
This is a guest post by Deepak Subburam, a data scientist who works at Tessellate.
from NYCTaxi.info
Greetings fellow Mathbabers! At Cathy's invitation, I am writing here about NYCTaxi.info, a public service web app my co-founder and I have developed. It overlays estimated taxi activity on a Google map around you, as the expected number of passenger pickups and dropoffs in the current hour. We modeled these estimates from the recently released 2013 NYC taxi trips dataset comprising 173 million trips, the same dataset that Cathy's post last week on deanonymization referenced. Our work will not help you stalk your favorite NYC celebrity, but guide your search for a taxi and maybe save some commute time. My writeup below will take you through the four broad stages our work proceeded through: data extraction and cleaning, clustering, modeling, and visualization.
We extract three columns from the data: the longitude and latitude GPS coordinates of the passenger pickup or dropoff location, and the timestamp. We make no distinction between pickups and dropoffs, since both of these events imply an available taxicab at that location. The data was generally clean, with a very small fraction of a percent of coordinates looking bad, e.g. in the middle of the Hudson River. These coordinate errors get screened out by the clustering step that follows.
We cluster the pickup and dropoff locations into areas of high density, i.e. where many pickups and dropoffs happen, to determine where on the map it is worth making and displaying estimates of taxi activity. We rolled our own algorithm, a variation on heatmap generation, after finding existing clustering algorithms such as K-means unsuitable—we are seeking centroids of areas of high density rather than cluster membership per se. See the figure below, which shows the cluster centers as identified by our algorithm on a square-mile patch of Manhattan. The axes represent the longitude and latitude of the area; the small blue crosses a random sample of pickups and dropoffs; and the red numbers the identified cluster centers, in descending order of activity.
Taxi activity clusters
We then model taxi activity at each cluster. We discretize time into hourly intervals—for each cluster, we sum all pickups and dropoffs that occur each hour in 2013. So our datapoints now are triples of the form [<cluster>, <hour>, <activity>], with <hour> being some hour in 2013 and <activity> being the number of pickups and dropoffs that occurred in hour <hour> in cluster <cluster>. We then regress each <activity> against neighboring clusters' and neighboring times' <activity> values. This regression serves to smooth estimates across time and space, smoothing out effects of special events or weather in the prior year that don't repeat this year. It required some tricky choices on arranging and aligning the various data elements; not technically difficult or maybe even interesting, but nevertheless likely the better part of an hour at a whiteboard to explain. In other words, typical data science. We then extrapolate these predictions to 2014, by mapping each hour in 2014 to the most similar hour in 2013. So we now have, for each cluster location and each hour in 2014, a prediction of the number of passenger pickups and dropoffs.
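To make the shape of these datapoints concrete, here is a small pandas sketch (ours, not the authors' code) that builds the [<cluster>, <hour>, <activity>] triples from a flat table of pickup/dropoff events; the file name and column names are assumptions for illustration.

```python
import pandas as pd

# Each row of the (hypothetical) input table is one pickup or dropoff event,
# already tagged with the cluster it falls into.
events = pd.read_csv("taxi_events_2013.csv", parse_dates=["timestamp"])
events["hour"] = events["timestamp"].dt.floor("H")   # discretize time into hourly bins

# Count events per (cluster, hour): these counts are the <activity> values.
activity = (events.groupby(["cluster", "hour"])
                  .size()
                  .reset_index(name="activity"))
```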
We display these predictions by overlaying them on a Google map at the corresponding cluster locations. We round <activity> to values like 20, 30 to avoid giving users number dyslexia. We color the labels based on these values, using the black body radiation color temperatures for the color scale, as that is one of two color scales where the ordering of change is perceptually intuitive.
If you live in New York, we hope you find NYCTaxi.info useful. Regardless, we look forward to receiving any comments.
Guest post: The dangers of evidence-based sentencing
This is a guest post by Luis Daniel, a research fellow at The GovLab at NYU where he works on issues dealing with tech and policy. He tweets @luisdaniel12. Crossposted at the GovLab.
What is Evidence-based Sentencing?
For several decades, parole and probation departments have been using research-backed assessments to determine the best supervision and treatment strategies for offenders to try and reduce the risk of recidivism. In recent years, state and county justice systems have started to apply these risk and needs assessment tools (RNA’s) to other parts of the criminal process.
Of particular concern is the use of automated tools to determine imprisonment terms. This relatively new practice of incorporating RNA information into the sentencing process is known as evidence-based sentencing (EBS).
What the Models Do
The different parameters used to determine risk vary by state, and most EBS tools use information that has been central to sentencing schemes for many years, such as an offender's criminal history. However, an increasing number of states have been utilizing static factors such as gender, age, marital status, education level, employment history, and other demographic information to determine risk and inform sentencing. Especially alarming is the fact that the majority of these risk assessment tools do not take an offender's particular case into account.
This practice has drawn sharp criticism from Attorney General Eric Holder who says “using static factors from a criminal’s background could perpetuate racial bias in a system that already delivers 20% longer sentences for young black men than for other offenders.” In the annual letter to the US Sentencing Commission, the Attorney General’s Office states that “utilizing such tools for determining prison sentences to be served will have a disparate and adverse impact on offenders from poor communities already struggling with social ills.” Other concerns cite the probable unconstitutionality of using group-based characteristics in risk assessments.
Where the Models Are Used
It is difficult to precisely quantify how many states and counties currently implement these instruments, although at least 20 states have implemented some form of EBS. Some of the states or states with counties that have implemented some sort of EBS (any type of sentencing: parole, imprisonment, etc) are: Pennsylvania, Tennessee, Vermont, Kentucky, Virginia, Arizona, Colorado, California, Idaho, Indiana, Missouri, Nebraska, Ohio, Oregon, Texas, and Wisconsin.
The Role of Race, Education, and Friendship
Overwhelmingly, states do not include race in the risk assessments since there seems to be a general consensus that doing so would be unconstitutional. However, even though these tools do not take race into consideration directly, many of the variables used, such as economic status, education level, and employment, correlate with race. African-Americans and Hispanics are already disproportionately incarcerated, and determining sentences based on these variables might cause further racial disparities.
The very socioeconomic characteristics such as income and education level used in risk assessments are the characteristics that are already strong predictors of whether someone will go to prison. For example, high school dropouts are 47 times more likely to be incarcerated than people in their similar age group who received a four-year college degree. It is reasonable to suspect that courts that include education level as a risk predictor will further exacerbate these disparities.
Some states, such as Texas, take into account peer relations and consider associating with other offenders a "salient problem". Considering that Texas is in 4th place in the rate of people under some sort of correctional control (parole, probation, etc.) and that the rate is 1 in 11 for black males in the United States, it is likely that this metric would disproportionately affect African-Americans.
Sonja Starr’s paper
Even so, in some cases, socioeconomic and demographic variables receive significant weight. In her forthcoming paper in the Stanford Law Review, Sonja Starr provides a telling example of how these factors are used in presentence reports. From her paper:
For instance, in Missouri, pre-sentence reports include a score for each defendant on a scale from -8 to 7, where “4-7 is rated ‘good,’ 2-3 is ‘above average,’ 0-1 is ‘average’, -1 to -2 is ‘below average,’ and -3 to -8 is ‘poor.’ Unlike most instruments in use, Missouri’s does not include gender. However, an unemployed high school dropout will score three points worse than an employed high school graduate—potentially making the difference between “good” and “average,” or between “average” and “poor.” Likewise, a defendant under age 22 will score three points worse than a defendant over 45. By comparison, having previously served time in prison is worth one point; having four or more prior misdemeanor convictions that resulted in jail time adds one point (three or fewer adds none); having previously had parole or probation revoked is worth one point; and a prison escape is worth one point. Meanwhile, current crime type and severity receive no weight.
Starr argues that such simple point systems may “linearize” a variable’s effect. In the underlying regression models used to calculate risk, some of the variable’s effects do not translate linearly into changes in probability of recidivism, but they are treated as such by the model.
Another criticism Starr makes is that they often make predictions on an individual based on averages of a group. Starr says these predictions can predict with reasonable precision the average recidivism rate for all offenders who share the same characteristics as the defendant, but that does not make it necessarily useful for individual predictions.
The Future of EBS Tools
The Model Penal Code is currently in the process of being revised and is set to include these risk assessment tools in the sentencing process. According to Starr, this is a serious development because it reflects the increased support of these practices and because of the Model Penal Code’s great influence in guiding penal codes in other states. Attorney General Eric Holder has already spoken against the practice, but it will be interesting to see whether his successor will continue this campaign.
Even if EBS can accurately measure risk of recidivism (which is uncertain according to Starr), does that mean that a greater prison sentence will result in fewer future offenses after the offender is released? EBS does not seek to answer this question. Further, if knowing there is a harsh penalty for a particular crime is a deterrent to commit said crime, wouldn't adding more uncertainty to sentencing (EBS tools are not always transparent and sometimes proprietary) effectively remove this deterrent?
Even though many questions remain unanswered and while several people have been critical of the practice, it seems like there is great support for the use of these instruments. They are especially easy to support when they are overwhelmingly regarded as progressive and scientific, something Starr refutes. While there is certainly a place for data analytics and actuarial methods in the criminal justice system, it is important that such research be applied with the appropriate caution. Or perhaps not at all. Even if the tools had full statistical support, the risk of further exacerbating an already disparate criminal justice system should be enough to halt this practice.
Both Starr and Holder believe there is a strong case to be made that the risk prediction instruments now in use are unconstitutional. But EBS has strong advocates, so it's a difficult subject. Ultimately, evidence-based sentencing is used to determine a person's sentence based not on what the person has done, but on who that person is.
Guest post: New Federal Banking Regulations Undermine Obama Infrastructure Stance
This is a guest post by Marc Joffe, a former Senior Director at Moody’s Analytics, who founded Public Sector Credit Solutions in 2011 to educate the public about the risk – or lack of risk – in government securities. Marc published an open source government bond rating tool in 2012 and launched a transparent credit scoring platform for California cities in 2013. Currently, Marc blogs for Bitvore, a company which sifts the internet to provide market intelligence to municipal bond investors.
Obama administration officials frequently talk about the need to improve the nation’s infrastructure. Yet new regulations published by the Federal Reserve, FDIC and OCC run counter to this policy by limiting the market for municipal bonds.
On Wednesday, bank regulators published a new rule requiring large banks to hold a minimum level of high quality liquid assets (HQLAs). This requirement is intended to protect banks during a financial crisis, and thus reduce the risk of a bank failure or government bailout. Just about everyone would agree that that’s a good thing.
The problem is that regulators allow banks to use foreign government securities, corporate bonds and even stocks as HQLAs, but not US municipal bonds. Unless this changes, banks will have to unload their municipal holdings and won’t be able to purchase new state and local government bonds when they’re issued. The new regulation will thereby reduce the demand for bonds needed to finance roads, bridges, airports, schools and other infrastructure projects. Less demand for these bonds will mean higher interest rates.
Municipal bond issuance is already depressed. According to data from SIFMA, total municipal bonds outstanding are lower now than in 2009 – and this is in nominal dollar terms. Scary headlines about Detroit and Puerto Rico, rating agency downgrades and negative pronouncements from market analysts have scared off many investors. Now with banks exiting the market, the premium that local governments have to pay relative to Treasury bonds will likely increase.
If the new rule had limited HQLA’s to just Treasuries, I could have understood it. But since the regulators are letting banks hold assets that are as risky as or even riskier than municipal bonds, I am missing the logic. Consider the following:
• No state has defaulted on a general obligation bond since 1933. Defaults on bonds issued by cities are also extremely rare – affecting about one in one thousand bonds per year. Other classes of municipal bonds have higher default rates, but not radically different from those of corporate bonds.
• Bonds issued by foreign governments can and do default. For example, private investors took a 70% haircut when Greek debt was restructured in 2012.
• Regulators explained their decision to exclude municipal bonds because of thin trading volumes, but this is also the case with corporate bonds. On Tuesday, FINRA reported a total of only 6,446 daily corporate bond trades across a universe of perhaps 300,000 issues. In other words, the average corporate bond trades far less than once per day (at that rate, roughly once every month and a half). Not very liquid.
• Stocks are more liquid, but can lose value very rapidly during a crisis as we saw in 1929, 1987 and again in 2008-2009. Trading in individual stocks can also be halted.
Perhaps the most ironic result of the regulation involves municipal bond insurance. Under the new rules, a bank can purchase bonds or stock issued by Assured Guaranty or MBIA – two major municipal bond insurers – but they can’t buy state and local government bonds insured by those companies. Since these insurance companies would have to pay interest and principal on defaulted municipal securities before they pay interest and dividends to their own investors, their securities are clearly more risky than the insured municipal bonds.
Regulators have expressed a willingness to tweak the new HQLA regulations now that they are in place. I hope this is one area they will reconsider. Mandating that banks hold safe securities is a good thing; now we need a more data-driven definition of just what safe means. By including municipal securities in HQLA, bank regulators can also get on the same page as the rest of the Obama administration.
Categories: economics, finance, guest post
Guest Post: Bring Back The Slide Rule!
This is a guest post by Gary Cornell, a mathematician, writer, publisher, and recent founder of StemForums.
I was having a wonderful ramen lunch with the mathbabe and, as is all too common when two broad-minded Ph.D.'s in math get together, we started talking about the horrible state math education is in for both advanced high school students and undergraduates.
One amusing thing we discovered pretty quickly is that we had independently come up with the same (radical) solution to at least part of the problem: throw out the traditional sequence which goes through first and second year calculus and replace it with a unified probability, statistics, calculus course where the calculus component was only for the smoothest of functions and moreover the applications of calculus are only to statistics and probability. Not only is everything much more practical and easier to motivate in such a course, students would hopefully learn a skill that is essential nowadays: how to separate out statistically good information from the large amount of statistical crap that is out there.
Of course, the downside is that the (interesting) subtleties that come from the proofs, the study of non-smooth functions, and for that matter all the other stuff interesting to prospective physicists like DiffEQs, would have to be reserved for different courses. (We also were in agreement that Gonick's beyond wonderful "Cartoon Guide To Statistics" should be required reading for all the students in these courses, but I digress…)
The real point of this blog post is based on what happened next: but first you have to know I'm more or less one generation older than the mathbabe. This meant I was both able and willing to preface my next point with the words: "You know when I was young, in one way students were much better off because…" Now it is well known that using this phrase to preface a discussion often poisons the discussion, but occasionally, as I hope in this case, some practices from days gone by can, if brought back, help solve some of today's educational problems.
By the way, and apropos of nothing, there is a cure for people prone to too frequent use of this phrase: go quickly to YouTube and repeatedly make them watch Monty Python's Four Yorkshiremen until cured:
Anyway, the point I made was that I am a member of the last generation of students who had to use slide rules. Another good reference is here. Both these references are great and I recommend them. (The latter being more technical.) For those who have never heard of them, in a nutshell, a slide rule is an analog device that uses logarithms under the hood to do (sufficiently accurate in most cases) approximate multiplication, division, roots, etc.
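For readers who have never seen the trick, the logarithm idea is easy to demonstrate in a couple of lines; this snippet is just an illustration added for this document, not anything from the original post.

```python
import math

# A slide rule adds two lengths on logarithmic scales; since
# log(a) + log(b) = log(a * b), adding the lengths multiplies the numbers.
a, b = 3.7, 42.0
product = 10 ** (math.log10(a) + math.log10(b))   # same value as a * b, up to rounding
print(round(product, 4), round(a * b, 4))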
The key point is that using a slide rule requires the user to keep track of the "order of magnitude" of the answers—because slide rules only give you four or so significant digits. This meant students of my generation, when taking science and math courses, were continuously exposed to order of magnitude calculations; you just couldn't escape from having to make order of magnitude calculations all the time—students nowadays, not so much. Calculators have made skill at doing order of magnitude calculations (or Fermi calculations, as they are often lovingly called) an add-on rather than a baseline skill, and that is a really bad thing. (Actually my belief that bringing back slide rules would be a good thing goes back a ways: when I was a Program Director at the NSF in the 90's, I actually tried to get someone to submit a proposal which would have been called "On the use of a hand held analog device to improve science and math education!" Didn't have much luck.)
Anyway, if you want to try a slide rule out, alas, good vintage slide rules have become collectible and so expensive (because baby boomers like me are buying the ones we couldn't afford when we were in high school), but the nice thing is there are lots of sites like this one which show you how to make your own.
Finally, while I don’t think they will ever be as much fun as using a slide rule, you could still allow calculators in classrooms.
Why? Because it would be trivial to have a mode in the TI calculator or the Casio calculator that all high school students seem to use, called “significant digits only.” With the right kind of problems this mode would require students to do order of magnitude calculations because they would never be able to enter trailing or leading zeroes and we could easily stick them with problems having a lot of them!
But calculators really bug me in classrooms, and so I can't resist pointing out one last flaw in their omnipresence: they make students believe in the possibility of ridiculously high precision results in the real world. After all, nothing they are likely to encounter in their work (and certainly not in their lives) will ever need (or even have) 14 digits of accuracy and, more to the point, when you see a high precision result in the real world, it is likely to be totally bogus when examined under the hood.
A simple mathematical model of congressional geriatric penis pumps
This is a guest post written by Stephanie Yang and reposted from her blog. Stephanie and I went to graduate school at Harvard together. She is now a quantitative analyst living in New York City, and will be joining the data science team at Foursquare next month.
Last week’s hysterical report by the Daily Show’s Samantha Bee on federally funded penis pumps contained a quote which piqued our quantitative interest. Listen carefully at the 4:00 mark, when Ilyse Hogue proclaims authoritatively:
“Statistics show that probably some our members of congress have a vested interested in having penis pumps covered by Medicare!”
Ilya’s wording is vague, and intentionally so. Statistically, a lot of things are “probably” true, and many details are contained in the word “probably”. In this post we present a simple statistical model to clarify what Ilya means.
First we state our assumptions. We assume that penis pumps are uniformly distributed among male Medicare recipients and that no man has received two pumps. These are relatively mild assumptions. We also assume that what Ilyse refers to as "members of Congress [with] a vested interested in having penis pumps covered by Medicare" specifically means male members of congress who received a penis pump covered by federal funds. Of course, one could argue that female members of congress could also have a vested interest in penis pumps, but we do not want to go there.
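For readers who want to check the arithmetic, the short Python sketch below (ours, not part of the original post) reproduces the binomial calculation worked through in the next paragraph.

```python
from math import comb

# Share of male Medicare recipients who received a pump (~2.15%).
p = 478_000 / (0.45 * 49_435_610)
n = 128                              # Medicare-eligible male members of Congress

p_none = (1 - p) ** n                                                # ~0.062
p_at_least_one = 1 - p_none                                          # ~0.938
p_at_least_two = 1 - p_none - comb(n, 1) * (1 - p) ** (n - 1) * p    # ~0.763
print(p_at_least_one, p_at_least_two)
```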
Now the number crunching. According to the report, Medicare has spent a total of $172 million supplying penis pumps to recipients, at "360 bucks a pop." This means a total of 478,000 penis pumps bought from 2006 to 2011. 45% of the current 49,435,610 Medicare recipients are male. In other words, Medicare bought one penis pump for every 46.5 eligible men. Inverting this, we can say that 2.15% of male Medicare recipients received a penis pump.

There are currently 128 members of congress (32 senators plus 96 representatives) who are males over the age of 65 and therefore Medicare-eligible. The probability that none of them received a federally funded penis pump is:

$(1-0.0215)^{128} \approx 6.19\%$

In other words, the chance of at least one member of congress having said penis pump is 93.8%, which is just shy of the 95% confidence that most statisticians agree on as significant. In order to get to 95% confidence, we would need a total of 138 male members of congress who are over the age of 65, and this has not happened yet as of 2014. Nevertheless, the estimate is close enough for us to agree with Ilyse that there is probably some member of congress who has one.

Is it possible that there are two or more penis pump recipients in congress? We did notice that Ilyse's quote refers to plural members of congress. Under the assumptions laid out above, the probability of having at least two federally funded penis pumps in congress is:

$1 - {128 \choose 0}(1-0.0215)^{128} - {128 \choose 1}(1-0.0215)^{127}(0.0215)^1 \approx 76.3\%$

Again, we would say this is probably true, though not nearly with the same amount of confidence as before. In order to reach 95% confidence that there are two or more congressional federally funded penis pumps, we would need 200 or more Medicare-eligible males in congress, which is unlikely to happen anytime soon.

Note: As a corollary to these calculations, I became the first developer in the history of mankind to type the following command: git merge --squash penispump.

Guest rant about rude kids

Today's guest post was written by Amie, who describes herself as a mom of a 9 and a 14-year-old, mathematician, and bigmouth. Nota bene: this was originally posted on Facebook as a spontaneous rant. Please don't misconstrue it as an academic argument.

Time for a rant. I'll preface this by saying that while my kids are creative, beautiful souls, so are many (perhaps all) children I've met, and it would be the height of arrogance to take credit for that as a parent. But one thing my husband and I can take credit for is their good manners, because that took work to develop. The first phrase I taught my daughter was "thank you," and it's been put to good use over the years.

I'm also loath to tell other parents what to do, but this is an exception: teach your fucking kids to say "please" and "thank you". If you are fortunate enough to visit another country, teach them to say "please" and "thank you" in the native language. After a week in paradise at a Club Med in Mexico, I'm at some kind of breaking point with rude rich people and their spoiled kids. And that includes the Europeans. Maybe especially the Europeans. What is it that when you're in France everyone's all "thank you and have a nice day" but when these petit bourgeois assholes come to Cancun they treat Mexicans like nonhumans? My son held the door for a face-lifted Russian lady today who didn't even say thank you.
Anyway, back to kids: I'm not saying that you should suppress your kids' natural joie de vivre and boisterous, rambunctious energy (though if that's what they're like, please keep them away from adults who are not in the mood for it). Just teach them to treat other people with basic respect and courtesy. That means prompting them to say "please," "thank you," and "nice to meet you" when they interact with other people.

Jordan Ellenberg just posted how a huge number of people accepted to the math Ph.D. program at the University of Wisconsin never wrote to tell him that they had accepted other offers. When other people are on a wait list! Whose fault is this? THE PARENTS' FAULT. Damn parents. Come on!!

P.S. Those of you who have put in the effort to raise polite kids: believe me, I've noticed. So has everyone else.

Categories: guest post, rant

Ya' make your own luck, n'est-ce pas?

This is a guest post by Leopold Dilg.

There's little chance we can underestimate our American virtues, since our overlords so seldom miss an opportunity to point them out. A case in point – in fact, le plus grand du genre, though my fingers tremble as I type that French expression, for reasons I'll explain soon enough – is the Cadillac commercial that interrupted the broadcast of the Olympics every few minutes. A masterpiece of casting and directing and location scouting, the ad follows a middle-aged man, muscular enough but not too proud to show a little paunch – manifestly a Master of the Universe – strutting around his chillingly modernist $10 million vacation house (or is it his first or fifth home? no matter), every pore oozing the manly, smirky bearing that sent Republican country-club women swooning over W.
It starts with Our Hero, viewed from the back, staring down his infinity pool. He pivots and stares down the viewer. He shows himself to be one of the more philosophical species of the MotU genus. “Why do we work so hard?” he puzzles. “For this? For stuff?….” We’re thrown off balance: Will this son of Goldman Sachs go all Walden Pond on us? Fat chance.
Now, still barefooted in his shorts and polo shirt, he's prowling his sleek living room (his two daughters and stay-at-home wife passively reading their magazines and ignoring the camera, props in his world no less than his unused pool and The Car yet to be seen) spitting bile at those foreign pansies who "stop by the café" after work and "take August off!….OFF!" Those French will stop at nothing.
“Why aren’t YOU like that,” he says, again staring us down and we yield to the intimidation. (Well gee, sir, of course I’m not. Who wants a month off? Not me, absolutely, no way.) “Why aren’t WE like that” he continues – an irresistible demand for totalizing merger. He’s got us now, we’re goose-stepping around the TV, chanting “USA! USA! No Augusts off! No Augusts off!”
No, he sneers, we’re “crazy, hardworking believers.” But those Frogs – the weaklings who called for a double-check about the WMDs before we Americans blasted Iraqi children to smithereens (woops, someone forgot to tell McDonalds, the official restaurant of the U.S. Olympic team, about the Freedom Fries thing; the offensive French Fries are THERE, right in our faces in the very next commercial, when the athletes bite gold medals and the awe-struck audience bites chicken nuggets, the Lunch of Champions) – might well think we’re “nuts.”
“Whatever,” he shrugs, end of discussion, who cares what they think. “Were the Wright Brothers insane? Bill Gates? Les Paul?… ALI?” He’s got us off-balance again – gee, after all, we DO kinda like Les Paul’s guitar, and we REALLY like Ali.
Of course! Never in a million years would the hip jazz guitarist insist on taking an August holiday. And the imprisoned-for-draft-dodging boxer couldn’t possibly side with the café-loafers on the WMD thing. Gee, or maybe…. But our MotU leaves us no time for stray dissenting thoughts. Throwing lunar dust in our eyes, he discloses that WE were the ones who landed on the moon. “And you know what we got?” Oh my god, that X-ray stare again, I can’t look away. “BORED. So we left.” YEAH, we’re chanting and goose-stepping again, “USA! USA! We got bored! We got bored!”
Gosh, I think maybe I DID see Buzz Aldrin drumming his fingers on the lunar module and looking at his watch. “But…” – he’s now heading into his bedroom, but first another stare, and pointing to the ceiling – “…we got a car up there, and left the keys in it. You know why? Because WE’re the only ones goin’ back up there, THAT’s why.” YES! YES! Of COURSE! HE’S going back to the moon, I’M going back to the moon, YOU’RE going back to the moon, WE’RE ALL going back to the moon. EVERYONE WITH A U.S. PASSPORT is going back to the moon!!
Damn, if only the NASA budget wasn't cut after all that looting by the Wall Street boys to pay for their $10 million vacation homes, WE'D all be going to get the keys and turn the ignition on the rover that's been sitting 45 years in the lunar garage waiting for us. But again – he must be reading our minds – he's leaving us no time for dissent, he pops immediately out of his bedroom in his $12,000 suit, gives us the evil eye again, yanks us from the edge of complaint with a sharp, "But I digress!" and besides he's got us distracted with the best tailoring we've ever seen.
Finally, he’s out in the driveway, making his way to the shiny car that’ll carry him to lower Manhattan. (But where’s the chauffer? And don’t those MotUs drive Mazerattis and Bentleys? Is this guy trying to pull one over on the suburban rubes who buy Cadillacs stupidly thinking they’ve made it to the big time?)
Now the climax: “You work hard, you create your own luck, and you gotta believe anything is possible,” he declaims.
Yes, we believe that! The 17 million unemployed and underemployed, the 47 million who need food stamps to keep from starving, the 8 million families thrown out of their homes – WE ALL BELIEVE. From all the windows in the neighborhood, from all the apartments across Harlem, from Sandy-shattered homes in Brooklyn and Staten Island, from the barren blast furnaces of Bethlehem and Youngstown, from the foreclosed neighborhoods in Detroit and Phoenix, from the 70-year-olds doing Wal-mart inventory because their retirement went bust, from all the kitchens of all the families carrying $1 trillion in college debt, I hear the national chant, "YOU MAKE YOUR OWN LUCK! YOU MAKE YOUR OWN LUCK!"
And finally – the denouement – from the front seat of his car, our Master of the Universe answers the question we’d all but forgotten. “As for all the stuff? That’s the upside of taking only two weeks off in August.” Then the final cold-blooded stare and – too true to be true – a manly wink, the kind of wink that makes us all collaborators and comrades-in-arms, and he inserts the final dagger: “N’est-ce pas?”
N’est-ce pas?
Categories: guest post
|
2016-09-30 01:30:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2849399149417877, "perplexity": 3443.402700019298}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661974.78/warc/CC-MAIN-20160924173741-00201-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://rssaketh.github.io/project_pages/obj_disc_new.html
|
The Pursuit of Knowledge:
Discovering and Localizing novel concepts using Dual Memory
ICCV 2021
Saketh Rambhatla Rama Chellappa Abhinav Shrivastava
[Paper] [Supplementary] [Poster] [Workshop Challenge]
A detector trained on 20 VOC (known) classes struggles on out-of-distribution images (e.g., COCO) in the presence of novel (unknown) objects (e.g., bear). Our discovery and localization framework builds on this detector and can reliably localize and group semantically meaningful "patterns" in challenging images with both known and novel objects. Novel objects belonging to the same class are assigned the same bounding box color. Best viewed in color.
# Abstract
We tackle object category discovery, which is the problem of discovering and localizing novel objects in a large unlabeled dataset. While existing methods show results on datasets with less cluttered scenes and fewer object instances per image, we present our results on the challenging COCO dataset. Moreover, we argue that, rather than discovering new categories from scratch, discovery algorithms can benefit from identifying what is already known and focusing their attention on the unknown. We propose a method that exploits prior knowledge about certain object types to discover new categories by leveraging two memory modules, namely Working and Semantic memory. We show the performance of our detector on the COCO minival dataset to demonstrate its in-the-wild capabilities.
# Approach Overview
Our system operates sequentially, processing one image at a time. First, the Encoding module processes the image and outputs candidate regions and features. The Retrieval module assigns each region to either Semantic or Working memory. In the demonstration, after the first iteration, two regions (a human and a car) have been assigned to the two slots of the Semantic memory and the remaining regions have been assigned to four different slots (Slots 1-4) in the Working memory. In the next iteration, the retrieval module assigns four regions (two humans and two cars) to the Semantic memory, while the remaining regions are assigned to two previously created slots (Slots 1 and 3) and three new slots (Slots 5-7). Note that the retrieval module can either decide to populate an existing slot in the Working memory or create new slots if necessary. This capability eliminates the need to know the number of "unknown" objects a priori. More details about the approach can be found here.
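To make the assign-or-create behavior concrete, here is a minimal sketch of an online slot-assignment rule. The cosine-similarity threshold and the running-mean prototype update are our assumptions for illustration; they are not claimed to be the paper's actual retrieval rule.

```python
import numpy as np

def assign_to_slots(region_features, slots, threshold=0.7):
    """Assign each region feature to its best-matching slot, or open a new slot."""
    for feat in region_features:
        feat = feat / np.linalg.norm(feat)
        if slots:
            sims = [float(feat @ (s / np.linalg.norm(s))) for s in slots]
            best = int(np.argmax(sims))
            if sims[best] >= threshold:
                # Region joins the existing slot; update its prototype.
                slots[best] = 0.9 * slots[best] + 0.1 * feat
                continue
        # No sufficiently similar slot exists, so a new one is created,
        # which is why the number of unknown objects need not be known a priori.
        slots.append(feat.copy())
    return slots
```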
# Qualitative results on Object Discovery
Concepts discovered by our method in COCO 2014 train set that can be evaluated using ground truth annotations
Concepts discovered by our method in COCO 2014 train set that cannot be evaluated using ground truth annotations. Check this out for more qualitative results.
# Qualitative results on Object Detection
To demonstrate the performance of our approach on unseen data and its practical utility, we evaluate detectors obtained from our approach on COCO-minival. The detectors display a lot of intra-class variation. We achieve the highest AP of 17.38% for the bear class and the lowest AP of 0.08% for traffic lights.
# Workshop and Challenge Information
The discovery setup and evaluation protocol described in this paper will be hosted as a challenge at the Visual Perception and Learning in an Open World workshop (CVPR 2022). For more details of the challenge, visit this doc. Baseline code to perform discovery is available here. Teams can submit an entry to the leaderboard by emailing their results to anubhav[AT]umd[DOT]edu or pulkit[AT]umd[DOT]edu with the subject "[VPLOW-CHALLENGE-SUBMISSION]; Team Name: ". The leaderboard will be updated every day at 11pm ET. Clarification of ranking metric: Teams will be ranked based on the Normalized AuC metric. The formula for Normalized AuC is given by $$\mathrm{AuC} \cdot e^{-C/N}$$, where $$AuC$$ is the area under the curve of cumulative purity and coverage, $$C$$ is the number of clusters generated by a method, and $$N$$ is the total number of annotations in the COCO 2014 train set. The AuC in its current form doesn't penalize overclustering. The Normalized AuC, on the other hand, penalizes methods which overcluster.
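As an illustration, the ranking metric can be computed along the following lines. Only the AuC * exp(-C/N) formula comes from the challenge description; the trapezoidal integration over the cumulative purity-coverage curve and the function name are our assumptions.

```python
import numpy as np

def normalized_auc(coverage, purity, num_clusters, num_annotations):
    """Area under the cumulative purity-coverage curve, penalized for overclustering."""
    auc = np.trapz(purity, coverage)                 # area under purity vs. coverage
    return auc * np.exp(-num_clusters / num_annotations)
```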
|
2022-06-27 05:06:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2325199544429779, "perplexity": 1913.030315946328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103328647.18/warc/CC-MAIN-20220627043200-20220627073200-00363.warc.gz"}
|
https://zbmath.org/authors/?q=ai%3Alandau.zeph-a
|
# zbMATH — the first resource for mathematics
## Landau, Zeph A.
Compute Distance To:
Author ID: landau.zeph-a Published as: Landau, Z.; Landau, Zeph; Landau, Zeph A. External Links: MGP · Math-Net.Ru · Wikidata
Documents Indexed: 32 Publications since 1995
all top 5
#### Co-Authors
2 single-authored 7 Balan, Radu V. 6 Aharonov, Dorit 6 Arad, Itai 6 Casazza, Peter George 6 Landau, Henry Jacob 5 Abrams, Aaron 5 Heil, Christopher E. 5 Pommersheim, James E. 4 Vazirani, Umesh V. 4 Zaslow, Eric 2 Jones, Vaughan Frederick Randal 2 Kempe, Julia 2 Lloyd, Seth 2 Regev, Oded 2 Sunder, Viakalathur S. 2 van Dam, Wim 2 Vidick, Thomas 1 Babson, Eric K. 1 Daubechies, Ingrid Chantal 1 Ganzell, Sandy 1 Gharibian, Sevag 1 Huang, Yichen 1 Kodiyalam, Vijay 1 Kuwahara, Tomotaka 1 Reid, O. 1 Russell, Alexander C. 1 Shin, Seung Woo 1 Su, Francis Edward 1 Yershov, I.
all top 5
#### Serials
3 The Journal of Fourier Analysis and Applications 2 Journal of Functional Analysis 2 SIAM Journal on Computing 1 Communications in Mathematical Physics 1 Israel Journal of Mathematics 1 Theory of Probability and its Applications 1 Geometriae Dedicata 1 Pacific Journal of Mathematics 1 Social Choice and Welfare 1 Algorithmica 1 Journal of Theoretical Probability 1 Random Structures & Algorithms 1 SIAM Review 1 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 1 Indagationes Mathematicae. New Series 1 Applied and Computational Harmonic Analysis 1 Combinatorics, Probability and Computing 1 The Electronic Journal of Combinatorics 1 Advances in Computational Mathematics 1 Electronic Research Announcements of the American Mathematical Society 1 Journal of Statistical Mechanics: Theory and Experiment 1 Foundations and Trends in Theoretical Computer Science 1 Journal of Probability and Statistics
all top 5
#### Fields
9 Functional analysis (46-XX) 9 Computer science (68-XX) 8 Harmonic analysis on Euclidean spaces (42-XX) 8 Quantum theory (81-XX) 3 Probability theory and stochastic processes (60-XX) 3 Statistical mechanics, structure of matter (82-XX) 2 Combinatorics (05-XX) 2 Associative rings and algebras (16-XX) 2 Category theory; homological algebra (18-XX) 2 Manifolds and cell complexes (57-XX) 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 1 Group theory and generalizations (20-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Operator theory (47-XX) 1 Statistics (62-XX) 1 Mechanics of particles and systems (70-XX) 1 Information and communication theory, circuits (94-XX)
#### Citations contained in zbMATH
26 Publications have been cited 412 times in 332 Documents Cited by Year
Gabor time-frequency lattices and the Wexler-Raz identity. Zbl 0888.47018
Daubechies, Ingrid; Landau, H. J.; Landau, Zeph
1995
Density, overcompleteness, and localization of frames. I: Theory. Zbl 1096.42014
Balan, Radu; Casazza, Peter G.; Heil, Christopher; Landau, Zeph
2006
Adiabatic quantum computation is equivalent to standard quantum computation. Zbl 1134.81009
Aharonov, Dorit; Van Dam, Wim; Kempe, Julia; Landau, Zeph; Lloyd, Seth; Regev, Oded
2007
Density, overcompleteness, and localization of frames. II: Gabor systems. Zbl 1097.42022
Balan, Radu; Casazza, Peter G.; Heil, Christopher; Landau, Zeph
2006
Deficits and excesses of frames. Zbl 1029.42030
Balan, Radu; Casazza, Peter G.; Heil, Christopher; Landau, Zeph
2003
Quantum Hamiltonian complexity. Zbl 1329.68117
Gharibian, Sevag; Huang, Yichen; Landau, Zeph; Shin, Seung Woo
2014
A polynomial quantum algorithm for approximating the Jones polynomial. Zbl 1301.68129
Aharonov, Dorit; Jones, Vaughan; Landau, Zeph
2006
Exchange relation planar algebras. Zbl 1022.46039
Landau, Zeph A.
2002
Adiabatic quantum computation is equivalent to standard quantum computation. Zbl 1152.81008
Aharonov, Dorit; van Dam, Wim; Kempe, Julia; Landau, Zeph; Lloyd, Seth; Regev, Oded
2008
The planar algebra associated to a Kac algebra. Zbl 1039.46049
Kodiyalam, Vijay; Landau, Zeph; Sunder, V. S.
2003
A polynomial quantum algorithm for approximating the Jones polynomial. Zbl 1191.68313
Aharonov, Dorit; Jones, Vaughan; Landau, Zeph
2009
Redundancy for localized frames. Zbl 1254.42037
Balan, Radu; Casazza, Pete; Landau, Zeph
2011
Random Cayley graphs are expanders: a simple proof of the Alon-Roichman theorem. Zbl 1053.05060
Landau, Zeph; Russell, Alexander
2004
Excesses of Gabor frames. Zbl 1028.42021
Balan, Radu; Casazza, Peter G.; Heil, Christopher; Landau, Zeph
2003
Quantum computation and the evaluation of tensor networks. Zbl 1209.68261
2010
The detectability lemma and quantum gap amplification. Zbl 1304.68049
Aharonov, Dorit; Arad, Itai; Landau, Zeph; Vazirani, Umesh
2009
Measure functions for frames. Zbl 1133.46012
2007
Density, overcompleteness, and localization of frames. Zbl 1142.42313
Balan, Radu; Casazza, Peter G.; Heil, Christopher; Landau, Zeph
2006
Fuss-Catalan algebras and chains of intermediate subfactors. Zbl 1055.46511
Landau, Zeph A.
2001
Rigorous RG algorithms and area laws for low energy eigenstates in 1D. Zbl 1376.81079
Arad, Itai; Landau, Zeph; Vazirani, Umesh; Vidick, Thomas
2017
Planar depth and planar subalgebras. Zbl 1030.46078
Landau, Zeph; Sunder, V. S.
2002
A fair division solution to the problem of redistricting. Zbl 1184.91189
Landau, Z.; Reid, O.; Yershov, I.
2009
On the trigonometric moment problem in two dimensions. Zbl 1258.42023
Landau, H. J.; Landau, Zeph
2012
Rigorous RG algorithms and area laws for low energy eigenstates in 1D. Zbl 1406.81101
Arad, Itai; Landau, Zeph; Vazirani, Umesh V.; Vidick, Thomas
2017
Fair division and redistricting. Zbl 1307.91156
Landau, Zeph; Su, Francis Edward
2014
The 1D area law and the complexity of quantum states: a combinatorial approach. Zbl 1292.81010
Aharonov, Dorit; Arad, Itai; Landau, Zeph; Vazirani, Umesh
2011
Rigorous RG algorithms and area laws for low energy eigenstates in 1D. Zbl 1376.81079
Arad, Itai; Landau, Zeph; Vazirani, Umesh; Vidick, Thomas
2017
Rigorous RG algorithms and area laws for low energy eigenstates in 1D. Zbl 1406.81101
Arad, Itai; Landau, Zeph; Vazirani, Umesh V.; Vidick, Thomas
2017
Quantum Hamiltonian complexity. Zbl 1329.68117
Gharibian, Sevag; Huang, Yichen; Landau, Zeph; Shin, Seung Woo
2014
Fair division and redistricting. Zbl 1307.91156
Landau, Zeph; Su, Francis Edward
2014
On the trigonometric moment problem in two dimensions. Zbl 1258.42023
Landau, H. J.; Landau, Zeph
2012
Redundancy for localized frames. Zbl 1254.42037
Balan, Radu; Casazza, Pete; Landau, Zeph
2011
The 1D area law and the complexity of quantum states: a combinatorial approach. Zbl 1292.81010
Aharonov, Dorit; Arad, Itai; Landau, Zeph; Vazirani, Umesh
2011
Quantum computation and the evaluation of tensor networks. Zbl 1209.68261
2010
A polynomial quantum algorithm for approximating the Jones polynomial. Zbl 1191.68313
Aharonov, Dorit; Jones, Vaughan; Landau, Zeph
2009
The detectability lemma and quantum gap amplification. Zbl 1304.68049
Aharonov, Dorit; Arad, Itai; Landau, Zeph; Vazirani, Umesh
2009
A fair division solution to the problem of redistricting. Zbl 1184.91189
Landau, Z.; Reid, O.; Yershov, I.
2009
Adiabatic quantum computation is equivalent to standard quantum computation. Zbl 1152.81008
Aharonov, Dorit; van Dam, Wim; Kempe, Julia; Landau, Zeph; Lloyd, Seth; Regev, Oded
2008
Adiabatic quantum computation is equivalent to standard quantum computation. Zbl 1134.81009
Aharonov, Dorit; Van Dam, Wim; Kempe, Julia; Landau, Zeph; Lloyd, Seth; Regev, Oded
2007
Measure functions for frames. Zbl 1133.46012
2007
Density, overcompleteness, and localization of frames. I: Theory. Zbl 1096.42014
Balan, Radu; Casazza, Peter G.; Heil, Christopher; Landau, Zeph
2006
Density, overcompleteness, and localization of frames. II: Gabor systems. Zbl 1097.42022
Balan, Radu; Casazza, Peter G.; Heil, Christopher; Landau, Zeph
2006
A polynomial quantum algorithm for approximating the Jones polynomial. Zbl 1301.68129
Aharonov, Dorit; Jones, Vaughan; Landau, Zeph
2006
Density, overcompleteness, and localization of frames. Zbl 1142.42313
Balan, Radu; Casazza, Peter G.; Heil, Christopher; Landau, Zeph
2006
Random Cayley graphs are expanders: a simple proof of the Alon-Roichman theorem. Zbl 1053.05060
Landau, Zeph; Russell, Alexander
2004
Deficits and excesses of frames. Zbl 1029.42030
Balan, Radu; Casazza, Peter G.; Heil, Christopher; Landau, Zeph
2003
The planar algebra associated to a Kac algebra. Zbl 1039.46049
Kodiyalam, Vijay; Landau, Zeph; Sunder, V. S.
2003
Excesses of Gabor frames. Zbl 1028.42021
Balan, Radu; Casazza, Peter G.; Heil, Christopher; Landau, Zeph
2003
Exchange relation planar algebras. Zbl 1022.46039
Landau, Zeph A.
2002
Planar depth and planar subalgebras. Zbl 1030.46078
Landau, Zeph; Sunder, V. S.
2002
Fuss-Catalan algebras and chains of intermediate subfactors. Zbl 1055.46511
Landau, Zeph A.
2001
Gabor time-frequency lattices and the Wexler-Raz identity. Zbl 0888.47018
Daubechies, Ingrid; Landau, H. J.; Landau, Zeph
1995
all top 5
#### Cited by 486 Authors
20 Gröchenig, Karlheinz 10 Han, Deguang 10 Lu, Songfeng 10 Sun, Jie 9 Sun, Wenchang 8 Casazza, Peter George 8 Heil, Christopher E. 8 Sun, Qiyu 7 Luef, Franz 7 Romero, José Luis 6 Balan, Radu V. 6 Christensen, Ole 6 Landau, Zeph A. 6 Liu, Fang 6 Shen, Zuowei 5 Feichtinger, Hans Georg 5 Kauffman, Louis Hirsch 5 Kodiyalam, Vijay 5 Koo, Yooyoung 5 Kutyniok, Gitta 5 Li, Yunzhang 5 Lim, Jae Kun 5 Liu, Zhengwei 5 Sunder, Viakalathur S. 4 Aharonov, Dorit 4 Gabardo, Jean-Pierre 4 Jakobsen, Mads Sielemann 4 Kashefi, Elham 4 Krishtal, Ilya Arkadievich 4 Li, Shidong 4 Pfander, Götz E. 3 Brandão, Fernando G. S. L. 3 Cirac, Juan Ignacio 3 Freedman, Michael Hartley 3 Grossman, Pinhas 3 Ji, Hui 3 Kaiblinger, Norbert 3 Lidar, Daniel A. 3 Lomonaco, Samuel J. jun. 3 Morrison, Scott 3 Myers, Robert C. 3 Strohmer, Thomas 2 Aldroubi, Akram 2 Antezana, Jorge 2 Arad, Itai 2 Bakshi, Keshab Chandra 2 Balazs, Peter 2 Baskakov, Anatoliĭ Grigor’evich 2 Bhattacharyya, Arpan 2 Bishop, Shannon 2 Bravyi, Sergey B. 2 Cabrelli, Carlos A. 2 Choi, Vicky Siu-Ngan 2 Corach, Gustavo 2 Cui, Shawn Xingshan 2 Das, Paramita 2 De, Sandipan 2 Dutkay, Dorin Ervin 2 Eldar, Lior 2 Fan, Zhitao 2 Futamura, Fumiko 2 Gao, Chao 2 Ge, Yimin 2 Geraci, Joseph 2 Geronimo, Jeffrey S. 2 Ghosh, Shamindra Kumar 2 Gosset, David 2 Grohs, Philipp 2 Gupta, Ved Prakash 2 Haimi, Antti 2 Janssen, Augustus Josephus Elizabeth Maria 2 Jones, Vaughan Frederick Randal 2 Kastoryano, Michael J. 2 Krovi, Hari 2 Kuperberg, Gregory John 2 Labate, Demetrio 2 Lammers, Mark C. 2 Larson, David Royal 2 Leinert, Michael 2 Lemm, Marius 2 Lemvig, Jakob 2 Liu, Bei 2 Matusiak, Ewa 2 Mitkovski, Mishko 2 Molter, Ursula Maria 2 Ogawa, Hidemitsu 2 Ortega-Cerdà, Joaquim 2 Palcoux, Sebastien 2 Pérez-García, David 2 Peters, Emily 2 Powell, Alexander M. 2 Ren, Yunxiang 2 Ron, Amos 2 Ruiz, Mariano A. 2 Rzeszotnik, Ziemowit 2 Severini, Simone 2 Snyder, Noah 2 Søndergaard, Peter L. 2 Stöckler, Joachim 2 Stoeva, Diana T. ...and 386 more Authors
all top 5
#### Cited in 106 Serials
24 Journal of Functional Analysis 24 The Journal of Fourier Analysis and Applications 22 Quantum Information Processing 18 Applied and Computational Harmonic Analysis 14 Journal of Mathematical Physics 11 Transactions of the American Mathematical Society 9 Journal of Mathematical Analysis and Applications 8 Communications in Mathematical Physics 8 Journal of High Energy Physics 7 Advances in Mathematics 6 Journal of Approximation Theory 6 Proceedings of the American Mathematical Society 6 Advances in Computational Mathematics 5 International Journal of Mathematics 5 International Journal of Quantum Information 4 International Journal of Theoretical Physics 4 Monatshefte für Mathematik 4 Numerical Functional Analysis and Optimization 4 Acta Applicandae Mathematicae 4 The Journal of Geometric Analysis 4 Linear Algebra and its Applications 4 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 4 New Journal of Physics 3 Journal of Statistical Physics 3 Integral Equations and Operator Theory 3 SIAM Journal on Computing 3 Theoretical Computer Science 3 MSCS. Mathematical Structures in Computer Science 3 Journal of Physics A: Mathematical and Theoretical 2 Letters in Mathematical Physics 2 Reviews of Modern Physics 2 Results in Mathematics 2 Advances in Applied Mathematics 2 Constructive Approximation 2 Journal of the American Mathematical Society 2 Random Structures & Algorithms 2 Journal of Contemporary Mathematical Analysis. Armenian Academy of Sciences 2 Annals of Physics 2 Electronic Research Announcements of the American Mathematical Society 2 Open Systems & Information Dynamics 2 Acta Mathematica Sinica. English Series 2 Comptes Rendus. Mathématique. Académie des Sciences, Paris 2 Banach Journal of Mathematical Analysis 1 Computers & Mathematics with Applications 1 Israel Journal of Mathematics 1 Linear and Multilinear Algebra 1 Mathematical Notes 1 Physics Letters. A 1 Physics Reports 1 Theoretical and Mathematical Physics 1 Mathematics of Computation 1 Chaos, Solitons and Fractals 1 Annales de l’Institut Fourier 1 Applied Mathematics and Computation 1 Duke Mathematical Journal 1 Inventiones Mathematicae 1 Journal of Algebra 1 Journal of Computational and Applied Mathematics 1 Journal of Pure and Applied Algebra 1 Mathematische Nachrichten 1 Michigan Mathematical Journal 1 Osaka Journal of Mathematics 1 Pacific Journal of Mathematics 1 European Journal of Combinatorics 1 Combinatorica 1 Circuits, Systems, and Signal Processing 1 Physica D 1 Social Choice and Welfare 1 Statistical Science 1 Revista Matemática Iberoamericana 1 Mathematical and Computer Modelling 1 SIAM Journal on Discrete Mathematics 1 Journal of Scientific Computing 1 Machine Learning 1 Proceedings of the National Academy of Sciences of the United States of America 1 SIAM Review 1 Journal of Knot Theory and its Ramifications 1 Russian Journal of Mathematical Physics 1 Journal of Mathematical Sciences (New York) 1 Annales Mathématiques Blaise Pascal 1 Advances in Applied Clifford Algebras 1 Selecta Mathematica. New Series 1 Mathematical Communications 1 Theory of Computing Systems 1 Journal of Inequalities and Applications 1 Revista Matemática Complutense 1 Philosophical Transactions of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 1 Annales Henri Poincaré 1 Algebraic & Geometric Topology 1 Journal of the Australian Mathematical Society 1 Journal of Systems Science and Complexity 1 Bulletin of the Malaysian Mathematical Sciences Society. 
Second Series 1 Multiscale Modeling & Simulation 1 Sampling Theory in Signal and Image Processing 1 Analysis and Applications (Singapore) 1 International Journal of Wavelets, Multiresolution and Information Processing 1 Journal of Function Spaces and Applications 1 Complex Analysis and Operator Theory 1 Ars Mathematica Contemporanea 1 Physical Review A, Third Series ...and 6 more Serials
#### Cited in 42 Fields
158 Harmonic analysis on Euclidean spaces (42-XX) 111 Quantum theory (81-XX) 78 Functional analysis (46-XX) 56 Computer science (68-XX) 42 Information and communication theory, circuits (94-XX) 39 Operator theory (47-XX) 23 Statistical mechanics, structure of matter (82-XX) 21 Approximations and expansions (41-XX) 16 Combinatorics (05-XX) 16 Manifolds and cell complexes (57-XX) 16 Numerical analysis (65-XX) 13 Abstract harmonic analysis (43-XX) 8 Category theory; homological algebra (18-XX) 7 Linear and multilinear algebra; matrix theory (15-XX) 7 Associative rings and algebras (16-XX) 7 Group theory and generalizations (20-XX) 7 Relativity and gravitational theory (83-XX) 6 Partial differential equations (35-XX) 6 Probability theory and stochastic processes (60-XX) 5 Number theory (11-XX) 5 Topological groups, Lie groups (22-XX) 4 Order, lattices, ordered algebraic structures (06-XX) 4 Operations research, mathematical programming (90-XX) 3 Algebraic geometry (14-XX) 3 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 2 Mathematical logic and foundations (03-XX) 2 Functions of a complex variable (30-XX) 2 Special functions (33-XX) 2 Dynamical systems and ergodic theory (37-XX) 2 Integral transforms, operational calculus (44-XX) 2 Global analysis, analysis on manifolds (58-XX) 2 Statistics (62-XX) 2 Mechanics of particles and systems (70-XX) 2 Classical thermodynamics, heat transfer (80-XX) 1 History and biography (01-XX) 1 Nonassociative rings and algebras (17-XX) 1 $$K$$-theory (19-XX) 1 Potential theory (31-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Difference and functional equations (39-XX) 1 Sequences, series, summability (40-XX) 1 Biology and other natural sciences (92-XX)
|
2021-01-20 18:07:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4152829051017761, "perplexity": 9347.359015512617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703521139.30/warc/CC-MAIN-20210120151257-20210120181257-00582.warc.gz"}
|
http://math.stackexchange.com/questions/569848/summations-can-someone-explain-the-following-solution-to-a-summation-problem
|
# Summations: Can Someone Explain The Following Solution to a Summation Problem?
I have the following solution to a problem that I'm attempting to understand but I cannot find a rule online which explains it. Can someone please explain where the i comes from in the following summation?
$X=\sum\limits_{i=1}^{n-1}\sum\limits_{j=i}^{2n+1}(1)$
(In the following line I understand where the $(2n + 2)$ comes from: since 1 is being subtracted in the index, you must add 1 to the variable. But I do not understand why the $i$ is added; why is it not just $\sum\limits_{i=1}^{n}(2n + 2 + 1)$, which is what I arrived at in my own solution?)
$=\sum\limits_{i=1}^{n-1}(2n+1-i+1) = \sum\limits_{i=1}^{n-1}(2n+2-i)$
$=(n-1)(2n+2) - \sum\limits_{i=1}^{n-1}(i)$
(Please also explain how the second term is derived. Why is it not the normal arithmetic sequence sum of $n(n + 1) / 2$?)
$=(n-1)(2n+2) - (n-1)\frac{1+(n-1)}{2}$
The rest of the solution is trivial simplification that I understand. the lines which begin with = are the actual solution parenthetical statements are my personal thoughts / questions
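A brute-force check (added here, not part of the original thread) makes the step clear: the inner sum simply counts the integers $j$ from $i$ to $2n+1$, which is $2n+1-i+1 = 2n+2-i$ terms, so the $i$ really does survive into the outer sum. A short Python sketch verifying the closed form derived above:

    # Verify X = sum_{i=1}^{n-1} sum_{j=i}^{2n+1} 1 against (n-1)(2n+2) - (n-1)n/2.
    def brute(n):
        return sum(1 for i in range(1, n) for j in range(i, 2 * n + 2))

    def closed(n):
        return (n - 1) * (2 * n + 2) - (n - 1) * n // 2

    for n in range(1, 10):
        assert brute(n) == closed(n)
    print("closed form matches the double sum for n = 1..9")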
-
can you possibly explain further as i do not understand why 5 and 10 must be considered in finding a solution to the problem. – user17321 Nov 17 '13 at 1:17
|
2014-09-23 02:23:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6920773983001709, "perplexity": 195.75826874632102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137906.42/warc/CC-MAIN-20140914011217-00005-ip-10-234-18-248.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/2541172/confusion-proof-by-contradiction-when-starting-from-conclusion
|
# Confusion proof by contradiction when starting from conclusion
I'm not completely sure how to phrase my question, so bear with me.
First example:
If I would need to prove 'Suppose $n$ is integer, if $n$ is odd, then $n^2$ is odd' then a proof by contradiction would look something like this.
Suppose $n$ is integer, if $n$ is odd then $n^2$ is even.
Assume $n$ is odd, so $n=2a+1$.
Therefore $(2a+1)^2$ is even
And $4a^2+4a+1$ is even
Let $b=2a^2+2a$
Therefore $2b+1$ is even.
So in this case you start from the hypothesis to (dis)proof the conclusion.
The confusion
My confusion is about when this is done the conclusion is used to deal with the hypothesis e.g.
Suppose n is integer, if n^2 is odd, then n is odd.
Proof by contrapositive
If proof by contrapositive would be used, you could rewrite this to: if n is even, then $n^2$ is even.
And this is pretty straight forward to prove since you can plug in the knowledge about $n$ (the hypotheses) into the $n^2$ (the conclusion)
But how can this be proved using contradiction?
I have looked at various resources and it seems that knowledge about the conclusion is plugged back into the hypotheses for example:
Suppose $n$ is integer, if $n^2$ is odd, then $n$ is even.
Assume $n$ is even, then $n=2a$
Now plug this into $n^2$ is odd, then $(2a)^2$ is odd
Let $b$ is $2a^2$
And $(2a)^2$ can be simplified to
$2b$ is odd Which is a contradiction.
But this feels fishy. Normally you work from the hypothesis and to proof the conclusion. But it seems that in this case, the reverse is allowed.
Another example of my confusion: Suppose a is integer. If a^2 is even, then a is even. Proof by contradiction. Suppose a is integer. If a^2 is even, then a is odd. Since a is odd, then a=2c+1. Then a^2=(2c+1)^2=2(2c^2+2c)+1. So a^2 is odd, which is a contradiction.
In the above example you plug knowledge of the conclusion back into the hypothesis. Normally with direct proof you plug knowledge from the hypothesis into the conclusion. This is what is causing my confusion.
And Another example of my confusion:
Is the proof using contradiction for both these statements exactly the same?
1: Suppose n is integer. If n is odd, then n^2 is odd.
2: Suppose n is integer. If n^2 is odd, then n is odd.
• Not sure I follow. The statements "$n$ odd $\implies n^2$ odd" and "$n^2$ odd $\implies n$ odd" are not equivalent. – lulu Nov 28 '17 at 13:53
• You are right. A negation is needed on the second one. – user370967 Nov 28 '17 at 13:54
• @lulu yes, they are not equivalent. They are 2 different examples of proofs, but help to clarify where my problem is. – pveentjer Nov 28 '17 at 13:56
• Well...I don't see the first proof as being a "proof by contradiction". Rather, it is a (perfectly valid) proof by explicit computation: $(2a+1)^2=4a^2+4a+1=2\times(2a+2)+1$ is odd. End of proof. – lulu Nov 28 '17 at 14:01
• Typo: left off the multiplicative factor of $a$ in my expression (doesn't change the argument). – lulu Nov 28 '17 at 14:10
I think we have to go back to using logical symbols. Let $A$ and $B$ be statements (like "$n$ is even" or "$n^2$ is odd"), each of which is either true or false.
$A\vee B$ means "$A$ or $B$"
$A\wedge B$ means "$A$ and $B$"
$\neg A$ means "not $A$"
The implication $A\Rightarrow B$ is defined as $\neg A\vee B$ and has the meaning "If $A$, then $B$".
Suppose $\neg(A\Rightarrow B)$ is true. If you get a contradiction, then you deduce that $\neg(A\Rightarrow B)$ is false, hence $A\Rightarrow B$ is true. This is called proof by contradiction. But be careful: $\neg(A\Rightarrow B)$ is equivalent to $A\wedge \neg B$.
On the other hand you can show that $A\Rightarrow B$ is equivalent to $\neg B\Rightarrow \neg A$, which is called the contraposition. So if you prove that the contraposition is true, than your original statement is true too.
Let us go to your example:
Proof by contradiction for "If $n$ is odd, then $n^2$ is odd."
Suppose $n$ is odd and $n^2$ is even. Then there exist integers $k,m$ such that $n=2k+1$ and $n^2=2m$. We get $$2m=n^2=(2k+1)^2=4k^2+4k+1=2(2k^2+2k)+1$$ which is a contradiction since the LHS is even while the RHS is odd.
Using proof by contraposition for "If $n^2$ is even, then $n$ is even":
The contraposition is: "If $n$ is odd, then $n^2$ is odd", which we proved by contradiction above.
And Another example of my confusion:
Is the proof using contradiction for both these statements exactly the same?
1: Suppose n is integer. If n is odd, then n^2 is odd.
2: Suppose n is integer. If n^2 is odd, then n is odd.
First, we suppose the negation of the statement is true, which is
1': Suppose $n$ is odd and $n^2$ is even.
2': Suppose $n^2$ is odd and $n$ is even.
Since $1$ and $2$ are different (not equivalent) statements, so $1'$ and $2'$ are.
But in fact, you will produce the same contradiction in both cases with the same idea/way. But if you suppose $1'$ you conclude $1$ and if you suppose $2'$ you conclude $2$.
$1$ and $2$ together means:
$n$ is odd if and only if $n^2$ is odd.
This case is a little bit special, since you can use the same arguments for both directions. That is not always natural. Normally you need totally different ways to prove an equivalence.
• Ok. And now proof 'if n^2 is odd, then n is odd' using contradiction. See page 115 from 'Book of Proof' people.vcu.edu/~rhammack/BookOfProof/Contradict.pdf. – pveentjer Nov 28 '17 at 14:30
• They did the same as me. If you like to proof "If $n^2$ is odd, then $n$ is odd" you have to consider the negation which is "$n^2$ is odd and $n$ is even" and produce a contradiction. – Mundron Schmidt Nov 28 '17 at 14:37
• You agree that for this proof, knowledge from the conclusion is plugged back into the hypothesis? Normally with direct proofs you plug knowledge from the hypothesis in the conclusion. This reverse behavior for the above contradiction proof is the source of my confusion.. – pveentjer Nov 28 '17 at 14:39
• If you do a proof by contradiction, you have to drop the idea of "conclude one statement from the other". You suppose that the hypothesis AND the negation of the conclusion are given. You have both statements and combine them to a contradiction. – Mundron Schmidt Nov 28 '17 at 14:47
• So the proof by contradiction for A->B, or B->A would be the same? See the last example of my opening post. edit Not the same because either you get 'A and not B' or 'B and not A' as base for the contradictions. – pveentjer Nov 30 '17 at 14:27
Afters studying the topic some more, I have gained some deeper insights.
So imagine the statement S needs to be proved by contradiction, then
~S->(r/\ ~r)
So in other words, the negation of S would lead to some contradiction where r is both true and false. Since the contradiction is always false, the only way to make this implication true is for ~S to be false. And since ~S is false, S must be true.
Assume that S is defined as some implication:
p1/\p2/\.../\pn->q
If we plug this into a proof by contradiction we would get:
~(p1/\p2/\.../\pn->q)->(px/\~px)
Where px is one of the contradicting premises. If we clean this up a bit:
~(~(p1/\p2/\.../\pn)\/q)->(px/\~px) (Implication conversion law)
~(~p1\/~p2\/...\/~pn\/q)->(px/\~px) (de Morgan)
(~~p1/\~~p2/\.../\~~pn/\~q)->(px/\~px) (de Morgan)
(p1/\p2/\.../\pn/\~q)->(px/\~px) (Double negation law)
To answer my own question: there is no longer an implication between the premises and the conclusion, and therefore knowledge of the conclusion can be plugged into the premises, or vice versa.
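As a mechanical double-check of the key step (added here, not part of the original post), the equivalence ~(p -> q) = p /\ ~q used above can be confirmed by enumerating all truth assignments:

    # Confirm that NOT(p -> q) is logically equivalent to (p AND NOT q).
    from itertools import product

    def implies(p, q):
        return (not p) or q

    for p, q in product([False, True], repeat=2):
        assert (not implies(p, q)) == (p and not q)
    print("NOT(p -> q) agrees with (p AND NOT q) on all four assignments")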
|
2019-08-26 00:47:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7864468097686768, "perplexity": 350.8431015011067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330913.72/warc/CC-MAIN-20190826000512-20190826022512-00306.warc.gz"}
|
https://math.stackexchange.com/questions/3084823/is-a-n-sum-alpha-0n-textround-sin-alpha-bounded
|
# Is $a_n=\sum_{\alpha=0}^n\;\text{round}(\sin\alpha)$ bounded?
Is the sequence $$(a_n)$$ bounded if $$a_n=\sum_{\alpha=0}^n\;\text{round}(\sin\alpha)?$$ Edit: round$$(x)$$ simply means round $$x$$ to the nearest integer; the case where $$x=\pm0.5$$ may be ignored since $$\pi$$ is irrational.
• by round, you mean the integer part ? – Thinking Jan 23 at 17:54
• What is "round"? If it is the smallest integer part, then I suspect $a_n \rightarrow -\infty$. – rtybase Jan 23 at 17:54
• Are you from the "actually good math problems" Facebook group ? – Gabriel Romon Jan 23 at 18:02
• Didn't come up with an answer, but some thoughts: It cannot go to a real number when n goes to infinite cause it will keep jumping by 1 or -1. So if there is a bound it should be infinite or minus infinite. From symmetry, I guess it won't go to either of those.. – Shaq Jan 23 at 18:14
• The equidistribution of the multiples of $1/\pi$ mod $1$ gives you $a_n/n \to 0$, but I doubt $a_n$ is bounded. Numerically: I get $a_{87210}=-3$, $a_{191203} = -4$, $a_{503892}=-5$, $a_{816581} = -6$. – Robert Israel Jan 23 at 18:38
(The OP asked if he could post this because it's originally a problem I came up with, so I'm going to show what we had already found collectively before this post was made.)
Obviously round means rounding to the nearest integer. Ok, so basically, the first thing I tried to do was to find patterns. I first showed that $$\left\lfloor\frac3\pi n-\frac12\right\rfloor=\left\{\begin{array}{ccc}2~\textrm{or}~5&\iff&\textrm{Round}(\sin n)=0\\0~\text{or}~1&\iff&\textrm{Round}(\sin n)=1\\3~\text{or}~4&\iff&\textrm{Round}(\sin n)=-1\end{array}\right.$$ This tool will be useful later. I then noticed that the sequence had a few sporadic cycling patterns which all looked the same (nothing, increment, increment, nothing, decrement, decrement, repeat). I found a way to show where those patterns break: $$\begin{array}{cl} &\left(\left\lfloor\frac3\pi n-\frac12\right\rfloor-n\right)-\left(\left\lfloor\frac3\pi(n-1)-\frac12\right\rfloor-(n-1)\right)=-1\\ \iff&\left\lfloor\frac3\pi n-\frac12\right\rfloor=\left\lfloor\frac3\pi(n-1)-\frac12\right\rfloor \end{array}$$ These are all the values of $$n$$ which satisfy this equation. Afterwards, I did some work to find that the gap $$G$$ between two solutions is $$\begin{array}{cl} &\lfloor1/(1-3/\pi)\rfloor\le G\le\lceil1/(1-3/\pi)\rceil\\ \iff&22\le G\le23 \end{array}$$ Also, the sequence $$\left(\underset{\alpha\le n,~\alpha\in\mathbb N\setminus\left\{n:\lfloor3n/\pi-1/2\rfloor=\lfloor3(n-1)/\pi-1/2\rfloor\right\}}{\sum\text{Round}\sin\alpha}\right)_{n\ge0}$$ is bounded between 0 and 2 (I forgot to add that earlier).
Elie Ben Shlomo from the Facebook group "actually good math problems" found out another related question on MathOverflow.
Jack Heimrath from the same group used Birkhoff's ergodic theorem to show that the sum itself is $$o(n)$$
Griffin Macris, who was also really helpful with this problem, found a lot of results too, but I'd rather let him say what he found. Although it's not really his major contribution, he found that $$|a_{12026980763}|=7$$, and that's the highest value in the sequence found yet.
Mars Industrial found out this really good paper on arxiv which I don't really understand but sure looks related.
And a lot of other people found a lot of approaches which I unfortunately don't really get.
• I think that the best we can do is $a_n=O(n^{1-\frac1{\mu-1}+\epsilon})$ where $\mu$ is the irrationality measure of $\pi$. – i707107 Jan 31 at 1:53
It is not an answer to your question (I believe the answer should be no), but rather to a related question, which is already nontrivial:
Is there a subsequence of $$(a_n)_n$$ which is bounded ?
It is a direct application of the Denjoy-Koksma inequality (see https://en.wikipedia.org/wiki/Denjoy%E2%80%93Koksma_inequality), which in this case states that, if we denote $$f:[0,1[\to [0,1[, f(x)=\{x+\frac1{2\pi} \}$$ and $$\phi(x)=round(sin(2\pi x)),$$ (which is clearly of bounded variation, namely $$Var(\phi)=4$$, and $$\int_0^1 \phi=0$$) then for any integer couple $$(p,q)$$ such that the following diophantine inequality holds $$\left|\frac1{2\pi}-\frac{p}q\right|\leq \frac{1}{q^2},$$
then $$|a_{q-1}|=\left| \sum_ {k=0}^{q-1} \phi(f^k (0)) \right|\leq Var(\phi)=4.$$ There exist infinitely many couples $$p,q$$ satisfying the above diophantine inequality. For example, the convergents of the continued fraction expansion of $$1/2\pi$$. Thus, there exists a bounded, 'explicit' subsequence of $$(a_n)_n$$.
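Not part of the thread, but for anyone who wants to experiment numerically: the sketch below (my own, in Python) accumulates the partial sums and prints each new record value of $$|a_n|$$; it can also be used to check values quoted in the comments, such as $$a_{87210}=-3$$. Python's built-in round is adequate here because $$\sin n$$ is never exactly $$\pm 0.5$$ for a positive integer $$n$$.

    # Compute a_n = sum_{alpha=0}^{n} round(sin(alpha)) and track record values of |a_n|.
    import math

    a, best = 0, 0
    for n in range(1, 200000):
        a += round(math.sin(n))        # the alpha = 0 term contributes round(sin 0) = 0
        if abs(a) > best:
            best = abs(a)
            print("new record |a_n| =", best, "at n =", n)
        if n == 87210:
            print("a_87210 =", a)      # the comments report -3 here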
|
2019-05-22 06:49:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 25, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8322797417640686, "perplexity": 307.52824588845755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256764.75/warc/CC-MAIN-20190522063112-20190522085112-00549.warc.gz"}
|
https://brilliant.org/discussions/thread/relativity-paradox-B/
|
I can't resolve a problem I thought of in special relativity, was hoping somebody here could help. This is going to sound somewhat strange, but say I'm floating in space, no accelerations involved. Suddenly a spacecraft flies past me, inside are a bunch of incompetent physicists trying to make a nuclear bomb and test it in their spacecraft (leading to their certain demise). From their perspective, the nuclear fission reaction fails, say, because they didn't make the nuclear fuel dense enough for the reaction to happen properly (it wasn't dense enough to get the reaction going) so the experiment fails and they survive.
However from my perspective, they are travelling near to the speed of light, and their length is shortened in their direction of motion. This means that from my perspective, the nuclear fuel is now much more dense (its been squashed lengthways). This happens to just be enough to get the nuclear fuel to critical density and the reaction doesn't fail from my perspective, the bomb explodes and the ship gets destroyed.
...I feel I'm missing something painfully obvious, thanks in advance if you can explain this.
Note by Jord W
4 years, 6 months ago
The blast is still going to happen in their perspective, not yours. How would the reaction achieve critical mass if they are traveling at constant speed?
- 4 years, 2 months ago
i don't understand what you're asking, btw doesn't matter i've got this resolved by someone else anyway
- 4 years, 2 months ago
|
2018-07-15 23:20:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9855155348777771, "perplexity": 2291.806891088504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589022.38/warc/CC-MAIN-20180715222830-20180716002830-00246.warc.gz"}
|
https://zbmath.org/1005.62043
|
## Least squares estimation with complexity penalties.(English)Zbl 1005.62043
Summary: We examine the regression model $$Y_i=g_0(z_i)+W_i$$, $$i=1,\dots,n$$, and the penalized least squares estimator $\widehat g_n=\arg \min_{g\in{\mathcal G}}\bigl \{\|Y-g\|^2+ \text{pen}^2(g)\bigr\},$ where $$\text{pen} (g)$$ is a penalty on the complexity of the function $$g$$. We show that a rate of convergence for $$\widehat g_n$$ is determined by the entropy of the sets ${\mathcal G}_*(\delta) =\bigl\{g\in{\mathcal G}: \|g-g_*\|^2+ \text{pen}^2 (g)\leq\delta^2 \bigr\},\;\delta>0,$ where $$g_*=\arg \min_{g\in {\mathcal G}}\{\|g-g_0 \|^2+\text{pen}^2(g)\}$$ (say). As examples, we consider Sobolev and dimension penalties.
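As a toy illustration only (my own sketch, not the paper's Sobolev or dimension-penalty setup): over a small family of candidate fits, the penalized estimator above amounts to minimizing a residual sum of squares plus a complexity penalty. Here the candidates are polynomial fits and the penalty is a dimension-type term proportional to the number of fitted coefficients; the constant c is an arbitrary choice for the example.

    # Penalized least squares over polynomial models: minimize ||Y - g||^2 + pen^2(g).
    import numpy as np

    rng = np.random.default_rng(0)
    z = np.linspace(0.0, 1.0, 50)
    y = np.sin(2 * np.pi * z) + 0.3 * rng.standard_normal(z.size)   # Y_i = g_0(z_i) + W_i

    def penalized_score(d, c=0.5):
        coeffs = np.polyfit(z, y, d)
        rss = np.sum((y - np.polyval(coeffs, z)) ** 2)
        return rss + c * (d + 1)       # dimension-type penalty pen^2(g) = c * (d + 1)

    best_degree = min(range(0, 11), key=penalized_score)
    print("selected polynomial degree:", best_degree)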
### MSC:
62G08 Nonparametric regression and quantile regression
### Keywords:
entropy; model selection; penalized least squares
|
2023-03-22 03:06:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.82506263256073, "perplexity": 445.7070029239426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943749.68/warc/CC-MAIN-20230322020215-20230322050215-00546.warc.gz"}
|
http://www.math.ucr.edu/home/baez/diary/index.html
|
## Diary — November 2018
#### November 1, 2018
The Cliffs of Hathor
This movie was made from 25 minutes of photos taken by the Rosetta spacecraft when it was several kilometers from comet Churyumov-Gerasimenko. They were taken on June 1st, 2016. On September 30th of that year, Rosetta was deliberately crashed into the comet and the mission ended. These photos were nicely assembled into an animated gif by landru79 on Twitter just recently.
This place is called the Cliffs of Hathor. The "snow" is dust moving slowly; you can also see some stars moving downward in the background, due to the rotation of the comet.
#### November 2, 2018
Starting on the 47th page of the pdf, you can see information about the crash of mammal, bird, reptile, amphibian and fish populations worldwide:
The situation is worse for freshwater species: they're down by about 83% since 1970:
On land, the worst declines are occurring in the "Neotropical" region: Central and South America. Mammals, birds, reptiles and amphibian population have dropped by about 89% since 1970.
#### November 6, 2018
Slowly lower yourself toward the event horizon of a black hole. As you do, look up. Your view of the outside universe will shrink to a point — and become brighter and brighter, tending to infinite brightness!
These effects don't happen if you simply let yourself fall in to the black hole. If you do that, your view of the outside world will not shrink to a point, and the light you see will not be intensified by blueshifting — because you'll be falling along with it!
Andrew Hamilton made this animated gif. See more here:
#### November 11, 2018
Good news: starting early next year, Eddie Bernice Johnson will be the first African-American to lead the House Committee on Science, Space, and Technology. She'll also be the first woman to do it. She's the first Democrat to lead this committee since 2011. She's the first who isn't a climate science denier since 2011. And she's the first with some training in science since the 1990s: she was chief psychiatric nurse at the Dallas Veteran's Administration Hospital for 16 years.
#### November 13, 2018
This number has 317 digits, all ones. It's prime. 317 is also prime!
That's not a coincidence. A number whose digits are all 1 can only be prime if the number of digits is prime!
This works in any base, not just base ten. Can you see the quick proof?
A prime whose decimal digits are all ones is called a "repunit prime". The largest known repunit prime has 1031 digits.
Mathematicians believe there are infinitely many repunit primes, but nobody can prove it yet.
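Going back to the "quick proof" question above (my own addition, not part of the diary): if the number of digits d factors as d = ab with a, b > 1, then the repunit with a ones divides the repunit with d ones, so the d-digit repunit cannot be prime. A small Python check:

    # repunit(d) = (10**d - 1) // 9 is the number written with d ones.
    # If a divides d (with 1 < a < d), then repunit(a) divides repunit(d).
    def repunit(d):
        return (10 ** d - 1) // 9

    for d in range(2, 40):
        for a in range(2, d):
            if d % a == 0:
                assert repunit(d) % repunit(a) == 0
    print("repunit(a) divides repunit(d) whenever a divides d, checked for d < 40")

The same argument works in any base, with 10 replaced by that base.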
Why does this matter?
The density of primes decreases slowly, like $$1/\ln(N)$$. So if numbers whose digits are all ones have 'no good reason not to be prime', there should be infinitely many of them that are prime. This idea gives a probabilistic argument that there should be infinitely many repunit primes.
But what does probability really mean when it comes to prime numbers? God didn't choose them by rolling dice!
This is why silly-sounding puzzles about primes can actually be important: they challenge our understanding of randomness and determinism.
There might be infinitely many true facts about primes that are true just because it's overwhelmingly 'probable' that they're true... but not for any reason we can convert into a proof.
However, even this has not yet been proved.
Clouds of mystery surround us.
#### November 22, 2018
It's Thanksgiving!
I am thankful for the beauty of mathematics and physics, which always go deeper than I expect.
For example, Hamilton's equations describe the motion of a particle if you know its energy. But they turn out to look a lot like Maxwell's relations in thermodynamics!
Maxwell's relations connect the temperature, pressure, volume and entropy of a box of gas — or indeed, a box of anything in equilibrium. Nobody told me they're just Hamilton's equations with different letters and vertical lines thrown in.
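For readers who want the two sets of equations side by side (my own addition; both are standard textbook forms):

$$ \left(\frac{\partial T}{\partial V}\right)_S = -\left(\frac{\partial P}{\partial S}\right)_V, \qquad\qquad \frac{dq}{dt} = \frac{\partial H}{\partial p}, \qquad \frac{dp}{dt} = -\frac{\partial H}{\partial q}. $$

The subscripts on the left are the "vertical lines": they record which variable is held fixed while differentiating.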
So I decided to see what happens if I wrote Hamilton's equations in the same style as the Maxwell relations. It freaked me out at first. What does it mean to take the partial derivative of $$q$$ in the $$t$$ direction while holding $$p$$ constant?
But it turns out to be okay. Indeed, this was a useful clue. I thought about it longer and realized what was going on.
You get equations like Hamilton's whenever a system extremizes something subject to constraints. A moving particle minimizes action; a box of gas maximizes entropy.
So: whenever you see unexplained patterns in math or physics, write them down in your notebook. Think about them from time to time. Clarify. Simplify.
Soon you'll never be bored. And if you get stuck and frustrated, just ask people. True seekers will be happy to help.
#### November 24, 2018
The game of "58 holes", or "hounds and jackals", is very ancient. Two players took turns rolling dice to move their pieces forward. This copy comes from Thebes, Egypt. It was made in the reign of Amenemhat IV, during 1814–1805 BC, in the Twelfth Dynasty of the Middle Kingdom. It's now at The Metropolitan Museum of Art.
But why 58 holes? That's a strange number!
The holes come in two groups of 29. Nobody knows the rules for sure! But the Russian game expert Dmitriy Skiryuk argued that the players move their pieces from holes A to 29 and then the large shared hole H, where they exit the board.
If so, each player really has 30 holes! That makes more sense: the number 60 was very important in Egypt and the Middle East. So "58 holes" is a red herring.
You can see Skiryuk's hypothesized rules, the above pictures, and more here:
The game was really widespread: here's one from a pillaged Iron Age tomb in Necropolis B at Tepe Sialk, Iran. It's now in the Louvre.
But here's something even cooler. The game was just found in Azerbaijan, almost 2000 kilometers from the Middle East — chiseled into a rock by Bronze Age herders!
|
2019-01-23 19:32:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4713417589664459, "perplexity": 1929.625943115014}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584350539.86/warc/CC-MAIN-20190123193004-20190123215004-00031.warc.gz"}
|
https://plainmath.net/90653/doug-earns-10-50-per-hour-working-at-a
|
# Doug earns $10.50 per hour working at a restaurant. On Friday, he spent 1.75 hours cleaning, 2.33 hours doing paperwork and 1 hour and 25 minutes serving customers. What were Doug's earnings?
Nyasia Flowers 2022-09-16 Answered
Medwsa1c
Change 1 hour and 25 minutes to the correct notation:
1 hour and 25 minutes = 60 minutes + 25 minutes = 85 minutes
Convert 85 minutes to hours:
$\frac{85}{60}\approx 1.4167$
so 1 hour and 25 minutes $\approx$ 1.42 hours.
Add the total number of hours together:
1.75 hours + 2.33 hours + 1.42 hours = 5.5 hours
Doug earns $10.50 per hour, so for 5.5 hours, multiply 10.50 by 5.5: $10.50 ⋅ 5.5 = $57.75
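Just as a sanity check of the arithmetic (not part of the original answer):

    # The answer rounds the serving time to 1.42 h before adding.
    hours = 1.75 + 2.33 + round(85 / 60, 2)
    print(round(hours, 2), round(10.50 * hours, 2))   # 5.5 hours -> 57.75 dollars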
|
2022-09-26 14:59:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 32, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.542813241481781, "perplexity": 3657.9166811891737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00294.warc.gz"}
|
https://inginious.org/course/tutorial/03_tasks
|
Tasks, or activities, or assignments, form the course content. Each INGInious task is composed of a set of problems that are graded together (run in a predefined environment) and may depend on each other. A task is linked to a course and is identified by a course id/task id pair.
Tasks are entirely editable from the micro-LMS (webapp). However, these parameters are stored in a task.yaml, in the task-associated folder.
#### Basic settings
Except the name and context information, the task basic settings are mainly used by the micro-LMS (webapp). These settings include:
• Submission mode : If you allow your students to work in group/team, you need to configure the submission mode accordingly.
• Submission storage : This option limits the size of the submission history.
• Submission limit : This option is used to restrict the number of submissions students can make per time period.
• Evaluation submission : This option provides download facility for the administrator by tagging a submission as the reference one. Note that submissions are tagged just after execution.
• Accessibility : Some tasks may be made accessible for a short amount of time only if some deadline is applicable.
#### Container setup
INGInious runs the test suite in a container, an operating system component allowing resources isolation in a faster way than with virtual machines.
This is further simplified by the usage of Docker, an open-source API for creating and defining containers, also providing disk image abstraction, making the definition of additional INGInious environments easy for the administrators.
Except the mcq environment used for multiple_choice questions, all the INGInious environments will start a container when launching the test suite. These containers can be preconfigured with the following parameters:
• (Hard) timeout : The timeout value is the maximum CPU (computation) time allowed for the task. Once this threshold is reached, the container is shut down and the student is returned a Time out feedback. The hard timeout is the maximum wall (human) time allowed for the task.
• Memory : This is the maximum amount of RAM the container can allocate. If this value is evaluated as too high by the INGInious agent, it will warn you at the container launch.
• Output limit : This is the maximum amount of data that can be output from the container. This parameter is useful if you need to print student generated data on feedback.
• Grading environment : The grading environment is defined by the provided software set.
• Internet connectivity : If, for some reason, you need to access the Internet during the tests, check this option is activated.
#### Subproblems
Different kinds of problems can be displayed on the INGInious task page:
• Code : This box displays an editable text area with syntax highlighting and automatic indentation.
• File upload : This box provides a file upload facility, if several files have to be submitted in a single archive for instance.
• Match : This box displays a small input field used for matching student and expected result.
• MCQ : This box displays a multiple choice question, with the ability to select multiple valid answers, and displaying feedback for each chosen option.
Match and MCQ questions can be automatically graded using the mcq environment that will use the feedback defined using the task editor. However, all problem inputs can be fetched from a container-based environment.
Task files are mainly used for launching the tests in container-based environments. Those tests will be started using the run file. This file must be placed at the task files root folder and be executable (either script or binary).
Two special subdirectories can be created in the task files folder:
• /public : This folder is publicly available from the frontends and can be used to share some initial documents, implementations, or skeletons with students. To give access to those file, place a link to taskid/filepath inside your task description.
• /student : This folder is used in combination with the run_student API to provide another level of isolation while running the tests. More information about this folder will be provided further in this tutorial.
### Description
Task and problem context descriptions can be formatted using reStructuredText syntax and $$\LaTeX$$ syntax for mathematical expressions. Please refer to the documentation of both to find out the full set of features.
$$\LaTeX$$ expressions can be inserted via the following snippet:
This is a :math:`\LaTeX` expression!
Syntax-highlighted code blocks can be inserted via the following snippet and CodeMirror language identifiers:
Copy and paste the code below:
.. code-block:: python

    print "Hello World!"
### Let's take a tour
Your course contains two tasks, task1 and task2. You want them to count for 20% and 80%, respectively, of the final student course score. How can you do that?
The Container setup tab allows you to set some limitation parameters for code execution. Does it apply to the built-in mcq environment?
##### Question 3: Task and problems contexts
<p style="text-align: center;">This is a new paragraph.</p>
Is this a correct context description ?
##### Question 5: The run file
Container-based grading environments require an executable run script at the root of your task files folder to initiate the tests. What is the particularity of that file ?
##### Question 6: Serving static files
You want to display the UML scheme of a class to be implemented by the student as an image on the task context. This image file has been specifically created for that assignment. How do you do that ?
|
2019-01-17 19:21:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.478915274143219, "perplexity": 2633.706300440747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659063.33/warc/CC-MAIN-20190117184304-20190117210304-00205.warc.gz"}
|
http://www.canadaka.net/video/7-canadian-political/page4
|
47 videos
Sub Categories
Liberal (31) Conservative (14) NDP (12) Green (63) Anti-Liberal (14) Anti-Conservative (23) Anti-Bloc Quebecois (1) Anti-NDP (2) Anti-Green (0)
## Where does money come from? Lets ask our Politicians
Where does money come from? Why are we in a recession?
added: Thu Aug 2009 | Length 00:00 | Views: 1308 | Comments: 0
## YouTube Interview with Prime Minister Harper
added: Wed Mar 2010 | Length 00:00 | Views: 1155 | Comments: 0
|
2020-10-24 14:58:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5000481009483337, "perplexity": 12390.308920864882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107883636.39/warc/CC-MAIN-20201024135444-20201024165444-00471.warc.gz"}
|
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/7872/2/a/bx/
|
# Properties
Label: 7872.2.a.bx
Level: 7872
Weight: 2
Character orbit: 7872.a
Self dual: yes
Analytic conductor: 62.858
Analytic rank: 1
Dimension: 3
CM: no
Inner twists: 1
# Related objects
## Newspace parameters
Level: $$N$$ = $$7872 = 2^{6} \cdot 3 \cdot 41$$
Weight: $$k$$ = $$2$$
Character orbit: $$[\chi]$$ = 7872.a (trivial)
## Newform invariants
Self dual: yes
Analytic conductor: $$62.8582364712$$
Analytic rank: $$1$$
Dimension: $$3$$
Coefficient field: 3.3.316.1
Coefficient ring: $$\Z[a_1, \ldots, a_{11}]$$
Coefficient ring index: $$1$$
Twist minimal: no (minimal twist has level 123)
Fricke sign: $$1$$
Sato-Tate group: $\mathrm{SU}(2)$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\beta_2$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q + q^{3} + ( -1 - \beta_{1} + \beta_{2} ) q^{5} + ( 1 - \beta_{1} - \beta_{2} ) q^{7} + q^{9} +O(q^{10})$$ $$q + q^{3} + ( -1 - \beta_{1} + \beta_{2} ) q^{5} + ( 1 - \beta_{1} - \beta_{2} ) q^{7} + q^{9} + ( 1 + \beta_{1} ) q^{11} + ( -3 + \beta_{1} - \beta_{2} ) q^{13} + ( -1 - \beta_{1} + \beta_{2} ) q^{15} + ( 1 - \beta_{1} + 2 \beta_{2} ) q^{17} + ( -1 + \beta_{1} - \beta_{2} ) q^{19} + ( 1 - \beta_{1} - \beta_{2} ) q^{21} + ( -3 - \beta_{1} + \beta_{2} ) q^{23} + ( 1 + 2 \beta_{1} - 4 \beta_{2} ) q^{25} + q^{27} + ( 1 + 3 \beta_{1} ) q^{29} + ( -2 + 4 \beta_{1} + \beta_{2} ) q^{31} + ( 1 + \beta_{1} ) q^{33} + ( -2 - 2 \beta_{1} + 4 \beta_{2} ) q^{35} + ( -6 - 2 \beta_{1} + \beta_{2} ) q^{37} + ( -3 + \beta_{1} - \beta_{2} ) q^{39} + q^{41} + ( -4 + 2 \beta_{1} - 5 \beta_{2} ) q^{43} + ( -1 - \beta_{1} + \beta_{2} ) q^{45} + ( 1 + \beta_{1} + 2 \beta_{2} ) q^{47} + ( 3 + 2 \beta_{1} ) q^{49} + ( 1 - \beta_{1} + 2 \beta_{2} ) q^{51} + ( -6 + 4 \beta_{1} - 2 \beta_{2} ) q^{53} + ( -3 - \beta_{1} + \beta_{2} ) q^{55} + ( -1 + \beta_{1} - \beta_{2} ) q^{57} + ( 2 + 2 \beta_{1} + 2 \beta_{2} ) q^{59} + ( 2 + 2 \beta_{1} - \beta_{2} ) q^{61} + ( 1 - \beta_{1} - \beta_{2} ) q^{63} + ( -2 + 2 \beta_{1} ) q^{65} + ( -2 - 6 \beta_{1} + 4 \beta_{2} ) q^{67} + ( -3 - \beta_{1} + \beta_{2} ) q^{69} + ( -11 + \beta_{1} ) q^{71} + ( 2 - 2 \beta_{1} - 3 \beta_{2} ) q^{73} + ( 1 + 2 \beta_{1} - 4 \beta_{2} ) q^{75} + ( -3 - \beta_{1} - 3 \beta_{2} ) q^{77} + ( -8 + 4 \beta_{1} - 2 \beta_{2} ) q^{79} + q^{81} + ( 5 - \beta_{1} + 3 \beta_{2} ) q^{83} + ( 7 + \beta_{1} - 5 \beta_{2} ) q^{85} + ( 1 + 3 \beta_{1} ) q^{87} + ( 6 - 4 \beta_{1} ) q^{89} + ( -2 + 6 \beta_{1} ) q^{91} + ( -2 + 4 \beta_{1} + \beta_{2} ) q^{93} + ( -4 + 2 \beta_{2} ) q^{95} + ( -3 - 3 \beta_{1} + \beta_{2} ) q^{97} + ( 1 + \beta_{1} ) q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$3q + 3q^{3} - 4q^{5} + 2q^{7} + 3q^{9} + O(q^{10})$$ $$3q + 3q^{3} - 4q^{5} + 2q^{7} + 3q^{9} + 4q^{11} - 8q^{13} - 4q^{15} + 2q^{17} - 2q^{19} + 2q^{21} - 10q^{23} + 5q^{25} + 3q^{27} + 6q^{29} - 2q^{31} + 4q^{33} - 8q^{35} - 20q^{37} - 8q^{39} + 3q^{41} - 10q^{43} - 4q^{45} + 4q^{47} + 11q^{49} + 2q^{51} - 14q^{53} - 10q^{55} - 2q^{57} + 8q^{59} + 8q^{61} + 2q^{63} - 4q^{65} - 12q^{67} - 10q^{69} - 32q^{71} + 4q^{73} + 5q^{75} - 10q^{77} - 20q^{79} + 3q^{81} + 14q^{83} + 22q^{85} + 6q^{87} + 14q^{89} - 2q^{93} - 12q^{95} - 12q^{97} + 4q^{99} + O(q^{100})$$
Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{3} - x^{2} - 4 x + 2$$:
$$\beta_{0}$$ $$=$$ $$1$$ $$\beta_{1}$$ $$=$$ $$\nu$$ $$\beta_{2}$$ $$=$$ $$\nu^{2} - 3$$
$$1$$ $$=$$ $$\beta_0$$ $$\nu$$ $$=$$ $$\beta_{1}$$ $$\nu^{2}$$ $$=$$ $$\beta_{2} + 3$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
1.1 0.470683 0 1.00000 0 −4.24914 0 3.30777 0 1.00000 0
1.2 2.34292 0 1.00000 0 −0.853635 0 −3.83221 0 1.00000 0
1.3 −1.81361 0 1.00000 0 1.10278 0 2.52444 0 1.00000 0
## Inner twists
This newform does not admit any (nontrivial) inner twists.
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 7872.2.a.bx 3
4.b odd 2 1 7872.2.a.bs 3
8.b even 2 1 123.2.a.d 3
8.d odd 2 1 1968.2.a.w 3
24.f even 2 1 5904.2.a.bd 3
24.h odd 2 1 369.2.a.e 3
40.f even 2 1 3075.2.a.t 3
56.h odd 2 1 6027.2.a.s 3
120.i odd 2 1 9225.2.a.bx 3
328.g even 2 1 5043.2.a.n 3
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
123.2.a.d 3 8.b even 2 1
369.2.a.e 3 24.h odd 2 1
1968.2.a.w 3 8.d odd 2 1
3075.2.a.t 3 40.f even 2 1
5043.2.a.n 3 328.g even 2 1
5904.2.a.bd 3 24.f even 2 1
6027.2.a.s 3 56.h odd 2 1
7872.2.a.bs 3 4.b odd 2 1
7872.2.a.bx 3 1.a even 1 1 trivial
9225.2.a.bx 3 120.i odd 2 1
## Atkin-Lehner signs
$$p$$ Sign
$$2$$ $$1$$
$$3$$ $$-1$$
$$41$$ $$-1$$
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(\Gamma_0(7872))$$:
$$T_{5}^{3} + 4 T_{5}^{2} - 2 T_{5} - 4$$ $$T_{7}^{3} - 2 T_{7}^{2} - 14 T_{7} + 32$$ $$T_{11}^{3} - 4 T_{11}^{2} + T_{11} + 4$$ $$T_{13}^{3} + 8 T_{13}^{2} + 14 T_{13} - 4$$
## Hecke Characteristic Polynomials
$p$ $F_p(T)$
$2$
$3$ $$( 1 - T )^{3}$$
$5$ $$1 + 4 T + 13 T^{2} + 36 T^{3} + 65 T^{4} + 100 T^{5} + 125 T^{6}$$
$7$ $$1 - 2 T + 7 T^{2} + 4 T^{3} + 49 T^{4} - 98 T^{5} + 343 T^{6}$$
$11$ $$1 - 4 T + 34 T^{2} - 84 T^{3} + 374 T^{4} - 484 T^{5} + 1331 T^{6}$$
$13$ $$1 + 8 T + 53 T^{2} + 204 T^{3} + 689 T^{4} + 1352 T^{5} + 2197 T^{6}$$
$17$ $$1 - 2 T + 28 T^{2} - 6 T^{3} + 476 T^{4} - 578 T^{5} + 4913 T^{6}$$
$19$ $$1 + 2 T + 51 T^{2} + 68 T^{3} + 969 T^{4} + 722 T^{5} + 6859 T^{6}$$
$23$ $$1 + 10 T + 95 T^{2} + 476 T^{3} + 2185 T^{4} + 5290 T^{5} + 12167 T^{6}$$
$29$ $$1 - 6 T + 60 T^{2} - 262 T^{3} + 1740 T^{4} - 5046 T^{5} + 24389 T^{6}$$
$31$ $$1 + 2 T + 2 T^{2} - 132 T^{3} + 62 T^{4} + 1922 T^{5} + 29791 T^{6}$$
$37$ $$1 + 20 T + 228 T^{2} + 1646 T^{3} + 8436 T^{4} + 27380 T^{5} + 50653 T^{6}$$
$41$ $$( 1 - T )^{3}$$
$43$ $$1 + 10 T + 10 T^{2} - 296 T^{3} + 430 T^{4} + 18490 T^{5} + 79507 T^{6}$$
$47$ $$1 - 4 T + 106 T^{2} - 384 T^{3} + 4982 T^{4} - 8836 T^{5} + 103823 T^{6}$$
$53$ $$1 + 14 T + 159 T^{2} + 1452 T^{3} + 8427 T^{4} + 39326 T^{5} + 148877 T^{6}$$
$59$ $$1 - 8 T + 137 T^{2} - 976 T^{3} + 8083 T^{4} - 27848 T^{5} + 205379 T^{6}$$
$61$ $$1 - 8 T + 188 T^{2} - 930 T^{3} + 11468 T^{4} - 29768 T^{5} + 226981 T^{6}$$
$67$ $$1 + 12 T + 77 T^{2} + 632 T^{3} + 5159 T^{4} + 53868 T^{5} + 300763 T^{6}$$
$71$ $$1 + 32 T + 550 T^{2} + 5712 T^{3} + 39050 T^{4} + 161312 T^{5} + 357911 T^{6}$$
$73$ $$1 - 4 T + 120 T^{2} - 130 T^{3} + 8760 T^{4} - 21316 T^{5} + 389017 T^{6}$$
$79$ $$1 + 20 T + 305 T^{2} + 3192 T^{3} + 24095 T^{4} + 124820 T^{5} + 493039 T^{6}$$
$83$ $$1 - 14 T + 259 T^{2} - 2028 T^{3} + 21497 T^{4} - 96446 T^{5} + 571787 T^{6}$$
$89$ $$1 - 14 T + 263 T^{2} - 2308 T^{3} + 23407 T^{4} - 110894 T^{5} + 704969 T^{6}$$
$97$ $$1 + 12 T + 305 T^{2} + 2180 T^{3} + 29585 T^{4} + 112908 T^{5} + 912673 T^{6}$$
|
2020-04-06 02:42:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9472140073776245, "perplexity": 4338.640732072582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371612531.68/warc/CC-MAIN-20200406004220-20200406034720-00416.warc.gz"}
|
https://houbb.github.io/2019/02/26/java-time-nanotime-02
|
# currentTimeMillis
/**
* Returns the current time in milliseconds. Note that
* while the unit of time of the return value is a millisecond,
* the granularity of the value depends on the underlying
* operating system and may be larger. For example, many
* operating systems measure time in units of tens of
* milliseconds.
*
* <p> See the description of the class <code>Date</code> for
* a discussion of slight discrepancies that may arise between
* "computer time" and coordinated universal time (UTC).
*
* @return the difference, measured in milliseconds, between
* the current time and midnight, January 1, 1970 UTC.
* @see java.util.Date
*/
public static native long currentTimeMillis();
# System.nanoTime
## Method declaration
/**
* Returns the current value of the running Java Virtual Machine's
* high-resolution time source, in nanoseconds.
*
* <p>This method can only be used to measure elapsed time and is
* not related to any other notion of system or wall-clock time.
* The value returned represents nanoseconds since some fixed but
* arbitrary <i>origin</i> time (perhaps in the future, so values
* may be negative). The same origin is used by all invocations of
* this method in an instance of a Java virtual machine; other
* virtual machine instances are likely to use a different origin.
*
* <p>This method provides nanosecond precision, but not necessarily
* nanosecond resolution (that is, how frequently the value changes)
* - no guarantees are made except that the resolution is at least as
* good as that of {@link #currentTimeMillis()}.
*
* <p>Differences in successive calls that span greater than
* approximately 292 years (2<sup>63</sup> nanoseconds) will not
* correctly compute elapsed time due to numerical overflow.
*
* <p>The values returned by this method become meaningful only when
* the difference between two such values, obtained within the same
* instance of a Java virtual machine, is computed.
*
* <p> For example, to measure how long some code takes to execute:
* <pre> {@code
* long startTime = System.nanoTime();
* // ... the code being measured ...
* long estimatedTime = System.nanoTime() - startTime;}</pre>
*
* <p>To compare two nanoTime values
* <pre> {@code
* long t0 = System.nanoTime();
* ...
* long t1 = System.nanoTime();}</pre>
*
* one should use {@code t1 - t0 < 0}, not {@code t1 < t0},
* because of the possibility of numerical overflow.
*
* @return the current value of the running Java Virtual Machine's
* high-resolution time source, in nanoseconds
* @since 1.5
*/
public static native long nanoTime();
## Usage
public static void main(String[] args) {
long start = System.nanoTime();
//do sth...
long end = System.nanoTime();
System.out.println("Time: " + (end - start));
}
## Usage notes
To compare two nanoTime values
* <pre> {@code
* long t0 = System.nanoTime();
* ...
* long t1 = System.nanoTime();}</pre>
*
* one should use {@code t1 - t0 < 0}, not {@code t1 < t0},
* because of the possibility of numerical overflow.
The JDK documentation says that when comparing two nanoTime readings you should compare them with t1 - t2 > 0 rather than t1 > t2, because the value returned by nanoTime can numerically overflow.
### Why must the comparison be done this way?
Nano time is not "real" time; it is just a counter that starts incrementing from some unspecified number when some unspecified event occurs (perhaps when the computer boots). It will overflow and at some point become negative. If your t0 happens to be taken just before the counter overflows (a very large positive number) and your t1 just after (a very large negative number), then the direct comparison t1 > t0 gives the wrong answer, while the wrapped difference t1 - t0 is still the correct, small elapsed time.
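To make the overflow argument concrete, here is a small Python sketch (my own illustration, not JDK code) that mimics Java's signed 64-bit arithmetic:

    # Simulate two's-complement wrap-around to show why (t1 - t0 < 0) is the safe
    # ordering test for nanoTime readings while (t1 < t0) is not.
    M = 1 << 64

    def signed64(x):
        """Reduce x modulo 2**64 and reinterpret it as a signed 64-bit value."""
        x %= M
        return x - M if x >= (1 << 63) else x

    t0 = (1 << 63) - 5            # a reading taken just before the counter overflows
    t1 = signed64(t0 + 10)        # 10 ns later: the counter has wrapped to a large negative value

    print(t1 < t0)                # True  -> the naive comparison wrongly says t1 is "earlier"
    print(signed64(t1 - t0) < 0)  # False -> the wrapped difference is still the correct +10 ns
    print(signed64(t1 - t0))      # 10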
# Summary
## Differences between the two
(1) System.currentTimeMillis returns milliseconds, namely the number of milliseconds elapsed since 00:00 on January 1, 1970 (UTC).
(2) System.nanoTime() in Java returns nanoseconds measured from an arbitrary origin, so the returned value can be arbitrary and may even be negative.
# Time units
ns (nanosecond): a unit of time equal to one billionth of a second, i.e. 10^-9 seconds. It is commonly used as a unit for memory read/write speed.
1 nanosecond = 0.000001 milliseconds
1 nanosecond = 0.000000001 seconds
# Further reading
JDK 8 time classes
JDK 8 ChronoUnit date-unit enum
# References
Comparing JDK nanoTime values
java-system-nanotime-runs-too-slow
## Source code
JVM source code analysis: a detailed look at how System.currentTimeMillis and nanoTime work
|
2020-07-12 00:45:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30152809619903564, "perplexity": 12465.878325760239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657129257.81/warc/CC-MAIN-20200711224142-20200712014142-00320.warc.gz"}
|
https://physics.stackexchange.com/questions/244519/is-keplers-first-law-a-consequence-of-the-conservation-of-angular-momentum
|
Is Kepler's First Law a consequence of the conservation of angular momentum?
It would seem that once you have deduced that the angular momentum is conserved then you can deduce:
$r^2\dot{\theta}=h$ is constant
Combining this with the radial equation of motion then yields a differential equation whose solution is Kepler's First Law. So is Kepler's First Law a consequence of the conservation of angular momentum, or am I missing something?
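(For reference, and not part of the original question: the differential equation being alluded to is the Binet orbit equation for an inverse-square force,
$$\frac{d^2u}{d\theta^2} + u = \frac{GM}{h^2}, \qquad u = \frac{1}{r},$$
whose general solution
$$u(\theta) = \frac{GM}{h^2}\left(1 + e\cos(\theta - \theta_0)\right)$$
is a conic section, and an ellipse precisely when $0 \le e < 1$; the answers below explain why that extra condition on the force law and the energy is needed.)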
Well, yes and no. You need to use conservation of angular momentum but you also need to use the radial equation, which is specific for the case of gravity. A different radial force law would still conserve angular momentum but it wouldn't have elliptical orbits. Not to mention that conservation of angular momentum can be deduced from Newton's laws and the law of gravitation. So it doesn't seem very useful to say that the first law is a consequence of conservation of angular momentum, since you need a lot more than that to prove it. The same goes for the third law.
The second law, however, can be deduced just from conservation of angular momentum, so it holds for any central force, not just gravity.
No, not uniquely. Conservation of angular momentum is a necessary condition, but it is not sufficient.
Kepler's First Law says that the planets orbit in elliptical paths with the Sun at a focus of the ellipse. This specifically depends on
• an inverse-square law force and
• a negative total mechanical energy, with the reference zero for the potential energy is infinite separation distance.
This force and this energy are not dictated by conservation of angular momentum. Conservation of angular momentum results for any form of a central (aka, radial) force.
If the force is repulsive or the energy is too large, the orbit will not be elliptical. The first case would happen for like-signed charges orbiting each other (not planetary motion), and the second, a comet which executes a parabolic or hyperbolic orbit. These systems conserve angular momentum, but definitely don't follow Kepler's elliptical law.
|
2019-10-19 13:11:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8471614718437195, "perplexity": 210.9893858309649}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986693979.65/warc/CC-MAIN-20191019114429-20191019141929-00007.warc.gz"}
|
https://mersenneforum.org/showpost.php?s=f48f80dd78efe49381354defbf2be099&p=541590&postcount=18
|
2020-04-02, 11:03 #18 Nick (Dec 2012, The Netherlands)
This game does not get anywhere useful with series. A similar game with polynomials and rational functions, however, is more fruitful.
For polynomials f and g with real coefficients, define f to be greater than g if f(x)>g(x) for all sufficiently large x. To put this precisely, f>g iff there exists a real number c such that, for all real x≥c, f(x)>g(x). Convince yourself of the following:
- For constant polynomials, this definition agrees with our existing definition of "greater than" for real numbers.
- For any f,g exactly one of the following holds: f>g, f=g, g>f.
- For any f,g,h, if f>g and g>h then f>h.
- For any f,g,h, if f>g then f+h>g+h.
- For any f,g, if f>0 and g>0 then fg>0.
In technical terms, we summarize this by saying that the set $$\mathbb{R}[X]$$ of all polynomials with real coefficients forms an ordered ring with ordering defined in the above way.
Given polynomials f,g with real coefficients, if g≠0 then we can form a fraction $$\frac{f}{g}$$. This is called a rational function over the reals. For rational functions $$\frac{f_1}{g_1}$$ and $$\frac{f_2}{g_2}$$, we define $$\frac{f_1}{g_1}$$ to be greater than $$\frac{f_2}{g_2}$$ if $$f_1g_2>f_2g_1$$ (using the ordering for polynomials defined earlier). (Just as with fractions of integers, it is possible to express each rational function in more than one way, but it is not difficult to show that this definition is independent of the representations we choose.) Convince yourself of the following:
- For rational functions with 1 as the denominator, this definition agrees with the earlier one.
- For any $$f_1,g_1,f_2,g_2$$, exactly one of the following holds: $$\frac{f_1}{g_1}>\frac{f_2}{g_2}$$, $$\frac{f_1}{g_1}=\frac{f_2}{g_2}$$, $$\frac{f_2}{g_2}>\frac{f_1}{g_1}$$.
- For any $$f_1,g_1,f_2,g_2,f_3,g_3$$, if $$\frac{f_1}{g_1}>\frac{f_2}{g_2}$$ and $$\frac{f_2}{g_2}>\frac{f_3}{g_3}$$ then $$\frac{f_1}{g_1}>\frac{f_3}{g_3}$$.
- For any $$f_1,g_1,f_2,g_2,f_3,g_3$$, if $$\frac{f_1}{g_1}>\frac{f_2}{g_2}$$ then $$\frac{f_1}{g_1}+\frac{f_3}{g_3}>\frac{f_2}{g_2}+\frac{f_3}{g_3}$$.
- For any $$f_1,g_1,f_2,g_2$$, if $$\frac{f_1}{g_1}>\frac{0}{1}$$ and $$\frac{f_2}{g_2}>\frac{0}{1}$$ then $$\frac{f_1}{g_1}\frac{f_2}{g_2}>\frac{0}{1}$$.
In technical terms, we summarize this by saying that the set $$\mathbb{R}(X)$$ of all rational functions over the reals forms an ordered field with ordering defined in the above way.
But if f is just the polynomial X then, for any real number c we have f>c. So this ordered field contains a copy of the real numbers as a bounded subset! It follows that this ordered field is not complete, and therefore that the usual rules of limits and calculus no longer apply here.
|
2020-10-29 02:20:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9032791256904602, "perplexity": 232.54759639423693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107902683.56/warc/CC-MAIN-20201029010437-20201029040437-00089.warc.gz"}
|
https://math.stackexchange.com/questions/3178800/fr%c3%a9chet-derivative-of-the-energy-functional/3178823#3178823
|
# Fréchet derivative of the energy functional
Let $$\Omega \subset\mathbb{R}^n$$ be an open set and $$E(u)=\frac{1}{2}\int_{\Omega} | \nabla u|^2 \quad (u \in H_0^1 (\Omega)).$$ Then, what is the Fréchet derivative of the functional $$E$$? And why? (I want to show it directly...)
The Frechet derivative $$DE$$, if it exists, is unique and satisfies
$$E(u+h)=E(u)+DE(h)+r(h),$$ where $$r(h)$$ is $$o(h)$$. So, if we can find a candidate that satisfies the equation, we are done.
Claim (admittedly with the foreknowledge that the claim is true):
$$DE(h)=\int_{\Omega}\langle \nabla u,\nabla h\rangle$$
The proof is a calculation:
$$E(u+h)-E(u)=\frac{1}{2}\left (\int_{\Omega} | \nabla (u+h)|^2-\int_{\Omega} | \nabla (u)|^2\right )=\frac{1}{2}\left (\int_{\Omega} \langle\nabla (u+h),\nabla (u+h)\rangle-\int_{\Omega} | \nabla (u)|^2\right )=\int_{\Omega}\langle \nabla u,\nabla h\rangle+\frac{1}{2}\int_{\Omega}\langle \nabla h,\nabla h\rangle,$$
from which we see that, setting $$r(h)=\frac{1}{2}\int_{\Omega}\langle \nabla h,\nabla h\rangle$$ and noting that it is $$o(h)$$, we have
$$DE(h)=\int_{\Omega}\langle \nabla u,\nabla h\rangle.$$
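(Added remark, not in the original answer, addressing the limit asked about in the comments below: taking $$\|h\|_{H_0^1}^2 = \int_{\Omega}|\nabla h|^2$$ (the gradient seminorm, an equivalent norm on $$H_0^1(\Omega)$$ by Poincaré's inequality), one has
$$|r(h)| = \frac{1}{2}\int_{\Omega}|\nabla h|^2 = \frac{1}{2}\|h\|_{H_0^1}^2, \qquad \text{so} \qquad \frac{|r(h)|}{\|h\|_{H_0^1}} = \frac{1}{2}\|h\|_{H_0^1} \to 0 \ \text{ as } h \to 0,$$
which is exactly the statement that $$r(h)$$ is $$o(h)$$.)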
it's $$-\Delta u$$ (as a functional...). Why, you may ask...
We require that $$E(u+h)-E(u) = \langle \nabla E(u) , h \rangle$$ for any $$h \in H_0^1 (\Omega)$$.
$$E(u+h)-E(u) = \frac{1}{2}\int_\Omega 2 \nabla u \nabla h$$. Now use integration by parts on this expression to get the answer.
We get that $$\langle \nabla E(u) , h \rangle = \langle -\Delta u , h\rangle\>$$. This tells us we can associate the functional $$\nabla E(u)$$ acting on a function $$h$$ with integrating $$h$$ times $$-\Delta u$$.
• Thank you. I have a question. We need to show the limit of $\frac{E(h)}{||h||_{H_0^1}}$ as h approaches 0 in $H_0^1$ is 0. How is it showed? Apr 7 '19 at 21:54
• So we need to show that $E(h)$ decays to $0$ faster than $\int_\Omega h^2$ does. Can you think of how to do that? Hint: IBP and Cauchy-Schwarz Apr 7 '19 at 22:03
• Sorry, I can’t understand... Apr 7 '19 at 23:42
• Ok i will write answer in a little bit Apr 8 '19 at 15:24
|
2021-10-15 20:10:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9376643300056458, "perplexity": 244.0969890024315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583083.92/warc/CC-MAIN-20211015192439-20211015222439-00600.warc.gz"}
|
http://physics.stackexchange.com/questions/41214/action-on-lard-oil
|
# Action on Lard Oil
If water is mixed with lard oil and heated (creating some super-critical liquid with water), how does this affect the volatility of the mixture in comparison with its purity..?
So, my question is: what happens when a mixture of lard oil and water is compressed and then heated to over some $\approx 800^\circ\text{C}$ almost instantaneously?
Note: I'm no physicist or chemist, so simplicity would be appreciated :-)
Are you making a car that runs on pigs? – Ron Maimon Oct 19 '12 at 14:12
Nice suite to Chemistry.SE I think so... – Waffle's Crazy Peanut Oct 19 '12 at 14:42
## 1 Answer
I don't know of anyone who has done the experiment, but I'd guess that under those conditions the lard will rapidly hydrolyse and you'd be left with a solution of glycerol and saturated fatty acids in water. Actually at 800C the fatty acids will probably degrade and polymerise and you'd be left with poorly characterised brown gunk.
Are you trying to make biodiesel from the lard? If so you need much lower temperatures - probably around 200C rather than 800C.
+1: Just like everytime. For now, I should add "Revival of the Chemist". But, I've got a question :- Isn't this better suite to chemistry rather than physics..? 'cause it's a bit BORING... – Waffle's Crazy Peanut Oct 19 '12 at 14:44
Yes, I'd guess it's better on the chemistry SE. – John Rennie Oct 19 '12 at 15:08
|
2013-12-18 13:17:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7107024788856506, "perplexity": 3167.862138092505}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345758566/warc/CC-MAIN-20131218054918-00001-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://clima.github.io/OceananigansDocumentation/stable/numerical_implementation/finite_volume/
|
# Finite volume method on a staggered grid
The Oceananigans.jl staggered grid is defined by a rectilinear array of cuboids of horizontal dimensions $\Delta x_{i, j, k}, \Delta y_{i, j, k}$ and vertical dimension $\Delta z_{i, j, k}$, where $(i, j, k)$ index the location of each cell in the staggered grid. Note that the indices $(i, j, k)$ increase with increasing coordinate $(x, y, z)$.
A schematic of Oceananigans.jl finite volumes for a two-dimensional staggered grid in $(x, z)$. Tracers $c$ and pressure $p$ are defined at the center of the control volume. The $u$ control volumes are centered on the left and right edges of the pressure control volume while the $w$ control volumes are centered on the top and bottom edges of the pressure control volumes. The indexing convention places the $i^{\rm{th}}$ $u$-node on cell $x$-faces to the left of the $i$ tracer point at cell centers.
Dropping explicit indexing, the areas of cell faces are given by
$$A_x = \Delta y \Delta z, \quad A_y = \Delta x \Delta z, \quad A_z = \Delta x \Delta y \, ,$$
so that each cell encloses a volume $V = \Delta x \Delta y \Delta z$.
A finite volume method discretizes a continuous quantity $c$ by considering its average over a finite volume:
$$c_{i, j, k} \equiv \frac{1}{V_{i, j, k}} \int c(\boldsymbol{x}) \, \mathrm{d} V_{i, j, k} \, .$$
The finite volumes that discretize each of $u$, $v$, and $w$ are located on a grid which is "staggered" with respect to the grid that defines tracer finite volumes. The nodes, or central points of the velocity finite volumes are co-located with the faces of the tracer finite volume. In particular, the $u$-nodes are located in the center of the "$x$-face" (east of the tracer point), $v$-nodes are located on $y$-faces south of the tracer point, and $w$-nodes are located on $z$-faces downwards from the tracer point.
|
2022-11-29 22:14:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9159660935401917, "perplexity": 567.5646724095972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710711.7/warc/CC-MAIN-20221129200438-20221129230438-00641.warc.gz"}
|
https://en.m.wikiquote.org/wiki/Viscosity
|
# Viscosity
physical property of a fluid
Viscosity is a physical measure of how a fluid moves when a shear stress is applied to it. Viscosity is caused by a fluid's internal frictional forces.
## Quotes
• In connexion with the experimental determination of viscosity it should be noted that the flow of liquid is influenced in a very marked way by the driving pressure that is used. Bose and Rauert have made measurements at pressures from 0.005 to 2 kilograms per sq. cm., and find that whilst Poiseuille's Law holds for low pressures, very marked deviations are found when the pressure is increased, and in some instances the relative rates of flow are reversed, the more viscous of two liquids flowing more readily and becoming the less viscous at high pressure.
• The viscosity of blood has long been used as an indicator in the understanding and treatment of disease, and the advent of modern viscometers allows its measurement with ever-improving clinical convenience. However, these advances have not been matched by theoretical developments that can yield a quantitative understanding of blood’s microrheology and its possible connection to relevant biomolecules (e.g., fibrinogen). Using coarse-grained molecular dynamics and two different red blood cell models, we accurately predict the dependence of blood viscosity on shear rate and hematocrit.
• Dmitry A. Fedosov, Wenxiao Pan, Bruce Caswell, Gerhard Gompper, and George E. Karniadakis (2011). "Predicting human blood viscosity in silico". Proceedings of the National Academy of Sciences 108 (29): 11772–11777. DOI:10.1073/pnas.1101210108.
• ... To describe the motion of a fluid, we must give it properties at every point ... We will write the force density as the sum of three terms. We have already considered the pressure force per unit volume, $-\nabla p$. Then there are the “external” forces which act at a distance—like gravity or electricity. When they are conservative forces with a potential per unit mass, $\phi$, they give a force density $-\rho \nabla \phi$. (If the external forces are not conservative, we would have to write ƒext for the external force per unit volume.) Then there is another “internal” force per unit volume, which is due to the fact that in a flowing fluid there can also be a shearing stress. This is called the viscous force, which we will write ƒvisc. Our equation of motion is $\rho \times$ (acceleration) $= -\nabla p - \rho \nabla \phi \, +$ ƒvisc ... When we drop the viscosity term, we will be making an approximation which describes some ideal stuff rather than real water. John von Neumann was well aware of the tremendous difference between what happens when you don’t have the viscous terms and when you do, and he was also aware that, during most of the development of hydrodynamics until about 1900, almost the main interest was in solving beautiful mathematical problems with this approximation which had almost nothing to do with real fluids. He characterized the theorist who made such analyses as a man who studied “dry water.” Such analyses leave out an essential property of the fluid.
• Taking now for granted that instability arises generally even in those cases in which the inviscid equation allows only a neutral solution, the question arises how viscosity can cause instability. From simple arguments one would expect damping rather than amplifying. But here one should remember that an inviscid fluid is a system of an infinite number of degrees of freedom, which normally interact so that the energy is dissipated among all modes of vibration. It is only for very special geometrical conditions that this transfer of energy does not take place. Therefore, if a neutral disturbance is possible in the inviscid fluid, the viscosity may easily change the phases of vibration in such a manner that the transfer of energy begins, which then means amplification of the vibration.
• Werner Heisenberg: "On the stability of laminar flow". Proc. Intern. Math. Congress, Cambridge, Mass. 1950. vol. 2. pp. 292–296. (quote from p. 295)
|
2021-05-07 14:32:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.672393262386322, "perplexity": 652.6039211329711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988793.99/warc/CC-MAIN-20210507120655-20210507150655-00011.warc.gz"}
|
https://brilliant.org/problems/you-never-know/
|
# You never know
Robert tosses a coin three times. The probability that he gets at least 2 heads is:
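(The page does not include a solution; for reference, a direct count of the $2^3 = 8$ equally likely outcomes gives
$$P(\text{at least 2 heads}) = \frac{\binom{3}{2} + \binom{3}{3}}{2^3} = \frac{3 + 1}{8} = \frac{1}{2}.$$)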
|
2018-06-23 20:13:04
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9077728390693665, "perplexity": 842.3204521125894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865181.83/warc/CC-MAIN-20180623190945-20180623210945-00108.warc.gz"}
|
http://math.stackexchange.com/questions/279999/isometric-isomorphism-of-hilbert-spaces-and-orthonormal-basis
|
# Isometric isomorphism of Hilbert spaces and orthonormal basis
If I have an isomorphism of two separable Hilbert spaces that preserves norms, does the isomorphism map an orthonormal basis to an orthonormal basis? I can't show it.
You need to show that the isomorphism preserves scalar products. Try expressing a scalar product as a function of the norm by means of a polarization identity. – Giuseppe Negro Jan 16 '13 at 12:14
I'm not sure where separability and completeness with respect to norm come in, it seems to hold for any linear isometry between inner product spaces:
Let $T: H \to H'$ be a linear isometry. Let $e_i$ be an orthonormal basis of $H$. We want to show that $\langle Te_i , Te_j \rangle = \langle e_i , e_j \rangle$.
$T$ is an isometry, that is, $\|Tx\| = \|x\|$, and the norm is given by $\|x\|^2 = \langle x,x \rangle$. You don't need the "full" polarisation identity: note that $\langle x-y , x-y\rangle = \|x-y\|^2 = \|x\|^2 - 2 \langle x,y \rangle + \|y\|^2$ and hence $2 \langle x,y \rangle = \|x\|^2 - \|x-y\|^2 + \|y\|^2$.
Then
\begin{align} 2 \langle Te_i , Te_j \rangle &= \|Te_i\|^2 - \|Te_i - Te_j\|^2 + \|Te_j\|^2\\ &= \|e_i\|^2 - \|e_i-e_j\|^2 + \|e_j\|^2 \\ &= 2 \langle e_i , e_j \rangle \end{align}
which proves the claim.
|
2015-07-29 22:44:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999723434448242, "perplexity": 387.13324561796185}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986646.29/warc/CC-MAIN-20150728002306-00094-ip-10-236-191-2.ec2.internal.warc.gz"}
|
https://emaths.in/relation-between-degrees-and-radians/
|
# RELATION BETWEEN DEGREES AND RADIANS
Relation between Degrees and Radians
A radian is a measure of an angle, indicating the ratio between the arc length and the radius of a circle, which is expressed as follows: θ (in radians) = s / r, where s is the arc length and r is the radius.
Since the arc length of a full circle is the same as the circumference of the circle, which is written as C = 2πr, the radian measure of the full circle is: 2πr / r = 2π radians.
The radian measure of an angle:
The circular measure of an angle means the number of radians it contains. Hence, the radian (circular) measure of a right angle is π/2.
The formula for converting radians to degrees: Degrees = Radians × (180/π).
Example
Convert 2.4 radians into degree measure.
Use the formula for converting radians to degrees: Degrees = Radians × (180/π).
Substitute the radian value: 2.4 × (180/π) ≈ 2.4 × 57.296 ≈ 137.51°.
The formula for converting degrees to radians: Radians = Degrees × (π/180).
Example
Convert into radian measure
Use the formula for converting degrees to radians:
Substitute the degree as follows:
The relationship between degree measure and radian measure for some standard angles is given below:
30° = π/6, 45° = π/4, 60° = π/3, 90° = π/2, 180° = π, 270° = 3π/2, 360° = 2π.
Question 1:
Convert 1 radian into degree measure.
Solution
1 radian = 180/π degrees ≈ 57.3°.
Question 2:
Convert 5.2 radians into degree measure
Solution
Use the formula for converting radians to degrees: Degrees = Radians × (180/π).
Substitute the radian value: 5.2 × (180/π) ≈ 5.2 × 57.296 ≈ 297.94°.
Therefore, 5.2 radians is approximately 297.94°.
Question 3:
Convert 200° into radian measure
Solution
Use the formula for converting degrees to radians: Radians = Degrees × (π/180).
Substitute the degree value: 200 × (π/180) ≈ 3.49 radians.
Therefore, 200° is approximately 3.49 radians.
|
2018-05-22 00:34:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9886460304260254, "perplexity": 1584.7286685202432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864572.13/warc/CC-MAIN-20180521235548-20180522015548-00509.warc.gz"}
|
https://electronics.stackexchange.com/questions/95229/correctly-wire-up-a-switching-regulator
|
# Correctly wire up a Switching Regulator
I am looking to use a Texas Instruments MC33063 Buck/Boost regulator. I have never used a switching regulator before so this may be a conceptual issue.
My input voltage will probably be anywhere from 6 to 12V and I would like to use the regulator to buck down to 5V. However, I get confused when I read through the data sheet. Here's the layout for a step down from the sheet:
So the data sheet shows that if I were to input 25V, given this setup, I can get 5V. However what about 6 to 12V for a Vin?
Also, near the optional filter it is shown that $V_{out} = 1.25(1 + R_2/R_1)$. Is this independent of the input voltage? If so, what is the advantage of varying input voltages, or is it simply flexibility?
Most buck converters (including this one) have an internal voltage reference and a feedback loop, which regulates the output voltage. The output voltage is set by the resistors R1 and R2. The output voltage is independent of the input voltage fluctuations. The output voltage formula is not related to the optional filter.
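(As an illustrative calculation, not taken from the original answer and with hypothetical resistor values: choosing $R_1 = 1.2\,\mathrm{k\Omega}$ and $R_2 = 3.6\,\mathrm{k\Omega}$ gives
$$V_{out} = 1.25\left(1 + \frac{R_2}{R_1}\right) = 1.25\left(1 + \frac{3.6}{1.2}\right) = 1.25 \times 4 = 5.0\,\mathrm{V},$$
and this set point stays the same whether $V_{in}$ is 6 V or 12 V, provided the converter has enough headroom, which the answers below discuss.)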
Switch mode converters can be layout sensitive, so it's usually a good idea to follow a reference design. On the other hand, this one has a top frequency of only 250kHz, so it may be more forgiving than switchers with higher frequencies.
There's more details about the principles of operation of buck converter here and here.
edit: A somewhat odd thing, however, is that the resistor values in the drawing don't quite check with the output voltage
$V_{out}=1.25\left( 1 + \cfrac{R_2}{R_1} \right)=1.25\left( 1 + \cfrac{3.8}{1.2} \right) = 5.2 V$
should be 5.0V. I wonder if there's a reason for the extra 0.2V?
There are several calculator tools you can use to make the calculations (or you can use the equations of the datasheet). One of them is this spreadsheet available from ONsemi.
If you set the output to 5v and the input to 6v then you will get an error because the calculated duty cycle becomes more than 84%. For 5v in step down mode you'll need about 7.5v minimum input.
the optional filter is shown that $Vout = 1.25(1 + \frac{R2}{R1})$
That equation is not related to the filter but to the output voltage of the step down circuit, the filter is just used to reduce the ripple of the output.
If so, what is the advantages of varying input voltages or is it simply flexibility?
If the output voltage changed when the input voltage changed (within limits of course) then this wouldn't really be a voltage regulator.
• How did you calculate the 84%? Is that ton/toff? This is really interesting. Also, what are the ramifications when the duty cycle goes beyond 84%? – Nick Williams Jan 2 '14 at 19:25
• The 84% is written in the spreadsheet I linked. I found it mentioned in note 6 in this [application note](www.onsemi.com/pub_link/Collateral/AN920-D.PDF) . 'Note that the ratio of ton/(ton + toff) does not exceed the maximum of 6/7 or 0.857' – alexan_e Jan 2 '14 at 20:35
• Wow, this is a really nice spreadsheet. – Nick Williams Jan 2 '14 at 21:48
If you look at the output transistors Q1 and Q2, the base of Q2 needs to be about 1.2 to 1.4 volts higher than the emitter of Q1 for it to switch on. On this basis alone, the output voltage that can be achieved is going to be no-greater than input voltage level minus 1.2 to 1.4 volts.
If you are looking to regulate 6V to 5V then this circuit won't work. As the input voltage gets a little below 6.5 volts the output will start to reduce.
• Have a look at at Linear Technologies offerings - they have some devices that run from internal transistors that can probably get down to 6V (maybe 1A output at 5V). Also take note that they utilize a technique called bootstrapping which, in effect provides a higher supply voltage to the output driver - equivalent to raising pin 8 higher than pin 1 on your circuit. – Andy aka Jan 2 '14 at 21:39
• In addition to what Andy have said. There are buck converters, which use P-channel MOSFET as a switch. They can also work with small headroom. [Nick W, welcome to the wonderful world of switch mode regulators, by the way.] – Nick Alexeev Jan 2 '14 at 22:01
• So would using an external pFET or pnp -- as in figure 9b in the datasheet of the TI MC33063 mentioned by Nick Williams -- be adequate to buck 6V down to 5V ? – davidcary Jan 4 '14 at 3:58
|
2021-06-24 11:53:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4621904492378235, "perplexity": 1001.4248875604519}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488553635.87/warc/CC-MAIN-20210624110458-20210624140458-00426.warc.gz"}
|
http://www.braude.ac.il/?catid=%7B6E75E717-9F0F-413E-AD95-9E4D8CB239CC%7D
|
Seminars, 2005
Lectures:
• 2.11.2005
Prof. A. Korol, Institute of Evolution, University of Haifa, Haifa.
Some problems of multilocus genetics: Mathematical & computer modeling
Abstract
To a large extent, the activity of the laboratory of mathematical & population genetics can be considered as evolutionary genetics/genomics and bioinformatics. A short outline of the main objectives and results of these studies is presented below:
1. Evolution of sex and recombination: Building and testing theoretical models
aimed to explain the factors responsible for evolution of sex and recombination; role of sex and recombination in population adaptation and genome organization; adaptive value of major properties of recombination and mutation systems; ecological-genetic regulation of recombination and mutation. Our results include formal explanation (modifier models) of recombination/mutation properties, and demonstration of dynamic complexity (dynamic chaos) in simple population-genetic models with panmixia and partial sexual reproduction.
2. Genome sequence organization on the above-gene level: New measures
(compositional spectra based on fuzzy linguistics) for sequence comparisons on
the whole genome level. “Genome dialect” concept.
3. Genome mapping: Multilocus mapping allowing reliable ordering of DNA markers and genes (by reduction to the “traveling salesman problem” - TSP). As a tool of discrete optimization for this challenging problem with complexity ~ n! (where n ~ 10^2-10^3), new heuristics for Evolution Strategy algorithms are developed in our lab. Even more challenging is mapping based on parallel data from different labs (synchronous TSP).
4. Genetic architecture of complex (quantitative) traits: Methods for genetic
mapping of quantitative traits loci (QTL), joint analysis of multiple trait complexes across the genome, using data scored in different ecological conditions. Multiple-trait QTL analysis for revealing genomic determinants of microarray expression patterns.
5. Structural genomics: New algorithms and tools for hierarchical clustering of microarray expression arrays based on novel highly efficient heuristics for Evolution Strategy algorithms (by reduction of the phylogeny problem to TSP). Evolutionary tree reconstruction for multi-site sequence data in challenging situations of many hundreds (thousands) of genotypes or species in the presence of recombination.
• 20.06.2005
Dr. E. Braverman, University of Calgary, Canada
On stability of equations with several delays and Mackey Glass equation with variable coefficients
Abstract
In the first part of the talk, some new results on stability of linear delay equations with several delays and variable delays and coefficients are presented. These results can be applied to the local stability of nonlinear equations. As an example, we consider the Mackey-Glass equation with variable coefficients and a non-constant delay $\dot{N}(t) = \frac{r(t)\,N(g(t))}{1+N(g(t))^{\gamma}} - b(t)\,N(t)$, which models white blood cell production. Other qualitative properties of this equation, such as boundedness of solutions, persistence and oscillation, are also discussed. It is also demonstrated that with two delays the equation does not keep the persistence property.
• 02.06.2005
Prof. S. Schochet, Tel-Aviv University Israel
Are scalar viscous traveling waves still interesting?
Abstract
Three problems not covered by the standard theory for scalar viscous traveling waves will be discussed:
1) Global stability in higher dimensions:
2) Saturating viscosity:
3) Non-integrable perturbations:
The results presented are joint work with Shoshana Kamin (1) and with Shlomo Engelberg (2-3).
• 23.05.2005
Dr. B. Abramovitz & Dr. M. Berezina, ORT Braude College, Israel
Some Remarks on Open and Multiple Choice Tests
Abstract
This lecture is based on a joint work with Prof. Abraham Berman which is currently in the publication process in the “International Journal of Mathematical Education in Science and Technology”. We discuss some of the shortcomings of multiple choice tests in Mathematics given to undergraduate engineering students. Examples are presented, where the disadvantages of a multiple choice test are given.
• 05.05.2005
Prof. V. Ryazanov, Inst. Appl. Math. & Mech. NASU, Ukraine
Mappings with finite distortion
Abstract
Various classes of mappings with finite distortion like finite length and finite area distortion mappings are considered. Such classes are intensively studied during the last decade by many leading experts in the mapping theory as Frederick Gehring, Karri Astala, Tadeush Iwaniec, Pekka Koskela, Olli Martio, Gaven Martin, Juha Heinonen, Uri Srebro, Eduard Yakubov and others.
• 10.04.2005
Prof. D. Shoikhet, ORT Braude College, Israel
A Flower Structure of Backward Flow Invariant Domains for Semigroups of Holomorphic Function
Abstract
• 03.02.05
Prof. G. Weinstein, University of Alabama at Birmingham, U.S.A.
A Counter-Example to a Penrose Inequality for Charged Black Holes
Abstract
We construct a time-symmetric asymptotically flat initial data set to the Einstein-Maxwell equations which satisfies $$m-\frac{1}{2}\left(R+\frac{Q^2}{R}\right)<0$$ where m is the total mass, $R=\sqrt{\frac{A}{4\pi}}$ is the area radius of the outermost horizon and Q is the total charge. This yields a counter-example to a natural extension of the Penrose Inequality to charged black holes.
• 11.01.05
Dr. V. Turetsky, Technion, Israel
A Priori Estimates for Elliptic Systems in Fractional Sobolev Spaces with Applications to Geometry
Abstract
We will present a priori estimates for systems with coefficients in Bessel potential spaces. These results have applications to Riemanian geometry, where the Riemanian metric $g$ belongs to a certain Bessel potential space. Our motivation to deal with elliptic systems arises from the studying of Cauchy problem for Einstein Equations.
|
2018-02-20 21:15:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3947641849517822, "perplexity": 3632.098169246351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813109.36/warc/CC-MAIN-20180220204917-20180220224917-00541.warc.gz"}
|
https://itectec.com/superuser/convert-ova-to-vhd-for-usage-in-hyper-v/
|
# Mac – Convert OVA to VHD for usage in Hyper-V
hyper-v, virtual machine, virtualization
I have a OVA file that I need to convert to VHD in order to use Hyper-V. Opening the .ova file in winrar gives me one .ovf file and one .vmdk file. I tested the program Microsoft Virtual Machine Converter 3.0 that was recommended on SU but it required a host server. I do not have that, I only have the file.
Import ovf and/or vmdk to Hyper-V
Quite easy actually, install VirtualBox that comes with the program VBoxManage.exe. It can be used with clonehd to specify the new format of the disk. You specify the original disk file, in this case the .vmdk, and then give a location and name to output the .vhd.
Open a cmd prompt, CD to C:\Program Files\Oracle\VirtualBox or Virtualbox install directory and then run:
VBoxManage.exe clonehd --format vhd "C:\temp\VM\disk1.vmdk" "C:\temp\VM\disk1.vhd"
Documentation for VirtualBox: https://www.virtualbox.org/manual/ch08.html#vboxmanage-clonevdi
Then I created a new virtual machine from Hyper-V Manager and selected "Use an existing virtual hard disk". Worked perfectly.
Use this guide for internet access:
https://superuser.com/a/472854/405096
|
2021-09-22 02:26:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5497555136680603, "perplexity": 9430.109706116244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057303.94/warc/CC-MAIN-20210922011746-20210922041746-00366.warc.gz"}
|
http://goatleaps.xyz/media/maths/programming/tools/Epicycles-drawing.html
|
It is a result of Fourier analysis that any closed path in the complex plane can be arbitrarily well approximated by a complex function of the form $f(t) = \sum_{n=-N}^{N} c_n e^{2\pi i n t}$, a finite sum of complex exponentials (one term per rotating circle).
We may obtain such an approximation quite efficiently using the discrete Fourier transform. The sum can be nicely visualized as a chain of epicycles (circles). You can draw your own path below to see it constructed using epicycles!
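As a minimal sketch of that DFT step (mine, not from this page; the four-point square path in main is made-up sample data), each Fourier coefficient below corresponds to one epicycle, with radius |c[k]| and rotation frequency k:
public class EpicycleDft {
    // Naive DFT: c[k] = (1/N) * sum_n points[n] * exp(-2*pi*i*k*n/N),
    // where points[n] = {re, im} are complex samples of the drawn path.
    static double[][] epicycleCoefficients(double[][] points) {
        int n = points.length;
        double[][] c = new double[n][2];
        for (int k = 0; k < n; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = -2 * Math.PI * k * t / n;
                double pr = points[t][0], pi = points[t][1];
                // complex multiply (pr + i*pi) * (cos(angle) + i*sin(angle))
                re += pr * Math.cos(angle) - pi * Math.sin(angle);
                im += pr * Math.sin(angle) + pi * Math.cos(angle);
            }
            c[k][0] = re / n;
            c[k][1] = im / n;
        }
        return c;
    }

    public static void main(String[] args) {
        double[][] square = { {1, 1}, {-1, 1}, {-1, -1}, {1, -1} };
        double[][] c = epicycleCoefficients(square);
        for (int k = 0; k < c.length; k++) {
            System.out.printf("k=%d radius=%.3f%n", k, Math.hypot(c[k][0], c[k][1]));
        }
    }
}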
The origins of epicycles have a curious history. For more information see »Wikipedia«.
1. Click; 2. Define your points (in order); 3. Click the Pause/Start button.
|
2019-08-20 21:06:26
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8668137788772583, "perplexity": 811.2543835420963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315618.73/warc/CC-MAIN-20190820200701-20190820222701-00347.warc.gz"}
|
https://mathoverflow.net/questions/232087/have-there-been-any-updates-on-mochizukis-proposed-proof-of-the-abc-conjecture/288691
|
# Have there been any updates on Mochizuki's proposed proof of the abc conjecture?
In August 2012, a proof of the abc conjecture was proposed by Shinichi Mochizuki. However, the proof was based on an "Inter-universal Teichmüller theory" which Mochizuki himself pioneered. It was known from the beginning that it would take experts months to understand his work enough to be able to verify the proof. Are there any updates on the validity of this proof?
• Here is the last thing I've seen: notes by bcnrd on the recent workshop at oxford on IUTT --- mathbabe.org/2015/12/15/… Feb 25 '16 at 5:01
• Note that there are a small group of people, no more than three or so, who say they understand the papers and think them correct. Their best efforts to explain the theory, which is what people really want to know about, are described in the blog post Vidit links to. Let's say that there is another workshop coming later this year where people are hopeful of more progress. Feb 25 '16 at 8:59
• @DavidRoberts: is this upcoming workshop publicly announced yet, and if so, can you point us to the announcement? Feb 25 '16 at 9:04
• maths.nottingham.ac.uk/personal/ibf/files/kyoto.iut.html Feb 25 '16 at 9:24
• Mainichi Shinbun reports that Mochizuki's proof has been accepted for a special issue of "Publications of RIMS" (PRIMS) by a group of independent referees who have taken 8 years to arrive at their verdict that it is correct. mainichi.jp/articles/20200403/k00/00m/040/093000c Apr 3 '20 at 5:45
In January, Vesselin Dimitrov posted to the arXiv a preprint showing that Mochizuki's work, if correct, would be effective. While this doesn't validate Mochizuki's work it does do a few things:
1. It shows that people are understanding more of the proof.
2. It gives another avenue through which to check whether Mochizuki's work is invalid.
3. It makes Mochizuki's work that much more important.
• Dimitrov's paper treats Mochizuki's IUT ideas and results as a black box, replacing the appeal to a proof in one of Mochizuki's much earlier pre-IUT papers (reference [8] in Dimitrov's paper), so unfortunately it doesn't involve #1 or #2 (in terms of the core material which has not been disseminating; the material in [8] hasn't been related to the difficulties that have arisen). But it very much contributes in the direction of #3, which is of course a very good thing! Feb 25 '16 at 15:47
• @nfdc23 I think you misunderstood my comment. Regarding #2, since (at least in principle) Mochizuki's work is now effective, it may be possible to find counter-examples to some of his claims. Of course, one of the criticisms I've seen of the work is the lack of motivating examples, so this might just be a theoretical rather than practical consideration. Feb 25 '16 at 20:04
• Thanks for clarifying the intent of #2. My understanding from discussing this stuff with Dimitrov is that making explicit the "effective" constants he gets is a daunting task, and that most likely such explicit constants will not be practical (i.e., not suitable for testing against examples). Feb 26 '16 at 5:39
• That has been my experience when making things effective as well. Of course, if Mochizuki's work does check out, I can imagine lots of people will be very interested in accomplishing that "daunting task"! Feb 26 '16 at 17:18
• Does an effective abc conjecture give an effective Mordell conjecture?
– user19475
Dec 18 '17 at 5:08
September 2018: There has been a back-and-forth in 2018 between Shinichi Mochizuki and Yuichiro Hoshi (MoHo) in Kyoto, and Peter Scholze and Jakob Stix (ScSt) in Germany, with ScSt spending a week in Kyoto in March 2018 to confer with MoHo.
ScSt have released a report saying they believe there is a gap in the proof of Corollary 3.12 in IUTT-3, and Mochizuki has posted a reply saying that ScSt are missing some understanding of the background theory. It sounds like ScSt are still skeptical, and at minimum further clarification is needed about proving this corollary.
• The article by Erica Klarreich was really good! Sep 21 '18 at 0:26
• The only criticism of M that I can understand that has real bearing on the ScSt report is that he claims (Comment (Lin) in kurims.kyoto-u.ac.jp/~motizuki/Cmt2018-08.pdf) they assume certain maps between 1-dimensional ordered vector spaces over R (the commutative hexagon at the end) are linear, when they are not. I feel it would be most useful if these maps could be transparently defined so we can see where the problem lies. Sep 21 '18 at 5:26
• A Corollary with a 9-page proof? Is that a record?
– bof
Sep 21 '18 at 23:51
• To a complete outsider, the thinly veiled insults Mochizuki addresses to Scholze and Stix in that response are surprising, to say the least. Sep 22 '18 at 15:17
• See also "Comments on Mochizuki’s 2018 Report" by David Roberts : thehighergeometer.files.wordpress.com/2018/09/… Oct 17 '18 at 22:32
Today (3 April 2020) his papers have been accepted for publication on RIMS journal.
https://www.nature.com/articles/d41586-020-00998-2
• What a disgrace! Apr 3 '20 at 22:52
• @MoziburUllah: Journals don't publish questionable papers and hope that the community sorts itself out. At least no decent journal would willingly choose to do that. Something seriously wrong has happened here, and I can't imagine any editorial board being happy with this. This is not to say that journals won't make mistakes -- that'll of course happen -- just that no journal would/should walk into a situation like this. Apr 4 '20 at 1:33
• @MoziburUllah: You don't really know what you're talking about here. No one in the number theory community believes this result -- apart from acolytes of Mochizuki in Nottingham and Japan. And I don't think this sorry state of affairs has been seen in any of the other breakthroughs in mathematics that have happened over the last 20 years -- many of them quite complicated. Apr 4 '20 at 2:00
• @MoziburUllah: No one wants to be like string theory! And math doesn't have to go down that path. Anyway, I'm done with responding. Apr 4 '20 at 3:25
• On Woit's blog, there is a very interesting comment by Peter Scholze that he has made in the light of the current press coverage. Apr 6 '20 at 14:40
I think that not much has changed since 2012, in terms of general consensus within the mathematical community.
There's some very interesting opinions and notes on the topic (see for example the one by Brian Conrad mentioned in the comments above, or this one by Ivan Fesenko), but not a lot of people seem to have a strong opinion yet as to whether IUT implies Szpiro's conjecture or not.
On the other hand, Mochizuki has two reports on the progress of the verification process, which have a lot of information that you might find helpful.
What's interesting with the Scholze-Stix rebuttal is that (staring from mathematically a long way away) there is a reasonable proof strategy which would fit the Scholze-Stix rebuttal and Mochizuki rejoinder well. The obvious objection to it being right is: well, Scholze-Stix would have seen it, and even if somehow not Mochizuki would have explained it, right? But maybe it is worth posting here, in order that someone explains why it is not what is going on and not correct. So here goes...
Very caricatured, the proof of Mochizuki's Corollary 3.12 is supposed to give two different (complicated) transforms from a set $$S$$ to a set $$T$$, along with inequalities regarding an associated parameter $$f(t)$$, and what comes out for a given $$s\in S$$ is the inequality $$c(x)f(t)\ge d(x)f(t')$$. Here $$x$$ is the arithmetic information which Mochizuki wants to get some control of, and $$c$$ and $$d$$ are ('simple') functions which depend on the transforms chosen but not on the $$s\in S$$.
The obvious way to get something useful out of this is to ask that $$t=t'$$; this is insisting that the Scholze-Stix diagram is commutative. Then you can cancel the $$f(t)$$ factor and get an inequality involving $$x$$. This looks like it's what Mochizuki wants to do (he says the images are the same). One way to get $$t=t'$$ is to choose a couple of spaces equal (this choice fixes the transforms).
Scholze and Stix find that in this case you get a trivial inequality, and claim that anything else which gets $$t=t'$$ is likely to give the same result. Mochizuki agrees, and says that the reason is that in this case his transforms don't do anything interesting (he also says the Scholze-Stix choice is essentially the only way to get $$t=t'$$). This is consistent with Scholze-Stix saying that Mochizuki's use of anabelian geometry doesn't seem to be doing anything.
The other two things Scholze and Stix simplify are 'polymorphism' to morphism, which in this caricature means they consider one $$s\in S$$ as above, where Mochizuki wants to consider all $$s\in S$$ (polymorphism). And averaging over the result, which is meaningless if you have only one morphism.
But one can also work as follows. Consider all $$s\in S$$, and you get a collection of inequalities $$c(x)f(t)\ge d(x)f(t')$$, where $$t$$ and $$t'$$ are images of $$s$$ under Mochizuki's two transforms. If as $$s$$ ranges over $$S$$, you get the same collection of elements appearing as $$t$$ and as $$t'$$, just permuted, then this is exactly what Mochizuki means by saying the polymorphism images are the same (as sets, even though the individual morphism images aren't the same). In this case, when you average the collection of inequalities, as Mochizuki wants to do, you get an inequality which is useful: the average of the $$f(t)$$ equals the average of the $$f(t')$$, because they're the same sum permuted, so you can cancel it and get $$c(x)\ge d(x)$$, this time (Mochizuki claims) with different $$c$$ and $$d$$ and hence meaningful content.
This is entirely consistent with Scholze-Stix saying that polymorphisms and averages don't appear to play a role - in this caricature, they would be playing no role in 400+ pages, except exactly at this point.
• Has someone considered sharing this answer with Scholze/Stix? Nov 14 '18 at 20:11
• I was hoping someone expert would point out quickly why it is wrong..! Nov 15 '18 at 22:50
• Having asked an expert, it seems that at best Mochizuki's proof isn't clear enough to decide whether the above is part of the strategy. More likely, the above is simply nonsense (or, a coincidental resemblance to a proof strategy that's not what's intended). Nov 19 '18 at 21:03
I just read on Google+ that the paper will be published in 2018 in a Japanese journal whose editor-in-chief is Mochizuki himself. See https://plus.google.com/+johncbaez999/posts/DWtbKSG9BWD
• plus.google.com/+johncbaez999/posts/DWtbKSG9BWD Dec 17 '17 at 19:42
• Not to mention commentary by Peter Woit at Not Even Wrong, by Lieven le Bruyn on G+, and others. Dec 17 '17 at 22:29
• Apparently the papers have not been accepted for publication. It's unclear how that claim originated. Dec 24 '17 at 22:47
|
2021-12-07 19:33:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 31, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6448363065719604, "perplexity": 728.0508320945863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363405.77/warc/CC-MAIN-20211207170825-20211207200825-00029.warc.gz"}
|
https://open.kattis.com/contests/nar20practice15/problems/methodicmultiplication
|
# Problem MMethodic Multiplication
Giuseppe Peano, public domain
After one computer crash too many, Alonso has had enough of all this shoddy software and poorly written code! He decides that in order for this situation to improve, the glass house that is modern programming needs to be torn down and rebuilt from scratch using only completely formal axiomatic reasoning. As one of the first steps, he decides to implement arithmetic with natural numbers using the Peano axioms.
The Peano axioms (named after Italian mathematician Giuseppe Peano) are an axiomatic formalization of the arithmetic properties of the natural numbers. We have two symbols: the constant $0$, and a unary successor function $S$. The natural numbers, starting at $0$, are then $0$, $S(0)$, $S(S(0))$, $S(S(S(0)))$, and so on. With these two symbols, the operations of addition and multiplication are defined inductively by the following axioms: for any natural numbers $x$ and $y$, we have
\begin{align*} x + 0 & = x & x \cdot 0 & = 0 \\ x + S(y) & = S(x + y) & x \cdot S(y) & = x \cdot y + x \end{align*}
The two axioms on the left define addition, and the two on the right define multiplication.
For instance, given $x = S(S(0))$ and $y = S(0)$ we can repeatedly apply these axioms to derive
\begin{align*} x \cdot y & = S(S(0)) \cdot S(0) = S(S(0)) \cdot 0 + S(S(0))\\ & = 0 + S(S(0)) = S(0 + S(0)) = S(S(0 + 0)) = S(S(0)) \end{align*}
Write a program which given two natural numbers $x$ and $y$, defined in Peano arithmetic, computes the product $x \cdot y$.
## Input
The input consists of two lines. Each line contains a natural number defined in Peano arithmetic, using at most $1\, 000$ characters.
## Output
Output the product of the two input numbers.
Sample Input 1:
S(S(0))
S(S(S(0)))
Sample Output 1:
S(S(S(S(S(S(0))))))
Sample Input 2:
S(S(S(S(S(0)))))
0
Sample Output 2:
0
CPU Time limit 2 seconds
Memory limit 1024 MB
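A minimal solution sketch (my own, not part of the Kattis page): a Peano numeral is just a chain of `S(...)` wrapped around `0`, so one can count the successors, multiply the counts, and rebuild the numeral. This gives the same result as repeatedly applying the axioms above, and the inputs are short enough (at most 1000 characters each) that the product numeral stays small.

```python
import sys

def peano_to_int(s: str) -> int:
    # Each "S(" contributes one successor; the innermost symbol is 0.
    return s.count("S(")

def int_to_peano(n: int) -> str:
    # Wrap 0 in n successor applications.
    return "S(" * n + "0" + ")" * n

def main() -> None:
    x_str, y_str = sys.stdin.read().split()
    print(int_to_peano(peano_to_int(x_str) * peano_to_int(y_str)))

if __name__ == "__main__":
    main()
```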
http://physics.stackexchange.com/tags/elementary-particles/new
# Tag Info
9
Well, it depends how you define "distinct fundamental particle". If you insist that Wigner's classification is what defines a particle, i.e. "particle = irreducible unitary representation of the Lorentz/Poincare group", then the photon is two particles, as you say. But, more commonly, we do not look at the particles like this - particles arise as the ...
1
All the other answers that "no, there is no triggering event, it just happens, quantum mechanics is like that" are perfectly right. But let's look at the experimental evidence for these answers. Yes indeed, there is considerable experimental evidence that heavily falsifies the idea that there is a triggering event. This evidence is the statistical ...
0
Consider a particle in a box, but where the box has a thin wall, and the energy level outside the box is lower than that inside. (This is, e.g., a neutron in an unstable heavy nucleus.) Follow the development of this wave in time. It will tunnel out eventually to the lower energy state and propagate away. From this, you can see that the decay is always ...
2
An " intuitive" approach is to consider that in QM, the exact location of particles doesn't exist. They're all probability waves, and you never have a 100% chance to find a particle in exactly one place. So for unstable nuclear atoms, the probability function of the protons and neutrons are smeared out even further. There's a significant non-zero ...
13
As the other answers state, the individual nuclei have a probability of decay and this happens randomly, as they sit there. You are correct though in wondering about a trigger, because at the atomic level that is exactly what happens with lasing, induced-emission = induced-decay. Spontaneous decay is random, controlled by the quantum mechanical individual ...
10
There really is none. Unstable elements (and unstable elementary particles) can decay into a less energetic state. However, each kind of decay depends on a quantum mechanical process, this is tunneling for $\alpha$, a virtual $W^\pm$ for $\beta$ or a transition from one nuclear shell to another for $\gamma$. Now these underlying processes can be strongly ...
6
Nothing happens! It's random! The nucleus is in an unstable state, and unstable states have a certain small probability to decay within a given amount of time (how small depends on the nucleus). There's not much else to it! Sometimes decay can be stimulated but the type of decay you're talking about is truly random.
0
Is this related to electroweak symmetry breaking and the Higgs field? Yes. There is a particular mixture of the $W^0$ and $B$ bosons that propagates freely in the Higgs field condensate; this freely propagating state is the photon. Why are mesons (hadrons) mentioned?? There was a time when the weak intermediate vector bosons were referred to as ...
2
Atoms are a small building blocks of matter, but not the smallest. Atoms are made up of electrons, protons, and neutrons. Electrons are (as far as anyone has been able to tell) honest-to-god point particles, i.e. they have no further internal constituents (I'm not saying they don't have spin or that they can't be described by wave functions, etc., just ...
1
Are you arguing that because particles come in pairs - particle and anti-particle - then there should be an even number of massless particles? If so, the argument fails because the photon is its own antiparticle i.e. there is no antiphoton.
0
Electrons, protons, neutrons as well as their antiparticles are able to receive and to emit photons. The photon exchange is possible between each of this particles and antiparticles and this does not change the properties of photons. Once emitted photons are the linear propagation of energy in the form of a oscillating electric and a oscillating magnetic ...
https://www.perfectforms.com/forum/reply/re-fix-textinput-field/
# Re: Fix TextInput Field
Home Forum General Fix TextInput Field Re: Fix TextInput Field
#4783
ijobling
Participant
Consider the script you have set up to do this. If you have not set any conditions and you have done this on the ‘form is opened’ or similar trigger, then it will run every time.
Look at adding in a condition before these actions such that it will not run every time the form is opened. (ie set a condition so this only runs when the form is in a specific ‘stage’ ..probably in this case your first stage)
Have a look at the tutorials where these methods (and others you are likely to require in the future as well) are explained:
/Documentation/complete_tutorial/html/perfectforms_tutorials.htm
https://www.studyadda.com/ncert-solution/decimal_q24/541/44988
# 24) Express as kg using decimals. (a) 2 g (b) 100 g (c) 3750 g (d) 5 kg 8 g (e) 26 kg 50 g
We know that $1000\,g = 1\,kg$, so $1\,g = \frac{1}{1000}\,kg$.
(a) $2\,g = \frac{2}{1000}\,kg = 0.002\,kg$
(b) $100\,g = \frac{100}{1000}\,kg = 0.100\,kg$
(c) $3750\,g = \frac{3750}{1000}\,kg = 3.750\,kg$
(d) $5\,kg\;8\,g = 5\,kg + \frac{8}{1000}\,kg = 5.008\,kg$
(e) $26\,kg\;50\,g = 26\,kg + \frac{50}{1000}\,kg = 26.050\,kg$
https://www.yaclass.in/p/mathematics-cbse/class-7/data-handling-1485/chance-and-probability-2302/re-e6ec439c-fc6d-497a-bc14-76460fc1007b
What is the probability that your friend at school has a birthday that falls on a Saturday?
The probability that your friend's birthday falls on a Saturday is $\frac{i}{i}$.
[Note: Submit the answer in fraction without solving].
https://themosekblog.blogspot.com/
Tuesday, November 6, 2018
Reseller in China
shanshu.ai (Cardinal Operations) is the official reseller of MOSEK in China. Customers in China interested in acquiring a MOSEK license are welcome to visit
for details.
Friday, August 31, 2018
Conic modeling cheatsheet
As a supplement to the MOSEK Modeling Cookbook, here is a quick reference guide to some useful conic models (click image for PDF):
Friday, August 17, 2018
Solving SDP with millions of matrix variables
Is it feasible in practice to solve semidefinite optimization problems with a huge number of matrix variables?
We recently received a problem from a structural engineering application with approximately the following parameters:
• 1 500 000 three-dimensional matrix variables,
• 750 000 three-dimensional rotated quadratic cones,
• $8\cdot 10^6$ scalar variables, $15\cdot 10^6$ linear constraints, $45\cdot 10^6$ nonzeros.
On a DELL PowerEdge R730 server with 2 Xeon E5-2687W v4 3.0GHZ the optimal solution is found in about 161 minutes on 24 threads using the latest MOSEK 8.1.0.59 with memory peaking at about 60GB. Due to the nature of the problem we disabled the linear dependency check and otherwise used all standard parameter settings.
Friday, June 29, 2018
.NET Core support
From release 8.1.0.56 we initialize support for .NET Core, the cross-platform implementation of .NET.
MOSEK for .NET Core is distributed as a platform-independent NuGet package, which can be downloaded directly from our website. See the documentation for .NET APIs for installation instructions.
Friday, June 22, 2018
MOSEK at ISMP 2018
Here is a complete schedule of our talks at ISMP 2018 in chronological order:
| Speaker | Title | Session | Time | Room |
|---|---|---|---|---|
| Sven Wiese | The Mixed-integer Conic Optimizer in MOSEK | Mixed-Integer Conic Optimization | Mon, 5:00PM | A. DURKHEIM, Bld. A |
| Henrik Friberg | Projection and presolve in MOSEK: exponential and power cones | Theory and algorithms in conic linear programming 1 | Tue, 8:30AM | S. LC5, Bld. L |
| Joachim Dahl | Extending MOSEK with exponential cones | Theory and algorithms in conic linear programming 2 | Wed, 8:30AM | S. 20, Bld. G |
| Erling Andersen | MOSEK version 9 | Progress in Conic and MIP Solvers | Wed, 3:15PM | A. PITRES, Bld. O |
| Michał Adamaszek | Exponential cone in MOSEK: overview and applications | Relative Entropy Optimization I | Fri, 3:15PM | S. LC5, Bld. L |
Check the program for any last-minute changes.
The session on Mixed-Integer Conic Optimization (Mon, 5:00PM) is organized by Sven Wiese. The full program of that session is:
• Lucas Letocart, Exact methods based on SDP for the k-item quadratic knapsack problem
• Tristan Gally, Knapsack Constraints over the Positive Semidefinite Cone
• Sven Wiese, The Mixed-integer Conic Optimizer in MOSEK
Tuesday, June 12, 2018
Elementary intro to infeasibility certificates
We occasionally receive support questions asking about the meaning and practical use of infeasibility certificates, usually when the user expected the problem to be feasible. While an infeasibility certificate is a well-defined mathematical object based on Farkas' lemma, it is not intuitive for everyone how to use it to address the basic question of "What is wrong with my problem formulation?". We will try to explain this on a very simple example.
In this post we don't even consider optimization, but restrict to elementary linear algebra. Suppose we want to find a solution to a system of linear equations:
$$\begin{array}{llllllllllll} & 2x_1 & - & x_2 & + & x_3 & - & 3x_4 & + & x_5 & = & 1, \\ & & & x_2 & -& 2x_3 & & & +& x_5 & = & -1,\\ & 3x_1 & & & - & x_3 & - & x_4 & & & = & 2, \\ & 2x_1 & + & x_2 & - & 3x_3 & - & 3x_4 & + & 3x_5 & = & -0.5. \end{array}$$
This system of linear equations is in fact infeasible (has no solution). One way to see it is to multiply the equations by coefficients given on the right-hand side and add them up:
$$\begin{array}{lllllllllllll} & 2x_1 & - & x_2 & + & x_3 & - & 3x_4 & + & x_5 & = & 1 & / \cdot (-1) \\ & & & x_2 & -& 2x_3 & & & +& x_5 & = & -1 & / \cdot (-2) \\ & 3x_1 & & & - & x_3 & - & x_4 & & & = & 2 & / \cdot 0 \\ & 2x_1 & + & x_2 & - & 3x_3 & - & 3x_4 & + & 3x_5 & = & -0.5 & / \cdot 1 \\ \hline & & & & & & & & & 0 & = & 0.5. & \end{array}$$
We get an obvious contradiction, which proves the system has no solution. The vector
$$y = [-1, -2, 0, 1]$$
of weights used above is therefore a proof (certificate) of infeasibility. MOSEK produces such a certificate automatically. Here is a simple script in MATLAB that computes precisely that vector:
As output we get:
• General theory guarantees that the dual variable $y$ is a convenient place to store the certificate.
• When a system of equations has no solution then an appropriate certificate is guaranteed to exist. This is a basic variant of Farkas' lemma, but it should be intuitively clear: your favorite method of solving linear systems (for instance Gaussian elimination) boils down to taking successive linear combinations of equations, a process which ends up either with a solution or with an "obvious" contradiction of the form $0=1$.
Using the certificate to debug a model
In the example above $y_3=0$ which means that the third equation does not matter: infeasibility is caused already by some combination of the 1st, 2nd and 4th equation. In many practical situations the infeasibility certificate $y$ will have very few nonzeros, and those nonzeros determine a subproblem (subset of equations) which alone cause infeasibility. The user can configure MOSEK to print an infeasibility report, which in our example will look like:
This report is nothing else than a listing of equations with nonzero entries in $y$, and $y$ is the difference between Dual lower and Dual upper. Analyzing this set (which hopefully is much smaller than the full problem) can help locate a possible modeling error which makes the problem infeasible.
To conclude, let us now phrase the above discussion in matrix notation. A linear equation system
$$Ax=b$$
is infeasible if and only if there is a vector $y$ such that
$$A^Ty = 0 \quad \mbox{and}\quad b^Ty \neq 0.$$
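To make the condition concrete, here is a small check in plain Python/numpy (my own sketch, not the MOSEK API and not the original MATLAB script) verifying that the vector $y = [-1, -2, 0, 1]$ from the example above is indeed a Farkas certificate:

```python
import numpy as np

# Coefficient matrix and right-hand side of the four example equations.
A = np.array([
    [2, -1,  1, -3, 1],
    [0,  1, -2,  0, 1],
    [3,  0, -1, -1, 0],
    [2,  1, -3, -3, 3],
], dtype=float)
b = np.array([1, -1, 2, -0.5])

# Candidate infeasibility certificate.
y = np.array([-1, -2, 0, 1], dtype=float)

# Farkas: A^T y = 0 and b^T y != 0 proves that Ax = b has no solution.
print(A.T @ y)   # [0. 0. 0. 0. 0.]
print(b @ y)     # 0.5 (nonzero)
```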
The situation is analogous for linear problems with inequality constraints. We will look at an example of that in another blog post.
Thursday, May 31, 2018
Perspective functions (and Luxemburg norms)
If $\varphi:\mathbb{R}_+\to\mathbb{R}$ is a convex function then its perspective function $\tilde{\varphi}:\mathbb{R}_+\times\mathbb{R}_+\to\mathbb{R}$ is defined as
$$\tilde{\varphi}(x,y) = y\varphi(\frac{x}{y}).$$
Moreover, under these conditions, the epigraph of the perspective function
$$\left\{(t,x,y)~:~t\geq y\varphi(\frac{x}{y})\right\}$$
(or, to be precise, its appropriate closure) is a convex cone. Here are some familiar examples:
• $\varphi(x)=x^2$. Then $\tilde{\varphi}(x,y)=\frac{x^2}{y}$, familiar to some as quad-over-lin. The epigraph of $\tilde{\varphi}$, described by $ty\geq x^2$, is the Lorentz cone (rescaled rotated quadratic cone).
• $\varphi(x)=x^p$. Then $\tilde{\varphi}(x,y)=\frac{x^p}{y^{p-1}}$ and the epigraph of $\tilde{\varphi}$, described equivalently by $t^{1/p}y^{1-1/p}\geq |x|$ is the 3-dimensional power cone (with parameter $p$).
• $\varphi(x)=\exp(x)$. Then the epigraph $t\geq y\exp(x/y)$ is the exponential cone.
• $\varphi(x)=x\log(x)$. Then the epigraph $t\geq x\log(x/y)$ is the relative entropy cone.
We bring this up in connection with a series of blogposts by Dirk Lorenz here and here. For a monotone increasing, convex, nonnegative $\varphi$ with $\varphi(0)=0$ he defines a norm on $\mathbb{R}^n$ via
$$\|x\|_\varphi=\inf\left\{\lambda>0~:~\sum_i\varphi\left(\frac{|x_i|}{\lambda}\right)\leq 1\right\},$$
and we can ask when the norm bound $t\geq \|x\|_\varphi$ is conic representable. The answer is: if the epigraph of the perspective function $\tilde{\varphi}$ is representable, then so is the epigraph of $\|\cdot\|_\varphi$. The reason is that the inequality
$$\sum_i\varphi\left(\frac{|x_i|}{t}\right)\leq 1$$
is equivalent to
$$\begin{array}{ll} w_i\geq |x_i| & i=1,\ldots,n,\\ s_i\geq t\varphi\left(\frac{w_i}{t}\right) & i=1,\ldots,n,\\ t=\sum_i s_i. & \end{array}$$
That covers for example $\varphi(x)=x^2$, $\varphi(x)=x^p$ ($p>1$), $\varphi(x)=\exp(x)-1$, $\varphi(x)=x\log(1+x)$ and so on.
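As a quick numerical companion (my own sketch, not from the post), the Luxemburg norm can also be evaluated directly by bisection on $\lambda$, since the sum is non-increasing in $\lambda$; here for $\varphi(x)=\exp(x)-1$:

```python
import numpy as np

def luxemburg_norm(x, phi, bisection_steps=100):
    """||x||_phi = inf{ lam > 0 : sum_i phi(|x_i|/lam) <= 1 }.

    Assumes phi is increasing and convex with phi(0) = 0, so the
    left-hand side is non-increasing in lam and bisection applies."""
    x = np.abs(np.asarray(x, dtype=float))
    if not x.any():
        return 0.0
    g = lambda lam: np.sum(phi(x / lam)) - 1.0
    lo = hi = float(x.max())
    while g(hi) > 0:          # grow hi until the constraint holds
        hi *= 2.0
    while g(lo) <= 0:         # shrink lo until the constraint fails
        lo /= 2.0
    for _ in range(bisection_steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return hi

phi = lambda t: np.exp(t) - 1.0   # the phi used in the figure below
print(luxemburg_norm([0.3, -0.1, 0.2], phi))
```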
Figure: smallest $(\exp(x)-1)$-Luxemburg-norm disk containing a random set of points.
https://blog.chriszimmerman.net/page2/
### The One Benefit of Software Estimation
Software estimates are mostly a waste of time. Here are some observations I have made in my career that have driven me to this conclusion:
### Build, Clean, and Rebuild in Visual Studio
In the .NET environment, the process of compiling source code into output files is called building. Two common ways of building are via an IDE, such as the Visual Studio IDE (or the recent Rider IDE from JetBrains), and via the command line using devenv.exe.
### The Dreyfus Model and "It depends"
As a software developer, I've noticed that experienced developers will often start their answer to a question directed at them with "It depends."
### Release With Less Stress
I’ve worked in places where software releases can be an event ridden with stress and anxiety. The cause of this stress comes from several sources, such as pressure to meet a release date, or an issue creeping up in production that was not caught during testing. I’d like to share with you a few steps that you can take to ensure that your release process is less stressful and not such a big deal.
https://tex.stackexchange.com/questions/114256/missing-final-brackets-in-one-equation-and-in-other-in-the-equations
# Missing final brackets (in one equation { and in other - ]) in the equations
This is my first question on tex.stackexchange, so the formatting will be terrible.
I've got a problem with three lengthy equations - the closing brackets, in all three formulas, are not displayed.
Technical info - MikTex x64 (2.9), latest TeXnicCenter (Beta 1 I think it is), Windows 8 x64. In TeXnicCenter I have selected XeLaTeX -> PDF, as it is the optimal way for display of my language.
My document settings are following:
\documentclass[12pt,a4paper]{article}
\usepackage{graphicx}
\usepackage{polyglossia}
\usepackage{xltxtra}
\usepackage{xunicode}
\usepackage{units}
\usepackage{amsmath}
\usepackage{pstricks}
\usepackage[top=2.5cm, left=3cm, bottom=2.5cm, right=2.5cm]{geometry}
\usepackage{setspace}
\onehalfspacing
\setmainfont{Times New Roman}
\setdefaultlanguage{latvian}
\setotherlanguages{english, russian}
\begin{document}
And the equations are:
$\begin{split} F'=\delta m\left\{f\frac{M_1l}{r^3}\left[3\cos^2\phi\cos^2(\lambda-D)-1\right]+f\frac{M_2l}{R^3_1}\left(3\cos^2\phi\cos^2\lambda-1\right)-\\-\frac{1}{2}f\frac{M_2r_1}{R^3_1}\cos\phi\cos\left(\lambda-D\right)\left(1+3\cos2D\right)-\frac{3}{2}f\frac{M_2r_1}{R^3_1}\cos\phi\sin\left(\lambda-D\right)\sin2D\right\} \end{split}$
$\begin{split} F^n=\delta m\left\{-\frac{3}{2}f\frac{M_1l}{r^3}\sin 2\phi\cos^2\left(\lambda-D\right)-\frac{3}{2}f\frac{M_2l}{R^3_1}\sin 2\phi\cos^2\lambda-\\-\frac{1}{2}f\frac{M_2r_1}{R^3_1}\sin\phi\left[3\sin\left(\lambda-D\right)\sind 2D-\cos\left(\lambda-D\right)\left(1_3\cos 2D\right)\right]\right\} \end{split}$
$\begin{split} F^m=\sigma m\left[-\frac{3}{2}f\frac{M_1l}{r^3}\cos\phi\sin 2\left(\lambda-D\right)-\frac{3}{2}f\frac{M_2l}{R^3_1}\cos\phi\sin 2\lambda+\\+\frac{1}{2}f\frac{M_2}{R^3_1}r_1\sin\left(\lambda-D\right)\left(1+3\cos 2D\right)+\frac{3}{2}f\frac{M_2}{R^3_1}r_1\cos\left(\lambda-D\right)\sin 2D\right] \end{split}$
I've made a screenshot from TeXworks (PDF mode) with the problem -
And I've also posted my compile log to the pastebin - http://pastebin.com/CEZ2EUjU
And what I have noticed - each equation has added me, approximately, 12 new errors.
• Welcome to TeX.SX! Usually, we don't put a greeting or a “thank you” in our posts. While this might seem strange at first, it is not a sign of lack of politeness, but rather part of our trying to keep everything very concise. Accepting and upvoting answers is the preferred way here to say “thank you” to users who helped you. – Marco Daniel May 14 '13 at 16:11
• You can't have \left in one equation and \right in a different one. – egreg May 14 '13 at 16:13
You can't use \left in one line of a split and \right in another one. You should also use an align* environment (and don't use redundant \left and \right); the big delimiters must be set by hand.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
F'&=\delta m\biggl\{f\frac{M_1l}{r^3}\bigl[3\cos^2\phi\cos^2(\lambda-D)-1\bigr]
+f\frac{M_2l}{R^3_1}(3\cos^2\phi\cos^2\lambda-1)-{} \\
&\qquad{}-\frac{1}{2}f\frac{M_2r_1}{R^3_1}\cos\phi\cos(\lambda-D)(1+3\cos 2D)
-\frac{3}{2}f\frac{M_2r_1}{R^3_1}\cos\phi\sin(\lambda-D)\sin2D\biggr\} \\[2ex]
F^n&=\delta m\biggl\{-\frac{3}{2}f\frac{M_1l}{r^3}\sin 2\phi\cos^2(\lambda-D)
-\frac{3}{2}f\frac{M_2l}{R^3_1}\sin 2\phi\cos^2\lambda-{} \\
&\qquad{}-\frac{1}{2}f\frac{M_2r_1}{R^3_1}\sin\phi\bigl[3\sin(\lambda-D)\sin2D-
\cos(\lambda-D)(1_3\cos 2D)\bigr]\biggr\} \\[2ex]
F^m&=\sigma m\biggl[-\frac{3}{2}f\frac{M_1l}{r^3}\cos\phi\sin 2(\lambda-D)
-\frac{3}{2}f\frac{M_2l}{R^3_1}\cos\phi\sin 2\lambda+{} \\
&\qquad{}+\frac{1}{2}f\frac{M_2}{R^3_1}r_1\sin(\lambda-D)(1+3\cos 2D)
+\frac{3}{2}f\frac{M_2}{R^3_1}r_1\cos(\lambda-D)\sin 2D\biggr]
\end{align*}
\end{document}
• First of all I did an align*, so the three equals signs can be aligned to each other. It's not good to stack \[...\] formulas.
• I removed all the inner \left and \right that do nothing except adding unwanted spaces. However, I increased the size of a [...] pair to make clearer their correspondence (it's in the second equation; there's no need for doing this in the first formula)
• In order to make clear that each second line is a continuation, I added a \qquad of space to push it to the right of the alignment point.
• Before or after the "isolated" minus or plus signs, I put {} in order to get correct spacing, otherwise they would not work as binary operation because of how TeX determines the difference between $-1$ and $2-1$.
• Most important, I set by hand the size of the main delimiters, because so you have full control over them even if they are in different lines.
As a side note, I wouldn't repeat the operation sign at the break point; it's a bad habit of Russian typography, that's not used much in Western countries. I find it distracting and ambiguous: in the first equation is it "minus minus" that makes "plus"? It isn't, I know, but why repeating it? The reader finds the break, goes on the next line where it's clear that the formula continues.
• Interesting, I'll have to look into it. Your result looks way better than mine, albeit I do not understand sever parts of it. – user30709 May 14 '13 at 16:33
• Thanks, I've read the edited answer and now it is clear how I can improve my equations. I have a question on $...$ - what do they do, in fact? I just put them there because TeXnicCenter does so if I use the 'Formula' button. As for the operation sign at the break point - it is USSR legacy that I have to stick to, as it is how professors here write it as well. – user30709 May 14 '13 at 17:32
The -\\- parts are main source of the problem. You should add closing \right's, in particular \right., before \\, and opening \left's after it.
• This works because, by chance, the second part of the formula has the same size as the first one. It wouldn't give the same size of the delimiters in other cases. Plus it's wrong to use three consecutive $...$ environments. – egreg May 14 '13 at 16:33
• @egreg Certainly. And, additionally, instead of (1_3\cos 2D) should be (1-3\cos 2D). (And I think your solution should be accepted). – Przemysław Scherwentke May 14 '13 at 17:05
https://mathstodon.xyz/@ZevenKorian
You’d think of something, ask your friends, and they would be like “Gosh, I don’t know” and then you’d be like “Oh, well, guess that’s a thing I’ll never know” and then everyone just proceeds with their fucking day.
So $$n! \neq m^k$$ for integers $$n, m, k > 1$$ because by Bertrand's postulate there's a prime between $$\lceil n/2 \rceil$$ and $$n$$, right?
But invoking Bertrand's postulate seems like a big rock for this.
When you see $$\cong\mathbb{K}^{nm}$$ and you immediately think "ok, there has to be a matrix somewhere"
(The natural lifting of $$\omega \in \Lambda^1(M)$$ is $$\mathcal{L}_1\omega$$, where $$1: TM \longrightarrow TM$$ is the identity map.)
I spent a ridiculous amount of time trying to find the natural lifting of $$dz-x_1\,dy^1+\dots+x_n\,dy^n$$
Emojo Theatre proudly presents:
The Mathematicians
Sup! I wrote so much math that they had to stop naming things after me so the other mathematicians could get some shoutouts too.
I revolutionised mathematics and was a pompous twit about telling others I already scooped them.
I had to pretend to be a dude so they'd let me in.
I really like thinking about rabbit fornication, yo.
I really hate beans. Love casual ocean murder, though.
Watch "A Different Way to Solve Quadratic Equations" on YouTube - A Different Way to Solve Quadratic Equations: youtu.be/ZBalWWHYFQc
Here we go again
You have to love how this paper basically forced the fᵤ and fᵥ notation for partial derivatives just by saying
no, I cannot derivate on any given manifold
but you know what I can do? this
*deep fried df(∂u) and df(∂v)*
Oh boy the explanation of what the Thom-Boardman symbols are is rather complicated here. I think it's time to whip out my good ol' friend Gibson...
my notes are very sloppy in general lol you can tell they are /by/ me and /for/ me as I skip everything I remember from last year and carefully explain everything I didn't understand the first time I read it...
whatever gets into the final thesis is going to need a serious revision but I hope I can explain the concepts more clearly by then
I'm not totally into ᵣJᵏ(n,p) as notation because the ᵣ makes it hard to handwrite the symbol and the author of this book keeps getting the ᵣ everywhere else lol because of course latex assumes it's a subindex for whatever you wrote /before/, not /after/
non mathematicians: i hate math because i hate numbers
me, a mathematician: what the frick is a number
Another suggestion is g.co/kgs/fktbUb
This one seems to go deeper and (I assume) is more complete as a reference. It uses techniques from sheaf theory.
Going to make this into a thread in case someone else is interested. I found this book which seems to be an introduction to the subject written *for* a particular course. Also, CC BY-NC-SA (kudos to Jiří Lebl!)
There is a thread on mathoverflow (mathoverflow.net/questions/313) but I ask here anyway in case someone is familiar with the subject
Do you know a book on several complex variables I can use for reference? Basically to check when the rules IRⁿ can be applied to Cⁿ (e.g. Hadamard's lemma apparently holds in Cⁿ)
The proofs being complicated is not a problem as I don't think I will read them; I've been told several complex variables is a tough branch of complex analysis.
I like #blackfriday . I get emails from newsletters I almost forgot to unsubscribe from
http://mathoverflow.net/questions/26250/how-to-determine-kernels-of-maps-between-algebraic-k-1-groups
# How to determine kernels of maps between algebraic K_1-groups
Suppose we have a ring homomorphism $\varphi: R \to S$, say an injection (e.g. coming from an injection $H \to G$ of finite groups and $R=\mathbb{Z}_p[H]$, $S=\mathbb{Z}_p[G]$); what can be said about the kernel of $K_1(\varphi)$? Since I'm after all interested in Iwasawa algebras, let's suppose $R, S$ are semilocal; by Vaserstein the canonical maps $i_R: R^\times\to K_1(R),~ i_S: S^\times\to K_1(S)$ are surjective and we have a commutative diagram $i_S\circ \varphi=K_1(\varphi)\circ i_R.$
There certainly are kernels sometimes: Let $H$ be abelian, $G=H \rtimes C_2$ the semidirect product, where $C_2$ acts by inversion. Since $i_S$ factors through the abelianization of $S^{\times}$, we see that $2H$ is in the kernel.
https://www.physicsforums.com/threads/algebra-problem-solving-by-rearranging.593201/
# Algebra Problem, solving by rearranging
1. Apr 3, 2012
### PotentialE
1. The problem statement, all variables and given/known data
Let $x$ and $y$ be real numbers with $x+y=1$ and $(x^2 + y^2)(x^3 + y^3) = 12$. What is the value of $x^2 + y^2$?
2. Relevant equations
Sum of Cubes: $(a^3 + b^3) = (a+b)(a^2-ab+b^2)$
3. The attempt at a solution
I plugged in the sum of cubes to the equation that equals 12 to get:
$(x^2 + y^2)(x+y)(x^2-xy+y^2) = 12$
and since $(x+y) = 1$,
$(x^2 + y^2)(x^2-xy+y^2) = 12$
and therefore,
$(x^2 + y^2)(x^2-xy+y^2) = (x^2 + y^2)(x^3 + y^3)$
cancellation:
$(x+y)(x^2-xy+y^2) = (x^3 + y^3) = 12$
then I plugged $(x^3 + y^3)$ for 12 to the original equation:
$(x^2 + y^2)(x^3 + y^3) = (x^3 + y^3)$
and that means that $(x^2 + y^2) = 1$, right?
2. Apr 3, 2012
### PotentialE
The answer key that I just found said that the answer is 3...
Where did I go wrong?
3. Apr 3, 2012
### SammyS
Staff Emeritus
How did you arrive at the above line ?
4. Apr 3, 2012
### PotentialE
what I meant was:
and therefore,
$(x^2 + y^2)(x+y)(x^2-xy+y^2) = (x^2 + y^2)(x^3 + y^3)$
cancellations:
$(x^2 + y^2)(x^2-xy+y^2) = (x^2 + y^2)(x^3 + y^3) = 12$
$(x^2-xy+y^2) = (x^3 + y^3)$
I plugged in the sum of cubes to the equation that equals 12 to get:
$(x^2 + y^2)(x+y)(x^2-xy+y^2) = 12$
and since $(x+y) = 1$,
$(x^2 + y^2)(x^2-xy+y^2) = 12$
and therefore,
$(x^2 + y^2)(x^2-xy+y^2) = (x^2 + y^2)(x^3 + y^3)$
cancellation:
$(x^2-xy+y^2) = (x^3 + y^3)$
Now I realize that the last part is wrong, but where do I go from here?
5. Apr 4, 2012
### e^(i Pi)+1=0
How did you get from
$(x^2 + y^2)(x^2-xy+y^2) = 12$
to
$(x^2 + y^2)(x+y)(x^2-xy+y^2) = (x^2 + y^2)(x^3 + y^3)$ ?
I'm not saying it's wrong, I just don't follow it.
Anyway, after several pages of algebra, I also arrived at $x^2+y^2=1$
Are you sure you're looking at the right answer?
$x=1\pm\frac{1}{\sqrt{2}},\; 1\pm\frac{\sqrt{5}i}{\sqrt{3}}$
edit: I plugged it in to the original equation and it doesn't work, so I must have made a mistake somewhere. Odd that we both got the same answer..
Last edited: Apr 4, 2012
6. Apr 4, 2012
### RoshanBBQ
Let
$$A = x^2 + y^2$$
$$(x^2 + y^2)(x^3 + y^3) = 12$$
$$A(x+y) (x^2-x y+y^2)=12$$
$$A(A-x y)=12$$
$$x+y = 1 \rightarrow (x+y)^2 = 1 \rightarrow x^2+y^2+2xy = 1 \rightarrow xy = \frac{1-x^2-y^2}{2}=\frac{1-A}{2}$$
$$A(A-\frac{1-A}{2})=12$$
$$A(\frac{3}{2}A-\frac{1}{2})=12$$
$$3A^2-A-24=0$$
You get two answers, but one is negative. We know the answer must be positive. The answer is 3.
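As a side check (my own addition, not part of the original thread), the conclusion that the real solutions give $x^2+y^2=3$ is easy to confirm with sympy after substituting $y = 1 - x$:

```python
import sympy as sp

x = sp.symbols('x')
y = 1 - x                                   # impose x + y = 1
eq = sp.expand((x**2 + y**2) * (x**3 + y**3) - 12)

for r in sp.solve(eq, x):                   # degree-4 polynomial in x
    if r.is_real:                           # keep the two real roots
        print(sp.simplify(r**2 + (1 - r)**2))   # prints 3 for both
```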
7. Apr 4, 2012
### verty
This is an olympiad type question, you need to play with it. Start with x = 1-y, then get formulas for (x^2 + y^2) and (x^3 + y^3). Look to make a substitution.
8. Apr 4, 2012
### PotentialE
thanks for your help! that makes perfect sense
http://phys114115lab.capuphysics.ca/App%20I%20-%20formal%20report/rpt%20conclusion.htm
## The Formal Report
Course Support Lab Contents
Conclusion
Description
A conclusion is a short section (often only one to three sentences) summarizing the major results of the report, usually answering the experiment's objective. If values were determined in the report, then this section clearly summarizes these results along with how accurate they are (the uncertainty of the results). It usually compares the report results with expected outcomes.
PhysLab Specifications
A conclusion summarizes the major results of the report along with context to what is being reported. If relevant, the concluding results are compared to expected values (using a percentage or absolute difference). The conclusion must be self-referencing. Any values quoted must be presented in proper final form. It is not acceptable to combine the discussion with the conclusion into one section.
A self-referencing conclusion implies that one can read the conclusion without reading any other part of the report and fully understand what it says. This requires a clear context (foundations to the statements made) and clear definitions of all symbols used in the conclusion.
If a comparison with an expected value is made, then the proper comparative statement must be made (for example, "The calculated result was $$7.6 \pm 1.2\,g$$. This result is 2 g higher than expected," or "The expected result is $$3\%$$ higher than the determined result of $$34 \pm 1.3$$").
Expected Practice
The conclusion usually starts with one or two sentences that provide the context for the information. If numerical results were obtained, then a typical or summary numerical result with uncertainty is stated. Often concluding statements can be drawn from a good discussion. A good conclusion also indicates how the investigation would be furthered (in the opinion of the author), usually in a single sentence.
https://math.stackexchange.com/questions/3460977/group-with-a-given-presentation-finite-or-infinite
# Group with a given presentation, finite or infinite?
Consider the group with following presentation,
$$G=\langle s,t : s^2=1, (st)^{3}=1\rangle$$
Is this group finite or infinite?
I tried to manipulate the relations and could only get $$(ts)^3=1$$. I don't know how to proceed further. Any hints?
• Doesn't $t$ have infinite order in this group? – Greg Martin Dec 3 at 7:06
• @GregMartin I thought the same thing. But I cannot prove it. – Abhikumbale Dec 3 at 7:07
Hint: Instead of taking $$s$$ and $$t$$ as generators, take $$s$$ and $$st$$ as generators. How else can you describe the group then?
Writing $$u=st$$, we have $$G=\langle s,u\mid s^2=1, u^3=1\rangle$$. But this just means that $$G$$ is the free product of a cyclic group of order $$2$$ (generated by $$s$$) and a cyclic group of order $$3$$ (generated by $$u$$). In particular, $$G$$ is infinite, because for instance there are infinitely many distinct reduced words of the form $$sususu\dots$$.
• It's not immediately obvious that free products are infinite (I mean, you have to prove that no two reduced sequences are equal). One way to do this is to let the group act on the infinite tree where every vertex has degree two or three, and no two vertices of the same degree are incident. The action is: fix a vertex $V_s$ of valency $2$ and another $V_u$ of valency $3$; then $s$ acts by rotating the tree around $V_s$ while $u$ acts by rotating the tree around $V_u$. Then find the paths in the tree corresponding to these products, and note that they are non-equal for non-equal products. – user1729 Dec 3 at 11:00
• @user1729 Take two rotations of the plane, of order $k\ge 2$ and $\ell\ge 2$. They have no common fixed point, hence generate an infinite group. So the free product of any two nontrivial cyclic groups is infinite. – YCor Dec 3 at 13:32
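To make YCor's observation concrete, here is a small numerical illustration (my own sketch, not from the thread): two plane rotations of orders 2 and 3 about different points generate an element of infinite order (a nontrivial translation), so the group they generate is infinite, and hence so is the free product $C_2 * C_3$, which maps onto it.

```python
import numpy as np

def rot(angle, center):
    """Homogeneous 3x3 matrix of a plane rotation by 'angle' about 'center'."""
    c, s_ = np.cos(angle), np.sin(angle)
    cx, cy = center
    return np.array([[c, -s_, cx - c * cx + s_ * cy],
                     [s_,  c, cy - s_ * cx - c * cy],
                     [0.0, 0.0, 1.0]])

s = rot(np.pi, (0.0, 0.0))         # order 2, fixed point (0, 0)
u = rot(2 * np.pi / 3, (1.0, 0.0)) # order 3, different fixed point

# s * (u s u^{-1}) is a product of two order-2 rotations about distinct
# points, i.e. a nontrivial translation, hence an element of infinite order.
t = s @ (u @ s @ np.linalg.inv(u))

w = np.eye(3)
for k in range(1, 50):
    w = w @ t
    assert not np.allclose(w, np.eye(3))   # t^k is never the identity
print("t has infinite order, so <s, u> (and hence C2 * C3) is infinite")
```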
https://ncatlab.org/nlab/show/irreducible+representation
# nLab irreducible representation
## Idea
An irreducible representation – often abbreviated irrep – is a representation that has no smaller non-trivial representations “sitting inside it”.
Similarly for irreducible modules.
## Definition
Given some algebraic structure, such as a group, equipped with a notion of (linear) representation, an irreducible representation is a representation that has no nontrivial proper subobject in the category of all representations in question and yet which is not itself trivial either. In other words, an irrep is a simple object in the category of representations.
Notice that there is also the closely related but in general different notion of an indecomposable representation. Every irrep is indecomposable, but the converse may fail.
A representation that has proper nontrivial subrepresentations but can not be decomposed into a direct sum of such representations is an indecomposable representation but still reducible.
In good cases for finite dimensional representations, the two notions (irreducible, indecomposable) coincide.
https://scicomp.stackexchange.com/questions/21584/memory-requirement-to-find-eigenvalues-and-vectors-of-large-sparse-matrix
Memory requirement to find eigenvalues and -vectors of large sparse matrix
How can I estimate how much memory will be needed to find eigenvalues and eigenvectors of a given large sparse matrix?
I have a real symmetric matrix with roughly $5 \times 10^4$ rows and columns, and an average of $10$ nonzero elements per row. I would like to find the smallest eigenvalue and the corresponding eigenvector, using the built-in Eigensystem function in Mathematica (which treats the matrix as sparse and uses an ARPACK Arnoldi algorithm). Is there a simple way of estimating how much memory this will take?
• For your problem, Mathematica's Eigensystem function will be using an Arnoldi-type algorithm for eigenvalue computations (most likely the ARPACK package). In addition to storing the matrix itself, the size of the ARPACK workspace is roughly ${\cal O}(n \times (m+2))$ floats (or doubles), where $n$ is the number of rows/columns of your matrix, and $m$ is the size of your Lanczos basis (see the docs for details). Also, you're looking for the smallest eigenvalue so you'll likely want to do shift-invert, which is likely to add to the total (per-step) memory cost. – GoHokies Dec 16 '15 at 11:47
• Also, a minor nitpick: you're not actually diagonalizing the matrix, just computing the invariant subspace associated to your $\lambda_{\rm min}$. – GoHokies Dec 16 '15 at 11:49
• @GoHokies I've changed the title of the question to make it clearer (hopefully) that I don't need to diagonalize the matrix. – Stephen Powell Dec 16 '15 at 12:13
• @GoHokies I've added a link to the implementation notes, which confirm that Mathematica uses ARPACK. – Stephen Powell Dec 16 '15 at 12:28
• thanks. Have a look here as well. If you're looking for a single eigenvalue, you can try working with an $m \approx 5$, so the total ARPACK memory requirement (excluding that of the matrix inversion when doing a shift-invert) is about the same as that of storing the matrix itself (I'm assuming, cf. your OP, that you have about $10$ nonzeros per row, and due to symmetry only half of those are actually stored in memory). – GoHokies Dec 16 '15 at 12:28
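Plugging the numbers from the question into that estimate (a rough back-of-the-envelope sketch; the basis size $m=20$ is an assumption, not something fixed by ARPACK, and index arrays and any shift-invert factorization would add to both totals):

```python
n = 5 * 10**4        # matrix dimension
nnz_per_row = 10     # average nonzeros per row
m = 20               # assumed size of the Lanczos/Arnoldi basis

bytes_per_double = 8
matrix_bytes = n * nnz_per_row * bytes_per_double   # values only
arpack_bytes = n * (m + 2) * bytes_per_double        # O(n*(m+2)) workspace

print(f"matrix values ~ {matrix_bytes / 1e6:.1f} MB")   # ~4.0 MB
print(f"ARPACK        ~ {arpack_bytes / 1e6:.1f} MB")   # ~8.8 MB
```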
http://mathoverflow.net/questions/56693/subfields-of-a-function-field?answertab=active
Subfields of a function field
Is there an algorithm for generating (some or all) subfields of a certain genus of a given function field (even a random one,I mean for example generating a random elliptic subfield of a certain given function field). I did a quick search and it seems to me that the problem is heavily treated in the case of cyclic and Hermitian function fields, but I was wondering what do we know in general case. Is there something that I can do in Magma?
On the other hand, do we have an algorithm to check if $F$ is a subfield of $E$, when $F, E$ are function fields (of one variable)? Florian Hess told me that somebody developed such an algorithm using his automorphism algorithm, but I haven't had much luck finding it.
In order to stick to the tradition, I give a motivation also: Subfields of function fields with a rich automorphism group are subject to cover attack in cryptography when they are not one of those few which are fixed by an automorphism of the cover.
Thank you very much indeed!
This seems to be a hard problem. For example, given curves $C,E$ over a finite field, with $E$ elliptic, the zeta function of a given curve $C$ will reveal whether there exists a morphism $C\to E$, via Tate's theorem that $Hom(Jac(C),E)\otimes Z_l$ is the module of Galois-invariants in $Hom(T_l(Jac(C)),T_l(E))$ (here $T_l$ is the $l$-adic Tate module). However, it's difficult to make Tate's theorem effective. OTOH a general curve will have no morphisms to any other curve of positive genus, so I'm not sure what you mean by "a random one". – inkspot Feb 26 '11 at 10:55
The solutions to the subfield problem in the case of number fields should pretty much work here as well. For example, take primitive elements of $E$ and $F$ (random elements should do), find their minimal polynomials over $K[t]$ (solve linear system), find a completion of $K[t]$ over which both polynomials have a factor (find a prime ideal with solutions and use Hensel's lemma), and finally, you can use LLL to try to express the first element as a linear combination of powers of the second. For other methods for number fields, which have function field analogues, see Cohen. – Dror Speiser Feb 26 '11 at 15:09
@Dror, there is no canonical $t$. – Felipe Voloch Feb 26 '11 at 21:01
@Dror, as Felipe said, exactly the problem is that $K(t)$ is not as good as $\mathbb{Q}$ is for number fields. This is why the automorphism group is richer than the Galois group and why it's harder to compute. I think the use of Hess's automorphism algorithm is to check all the (non-canonical but isomorphic) possible ways of embedding $F$ in $E$ and check if any works, but I don't know the details. – Syed Feb 26 '11 at 21:08
@Inkspot I don't get the "a general curve will have no morphisms to any other curve of positive genus" part. If $F$ is a subfield of $E$ then there's a morphism from the defining curve of $E$ to the defining curve of $F$. Isn't that in the second chapter of Silverman? By random, I mean: given a function field $F$, is there a way to generate a subfield of it of given genus? As much as I understand, you say there's no subfield other than rational subfields (of genus zero)? – Syed Feb 26 '11 at 21:18
The algorithm to embed function fields, i.e. to test if a function field E can be embedded into a function field F, has been developed (and implemented in Magma) by a student of Florian Hess: Gerriet Möhlmann, as part of his Diploma work. His thesis (in German) can be found at http://www.math.tu-berlin.de/~kant/publications/diplom/moehlmann.pdf. The method is an extension of Florian's automorphism algorithm; in Magma, it is available through the Inclusions command.
To generate function fields of a given genus there are a few possibilities, none of them worked out completely. If the field can be obtained as an Abelian extension (e.g. (hyper)elliptic curves have a degree 2 model) then class field theory can be used to generate all such fields. Similarly, soluble extensions can be constructed this way. For general extensions, one could use Hunter's theorem to get bounds on the valuations of a primitive element and then enumerate all polynomials that might have such roots. Both methods have in common that they produce too many field extensions that correspond to isomorphic curves. The class field theoretic approach has the advantage of being available through Magma.... (I can provide details if anyone is interested)
Here is an algorithm (horribly inefficient) to generate all non-hyperelliptic, non-rational, separable subfields of a non-hyperelliptic function field $F$ over a finite field $K$. Let $\Omega$ be the space of global holomorphic differentials of $F/K$. For any $K$-subspace $V$ of $\Omega$, choose a basis $v_1,\ldots,v_m$ of $V$, compute the elements $v_j/v_1, j>1$, of $F$ (and compute the algebraic relations among these $v_j$), and let $E_V$ be the subfield they generate. If $E_V \ne F$ and is not rational, then you have found a subfield as above. All such subfields will appear this way (proof left to the reader). There are only finitely many such $V$ since $K$ is assumed to be finite.
Don't even dream of implementing this algorithm as is. Using the numerator of the zeta function, its factors and the Cartier operator, you can perhaps cut down the number of $V$'s that need to be tested. Maybe hyperelliptic subfields can be dealt with by using quadratic differentials.
If Florian Hess can't do it, you are probably out of luck, as far as implementation goes.
Added later: For a hyperelliptic subfield of genus $>1$, one still has a subspace $V$ but the corresponding $E_V$ is the canonical rational subfield of the hyperelliptic field. In this case, the field will be intermediate between $F$ and $E_V$ and perhaps the suggestion of Dror Speiser of using number field arguments might lead to it. It's the elliptic fields that are going to be hard to get.
"It's the elliptic fields that are going to be hard to get." Exactly, although Tate's theorem can be used effectively to exclude them. But if they are there, then Tate tells you what they are, but does not even bound the degree of the morphism $C\to E$. – inkspot Feb 27 '11 at 11:46
|
2014-11-28 08:52:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8829213976860046, "perplexity": 323.2255704383852}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009900.4/warc/CC-MAIN-20141125155649-00003-ip-10-235-23-156.ec2.internal.warc.gz"}
|
https://irzu.org/research/if-size-of-integer-is-4-in-c-then-why-its-showing-me-that-its-less-than-1/
|
# if Size of integer is 4 in C, then why It’s showing me that it’s less than -1?
Due to the usual arithmetic conversions, if in an expression one operand has an unsigned integer type and the other has a signed integer type, and the rank of the unsigned integer type is greater than or equal to the rank of the signed integer type, then the operand with the signed integer type is converted to the unsigned integer type.
In the condition of this if statement
if(sizeof(int) > -1)
the expression sizeof( int ) has the unsigned type size_t (which usually corresponds to the type unsigned long int), whose rank is greater than the rank of the type int, the type of the integer constant -1.
So the expression -1 is converted to the type size_t (effectively by propagating the sign bit in a two's complement representation), and as a result it becomes a very large unsigned integer value.
Here is a demonstration program
#include <stdio.h>
int main( void )
{
printf( "( int )-1 = %d\n", -1 );
printf( "( size_t )-1 = %zu\n", ( size_t )-1 );
}
The program output is
( int )-1 = -1
( size_t )-1 = 18446744073709551615
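As a quick cross-check of that large value, here is a small Python sketch; it assumes a 64-bit size_t (as in the output above) and simply reproduces the modulo-2^N rule the C standard prescribes for converting a negative value to an unsigned type:
# Conversion of -1 to an N-bit unsigned type wraps modulo 2**N.
N = 64  # assumed width of size_t on this platform
print((-1) % 2**N)  # prints 18446744073709551615, matching ( size_t )-1 above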
In the condition of the second if statement
if((int)(sizeof(int)) > -1)
both operands have the signed type int, due to the explicit cast of the first operand to the type int. So no conversion takes place, and the operand -1 keeps its negative value -1.
Pay attention: there is a nuance. In some implementations the size of the unsigned type unsigned long int (or of size_t) is equal to the size of the signed type signed long long int.
This means that objects of the type signed long long int are unable to represent all values of the type unsigned long int.
So in expressions with operands of the type signed long long int and the type unsigned long int, both operands are converted to the type unsigned long long int, again due to the same usual arithmetic conversions. And if you write, for example,
if ( sizeof(int) > -1ll)
where the operand -1ll has the type signed long long int, whose rank is greater than the rank of the operand of the type size_t, the condition will nevertheless again evaluate to false, because the operand -1ll will be converted to the type unsigned long long int.
|
2022-11-29 17:27:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19050872325897217, "perplexity": 1259.3813916874294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710710.91/warc/CC-MAIN-20221129164449-20221129194449-00030.warc.gz"}
|
http://cerco.cs.unibo.it/browser/Deliverables/D1.2/Presentations/WP4-dominic.tex?rev=1849
|
# source:Deliverables/D1.2/Presentations/WP4-dominic.tex@1849
Last change on this file since 1849 was 1849, checked in by mulligan, 8 years ago
Added title pages to split talk into three separate sections
File size: 18.4 KB
Line
1\documentclass{beamer}
2
3\usepackage{amssymb}
4\usepackage[english]{babel}
5\usepackage{listings}
6\usepackage{microtype}
7
8\usetheme{Frankfurt}
9\logo{\includegraphics[height=1.0cm]{fetopen.png}}
10
11\author{Dominic Mulligan}
12\title{CerCo Work Package 4}
13\date{CerCo project review meeting\\March 2012}
14
15\lstdefinelanguage{matita-ocaml}
16 {keywords={definition,coercion,lemma,theorem,remark,inductive,record,qed,let,in,rec,match,return,with,Type,try},
17 morekeywords={[2]whd,normalize,elim,cases,destruct},
18 morekeywords={[3]type,of},
19 mathescape=true,
20 }
21
22\lstset{language=matita-ocaml,basicstyle=\scriptsize\tt,columns=flexible,breaklines=false,
23 keywordstyle=\color{red}\bfseries,
24 keywordstyle=[2]\color{blue},
25 keywordstyle=[3]\color{blue}\bfseries,
27 stringstyle=\color{blue},
28 showspaces=false,showstringspaces=false}
29
30\begin{document}
31
32\begin{frame}
33\maketitle
34\end{frame}
35
36\begin{frame}
37\frametitle{Summary}
38Relevant tasks: T4.2 and T4.3 (from the CerCo Contract):
39\begin{quotation}
41Functional encoding in the Calculus of Inductive Construction (indicative effort: UNIBO: 8; UDP: 2; UEDIN: 0)
42\end{quotation}
43
44\begin{quotation}
46Formal semantics of intermediate languages (indicative effort: UNIBO: 4; UDP: 0; UEDIN: 0)
47\end{quotation}
48\end{frame}
49
50\begin{frame}
51\frametitle{Contents}
52\tableofcontents
53\end{frame}
54
55\section{Rationalisation of backend languages}
56
57\begin{frame}
58\begin{center}
59Rationalisation of backend languages
60\end{center}
61\end{frame}
62
63\begin{frame}
64\frametitle{Backend intermediate languages I}
65\begin{itemize}
66\item
67OCaml prototype has five backend intermediate languages: RTLabs, RTL, ERTL, LTL, LIN
68\item
69RTLabs is the frontier' between backend and frontend, last abstract language
70\item
71RTLabs, RTL, ERTL and LTL are graph based languages: functions represented as graphs of statements, with entry and exit points
72\item
73LIN is a linearised form of LTL, and is the exit point of the compiler's backend
74\item
75In contrast to frontend, backend is very different to CompCert's
76\end{itemize}
77\end{frame}
78
79\begin{frame}
80\frametitle{Backend intermediate languages II}
81\vspace{-1em}
82\begin{small}
83\begin{tabbing}
85\textsf{RTLabs}\\
86\> $\downarrow$ \> copy propagation \color{red}{$\times$} \\
87\> $\downarrow$ \> instruction selection \color{green}{{\checkmark}} \\
88\> $\downarrow$ \> change of memory models in compiler \color{green}{{\checkmark}} \\
89\textsf{RTL}\\
90\> $\downarrow$ \> constant propagation \color{red}{$\times$} \\
91\> $\downarrow$ \> calling convention made explicit \color{green}{{\checkmark}} \\
92\> $\downarrow$ \> layout of activation records \color{green}{{\checkmark}} \\
93\textsf{ERTL}\\
94\> $\downarrow$ \> register allocation and spilling \color{green}{{\checkmark}} \\
95\> $\downarrow$ \> dead code elimination \color{green}{{\checkmark}} \\
96\textsf{LTL}\\
97\> $\downarrow$ \> function linearisation \color{green}{{\checkmark}} \\
98\> $\downarrow$ \> branch compression \color{red}{$\times$} \\
99\textsf{LIN}\\
100\> $\downarrow$ \> relabeling \color{green}{{\checkmark}} \\
101\textsf{ASM}
102\end{tabbing}
103\end{small}
104\end{frame}
105
106\begin{frame}
107\frametitle{\texttt{Joint}: a new approach I}
108\begin{itemize}
109\item
110Consecutive languages in backend must be similar
111\item
112Transformations between languages translate away some small specific set of features
113\item
114But looking at OCaml code, not clear precisely what differences between languages are, as code is repeated
115\item
116Not clear if translation passes can commute, for instance
117\item
118CerCo passes are in a different order to CompCert (calling convention and register allocation done in different places)
119\item
120Instruction selection done early: changing subset of instructions used would require instructions to be duplicated everywhere in backend
121\end{itemize}
122\end{frame}
123
124\begin{frame}
125\frametitle{\texttt{Joint}: a new approach II}
126\begin{itemize}
127\item
128Idea: all of these languages are just instances of a single language
129\item
130This language \texttt{Joint} is parameterised by a type of registers to be used in instructions, and so forth
131\item
132Each language after RTLabs is now just defined as the \texttt{Joint} language instantiated with some concrete types
133\item
134Similarly for semantics: common definitions that take e.g. type representing program counters as parameters
135\end{itemize}
136\end{frame}
137
138\begin{frame}[fragile]
139\frametitle{\texttt{Joint}: a new approach III}
140\texttt{Joint} instructions allow us to embed language-specific instructions:
141\begin{lstlisting}
142inductive joint_instruction (p: params__) (globals: list ident): Type[0] :=
143 | COMMENT: String $\rightarrow$ joint_instruction p globals
144 | COST_LABEL: costlabel $\rightarrow$ joint_instruction p globals
145 ...
146 | COND: acc_a_reg p $\rightarrow$ label $\rightarrow$ joint_instruction p globals
147 | extension: extend_statements p $\rightarrow$ joint_instruction p globals.
148\end{lstlisting}
149
150\begin{lstlisting}
151inductive ertl_statement_extension: Type[0] :=
152 | ertl_st_ext_new_frame: ertl_statement_extension
153 | ertl_st_ext_del_frame: ertl_statement_extension
154 | ertl_st_ext_frame_size: register $\rightarrow$ ertl_statement_extension.
155\end{lstlisting}
156\end{frame}
157
158\begin{frame}
159\frametitle{\texttt{Joint}: a new approach IV}
160\begin{itemize}
161\item
162Languages that provide extensions need to provide translations and semantics for those extensions
163\item
164Everything else can be handled at the \texttt{Joint}-level
165\item
166This modularises the handling of these languages
167\end{itemize}
168\end{frame}
169
170\begin{frame}
172\begin{itemize}
173\item
174We can recover the concrete OCaml languages by instantiating parameterized types
175\item
176Why use \texttt{Joint}?
177\item
178Reduces repeated code (fewer bugs, or places to change)
179\item
180Unify some proofs, making correctness proof easier
181\end{itemize}
182\end{frame}
183
184\begin{frame}
186\begin{itemize}
187\item
188Easier to add new intermediate languages as needed
189\item
190Easier to see relationship between consecutive languages at a glance
191\item
192MCS-51 instruction set embedded in \texttt{Joint} syntax
193\item
194Simplifies instruction selection
195\item
196We can investigate which translation passes commute much more easily
197\end{itemize}
198\end{frame}
199
200\begin{frame}
201\frametitle{Semantics of \texttt{Joint} I}
202\begin{itemize}
203\item
204As mentioned, use of \texttt{Joint} also unifies semantics of these languages
205\item
206We use several sets of records, which represent the state that a program is in
207\item
208These records are parametric in representations for e.g. frames
209\end{itemize}
210\end{frame}
211
212\begin{frame}
213\frametitle{A new intermediate language}
214\begin{itemize}
215\item
216Matita backend includes a new intermediate language: RTLntc
217\item
218Sits between RTL and ERTL
219\item
220RTLntc is the RTL language where all tailcalls have been eliminated
221\item
222This language is implicit' in the OCaml compiler
223\item
224There, the RTL to ERTL transformation eliminates tailcalls as part of translation
225\item
226But including an extra, explicit intermediate language is almost free' using the \texttt{Joint} language approach
227\end{itemize}
228\end{frame}
229
230\begin{frame}
231\frametitle{The LTL to LIN transform I}
232\begin{itemize}
233\item
234\texttt{Joint} clearly separates fetching from program execution
235\item
236We can vary how one works whilst fixing the other
237\item
238Linearisation is moving from fetching from a graph-based language to fetching from a list-based program representation
239\item
240The order of transformations in OCaml prototype is fixed
241\item
242Linearisation takes place at a fixed place, in the translation between LTL and LIN
243\item
244The Matita compiler is different: linearisation is a generic process
245\item
246Any graph-based language can now be linearised
247\end{itemize}
248\end{frame}
249
250\begin{frame}
251\frametitle{The LTL to LIN transform II}
252\begin{itemize}
253\item
254CompCert backend linearises much sooner than CerCo's
255\item
256Can now experiment with linearising much earlier
257\item
258Many transformations and optimisations can work fine on a linearised form
259\item
260Only place in the (current) backend that requires a graph-based language is in the ERTL pass, where we do a dataflow analysis
261\end{itemize}
262\end{frame}
263
264\section{Assembler correctness proof and structured traces}
265
266\begin{frame}
267\begin{center}
268Assembler correctness proof and structured traces
269\end{center}
270\end{frame}
271
272\begin{frame}
273\frametitle{Time not reported}
274\begin{itemize}
275\item
276We had six months of time which is not reported on in any deliverable
277\item
278We invested this time working on:
279\begin{itemize}
280\item
281The global proof sketch
282\item
283The setup of proof infrastructure', common definitions, lemmas, invariants etc. required for main body of proof
284\item
285The proof of correctness for the assembler
286\item
287A notion of structured traces', used throughout the compiler formalisation, as a means of eventually proving that the compiler correctly preserves costs
288\item
289Structured traces were defined in collaboration with the team at UEDIN
290\end{itemize}
291\end{itemize}
292\end{frame}
293
294\begin{frame}
295\frametitle{Assembler}
296\begin{itemize}
297\item
298After LIN, compiler spits out assembly language for MCS-51
299\item
300Assembler has pseudoinstructions similar to many commercial assembly languages
301\item
302For instance, instead of computed jumps (e.g. \texttt{SJMP} to a specific address), compiler can simply spit out a generic jump instruction to a label
303\item
304Simplifies the compiler, at the expense of introducing more proof obligations
305\item
306Now need a formalized assembler (a step further than CompCert)
307\end{itemize}
308\end{frame}
309
310\begin{frame}
311\frametitle{A problem: jump expansion}
312\begin{itemize}
313\item
314Jump expansion' is our name for the standard branch displacement' problem
315\item
316Given a pseudojump to a label $l$, how best can this be expanded into an assembly instruction \texttt{SJMP}, \texttt{AJMP} or \texttt{LJMP} to a concrete address?
317\item
318Problem also applies to conditional jumps
319\item
320Problem especially relevant for MCS-51 as it has a small code memory, therefore aggressive expansion of jumps into smallest possible concrete jump instruction needed
321\item
322But a known hard problem (NP-complete depending on architecture), and easy to imagine knotty configurations where size of jumps are interdependent
323\end{itemize}
324\end{frame}
325
326\begin{frame}
327\frametitle{Jump expansion I}
328\begin{itemize}
329\item
330We employed the following tactic: split the decision over how any particular pseudoinstruction is expanded from pseudoinstruction expansion
331\item
332Call the decision maker a policy'
333\item
334We started the proof of correctness for the assembler based on the premise that a correct policy exists
335\item
336Further, we know that the assembler only fails to assemble a program if a good policy does not exist (a side-effect of using dependent types)
337\item
338A bad policy is a function that expands a given pseudojump into a concrete jump instruction that is too small' for the distance to be jumped, or makes the program consume too much memory
339\end{itemize}
340\end{frame}
341
342\begin{frame}
343\frametitle{Jump expansion II}
344\begin{itemize}
345\item
346Jaap Boender at UNIBO has been working on a verified implementation of a good jump expansion policy for the MCS-51
347\item
348The strategy initially translates all pseudojumps as \texttt{SJMP} and then increases their size if necessary
349\item
350Termination of the procedure is proved, as well as a safety property, stating that jumps are not expanded into jumps that are too long
351\item
352His strategy is not optimal (though the computed solution is optimal for the strategy employed)
353\item
354Jaap's work is the first formal treatment of the jump expansion problem'
355\end{itemize}
356\end{frame}
357
358\begin{frame}
359\frametitle{Assembler correctness proof}
360\begin{itemize}
361\item
362Assuming the existence of a good jump expansion property, we completed about 75\% of the correctness proof for the assembler
363\item
364Jaap's work has just been completed (modulo a few missing lemmas)
365\item
366Postponed the remainder of main assembler proof to start work on other tasks (and for Jaap to finish)
367\item
368We intend to return to proof, and publish an account of the work (possibly) as a journal paper
369\end{itemize}
370\end{frame}
371
372\begin{frame}[fragile]
373\frametitle{Who pays? I}
374\begin{columns}
375\begin{column}[b]{0.5\linewidth}
376\centering
377In C:
378\begin{lstlisting}
379int main(int argc, char** argv) {
380 cost_label1:
381 ...
382 some_function();
383 cost_label2:
384 ...
385}
386\end{lstlisting}
387\end{column}
388\begin{column}[b]{0.5\linewidth}
389\centering
390In ASM:
391\begin{lstlisting}
392 ...
393 main:
394 ...
395 cost_label1:
396 ...
397 LCALL some_function
398 cost_label2:
399 ...
400\end{lstlisting}
401\end{column}
402\end{columns}
403\begin{itemize}
404\item
405Where do we put cost labels to capture execution costs?
406\item
407Proof obligations complicated by panoply of labels
408\item
409Doesn't work well with \texttt{g(h() + 2 + f())}
410\item
411Is \texttt{cost\_label2} ever reached?
412\item
413\texttt{some\_function()} may not return correctly
414\end{itemize}
415\end{frame}
416
417\begin{frame}
418\frametitle{Who pays? II}
419\begin{itemize}
420\item
421Solution: omit \texttt{cost\_label2} and just keep \texttt{cost\_label1}
422\item
423We pay for everything up front' when entering a function
424\item
425No need to prove \texttt{some\_function()} terminates
426\item
427But now execution of functions in CerCo takes a particular form
428\item
429Functions begin with a label, call other functions that begin with a label, eventually return, but \emph{return} to the correct place
430\item
431Recursive structure'
432\end{itemize}
433\end{frame}
434
435\begin{frame}
436\frametitle{Structured traces I}
437\begin{itemize}
438\item
439We introduced a notion of structured traces'
440\item
441These are intended to statically capture the (good) execution traces of a program
442\item
443To borrow a slogan: they are the computational content of a well-formed program's execution'
444\item
445Come in two variants: inductive and coinductive
446\item
447Inductive captures program execution traces that eventually halt, coinductive ones that diverge
448\end{itemize}
449\end{frame}
450
451\begin{frame}
452\frametitle{Structured traces II}
453\begin{itemize}
454\item
455I focus on the inductive variety, as used the most (for now) in the backend
456\item
457In particular, used in the proof that static and dynamic cost computations coincide
458\item
459Traces preserved by backend compilation, initially created at RTL
460\item
461This will be explained later
462\end{itemize}
463\end{frame}
464
465\begin{frame}
466\frametitle{Structured traces III}
467\begin{itemize}
468\item
469Central insight is that program execution is always in the body of some function (from \texttt{main} onwards)
470\item
471A well formed program must have labels appearing at certain spots
472\item
473Similarly, the final instruction executed when executing a function must be a \texttt{RET}
474\item
475Execution must then continue in body of calling function, at correct place
476\item
477These invariants, and others, are crystalised in the specific syntactic form of a structured trace
478\end{itemize}
479\end{frame}
480
481\begin{frame}
482\frametitle{Recursive structure of good' execution}
483Structure captured by structured traces:
484\begin{center}
485\includegraphics[scale=0.33]{recursive_structure.png}
486\end{center}
487\end{frame}
488
489\begin{frame}
490\frametitle{Static and dynamic costs I}
491\begin{itemize}
492\item
493Given a structured trace, we can compute its associated cost
494\item
495In previous slide, cost of trace is cost assigned to \texttt{label\_1} + \texttt{label\_2} + \texttt{label\_3} (+ \texttt{label\_4})
496\item
497This is the \emph{static} cost of a program execution
498\item
499Similarly, given a program counter and a code memory (corresponding to the trace), we can compute a \emph{dynamic cost} of a simple block
500\item
501Do this by repeatedly fetching, obtaining the next instruction, and a new program counter
502\item
503This requires some predicates defining what a good program' and what a good program counter' are
504\item
505Want program counters on instruction boundaries
506\end{itemize}
507\end{frame}
508
509\begin{frame}
510\frametitle{Static and dynamic costs II}
511\begin{itemize}
512\item
513We aim to prove that the dynamic and static cost calculations coincide
514\item
515This would imply that the static cost computation is correct
516\item
517This proof is surprisingly tricky to complete (about 3 man months of work so far)
518\item
520\end{itemize}
521\end{frame}
522
523\section{Changes to tools and prototypes, looking forward}
524
525\begin{frame}
526\begin{center}
527Changes to tools and prototypes, looking forward
528\end{center}
529\end{frame}
530
531\begin{frame}
532\frametitle{Changes ported to OCaml prototype}
533\begin{itemize}
534\item
535Bug fixes spotted in the formalisation so far have been merged back into the OCaml compiler
536\item
537Larger changes like the \texttt{Joint} machinery have so far not
538\item
539It is unclear whether they will be
540\item
541Just a generalisation of what is already there
542\item
543Supposed to make formalisation easier
544\item
545Further, we want to ensure that the untrusted compiler is as correct as possible, for experiments in e.g. Frama-C
546\item
547Porting a large change to the untrusted compiler would jeopardise these experiments
548\end{itemize}
549\end{frame}
550
551\begin{frame}
552\frametitle{Improvements in Matita}
553\begin{itemize}
554\item
555Part of the motivation for using Matita was for CerCo to act a stress test'
556\item
557The proofs talked about in this talk have done this
558\item
559Many improvements to Matita have been made since the last project meeting
560\item
561These include major speed-ups of e.g. printing large goals, bug fixes, the porting of CerCo code to standard library, and more options for passing to tactics
562\end{itemize}
563\end{frame}
564
565\begin{frame}
566\frametitle{The next period}
567UNIBO has following pool of remaining manpower (postdocs):
568\begin{center}
569\begin{tabular}{ll}
570Person & Man months remaining \\
571\hline
572Boender & 10 months \\
573Mulligan & 6 months \\
574Tranquilli & 10 months \\
575\end{tabular}
576\end{center}
577\begin{itemize}
578\item
579Boender finishing assembler correctness proof
580\item
581Mulligan proofs of correctness for 1 intermediate language
582\item
583Tranquilli proofs of correctness for 2 intermediate languages
584\item
585Sacerdoti Coen floating'
586\item
587Believe we have enough manpower to complete backend (required 21 man months)
588\end{itemize}
589\end{frame}
590
591\begin{frame}
592\frametitle{Summary}
593We have:
594\begin{itemize}
595\item
596Translated the OCaml prototype's backend intermediate languages into Matita
597\item
598Implemented the translations between languages, and given the intermediate languages a semantics
599\item
600Refactored many of the backend intermediate languages into a common, parametric joint' language, that is later specialised
601\item
602Spotted opportunities for possibly commuting backend translation passes
603\item
604Used six months to define structured traces and start the proof of correctness for the assembler
605\item
606Distinguished our proof from CompCert's by heavy use of dependent types throughout whole compiler
607\end{itemize}
608\end{frame}
609
610\end{document}
|
2020-07-09 18:59:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3198840618133545, "perplexity": 8397.666629953384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655900614.47/warc/CC-MAIN-20200709162634-20200709192634-00229.warc.gz"}
|
https://pypi.org/project/andes/1.1.4/
|
Python software for symbolic power system modeling and numerical analysis.
# ANDES
Python Software for Symbolic Power System Modeling and Numerical Analysis.
Latest Stable
Documentation
Try on Binder
Code Quality
Build Status
# Why ANDES
This software could be of interest to you if you are working on DAE modeling, simulation, and control for power systems. It has features that may be useful if you are applying deep (reinforcement) learning to such systems.
ANDES is by far easier to use for developing differential-algebraic equation (DAE) based models for power system dynamic simulation than other tools such as PSAT, Dome and PST, while maintaining high numerical efficiency.
ANDES comes with a rich set of commercial-grade dynamic models with all details implemented, including limiters, saturation, and zeroing out time constants.
ANDES produces credible simulation results. The following table shows that
1. For the Northeast Power Coordinating Council (NPCC) 140-bus system (with GENROU, GENCLS, TGOV1 and IEEEX1), ANDES results match perfectly with that from TSAT.
2. For the Western Electricity Coordinating Council (WECC) 179-bus system (with GENROU, IEEEG1, EXST1, ESST3A, ESDC2A, IEEEST and ST2CUT), ANDES results match closely with those from TSAT and PSS/E. Note that TSAT and PSS/E results are not identical, either.
NPCC case study and WECC case study (validation plots)
ANDES provides a descriptive modeling framework in a scripting environment. Modeling DAE-based devices is as simple as describing the mathematical equations. Numerical code will be automatically generated for fast simulation.
Controller model and equation (block diagram, then written into DAEs), shown side by side with the corresponding ANDES code.
In ANDES, what you simulate is what you document. ANDES automatically generates model documentation, and the docs always stay up to date. The screenshot below is the generated documentation for the implemented IEEEG1 model.
• a rich library of transfer functions and discontinuous components (including limiters, deadbands, and saturation functions) available for prototyping models, which can be effortlessly instantiated as multiple devices for system analysis
• routines including Newton method for power flow calculation, implicit trapezoidal method for time-domain simulation, and full eigenvalue analysis
• developed with performance in mind. While written in Python, ANDES comes with a performance package and can finish a 20-second transient simulation of a 2000-bus system in a few seconds on a typical desktop computer
• out-of-the-box PSS/E raw and dyr data support for available models. Once a model is developed, inputs from a dyr file can be immediately supported
ANDES is currently under active development. Use the following resources to get involved.
# Get Started with ANDES
ANDES is a Python package and needs to be installed. We recommend Miniconda if you don't insist on an existing Python environment. Download and install the latest 64-bit Miniconda3 for your platform from https://conda.io/miniconda.html.
Step 1: (Optional) Open the Anaconda Prompt (shell on Linux and macOS) and create a new environment.
Use the following command in the Anaconda Prompt:
conda create --name andes python=3.7
Step 2: Add the conda-forge channel and set it to default. Do
conda config --add channels conda-forge
conda config --set channel_priority flexible
Step 3: Activate the new environment
This step needs to be executed every time a new Anaconda Prompt or shell is open. At the prompt, do
conda activate andes
Step 4: Download and install ANDES
• Extract the package to a folder where source code resides. Try to avoid spaces in any folder name.
• Change directory to the ANDES root directory, which contains setup.py. In the prompt, run the following commands in sequence.
conda install --file requirements.txt --yes
conda install --file requirements-dev.txt --yes
pip install -e .
Observe if any error is thrown. If not, ANDES is successfully installed in the development mode.
Step 5: Test ANDES
After the installation, run andes selftest and check if all tests pass.
# Run Simulations
ANDES can be used as a command-line tool or a library. The following explains the command-line usage, which comes handy to run studies.
For a tutorial to use ANDES as a library, visit the interactive tutorial.
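As a minimal illustration of library-style use (a sketch only; it assumes ANDES is installed and the bundled Kundur case path resolves, and the exact attribute names should be checked against the official documentation):
import andes

# Load the bundled Kundur case and solve the power flow; andes.run returns a System object
ss = andes.run('kundur_full.xlsx')

# Run a time-domain simulation on the same System object
ss.TDS.run()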
ANDES is invoked from the command line using the command andes. Running andes without any input is equal to andes -h or andes --help, which prints out a preamble and help commands:
_ _ | Version 0.8.3.post24+g8caf858a
/_\ _ _ __| |___ ___ | Python 3.7.1 on Darwin, 04/06/2020 08:47:43 PM
/ _ \| ' \/ _ / -_|_-< |
/_/ \_\_||_\__,_\___/__/ | This program comes with ABSOLUTELY NO WARRANTY.
usage: andes [-h] [-v {10,20,30,40,50}]
{run,plot,misc,prepare,doc,selftest} ...
positional arguments:
{run,plot,misc,prepare,doc,selftest}
[run] run simulation routine; [plot] plot simulation
results; [doc] quick documentation; [prepare] run the
symbolic-to-numeric preparation; [misc] miscellaneous
functions.
optional arguments:
-h, --help show this help message and exit
-v {10,20,30,40,50}, --verbose {10,20,30,40,50}
Program logging level in 10-DEBUG, 20-INFO,
30-WARNING, 40-ERROR or 50-CRITICAL.
The first level of commands are chosen from {run,plot,doc,misc,prepare,selftest}. Each command contains a group of subcommands, which can be looked up by appending -h to the first-level command. For example, use andes run -h to look up the subcommands in run.
andes has an option for the program verbosity level, controlled by -v or --verbose. Accepted levels are the same as in the logging module: 10 - DEBUG, 20 - INFO, 30 - WARNING, 40 - ERROR, 50 - CRITICAL. To show debugging outputs, use -v 10.
## Step 1: Power Flow
Pass the path to the case file to andes run to perform power flow calculation. It is recommended to change directory to the folder containing the test case before running.
Kundur's two-area system can be located under andes/cases/kundur with the name kundur_full.xlsx. Locate the folder in your system and use cd to change directory. To run power flow calculation, do
andes run kundur_full.xlsx
Power flow reports will be saved to the directory where andes is called. The power flow report, named kundur_full_out.txt, contains four sections:
• system statistics,
• ac bus and dc node data,
• ac line data,
• the initialized values of algebraic variables and state variables.
## Step 2: Dynamic Analyses
ANDES comes with two dynamic analysis routines: time-domain simulation and eigenvalue analysis.
Option -r or -routine is used to specify the routine, followed by the routine name. Available routine names include pflow, tds, eig.
• pflow is the default power flow calculation and can be omitted.
• tds is for time domain simulation.
• eig is for eigenvalue analysis.
To run time-domain simulation for kundur_full.xlsx in the current directory, do
andes run kundur_full.xlsx -r tds
Two output files, kundur_full_out.lst and kundur_full_out.npy will be created for variable names and values, respectively.
Likewise, to run eigenvalue analysis for kundur_full.xlsx, use
andes run kundur_full.xlsx -r eig
The eigenvalue report will be written in a text file named kundur_full_eig.txt.
### PSS/E raw and dyr support
ANDES supports the PSS/E v32 raw and dyr files for power flow and dynamic studies. Example raw and dyr files can be found in andes/cases/kundur. To perform a time-domain simulation for kundur.raw and kundur_full.dyr, run
andes run kundur.raw --addfile kundur_full.dyr -r tds
where --addfile takes the dyr file. Please note that the support for dyr file is limited to the models available in ANDES.
Alternatively, one can convert the PSS/E data to an ANDES xlsx file with
andes run kundur.raw --addfile kundur_full.dyr --convert
Edits such as adding models can be made to the xlsx file before simulation.
## Step 3: Plot Results
andes plot is the command-line tool for plotting. Currently, it only supports time-domain simulation data. Three arguments are needed: file name, x-axis variable index, and y-axis variable index (or indices).
Variable indices can be looked up by opening the kundur_full_out.lst file as plain text. Index 0 is always the simulation time.
Multiple y-axis variable indices can be provided in either space-separated format or the Pythonic comma-separated style.
To plot speed (omega) for all generators with indices 2, 8, 14, 20, either do
andes plot kundur_full_out.npy 0 2 8 14 20
or
andes plot kundur_full_out.npy 0 2:21:6
# Configure ANDES
ANDES uses a config file to set runtime configs for system, routines and models. The config file is loaded at the time when ANDES is invoked or imported.
At the command-line prompt,
• andes misc --save saves all configs to a file. By default, it goes to ~/.andes/andes.conf.
• andes misc --edit is a shortcut for editing the config file. It takes an optional editor name.
Without an editor name, the following default editor is used:
• On Microsoft Windows, it will open up a notepad.
• On Linux, it will use the $EDITOR environment variable or use vim by default.
• On macOS, the default is vim.
# Format Converter
## Input Converter
ANDES recognizes a few input formats (MATPOWER, PSS/E and ANDES xlsx) and can convert input to the xlsx format. This function is useful when one wants to use models that are unique in ANDES.
• andes run CASENAME.ext --convert performs the conversion to xlsx, where CASENAME.ext is the full test case name.
• andes run CASENAME.ext --convert-all performs the conversion and create empty sheets for all supported models.
• andes run CASENAME.xlsx --add-book ADD_BOOK, where ADD_BOOK is the workbook name (the same as the model name) to be added.
For example, to convert wscc9.raw in the current folder to the ANDES xlsx format, run
andes run wscc9.raw --convert
The command will write the output to wscc9.xlsx in the current directory. An additional dyr file can be included through --addfile, as shown in Step 2: Dynamic Analysis. Power flow models and dynamic models will be consolidated and written to a single xlsx file.
### Adding Model Template to an Existing xlsx File
To add new models to an existing xlsx file, one needs to create new workbooks (shown as tabs at the bottom); --add-book can add model templates to an existing xlsx file. To add models GENROU and TGOV1 to the xlsx file wscc9.xlsx, run
andes run wscc9.xlsx --add-book GENROU,TGOV1
Two workbooks named "GENROU" and "TGOV1" will appear in the new wscc9.xlsx file.
Warning: --add-book will overwrite the original file. All empty workbooks will be discarded. It is recommended to make copies to backup your cases.
## Output Converter
The output converter is used to convert .npy output to a comma-separated (csv) file.
To convert, do andes plot OUTPUTNAME.npy -c, where OUTPUTNAME.npy is the file name of the simulation output.
For example, to convert kundur_full_out.npy (in the current directory) to a csv file, run
andes plot kundur_full_out.npy -c
The output will be written to kundur_full_out.csv in the current directory.
# Model Development
The steps to develop new models are outlined. New models will need to be written in Python and incorporated in the ANDES source code. Models are placed under andes/models with a descriptive file name for the model type.
If a new file is created, import the building block classes at the top of the file
from andes.core.model import ModelData, Model
from andes.core.param import IdxParam, NumParam, ExtParam
from andes.core.var import Algeb, State, ExtAlgeb, ExtState
from andes.core.service import ConstService, ExtService
from andes.core.discrete import AntiWindup
The TGOV1 model will be used to illustrate the model development process.
## Step 1: Define Parameters
Create a class to hold parameters that will be loaded from the data file. The class inherits from ModelData
class TGOV1Data(ModelData):
def __init__(self):
self.syn = IdxParam(model='SynGen',
info='Synchronous generator idx',
mandatory=True,
)
self.R = NumParam(info='Speed regulation gain under machine base',
tex_name='R',
default=0.05,
unit='p.u.',
ipower=True,
)
self.wref0 = NumParam(info='Base speed reference',
tex_name=r'\omega_{ref0}',
default=1.0,
unit='p.u.',
)
self.VMAX = NumParam(info='Maximum valve position',
tex_name='V_{max}',
unit='p.u.',
default=1.2,
power=True,
)
self.VMIN = NumParam(info='Minimum valve position',
tex_name='V_{min}',
unit='p.u.',
default=0.0,
power=True,
)
self.T1 = NumParam(info='Valve time constant',
default=0.1,
tex_name='T_1')
self.T2 = NumParam(info='Lead-lag lead time constant',  # restored declaration; info string assumed
default=0.2,
tex_name='T_2')
self.T3 = NumParam(info='Lead-lag lag time constant',
default=10.0,
tex_name='T_3')
self.Dt = NumParam(info='Turbine damping coefficient',
default=0.0,
tex_name='D_t',
power=True,
)
Note that the example above has all the parameters loaded in one class. In practice, it is recommended to create a base class for common parameters and let TGOV2Data inherit from it. See the code in andes/models/governor.py for the example.
## Step 2: Define Externals
Next, another class to hold the non-parameter instances is created. The class inherits from Model and takes three positional arguments by the constructor.
The code below defines parameters, variables and services retrieved from external models (specifically, generators).
class TGOV1Model(Model):
def __init__(self, system, config):
self.Sn = ExtParam(src='Sn',
model='SynGen',
indexer=self.syn,
tex_name='S_m',
info='Rated power from generator',
unit='MVA',
export=False,
)
self.Vn = ExtParam(src='Vn',
model='SynGen',
indexer=self.syn,
tex_name='V_m',
info='Rated voltage from generator',
unit='kV',
export=False,
)
self.tm0 = ExtService(src='tm',
model='SynGen',
indexer=self.syn,
tex_name=r'\tau_{m0}',
info='Initial mechanical input')
self.omega = ExtState(src='omega',
model='SynGen',
indexer=self.syn,
tex_name=r'\omega',
info='Generator speed',
unit='p.u.'
)
In addition, a service can be defined for the inverse of the gain
self.gain = ConstService(v_str='u / R',
tex_name='G',
)
## Step 3: Define Variables
First of all, the turbine governor output modifies the generator power input. Therefore, the generator input variable should be retrieved by the governor. Next, internal variables can be defined.
# mechanical torque input of generators
self.tm = ExtAlgeb(src='tm',
model='SynGen',
indexer=self.syn,
tex_name=r'\tau_m',
info='Mechanical power to generator',
)
self.pout = Algeb(info='Turbine final output power',
tex_name='P_{out}',
)
self.wref = Algeb(info='Speed reference variable',
tex_name=r'\omega_{ref}',
)
self.pref = Algeb(info='Reference power input',
tex_name='P_{ref}',
)
self.wd = Algeb(info='Generator under speed',
unit='p.u.',
tex_name=r'\omega_{dev}',
)
self.pd = Algeb(info='Pref plus under speed times gain',
unit='p.u.',
tex_name="P_d",
)
self.LAG_y = State(info='State in lag transfer function',
tex_name=r"y_{LAG}",
)
self.LAG_lim = AntiWindup(u=self.LAG_y,
lower=self.VMIN,
upper=self.VMAX,
tex_name='lim_{lag}',
)
self.LL_x = State(info='State in lead-lag transfer function',
tex_name="x'_{LL}",
)
self.LL_y = Algeb(info='Lead-lag output',  # restored declaration; info string assumed
tex_name='y_{LL}',
)
## Step 4: Define Equations
Set up the equation associated with each variable. Algebraic equations are in the form of 0 = g(x, y). Differential equations are in the form of T \dot{x} = f(x, y).
self.tm.e_str = 'u*(pout - tm0)'
self.wref.e_str = 'wref0 - wref'
self.pref.e_str = 'tm0 * R - pref'
self.wd.e_str = '(wref - omega) - wd'
self.pd.e_str='(wd + pref) * gain - pd'
self.LAG_y.e_str = 'LAG_lim_zi * (1 * pd - LAG_y) / T1'
self.LL_x.e_str = '(LAG_y - LL_x) / T3'
self.LL_y.e_str='T2 / T3 * (LAG_y - LL_x) + LL_x - LL_y'
self.pout.e_str = '(LL_y + Dt * wd) - pout'
## Step 5: Define Initializers
Initializers are used to set up initial values for variables. Initializers are evaluated in the same sequence as the declaration of variables. Initializer evaluation results are set to the corresponding variable. Usually, only internal variables (Algeb and State) require initializers.
self.wref.v_str = 'wref0'
self.pout.v_str = 'tm0'
self.LL_y.v_str = 'LAG_y'
self.LL_x.v_str = 'LAG_y'
self.LAG_y.v_str = 'pd'
self.pd.v_str = 'tm0'
self.wd.v_str = '0'
self.pref.v_str = 'tm0 * R'
Alternatively, equations and initializers can be passed to the keyword arguments e_str and v_str, respectively, of the corresponding instance.
## Step 6: Finalize
This step provides additional information on the model. The group to which the device belongs needs to be specified, and the routines this model supports need to be updated.
For example, TGOV1 belongs to the TurbineGov group, which is defined in andes/models/group.py. TGOV1 participates in the time-domain simulation and is not involved in power flow. The snippet below is added to the constructor of class TGOV1Model.
self.group = 'TurbineGov'
self.flags.update({'tds': True})
Next, a TGOV1 class needs to be created as the final class. It is a bit of boilerplate as of the current implementation.
class TGOV1(TGOV1Data, TGOV1Model):
def __init__(self, system, config):
TGOV1Data.__init__(self)
TGOV1Model.__init__(self, system, config)
One more step: the class needs to be added to the package __init__.py file to be loaded. Edit andes/models/__init__.py and add it to non_jit, whose keys are the file names and whose values are the classes in the file. To add TGOV1, locate the line with key governor and add TGOV1 to the value list so that it looks like
non_jit = OrderedDict([
# ...
('governor', ['TG2', 'TGOV1']),
# ...
])
Finally, run andes prepare from the command line to re-generate code for the new model.
# API Reference
The official documentation explains the complete list of modeling components. The most commonly used ones are highlighted in the following.
# Who is Using ANDES?
Please let us know if you are using ANDES for research or projects. We kindly request you to cite our paper if you find ANDES useful.
This work was supported in part by the Engineering Research Center Program of the National Science Foundation and the Department of Energy under NSF Award Number EEC-1041877 and the CURENT Industry Partnership Program.
See GitHub contributors for the contributor list.
|
2021-04-20 02:16:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2880915105342865, "perplexity": 9333.10852442986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038921860.72/warc/CC-MAIN-20210419235235-20210420025235-00348.warc.gz"}
|
https://angularquestions.com/2019/02/11/custom-xml-attribute-name-for-xslt/
|
# Custom XML Attribute Name for XSLT
I have an XSLT that generates AngularJS markup, with model='xxx'.
Now I have to make an XSLT that uses the same source, but the target will be Angular 6.
I changed my XSLT to this (only model changed to [(model)]):
<xsl:template name="InputText">
<div class="input-group">
<input type="{@type}" class="form-control {@class}"
>
<xsl:attribute name="[(model)]">
<xsl:value-of select="@model"/>
</xsl:attribute>
</input>
</div>
But, when I transform I have the following message:
FATAL ERROR: 'line 131: You cannot call an attribute '[(model)]''
I understand the error, but my question is: must I use some escape or an additional parameter?
Test: my XSLT execution is with Java.
Source: AngularJS
|
2019-02-18 08:31:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33702197670936584, "perplexity": 12488.80421739514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484772.43/warc/CC-MAIN-20190218074121-20190218100121-00287.warc.gz"}
|
https://openstax.org/books/principles-finance/pages/13-1-measures-of-center
|
Principles of Finance
# 13.1 Measures of Center
By the end of this section, you will be able to:
• Calculate various measures of the average of a data set, such as mean, median, mode, and geometric mean.
• Recognize when a certain measure of center is more appropriate to use, such as weighted mean.
• Distinguish among arithmetic mean, geometric mean, and weighted mean.
### Arithmetic Mean
The average of a data set is a way of describing location. The most widely used measures of the center of a data set are the mean (average), median, and mode. The arithmetic mean is the most common measure of the average. We will discuss the geometric mean later.
Note that the words mean and average are often used interchangeably. The substitution of one word for the other is common practice. The technical term is arithmetic mean, and average technically refers only to a center location. Formally, the arithmetic mean is called the first moment of the distribution by mathematicians. However, in practice among non-statisticians, average is commonly accepted as a synonym for arithmetic mean.
To calculate the arithmetic mean value of 50 stock portfolios, add the 50 portfolio dollar values together and divide the sum by 50. To calculate the arithmetic mean for a set of numbers, add the numbers together and then divide by the number of data values.
In statistical analysis, you will encounter two types of data sets: sample data and population data. Population data represents all the outcomes or measurements that are of interest. Sample data represents outcomes or measurements collected from a subset, or part, of the population of interest.
The notation $\bar{x}$ is used to indicate the sample mean, where the arithmetic mean is calculated based on data taken from a sample. The notation $\sum x$ is used to denote the sum of the data values, and $n$ is used to indicate the number of data values in the sample, also known as the sample size.
The sample mean can be calculated using the following formula:
$\bar{x} = \frac{\sum x}{n}$
13.1
Finance professionals often rely on averages of Treasury bill auction amounts to determine their value. Table 13.1 lists the Treasury bill auction amounts for a sample of auctions from December 2020.
| Maturity | Amount ($ Billions) |
| --- | --- |
| 4-week T-bills | 32.9 |
| 8-week T-bills | 38.4 |
| 13-week T-bills | 63.1 |
| 26-week T-bills | 59.6 |
| 52-week T-bills | 39.7 |
| Total | 233.7 |
Table 13.1 United States Treasury Bill Auctions, December 22 and 24, 2020 (source: Treasury Direct)
To calculate the arithmetic mean of the amount paid for Treasury bills at auction, in billions of dollars, we use the following formula:
$\bar{x} = \frac{\sum x}{n} = \frac{233.7}{5} = 46.74$
13.2
### Median
To determine the median of a data set, order the data from smallest to largest, and then find the middle value in the ordered data set. For example, to find the median value of 50 portfolios, find the number that splits the data into two equal parts. The portfolio values owned by 25 people will be below the median, and 25 people will have portfolio values above the median. The median is generally a better measure of the average when there are extreme values or outliers in the data set. An outlier or extreme value is a data value that is significantly different from the other data values in a data set. The median is preferred when outliers are present because the median is not affected by the numerical values of the outliers.
The ordered data set from Table 13.1 appears as follows:
32.9, 38.4, 39.7, 59.6, 63.1
13.3
The middle value in this ordered data set is the third data value, which is 39.7. Thus, the median is $39.7 billion.
You can quickly find the location of the median by using the expression (n + 1)/2. The variable n represents the total number of data values in the sample. If n is an odd number, the median is the middle value of the data values when ordered from smallest to largest. If n is an even number, the median is equal to the two middle values of the ordered data values added together and divided by 2. In the example from Table 13.1, there are five data values, so n = 5. To identify the position of the median, calculate (n + 1)/2, which is (5 + 1)/2, or 3. This indicates that the median is located in the third data position, which corresponds to the value 39.7.
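For a concrete check, the following short Python sketch (standard library only) reproduces the arithmetic mean and median of the Treasury bill amounts in Table 13.1:
import statistics

# Treasury bill auction amounts from Table 13.1, in billions of dollars
amounts = [32.9, 38.4, 63.1, 59.6, 39.7]

print(round(statistics.mean(amounts), 2))  # 46.74, the arithmetic mean
print(statistics.median(amounts))          # 39.7, the median of the ordered data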
As mentioned earlier, when outliers are present in a data set, the mean can be nonrepresentative of the center of the data set, and the median will provide a better measure of center. The following Think It Through example illustrates this point.
### Think It Through
#### Finding the Measure of Center
Suppose that in a small village of 50 people, one person earns a salary of $5 million per year, and the other 49 individuals each earn $30,000. Which is the better measure of center: the mean or the median?
### Mode
Another measure of center is the mode. The mode is the most frequent value. There can be more than one mode in a data set as long as those values have the same frequency and that frequency is the highest. A data set with two modes is called bimodal. For example, assume that the weekly closing stock price for a technology stock, in dollars, is recorded for 20 consecutive weeks as follows:
13.5
To find the mode, determine the most frequent score, which is 72. It occurs five times. Thus, the mode of this data set is 72. It is helpful to know that the most common closing price of this particular stock over the past 20 weeks has been $72.00.

### Geometric Mean

The arithmetic mean, median, and mode are all measures of the center of a data set, or the average. They are all, in their own way, trying to measure the common point within the data—that which is “normal.” In the case of the arithmetic mean, this is accomplished by finding the value from which all points are equal linear distances. We can imagine that all the data values are combined through addition and then distributed back to each data point in equal amounts.

The geometric mean redistributes not the sum of the values but their product. It is calculated by multiplying all the individual values and then redistributing them in equal portions such that the total product remains the same. This can be seen from the formula for the geometric mean, x̃ (pronounced x-tilde):

$\tilde{x} = \sqrt[n]{x_1 \cdot x_2 \cdots x_n}$  (13.6)

The geometric mean is relevant in economics and finance for dealing with growth—of markets, in investments, and so on. For an example of a finance application, assume we would like to know the equivalent percentage growth rate over a five-year period, given the yearly growth rates for the investment. For a five-year period, the annual rate of return for a certificate of deposit (CD) investment is as follows: 3.21%, 2.79%, 1.88%, 1.42%, 1.17%. Find the single percentage growth rate that is equivalent to these five annual consecutive rates of return. The geometric mean of these five rates of return will provide the solution. To calculate the geometric mean for these values (which must all be positive), first multiply¹ the rates of return together—after adding 1 to the decimal equivalent of each interest rate—and then take the nth root of the product. We are interested in calculating the equivalent overall rate of return for the yearly rates of return, which can be expressed as 1.0321, 1.0279, 1.0188, 1.0142, and 1.0117:

$\tilde{x} = \sqrt[5]{1.0321 \cdot 1.0279 \cdot 1.0188 \cdot 1.0142 \cdot 1.0117} \approx 1.0209$  (13.7)

Based on the geometric mean, the equivalent annual rate of return for this time period is 2.09%.

### Weighted Mean

A weighted mean is a measure of the center, or average, of a data set where each data value is assigned a corresponding weight. A common financial application of a weighted mean is in determining the average price per share for a certain stock when the stock has been purchased at different points in time and at different share prices.

To calculate a weighted mean, create a table with the data values in one column and the weights in a second column. Then create a third column in which each data value is multiplied by each weight on a row-by-row basis. Then, the weighted mean is calculated as the sum of the results from the third column divided by the sum of the weights.

### Think It Through

#### Calculating the Weighted Mean

Assume your portfolio contains 1,000 shares of XYZ Corporation, purchased on three different dates, as shown in Table 13.2. Calculate the weighted mean of the purchase price for the 1,000 shares.

| Date Purchased | Purchase Price ($) | Number of Shares Purchased | Price ($) Times Number of Shares |
|---|---|---|---|
| January 17 | 78 | 200 | 15,600 |
| February 10 | 122 | 300 | 36,600 |
| March 23 | 131 | 500 | 65,500 |
| Total | NA | 1,000 | 117,700 |

Table 13.2 1,000 Shares of XYZ Corporation
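A minimal Python sketch of the two calculations above, using the CD growth rates and the figures from Table 13.2 (an illustrative snippet; requires Python 3.8+ for math.prod):

```python
import math

# Geometric mean of the five annual CD growth factors -> equivalent annual rate
factors = [1.0321, 1.0279, 1.0188, 1.0142, 1.0117]
geo_mean = math.prod(factors) ** (1 / len(factors))
print(f"equivalent annual rate: {geo_mean - 1:.4%}")   # about 2.09%

# Weighted mean purchase price for the 1,000 shares in Table 13.2
prices = [78, 122, 131]
shares = [200, 300, 500]
weighted_mean = sum(p * s for p, s in zip(prices, shares)) / sum(shares)
print(f"weighted mean price: ${weighted_mean:.2f}")     # $117.70
```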
### Footnotes
• 1In this chapter, the interpunct dot will be used to indicate the multiplication operation in formulas.
|
2023-02-08 19:38:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6130064129829407, "perplexity": 542.3168431768281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500904.44/warc/CC-MAIN-20230208191211-20230208221211-00288.warc.gz"}
|
https://mathematica.stackexchange.com/questions/166203/choleskydecomposition-and-simplify
|
# CholeskyDecomposition and Simplify
I have a problem when trying to simplify a $6\times 6$ matrix after Cholesky decomposition. I tried all the regular operations such as
FullSimplify[Hnew[z], Element[z, Reals]]
or
$Assumptions = z ∈ Reals

where my matrix is

Hnew[z_] := {{1, 1 + z, 1 + z, 0, -3 z, -3 z}, {1 + z, 1, 1 + z, -3 z, 0, -3 z},
  {1 + z, 1 + z, 1, -3 z, -3 z, 0}, {0, -3 z, -3 z, 1, 1 + z, 1 + z},
  {-3 z, 0, -3 z, 1 + z, 1, 1 + z}, {-3 z, -3 z, 0, 1 + z, 1 + z, 1}}

but Mathematica evaluates indefinitely when I give it

FullSimplify[CholeskyDecomposition[Hnew[z]], z > 0]

and it ignores assumptions. I also tried Refine, Simplify and Assuming, but nothing makes Mathematica remove the Conjugate terms by recognizing that the entries are real. It just calculates for so long that I need to abort the calculation. Does anybody have experience with CholeskyDecomposition who is willing to help me out?

P.S. I'm new here.

## 2 Answers

The documentation for CholeskyDecomposition tells us the function argument must be a positive definite matrix. We can prove, however, that your matrix is not positive definite. Here's how: for a matrix to be positive definite, all of its eigenvalues must be positive real numbers. So, we look at its eigenvalues, like this:

Eigenvalues[
 {
  {1, 1 + z, 1 + z, 0, -3 z, -3 z},
  {1 + z, 1, 1 + z, -3 z, 0, -3 z},
  {1 + z, 1 + z, 1, -3 z, -3 z, 0},
  {0, -3 z, -3 z, 1, 1 + z, 1 + z},
  {-3 z, 0, -3 z, 1 + z, 1, 1 + z},
  {-3 z, -3 z, 0, 1 + z, 1 + z, 1}
 }
]

(* {3 - 4 z, -4 z, -4 z, 2 z, 2 z, 3 + 8 z} *)

We quickly see there is no real value of $z$ that gives all positive eigenvalues, so CholeskyDecomposition should not be used.

Alternatively, one can use the $\mathbf L\mathbf D\mathbf L^\top$ decomposition to avoid the square roots needed by Cholesky. Using the routine in this answer, we get the diagonal factor $\mathbf D$ and check for conditions such that all of them are positive:
LDLT[mat_?SymmetricMatrixQ] :=
Module[{n = Length[mat], mt = mat, v, w},
Do[
If[j > 1,
w = mt[[j, ;; j - 1]]; v = w Take[Diagonal[mt], j - 1];
mt[[j, j]] -= w.v;
If[j < n,
mt[[j + 1 ;;, j]] -= mt[[j + 1 ;;, ;; j - 1]].v]];
mt[[j + 1 ;;, j]] /= mt[[j, j]],
{j, n}];
{LowerTriangularize[mt, -1] + IdentityMatrix[n], Diagonal[mt]}]
LDLT[{{1, 1 + z, 1 + z, 0, -3 z, -3 z},
{1 + z, 1, 1 + z, -3 z, 0, -3 z},
{1 + z, 1 + z, 1, -3 z, -3 z, 0},
{0, -3 z, -3 z, 1, 1 + z, 1 + z},
{-3 z, 0, -3 z, 1 + z, 1, 1 + z},
{-3 z, -3 z, 0, 1 + z, 1 + z, 1}}] // Last // Simplify
{1, -z (2 + z), -((z (3 + 2 z))/(2 + z)), (3 + 20 z)/(3 + 2 z),
-((16 z (-3 - 8 z + 8 z^2))/(3 + 20 z)), (4 z (-9 - 12 z + 32 z^2))/(-3 - 8 z + 8 z^2)}
Reduce[And @@ Thread[% > 0], z]
False
and thus, we come to the same conclusion as in Louis's answer.
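Outside Mathematica, a quick numerical cross-check is also possible. The following numpy sketch (an illustration, sampling a handful of values of $z$) confirms that the smallest eigenvalue of the matrix never becomes positive:

```python
import numpy as np

def Hnew(z):
    # the 6x6 matrix from the question
    return np.array([
        [1, 1 + z, 1 + z, 0, -3*z, -3*z],
        [1 + z, 1, 1 + z, -3*z, 0, -3*z],
        [1 + z, 1 + z, 1, -3*z, -3*z, 0],
        [0, -3*z, -3*z, 1, 1 + z, 1 + z],
        [-3*z, 0, -3*z, 1 + z, 1, 1 + z],
        [-3*z, -3*z, 0, 1 + z, 1 + z, 1],
    ], dtype=float)

for z in (-1.0, -0.1, 0.1, 0.5, 2.0):
    eigs = np.linalg.eigvalsh(Hnew(z))   # symmetric, so eigvalsh is appropriate
    print(z, eigs.min())                 # each minimum is negative: never positive definite
```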
|
2021-03-01 06:55:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.546654462814331, "perplexity": 8927.608449347354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362133.53/warc/CC-MAIN-20210301060310-20210301090310-00287.warc.gz"}
|
https://www.physicsforums.com/threads/propagation-of-light.72585/
|
# Propagation of Light
1. Apr 22, 2005
### sghaussi
Hi! I was wondering if you can give me some advice on how to approach this problem:
In a physics lab, light with a wavelength of 560 nm travels in air from a laser to a photocell in a time of 16.5 ns. When a slab of glass with a thickness of 0.860 m is placed in the light beam, with the beam incident along the normal to the parallel faces of the slab, it takes the light a time of 21.3 ns to travel from the laser to the photocell.
What is the wavelength of the light in the glass?
Use 3×108 m/s for the speed of light in a vacuum.
My main problem is that I don't know how the thickness of the medium is important.
Thank you in advance,
Sahar
2. Apr 22, 2005
How long did the light take to travel through the glass?
That's when the thickness counts.
You can get your index once you figure this out.
As the light goes through the glass, the frequency doesn't change, just the wavelength.
What does this tell you?
Last edited: Apr 22, 2005
3. Apr 22, 2005
### Andrew Mason
The first thing to do is to find how long the original path is.
$$s_0 = c\Delta t_0$$
For the path through the glass, there are two parts:
$$s_{air} = c\Delta t_{air}$$ and
$$s_{glass} = v_{glass}\Delta t_{glass}$$
so you know, or can work out: $s_{air},\Delta t_{air}, s_{glass}, \Delta t_{glass}$
From that you should be able to work out $v_{glass}$ and wavelength follows from that.
AM
Last edited: Apr 22, 2005
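For reference, a minimal numeric sketch of the steps outlined above, using only the numbers from the problem statement (illustrative Python, not part of the original posts):

```python
c = 3e8                       # m/s, speed of light in vacuum
lam_air = 560e-9              # m, wavelength in air
t_no_glass = 16.5e-9          # s, laser -> photocell without the slab
t_with_glass = 21.3e-9        # s, laser -> photocell with the slab
d_glass = 0.860               # m, slab thickness

s_total = c * t_no_glass              # total path length (~4.95 m)
t_air   = (s_total - d_glass) / c     # time spent in the remaining air
t_glass = t_with_glass - t_air        # time spent inside the glass
v_glass = d_glass / t_glass           # speed of light in the glass
n       = c / v_glass                 # index of refraction (~2.67)
lam_glass = lam_air / n               # wavelength in the glass (~209 nm)
print(v_glass, n, lam_glass)
```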
|
2017-07-20 14:51:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6600262522697449, "perplexity": 599.427884950512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423222.65/warc/CC-MAIN-20170720141821-20170720161821-00245.warc.gz"}
|
https://socratic.org/questions/how-do-you-solve-2-4-3x-1-9#167875
|
# How do you solve 2.4^(3x+1)= 9?
Sep 12, 2015
$x = \frac{\log 9 - \log 8}{6 \log 2} \approx 0.02832$
#### Explanation:
Reading the left-hand side as $2 \cdot 4^{3x+1}$ and using laws of exponents and indices, you may write the equation as
${2}^{1} \cdot {2}^{6 x} \cdot {2}^{2} = {3}^{2}$
$\therefore {2}^{6 x + 3} = {3}^{2}$
$\therefore {2}^{6 x} = \frac{{3}^{2}}{{2}^{3}} = \frac{9}{8}$
Now taking the logarithm on both sides and using laws of logs we get
$6 x \log 2 = \log 9 - \log 8$
$\therefore x = \frac{\log 9 - \log 8}{6 \log 2} \approx 0.02832$
|
2021-12-04 20:53:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9811019897460938, "perplexity": 1878.3074710549488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363006.60/warc/CC-MAIN-20211204185021-20211204215021-00624.warc.gz"}
|
https://www.physicsforums.com/threads/vectors-problem.720065/
|
# Vectors problem
Hello, my problem is attached as a picture. Could you give me some guidelines on how to approach the problem? (I know the given formulas are derived using QM (probably), and I'm not "scared" of them; I just need to know where to do a cross or a dot product, and maybe how to approach the last part of the problem - the speed of contact with the ground. This would probably mean that the vector displacement finish point will be at (a, b, 0).)
#### Attachments
• 56.4 KB Views: 336
Related Introductory Physics Homework Help News on Phys.org
Simon Bridge
You have the equation of motion along with clues to $\vec{V}$ and $\vec{A}$ - why not use this?
It may help if you pick a direction for U - if it has to be general, then $\vec{V}=U\hat{V}$ which will have zero y component (because it is "horizontal").
|
2020-02-24 05:19:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5883070230484009, "perplexity": 399.3086130031866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145897.19/warc/CC-MAIN-20200224040929-20200224070929-00441.warc.gz"}
|
https://tex.stackexchange.com/questions/354872/using-path-options-to-set-color-for-a-custom-shading
|
# Using path options to set color for a custom shading
I'm using a custom shading to make cylindrical bars.
The shading uses a color that is defined in the color list.
I can change the color by using a \colorlet expression. But I'd like to be able to do it with an option in the draw command.
Here's my code, with my latest attempts to set the color commented out. I haven't been able to find anything in the documentation or on stackexchange that shows me how to do it.
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{decorations}
{\pgfpoint{0bp}{5bp}}
{color(0bp)=(white);
color(28bp)=(mycolor);
color(60bp)=(mycolor)}
{100bp}
{color(0bp)=(mycolor);
color(25bp)=(mycolor);
color(55bp)=(white);
color(75bp)=(mycolor);
color(100bp)=(mycolor)}
\def\cylindricalsphere{
\begin{pgfscope}
\pgfpathcircle{\pgfpoint{0}{0}}{5pt}
\end{pgfscope}}
\pgfdeclaredecoration{cylindricalbarspheres}{initial}
{
\state{initial}[width=1pt,next state=middle]{\cylindricalsphere}
\state{middle}[width=1pt]{}
\state{final}{\cylindricalsphere}}
\def\cylindricalsegment{
\begin{pgfscope}
\pgfpathrectanglecorners
{\pgfpoint{-.55pt}{-5pt}}
{\pgfpoint{.55pt}{5pt}}
\end{pgfscope}}
\pgfdeclaredecoration{cylindricalbarsegments}{initial}
{
\state{initial}[width=1pt,next state=middle]{
%\pgfkeysgetvalue{/tikz/path color}{pcolor}
%\colorlet{mycolor}{pcolor}
\cylindricalsegment}
\state{middle}[width=1pt]{
\cylindricalsegment}
\state{final}{}
}
\tikzset{
cylindricalbar/.style={
preaction={decorate,decoration=cylindricalbarspheres},
postaction={decorate,decoration=cylindricalbarsegments},
}}
\begin{document}
\begin{tikzpicture}
\colorlet{mycolor}{blue}
\path [cylindricalbar,color=green] (0,0) arc (90:0:3.5);
\end{tikzpicture}
\end{document}
You can declare a new TikZ option to store the color and use it in the shading declarations.
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{decorations}
\tikzset{
  % (assumed reconstruction) key that stores the chosen colour in \mycolor,
  % which is then used inside the shading declarations below
  cylindrical shading color/.store in=\mycolor,
  cylindrical shading color=blue,
}
{\pgfpoint{0bp}{5bp}}
{color(0bp)=(white);
color(28bp)=(\mycolor);
color(60bp)=(\mycolor)}
{100bp}
{color(0bp)=(\mycolor);
color(25bp)=(\mycolor);
color(55bp)=(white);
color(75bp)=(\mycolor);
color(100bp)=(\mycolor)}
\def\cylindricalsphere{
\begin{pgfscope}
\pgfpathcircle{\pgfpoint{0}{0}}{5pt}
\end{pgfscope}}
\pgfdeclaredecoration{cylindricalbarspheres}{initial}
{
\state{initial}[width=1pt,next state=middle]{\cylindricalsphere}
\state{middle}[width=1pt]{}
\state{final}{\cylindricalsphere}}
\def\cylindricalsegment{
\begin{pgfscope}
\pgfpathrectanglecorners
{\pgfpoint{-.55pt}{-5pt}}
{\pgfpoint{.55pt}{5pt}}
\end{pgfscope}}
\pgfdeclaredecoration{cylindricalbarsegments}{initial}
{
\state{initial}[width=1pt,next state=middle]{
%\pgfkeysgetvalue{/tikz/path color}{pcolor}
%\colorlet{mycolor}{pcolor}
\cylindricalsegment}
\state{middle}[width=1pt]{
\cylindricalsegment}
\state{final}{}
}
\tikzset{
cylindricalbar/.style={
preaction={decorate,decoration=cylindricalbarspheres},
postaction={decorate,decoration=cylindricalbarsegments},
}}
\begin{document}
\begin{tikzpicture}
\path [cylindricalbar, cylindrical shading color=red] (0,0) arc (90:0:3.5);
\path [cylindricalbar, cylindrical shading color=green] (0,-0.5) arc (90:0:2.75);
\path [cylindricalbar, cylindrical shading color=blue] (0,-1) arc (90:0:2);
\path [cylindricalbar] (0,-1.5) arc (90:0:1.5);
\end{tikzpicture}
\end{document}
• I guess your bar endings disappeared in some cases, too? – cfr Mar 1 '17 at 2:07
• @cfr Do you have an example of disappearance? – Paul Gaborit Mar 1 '17 at 6:55
• @cfr. Yes. But I've not investigated the problem. – Ignasi Mar 1 '17 at 8:54
• @PaulGaborit: In my code, change green bar to be: \path [cylindricalbar, cylindrical shading color=green] (0,-0.5) arc (90:0:3.0); and lower end disappears. – Ignasi Mar 1 '17 at 8:55
• @PaulGaborit I tried the same with -0.5, but also had some where y was positive. I think if you change all the arcs to 90:0 in my answer, you'll get 2 disappearing lower ends, which is why I used 90:-5. – cfr Mar 1 '17 at 14:17
I tend to use a .code handler and just set the colour directly.
cylindrical bar colour/.code={
\colorlet{mycolor}{#1}%
},
cylindrical bar colour=black,
This creates a key cylindrical bar colour whose argument will be used to set mycolor. black is used as an initial value.
Then,
\begin{tikzpicture}
\path [cylindricalbar] (0,.5) arc (90:-5:4);
\path [cylindricalbar, cylindrical bar colour=green] (0,0) arc (90:-5:3.5);
\path [cylindricalbar, cylindrical bar colour=blue] (0,1) arc (90:-5:4.5);
\path [cylindricalbar, cylindrical bar colour=magenta] (0,1.5) arc (90:-5:5);
\end{tikzpicture}
gives
Complete code:
\documentclass[border=10pt,tikz]{standalone}
{\pgfpoint{0bp}{5bp}}
{color(0bp)=(white);
color(28bp)=(mycolor);
color(60bp)=(mycolor)}
{100bp}
{color(0bp)=(mycolor);
color(25bp)=(mycolor);
color(55bp)=(white);
color(75bp)=(mycolor);
color(100bp)=(mycolor)}
\def\cylindricalsphere{
\begin{pgfscope}
\pgfpathcircle{\pgfpoint{0}{0}}{5pt}
\end{pgfscope}}
\pgfdeclaredecoration{cylindricalbarspheres}{initial}
{
\state{initial}[width=1pt,next state=middle]{\cylindricalsphere}
\state{middle}[width=1pt]{}
\state{final}{\cylindricalsphere}}
\def\cylindricalsegment{
\begin{pgfscope}
\pgfpathrectanglecorners
{\pgfpoint{-.55pt}{-5pt}}
{\pgfpoint{.55pt}{5pt}}
\end{pgfscope}}
\pgfdeclaredecoration{cylindricalbarsegments}{initial}
{
\state{initial}[width=1pt,next state=middle]{
%\pgfkeysgetvalue{/tikz/path color}{pcolor}
%\colorlet{mycolor}{pcolor}
\cylindricalsegment}
\state{middle}[width=1pt]{
\cylindricalsegment}
\state{final}{}
}
\tikzset{
cylindricalbar/.style={
preaction={decorate,decoration=cylindricalbarspheres},
postaction={decorate,decoration=cylindricalbarsegments},
},
cylindrical bar colour/.code={
\colorlet{mycolor}{#1}%
},
cylindrical bar colour=black,
}
\begin{document}
\begin{tikzpicture}
\path [cylindricalbar] (0,.5) arc (90:-5:4);
\path [cylindricalbar, cylindrical bar colour=green] (0,0) arc (90:-5:3.5);
\path [cylindricalbar, cylindrical bar colour=blue] (0,1) arc (90:-5:4.5);
\path [cylindricalbar, cylindrical bar colour=magenta] (0,1.5) arc (90:-5:5);
\end{tikzpicture}
\end{document}
By the way, are your cylinder endings meant to disappear in some cases? I had to do some experimentation to get 4 arcs which all had two round ends.
|
2019-06-26 13:00:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8475731611251831, "perplexity": 6607.25573500741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000306.84/warc/CC-MAIN-20190626114215-20190626140215-00491.warc.gz"}
|
https://itectec.com/ubuntu/ubuntu-getting-screen-resolution-correct-with-nvidia-drivers-duplicate/
|
# Ubuntu – Getting screen resolution correct with nvidia drivers
graphicsnvidiaresolution
I have a new install of ubuntu.
Upon first installing the nvidia drivers are not active and I get the correct screen resolution. 1680×1050
Then I install the nvidia drivers and the best resolution I can get is 1280×1024.
In searching around there is a lot of information related to this and similar issues. I have tried tips with xrandr, manually installing the drivers, etc, etc. Finding the right information is proving troublesome however.
I know that the graphics card can out put the correct resolution because it does until the nvidia drivers are activated. So does anyone here know the solution? (Why does this have to be so hard?)
This is not a new whiz bang system, but one I put together with spare parts.
Monitor: ViewSonic VX2025WM — This monitor worked correctly on my other ubuntu system with the nvidia drivers, but was connected with VGA instead of digital.
Below is my xorg.conf after: 1. installing the restricted drivers (System —> Hardware Drivers), and 2. selecting the recommended drivers, installing, and rebooting.
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 1.0 (buildmeister@builder75) Sun Nov 8 21:50:38 PST 2009
Section "ServerLayout"
Identifier "Layout0"
Screen 0 "Screen0"
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "Mouse0" "CorePointer"
EndSection
Section "Files"
EndSection
Section "InputDevice"
# generated from default
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "auto"
Option "Device" "/dev/psaux"
Option "Emulate3Buttons" "no"
Option "ZAxisMapping" "4 5"
EndSection
Section "InputDevice"
# generated from default
Identifier "Keyboard0"
Driver "kbd"
EndSection
Section "Monitor"
Identifier "Monitor0"
VendorName "Unknown"
ModelName "Unknown"
HorizSync 30.0 - 110.0
VertRefresh 50.0 - 150.0
Option "DPMS"
EndSection
Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Device0"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Depth 24
EndSubSection
EndSection
So, any suggestions on this? At this point I'm assuming that the issue has to do with a good xorg.conf file and possibly EDID. A clear set of docs on this issue is hard to find. In searching the forums and other web sites I've found LOTS of others with similar issues, but it's all so scattered that it's hard to tell which ones are not dead ends. Given that many posts are dated as far back as 2006 and earlier, and that nvidia cards are so ubiquitous, it's hard to understand why there isn't an easier solution.
|
2021-07-30 13:31:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20335270464420319, "perplexity": 5944.531807051495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153966.60/warc/CC-MAIN-20210730122926-20210730152926-00366.warc.gz"}
|
https://creprep.wordpress.com/2013/04/22/weibull-distribution/
|
# Weibull Distribution
A continuous distribution useful for modeling time-to-failure data. For reliability practitioners, the Weibull distribution is a versatile and powerful tool. I often fit a Weibull when first confronted with a life dataset, as it provides a reasonable fit given the flexibility provided by the distribution's parameters.
The beta, β, value is called the shape parameter and describes the shape of the distribution, think histogram. It ranges from describing data that show a decreasing failure rate over time, β < 1, to data with an increasing failure rate, β > 1. When β = 1 the Weibull distribution exactly equals an Exponential distribution, and describes a constant failure rate.
Here is the formula for the Weibull Distribution probability density function. The PDF is like a histogram as it shows the relative rate of failure over time.
$\displaystyle \begin{array}{l}f(x)=\frac{\beta }{\eta }{{\left( \frac{x-\gamma }{\eta } \right)}^{\beta -1}}{{e}^{-{{\left( \frac{x-\gamma }{\eta } \right)}^{\beta }}}},\text{ for }x\ge \gamma \\f(x)=0,\text{ for }x<\gamma \end{array}$
A few plots will show the impact the β value has on the look of the distribution. The x axis is time, and y axis the probability density.
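A rough sketch of such plots, using only the PDF formula given above with $\gamma = 0$ and $\eta = 1$ (illustrative values; assumes numpy and matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt

def weibull_pdf(x, beta, eta, gamma=0.0):
    # PDF from the formula above; zero for x < gamma
    x = np.asarray(x, dtype=float)
    pdf = np.zeros_like(x)
    m = x >= gamma
    z = (x[m] - gamma) / eta
    pdf[m] = (beta / eta) * z**(beta - 1) * np.exp(-z**beta)
    return pdf

x = np.linspace(0.01, 5, 400)
for beta in (0.5, 1.0, 2.5):     # beta < 1, beta = 1 (exponential), beta > 1
    plt.plot(x, weibull_pdf(x, beta=beta, eta=1.0), label=f"beta = {beta}")
plt.xlabel("time"); plt.ylabel("probability density"); plt.legend()
plt.show()
```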
And while the static images are common, and many would overlay the images onto one plot, I think it would be better if it was animated.
There is a lot more to the Weibull distribution and I’ll be writing more soon. In the meantime here are two references that are worth reviewing.
Webb, Willie M., Andrew N. O'Connor, Mohammad Modarres, and Ali Mosleh. Probability Distributions Used in Reliability Engineering. College Park, Maryland: Center for Risk and Reliability.
Abernethy, Robert B. The New Weibull Handbook. 4th ed. North Palm Beach, Florida: Robert B. Abernethy, September, 2000.
|
2020-10-22 06:38:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7387046813964844, "perplexity": 1951.8458967275424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878921.41/warc/CC-MAIN-20201022053410-20201022083410-00439.warc.gz"}
|
https://worldbuilding.stackexchange.com/questions/168436/can-these-climates-biomes-exist-near-each-other
|
# Can these climates/biomes exist near each other?
I am writing a story, in which I would like to have three locations within about a half hour to an hour driving distance (assuming population, road systems and automobiles on par with the contemporary real world). The location is unspecified; it is Earth-like, but not necessarily Earth. (As long as "Earth-like plants and animals" are plausible, feel free to play with the planet!)
• Location A has [sub]tropical vegetation and at least one valley/canyon of moderate size. (Something like southern California, or Hawaii, or your stereotypical jungle/rainforest. Basically, I want to transplant something that closely resembles the San Diego zoo here). It needs to be close to a decent sized urban area, but doesn't need to be especially flat.
• Location B is somewhat more temperate and has a mixture of trees (possibly both evergreen and deciduous) and grasses, the latter of which can be due to human activity. It can be a little arid, but closer to the North American East Coast, Midwest or Pacific Northwest is preferred. It needs to be flat enough to support a light to heavy suburban population, as well as several specific locations that are "mostly" flat. (In other words, it isn't San Francisco or the side of a mountain.) Bonus points if the trees change color in autumn.
• Location C has a typical annual snowfall of at least 0.3m. Ideally, C and B would be the same place, or at least within about 15-20 minutes of each other.
Is it possible for such locations/climates/biomes to exist in such proximity? If so, what major geological or geothermal features and/or differences in elevation would be necessary to achieve this? (Maybe the top and bottom of a mesa? A flat-rimmed caldera, with A in the bottom and B on the rim? Maybe location A has some sort of geothermal heat source?)
• Go to Cannes, in southern France. You will be on the shore of the Mediterranean, with typical mediterranean climate; sunbathing in perfectly possible in winter. Admire the Palais des Festivals and the multitude of luxury yachts. Then drive 40 km (25 miles) to the ski resort of Gréolières-les-Neiges. (Or do a Google search for ski resorts côte d'azur.) – AlexP Feb 13 at 20:46
• @levininja, "about a half hour to an hour driving distance"... which, yeah, depends on the roads and what-not, and I know it's a fuzzy answer, but that's what matters story-wise. As a ballpark, call it 40km; it could be a bit more if there's a good freeway for most of that, or less if the roads have to be narrow and winding. – Matthew Feb 13 at 21:03
• Much better this time! Only thing I'd recommend is that you not award the Green Check until at least two or three days have passed. This gives you a wider range of answers to review and also doesn't put potential respondents off the task. – elemtilas Feb 14 at 0:34
## The key is coastal mountains, and good roads.
Most of the US West Coast has urban areas that never freeze, perhaps 90 minutes from mountains high enough to provide respectable skiing. Another place that comes to mind is Bergen, Norway.
For instance last year, the ski slopes an hour out from Sacramento/Roseville announced that skiing would continue into July. I am not kidding. There was a lot of snowpack, and it would last that long. The road from Roseville to Truckee is reasonably well-developed; 55 for trucks and all the cars drive 70.
2-lane roads would have the characteristic of US-50, a bit twisty but not excessively so, with most of the distance being a 60 mph cruise except for hairpins and twisty sections here and there. It really depends on how insane you want to make your geography; your snow scenario requires altitude not jaggedness, so you can have snowy regions with gentle, easy-to-build terrain.
What normally happens is that the ocean moderates the temperature of the air to at least 0°C and realistically higher; it arrives at the coast saturated with humidity. This temperate air keeps the coastal cities warm. When the air hits the mountains, it must go up, where it is made colder because of the altitude and the reduction in atmospheric pressure. Cold air can't hold as much moisture as warm air, so it must shed the humidity, and down comes snow. Very reliably.
• If you want to make the driving distance even shorter, just reverse the mountains. The Sierra Nevada has a long, fairly gentle slope on the west side, and a pretty steep drop on the east. Given clear roads and no traffic (admittedly a rarity :-() I can drive from ~4500 ft elevation to ~8900 ft Mt Rose Summit in about 20 minutes. – jamesqf Feb 14 at 3:19
• The en.wikipedia.org/wiki/Southern_Alps in new zealand is also a good example. – Borgh Feb 14 at 8:04
• Where in driving distance from Bergen, Norway is a subtropical forest or anything of the kind? – Tomáš Zato - Reinstate Monica Feb 14 at 16:32
• @TomášZato-ReinstateMonica I'm thinking of the original Slow TV video, where the train leaves Bergen with not a sign of snow, and then Voosh! Through the first tunnel into deep winter. I assumed Bergen benefits from a significant coastal effect. – Harper - Reinstate Monica Feb 14 at 17:21
• Lake Tahoe in California comes to mind as well. Fairly famous for the fact that during certain times of year you can water-ski and snow ski in the same afternoon – Bitsplease Feb 14 at 18:36
This makes me think of the Cascades mountains in Oregon and Washington. The mountains are snow-covered, yet the West side right next to them is nearly subtropical because the wind brings weather patterns in off the ocean there. Location C would be on the other side of the mountains, on the East side: there are certainly some areas that are temperate if they are close to the mountains, but once you go much further east it gets into desert.
The Big Island (Hawaii Island) absolutely fits your needs. Mauna Kea (our tallest mountain, actually the world's tallest mountain, look it up) has snow on it right now; we have temperate-type forests on the slopes, with tropical jungle and desert all around. In fact, every climate on earth except arctic is represented on this one island. You can drive "essentially" all the way around it in 3 hours (although you would miss a lot), or cross it and hit, say, even 5 climate/ecological zones in an hour or two.
• Welcome to WorldBuilding. This is a nice, succinct answer. Thank you for the contribution. Please take a look at our tour to be familiar with the site: worldbuilding.stackexchange.com/tour Also, the help center will help you ask great questions and write great answers: worldbuilding.stackexchange.com/help See you around! – SRM Feb 14 at 14:48
This will work in San Diego itself.
Only your location B has to be a flat highland "mesa", so it can be both forested and suitable for sizeable human population. Location "A" we can find next to the ocean, and "C" in the higher mountains, behind "B" location.
For general location, you need to stick to low latitudes, and on US West Coast you can't be higher than Southern California if you want to have subtropical forest like "A".
• Any idea how much elevation difference we're talking about? What the roads would be like between "A" and "B"? – Matthew Feb 13 at 21:15
• In San Diego, that has to be at least 600 meters. It should be high enough to block moisture moving from A to C. – Alexander Feb 13 at 21:26
• Uh... why do I not want C to get humidity? The snow needs to come from somewhere... – Matthew Feb 13 at 21:51
• @Matthew ouch, I read the question wrong. I will edit the answer. – Alexander Feb 13 at 22:07
• FWIW, I'm actually reconsidering if I should switch to this as the accepted answer. I went with Harper's because it is more detailed and e.g. considers the roads, but yours has the critical point that I need a mesa. Mountains, at least by themselves, don't satisfy my requirement of having (more than just a handful of) people at "B". – Matthew Feb 13 at 23:54
|
2020-10-27 06:03:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2722356915473938, "perplexity": 2379.0554610300373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107893402.83/warc/CC-MAIN-20201027052750-20201027082750-00561.warc.gz"}
|
https://calculus123.com/index.php?title=More_about_manifolds&oldid=956
|
This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.
## Quotients of manifolds
Example. Let us consider the quotient of the circle ${\bf S}^1/_{\sim}$ with two different equivalence relations. The circle has neighborhoods of each point homeomorphic to ${\bf R}$, i.e., it's a $1$-dimensional manifold. Now we want to see the effect of gluing on its structure.
Suppose the circle is centered at $0$ on the $xy$-plane.
Firstly, suppose $$(x,y) \sim (x,-y).$$
In other words, each point is identified with the one symmetric with respect to the $x$-axis. Then $$X = {\bf S}^1/_{\sim}$$ is not a $1$-manifold because $(1,0)$ and $(-1,0)$ don't have neighborhoods homeomorphic to ${\bf R}$ anymore. Indeed, they are homeomorphic to the ray ${\bf R}^+$.
In fact, $X$ is simply the half circle: $${\bf S}^1/_{\sim} = {\bf I} = [0,1].$$
Example. Secondly, suppose $$(x,y) \sim (-x,-y).$$
In other words, each point is identified with the one symmetric with respect the origin (the antipodal point). Then $X = {\bf S}^1/_{\sim}$ is still a $1$-manifold as the image shows.
What is it? It's another circle: $${\bf S}^1/_{\sim} = {\bf S}^1.$$
Exercise. Consider this kind of antipodal equivalence relation for the spheres ${\bf S}^2/_{\sim}$, ..., ${\bf S}^n/_{\sim}$ with $x \sim -x$.
Example. Next, let's consider a certain quotient of the disk: $$Y = {\bf B}^2/_{\sim}.$$
First, observe that $${\partial}{\bf B}^2 = {\bf S}^1,$$ therefore the identification considered above applies:
$u \sim -u$ for $u{\in}{\partial}{\bf B}^2$.
In other words, the edge of the disk is glued to itself, with a twist.
Observe that even though the disk has a boundary, $Y$ is a surface without boundary. Indeed, a point $x$ on the edge has a neighborhood homeomorphic to half-disk but, when it's glued to its antipodal point, the two half-disks form a whole disk.
The end result is called the projective plane ${\bf P}^2$.
Why isn't it, say, the sphere? Because it contains the Mobius band:
Such surfaces are called one-sided, or non-orientable.
## Manifolds on the grid
We have defined manifolds in a mathematical, indirect way. For applications, we would like to think about manifolds as certain kind of cubical complexes.
Specifically, how can we define manifolds on the grid?
Of course, we keep the definition, but we want to find an explicit definition of an $n$-manifold.
Recall that if a cell belongs to the complex, so do all of its faces. What if this $n$-complex is an $n$-manifold?
Observe:
1. There are no $k$-cells with $k>n$.
2. The manifold is compact if and only if it has a finite number of cells.
3. An ($n-1$)-cell is a face of one or two $n$-cells.
Since each $(n-1)$-cell on the grid belongs to exactly two $n$-cells...
item 3 above determines if this subset of the grid is an $n$-manifold without boundary:
• always two;
or with boundary:
• may be one.
Example:
This is a 1-manifold.
Example:
Problem, no patch $\simeq {\bf R}^2$ or ${\bf R}^2_+$.
## Manifolds as preimages
What we expect to see in dimension 2 is a relief:
It is the graph of a function $f \colon {\bf R}^2 \rightarrow {\bf R}$.
Suppose $f$ is continuous. Recall the notion of "level curves" from calculus (or more precisely we should be talking about level sets).
For a given $b \in {\bf R}$, it's the preimage of a point $$f^{-1}(b) = \{(x,y) \colon f(x,y)=b\}$$ under $f$.
Example: It's not always a curve.
If $f$ is constant, then $f^{-1}(c) = {\bf R}^2$ or $\emptyset$.
But "typically" we see this:
It does look like all of these are curves...
They are. But are they $1$-manifolds?
No, not all of them.
Not $1$-manifolds:
Let's classify the level sets of a twice differentiable function.
Dimension 1: When is a level set of $f \colon {\bf R} \rightarrow {\bf R}$ a $0$-manifold?
Let's classify all $b$'s into two categories:
1. the preimage $f^{-1}(b)$ contains only finitely many point(s), and
2. the others.
The former is a $0$-manifold.
We know that this is the first option ahead of time if we compute the derivative at $x=a$ with $f(a)=b$ and $$f'(a) \neq 0.$$
Indeed, if $f'(a) > 0$, then $f'(x) > 0$ for $x \in (a-\delta,a+\delta)$, so $f$ is increasing on $(a-\delta,a+\delta)$. Then there is inverse $f^{-1}$ defined on some $(b-\epsilon, b+\epsilon)$ ("local inverse").
Hence $f^{-1}(b)$ can be read both as the preimage of $b$ under $f$ and as the value of the local inverse $f^{-1}$ at $b$.
Simply put: $\{f^{-1}(b)\}$ is a single point, for the restriction of $f$.
Do the same for each point of $f^{-1}(b)$; on a finite interval there will be only a finite number of such points (this follows from compactness). So yes, it's a $0$-manifold.
Dimension $2$: Given $f \colon {\bf R}^2 \rightarrow {\bf R}$, is $f^{-1}(A)$ a $1$-manifold?
Let's look for a pattern similar to the dimension $1$ case.
At maximum or minimum points, the partial derivatives equal zero, or, in a coordinate-free way: $${\rm grad} f(a)=0,$$ or even simpler $$f'(a)=0.$$ Same for a saddle, though.
There are a few types of functions then...
For parametric curves: if $f \colon {\bf R} \rightarrow {\bf R}^N$, $A \in {\bf R}^N$, then $f^{-1}(A)$ is a $0$-manifold, if $f'(a) \neq 0$.
For vector fields: $f \colon {\bf R}^2 \rightarrow {\bf R}^2$ (input: points, output: vectors).
Is $f^{-1}(A) \subset {\bf R}^2$ a $0$-manifold? Turns out requiring $f'(a) \neq 0$ is not enough.
Example:
Clearly, $f'(a) \neq 0$ but $f^{-1}(A) \neq {\rm point}$!
Here $f^{-1}(A)$ is all these points, a whole line! Why? Because there is no change of $f$ in this direction.
In terms of the derivatives we have here:
• $\frac{\partial f}{\partial x} \neq 0$, and
• $\frac{\partial f}{\partial y} = 0.$
To see why this is bad, consider the case when this does work.
Suppose $e_1, e_2$ basis in ${\bf R}^2$. When is $f'(e_1), f'(e_2)$ still a basis in ${\bf R}^2$?
Is it enough to require: neither is $0$?
No. It's possible that $f'(e_1)=f'(e_2)$! or $f'(e_1)=\lambda f'(e_2)$, etc...
Example:
In the "explosion", $f^{-1}(A) = {\rm point}$.
Now the solution...
Implicit function theorem. Given $f \colon {\bf R}^N \rightarrow {\bf R}^k$, $N \geq k$. Then $f'(a)$ is a linear operator, and we say that $a$ is a regular point (not singular) if $${\rm rank} f'(a) = k.$$ (recall $f_a' \colon {\bf R}^n \rightarrow {\bf R}^k$).
• Then $f^{-1}(A)$ is an $(N-k)$-manifold, where $A=f(a)$ (without boundary).
• Moreover, if $f \in C^k$ then $f^{-1}(A)$ is a $C^k$-manifold.
Explanation:
Given ${\bf R}^2 \rightarrow {\bf R}$, $N=2$, $k=1$.
Then we are looking at the intersection of the graph of $f$ and the plane $z=A$, around the point $(a,A)$. Since $f$ is differentiable, it behaves as the tangent plane, essentially, at $a$. This plane is determined by $f'(a)$ and called also the best affine approximation.
So, it then suffices to consider the intersection of these planes...
In dimension $N+k$:
Condition ${\rm rank} f'(a)=2$ assures that the intersection of these planes is "transversal", so that it's a line. This line approximates the intersection of the graph with the plane. It turns out to be a $1$-manifold and so is its projection on the $xy$-plane.
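Example. Take $f \colon {\bf R}^3 \rightarrow {\bf R}$, $f(x,y,z)=x^2+y^2+z^2$. Here $N=3$, $k=1$, and $f'(a) = [2x \ 2y \ 2z]$ has rank $1$ at every point of $f^{-1}(1)$, since the gradient vanishes only at the origin, which is not in the preimage. So the theorem says that the unit sphere $f^{-1}(1)={\bf S}^2$ is a $(3-1)$-manifold, i.e., a surface; and since $f$ is $C^{\infty}$, it is a $C^{\infty}$-manifold.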
## Exercises
1. Consider these four sets:
• $A_r=\{(x,y):x^2-y^2=r\}$,
• $B_r=\{(x,y):x^2-y^2=r\}$,
• $C_r=\{(x,y):x^2-y^2=r\}$,
• $D_r=\{(x,y):x^2-y^2=r\}$,
parametrized by a real number $r$. For each answer: (a) Is it a surface? (b) If it is, what's its dimension? (c) Is it connected?
2. Suppose $f:R^n \rightarrow R^n$ is a smooth function satisfying $f\circ f=f$. (a) Prove that $f(R^n)$ is a smooth manifold. (b) What characteristic of $f$ determines the dimension of this manifold?
3. Describe the following manifolds:
• the set of straight lines through the origin in 3-space;
• the set of all great circles on the sphere;
• the set of number triples $(x:y:z)$ except $(0:0:0)$ modulo the equivalence relation $(x:y:z)\sim (kx:ky:kz)$ for all real $k$;
• the set of pairs $(p,q)$, where $q$ is a point on the plane and $p$ is a straight line through $q$;
• the configuration space of $n$ rigid bodies connected by rods consecutively with the ends fixed.
|
2022-05-26 16:35:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9417423605918884, "perplexity": 398.6454769096063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662619221.81/warc/CC-MAIN-20220526162749-20220526192749-00396.warc.gz"}
|
http://mathhelpforum.com/advanced-statistics/54695-porbabilities.html
|
1. ## Probabilities
I got a homework assignment with 12 questions and I figured out 7 of them. I have no clue how to do the rest of them. On the ones I knew how to do, I used the binomial distribution and P(A|B) = P(A)P(B|A)/P(B). Here are the problems:
I am not looking for answers. I am looking for a formula to use for them or something. I want to be able to do them.
1) Urn I contains two red chips and four white chips; urn II contains 3 red and one white. A chip is drawn at random from urn I and transferred to urn II. Then a chip is drawn from urn II.
a) what is the prob the chip is drawn from urn II is red?
b) Given that a red chip is drawn from urn II, what is the probability a white chip is drawn from urn I?
2) Urn I has 3 red chip, 2 black chips, and 5 white chips; urn II has 2 red, 4 black, and 3 white chips. One chip is drawn at random from each urn. What is the probability both chips are the same color?
3) A coin for which P(heads) = p is to be tossed twice. For what value of p will the probability of the event "same side comes up twice" be minimized?
4) Two fair dice are rolled. What is the prob the number appearing on one will be twice the number appearing on the other?
5) An urn contains two white chips and one red chip. One is drawn at random and replaced with an additional chip of the same color. The procedure is repeated 2 more times. Find the probabilities associated with the eight points in the sample space.
2. In #1(a). Calculate $P(R_2 ) = P(R_2 \cap R_1 ) + P(R_2 \cap W_1 )$.
In #1(b). Calculate $P(W_1 |R_2 ) = \dfrac{P(R_2 \cap W_1 )}{P(R_2 )}$.
In #3, minimize the function $p^2 + \left( {1 - p} \right)^2$.
Now you respond showing work that you have done on the others.
3. ## OKay
I may need a little more help with Problem 1. For 1 I got:
How do I solve for Probability of R2 union R1.
If they were independent I would just multiply the 2 together. I don't know how to find it since they are dependent. I just started this class and this is my first probability class.
4. Well of course they are not independent! What happens first affects what happens second. You are adding a red or white chip to urn II.
$P(R_1 \cap R_2 ) = P(R_2 |R_1 )P(R_1 ) = \left[ {\frac{4}{5}} \right]\left[ {\frac{2}{6}} \right]$
The probability of a red first times the probability of a red second.
Note, putting a red into urn II changes the probability of a red second.
5. ## i tried problem 2
For number 2, I multiplied the probability of red first by red second, then white first by white second, and finally black first by black second. I got .32222. Does that sound right?
For number 4, is it just 1/12? Because there are 36 total ways you can roll 2 dice and only 3 ways of getting 2 times the one? (1,2)(2,4)(3,6)
I figured out number 5. i just made a tree diagram. it was pretty easy once i thought about it.
I have no idea what to do for number 3
6. Originally Posted by PensFan10
I multiplied the prb of red first by red second. then white first by white second. finally black first by black second.
I got .32222. does that sound right?
DO NOT ask me to verify numbers! I don’t do that very well.
In fact, I always required students to ‘set up’ the calculation for the answer.
Once I see the set-up, I know if the answer is correct.
So, you need to show how you got that answer.
|
2016-10-24 22:38:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5690338015556335, "perplexity": 506.4565429193965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719784.62/warc/CC-MAIN-20161020183839-00070-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/929030/integral-with-rational-functions-of-powers-and-exponentials
|
Integral with rational functions of powers and exponentials
Any ideas how to solve: $$\int_0^\infty x^{n+\frac{1}{2}} (e^{a x }-1)^{-\frac{1}{2}} e^{i x t} dx$$ where $a$ and $t$ are real, positive constants; $n$ is a positive integer.
I think the problem comes from having rational functions in both powers and exponential functions.
I tried to get rid of the rational power, but it didn't really help $$\frac{\partial}{\partial q } \int_0^\infty x^{n} (e^{a x }-1)^{-\frac{1}{2}} e^{i x t+ q x^{1/2}} dx$$
Having a hint on how to solve this for $t=0$ would already be useful. Thanks!
• Out of curiousity, in what context did this integral appear? – Semiclassical Sep 12 '14 at 19:09
• It's a function of my own that I need for my research. But it is well defined, and should have a finite integral value. I just don't know how to find the analytical form. – Aurelia Sep 12 '14 at 19:20
• Ok. I ask because my physics brain sees the $n=0$ case as the Fourier transform of the density of states for a boson gas (with zero chemical potential). Probably just a coincidence. – Semiclassical Sep 12 '14 at 19:23
• This is where is comes from indeed. I normally use the DOS square, i.e. $\frac{x}{e^{ax}-1}$, in which case the integral is easy to solve – Aurelia Sep 12 '14 at 19:32
• Oof, I'd quite missed that square-root. That indeed makes life interesting. I'll see what I can do... – Semiclassical Sep 12 '14 at 19:36
It's actually enough to resolve the $n=0$ case, since $t$-derivatives of $$F(t)=\int_0^\infty \frac{e^{i x t}}{\sqrt{e^{x}-1}} x^{1/2}\,dx$$ will bring down more powers of $x$; I've chosen units so that $a=1$ for convenience. Even with these simplifications, I don't know how to compute a closed form and so will have to settle for an appropriate series expansion. We rewrite the fraction in the integrand and expand in powers of exponentials $$\frac{e^{i x t}}{\sqrt{e^{x}-1}}=\frac{e^{i x t-x/2}}{\sqrt{1-e^{-x}}}=\sum_{k=0}^\infty \binom{2k}{k}\left(\frac{e^{-x}}{4}\right)^k e^{i x t-x/2}=\sum_{k=0}^\infty \binom{2k}{k}4^{-k} e^{i x t-(k+1/2)x}.$$
Integrating term-by-term then gives
$$F(t)=\sum_{k=0}^\infty \binom{2k}{k}4^{-k}\cdot \frac{1}{2}\pi^{1/2} [(k+\tfrac{1}{2})-i t]^{-3/2}=\sum_{k=0}^\infty \binom{2k}{k}\frac{ 2^{-2k-1}\pi^{1/2}}{(k+\frac{1}{2}-i t)^{3/2}}$$
which is a rather formidable result. It's possible that this can be resummed as a hypergeometric series of some sort; I'll see what I can find.
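As an illustrative numeric sanity check (assuming mpmath; $a=1$, $n=0$, and an arbitrary $t$), a plain partial sum of this series can be compared against direct quadrature. Since the terms fall off only like $k^{-2}$, a few thousand terms give agreement to a few significant figures:

```python
from mpmath import mp, quad, sqrt, exp, pi, binomial, mpf

mp.dps = 25
t = mpf('1.3')   # arbitrary test value

# Direct integration of F(t) = int_0^inf x^(1/2) e^(i x t) / sqrt(e^x - 1) dx
direct = quad(lambda x: sqrt(x) * exp(1j * t * x) / sqrt(exp(x) - 1), [0, mp.inf])

# Partial sum of the series derived above
series = sum(binomial(2*k, k) * mpf(4)**(-k) * sqrt(pi) / 2
             * (k + mpf(1)/2 - 1j*t)**(-mpf(3)/2)
             for k in range(5000))

print(direct)
print(series)   # agrees with the quadrature to roughly 3-4 significant figures
```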
$x^{n+\frac{1}{2}} (1-e^{-a x })^{-\frac{1}{2}} e^{-\frac{a}{2}x} = x^{n+\frac{1}{2}}\sum_p\frac{(2p-1)!!}{(2p)!!}\,e^{-a(p+\frac{1}{2})x}$.
So $\int_0^\infty x^{n+\frac{1}{2}} (1-e^{-a x })^{-\frac{1}{2}} e^{-\frac{a}{2}x}\, dx=\Gamma(n+\tfrac{3}{2})\sum_p \frac{(2p-1)!!}{(2p)!!}\,\frac{1}{\left[a(p+\frac{1}{2})\right]^{n+\frac{3}{2}}}$
But I guess you already had this, and this is not really practical...
|
2019-12-11 09:22:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9308058023452759, "perplexity": 236.80073995341718}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530452.95/warc/CC-MAIN-20191211074417-20191211102417-00479.warc.gz"}
|
https://ask.sagemath.org/questions/8438/revisions/
|
# Revision history [back]
### Speeding up matrix multiplication?
I'm currently trying write code to compute with overconvergent modular symbols. In iterating a Hecke operator, the key (i.e. most time consuming) operation that is performed tons of times is simply taking the product of a large dense matrix with a vector, both with integral entries. The matrix is say 100 by 100 and the entries are on the order of $10^{100}$.
Is there any faster way to do this computation than use SAGE's intrinsic matrix times a vector command?
### Speeding up matrix multiplication?
I'm currently trying write code to compute with overconvergent modular symbols. In iterating a Hecke operator, the key (i.e. most time consuming) operation that is performed tons of times is simply taking the product of a large dense matrix with a vector, both with integral entries. The matrix is say 100 by 100 and the entries are on the order of $10^{100}$.
Is there any faster way to do this computation than using SAGE's intrinsic matrix times a vector command?
### Speeding up matrix multiplication?
I'm currently trying to write code to compute with overconvergent modular symbols. In iterating a Hecke operator, the key (i.e. most time consuming) operation that is performed tons of times is simply taking the product of a large dense matrix $M$ with a vector $v$, both with integral entries.
More precisely, let $p$ be a (relatively small) prime (think $p=11$) and $N$ some integer (think 100). I have an $N$ by $N$ matrix and am interested in quickly computing the product $M \cdot v$ modulo $p^N$.
I am simply using the intrinsic SAGE command of multiplying a matrix by a vector, and I was surprised to see that working with matrices over ${\bf Z}/p^n{\bf Z}$ was much (i.e. 10 times) slower than working with matrices over ${\bf Z}$.
My question: is there a faster way to do this computation than using SAGE's intrinsic matrix times a vector command over ${\bf Z}$?
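For reference, a minimal Sage-style sketch of the comparison being described (sizes and names are illustrative, not from the question):

```python
# Dense 100 x 100 integer matrix times a vector, over ZZ and over Z/p^N Z
p, N = 11, 100
R = Zmod(p**N)

M  = random_matrix(ZZ, 100, 100, x=0, y=p**N)
v  = vector(ZZ, [ZZ.random_element(0, p**N) for _ in range(100)])
MR = M.change_ring(R)
vR = v.change_ring(R)

timeit('M * v')     # product over ZZ (run in the Sage REPL)
timeit('MR * vR')   # product over Z/p^N Z -- reportedly much slower
w = (M * v).change_ring(R)   # workaround: multiply over ZZ, reduce mod p^N afterwards
```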
|
2020-02-25 18:17:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9218544960021973, "perplexity": 366.67921723493953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146127.10/warc/CC-MAIN-20200225172036-20200225202036-00419.warc.gz"}
|
https://ashishkumarletslearn.com/lecture-8-class-11-limits-and-derivatives/
|
The secret of success in life is for a man to be ready for his opportunity when it comes. – Benjamin Disraeli
All four cases of direct derivatives, and the derivation of the rules for direct derivatives of algebraic functions using the first principle:
$$[f(x) \pm g(x)]' = f'(x) \pm g'(x)$$
$$[f(x) \times g(x)]' = f(x) \times g'(x) + g(x) \times f'(x)$$
$$\left [ \frac{f(x)}{g(x)} \right ]' = \frac{g(x) f'(x) - f(x) g'(x)}{g^2(x)}$$
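For instance, applying the product rule above to an illustrative example with $f(x)=x^2$ and $g(x)=\sin x$:
$$\left[x^2 \sin x\right]' = x^2 \cos x + 2x \sin x$$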
https://astrophytheory.com/tag/differential-equations/
# A Narrow, Technical Problem in Partial Differential Equations
While I was in school, one of my professors set this problem to me and my classmates and challenged us to solve it over the next few days. I found the challenge intriguing, so I thought it was worth sharing. The problem was this:
Show that
$\displaystyle v(x,t) = \int_{-\infty}^{\infty} f(x-y,t)g(y)dy, (1.1)$
where $\displaystyle g(y)$ has finite support and $f(x,t)$ satisfies the PDE
$\displaystyle \frac{\partial v}{\partial t} = \kappa \frac{\partial^{2}v}{\partial x^{2}}, (1.2)$
also satisfies that PDE.
First off, what does finite support mean? Mathematically speaking, the support of a function is the subset of its domain on which the function is nonzero; saying that $g$ has finite (compact) support means that $g$ vanishes outside some bounded set. (Just as a quick note: the fully rigorous definitions require some mathematical analysis and measure theory, which I have not studied in detail, so take that explanation with a grain of salt.)
As for the solution, we can rewrite the given PDE as
$\displaystyle \frac{\partial v}{\partial t} - \kappa \frac{\partial^{2}v}{\partial x^{2}} = 0. (2)$
The PDE requires a first-order time derivative and a second-order spatial derivative.
$\displaystyle \therefore \frac{\partial v}{\partial t} = \frac{\partial}{\partial t}\int_{-\infty}^{\infty} f(x-y,t)g(y)dy, (3.1)$
and
$\displaystyle \frac{\partial^{2} v}{\partial x^{2}} = \frac{\partial^{2}}{\partial x^{2}}\int_{-\infty}^{\infty} f(x-y,t)g(y)dy. (3.2)$
Next, we substitute Eqs. (3.1) and (3.2) into Eq.(2), yielding
$\displaystyle \frac{\partial}{\partial t}\int_{-\infty}^{\infty} f(x-y,t)g(y)dy -\kappa \frac{\partial^{2}}{\partial x^{2}}\int_{-\infty}^{\infty} f(x-y,t)g(y)dy = 0. (4)$
Note that, because $g$ has finite support (and $f$ is assumed smooth), we may differentiate under the integral sign, i.e. interchange differentiation and integration (Leibniz's rule); we also use the fact that the integral of a sum or difference is the sum or difference of the integrals (facts typically proved in a course on real analysis). Taking advantage of these gives
$\displaystyle \int_{-\infty}^{\infty} \bigg\{\frac{\partial}{\partial t}f(x-y,t)-\kappa\frac{\partial^{2}}{\partial x^{2}}f(x-y,t)\bigg\}g(y)dy = 0. (5)$
Notice that the terms contained in the brackets equate to $\displaystyle 0$, precisely because $f$ satisfies the PDE by hypothesis. This means that
$\displaystyle \int_{-\infty}^{\infty} 0 \cdot g(y)dy = 0. (6)$
This implies that the function $\displaystyle v(x,t)$ does satisfy the given PDE (Eq.(2)).
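As a quick numerical sanity check of this result (my own addition, not part of the original problem), one can take $f$ to be the one-dimensional heat kernel, which satisfies the PDE, choose a compactly supported bump for $g$, build $v$ by quadrature, and confirm that the finite-difference residual of the PDE is negligible:

```python
import numpy as np

kappa = 0.5

def f(x, t):                      # heat kernel: satisfies f_t = kappa * f_xx
    return np.exp(-x**2 / (4*kappa*t)) / np.sqrt(4*np.pi*kappa*t)

def g(y):                         # bump supported on [-1, 1]
    return np.where(np.abs(y) <= 1.0, np.cos(np.pi * y / 2.0)**2, 0.0)

ygrid = np.linspace(-1.0, 1.0, 4001)
dy = ygrid[1] - ygrid[0]

def v(x, t):                      # v(x,t) = integral of f(x-y,t) g(y) dy, by quadrature
    return np.sum(f(x - ygrid, t) * g(ygrid)) * dy

x0, t0, dx, dt = 0.3, 1.0, 1e-3, 1e-3
v_t  = (v(x0, t0 + dt) - v(x0, t0 - dt)) / (2*dt)            # central difference in t
v_xx = (v(x0 + dx, t0) - 2*v(x0, t0) + v(x0 - dx, t0)) / dx**2  # central difference in x
print(v_t - kappa*v_xx, kappa*v_xx)   # residual of v_t - kappa*v_xx: tiny compared to the terms
```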
References:
Definition of Support in Mathematics: https://en.wikipedia.org/wiki/Support_(mathematics)
# Derivation of the Finite-Difference Equations
In my final semester, my course load included a graduate course that had two modules: astronomical instrumentation and numerical modeling. The latter focused on developing the equations of motion of geophysical fluid dynamics (See Research in Magnetohydrodynamics). Such equations are then converted into an algorithm based on a specific type of numerical method of solving the exact differential equation.
The purpose of this post is to derive the finite-difference equations. Specifically, I will be deriving the forward, backward, and centered first-order equations. We start with the Taylor expansions of $f$ about the point $x$, evaluated at $x \pm h$:
$\displaystyle f(x+h)=\sum_{n=0}^{\infty}\frac{h^{n}}{n!}\frac{d^{n}f}{dx^{n}}, (1)$
and
$\displaystyle f(x-h)=\sum_{n=0}^{\infty}(-1)^{n}\frac{h^{n}}{n!}\frac{d^{n}f}{dx^{n}}. (2)$
Let $f(x_{j})=f_{j}, f(x_{j}+h)=f_{j+1}, f(x_{j}-h)=f_{j-1}$. Therefore, if we consider the following differences…
$\displaystyle f_{j+1}-f_{j}=hf^{\prime}_{j}+f^{\prime \prime}_{j}\frac{h^{2}}{2!}+...+f^{n}_{j}\frac{h^{n}}{n!}, (3)$
and
$\displaystyle f_{j}-f_{j-1}=hf^{\prime}_{j}-\frac{h^{2}}{2!}f^{\prime \prime}_{j}+...\mp \frac{h^{n}}{n!}f^{n}_{j}, (4)$
and
$\displaystyle f_{j+1}-f_{j-1}=2hf^{\prime}_{j}+\frac{2h^{3}}{3!}f^{\prime\prime\prime}_{j}+..\mp \frac{h^{n}}{n!}f^{n}_{j}, (5)$
and if we keep only linear terms, we get
$\displaystyle f^{\prime}_{j}=\frac{f_{j+1}-f_{j}}{h}+\mathcal{O}(h), (6)$
$\displaystyle f^{\prime}_{j}=\frac{f_{j}-f_{j-1}}{h}+\mathcal{O}(h), (7)$
and
$\displaystyle f^{\prime}_{j}=\frac{f_{j+1}-f_{j-1}}{2h}+\mathcal{O}(h^{2}), (8)$
where the first is the forward difference, the second is the backward difference, and the last is the centered difference; the $\mathcal{O}(h)$ and $\mathcal{O}(h^{2})$ symbols stand for the truncation error, i.e. the neglected higher-order (quadratic, cubic, quartic, etc.) terms. Note that the centered difference is accurate to second order in $h$. One can use similar logic to derive the second-order finite-difference equations.
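The error orders above are easy to see numerically. The following sketch (my own, with $f(x)=\sin x$ as an arbitrary test function) halves $h$ repeatedly; the forward and backward errors shrink roughly linearly in $h$, while the centered error shrinks quadratically:

```python
import numpy as np

f, fprime = np.sin, np.cos          # test function and its exact derivative
x = 1.0

print(" h        forward      backward     centered")
for h in [0.1, 0.05, 0.025, 0.0125]:
    fwd = (f(x + h) - f(x)) / h
    bwd = (f(x) - f(x - h)) / h
    ctr = (f(x + h) - f(x - h)) / (2*h)
    print(f"{h:7.4f}  {abs(fwd - fprime(x)):.2e}  {abs(bwd - fprime(x)):.2e}  "
          f"{abs(ctr - fprime(x)):.2e}")
# Halving h roughly halves the forward/backward errors (O(h)) and
# quarters the centered error (O(h^2)).
```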
# Deriving the Bessel Function of the First Kind for Zeroth Order
NOTE: I verified the solution using the following text: Boyce, W. and DiPrima, R. Elementary Differential Equations.
In this post, I shall be deriving the Bessel function of the first kind for the zeroth order Bessel differential equation. Bessel’s equation is encountered when solving differential equations in cylindrical coordinates and is of the form
$\displaystyle x^{2}\frac{d^{2}y}{dx^{2}}+x\frac{dy}{dx}+(x^{2}-\nu^{2})y(x)=0, (1)$
where $\nu = 0$ describes the order zero of Bessel’s equation. I shall be making use of the assumption
$\displaystyle y(x)=\sum_{j=0}^{\infty}a_{j}x^{j+r}, (2)$
where upon taking the first and second order derivatives gives us
$\displaystyle \frac{dy}{dx}=\sum_{j=0}^{\infty}(j+r)a_{j}x^{j+r-1}, (3)$
and
$\displaystyle \frac{d^{2}y}{dx^{2}}=\sum_{j=0}^{\infty}(j+r)(j+r-1)a_{j}x^{j+r-2}. (4)$
Substituting into Eq.(1) (with $\nu=0$ for the zeroth order), we arrive at
$\displaystyle x^{2}\sum_{j=0}^{\infty}(j+r)(j+r-1)a_{j}x^{j+r-2}+x\sum_{j=0}^{\infty}(j+r)a_{j}x^{j+r-1}+x^{2}\sum_{j=0}^{\infty}a_{j}x^{j+r}=0. (5)$
Distribution and simplification of Eq.(5) yields
$\displaystyle \sum_{j=0}^{\infty}\bigg\{(j+r)(j+r-1)+(j+r)\bigg\}a_{j}x^{j+r}+\sum_{j=0}^{\infty}a_{j}x^{j+r+2}=0. (6)$
If we evaluate the terms in which $j=0$ and $j=1$, we get the following
$\displaystyle a_{0}\bigg\{r(r-1)+r\bigg\}x^{r}+a_{1}\bigg\{(1+r)r+(1+r)\bigg\}x^{r+1}+\sum_{j=2}^{\infty}\bigg\{[(j+r)(j+r-1)+(j+r)]a_{j}+a_{j-2}\bigg\}x^{j+r}=0, (7)$
where I have shifted the index of the second sum (replacing $j$ by $j-2$) so that both sums run over the same powers of $x$. Consider now the indicial equation (the coefficient of $a_{0}x^{r}$),
$\displaystyle r(r-1)+r=0, (8)$
which upon solving gives $r=r_{1}=r_{2}=0$. We may determine the recurrence relation from the terms under the summation in Eq.(7), from which we get
$\displaystyle a_{j}(r)=\frac{-a_{j-2}(r)}{[(j+r)(j+r-1)+(j+r)]}=\frac{-a_{j-2}(r)}{(j+r)^{2}}. (9)$
To determine $J_{0}(x)$ we let $r=0$ in which case the recurrence relation becomes
$\displaystyle a_{j}=\frac{-a_{j-2}}{j^{2}}, (10)$
where $j=2,4,6,...$. Thus we have
$\displaystyle J_{0}(x)=a_{0}x^{0}+a_{1}x+... (11)$
Setting the coefficient of $x^{r+1}$ in Eq.(7) to zero with $r=0$ forces $a_{1}=0$, and the recurrence then makes all the odd coefficients $a_{3},a_{5},a_{7},\dots$ vanish as well. Let $j=2k$, where $k\in \mathbb{Z}^{+}$, then the recurrence relation is again modified to
$\displaystyle a_{2k}=\frac{-a_{2k-2}}{(2k)^{2}}. (12)$
In general, for any value of $k$, one finds the expression
$\displaystyle a_{2k}=\frac{(-1)^{k}a_{0}}{2^{2k}(k!)^{2}}, \qquad \text{so the corresponding term in the series is } \frac{(-1)^{k}a_{0}x^{2k}}{2^{2k}(k!)^{2}}. (13)$
Thus our solution for the Bessel function of the first kind is
$\displaystyle J_{0}(x)=a_{0}\bigg\{1+\sum_{k=1}^{\infty}\frac{(-1)^{k}x^{2k}}{2^{2k}(k!)^{2}}\bigg\}. (14)$
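With $a_{0}=1$, Eq.(14) is the standard Bessel function $J_{0}$. A short check of the partial sums against scipy.special.j0 (my own verification, not part of the derivation):

```python
import numpy as np
from math import factorial
from scipy.special import j0

def J0_series(x, terms=40):
    """Partial sum of the series in Eq.(14) with a0 = 1."""
    return sum((-1)**k * x**(2*k) / (4.0**k * factorial(k)**2) for k in range(terms))

for x in np.linspace(0.0, 10.0, 5):
    print(f"x = {x:5.2f}   series = {J0_series(x):+.10f}   scipy j0 = {j0(x):+.10f}")
# The partial sums agree with scipy's J0 to many digits on this range.
```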
# Consequences and some Elementary Theorems of the Ideal One-Fluid Magnetohydrodynamic Equations
SOURCE FOR CONTENT:
Priest, E. Magnetohydrodynamics of the Sun, 2014. Cambridge University Press. Ch.2.;
Davidson, P.A., 2001. An Introduction to Magnetohydrodynamics. Ch.4.
We have seen how to derive the induction equation from Maxwell’s equations assuming no charge and assuming that the plasma velocity is non-relativistic. Thus, we have the induction equation as being
$\displaystyle \frac{\partial \textbf{B}}{\partial t}=\nabla \times (\textbf{v}\times \textbf{B})+\lambda \nabla^{2}\textbf{B}. (1)$
Many texts in MHD make the comparison of the induction equation to the vorticity equation
$\displaystyle \frac{\partial \Omega}{\partial t}= \nabla \times (\textbf{v} \times \Omega)+\nu \nabla^{2}\Omega, (2)$
where I have made use of the vector identity
$\nabla \times (\textbf{X}\times \textbf{Y})=\textbf{X}(\nabla \cdot \textbf{Y})-\textbf{Y}(\nabla \cdot \textbf{X})+(\textbf{Y}\cdot \nabla)\textbf{X}-(\textbf{X}\cdot \nabla)\textbf{Y}$.
Indeed, if we do compare the induction equation (Eq.(1)) to the vorticity equation (Eq.(2)) we easily see the resemblance between the two. The first term on the right hand side of Eq.(1)/ Eq.(2) determines the advection of magnetic field lines/vortex field lines; the second term on the right hand side deals with the diffusion of the magnetic field lines/vortex field lines.
From this, we can impose restrictions and thus look at the consequences of the induction equation (since it governs the evolution of the magnetic field). Furthermore, we see that we can modify the kinematic theorems of classical vortex dynamics to describe the properties of magnetic field lines. After discussing the direct consequences of the induction equation, I will discuss a few theorems of vortex dynamics and then introduce their MHD analogue.
Inherent to this is the magnetic Reynolds number. In geophysical fluid dynamics, the (ordinary) Reynolds number is the ratio of the inertial forces per volume to the viscous forces per volume, given by
$\displaystyle Re=\frac{ul}{\nu}, (3)$
where $u, l, \nu$ represent the typical fluid velocity, length scale and kinematic viscosity respectively. The magnetic Reynolds number is the analogous ratio between the advective and diffusive terms of the induction equation, $Re_{m}=ul/\lambda$, with $\lambda$ the magnetic diffusivity. There are two canonical regimes: (1) $Re_{m}<<1$, and (2) $Re_{m}>>1$. The former is sometimes called the diffusive limit and the latter is called either the ideal limit or the infinite conductivity limit (I prefer to call it the ideal limit, since the term infinite conductivity limit is not quite accurate).
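To make the ratio concrete, here is a toy calculation (the numerical values are purely illustrative and not tied to any particular system):

```python
def reynolds(u, l, nu):
    """Ordinary Reynolds number: inertial / viscous."""
    return u * l / nu

def magnetic_reynolds(u, l, lam):
    """Magnetic Reynolds number: advection / diffusion in the induction equation."""
    return u * l / lam

# Illustrative values only (SI units): u [m/s], l [m], lam = magnetic diffusivity [m^2/s]
u, l, lam = 1.0e2, 1.0e7, 1.0e3
print(f"Re_m ~ {magnetic_reynolds(u, l, lam):.1e}")   # ~1e6 >> 1: the ideal (frozen-in) regime
```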
Case I: $Re_{m}<<1$
Consider again the induction equation
$\displaystyle \frac{\partial \textbf{B}}{\partial t}=\nabla \times (\textbf{v}\times \textbf{B})+\lambda\nabla^{2}\textbf{B}.$
If we then assume that we are dealing with incompressible flows (i.e. $(\nabla \cdot \textbf{v})=0$) then we can use the aforementioned vector identity to write the induction equation as
$\displaystyle \frac{D\textbf{B}}{Dt}=(\textbf{B}\cdot \nabla)\textbf{v}+\lambda\nabla^{2}\textbf{B}. (4)$
In the regime for which $Re_{m}<<1$, the induction equation for incompressible flows (Eq.(4)) assumes the form
$\displaystyle \frac{\partial \textbf{B}}{\partial t}=\lambda \nabla^{2}\textbf{B}. (5)$
Compare this now to the following equation,
$\displaystyle \frac{\partial T}{\partial t}=\alpha \nabla^{2}T. (6)$
We see that the magnetic field lines are diffused through the plasma.
Case II: $Re_{m}>>1$
If we now consider the case for which the advective term dominates, we see that the induction equation takes the form
$\displaystyle \frac{\partial \textbf{B}}{\partial t}=\nabla \times (\textbf{v}\times \textbf{B}). (7)$
Mathematically, what this suggests is that the magnetic field lines become “frozen-in” the plasma, giving rise to Alfven’s theorem of flux freezing.
Many astrophysical systems require a high magnetic Reynolds number. Such systems include the solar magnetic field (heliospheric current sheet), planetary dynamos (Earth, Jupiter, and Saturn), and galactic magnetic fields.
Kelvin’s Theorem & Helmholtz’s Theorem:
Kelvin’s Theorem: Consider a vortex tube; since vorticity is divergence-free, $(\nabla \cdot \Omega)=0$, its flux through any closed surface vanishes,
$\displaystyle \oint \Omega \cdot d\textbf{S}=0, (8)$
and consider also the curve taken around a closed surface, (we call this curve a material curve $C_{m}(t)$) we may define the circulation as being
$\displaystyle \Gamma = \oint_{C_{m}(t)}\textbf{v}\cdot d\textbf{l}. (9)$
Thus, Kelvin’s theorem states that if the material curve is closed and it consists of identical fluid particles then the circulation, given by Eq.(9), is temporally invariant.
Helmholtz’s Theorem:
Part I: Suppose a fluid element lies on a vortex line at some initial time $t=t_{0}$; the theorem states that this fluid element will continue to lie on that vortex line indefinitely.
Part II: This part says that the flux of vorticity
$\displaystyle \Phi = \int \Omega \cdot d\textbf{S}, (10)$
remains constant for each cross-sectional area and is also invariant with respect to time.
Now the magnetic analogue of Helmholtz’s Theorems are found to be Alfven’s theorem of flux freezing and conservation of magnetic flux, magnetic field lines, and magnetic topology.
The first says that fluid elements which lie along magnetic field lines will continue to do so indefinitely; basically the same for the first Helmholtz theorem.
The second requires a more detailed argument to demonstrate why it works but it says that the magnetic flux through the plasma remains constant. The third says that magnetic field lines, hence the magnetic structure may be stretched and deformed in many ways, but the magnetic topology overall remains the same.
The justification for these last two require some proof-like arguments and I will leave that to another post.
In my project, I considered the case of high magnetic Reynolds number in order to examine the MHD processes present in region of metallic hydrogen present in Jupiter’s interior.
In the next post, I will “prove” the theorems I mention and discuss the project.
# Basic Equations of Ideal One-Fluid Magnetohydrodynamics: (Part V) The Energy Equations and Summary
SOURCE FOR CONTENT: Priest E., Magnetohydrodynamics of the Sun, 2014. Ch. 2. Cambridge University Press.
The final subset of equations deals with the energy equations. My undergraduate research did not take into account the thermodynamics of the conducting fluid, in order to keep the math relatively simple. However, to understand MHD one must take these considerations into account. Therefore, there are three essential pieces that make up the energy equations:
1. Heat Equation:
We may write this equation in terms of the entropy $S$ as
$\displaystyle \rho T \bigg\{\frac{\partial S}{\partial t}+(\textbf{v} \cdot \nabla)S\bigg\}=-\mathcal{L}, (1)$
where $\mathcal{L}$ represents the net effect of energy sinks and sources and is called the energy loss function. For simplicity, one typically writes the form of the heat equation to be
$\displaystyle \frac{\rho^{\gamma}}{\gamma -1}\frac{d}{dt}\bigg\{\frac{P}{\rho^{\gamma}}\bigg\}=-\mathcal{L}. (2)$
2. Conduction
For this equation one considers the explicit form of the energy loss function as being
$\displaystyle \mathcal{L}=\nabla \cdot \textbf{q}+L_{r}-\frac{J^{2}}{\sigma}-F_{H}, (3)$
where $\textbf{q}$ represents heat flux by particle conduction, $L_{r}$ is the net radiation, $J^{2}/\sigma$ is the Ohmic dissipation, and $F_{H}$ represents external heating sources, if any exist. The term $\textbf{q}$ is given by
$\textbf{q}=-\kappa \nabla T, (4)$
where $\kappa$ is the thermal conduction tensor.
The equation for radiation can be written as a variation of the diffusion equation for temperature
$\displaystyle \frac{DT}{Dt}=\kappa \nabla^{2}T (5)$
where $\kappa$ here denotes the thermal diffusivity given by
$\displaystyle \kappa = \frac{\kappa_{r}}{\rho c_{P}}. (6)$
We may write the final form of the energy equation as
$\displaystyle \frac{\rho^{\gamma}}{\gamma-1}\frac{d}{dt}\bigg\{\frac{P}{\rho^{\gamma}}\bigg\}=-\nabla \cdot \textbf{q}-L_{r}+J^{2}/\sigma+F_{H}, (7)$
where $\textbf{q}$ is given by Eq.(4).
As far as my undergraduate research is concerned, I include these equations for completeness.
So to summarize the series so far, I have derived most of the basic equations of ideal one-fluid model of magnetohydrodynamics. The equations are
$\displaystyle \frac{\partial \textbf{B}}{\partial t}=\nabla \times (\textbf{v}\times \textbf{B})+\lambda \nabla^{2}\textbf{B}, (A)$
$\displaystyle \frac{\partial \textbf{v}}{\partial t}+(\textbf{v} \cdot \nabla)\textbf{v}=-\frac{1}{\rho}\nabla\bigg\{P+\frac{B^{2}}{2\mu_{0}}\bigg\}+\frac{(\textbf{B} \cdot \nabla)\textbf{B}}{\mu_{0}\rho}, (B)$
$\displaystyle \frac{\partial \rho}{\partial t}+\nabla \cdot (\rho\textbf{v})=0, (C)$
$\displaystyle \frac{\partial \Omega}{\partial t}+(\textbf{v} \cdot \nabla)\Omega = (\Omega \cdot \nabla)\textbf{v}+\nu \nabla^{2}\Omega, (D)$
$\displaystyle P = \frac{k_{B}}{m}\rho T = \frac{\tilde{R}}{\tilde{\mu}}\rho T, (E)$ (Ideal Gas Law)
and
$\displaystyle \frac{\rho^{\gamma}}{\gamma-1}\frac{d}{dt}\bigg\{\frac{P}{\rho^{\gamma}}\bigg\}=-\nabla \cdot \textbf{q}-L_{r}+J^{2}/\sigma +F_{H}. (F)$
We also have the following ancillary equations
$\displaystyle (\nabla \cdot \textbf{B})=0, (G.1)$
since we haven’t found evidence of the existence of magnetic monopoles. We also have that
$\displaystyle \nabla \times \textbf{B}=\mu_{0}\textbf{J}, (G.2)$
where we are assuming that the plasma velocity $v << c$ (i.e. non-relativistic). Finally for incompressible flows we know that $(\nabla \cdot \textbf{v})=0$ corresponding to isopycnal flows.
In the next post, I will discuss some of the consequences of these equations and some elementary theorems involving conservation of magnetic flux and magnetic field line topology.
# Solution to the Hermite Differential Equation
One typically finds the Hermite differential equation in the context of the quantum harmonic oscillator potential and the consequential solution of the Schrödinger equation. However, I will consider this equation in its “raw” mathematical form viz.
$\displaystyle \frac{d^{2}y}{dx^{2}}-2x\frac{dy}{dx}+\lambda y(x) =0. (1)$
First we will consider the more general case, leaving $\lambda$ unspecified. The second case, $\lambda = 2n$ with $n\in \mathbb{Z}^{+}$ (where $\mathbb{Z}^{+}=\bigg\{x\in\mathbb{Z}|x > 0\bigg\}$), will be considered in a future post.
PART I:
Let us assume the solution has the form
$\displaystyle y(x)=\sum_{j=0}^{\infty}a_{j}x^{j}. (2)$
Now we take the necessary derivatives
$\displaystyle y^{\prime}(x)=\sum_{j=1}^{\infty}ja_{j}x^{j-1}, (3)$
$\displaystyle y^{\prime \prime}(x)=\sum_{j=2}^{\infty} j(j-1)a_{j}x^{j-2}, (4)$
where upon substitution yields the following
$\displaystyle \sum_{j=2}^{\infty}j(j-1)a_{j}x^{j-2}-\sum_{j=1}^{\infty}2ja_{j}x^{j}+\sum_{j=0}^{\infty}\lambda a_{j}x^{j}=0, (5)$
Introducing the dummy variable $m=j-2$ in the first sum and then relabeling, we arrive at
$\displaystyle \sum_{j=0}^{\infty}(j+2)(j+1)a_{j+2}x^{j}-\sum_{j=0}^{\infty}2ja_{j}x^{j}+\sum_{j=0}^{\infty}\lambda a_{j}x^{j}=0. (6)$
Bringing this under one summation sign…
$\displaystyle \sum_{j=0}^{\infty}[(j+2)(j+1)a_{j+2}-2ja_{j}+\lambda a_{j}]x^{j}=0. (7)$
Since this must hold identically for all $x$, each coefficient of $x^{j}$ must vanish; we therefore require that
$\displaystyle (j+2)(j+1)a_{j+2}=(2j - \lambda)a_{j}, (8)$
or
$\displaystyle a_{j+2}=\frac{(2j-\lambda)a_{j}}{(j+2)(j+1)}. (9)$
This is our recurrence relation. If we let $j=0,1,2,3,...$ we arrive at two linearly independent solutions (one even and one odd) in terms of the fundamental coefficients $a_{0}$ and $a_{1}$, which may be written as
$\displaystyle y_{even}(x)= a_{0}\bigg\{1+\sum_{k=1}^{\infty}\frac{(-\lambda)(4-\lambda)\cdots(4k-4-\lambda)}{(2k)!}x^{2k}\bigg\}, (10)$
and
$\displaystyle y_{odd}(x)=a_{1}\bigg\{x+\sum_{k=1}^{\infty}\frac{(2-\lambda)(6-\lambda)\cdots(4k-2-\lambda)}{(2k+1)!}x^{2k+1}\bigg\}. (11)$
Thus, our final solution is the following
$\displaystyle y(x)=y_{even}(x)+y_{odd}(x), (12.1)$
$\displaystyle y(x)=a_{0}\bigg\{1+\sum_{k=1}^{\infty}\frac{(-\lambda)(4-\lambda)\cdots(4k-4-\lambda)}{(2k)!}x^{2k}\bigg\}+a_{1}\bigg\{x+\sum_{k=1}^{\infty}\frac{(2-\lambda)(6-\lambda)\cdots(4k-2-\lambda)}{(2k+1)!}x^{2k+1}\bigg\}. (12.2)$
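For $\lambda = 2n$ one of the two series terminates after finitely many terms and reproduces the physicists' Hermite polynomial $H_{n}$ up to an overall constant. The sketch below (my own check, anticipating the follow-up post) builds the series directly from the recurrence (9) and compares it with scipy's $H_{n}$:

```python
import numpy as np
from scipy.special import eval_hermite

def hermite_series(lam, x, terms=60):
    """Series solution of y'' - 2x y' + lam*y = 0 built from the recurrence
    a_{j+2} = (2j - lam) a_j / ((j+2)(j+1)).  For lam = 2n with n even we take
    a0 = 1, a1 = 0 (the even solution); for n odd, a0 = 0, a1 = 1 (the odd one)."""
    n = lam // 2
    a = np.zeros(terms)
    if n % 2 == 0:
        a[0] = 1.0
    else:
        a[1] = 1.0
    for j in range(terms - 2):
        a[j + 2] = (2*j - lam) * a[j] / ((j + 2) * (j + 1))
    return sum(a[j] * x**j for j in range(terms))

n, x1, x2 = 4, 0.7, 1.3
ratio1 = hermite_series(2*n, x1) / eval_hermite(n, x1)
ratio2 = hermite_series(2*n, x2) / eval_hermite(n, x2)
print(ratio1, ratio2)   # the two ratios agree: the series is H_n up to a constant factor
```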
# Legendre Polynomials
Some time ago, I wrote a post discussing the solution to Legendre’s ODE. In that post, I discussed an alternative definition of the Legendre polynomials, for which I stated Rodrigues’ formula:
$\displaystyle P_{p}(x)=\frac{1}{2^{p}p!}\frac{d^{p}}{dx^{p}}\bigg\{(x^{2}-1)^{p}\bigg\}, (0.1)$
where
$\displaystyle P_{p}(x)=\sum_{n=0}^{\alpha}\frac{(-1)^{n}(2p-2n)!}{2^{p}{n!}(p-n)!(p-2n)!}x^{p-2n} (0.2)$,
and
$\displaystyle P_{p}(x)=\sum_{n=0}^{\beta}\frac{(-1)^{n}(2p-2n)!}{2^{p}{n!}(p-n)!(p-2n)!}x^{p-2n} (0.3)$
in which I have let $\displaystyle \alpha=p/2$ and $\displaystyle \beta=(p-1)/2$, corresponding to the expressions for even and odd $p$ respectively.
However, in this post I shall be using the approach of the generating function. This will be from a purely mathematical perspective, so I am not applying this to any particular topic of physics.
Consider a triangle with sides $\displaystyle X, Y, Z$ and angles $\displaystyle \theta, \phi, \lambda$. The law of cosines therefore maintains that
$\displaystyle Z^{2}=X^{2}+Y^{2}-2XY\cos{(\lambda)}. (1)$
We can factor out $\displaystyle X^{2}$ from the right-hand side of Eq.(1), take the square root and invert, yielding
$\displaystyle \frac{1}{Z}=\frac{1}{X}\bigg\{1+\bigg\{\frac{Y}{X}\bigg\}^{2}-2\bigg\{\frac{Y}{X}\bigg\}\cos{(\lambda)}\bigg\}^{-1/2}. (2)$
Now, we can expand this by means of the binomial expansion. Let $\displaystyle \kappa \equiv \bigg\{\frac{Y}{X}\bigg\}^{2}-2\bigg\{\frac{Y}{X}\bigg\}\cos{(\lambda)}$, therefore the binomial expansion is
$\displaystyle \frac{1}{(1+\kappa)^{1/2}}=1-\frac{1}{2}\kappa+\frac{3}{8}\kappa^{2}-\frac{5}{16}\kappa^{3}+... (3)$
Hence if we expand this in terms of the sides and angle(s) of the triangle and group by powers of $\displaystyle (Y/X)$ we get
$\displaystyle \frac{1}{Z}=\frac{1}{X}\bigg\{1+\bigg\{\frac{Y}{X}\bigg\}\cos{(\lambda)}+\bigg\{\frac{Y}{X}\bigg\}^{2}\frac{1}{2}(3\cos^{2}{(\lambda)}-1)+\bigg\{\frac{Y}{X}\bigg\}^{3}\frac{1}{2}(5\cos^{3}{(\lambda)}-3\cos{(\lambda)})+\cdots\bigg\}. (4)$
Notice the coefficients, these are precisely the expressions for the Legendre polynomials. Therefore, we see that
$\displaystyle \frac{1}{Z}=\frac{1}{X}\bigg\{\sum_{l=0}^{\infty}\bigg\{\frac{Y}{X}\bigg\}^{l}P_{l}(\cos{(\lambda)})\bigg\}, (5)$
or
$\displaystyle \frac{1}{Z}=\frac{1}{\sqrt[]{X^{2}+Y^{2}-2XY\cos{(\lambda)}}}=\sum_{l=0}^{\infty}\frac{Y^{l}}{X^{l+1}}P_{l}(\cos{(\lambda)}). (6)$
Thus we see that the generating function $\displaystyle 1/Z$ generates the Legendre polynomials. Two prominent uses of these polynomials includes gravity and its application to the theory of potentials of a spherical mass distributions, and the other is that of electrostatics. For example, suppose we have the potential equation
$\displaystyle V(\mathcal{R})=\frac{1}{4\pi\epsilon_{0}}\int_{V}\frac{\rho(\mathcal{R_{0}})}{|\mathcal{R}-\mathcal{R_{0}}|}d\tau. (7.1)$
We may use the result of the generating function to get the following result for the electric potential due to an arbitrary charge distribution
$\displaystyle V(\mathcal{R})=\frac{1}{4\pi\epsilon_{0}}\sum_{l=0}^{\infty}\frac{1}{\mathcal{R}^{l+1}}\int \mathcal{R_{0}}^{l}P_{l}(\cos{(\lambda)})\rho(\mathcal{R_{0}})d\tau. (7.2)$
(For more details, see Chapter 3 of Griffith’s text: Introduction to Electrodynamics.)
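The generating-function identity is easy to verify numerically. Here is a small check (my own, using scipy's Legendre polynomials) that $(1-2\mu t+t^{2})^{-1/2}=\sum_{l}t^{l}P_{l}(\mu)$ for $|t|<1$:

```python
import numpy as np
from scipy.special import eval_legendre

mu, t = np.cos(0.8), 0.4             # mu = cos(lambda), t = Y/X < 1 (illustrative values)
lhs = 1.0 / np.sqrt(1.0 - 2.0*mu*t + t**2)
rhs = sum(t**l * eval_legendre(l, mu) for l in range(60))
print(lhs, rhs)                      # the partial sum matches the closed form closely
```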
# Monte Carlo Simulations of Radiative Transfer: Basics of Radiative Transfer Theory (Part IIa)
SOURCES FOR CONTENT:
1. Chandrasekhar, S., 1960. “Radiative Transfer”. Dover. 1.
2. Choudhuri, A.R., 2010. “Astrophysics for Physicists”. Cambridge University Press. 2.
3. Boyce, W.E., and DiPrima, R.C., 2005. “Elementary Differential Equations”. John Wiley & Sons. 2.1.
Recall from last time the radiative transfer equation
$\displaystyle \frac{1}{\epsilon \rho}\frac{dI_{\nu}}{ds}= M_{\nu}-N_{\nu}I_{\nu}, (1)$
where $M_{\nu}$ and $N_{\nu}$ are the emission and absorption coefficients, respectively. Defining the optical depth through
$\displaystyle d\tau_{\nu}=N_{\nu}\epsilon_{\nu}\rho\, ds, (2)$
and the source function as $U_{\nu}(\tau_{\nu})=M_{\nu}/N_{\nu}$, Eq. (1) becomes
$\displaystyle \frac{dI_{\nu}(\tau_{\nu})}{d\tau_{\nu}}+I_{\nu}(\tau_{\nu})= U_{\nu}(\tau_{\nu}). (3)$
We may solve this equation by using the method of integrating factors, by which we multiply Eq.(3) by some unknown function (the integrating factor) $\mu(\tau_{\nu})$ yielding
$\displaystyle \mu(\tau_{\nu})\frac{dI_{\nu}(\tau_{\nu})}{d\tau_{\nu}}+\mu(\tau_{\nu})I_{\nu}(\tau_{\nu})=\mu(\tau_{\nu})U_{\nu}(\tau_{\nu}). (4)$
Upon examining Eq.(4), we see that the left hand side is the product rule. It follows that
$\displaystyle \frac{d}{d\tau_{\nu}}\bigg\{\mu(\tau_{\nu})I_{\nu}(\tau_{\nu})\bigg\}=\mu({\tau_{\nu}})U_{\nu}(\tau_{\nu}). (5)$
This only works if $d(\mu(\tau_{\nu}))/d\tau_{\nu}=\mu(\tau_{\nu})$. To show that this is valid, consider the equation for $\mu(\tau_{\nu})$ only:
$\displaystyle \frac{d\mu(\tau_{\nu})}{d\tau_{\nu}}=\mu(\tau_{\nu}). (6.1)$
This is a separable ordinary differential equation so we can rearrange and integrate to get
$\displaystyle \int \frac{d\mu(\tau_{\nu})}{\mu(\tau_{\nu})}=\int d\tau_{\nu}\implies \ln(\mu(\tau_{\nu}))= \tau_{\nu}+C, (6.2)$
where $C$ is some constant of integration. Let us assume that the constant of integration is $0$, and let us also take the exponential of (6.2). This gives us
$\displaystyle \mu(\tau_{\nu})=\exp{(\tau_{\nu})}. (6.3)$
This is our integrating factor. Just as a check, let us take the derivative of our integrating factor with respect to $d\tau_{\nu}$,
$\displaystyle \frac{d}{d\tau_{\nu}}\exp{(\tau_{\nu})}=\exp{(\tau_{\nu})},$
Thus this requirement is satisfied. If we now return to Eq.(4) and substitute in our integrating factor we get
$\displaystyle \frac{d}{d\tau_{\nu}}\bigg\{\exp{(\tau_{\nu})}I_{\nu}(\tau_{\nu})\bigg\}=\exp{(\tau_{\nu})}U_{\nu}(\tau_{\nu}). (7)$
Both sides can now be integrated directly. However, we are integrating from an optical depth $0$ to some optical depth $\tau_{\nu}$, hence we have that
$\displaystyle \int_{0}^{\tau_{\nu}}d\bigg\{\exp{(\tau_{\nu})}I_{\nu}(\tau_{\nu})\bigg\}=\int_{0}^{\tau_{\nu}}\bigg\{\exp{(\bar{\tau}_{\nu})}U_{\nu}(\bar{\tau}_{\nu})\bigg\}d\bar{\tau}_{\nu}, (8)$
We find that
$\displaystyle \exp{(\tau_{\nu})}I_{\nu}(\tau_{\nu})-I_{\nu}(0)=\int_{0}^{\tau_{\nu}}\bigg\{\exp{(\bar{\tau}_{\nu})}U_{\nu}(\bar{\tau}_{\nu})\bigg\}d\bar{\tau}_{\nu} (9),$
where if we add $I_{\nu}(0)$ and divide by $\exp{(\tau_{\nu})}$ we arrive at the general solution of the radiative transfer equation
$\displaystyle I_{\nu}(\tau_{\nu}) = I_{\nu}(0)\exp{(-\tau_{\nu})}+\int_{0}^{\tau_{\nu}}\exp{(\bar{\tau}_{\nu}-\tau_{\nu})}U_{\nu}(\bar{\tau}_{\nu})d\bar{\tau}_{\nu}. (10)$
This is the mathematically formal solution to the radiative transfer equation. While mathematically sound, much of the more interesting physical phenomena require more complicated equations and therefore more sophisticated methods of solving them (an example would be the use of quadrature formulae or $n$-th approximation for isotropic scattering).
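As a consistency check of Eq.(10) (my own addition), one can integrate Eq.(3) numerically for a simple source function and compare against the formal solution. For a constant $U_{\nu}$, Eq.(10) reduces to $I(\tau)=I(0)e^{-\tau}+U(1-e^{-\tau})$:

```python
import numpy as np
from scipy.integrate import solve_ivp

I0, U = 2.0, 0.5                                  # illustrative values

def rhs(tau, I):                                  # Eq.(3): dI/dtau = U - I
    return U - I

taus = np.linspace(0.0, 5.0, 11)
num = solve_ivp(rhs, (0.0, 5.0), [I0], t_eval=taus, rtol=1e-10, atol=1e-12).y[0]
exact = I0*np.exp(-taus) + U*(1.0 - np.exp(-taus))   # Eq.(10) with constant source function
print(np.max(np.abs(num - exact)))                   # the two agree to high accuracy
```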
Recall also that in general we can write the phase function $p(\theta,\phi; \theta^{\prime},\phi^{\prime})$ via the following
$\displaystyle p(\theta,\phi;\theta^{\prime},\phi^{\prime})=\sum_{l=0}^{\infty}\gamma_{l}P_{l}(\cos{\Theta}). (11)$
Let us consider the case for which $l=0$ in the sum given by (11). This then would mean that the phase function is constant
$p(\theta,\phi;\theta^{\prime},\phi^{\prime})=\gamma_{0}=const. (12)$
Such a phase function is consistent with isotropic scattering. The term isotropic means, in this context, that radiation scattered is the same in all directions. Such a case yields a source function of the form
$\displaystyle U_{\nu}(\tau_{\nu})=\frac{1}{4\pi}\int_{0}^{\pi}\int_{0}^{2\pi}\gamma_{0}I_{\nu}(\tau_{\nu})\sin{\theta^{\prime}}d\theta^{\prime}d\phi^{\prime}, (13)$
where upon use in the radiative transfer equation we get the integro-differential equation
$\displaystyle \frac{dI_{\nu}(\tau_{\nu})}{d\tau_{\nu}}+I_{\nu}(\tau_{\nu})= \frac{1}{4\pi}\int_{0}^{\pi}\int_{0}^{2\pi}\gamma_{0}I_{\nu}(\tau_{\nu})\sin{\theta^{\prime}}d\theta^{\prime}d\phi^{\prime}. (14)$
Solution of this equation is beyond the scope of the project. In the next post I will discuss Rayleigh scattering and the corresponding phase function.
# Monte Carlo Simulations of Radiative Transfer: Basics of Radiative Transfer Theory (Part I)
SOURCE FOR CONTENT: Chandrasekhar, S., 1960. Radiative Transfer. 1.
In this post, I will be discussing the basics of radiative transfer theory necessary to understand the methods used in this project. I will start with some definitions, then I will look at the radiative transfer equation and consider two simple cases of scattering.
The first definition we require is the specific intensity $I_{\nu}$: the amount of energy $dE_{\nu}$ in a frequency interval $d\nu$ passing through an area element $d\Sigma$ into a solid angle $d\Omega$ in a time $dt$. We may write this mathematically as
$dE_{\nu}=I_{\nu}\cos{\theta}d\nu d\Sigma d\Omega dt. (1)$
We must also consider the net flux given by
$\displaystyle d\nu d\Sigma dt \int I_{\nu}\cos{\theta}d\Omega, (2)$
where if we integrate over all solid angles $\Omega$ we get
$\pi F_{\nu}=\displaystyle \int I_{\nu}\cos{\theta}d\Omega. (3)$
Let $d\Lambda$ be an element of a surface $\Lambda$ in a volume $V$ through which radiation passes. Further let $\Theta$ and $\theta$ denote the angles between the line joining the two elements and the normals to $d\Lambda$ and $d\Sigma$ respectively. The energy flowing from $d\Sigma$ across $d\Lambda$ is then given by the following:
$I_{\nu}\cos{\theta}d\Sigma d\Omega^{\prime}d\nu = I_{\nu}d\nu \frac{\cos{\Theta}\cos{\theta}d\Sigma d\Lambda}{r^{2}} (4),$
where $d\Omega^{\prime}=d\Lambda \cos{\Theta}/r^{2}$ is the solid angle subtended by the surface element $d\Lambda$ at a point $P$ and volume element $dV=ld\Sigma \cos{\theta}$ is the volume that is intercepted in volume $V$. If we take this further, and integrate over all $V$ and $\Omega$ we arrive at
$\displaystyle \frac{d\nu}{c}\int dV \int I_{\nu} d\Omega=\frac{V}{c}d\nu \int I_{\nu}d\Omega, (5)$
where if the radiation travels some distance $L$ in the volume, then we must multiply Eq.(5) by $l/c$, where $c$ is the speed of light.
We now define the integrated energy density as being
$U_{\nu}=\displaystyle \frac{1}{c}\int I_{\nu}d\Omega, (6.1)$
while the average intensity is
$J_{\nu}=\displaystyle \frac{1}{4\pi}\int I_{\nu}d\Omega, (6.2)$
and the relation between these two equations is
$U_{\nu}=\frac{4\pi}{c}J_{\nu}. (6.3)$
I will now introduce the radiative transfer equation. This equation is a balance between the amount of radiation absorbed and the radiation that is emitted. The equation is,
$\frac{dI_{\nu}}{ds}=-\epsilon \rho I_{\nu}+h_{\nu}\rho, (7)$
where if we divide by $-\epsilon_{\nu} \rho$ we get
$-\frac{1}{\epsilon_{\nu}\rho}\frac{dI_{\nu}}{ds}=I_{\nu}-U_{\nu}(\theta, \phi), (8)$
where $U(\theta,\phi)$ represents the source function given by
$U_{\nu}(\theta,\phi)=\displaystyle \frac{1}{4\pi}\int_{0}^{\pi}\int_{0}^{2\pi}p(\theta,\phi;\theta^{\prime},\phi^{\prime})I_{\nu}\sin{\theta^{\prime}}d\theta^{\prime}d\phi^{\prime}. (9)$
The source function is typically the ratio of the emission coefficient to the absorption coefficient. One of the terms in the source function is the phase function, which varies according to the specific scattering geometry. In its most general form, we can represent the phase function as an expansion in Legendre polynomials:
$p(\theta, \phi; \theta^{\prime},\phi^{\prime})=\displaystyle \sum_{j=0}^{\infty}\gamma_{j}P_{j}(\mu), (10)$
where we have let $\mu = \cos{\theta}$ (in keeping with our notation in previous posts).
In Part II, we will discuss a few simple cases of scattering and their corresponding phase functions, as well as obtaining the formal solution of the radiative transfer equation. (DISCLAIMER: While this solution will be consistent in a mathematical sense, it is not exactly an insightful solution since much of the more interesting and complex cases involve the solution of either integro-differential equations or pure integral equations (a possible new topic).)
# Simple Harmonic Oscillators (SHOs) (Part I)
We all experience or see this happening in our everyday experience: objects moving back and forth. In physics, these objects are called simple harmonic oscillators. While I was taking my undergraduate physics course, one of my favorite topics was SHOs because of the way the mathematics and physics work in tandem to explain something we see everyday. The purpose of this post is to engage followers to get them to think about this phenomenon in a more critical manner.
Every object has a position at which it tends to remain at rest, and if it is subjected to some perturbation, the object will oscillate about this equilibrium point until it resumes its state of rest. If we pull or push an object away from equilibrium, the restoring force obeys Hooke’s law of elasticity, that is, $F_{A}=-k\textbf{r}$. If we consider other forces we also find that there exists a force balance between the restoring force, a resistance force, and a forcing function, which we assume to have the form
$F=F_{forcing}+F_{A}-F_{R}= F_{forcing}-k\textbf{r}-\beta \dot{\textbf{r}}; (1)$
note that we are assuming that the resistance force is proportional to the speed of an object. Suppose further that we are inducing these oscillations in a periodic manner by given by
$F_{forcing}=F_{0}\cos{\omega t}. (2)$
Now, to be more precise, we really should define the position vector. So, $\textbf{r}=x\hat{i}+y\hat{j}+z\hat{k}$. Therefore, we actually have a system of three second order linear non-homogeneous ordinary differential equations in three variables:
$m\ddot{ x}+\beta \dot{x}+kx=F_{0}\cos{\omega t}, (3.1)$
$m\ddot{y}+\beta \dot{y}+ky=F_{0}\cos{\omega t}, (3.2)$
$m\ddot{z}+\beta \dot{z}+kz=F_{0}\cos{\omega t}. (3.3)$
(QUICK NOTE: In the above equations, I am using the Newtonian notation for derivatives, only for convenience.) I will just make some simplifications. I will divide both sides by the mass, and I will define the following parameters: $\gamma \equiv \beta/m$, $\omega_{0} \equiv k/m$, and $\alpha \equiv F_{0}/m$. Furthermore, I am only going to consider the $y$ component of this system. Thus, the equation that we seek to solve is
$\ddot{y}+\gamma \dot{y}+\omega_{0}y=\alpha\cos{\omega t}. (4)$
Now, in order to solve this non-homogeneous equation, we use the method of undetermined coefficients. By this we mean to say that the general solution to the non-homogeneous equation is of the form
$y = Ay_{1}(t)+By_{2}(t)+Y(t), (5)$
where $Y(t)$ is the particular solution to the non-homogeneous equation and the other two terms are the fundamental solutions of the homogeneous equation:
$\ddot{y}_{h}+\gamma \dot{y}_{h}+\omega_{0} y_{h} = 0. (6)$
Let $y_{h}(t)=D\exp{(\lambda t)}$. Taking the first and second time derivatives, we get $\dot{y}_{h}(t)=\lambda D\exp{(\lambda t)}$ and $\ddot{y}_{h}(t)=\lambda^{2}D\exp{(\lambda t)}$. Therefore, Eq. (6) becomes, after factoring out the exponential term,
$D\exp{(\lambda t)}[\lambda^{2}+\gamma \lambda +\omega_{0}]=0. (7)$
Since $D\exp{(\lambda t)}\neq 0$, it follows that
$\lambda^{2}+\gamma \lambda +\omega_{0}=0. (8)$
This is just a quadratic equation in $\lambda$, whose solution is obtained by the quadratic formula:
$\lambda =\frac{-\gamma \pm \sqrt[]{\gamma^{2}-4\omega_{0}}}{2}. (9)$
Part II of this post will discuss the three distinct cases in which the discriminant $\gamma^{2}-4\omega_{0}$ is greater than, equal to, or less than 0, and the consequent solutions. I will also obtain the solution to the non-homogeneous equation in that post as well.
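As a small preview of those three cases (the parameter values below are arbitrary), one can simply compute the roots of Eq.(8) for a discriminant of each sign:

```python
import numpy as np

def characteristic_roots(gamma, omega0):
    """Roots of lambda^2 + gamma*lambda + omega0 = 0 (Eq.(8)), using the
    notation of this post (omega0 = k/m)."""
    return np.roots([1.0, gamma, omega0])

for gamma, omega0, label in [(4.0, 1.0, "overdamped   (gamma^2 > 4*omega0)"),
                             (2.0, 1.0, "critical     (gamma^2 = 4*omega0)"),
                             (0.5, 1.0, "underdamped  (gamma^2 < 4*omega0)")]:
    print(label, "->", characteristic_roots(gamma, omega0))
# Overdamped: two distinct negative real roots; critical: a repeated real root;
# underdamped: a complex-conjugate pair whose imaginary part gives the oscillation frequency.
```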
http://dons.directory/pages/latest/
# Latest Posts
# The Five Pillars of Islam
## 1. Shahada — Profession of Faith
The belief and profession that “There is no god but God, and Muhammad is the Messenger of God”. One becomes a Muslim by reciting this phrase with conviction.
## 2. Salat — Prayer
Pray facing Mecca five times a day: at dawn, noon, mid-afternoon, sunset, and after dark. Prayer includes a recitation of the opening chapter (sura) of the Qur’an. Men gather in the mosque for the noonday prayer on Friday.
## 3. Zakat — Alms
The faithful must donate a fixed portion of their income for the benefit of the community, particularly those in need. This must be done both as a religious duty and to secure the blessings associated with charity.
## 4. Sawm — Fasting
During the daylight hours of Ramadan, the ninth month of the Islamic calendar, all healthy adult Muslims must abstain from food and drink. Through this temporary deprivation they renew awareness of and gratitude for God’s bounty. The Qur’an was first revealed during this month.
## 5. Hajj — Pilgrimage
Health and finances permitting, the faithful must visit the holy city of Mecca at least once. The Ka’ba, a cubical structure covered in black embroidered hangings, is at the center of the Haram Mosque in Mecca, present-day Saudi Arabia. The Ka’ba is the house which Ibrahim built for God, and the faithful face in its direction (Qibla) when they pray. Since the time of the Prophet Muhammad, the faithful have gathered around the Ka’ba on the eighth and twelfth days of the final month of the Islamic calendar.
# Doing Easy
## An Essay By William S. Burroughs
DE is a way of doing. It is a way of doing everything you do. DE simply means doing whatever you do in the easiest most relaxed way you can manage which is also the quickest and most efficient way, as you will find as you advance in DE.
You can start right now tidying up your flat, moving furniture or books, washing dishes, making tea, sorting papers. Consider the weight of objects exactly how much force is needed to get the object from here to there. Consider its shape and texture and function where exactly does it belong. Use just the amount of force necessary to get the object from here to there. Don’t fumble, jerk, grab an object. Drop cool possessive fingers onto it like a gentle old cop making a soft arrest. Guide the dustpan lightly to the floor as if you were landing a plane. When you touch an object weigh it with your fingers, feel your fingers on the object, the skin, blood, muscles, tendons of you hand and arm. Consider these extensions of yourself as precision instruments to perform every movement smoothly and well.
Handle objects with consideration and they will show you all their little tricks. Don’t tug or pull at a zipper. Guide the little metal teeth smoothly along feeling the sinuous ripples of cloth and flexible metal. Replacing the cap on a tube of toothpaste… (and this should always be done at once. Few things are worse than and uncapped tube, maladroitly squeezed, twisting up out of the bathroom glass drooling paste, unless it be a tube with the cap barbarously forced on all askew against the threads). Replacing the cap let the very tips of your fingers protrude beyond the cap contacting the end of the tube guiding the cap into place. Using your fingertips as a landing gear will enable you to drop any light object silently and surely into its place.
Remember every object has its place. If you dont find that place and put that thing there it will jump out at you and trip you or rap you painfully across the knuckles. It will nudge you and clutch at you and get in your way. Often such objects belong in the wastebasket but often its just that they are out of place. Learn to place an object firmly and quietly in its place and do not let your fingers move that object as they leave it there. When you put down a cup separate your fingers cleanly from the cup. Do not let them catch in the handle and if they do repeat the movement until fingers separate clean. If you dont catch that nervous finger that won’t let go of that handle you may twitch hot tea across the Duchess.
Never let a poorly executed sequence pass. If you throw a match at a wastebasket and miss, get right up and put that match in the wastebasket. If you have time repeat the cast that failed. There is a always a reason for missing an easy toss. Repeat the toss and you will find it. If you rap your knuckles against a window jamb or door. If you brush your leg against a desk or a bed, if you catch your feet in the curled-up corner of a rug, or strike a toe against a desk or chair go back and repeat the sequence. You will be surprised to find how far off course you were to hit that window jamb, that door, that chair. Get back on course and do it again. How can you pilot a spacecraft if you can’t find your way around your own apartment? It’s just like retaking a movie shot until you get it right. And you will begin to feel yourself in a film moving with ease and speed. But don’t try for speed at first. Try for relaxed smoothness taking as much time as you need to perform an action. If you drop an object, break and object, spill anything, knock painfully against anything, galvanically clutch an object, pay particular attention to the retake. You may find out why and forestall a repeat performance. If the object is broken sweep up the pieces and remove them from the room at once. If the object is intact or you have a duplicate object repeat sequence. You may experience a strange feeling as if the objects are alive and hostile trying to twist out of your fingers, slam noisily down on a table, jump out at you and stub your toe or trip you. Repeat sequence until objects are brought to order.
Here is student at work. At two feet he tosses red plastic milk cap at the orange garbage bucket. The cap sails over the bucket like a flying saucer. He tries again. Same result. He examines the cap and finds that one edge is crushed down. He pries the edge back into place. Now the cap will drop obediently into the bucket. Every object you touch is alive with your life and your will.
The student tosses cigarette box at wastebasket and it bounces out from the cardboard cover from a metal coat hanger, which is resting diagonally across the wastebasket and never should be there at all. If an ashtray is emptied into that wastebasket the cardboard triangle will split the ashes and the butts scattering both on the floor. Student takes a box of matches from his coat pocket preparatory to lighting cigarette from new package on table. With the matches in one hand he makes another toss and misses of course his fingers are in future time lighting cigarette. He retrieves package puts the matches down and now stopping slightly legs bent hop skip over the washstand and into the wastebasket, miracle of the Zen master who hits a target in the dark these little miracles will occur more an more often as you advance in DE… the ball of paper tossed over the shoulder into the wastebasket, the blanket flipped and settled just into place that seems to fold itself under the brown satin fingers of an old Persian merchant. Objects move into place at your lightest touch. You slip into it like a film moving with such ease you hardly know you are doing it. You’d come into the kitchen expecting to find a sink full of dirty dishes and instead every dish is put away and the kitchen shines. The Little People have been there and done your work fingers light and cold as spring wind through the rooms.
The student considers heavy objects. Tape recorder on the desk taking up too much space and he doesnt use it very often. So put it under the washstand. Weigh it with the hands. First attempt the cord and socket leaps across the desk like a frightened snake. He bumps his back on the washstand putting the recorder under it. Try again lift with legs not back. He hits the lamp. He looks at that lamp. It is a horrible disjointed object the joints tightened with cellophane tape disconnected when not in use the cord leaps out and wraps around his feet sometimes jerking the lamp off the desk. Remove that lamp from the room and buy a new one. Now try again lifting shifting pivoting dropping on the legs just so and right under the washstand.
You will discover clumsy things you’ve been doing for years until you think that is just the way things are. Here is an American student who for years has clawed at the red plastic cap on English milk bottle you see American caps have a little tab and he has been looking for that old tab all these years. Then one day in a friend’s kitchen he saw a cap depressed at the center. Next morning he tries it and the miracle occurs. Just the right pressure in the center and he lifts the cap off with deft fingers and replaces it. He does this several times in wonder and in awe and ell he might him a college professor and very technical too planarian worms learn quicker than that for years he has been putting on his socks after he puts on his pants so he has to roll up pants and pants and socks get clawed in together so why not put on the socks before the pants? He is learning the simple miracles … The Miracle of the Washstand Glass… we all know the glass there on a rusty razor blade streaked with pink tooth paste a decapitated tube writhing up out of it… quick fingers go to work and Glass sparkles like the Holy Grail in the morning sunlight. Now he does the wallet drill. For years he has carried his money in the left side pocket of his pants reaching down to fish out eh naked money… bumping his fingers against the sharp edges of the notes. Often the notes were in two stacks and puling out the one could drop the other on the floor. The left side pocket of the pants is most difficult to pick but worse things can happen than a picked pocket one can dine out on that for a season. Two manicured fingers sliding into the well-cut suit wafted into the waiting hand and engraved message from the Queen. Surely this is the easy way. Besides no student of DE would have his pocket picked applying DE in the street, picking his route through slower walkers, dont get stuck behind that baby carriage, careful when you round a corner dont bump into somebody coming round the other way. He takes the wallet out in front a mirror, removes notes, counts notes, replaces notes. As rapidly as he can with no fumbling, catching note edges on wallet, or other errors. That is a basic principle which must be repeated. When speed is crucial to the operation you must find your speed the fastest you can perform the operation with out error. Don’t try for speed at first it will come his fingers will rustle through the wallet with a touch light as dead leaves and crinkle discreetly the note that will bribe a South American customs official into overlooking a shrunken down head. The customs agent smiles a collector’s smile the smile of a connoisseur. Such a crinkle he has not heard since a French jewel thief with crudely forged papers made a crinkling sound over them with his hands and there is the note neatly folded into a false passport.
Now some one will say… But if I have to think about every move I make …You only have to think and break down movement into a series of still pictures to be studied and corrected because you have not found the easy way. Once you find the easy way you dont have to think about it will almost do itself.
Operations performed on your person… brushing teeth, washing, etc. can lead you to correct a defect before it develops. Here is student with a light case of bleeding gums. His dentist has instructed him to massage gums by placing little splinters of wood called Inter Dens between the teeth and massaging gum with seesaw motion. He snatches at Inter Dens, opens his mouth in a stiff grimace and jabs at a gum with a shaking hand. Now he remembers his DE. Start over. Take out eh little splinters of wood like small chopsticks joined at the base and separate them gently. Now find where the bleeding is. Relax face and move Inter Dens up and down gently firmly gums relaxed direct your attention to that spot. No not getting better and better just let the attention of your whole body and all the healing power of your body flow with it. A soapy hand on your lower back feeling the muscles and vertebrae can catch a dislocation right there and save you a visit to the osteopath. Illness and disability is largely a matter of neglect. You ignore something because it is painful and it becomes more uncomfortable through neglect and you neglect it further. Everyday tasks become painful and boring because you think of them as WORK something solid and heavy to be fumbled and stumbled over. Overcome this block and you will find that DE can be applied to anything you do even to the final discipline of doing nothing. The easier you do it the less you have to do. He who has learned to do nothing with his whole mind and body will have everything done for him.
Let us now apply DE to a simple test: the old Western quick draw gunfight. Only one gun fighter ever really grasped the concept of DE and that was Wyatt Earp. Nobody ever beat him. Wyatt Earp said: It’s not the first shot that counts. It’s the first shot that hits. Point is to draw aim and fire and deliver the slug an inch above the belt buckle
That’s DE. How fast can you do it and get it done?
It is related that a young boy once incurred the wrath of Two Gun McGee ? McGee has sworn to kill him and is even now preparing himself in a series of saloons. The boy has never been in a gunfight and Wyatt Earp advises him to leave town while McGee is still two saloons away. The boy refuses to leave.
“All right” Earp tells him “You can hit a circle four inches square at six feet can’t you? all right take your time and hit it.” Wyatt flattens himself against a wall calling out once more “Take your time, kid.”
(How fast can you take your time, kid?)
At this moment McGee bursts through the door a .45 in each hand spittin lead all over the town. A drummer from St. Louis is a bit slow hitting the floor and catches a slug in the forehead. A boy peacefully eating chop suey in the Chinese restaurant next door stops a slug with his thigh.
Now the kid draws his gun steadies it in both hands aims and fires at six feet hitting Two Gun McGee squarely in the stomach. The heavy slug knocks him back against the wall. He manages to get off one last shot and bring down the chandelier. The boy fires again and sends a bullet ripping through McGee’s liver and another through his chest.
The beginner can think of DE as a game. You are running an obstacle course the obstacles set up by your opponent. As soon as you attempt to put DE into practice you will find that you have an opponent very clever and resourceful with detailed knowledge of you weaknesses and above all expert in diverting your attention for the moment necessary to drop a plate on the kitchen floor. Who or what is this opponent that makes you spill drop and fumble slip and fall? Groddeck and Freud called it the IT a built in self-destructive mechanism. Mr Hubbard calls it the Reactive Mind. You will disconnect IT as you advance in the discipline of DE. De Brings you into direct conflict with the IT in present time where you can control your moves. You can beat the IT in present time.
Take the inverse skill of the IT back into your own hands. These skills belong to you. Make them yours. You know where the wastebasket is. You can land objects in that wastebasket over you shoulder. You know how to touch and move and pick up things. Regaining these physical skills is of course simply a prelude to regaining other skills and knowledge that you have and cannot make available for your use. You know your entire past history just what year month and hour everything happened. If you have heard a language for any length of time you know that language. You have a computer in your brain. DE will show you how to use it. But that is another chapter.
DE applies to ALL operations carried out inside the body … brain waves, digestion, blood pressure and rate of heart beats … and that is another chapter…
“And now I have stray cats to feed and my class at the Leprosarium.”
Lady Sutton-Smith raises a distant umbrella…
“I hope you find your way … The address in empty streets…”
## Savoury Version
Combine:
• Carrot shredded lengthwise (to create long fibers)
• Olive or coconut oil
• Apple cider vinegar
• Salt and garlic to taste
## Sweet Version
Combine:
• Carrot shredded lengthwise (to create long fibers)
• Fresh strawberries and/or raisins,
• Banana
• Lemon juice for freshness
### Note on Carrots:
3-5 carrots each day confer shining golden skin. You can put a bag of carrots in a slow cooker to soften them, and eat them over the course of the week. I generally avoid vegetables, but carrots are one of the few that I eat regularly. The others are potato, peas, garlic and onion. It may be that the only possible benefit afforded by leafy vegetables is a hormetic tolerance to their poisonous constituents.
# Ginger and Ginseng Drink (Morning)
Combine:
• 5 parts water/mineral water
• 1 part pure ginger juice
• 1 tsp brown sugar
• 1 tsp dried ginseng root (powdered)
Refrigerate.
NB: This recipe can be made in bulk in a glass bottle. Shake before pouring to properly distribute ginger fibres and ginseng powder.
Aside from being anti-inflammatory, antioxidant, and a digestive, ginger is a hunger suppressant and aids brain function (as does sugar), making this a good aid for fasting.
To dissolve sugar well, add it first, pour a splash of boiling water and stir before adding ginger juice and remaining water.
# Liver Disc
Method:
• Blend raw liver thoroughly
• Pour onto tray, spread thinly
• Freeze overnight
• Slice and store sealed
• Use: Add to smoothies (see below) or to cooking.
Example use in bolognese sauce: Once sauce is cooked, stir through liver until melted and integrated, so as not to cook it. Another tip: crush garlic and stir through sauce raw; unlike usual practice of adding garlic early, this preserves garlic flavour and avoids cooking.
Example use in smoothie recipe:
• Frozen liver
• Frozen blueberries
• Pine pollen
• 3 eggs
• Kefir
• Cream
• Milk
• Honey
• Lemon juice
# Assorted Recipes from Timor Leste
## Sambal Lu’at Chili Paste
• Chili
• Garlic
• Ginger
• Galangal
• Lemongrass
• Kaffir lime
• Basil
• Coriander
• Salt
### Method:
• Grind chilies
• Peel ginger/galangal
• Zest citruses
• Grate/chop all finely
• Mix all and add salt
• Refrigerate and ferment for 2 days in clean jar
NB: Lasts ~3-4 wks. Recipes vary, and some use varying combinations of the above ingredients, as well as red onion, tomato, scallion.
## Bilimbi Paste
Combine as above:
• Bilimbi (sliced or mashed)
• Lime
• Lemon
• Chili
• Garlic
## Batch-grilled Fish:
mackerel / halibut / sardine / snapper / barramundi / whiting
• Preheat grill hot, ~220°C (430°F)
• Oil fish heavily and season
• 8-10 min grill-time per inch of thickness
# Pituri
Pituri was a drug used by aboriginal Australians, made by mixing ground leaves of plants containing nicotine with wood ash, whose alkaline nature aids absorption and potentiates the drug. Elders who knew the lore of pituri use are long dead, and in keeping with indigenous values, took their cultural secrets to the grave rather than share them with the uninitiated. However, reports yield various accounts of the effects of pituri, ranging from depressive laxity to strength and endurance, hyperactivity, and verbosity. Pituri was said to be the substance that allowed aboriginals to walk hundreds of miles over days without food or water, and was taken before meetings and warfare.
It is variously made with Duboisia hopwoodii or other similar nicotine-bearing plants of the genus Nicotiana. Hopwoodii is high in nicotine, but unlike many nicotiana it is low in nornicotine, a minor tobacco alkaloid and metabolite. This gives Hopwoodii slightly different effects, possibly leading to a more hyperactive perceived state. This squares with the descriptions of pituri’s effects as inducing hyperactivity and energy.
&&&
January 01, 2000
# Note Capture Stack
## Capture
• Shell/Python scripts —> Obsidian
• Obsidian
• VSCode
• Evernote
• Markdown
• Copy as Markdown Chrome Extension (Research Dumps)
### Research Dump —> Article Pipeline
• All tabs open in one window
• Copy as Markdown Chrome Extension: Copy all tabs as links
• Create new research dump note and add title (Research session is now captured/frozen)
• Take notes in new document while reading through sources
• Embed research link dump at bottom of new note/journal entry, to function as further reading list (a rough helper script for this step is sketched below)
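A minimal sketch of the capture/freeze step above, assuming the links copied by the Copy as Markdown extension are still on the clipboard, that notes live in an Obsidian vault folder, and that `pyperclip` is installed; the folder path and filename scheme are invented for illustration:

```python
# research_dump.py: freeze the current research session into a dated note.
# Assumes the Copy as Markdown links are already on the clipboard and that
# VAULT points at an existing Obsidian folder (both are assumptions).
from datetime import datetime
from pathlib import Path

import pyperclip  # assumed dependency: pip install pyperclip

VAULT = Path.home() / "Obsidian" / "Research Dumps"  # assumed location


def freeze_session(title: str) -> Path:
    links = pyperclip.paste().strip()            # the copied tab links
    stamp = datetime.now().strftime("%Y-%m-%d %H%M")
    note = VAULT / f"{stamp} {title}.md"
    body = (
        f"# {title}\n\n"
        "Notes:\n\n\n"
        "## Further reading\n\n"
        f"{links}\n"
    )
    VAULT.mkdir(parents=True, exist_ok=True)
    note.write_text(body, encoding="utf-8")
    return note


if __name__ == "__main__":
    print(freeze_session("Research dump"))
```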
January 01, 2000
# The Full-Stack Kitchen
## Devices
• Quail egg peeling machine (automatic better than manually operated)
• Garlic peeling machine (water peeler? dry peeler?)
• Slow cooker
• Crock pot
• Cast iron griddle
• Cast iron flat
• Pot
• Rice cooker
• Charcoal grill
• Weber
• Chinese charcoal grill
• Large tubs for spices
January 01, 2000
# Herbs and Supplements
• Creatine
• Glycine
• Garlic
• Ginger
• Ginseng
• Iodine
• Magnesium
• Pine pollen
• Probiotics
• Theanine
• Kelp
• Roe
• Oysters
• Cacao
• Aspirin
• Niacin
• Magnesium
• Goji Berry
## Interested/Researching:
• BPC-157
• Blue lotus
• Deer antler tonic
• Bacopa monnieri
• Yohimbe/yohimbine
• Ashwagandha
• Betel leaf
• Shilajit
• Rhodiola rosea — Powdered root
• Guarana
• White goose berries
• Maca
• Catuaba
• Hemp
• ORMUS
• Taurine
• Alpha GPC
• L-carnitine
• Ginkgo biloba
• NALT (tyrosine)
• Yerba mate
• Monosodium Glutamate: Unfairly maligned?
### Chinese Herb Mixture
The following herb mixture is used in this ginseng chicken tonic soup recipe:
• Huang Jing (Siberian Solomon’s Seal)
• Cosmic Qi, Yang power, tonifies all Three Treasures. Makes the body light and clears the eyes.
• Goji Berry (Fructus Lycii)
• Vision, liver and kidneys, neuroprotective
• Dang Shen (Codonopsis pilosula)
• Enhance Qi and improve digestion, nourish blood, tonify lungs, boost vitality.
• Chinese Yam
• Ginseng Root
• Manifold benefits
• Astragalus (Huang Qi)
## Mushrooms
• Chaga
• Lion's mane
• Reishi
• Cordyceps
• Psilocybe subaeruginosa
• Psilocybe cubensis
## Tobacco
### Growing and curing
Species:
• Nicotiana tabacum: Common tobacco— usually illegal without license, or requires payment of a tax. Flowers are white with a pink tinge.
• Nicotiana alata: Named varieties of this and other species including N. x sanderae are frequently grown in gardens. The variety ‘Lime Green’ has lime green flowers. Others range through colours of green, crimson, purple, salmon and white.
• Nicotiana sylvestris: Scented, pendulous snow white flowers hang from a tall, imposing plant.
Pituri
# Pituri
Pituri was a drug used by aboriginal Australians, made by mixing ground leaves of plants containing nicotine with wood ash, whose alkaline nature aids absorption and potentiates the drug. Elders who knew the lore of pituri use are long dead, and in keeping with indigenous values, took their cultural secrets to the grave rather than share them with the uninitiated. However, reports yield various accounts of the effects of pituri, ranging from depressive laxity to strength and endurance, hyperactivity, and verbosity. Pituri was said to be the substance that allowed aboriginals to walk hundreds of miles over days without food or water, and was taken before meetings and warfare.
It is variously made with Duboisia hopwoodii or other similar plants such as those abovementioned of the genus Nicotiana. D. hopwoodii is high in nicotine, but unlike many Nicotiana species it is low in nornicotine, a minor tobacco alkaloid and metabolite. This gives D. hopwoodii slightly different effects, possibly leading to a more hyperactive perceived state. This squares with the descriptions of pituri’s effects as inducing hyperactivity and energy.
&&&
## Notes on optimisation
### Ginseng
Dried ginseng root can be found at asian grocers, or order in bulk online. While there are many supplements for ginseng, it’s easier to determine the content by simply buying the root. Dried ginseng root is hard and snaps like a twig, so when blended it pulverises well. This powder can go straight into tea or smoothies. I use this powder in the below-mentioned morning drink with ginger.
### Garlic
There aren’t many convenient ways to regularly supplement garlic— since it’s bothersome to peel and doesn’t freeze well, bulk packs of fresh peeled garlic that can be found at asian grocers aren’t very useful. To optimise peeling, pick fat bulbs with dry skin. Pickled garlic lasts a long time and is a good secondary or backup garlic source. Full fresh cloves can be chewed with a mouthful of milk to alleviate garlic breath and avoid getting spiced out, as well as neutralizing acidity in the stomach.
#### Black garlic
Black garlic is aged, potentiated garlic. Garlic is slowly cooked for between 10-40 days, resulting in a soft, mild garlic that can be spread on bread, eaten unaccompanied or added to cooking. All benefits of fresh garlic are maintained.
Preparation:
• Wrap full heads of garlic in foil (can be wrapped individually or as a group, but wrapping must be tight)
• place in slow cooker, rice cooker, or crock pot on warm setting
• leave for 33 days
### Ginger
Though raw sliced root can be chewed, used in cooking, or brewed in tea, I find that real 100% ginger juice is readily available online, cost effective, bulk-friendly and convenient, and by my own account confers all benefits of ginger root.
Every morning I drink a glass of the below recipe, which I make in bulk every couple of weeks:
Morning Ginger
# Ginger and Ginseng Drink (Morning)
Combine:
• 5 parts water/mineral water
• 1 part pure ginger juice
• 1 tsp brown sugar
• 1 tsp dried ginseng root (powdered)
Refrigerate.
NB: This recipe can be made in bulk in a glass bottle. Shake before pouring to properly distribute ginger fibres and ginseng powder.
Aside from being anti-inflammatory, antioxidant, and a digestive, ginger is a hunger suppressant and aids brain function (as does sugar), making this a good aid for fasting.
To dissolve sugar well, add it first, pour a splash of boiling water and stir before adding ginger juice and remaining water.
### Pine pollen
In tea, or mixed with honey.
Pine pollen is a natural multivitamin and phytoandrogen used in traditional chinese medicine and endorsed by broscience. If you have had success fermenting pine pollen, or know of studies addressing this, please alert me with results.
### Glycine
As nightcap, with water and lemon juice, or in tea.
### Creatine
In tea or smoothies.
### Liver
The taste of liver is difficult to conceal without cooking, but the below method is helpful:
Liver Disc
# Liver Disc
Method:
• Blend raw liver thoroughly
• Pour onto tray, spread thinly
• Freeze overnight
• Slice and store sealed
• Use: Add to smoothies (see below) or to cooking.
Example use in bolognese sauce: Once sauce is cooked, stir through liver until melted and integrated, so as not to cook it. Another tip: crush garlic and stir through sauce raw; unlike usual practice of adding garlic early, this preserves garlic flavour and avoids cooking.
Example use in smoothie recipe:
• Frozen liver
• Frozen blueberries
• Pine pollen
• 3 eggs
• Kefir
• Cream
• Milk
• Honey
• Lemon juice
## Brands
• Sun potion
• Solgar
• Iodoral
• Cynomel-T3 and Cynoplus-T3 + T4
January 01, 2000
# Origin and History of the Chicken
## Ancestry
• Chickens are largely descended from the Red junglefowl, a bird which none could say looks entirely dissimilar to the modern chicken. They are still scientifically classified as the same species, and can freely interbreed.
• Chickens share about 70-80% of their DNA with this still-surviving species, domesticated by humans in the Hellenistic period (4-2C BCE) after millennia of being used for cockfighting.
• Subsequently, interbreeding occurred with the grey, green, and Sri Lankan junglefowls, leading to the multifarious chicken breeds we have today.
• Chickens retain adapted characteristics of junglefowl which take advantage of the vast quantities of seed produced during the end of the multi-decade bamboo seeding cycle, causing them to breed prolifically when exposed to large amounts of food.
• In nature, junglefowl populations would balloon every few decades due to the abundant food produced by the simultaneous seeding of many bamboo plants, and slowly dwindle until the next seed cycle. This creates a population graph which looks like an inverted sawtooth wave.
• Humans took advantage of the junglefowl’s capacity for prolific reproduction when exposed to abundant food by feeding them all the time, leaving them in a permanent state of hyperfertility.
## Sleep
• Chickens are one of few animals which show unihemispheric slow-wave sleep (USWS), a type of sleep where one half of the brain rests while the other half remains alert— this also means they sleep with one eye open, corresponding to the half of the brain which is not sleeping.
• No animal does not require sleep— this information is obscured by the varied forms of sleep that some animals exhibit, but animals like bullfrogs or bees, often said not to require sleep, actually do sleep. Bees do not require ‘recovery sleep’, meaning that after long periods of wakefulness they will be fully energised by a sleep of normal duration.
January 01, 2000
# Reptile Dispatches 0.1
## Bullet Hell
I sleep long and deep and dream in symbols. I go to war in haywire visions every night, deployed into a hot red waste, a brutal goon in Bullet Hell. I wake up battered half to death wrapped in vines, red in tooth and claw.
In waking life I range the streets bedevilled by homeless drug-addicted warlocks, fat-tongued witches, brutal diviners deployed in hot blitzes to ravage and charge me. I turn them back with dry power spells and bloodhex their bodies— they angulate groaningly, splitting open in grey ash clouds. I beat them hollow and mince them to chalk, and they return later as hypnagogic apparitions, enterprising to destroy me in another plane. There too, I turn them back, split them open again. I am under Interminable assault, savaged by sigil-drawing gas station summoners who conjure from the air1, vulgar lizards with faces on their backs, ballistic pythonesses cracking shellfish raw in bars. I strike three limbs at once and turn them back, split them open, spilling ash taken by the wind like murmurations of starlings.
I’m training bodyweight and kickboxing, transformed into a mutagenic goon by Clean Soul Protocol2 (CSP) and rigorously controlled sensory input. Bodyweight training minmaxes on two axes in mutual tension to optimise for Strength to Weight Ratio (S/W), a universal metric ordained by God. Biological hierarchy is revealed by the S/W. There is no harm in being heavier than one looks; only benefits. Body fat is an affront to the virtue of pure physiological economy; a bag of pollution belying the mechanoid goblin under your skin. Dense limbs swing faster and harder, making calisthenic training symbiotic with combat striking, a better and more dignified art than the weaponized cuddling of jiu-jitsu, which binds you up with a single opponent and necessitates lying on the ground, vulnerable to head stomps from accomplices.
there’s nothing in my stomach so my body eats itself, dining on dead wood and growing leaner, harder. My bones grow dense as cast-iron and my raw fists move fast when I hurl them. I can leap vertically as high as my head, plant my back foot and swing a leg through a telephone pole like it’s made of cheeto, heavy bone cracking dry wood to splinters. when I see people I look through them and when they talk I dodge words like flicked matches— I flick matches at the bouncer and am thrown out, onto the wet street again with the reptiles.
## Biomimicry
Corpses walk the street these days, every second person you pass has some piece or other hanging off them, grey skin with brains herniating out through their skull. People amble around haggard like they’ve been through labour, wilted and doubled over, blood turned to battery acid. The spine is a cannon used to shoot the soul up to God3, and walking around bent over causes misfires, and black smoke. People decay and come apart like wet paper as they stray from the path of God— he bears wounds who brawls with indomitable fate, and he who spits acid has a mouthful of sores— it pains him to speak. Homes carved into walls, Virgin Girl Massage Palace, Lotus Eater Marble Temple Tea House, Police Station.
1. Self-assembling crystalline microstructures (low-temperature Mother-of-Pearl enamel)
2. Energy regeneration (Swarm logic software)
3. Structural color (Peacock tail pigment-free material shading)
4. Aquaporin desalination membrane (Forward osmosis filters)
5. Fungally innoculated seeds (Volcano farming)
## Boiled Angels
Late Friday evening I’m praying at the Chinese restaurant with the abalone and dragon crayfish in tanks, buckets of blue swimmer crabs and swarms of tiny shrimp, wet bald ducks swinging shiny in the window. Here I meet with two angular chainsmokers, twins who speak as one like bald androids. They brought a girl too, skinny neck with a fresh cattlebrand peeking out under her collar. It seems clear she’s some kind of military technology, a viperess with the spirit boiled out.
This building has seven floors, three below and three above. We descend, stepping downward into the building’s machinery, leaving ginseng tea steaming on the table next to shots of deer antler tonic.
Below ground now, we descend past rows of sweatshop slaves embroidering demonskin brogues and bat-leather belts, bent over like scribbling chroniclers, they don’t look up as I pass. The viperess leads us deeper, past meat fridges lined like vineyards with bloodless carcasses swinging among the blinking lights of server racks whirring in the dark, rows extending back further than the eye can see. By the stairs a skinny bald child in mandarin collar and clogs is sweeping away ice crystals and dust with a millet broom.
1. Psychic Defence against Enhanced Warlords : Pyramid sets of 10 classical mudras cycled + Pranayama breathing — Extended upside down hanging counteracts psychic attacks and Sahasrara Chakra floods with kundalini; the body becomes immortal. Haṭha Yoga Pradīpikā will not teach you this fortification against alien intelligence / altered entities.
2. for breakfast, 3 cloves garlic, kelp, cold terrine, blueberries and a probiotic pill with tonic tea (pounded dry ginseng root, deer antler tonic, and goji berries, teaspoon of brown sugar). Fast until late, broken by white fish, salmon roe and oysters, or dark greens and chicken hearts with black broth. 30 minutes rowing or kickboxing, 5x8 clean and press, and a pyramid set of deadlifts or 4x farmer’s carries to failure. 2-5 minutes cumulative dead hangs. Eggs, glycine, cinnamon and taurine, prayer. Sleep 9 hrs.
3. Posture — Unblock all spinal meridians and the barrel of this cannon is cleared for launch. Fire travel upward. Horse-riding stance when stationary, Opening Outward Movement each day. When walking, head should remain on a single horizontal plane, and steps should be long — Heel makes contact with ground as toe leaves it, foot rolling forward like a wheel with calf muscles activated as ankle extends to point toe toward ground, propelling body forward.
January 01, 2000
# Compiled Notes on Science
## Science and Uncertainty
“Religion teaches society to be satisfied with not understanding the world.”
This phrase is a bloodhex wrought in word warlockery and goblin worship. Science ‘understands’ reality by reconstructing it in a simulation pre-committed to explicability, generating conclusions analogically. ie. reality¹ = that which we can understand² → we understand² reality¹
Even good science doesn’t understand; only explores. Besides, the REAL world is implicitly unintelligible to man’s feeble mind, and to accept this is to acknowledge the highest truth we are capable of grasping— FLEE from the neurotic self-placation of concocted ‘understanding’.
This coup against Truth flows from man’s breathless FEAR— of uncertainty, of ineluctable ignorance… but most finally of God, and our inadequacy in his shadow. But man IS inadequate— immeasurably so. By man’s very finitude he is made infinitely inferior to the infinitude of God.
Rather than accept his station, man does all he can do: redefine reality in analogy + simulacra, words + metaphor (every word is a metaphor). He refactors the terminology to self-aggrandise: Yes, we are certain about the nature of reality, since reality is comprised of that which we are certain of.
God is Beauty, Strength, Health, Wealth + Harmony idealised. His very image strikes existential panic in the resentful man the same way an incel is stricken by the ideal Beauty of a passing girl. Presented with its ungraspable antithesis, ugliness shrieks; a vampire in sunlight.
So u see, contained in this neurosis is a rejection of everything which constitutes flourishing. Beauty casts ugliness in its shadow; to look from above is necessarily to look down one’s nose. And man, having styled himself master of reality, hates to be looked down upon.
But I marvel at the Beauty of God’s nostrils. I love to be looked upon by something so beautiful; I reach upward, stretching to meet Him. I know He won’t come down to meet me, else he would not be God.
The fundamental attachment of scientism is not to Truth, but to certainty— a small box which Truth cannot fit into. No fact whose mechanism is inexplicable can be considered ‘scientific’, and so nothing inexplicable (a category including everything valuable) can be known by science
Certainty PRECLUDES Truth— any concrete assertion (science’s sole currency) can be shown to rely on either circular reasoning (gravity pulls objects together bc such is the nature of gravity), regression (molecules made of atoms made of…), or an axiomatic terminus (big bang)
any metaphysical argument is either circular, regressive, or axiomatic— scientific assertions are the same, only disguised as observations about the material world. Eg, The Big Bang is an axiomatic argument whose logical terms are projected onto materiality
A good heuristic is that something is TRUE precisely to the extent that it DOES NOT make sense. Words are blunt objects to paw at Divinity, groping at a statue of Her in a dark room… I believe that when u die + ur consciousness dissolves, the lights in this room flicker on.
Science (like any form of measurement) is only useful in constructing a simple model of reality from which to make predictions. On God’s death rationalism came to replace Him, to replace reality. An atheist will say that all Truth is scientific (reality is ‘made of science’).
Besides, when science is deprived of metaphysical authority it is revealed to be simple empirical observation, the normal mode of operation for any human.
## Science and Myth
The first principle of a ‘purely rational’ worldview is that nothing means anything, and that the real is co-extensive with the explicable. It has no attachment to Truth whatsoever, only to certainty— this is its only significant difference from a mystical worldview.
Science is every bit as mythical as religion. To capture a mystical/enchanted phenomenon using scientific terms is to recast a story about “beings acted upon by Gods” into a story about “matter acted upon by forces”. The difference is that the latter characters are uninteresting.
The final conclusion of science is to show that it was a pointless endeavour to begin with. Any purely rationalistic interrogation of the universe must eventually discover that it can go no farther, and arrive finally at a negation of its fundamental rational axioms. This is because any possible discoveries of such a method of inquiry are already contained within its very precepts— investigate the world using a schema which accepts only mechanistic material causation as sound data, and one discovers that the world is purely material, and operates causally and mechanistically. This is shown most clearly in theoretical physics, a purely abstract systematic science relatively unbound from observation in comparison to the natural sciences. It is the first of the sciences to conclude that at bottom, the universe does not operate on rational principles.
How to make knowledge:
• Pick method of investigation (write algorithm)
• Plug in real data (while discriminating based on data format accepted by algorithm)
• Wait (for data accretion to satisfy all avenues of inquiry)
• Method finds that its precepts == outputs
Attachment to method in science kneecaps man’s greatness. The better u understand its fragile, paranoid+constrictive ‘method’, the more obvious it becomes just how advanced humanity could be if science were an ideal wild west of thought, anarchistic and devoid of universal method, promoting indiscriminately the greatness and ascension of man, his power and flourishing.
Science’s dogmatism far outstrips that of religion. The foremost scientific principle, that any new inquiry must be grounded in the framework validated by previous findings, is corrupt + dogmatic: it strangles the darwinian selection mechanism which could make science powerful and preserves the more popular theory, not the truer one.
Unimpeded by method, an idea is true because it works— it is correct to the extent of its functionality. This is the way that man evolved his senses, faculties for simplification and organisation of chaotic data. Remember: man’s reason is but another of these senses, as physiologically contingent as any.
The scientific criterion for ‘truth’ is mechanistic intelligibility: If an observation cannot be explained mechanistically it is not scientifically true. It’s ‘unscientific’. Worse, the only acceptable explanation is material causation (which, beneath its veil, is only an all-too-human reliance on narrativity)
In this way, the scientistic weltanschauung DEMANDS meaninglessness, and the psychological profile of the scientist exposes this as his original aim. Once meaning and enchantment are drained from phenomena, once it is stripped bare of mystic knowledge… this void they call scientific truth, Enlightenment.
This reveals the question of science as a psychological + anthropological one. Investigation into the scientistic psyche a far more rewarding endeavour. What psychosexual developmental error, what delusion dominates u so that u get off on intellectual submission, self-subversion?
Scientific obsession w causality doesn’t enrich knowledge, only impedes its development, for an observation must be ‘understood’ (read: wrangled into cheap syllogism) before it can be built upon. Result is not objectivity, but amplification of man’s most embarrassing blindnesses.
The logic of tradition does not suffer this blind spot: It does not demand to know why something works, for the proof of an idea’s truth is efficacy; its longevity is a function of applicability… a far better definition for truth than one contingent on rational intelligibility.
When unmoored from guiding purpose, science angles at removing human subjectivity to create pure information by aggregating data gathered from the experience of fundamentally non-rational animals— There is no reason that this data should be considered any more objective than the individual nodes from which it is aggregated. To the contrary, it exaggerates and deifies man’s deepest irrationality: his fetishisation of reason.
Science:
-Explanation = proof of conclusion. method = paramount authority. applicability = not considered.
-Truth contingent on explicability by material causation.
-Consensus = universal truth, inevitably superseded despite resistance.
-knowledge immeasurably stunted
Tradition:
-Longevity (effectiveness proxy) = Adequate proof
-No claim to Truth as such, only ‘what we do’, ‘what we have long done’, ‘what has worked’.
-Conclusion constantly massaged, iterated infinitely w/o established method, invites supersession.
-knowledge flourishes
Traditional practices arise naturally from a global-scale decentralised render farm composed of communities of amateur experimenters continuously testing ideas simply by living life. Those who are wrong die. Those who are right survive. Truth serves life, don’t u know? In fact, there IS no scientific ‘tradition’. With its veil lifted, the scientific process is haphazard, unmoored from purpose and disjointed in time.
-paradigm arises by consensus, validity of subsequent findings hinges on agreement with prevailing paradigm.
-inevitably findings in tension with paradigm accrue, inviting new theories to explain outlying evidence.
-New paradigm arises by consensus, achieves status of ‘truth’. Repeat.
All worthwhile scientific discoveries are made on the basis of intuition, and all current frameworks governing scientific inquiry were founded on deviation from what was once considered scientific truth.
January 01, 2000
# Resources for Exocore Features
## Other Ideas / To-do
• ‘ID’ YAML metadata tag for Wiki notes, to allow provision of both a title and an ID, rather than the ID being the title. Allows for independent manipulation of ID, and mnemonic titles. (Reference: ‘Some info on identifiers in ZK’.)
• Addition of VSCode tags for identifiers to insert into template notes
• Citation management doubling as PDF storage/database (see ‘Manage Citations for a Zettelkasten’, Zettelkasten Method)
• Images should all link to themselves
• Need backlink graph visualisation — steal from here?
• FIX Netlify build error when invalid links (see the link-checker sketch after this list)
• edit image paste settings to be simplest possible — base image path, input box etc
• Pinned Notes:
• Pinned note appears in sidebar
• Github Actions daily push
• Build ~3 more themes
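Related to the invalid-links item above, a rough link-checker sketch; it assumes the notes are plain Markdown files in a single folder and only checks relative links to other .md files (the folder name and the regex are assumptions, not part of the existing setup):

```python
# check_links.py: list Markdown links that point at files which don't exist,
# so broken internal links can be fixed before the static site build.
import re
from pathlib import Path

NOTES = Path("notes")  # assumed folder containing the .md notes
LINK = re.compile(r"\[[^\]]*\]\(([^)#\s]+\.md)\)")  # matches [text](target.md)


def broken_links(root: Path):
    for note in root.rglob("*.md"):
        text = note.read_text(encoding="utf-8")
        for target in LINK.findall(text):
            if not (note.parent / target).exists():
                yield note, target


if __name__ == "__main__":
    for note, target in broken_links(NOTES):
        print(f"{note}: missing {target}")
```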
January 01, 2000
# Predictive Processing and the Free Energy Principle
## Classical model of action:
• Optimal action depends on state of the world
• Therefore, first step of action is to (1) form a belief (analyse surroundings/prospects)
• (2) imagine a value function of next state brought about by action
• (3) optimise action that maximises value of the next state
## Model of action
• Classical model doesn’t work when the best next thing to do is to search for/resolve uncertainty
• Optimal action depends on beliefs about the world, and subsequent action
• Further, it’s a function of the order in which you interrogate the world
• Therefore the functional (function of a function) to be optimised is a function of beliefs
• Optimal action therefore is optimising sequences or policies of actions
• To be optimised: a function of a belief, integrated over time
## Free Energy Principle:
• The goal of a self-organising (eg biological) system is to minimise prediction error (surprise), also called ‘free energy’, by forming continually-updated beliefs/inferences about the world from which to form policies of action
• Friston considers this an organising principle of all life and intelligence
• To be alive (to be a system that resists disorder and dissolution) is to act in ways that reduce the gulf between your expectations and your sensory inputs (AKA, to minimise free energy)
• If a prototypical agent, or a ‘good agent’, minimises free energy (thereby minimising ‘surprise’), they must believe that the actions they take minimise expected free energy
• That is, the expected free energy associated with a policy of action is minimised
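For reference, the standard variational formulation of the quantity these notes call free energy (this is the textbook form, not something stated in the bullets above): for observations o, hidden states s, and an approximate posterior q(s),

$$F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right] = D_{\mathrm{KL}}\!\left(q(s)\,\|\,p(s \mid o)\right) - \ln p(o)$$

Since the KL term is non-negative, F is an upper bound on surprise (negative log evidence), so an agent that keeps F low by updating q and by acting on the world is also keeping its surprise low, which is the reading given in the bullet points above.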
## Markov Blanket:
The Markov blanket is a concept in machine learning which is essentially a shield that separates one set of variables from others in a layered, hierarchical system. The blanket defines the boundaries of a given system; in cognition it acts as a cognitive version of a cell membrane, shielding states inside the blanket from states outside. This is the schema by which surprise is minimised— the Markov blanket of a variable is a set of variables sufficiently complete that the variable can be inferred from it. If a Markov blanket is minimal (parsimonious, meaning no variable can be dropped without losing information), it is called a Markov boundary.
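In graphical-model terms the blanket of a node is its parents, its children, and its children's other parents; conditioned on that set, the node is independent of everything else. A minimal sketch (the toy network and variable names are invented for illustration):

```python
# markov_blanket.py: Markov blanket of a node in a DAG given as
# {node: set of its parent nodes}.
def markov_blanket(parents: dict, node: str) -> set:
    children = {n for n, ps in parents.items() if node in ps}
    co_parents = {p for child in children for p in parents[child]}
    blanket = set(parents.get(node, set())) | children | co_parents
    blanket.discard(node)
    return blanket


# Toy network, invented for illustration: sensory states depend on the world,
# beliefs depend on sensory states, actions depend on sensory states and beliefs.
parents = {
    "world":   set(),
    "sensory": {"world"},
    "belief":  {"sensory"},
    "action":  {"sensory", "belief"},
}

print(markov_blanket(parents, "sensory"))
# {'world', 'belief', 'action'} (set printing order may vary)
```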
January 01, 2000
# Underpinnings of the Exocore
## Digitally-Integrated Mind Palace
• Navigability
• Memorability
• Hijacking and piggybacking on existing human mnemonic faculties
## Semantic Internet
• Plain text
• Accessibility
• Universality
• Standards-compliance
• Portability
• Static Website Delivery
## Writing as Thinking, Written Output as Consolidated Thought
• Feynman Technique
• General —> specific, scattered -> polished
## Data Ownership and Escaping Net Serfdom
• Digital owned space
• Customisability
• Local Instance
• Digital and Personal Legacy
## FOSS
• Non-proprietary (open source) file formats
## Network Sublimation
• Collaboration
• Webrings
• Remchat
• The New Internet
## Frictionlessness
• Local storage
• No internet required
• No coding required
• Searchability — in contention with static design
• The Roman Room
• The Memex
• The Zettelkasten
• Web 1.0
• IRC
• Webrings
• Digital Gardens
• Memex
• Compendium
• Zettelkasten
• Hyperdraft
## Visual data representations that piggyback on human mnemonic faculties
### Chernoff Faces
“Chernoff faces, invented by applied mathematician, statistician and physicist Herman Chernoff in 1973, display multivariate data in the shape of a human face. The individual parts, such as eyes, ears, mouth and nose represent values of the variables by their shape, size, placement and orientation.”
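A toy sketch of the idea using matplotlib (the records and the mapping of variables to features are invented; real Chernoff-face implementations map many more facial features): each record becomes one face, with the first variable driving head width, the second eye size, and the third mouth curvature.

```python
# chernoff_toy.py: map three variables per record onto crude face features.
import matplotlib.pyplot as plt
from matplotlib.patches import Arc, Circle, Ellipse

records = {               # invented data, each value scaled to 0..1
    "A": (0.2, 0.9, 0.8),
    "B": (0.7, 0.3, 0.2),
    "C": (0.5, 0.6, 0.5),
}

fig, axes = plt.subplots(1, len(records), figsize=(3 * len(records), 3))
for ax, (name, (width, eyes, smile)) in zip(axes, records.items()):
    ax.add_patch(Ellipse((0, 0), 1.0 + width, 1.8, fill=False))        # head
    for x in (-0.3, 0.3):                                              # eyes
        ax.add_patch(Circle((x, 0.3), 0.05 + 0.15 * eyes, fill=False))
    # mouth: lower half of an ellipse whose height grows with the variable
    ax.add_patch(Arc((0, -0.35), 0.8, 0.1 + 0.6 * smile, theta1=180, theta2=360))
    ax.set_title(name)
    ax.set_xlim(-1.2, 1.2)
    ax.set_ylim(-1.2, 1.2)
    ax.axis("off")
plt.show()
```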
### Urbit Names
| Value | Prefix | Suffix |
| --- | --- | --- |
| 0 | doz | zod |
| 1 | mar | nec |
| 2 | bin | bud |
| 3 | wan | wes |
| 4 | sam | sev |
| 5 | lit | per |
| 6 | sig | sut |
| 7 | hid | let |
| 8 | fid | ful |
| 9 | lis | pen |
| 10 | sog | syt |
| 11 | dir | dur |
| 12 | wac | wep |
| 13 | sab | ser |
| 14 | wis | wyl |
| 15 | sib | sun |
Example:
| Bits | Rank | Example |
| --- | --- | --- |
| 8 | galaxy | ~lyt |
| 16 | star | ~diglyt |
| 32 | planet | ~picder-ragsyt |
| 64 | moon | ~diglyt-diglyt-picder-ragsyt |
| 128 | comet | ~racmus-mollen-fallyt-linpex--watres-sibbur-modlux-rinmex |
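A toy sketch of how the syllable table above turns a number into a name; it uses only the sixteen prefixes and suffixes listed (the full tables have 256 of each), so it only handles small values, and it skips the scrambling step that real planet names go through:

```python
# toy_patp.py: render small Urbit-style names from the syllables listed above.
PREFIXES = ["doz", "mar", "bin", "wan", "sam", "lit", "sig", "hid",
            "fid", "lis", "sog", "dir", "wac", "sab", "wis", "sib"]
SUFFIXES = ["zod", "nec", "bud", "wes", "sev", "per", "sut", "let",
            "ful", "pen", "syt", "dur", "wep", "ser", "wyl", "sun"]


def toy_name(value: int) -> str:
    """One byte (galaxy) becomes ~suffix; two bytes (star) become ~prefix+suffix.
    Only works while each byte stays under 16, the slice of the table shown above."""
    if value < 256:                      # galaxy: a single suffix syllable
        return "~" + SUFFIXES[value]
    hi, lo = divmod(value, 256)          # star: prefix byte then suffix byte
    return "~" + PREFIXES[hi] + SUFFIXES[lo]


print(toy_name(0))            # ~zod
print(toy_name(1 * 256 + 2))  # ~marbud
```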
# The Lukasa
“Court historians known as bana balute (“men of memory”) run their fingertips across the surface of a lukasa or point to its features while reciting genealogies, king lists, maps of protocol, migration stories, and the great Luba Epic, a preeminent oral narrative that records how the culture heroes, Mbidi Kiluwe and his son Kalala Ilunga, introduced royal political practices and etiquette. “
January 01, 2000
# Usufruct
Usufruct is a legal concept referring to a right in property which confers on the holder the right to use and benefit from the property without altering, damaging, or destroying it. A usufructuary does not own the property but does have a legal interest in it which is sanctioned or contractually allowed by the owner.
A usufructuary has two of the three civilian property interests in the property, usus and fructus— they do not have the interest of abusus, which entitles them to alienate, destroy, consume or sell the property.
## The three civilian property interests:
• Usus: The right to use or enjoy a thing possessed, directly and without alteration
• Fructus: The right to derive profit from a thing possessed, eg. by lease, cultivation, taxing on entry, etc. Fructus (from ‘fruit’) allows a person to benefit from the sale of renewable commodities of the property.
• Abusus: The right to consume, destroy, or transfer the property. This interest is not conferred upon the usufructuary.
## Notes:
• Roman law considered usufruct a type of personal servitude, where the usufructuary had no possession of the property. Under a rental agreement today, a person has even more restricted rights over a property than did a usufructuary in Rome, but is yet not considered a personal servant.
• The Law of Moses directed owners of productive property not to harvest the edges of their fields so that the poor may collect the gleanings. This confers a kind of usufructuary right by default onto the poor.
• “Earth belongs – in usufruct – to the living.” (Thomas Jefferson).
January 01, 2000
# Dopamine, L-dopa and Pattern Detection
Production of neurotransmitter dopamine is stimulated by novelty, and it facilitates learning, information storage and pattern-recognition, as well as regulating emotion. Pattern-detection is important to learning, because the brain is able to compress complex raw data by identifying repetitious elements and storing information in association with the pattern, rather than making space for each node of information to be stored separately. For example, there is no need to memorize 1000 patterns of digits in order to count from 1 to 1000; the pattern is regular enough that the brain can derive each integer from a pattern it has stored, without storing each data point that the pattern produces.
However, patterns are not pure representations of the world, or even of the data being apprehended by the brain— they are mnemonic data structures which necessarily reduce the complexity of information in order to store it more efficiently. Pattern-matching is generally considered to be helpful for learning, and this may be true if learning is equated with remembering. However, is learning-as-remembering conducive to understanding? Sensitivity to pattern-detection can be alternately phrased as tendency to apply narrative. Humans cannot help but apply narrative to phenomena, and it seems that a compulsion to apprehend data in a logical or causal sequence is deeply ingrained in the human brain, ported over from a form of intelligence that evolved to understand the physical world, where causality is a ubiquitous feature. For this reason, making judgements on inert data is a human default, and takes serious conscious effort to avoid.
It is therefore unsurprising that dopamine also lowers skepticism. If logical sequences (patterns) appear more readily, an inflated subset of chaotic phenomena appears to ‘make sense’, and so the suspension of belief is more easily overcome. L-dopa, a drug which is metabolized into dopamine and used to treat Parkinson’s, makes people more prone to pattern-detection, and has a notable side-effect that causes some patients to develop sudden gambling addictions— patients see clear patterns in random phenomena, leading them to believe they will be more successful than they will be in reality.
Summary: Pattern detection is conducive to memorization, but not necessarily to clear thinking; in many instances apprehension of a pattern is a reduction of phenomena too complex to be faithfully reduced. Heightened dopamine can bolster addictive compulsions and increase credulousness, as patterns are more readily detected and chaotic sequences of action appear to make more sense. Pattern-detection is enhanced by dopamine production, and tendencies to compulsive action can result.
January 01, 2000
# Drink Recipes
## Quietude/Nightcap
• Glycine
• L-theanine
• Taurine
• Magnesium glycinate
• Mix into Kombucha or mineral water.
## Concentration/Mental feats
• Magnesium glycinate
• L-theanine
• Glycine
• Creatine
• Honey (optional)
• Stir through black coffee or blueberry juice.
Note: Adjust magnesium, glycine and strength of coffee contrapuntally depending on constitution and desired result.
## Physical feats
• Raw cacao nibs or powder
• Rhodiola
• Creatine
• Honey
• Raw eggs— optionally, add kefir/milk/yoghurt/banana to taste.
## ACV Refresher / Digestive
• Dash of apple cider vinegar
• Teaspoon of glycine
• Add to glass of mineral water
Note: Do not drink this too often; your bones will dissolve and you will die.
## Orange Milk
Blend:
• Orange juice or peeled oranges
• Honey
• Greek yoghurt
• Milk, if using oranges
## Coldbrew Coffee
• Add coffee grounds to water in glass bottle, shake well, and leave in fridge to brew overnight.
## Hot Chocolate
Blend well and heat:
• Warm milk
• Cream
• Unsalted butter
• Honey
• Raw cacao nibs
## Banana Milk
• Microwave or briefly fry banana with butter
• Blend well with honey
NB: microwaves will likely soon be found to contribute to ill health, and should be avoided.
## Potency Tea
In hot water:
• Pine pollen
• Powdered ginger
• Yohimbe
• Powdered korean ginseng root
• Honey
January 01, 2000
# Samgyetang Ginseng Chicken Soup (Korean Style)
## For one serving:
• Cornish hen/spatchcock
• quarter cup (ideally short grain) rice, soaked for 15 minutes in hot water
• Ginseng root
• large dried jujube (red date)
• 8 garlic cloves
• Spring onions, chopped
• Salt
• Ground black pepper
## Prepare:
• Wash and salt hen, and rinse rice with cold water
• Stuff with rice, 1 ginseng root, 1 jujube, 8 cloves of garlic
• Boil from cold water, then simmer for 1.5 hours
• Occasionally ladle any settled broth over hens
• Serve Hens whole in a bowl and pour over broth
• Sprinkle spring onions and pepper
Compare with chinese ginseng chicken tonic soup
January 01, 2000
# Ginseng Chicken Tonic Soup (Chinese Style)
10 min preparation, 1-4 hrs cooking
## Spices:
• Huang Jing (Siberian Solomon’s Seal)
• Goji Berry (Fructus Lycii)
• Dang Shen (Codonopsis pilosula)
• Chinese Yam
• Ginseng Root
• Astragalus (Huang Qi)
These can usually be found in a Chinese grocer as a single mix, but buying each separately and in bulk avoids packaging markup— you can store a giant glass jar of this for years, cheaply, and only have to buy fresh chicken.
See the benefits of these spices here.
Any kind of chicken is fine, but chicken with bones, like a Maryland or a whole spatchcock (small chicken) works best.
## Prepare:
• Rinse, dry and salt chicken. Let sit for 30 minutes.
• Place herbs, water and chicken in a pot and bring to a boil, then reduce to simmer.
• Occasionally check in to remove solids from surface of soup or add water if needed.
• Cook for 1-4 hrs — 50% reduction is ideal.
• Remove chicken and chop to serving size. Season soup with salt and serve in a bowl.
Compare with korean ginseng chicken soup
January 01, 2000
# Notes on Remilia’s New Internet
the vision
um um u um uh hhhhhh
the visionis the new internet
the visiton is . remco
the new internet is realtim e( meguca) (s0machat )
the new art is remilia ( milady ) *( bonkler)
this is what. I beleive in . when I said Ibelieve in the vision
spider im so drunk incoehrent righr t now you will have to forgive m e for mty terseness
the new internet is digial dovertnty . exocore. realtime chat. OpenBSD
tiling window manager .terminal .
the new internet is network spirituality
the new internet
it all made sense to me
right now . is like a drea m for me. im rdreaming
the new internet is a reevaluation of our social interaction with each other . and a reformation of our dynamic with each other as USERS
the new internet is whit e hearted ( light hearted)
the real time chat BSD exocore digital sogivern homestead terminal tiling window manger are the aestehtic surface level
the deeper level is the metaphyiscs of the internet and how we interpret the virtual world
the rleationship betwene user interfaces with our mental spatiotemporal matrix of virtual reality
the relationship of social interfaces with our sociocultural development
digital culture is in turmoil as ou r third spaces are full y owned by pltaforfmrsd . w e are fully plugged in to .
eletronic womb . fetus . umbilical cord (100GB ETHERNET ) Into the SPINE
next stage o f humanutiy . traditionalism ( VAT) (susptended . sensory deperevation) / ( retrun)
3 monitors
112wsx . socio temporal nexus
cxomputer hell
the user interface will stop existing once the machine learning models fully understand us
the syncreticism between the old and the young. the tools to make computers approachable for the old are the fundemental primitive that the young understand comptuers throguh . its ahoreseshi ( horse shoe)
xcomputers must be increasingly MORE addictive . fqast . emotional . no response time given . immediate off th
e cuf f answers . full information throughput betwene interlocutors
charles has not adjusted his body to the netwrok . he is still getting “carsick “ becauyse his body has no t adapted to its new organs
having a new monitor added is like getting a new limb stitched onto your bodyt . conversely. I was mutilated when my diamondtron 2060u broke
stock traders with 30 monitors a re like the hindu dieties
January 01, 2000
# The American Samurai
It is said that business poisons friendships, but it’s not true— Business relationships are an ideal form of friendship, and a corporate entity can form the most cohesive and highly-evolved group dynamics possible. Ongoing, mutually productive business dealings demand personal virtue more than a typical friendship.
Business is a zone of consequence, accountability, and competition; in such an environment virtues are blast-tested constantly. When money enters a friendship and things go sour, the true machinery at play is that scarcities of virtue are revealed. Greed poisons friendships, as does untrustworthiness or envy— it is only the introduction of money which has made these qualities material.
A shrewd businessman searches for the same qualities in an associate as every person should look for in a friend. Many people have no problem maintaining a friendship with someone of loserish character, because they don’t rely on them for their livelihood.
So, the premium on personal virtue increases with the level of consequence. In competitive spheres with no means of recourse to a higher body, there is no insulation from consequence, and a market for virtue is born. Cosmopolites will wonder how the Russian empire grew by 50 miles a day for 300 years without an intermediary institution between the masses and the monarch— well, a healthy culture of underground terrorist extremism flourished, and rulers who made too many mistakes were not granted the mercy of being voted out. Creating a platform and a market for appeals and excuses is likely to spawn a lot of both.
Cutthroat, competitive, unregulated environments tribalise people, which is often wrongly conceived as a process of disconnection— it’s not. People are disconnected by default, and tribalisation only brings them closer. An outgroup and an ingroup mutually manifest each other in their opposition, like darkness and light— each exists precisely to the extent that the other does. In an environment where an individual can afford to pay betrayals no mind, the value of loyalty is depreciated. Where there are no consequences to breaches of trust, there is no market for trustworthiness.
Virtues like trustworthiness or loyalty are revealed by situations that demand them, and so domains of direct consequence and accountability produce an ideal form of camaraderie. This makes a mafia the mythically heightened form of a business, where stakes are existential and mechanisms for recourse are truly absent, even hostile.
In unregulated business sectors camaraderie is high, as is productivity— a function of the supremely fruitful relationships formed between virtuous men. If the ultimate font of abundance is the wild west, then the ultimate businessman is a cowboy. In the historical wild west of the American frontier the cowboy arose as a romantic hero, spun into a scrappier, Byronic form suited to New World capital— the American Chevalier. The American Samurai.
January 01, 2000
# HYPER-LOVE
## Finally... HYPER-LOVE
When my wife is sponging the coconut oil from my body and bedecking me with ceremonial turkish silk houndstooth posting robe and red sash before I enter the computing chamber for the 6am opening of the XLR realtime imageboard, I will be intoning a psalm to HYPER-LOVE.
When I’m adopting kibi-dachi horseriding stance at my standing desk to drink in energies from panoramic views of dry red plateaus and the chaparral forest beyond, I will know that HYPER-LOVE brought me here.
When I’m getting dismembered by four enterprising young cyber-cultists for the Urbit star private key tattooed on my ankle, I’ll be thinking about HYPER-LOVE.
When I’m signing ‘Yours in Christ’ on my transaction hash for a 2.5 Monero contribution to the runaway viral assassination betting marketplace protocol crowdfunding the execution of public officials, I’ll be dreaming of HYPER-LOVE.
When I’m explaining to the postman through the intercom that I can’t come to the gate to sign for my package because my driveway is 6 miles long and that he should leave my dark web acid shipment at the guardhouse, I’ll be thinking of HYPER-LOVE.
When I’m receiving the results of my blood test to discover whether my 12-week research chemical cycle has caused adrenal suppression and the pathologist is asking why they don’t show any COVID-19 antibodies, I am thinking about HYPER-LOVE.
When my boys and I are hiding from tattooed pirates and breaking into a cold storage vault housed in a remote cult compound to reclaim my Urbit star and get my fucking foot back, I’ll be thinking about HYPER-LOVE.
When I am watching the white robes of my paperless indentured servants glisten under high-pressure sodium grow-lamps as they bend over rows of red korean ginseng in my adapted hydroponic microgreen trough-farm warehouse, I will be wondering where I would be without HYPER-LOVE.
When the blueberry juice, deer antler tonic and Sufi devotional chants be hitting and the membrane between this world and a greater one grows thin and I’m surrounded by luminous hovering angels at war with sin, banishing the woeful groaning of arthropod goblins from this worldly plane, bathing me in the light of blazing eternity and strengthening my limbs and rendering my skin golden and bones dense and heavy as cast iron and moving in unison as dancing onionskin ghosts to sanctify the spirit of every creature that lives under the shimmering disc of God’s great red sun, I am linking arms with my friends and we are chanting HYPER-LOVE.
January 01, 2000
# I’m Caught Up
## On Everything
OK guys, I’m caught up. I’m finally caught up on Web3, and Urbit, and the Assembly in Austin, and the Wet Brain Panel, and Ethereum scalability, and the distinction between optimistic and zero-knowledge ETH rollups, and Arbitrum (which idc about), and I read the Wikipedia page for Merkle tables, and I learned what shard chains are, and I kind of gave up on figuring out templeDAO unless someone can really make it exciting for me.
I’m caught up on the developments in the mythos of Milady Sonora wherein SWAT teams opened fire on her packed rave in New York City, and how the feds were tipped off by a rival DAO whose board of directors feared the fresh, youthful iconoclasm and enlightened prescience that will wash clean the poisoned digital landscape, and how there was coronavirus in the handsoap and speed in Himi’s coke that night.
I’m caught up on Chinese municipal tier system, and I finally looked at a map of New York and apparently Tribeca stands for Triangle below Canal, like as in Canal Street and oh yeah I found out what the Drunken Canal is, and I read a NYT article about it and the author put Dimes Square in quotation marks which I thought uhh Xennial moment, and apparently Soho stands for South of Hou—idc, and I’m caught up on the OHM/TIME drama and how jawz deactivated but maybe he’s back now, idk (idc), and now I know who Dean Kissick is, and who Honor Levy is and apparently she’s half-jewish which I learned because I listened to an episode of Wet Brain (Moldbug episode, on Spotify) and now I know who Walter Pearce, Tyler Hobbs, Waheed Zai (idc), and Zach Lieberman are.
I’m caught up on the MiladyMaker Minecraft Server and its 100,000-page whitepaper shitpost, and I had dinner with the firm partners, and I avoided getting ejected from the steakhouse for being unvaxxed and I didn’t even have to make a scene, and I got a new suit at the TM Lewin liquidation sale because people weren’t buying suits in quarantine (only shirts), and in the groupchat the boys are staking ICE and burning FRAX, and something about MIM and AVAX idk, and I got the Andrew Tate follow to complete the goated follower stack (Thomas777, Land, Solbrah, Landshark, Tate). I’m caught up on pay-to-earn Axie slaves in the Phillipines, and the revival movement for marquee tags in standards-compliant HTML, And I’m NOT caught up on the Rittenhouse trial (idc).
Now, to look toward tomorrow— toward the Remilia Island Virtual Compound, and to a simultaneous rebirth and gamification of human slavery through the burgeoning indentured South-East-Asian cookie-clicker serf economy backdoor, and to a teary-eyed toast at the private gallery show anticipating the rejuvenation of slavery as such in the eyes of the public consciousness, and to eyeball-analysing AR vaccine-taker detection goggles trained on a GAN, and to an ocean of starry-eyed Vtuber starlet angels making their first million, and to the one young Decentralised-Autonomous-Corporation-as-performance-art who risked it all and won the hearts and minds of seven billion Americans across the globe.
Yours,
James Arthur Liao
Numerical Integration
Calling Sequence
evalf(Int(f, x=a..b, ...))
evalf(Int(f, a..b, ...))
evalf(Int(f, list-of-equations, ...))
evalf(Int(f, list-of-ranges, ...))
evalf(int(f, x=a..b))
Parameters
f - algebraic expression or procedure; the integrand
x - name; the variable of integration
a, b - endpoints of the interval of integration
list-of-equations - list of equations [x1=a1..b1, ..., xn=an..bn]
list-of-ranges - list of ranges [a1..b1, ..., an..bn]
... - (optional) zero or more options, as described below
Description
• The most common command for numerical integration is evalf(Int(f, x=a..b)) where the integration command is expressed in inert form to avoid first invoking the symbolic integration routines. It is also possible to invoke evalf on an unevaluated integral returned by the symbolic int command, as in evalf(int(f, x=a..b)), if it happens that symbolic int fails (returns an unevaluated integral).
• All numerical integration calling sequences can also be accessed directly from the int command by using the numeric option.
• You can enter the command evalf/Int using either the 1-D or 2-D calling sequence. For example, evalf(Int(1/(x^2+1), x=0..infinity)) is equivalent to $\mathrm{evalf}\left(\int_{0}^{\infty} \frac{1}{x^{2}+1}\, dx\right)$.
• The integrand f may be another unevaluated integral, that is, multiple integrals are supported. A special list syntax (see below) can be used to specify multiple integrals, rather than using nested integrals. Integrals expressed in the standard non-list notation are referred to as 1-D (one-dimensional) integrals including the case of nested 1-D integrals.
• If the integrand f is specified as a procedure or a Maple operator, then the second argument must be a range a..b and not an equation, that is, a variable of integration must not be specified.
• Various levels of user information are displayed during the computation if infolevel[evalf/int] is assigned values between 1 and 4.
Optional Arguments
• Additional options may be specified as equations. (For backward compatibility some options are accepted as values rather than equations, as specified below.) An option is one of the following forms:
method = nameMethod or digits = n or epsilon = eps or methodoptions = mopts or maxintervals = m
• The specification method = nameMethod (or simply nameMethod) indicates a particular numerical integration method to be applied. The methods that can be specified are described below. By default, a hybrid symbolic-numeric strategy is applied.
• The specification digits = n (or simply n) indicates the number of digits of precision for the computation. Some additional guard digits are carried during the computation to attempt to achieve a result with correct digits (although a larger tolerance can be specified by using the 'epsilon' option). By default, the Maple environment variable Digits specifies the precision for the computation.
• The specification epsilon = eps specifies the relative error tolerance for the computed result. The routines attempt to achieve a final result with a relative error less than this value. By default, the relative error tolerance which the routines attempt to achieve for the final result is
$\mathrm{eps} = 0.5 \times 10^{1-\mathrm{digits}}$
where digits is the precision specified for the computation. In attempting to achieve this accuracy, the working value of Digits is increased as deemed necessary. It is an error to specify 'epsilon' smaller than the default value above, and for any value larger than 1e-3 the value 1e-3 is used instead if the method in use is deterministic (i.e. not the MonteCarlo or Cuba methods).
Note: For some integrands, the numerical accuracy attained when computing values of the integrand may be insufficient to allow the value of the integral to be computed to the default tolerance $\mathrm{eps}$ (even though the computation is using some number of guard digits). In such cases, specifying a larger tolerance (relative to the setting of digits) via the 'epsilon' option may be helpful. Alternatively, increasing Digits and fixing 'epsilon' may provide the desired answer (see the end of the examples section).
• The specification methodoptions = mopts specifies a list of zero or more options that are specific to a method selected with the method option. In particular, if the _d01ajc or _d01akc method is selected, one can supply an option of the form methodoptions=[maxintervals = m] to specify a maximal number of subintervals that can be used internally by those methods.
• For backward compatibility, the option maxintervals = m for the _d01ajc and _d01akc methods can also be specified as a separate option, as an argument to Int directly, rather than in the methodoptions option.
Outline of the Numerical Integration Polyalgorithm (1-D Integrals)
• In the default case (no particular method specified), the problem is first passed to NAG integration routines if Digits is not too large (that is, if Digits <= evalhf(Digits)). The NAG routines are in a compiled C library and hence operate at hardware floating-point speed. If the NAG routines cannot perform the integration, then some singularity handling may be performed and control may pass back to the NAG routines with a modified problem. Native Maple routines are invoked if the NAG routines cannot solve the problem (for example, if Digits is too large or if the integrand involves functions for which hardware floating-point evaluation is not supported).
• The native Maple hybrid symbolic-numeric solution strategy is as follows. The default numerical method applied is Clenshaw-Curtis quadrature (_CCquad). If slow convergence is detected then there must be singularities in or near the interval of integration (perhaps in the complex plane). Some techniques of symbolic analysis are used to deal with the singularities. For problems with non-removable endpoint singularities, an adaptive double-exponential quadrature method (_Dexp) is applied.
• If singularities interior to the interval are suspected, then an attempt is made to locate the singularities in order to break up the interval of integration. Finally, if still unsuccessful, then the interval is subdivided and the _Dexp method is applied, or if the method was already _Dexp or _Sinc then an adaptive Gaussian quadrature method (_Gquad) is applied.
• For the limits of integration, the values infinity and/or -infinity are valid, and a symbolic-numeric strategy attempts to deal with singularities. Techniques employed include variable transformations, subtracting out the singularity, and integration of a truncated generalized series near the singularity.
• No singularity handling is attempted in the case where the integrand f is specified as a procedure or a Maple operator.
Special (List) Syntax for Multiple Integrals
• A numerical multiple integration problem may be specified in a natural way using nested one-dimensional integrals, for example:
evalf( Int(...(Int(Int(f, x1=a1..b1), x2=a2..b2), ...), xn=an..bn) )
where the integrand f depends on x1, x2, ..., xn. Such a problem may also be specified using the following special multiple integration notation with a list as the second argument:
evalf( Int(f, [x1=a1..b1, x2=a2..b2, ..., xn=an..bn]) ) .
• Additional optional arguments may be stated just as in the case of 1-D integration. Also as in 1-D integration, the integrand f may be specified as a procedure in which case the second argument must be a list of ranges: [a1..b1, a2..b2, ..., an..bn].
• Whether a multiple integration problem is stated using nested integrals or using the list notation, the arguments will be extracted so as to invoke the same numerical multiple integration routines.
The Method Names
• The optional argument method = name (or simply the bare method name) accepts the following method names.
method = _DEFAULT -- equivalent to not specifying a method; the solution strategy outlined above is applied for 1-D integrals; for multiple integrals, the problem is passed to the _cuhre method and if it fails, then the problem is treated via nested 1-D integration.
method = _NoNAG -- indicates to avoid calling NAG routines; otherwise follow the _DEFAULT strategy.
method = _NoMultiple -- indicates to avoid calling numerical multiple integration routines; compute multiple integrals via nested 1-D integration.
Maple Methods
• Specifying a method indicates to try only that method (in particular, no NAG methods and no singularity handling).
method = _CCquad -- Clenshaw-Curtis quadrature method.
method = _Dexp -- adaptive double-exponential method.
method = _Gquad -- adaptive Gaussian quadrature method.
method = _Sinc -- adaptive sinc quadrature method.
method = _NCrule -- adaptive Newton-Cotes method "quanc8". Note that in contrast to the other Maple methods listed here, "quanc8" (method = _NCrule) is a fixed-order method and hence it is not recommended for very high precisions (e.g. Digits > 15).
NAG Methods
• Specifying a method indicates to try only that method (in particular, no singularity handling and no Maple methods).
method = _d01ajc -- for finite interval of integration; allows for badly behaved integrands; uses adaptive Gauss 10-point and Kronrod 21-point rules.
method = _d01akc -- for finite interval of integration, oscillating integrands; uses adaptive Gauss 30-point and Kronrod 61-point rules.
method = _d01amc -- for semi-infinite/infinite interval of integration.
Multiple Integration Methods
• These methods are for multiple integrals over a hyperrectangle, that is, the limits of integration are finite constants.
• Specifying a method indicates to try only that method (in particular, do not revert to nested 1-D integration).
method = _cuhre -- dimensions 2 to 15; ACM TOMS Algorithm 698.
method = _MonteCarlo -- Monte Carlo method; for low accuracy only (less than 5 digits of accuracy); NAG routine 'd01gbc'.
method = _CubaVegas -- Vegas method; for low accuracy. For details and method-specific options, see evalf/Int/cuba.
method = _CubaSuave -- Suave method; for low accuracy. For details and method-specific options, see evalf/Int/cuba.
method = _CubaDivonne -- Divonne method; for low accuracy. For details and method-specific options, see evalf/Int/cuba.
method = _CubaCuhre -- Cuhre method. For details and method-specific options, see evalf/Int/cuba.
Examples
> $\mathrm{evalf}\left({{∫}}_{0}^{1}\frac{{ⅇ}^{-{x}^{3}}}{{x}^{2}+1}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}x\right)$
${0.6649369431}$ (1)
> $\mathrm{evalf}\left({{∫}}_{0}^{\mathrm{∞}}\frac{1}{{x}^{2}+1}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}x\right)$
${1.570796327}$ (2)
> $\mathrm{evalf}\left({{∫}}_{0}^{\mathrm{∞}}\mathrm{sin}\left(x\right)\mathrm{ln}\left(x\right){ⅇ}^{-{x}^{3}}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}x\right)$
${-}{0.1957885158}$ (3)
The following integrals are computed to higher precision.
> $\mathrm{e1}≔\frac{1}{\mathrm{Γ}\left(x\right)}:$
> ${{∫}}_{0}^{2}\mathrm{e1}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}x=\mathrm{evalf}\left(\mathrm{Int}\left(\mathrm{e1},x=0..2,\mathrm{digits}=20,\mathrm{method}=\mathrm{_Dexp}\right)\right)$
${{∫}}_{{0}}^{{2}}\frac{{1}}{{\mathrm{Γ}}{}\left({x}\right)}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{x}{=}{1.6263783986861406145}$ (4)
> $\mathrm{e2}≔\frac{{ⅇ}^{v-\frac{{v}^{2}}{2}}}{1+\frac{1{ⅇ}^{v}}{2}}:$
> ${{∫}}_{0}^{\mathrm{∞}}\mathrm{e2}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}v=\mathrm{evalf}[20]\left({{∫}}_{0}^{\mathrm{∞}}\mathrm{e2}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}v\right)$
${{∫}}_{{0}}^{{\mathrm{∞}}}\frac{{{ⅇ}}^{{v}{-}\frac{{1}}{{2}}{}{{v}}^{{2}}}}{{1}{+}\frac{{1}}{{2}}{}{{ⅇ}}^{{v}}}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{v}{=}{1.3055168991185060654}$ (5)
> $\mathrm{e3}≔\frac{1}{1+\mathrm{ln}\left(1+x\right)}:$
> ${{∫}}_{0}^{1}\mathrm{e3}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}x=\mathrm{evalf}[32]\left(\mathrm{Int}\left(\mathrm{e3},x=0..1,\mathrm{method}=\mathrm{_Gquad}\right)\right)$
${{∫}}_{{0}}^{{1}}\frac{{1}}{{1}{+}{\mathrm{ln}}{}\left({1}{+}{x}\right)}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{x}{=}{0.73716070962368003213791626905536}$ (6)
> $r≔{∫}_{-\mathrm{∞}}^{\mathrm{∞}}\mathrm{sech}\left(x\right){ⅇ}^{-{x}^{2}}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}ⅆx$
${r}{:=}{{∫}}_{{-}{\mathrm{∞}}}^{{\mathrm{∞}}}{\mathrm{sech}}{}\left({x}\right){}{{ⅇ}}^{{-}{{x}}^{{2}}}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{x}$ (7)
> $\mathrm{evalf}\left(r\right)$
${1.479061171}$ (8)
> $\mathrm{evalf}[25]\left(r\right)$
${1.479061171449575890854454}$ (9)
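As an independent cross-check outside Maple (not part of this help page), the same integral can be evaluated with Python's mpmath library; a small sketch:

import mpmath as mp

mp.mp.dps = 30   # working precision in decimal digits
val = mp.quad(lambda x: mp.sech(x) * mp.exp(-x**2), [-mp.inf, mp.inf])
print(val)       # should reproduce 1.479061171449575890854454... to the digits shown above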
The following command returns an error because procedure $f$ is invoked with argument $x$, a symbolic name.
> f := proc(x) if x < 2 then 2*x else x^2 end if; end proc;
${f}{:=}{\mathbf{proc}}\left({x}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{if}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{x}{<}{2}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{then}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{2}{*}{x}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{else}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{x}{^}{2}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end if}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (10)
> $\mathrm{evalf}\left({{∫}}_{0}^{3}f\left(x\right)\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}x\right)$
When the integrand $f$ is a procedure, the following syntax should be used.
> $\mathrm{evalf}\left(\mathrm{Int}\left(f,0..3\right)\right)$
${10.33333333}$ (11)
Note that the following command also works by delaying the evaluation of $f\left(x\right)$ via unevaluation quotes.
> $\mathrm{evalf}\left({{∫}}_{0}^{3}'f'\left(x\right)\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}x\right)$
${10.33333333}$ (12)
Multiple integrals may be expressed as nested one-dimensional integrals.
> ${{∫}}_{0}^{\sqrt{2}}{{∫}}_{0}^{3}{{∫}}_{0}^{4}\frac{{ⅇ}^{x+y+z}}{\left(5x+1\right)\left(10y+2\right)\left(15z+3\right)}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}x\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}y\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}z$
${{∫}}_{{0}}^{\sqrt{{2}}}{{∫}}_{{0}}^{{3}}{{∫}}_{{0}}^{{4}}\frac{{{ⅇ}}^{{x}{+}{y}{+}{z}}}{\left({5}{}{x}{+}{1}\right){}\left({10}{}{y}{+}{2}\right){}\left({15}{}{z}{+}{3}\right)}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{x}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{y}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{z}$ (13)
> $\mathrm{evalf}\left(\right)$
${0.9331611325}$ (14)
Numerical multiple integration may also be invoked using a list syntax.
> $d≔1-{w}^{2}{x}^{2}{y}^{2}{z}^{2}:$
> $g≔d\mathrm{cos}\left(wxyz\right)-dwxyz\mathrm{sin}\left(wxyz\right)$
${g}{:=}\left({-}{{w}}^{{2}}{}{{x}}^{{2}}{}{{y}}^{{2}}{}{{z}}^{{2}}{+}{1}\right){}{\mathrm{cos}}{}\left({w}{}{x}{}{y}{}{z}\right){-}\left({-}{{w}}^{{2}}{}{{x}}^{{2}}{}{{y}}^{{2}}{}{{z}}^{{2}}{+}{1}\right){}{w}{}{x}{}{y}{}{z}{}{\mathrm{sin}}{}\left({w}{}{x}{}{y}{}{z}\right)$ (15)
> $\mathrm{evalf}\left({{∫}}_{0}^{1}{{∫}}_{0}^{1}{{∫}}_{0}^{1}{{∫}}_{0}^{1}g\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}w{ⅆ}x{ⅆ}y{ⅆ}z\right)$
${0.9717798177}$ (16)
When low accuracy is sufficient, the Monte Carlo method may be used.
> $h≔\frac{1}{2+\mathrm{sin}\left(\mathrm{π}\sqrt{87}\left(\mathrm{x1}+\mathrm{x2}+\mathrm{x3}+\mathrm{x4}+\mathrm{x5}+\mathrm{x6}\right)\right)}$
${h}{:=}\frac{{1}}{{2}{+}{\mathrm{sin}}{}\left({\mathrm{π}}{}\sqrt{{87}}{}\left({\mathrm{x1}}{+}{\mathrm{x2}}{+}{\mathrm{x3}}{+}{\mathrm{x4}}{+}{\mathrm{x5}}{+}{\mathrm{x6}}\right)\right)}$ (17)
> $\mathrm{evalf}\left(\mathrm{Int}\left(h,\left[\mathrm{x1}=-1..1,\mathrm{x2}=-1..1,\mathrm{x3}=-1..1,\mathrm{x4}=-1..1,\mathrm{x5}=-1..1,\mathrm{x6}=-1..1\right],\mathrm{method}=\mathrm{_MonteCarlo},\mathrm{ε}=0.005\right)\right)$
${36.91495206}$ (18)
Only trust about 3 digits when epsilon = 0.5e-2.
> $\mathrm{evalf}[3]\left(\right)$
${36.9}$ (19)
The following integrand has a region near x=0.5 where evaluation incurs catastrophic cancellation to the extent that the function cannot even be evaluated to 1 significant Digit at standard precision.
> $\mathrm{igrand}≔\frac{1}{2-\mathrm{sin}\left(\mathrm{π}x\right)-\mathrm{sin}\left(\frac{355x}{113}\right)}$
${\mathrm{igrand}}{:=}\frac{{1}}{{2}{-}{\mathrm{sin}}{}\left({\mathrm{π}}{}{x}\right){-}{\mathrm{sin}}{}\left(\frac{{355}}{{113}}{}{x}\right)}$ (20)
> $\mathrm{evalf}\left(\left.\mathrm{igrand}\right|_{x=0.5}\right)$
${\mathrm{Float}}{}\left({\mathrm{∞}}\right)$ (21)
Note that evalf fails to compute this integral with default settings.
> $\mathrm{evalf}\left({{∫}}_{0}^{1}\mathrm{igrand}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}x\right)$
${{∫}}_{{0.}}^{{1.}}\frac{{1}}{{2.}{-}{1.}{}{\mathrm{sin}}{}\left({3.141592654}{}{x}\right){-}{1.}{}{\mathrm{sin}}{}\left({3.141592920}{}{x}\right)}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{x}$ (22)
So to compute the value of this integral to 10 digits, we need to add significant guard digits:
> $\mathrm{evalf}[15]\left(\left.\mathrm{igrand}\right|_{x=0.5}\right)$
${1.11111111111111}{}{{10}}^{{14}}$ (23)
> $\mathrm{evalf}[20]\left(\left.\mathrm{igrand}\right|_{x=0.5}\right)$
${1.1241778044582643369}{}{{10}}^{{14}}$ (24)
> $\mathrm{evalf}[25]\left(\left.\mathrm{igrand}\right|_{x=0.5}\right)$
${1.124177605944406775029070}{}{{10}}^{{14}}$ (25)
So we add 15 digits to assure we get the answer to 10 digits:
> $\mathrm{evalf}[25]\left(\mathrm{Int}\left(\mathrm{igrand},x=0..1,\mathrm{ε}=1.{10}^{-10}\right)\right)$
${1.499451605234141071490295}{}{{10}}^{{7}}$ (26)
In the following example, the default setting for the maximum number of subintervals, $500$ in this case, is not enough for successful integration using the NAG method for oscillatory integrands, $\mathrm{_d01akc}$, which would be suitable for this integrand. By supplying a higher upper bound, we can get successful completion with this method.
> $\mathrm{igrand}≔\frac{\mathrm{sin}\left({ⅇ}^{\left|x\right|}\right)}{1+{x}^{2}}$
${\mathrm{igrand}}{:=}\frac{{\mathrm{sin}}{}\left({{ⅇ}}^{\left|{x}\right|}\right)}{{{x}}^{{2}}{+}{1}}$ (27)
> $\mathrm{evalf}\left(\mathrm{Int}\left(\mathrm{igrand},x=-10..10,\mathrm{method}=\mathrm{_d01akc}\right)\right)$
> $\mathrm{evalf}\left(\mathrm{Int}\left(\mathrm{igrand},x=-10..10,\mathrm{method}=\mathrm{_d01akc},\mathrm{methodoptions}=\left[\mathrm{maxintervals}=1000\right]\right)\right)$
${1.235076653}$ (28)
|
2016-05-01 00:26:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 72, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8716789484024048, "perplexity": 2004.869354404028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860113541.87/warc/CC-MAIN-20160428161513-00039-ip-10-239-7-51.ec2.internal.warc.gz"}
|
https://www.acmicpc.net/problem/5682
|
Time limit: 1 second · Memory limit: 128 MB · Submissions: 13 · Accepted: 9 · Solvers: 9 · Acceptance ratio: 69.231%
Problem
Candy has a stock of candy of F different flavors. She is going to make several packs of candy to sell them. Each pack must be either a flavored pack, containing candy of a single flavor, or a variety pack, containing candy of every flavor. Candy wants to make a nice packing with her candy. She decided that a nice packing must honor the following conditions:
• Each piece of candy must be placed in exactly one pack.
• Each pack, regardless of its type, must contain at least 2 pieces of candy.
• Each pack, regardless of its type, must contain the same number of pieces of candy.
• Within each variety pack, the number of pieces of candy of each flavor must be the same.
• There must be at least one variety pack.
• There must be at least one flavored pack of each flavor.
Candy is wondering how many different nice packings of candy she could make. Two nice packings of candy are considered different if and only if they differ in the number of flavored packs, or in the number of variety packs, or in the number of pieces of candy per pack. Since Candy will sell her candy during the closing ceremony of this contest, you are urged to answer her question as soon as you can.
Input
Each test case is described using two lines. The first line contains an integer F indicating the number of flavors (2 ≤ F ≤ 10^5). The second line contains F integers Ci, indicating the number of pieces of candy of each flavor (1 ≤ Ci ≤ 10^9 for 1 ≤ i ≤ F).
The last test case is followed by a line containing one zero.
Output
For each test case output a line with an integer representing the number of different nice packings of candy, according to the rules given above.
Sample Input 1
3
15 33 21
2
1 1
2
2 2
2
3 3
3
1000000000 1000000000 1000000000
0
Sample Output 1
4
0
0
1
832519396
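The first four sample cases above can be reproduced with a naive enumeration over the pack size s and the number of variety packs v (a sketch only — it is far too slow for the last case, where Ci can reach 10^9; the intended solution needs a number-theoretic argument, e.g. over divisors of the pairwise differences of the Ci):

def count_nice_packings(candies):
    # Each pack holds s pieces; a variety pack holds m = s / F pieces of every
    # flavor, so s must be a multiple of F.  With v >= 1 variety packs, flavor i
    # needs (C_i - v*m) to be a positive multiple of s (at least 1 flavored pack).
    F = len(candies)
    smallest = min(candies)
    count = 0
    for s in range(F, smallest + 1, F):      # s = F*m, automatically >= 2 since F >= 2
        m = s // F
        v = 1
        while v * m + s <= smallest:         # room for a flavored pack of the scarcest flavor
            if all((c - v * m) % s == 0 and c - v * m >= s for c in candies):
                count += 1
            v += 1
    return count

for case in ([15, 33, 21], [1, 1], [2, 2], [3, 3]):
    print(count_nice_packings(case))         # 4, 0, 0, 1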
Source
• Problem setter: Pablo Ariel Heiber
|
2022-05-25 03:53:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2852000594139099, "perplexity": 1548.101925619501}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662578939.73/warc/CC-MAIN-20220525023952-20220525053952-00666.warc.gz"}
|
http://hellenicaworld.com/Science/Mathematics/en/MesocompacSpace.html
|
In mathematics, in the field of general topology, a topological space is said to be mesocompact if every open cover has a compact-finite open refinement.[1] That is, given any open cover, we can find an open refinement with the property that every compact set meets only finitely many members of the refinement.[2]
The following facts are true about mesocompactness:
Every compact space, and more generally every paracompact space is mesocompact. This follows from the fact that any locally finite cover is automatically compact-finite.
Every mesocompact space is metacompact, and hence also orthocompact. This follows from the fact that points are compact, and hence any compact-finite cover is automatically point finite.
Notes
Hart, Nagata & Vaughan, p200
Pearl, p23
|
2021-04-15 16:30:54
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8099350333213806, "perplexity": 760.5817513219565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038087714.38/warc/CC-MAIN-20210415160727-20210415190727-00064.warc.gz"}
|
http://math.stackexchange.com/questions/17966/how-can-we-sum-up-sin-and-cos-series-when-the-angles-are-in-arithmetic-pro
|
How can we sum up $\sin$ and $\cos$ series when the angles are in arithmetic progression?
How can we sum up $\sin$ and $\cos$ series when the angles are in A.P. (arithmetic progression)? For example, here is the sum of the $\cos$ series:
$$\large \sum_{k=0}^{n-1}\cos (a+k \cdot d) =\frac{\sin(n \times \frac{d}{2})}{\sin ( \frac{d}{2} )} \times \cos \biggl( \frac{ 2 a + (n-1)\cdot d}{2}\biggr)$$
There is a slight difference in case of $\sin$ ,which is: $$\large \sum_{k=0}^{n-1}\sin (a+k \cdot d) =\frac{\sin(n \times \frac{d}{2})}{\sin ( \frac{d}{2} )} \times \sin\biggl( \frac{2 a + (n-1)\cdot d}{2}\biggr)$$
How do we prove the above two identities?
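(A quick numerical sanity check of both closed forms, with arbitrarily chosen values of $a$, $d$ and $n$ — not part of the proof:)

import numpy as np

a, d, n = 0.7, 0.3, 12                     # arbitrary test values
k = np.arange(n)
factor = np.sin(n * d / 2) / np.sin(d / 2)
print(np.isclose(np.cos(a + k * d).sum(), factor * np.cos(a + (n - 1) * d / 2)))  # True
print(np.isclose(np.sin(a + k * d).sum(), factor * np.sin(a + (n - 1) * d / 2)))  # True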
-
You probably meant: $\sum_{k=0}^{n-1}\cos (a+k \cdot d) =\frac{\sin(n \times \frac{d}{2})}{\sin ( \frac{d}{2} )} \cdot \cos( \frac{ a + (n-1)\cdot d}{2})$ – Raskolnikov Jan 18 '11 at 9:57
Hint: reverse the series and sum it up term by term with the original series. So $\cos(a)+\cos(a+(n-1)\cdot d)$, etc... And use the Simpson formula for sums of cosines (and sines for the other identity). – Raskolnikov Jan 18 '11 at 10:03
Alternative hint: make an induction proof. – Raskolnikov Jan 18 '11 at 10:04
Simpson's formula?! Do you mean this: mathworld.wolfram.com/ProsthaphaeresisFormulas.html – Quixotic Jan 18 '11 at 10:04
Yes,that's the formulas I meant. – Raskolnikov Jan 18 '11 at 10:18
Let $$S = \sin{(a)} + \sin{(a+d)} + \cdots + \sin{(a+nd)}$$ Now multiply both sides by $\sin\frac{d}{2}$. Then you have $$S \times \sin\Bigl(\frac{d}{2}\Bigr) = \sin{(a)}\sin\Bigl(\frac{d}{2}\Bigr) + \sin{(a+d)}\cdot\sin\Bigl(\frac{d}{2}\Bigr) + \cdots + \sin{(a+nd)}\cdot\sin\Bigl(\frac{d}{2}\Bigr)$$
Now, note that $$\sin(a)\sin\Bigl(\frac{d}{2}\Bigr) = \frac{1}{2} \cdot \biggl[ \cos\Bigl(a-\frac{d}{2}\Bigr) - \cos\Bigl(a+\frac{d}{2}\Bigr)\biggr]$$ and $$\sin(a+d) \cdot \sin\Bigl(\frac{d}{2}\Bigr) = \frac{1}{2} \cdot \biggl[ \cos\Bigl(a + d -\frac{d}{2}\Bigr) - \cos\Bigl(a+d+\frac{d}{2}\Bigr) \biggr]$$
Then by doing the same thing you will have some terms cancelled out. You can easily see which terms are going to get Cancelled. Proceed and you should be able to get the formula.
I tried this by seeing this post. This has been worked for the case when $d=1$. Just take a look here:
-
Instead of brackets use parentheses in $\sin()$. – Quixotic Jan 20 '11 at 14:03
Writing $\cos x = \frac12 (e^{ix} + e^{-ix})$ will reduce the problem to computing two geometric sums.
-
and the $\sin$ one ? – Quixotic Jan 18 '11 at 10:25
The same trick, but with $\sin x=\frac{1}{2i} (e^{ix}-e^{-ix})$ instead. – Hans Lundmark Jan 18 '11 at 11:14
Or perhaps more simply, just sum up $e^{ix}$ and extract the real and imaginary parts... – Aryabhata Jan 18 '11 at 23:02
@Moron: That's true! – Hans Lundmark Jan 19 '11 at 7:04
|
2014-12-23 01:05:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9650548696517944, "perplexity": 568.6967184558391}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802777438.76/warc/CC-MAIN-20141217075257-00165-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://datascience.stackexchange.com/questions/55558/string-to-data-frame-column/55562
|
# String to Data frame column
I have 2 columns in a data frame, X and Y, and I have some string values stored in text, which I want to put into X and Y as shown in the example.
Example :
text=9 10 13 110 14 16
12 1 6 1 1 2
X Y
9 12
10 1
13 6
110 1
14 1
16 2
• Could you please explain more what you want to do? – Fatemeh Asgarinejad Jul 12 '19 at 9:22
• I want to create a data frame from list for digits . I have 2 column X and Y . in X I want to put all the 1st digits 9 ,10, 13, 110, 14, 16 and in Y I want to put 12, 1, 6 1 , 1, 2 so that both value of X map correctly – soumyajeet Jul 12 '19 at 9:29
If you are looking to hard-code it for only 2 columns, this can be achieved as follows:
import pandas as pd
df = pd.DataFrame()
text = '9 10 13 110 14 16 12 1 6 1 1 2'
text = text.split()
df['X'] = text[:int(len(text)/2)]
df['Y'] = text[int(len(text)/2):]
• 9 10 13 110 14 16 12 1 6 1 1 2 . These numbers are stored in a variable called text . I have to put in X and Y accordingly . – soumyajeet Jul 12 '19 at 9:33
• text=9 10 13 110 14 16 12 1 6 1 1 2 . these are the digits . I want to create a data frame , in X column I want to put 9 , 10 , 13, 110, 14, 16 and in Y = 12 1 6 1 1 2 – soumyajeet Jul 12 '19 at 9:39
I will assume your text is in two strings like this:
In [1]: import pandas as pd
In [2]: text1 = "9 10 13 110 14 16"
In [3]: text2 = "12 1 6 1 1 2"
A one-liner solution would be:
In [4] df = pd.DataFrame.from_records(zip(text1.split(" "), text2.split(" ")))
A Pandas Dataframe can be created by passing it one list (or tuple) for each row that you want in the table. This is done by using the from_records() method you see above.
So the steps that make the above line work:
1. split() each string on the spaces, to get a list of strings - one per value.
2. create each row that we want in the dataframe, which is each matched pair from the two lists of values. zip does exactly that for us.
3. Put the result into the from_records() method.
The final result:
In [7]: df
Out[7]:
0 1
0 9 12
1 10 1
2 13 6
3 110 1
4 14 1
5 16 2
Because we just gave the dataframe lists of strings, the values are still strings in the dataframe. If you want to actually use them as numbers, you can use the astype() method, like this
df_integers = df.astype(int) # now contains integers
df_floats = df.astype(float) # now contains floats, i.e. decimal values
If I am understanding the question correctly, the solution should be like this:
import pandas as pd
text = "9 10 13 110 14 16 12 1 6 1 1 2"
text = text.split()
X_part = text[:int(len(text)/2)]
Y_part = text[int(len(text)/2):]
df = pd.DataFrame(columns=['X', 'Y'])
df['X'] = X_part
df['Y'] = Y_part
Output:
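The printed frame is the same X/Y table shown in the question. For completeness, the same split can also be written in one line and cast to integers — a sketch:

import pandas as pd

text = "9 10 13 110 14 16 12 1 6 1 1 2".split()
half = len(text) // 2
df = pd.DataFrame({"X": text[:half], "Y": text[half:]}).astype(int)
print(df)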
|
2021-04-23 11:44:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3703746795654297, "perplexity": 776.3923612159608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039617701.99/warc/CC-MAIN-20210423101141-20210423131141-00503.warc.gz"}
|
http://mathhelpforum.com/calculus/3154-trig-functions.html
|
# Math Help - Trig Functions?
1. ## Trig Functions?
I'm wondering if someone could help me list the Trig functions series.
For example:
Tan(x) = Sin(x)/Cos(x)
Sec(x) = 1/Cos(x)
I'm not sure if this is posted in the right section but I need these information for my Calc homeworks.
2. Originally Posted by nirva
I'm wondering if someone could help me list the Trig functions series.
For example:
Tan(x) = Sin(x)/Cos(x)
Sec(x) = 1/Cos(x)
I'm not sure if this is posted in the right section but I need these information for my Calc homeworks.
Are you sure that the examples give are what is expected?
Others are:
Cot(x)=1/Tan(x)=Cos(x)/Sin(x)
Cosec(x)=1/Sin(x).
RonL
3. Originally Posted by CaptainBlack
Are you sure that the examples give are what is expected?
Others are:
Cot(x)=1/Tan(x)=Cos(x)/Sin(x)
Cosec(x)=1/Sin(x).
RonL
Because those examples are what was given to some question like
$\int \frac {sin(x) + sec(x)} {tan(x)} dx$
Where sin(x)/tan(x) becomes sin(x)/{sin(x)/cos(x)}
What is Sec(x)/Tan(x) equal to by the way? Csc(x)?
4. yes it is equal to Csc(x)
5. Hello, nirva!
Because those examples are what was given to some question like: $\int\frac{\sin x + \sec x}{\tan x}\,dx$
Where $\frac{\sin x}{\tan x }$ becomes $\frac{\sin x}{\frac{\sin x}{\cos x}}$
What is $\frac{\sec x }{\tan x}$ equal to? . $\csc x$ ?
Yes . . . $\displaystyle{\frac{\sec x}{\tan x}\;=\;\frac{\frac{1}{\cos x}}{\frac{\sin x}{\cos x}} \;=\;\frac{1}{\cos x}\cdot\frac{\cos x}{\sin x} \;=\;\frac{1}{\sin x} \;=\;\csc x }$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
By the way, I prefer to simplify the function like this:
. . . $\displaystyle{\frac{\sin x + \sec x}{\tan x} \;= \;\frac{\sin x + \frac{1}{\cos x}}{\frac{\sin x}{\cos x}} }$
Multiply top and bottom by $\cos x$ **
. . . $\frac{\cos x\left(\sin x + \frac{1}{\cos x}\right)}{\cos x\left(\frac{\sin x}{\cos x}\right)} \;= \;\frac{\sin x\cos x + 1}{\sin x}$
Then make two fractions:
. . . $\frac{\sin x\cos x}{\sin x} + \frac{1}{\sin x} \;= \;\cos x + \csc x$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
**
This is a technique used on a complex fraction,
. . a fraction with more than two "levels".
Example: $\frac{\frac{1}{3} + \frac{1}{2}}{\frac{1}{6} + \frac{1}{4}}$
Multiply top and bottom by the LCD of $all$ the denominators (12):
. . . $\frac{12\cdot\left(\frac{1}{3} + \frac{1}{2}\right)}{12\cdot\left(\frac{1}{6} + \frac{1}{4}\right)} \;= \;\frac{4 + 6}{2 + 3} \;= \;\frac{10}{5}\;=\;2$ . . . see?
6. Are you asking for the different trig identities? just wondering
|
2014-04-20 21:12:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9608438611030579, "perplexity": 474.56157983571245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://unimodular.net/blog/?cat=8
|
Category Archives: Combinatorics
Binomial identity and probability
The identity $\displaystyle \sum k \binom{n}{k} = n 2^{n-1}$ is pretty standard, and one can prove it algebraically by cancelling the k in the sum with the binomial coefficient and then using the binomial theorem summation or a combinatorial … Continue reading
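A throwaway numerical confirmation of the identity (not part of the original post):

from math import comb

n = 10
assert sum(k * comb(n, k) for k in range(n + 1)) == n * 2 ** (n - 1)
print("identity holds for n =", n)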
Double Factorial
Using the double factorial notation to denote the following $\displaystyle n!! = \prod_{i=0}^{\lfloor \frac{n-1}{2} \rfloor} (n-2i)$ seems pretty standard. (See Wolfram and Wiki.) So $4!! = 4 \times 2 = 8$ but $(4!)! = 24!$. … Continue reading
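The distinction is easy to see in code (illustrative only):

from math import factorial, prod

def double_factorial(n):
    # n!! = n * (n-2) * (n-4) * ... down to 1 or 2
    return prod(range(n, 0, -2))

print(double_factorial(4))        # 8
print(factorial(factorial(4)))    # (4!)! = 24!, a 24-digit number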
Pascal’s triangle
Perhaps the most famous triangle of all. Take your calculator, and compute $11, 11^2, 11^3, 11^4$ … cute! Can you explain why? It’s so famous that there’s lots of information on the web about it. Named after Pascal but … Continue reading
Sicherman Dice
We all know the possible outcomes of throwing two usual six-sided dice. Have you ever wondered if there are other possible types of dice, i.e. still six-sided but with different face values, which gives the same outcome? The answer is … Continue reading
Posted in Combinatorics, Probability | 4 Comments
Lyness
Intrigued by the following very pretty combinatorial identity attributed to R.C. Lyness. $\sum_{r=0}^n \binom{n}{r} \binom{p}{s+r} \binom{q+r}{m+n} = \sum_{r=0}^n \binom{n}{r} \binom{q}{m+r} \binom{p+r}{s+n}$ Note how it interchanges p with q and m with s. Not much information on this person is … Continue reading
Learned a cool trick today. The finite projective plane of order n has $n^2 + n + 1$ points, $n^2 + n + 1$ lines, $n + 1$ points on each line, $n + 1$ lines passing each point. The … Continue reading
|
2013-05-20 22:12:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8372060656547546, "perplexity": 986.6354859161967}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00070-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://publikationen.bibliothek.kit.edu/1000075656
|
# Operator estimates for the crushed ice problem
Khrabustovskyi, Andrii; Post, Olaf
Abstract:
Let $\Delta_{\Omega_\varepsilon}$ be the Dirichlet Laplacian in the domain $\Omega_\varepsilon := \Omega \setminus (\cup_{i} D_{i\varepsilon})$. Here $\Omega \subset \mathbb{R}^n$ and $\{D_{i\varepsilon}\}_{i}$ is a family of tiny identical holes ("ice pieces") distributed periodically in $\mathbb{R}^n$ with period $\varepsilon$. We denote by $\mathrm{cap}(D_{i\varepsilon})$ the capacity of a single hole. It was known for a long time that $-\Delta_{\Omega_\varepsilon}$ converges to the operator $-\Delta_{\Omega} + q$ in the strong resolvent sense provided the limit $q := \lim_{\varepsilon\to 0} \mathrm{cap}(D_{i\varepsilon})\,\varepsilon^{-n}$ exists and is finite. In the current contribution we improve this result by deriving estimates for the rate of convergence in terms of operator norms. As an application, we establish the uniform convergence of the corresponding semi-groups and (for bounded $\Omega$) an estimate for the difference of the $k$-th eigenvalues of $-\Delta_{\Omega_\varepsilon}$ and $-\Delta_{\Omega} + q$. Our proof relies on an abstract scheme for studying the convergence of operators in varying Hilbert spaces developed previously by the second author.
Associated KIT institution(s): Institut für Analysis (IANA); Sonderforschungsbereich 1173 (SFB 1173) · Publication type: Research report · Year: 2017 · Language: English · Identifiers: ISSN 2365-662X; URN: urn:nbn:de:swb:90-756565; KITopen-ID: 1000075656 · Publisher: KIT, Karlsruhe · Extent: 22 pp. · Series: CRC 1173; 2017/24 · Keywords: crushed ice problem, homogenization, norm resolvent convergence, operator estimates, varying Hilbert spaces
|
2018-12-17 02:41:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8456614017486572, "perplexity": 5560.375084401136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828056.99/warc/CC-MAIN-20181217020710-20181217042710-00197.warc.gz"}
|
http://m.thermalfluidscentral.org/encyclopedia/index.php?title=Coupled_thermal_and_concentration_entry_effects&diff=5215&oldid=5209
|
# Coupled thermal and concentration entry effects
There are many transport phenomena problems in which heat and mass transfer simultaneously occur. In some cases, such as sublimation and vapor deposition, they are coupled. These problems are usually treated as a single phase. However, coupled heat and mass transfer should both be considered even though they are modeled as being single phase. In this section, coupled forced internal convection in a circular tube will be presented for both adiabatic and constant wall heat flux.
## Sublimation inside an Adiabatic Tube
In addition to the external sublimation discussed in subsection 5.6.2, internal sublimation is also very important. Sublimation inside an adiabatic and externally heated tube will be analyzed in the current and the following subsections. The physical model of the problem under consideration is shown in Fig. 5.7 (Zhang and Chen, 1990). The inner surface of a circular tube with radius ro is coated with a layer of sublimable material which will sublime when gas flows through the tube. The fully-developed gas enters the tube with a uniform inlet mass fraction of the sublimable substance, ω0, and a uniform inlet temperature, T0. Since the outer wall surface is adiabatic, the latent heat of sublimation is supplied by the gas flow inside the tube; this in turn causes the change in gas temperature inside the tube. It is assumed that the flow inside the tube is incompressible laminar flow with constant properties. In order to solve the problem analytically, the following assumptions are made:
1. The entrance mass fraction, ω0, is assumed to be equal to the saturation mass fraction at the entry temperature, T0.
2. The saturation mass fraction can be expressed as a linear function of the corresponding temperature.
3. The mass transfer rate is small enough that the transverse velocity components can be neglected.
The fully developed velocity profile in the tube is
$u=2{{u}_{m}}\left[ 1-{{\left( \frac{r}{{{r}_{o}}} \right)}^{2}} \right]$ (1)
where um is the mean velocity of the gas flow inside the tube. Neglecting axial conduction and diffusion, the energy and mass transfer equations are
$ur\frac{\partial T}{\partial x}=\alpha \frac{\partial }{\partial r}\left( r\frac{\partial T}{\partial r} \right)$ (1)
$ur\frac{\partial \omega }{\partial x}=D\frac{\partial }{\partial r}\left( r\frac{\partial \omega }{\partial r} \right)$ (1)
where D is mass diffusivity. Equations (5.82) and (5.83) are subjected to the following boundary conditions:
$T={{T}_{0}}\begin{matrix} , & x=0 \\\end{matrix}$ (1)
$\omega ={{\omega }_{0}}\begin{matrix} , & x=0 \\\end{matrix}$ (1)
$\frac{\partial T}{\partial r}=\frac{\partial \omega }{\partial r}=0\begin{matrix} , & r=0 \\\end{matrix}$ (1)
$-k\frac{\partial T}{\partial r}=\rho D{{h}_{sv}}\frac{\partial \omega }{\partial r}\begin{matrix} , & r={{r}_{o}} \\\end{matrix}$ (1)
Equation (5.87) implies that the latent heat of sublimation is supplied as the gas flows inside the tube. Another boundary condition at the tube wall is obtained by setting the mass fraction at the wall as the saturation mass fraction at the wall temperature (Kurosaki, 1973). According to the second assumption, the mass fraction and temperature at the inner wall have the following relationship:
$\omega =aT+b\begin{matrix} , & r={{r}_{o}} \\\end{matrix}$ (1)
where a and b are constants. The following non-dimensional variables are then introduced:
\begin{align} & \begin{matrix} \eta =\frac{r}{{{r}_{o}}}, & \xi =\frac{x}{{{r}_{0}}\text{Pe}}, & \text{Le}=\frac{\alpha }{D}, & \operatorname{Re}=\frac{2{{u}_{m}}{{r}_{o}}}{\nu } \\\end{matrix}, \\ & \begin{matrix} \text{Pe}=\frac{2{{u}_{m}}{{r}_{0}}}{\alpha }, & \theta =\frac{T-{{T}_{f}}}{{{T}_{0}}-{{T}_{f}}}, & \varphi =\frac{\omega -{{\omega }_{f}}}{{{\omega }_{0}}-{{\omega }_{f}}} & {} \\\end{matrix} \\ \end{align} (1)
where Tf and ωf are temperature and mass fraction of the sublimable substance, respectively, after heat and mass transfer are fully developed, and Le is Lewis number. Equations (5.82) – (5.88) then become
$\eta (1-{{\eta }^{2}})\frac{\partial \theta }{\partial \xi }=\frac{\partial }{\partial \eta }\left( \eta \frac{\partial \theta }{\partial \eta } \right)$ (1)
$\eta (1-{{\eta }^{2}})\frac{\partial \varphi }{\partial \xi }=\frac{1}{\text{Le}}\frac{\partial }{\partial \eta }\left( \eta \frac{\partial \varphi }{\partial \eta } \right)$ (1)
$\theta =\varphi =1\begin{matrix} , & \xi =0 \\\end{matrix}$ (1)
$\frac{\partial \theta }{\partial \eta }=\frac{\partial \varphi }{\partial \eta }=0\begin{matrix} , & \eta =0 \\\end{matrix}$ (1)
$-\frac{\partial \theta }{\partial \eta }=\frac{1}{\text{Le}}\frac{\partial \varphi }{\partial \eta }\begin{matrix} , & \eta =1 \\\end{matrix}$ (1)
$\varphi =\left( \frac{a{{h}_{sv}}}{{{c}_{p}}} \right)\theta \begin{matrix} , & \eta =1 \\\end{matrix}$ (1)
The heat and mass transfer eqs. (5.90) and (5.91) are independent, but their boundary conditions are coupled by eqs. (5.94) and (5.95). The solution of eqs. (5.90) and (5.91) can be obtained via separation of variables. It is assumed that the solution of θ can be expressed as a product of the function of η and a function of ξ, i.e.,
θ = Θ(η)Γ(ξ) (1)
Substituting eq. (5.96) into eq. (5.90), the energy equation becomes
$\frac{{{\Gamma }'}}{\Gamma }=\frac{\frac{d}{d\eta }\left( \eta \frac{d\Theta }{d\eta } \right)}{\eta (1-{{\eta }^{2}})\Theta }=-{{\beta }^{2}}$ (1)
where β is the eigenvalue for the energy equation. Equation (5.97) can be rewritten as two ordinary differential equations:
Γ' + β2Γ = 0 (1)
$\frac{d}{d\eta }\left( \eta \frac{d\Theta }{d\eta } \right)+{{\beta }^{2}}\eta (1-{{\eta }^{2}})\Theta =0$ (1)
The solution of eq. (5.98) is
$\Gamma ={{C}_{1}}{{e}^{-{{\beta }^{2}}\xi }}$ (1)
The boundary condition of eq. (5.99) at η = 0 is
Θ'(0) = 0 (1)
The dimensionless temperature is then
$\theta ={{C}_{1}}\Theta (\eta ){{e}^{-{{\beta }^{2}}\xi }}$ (1)
Similarly, the dimensionless mass fraction is
$\varphi ={{C}_{2}}\Phi (\eta ){{e}^{-{{\gamma }^{2}}\xi }}$ (1)
where γ is the eigenvalue for the conservation of species equation, and Φ(η)
satisfies
$\frac{d}{d\eta }\left( \eta \frac{d\Phi }{d\eta } \right)+\text{Le}{{\gamma }^{2}}\eta (1-{{\eta }^{2}})\Phi =0$ (1)
and the boundary condition of eq. (5.104) at η = 0 is
Φ'(0) = 0 (1)
Substituting eqs. (5.102) – (5.103) into eqs. (5.94) – (5.95), one obtains β = γ (5.106)
$-\left( \frac{a{{h}_{sv}}}{{{c}_{p}}} \right)\frac{\Theta (1)}{\Phi (1)}=\text{Le}\frac{{\Theta }'(1)}{{\Phi }'(1)}$ (1)
To solve eqs. (5.99) and (5.104) using the Runge-Kutta method it is necessary to specify two boundary conditions for each. However, there is only one boundary condition for each: eqs. (5.101) and (5.105), respectively. Since both eqs. (5.99) and (5.104) are homogeneous, one can assume that the other boundary conditions are Θ(0) = Φ(0) = 1 and then solve eqs. (5.99) and (5.104) numerically. It is necessary to point out that the eigenvalue, β, is still unknown at this point and must be obtained from eq. (5.107) (a rough numerical sketch of this shooting procedure is given at the end of this subsection). There will be a series of values of β which satisfy eq. (5.107), and for each value βn there is one set of corresponding Θn and Φn functions $(n=1,2,3,\cdots )$. If we use any one of the eigenvalues, βn, and the corresponding eigenfunctions, Θn and Φn, in eqs. (5.102) and (5.103), the solutions of eqs. (5.90) and (5.91) become
$\theta ={{C}_{1}}{{\Theta }_{n}}(\eta ){{e}^{-{{\beta }_{n}}^{2}\xi }}$ (1)
$\varphi ={{C}_{2}}{{\Phi }_{n}}(\eta ){{e}^{-\beta _{n}^{2}\xi }}$ (1)
which satisfy all boundary conditions except those at ξ = 0. In order to satisfy boundary conditions at ξ = 0, one can assume that the final solutions of eqs. (5.90) and (5.91) are
$\theta =\sum\limits_{n=1}^{\infty }{{{G}_{n}}{{\Theta }_{n}}(\eta ){{e}^{-{{\beta }_{n}}^{2}\xi }}}$ (1)
$\varphi =\sum\limits_{n=1}^{\infty }{{{H}_{n}}{{\Phi }_{n}}(\eta ){{e}^{-\beta _{n}^{2}\xi }}}$ (1)
where Gn and Hn can be obtained by substituting eqs. (5.110) and (5.111) into eq. (5.92), i.e.,
$1=\sum\limits_{n=1}^{\infty }{{{G}_{n}}{{\Theta }_{n}}(\eta )}$ (5.112)
$1=\sum\limits_{n=1}^{\infty }{{{H}_{n}}{{\Phi }_{n}}(\eta )}$ (1)
Due to the orthogonal nature of the eigenfunctions Θn and Φn, expressions of Gn and Hn can be obtained by
${{G}_{n}}=\frac{\int_{0}^{1}{\eta (1-{{\eta }^{2}}){{\Theta }_{n}}(\eta )d\eta }+\left[ \frac{{{\Theta }_{n}}(1)}{{{\Phi }_{n}}(1)} \right]\int_{0}^{1}{\eta (1-{{\eta }^{2}}){{\Phi }_{n}}(\eta )d\eta }}{\int_{0}^{1}{\eta (1-{{\eta }^{2}})\left\{ \Theta _{n}^{2}(\eta )+\left( A{{h}_{sv}}/{{c}_{p}} \right){{\left[ \frac{{{\Theta }_{n}}(1)}{{{\Phi }_{n}}(1)} \right]}^{2}}\Phi _{n}^{2}(\eta ) \right\}d\eta }}$ (1)
${{H}_{n}}=\frac{A{{h}_{sv}}}{{{c}_{p}}}\frac{{{\Theta }_{n}}(1)}{{{\Phi }_{n}}(1)}{{G}_{n}}$ (1)
The Nusselt number due to convection and the Sherwood number due to diffusion are
$\text{Nu}=\frac{-k{{\left. \frac{\partial T}{\partial r} \right|}_{r={{r}_{o}}}}}{{{T}_{m}}-{{T}_{w}}}\frac{2{{r}_{o}}}{k}=-\frac{2}{{{\theta }_{m}}-{{\theta }_{w}}}\sum\limits_{n=1}^{\infty }{{{G}_{n}}{{e}^{-\beta _{n}^{2}\xi }}{{{{\Theta }'}}_{n}}(1)}$ (1)
$\text{Sh}=\frac{-D{{\left. \frac{\partial \omega }{\partial r} \right|}_{r={{r}_{o}}}}}{{{\omega }_{m}}-{{\omega }_{w}}}\frac{2{{r}_{o}}}{D}=-\frac{2}{{{\varphi }_{m}}-{{\varphi }_{w}}}\sum\limits_{n=1}^{\infty }{{{H}_{n}}{{e}^{-\beta _{n}^{2}\xi }}{{{{\Phi }'}}_{n}}(1)}$ (1)
where Tm and ωm are mean temperature and mean mass fraction in the tube. Figure 5.8 shows heat and mass transfer performance during sublimation inside an adiabatic tube. For all cases, both Nusselt and Sherwood numbers become constant when ξ is greater than a certain number, thus indicating that heat and mass transfer in the tube have become fully developed. The length of the entrance flow increases with an increasing Lewis number. While the fully developed Nusselt number increases with an increasing Lewis number, the Sherwood number decreases with an increasing Lewis number, because a larger Lewis number indicates larger thermal diffusivity or lower mass diffusivity. The effect of (ahsv / cp) on the Nusselt and Sherwood numbers is relatively insignificant: both the Nusselt and Sherwood numbers increase with increasing (ahsv / cp) for Le < 1, but increasing (ahsv / cp) for Le > 1 results in decreasing Nusselt and Sherwood numbers.
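A rough numerical sketch of the shooting procedure described above (not from the original article; the parameter values Le = 2 and a·hsv/cp = 1 and the β scan range are illustrative assumptions). It integrates eqs. (5.99) and (5.104) from η ≈ 0 to η = 1 with Θ(0) = Φ(0) = 1 and Θ'(0) = Φ'(0) = 0, and searches for values of β satisfying the eigencondition (5.107), rewritten in product form so that no division by zero can occur:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

Le, ahsv_cp = 2.0, 1.0          # assumed illustrative parameters

def endpoint(beta, c):
    # Integrate (eta*Y')' + c*beta^2*eta*(1-eta^2)*Y = 0 with Y(0)=1, Y'(0)=0.
    # c = 1 gives Theta (eq. 5.99); c = Le gives Phi (eq. 5.104).
    def rhs(eta, y):
        Y, dY = y
        return [dY, -dY / eta - c * beta**2 * (1.0 - eta**2) * Y]
    sol = solve_ivp(rhs, (1e-6, 1.0), [1.0, 0.0], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]          # Y(1), Y'(1)

def residual(beta):
    T1, dT1 = endpoint(beta, 1.0)
    P1, dP1 = endpoint(beta, Le)
    # eq. (5.107): -(a*hsv/cp)*Theta(1)/Phi(1) = Le*Theta'(1)/Phi'(1)
    # i.e. (a*hsv/cp)*Theta(1)*Phi'(1) + Le*Theta'(1)*Phi(1) = 0
    return ahsv_cp * T1 * dP1 + Le * dT1 * P1

# locate sign changes of the residual on a coarse grid, then refine each root
grid = np.linspace(0.5, 15.0, 300)
vals = [residual(b) for b in grid]
eigenvalues = [brentq(residual, a, b)
               for a, b, fa, fb in zip(grid, grid[1:], vals, vals[1:])
               if fa * fb < 0]
print(eigenvalues[:4])           # first few eigenvalues beta_n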
## Sublimation inside a Tube Subjected to External Heating
When the outer wall of a tube whose inner surface is coated with a layer of sublimable material is heated by a uniform heat flux, q'' (see Fig. 5.9), the latent heat will be supplied by part of the heat flux at the wall. The remaining part of the heat flux will be used to heat the gas flowing through the tube. The problem can be described by eqs. (5.81) – (5.88), except that the boundary condition at the inner wall of the tube is replaced by
$\rho {{h}_{sv}}D\frac{\partial \omega }{\partial r}+k\frac{\partial T}{\partial r}={q}''\text{ at }r={{r}_{o}}$ (1)
where the thermal resistance of the tube wall is neglected because the tube wall and the coated layer are very thin.
The governing equations for sublimation inside a tube heated by a uniform heat flux can be non-dimensionalized by using the dimensionless variables defined in eq. (5.89), except the following:
$\theta =\frac{k(T-{{T}_{0}})}{{q}''{{r}_{o}}},\quad \varphi =\frac{k{{h}_{sv}}(\omega -{{\omega }_{sat,0}})}{{{c}_{p}}{q}''{{r}_{o}}}$ (1)
where ωsat,0 is the saturation mass fraction corresponding to the inlet temperature T0. The resulting dimensionless governing equations and boundary conditions are
$\eta (1-{{\eta }^{2}})\frac{\partial \theta }{\partial \xi }=\frac{\partial }{\partial \eta }\left( \eta \frac{\partial \theta }{\partial \eta } \right)$ (1)
$\eta (1-{{\eta }^{2}})\frac{\partial \varphi }{\partial \xi }=\frac{1}{\text{Le}}\frac{\partial }{\partial \eta }\left( \eta \frac{\partial \varphi }{\partial \eta } \right)$ (1)
$\theta =0\begin{matrix} , & \xi =0 \\\end{matrix}$ (1)
$\varphi ={{\varphi }_{0}}\begin{matrix} , & \xi =0 \\\end{matrix}$ (1)
$\frac{\partial \theta }{\partial \eta }=\frac{\partial \varphi }{\partial \eta }=0\begin{matrix} , & \eta =0 \\\end{matrix}$ (1)
$\frac{\partial \theta }{\partial \eta }+\frac{1}{\text{Le}}\frac{\partial \varphi }{\partial \eta }=1\begin{matrix} , & \eta =1 \\\end{matrix}$ (1)
$\varphi =\left( \frac{a{{h}_{sv}}}{{{c}_{p}}} \right)\theta \begin{matrix} , & \eta =1 \\\end{matrix}$ (1)
where ${{\varphi }_{0}}=k{{h}_{sv}}({{\omega }_{0}}-{{\omega }_{sat,0}})/({{c}_{p}}{q}''{{r}_{o}})$ in eq. (5.123). The sublimation problem under consideration is not homogeneous, because eq. (5.125) is a nonhomogeneous boundary condition. The solution of the problem consists of its particular (fully developed) solution plus the solution of the corresponding homogeneous problem (Zhang and Chen, 1992):
θ(ξ,η) = θ1(ξ,η) + θ2(ξ,η) (1)
$\varphi (\xi ,\eta )={{\varphi }_{1}}(\xi ,\eta )+{{\varphi }_{2}}(\xi ,\eta )$ (1)
While the fully developed solutions of temperature and mass fraction, θ1(ξ,η) and ${{\varphi }_{1}}(\xi ,\eta )$, respectively, must satisfy eqs. (5.120) – (5.121) and (5.124) – (5.126), the corresponding homogeneous solutions of the temperature and mass fraction, θ2(ξ,η) and ${{\varphi }_{2}}(\xi ,\eta )$, must satisfy eqs. (5.120), (5.121), (5.124), and (5.126), as well as the following conditions:
${{\theta }_{2}}=-{{\theta }_{1}}(\xi ,\eta )\begin{matrix} , & \xi =0 \\\end{matrix}$ (1)
${{\varphi }_{2}}={{\varphi }_{0}}-{{\varphi }_{1}}(\xi ,\eta )\begin{matrix} , & \xi =0 \\\end{matrix}$ (1)
$\frac{\partial {{\theta }_{2}}}{\partial \eta }+\frac{1}{\text{Le}}\frac{\partial {{\varphi }_{2}}}{\partial \eta }=0\begin{matrix} , & \eta =1 \\\end{matrix}$ (1)
The fully developed profiles of the temperature and mass fraction are
\begin{align} & {{\theta }_{1}}=\frac{1}{1+a{{h}_{sv}}/{{c}_{p}}}\left[ 4\xi +{{\eta }^{2}}\left( 1-\frac{1}{4}{{\eta }^{2}} \right)+{{\varphi }_{0}} \right. \\ & \text{ }+\left. \frac{11\text{L}{{\text{e}}_{{}}}a{{h}_{sv}}/{{c}_{p}}-18a{{h}_{sv}}/{{c}_{p}}-7}{24(1+a{{h}_{sv}}/{{c}_{p}})} \right] \\ \end{align} (1)
\begin{align} & {{\varphi }_{1}}=\frac{a{{h}_{sv}}/{{c}_{p}}}{1+a{{h}_{sv}}/{{c}_{p}}}\left[ 4\xi +\text{L}{{\text{e}}_{{}}}{{\eta }^{2}}\left( 1-\frac{1}{4}{{\eta }^{2}} \right)+{{\varphi }_{0}} \right. \\ & \left. \text{ }-\frac{7L{{e}_{{}}}a{{h}_{sv}}/{{c}_{p}}+18Le-11}{24(1+a{{h}_{sv}}/{{c}_{p}})} \right] \\ \end{align} (1)
The solution of the corresponding homogeneous problem can be obtained by separation of variables:
${{\theta }_{2}}=\sum\limits_{n=1}^{\infty }{{{G}_{n}}{{\Theta }_{n}}(\eta ){{e}^{-{{\beta }_{n}}^{2}\xi }}}$ (1)
${{\varphi }_{2}}=\sum\limits_{n=1}^{\infty }{{{H}_{n}}{{\Phi }_{n}}(\eta ){{e}^{-\beta _{n}^{2}\xi }}}$ (1)
where
${{G}_{n}}=\frac{\int_{0}^{1}{\eta (1-{{\eta }^{2}}){{\theta }_{2}}(0,\eta ){{\Theta }_{n}}(\eta )d\eta }+\left[ \frac{{{\Theta }_{n}}(1)}{{{\Phi }_{n}}(1)} \right]\int_{0}^{1}{\eta (1-{{\eta }^{2}}){{\varphi }_{2}}(0,\eta ){{\Phi }_{n}}(\eta )d\eta }}{\int_{0}^{1}{\eta (1-{{\eta }^{2}})\left\{ \Theta _{n}^{2}(\eta )+\left( a{{h}_{sv}}/{{c}_{p}} \right){{\left[ \frac{{{\Theta }_{n}}(1)}{{{\Phi }_{n}}(1)} \right]}^{2}}\Phi _{n}^{2}(\eta ) \right\}d\eta }}$ (1)
${{H}_{n}}=\frac{a{{h}_{sv}}}{{{c}_{p}}}\frac{{{\Theta }_{n}}(1)}{{{\Phi }_{n}}(1)}{{G}_{n}}$ (1)
and βn is the eigenvalue of the corresponding homogeneous problem. The Nusselt number based on the total heat flux at the external wall is
\begin{align} & \text{Nu}=\frac{2{q}''{{r}_{0}}}{k({{T}_{w}}-{{T}_{m}})}=\frac{2}{{{\theta }_{w}}-{{\theta }_{m}}} \\ & =\frac{2(1+A{{h}_{sv}}/{{c}_{p}})}{\frac{11}{24}+\left( 1+\frac{a{{h}_{sv}}}{{{c}_{p}}} \right)\sum\limits_{n=1}^{\infty }{{{G}_{n}}{{e}^{-{{\beta }_{n}}^{2}\xi }}\left[ {{\Theta }_{n}}(1)+\frac{4}{\beta _{n}^{2}}{{{{\Theta }'}}_{n}}(1) \right]}} \\ \end{align} (1)
where θw and θm are dimensionless wall and mean temperatures, respectively.
The Nusselt number based on the convective heat transfer coefficient is
\begin{align} & \text{N}{{\text{u}}^{*}}=\frac{2{{h}_{x}}{{r}_{o}}}{k}=\frac{2{{r}_{o}}}{{{T}_{w}}-{{T}_{m}}}{{\left( \frac{\partial T}{\partial r} \right)}_{r={{r}_{o}}}}=\frac{2}{{{\theta }_{w}}-{{\theta }_{m}}}{{\left( \frac{\partial \theta }{\partial \eta } \right)}_{\eta =1}} \\ & =\frac{2+2(1+a{{h}_{sv}}/{{c}_{p}})\sum\limits_{n=1}^{\infty }{{{G}_{n}}{{e}^{-{{\beta }_{n}}^{2}\xi }}{{{{\Theta }'}}_{n}}(1)}}{\frac{11}{24}+\left( 1+\frac{a{{h}_{sv}}}{{{c}_{p}}} \right)\sum\limits_{n=1}^{\infty }{{{G}_{n}}{{e}^{-{{\beta }_{n}}^{2}\xi }}\left[ {{\Theta }_{n}}(1)+\frac{4}{\beta _{n}^{2}}{{{{\Theta }'}}_{n}}(1) \right]}} \\ \end{align} (1)
The Sherwood number is
$\text{Sh}=\frac{2{{h}_{m,x}}{{r}_{0}}}{D}=\frac{2{{r}_{0}}}{{{\omega }_{w}}-{{\omega }_{m}}}{{\left. \frac{\partial \omega }{\partial r} \right|}_{r={{r}_{o}}}}=\frac{2}{{{\varphi }_{w}}-{{\varphi }_{m}}}{{\left. \frac{\partial \varphi }{\partial \eta } \right|}_{\eta =1}}$
$=\frac{2\text{Le}\frac{a{{h}_{sv}}}{{{c}_{p}}}+2(1+\frac{a{{h}_{sv}}}{{{c}_{p}}})\sum\limits_{n=1}^{\infty }{{{H}_{n}}{{e}^{-{{\beta }_{n}}^{2}\xi }}{{{{\Phi }'}}_{n}}(1)}}{\frac{11}{24}\text{Le}\frac{a{{h}_{sv}}}{{{c}_{p}}}+\left( 1+\frac{a{{h}_{sv}}}{{{c}_{p}}} \right)\sum\limits_{n=1}^{\infty }{{{G}_{n}}{{e}^{-{{\beta }_{n}}^{2}\xi }}\left[ {{\Phi }_{n}}(1)+\frac{4}{\beta _{n}^{2}\text{Le}}{{{{\Phi }'}}_{n}}(1) \right]}}$ (1)
When the heat and mass transfer are fully developed, eqs. (5.138) – (5.140) reduce to
$\mathrm{Nu}=\left(1+\frac{ah_{sv}}{c_p}\right)\frac{48}{11}$
$\mathrm{Nu}^{*}=\frac{48}{11}$
$\mathrm{Sh}=\frac{48}{11}$
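These fully developed limits are easy to evaluate numerically. The short Python sketch below is added here only for illustration; the function name and the chosen parameter values are mine, and they simply evaluate the closed-form limits above.

import math  # not strictly needed; kept for clarity that only arithmetic is involved

def fully_developed_limits(a_hsv_over_cp):
    """Return (Nu, Nu*, Sh) in the fully developed region."""
    nu = (1.0 + a_hsv_over_cp) * 48.0 / 11.0   # Nusselt number based on total heat flux
    nu_star = 48.0 / 11.0                      # Nusselt number based on convective heat flux
    sh = 48.0 / 11.0                           # Sherwood number
    return nu, nu_star, sh

for p in (0.1, 1.0):   # same parameter values as discussed for Fig. 5.10
    nu, nu_star, sh = fully_developed_limits(p)
    print(f"a*h_sv/c_p = {p}: Nu = {nu:.3f}, Nu* = {nu_star:.3f}, Sh = {sh:.3f}")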
The variation of the local Nusselt number based on the total heat flux with the dimensionless location ξ is shown in Fig. 5.10. It is evident from Fig. 5.10(a) that Nu increases significantly with increasing ahsv/cp. The Lewis number has very little effect on Nu when ahsv/cp = 0.1, but its effect becomes noticeable near the entrance when ahsv/cp = 1.0 and gradually diminishes toward the exit. As seen in Fig. 5.10(b), φ0 has almost no influence on Nu over nearly the entire region when ahsv/cp = 1.0; when ahsv/cp = 0.1, Nu increases slightly at small ξ. The variation of the local Nusselt number based on the convective heat flux, Nu*, is shown in Fig. 5.11(a). Only a single curve is obtained, which implies that Nu* remains unchanged when the mass transfer parameters are varied; its value is exactly the same as for the process without sublimation. Figure 5.11(b) shows the Sherwood number for various parameters. It is evident that ahsv/cp and φ0 have no effect on Sh, and Le has only an insignificant effect on Sh in the entry region.
https://www.jkcs.or.kr/journal/view.php?number=6877
Journal of the Korean Ceramic Society 2012;49(6): 642. doi: https://doi.org/10.4191/kcers.2012.49.6.642
Preparation and Electrical Properties of TiO2 Films Prepared by Sputtering for a Pulse Power Capacitor
Sang-Shik Park, School of Nano-Materials Engineering, Kyungpook National University
ABSTRACT: $TiO_2$ thin films for a pulse power capacitor were deposited by RF magnetron sputtering, and the effects of the deposition gas ratio and film thickness on the crystallization and electrical properties of the $TiO_2$ films were investigated. The crystal structure of $TiO_2$ films deposited on Si substrates at room temperature changed from the rutile to the anatase phase with an increase in the oxygen partial pressure, and the crystallinity of the films improved with increasing thickness. However, $TiO_2$ films deposited on a PET substrate showed an amorphous structure, unlike those deposited on a Si substrate. X-ray photoelectron spectroscopy (XPS) analysis revealed the formation of chemically stable $TiO_2$ films. The dielectric constant of the $TiO_2$ films as a function of frequency changed significantly with film thickness; the films showed a dielectric constant of 100–110 at 1 kHz, although their dissipation factors were relatively high. Films with a thickness of about 1000 nm showed a breakdown strength exceeding 1000 kV/cm.
Key words: Pulse power capacitor, $TiO_2$, Breakdown strength, RF sputtering, Dielectric constant
https://fr.bluerocktel.com/2/30065222dca835eb5c767d4b2463-equation-of-circle-with-radius-1
View question - Write the equation of the circle centered at (-5 , -10 ) that passes through (-11,-9). Question: Circle 1 has the equation .
It is of the form |z - z_0| = r and so it represents a circle whose centre and radius are (2, 1) and 3, respectively. Amit wants to determine whether (2, -2) is also on the circle. A real-life picture of the Bohr radius: while climbing a ladder you can't stop between rungs, you can only stand on specific steps, with fixed spacing between them. Now we will see the variation in the standard equation of a circle. Case 1: when the centre of the circle is at the origin (0, 0) and the radius is r, then h = 0 and k = 0; substituting into the standard form shows that this equation is the same as the general equation of a circle, just written in a different form. Solution: |z - 2 - i| = 3, i.e. |z - (2 + i)| = 3. Practice problems with worked-out solutions, pictures and illustrations. Step 1: type the circle's radius and center in the corresponding fields. The above can be derived from the intrinsic (natural) differential equation of a circle. Write the standard equation of the circle with center (4, 6) and radius 5. Since x_0 = x, x^2 + (y - 1)^2 = 4.
Example. The horizontal (h) and vertical (k) translations represent the center of the circle: x^2 + y^2 = 81. That is, if the point satisfies the equation of the circle, it lies on the circle. To write the equation of a circle in general form, simply expand the two brackets in its standard form (x - a)^2 + (y - b)^2 = r^2. Given the center of a circle (x1, y1) and its radius r, find the equation of the circle having center (x1, y1) and radius r. Output: x^2 + y^2. Solution for: find the equation of the circle with radius 1 and center (4, 1). Find the center and radius for the circle with the given equation. pi = A/r^2, where A is the area of a circle and r is the radius; more generally, pi = A/(ab), where A is the area enclosed by an ellipse with semi-major axis a and semi-minor axis b.
When we work with a circle, there are several things to work out. Example: Convert the polar equation of a circle r = - 4 cos q into Cartesian coordinates. Step 1: Move the constant to the right side of the equation. Show Step-by-step Solutions. Worked example to create an equation for the tangent of a circle. 10th grade. Find the distance from the center to (2, -2). Search: Circle Geometry Practice Test. Step 2: Identify the radius of the circle, and let r equal this value. (2) d d s = d ( + ) d s = 1 a. A circle is a shape consisting of all points in a plane that are at a given distance from a given point, the centre.Equivalently, it is the curve traced out by a point that moves in a plane so that its distance from a given point is constant.The distance between any point of the circle and the centre is called the radius.Usually, the radius is required to be a positive number. Ex 11.1, 3 Find the equation of the circle with centre (1/2,1/4) and radius 1/12 We know that equation of a circle is (x h)2 + (y k)2 = r2 Where (h, k) is the centre & r is the radius Here Example 1: Find the radius of the circle whose center is O (x 1 2) 2 1 (y 2 4) 2 5 16Graph the given equation of the circle. answer choices . Examples. Find the Equation of the Circle (0,0) , r=3. Example Find the equation of the circle with centre $$(2, - 3)$$ and radius $$\sqrt 7$$ . The center is simply the midpoint of the given points. 11.7 Equations of Circles 629 3. algebraic-geometry circles. Hence, the equation with R unspecified is the general equation for the circle. Find the Equation of the Circle With:Centre (0, 1) and Radius 1. Find the equation of the circle with centre (1, 3 ) and radius 3 units Arc measure: The angle that an arc makes at the center of the circle of which it is a part . Completing the square to write equation in standard form of a circle. When the diameter is known, the formula is Radius = Diameter/ 2. Write the equation of the circle 2 x 2 + 2 y 2 4 x 16 y 38 = 0 in center radius form. How to Divide a Line Into Equal Parts Without Measuring: This is a trick I read about when trying to get through a woodworking project Enlarge the radius of the compass This page will show you how to solve a relationship involving an inequality 0), forming angles of 52 40:1 ratio dividing head calculator 40:1 ratio dividing head This means that, using Pythagoras theorem, the equation of a circle with radius r and centre (0, 0) is given by the formula $$x^2 + y^2 = r^2$$. Find The 1 = a2 + 1 4. The standard form for the equation of a circle is (x-h)^2+(y-k)^2=r^2, where r is the radius and (h,k) is the center. For example, consider a circle of radius r = 3 r = 3, that is centered at the r = R. is polar equation of a circle with radius R and a center at the pole (origin). Substituting the values of centre and radius, (x 2) 2 + (y 3) 2 = 1 2. x 2 4x + 4 + y 2 6y + 9 = 1. x 2 + y 2 4x 6y + 10 $( x + 6 ) ^{2} + ( y - 5 ) ^{2} = 49$ B. Find the center and the radius of the circle $x^2 + y^2 + 2x - 3y - \frac{3}{4} = 0$ example 3: ex 3: Find the equation of a circle in standard form, with a center at $C(-3,4)$ and passing through the point Find the equation of the circle with radius 1 and center C (1, -2).
The equation of a semicircle can be deduced from the equation of a circle. We can find the equation of any circle, given the coordinates of the center and the radius of the circle by applying the equation of circle formula. All of these values are related through the mathematical constant , or pi, which is the ratio of a circle's circumference to its diameter, and is approximately 3.14159. is an irrational number meaning that it 5 - Keep h and k constant Share. Radius and center for a circle equation in standard
Center (4,3) Radius = 5 units. (ii) |2z + 2 4i| = 2. Graphing the 2 x 2 + 2 y 2 4 x 16 y = 38. For example write the equation of a circle with centre (2, Centered at the origin. What is the equation of circle with (h) k center and r radius? Stack Exchange network consists of 180 Q&A communities including Stack Overflow, and how is it possible to solve for the equation, center, and radius of that circle? CBSE CBSE (Science) Class 11. 76% average accuracy. Step 1: Find the gradient of the radius of the circle. The radius is 5 units. What is the equation of the circle with a radius of 7 and center at $( 6, - 5 )$? The point (2, -2) doesn't lie on the circle because the calculated distance should be the same as the radius. An equation of a circle is an algebraic way to define all points that lie on the circumference of the circle. The calculator will generate a step by step explanations and circle graph. (x 0)2 + (y 1)2 = 4. This means that its center must be located at (4, 3), and its radius is 29. How the equation of a circle is derived given that the circle has centre O (0, 0) and O (a, b) Hint: In this problem, we are not given the center or radius however we can find the length of the diameter using the distance formula (Phythagoras) and then divide it by 2. t. e. A circle is a shape consisting of all points in a plane that are at a given distance from a given point, the centre. Solution The radius is 3 and the center is at the origin. The
http://www.algebra.com/tutors/your-answers.mpl?userid=jim_thompson5910&from=13140
# Recent problems solved by 'jim_thompson5910'
Linear-equations/309691: what are the x and y intercepts of 9x+8y=14?
1 solutions
Answer 221492 by jim_thompson5910(28715) on 2010-05-29 19:07:46 (Show Source):
You can put this solution on YOUR website!
#### x-intercept
To find the x-intercept, plug in y = 0 and solve for x.
Plug in y = 0: 9x + 8(0) = 14.
Multiply 8 and 0 to get 0: 9x + 0 = 14.
Simplify: 9x = 14.
Divide both sides by 9 to isolate x: x = 14/9.
So the x-intercept is (14/9, 0).
------------------------------------------
#### y-intercept
To find the y-intercept, plug in x = 0 and solve for y.
Plug in x = 0: 9(0) + 8y = 14.
Multiply 9 and 0 to get 0: 0 + 8y = 14.
Simplify: 8y = 14.
Divide both sides by 8 to isolate y: y = 14/8.
Reduce: y = 7/4.
So the y-intercept is (0, 7/4).
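The same plug-in-zero procedure is easy to automate. Here is a small, purely illustrative Python sketch (the function name is mine) for a line written as a*x + b*y = c:

from fractions import Fraction

def intercepts(a, b, c):
    """Return the x- and y-intercepts of the line a*x + b*y = c."""
    x_int = Fraction(c, a) if a != 0 else None   # set y = 0
    y_int = Fraction(c, b) if b != 0 else None   # set x = 0
    return x_int, y_int

x_int, y_int = intercepts(9, 8, 14)
print(x_int, y_int)   # 14/9 and 7/4, matching the worked answer above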
Trigonometry-basics/309689: We are proving identities and my book only has three examples. So I was hoping I could get some help solving a problem my teacher assigned on our test. I failed to answer it correctly, but he will be doing similar problems on our final exam. cos^2 x cos^2 y = -sin(x+y) sin(x-y) If you could show the steps and explain them I'd greatly appreciate it.1 solutions Answer 221484 by jim_thompson5910(28715) on 2010-05-29 18:37:27 (Show Source): You can put this solution on YOUR website!Unfortunately, cos^2 x cos^2 y = -sin(x+y) sin(x-y) is NOT an identity. You must be missing a symbol either in between the terms cos^2 x and cos^2 y or in between the terms -sin(x+y) and sin(x-y). Please repost with the correct problem.
Polynomials-and-rational-expressions/309680: On this question, they wanted me to simplify each sum or difference. (5g^2 - 2g)-(2g^2 + 6g) I didn't get it at all, but my friend said she got 35g^4 - 12g^2. Then I got some help from my youth pastor and he got -2g^2 - 8g. Which one is right? I am very confused. Or are they both wrong?1 solutions Answer 221482 by jim_thompson5910(28715) on 2010-05-29 18:27:27 (Show Source): You can put this solution on YOUR website! (5g^2 - 2g) - (2g^2 + 6g) = 5g^2 - 2g - 2g^2 - 6g = 3g^2 - 8g. So unfortunately, they are both incorrect. It looks like your friend tried to multiply it out. I'm not quite sure as there are too many mistakes. Your pastor is on the right track, but has the wrong first term.
Linear-equations/309575: I need to find the slope of the line containing the points (8,-1) and (7,-9) Can anyone help. Thanks1 solutions Answer 221406 by jim_thompson5910(28715) on 2010-05-29 05:14:09 (Show Source): You can put this solution on YOUR website!Note: (8, -1) is the first point, so x1 = 8 and y1 = -1. Also, (7, -9) is the second point, so x2 = 7 and y2 = -9. Start with the slope formula m = (y2 - y1)/(x2 - x1). Plug in y2 = -9, y1 = -1, x2 = 7, and x1 = 8: m = (-9 - (-1))/(7 - 8). Subtract -1 from -9 to get -8, and subtract 8 from 7 to get -1: m = (-8)/(-1). Reduce: m = 8. So the slope of the line that goes through the points (8, -1) and (7, -9) is 8.
percentage/309559: The top floor is rectangular and has a perimeter of 520ft. the width of the top floor measures 20 ft. more than one half its length. What are the dimensions of the top floor? I figured it as 160 length and 100 width1 solutions Answer 221403 by jim_thompson5910(28715) on 2010-05-28 23:24:19 (Show Source): You can put this solution on YOUR website!You are correct. Good job.
Rational-functions/309551: What is log 10 1001 solutions Answer 221399 by jim_thompson5910(28715) on 2010-05-28 22:46:50 (Show Source): You can put this solution on YOUR website! 10^2 = 100, which means that log_10(100) = 2.
logarithm/309530: Please help me solve this equation: Solve the logarithmic equation logarithmic equation . <-- logarithm of lnx(natural log of x) with base 2 equals =-1. I tried: . (log of natural log of x with base 2 equals -1) . (natural log of x equals 2 to the power of -1) . (natural log of x equals 1/2) . (x equals e to the power of 1/2) I put . back into the original equation and it didn't equal the answer.1 solutions Answer 221384 by jim_thompson5910(28715) on 2010-05-28 20:16:38 (Show Source): You can put this solution on YOUR website!Well is the answer. Start with the given equation. Plug in Use the identity to pull down the exponent. Take the natural log of 'e' to get 1. Multiply and simplify Rewrite as Pull down the exponent using the identity Evaluate the log base 2 of 2 to get 1. Multiply Since the final equation is an identity (ie is always true), this verifies the answer.
real-numbers/309488: for what values of x and y is x + 4yi=-5 - 24i please show me steps 1 solutions Answer 221372 by jim_thompson5910(28715) on 2010-05-28 18:38:19 (Show Source): You can put this solution on YOUR website!Hint: Equate the real and imaginary parts to get the two equations and
Trigonometry-basics/309300: Hello, I'm sorry if this question is confusing but i don't understand this, so it's sort of hard to explain. Okay, i have triangle ABC. Angle ABC=90 Degrees, Angle BCA=75 Degrees, and angle CAB=15 Degrees. If the length of line BC equals 20 units, what is the length of line AB? Please explain.1 solutions Answer 221202 by jim_thompson5910(28715) on 2010-05-28 00:22:09 (Show Source): You can put this solution on YOUR website!If you draw the triangle and label it's parts, you'll see that triangle ABC is a right triangle. Now let's use angle BCA=75 degrees as our reference angle. The side BC is the adjacent side while the side AB is the opposite side (again, a drawing will help). So we're going to use the tangent function (since tan=opposite/adjacent) So this means that tan(75)=BC/AB and that tan(75)=20/x where 'x' is the length of AB. Now compute tan(75) to get approximately 3.73205. So 3.73205=20/x Now you're job is to solve the equation 3.73205=20/x to find 'x' which is the length of AB. Note: There is a way to compute the tangent of 75 degrees exactly, but we don't need to do that here.
Systems-of-equations/309282: 3x/6 -4/12 =7 ; -36x +24=-504 In the following pair of equations, both sides of the equation on the left were multifplied by a number to get the equation on the right. What is the number? I know the answer is -72. If you could please help me find out how to get that answer that would be so wonderful. Thank you!!1 solutions Answer 221185 by jim_thompson5910(28715) on 2010-05-27 22:25:17 (Show Source): You can put this solution on YOUR website!Well a quick and easy way to get a common multiple of 2 (or more) numbers is to simply multiply them. For instance, a common multiple of 17 and 5 is 17*5=85 In this case, a common multiple between 6 and 12 is 6*12=72. It turns out that the numbers that share the common multiple will go into that common multiple. Ie and . What this means is that you can effectively clear out the denominators of the fractions leaving you with integer terms. For instance, multiply 72 by to get . In the given problem, the only difference is that 36x is really -36x. So instead we must multiply both sides by -72. Another quick way to realize that -72 is the magic number is to look at the right side. This side already has an integer which will make life easier for us. Take the number 7 and multiply it by some unknown number 'k' to get 7k. This is now equal to the new right side of -504. Set them equal to get 7k=-504. The task now is to solve the equation 7k=-504. Sure enough, solving the equation will get us k=-72 which is the number we're looking for.
Linear-equations/309284: Could you please help me find the answer to -2 Find the value of n for a line that passes through the points a(-7,n) and b(3,-27) and has a slope of -5/2. Thank you for helping me solve for the answer, I just can't figure this one out!1 solutions Answer 221184 by jim_thompson5910(28715) on 2010-05-27 22:16:57 (Show Source): You can put this solution on YOUR website!Hint: The slope equation is is the first point . So this means that and . Also, is the second point . So this means that and . We're given a slope of . So . Plug all of this info into the given formula at the top to get To make things simpler, you can rewrite as to get From here, solve the equation for 'n'.
Inverses/309251: what is the inverse of f(x)=7x-3 divided by 161 solutions Answer 221153 by jim_thompson5910(28715) on 2010-05-27 20:14:16 (Show Source): You can put this solution on YOUR website!Hint: Think of as . Now swap x and y to get . From here, solve for 'y' to find the inverse function.
Angles/309253: What is the complement of an angle whose measure is 43 degrees.1 solutions Answer 221151 by jim_thompson5910(28715) on 2010-05-27 20:12:45 (Show Source): You can put this solution on YOUR website!Remember, complementary angles add to 90 degrees. So let 'x' be the complement to 43 degrees. This then means that x + 43 = 90, so x = 90 - 43 = 47 degrees.
Polynomials-and-rational-expressions/309254: 24x+48y1 solutions Answer 221149 by jim_thompson5910(28715) on 2010-05-27 20:11:42 (Show Source): You can put this solution on YOUR website!I don't know what you want to do here. Please post the full instructions and any work or thoughts about the problem.
Permutations/309244: In how many different ways can the letters in the word PAYMENT be arranged if the letters are taken 4 at a time?1 solutions Answer 221148 by jim_thompson5910(28715) on 2010-05-27 20:10:54 (Show Source): You can put this solution on YOUR website!You have 7 letters and 4 spots to place them, so you have 7 P 4 = 7!/(7-4)!=7!/3! = (7*6*5*4*3!)/3! = 7*6*5*4 = 840 different ways.
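For a quick check of counts like this, Python's standard library can do the arithmetic (math.perm is available in Python 3.8+); this snippet is illustrative only:

import math

# Number of ways to arrange 4 of the 7 distinct letters of PAYMENT
print(math.perm(7, 4))                               # 840
print(math.factorial(7) // math.factorial(7 - 4))    # same value, 7!/3!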
Quadratic-relations-and-conic-sections/309219: Write an equation for the hyperbola...I have no idea. vertices (0,6) and (0,-6) conjugate axis of 14. I'd really like to figure out on my own but I have no idea where to start..so if someone could give me a step-by-step solution, that would be wonderful! Thank you.1 solutions Answer 221108 by jim_thompson5910(28715) on 2010-05-27 18:24:13 (Show Source): You can put this solution on YOUR website!The center of the hyperbola is the midpoint of the line segment from vertex to vertex. So the midpoint of (0,6) and (0,-6) is ((0+0)/2, (6+(-6)/2) ---> (0, 0). So the center is (0,0). The center is in the form (h,k), so h=0 and k=0. In this case, 'a' is the length of the semi-minor axis and 'b' is the length of the semi-major axis. So the lengths of the conjugate and traverse axes are 2a and 2b units respectively. Since the traverse axis is 2b units long, this means that 2b=12 (since the distance between the vertices is 12 units) which means that b=6. Also, because the conjugate axis is 14 units long, and 2a is the length of the conjugate axis, this means that 2a=14 and a=7 Finally, recall that the general equation for a hyperbola which opens up vertically is . From here, all you need to do is plug in the right values and simplify.
Polynomials-and-rational-expressions/309155: Could you please help me solve this.If one of the zeros of y=x^3+bx+1 is 1, determine the value of b,and then solve x^3+bx+1=0.1 solutions Answer 221103 by jim_thompson5910(28715) on 2010-05-27 18:07:16 (Show Source): You can put this solution on YOUR website!If one of the zeros of is 1, this means that if you plug in x=1, then y will be zero. In other words, you'll have the equation . From here, you have a simple linear equation in which you can solve for 'b'. I'll let you do that.
Linear-systems/309181: Please help.. solve: 1)4x + 3y =7 2) 16x + 12y = 28 Thank you so much....Rebecca1 solutions Answer 221101 by jim_thompson5910(28715) on 2010-05-27 18:03:13 (Show Source): You can put this solution on YOUR website!Hint: If you multiply both sides of the first equation by -4, you get . Now add this to the second equation and you'll find that the 'y' terms cancel out. This will leave you with an equation in which you can solve for 'x'.
Inequalities/309042: If a^2b^3c > 0, which of the following statements must be true? I. bc > 0 II. ac > 0 III. ab > 0 1 solutions Answer 220960 by jim_thompson5910(28715) on 2010-05-27 06:09:40 (Show Source): You can put this solution on YOUR website!Hint: is certainly positive for all . So divide both sides by to get .
Square-cubic-other-roots/309038: How would I simplify the product to this problem? (√5 + 2)(√5 - 6)1 solutions Answer 220959 by jim_thompson5910(28715) on 2010-05-27 05:16:42 (Show Source): You can put this solution on YOUR website!Let x = √5 to get the expression (x + 2)(x - 6). Start with the given expression. FOIL: x^2 - 6x + 2x - 12. Combine like terms: x^2 - 4x - 12. Plug in x = √5: (√5)^2 - 4√5 - 12. Square √5 to get 5: 5 - 4√5 - 12. Combine like terms: -7 - 4√5. So (√5 + 2)(√5 - 6) = -7 - 4√5.
Sequences-and-series/309008: Is the formula explicit or recursive? Find the first five terms of the sequence. a. recursive; 1, -4, 16, -64, 256 b. recursive; 0, -16, -24, -48, -80 c. explicit; 1, -4, 16, -64, 256 d. explicit; 0, -8, -24, -48, -801 solutions Answer 220934 by jim_thompson5910(28715) on 2010-05-27 00:02:24 (Show Source): You can put this solution on YOUR website!It's explicit since you are able to find any term you want (eg the first or the eighty first) and you don't need to know anything about any previous terms. I'll let you find the first five terms. Simply plug in , , , , and and evaluate each individual expression.
Percentage-and-ratio-word-problems/309004: an iteam is originally priced at D dollars will be discounted at 35%. write an expression to represent the new price. i thought that you would take D/.35 but that is not right.1 solutions Answer 220930 by jim_thompson5910(28715) on 2010-05-26 23:56:13 (Show Source): You can put this solution on YOUR website!Say the item is $100, if you divide by 0.35, you then get which is clearly NOT a discount (unless you like paying more for something that should be less) Instead, when that same$100 item is discounted by 35%, this basically means that you are subtracting 35% of that item's price from the price of the item. Recall that 35% is 0.35 in decimal form. So if a item is originally \$100, then a 35% discount reduces the price to dollars. In general, an item at D dollars discounts to . Take note that if , then which is what we originally got. So the new price after the 35% discount is dollars.
Inequalities/308661: Which of the following statements must be true when a^2 < b^2 and a and b are not 0? I. a^2/a < b^2/a II. 1/a^2 > 1/b^2 III. (a + b) (a - b) < 0 1 solutions Answer 220928 by jim_thompson5910(28715) on 2010-05-26 23:37:37 (Show Source): You can put this solution on YOUR website!I. Clearly this is false. If , then the inequality sign should flip, but it does not. If the sign did flip, then it would only be true for negative values of 'a', but what if 'a' was positive? Since we have this uncertainty, statement I is false. II. If and , then and are both positive numbers. Recall that if and 'x' and 'y' are both positive, then which shows us that statement II is true. III. Start with the given inequality. Subtract from both sides. Factor the left side using the difference of squares. So statement III is true.
Inequalities/308695: If xy > 1 and z < 0, which of the following statements must be true? I. x > z II. xyz < -1 III. xy/z < 1/z 1 solutions Answer 220927 by jim_thompson5910(28715) on 2010-05-26 23:20:25 (Show Source): You can put this solution on YOUR website!I. False. For example, let x = -10 and z = -1. Clearly is false. If we let , then showing that is true. II. False, this is only true if . So is on the right track, but it is false. For example, if , and we let , then which is clearly not less than -1. We must make the requirement that the right side be 'z' and not -1. III. This is true since dividing both sides of an inequality by a negative number will flip the inequality sign. Basically, divide both sides of by the negative number 'z' to get (don't forget to flip the sign).
Inequalities/308996: Solve. 8 - 3x < -71 solutions Answer 220924 by jim_thompson5910(28715) on 2010-05-26 23:07:29 (Show Source): You can put this solution on YOUR website! Start with the given inequality: 8 - 3x < -7. Subtract 8 from both sides: -3x < -7 - 8. Combine like terms on the right side: -3x < -15. Divide both sides by -3 to isolate x. note: Remember, the inequality sign flips when we divide both sides by a negative number: x > -15/(-3). Reduce: x > 5. ---------------------------------------------------------------------- Answer: So the solution is x > 5.
Quadratic-relations-and-conic-sections/308992: 4x^2 + ky^2 - 8x + 17y = 3 Find the vaule of k to make this equation a circle, ellipse, hyperbola, and parabola; so different vaules for k that will make those different kinds of equations. Any single one of them would be helpful if you can't necessairly figure them all out, so if you know anything at all pleaseee help. Thank you so much! 1 solutions Answer 220922 by jim_thompson5910(28715) on 2010-05-26 22:55:37 (Show Source): You can put this solution on YOUR website!This is very helpful to remember: For the general conic If , then the given conic above is an ellipse Furthermore, if , and , then the conic is also a circle If , then the given conic above is a parabola If , then the given conic above is a hyperbola First, let's subtract 3 from both sides to get So if we wanted to force to be a circle, then we must make sure that , , and . In this case, , , , , , and . Plug these values in to get and simplify to get . Solve for 'k' to get . So 'k' must be positive. Also, because we want and , and we know that , this means that as well. But . So This means that if , then we get the circle For any ellipse, just pick a positive 'k' value that is NOT equal to 4. This 'k' value will make true. For the parabola, just make since this satisfies the equation (basically, everything goes to zero since B and C are zero) And finally, for any hyperbola, reverse the idea of the ellipse and pick any negative 'k' value. This works because is essentially the opposite of
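The answer above classifies the general conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 by the standard discriminant test on B^2 - 4AC (the explicit conditions were lost in extraction). A minimal Python sketch of that test follows; the function name and the sample k values are mine, chosen to match the discussion above, and degenerate cases are ignored.

def classify_conic(A, B, C):
    """Classify a conic by the sign of B^2 - 4AC (degenerate cases ignored)."""
    disc = B**2 - 4 * A * C
    if disc < 0:
        return "circle" if (B == 0 and A == C) else "ellipse"
    if disc == 0:
        return "parabola"
    return "hyperbola"

# 4x^2 + k*y^2 - 8x + 17y - 3 = 0 for a few illustrative k values
for k in (4, 2, 0, -1):
    print(k, classify_conic(4, 0, k))   # circle, ellipse, parabola, hyperbola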
Equations/308098: -7x+20=-17x-10
1 solutions
Answer 220917 by jim_thompson5910(28715) on 2010-05-26 22:43:47 (Show Source):
You can put this solution on YOUR website!
For more help with solving linear equations, check out this linear equation solver.
Solved by pluggable solver: Linear Equation Solver. Start with the given equation: -7x + 20 = -17x - 10. Subtract 20 from both sides: -7x = -17x - 10 - 20. Add 17x to both sides: -7x + 17x = -30. Combine like terms on the left side: 10x = -30. Divide both sides by 10 to isolate x: x = -30/10. Reduce: x = -3. ---------------------------------------------------------------------- Answer: So the solution is x = -3.
http://pdebuyl.be/blog/2017/cython-module.html
## Developing a Cython library
For some time, I have used Cython to accelerate parts of Python programs. One stumbling block in going from Python/NumPy code to Cython code is the fact that one cannot access NumPy's random number generator from Cython without explicit declaration. Here, I give the steps to make a pip-installable 'cimportable' module, using the threefry random number generator as an example.
(Note: a draft of this article appeared online in June 2017 by mistake; this version is complete.)
### The aim
The aim is that, starting with a Python code reading
import numpy as np
N = 100
x = 0
for i in range(N):
    x = x + np.random.normal()
one can end up with a very similar Cython code
cimport numpy as np
cdef int i, N
cdef double x
N = 100
x = 0
for i in range(N):
    x = x + np.random.normal()
With the obvious benefit of using the same module for the random number generator (RNG) with a simple interface.
This is impossible with the current state of NumPy, even though there is work in that direction (ng-numpy-randomstate). This post is still relevant for other contexts where Cython is involved anyway.
### The challenge
Building a c-importable module just depends on having a corresponding .pxd file available in the path. The idea behind .pxd files is that they contain C-level (or cdef level) declarations whereas the implementation goes in the .pyx file with the same basename.
A consequence of this is that Python-type (regular def) functions do not appear in the .pxd file but only in the .pyx file and cannot be cimported in another cython file. They can of course be Python imported.
The challenge lies in a proper organization of these different parts and of a seamless packaging and installation via pip.
### Organization of the module
The module is named threefry after the corresponding Threefry RNG random123. It contains my implementation of the RNG as a C library and of a Cython wrapper.
I review below the steps, that I found via the documentation and quite a lot of trial and error.
#### Enable cimporting
To enable the use of the Cython wrapper from other Cython code, it is necessary to write a .pxd file, see the documentation on Sharing Declarations. .pxd files can exist on their own but in the present situation, we will use them with the same base name as the .pyx file. This way the .pxd file is automatically read by Cython when compiling the extension, it is as if its content was written in the .pyx file itself.
The .pxd can only contain plain C, cdef or cpdef declarations, pure Python declarations must go the in .pyx file.
Note: The .pxd file must be packaged with the final module, see below.
The file threefry.pxd contains the following declarations
from libc.stdint cimport uint64_t
cdef extern from "threefry.h":
...
cdef class rng:
...
meaning that the extension type threefry.rng will be accessible via a cimport from other modules. The implementation is stored in threefry.pyx.
With the aim of hiding the implementation details, I wrote a __init__.pxd file containing the following:
from threefry.threefry cimport rng
so that the user code looks like
cimport threefry
cdef threefry.rng r = threefry.rng(seed)
and I am free to refactor the code later if I wish to do so.
#### Compilation information
To cimport my module, there is one more critical step: providing the needed compiler flag for the C declaration, that is providing the include path for threefry.h (that must be read when compiling user code).
For this purpose, I define a utility routine get_include that can be called from the user's setup.py file as:
from setuptools import setup, Extension
from Cython.Build import cythonize
import threefry
setup(
ext_modules=cythonize(Extension('use_threefry', ["use_threefry.pyx"], include_dirs=[threefry.get_include()]))
)
Note: the argument include_dirs is given to Extension and not to cythonize.
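The get_include helper itself can be very small. The sketch below is my paraphrase of what such a function typically looks like, not a verbatim copy of the threefry package:

# threefry/__init__.py (illustrative sketch)
import os

def get_include():
    """Return the directory that contains threefry.h and threefry.pxd."""
    return os.path.dirname(os.path.abspath(__file__))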
#### Packaging
The .h and .pxd files must be added via the package_data argument to setup.
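In setup.py that amounts to something like the following; the exact file list is illustrative and depends on the package layout:

from setuptools import setup

setup(
    name='threefry',
    packages=['threefry'],
    # ship the declaration and header files so that user code can cimport the module
    package_data={'threefry': ['*.pxd', '*.h']},
)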
### Wrapping up
In short, to make a cimport-able module
1. Move the shared declarations to a .pxd file.
2. The implementation goes in the .pyx file, that will be installed as a compiled module.
3. The .pxd and .h files must be added to package_data.
4. A convenient way to obtain the include directories must be added.
All of this can be found in my random number generator package https://github.com/pdebuyl/threefry
The algorithm is from Salmon's et al paper Parallel Random Numbers: As Easy as 1, 2, 3, their code being distributed at random123. I wrote about it earlier in a blog post
https://www.doubtnut.com/question-answer/what-is-the-measure-in-degrees-of-an-angle-that-is-pi-4-radians-185061978
What is the measure in degrees...
Text Solution
4°, 25°, 45°, 90°
Transcript
What is the measure in degrees of an angle that is pi by 4 radians? Option 1: 4 degrees, option 2: 25 degrees, option 3: 45 degrees, option 4: 90 degrees. We know that pi radians can be written as 180 degrees; that is the relation between radians and degrees. Now if we divide both sides of this relation by 4, we get pi by 4 radians equal to 180 by 4 degrees, and the ratio 180 by 4 is equal to 45. So we can say that pi by 4 radians equals 45 degrees. That's the solution: pi by 4 radians is 45 degrees, and we can see that option number 3 is the correct option.
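The same conversion can be checked in a couple of lines of Python (shown here only as an illustration):

import math

print(math.degrees(math.pi / 4))     # 45.0, up to floating-point rounding
print(math.pi / 4 * 180 / math.pi)   # same result, using pi radians = 180 degrees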
https://includestdio.com/7628.html
# introspection – How do I look inside a Python object?
## The Question :
306 people think this question is useful
I’m starting to code in various projects using Python (including Django web development and Panda3D game development).
To help me understand what’s going on, I would like to basically ‘look’ inside the Python objects to see how they tick – like their methods and properties.
So say I have a Python object, what would I need to print out its contents? Is that even possible?
363 people think this answer is useful
Python has a strong set of introspection features.
Take a look at the following built-in functions:
type() and dir() are particularly useful for inspecting the type of an object and its set of attributes, respectively.
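For example, a quick interactive session (illustrative class and names are mine) might look like this:

>>> class Point:
...     def __init__(self, x, y):
...         self.x, self.y = x, y
...
>>> p = Point(1, 2)
>>> type(p)
<class '__main__.Point'>
>>> [name for name in dir(p) if not name.startswith('__')]
['x', 'y']
>>> vars(p)          # same as p.__dict__
{'x': 1, 'y': 2}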
186 people think this answer is useful
object.__dict__
65 people think this answer is useful
Second, use the dir() function.
65 people think this answer is useful
I’m surprised no one’s mentioned help yet!
In [1]: def foo():
...: "foo!"
...:
In [2]: help(foo)
Help on function foo in module __main__:
foo()
foo!
Help lets you read the docstring and get an idea of what attributes a class might have, which is pretty helpful.
27 people think this answer is useful
If this is for exploration to see what's going on, I'd recommend looking at IPython. This adds various shortcuts to obtain an object's documentation, properties and even source code. For instance appending a "?" to a function will give the help for the object (effectively a shortcut for "help(obj)"), whereas using two ?'s ("func??") will display the source code if it is available.
There are also a lot of additional conveniences, like tab completion, pretty printing of results, result history etc. that make it very handy for this sort of exploratory programming.
For more programmatic use of introspection, the basic builtins like dir(), vars(), getattr etc. will be useful, but it is well worth your time to check out the inspect module. To fetch the source of a function, use "inspect.getsource", e.g., applying it to itself:
>>> print inspect.getsource(inspect.getsource)
def getsource(object):
    """Return the text of the source code for an object.

    The argument may be a module, class, method, function, traceback, frame,
    or code object. The source code is returned as a single string. An
    IOError is raised if the source code cannot be retrieved."""
    lines, lnum = getsourcelines(object)
    return string.join(lines, '')
inspect.getargspec is also frequently useful if you’re dealing with wrapping or manipulating functions, as it will give the names and default values of function parameters.
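As a small sketch: on current Python 3 the same information is usually obtained with inspect.signature or inspect.getfullargspec, since getargspec is the legacy spelling (the greet function below is invented for the example):
import inspect

def greet(name, greeting="hello"):
    return "%s, %s" % (greeting, name)

print(inspect.signature(greet))     # (name, greeting='hello')
spec = inspect.getfullargspec(greet)
print(spec.args, spec.defaults)     # ['name', 'greeting'] ('hello',)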
20 people think this answer is useful
If you’re interested in a GUI for this, take a look at objbrowser. It uses the inspect module from the Python standard library for the object introspection underneath.
9 people think this answer is useful
You can list the attributes of a object with dir() in the shell:
>>> dir(object())
['__class__', '__delattr__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']
Of course, there is also the inspect module: http://docs.python.org/library/inspect.html#module-inspect
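For example, inspect.getmembers() pairs each attribute name with its value, optionally filtered by a predicate (the Greeter class is just an illustration):
import inspect

class Greeter:
    def hello(self):
        return "hi"

g = Greeter()
print(inspect.getmembers(g, inspect.ismethod))
# [('hello', <bound method Greeter.hello of <__main__.Greeter object at 0x...>>)]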
8 people think this answer is useful
"""Visit http://diveintopython.net/"""
__author__ = "Mark Pilgrim (mark@diveintopython.org)"
def info(object, spacing=10, collapse=1):
"""Print methods and doc strings.
Takes module, class, list, dictionary, or string."""
methodList = [e for e in dir(object) if callable(getattr(object, e))]
processFunc = collapse and (lambda s: " ".join(s.split())) or (lambda s: s)
print "\n".join(["%s %s" %
(method.ljust(spacing),
processFunc(str(getattr(object, method).__doc__)))
for method in methodList])
if __name__ == "__main__":
print help.__doc__
8 people think this answer is useful
Try ppretty
from ppretty import ppretty
class A(object):
s = 5
def __init__(self):
self._p = 8
@property
def foo(self):
return range(10)
print ppretty(A(), indent=' ', depth=2, width=30, seq_length=6,
              show_protected=True, show_private=False, show_static=True,
              show_properties=True, show_address=True)
Output:
__main__.A at 0x1debd68L (
_p = 8,
foo = [0, 1, 2, ..., 7, 8, 9],
s = 5
)
8 people think this answer is useful
While pprint has been mentioned already by others I’d like to add some context.
The pprint module provides a capability to “pretty-print” arbitrary Python data structures in a form which can be used as input to the interpreter. If the formatted structures include objects which are not fundamental Python types, the representation may not be loadable. This may be the case if objects such as files, sockets, classes, or instances are included, as well as many other built-in objects which are not representable as Python constants.
pprint might be in high-demand by developers with a PHP background who are looking for an alternative to var_dump().
Objects with a __dict__ attribute can be dumped nicely using pprint() mixed with vars(), which returns the __dict__ attribute for a module, class, instance, etc.:
from pprint import pprint
pprint(vars(your_object))
To dump all variables contained in the global or local scope simply use:
pprint(globals())
pprint(locals())
locals() shows variables defined in a function.
It’s also useful to access functions with their corresponding name as a string key, among other usages:
locals()['foo']() # foo()
globals()['foo']() # foo()
Similarly, using dir() to see the contents of a module, or the attributes of an object.
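For example, pretty-printing dir() of a module gives a readable overview of everything it defines:
import json
import pprint

pprint.pprint(dir(json))   # every top-level name the json module exposes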
And there is still more.
7 people think this answer is useful
Others have already mentioned the dir() built-in, which sounds like what you’re looking for, but here’s another good tip. Many libraries — including most of the standard library — are distributed in source form, meaning you can pretty easily read the source code directly. The trick is in finding it; for example:
>>> import string
>>> string.__file__
'/usr/lib/python2.5/string.pyc'
The *.pyc file is compiled, so remove the trailing ‘c’ and open up the uncompiled *.py file in your favorite editor or file viewer:
/usr/lib/python2.5/string.py
I’ve found this incredibly useful for discovering things like which exceptions are raised from a given API. This kind of detail is rarely well-documented in the Python world.
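A small sketch: inspect.getsourcefile() (or inspect.getfile()) does the ‘.pyc’-to-‘.py’ resolution for you:
import inspect
import string

print(string.__file__)                 # may end in .pyc on older interpreters
print(inspect.getsourcefile(string))   # path to the readable .py source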
4 people think this answer is useful
If you want to look at parameters and methods, as others have pointed out you may well use pprint or dir()
If you want to see the actual value of the contents, you can do
object.__dict__
4 people think this answer is useful
Two great tools for inspecting code are:
1. IPython. A python terminal that allows you to inspect using tab completion.
2. Eclipse with the PyDev plugin. It has an excellent debugger that allows you to break at a given spot and inspect objects by browsing all variables as a tree. You can even use the embedded terminal to try code at that spot or type the object and press ‘.’ to have it give code hints for you.
3 people think this answer is useful
pprint and dir together work great
3 people think this answer is useful
There is a Python standard-library module built just for this purpose: inspect. It is available in both Python 2 and Python 3.
3 people think this answer is useful
If you are interested in seeing the source code of the function corresponding to the object myobj, you can type in IPython or a Jupyter Notebook:
myobj??
2 people think this answer is useful
import pprint
pprint.pprint(obj.__dict__)
or
pprint.pprint(vars(obj))
1 people think this answer is useful
If you want to look inside a live object, then Python’s inspect module is a good answer. In general, it works for getting the source code of functions that are defined in a source file somewhere on disk. If you want to get the source of live functions and lambdas that were defined in the interpreter, you can use dill.source.getsource from dill. It can also get the code for bound or unbound class methods and for functions defined inside curries… however, you might not be able to compile that code without the enclosing object’s code.
>>> from dill.source import getsource
>>>
>>> def add(x, y):
...     return x+y
...
>>> squared = lambda x:x**2
>>>
>>> print getsource(add)
def add(x, y):
    return x+y
>>> print getsource(squared)
squared = lambda x:x**2
>>>
>>> class Foo(object):
... def bar(self, x):
... return x*x+x
...
>>> f = Foo()
>>>
>>> print getsource(f.bar)
def bar(self, x):
return x*x+x
>>>
1 people think this answer is useful
vars(obj) returns the attributes of an object.
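A quick sketch showing that vars(obj) is just the object’s __dict__ (the Config class is invented for the example):
class Config:
    pass

c = Config()
c.debug = True
print(vars(c))                  # {'debug': True}
print(vars(c) is c.__dict__)    # True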
1 people think this answer is useful
Many good tips already, but the shortest and easiest (not necessarily the best) has yet to be mentioned:
object?
0 people think this answer is useful
In addition, if you want to look inside lists and dictionaries, you can use pprint().
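For instance, pprint makes nested lists and dictionaries much easier to scan than the default repr:
import pprint

data = {"users": [{"name": "ada", "roles": ["admin", "dev"]},
                  {"name": "bob", "roles": ["dev"]}]}
pprint.pprint(data, width=40)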
0 people think this answer is useful
In Python 3 (for example 3.8), you can print out the contents of an object by using its __dict__ attribute. For example,
class Person():
pass
person = Person()
## set attributes
person.first = 'Oyinda'
person.last = 'David'
## to see the content of the object
print(person.__dict__)
{"first": "Oyinda", "last": "David"}
-4 people think this answer is useful
Try using:
print(object.stringify())
• where object is the variable name of the object you are trying to inspect.
This prints out a nicely formatted and tabbed output showing all the hierarchy of keys and values in the object.
NOTE: This works in python3. Not sure if it works in earlier versions
UPDATE: This doesn’t work on all types of objects. If you encounter one of those types (like a Request object), use one of the following instead:
• dir(object())
or
import pprint then: pprint.pprint(object.__dict__)
|
2021-02-27 08:59:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24006405472755432, "perplexity": 3205.551936013591}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358798.23/warc/CC-MAIN-20210227084805-20210227114805-00393.warc.gz"}
|
https://www.esaral.com/surface-chemistry-jee-main-previous-year-questions-with-solutions/
|
Surface Chemistry – JEE Main Previous Year Questions with Solutions
JEE Main previous year Chemistry questions with solutions are available at eSaral. Practicing JEE Main chapter-wise Chemistry questions helps aspirants become familiar with the question pattern and identify their weak and strong areas.
Previous Years AIEEE/JEE Main Questions
Q. Which of the following statements is incorrect regarding physisorption?
(1) Under high pressure it results into multi molecular layer on adsorbent surface
(2) Enthalpy of adsorption $\left(\Delta \mathrm{H}_{\text {adsorption }}\right)$ is low and positive
(3) It occurs because of van der Waals forces
AIEEE-2009
Sol. (2)
Physisorption is exothermic, so $\Delta \mathrm{H}_{\text {adsorption }}$ is negative; statement (2) is therefore incorrect.
Q. According to Freundlich adsorption isotherm, which of the following is correct ?
(1) $\frac{\mathrm{x}}{\mathrm{m}} \propto \mathrm{p}^{0}$
(2) $\frac{\mathrm{x}}{\mathrm{m}} \propto \mathrm{p}^{1}$
(3) $\frac{\mathrm{x}}{\mathrm{m}} \propto \mathrm{p}^{1 / \mathrm{n}}$
(4) All the above are correct for different ranges of pressure
AIEEE-2012
Sol. (4)
At low pressure $\frac{\mathrm{x}}{\mathrm{m}} \propto \mathrm{p}^{1}$, at high pressure $\frac{\mathrm{x}}{\mathrm{m}} \propto \mathrm{p}^{0}$, and over the intermediate range $\frac{\mathrm{x}}{\mathrm{m}} \propto \mathrm{p}^{1 / \mathrm{n}}$, so each relation holds for a different range of pressure.
Q. The coagulating power of electrolytes having ions $\mathrm{Na}^{+}, \mathrm{Al}^{3+}$ and $\mathrm{Ba}^{2+}$ for arsenic sulphide sol increases in the order :-
(1) $\mathrm{Al}^{3+}<\mathrm{Ba}^{2+}<\mathrm{Na}^{+}$
(2) $\mathrm{Na}^{+}<\mathrm{Ba}^{2+}<\mathrm{Al}^{3+}$.
(3) $\mathrm{Ba}^{2+}<\mathrm{Na}^{+}<\mathrm{Al}^{3+}$
(4) $\mathrm{Al}^{3+}<\mathrm{Na}^{+}<\mathrm{Ba}^{2+}$
JEE-Main 2013
Sol. (2)
According to the Hardy–Schulze rule, for a negatively charged sol such as arsenic sulphide the coagulating power of an electrolyte increases with the charge on its cation, so the order is $\mathrm{Na}^{+}<\mathrm{Ba}^{2+}<\mathrm{Al}^{3+}$.
Q. For a linear plot of log(x/m) versus log p in a Freundlich adsorption isotherm, which of the following statements is correct ? (k and n are constants)
(1) log (1/n) appears as the intercept
(2) Both k and 1/n appear in the slope term
(3) 1/n appears as the intercept
(4) Only 1/n appears as the slope
JEE-Main 2016
Sol. (4)
According to the Freundlich isotherm:
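The underlying relation is $\frac{\mathrm{x}}{\mathrm{m}}=\mathrm{k} \mathrm{p}^{1 / \mathrm{n}}$. Taking logarithms gives $\log \frac{\mathrm{x}}{\mathrm{m}}=\log \mathrm{k}+\frac{1}{\mathrm{n}} \log \mathrm{p}$, so the linear plot of $\log (\mathrm{x} / \mathrm{m})$ versus $\log \mathrm{p}$ has slope $\frac{1}{\mathrm{n}}$ and intercept $\log \mathrm{k}$; only $1/\mathrm{n}$ appears as the slope.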
Q. The Tyndall effect is observed only when following conditions are satisfied :-
(a) The diameter of the dispersed particles is much smaller than the wavelength of the light used.
(b) The diameter of the dispersed particle is not much smaller than the wavelength of the light used.
(c) The refractive indices of the dispersed phase and dispersion medium are almost similar in magnitude.
(d) The refractive indices of the dispersed phase and dispersion medium differ greatly in magnitude.
(1) (a) and (d)
(2) (b) and (d)
(3) (a) and (c)
(4) (b) and (c)
JEE – Main – 2017
Sol. (2)
The Tyndall effect is observed when the dispersed particles are not much smaller than the wavelength of the light used and when the refractive indices of the dispersed phase and the dispersion medium differ appreciably, i.e. conditions (b) and (d), as stated in the NCERT text.
|
2020-05-28 04:54:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4421280026435852, "perplexity": 3800.0015009684444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396495.25/warc/CC-MAIN-20200528030851-20200528060851-00541.warc.gz"}
|