Columns: url (string, 14–2.42k chars) · text (string, 100–1.02M chars) · date (string, 19 chars) · metadata (string, 1.06k–1.1k chars)
https://www.physicsforums.com/threads/initial-value-problem.120067/
Initial-value problem

1. May 7, 2006 — meteorologist1
Hello, I'm having trouble showing that the following initial-value problem has a unique solution. I also need to find this unique solution. y' = e^(t-y), where 0 <= t <= 1, and y(0) = 1. How can I test the Lipschitz condition on this?

2. May 8, 2006 — Pyrrhus
Well, what have you done? $$\frac{dy}{dt} = e^{t} e^{-y}$$

3. May 8, 2006 — AKG
What does your theorem on uniqueness say? My book has one where y' = F(y) for some function F of y. Here, your function is $F(t,y) = e^{t-y}$, a function of two variables. So maybe you have to show that for each t in [0,1], the function $F_t(y) = e^{t-y}$ is Lipschitz as a function of y. Also, I think you only need to satisfy the Lipschitz condition near y = 1, since that's the initial condition you're solving for. Again, just look at the precise statement of the theorem that I assume is given to you.

4. May 9, 2006 — meteorologist1
Yes, that's what I'm trying to show. For each pair of points (t, y1) and (t, y2) where t is in [0,1], we need |F(t,y1) - F(t,y2)| <= L |y1 - y2|, where L is a Lipschitz constant for F, and y1 and y2 can be anything between negative and positive infinity. But it seems that it doesn't satisfy the Lipschitz condition if you write it out: |F(t,y1) - F(t,y2)| isn't bounded by any such L |y1 - y2|. I'm not sure what other ways there are to show uniqueness. I found the IVP's solution by separation of variables: y(t) = ln(e^t + e - 1).

5. May 9, 2006 — meteorologist1
Oh sorry, did you say that it only needs to satisfy the Lipschitz condition near y = 1? How do you know that? My uniqueness theorem says that f needs to satisfy a Lipschitz condition on D, where D is the convex set {(t,y) : 0 <= t <= 1, -infinity < y < +infinity}. Thanks.

6. May 10, 2006 — AKG
Because the existence-uniqueness theorem for ODEs is a local result. It needs to be Lipschitz near 1 for there to be a unique solution with initial value 1. There just needs to be a unique trajectory going through 1, not a unique trajectory everywhere. Of course, if it's not Lipschitz everywhere, then you may not have a unique solution everywhere, but that's not really important here. Actually, looking at the following graph (black = |1 - e^{1-y}|, green = |1 - y|), even if you steepen the green graph by increasing L, it will never dominate the black curve, so I don't think it is Lipschitz on all of D.

Attached: untitled.JPG (39 KB, graph of the two curves)

7. May 10, 2006 — meteorologist1
Ok, I see. Thanks.
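The closed-form answer from post 4 is easy to check numerically. A minimal sketch (my own addition, not from the thread) that compares y(t) = ln(e^t + e - 1) against a SciPy integration of the IVP:

```python
import numpy as np
from scipy.integrate import solve_ivp

# y' = e^(t - y), y(0) = 1, on [0, 1]
sol = solve_ivp(lambda t, y: np.exp(t - y), (0.0, 1.0), [1.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, 1.0, 11)
closed_form = np.log(np.exp(t) + np.e - 1)          # y(t) = ln(e^t + e - 1)
print(np.max(np.abs(sol.sol(t)[0] - closed_form)))  # tiny: the two agree
```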
2016-10-28 03:12:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8354561924934387, "perplexity": 567.7340219724033}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721555.36/warc/CC-MAIN-20161020183841-00136-ip-10-171-6-4.ec2.internal.warc.gz"}
https://ltwork.net/in-what-way-is-the-vice-president-directly-connected-to--5710252
In what way is the vice president directly connected to the U.S. Senate?

Question: In what way is the vice president directly connected to the U.S. Senate? He or she reads the minutes of opening sessions and restores order. He or she selects the Senate majority leader and introduces bills. He or she nominates the chief justice of the Supreme Court. He or she presides over the Senate, and serves as tiebreaker.

Related questions:

- In which of the following countries would you not find parrots naturally in the wild?
- A student adds a rectangular block of mass 26.10 g to a graduated cylinder filled with 40.1 mL of water, and reads the new water level as 42.7 mL. Previously the student measured the length, width and height of the block to be 3.0 cm by 1.0 cm by 1.0 cm.
- Amino acid-based hormones use a second messenger, cyclic AMP, but steroid hormones trigger a response by their buildup along the cell membranes, stopping key materials from entering the cell / must use ATP to enter the cell, releasing ADP into the cytoplasm, which then becomes cyclic AMP...
- How is temperature defined in terms of molecular action?
- What expression represents a starting velocity of 24 ft/sec? a. -10 + 24t - 16t^2  b. 24 + 32t - 16t^2  c. 16 + 32t - 24t^2  d. 124 - 16t^2
- What are the concentrations of H3O+ and OH- in oranges that have a pH of 3.91?
- Phoenix Theatre decided to introduce a new shape of popcorn container, like a triangular prism but without a top. B. How much cardboard was needed to create this box? A. How many cm³ of popcorn will it fit? The height of the triangle is 2 cm, the side length is 4 cm, and the height of the triangular prism is...
- I am a pro artist. I can draw anything. Right now I am drawing a portrait of Billie Eilish in Zoom. If u wanna watch me draw a portrait of Billie Eilish then join my Zoom...
- Which sentence from "The Treasure of Lemon Brown" best helps readers understand Greg's father?
- What was the name given to the belief that life is made up of competitive struggles and only the fittest survive?
- Joe's parents bought him a new car for his birthday. They paid $6990 for a car that was originally priced at $7599. What percent of the original price did they pay? [A] Set up a proportion that can be used to solve this problem. [B] Show all steps to solve the problem.
- What is n(A∪B) if n(A) = 25, n(B) = 10 and B ⊂ A?
- Dinner in most Spanish-speaking countries is not consumed before 8:00 PM. Based on the readings and vocabulary you learned in this seminar, do you think you would be able to not eat dinner until 8:00 PM? Why or why not? Make references to some vocabulary or culture items you learned to support your...
- Write a job description for the new employee that includes the tasks you would like the person to complete and the qualities and skills the person should have.
- Due to the Earth's shape and orientation, areas near the Equator receive the greatest amount of energy from the Sun. Ocean currents...
- The style of the Renaissance was characterized by a sense of gravity and a balance of individual parts to the whole.
- Join my Zoom if you're bored rn or if u want to watch It Chapter 2, Saw, Deadpool or Scream and other stuff, being dead serious rn. 722 1074 0292 AVEvp0
- Madison is taking an Uber to get to Short Pump Mall. The Uber driver charges a $2 flat rate for pick up and an additional $0.50 per mile. If Madison's ride cost her a total of $10, how many miles did she travel? 2x + 0.50 = 10 / 2x - 0.50 = 10 / 2 + 0.50x = 10 / 2 - 0.50x = 10
2022-10-04 19:00:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22472406923770905, "perplexity": 3165.5227929439643}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00084.warc.gz"}
https://dmoj.ca/problem/bf2
## Lexicographically Least Substring

View as PDF — Points: 5 — Time limit: 2.0s — Memory limit: 64M — Problem types

##### Brute Force Practice 2

You have a string (indexed from $1$) with no more than $N$ lowercase characters. Find the lexicographically least substring with a length of at least $K$.

A string $a$ is said to be lexicographically smaller than a string $b$ if $|a| < |b|$ and $a$ is a prefix of $b$, or $a_i < b_i$ at the first position $i$ where $a$ and $b$ differ. Here, $|a|$ denotes the length of the string $a$.

#### Input

The first line will have the string. The second line will have $K$.

#### Output

Print the lexicographically least substring of length at least $K$.

#### Sample Input

iloveprogramming
4

#### Sample Output

ammi

- onlyIfStatement commented on Oct. 31, 2016, 11:48 p.m.: Incorrect? NVM
- Kirito commented on Nov. 1, 2016, 8:09 a.m.: Are you implying that all the people who have AC got it by accident? Your program is wrong; try the following input: baaazzz, 4
- onlyIfStatement commented on Nov. 2, 2016, 6:58 p.m.: IM A SPECIAL SNOW FLAKE
- Kirito commented on Nov. 2, 2016, 7:26 p.m.
- onlyIfStatement commented on Nov. 2, 2016, 7:35 p.m.: excuse my stupidity, it's been a long week
- bobhob314 commented on Dec. 25, 2014, 11:25 a.m.: (: ^ o ) Python hax always provide a nice meme
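Since this is brute-force practice, direct enumeration suffices. A minimal Python sketch (my own; `least_substring` is a hypothetical helper name, and `k` stands for the integer on the second line of input, whose original symbol was lost in extraction). The key observation is that any substring longer than $K$ is lexicographically no smaller than its own $K$-length prefix, so the answer always has length exactly $K$:

```python
def least_substring(s: str, k: int) -> str:
    # A longer substring is always >= its own k-length prefix,
    # so only substrings of length exactly k need to be compared.
    return min(s[i:i + k] for i in range(len(s) - k + 1))

print(least_substring("iloveprogramming", 4))  # ammi
print(least_substring("baaazzz", 4))           # aaaz (Kirito's counter-test)
```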
2018-03-18 05:47:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5499473810195923, "perplexity": 6134.329869986143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645538.8/warc/CC-MAIN-20180318052202-20180318072202-00487.warc.gz"}
http://mathhelpforum.com/algebra/194938-sum-arithmetic-sequence.html
# Thread: sum of arithmetic sequence

1. ## sum of arithmetic sequence

A woman started a business with a workforce of 50 people. Every two weeks the number of people in the workforce is increased by 3 people. How many people were there in the workforce after 26 weeks? (I answered 89, which is correct.) Each member of the workforce earned $600 per week. What was the total wage bill for these 26 weeks?

I used the formula for the sum of an arithmetic sequence, S = 0.5n(a + l), and (for convenience) 'eliminated' every other week, where the wage bill stayed the same, to come up with: S (half the wage bill) = 0.5 × 13 × (30 000 + 51 600). So the total wage bill is 13 × 81 600 = $1 060 800. However, the textbook gives the answer as $1 341 600. Where did I go wrong?

2. ## Re: sum of arithmetic sequence

Hello Furor, I agree with your result for the wages; I think the result given in the textbook is incorrect. I would like to suggest the following observation:

| Weeks | Weekly wages     |
|-------|------------------|
| 1–2   | 30000            |
| 2–4   | 30000 + 1800     |
| 4–6   | 30000 + 2 × 1800 |
| ⋮     | ⋮                |

sum of wages = 26 × 30000 + 3600 × (0 + 1 + ... + 12) = 780000 + 3600 × 78 = 1060800

Now, one thing that we can fairly agree on is that the 26 × 30000 component has to be paid in any case, since 50 workers are present from the beginning. Let's deduct that from both numbers:

1060800 − 780000 = 280800
1341600 − 780000 = 561600

So we see that 561600 = 2 × 280800, which essentially tells us that a miscalculation has been made in terms of halving the number of weeks. Try to verify my calculations in case I made a mistake somewhere. Kalyan

3. ## Re: sum of arithmetic sequence

Originally Posted by furor celtica (the question quoted above)

Hello! For the first part of the question, I assume you did 50 + (3 people × 13) = 89 people, because every 2 weeks there is an increase of 3 people. Which is correct.

For the next part, because I don't know the arithmetic-sequence formula, I'll use the long method:

50 people × $600 × 26 weeks = 780,000 (because the 50 people are a constant)
3 people × $600 × 24 weeks = 43,200
3 people × $600 × 22 weeks = 39,600
3 people × $600 × 20 weeks = 36,000

The pattern decreases by 3600, so: week 18 = 32,400; week 16 = 28,800; week 14 = 25,200; week 12 = 21,600; week 10 = 18,000; week 8 = 14,400; week 6 = 10,800; week 4 = 7,200; week 2 = 3,600.

Adding these gives 280,800, and 280,800 + 780,000 = $1,060,800. Either the book is wrong, or you and I both did something wrong. Good luck.
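Both replies can be cross-checked with a direct week-by-week simulation; a short Python sketch (my addition, not part of the thread):

```python
workers, total = 50, 0
for week in range(1, 27):      # weeks 1..26
    total += workers * 600     # this week's wage bill
    if week % 2 == 0:          # headcount grows by 3 every two weeks
        workers += 3
print(workers, total)          # 89 1060800 -- matching both replies
```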
2017-06-24 00:15:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4043978154659271, "perplexity": 1532.4714302532304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320206.44/warc/CC-MAIN-20170623235306-20170624015306-00374.warc.gz"}
http://www.clungu.com/Distilling_a_Random_Forest_with_a_single_DecisionTree/
# Distilling a Random Forest to a single DecisionTree

On HackerNews there was a discussion at some point about ways to distill knowledge from a complex (almost black-box) large tree ensemble (a RandomForest with lots of sub-trees was used as an example). You might want to do this for multiple reasons, but one of them is model explainability: a way to understand how that complex model behaves so you can draw conclusions and improve it (or guard against its failures).

One comment really caught my eye:

> An alternative is to re-label data with the ensemble's outputs and then learn a decision tree over that. (source)

This post is my attempt at testing this strategy.

# Get a dataset

I'll first use a clean and small dataset so as not to make things too complicated.

```python
from sklearn.datasets import load_iris
import pandas as pd

dataset = load_iris()
df = pd.DataFrame(data=dataset.data, columns=dataset.feature_names)
df['target'] = dataset.target
```

|   | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm) | target |
|---|---|---|---|---|---|
| 0 | 5.1 | 3.5 | 1.4 | 0.2 | 0 |
| 1 | 4.9 | 3.0 | 1.4 | 0.2 | 0 |
| 2 | 4.7 | 3.2 | 1.3 | 0.2 | 0 |
| 3 | 4.6 | 3.1 | 1.5 | 0.2 | 0 |
| 4 | 5.0 | 3.6 | 1.4 | 0.2 | 0 |

Then I'm going to split this dataset into a training set (a random 70% of the data) and a test set (the remaining 30%).

```python
from sklearn.model_selection import train_test_split

df_train, df_test = train_test_split(df, test_size=0.3)
df_train.shape, df_test.shape, df_train.shape[0] / (df_train.shape[0] + df_test.shape[0])
```

    ((105, 5), (45, 5), 0.7)

## Train a Random Forest model

Then I'm going to train a RandomForestClassifier that will solve this problem. I'm not going to be too concerned with the model's performance (so I'm not going to really make it generalize well), because I'm more interested in achieving the same behaviour (good or bad as it is) with a DecisionTree proxy.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

rf = RandomForestClassifier(n_estimators=100, max_depth=5, n_jobs=-1)
rf.fit(df_train.drop(columns='target'), df_train['target'])
print(classification_report(df_test['target'], rf.predict(df_test.drop(columns='target'))))
```

                  precision    recall  f1-score   support
               0       1.00      1.00      1.00        10
               1       1.00      0.95      0.98        21
               2       0.93      1.00      0.97        14
        accuracy                           0.98        45
       macro avg       0.98      0.98      0.98        45
    weighted avg       0.98      0.98      0.98        45

## Retraining a single decision tree to approximate the RandomForest

> An alternative is to re-label data with the ensemble's outputs and then learn a decision tree over that. (source)

What this actually means is doing the following steps:

- Train a large, complex ensemble (the RandomForest model above).
- Take all the data (including the test set) and add the ensemble's predictions to it.
- Overfit a single DecisionTree on all the data, training on the predictions of the RandomForest rather than on the true labels.

This creates a single DecisionTree that can predict exactly the same things as the RandomForest. We know that a DecisionTree has the ability to overfit the training data perfectly if it is allowed to (for example, if we leave it to grow until leaf nodes contain only one datapoint each).
```python
def __inp(df, exclude_columns=['target']):
    return df.drop(columns=list(set(exclude_columns) & set(df.columns)))

def __out(df, target_column='target'):
    return df[target_column]

def relable(df, model):
    df = df.copy()
    df['relabel'] = model.predict(__inp(df))
    return df

# relabel everything
df_train_tree = relable(df_train, rf)
df_test_tree = relable(df_test, rf)
df_tree = relable(df, rf)
```

|     | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm) | target | relabel |
|-----|---|---|---|---|---|---|
| 97  | 6.2 | 2.9 | 4.3 | 1.3 | 1 | 1 |
| 114 | 5.8 | 2.8 | 5.1 | 2.4 | 2 | 2 |
| 125 | 7.2 | 3.2 | 6.0 | 1.8 | 2 | 2 |
| 110 | 6.5 | 3.2 | 5.1 | 2.0 | 2 | 2 |
| 113 | 5.7 | 2.5 | 5.0 | 2.0 | 2 | 2 |

We want to overfit the training data with the decision tree, because what we are actually looking for is a single condensed tree that behaves exactly the same as the original RandomForest. And we want the DecisionTree to behave exactly the same as the RandomForest on the test data as well. That's why we train on the full dataset here.

```python
from sklearn.tree import DecisionTreeClassifier
from functools import partial
from sklearn.metrics import f1_score

__inp = partial(__inp, exclude_columns=['target', 'relabel'])
__rel = partial(__out, target_column='relabel')
__f1_score = partial(f1_score, average="macro")

dt = DecisionTreeClassifier(max_depth=None, min_samples_leaf=1, min_impurity_split=0)
dt.fit(__inp(df_tree), __rel(df_tree))

print(f"This should show that we've completely overfitted the relabels (i.e. F1 score == 1.0). \nSo we've mimicked the RandomForest's behaviour perfectly!")
print(classification_report(__rel(df_train_tree), dt.predict(__inp(df_train_tree))))
assert __f1_score(__rel(df_train_tree), dt.predict(__inp(df_train_tree))) == 1.0

print("\n\n")
print(f"This shows the performance on the actual target values of the test set (never seen).")
print(classification_report(__out(df_test_tree), dt.predict(__inp(df_test_tree))))
assert __f1_score(__out(df_test), rf.predict(__inp(df_test_tree))) == __f1_score(__out(df_test), dt.predict(__inp(df_test_tree)))
```

    This should show that we've completely overfitted the relabels (i.e. F1 score == 1.0).
    So we've mimicked the RandomForest's behaviour perfectly!
                  precision    recall  f1-score   support
               0       1.00      1.00      1.00        40
               1       1.00      1.00      1.00        29
               2       1.00      1.00      1.00        36
        accuracy                           1.00       105
       macro avg       1.00      1.00      1.00       105
    weighted avg       1.00      1.00      1.00       105

    This shows the performance on the actual target values of the test set (never seen).
                  precision    recall  f1-score   support
               0       1.00      1.00      1.00        10
               1       1.00      0.95      0.98        21
               2       0.93      1.00      0.97        14
        accuracy                           0.98        45
       macro avg       0.98      0.98      0.98        45
    weighted avg       0.98      0.98      0.98        45

So this did work: we have a DecisionTree that behaves exactly like the RandomForest. The only problem is that this applies only to the seen data (so it may be possible that the RandomForest generalizes well on new / other data, but the DecisionTree will not, because it is a perfect approximator trained only on the data available at the moment of training). We will test this in the following section.

Nevertheless, what tree did we get?

```bash
!apt-get update; apt-get -y install graphviz
!pip install dtreeviz
```

```python
from dtreeviz.trees import *

viz = dtreeviz(dt,
               __inp(df_tree),
               __out(df_tree),
               target_name='species',
               feature_names=__inp(df_test_tree).columns,
               class_names=dataset.target_names.tolist())
viz
```

## Does this approximation hold for unseen data?
As I've said before, the only problem is that this strategy applies strictly to the seen/available data (so it may be possible that the RandomForest generalizes well on new / other data, but the DecisionTree will not, because it is a perfect approximator trained only on the data available at the moment of training). To test this, we will do a three-way split, leaving the third part unseen by the DecisionTree, and replicate the experiment above.

```python
from sklearn.model_selection import train_test_split

df_train, df_rest = train_test_split(df, test_size=0.3)
df_test, df_future = train_test_split(df_rest, test_size=0.5)
df_all = pd.concat((df_train, df_test))
df_train.shape, df_test.shape, df_future.shape, df_test.shape[0] / (df_train.shape[0] + df_test.shape[0] + df_future.shape[0])
```

    ((105, 5), (22, 5), (23, 5), 0.14666666666666667)

We now have:

- df_train — 70% of the data
- df_test — 15% of the data
- df_all = df_train + df_test
- df_future — 15% of (simulated) future data

```python
# train a "generalizable" RandomForest
rf = RandomForestClassifier(n_estimators=100, max_depth=5, n_jobs=-1)
rf.fit(__inp(df_train), __out(df_train))
print(f"RandomForest performance:")
print(classification_report(__out(df_test), rf.predict(__inp(df_test))))

# relabel **current** data
df_train_tree = relable(df_train, rf)
df_test_tree = relable(df_test, rf)
df_tree = relable(df_all, rf)

# train the DecisionTree approximator
dt = DecisionTreeClassifier(max_depth=None, min_samples_leaf=1, min_impurity_split=0)
dt.fit(__inp(df_tree), __rel(df_tree))

print("\n\n")
print(f"This should show that we've completely overfitted the predictions (i.e. F1 score == 1.0). \nSo we've mimicked the RandomForest's behaviour perfectly!")
print(classification_report(__rel(df_train_tree), dt.predict(__inp(df_train_tree))))
assert __f1_score(__rel(df_train_tree), dt.predict(__inp(df_train_tree))) == 1.0

print("\n\n")
print(f"This shows the performance on the actual target values of the test set and \nthat they are equal to the performance of the RandomForest model on the same data.")
print(classification_report(__out(df_test_tree), dt.predict(__inp(df_test_tree))))
assert __f1_score(__out(df_test), rf.predict(__inp(df_test_tree))) == __f1_score(__out(df_test), dt.predict(__inp(df_test_tree)))
```

    RandomForest performance:
                  precision    recall  f1-score   support
               0       1.00      1.00      1.00         9
               1       1.00      1.00      1.00         6
               2       1.00      1.00      1.00         7
        accuracy                           1.00        22
       macro avg       1.00      1.00      1.00        22
    weighted avg       1.00      1.00      1.00        22

    This should show that we've completely overfitted the predictions (i.e. F1 score == 1.0).
    So we've mimicked the RandomForest's behaviour perfectly!
                  precision    recall  f1-score   support
               0       1.00      1.00      1.00        37
               1       1.00      1.00      1.00        33
               2       1.00      1.00      1.00        35
        accuracy                           1.00       105
       macro avg       1.00      1.00      1.00       105
    weighted avg       1.00      1.00      1.00       105

    This shows the performance on the actual target values of the test set and
    that they are equal to the performance of the RandomForest model on the same data.
                  precision    recall  f1-score   support
               0       1.00      1.00      1.00         9
               1       1.00      1.00      1.00         6
               2       1.00      1.00      1.00         7
        accuracy                           1.00        22
       macro avg       1.00      1.00      1.00        22
    weighted avg       1.00      1.00      1.00        22

Let's see how we do on the future data now!
print("Random Forest performance on future data:") print(classification_report(__out(df_future), rf.predict(__inp(df_future)))) print("DecisionTree aproximator on future data") print(classification_report(__out(df_future), dt.predict(__inp(df_future)))) Random Forest performance on future data: precision recall f1-score support 0 1.00 1.00 1.00 4 1 1.00 0.82 0.90 11 2 0.80 1.00 0.89 8 accuracy 0.91 23 macro avg 0.93 0.94 0.93 23 weighted avg 0.93 0.91 0.91 23 DecisionTree aproximator on future data precision recall f1-score support 0 1.00 1.00 1.00 4 1 1.00 0.82 0.90 11 2 0.80 1.00 0.89 8 accuracy 0.91 23 macro avg 0.93 0.94 0.93 23 weighted avg 0.93 0.91 0.91 23 From the performance of the two models above, you can see that they indeed reach the same performance (quite surprisingly I would say). Now, the code above is not determinstic so if you run all the cells from the beggining up until this point multiple times, you will see that the RandomForest has a different accuracy each time. Having said that, this particular comparision that we are interested (the performance of the RandomForest on the future data, compared on the perfirmance of the DecisionTree aproximator in the future data) is almost always the same. Almost (9 ot of 10), but not always. I’ve seen ocasional runs where the RandomForest outperformed slightly the DecisionTree. # Using a more challenging dataset The iris dataset is a toy dataset from scikit-learn because it has only 150 datapoints and very few lables. To thest the above approach more thoroughly we need to use a more plausible dataset (still for classification, to keep things consistent) with lots more features and datapoints. For this we’ve choosen the forest covertype dataset where we have 581012 datapoints, each with 54 features describing some 30x30m measurements of a plot of land. We need to predict the correct category of vegetation for each plot. 
```python
from sklearn.datasets import fetch_covtype
import pandas as pd

dataset = fetch_covtype()
df = pd.DataFrame(data=dataset.data, columns=["Elevation", "Aspect", "Slope", "Horizontal_Distance_To_Hydrology", "Vertical_Distance_To_Hydrology", "Horizontal_Distance_To_Roadways", "Hillshade_9am", "Hillshade_Noon", "Hillshade_3pm", "Horizontal_Distance_To_Fire_Points", "Wilderness_Area1", "Wilderness_Area2", "Wilderness_Area3", "Wilderness_Area4", "Soil_Type1", "Soil_Type2", "Soil_Type3", "Soil_Type4", "Soil_Type5", "Soil_Type6", "Soil_Type7", "Soil_Type8", "Soil_Type9", "Soil_Type10", "Soil_Type11", "Soil_Type12", "Soil_Type13", "Soil_Type14", "Soil_Type15", "Soil_Type16", "Soil_Type17", "Soil_Type18", "Soil_Type19", "Soil_Type20", "Soil_Type21", "Soil_Type22", "Soil_Type23", "Soil_Type24", "Soil_Type25", "Soil_Type26", "Soil_Type27", "Soil_Type28", "Soil_Type29", "Soil_Type30", "Soil_Type31", "Soil_Type32", "Soil_Type33", "Soil_Type34", "Soil_Type35", "Soil_Type36", "Soil_Type37", "Soil_Type38", "Soil_Type39", "Soil_Type40"])
df['target'] = dataset.target
```

*(First five rows of the resulting 55-column dataframe omitted: ten numeric terrain features, the one-hot Wilderness_Area and Soil_Type indicators, and the target class.)*

We will again save a future dataset for later use.
```python
from sklearn.model_selection import train_test_split

df_train, df_rest = train_test_split(df, test_size=0.3)
df_test, df_future = train_test_split(df_rest, test_size=0.5)
df_all = pd.concat((df_train, df_test))
df_train.shape, df_test.shape, df_future.shape, df_test.shape[0] / (df_train.shape[0] + df_test.shape[0] + df_future.shape[0])
```

    ((406708, 55), (87152, 55), (87152, 55), 0.1500003442269695)

```python
# train a "generalizable" RandomForest
rf = RandomForestClassifier(n_estimators=100, max_depth=5, n_jobs=-1)
rf.fit(__inp(df_train), __out(df_train))
print(f"RandomForest performance:")
print(classification_report(__out(df_test), rf.predict(__inp(df_test))))

# relabel **current** data
df_train_tree = relable(df_train, rf)
df_test_tree = relable(df_test, rf)
df_tree = relable(df_all, rf)

# train the DecisionTree approximator
dt = DecisionTreeClassifier(max_depth=None, min_samples_leaf=1, min_impurity_split=0)
dt.fit(__inp(df_tree), __rel(df_tree))

print("\n\n")
print(f"This should show that we've completely overfitted the predictions (i.e. F1 score == 1.0). \nSo we've mimicked the RandomForest's behaviour perfectly!")
print(classification_report(__rel(df_train_tree), dt.predict(__inp(df_train_tree))))
assert __f1_score(__rel(df_train_tree), dt.predict(__inp(df_train_tree))) == 1.0

print("\n\n")
print(f"This shows the performance on the actual target values of the test set and \nthat they are equal to the performance of the RandomForest model on the same data.")
print(classification_report(__out(df_test_tree), dt.predict(__inp(df_test_tree))))
assert __f1_score(__out(df_test), rf.predict(__inp(df_test_tree))) == __f1_score(__out(df_test), dt.predict(__inp(df_test_tree)))
```

    RandomForest performance:
                  precision    recall  f1-score   support
               1       0.64      0.74      0.69     31900
               2       0.71      0.76      0.74     42493
               3       0.62      0.65      0.63      5254
               4       0.00      0.00      0.00       414
               5       0.00      0.00      0.00      1397
               6       0.00      0.00      0.00      2673
               7       0.00      0.00      0.00      3021
        accuracy                           0.68     87152
       macro avg       0.28      0.31      0.29     87152
    weighted avg       0.62      0.68      0.65     87152

    This should show that we've completely overfitted the predictions (i.e. F1 score == 1.0).
    So we've mimicked the RandomForest's behaviour perfectly!
                  precision    recall  f1-score   support
               1       1.00      1.00      1.00    169056
               2       1.00      1.00      1.00    211823
               3       1.00      1.00      1.00     25829
        accuracy                           1.00    406708
       macro avg       1.00      1.00      1.00    406708
    weighted avg       1.00      1.00      1.00    406708

    This shows the performance on the actual target values of the test set and
    that they are equal to the performance of the RandomForest model on the same data.
                  precision    recall  f1-score   support
               1       0.64      0.74      0.69     31900
               2       0.71      0.76      0.74     42493
               3       0.62      0.65      0.63      5254
               4       0.00      0.00      0.00       414
               5       0.00      0.00      0.00      1397
               6       0.00      0.00      0.00      2673
               7       0.00      0.00      0.00      3021
        accuracy                           0.68     87152
       macro avg       0.28      0.31      0.29     87152
    weighted avg       0.62      0.68      0.65     87152

(Note that the overfit report lists only classes 1–3: this shallow RandomForest never predicts the other four classes, so the relabels contain only three distinct values.)

Now that we have everything prepared, let's just test what happens with the two models on this more plausible dataset.
print("Random Forest performance on future data:") print(classification_report(__out(df_future), rf.predict(__inp(df_future)))) print("DecisionTree aproximator on future data") print(classification_report(__out(df_future), dt.predict(__inp(df_future)))) Random Forest performance on future data: precision recall f1-score support 1 0.65 0.74 0.69 31869 2 0.72 0.76 0.74 42488 3 0.62 0.64 0.63 5331 4 0.00 0.00 0.00 402 5 0.00 0.00 0.00 1389 6 0.00 0.00 0.00 2597 7 0.00 0.00 0.00 3076 accuracy 0.68 87152 macro avg 0.28 0.31 0.29 87152 weighted avg 0.62 0.68 0.65 87152 DecisionTree aproximator on future data precision recall f1-score support 1 0.65 0.74 0.69 31869 2 0.72 0.76 0.74 42488 3 0.62 0.64 0.63 5331 4 0.00 0.00 0.00 402 5 0.00 0.00 0.00 1389 6 0.00 0.00 0.00 2597 7 0.00 0.00 0.00 3076 accuracy 0.68 87152 macro avg 0.28 0.31 0.29 87152 weighted avg 0.62 0.68 0.65 87152 Again, we have exactly the same performance so this seems to work OK, but.. ## Is this generalizable to other tree-ensamble methods, like XGBoost? We will train a XGBoost model and try to reproduce the results above with it so see if is possible to distill the XGBoost model into a single decision tree. We will use the same covtype dataset since it’s large and tune the training and instantiation of the model to obtain a nice performant model. While in the previous experiment we didn’t really bother optimising the model, it is possible that a great generalizable model will show some differences when applying this process. from sklearn.datasets import fetch_covtype import pandas as pd dataset = fetch_covtype() df = pd.DataFrame(data=dataset.data, columns=["Elevation", "Aspect", "Slope", "Horizontal_Distance_To_Hydrology", "Vertical_Distance_To_Hydrology", "Horizontal_Distance_To_Roadways", "Hillshade_9am", "Hillshade_Noon", "Hillshade_3pm", "Horizontal_Distance_To_Fire_Points", "Wilderness_Area1", "Wilderness_Area2", "Wilderness_Area3", "Wilderness_Area4", "Soil_Type1", "Soil_Type2", "Soil_Type3", "Soil_Type4", "Soil_Type5", "Soil_Type6", "Soil_Type7", "Soil_Type8", "Soil_Type9", "Soil_Type10", "Soil_Type11", "Soil_Type12", "Soil_Type13", "Soil_Type14", "Soil_Type15", "Soil_Type16", "Soil_Type17", "Soil_Type18", "Soil_Type19", "Soil_Type20", "Soil_Type21", "Soil_Type22", "Soil_Type23", "Soil_Type24", "Soil_Type25", "Soil_Type26", "Soil_Type27", "Soil_Type28", "Soil_Type29", "Soil_Type30", "Soil_Type31", "Soil_Type32", "Soil_Type33", "Soil_Type34", "Soil_Type35", "Soil_Type36", "Soil_Type37", "Soil_Type38", "Soil_Type39", "Soil_Type40"]) df['target'] = dataset.target Elevation Aspect Slope Horizontal_Distance_To_Hydrology Vertical_Distance_To_Hydrology Horizontal_Distance_To_Roadways Hillshade_9am Hillshade_Noon Hillshade_3pm Horizontal_Distance_To_Fire_Points Wilderness_Area1 Wilderness_Area2 Wilderness_Area3 Wilderness_Area4 Soil_Type1 Soil_Type2 Soil_Type3 Soil_Type4 Soil_Type5 Soil_Type6 Soil_Type7 Soil_Type8 Soil_Type9 Soil_Type10 Soil_Type11 Soil_Type12 Soil_Type13 Soil_Type14 Soil_Type15 Soil_Type16 Soil_Type17 Soil_Type18 Soil_Type19 Soil_Type20 Soil_Type21 Soil_Type22 Soil_Type23 Soil_Type24 Soil_Type25 Soil_Type26 Soil_Type27 Soil_Type28 Soil_Type29 Soil_Type30 Soil_Type31 Soil_Type32 Soil_Type33 Soil_Type34 Soil_Type35 Soil_Type36 Soil_Type37 Soil_Type38 Soil_Type39 Soil_Type40 target 0 2596.0 51.0 3.0 258.0 0.0 510.0 221.0 232.0 148.0 6279.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
```python
from sklearn.model_selection import train_test_split

df_train, df_rest = train_test_split(df, test_size=0.3)
df_test, df_future = train_test_split(df_rest, test_size=0.5)
df_all = pd.concat((df_train, df_test))
df_train.shape, df_test.shape, df_future.shape, df_test.shape[0] / (df_train.shape[0] + df_test.shape[0] + df_future.shape[0])
```

    ((406708, 55), (87152, 55), (87152, 55), 0.1500003442269695)

```python
from xgboost import XGBClassifier

# train a "generalizable" XGBoost model
xgb = XGBClassifier(
    n_estimators=100,
    max_depth=20,
    n_jobs=-1,
    verbosity=1,
    booster='gbtree',
    objective='mlogloss',
    # num_class=len(np.unique(__out(df)))
)
xgb.fit(
    X=__inp(df_train),
    y=__out(df_train),
    verbose=True,
    eval_set=[(__inp(df_test), __out(df_test))],
    eval_metric=['mlogloss'],
    early_stopping_rounds=4,
)
print(f"XGBoost performance:")
print(classification_report(__out(df_test), xgb.predict(__inp(df_test))))

# relabel **current** data
df_train_tree = relable(df_train, xgb)
df_test_tree = relable(df_test, xgb)
df_tree = relable(df_all, xgb)

# train the DecisionTree approximator
dt = DecisionTreeClassifier(max_depth=None, min_samples_leaf=1, min_impurity_split=0)
dt.fit(__inp(df_tree), __rel(df_tree))

print("\n\n")
print(f"This should show that we've completely overfitted the predictions (i.e. F1 score == 1.0). \nSo we've mimicked the XGBoost model's behaviour perfectly!")
print(classification_report(__rel(df_train_tree), dt.predict(__inp(df_train_tree))))
assert __f1_score(__rel(df_train_tree), dt.predict(__inp(df_train_tree))) == 1.0

print("\n\n")
print(f"This shows the performance on the actual target values of the test set and \nthat they are equal to the performance of the XGBoost model on the same data.")
print(classification_report(__out(df_test_tree), dt.predict(__inp(df_test_tree))))
assert __f1_score(__out(df_test), xgb.predict(__inp(df_test_tree))) == __f1_score(__out(df_test), dt.predict(__inp(df_test_tree)))
```

    [0] validation_0-mlogloss:1.76519
    Will train until validation_0-mlogloss hasn't improved in 4 rounds.
    [1] validation_0-mlogloss:1.62483
    [2] validation_0-mlogloss:1.50959
    [3] validation_0-mlogloss:1.41254
    ...
    [96] validation_0-mlogloss:0.497148
    [97] validation_0-mlogloss:0.496566
    [98] validation_0-mlogloss:0.495456
    [99] validation_0-mlogloss:0.494433

    XGBoost performance:
                  precision    recall  f1-score   support
               1       0.78      0.77      0.77     31761
               2       0.80      0.84      0.82     42533
               3       0.75      0.86      0.80      5299
               4       0.88      0.78      0.83       423
               5       0.81      0.23      0.36      1392
               6       0.68      0.38      0.49      2640
               7       0.90      0.75      0.81      3104
        accuracy                           0.79     87152
       macro avg       0.80      0.66      0.70     87152
    weighted avg       0.79      0.79      0.78     87152

    This should show that we've completely overfitted the predictions (i.e. F1 score == 1.0).
    So we've mimicked the XGBoost model's behaviour perfectly!
                  precision    recall  f1-score   support
               1       1.00      1.00      1.00    145591
               2       1.00      1.00      1.00    210335
               3       1.00      1.00      1.00     28306
               4       1.00      1.00      1.00      1826
               5       1.00      1.00      1.00      1975
               6       1.00      1.00      1.00      6585
               7       1.00      1.00      1.00     12090
        accuracy                           1.00    406708
       macro avg       1.00      1.00      1.00    406708
    weighted avg       1.00      1.00      1.00    406708

    This shows the performance on the actual target values of the test set and
    that they are equal to the performance of the XGBoost model on the same data.
                  precision    recall  f1-score   support
               1       0.78      0.77      0.77     31761
               2       0.80      0.84      0.82     42533
               3       0.75      0.86      0.80      5299
               4       0.88      0.78      0.83       423
               5       0.81      0.23      0.36      1392
               6       0.68      0.38      0.49      2640
               7       0.90      0.75      0.81      3104
        accuracy                           0.79     87152
       macro avg       0.80      0.66      0.70     87152
    weighted avg       0.79      0.79      0.78     87152

```python
print("XGBoost performance on future data:")
print(classification_report(__out(df_future), xgb.predict(__inp(df_future))))
print("DecisionTree approximator on future data")
print(classification_report(__out(df_future), dt.predict(__inp(df_future))))
```

    XGBoost performance on future data:
                  precision    recall  f1-score   support
               1       0.78      0.76      0.77     31881
               2       0.79      0.84      0.82     42520
               3       0.76      0.86      0.81      5326
               4       0.85      0.73      0.79       421
               5       0.82      0.23      0.35      1358
               6       0.70      0.38      0.49      2623
               7       0.88      0.74      0.80      3023
        accuracy                           0.79     87152
       macro avg       0.80      0.65      0.69     87152
    weighted avg       0.79      0.79      0.78     87152

    DecisionTree approximator on future data
                  precision    recall  f1-score   support
               1       0.78      0.76      0.77     31881
               2       0.79      0.84      0.82     42520
               3       0.75      0.86      0.80      5326
               4       0.84      0.73      0.79       421
               5       0.81      0.23      0.36      1358
               6       0.69      0.37      0.48      2623
               7       0.88      0.74      0.80      3023
        accuracy                           0.79     87152
       macro avg       0.79      0.65      0.69     87152
    weighted avg       0.78      0.79      0.78     87152

If you look closely you will see some differences in performance between the two, but overall they are actually pretty close!

# Conclusions

Yes, it seems plausible that you can indeed approximate a tree ensemble (either a RandomForest or an XGBoost model, and most likely a GradientBoostedTree or a LightGBM model, but I haven't tested these) with a single tree that you can later inspect and debug. There may be some performance drops between the two, but in my experiments the distillation process mostly yielded the same results, regardless of whether the teacher was an XGBoost model or a RandomForest, or whether we had a big or a small dataset to train on.

One piece of advice: if you go down this route, you need to compare the performance of the two models on a fresh dataset (either kept aside from the beginning or gathered anew), because there can be a difference between the two.

If you combine this with model exporting to code, you get quite a nice dependency-free deployment process for a large and powerful model.

In a future post I'd also like to discuss the GRANT option mentioned in the same HackerNews thread, to see how it compares and performs: Graft, Reassemble, Answer delta, Neighbour sensitivity, Training delta (GRANT).
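As an illustration of the "model exporting to code" idea mentioned above: scikit-learn can render the distilled tree as plain if/else rules. A minimal sketch (my addition, not from the original post; `export_text` is the standard scikit-learn helper, and the depth cap is only there to keep the dump readable):

```python
from sklearn.tree import export_text

# Dump the distilled tree as nested if/else rules; each leaf is a region
# of feature space on which the ensemble's prediction is constant.
rules = export_text(dt, feature_names=list(__inp(df_tree).columns), max_depth=3)
print(rules)
```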
2021-06-22 18:07:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28014567494392395, "perplexity": 3390.766698052335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488519183.85/warc/CC-MAIN-20210622155328-20210622185328-00053.warc.gz"}
http://www.nag.com/numeric/CL/nagdoc_cl23/html/F02/f02intro.html
# NAG Library Chapter Introduction: f02 – Eigenvalues and Eigenvectors

## 1 Scope of the Chapter

This chapter provides functions for various types of matrix eigenvalue problem:

- standard eigenvalue problems (finding eigenvalues and eigenvectors of a square matrix $A$);
- singular value problems (finding singular values and singular vectors of a rectangular matrix $A$);
- generalized eigenvalue problems (finding eigenvalues and eigenvectors of a matrix pencil $A-\lambda B$).

Functions are provided for both real and complex data.

The majority of functions for these problems can be found in Chapter f08, which contains software derived from LAPACK (see Anderson et al. (1999)). However, you should read the introduction to this chapter before turning to Chapter f08, especially if you are a new user. Chapter f12 contains functions for large sparse eigenvalue problems, although one such function is also available in this chapter. Chapters f02 and f08 contain Black Box (or Driver) functions that enable many problems to be solved by a call to a single function, and the decision trees in Section 4 direct you to the most appropriate functions in Chapters f02 and f08.

## 2 Background to the Problems

Here we describe the different types of problem which can be tackled by the functions in this chapter, and give a brief outline of the methods used to solve them. If you have one specific type of problem to solve, you need only read the relevant sub-section and then turn to Section 3. Consult a standard textbook for a more thorough discussion, for example Golub and Van Loan (1996) or Parlett (1998).

In each sub-section, we first describe the problem in terms of real matrices. The changes needed to adapt the discussion to complex matrices are usually simple and obvious: a matrix transpose such as $Q^{\mathrm{T}}$ must be replaced by its conjugate transpose $Q^{\mathrm{H}}$; symmetric matrices must be replaced by Hermitian matrices, and orthogonal matrices by unitary matrices. Any additional changes are noted at the end of the sub-section.

### 2.1 Standard Eigenvalue Problems

Let $A$ be a square matrix of order $n$. The standard eigenvalue problem is to find eigenvalues, $\lambda$, and corresponding eigenvectors, $x\ne 0$, such that

$$Ax=\lambda x. \tag{1}$$

(The phrase 'eigenvalue problem' is sometimes abbreviated to eigenproblem.)

#### 2.1.1 Standard symmetric eigenvalue problems

If $A$ is real symmetric, the eigenvalue problem has many desirable features, and it is advisable to take advantage of symmetry whenever possible. The eigenvalues $\lambda$ are all real, and the eigenvectors can be chosen to be mutually orthogonal. That is, we can write

$$A z_i = \lambda_i z_i \quad \text{for } i=1,2,\dots,n$$

or equivalently:

$$AZ = Z\Lambda \tag{2}$$

where $\Lambda$ is a real diagonal matrix whose diagonal elements $\lambda_i$ are the eigenvalues, and $Z$ is a real orthogonal matrix whose columns $z_i$ are the eigenvectors. This implies that $z_i^{\mathrm{T}} z_j = 0$ if $i \ne j$, and $\|z_i\|_2 = 1$. Equation (2) can be rewritten

$$A = Z \Lambda Z^{\mathrm{T}}. \tag{3}$$

This is known as the eigen-decomposition or spectral factorization of $A$.

Eigenvalues of a real symmetric matrix are well-conditioned, that is, they are not unduly sensitive to perturbations in the original matrix $A$. The sensitivity of an eigenvector depends on how small the gap is between its eigenvalue and any other eigenvalue: the smaller the gap, the more sensitive the eigenvector.
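As a concrete illustration of the spectral factorization (3), here is a minimal NumPy sketch; this is not the NAG interface, just the same factorization computed with `numpy.linalg.eigh`:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B + B.T                        # a real symmetric matrix

lam, Z = np.linalg.eigh(A)         # real eigenvalues, orthonormal eigenvectors
assert np.allclose(Z @ np.diag(lam) @ Z.T, A)  # A = Z Lambda Z^T
assert np.allclose(Z.T @ Z, np.eye(5))         # Z is orthogonal
```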
More details on the accuracy of computed eigenvalues and eigenvectors are given in the function documents, and in the f08 Chapter Introduction.

For dense or band matrices, the computation of eigenvalues and eigenvectors proceeds in the following stages:

1. $A$ is reduced to a symmetric tridiagonal matrix $T$ by an orthogonal similarity transformation: $A = Q T Q^{\mathrm{T}}$, where $Q$ is orthogonal. (A tridiagonal matrix is zero except for the main diagonal and the first subdiagonal and superdiagonal on either side.) $T$ has the same eigenvalues as $A$ and is easier to handle.
2. Eigenvalues and eigenvectors of $T$ are computed as required. If all eigenvalues (and optionally eigenvectors) are required, they are computed by the $QR$ algorithm, which effectively factorizes $T$ as $T = S \Lambda S^{\mathrm{T}}$, where $S$ is orthogonal, or by the divide-and-conquer method. If only selected eigenvalues are required, they are computed by bisection, and if selected eigenvectors are required, they are computed by inverse iteration. If $s$ is an eigenvector of $T$, then $Qs$ is an eigenvector of $A$.

All the above remarks also apply – with the obvious changes – to the case when $A$ is a complex Hermitian matrix. The eigenvectors are complex, but the eigenvalues are all real, and so is the tridiagonal matrix $T$.

#### 2.1.2 Standard nonsymmetric eigenvalue problems

A real nonsymmetric matrix $A$ may have complex eigenvalues, occurring as complex conjugate pairs. If $x$ is an eigenvector corresponding to a complex eigenvalue $\lambda$, then the complex conjugate vector $\bar{x}$ is the eigenvector corresponding to the complex conjugate eigenvalue $\bar{\lambda}$. Note that the vector $x$ defined in equation (1) is sometimes called a right eigenvector; a left eigenvector $y$ is defined by

$$y^{\mathrm{H}} A = \lambda y^{\mathrm{H}} \quad \text{or} \quad A^{\mathrm{T}} y = \bar{\lambda} y.$$

Functions in this chapter only compute right eigenvectors (the usual requirement), but functions in Chapter f08 can compute left or right eigenvectors or both.

The eigenvalue problem can be solved via the Schur factorization of $A$, defined as

$$A = Z T Z^{\mathrm{T}},$$

where $Z$ is an orthogonal matrix and $T$ is a real upper quasi-triangular matrix, with the same eigenvalues as $A$. $T$ is called the Schur form of $A$. If all the eigenvalues of $A$ are real, then $T$ is upper triangular, and its diagonal elements are the eigenvalues of $A$. If $A$ has complex conjugate pairs of eigenvalues, then $T$ has $2$ by $2$ diagonal blocks, whose eigenvalues are the complex conjugate pairs of eigenvalues of $A$. (The structure of $T$ is simpler if the matrices are complex – see below.) For example, the following matrix is in quasi-triangular form

$$\begin{pmatrix} 1 & * & * & * \\ 0 & 2 & -1 & * \\ 0 & 1 & 2 & * \\ 0 & 0 & 0 & 3 \end{pmatrix}$$

and has eigenvalues $1$, $2 \pm i$, and $3$. (The elements indicated by '$*$' may take any values.)

The columns of $Z$ are called the Schur vectors. For each $k$ ($1 \le k \le n$), the first $k$ columns of $Z$ form an orthonormal basis for the invariant subspace corresponding to the first $k$ eigenvalues on the diagonal of $T$. (An invariant subspace (for $A$) is a subspace $S$ such that for any vector $v$ in $S$, $Av$ is also in $S$.) Because this basis is orthonormal, it is preferable in many applications to compute Schur vectors rather than eigenvectors. It is possible to order the Schur factorization so that any desired set of $k$ eigenvalues occupy the $k$ leading positions on the diagonal of $T$, and functions for this purpose are provided in Chapter f08.
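A small SciPy sketch of the real Schur form just described (again an illustration, not a NAG routine; `scipy.linalg.schur` computes the same factorization):

```python
import numpy as np
from scipy.linalg import schur

# A real matrix with one complex conjugate pair of eigenvalues and one real one
A = np.array([[0.0, -2.0, 1.0],
              [1.0,  0.0, 3.0],
              [0.0,  0.0, 4.0]])

T, Z = schur(A, output='real')   # A = Z T Z^T, T upper quasi-triangular
assert np.allclose(Z @ T @ Z.T, A)
print(np.round(T, 3))            # a 2x2 diagonal block carries the conjugate pair
```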
Note that if $A$ is symmetric, the Schur vectors are the same as the eigenvectors, but if $A$ is nonsymmetric, they are distinct, and the Schur vectors, being orthonormal, are often more satisfactory to work with in numerical computation.

Eigenvalues and eigenvectors of a nonsymmetric matrix may be ill-conditioned, that is, sensitive to perturbations in $A$. Chapter f08 contains functions which compute or estimate the condition numbers of eigenvalues and eigenvectors, and the f08 Chapter Introduction gives more details about the error analysis of nonsymmetric eigenproblems. The accuracy with which eigenvalues and eigenvectors can be obtained is often improved by balancing a matrix. This is discussed further in Section 3.4.

Computation of eigenvalues, eigenvectors or the Schur factorization proceeds in the following stages:

1. $A$ is reduced to an upper Hessenberg matrix $H$ by an orthogonal similarity transformation: $A = Q H Q^{\mathrm{T}}$, where $Q$ is orthogonal. (An upper Hessenberg matrix is zero below the first subdiagonal.) $H$ has the same eigenvalues as $A$, and is easier to handle.
2. The upper Hessenberg matrix $H$ is reduced to Schur form $T$ by the $QR$ algorithm, giving the Schur factorization $H = S T S^{\mathrm{T}}$. The eigenvalues of $A$ are obtained from the diagonal blocks of $T$. The matrix $Z$ of Schur vectors (if required) is computed as $Z = QS$.
3. After the eigenvalues have been found, eigenvectors may be computed, if required, in two different ways. Eigenvectors of $H$ can be computed by inverse iteration, and then pre-multiplied by $Q$ to give eigenvectors of $A$; this approach is usually preferred if only a few eigenvectors are required. Alternatively, eigenvectors of $T$ can be computed by back-substitution, and pre-multiplied by $Z$ to give eigenvectors of $A$.

All the above remarks also apply – with the obvious changes – to the case when $A$ is a complex matrix. The eigenvalues are in general complex, so there is no need for special treatment of complex conjugate pairs, and the Schur form $T$ is simply a complex upper triangular matrix.

### 2.2 The Singular Value Decomposition

The singular value decomposition (SVD) of a real $m$ by $n$ matrix $A$ is given by

$$A = U \Sigma V^{\mathrm{T}},$$

where $U$ and $V$ are orthogonal and $\Sigma$ is an $m$ by $n$ diagonal matrix with real diagonal elements, $\sigma_i$, such that

$$\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min(m,n)} \ge 0.$$

The $\sigma_i$ are the singular values of $A$ and the first $\min(m,n)$ columns of $U$ and $V$ are, respectively, the left and right singular vectors of $A$. The singular values and singular vectors satisfy

$$A v_i = \sigma_i u_i \quad \text{and} \quad A^{\mathrm{T}} u_i = \sigma_i v_i$$

where $u_i$ and $v_i$ are the $i$th columns of $U$ and $V$ respectively.

The singular value decomposition of $A$ is closely related to the eigen-decompositions of the symmetric matrices $A^{\mathrm{T}}A$ or $AA^{\mathrm{T}}$, because:

$$A^{\mathrm{T}}A v_i = \sigma_i^2 v_i \quad \text{and} \quad AA^{\mathrm{T}} u_i = \sigma_i^2 u_i.$$

However, these relationships are not recommended as a means of computing singular values or vectors unless $A$ is sparse and functions from Chapter f12 are to be used.

If $U_k$, $V_k$ denote the leading $k$ columns of $U$ and $V$ respectively, and if $\Sigma_k$ denotes the leading principal submatrix of $\Sigma$, then

$$A_k \equiv U_k \Sigma_k V_k^{\mathrm{T}}$$

is the best rank-$k$ approximation to $A$ in both the $2$-norm and the Frobenius norm.

Singular values are well-conditioned; that is, they are not unduly sensitive to perturbations in $A$.
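The decomposition and the rank-$k$ approximation property just stated can be illustrated in a few lines of NumPy (a sketch for illustration only, not a NAG call):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U diag(s) V^T
assert np.allclose(U * s @ Vt, A)

k = 2
A_k = U[:, :k] * s[:k] @ Vt[:k]    # best rank-k approximation
# its 2-norm error equals the first discarded singular value sigma_{k+1}
assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])
```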
The sensitivity of a singular vector depends on how small the gap is between its singular value and any other singular value: the smaller the gap, the more sensitive the singular vector. More details on the accuracy of computed singular values and vectors are given in the function documents and in the f08 Chapter Introduction.

The singular value decomposition is useful for the numerical determination of the rank of a matrix, and for solving linear least squares problems, especially when they are rank-deficient (or nearly so). See Chapter f04.

Computation of singular values and vectors proceeds in the following stages:

1. $A$ is reduced to an upper bidiagonal matrix $B$ by an orthogonal transformation $A = U_1 B V_1^{\mathrm{T}}$, where $U_1$ and $V_1$ are orthogonal. (An upper bidiagonal matrix is zero except for the main diagonal and the first superdiagonal.) $B$ has the same singular values as $A$, and is easier to handle.
2. The SVD of the bidiagonal matrix $B$ is computed as $B = U_2 \Sigma V_2^{\mathrm{T}}$, where $U_2$ and $V_2$ are orthogonal and $\Sigma$ is diagonal as described above. Then in the SVD of $A$, $U = U_1 U_2$ and $V = V_1 V_2$.

All the above remarks also apply – with the obvious changes – to the case when $A$ is a complex matrix. The singular vectors are complex, but the singular values are real and non-negative, and the bidiagonal matrix $B$ is also real.

### 2.3 Generalized Eigenvalue Problems

Let $A$ and $B$ be square matrices of order $n$. The generalized eigenvalue problem is to find eigenvalues, $\lambda$, and corresponding eigenvectors, $x\ne 0$, such that

$$Ax = \lambda Bx. \tag{4}$$

For given $A$ and $B$, the set of all matrices of the form $A-\lambda B$ is called a pencil, and $\lambda$ and $x$ are said to be an eigenvalue and eigenvector of the pencil $A-\lambda B$.

When $B$ is nonsingular, equation (4) is mathematically equivalent to $(B^{-1}A)x = \lambda x$, and when $A$ is nonsingular, it is equivalent to $(A^{-1}B)x = (1/\lambda)x$. Thus, in theory, if one of the matrices $A$ or $B$ is known to be nonsingular, the problem could be reduced to a standard eigenvalue problem. However, for this reduction to be satisfactory from the point of view of numerical stability, it is necessary not only that $B$ (or $A$) should be nonsingular, but that it should be well-conditioned with respect to inversion. The nearer $B$ is to singularity, the more unsatisfactory $B^{-1}A$ will be as a vehicle for determining the required eigenvalues. Well-determined eigenvalues of the original problem (4) may be poorly determined even by the correctly rounded version of $B^{-1}A$.

We consider first a special class of problems in which $B$ is known to be nonsingular, and then return to the general case in the following sub-section.

#### 2.3.1 Generalized symmetric-definite eigenvalue problems

If $A$ and $B$ are symmetric and $B$ is positive definite, then the generalized eigenvalue problem has desirable properties similar to those of the standard symmetric eigenvalue problem. The eigenvalues are all real, and the eigenvectors, while not orthogonal in the usual sense, satisfy the relations $z_i^{\mathrm{T}} B z_j = 0$ for $i \ne j$ and can be normalized so that $z_i^{\mathrm{T}} B z_i = 1$. Note that it is not enough for $A$ and $B$ to be symmetric; $B$ must also be positive definite, which implies nonsingularity. Eigenproblems with these properties are referred to as symmetric-definite problems.
If $\Lambda$ is the diagonal matrix whose diagonal elements are the eigenvalues, and $Z$ is the matrix whose columns are the eigenvectors, then

$Z^{\mathrm{T}} A Z = \Lambda \quad \text{and} \quad Z^{\mathrm{T}} B Z = I.$

To compute eigenvalues and eigenvectors, the problem can be reduced to a standard symmetric eigenvalue problem, using the Cholesky factorization of $B$ as $L{L}^{\mathrm{T}}$ or ${U}^{\mathrm{T}}U$ (see Chapter f07). Note, however, that this reduction does implicitly involve the inversion of $B$, and hence this approach should not be used if $B$ is ill-conditioned with respect to inversion. For example, with $B=L{L}^{\mathrm{T}}$, we have

$Az = \lambda Bz \quad \Leftrightarrow \quad \left( L^{-1} A L^{-\mathrm{T}} \right) \left( L^{\mathrm{T}} z \right) = \lambda \left( L^{\mathrm{T}} z \right).$

Hence the eigenvalues of $Az=\lambda Bz$ are those of $Cy=\lambda y$, where $C$ is the symmetric matrix $C={L}^{-1}A{L}^{-\mathrm{T}}$ and $y={L}^{\mathrm{T}}z$. The standard symmetric eigenproblem $Cy=\lambda y$ may be solved by the methods described in Section 2.1.1. The eigenvectors $z$ of the original problem may be recovered by computing $z={L}^{-\mathrm{T}}y$.

Most of the functions which solve this class of problems can also solve the closely related problems

$ABx = \lambda x \quad \text{or} \quad BAx = \lambda x,$

where again $A$ and $B$ are symmetric and $B$ is positive definite. See the function documents for details.

All the above remarks also apply – with the obvious changes – to the case when $A$ and $B$ are complex Hermitian matrices. Such problems are called Hermitian-definite. The eigenvectors are complex, but the eigenvalues are all real.

#### 2.3.2  Generalized nonsymmetric eigenvalue problems

Any generalized eigenproblem which is not symmetric-definite with well-conditioned $B$ must be handled as if it were a general nonsymmetric problem. If $B$ is singular, the problem has infinite eigenvalues. These are not a problem; they are equivalent to zero eigenvalues of the problem $Bx=\mu Ax$. Computationally they appear as very large values. If $A$ and $B$ are both singular and have a common null space, then $A-\lambda B$ is singular for all $\lambda$; in other words, any value $\lambda$ can be regarded as an eigenvalue. Pencils with this property are called singular.

As with standard nonsymmetric problems, a real problem may have complex eigenvalues, occurring as complex conjugate pairs. The generalized eigenvalue problem can be solved via the generalized Schur factorization of $A$ and $B$:

$A = Q U Z^{\mathrm{T}}, \quad B = Q V Z^{\mathrm{T}},$

where $Q$ and $Z$ are orthogonal, $V$ is upper triangular, and $U$ is upper quasi-triangular (defined just as in Section 2.1.2). If all the eigenvalues are real, then $U$ is upper triangular; the eigenvalues are given by ${\lambda}_{i}={u}_{ii}/{v}_{ii}$. If there are complex conjugate pairs of eigenvalues, then $U$ has $2$ by $2$ diagonal blocks.

Eigenvalues and eigenvectors of a generalized nonsymmetric problem may be ill-conditioned; that is, sensitive to perturbations in $A$ or $B$. Particular care must be taken if, for some $i$, ${u}_{ii}={v}_{ii}=0$, or in practical terms if ${u}_{ii}$ and ${v}_{ii}$ are both small; this means that the pencil is singular, or approximately so. Not only is the particular value ${\lambda}_{i}$ undetermined, but also no reliance can be placed on any of the computed eigenvalues. See also the function documents.

Computation of eigenvalues and eigenvectors proceeds in the following stages.

1. The pencil $A-\lambda B$ is reduced by an orthogonal transformation to a pencil $H-\lambda K$ in which $H$ is upper Hessenberg and $K$ is upper triangular: $A={Q}_{1}H{Z}_{1}^{\mathrm{T}}$ and $B={Q}_{1}K{Z}_{1}^{\mathrm{T}}$.
The pencil $H-\lambda K$ has the same eigenvalues as $A-\lambda B$, and is easier to handle.

2. The upper Hessenberg matrix $H$ is reduced to upper quasi-triangular form, while $K$ is maintained in upper triangular form, using the $QZ$ algorithm. This gives the generalized Schur factorization: $H={Q}_{2}U{Z}_{2}^{\mathrm{T}}$ and $K={Q}_{2}V{Z}_{2}^{\mathrm{T}}$.

3. Eigenvectors of the pencil $U-\lambda V$ are computed (if required) by back-substitution, and pre-multiplied by ${Z}_{1}{Z}_{2}$ to give eigenvectors of $A-\lambda B$.

All the above remarks also apply – with the obvious changes – to the case when $A$ and $B$ are complex matrices. The eigenvalues are in general complex, so there is no need for special treatment of complex conjugate pairs, and the matrix $U$ in the generalized Schur factorization is simply a complex upper triangular matrix.

## 3  Recommendations on Choice and Use of Available Functions

### 3.1  Black Box Functions and General Purpose Functions

Functions in the NAG C Library for solving eigenvalue problems fall into two categories.

1. Black Box Functions: these are designed to solve a standard type of problem in a single call – for example, to compute all the eigenvalues and eigenvectors of a real symmetric matrix. You are recommended to use a black box function if there is one to meet your needs; refer to the decision tree in Section 4.1 or the index in Section 5.

2. General Purpose Functions: these perform the computational subtasks which make up the separate stages of the overall task, as described in Section 2 – for example, reducing a real symmetric matrix to tridiagonal form. General purpose functions are to be found, for historical reasons, some in this chapter, a few in Chapter f01, but most in Chapter f08. If there is no black box function that meets your needs, you will need to use one or more general purpose functions.

Here are some of the more likely reasons why you may need to do this:

• You wish to economize on storage for symmetric matrices (see Section 3.3).

• You wish to find selected eigenvalues or eigenvectors of a generalized symmetric-definite eigenproblem (see also Section 3.2).

The decision trees in Section 4.2 list the combinations of general purpose functions which are needed to solve many common types of problem. Sometimes a combination of a black box function and one or more general purpose functions will be the most convenient way to solve your problem: the black box function can be used to compute most of the results, and a general purpose function can be used to perform a subsidiary computation, such as computing condition numbers of eigenvalues and eigenvectors.

### 3.2  Computing Selected Eigenvalues and Eigenvectors

The decision trees and the function documents make a distinction between functions which compute all eigenvalues or eigenvectors, and functions which compute selected eigenvalues or eigenvectors; the two classes of function use different algorithms.

It is difficult to give clear guidance on which of these two classes of function to use in a particular case, especially with regard to computing eigenvectors. If you only wish to compute a very few eigenvectors, then a function for selected eigenvectors will be more economical, but if you want to compute a substantial subset (an old rule of thumb suggested more than 25%), then it may be more economical to compute all of them. Conversely, if you wish to compute all the eigenvectors of a sufficiently large symmetric tridiagonal matrix, the function for selected eigenvectors may be faster.
The choice depends on the properties of the matrix and on the computing environment; if it is critical, you should perform your own timing tests. For dense nonsymmetric eigenproblems, there are no algorithms provided for computing selected eigenvalues; it is always necessary to compute all the eigenvalues, but you can then select specific eigenvectors for computation by inverse iteration.

### 3.3  Storage Schemes for Symmetric Matrices

Functions which handle symmetric matrices are usually designed to use either the upper or lower triangle of the matrix; it is not necessary to store the whole matrix. If either the upper or lower triangle is stored conventionally in the upper or lower triangle of a two-dimensional array, the remaining elements of the array can be used to store other useful data. However, that is not always convenient, and if it is important to economize on storage, the upper or lower triangle can be stored in a one-dimensional array of length $n(n+1)/2$; in other words, the storage is almost halved. This storage format is referred to as packed storage. Functions designed for packed storage are usually less efficient, especially on high-performance computers, so there is a trade-off between storage and efficiency.

A band matrix is one whose nonzero elements are confined to a relatively small number of subdiagonals or superdiagonals on either side of the main diagonal. Algorithms can take advantage of bandedness to reduce the amount of work and storage required. Functions which take advantage of packed storage or bandedness are provided for both standard symmetric eigenproblems and generalized symmetric-definite eigenproblems.

### 3.4  Balancing for Nonsymmetric Eigenproblems

There are two preprocessing steps which one may perform on a nonsymmetric matrix $A$ in order to make its eigenproblem easier. Together they are referred to as balancing.

1. Permutation: this involves reordering the rows and columns to make $A$ more nearly upper triangular (and thus closer to Schur form): ${A}^{\prime}=PA{P}^{\mathrm{T}}$, where $P$ is a permutation matrix. If $A$ has a significant number of zero elements, this preliminary permutation can reduce the amount of work required, and also improve the accuracy of the computed eigenvalues. In the extreme case, if $A$ is permutable to upper triangular form, then no floating point operations are needed to reduce it to Schur form.

2. Scaling: a diagonal matrix $D$ is used to make the rows and columns of ${A}^{\prime}$ more nearly equal in norm: ${A}^{\prime\prime}=D{A}^{\prime}{D}^{-1}$. Scaling can make the matrix norm smaller with respect to the eigenvalues, and so possibly reduce the inaccuracy contributed by roundoff (see Chapter II/11 of Wilkinson and Reinsch (1971)).

Functions are provided in Chapter f08 for performing either or both of these preprocessing steps, and also for transforming computed eigenvectors or Schur vectors back to those of the original matrix. Black box functions in this chapter which compute the Schur factorization perform only the permutation step, since diagonal scaling is not in general an orthogonal transformation. The black box functions which compute eigenvectors perform both forms of balancing.

### 3.5  Non-uniqueness of Eigenvectors and Singular Vectors

Eigenvectors, as defined by equations (1) or (4), are not uniquely defined. If $x$ is an eigenvector, then so is $kx$ where $k$ is any nonzero scalar.
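Spelled out, for the standard problem this is the one-line observation (and the same computation works for the generalized problem (4)):

$Ax = \lambda x \quad \Longrightarrow \quad A(kx) = kAx = \lambda(kx), \qquad k \ne 0.$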
Eigenvectors computed by different algorithms, or on different computers, may appear to disagree completely, though in fact they differ only by a scalar factor (which may be complex). These differences should not be significant in any application in which the eigenvectors will be used, but they can arouse uncertainty about the correctness of computed results. Even if eigenvectors $x$ are normalized so that ${‖x‖}_{2}=1$, this is not sufficient to fix them uniquely, since they can still be multiplied by a scalar factor $k$ such that $\left|k\right|=1$. To counteract this inconvenience, most of the functions in this chapter, and in Chapter f08, normalize eigenvectors (and Schur vectors) so that ${‖x‖}_{2}=1$ and the component of $x$ with largest absolute value is real and positive. (There is still a possible indeterminacy if there are two components of equal largest absolute value – or in practice if they are very close – but this is rare.)

In symmetric problems the computed eigenvalues are sorted into ascending order, but in nonsymmetric problems the order in which the computed eigenvalues are returned is dependent on the detailed working of the algorithm and may be sensitive to rounding errors. The Schur form and Schur vectors depend on the ordering of the eigenvalues and this is another possible cause of non-uniqueness when they are computed. However, it must be stressed again that variations in the results from this cause should not be significant. (Functions in Chapter f08 can be used to transform the Schur form and Schur vectors so that the eigenvalues appear in any given order if this is important.)

In singular value problems, the left and right singular vectors $u$ and $v$ which correspond to a singular value $\sigma$ cannot be normalized independently: if $u$ is multiplied by a factor $k$ such that $\left|k\right|=1$, then $v$ must also be multiplied by $k$.

Non-uniqueness also occurs among eigenvectors which correspond to a multiple eigenvalue, or among singular vectors which correspond to a multiple singular value. In practice, this is more likely to be apparent as the extreme sensitivity of eigenvectors which correspond to a cluster of close eigenvalues (or of singular vectors which correspond to a cluster of close singular values).

## 4  Decision Trees

### 4.1  Black Box Functions

The decision tree for this section is divided into three sub-trees. Note: for the Chapter f08 functions there is generally a choice of simple and comprehensive function. The comprehensive functions return additional information such as condition and/or error estimates.

### Tree 1: Eigenvalues and Eigenvectors of Real Matrices

- Is the eigenproblem $Ax=\lambda Bx$?
  - Yes: Are $A$ and $B$ symmetric with $B$ positive definite and well-conditioned w.r.t. inversion?
    - Yes: Are eigenvalues only required?
      - Yes: f02adc
      - No: f02aec
    - No: f02bjc
  - No: the eigenproblem is $Ax=\lambda x$. Is $A$ symmetric?
    - Yes: Are eigenvalues only required?
      - Yes: f02aac
      - No: f02abc
    - No: Are eigenvalues only required?
      - Yes: f02afc
      - No: Is the Schur factorization required?
        - Yes: See Chapter f08
        - No: Are all eigenvectors required?
          - Yes: f02agc
          - No: f02ecc

### Tree 2: Eigenvalues and Eigenvectors of Complex Matrices

- Is the eigenproblem $Ax=\lambda Bx$?
  - Yes: See Chapter f08
  - No: the eigenproblem is $Ax=\lambda x$. Is $A$ Hermitian?
    - Yes: Are eigenvalues only required?
      - Yes: f02awc
      - No: f02axc
    - No: Are eigenvalues only required?
      - Yes: See Chapter f08
      - No: Is the Schur factorization required?
        - Yes: See Chapter f08
        - No: Are all eigenvectors required?
          - Yes: See Chapter f08
          - No: f02gcc
### Tree 3: Singular Values and Singular Vectors

- Is $A$ a complex matrix?
  - Yes: f02xec
  - No: f02wec

### 4.2  General Purpose Functions (Eigenvalues and Eigenvectors)

Functions for large sparse eigenvalue problems are to be found in Chapter f12; see the f12 Chapter Introduction. The decision tree for this section, addressing dense problems, is divided into eight sub-trees. As it is very unlikely that one of the functions in this section will be called on its own, the other functions required to solve a given problem are listed in the order in which they should be called.

### 4.3  General Purpose Functions (Singular Value Decomposition)

See Section 4.2 in the f08 Chapter Introduction. For real sparse matrices where only selected singular values are required (possibly with their singular vectors), functions from Chapter f12 may be applied to the symmetric matrix ${A}^{\mathrm{T}}A$; see Section 9 in nag_real_symm_sparse_eigensystem_iter (f12fbc).

## 5  Functionality Index

Black Box functions,
- complex eigenproblem, selected eigenvalues and eigenvectors: nag_complex_eigensystem_sel (f02gcc)
- complex Hermitian eigenproblem,
  - all eigenvalues: nag_hermitian_eigenvalues (f02awc)
  - all eigenvalues and eigenvectors: nag_hermitian_eigensystem (f02axc)
- complex singular value problem: nag_complex_svd (f02xec)
- real eigenproblem,
  - all eigenvalues: nag_real_eigenvalues (f02afc)
  - all eigenvalues and eigenvectors: nag_real_eigensystem (f02agc)
  - selected eigenvalues and eigenvectors: nag_real_eigensystem_sel (f02ecc)
- real generalized eigenproblem, all eigenvalues and optionally eigenvectors: nag_real_general_eigensystem (f02bjc)
- real generalized symmetric-definite eigenproblem, all eigenvalues and eigenvectors: nag_real_symm_general_eigensystem (f02aec)
- real singular value problem: nag_real_svd (f02wec)
- real symmetric eigenproblem,
  - all eigenvalues: nag_real_symm_eigenvalues (f02aac)
  - all eigenvalues and eigenvectors: nag_real_symm_eigensystem (f02abc)

General Purpose functions (see also Chapter f12),
- real m by n matrix, leading terms SVD: nag_real_partial_svd (f02wgc)

None.

## 7  References

Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia

Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore

Parlett B N (1998) The Symmetric Eigenvalue Problem SIAM, Philadelphia

Wilkinson J H and Reinsch C (1971) Handbook for Automatic Computation II, Linear Algebra Springer–Verlag
2014-10-24 12:03:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 308, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9752352833747864, "perplexity": 488.6070782209567}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119645898.42/warc/CC-MAIN-20141024030045-00213-ip-10-16-133-185.ec2.internal.warc.gz"}
https://q4interview.com/ques_ans.php?c=58&level=easy
# C Programming :: Basic Concepts

1 / 65. What is the output of the following C Program?

#include <stdio.h>
void main()
{
    main();
}

A. Compiler error
B. Stack overflow
C. None of these

Explanation: main calls itself again and again. Each time the function is called, its return address is stored on the call stack. Since there is no condition to terminate the recursion, the call stack overflows at run time, which terminates the program with an error.

2 / 65. What is the output of the following C Program?

#include <stdio.h>
void main()
{
    clrscr();
}
clrscr();

A. No output/error
B. Compilation Error
C. None of these

Explanation: The first clrscr() occurs inside a function, so it is a function call. The second clrscr(); is not inside any function, so it is treated as a function declaration (under pre-C99 implicit-int rules; a modern compiler may warn here).

3 / 65. What is the output of the following C Program?

#include <stdio.h>
void main()
{
    int y;
    scanf("%d", &y); // Given input is 2000
    if ((y % 4 == 0 && y % 100 != 0) || y % 100 == 0)
        printf("%d is a leap year");
    else
        printf("%d is not a leap year");
}

A. 2000 is not a leap year
B. 2000 is a leap year
C. Compilation Error
D. None of these

Explanation: This is a simple leap year program; for the input 2000 the if condition evaluates to true, so the "is a leap year" branch runs. Two caveats about the program as written: printf is given no argument to match %d, so the number printed before the message is actually indeterminate (the intended call is printf("%d is a leap year", y)), and the test || y % 100 == 0 is not the correct Gregorian rule (it should be || y % 400 == 0, although 2000 happens to satisfy both). A corrected version is shown after the last question below.

4 / 65. What will be the output of the below C program?

#include <stdio.h>
void main()
{
    int i = 400, j = 300;
    printf("%d..%d");
}

A. Garbage Value
B. 400..300
C. Compilation error
D. None of these

Explanation: On the old compilers this question targets, printf happens to pick up the values of the first two assignments in the program, printing 400..300; if more assignments were given, it would pick up garbage values instead. Strictly speaking, supplying no arguments for the two %d conversions is undefined behavior, so the output is not guaranteed by the language.

5 / 65. What will be the output of the below C program?

#include <stdio.h>
int main()
{
    printf("%d", out);
    return 0;
}
int out = 100;

A. Compiler error: undefined symbol out in function main.
B. 100
C. 10
D. None of these

Explanation: The rule is that a variable is available for use only from the point of declaration. Even though out is a global variable, it is declared after main, so it is not available inside main. Hence an error.

6 / 65. What will be the output of the below C program?

#include <stdio.h>
int main()
{
    extern int i;
    i = 20;
    printf("%d", sizeof(i));
    return 0;
}

A. Compilation error: undefined reference to 'i'
B. Linker error: undefined reference to 'i'
C. 20
D. None of these

Explanation: The extern declaration specifies that the variable i is defined somewhere else. The compiler leaves the external variable to be resolved by the linker, so the compiler does not report an error. During linking, the linker searches for the definition of i; since it is not found, the linker flags an error.

7 / 65. What will be the output of the below C program?

#include <stdio.h>
int main()
{
    int i = -1;
    +i;
    printf("i = %d, +i = %d \n", i, +i);
    return 0;
}

A. Compilation Error
B. i = -1, +i = 0
C. i = -1, +i = -1
D. None of these

Explanation: Unary + is the only dummy operator in C: wherever it appears you can essentially ignore it, because it has no effect on the value of the expression (hence the name dummy operator).
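One small refinement worth knowing: unary + is value-preserving, but it does apply the integer promotions, so on a char or short operand it changes the type of the expression even though it never changes the value. A minimal sketch illustrating this:

```c
#include <stdio.h>

int main(void) {
    char c = 1;
    /* +c has type int (integer promotion), although its value is still 1. */
    printf("sizeof c = %zu, sizeof +c = %zu\n", sizeof c, sizeof +c);
    return 0;
}
```

On a typical platform this prints sizeof c = 1 and sizeof +c = 4.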
8 / 65. What will be the output of the below C program?

#include <stdio.h>
int main()
{
    int i = -1;
    -i;
    printf("i = %d, -i = %d \n", i, -i);
    return 0;
}

A. i = -1, -i = 1
B. i = 1, -i = 1
C. i = -1, -i = -1
D. Compilation error
E. None of these

Explanation: The statement -i; is evaluated and its result is discarded, so it does not affect the value of i. In the printf, the value of i is printed first; after that, the value of the expression -i = -(-1) = 1 is printed.

9 / 65. What will be the output of the below C program?

#include <stdio.h>
int main()
{
    int k = 1;
    printf("%d==1 is %s", k, k == 1 ? "TRUE" : "FALSE");
    return 0;
}

A. 1==1 is TRUE
B. 1==1 is FALSE
C. Compilation error
D. None of these

Explanation: When two string literals are placed together (or separated only by white space) they are concatenated (the quiz calls this "stringization"; the standard term is string literal concatenation). So a format string written in pieces behaves as if it were given as "%d==1 is %s". The conditional operator (?:) evaluates to "TRUE" here.

10 / 65. What will be the output of the below C program?

#include <stdio.h>
int main()
{
    int a = 0;
    int b = 20;
    char x = 1;
    char y = 10;
    if (a, b, x, y)
        printf("hello");
    return 0;
}

A. hello
B. Compilation Error
C. No output, no error
D. None of these

Explanation: The comma operator evaluates its operands from left to right and yields only the value of the rightmost one; the other values are evaluated and ignored. Thus the value of the last variable, y, is what the if tests. Since it is a non-zero value, the condition is true, so "hello" is printed.
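As noted under question 3, here is a corrected version of that program for reference (a sketch: it passes y to printf and uses the full Gregorian rule):

```c
#include <stdio.h>

int main(void) {
    int y;
    if (scanf("%d", &y) != 1)  /* e.g., input 2000 */
        return 1;
    /* Gregorian rule: divisible by 4 and not by 100, or divisible by 400. */
    if ((y % 4 == 0 && y % 100 != 0) || y % 400 == 0)
        printf("%d is a leap year\n", y);
    else
        printf("%d is not a leap year\n", y);
    return 0;
}
```

For the input 2000 this prints "2000 is a leap year", matching the intended answer above.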
2019-12-06 13:15:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25975528359413147, "perplexity": 7941.591125076348}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540488620.24/warc/CC-MAIN-20191206122529-20191206150529-00070.warc.gz"}
http://forum.zkoss.org/question/73191/zk6-maven-artifact-for-zkcomposite/
# ZK6 Maven artifact for zkcomposite

testuser35: Hello everyone, does a Maven artifact for zkcomposite exist in ZK6 RC2? If not, will one be provided for the final ZK6 version? Kind regards, Stephan

## 9 Replies

matthewgo: Hi Stephan,

testuser35: Hi. For the moment we downloaded it from the same link you mentioned above. However, since we are using Maven for building our project, we would like to know if there is a (public) Maven repository containing the ZK component library as a Maven artifact that we could simply add as a dependency to our project. Kind regards, Stephan

matthewgo: Hi Stephan, I see. We will evaluate whether we can upload it to public Maven.

testuser35: Hi, sorry for insisting on this, but do you already have an approximate timeline for setting up the Maven artifacts? Are we talking about a few days or about weeks/months? If it will take longer, then we will have to (temporarily) upload the jars to our local internal Maven repository; however, this will take some effort that we would like to avoid if not really necessary. ;) Kind regards, Stephan

testuser35: Hello matthewgo, >> We will evaluate if we can upload it to public maven. Have you had time for the evaluation? Do you know whether this topic is still in progress? Thank you. Kind regards, Stephan

dtommasina (http://www.avanon.com/): Hi there, we would also really be interested in the possibility of getting the libraries over Maven. Thanks, Danilo

blacksensei: Hello all! Any recent news about the ZK 6 artifacts in the public Maven repo? This is a very serious need. Thanks in advance to the team who is or will be working on it.

dtommasina (http://www.avanon.com/): We opened an official support request about this and the ZK guys were kind enough to publish the artifacts to Maven. Here is the dependency definition.

URL: http://mavensync.zkoss.org/maven2

<dependency>
  <groupId>org.zkoss.composite</groupId>
  <artifactId>zkcomposite</artifactId>
  <version>0.8.0</version>
</dependency>
2019-03-26 18:38:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3550303280353546, "perplexity": 4523.338671972957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912205600.75/warc/CC-MAIN-20190326180238-20190326202238-00252.warc.gz"}
http://math.stackexchange.com/questions/262782/find-distribution-of-time-instants-where-a-random-signal-assumes-fixed-value
Find distribution of time instants where a random signal assumes a fixed value

I have a stationary and ergodic stochastic process $N(t,\omega)$. For a fixed $t^*$ I know the distribution of the random variable $N(t^*,\omega)$. Is there any way to know the distribution of the time instants $(t_1, t_2, \ldots, t_n)$ at which $N(t,\omega) = N^*$, where $N^*$ is a scalar value? I can limit observation to a fixed time interval.
2015-01-29 09:06:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9508981704711914, "perplexity": 165.08488672871232}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122022571.56/warc/CC-MAIN-20150124175342-00062-ip-10-180-212-252.ec2.internal.warc.gz"}
https://www.proofwiki.org/wiki/Subset_is_Compatible_with_Ordinal_Successor
# Subset is Compatible with Ordinal Successor

## Theorem

Let $x$ and $y$ be ordinals and let $x^+$ denote the successor set of $x$.

Let $x \in y$.

Then:

$x^+ \in y^+$

## Proof 1

From $x \in y$:

$x \ne y$ by No Membership Loops, and hence $x^+ \ne y^+$ by Equality of Successors.

$y \notin x$ by No Membership Loops, and hence $y \notin x^+$, since $x \ne y$ and by the Definition of Successor Set $x^+ = x \cup \{x\}$.

Now suppose $y^+ \in x^+$. Then $y \in x^+$, by Successor Set of Ordinal is Ordinal, Ordinals are Transitive, and Set is Element of Successor: $y \in y^+ \in x^+$ and $x^+$ is transitive. This contradicts $y \notin x^+$, so $y^+ \notin x^+$.

Since ordinal membership is trichotomous, and both $x^+ = y^+$ and $y^+ \in x^+$ have been ruled out:

$x^+ \in y^+$

$\blacksquare$

## Proof 2

First note that by Successor Set of Ordinal is Ordinal, $x^+$ and $y^+$ are ordinals.

Let $x \in y$. We wish to show that $x^+ \in y^+$. Either $x^+ \in y^+$, $y^+ \in x^+$, or $x^+ = y^+$.

Aiming for a contradiction, suppose $y^+ = x^+$. Then $y \in x$ or $y = x$ by the definition of successor set. If $y = x$ then $x \in x$, contradicting the fact that Ordinal is not Element of Itself. If $y \in x$ then, since an ordinal is transitive, $y \in y$, again contradicting Ordinal is not Element of Itself. Thus $y^+ \ne x^+$.

Aiming for a contradiction, suppose $y^+ \in x^+$. By definition of successor set, $y^+ \in x$ or $y^+ = x$. If $y^+ \in x$, then since $y^+$ and $x$ are both ordinals, $y^+ \subsetneqq x$. Then $y \in x$. Since $y$ is transitive, $y \in y$, contradicting Ordinal is not Element of Itself. If instead $y^+ = x$, then $x \in y \in y^+ = x$, so the same contradiction arises because $x$ is transitive. Thus $y^+ \notin x^+$.

So the only remaining possibility, that $x^+ \in y^+$, must hold.

$\blacksquare$

## Proof 3

First note that by Successor Set of Ordinal is Ordinal, $x^+$ and $y^+$ are ordinals.

By Ordinal Membership is Trichotomy, exactly one of the following must be true:

$x^+ = y^+$, or $y^+ \in x^+$, or $x^+ \in y^+$.

We will show that the first two are both false, so that the third must hold. Two preliminary facts:

(1): $x \ne y$, from $x \in y$ and Ordinal is not Element of Itself.

(2): $y \notin x$, from $x \in y$ and Ordinal Membership is Asymmetric.

By (1) and Equality of Successors: $x^+ \ne y^+$. Thus the first of the three possibilities is false.

Aiming for a contradiction, suppose $y^+ \in x^+$. Then $y \in y^+$ (Definition of Successor Set), so $y \in x^+$ (since $y^+ \in x^+$ and $x^+$ is an ordinal and therefore transitive), and therefore $y \in x$ or $y = x$ (Definition of Successor Set).

But we already know that $y \notin x$ by (2) and $y \ne x$ by (1). So this is a contradiction, and we conclude that $y^+ \notin x^+$. Thus we have shown that the second possibility is false.

Thus the third and final one must hold: $x^+ \in y^+$.

$\blacksquare$
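As a concrete instance with the von Neumann ordinals (a sanity check of the theorem): take $x = 1 = \{0\}$ and $y = 2 = \{0, 1\}$. Then $x \in y$, and indeed

$x^+ = 1 \cup \{1\} = 2 \in \{0, 1, 2\} = 3 = y^+.$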
2022-09-27 23:29:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9953098297119141, "perplexity": 312.6651201280893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00127.warc.gz"}
https://opus4.kobv.de/opus4-zib/frontdoor/index/index/docId/4242
## Markov State Models for Rare Events in Molecular Dynamics Please always quote using this URN: urn:nbn:de:0297-zib-42420 • Rare but important transition events between long lived states are a key feature of many molecular systems. In many cases the computation of rare event statistics by direct molecular dynamics (MD) simulations is infeasible even on the most powerful computers because of the immensely long simulation timescales needed. Recently a technique for spatial discretization of the molecular state space designed to help overcome such problems, so-called Markov State Models (MSMs), has attracted a lot of attention. We review the theoretical background and algorithmic realization of MSMs and illustrate their use by some numerical examples. Furthermore we introduce a novel approach to using MSMs for the efficient solution of optimal control problems that appear in applications where one desires to optimize molecular properties by means of external controls.
2017-08-21 15:48:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39894339442253113, "perplexity": 758.4940365786746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109157.57/warc/CC-MAIN-20170821152953-20170821172953-00367.warc.gz"}
https://socratic.org/questions/what-is-the-equation-of-the-line-normal-to-f-x-lnx-1-x-at-x-3
# What is the equation of the line normal to f(x)=lnx-1/x at x=3?

Jan 8, 2016

$f'(x) = \frac{1}{x} + x^{-2} = \frac{1}{x} + \frac{1}{x^2} = \frac{x + 1}{x^2}$

(note that the derivative of $-\frac{1}{x}$ is $+\frac{1}{x^2}$).

Now the line equation is:

$(y - y_0) = m(x - x_0)$

where:

$x_0 = 3$

$y_0 = f(x_0)$

$m = -\frac{1}{f'(x_0)}$, because we are looking for the normal line.

Then:

$f'(3) = \frac{3 + 1}{9} = \frac{4}{9}$

$m = -\frac{1}{f'(3)} = -\frac{9}{4}$

$f(x_0) = \ln 3 - \frac{1}{3}$

$y - \left(\ln 3 - \frac{1}{3}\right) = -\frac{9}{4}(x - 3)$

$y = -\frac{9}{4}x + \frac{27}{4} + \ln 3 - \frac{1}{3} = -\frac{9}{4}x + \frac{77}{12} + \ln 3$
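As a quick check: the tangent slope at $x = 3$ is $f'(3) = \frac{4}{9}$ and the normal slope is $-\frac{9}{4}$, so their product is

$\frac{4}{9} \cdot \left(-\frac{9}{4}\right) = -1,$

as required for perpendicular lines; and substituting $x = 3$ into the normal's equation gives $y = -\frac{27}{4} + \frac{77}{12} + \ln 3 = \ln 3 - \frac{1}{3} = f(3)$, so the line passes through the point of tangency.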
2021-06-20 15:15:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42045435309410095, "perplexity": 2825.4941164620436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488249738.50/warc/CC-MAIN-20210620144819-20210620174819-00371.warc.gz"}
https://paezha.github.io/spatial-analysis-r/spatially-continuous-data-iv.html
# Chapter 37 Spatially Continuous Data IV

NOTE: The source files for this book are available with companion package {isdas}. The source files are in Rmarkdown format and packed as templates. These files allow you to execute code within the notebook, so that you can work interactively with the notes.

## 37.1 Learning objectives

In the previous practice you were introduced to the concept of variographic analysis for fields/spatially continuous data. In this practice, we will learn:

1. How to use residual spatial pattern to estimate prediction errors.
2. Kriging: a method for optimal predictions.

## 37.2 Suggested readings

• Bailey TC and Gatrell AC (1995) Interactive Spatial Data Analysis, Chapters 5 and 6. Longman: Essex.
• Bivand RS, Pebesma E, and Gomez-Rubio V (2008) Applied Spatial Data Analysis with R, Chapter 8. Springer: New York.
• Brunsdon C and Comber L (2015) An Introduction to R for Spatial Analysis and Mapping, Chapter 6, Sections 6.7 and 6.8. Sage: Los Angeles.
• Isaaks EH and Srivastava RM (1989) An Introduction to Applied Geostatistics, Chapter 12. Oxford University Press: Oxford.
• O’Sullivan D and Unwin D (2010) Geographic Information Analysis, 2nd Edition, Chapters 9 and 10. John Wiley & Sons: New Jersey.

## 37.3 Preliminaries

As usual, it is good practice to clear the working space to make sure that you do not have extraneous items there when you begin your work. The command in R to clear the workspace is rm (for “remove”), followed by a list of items to be removed. To clear the workspace from all objects, do the following:

rm(list = ls())

Note that ls() lists all objects currently on the workspace.

Load the libraries you will use in this activity:

library(isdas)
library(gstat)
library(plotly)
library(spdep)
library(tidyverse)
library(stars)

data("Walker_Lake")

You can verify the contents of the dataframe:

summary(Walker_Lake)
## ID X Y V
## Length:470 Min. : 8.0 Min. : 8.0 Min. : 0.0
## Class :character 1st Qu.: 51.0 1st Qu.: 80.0 1st Qu.: 182.0
## Mode :character Median : 89.0 Median :139.5 Median : 425.2
## Mean :111.1 Mean :141.3 Mean : 435.4
## 3rd Qu.:170.0 3rd Qu.:208.0 3rd Qu.: 644.4
## Max. :251.0 Max. :291.0 Max. :1528.1
##
## U T
## Min. : 0.00 1: 45
## 1st Qu.: 83.95 2:425
## Median : 335.00
## Mean : 613.27
## 3rd Qu.: 883.20
## Max. :5190.10
## NA's :195

## 37.4 Using residual spatial pattern to estimate prediction errors

Previously, in Chapter @ref{spatially-continuous-data-ii} we discussed how to interpolate a field using trend surface analysis; we also saw how that method may lead to residuals that are not spatially independent. The implication of non-random residuals is that there is a systematic residual pattern that the model did not capture. This, in turn, means that there is at least some information that can still be extracted from the residuals. Again, we will use the case of Walker Lake to explore one way to do this.
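The trend surface fitted below is the full cubic polynomial in the coordinates (written out here for reference, with the $\beta$ coefficients in the order they appear in the model formula):

$V = \beta_0 + \beta_1 X^3 + \beta_2 X^2 Y + \beta_3 X^2 + \beta_4 X + \beta_5 XY + \beta_6 Y + \beta_7 Y^2 + \beta_8 X Y^2 + \beta_9 Y^3 + \epsilon$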
As before, we first calculate the polynomial terms of the coordinates to fit a trend surface to the data:

Walker_Lake <- mutate(Walker_Lake, X3 = X^3, X2Y = X^2 * Y, X2 = X^2, XY = X * Y, Y2 = Y^2, XY2 = X * Y^2, Y3 = Y^3)

Given the polynomial expansion, we can proceed to estimate the following cubic trend surface model, which we already know provided the best fit to the data:

WL.trend3 <- lm(formula = V ~ X3 + X2Y + X2 + X + XY + Y + Y2 + XY2 + Y3, data = Walker_Lake)
summary(WL.trend3)
##
## Call:
## lm(formula = V ~ X3 + X2Y + X2 + X + XY + Y + Y2 + XY2 + Y3, data = Walker_Lake)
##
## Residuals:
## Min 1Q Median 3Q Max
## -564.19 -197.41 7.91 194.25 929.72
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -8.620e+00 1.227e+02 -0.070 0.944035
## X3 1.533e-04 4.806e-05 3.190 0.001522 **
## X2Y 6.139e-05 3.909e-05 1.570 0.117000
## X2 -6.651e-02 1.838e-02 -3.618 0.000330 ***
## X 9.172e+00 2.386e+00 3.844 0.000138 ***
## XY -4.420e-02 1.430e-02 -3.092 0.002110 **
## Y 4.794e+00 2.040e+00 2.350 0.019220 *
## Y2 -1.806e-03 1.327e-02 -0.136 0.891822
## XY2 7.679e-05 2.956e-05 2.598 0.009669 **
## Y3 -4.170e-05 2.819e-05 -1.479 0.139759
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 276.7 on 460 degrees of freedom
## Multiple R-squared: 0.1719, Adjusted R-squared: 0.1557
## F-statistic: 10.61 on 9 and 460 DF, p-value: 5.381e-15

We can next visualize the residuals which, as you can see, do not appear to be random:

plot_ly(x = ~Walker_Lake$X, y = ~Walker_Lake$Y, z = ~WL.trend3$residuals, color = ~WL.trend3$residuals < 0, colors = c("blue", "red"), type = "scatter3d")
## No scatter3d mode specifed:
## Setting the mode to markers
## Read more about this attribute -> https://plotly.com/r/reference/#scatter-mode

Now we will create an interpolation grid:

# The function seq() creates a sequence of values from `from` to `to`
# using `by` as the step increment. In this case, we generate a grid
# with points that are 2.5 m apart.
X.p <- seq(from = 0.1, to = 255.1, by = 2.5)
Y.p <- seq(from = 0.1, to = 295.1, by = 2.5)
df.p <- expand.grid(X = X.p, Y = Y.p)

We can add the polynomial terms to this grid. Since our trend surface model was estimated using the cubic polynomial, we add those terms to the dataframe:

df.p <- mutate(df.p, X3 = X^3, X2Y = X^2 * Y, X2 = X^2, XY = X * Y, Y2 = Y^2, XY2 = X * Y^2, Y3 = Y^3)

The interpolated cubic surface is obtained by using the model and the interpolation grid as newdata:

# The function predict() is used to make predictions given a model
# and a possibly new dataset, different from the one used for estimation
# of the model.
WL.preds3 <- predict(WL.trend3, newdata = df.p, se.fit = TRUE, interval = "prediction", level = 0.95)

The surface is converted into a matrix for 3D plotting:

z.p3 <- matrix(data = WL.preds3$fit[,1], nrow = length(Y.p), ncol = length(X.p), byrow = TRUE)

And plot:

WL.plot3 <- plot_ly(x = ~X.p, y = ~Y.p, z = ~z.p3, type = "surface", colors = "YlOrRd") %>%
  layout(scene = list(aspectmode = "manual", aspectratio = list(x = 1, y = 1, z = 1)))
WL.plot3

The trend surface provides a smooth estimate of the field. However, it is not sufficient to capture all systematic variation, and fails to produce random residuals. A possible way of enhancing this approach to interpolation is to exploit the information that remains in the residuals, for instance by the use of $k$-point means.
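For reference, the $k$-point mean used here is simply the unweighted average of the residuals at the $k$ observations nearest to the prediction point (the notation $N_k(p)$ for that neighbor set is introduced here for convenience):

$\hat{\epsilon}_p = \frac{1}{k} \sum_{i \in N_k(p)} \epsilon_i$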
We can illustrate this as follows. To interpolate the residuals, we first need the set of target points (the points for the interpolation), as well as the source (the observations):

# We will use the prediction grid we used above to interpolate the residuals
target_xy = expand.grid(x = X.p, y = Y.p) %>%
  st_as_sf(coords = c("x", "y"))

# Convert the Walker_Lake dataframe to a simple features object as follows:
Walker_Lake.sf <- Walker_Lake %>%
  st_as_sf(coords = c("X", "Y"))

# Append the residuals to the table
Walker_Lake.sf$residuals <- WL.trend3$residuals

It is possible now to use the kpointmean function to interpolate the residuals, for instance using $k = 5$ neighbors:

kpoint.5 <- kpointmean(source_xy = Walker_Lake.sf, target_xy = target_xy, z = residuals, k = 5)
## projected points

Given the interpolated residuals, we can join them to the cubic trend surface, as follows:

z.p3 <- matrix(data = WL.preds3$fit[,1] + kpoint.5$z, nrow = length(Y.p), ncol = length(X.p), byrow = TRUE)

This is now the interpolated field that combines the trend surface and the estimated residuals:

WL.plot3 <- plot_ly(x = ~X.p, y = ~Y.p, z = ~z.p3, type = "surface", colors = "YlOrRd") %>%
  layout(scene = list(aspectmode = "manual", aspectratio = list(x = 1, y = 1, z = 1)))
WL.plot3

Of all the approaches that we have seen so far, this is the first that provides a genuine estimate of the following:

$\hat{z}_p + \hat{\epsilon}_p$

with trend surface analysis providing a smooth estimator of the underlying field:

$\hat{z}_p = f(x_p, y_p)$

and $k$-point means providing an estimator of:

$\hat{\epsilon}_p$

A question is how to decide the number of neighbors to use in the calculation of the $k$-point means. As previously discussed, $k = 1$ becomes identical to Voronoi polygons, and $k = n$ becomes the global mean. A second question concerns the way the average is calculated. As variographic analysis demonstrates, it is possible to estimate the way in which spatial dependence weakens with distance. Why should more distant points be weighted equally? The answer is, there is no reason why they should; and in fact variographic analysis elegantly solves this, as well as the question of how many points to use: all of them, with varying weights.

Next, we will introduce kriging, a method for optimal prediction that is based on the use of variographic analysis.

## 37.5 Kriging: a method for optimal prediction

To introduce the method known as kriging, we will begin by positing a predictor of the following form:

$\hat{z}_p + \hat{\epsilon}_p = \hat{f}(x_p, y_p) + \hat{\epsilon}_p$

where $\hat{f}(x_p, y_p)$ is a smooth estimator of an underlying field. We aim to predict $\hat{\epsilon}_p$ based on the observed residuals. We use an expression similar to the one used for IDW and $k$-point means in Chapter @ref{spatially-continuous-data-i} (we will use $\lambda$ for the weights, to avoid confusion with the weights used in variographic analysis):

$\hat{\epsilon}_p = \sum_{i=1}^n {\lambda_{pi}\epsilon_i}$

That is, $\hat{\epsilon}_p$ is a linear combination of the prediction residuals from the trend:

$\epsilon_i = z_i - \hat{f}(x_i, y_i)$

It is possible to define the following expected mean squared error, or prediction variance:

$\sigma_{\epsilon}^2 = E[(\hat{\epsilon}_p - \epsilon_i)^2]$

The prediction variance measures how close, on average, the prediction error is to the residuals.
The prediction variance can be decomposed as follows:

$\sigma_{\epsilon}^2 = E[\hat{\epsilon}_p^2] + E[\epsilon_i^2] - 2E[\hat{\epsilon}_p\epsilon_i]$

It turns out (we will not show the detailed derivation here) that the expression for the prediction variance depends on the weights:

$\sigma_{\epsilon}^2 = \sum_{i=1}^n \sum_{j=1}^n{\lambda_{ip}\lambda_{jp}C_{ij}} + \sigma^2 - 2\sum_{i=1}^{n}{\lambda_{ip}C_{ip}}$

where $C_{ij}$ is the autocovariance between observations at $i$ and $j$, and $C_{ip}$ is the autocovariance between the observation at $i$ and prediction location $p$. Fortunately for us, the relationship between the semivariogram and the autocovariance is straightforward:

$C_{z}(h) = \sigma^2 - \hat{\gamma}_{z}(h)$

This means that, given the distance $h$ between $i$ and $j$, and between $i$ and $p$, we can use a semivariogram to obtain the autocovariances needed to calculate the prediction variance. We are still missing, however, the weights $\lambda$, which are not known a priori. These weights can be obtained if we use the following rules:

1. The expectation of the prediction errors is zero (unbiasedness).
2. Find the weights $\lambda$ that minimize the prediction variance (optimal estimator).

This makes sense, since we would like our predictions to be unbiased (i.e., accurate) and as precise as possible, that is, to have the smallest variance (recall the discussion about accuracy and precision in Chapter @ref{spatially-continuous-data-iii}). Again, solving the minimization problem is beyond the scope of our presentation, but it suffices to say that the result is as follows:

$\mathbf{\lambda}_p = \mathbf{C}^{-1}\mathit{c}_{p}$

where $\mathbf{C}$ is the covariance matrix, and $\mathit{c}_{p}$ is the covariance vector for location $p$.

In summary, kriging is a method to optimally estimate the value of a variable at $p$ as a weighted sum of the observations of the same variable at locations $i$. This method is known to have the properties of Best (in the sense that it minimizes the variance), Linear (because predictions are a linear combination of weights), Unbiased (since the estimators of the prediction errors are zero) Predictor, or BLUP.

Kriging is implemented in the package gstat as follows. To put kriging to work we must first conduct variographic analysis of the residuals. The function variogram takes as an argument a simple features object that we can create as follows:

Walker_Lake.sf <- Walker_Lake %>%
  st_as_sf(coords = c("X", "Y"),
           # Set remove to FALSE to retain the X and Y coordinates
           # in the dataframe after they are converted to simple features
           remove = FALSE)

The variogram of the residuals can be obtained by specifying a trend surface in the formula:

variogram_v <- variogram(V ~ X3 + X2Y + X2 + X + XY + Y + Y2 + XY2 + Y3, data = Walker_Lake.sf)

# Plot
ggplot(data = variogram_v, aes(x = dist, y = gamma)) +
  geom_point() +
  geom_text(aes(label = np),
            # Nudge the labels away from the points
            nudge_y = -1500) +
  xlab("Distance") +
  ylab("Semivariance")

You can verify that the semivariogram above corresponds to the residuals by repeating the analysis directly on the residuals.
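Before that, a useful piece of reference: among the candidate models passed to fit.variogram below, the one that ends up selected is the exponential model. In gstat's parameterization its closed form is (with partial sill $c$ and range parameter $a$; the effective range, where the curve reaches about 95% of the sill, is roughly $3a$):

$\gamma(h) = c\left(1 - e^{-h/a}\right)$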
First join the residuals to the simple features object:

Walker_Lake.sf$e <- WL.trend3$residuals

And then calculate the semivariogram and plot:

variogram_e <- variogram(e ~ 1, data = Walker_Lake.sf)

# Plot
ggplot(data = variogram_e, aes(x = dist, y = gamma)) +
  geom_point() +
  geom_text(aes(label = np), nudge_y = -1500) +
  xlab("Distance") +
  ylab("Semivariance")

The empirical semivariogram is used to estimate a semivariogram function:

variogram_v.t <- fit.variogram(variogram_v, model = vgm("Exp", "Sph", "Gau"))
variogram_v.t
## model psill range
## 1 Nug 0.0 0.000000
## 2 Exp 85554.4 9.910429

The variogram function plots as follows:

gamma.t <- variogramLine(variogram_v.t, maxdist = 130)

# Plot
ggplot(data = variogram_v, aes(x = dist, y = gamma)) +
  geom_point(size = 3) +
  geom_line(data = gamma.t, aes(x = dist, y = gamma)) +
  xlab("Distance") +
  ylab("Semivariance")

We will convert the prediction grid to a simple features object:

df.sf <- df.p %>%
  st_as_sf(coords = c("X", "Y"), remove = FALSE)

Then, we can krige the field as follows (ensure that packages sf and stars are installed):

V.kriged <- krige(V ~ X3 + X2Y + X2 + X + XY + Y + Y2 + XY2 + Y3, Walker_Lake.sf, df.sf, variogram_v.t)
## [using universal kriging]

Extract the predictions and prediction variance from the object V.kriged:

V.km <- matrix(data = V.kriged$var1.pred, nrow = 119, ncol = 103, byrow = TRUE)
V.sm <- matrix(data = V.kriged$var1.var, nrow = 119, ncol = 103, byrow = TRUE)

We can now plot the interpolated field:

V.km.plot <- plot_ly(x = ~X.p, y = ~Y.p, z = ~V.km, type = "surface", colors = "YlOrRd") %>%
  layout(scene = list(aspectmode = "manual", aspectratio = list(x = 1, y = 1, z = 1)))
V.km.plot

Also, we can plot the kriging standard errors (the square root of the prediction variance). This gives an estimate of the uncertainty in the predictions:

V.sm.plot <- plot_ly(x = ~X.p, y = ~Y.p, z = ~sqrt(V.sm), type = "surface", colors = "YlOrRd") %>%
  layout(scene = list(aspectmode = "manual", aspectratio = list(x = 1, y = 1, z = 1)))
V.sm.plot

Where are predictions more/less precise?

### References

Bailey, T. C., and A. C. Gatrell. 1995. Interactive Spatial Data Analysis. Essex: Addison Wesley Longman.

Bivand, R. S., E. J. Pebesma, and V. Gómez-Rubio. 2008. Applied Spatial Data Analysis with R. New York: Springer Science+Business Media.

Brunsdon, Chris, and Lex Comber. 2015. An Introduction to R for Spatial Analysis and Mapping. Sage.

Isaaks, E. H., and R. M. Srivastava. 1989. Applied Geostatistics. New York: Oxford University Press.

O’Sullivan, David, and David Unwin. 2010. Geographic Information Analysis. 2nd Edition. Hoboken, New Jersey: John Wiley & Sons.
2022-08-15 08:51:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5700228810310364, "perplexity": 8270.507966778281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572163.61/warc/CC-MAIN-20220815085006-20220815115006-00323.warc.gz"}
http://mathoverflow.net/feeds/question/2390
Why does non-abelian group cohomology exist? (MathOverflow, http://mathoverflow.net/questions/2390/why-does-non-abelian-group-cohomology-exist)

Question (Hunter Brooks, 2009-10-24): If $K$ is a non-abelian group on which a group $G$ acts via automorphisms, we can define 1-cocycles and 1-coboundaries by mimicking the explicit formulas coming from the bar resolution in ordinary group cohomology, and thus we have a reasonable notion of $H^1(G, K)$.

It turns out we have a part of the expected long exact sequence, until this construction breaks down for building $H^i$ when $i > 1$, where the long exact sequence stops. There are other analogues to ordinary group cohomology as well. The only proof I've ever seen of any of this is by hand. Is there some deeper explanation of why non-abelian group cohomology exists (and then ceases to exist)?
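For reference, spelling out the formulas the question alludes to (this is the standard definition, as in e.g. Serre's Galois Cohomology): a 1-cocycle is a map $z\colon G \to K$ satisfying

$z(gh) = z(g)\,\bigl(g \cdot z(h)\bigr),$

two cocycles $z, z'$ are cohomologous if there is $k \in K$ with $z'(g) = k^{-1}\, z(g)\, (g \cdot k)$ for all $g$, and $H^1(G, K)$ is the pointed set of cocycles modulo this equivalence, with basepoint the class of the constant cocycle $z(g) = 1$ (whose cohomologous cocycles are the 1-coboundaries $g \mapsto k^{-1}(g \cdot k)$).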
H<sup>1</sup>(G, A) would then classify those extensions, and H<sup>0</sup>(G, A) = A<sup>G</sup> would be the automorphism group of the basepoint, which I guess is the semidirect product (this is easy to check).</p> http://mathoverflow.net/questions/2390/why-does-non-abelian-group-cohomology-exist/2420#2420 Answer by Mike Shulman for Why does non-abelian group cohomology exist? Mike Shulman 2009-10-25T03:07:28Z 2009-10-25T03:07:28Z <p>The topological viewpoint of Eric's answer applies to cohomology with nontrivial actions, too. If the action of G on A is not trivial, then the group cohomology H(G;A) can be identified with the topological cohomology of K(G,1) with local coefficients in the coefficient system (= locally constant sheaf) which is A with its action of &pi;<sub>1</sub>(K(G,1)) = G. So the real question is then, why is the sheaf cohomology H<sup>1</sup>(X;A) defined with coefficients in a sheaf of nonabelian groups, but not H<sup>n</sup> for n>1? This then essentially follows from the same argument (K(A,1) exists for any group A, but other K(A,n)s only for A abelian) but applied in the category (or (∞,1)-category, or model category, or whatever) of sheaves of spaces over X.</p> <p>There is also a sort of "higher nonabelian cohomology." For a nonabelian group A, you can't make K(A,2) but you can make B(hAut(K(A,1))) where hAut denotes the topological monoid of self-homotopy-equivalences, and you can think about homotopy classes of maps from a space X (such as K(G,1)) into B(hAut(K(A,1))) as a sort of "nonabelian H<sup>2</sup>." If A happens to be abelian, this construction contains the usual abelian H<sup>2</sup> via the map K(A,2) = B(K(A,1)) --> B(hAut(K(A,1))) given by letting K(A,1) act on itself via left translation (since it is a topological group whenever A is abelian). But even in this case, the "nonabelian H<sup>2</sup>" contains much more than the usual abelian H<sup>2</sup>, so it's a little misleading to call it "nonabelian H<sup>2</sup>."</p> http://mathoverflow.net/questions/2390/why-does-non-abelian-group-cohomology-exist/6966#6966 Answer by Greg Kuperberg for Why does non-abelian group cohomology exist? Greg Kuperberg 2009-11-27T17:14:13Z 2009-11-27T17:29:37Z <p>I don't know if this is a "deep" or "shallow" explanation, but if anyone is still reading this thread, here is a different explanation. I'll start with the preliminary comment that the cohomology of a group $K$ is a special case of cohomology of topological spaces. In topology in general, you get the same phenomenon that $H^k(X,G)$ is well-defined either when $G$ is abelian or $k=1$.</p> <p>Consider the definition of simplicial cohomology for locally finite simplicial complexes. Or, more generally, CW cohomology for locally finite, regular complexes &mdash; regular means mainly that each attaching map is embedded. You can define a $k$-cochain with coefficients in a group $G$ (or even in any set) as a function from the $k$-cells to $G$. In attempting to define the coboundary of a cochain $c$ on a $k+1$-cell $e$, you should multiply together the values of $c$ on the facets of $e$. The obvious problem is that if $G$ is non-abelian, the product is order-dependent. However, if $k=1$, geometry gives you a gift: The facets are cyclically ordered, and what you mainly wanted to know is whether the product is trivial. The criterion of whether a cyclic word is trivial is well-defined in any group, not just abelian groups. 
A similar but simpler phenomenon occurs for the notion of a coboundary: If $e$ is an oriented edge and $c$ is a 0-cochain, there is a non-abelian version of modifying a 1-cochain by $c$ because the vertices of $e$ are an ordered pair.</p> <p>So far this is just a more geometric version of Eric Wofsey's answer. It is very close to the fact that $\pi_1$ is non-abelian while higher homotopy groups are abelian &mdash; and therefore non-commutative classifying spaces exist only for $K(G,1)$. However, in this version of the explanation, something extra appears when $X=M$ is a 3-manifold.</p> <p>If $M$ is a 3-manifold, then not only are the edges of a face cyclically ordered, the faces incident to an edge are also cyclically ordered. It turns out that, at least at the level of computing the cardinality of $H^1(M,G)$, you can let $G$ be both non-commutative and non-cocommutative. In other words, $G$ can be replaced by a finite-dimensional Hopf algebra $H$ which does not need to be commutative or cocommutative. Finiteness is necessary because it is a counting invariant. The resulting invariant $\#(M,H)$ was a topic of my PhD thesis and is explained <a href="http://front.math.ucdavis.edu/9201.5301" rel="nofollow">here</a> and <a href="http://front.math.ucdavis.edu/9712.5187" rel="nofollow">here</a>. Although the motivation is original, the invariant is a special case of more standard quantum invariants defined by other people. (The same construction was also later found by three physicists, but I can't remember their names at all right now.)</p> <p>Many 3-manifolds are also classifying spaces of groups, so for these groups there is the same notion of noncommutative, non-cocommutative group cohomology.</p> http://mathoverflow.net/questions/2390/why-does-non-abelian-group-cohomology-exist/15970#15970 Answer by BCnrd for Why does non-abelian group cohomology exist? BCnrd 2010-02-21T13:12:16Z 2010-02-21T13:12:16Z <p>A concrete and arithmetically useful way to interpret it without appeal to explicit cocycle formulas is to express everything in the language of torsors. More specifically, for arithmetic purposes if the group G is Gal(F'/F) for a Galois extension F'/F and if the group K is H(F') for an F-group scheme H of finite type and K is equipped with the evident left G-action then ${\rm{H}}^1(G,K)$ is the set of isomorphism classes of H-torsors over F which split over F' (i.e.,, admit an F'-rational point). The low-degree exact sequence can then be expressed entirely in such terms, using pushouts and pullbacks with torsors. (Implicit in the argument is effectivity of Galois descent for H-torsors, which uses that H is quasi-projective over F.)</p> <p>This is useful in settings as varied as H an abelian variety and H a linear algebraic group, and even the non-smooth case. In fact, when using non-smooth H it is rather restrictive to use Galois cohomology (but not unnatural if studying Tate-Shafarevich sets with coefficients in an Aut-functor, such as for a projective variety), and in such cases the "right" variant that is often more useful is to work with torsors for the fppf topology over F. The torsor viewpoint also gives a useful perspective when working over richer base rings than fields, such as rings of S-integers in a global field, even in the case of a smooth coefficient group (over the ring of S-integers), for which the etale topology is "enough". 
See section 5.3 in Chapter I of Serre's book on Galois cohomology for the Galois case, Milne's "Etale cohomology" book for generalization with flat and \'etale topologies, and Appendix B in my paper on "Finiteness theorems for algebraic groups over function fields" for a concrete fleshing out of the dictionary between the torsor and Galois languages (where I work with affine group schemes, due to the context of that paper). </p> <p>Some papers of Mazur and Grothendieck (not as co-authors...) on abelian varieties make creative use of the torsor viewpoint when working with Tate-Shafarevich groups. The exact sequence for Brauer groups in global class field theory also has a useful interpretation via torsors; see Grothendieck's papers on Brauer groups for more in that direction (and somewhere in there he also discusses Tate-Shafarevich). Beware that when the base is not a field (or even when it is a field but we relax "quasi-projective" to "locally finite type" on $H$, such as for Aut-schemes of projective or proper varieties) then effectivity of descent for torsors is not at all clear, even with quasi-projective hypotheses, and so the torsors often need to be understood to be taken in the category of algebraic spaces (for which fppf descent is always effective). In the N\'eron Models book they have a discussion (somewhere in Chapter 6, I think) on effectivity of descent for torsors if one wishes to avoid algebraic spaces (under suitable hypotheses on the "coefficient group"), but working with algebraic spaces isn't so bad once one gets used to them and it is a more natural setting due to their better general behavior with respect to descent. </p>
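For readers who want the cocycle formulas the question alludes to spelled out, here is a minimal sketch (my addition; conventions as in Serre's "Galois Cohomology", writing ${}^{s}k$ for the action of $s \in G$ on $k \in K$):

```latex
% A 1-cocycle is a map s |-> a_s from G to K satisfying the twisted condition
a_{st} = a_s \cdot {}^{s}a_t \quad \text{for all } s, t \in G,
% and two cocycles a, b are cohomologous when, for some single k in K,
b_s = k^{-1}\, a_s\, {}^{s}k \quad \text{for all } s \in G.
% H^1(G,K) is the set of cocycles modulo this relation. It is only a pointed
% set (basepoint: the trivial cocycle s |-> 1): the pointwise product of two
% cocycles need not satisfy the cocycle condition when K is non-abelian,
% so there is no natural group law in degree 1.
```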
2013-05-21 08:03:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9171550273895264, "perplexity": 779.0466019306374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699798457/warc/CC-MAIN-20130516102318-00018-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.maetoronto.com/ieltsblog-ielts-writing5.5-3-4
# How to Break Out of the IELTS Writing 5.5 Plateau, Parts 3 & 4

3. Academic (written) and colloquial expressions are mixed together

In IELTS writing, you absolutely want to avoid sentences that mix academic expressions with casual, colloquial ones. However, this is hard to judge unless you understand the difference between academic and colloquial expressions in English.

Take a sentence like "I hung out with the guy the other day." Put into more formal written English, it becomes: "I socialized with the man several days ago."

"Hang out" is a colloquial expression, not a written one, and the same goes for "guy". Also, "several days ago" is, if anything, the more academic expression compared with "the other day".

Developing this ability takes time and is not something you can fix right away. If your exam is coming up soon, it is quicker to work on the other points instead.

Before: The blind date was a disaster. It was a complete debacle. I was intrigued by what my "friend" Sarah had told me about Bill;

After: The blind date was more than a disaster. In fact, it was clearly a complete debacle. At first, I was somewhat intrigued by what my "friend" Sarah had told me about Bill;

• Contrast and comparison: in contrast, while, whilst, on the contrary, on the other hand, meanwhile
• Concession (acknowledging the other side's argument to some degree rather than simply pushing your own claim): although, though, even, even though
• Addition: additionally, in addition, also, besides, furthermore, moreover, what is more, too, likewise
• Result: as a result, as a consequence, consequently, hence, therefore, thus, so that, accordingly, eventually
• Examples: for example, for instance, take A for example, to illustrate, specifically, such as, to demonstrate
• Rephrasing: in other words, to put it another way, namely, that is, that is to say, once again, to clarify
• Emphasis: certainly, definitely, clearly, by all means, in fact, undoubtedly, without a doubt
• Contrast (adversative): but, however, nevertheless, notwithstanding, despite, in spite of, and yet
• Condition: if, given that, when, as long as, unless, in case, provided that, supposing that
• Conclusion: in conclusion, to conclude, after all, finally, in summary, to summarize, therefore
2019-07-22 12:08:28
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8962745666503906, "perplexity": 7740.976191224262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528013.81/warc/CC-MAIN-20190722113215-20190722135215-00556.warc.gz"}
https://mathoverflow.net/questions/369524/stability-of-displacement-interpolation-in-optimal-transport
# Stability of displacement interpolation in optimal transport

Let $$(X,d)$$ be a complete separable metric space, and let $$(\mathcal{P}_2 (X), W_2)$$ be the space of probability measures on $$X$$ with finite second moments, equipped with the 2-Wasserstein distance. It is known that discrete measures are dense inside $$(\mathcal{P}_2 (X), W_2)$$ - namely, given any $$\mu \in \mathcal{P}_2 (X)$$ and $$\delta>0$$, one can find a discrete measure $$\mu_\delta$$ with $$W_2 (\mu, \mu_\delta)<\delta$$. Now, let $$\mu_0, \mu_1 \in \mathcal{P}_2 (X)$$, and let $$\mu_t$$ be a $$W_2$$ geodesic connecting $$\mu_0$$ and $$\mu_1$$ (a.k.a. $$\mu_t$$ is a [not necessarily unique] displacement interpolation between $$\mu_0$$ and $$\mu_1$$). Is the displacement interpolation stable under discrete approximation? That is, can one pick discrete $$\mu_{0,n}, \mu_{1,n}$$ such that $$\mu_{t,n}$$ is close to $$\mu_t$$ for all $$t\in[0,1]$$? The displacement interpolation $$\mu_t$$ should not be fixed a priori, due to nonuniqueness of Wasserstein geodesics. Thus, the correct question should be: fix the approximating sequences $$(\mu_{0,n}),(\mu_{1,n})$$ and $$W_2$$ geodesics $$\mu_{t,n}$$, and ask if there exists one $$\mu_t$$ close to $$\mu_{t,n}$$ for $$t \in [0,1]$$.

• This is a comment rather than an answer, but I could not post it as a comment. Anyway, something useful in this direction can be found in Lemma 4.4 of arxiv.org/pdf/1609.00782.pdf which, combined with Proposition 4.8 of arxiv.org/pdf/1311.4907.pdf, gives you $W_2$-close $\mu_{t,n}$'s. Aug 19, 2020 at 8:10
• Certainly there are non-uniqueness issues. I actually meant something like: given a Wasserstein geodesic $\mu_t$, can we produce sequences $(\mu_{0,n})$ and $(\mu_{1,n})$ such that $(\mu_{t,n})$ converges to $\mu_t$ in some suitable sense. Aug 19, 2020 at 8:35
• I think it is very hard to fix $\mu_t$ and produce, afterwards, approximating marginals $(\mu_{0,n}),(\mu_{1,n})$. If you stick to the other way, as in my answer, you can try to argue essentially by tightness to get $\mu_t$ from $\mu_{t,n}$. Finally, you just need to show that the limit $\mu_t$ is a $W_2$ geodesic. The first link I gave you follows this path. With this approach, it is very hard to control whether you are converging to your fixed a priori Wasserstein geodesic, or to another. Aug 19, 2020 at 8:43
• There is not only the problem of uniqueness, but also of existence. Under the present assumptions, $\mu_{0,n}$ and $\mu_{1,n}$ may not be connected by a $W_2$-geodesic. Aug 19, 2020 at 9:21
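For concreteness, here is what displacement interpolation looks like in the Euclidean special case, a standard construction (my addition; $p_0, p_1$ denote the coordinate projections and $\gamma$ an optimal plan, none of which appear in the post above):

```latex
% On X = R^d with quadratic cost: if gamma is an optimal coupling of mu_0, mu_1,
% push it forward along straight-line interpolation of the two coordinates:
\mu_t \;=\; \big((1-t)\,p_0 + t\,p_1\big)_{\#}\,\gamma, \qquad t \in [0,1],
% which yields a constant-speed W_2 geodesic:
W_2(\mu_s, \mu_t) \;=\; |t-s|\; W_2(\mu_0, \mu_1) \quad \text{for all } s, t \in [0,1].
% Non-uniqueness of mu_t then mirrors non-uniqueness of the optimal plan gamma
% (on a general geodesic space one additionally chooses a measurable selection
% of geodesics between paired points).
```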
2022-12-01 12:26:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 27, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9409269690513611, "perplexity": 151.21968962948787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710813.48/warc/CC-MAIN-20221201121601-20221201151601-00393.warc.gz"}
https://chemistry.stackexchange.com/questions/121513/will-distillation-do-the-job-of-purifying-the-water
# Will distillation do the job of purifying the water?

I asked about my sophisticated water purification system for a generation ship on Worldbuilding Stack Exchange, and somebody mentioned this:

> If you're gonna go to all the trouble of distilling it, why not just have that be your only step?

And it is a good question. But, from my chemistry knowledge, this just won't work. Here are the problems I see with distillation being the only step:

1. Proteins in blood and urine are going to froth up from the heat. If these get into the water, the water is going to smell like urine and nobody is going to want to drink it. This is the easiest issue to fix. Just make the apparatus tall enough and no urine froth will make it to the distillate. That, or adding a trap to react with the urine froth and solidify it between the two ends of the apparatus, would work.

2. Any hydrochloric acid that is present from vomit is still going to be there, because it changes the boiling point of water. You simply can't get rid of the acid via distillation alone, not even repeated distillation. You need either microbes that can break down hydrochloric acid or an acid-base reaction to get rid of the acid. Hydrochloric acid forms an azeotrope with water. In fact, it forms several, depending on the acid concentration. So at some point, strongly acidic water will come over. The worst part is that a mixture of hydrochloric acid and water is colorless and odorless, so the only way you could tell is by density. Start to see those lines that occur when you mix liquids of different densities and you know strongly acidic water is coming over. Also, checking the pH would help, but that would mean either an acid-resistant pH sensor in the container with the distillate or stopping the distillation periodically and using pH strips. Checking for a change in density is much easier.

3. If acid vapors escape the system, as I think some will, then everybody on the ship will get irritated lungs, and some might get "acid pneumonia" and die from both the irritation due to the acid and the infection that the hot acid predisposes them to. There would be some infectious agents on the generation ship to keep their immune systems in their prime and to help prevent autoimmune disorders from spreading like wildfire. But if this is combined with acid vapors, it is basically a death sentence for whoever gets infected. This is not even considering the fact that acid in your eyes can make you go blind. No distillation system can 100% prevent vapors from escaping. Not even a vacuum distillation with a trap will do it.

So in my opinion, distillation is necessary but can't be the only step. Otherwise, in the worst case, you get water that smells like urine that nobody wants to drink, hot acid vapors spread all over the ship causing lung irritation and blindness, and everybody dies very quickly from a combination of the acid vapors and dehydration related to heat and infection, leaving just an empty hull in space. In the best case, everybody gets heartburn from drinking the acidic water.

So, is there a way to get rid of the hydrochloric acid when distilling the water? Or would the people distilling the water have to resort to an acid-base reaction followed by a second distillation to get rid of the acid and any excess base?

• Neutralise, then distill. – Karl Sep 20 '19 at 22:21
• And use activated carbon to remove some of the smelly stuff.... – Buck Thorn Sep 21 '19 at 8:13

The old-school method of generating ultra-pure water by distillation was to add a small amount of potassium permanganate in a slightly alkaline solution. All the organics were oxidized by the permanganate ion in boiling water. The small amount of $$\ce{MnO2}$$ formed during boiling would take care of the acid(s). This water was good enough for electrochemical experiments.

HCl only forms an azeotrope at 20.2% composition, and that azeotrope boils around 110 °C.

Today, ultrapure water is produced using quite expensive deionizers with UV lamps. UV destroys all the organics, and an activated carbon filter then adsorbs them. Water produced from such a deionizer has a specific resistance of $$18.2~\mathrm{M\Omega{\cdot}cm}$$.
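On the "neutralise, then distill" route suggested in the comments, a minimal sketch of the acid-base step (my addition; NaOH is just one illustrative choice of base):

```latex
% Neutralization before distillation (illustrative choice of base):
\ce{HCl(aq) + NaOH(aq) -> NaCl(aq) + H2O(l)}
% The chloride ends up as a non-volatile salt that stays in the still pot,
% so no HCl/water azeotrope can form and the distillate comes over acid-free.
% A slight excess of base is harmless here, since NaOH is also non-volatile.
```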
2020-01-20 15:54:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4064565896987915, "perplexity": 1769.7044057367812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598800.30/warc/CC-MAIN-20200120135447-20200120164447-00254.warc.gz"}
https://math.stackexchange.com/questions/1181737/kans-loop-group-construction
# Kan's loop group construction I'm looking for a good place to read about the loop group construction $G : \mathbf{sSet_0} \to \mathbf{sGrp}$ taking a reduced simplicial set $X$ and producing a simplicial group $GX$. I would also like a good reference for the functor $\overline W :\mathbf{sGrp} \to \mathbf{sSet_0}$, right adjoint to $G$. $G$ and $\overline W$ are described by Kan in "On Homotopy and c.s.s. Groups" (1958) and in Goerss and Jardine's "Simplicial Homotopy Theory" Chap. 5. However, Kan's article uses a slightly different definition from Goerss-Jardine, as well as pre-model-category-theoretic language. The description in Goerss-Jardine seems quite messy to me and seems to require more/better appreciation of cocycles in simplicial groups than I have. Also, what is a good name for $\overline W (-)$? "Simplicial classifying space"? Would $B(-)$ then be a better notation? Similarly would $\Omega(-)$ be a better notation for $G(-)$? • I like Stevenson's "Décalage and Kan's simplicial loop group functor" and some references therein. You can also look at May's "simplicial objects in algebraic topology". I also believe they're discussed in Curtis' "simplicial homotopy theory" survey paper. – Bruno Stonek Mar 19 '16 at 9:28
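As for naming: the facts below (my addition; they are standard and are treated, e.g., in the Goerss-Jardine chapter cited above) are what make "simplicial loop group" for $G$ and "simplicial classifying space" for $\overline W$, with the notations $\Omega$ and $B$, reasonable:

```latex
% G is left adjoint to \bar{W}:
\mathrm{Hom}_{\mathbf{sGrp}}(GX,\, H) \;\cong\; \mathrm{Hom}_{\mathbf{sSet_0}}(X,\, \overline{W}H),
% and on geometric realizations, for X a reduced simplicial set and H a
% simplicial group, there are weak equivalences
|GX| \;\simeq\; \Omega|X|, \qquad |\overline{W}H| \;\simeq\; B|H|.
```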
2021-01-25 03:47:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8038597702980042, "perplexity": 835.1231104979518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703564029.59/warc/CC-MAIN-20210125030118-20210125060118-00524.warc.gz"}
https://gerardnico.com/linear_algebra/echelon
# Linear System - Echelon Matrix

The echelon form is a generalization of triangular matrices.

An $m \times n$ matrix A is in echelon form if it satisfies the following condition:

• for any row, if that row's first nonzero entry is in position k,
• then every previous row's first nonzero entry is in some position less than k.

If a matrix is in echelon form, the nonzero rows form a basis for the row space.
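For instance (my addition), in the matrix below the first nonzero entries of the rows sit in columns 1, 2 and 4, each strictly to the right of the one in the row above, so the matrix is in echelon form:

```latex
% A 3 x 4 matrix in echelon form; the leading entries are 2, 5 and 7.
A = \begin{pmatrix}
2 & 1 & 0 & 3 \\
0 & 5 & 1 & 2 \\
0 & 0 & 0 & 7
\end{pmatrix}
% Its three nonzero rows are linearly independent, hence a basis of the row space.
```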
2019-04-22 10:41:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9188483357429504, "perplexity": 598.1837350029074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578551739.43/warc/CC-MAIN-20190422095521-20190422121521-00117.warc.gz"}
https://labs.tib.eu/arxiv/?author=B.%20Holden
• ### Occultations from an active accretion disk in a 72 day detached post-Algol system detected by K2(1801.06188) Jan. 18, 2018 astro-ph.SR Disks in binary systems can cause exotic eclipsing events. MWC 882 (BD-22 4376, EPIC 225300403) is such a disk-eclipsing system identified from observations during Campaign 11 of the K2 mission. We propose that MWC 882 is a post-Algol system with a B7 donor star of mass $0.542\pm0.053\,M_\odot$ in a 72 day period orbit around an A0 accreting star of mass $3.24\pm0.29\,M_\odot$. The $59.9\pm6.2\,R_\odot$ disk around the accreting star occults the donor star once every orbit, inducing 19 day long, 7% deep eclipses identified by K2, and subsequently found in pre-discovery ASAS and ASAS-SN observations. We coordinated a campaign of photometric and spectroscopic observations for MWC 882 to measure the dynamical masses of the components and to monitor the system during eclipse. We found the photometric eclipse to be gray to $\approx 1$%. We found the primary star exhibits spectroscopic signatures of active accretion, and observed gas absorption features from the disk during eclipse. We suggest MWC 882 initially consisted of a $\approx 3.6\,M_\odot$ donor star transferring mass via Roche lobe overflow to a $\approx 2.1\,M_\odot$ accretor in a $\approx 7$ day initial orbit. Through angular momentum conservation, the donor star is pushed outward during mass transfer to its current orbit of 72 days. The observed state of the system corresponds with the donor star having left the Red Giant Branch ~0.3 Myr ago, terminating active mass transfer. The present disk is expected to be short-lived ($10^2$ years) without an active feeding mechanism, presenting a challenge to this model. • ### The HDUV Survey: Six Lyman Continuum Emitter Candidates at z~2 Revealed by HST UV Imaging(1611.07038) Nov. 21, 2016 astro-ph.GA We present six galaxies at z~2 that show evidence of Lyman continuum (LyC) emission based on the newly acquired UV imaging of the Hubble Deep UV legacy survey (HDUV) conducted with the WFC3/UVIS camera on the Hubble Space Telescope (HST). At the redshift of these sources, the HDUV F275W images partially probe the ionizing continuum. By exploiting the HST multi-wavelength data available in the HDUV/GOODS fields, models of the UV spectral energy distributions, and detailed Monte Carlo simulations of the intergalactic medium absorption, we estimate the absolute ionizing photon escape fractions of these galaxies to be very high -- typically >60% (>13% for all sources at 90% likelihood). Our findings are in broad agreement with previous studies that found only a small fraction of galaxies to show high escape fraction. These six galaxies comprise the largest sample yet of LyC leaking candidates at z~2 whose inferred LyC flux has been cleanly observed at HST resolution. While three of our six candidates show evidence of hosting an active galactic nucleus (AGN), two of these are heavily obscured and their LyC emission appears to originate from star-forming regions rather than the central nucleus. This suggests an AGN-aided pathway for LyC escape from these sources. Extensive multi-wavelength data in the GOODS fields, especially the near-IR grism spectra from the 3D-HST survey, enable us to study the candidates in detail and tentatively test some recently proposed indirect methods to probe LyC leakage -- namely, the [OIII]/[OII] line ratio and the H$\beta-$UV slope diagram. 
High-resolution spectroscopic followup of our candidates will help constrain such indirect methods which are our only hope of studying $f_{esc}$ at z~5-9 in the fast-approaching era of the James Webb Space Telescope. • ### Z > 7 galaxies with red Spitzer/IRAC [3.6]-[4.5] colors in the full CANDELS data set: the brightest-known galaxies at Z ~ 7-9 and a probable spectroscopic confirmation at Z=7.48(1506.00854) April 1, 2016 astro-ph.GA We identify 4 unusually bright (H < 25.5) galaxies from HST and Spitzer CANDELS data with probable redshifts z ~ 7-9. These identifications include the brightest-known galaxies to date at z > 7.5. As Y-band observations are not available over the full CANDELS program to perform a standard Lyman-break selection of z > 7 galaxies, we employ an alternate strategy using deep Spitzer/IRAC data. We identify z ~ 7.1 - 9.1 galaxies by selecting z >~ 6 galaxies from the HST CANDELS data that show quite red IRAC [3.6]-[4.5] colors, indicating strong [OIII]+Hbeta lines in the 4.5 micron band. This selection strategy was validated using a modest sample for which we have deep Y-band coverage, and subsequently used to select the brightest z > 7 sources. Applying the IRAC criteria to all HST-selected optical-dropout galaxies over the full ~900 arcmin^2 of the CANDELS survey revealed four unusually bright z ~ 7.1, 7.6, 7.9 and 8.6 candidates. The median [3.6]-[4.5] color of our selected z ~ 7.1-9.1 sample is consistent with rest-frame [OIII]+Hbeta EWs of ~1500 Å in the [4.5] band. Keck/MOSFIRE spectroscopy has been independently reported for two of our selected sources, showing Ly-alpha at redshifts of 7.7302+/-0.0006 and 8.683^{+0.001}_{-0.004}, respectively. We present similar Keck/MOSFIRE spectroscopy for a third selected galaxy with a probable 4.7sigma Ly-alpha line at z_spec=7.4770+/-0.0008. All three have H-band magnitudes of ~25 mag and are ~0.5 mag more luminous (M(UV) ~ -22.0) than any previously discovered z ~ 8 galaxy, with important implications for the UV LF. Our 3 brightest, highest redshift z > 7 galaxies all lie within the CANDELS EGS field, providing a dramatic illustration of the potential impact of field-to-field variance.
This spectroscopic redshift measurement suggests that the James Webb Space Telescope (JWST) will be able to similarly and easily confirm such sources at z>10 and characterize their physical properties through detailed spectroscopy. Furthermore, WFIRST, with its wide-field near-IR imaging, would find large numbers of similar galaxies and contribute greatly to JWST's spectroscopy, if it is launched early enough to overlap with JWST. • ### Ultradeep IRAC Imaging Over The HUDF And GOODS-South: Survey Design And Imaging Data Release(1507.08313) July 29, 2015 astro-ph.GA, astro-ph.IM The IRAC ultradeep field (IUDF) and IRAC Legacy over GOODS (IGOODS) programs are two ultradeep imaging surveys at 3.6{\mu}m and 4.5{\mu}m with the Spitzer Infrared Array Camera (IRAC). The primary aim is to directly detect the infrared light of reionization epoch galaxies at z > 7 and to constrain their stellar populations. The observations cover the Hubble Ultra Deep Field (HUDF), including the two HUDF parallel fields, and the CANDELS/GOODS-South, and are combined with archival data from all previous deep programs into one ultradeep dataset. The resulting imaging reaches unprecedented coverage in IRAC 3.6{\mu}m and 4.5{\mu}m, ranging from > 50 hours over 150 arcmin^2, > 100 hours over 60 arcmin^2, to 200 hours over 5 - 10 arcmin^2. This paper presents the survey description, data reduction, and public release of reduced mosaics on the same astrometric system as the CANDELS/GOODS-South WFC3 data. To facilitate prior-based WFC3+IRAC photometry, we introduce a new method to create high signal-to-noise PSFs from the IRAC data and reconstruct the complex spatial variation due to survey geometry. The PSF maps are included in the release, as are registered maps of subsets of the data to enable reliability and variability studies. Simulations show that the noise in the ultradeep IRAC images decreases approximately as the square root of integration time over the range 20 - 200 hours, well below the classical confusion limit, reaching 1{\sigma} point source sensitivities as faint as 15 nJy (28.5 AB) at 3.6{\mu}m and 18 nJy (28.3 AB) at 4.5{\mu}m. The value of such ultradeep IRAC data is illustrated by direct detections of z = 7 - 8 galaxies as faint as HAB = 28. • ### Early-Type galaxies at z ~ 1.3. III. On the dependence of Formation Epochs and Star Formation Histories on Stellar Mass and Environment(1103.0265) March 1, 2011 astro-ph.CO We study the environmental dependence of stellar population properties at z ~ 1.3. We derive galaxy properties (stellar masses, ages and star formation histories) for samples of massive, red, passive early-type galaxies in two high-redshift clusters, RXJ0849+4452 and RXJ0848+4453 (with redshifts of z = 1.26 and 1.27, respectively), and compare them with those measured for the RDCS1252.9-2927 cluster at z=1.24 and with those measured for a similarly mass-selected sample of field contemporaries drawn from the GOODS-South Field. Robust estimates of the aforementioned parameters have been obtained by comparing a large grid of composite stellar population models with extensive 8-10 band photometric coverage, from the rest-frame far-ultraviolet to the infrared. We find no variations of the overall stellar population properties among the different samples of cluster early-type galaxies.
However, when comparing cluster versus field stellar population properties we find that, even if the (star formation weighted) ages are similar and depend only on galaxy mass, the ones in the field do employ longer timescales to assemble their final mass. We find that, approximately 1 Gyr after the onset of star formation, the majority (75%) of cluster galaxies have already assembled most (> 80%) of their final mass, while, by the same time, fewer (35%) field ETGs have. Thus we conclude that while galaxy mass regulates the timing of galaxy formation, the environment regulates the timescale of their star formation histories. • ### From Shock Breakout to Peak and Beyond: Extensive Panchromatic Observations of the Type Ib Supernova 2008D associated with Swift X-ray Transient 080109(0805.2201) June 17, 2009 astro-ph We present extensive early photometric (ultraviolet through near-infrared) and spectroscopic (optical and near-infrared) data on supernova (SN) 2008D as well as X-ray data analysis on the associated Swift/X-ray transient (XRT) 080109. Our data span a time range of 5 hours before the detection of the X-ray transient to 150 days after its detection, and detailed analysis allowed us to derive constraints on the nature of the SN and its progenitor; throughout we draw comparisons with results presented in the literature and find several key aspects that differ. We show that the X-ray spectrum of XRT 080109 can be fit equally well by an absorbed power law or a superposition of about equal parts of both power law and blackbody. Our data first established that SN 2008D is a spectroscopically normal SN Ib (i.e., showing conspicuous He lines), and show that SN 2008D had a relatively long rise time of 18 days and a modest optical peak luminosity. The early-time light curves of the SN are dominated by a cooling stellar envelope (for $\Delta t$ ~ 0.1 - 4 days, most pronounced in the blue bands) followed by $^{56}$Ni decay. We construct a reliable measurement of the bolometric output for this stripped-envelope SN, and, combined with estimates of E_K and M_ej from the literature, estimate the stellar radius R_star of its probable Wolf-Rayet progenitor. According to the model of Waxman et al. and of Chevalier & Fransson, we derive R_star^{W07}= 1.2+/-0.7 R_sun and R_star^{CF08}= 12+/-7 R_sun, respectively; the latter being more in line with typical WN stars. Spectra obtained at 3 and 4 months after maximum light show double-peaked oxygen lines that we associate with departures from spherical symmetry, as has been suggested for the inner ejecta of a number of SN Ib cores.
The red fraction depends on richness of clusters in the sense that it is higher in rich clusters than in poor groups. The luminosity function of red galaxies in rich clusters is consistent with that in local clusters. On the other hand, luminosity function of red galaxies in poor groups shows a deficit of faint red galaxies. This confirms our earlier findings that galaxies follow an environment-dependent down-sizing evolution. There seems to be a large variation in the evolutionary phases of galaxies in groups with similar masses. Further studies of high-z clusters will be a promising way of addressing the role of nature and nurture effects on galaxy evolution. • ### The rest-frame $K$-band luminosity function of galaxies in clusters to $z=1.3$(astro-ph/0702050) Feb. 2, 2007 astro-ph We derive the rest-frame $K$-band luminosity function for galaxies in 32 clusters at $0.6 < z < 1.3$ using deep $3.6\mu$m and $4.5\mu$m imaging from the Spitzer Space Telescope InfraRed Array Camera (IRAC). The luminosity functions approximate the stellar mass function of the cluster galaxies. Their dependence on redshift indicates that massive cluster galaxies (to the characteristic luminosity $M^*_K$) are fully assembled at least at $z \sim 1.3$ and that little significant accretion takes place at later times. The existence of massive, highly evolved galaxies at these epochs is likely to represent a significant challenge to theories of hierarchical structure formation where such objects are formed by the late accretion of spheroidal systems at $z < 1$. • ### Weak-Lensing Detection at z~1.3: Measurement of the Two Lynx Clusters with Advanced Camera for Surveys(astro-ph/0601334) Jan. 16, 2006 astro-ph (Abridged) We present a HST/ACS weak-lensing study of RX J0849+4452 and RX J0848+4453, the two most distant (at z=1.26 and z=1.27, respectively) clusters yet measured with weak-lensing. The two clusters are separated by ~4' from each other and appear to form a supercluster in the Lynx field. Using our deep ACS F775W and F850LP imaging, we detected weak-lensing signals around both clusters at ~4 sigma levels. The mass distribution indicated by the reconstruction map is in good spatial agreement with the cluster galaxies. From the SIS fitting, we determined that RX J0849+4452 and RX J0848+4453 have similar projected masses of ~2.0x10^14 solar mass and ~2.1x10^14 solar mass, respectively, within a 0.5 Mpc (~60") aperture radius. • ### An Overdensity of Galaxies near the Most Distant Radio-Loud Quasar(astro-ph/0511734) Nov. 25, 2005 astro-ph A five square arcminute region around the luminous radio-loud quasar SDSS J0836+0054 (z=5.8) hosts a wealth of associated galaxies, characterized by very red (1.3 < i_775 - z_{850} < 2.0) color. The surface density of these z~5.8 candidates is approximately six times higher than the number expected from deep ACS fields. This is one of the highest galaxy overdensities at high redshifts, which may develop into a group or cluster. We also find evidence for a substructure associated with one of the candidates. It has two very faint companion objects within two arcseconds, which are likely to merge. The finding supports the results of a recent simulation that luminous quasars at high redshifts lie on the most prominent dark-matter filaments and are surrounded by many fainter galaxies. The quasar activity from these regions may signal the buildup of a massive system. 
• ### Cl 1205+44, a fossil group at z = 0.59(astro-ph/0501486) May 3, 2005 astro-ph This is a report of Chandra, XMM-Newton, HST and ARC observations of an extended X-ray source at z = 0.59. The apparent member galaxies range from spiral to elliptical and are all relatively red (i'-Ks about 3). We interpret this object to be a fossil group based on the difference between the brightness of the first and second brightest cluster members in the i'-band, and because the rest-frame bolometric X-ray luminosity is about 9.2x10^43 h70^-2 erg s^-1. This makes Cl 1205+44 the highest redshift fossil group yet reported. The system also contains a central double-lobed radio galaxy which appears to be growing via the accretion of smaller galaxies. We discuss the formation and evolution of fossil groups in light of the high redshift of Cl 1205+44. • ### A Dynamical Simulation of the Debris Disk Around HD 141569A(astro-ph/0503445) March 21, 2005 astro-ph We study the dynamical origin of the structures observed in the scattered-light images of the resolved debris disk around HD 141569A. We explore the roles of radiation pressure from the central star, gas drag from the gas disk, and the tidal forces from two nearby stars in creating and maintaining these structures. We use a simple one-dimensional axisymmetric model to show that the presence of the gas helps confine the dust and that a broad ring of dust is produced if a central hole exists in the disk. This model also suggests that the disk is in a transient, excited dynamical state, as the observed dust creation rate applied over the age of the star is inconsistent with submillimeter mass measurements. We model in two dimensions the effects of a fly-by encounter between the disk and a binary star in a prograde, parabolic, coplanar orbit. We track the spatial distribution of the disk's gas, planetesimals, and dust. We conclude that the surface density distribution reflects the planetesimal distribution for a wide range of parameters. Our most viable model features a disk of initial radius 400 AU, a gas mass of 50 M_earth, and beta = 4 and suggests that the system is being observed within 4000 yr of the fly-by periastron. The model reproduces some features of HD 141569A's disk, such as a broad single ring and large spiral arms, but it does not reproduce the observed multiple spiral rings or disk asymmetries nor the observed clearing in the inner disk. For the latter, we consider the effect of a 5 M_Jup planet in an eccentric orbit on the planetesimal distribution of HD 141569A. • ### The Morphology - Density Relation in z ~ 1 Clusters(astro-ph/0501224) Jan. 14, 2005 astro-ph We measure the morphology--density relation (MDR) and morphology-radius relation (MRR) for galaxies in seven z ~ 1 clusters that have been observed with the Advanced Camera for Surveys on board the Hubble Space Telescope. Simulations and independent comparisons of our visually derived morphologies indicate that ACS allows one to distinguish between E, S0, and spiral morphologies down to zmag = 24, corresponding to L/L* = 0.21 and 0.30 at z = 0.83 and z = 1.24, respectively. We adopt density and radius estimation methods that match those used at lower redshift in order to study the evolution of the MDR and MRR. We detect a change in the MDR between 0.8 < z < 1.2 and that observed at z ~ 0, consistent with recent work -- specifically, the growth in the bulge-dominated galaxy fraction, f_E+S0, with increasing density proceeds less rapidly at z ~ 1 than it does at z ~ 0.
At z ~ 1 and density <= 500 galaxies/Mpc^2, we find <f_E+S0> = 0.72 +/- 0.10. At z ~ 0, an E+S0 population fraction of this magnitude occurs at densities about 5 times smaller. The evolution in the MDR is confined to densities >= 40 galaxies/Mpc^2 and appears to be primarily due to a deficit of S0 galaxies and an excess of Spiral+Irr galaxies relative to the local galaxy population. The Elliptical fraction - density relation exhibits no significant evolution between z = 1 and z = 0. We find mild evidence to suggest that the MDR is dependent on the bolometric X-ray luminosity of the intracluster medium. Implications for the evolution of the disk galaxy population in dense regions are discussed in the context of these observations. • ### The Luminosity Functions of the Galaxy Cluster MS1054-0321 at z=0.83 based on ACS Photometry(astro-ph/0411516) Dec. 10, 2004 astro-ph We present new measurements of the galaxy luminosity function (LF) and its dependence on local galaxy density, color, morphology, and clustocentric radius for the massive z=0.83 cluster MS1054-0321. Our analyses are based on imaging performed with the ACS onboard the HST in the F606W, F775W and F850LP passbands and extensive spectroscopic data obtained with the Keck LRIS. Our main results are based on a spectroscopically selected sample of 143 cluster members with morphological classifications derived from the ACS observations. Our three primary findings are (1) the faint-end slope of the LF is steepest in the bluest filter, (2) the LF in the inner part of the cluster (or highest density regions) has a flatter faint-end slope, and (3) the fraction of early-type galaxies is higher at the bright end of the LF, and gradually decreases toward fainter magnitudes. These characteristics are consistent with those in local galaxy clusters, indicating that, at least in massive clusters, the common characteristics of cluster LFs are established at z=0.83. We also find a 2sigma deficit of intrinsically faint, red galaxies (i-z>0.5, Mi>-19) in this cluster. This trend may suggest that faint, red galaxies (which are common in z<0.1 rich clusters) have not yet been created in this cluster at z=0.83. The giant-to-dwarf ratio in MS1054-0321 starts to increase inwards of the virial radius or when Sigma>30 Mpc^-2, coinciding with the environment where the galaxy star formation rate and the morphology-density relation start to appear. (abridged) • ### The Transformation of Cluster Galaxies at Intermediate Redshift(astro-ph/0412083) Dec. 3, 2004 astro-ph We combine imaging data from the Advanced Camera for Surveys (ACS) with VLT/FORS optical spectroscopy to study the properties of star-forming galaxies in the z=0.837 cluster CL0152-1357. We have morphological information for 24 star-forming cluster galaxies, which range in morphology from late-type and irregular to compact early-type galaxies. We find that while most star-forming galaxies have $r_{625}-i_{775}$ colors bluer than 1.0, eight are in the red cluster sequence. Among the star-forming cluster population we find five compact early-type galaxies which have properties consistent with their identification as progenitors of dwarf elliptical galaxies. The spatial distribution of the star-forming cluster members is nonuniform. We find none within $R\sim 500$ kpc of the cluster center, which is highly suggestive of an intracluster medium interaction.
We derive star formation rates from [OII] $\lambda\lambda 3727$ line fluxes, and use these to compare the global star formation rate of CL0152-1357 to other clusters at low and intermediate redshifts. We find a tentative correlation between integrated star formation rates and $T_{X}$, in the sense that hotter clusters have lower integrated star formation rates. Additional data from clusters with low X-ray temperatures is needed to confirm this trend. We do not find a significant correlation with redshift, suggesting that evolution is either weak or absent between z=0.2-0.8. • ### The Luminosity Function of Early-Type Galaxies at z~0.75(astro-ph/0407644) July 30, 2004 astro-ph We measure the luminosity function of morphologically selected E/S0 galaxies from $z=0.5$ to $z=1.0$ using deep high resolution Advanced Camera for Surveys imaging data. Our analysis covers an area of 48 arcmin$^2$ (8$\times$ the area of the HDF-N) and extends 2 magnitudes deeper ($I\sim24$ mag) than was possible in the Deep Groth Strip Survey (DGSS). At $0.5<z<0.75$, we find $M_B^*-5\log h_{0.7}=-21.1\pm0.3$ and $\alpha=-0.53\pm0.2$, and at $0.75<z<1.0$, we find $M_B^*-5\log h_{0.7}=-21.4\pm0.2$. These luminosity functions are similar in both shape and number density to the luminosity function using morphological selection (e.g., DGSS), but are much steeper than the luminosity functions of samples selected using morphological proxies like the color or spectral energy distribution (e.g., CFRS, CADIS, or COMBO-17). The difference is due to the 'blue', $(U-V)_0<1.7$, E/S0 galaxies, which make up to $\sim$30% of the sample at all magnitudes and an increasing proportion of faint galaxies. We thereby demonstrate the need for {\it both morphological and structural information} to constrain the evolution of galaxies. We find that the 'blue' E/S0 galaxies have the same average sizes and Sersic parameters as the 'red', $(U-V)_0>1.7$, E/S0 galaxies at brighter luminosities ($M_B<-20.1$), but are increasingly different at fainter magnitudes where 'blue' galaxies are both smaller and have lower Sersic parameters. Fits of the colors to stellar population models suggest that most E/S0 galaxies have short star-formation time scales ($\tau<1$ Gyr), and that galaxies have formed at an increasing rate from $z\sim8$ until $z\sim2$ after which there has been a gradual decline. • ### Ultra Compact Dwarf galaxies in Abell 1689: a photometric study with the ACS(astro-ph/0406613) June 28, 2004 astro-ph The properties of Ultra Compact Dwarf (UCD) galaxy candidates in Abell 1689 (z=0.183) are investigated, based on deep high resolution ACS images. A UCD candidate has to be unresolved, have i<28 (M_V<-11.5) mag and satisfy color limits derived from Bayesian photometric redshifts. We find 160 UCD candidates with 22<i<28 mag. It is estimated that about 100 of these are cluster members, based on their spatial distribution and photometric redshifts. For i>26.8 mag, the radial and luminosity distribution of the UCD candidates can be explained well by Abell 1689's globular cluster (GC) system. For i<26.8 mag, there is an overpopulation of 15 +/- 5 UCD candidates with respect to the GC luminosity function. For i<26 mag, the radial distribution of UCD candidates is more consistent with the dwarf galaxy population than with the GC system of Abell 1689. The UCD candidates follow a color-magnitude trend with a slope similar to that of Abell 1689's genuine dwarf galaxy population, but shifted fainter by about 2-3 mag.
Two of the three brightest UCD candidates (M_V ~ -17 mag) are slightly resolved. At the distance of Abell 1689, these two objects would have King-profile core radii of ~35 pc and r_eff ~300 pc, implying luminosities and sizes 2-3 times those of M32's bulge. Additional photometric redshifts obtained with late type stellar and elliptical galaxy templates support the assignment of these two resolved sources to Abell 1689. Our findings imply that in Abell 1689 there are at least 10 UCDs with M_V<-12.7 mag. Compared to the UCDs in the Fornax cluster they are brighter, larger and have colors closer to normal dwarf galaxies. This suggests that they may be in an intermediate stage of the stripping process. Spectroscopy is needed to definitely confirm the existence of UCDs in Abell 1689. • ### X-ray temperature and morphology of z>0.8 clusters of galaxies(astro-ph/0110032) Oct. 1, 2001 astro-ph We discuss our current progress in studying a sample of z>0.8 clusters of galaxies from the ROSAT Distant Cluster Survey. To date, we have Chandra observations for four of the ten clusters. We find that the morphologies of two of these four are quite regular, with deviations from circular of less than 5%, while two are strikingly elliptical. When the temperatures and luminosities of our sample are grouped with six other high-redshift measurements, there is no measured evolution in the luminosity-temperature relation. We identify a number of X-ray emitting point sources that are potential cluster members. These could be sources of intracluster medium heating, adding the entropy necessary to explain the cluster luminosity-temperature relation. • ### Measuring $\Omega_m$ with the ROSAT Deep Cluster Survey(astro-ph/0106428) June 24, 2001 astro-ph We analyze the ROSAT Deep Cluster Survey (RDCS) to derive cosmological constraints from the evolution of the cluster X-ray luminosity distribution. The sample contains 103 galaxy clusters out to z=0.85 and flux-limit Flim=3x10^{-14} cgs (RDCS-3) in the [0.5-2.0] keV energy band, with a high-z extension containing four clusters at 0.90<z<1.26 and F>1x10^{-14} cgs (RDCS-1). Model predictions for the cluster mass function are converted into the X-ray luminosity function in two steps. First we convert mass into intra-cluster gas temperature by assuming hydrostatic equilibrium. Then temperature is converted into X-ray luminosity by using the most recent data on the Lx-T relation for nearby and distant clusters. These include the Chandra data for seven distant clusters at 0.57<z<1.27. From RDCS-3 we find \Omega_m=0.35+/-0.12 and \sigma_8=0.66+/-0.06 for a spatially flat Universe with cosmological constant, with no significant constraint on \Gamma. Even accounting for theoretical and observational uncertainties in the mass/X-ray luminosity conversion, an Einstein-de-Sitter model is always excluded at far more than the 3sigma level.
Using this temperature and the assumption of an isothermal sphere, the total mass of RX J0849+4452 is found to be 4.0^{+2.4}_{-1.9} x 10^{14} h_{65}^{-1} M_{\sun} within r = 1 h_{65}^{-1} Mpc. The T_x for RX J0849+4452 approximately agrees with the expectation based on its L_{bol} = 3.3^{+0.9}_{-0.5} x 10^{44} erg s^{-1} according to the low redshift L_x - T_x relation. The very different distributions of X-ray emitting gas and of the red member galaxies in the two z > 1 clusters, in contrast to the similarity of the optical/IR colors of those galaxies, suggests that the early-type galaxies mostly formed before their host clusters. • ### The CFH Optical PDCS survey (COP) I: The Data(astro-ph/0003137) March 9, 2000 astro-ph This paper presents the COP (COP: CFHT Optical PDCS; CFHT: Canada-France-Hawaii Telescope; PDCS: Palomar Distant Cluster Survey) survey data. We describe our photometric and spectroscopic observations with the MOS multi-slit spectrograph at the CFH telescope. A comparison of the photometry from the PDCS (Postman et al. 1996) catalogs and from the new images we have obtained at the CFH telescope shows that the different magnitude systems can be cross-calibrated. After identification between the PDCS catalogues and our new images, we built catalogues with redshift, coordinates and V, I and R magnitudes. We have classified the galaxies along the lines of sight into field and structure galaxies using a gap technique (Katgert et al. 1996). In total we have observed 18 significant structures along the 10 lines of sight. • ### Deep probing of the Coma cluster(astro-ph/9802355) Feb. 27, 1998 astro-ph As a continuation of our study of the faint galaxy luminosity function in the Coma cluster of galaxies, we report here on the first spectroscopic observations of very faint galaxies (R $\le$ 21.5) in the direction of the core of this cluster. Out of these 34 galaxies, only one may have a redshift consistent with Coma, all others are background objects. The predicted number of Coma galaxies is 6.7 $\pm$ 6.0, according to Bernstein et al. (1995, B95). If we add the 17 galaxies observed by Secker (1997), we end up with 5 galaxies belonging to Coma, while the expected number is 16.0 $\pm$ 11.0 according to B95. Notice that these two independent surveys lead to the same results. Although the observations and predicted values agree within the error, such results call into question the validity of statistical subtraction of background objects commonly used to derive cluster luminosity functions (e.g. B95). More spectroscopic observations in this cluster and others are therefore urgently needed. As a by-product, we report on the discovery of a physical structure at a redshift z$\simeq$0.5. • ### A Deep Probing of the Coma cluster(astro-ph/9708214) Aug. 22, 1997 astro-ph We present here results from a deep spectroscopic survey of the Coma cluster of galaxies (29 galaxies between 18.98<m_R<21.5). Only 1 of these galaxies is within Coma compared to an expected 6.7 galaxies computed from nearby control fields. This discrepancy potentially indicates that Coma's faint end luminosity function has been grossly overestimated and raises concerns about the validity of using 2D statistical subtraction to correct for the background galaxy population when constructing cluster luminosity functions. • ### A Search For Large-Scale Structure at High Redshift(astro-ph/9702025) Feb. 3, 1997 astro-ph We present new and exciting results on our search for large-scale structure at high redshift.
Specifically, we have just completed a detailed analysis of the area surrounding the cluster CL0016+16 (z=0.546) and have the most compelling evidence yet that this cluster resides in the middle of a supercluster. From the distribution of galaxies and clusters we find that the supercluster appears to be a sheet of galaxies, viewed almost edge-on, with a radial extent of 31 Mpc, a transverse dimension of 12 Mpc, and a thickness of ~4 Mpc. The surface density and velocity dispersion of this coherent structure are consistent with the properties of the "Great Wall" in the CfA redshift survey.
2021-03-02 04:45:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6337916254997253, "perplexity": 3318.216132876106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363217.42/warc/CC-MAIN-20210302034236-20210302064236-00171.warc.gz"}
https://ask.openstack.org/en/users/21154/brunodzogovicgmailcom/?sort=recent
2018-08-30 10:52:34 -0500 answered a question sctp load balancer

I was extensively researching this option and came to the conclusion that OpenStack doesn't natively support routing of SCTP headers. You will need to use third-party solutions such as GRE tunneling or L2TP, likely with OvS. Overlay is, unfortunately, the only option. You can establish a master GRE node and from there relay other tunnels towards the machines that you need. You can also do it with a VPN, if you need some sort of encryption (or IPsec).

2017-07-07 03:06:08 -0500 asked a question OpenStack deployment fails with message "Failed tasks: Task[primary-database/16]"

This happens at the controller node. I have 3 controller nodes, 10 compute nodes, one ceilometer, 6 storage and one monitor node. I have tried to delete each of the controller nodes to see if the error persists, and whenever I delete the next controller node, the same error is still reported on one of the others. In the end I tried with one controller node; still the error is the same. I have tried to troubleshoot this problem with no luck so far. If anyone has encountered something similar, please share your opinions and experiences.

2017-07-06 13:23:08 ERR (/Stage[main]/Osnailyfacter::Database::Database_backend_wait/Osnailyfacter::Wait_for_backend[mysql]/Haproxy_backend_status[mysql]) /usr/bin/puppet:8:in `'
2017-07-06 13:23:08 ERR (/Stage[main]/Osnailyfacter::Database::Database_backend_wait/Osnailyfacter::Wait_for_backend[mysql]/Haproxy_backend_status[mysql]) /usr/lib/ruby/vendor_ruby/puppet/util/command_line.rb:92:in `execute'
2017-07-06 13:23:08 ERR (/Stage[main]/Osnailyfacter::Database::Database_backend_wait/Osnailyfacter::Wait_for_backend[mysql]/Haproxy_backend_status[mysql]) /usr/lib/ruby/vendor_ruby/puppet/util/command_line.rb:146:in `run'
2017-07-06 13:23:08 ERR (/Stage[main]/Osnailyfacter::Database::Database_backend_wait/Osnailyfacter::Wait_for_backend[mysql]/Haproxy_backend_status[mysql]) /usr/lib/ruby/vendor_ruby/puppet/application.rb:381:in `run'
2017-07-06 13:23:08 ERR (/Stage[main]/Osnailyfacter::Database::Database_backend_wait/Osnailyfacter::Wait_for_backend[mysql]/Haproxy_backend_status[mysql]) /usr/lib/ruby/vendor_ruby/puppet/util.rb:496:in `exit_on_fail'
2017-07-06 13:23:08 ERR (/Stage[main]/Osnailyfacter::Database::Database_backend_wait/Osnailyfacter::Wait_for_backend[mysql]/Haproxy_backend_status[mysql]) /usr/lib/ruby/vendor_ruby/puppet/application.rb:381:in `block in run'
2017-07-06 13:23:08 ERR (/Stage[main]/Osnailyfacter::Database::Database_backend_wait/Osnailyfacter::Wait_for_backend[mysql]/Haproxy_backend_status[mysql]) /usr/lib/ruby/vendor_ruby/puppet/application.rb:507:in `plugin_hook'
2017-07-06 13:23:08 ERR (/Stage[main]/Osnailyfacter::Database::Database_backend_wait/Osnailyfacter::Wait_for_backend[mysql]/Haproxy_backend_status[mysql]) /usr/lib/ruby/vendor_ruby/puppet/application.rb:381:in `block (2 levels) in run'
2017-07-06 13:23:08 ERR (/Stage[main]/Osnailyfacter::Database::Database_backend_wait/Osnailyfacter::Wait_for_backend[mysql]/Haproxy_backend_status[mysql]) /usr/lib/ruby/vendor_ruby/puppet/application/apply.rb:159:in `run_command'

2017-04-27 06:17:54 -0500 received badge ● Enthusiast
2020-10-23 06:36:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21504321694374084, "perplexity": 9015.957007905385}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880656.25/warc/CC-MAIN-20201023043931-20201023073931-00381.warc.gz"}
https://www.mapleprimes.com/users/Carl%20Love/replies
Carl Love · 20954 Reputation · 8 years, 211 days · Natick, Massachusetts, United States

My name was formerly Carl Devore.

## Plot too big...

@Earl The plot in your worksheet is much too big for my computer to handle. The GUI alone consumed 13 Gig of memory, and I needed to kill my entire Maple session. However, I can see from your code that you are using a coloring technique completely different from what I described.

## Just trying to read the code...

@acer I agree in general with all that you said. However, you might not realize that my only purpose (and I'm guessing VV's only purpose) was to read (as a human) the code of partition1 out of mathematical interest. I wasn't trying to extract the code to execute it. I think that both of our methods are useful for just reading the code. Whenever I'm on a code-reading exploration like that, I habitually set verboseproc to 2 or 3 and opaquemodules to false (whether it ends up being needed or not). My comment about browsing the library was meant to be a remark on the difficulties one faces when trying to read the library (just as a human reader, not for execution purposes). I didn't mean to suggest that some large percentage of those numeric-name-only entries were similar to partition1. By the way, why is the library mentioned specifically in that bit of the Inert tree that I extracted?

## Avoiding piecewise?...

I guess that you're avoiding piecewise because you can use neither it nor Heaviside in off-the-shelf Fortran?

## ToInert, pointto...

@vv Here's another way to get to the code of that procedure:

interface(verboseproc= 3):
kernelopts(opaquemodules= false): #not strictly needed in this case
indets(
    ToInert(eval(combinat:-partition)),
    And(
        specfunc(_Inert_ASSIGNEDLOCALNAME),
        satisfies(x-> op(1,x)="partition1")
    )
);

{_Inert_ASSIGNEDLOCALNAME("partition1", "PROC", 36893490238314748452, _Inert_ATTRIBUTE(_Inert_STRING("C:\Program Files\Maple 2021\lib\maple.mla")))}

P:= pointto(op([1,3], %));
P := partition1
eval(P);

Browsing through the main library with LibraryTools, I chagrinfully note that the vast majority (about 90%) of entries are anonymous, with their only "name" being an integer of 5 or 6 digits.

## *Much* faster with remember table...

@vv If you replace option autocompile with option remember, then for the largest of the cases that you showed, the *non*-compiled procedure is 169 times faster than the compiled procedure.

## I've seen it also...

@tomleslie I've seen that problem also in Maple 2021. For me, it didn't occur with this worksheet. I suspect that this bug has nothing to do with one's code and instead is based on the environment.

## In what way doesn't it work in 2021?...

@Preben Alsholm As far as I can see, it works in Maple 2021. Which part isn't working for you?

## @vsnyder You wrote: Another prob...

@vsnyder You wrote:

• Another problem (that isn't mentioned in the Maple Advanced Programming Guide) is that sending statements to EvalMapleStatement isn't quite the same as typing them into the Maple textual interface. In the Maple textual interface, you can lay out a "proc" definition on several lines, as you would in C or Fortran or Haskell. To submit it to EvalMapleStatement, it needs to be combined into one long stream of text, from "proc" to "end proc".

That is completely false. You either haven't understood anything that I've said in this thread regarding line breaks, white space, etc., or you've chosen to ignore it.

## 4 equations, 4 unknowns...
@MapleEnthusiast You have a system of 4 equations in 4 unknowns, so I don't know why you think that its solution has a free variable that you can arbitrarily set to 1. It is much, much, much easier to find numeric solutions of sysP with fsolve than it is to work with the huge symbolic solution I generated in the Answer. For example:

fsolve(sysP, {vz[]}=~ 0.1..infinity);
{z[2, 1] = 1.204997696, z[2, 2] = 1.168035195, z[4, 1] = 3.038404729, z[4, 2] = 3.170903246}

Remember that these need to be squared to get the corresponding y values.

## Removing some bloat...

@elonzyy You wrote:

• to_list := a -> convert(a, list);

Better (for generic a): to_list:= `convert/list`: If you're expecting a to be a certain type (Vector, table, set, etc.), then there are better ways.

• map_i := (f, a) -> zip(f, to_list(a), [$(1..nops(to_list(a)))]);

Yes, that's bloated due to unnecessary quotes and parentheses and inefficient due to the repetition of to_list. Better: map_i:= (f,a)-> (L-> f~(L, [$1..nops(L)]))(to_list(a)):

• flat_map_i := (f, a) -> ListTools:-Flatten(map_i(f, a), 1); flat_map := (f, a) -> ListTools:-Flatten(map(f, a), 1);

Those two aren't even worth typing or giving a name to. You use an f that creates a sublist from a sequence, and then you turn that sublist back into a sequence. It'd be better to simply not create any sublist in the first place, for example, map_i(1@@0, [3,5,7]); (1@@0 is the multi-argument identity function. An equivalent form for it is ()-> args.)

## The reason for this exercise...

@Zeineb The reason that your instructor assigned you this exercise is specifically to teach you some of the same things that I've been telling you. This function was specifically designed to cause trouble (an erratic sequence) for Newton's method when the initial value is several "humps" away from the true solution. Look at a plot of f and another of its derivative on the interval x= 0..22. Note the wild oscillations of the derivative. If the derivative is near zero, the tangent line is nearly horizontal, and thus the point where the tangent intersects the x-axis (which is, by definition, the next x) will be far away from the current x. Here's a procedure that I think will help you see the "bare bones" of what's happening. You can use it as an alternative to Newton in Student:-NumericalAnalysis. You should especially experiment with different values of digits and different initial values.

Newton:= proc(
    f::algebraic, X0::(name=realcons),
    {
        digits::posint:= Digits,
        tolerance::positive:= .5*10^(2-Digits),
        maxiters::posint:= 20,
        criterion::identical(relative, absolute, residual):= ':-relative'
    }
)
local
    X:= Array(1..2, [rhs(X0)+1, rhs(X0)], datatype= sfloat),
    x:= lhs(X0),
    F:= subs(_f= unapply(f,x), proc(x) option remember; evalf(_f(x)) end proc),
    N:= subs(_d= unapply(diff(f,x), x), x-> x-F(x)/evalf(_d(x))),
    Crit:= table([
        ':-absolute'= ((a,b,t)-> abs(a-b) < t),
        ':-relative'= ((a,b,t)-> abs(1-b/a) < t),
        ':-residual'= ((a,b,t)-> abs(F(a)) < t)
    ]),
    k;
    Digits:= digits;
    for k from 3 to 2+maxiters do
        if Crit[criterion](X[k-1], X[k-2], tolerance) then break fi;
        X(k):= N(X[k-1])
    od;
    if k > 2+maxiters then print("Did not converge") fi;
    seq(X[2..])
end proc:

f:= sin(2*x)+3*x+2*cos(x)^2-61:
Newton(f, x=0, digits= 15, tolerance= .5e-13, criterion= relative);

## Transcript of Fortran...

@vsnyder I meant a transcript of the Fortran calls to EvalMapleExpression, not a transcript of how the results appear within Maple.
But at this point I think that it's clear to both of us what you were doing wrong: The procedure definition, or any expression, must appear as a single call to EvalMapleExpression. It makes no difference whether the argument to that call contains line breaks. Analogously, if you were entering code into Maple by hand, regardless of interface, no single expression (even a thousand-line module) could be split over multiple execution groups (command prompts); nor would you be allowed to press Return until it was completely entered. The file attachment tool in MaplePrimes is the green uparrow rather than the more-usual paperclip icon on a lot of sites. At least the bright color makes it easier to see.

## Piecewise polynomials...

@Rookieplayer Yes, I noticed many SLATEC routines related to piecewise polynomials (just use your browser's in-page search feature (usually Ctrl-F) on the page https://gams.nist.gov/cgi-bin/serve.cgi/PackageModules/SLATEC), but nothing for non-polynomials. Of course, your functions are piecewise-analytic, so they could be approximated by piecewise polynomials; however, I think that the approach that I described in my previous Reply will be far more accurate, far more computationally efficient, and much easier to implement. By the way, in case you don't already know this: You should check out Maple's command CodeGeneration:-Fortran, which I used to generate HVS above.

## Use Heaviside form...

@Rookieplayer I can't find piecewise in SLATEC either. Before sending your code to Fortran, convert all the piecewises to Heaviside (this could be done all at once with the subsindets command). I can't find Heaviside in SLATEC either, but it'd be trivial to implement. Substitute HVS (or another Fortran-acceptable name) for Heaviside (this could be done all at once with subs). Then use Fortran code such as this for HVS:

      doubleprecision function HVS (x)
      doubleprecision x
      if (x .le. 0.0D0) then
         HVS = 0.0D0
         return
      else
         HVS = 0.1D1
         return
      end if
      end

## My latex(...) results are different from...

My latex(...) results are quite different from yours:

restart:
Physics:-Version();
The "Physics Updates" package is not installed
interface(version);
Standard Worksheet Interface, Maple 2021.0, Windows 10, March 5 2021 Build ID 1523359
interface(typesetting);
extended
interface(prettyprint);
2
latex:-Settings();
[cacheresults = true, commabetweentensorindices = false, invisibletimes = " ", leavespaceafterfunctionname = false, linelength = 66, powersoftrigonometricfunctions = mixed, spaceaftersqrt = true, usecolor = true, usedisplaystyleinput = true, useimaginaryunit = I, useinputlineprompt = true, userestrictedtypesetting = false, usespecialfunctionrules = true, usetypesettingcurrentsettings = false]
sol:= u(r,t) = invlaplace(
    BesselJ(0,10*(-s)^(1/2)*r)/BesselJ(0,20*(-s)^(1/2))*s/(s^2+1), s, t
) - invlaplace(
    BesselJ(0,10*(-s)^(1/2)*r)/BesselJ(0,20*(-s)^(1/2))/s, s, t
) - cos(t) + 1:
latex(sol);
u \! \left(r , t\right) = \mathcal{L}^{-1}\! \left(\frac{J_{0}\! \left(10 \sqrt{-s}\, r \right) s}{J_{0}\! \left(20 \sqrt{-s}\right) \left(s^{2}+1\right)}, s , t\right) -\mathcal{L}^{-1}\! \left(\frac{J_{0}\! \left(10 \sqrt{-s}\, r \right)}{J_{0}\! \left(20 \sqrt{-s}\right) s}, s , t\right)-\cos \! \left(t \right)+1
2021-05-10 00:57:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6309522390365601, "perplexity": 3035.2582034920288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989030.65/warc/CC-MAIN-20210510003422-20210510033422-00565.warc.gz"}
https://levineast.school.nz/
## LEVIN EAST SCHOOL

LEVIN EAST SCHOOL IS A YEAR 0 – 6 CONTRIBUTING PRIMARY SCHOOL, WHICH WAS ESTABLISHED IN 1953. IT HAS A ROLL OF APPROXIMATELY 380 AND IS LOCATED IN THE GROWING HOROWHENUA REGION.

Nestled between the Tararua Ranges and the Tasman Sea, the area boasts an abundance of natural features and a friendly and welcoming community. At Levin East School, we have a fantastic and progressive teaching team, a supportive and engaged board of trustees and a passionate and active school community. The school is well regarded in the area and our team embrace and engage with modern pedagogical practices.

## School Vision

To provide foundational education so all our learners can be successful citizens.

## Our Values

### Respect / Whakaute

For ourselves, others and our environment is essential.

### Innovation / Auahatanga

Is renewing, changing or creating more effective ways of doing things better.

### Courage / Manawanui

Is the willingness to have a go even in the face of uncertainty.

### Collaboration / Kotahitanga

Is the process of working with others to produce something special.

### Excellence / Tararuatanga

Is the quality of being outstanding and achieving to your full potential.

### Resilience / Manahau

Is the ability to positively step up and move on from challenging situations.
2022-07-01 17:29:31
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9621912837028503, "perplexity": 39.638532736924304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103943339.53/warc/CC-MAIN-20220701155803-20220701185803-00072.warc.gz"}
http://openstudy.com/updates/55af1300e4b0ce10565c8d01
• anonymous Use the Counting Principle to find the probability of choosing the 8 winning lottery numbers when the numbers are chosen at random from 0 to 9 Mathematics
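The page preserves no answer, so here is a minimal counting-principle sketch in Python; it assumes each of the 8 winning numbers is drawn independently from the digits 0-9, with order mattering and repetition allowed (the usual reading of this exercise):

from fractions import Fraction

digits, picks = 10, 8               # digits 0..9, eight winning numbers
outcomes = digits ** picks          # counting principle: 10 * 10 * ... (8 factors)
probability = Fraction(1, outcomes)
print(outcomes, probability)        # 100000000 1/100000000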
2017-03-30 16:53:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8053485751152039, "perplexity": 361.4570007366665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218195419.89/warc/CC-MAIN-20170322212955-00568-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.zbmath.org/?q=an%3A1048.11049
# zbMATH — the first resource for mathematics The $$n$$th root of the mirror map. (English) Zbl 1048.11049 Yui, Noriko (ed.) et al., Calabi-Yau varieties and mirror symmetry. Providence, RI: American Mathematical Society (AMS) (ISBN 0-8218-3355-3/hbk). Fields Inst. Commun. 38, 195-199 (2003). The authors consider the differential equation $(\Theta^{N-1}-Nz(N\Theta +1)\cdots (N\Theta +N-1))f(z)=0$ where $$\Theta =z\dfrac{d}{dz}$$, $$N$$ is an odd prime number. Let $$f_N$$, $$g_N$$ be the power series solutions, with the asymptotic form $$f_N(z)=1+O(z)$$, $$g_N(z)=f_N(z)\log z+G_N(z)$$, $$G_N(z)=O(z)$$. It is proved that all the coefficients of the power series $$\exp \left( \frac{G_N}{Nf_N}\right)$$ are integers. This result implies the integrality property of the $$N$$th root of the mirror map of a Calabi-Yau manifold and improves an earlier result [B. H. Lian and S. T. Yau, Lectures in algebra and geometry. Proceedings of the international conference on algebra and geometry, National Taiwan University, Taipei, Taiwan, 1995, 215–227 (1998; Zbl 0998.12009)] about the integrality of $$\exp \left( \frac{G_N}{f_N}\right)$$. The technique is based on Dwork’s theory of $$p$$-adic hypergeometric functions [B. Dwork, Ann. Sci. Éc. Norm. Supér., IV. Ser. 6, 295–315 (1973; Zbl 0309.14020)]. For the entire collection see [Zbl 1022.00014]. ##### MSC: 11G25 Varieties over finite and local fields 34M15 Algebraic aspects (differential-algebraic, hypertranscendence, group-theoretical) of ordinary differential equations in the complex domain 12H25 $$p$$-adic differential equations 14J32 Calabi-Yau manifolds (algebro-geometric aspects) 32Q25 Calabi-Yau theory (complex-analytic aspects) 33C20 Generalized hypergeometric series, $${}_pF_q$$
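A quick way to see the integral coefficients of $f_N$ concretely (this is standard hypergeometric bookkeeping, not a claim from the paper under review): applying the displayed operator to $f=\sum_{n\ge 0} a_n z^n$ gives the recurrence $n^{N-1}a_n = N\,(N(n-1)+1)\cdots(N(n-1)+N-1)\,a_{n-1}$, whose solution with $a_0=1$ is $a_n=(Nn)!/(n!)^N$. A Python sketch with exact arithmetic:

from math import factorial
from fractions import Fraction

def f_coeffs(N, nmax):
    # Coefficients a_n of the holomorphic solution f_N(z) = sum a_n z^n,
    # built from n^(N-1) a_n = N * prod_{j=1}^{N-1} (N(n-1)+j) * a_{n-1}.
    a = [Fraction(1)]
    for n in range(1, nmax + 1):
        rhs = N * a[-1]
        for j in range(1, N):
            rhs *= N * (n - 1) + j
        a.append(rhs / n ** (N - 1))
    return a

N = 5
print(f_coeffs(N, 4))
print([Fraction(factorial(N * n), factorial(n) ** N) for n in range(5)])  # same list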
2021-07-28 05:07:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.782142162322998, "perplexity": 770.5996105862635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153521.1/warc/CC-MAIN-20210728025548-20210728055548-00295.warc.gz"}
https://tenpy.readthedocs.io/en/latest/reference/tenpy.linalg.np_conserved.concatenate.html
# concatenate

tenpy.linalg.np_conserved.concatenate(arrays, axis=0, copy=True)

Stack arrays along a given axis, similar to np.concatenate. Stacks the qind of the arrays, without sorting/blocking. Labels are inherited from the first array only.

Parameters

• arrays (iterable of Array) – The arrays to be stacked. They must have the same shape and charge data except on the specified axis.

• axis (int | str) – Leg index or label of the first array. Defines the axis along which the arrays are stacked.

• copy (bool) – Whether to copy the data blocks.

Returns

stacked – Concatenation of the given arrays along the specified axis.

Return type

Array

See also: Array.sort_legcharge()
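The behaviour is deliberately close to plain numpy; a tiny numpy analogue of the axis semantics (the qind/charge bookkeeping is what the npc version adds on top, and is not shown here):

import numpy as np

a = np.zeros((2, 3))
b = np.ones((4, 3))                        # same shape except along axis 0
stacked = np.concatenate([a, b], axis=0)
print(stacked.shape)                       # (6, 3); the other legs must match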
2020-08-13 06:44:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4383584260940552, "perplexity": 5861.418705177378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738960.69/warc/CC-MAIN-20200813043927-20200813073927-00412.warc.gz"}
https://discourse.julialang.org/t/dynamic-latex/69600
# Dynamic Latex

Sorry if a silly question, but is there a way to use variables in latex equations in pluto.jl markdown so that the equations change dynamically. Thanks, The new guy loving this language

I have not tried it for LaTeX, but in normal strings you can write a dollar sign and then the variable (as in "My string has $var symbols", where the variable var is defined elsewhere), to insert the value of the given variable in the string. I think that should work for LaTeX as well.

It's a bit tricky because of the stupid stupid stupid TeX notation ($$ instead of $). What you can do is use Markdown.parse("String \$x = $x\$") instead of md"String $x = ???$", that way you first build the string, and then the markdown.

I have tried exactly that, unsure if I'm missing something: md""" \lim_{h\to $(var)} """ outputs the equation with $(var) in it.

Sweet, thank you, I'll give that a shot.

Oh also, iirc you can use double backtick for the math mode, instead of \$.

Yup, just tested and both work perfectly. Thanks again, Brian

I whipped this quick script up if anyone is interested, to make life a lil easier: BrianTipton1/LatexParser.jl (github.com)
2021-10-25 02:07:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.84749835729599, "perplexity": 3690.8894936378074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587608.86/warc/CC-MAIN-20211024235512-20211025025512-00134.warc.gz"}
https://testbook.com/question-answer/the-value-ofrm-frac243fracn5ti--6103d4821121abaa11e726e0
# The value of $$\rm \frac{243^{\frac{n}{5}}\times {3^{2n + 1}}}{{9^n} \times {3^{n - 1}}}$$?

This question was previously asked in Territorial Army Paper I : Official Practice Test Paper - 4

1. 1
2. 9
3. 3
4. 3n

Option 2 : 9

## Detailed Solution

Given:

$$\rm \frac{243^{\frac{n}{5}}\times {3^{2n + 1}}}{{9^n} \times {3^{n - 1}}}$$

Concept Used:

a^{x + y} = a^x × a^y

a^{x - y} = a^x ÷ a^y

$$\rm a^{\frac{p}{q}}=\sqrt[q] {a^p}$$

Calculation:

$$\rm \frac{243^{\frac{n}{5}}\times {3^{2n + 1}}}{{9^n} \times {3^{n - 1}}}$$

$$\rm \frac{{\left(3^5\right)}^{\frac{n}{5}}\times {3^{2n + 1}}}{{{\left(3^2\right)}^n} \times {3^{n - 1}}}$$

$$\rm \frac{3^n\times {3^{2n + 1}}}{3^{2n} \times {3^{n - 1}}}$$

$$\rm \frac{{3^{3n + 1}}}{{3^{3n - 1}}}$$

= 3^{(3n + 1) - (3n - 1)} = 3^2 = 9
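A quick numeric spot check of the simplification in Python (the values of n are arbitrary; the exact value is 9 for every n):

for n in (1, 2, 5, 10):
    value = 243 ** (n / 5) * 3 ** (2 * n + 1) / (9 ** n * 3 ** (n - 1))
    print(n, value)    # 9.0 each time, up to floating-point rounding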
2022-01-18 10:34:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.258282870054245, "perplexity": 9009.32586701109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300810.66/warc/CC-MAIN-20220118092443-20220118122443-00606.warc.gz"}
http://sundae.triumf.ca/pub2/SSP/subsubsection3_5_32_4.html
### And

The AND of the first and second operands is placed in the first-operand location. Operands are treated as unstructured logical quantities, and the connective AND is applied bit by bit. A bit position in the result is set to one if the corresponding bit positions in both operands contain a one; otherwise, the result bit is set to zero.

Resulting Condition Code:

0 ~ Result is zero
1 ~ Result not zero
2 ~ -
3 ~ -

Program Exceptions:

Access (fetch, operand 2, N; fetch and store, operand 1, NI)

Programming Note

The instruction AND may be used to set a bit to zero. The immediate byte's position in the operand is reversed. Given a word-aligned address D, bytes will be accessed as follows: (the accompanying byte-layout table was lost in extraction)
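A toy Python illustration of the bit-wise semantics and the resulting condition code; the 32-bit width and the function name are assumptions made for the example, not part of the instruction definition:

def and_op(op1, op2, width=32):
    # Bit-by-bit AND of the two operands; the result replaces operand 1.
    # Condition code 0: result is zero; 1: result not zero (2 and 3 unused).
    mask = (1 << width) - 1
    result = (op1 & op2) & mask
    cc = 0 if result == 0 else 1
    return result, cc

print(and_op(0b1100, 0b1010))   # (8, 1): result 0b1000, CC 1
print(and_op(0b0101, 0b1010))   # (0, 0): AND with a disjoint mask clears to zero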
2021-11-27 16:50:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5844886302947998, "perplexity": 4024.7137854804128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358208.31/warc/CC-MAIN-20211127163427-20211127193427-00572.warc.gz"}
https://socratic.org/questions/how-do-you-write-4x-2-4x-2-in-vertex-form
# How do you write 4x^2-4x+2 in vertex form?

May 21, 2015

Given $y = 4 {x}^{2} - 4 x + 2$

The general vertex form is $y = m {\left(x - a\right)}^{2} + b$ where $\left(a , b\right)$ is the vertex of the parabola

Extract $m$: $y = 4 \left({x}^{2} - x\right) + 2$

Complete the square (adding $\frac{1}{4}$ inside the parentheses adds $4 \cdot \frac{1}{4} = 1$, so subtract 1 outside): $y = 4 \left({x}^{2} - x + \frac{1}{4}\right) + 2 - 1$

Rewrite in vertex form: $y = 4 {\left(x - \frac{1}{2}\right)}^{2} + 1$
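Expanding the vertex form back out confirms it: $4 {\left(x - \frac{1}{2}\right)}^{2} + 1 = 4 {x}^{2} - 4 x + 1 + 1 = 4 {x}^{2} - 4 x + 2$, so the vertex is at $\left(\frac{1}{2} , 1\right)$.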
2020-07-09 21:05:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6709854006767273, "perplexity": 4657.421006318101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655901509.58/warc/CC-MAIN-20200709193741-20200709223741-00179.warc.gz"}
https://www.physicsforums.com/threads/cmb-flux-density.330423/
# CMB Flux Density

1. Aug 13, 2009

### Nabeshin

Does anyone know where I can find numbers for (or how to derive) the CMB flux density (W/m^2)? I'm only really interested in our present cosmological time, so a solution may assume the CMB to be at a constant temperature.

Last edited: Aug 13, 2009

2. Aug 14, 2009

### Chalnoth

Well, the CMB is almost a perfect black body at T = 2.725 K. So you can compute it directly from the black body spectrum (Planck's law).

3. Aug 14, 2009

### Nabeshin

First, using Planck's law would give the energy radiated per unit surface area at that temperature, but that's per unit surface area of the radiating body, isn't it? In which case the surface area is... the entire universe? What I'm interested in is if you have, say, a 1 m^2 surface, how much energy does it absorb from the CMB? It's relatively easy to do this for a single radiating black body, like a star, by computing the total energy radiated and then spreading it evenly over a sphere of radius r. I don't really know how to extend this to the CMB though...

4. Aug 14, 2009

### Chalnoth

Also the energy absorbed per unit surface area. This works for the CMB because it's isotropic (as opposed to the light from a star, which only comes from a small area of the sky). So you don't multiply that value by any area to get the flux density of the CMB.

5. Aug 14, 2009

### Nabeshin

So just $$A \sigma T^4$$ should work for total power absorbed, then? Interesting that it should turn out to be so simple!

6. Aug 14, 2009

### Chalnoth

Yup. Just bear in mind that "A" there would be dependent upon how you are doing the measurement, and would typically be the area of the beam of the detector on a telescope.
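Plugging in numbers, a quick Python check of the $$A \sigma T^4$$ estimate for a 1 m^2 one-sided absorber, assuming a perfect black body at 2.725 K:

SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
T_CMB = 2.725             # CMB temperature, K
flux = SIGMA * T_CMB ** 4
print(f"{flux:.3e} W/m^2")    # about 3.1e-06 W/m^2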
2018-07-19 02:20:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8667314052581787, "perplexity": 533.5689209009415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590443.0/warc/CC-MAIN-20180719012155-20180719032155-00389.warc.gz"}
https://gamedev.stackexchange.com/questions/109273/set-sprite-to-current-animation-texture
# set sprite to current animation texture

In my game I have some box2d bodies where I add sprites using the following code in my render() method.

for (Body body : worldBodies) {
    if (body.getUserData() instanceof Sprite) {
        Sprite sprite = (Sprite) body.getUserData();
        Vector2 position = body.getPosition();
        sprite.setPosition(position.x - sprite.getWidth() / 2, position.y - sprite.getHeight() / 2);
        sprite.draw(batch);
    }
}

One of the bodies has to be animated.

birdAnimation = new Animation(1, birdAtlas.getRegions());
birdAnimation.setPlayMode(Animation.PlayMode.LOOP_PINGPONG);

This is the animation, and now I tried to set the body's sprite obstacle6 to the current textureRegion from the Animation using this code:

obstacle6.setRegion(birdAnimation.getKeyFrame(delta));

Somehow it just shows the first texture of the atlas. How can I get it to change? Or is there another way to animate a box2d body? If you need any other information just comment.

Don't pass delta to getKeyFrame; getKeyFrame takes the state time. The state time is likely something like the sum of all the deltas you've seen so far. The state time indicates how "far" into the animation you've gone, and by constantly passing delta (which if you're running at 60 fps will be 0.016s) the Animation will always yield the first frame.
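The accumulation pattern the answer describes, sketched framework-agnostically in Python; the frame list, frame duration, and looping rule are stand-ins for the libGDX objects, not the real API:

FRAME_DURATION = 1.0                # seconds per frame, as in new Animation(1, ...)
FRAMES = ["bird_0", "bird_1", "bird_2"]
state_time = 0.0                    # accumulated play time; what getKeyFrame expects

def key_frame_loop(t):
    # Roughly what Animation.getKeyFrame does for a simple looping play mode.
    return FRAMES[int(t / FRAME_DURATION) % len(FRAMES)]

def render(delta):
    global state_time
    state_time += delta             # accumulate deltas instead of passing delta itself
    return key_frame_loop(state_time)

for _ in range(5):
    print(render(0.6))              # advances through the frames as state_time grows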
2020-02-17 07:47:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21881502866744995, "perplexity": 4128.124956743984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141749.3/warc/CC-MAIN-20200217055517-20200217085517-00476.warc.gz"}
http://physics.aps.org/authors/uwe_oberlack
# Uwe Oberlack Uwe Oberlack studied physics at the Technical University in Munich, Germany, from which he also received his Ph.D. for research performed at the Max Planck Institute of Extraterrestrial Physics with the COMPTEL gamma-ray telescope aboard the Compton Gamma-Ray Observatory. For his thesis research on radioactive ${}^{26}\text{Al}$, including the generation of the first all-sky maps in this line, he was awarded the Otto Hahn Medal by the Max Planck Society in 1998. After post-doctoral research at Columbia University, working on a novel balloon-borne Compton telescope, he moved to Rice University in Houston, TX, where he has been on the faculty since 2001. While his research focus has since shifted to the direct search for dark matter with the XENON project, he continues to pursue interests in gamma-ray astronomy and its instrumentation.
2016-09-25 19:15:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3377350866794586, "perplexity": 1554.0226479868008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660342.42/warc/CC-MAIN-20160924173740-00034-ip-10-143-35-109.ec2.internal.warc.gz"}
https://koasas.kaist.ac.kr/handle/10203/29212
#### Combustion characteristics of high ash anthracite coals in fluidized beds = 저열량 무연탄의 유동층 연소특성

Combustion and hydrodynamic characteristics of high-ash-content anthracite coal in a cold-model fluidized bed (0.38 m ID $\times$ 9.6 m high) and two fluidized bed combustors ($0.3 \times 0.3\ \mathrm{m} \times 4.7\ \mathrm{m}$ high and $1.01 \times 0.83 \times 4.2\ \mathrm{m}$ high) have been studied. The effects of fluidizing gas velocity, bed temperature, static bed height, air-fuel ratio, and solids recycle rate on the combustion characteristics such as the axial temperature profile, carbon conversion in each particle size, overall carbon combustion efficiency, and particle entrainment rate have been determined in the two combustors. A theoretical model for the mean bubble size and its frequency, based on collision theory with a random spatial bubble distribution in freely bubbling gas-fluidized beds, has been developed. A hemispherical bubble velocity diagram of the time-averaged instantaneous bubble motion is constructed in a fluidized bed to determine the average bubble collision frequency. The proposed theoretical model equation for predicting bubble size is $$(U-U_{mf}) (D_b-D_{bo}) + 0.474 g^{1/2} (D_b^{3/2}-D_{bo}^{3/2}) = 1.132 (U-U_{mf})h$$ As can be seen in the above model equation, the gradient of bubble size increases linearly with bubble voidage. Also, the bubble Froude number increases along the bed height with bubble voidage. The bubble Froude number represents approximately a linear relationship with the average fractional change of the square root of the static energy of bubble rise along the bed height. The present model of bubble size is found to represent the data in the literature well. The overall particle entrainment was measured by collecting the entrained particles in dust collectors, and it is analyzed by the one-dimensional particle motion in the freeboard, the entrainment rate at the bed surface, the distribution of particle rising velocity from the bed surface, and the particle attrition in the bed. The overall entrainment rate of particles at the bed surf...

Kim, Sang-Done (researcher); Son, Jae-Ek (researcher)

Description: KAIST (Korea Advanced Institute of Science and Technology), Department of Chemical Engineering
Publisher: KAIST
Issue Date: 1989
Identifier: 61335/325007 / 000795273
Language: eng
Description: Doctoral thesis - KAIST, Department of Chemical Engineering, 1989.8, [ xviii, 338 p. ]
URI: http://hdl.handle.net/10203/29212
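Since the proposed model equation above is implicit in the bubble size $D_b$, it is natural to solve it numerically for a given height $h$. A Python sketch using bisection; the operating values are illustrative placeholders, not the thesis data:

def bubble_size(h, U, Umf, Db0, g=9.81, tol=1e-10):
    # Solve (U-Umf)(Db-Db0) + 0.474 g^(1/2) (Db^(3/2)-Db0^(3/2)) = 1.132 (U-Umf) h
    # for Db; the left side minus the right side is monotone increasing in Db.
    def resid(Db):
        return ((U - Umf) * (Db - Db0)
                + 0.474 * g ** 0.5 * (Db ** 1.5 - Db0 ** 1.5)
                - 1.132 * (U - Umf) * h)
    lo, hi = Db0, 10.0              # resid(Db0) < 0, so the root is bracketed
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if resid(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Made-up operating conditions, SI units (m, m/s):
print(bubble_size(h=0.5, U=0.8, Umf=0.3, Db0=0.01))   # about 0.24 m here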
2021-06-21 06:24:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28625354170799255, "perplexity": 3423.1666108897407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488268274.66/warc/CC-MAIN-20210621055537-20210621085537-00088.warc.gz"}
https://veekyforums.com/thread/8029203/science/traveling-salesman-algorithm.html
# Traveling Salesman Algorithm

How has nobody come up with a way to solve the traveling salesman algorithm efficiently? Why can't you just use a powerful computer to brute force it quickly?

The point is that there is no way to solve it other than by brute force

so what's the problem with using brute force? does it really take that long to solve when using a lot of destinations?

If your list of places is billions long, then yes.

Travelling salesman and cities is just an example, it's the structure of the question and the inability to solve it without brute force that makes it interesting

What the fuck are you asking? They have come up with ways to solve the travelling salesman problem efficiently, they're called Local Optimization Algorithms, or NLP algorithms - Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing, Ant Colony Optimization... They all solve the TSP problem within a reasonable amount of time. Do they solve it by finding the global minimum, the absolute best solution? No. The only way to do that is by checking all possible permutations - by brute forcing it. The more powerful computer you have, the faster it will solve, but considering the number of points you have in the problem, even the most powerful computer will only help so much. The complexity grows like O(n!), so when you have too many vertices it's idiotic to brute force. I don't know what you're asking. The solution is easy - brute force, but since you can't find the solution during your own lifetime if you have too many vertices in the problem, you fall back on Local Optimization Algorithms - which work incredibly well if you ask me. For any real problem that I could formulate as a TSP problem, I would run it through a couple of different algorithms a couple of times, playing with parameters, and then just take the best solution I got and seal the deal. I'd even put my signature on it, no matter what the consequences were. That's just the way it is.

A* or dijkstra now fuck off

>want to find shortest path between the 20 starbucks in my town
>have to do literally 2432902008176640000 computations
this is why we don't like brute force

>places is billions
no, that is the whole problem mate, 50 cities is already a problem

Quantum computers would find the solution with a single iteration using cone tracing per vertex method.

what is that

>If your list of places is billions long, then yes.
You don't really need billions for this to be a problem. Total solutions are (n-1)!/2. So if n = 100, there are about 4.6663 E+157 solutions. If n = 20, there are 60,822,550,204,416,000 solutions. n=billions is a pipe dream. Also: if someone invented a non-brute-force solution, it might go a long way towards solving P vs NP.

anyone tried a multipole kind of approach?

which is?

The fast multipole method used in physics to speed up simulations en.wikipedia.org/wiki/Fast_multipole_method we could pretend cities close to each other attract each other more strongly than those far away (like gravity).

Make it yourself! The algorithms that solve TSP problems are generally very easy to implement and program. If you can make out the idea from that link, go ahead, try and see what you come up with. You don't even need to be a good programmer, you can probably do it in MATLAB if you formulate your solution well.

Well then there you go

Yes, I suppose it's about time I took some swings at it while I'm in that mode of thinking.
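To make "brute force" concrete, here is a minimal exact enumeration in Python; fixing the start city leaves (n-1)! tours to check (half that for symmetric distances), which is why it dies well before 20 cities. The 4-city matrix is a made-up example:

from itertools import permutations

def tsp_brute_force(dist):
    # Exact TSP by exhaustive enumeration: O(n!) and hopeless beyond ~12 cities.
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):          # fix city 0 as the start
        tour = (0,) + perm
        cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(tsp_brute_force(dist))    # (21, (0, 2, 3, 1))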
Are you just putting words together without knowing their meaning?
>solve the traveling salesman algorithm efficiently?
>brute force it
everyone else knew what i meant

The time complexity of a naive TSP approach (AKA "brute-forcing it") is O(n!). So if you have a million spots, you have to do 1000000! passes. "Efficiently" is generally taken to mean polynomial time, ie like O(n^3) or something, meaning that if your input is of size n then it will take at most kn^3 time in the worst case, for some constant k that doesn't change with n. The brute force way to solve TSP is O(n!) time, which is a lot worse and not even remotely polynomial (the polynomial must be fixed and have constant powers). Even for 100 locations that's 100!, which is a 158 digit number. Keep in mind there are still practical ways to in general solve it without it completely exploding (ie you can *usually* solve for more towns doing this than with a naive bruteforce) if you can do things like attach geometric coordinates to the places, but even then there's no known way to get a polynomial time in the worst case. Also keep in mind that there are such things as approximation schemes. A k-approximation scheme is an algorithm such that it always returns an answer to within k*opt, where opt is the actual optimum. So you see things like 3/2-approximation schemes, so if the true shortest path is 200 then it will always return an answer of at most 300, and that can be proven, and these approximation schemes are usually polynomial time (otherwise why bother). I don't know much about actual TSP approximation algorithms or what k would be if there even are any, but I would imagine they would benefit from attaching geometric coordinates instead of just looking at a graph.

More like anything past n=20 or so

Do you know the pseudocode for the brute force way? I'd be interested in seeing what the skeleton of an O(n!) algorithm looks like

dijkstra's can't really solve the travelling salesman problem though it can return a pretty reasonable answer, but not the true solution

An O(n!) naive one would literally just be like:

for each permutation:
    if cost < best:
        update best

However more likely you'd write a backtracking algorithm which would cut off as soon as it realized that even on the path it's considering so far it won't do as well as the best known path so far. Lots of intractable problems are tackled this way. It's usually done via a recursive call, something like

backtrack(v, visited, cost): {
    visited[v] = true
    if all vertices visited, update best if possible
    for u in neighborhood of v:
        if visited[u] == false and best > cost + weight(v, u):
            backtrack(u, visited, cost + weight(v, u))
    visited[v] = false
}

as a naive backtrack (or something like that, just writing this in the comment box; a runnable version follows below) see: en.wikipedia.org/wiki/Backtracking

It can return a pretty reasonable answer? How would you adapt it? I mean, you'd want to visit every single vertex at least once, so...? I thought about looking at a start vertex then for every other vertex pair doing dijkstra to them and back, but even that won't get you every vertex. Do you look through all paths P generated by dijkstra and then choose one that maximizes $\frac{|P|}{cost(P)}$ or some other heuristic, then try and build on that? I'm trying to see how you'd come up with a reasonable approximation scheme using dijkstra but I don't get it, although I only thought about it for less than a minute. You kinda caught me in a corner.
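And a runnable Python version of the backtracking sketch quoted above, with the best-so-far pruning made explicit (same made-up matrix as in the earlier sketch; the worst case is still factorial, but hopeless branches get cut early):

def tsp_backtrack(dist):
    # Exact TSP via depth-first backtracking with cost-based pruning.
    n = len(dist)
    best = [float("inf")]
    visited = [False] * n
    visited[0] = True

    def go(v, count, cost):
        if count == n:                          # all vertices visited: close the tour
            best[0] = min(best[0], cost + dist[v][0])
            return
        for u in range(n):
            if not visited[u] and cost + dist[v][u] < best[0]:   # prune
                visited[u] = True
                go(u, count + 1, cost + dist[v][u])
                visited[u] = False

    go(0, 1, 0)
    return best[0]

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(tsp_backtrack(dist))    # 21, the same optimum the brute force finds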
I do distinctly remember explaining why Dijkstra's can be used to at least approximate a solution to the TSP, but that was a long time ago and I haven't touched the algorithms in almost a year now.

Θ(n!), that's why you can't just "brute force it quickly"

This post kinda says it all. If you can come up with a way to solve TSP in polynomial time, you will be a very wealthy fellow.

I just figured out how to do it

basically, to put it in very simple terms, there is no way to find the MOST optimal solution without looking at every single possible path. it's an n! problem. Of course, it can be approximated much more efficiently.

place oats on a table in the relative positions of your cities, and let slime mold have a go at it

hmm...so why wouldn't a greedy approach work for TSP?

>hmm...so why wouldn't a greedy approach work for TSP?
Greedy would take /sic but the shortest is /sci.

>needing to brute force something an elementary kid does on a weekly basis
>Mathematicians

Let's see a cute little kindergartner girl get the optimal route between 1000 cities then!

No. You proposed a solution that indicated you were more interested in sounding smart if you were right than doing even preliminary research on the problem or the terms in your post.

>is so mentally decent that he doesn't know the difference between elementary and kindergarten.
>Mathematicians

>mentally decent
Kindergartner sounds cuter than elementary schooler! Although both are nice... Also
>mentally decent
You mean deficient? Because that describes you more than me. Although it's pretty obvious that you're just shitposting.

>You mean deficient? Because that describes you more than me.
Wife with automobile correct is marsh sometimes. 2/10

>let's find the optimal route out of many possible routes
>posts a puzzle with one fixed route
1/10 if bait, please commit sudoku/10 if srs

what the fuck is this supposed to be

>He can't see the clear riemannian manifold
brainlet detected

looks like Danny DeVito hiding behind a very flamboyant fox

Then by all means publish and reap your rewards! I'll be waiting on the edge of my seat to read your paper.

>in charge of not being trolled
>Mathematicians

For every possible number of n, have a subroutine that does it that many times, and at the beginning call the correct subroutine based on n. Voilà, polynomial time complexity

4/25/2016. The day a random user solved the P vs NP problem on Veeky Forums

I know, right? Jesus, why couldn't I have been smart enough to figure that out? I humbly bow to user's brilliance.

Hello, I am from /b/. Your post means nothing to me, and I think you make up big words. Good day.

>Voilà
and just like that a millennium prize problem was solved. rest assured i will be on the phone to the clay institute within minutes to register your discovery

>tfw going to sleep every night thinking about Hamiltonian cycles for the last 10 years
want off this ride

wasn't trying to sound smart? i used what i thought were basic terms but apparently weren't the right ones. i still can't believe the only way to find the best solution is by brute forcing it. say you had 10 destinations. isn't there a way you could get the distance between any 2 destinations and use that to figure out the best route? i know it's not that simple, otherwise someone would've solved it already

It's a hard problem. It's not even in NP, since you can't verify a solution in polynomial time, although it is NP-hard (so it's not even NP-complete, it's that hard).
You have to remember that simply coming up with a Hamilton cycle (a cycle with the same order as the graph - it visits every node) in a graph is NP-Complete, and TSP is doing not only that, but finding the shortest Hamilton cycle out of all Hamilton cycles. Even the decision problem of Hamilton cycle ("Does this graph have a Hamilton cycle? Yes or no.") is NP-hard in general. Of course for special classes of graphs you can do it efficiently (proper circular arc graphs (which are equivalent to local transitive tournament orientations of digraphs IIRC) come to mind, but I'm sure there's others), even in low-polynomial time (like O(n^2)). You might be able to come up with a polynomial approximation scheme that gets a "good enough" solution though.

>isn't there a way you could get the distance between any 2 destinations and use that to figure out the best route?
Well first off, which variant of the TSP are we talking about? When you say distance, do you mean min-distance (ie Dijkstra, or I guess Floyd-Warshall to get all-pairs) in the graph, or are you talking about an embedding of a graph where distances between cities correspond to Euclidean distances? And is it a labelled complete graph, or is the diameter of the graph greater than 1? If you impose enough restrictions on the graph you can get an algorithm for whatever restricted class of graphs that might be polynomial time.

the algorithm that is the only known 100% solution is the brute force algorithm, and that takes a lot of time when you have 100+ routes to consider. The purpose of algorithms is to solve real world problems with real world time constraints. They have algorithms that are 99+% which is good enough

>solve
>Genetic... Annealing...
AI doesn't solve squat, it makes guesses that can't be proved rigorously by anyone ever. Saying that you can "solve" an NP-complete problem is dangerous because you'll give stupid or uneducated people the wrong idea. You should say that "we can derive a reasonable approximation within polynomial time", or "we can get the answer some percentage of the time", or "we can program this AI to solve it but we have no way to prove that it can actually do it with any degree of certainty".

>They have algorithms that are 99+% which is good enough
[citation needed]

P vs NP is posed under the assumption that there is no parallel processing. The algorithmic complexity is measured in the number of steps completed, not actual seconds of run time. Splitting the steps into subroutines doesn't eliminate steps or step growth
2022-10-07 13:26:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5261346697807312, "perplexity": 921.0085769316383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00721.warc.gz"}
https://www.physicsforums.com/threads/eigenvalues-of-l-operators.198166/
# Eigenvalues of L operators

1. Nov 14, 2007

### El Hombre Invisible

Hi

1. The problem statement, all variables and given/known data

We're given the operators Lx, Ly and Lz in matrix form and asked to show that they have the correct eigenvalues for l=1. Obviously no problem determining the values, and Lz comes out right; however, we've never actually seen the eigenvalues for Lx and Ly.

2. Relevant equations

I tried finding the eigenvalues in the wave formulation for Lx and Ly using the operators in polar spherical coordinates:

$$L_{x} = i\hbar\left(\sin\phi\,\frac{d}{d\theta} + \cot\theta\,\cos\phi\,\frac{d}{d\phi}\right)$$

$$L_{y} = i\hbar\left(-\cos\phi\,\frac{d}{d\theta} + \cot\theta\,\sin\phi\,\frac{d}{d\phi}\right)$$

Dealing with l = 1, so my spherical harmonics are in one of two forms:

$$Y_{1,0} = \left(\frac{3}{4\pi}\right)^{1/2}\cos\theta$$

$$Y_{1,\pm1} = \mp\left(\frac{3}{8\pi}\right)^{1/2}\sin\theta\,\exp(\pm i\phi)$$

3. The attempt at a solution

Well, applying the operators to the wavefunctions gives me nothing like an eigenvalue. For instance:

$$L_{x}Y_{1,0} = -i\hbar\,\sin\phi\,\tan\theta\, Y_{1,0}$$

Anyone see where I'm going wrong, or just happen to know offhand the eigenvalues for Lx and Ly?

Cheers, El Hombre

Last edited: Nov 15, 2007

2. Nov 14, 2007

### nrqed

It's not wrong, because the Y_lm are simply NOT eigenstates of L_x and L_y! The eigenstates of L_x and L_y are linear combinations of the Y_lm. But if the operators were given to you as matrices in the first place, why do you use differential operators? Simply find the eigenvalues of the matrices you were given!

3. Nov 15, 2007

### El Hombre Invisible

Hi

First off, those LaTeX codes didn't quite come out the way I intended. Second, like I said, I had no problem finding the eigenvalues of the matrix operators. The problem is in verifying them. I get the same eigenvalues for each matrix, i.e. $m\hbar$, with the three possible values of m for l=1. The problem is I have nothing to check these against except common sense.

4. Nov 15, 2007

### nrqed

Oh, so you want to double check your calculation in position space! Is your L_z matrix diagonal? Then if you have the eigenstates of L_x, say, as a column vector, just re-express that eigenstate in terms of the Y_lm and apply the differential operator expression of L_x on it as a check. I mean, if L_z is -1, 0, 1 on the diagonal, then it means that the column vector with 1 at the top represents Y_(l=1, m=-1), the column vector with one in the middle represents Y_(l=1, m=0), etc.

5. Nov 15, 2007

### El Hombre Invisible

Hi nrqed

Yes, Lz is diagonal. In fact, I'll struggle with LaTeX and tell you the matrices:

$$L_{x} = \frac{\hbar}{\sqrt{2}} \left(\begin{array}{ccc}0&1&0\\1&0&1\\0&1&0\end{array}\right)$$

$$L_{y} = \frac{\hbar}{\sqrt{2}} \left(\begin{array}{ccc}0&i&0\\-i&0&i\\0&-i&0\end{array}\right)$$

$$L_{z} = \hbar \left(\begin{array}{ccc}1&0&0\\0&0&0\\0&0&-1\end{array}\right)$$

This gives my eigenvectors for Lx as:

$$\frac{1}{2} \left(\begin{array}{c}1\\\sqrt{2}\\1\end{array}\right), \quad \frac{1}{\sqrt{2}} \left(\begin{array}{c}1\\0\\-1\end{array}\right), \quad \frac{1}{2} \left(\begin{array}{c}1\\-\sqrt{2}\\1\end{array}\right)$$

So if I get this right, the first eigenstate on the left corresponds to: $$\frac{1}{2} Y_{11} + \sqrt{2}\, Y_{10} + \frac{1}{2} Y_{1,-1}.$$ Is that right?

Cheers, El Hombre

6. Nov 15, 2007

### nrqed

Hi. Thanks for posting them, I did not feel like typing them all! You are right except for one tiny mistake: the coefficient of the Y_10 is $\frac{1}{\sqrt{2}}$, not $\sqrt{2}$. Now, if you apply L_x written as a differential operator on that expression (that will be long!)
you should find that it's an eigenstate with the expected eigenvalue. But if you want to make just one double check, you should use the second eigenstate of L_x, which at least has no Y_10 term! It's long to check this way, but that's the way to do it if you really don't mind all the algebra. The fact that you got the correct eigenvalues for the matrices L_x and L_y is already a convincing argument that you did it right. If you want to be sure that you got the correct corresponding eigenstates, you could simply apply the matrices to your answer and check that it works. That would be much quicker than using the representations as differential operators.

7. Nov 16, 2007

### El Hombre Invisible

Hi

Yeah, that's my LaTeXing again. If you click on it you'll see I did put a \frac in there, but to no avail. I don't know why they come out like that, but I just don't spend enough time on here to get to grips with the grammar. Cheers.

Yes, that does seem overcomplicated, to the point where I'm wondering if it can possibly have been what was intended. But even if it isn't, I am at least wiser now. Just not about LaTeX.

Thanks again for all your help.

El Hombre
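For anyone wanting the quick matrix check nrqed suggests, here is a small numerical sketch (mine, not from the thread) using numpy, with ħ set to 1: diagonalizing the L_x matrix above reproduces the eigenvalues -ħ, 0, +ħ and the eigenvector (1/2, 1/√2, 1/2) for the +ħ eigenvalue.

import numpy as np

hbar = 1.0  # work in units where hbar = 1
Lx = (hbar / np.sqrt(2)) * np.array([[0., 1., 0.],
                                     [1., 0., 1.],
                                     [0., 1., 0.]])

vals, vecs = np.linalg.eigh(Lx)   # Hermitian eigenproblem, eigenvalues in ascending order
print(vals)                       # [-1.  0.  1.] times hbar, i.e. m*hbar for m = -1, 0, 1

v = vecs[:, np.argmax(vals)]      # eigenvector for the +hbar eigenvalue
v = v / np.sign(v[0])             # fix the arbitrary overall sign
print(v)                          # [0.5  0.70710678  0.5] = (1/2, 1/sqrt(2), 1/2)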
2017-03-30 03:07:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8484930992126465, "perplexity": 1039.3642018312069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191984.96/warc/CC-MAIN-20170322212951-00223-ip-10-233-31-227.ec2.internal.warc.gz"}
https://webchem.science.ru.nl/chemical-analysis/confidence-intervals/
# Prediction & Confidence intervals ### How good are the titrations? In order to get an idea of the agreement between the results of fifteen students, each having done three replicate measurements, one could count how many titration results are not further away than 0.005 M from the mean value of 0.1005 M. You can verify this by clicking on the "Submit" box below. Next, find how many results differ less than 0.0017 M, the standard deviation, from the true value. How many are closer to the mean value than 2 times and 3 times the standard deviation? The reverse procedure is also possible. Using the next "Submit" button, you can calculate what interval around the mean value contains 95% of all titration results. ### From the past to the future... The principal idea in statistics is the notion that more or less the same results would be expected if the same group of students would perform the same titrations again. Although the individual results would be different, the distribution of the results would be similar. The distribution of the measurements is often approximated by a normal distribution. This is completely defined by only two values: the mean and the standard deviation of the distribution. The spread around the mean value, as measured by the standard deviation, is directly related to the width of a prediction interval. Now, what is a prediction interval anyway? ### Prediction intervals A prediction interval of 95 percent simply means that we expect 95% of all future measurements to fall within this interval. This also means that 5% of all measurements are expected to fall outside! Likewise, prediction intervals of 90% and 99% are often used. The exact calculation of a prediction interval requires a bit of background which is beyond this course; however, approximate values for prediction intervals can easily be explained. We already hinted that the width of a prediction interval is related to the standard deviation of the data. Now, as a rule of thumb, a prediction interval of 95% is obtained by taking the mean plus or minus twice the standard deviation. A prediction interval of 99% (approximately) is given by the mean plus or minus three times the standard deviation: Question: why are 99% prediction intervals wider than 95% prediction intervals? We now see that prediction intervals routinely are constructed from previous data. This implies that the intervals are only valid if we expect the future data to behave in the same way! #### The limit of detection A direct application of prediction intervals is the determination of the limit of detection (LOD) of quantitative analytical methods. A definition of the LOD is: the LOD is the smallest signal value that is significantly (e.g. with 99% confidence) different from the signal of a true blank. To assess the LOD, a sufficient number of true blank values should be measured (preferably more than 20). The LOD is then equal to the mean signal of these measurements plus three times the standard deviation. Using this procedure (because the LOD is the upper bound of a 99% prediction interval), you are 99% sure that a sample yielding a larger signal value than the LOD is not a blank, so the signal is actually due to analyte. ### Confidence intervals A 95-percent prediction interval implies that there is a 95 percent chance that another titration experiment would find a value in that range (provided it is executed in exactly the same way as all the other volume determinations, and by the same people). 
However, each student performed 10 volume determinations, and took the mean value of these as the final result. Obviously, the histogram of all these mean values shows considerably less variation than the individual volume determinations (remember, errors cancel out!). This means that the standard deviation of a mean value is smaller than the standard deviation for individual values. The histograms of the individual measurements and the mean values are depicted below. Clearly, there is one student with quite a low mean value. The means of the other students are very close indeed. The relation between the standard deviation of the individual titration results and the standard deviation for mean values is given by $\sigma_{\text{mean}} = \frac{\sigma}{\sqrt{n}}$ where $n$ is the number of measurements used to calculate the mean. $\sigma$, the Greek lowercase letter sigma, is often used as the symbol for the standard deviation and $\mu$, the Greek lowercase letter mu, as the symbol for the mean (but the latter one does not occur in the equation). Confidence intervals are calculated in exactly the same way as prediction intervals for individual measurements, only the standard deviation for the mean is used instead of the standard deviation of the individual measurements. This formula also explains why the mean is more precise when we use more data: its confidence interval becomes narrower. Again, note that this does not mean that the standard deviation of the individual measurements gets smaller! Now, continue with the questions on this subject.
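As a small sketch of the rules of thumb above (the 0.1005 M mean and 0.0017 M standard deviation are the titration values quoted earlier; the blank signals are made-up numbers for illustration only):

import numpy as np

mean, sd, n = 0.1005, 0.0017, 10   # titration mean, SD, determinations per student

# Approximate 95% prediction interval for one future titration: mean +/- 2*SD
pi = (mean - 2 * sd, mean + 2 * sd)

# Standard deviation of a mean of n values, and the matching confidence interval
sd_mean = sd / np.sqrt(n)
ci = (mean - 2 * sd_mean, mean + 2 * sd_mean)

# Limit of detection: mean blank signal plus three standard deviations
blanks = np.random.normal(0.010, 0.002, size=25)   # hypothetical blank signals
lod = blanks.mean() + 3 * blanks.std(ddof=1)

print("95%% prediction interval: %.4f to %.4f M" % pi)
print("95%% confidence interval of the mean: %.4f to %.4f M" % ci)
print("limit of detection: %.4f" % lod)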
2023-02-07 18:07:39
{"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9017715454101562, "perplexity": 326.8114609423375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500628.77/warc/CC-MAIN-20230207170138-20230207200138-00751.warc.gz"}
https://mathhelpboards.com/threads/problem-of-the-week-8-may-21st-2012.1095/
# Problem of the Week #8 - May 21st, 2012

Status: Not open for further replies.

#### Chris L T521
##### Well-known member
Staff member

Thanks to those who participated in last week's POTW. Here's this week's problem!

-----

Problem: The norm of an $m\times n$ matrix $A=[a_{ij}]$ is given by the formula $\|A\| = \sqrt{\sum_{i=1}^m\sum_{j=1}^n a_{ij}^2}.$ For an $n\times n$ square matrix $A$, show that the value of $r$ that minimizes $\|A-rI\|^2$ is $r=\text{tr}\,(A)/n$, where $\text{tr}\,(A)$ is the trace of $A$ (i.e. the sum of the main diagonal elements of $A$).

-----

#### Chris L T521
##### Well-known member
Staff member

The following members answered the question correctly: Sudharaka, PaulRS, Opalg, and Bacterius. There were two different ways of going about this problem. Sudharaka and Bacterius approached the problem the way I did, whereas PaulRS and Opalg used inner product spaces, which I think is more clever!

Here's Bacterius' solution:

For a square matrix $A$ of dimension $n \times n$, the squared norm is: $\displaystyle \|A\|^2 = \sum_{i = 1}^{n} \sum_{j = 1}^{n} a^2_{ij}$

For a square matrix $A - rI$ for some real $r$, $I$ being the identity matrix of the same dimensions as $A$, we have: $\displaystyle \|A - rI\|^2 = \sum_{i = 1}^{n} \sum_{j = 1}^{n} (a_{ij} - rI_{ij})^2$

Clearly the subtraction of $rI$ only affects the diagonal entries of $A$, so we have: $\displaystyle \|A - rI\|^2 = c + \sum_{i = 1}^{n} (a_{ii} - rI_{ii})^2 = c + \sum_{i = 1}^{n} (a_{ii} - r)^2$ for some constant $c$ solely dependent on $A$.

We want to minimize the following with respect to $r$: $\displaystyle \|A - rI\|^2 = c + \sum_{i = 1}^{n} (a_{ii} - r)^2$, so we wish to solve $\displaystyle \frac{d}{dr} \|A - rI\|^2 = 0$:

$\displaystyle \frac{d}{dr} \|A - rI\|^2 = \frac{d}{dr} \left[ c + \sum_{i = 1}^{n} (a_{ii} - r)^2 \right] = \sum_{i = 1}^{n} (2r - 2a_{ii}) = 2 \sum_{i = 1}^{n} (r - a_{ii}) = 2 \left[ \sum_{i = 1}^n r - \sum_{i = 1}^n a_{ii} \right] = 2 \left( rn - \mathrm{tr}(A) \right) = 0$

Solving for $r$: $\displaystyle 2 \left( rn - \mathrm{tr}(A) \right) = 0 ~~~ \implies ~~~ r = \frac{\mathrm{tr}(A)}{n}$

Because the function being minimized is quadratic in $r$ with a positive leading coefficient, the only root of the derivative must be a minimum. Therefore $\displaystyle \|A - rI\|^2$ is minimized by $\displaystyle r = \frac{\mathrm{tr}(A)}{n}$. $\mathbb{QED}$

Here's PaulRS's solution:

Let us define an inner product for the $n\times n$ matrices: $\left\langle A, B \right\rangle = \displaystyle\sum_{i=1}^n \sum_{j=1}^n [A]_{i,j} \cdot [B]_{i,j}$. The given norm is, in fact, the norm derived from this inner product.

The key now is to note that the subspace generated by $\{I\}$ is $\{ r \cdot I : r \in {\mathbb R} \}$. Thus if we want the $r\in {\mathbb R}$ that minimizes $\| A - r \cdot I \|^2$, that is exactly the $r$ that makes $r\cdot I$ the projection of $A$ onto the subspace generated by $\{I\}$.

Since $\{I\}$ is already an orthogonal basis, the projection is $\displaystyle\frac{\left\langle A, I \right\rangle}{\left\langle I, I \right\rangle} \cdot I = \frac{\text{tr}(A)}{n} \cdot I$, because $\left\langle A, I \right\rangle = \text{tr}(A)$ and $\left\langle I, I \right\rangle = n$. Thus indeed: $r = \displaystyle\frac{\text{tr}(A)}{n}$.
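As a numerical sanity check of the result (my own sketch, not part of the original thread), we can compare ||A - rI||² at r = tr(A)/n with nearby values of r:

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
n = A.shape[0]

def sq_norm(r):
    # Squared Frobenius norm ||A - r I||^2, the quantity being minimized.
    return np.linalg.norm(A - r * np.eye(n), 'fro') ** 2

r_star = np.trace(A) / n
# The claimed minimizer should beat every nearby r.
assert all(sq_norm(r_star) <= sq_norm(r_star + eps)
           for eps in (-0.5, -0.01, 0.01, 0.5))
print(r_star, sq_norm(r_star))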
2020-10-22 14:13:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9416933059692383, "perplexity": 326.6031905763263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107879673.14/warc/CC-MAIN-20201022141106-20201022171106-00244.warc.gz"}
https://www.gamedev.net/forums/topic/344484-framerate-capping-issues-sdl/
# Framerate capping issues (SDL)

## Recommended Posts

Hello, I'm trying to get a framerate regulation system working on my game. At the moment I've just got an empty window, but I'm putting the framerate into the window caption. Here's the code I've got in main.cpp:

Timer fps, update;

int main(int argc, char *argv[])
{
    BoxRacer.Initialise();
    BoxRacer.SetWindow(WINDOW_W, WINDOW_H, 32, "BoxRacer", "");
    fps.CapFPS();
    update.StartTimer();
    fps.StartTimer();

    Font Main;

    while (BoxRacer.IsRunning())
    {
        returnedtime = fps.ReturnTicks();
        SDL_Flip(BoxRacer.MainWindow);
        // All the game code will go in here

        framecount++;
        if (update.ReturnTicks() > 1000)
        {
            char curFPS[64];
            sprintf(curFPS, "Current FPS:%f", (float)framecount / (fps.ReturnTicks() / 1000.f));
            SDL_WM_SetCaption(curFPS, NULL);
            update.StartTimer();
        }

        if (fps.IsCapped())
        {
            while (fps.ReturnTicks() < 1000 / FRAMERATE) // FRAMERATE is 85 FPS
            {
                // wait
            }
        }
    }
    return 0;
}

And here's my Timer class:

// TimerClass.h
// A timer class for regulating frame rate
// Thanks to Lazy Foo' Productions

#ifndef _TIMER_H
#define _TIMER_H

class Timer
{
public:
    Timer();
    void StartTimer();
    void StopTimer();
    void PauseTimer();
    void UnpauseTimer();
    int ReturnTicks();
    bool IsStarted() { return TimerStarted; }
    bool IsPaused() { return TimerPaused; }
    void CapFPS() { Capped = true; }
    bool IsCapped() { return Capped; }

private:
    bool Capped;
    bool TimerPaused;
    bool TimerStarted;
    int TicksAtStart; // The number of ticks to which the timer has been initialised
    int TicksAtPause; // The number of ticks at the point where the timer was paused
};

#endif

As the comments say, thanks to Lazy Foo' Productions for the timer code. The problem I'm having is that the framerate won't cap. The number in the window caption changes every second, but it's erratic: it'll often start at something like 350.2738495, then the next second it'll go to 353.4587392, then 358.1384759, and so on. The framerate should cap at 85 FPS. What am I doing wrong? Is it because I've just got an empty window for now? That shouldn't make a difference. What I want is to have a capped framerate and always be able to display the current framerate in the window title bar.

Thanks in advance, ukdeveloper.

To cap your frame rate you basically do this:

while (frames are being blitted) {
    start frame timer;
    do event handling, calculations, blitting, etc.
    while (frame timer's time < 1000/FPS) {
        wait;
    }
}

I think it's because you forgot to start the timer at the beginning of each frame. Did you look over tutorial 10?

SDL_gfx has a set of functions to provide framerate capping within SDL applications. If you take a look at the source of SDL_framerate.c, you'll see that it really is quite simple. It goes something like this:

manager->framecount++;
current_ticks = SDL_GetTicks();
target_ticks = manager->lastticks + (Uint32)((float)manager->framecount * manager->rateticks);
if (current_ticks <= target_ticks) {
    the_delay = target_ticks - current_ticks;
    SDL_Delay(the_delay);
} else {
    manager->framecount = 0;
    manager->lastticks = SDL_GetTicks();
}

This completely depends on what API you're using, but if the API provides a TimeGetTime and a thread delay feature, you'll be set. SDL_gfx is released under the LGPL, so its source is quite flexible.
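The essence of the first reply (restart the frame timer at the top of every iteration, then wait out the rest of the frame's budget) can be sketched independently of SDL; the following Python outline uses only the standard time module, with names of my own choosing:

import time

FRAMERATE = 85
FRAME_BUDGET = 1.0 / FRAMERATE          # seconds allotted to each frame

frames = 0
fps_window_start = time.perf_counter()

running = True
while running:                           # stand-in for the real game-loop condition
    frame_start = time.perf_counter()    # restart the frame timer EVERY frame

    # ... event handling, game logic and drawing would go here ...

    frames += 1
    now = time.perf_counter()
    if now - fps_window_start >= 1.0:    # update the FPS readout once per second
        print("Current FPS: %.2f" % (frames / (now - fps_window_start)))
        frames, fps_window_start = 0, now

    # Sleep away whatever is left of this frame's budget instead of busy-waiting.
    remaining = FRAME_BUDGET - (time.perf_counter() - frame_start)
    if remaining > 0:
        time.sleep(remaining)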
2019-01-16 23:09:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21798451244831085, "perplexity": 8015.1212999016325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657907.79/warc/CC-MAIN-20190116215800-20190117001800-00135.warc.gz"}
http://connection.ebscohost.com/c/articles/96927582/approximate-controllability-semilinear-system-involving-fully-nonlinear-gradient-term
TITLE: Approximate Controllability of a Semilinear System Involving a Fully Nonlinear Gradient Term
AUTHOR(S): Du, Runmei; Wang, Chunpeng; Zhou, Qian
PUB. DATE: August 2014
SOURCE: Applied Mathematics & Optimization; Aug 2014, Vol. 70 Issue 1, p165
SOURCE TYPE:
DOC. TYPE: Article
ABSTRACT: This paper concerns a control system governed by a semilinear degenerate equation involving a fully nonlinear gradient term. The equation may be weakly degenerate and strongly degenerate on a portion of the lateral boundary, and the gradient term can be controlled by the diffusion term. The linearized system is shown to be approximately controllable by constructing a control by means of its conjugate problem. By doing a series of precise compactness estimates, we prove that the semilinear system is approximately controllable.
ACCESSION #: 96927582

## Related Articles

• Horizontal and Vertical Rule Bases Method in Fuzzy Controllers. Aminifar, Sadegh; Marzuki, Arjuna bin // Mathematical Problems in Engineering; 2013, p1
The concept of horizontal and vertical rule bases is introduced. Using this method enables designers to look for the main behaviors of a system and describe them with greater approximation. The rules which describe the system in the first stage are called the horizontal rule base. In the second stage, the...

• Universal regular control for generic semilinear systems. Bochi, Jairo; Gourmelon, Nicolas // Mathematics of Control, Signals & Systems; Dec 2014, Vol. 26 Issue 4, p481
We consider discrete-time projective semilinear control systems $\xi_{t+1} = A(u_t) \cdot \xi_t$, where the states $\xi_t$ are in projective space $\mathbb{R}\mathrm{P}^{d-1}$, inputs $u_t$ are in a manifold $\mathcal{U}$ of arbitrary finite dimension, and $A:\mathcal{\ldots}$

• Existence and concentration of positive solutions for semilinear Schrödinger-Poisson systems in $\mathbb{R}^{3}$. Wang, Jun; Tian, Lixin; Xu, Junxiang; Zhang, Fubao // Calculus of Variations & Partial Differential Equations; Sep 2013, Vol. 48 Issue 1/2, p243
In this paper, we study the existence and concentration of positive ground state solutions for the semilinear Schrödinger-Poisson system where ε > 0 is a small parameter and λ ≠ 0 is a real parameter, f is a continuous superlinear and subcritical nonlinearity. Suppose that a(x)...

• HIGH MULTIPLICITY AND COMPLEXITY OF THE BIFURCATION DIAGRAMS OF LARGE SOLUTIONS FOR A CLASS OF SUPERLINEAR INDEFINITE PROBLEMS. LÓPEZ-GÓMEZ, JULIÁN; TELLINI, ANDREA; ZANOLIN, FABIO // Communications on Pure & Applied Analysis; Jan 2014, Vol. 13 Issue 1, p1
This paper analyzes the existence and structure of the positive solutions of a very simple superlinear indefinite semilinear elliptic prototype model under non-homogeneous boundary conditions, measured by M ≤ ∞. Rather strikingly, there are ranges of values of the parameters involved...

• Constrained controllability of second order dynamical systems with delay. Klamka, Jerzy // Control & Cybernetics; 2013, Vol. 42 Issue 1, p111
The paper considers finite-dimensional dynamical control systems described by second order semilinear stationary ordinary differential state equations with delay in control. Using a generalized open mapping theorem, sufficient conditions for constrained local controllability in a given time...

• Controllability of a model of combined anticancer therapy. Klamka, Jerzy; Šwierniak, Andrzej // Control & Cybernetics; 2013, Vol. 42 Issue 1, p123
Controllability of a combination of antiangiogenic treatment and chemotherapy is considered.
The model used in the paper is a finite-dimensional dynamical control system described by second order semilinear time invariant ordinary differential state equations. Using a generalized open mapping...

• Existence and Controllability Results for Fractional Impulsive Integrodifferential Systems in Banach Spaces. Haiyong Qin; Xin Zuo; Jianwei Liu // Abstract & Applied Analysis; 2013, p1
We firstly study the existence of PC-mild solutions for impulsive fractional semilinear integrodifferential equations and then present controllability results for fractional impulsive integrodifferential systems in Banach spaces. The method we adopt is based on a fixed point theorem, semigroup...

• Approximate Controllability of Semilinear Neutral Stochastic Integrodifferential Inclusions with Infinite Delay. Li, Meili; Liu, Man // Discrete Dynamics in Nature & Society; 12/20/2015, p1
The approximate controllability of semilinear neutral stochastic integrodifferential inclusions with infinite delay in an abstract space is studied. Sufficient conditions are established for the approximate controllability. The results are obtained by using the theory of analytic resolvent...

• Modal Identification Using OMA Techniques: Nonlinearity Effect. Zhang, E.; Pintelon, R.; Guillaume, P. // Shock & Vibration; 7/7/2015, Vol. 2015, p1
This paper is focused on an assessment of the state of the art of operational modal analysis (OMA) methodologies in estimating modal parameters from output responses of nonlinear structures. By means of the Volterra series, the nonlinear structure excited by random excitation is modeled as best...
2020-07-09 15:34:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.742019772529602, "perplexity": 1905.1021103241255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655900335.76/warc/CC-MAIN-20200709131554-20200709161554-00013.warc.gz"}
https://www.gradesaver.com/textbooks/math/other-math/thinking-mathematically-6th-edition/chapter-6-algebra-equations-and-inequalities-6-2-linear-equations-in-one-variable-and-proportions-exercise-set-6-2-page-364/118
## Thinking Mathematically (6th Edition)

An equality between two ratios is known as a proportion. If $a:b$ and $c:d$ are two ratios, they form a proportion if $a:b=c:d$. Proportionality between two ratios is denoted by the symbol $::$; that is, if $a:b$ and $c:d$ are proportional, this can be written as $a:b::c:d$. Example: consider the two ratios $\frac{4}{9}$ and $\frac{12}{27}$. The proportionality between the two ratios is $\frac{4}{9}=\frac{12}{27}$, since cross-multiplying gives $4\times 27=9\times 12=108$.
2019-12-12 18:39:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9747319221496582, "perplexity": 598.3768245629772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540545146.75/warc/CC-MAIN-20191212181310-20191212205310-00291.warc.gz"}
https://romankurnovskii.com/en/tracks/algorithms-101/codeforces/827-div-4-1742/
Round #827/1742 (Div. 4)

Updated: 2023-03-23

#TODO D E F G

Contest date: 2022-10-13

A. Sum
https://codeforces.com/contest/1742/problem/A

Solution:

import os
import sys
import time

# Input helpers implied by the original snippet but missing from the page text.
def inp():
    return sys.stdin.readline().strip()

def inp_int():
    return int(inp())

def inp_int_list():
    return list(map(int, inp().split()))

def solve():
    arr = inp_int_list()
    arr.sort()
    if arr[0] + arr[1] == arr[2]:
        print('YES')
    else:
        print('NO')

def run():
    for _ in range(inp_int()):
        solve()

if __name__ == "__main__":
    CODE_DEBUG = 0
    if os.environ.get("CODE_DEBUG") or CODE_DEBUG:
        sys.stdin = open("./input.txt", "r")
        start_time = time.time()
        run()
        print("\n--- %s seconds ---\n" % (time.time() - start_time))
    else:
        run()

B. Increasing
https://codeforces.com/contest/1742/problem/B

Solution:

def solve():
    n = inp_int()
    arr = inp_int_list()
    unique = set(arr)
    if n == len(unique):
        print('YES')
    else:
        print('NO')

Explanation from Codeforces: If there are two elements with the same value, then the answer is NO, because neither of these values is less than the other. Otherwise, the answer is YES, since we can just sort the array.

C. Stripes
https://codeforces.com/contest/1742/problem/C

Check all rows for a fully red horizontal stripe; if none exists, the answer is B.

Solution:

def solve():
    inp()  # consume the separator line before each grid
    grid = [inp() for _ in range(8)]
    for row in grid:
        if all(c == 'R' for c in row):
            print('R')
            return
    print('B')

Explanation from Codeforces: Note that if a stripe is painted last, then the entire stripe appears in the final picture (because no other stripe is covering it). Since rows are only painted red and columns are only painted blue, we can just check if any row contains 8 Rs. If there is such a row, then red was painted last; otherwise, blue was painted last.

D. Coprime
https://codeforces.com/contest/1742/problem/D

Solution: (a sketch is given after this section)

E. Scuza
https://codeforces.com/contest/1742/problem/E

Solution:

F. Smaller
https://codeforces.com/contest/1742/problem/F

Slow Solution:

G. Orray
https://codeforces.com/contest/1742/problem/G1
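The Solution fields for D through G were left empty on the page. As a stopgap for D, here is my own sketch (not the site's solution), assuming the usual reading of 1742D: given a_1..a_n with 1 ≤ a_i ≤ 1000, output the maximum i + j (1-indexed) over index pairs with gcd(a_i, a_j) = 1, or -1 if no such pair exists. Because the values are bounded by 1000, it suffices to remember the largest index per value and scan value pairs instead of index pairs.

from math import gcd

def solve_d(arr):
    # Keep only the largest index for each value (values bounded by 1000),
    # then scan pairs of stored values: at most ~10^6 gcd checks.
    best_idx = {}
    for i, v in enumerate(arr, start=1):
        best_idx[v] = i                 # later occurrences have larger indices
    ans = -1
    vals = list(best_idx)
    for x in vals:
        for y in vals:
            if gcd(x, y) == 1:
                ans = max(ans, best_idx[x] + best_idx[y])
    return ans

print(solve_d([2, 3]))   # -> 3 (gcd(2, 3) = 1, indices 1 + 2)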
2023-03-24 09:39:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6451410055160522, "perplexity": 10485.267841652103}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945279.63/warc/CC-MAIN-20230324082226-20230324112226-00295.warc.gz"}
https://buboflash.eu/bubo5/show-dao2?d=1622728379660
6.3. Present Values Indexed at Times Other than t = 0 #tvm

In practice with investments, analysts frequently need to find present values indexed at times other than t = 0. Subscripting the present value and evaluating a perpetuity beginning with $100 payments in Year 2, we find PV1 = $100/0.05 = $2,000 at a 5 percent discount rate. Further, we can calculate today's PV as PV0 = $2,000/1.05 = $1,904.76.

Consider a similar situation in which cash flows of $6 per year begin at the end of the 4th year and continue at the end of each year thereafter, with the last cash flow at the end of the 10th year. From the perspective of the end of the third year, we are facing a typical seven-year ordinary annuity. We can find the present value of the annuity from the perspective of the end of the third year and then discount that present value back to the present. At an interest rate of 5 percent, the cash flows of $6 per year starting at the end of the fourth year will be worth $34.72 at the end of the third year (t = 3) and $29.99 today (t = 0).
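A short sketch reproducing the two calculations above (5 percent rate, the $100 perpetuity starting in Year 2, and the $6 annuity paid at the ends of years 4 through 10):

r = 0.05

# Perpetuity of $100 starting at the end of Year 2:
# a perpetuity is valued one period before its first payment.
pv1 = 100 / r                 # value at t = 1
pv0 = pv1 / (1 + r)           # discounted back to today
print(round(pv1, 2), round(pv0, 2))   # 2000.0 1904.76

# $6 at the end of years 4..10: a 7-year ordinary annuity seen from t = 3.
n, pmt = 7, 6
pv3 = pmt * (1 - (1 + r) ** -n) / r   # value at t = 3
pv_today = pv3 / (1 + r) ** 3         # discounted back to t = 0
print(round(pv3, 2), round(pv_today, 2))   # 34.72 29.99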
2023-01-29 15:10:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5977489948272705, "perplexity": 588.4707728201995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499744.74/warc/CC-MAIN-20230129144110-20230129174110-00208.warc.gz"}
http://www.xml-data.org/JSJYY/2017-1-284.htm
Journal of Computer Applications (计算机应用), 2017, Vol. 37 Issue (1): 284-288. DOI: 10.11772/j.issn.1001-9081.2017.01.0284

### Cite this article

ZHANG Jun, HU Zhenbo, ZHU Xinshan. Real-time traffic accident prediction based on AdaBoost classifier[J]. JOURNAL OF COMPUTER APPLICATIONS, 2017, 37(1): 284-288. DOI: 10.11772/j.issn.1001-9081.2017.01.0284.

### Article history

Real-time traffic accident prediction based on AdaBoost classifier

ZHANG Jun, HU Zhenbo, ZHU Xinshan
School of Electrical and Automation Engineering, Tianjin University, Tianjin 300072, China

Abstract: Traditional road traffic accident forecasting mainly uses historical data, including the number and the loss of traffic accidents, to predict the future trend; however, the traditional method can not reflect the relationship between traffic accidents and real-time traffic characteristics, and it also can not prevent accidents effectively. In order to solve the problems above, a real-time traffic accident prediction method based on an AdaBoost classifier was proposed. Firstly, the road traffic states were divided into normal conditions and dangerous conditions, and real-time collected traffic flow data were used as the characteristic variables to characterize the different states, so the real-time prediction problem could be converted into a classification problem. Secondly, the Probability Density Functions (PDFs) of the traffic flow characteristics under the two conditions at different time scales were estimated by the Parzen window nonparametric estimation method, and the estimated density functions were analyzed with a separability criterion based on probability distributions, so that sample data with an appropriate characteristic variable and time scale could be determined. Finally, the AdaBoost classifier was trained to classify the different traffic conditions. The experimental results show that the correct classification ratio obtained by using the standard deviation of the traffic flow characteristics is 7.9% higher than that obtained by using the average value. The former can reflect the differences between traffic states better and also gives better classification results.
Key words: intelligent transportation; accident prediction; classifier; traffic flow characteristic; Parzen window; separability criterion

0 Introduction

1 Principle of real-time traffic accident prediction

Figure 1 Classification of traffic conditions

$\mathbf{x}_i=\{T_i, L_i, C_i, W_i, F_i\}^{\mathsf{T}};\quad i=1,2,\ldots,N$ (1)

$h(\mathbf{x}_i)=\begin{cases}0, & \mathbf{x}_i\in\omega_1\\ 1, & \mathbf{x}_i\in\omega_2\end{cases}$ (2)

Figure 2 Process of real-time traffic accident prediction

2 Traffic data preparation and processing

2.1 Data collection and preprocessing

1) Set a reasonable range and precision for each parameter, and correct the data that fall outside them. For example, the reasonable range of the average speed is generally between 0 and 1.3 or 1.5 times the posted speed limit at the location, and the reasonable range of the time occupancy is 0 to 100%.

2) Check the record times of each group of traffic flow data; correct data should run from 0 to 719. Delete duplicate records and re-sort out-of-order records.

3) Fill in missing data and other abnormal data. If only isolated records are problematic, fill them with the average of the neighboring records; if the problem spans a period of time, fill it with the average of historical data from the same period.

2.2 Feature selection for traffic states

2.2.1 Principle of feature selection

Figure 3 Misclassification probabilities

P(ω1) and P(ω2) are the prior probabilities of the two traffic states. The probability that a sample of the dangerous traffic state falls in Φ1, i.e. one of the two error rates, is:

$P_2(e)=\int_{\Phi_1} p(\mathbf{x}|\omega_2)\,d\mathbf{x}$ (3)

and likewise

$P_1(e)=\int_{\Phi_2} p(\mathbf{x}|\omega_1)\,d\mathbf{x}$ (4)

$P(e)=\int_{-\infty}^{t} P(\omega_2|\mathbf{x})p(\mathbf{x})\,d\mathbf{x}+\int_{t}^{+\infty} P(\omega_1|\mathbf{x})p(\mathbf{x})\,d\mathbf{x}=P(\omega_2)P_2(e)+P(\omega_1)P_1(e)$ (5)

Figure 4 Feature selection

2.2.2 Parzen window density estimation

$k(\mathbf{x},\mathbf{x}_i)=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left\{-\frac{(\mathbf{x}-\mathbf{x}_i)^2}{2\sigma^2}\right\}$ (6)

$\hat{p}(z)=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left\{-\frac{(z-z_{\Delta t}^{i})^2}{2\sigma^2}\right\}$ (7)

2.2.3 Separability criterion based on probability distribution

$J_D=\int_{z}[p(z|\omega_1)-p(z|\omega_2)]\ln\frac{p(z|\omega_1)}{p(z|\omega_2)}\,dz$ (8)

1) Initialize the weights of the training samples:

$\omega_i=\frac{1}{N};\quad i=1,2,\ldots,N$ (9)

2) For the m-th iteration, construct a weak classifier f_m(x) from the weighted samples, classify, and compute the classification error rate e_m; let

$c_m=\log_2\frac{1-e_m}{e_m}$ (10)

3) Update the sample weights:

$\omega_i=\omega_i\exp[c_m l_{(y_i\ne f_m(\mathbf{x}_i))}]$ (11)

and normalize them so that

$\sum_{i=1}^{N}\omega_i=1$ (12)

where

$l_{(y_i\ne f_m(\mathbf{x}_i))}=\begin{cases}1, & y_i\ne f_m(\mathbf{x}_i)\\ 0, & y_i=f_m(\mathbf{x}_i)\end{cases}$ (13)

4) Repeat steps 2) and 3) until m reaches the maximum number of iterations M.

5) The strong classifier h(x) is obtained by combining the M weak classifiers; for a sample x to be classified, the classifier output is:

$h(\mathbf{x})=\operatorname{sgn}\left[\sum_{m=1}^{M}c_m f_m(\mathbf{x})\right]$ (14)

4 Experiments and results

4.1 Data preparation

4.2 Feature selection

Figure 5 Divergence distance of standard deviation characteristics among different time scales

4.3 Classification results

5 Conclusion

[1] WANG L, ABDEL-ATY M. Predicting crashes on expressway ramps with real-time traffic and weather data[C]//TRB 2015: Proceedings of the Transportation Research Board 94th Annual Meeting. Washington, DC: Transportation Research Board Business Office, 2015: 32-38.
[2] WANG L, ABDEL-ATY M. Real-time crash prediction for expressway weaving segments[J]. Transportation Research Part C, 2015, 61: 1-10. doi: 10.1016/j.trc.2015.10.008
[3] YU R, ABDEL-ATY M. Utilizing support vector machine in real-time crash risk evaluation[J]. Accident Analysis and Prevention, 2013, 51: 252-259. doi: 10.1016/j.aap.2012.11.027
[4] HOSSAIN M, MUROMACHI Y.
A Bayesian network based framework for real-time crash prediction on the basic freeway segments of urban expressways[J]. Accident Analysis and Prevention, 2012, 45(1): 373-381.
[5] XU C, LIU P, WANG W, et al. Evaluation of the impacts of traffic states on crash risks on freeways[J]. Accident Analysis and Prevention, 2012, 47: 162-171. doi: 10.1016/j.aap.2012.01.020
[6] XU C, ANDREW P T, WANG W, et al. Predicting crash likelihood and severity on freeways with real-time loop detector data[J]. Accident Analysis and Prevention, 2013, 57: 30-39. doi: 10.1016/j.aap.2013.03.035
[7] LIN Z, YANG H. Bayesian prediction of traffic accident based on vehicle speed[J]. China Safety Science Journal, 2003, 13(2): 34-36. (in Chinese)
[8] QIN X H, LIU L, ZHANG Y. A traffic accident prediction method based on Bayesian network model[J]. Computer Simulation, 2005, 22(11): 230-232. (in Chinese)
[9] LV Y S, TANG S M. Real-time highway traffic accident prediction based on the k-nearest neighbor method[C]//ICMTMA 2009: Proceedings of the 2009 International Conference on Measuring Technology and Mechatronics Automation. Piscataway, NJ: IEEE, 2009: 547-550.
[10] LV Y S, TANG S M. Real-time highway accident prediction based on support vector machines[C]//CCDC'09: Proceedings of the 21st Annual International Conference on 2009 Chinese Control and Decision Conference. Piscataway, NJ: IEEE, 2009: 4403-4407.
[11] HE D C, ZHANG H J, HAO W N. Feature selection based on conditional mutual information computation with Parzen window[J]. Application Research of Computers, 2015, 32(5): 1387-1390. (in Chinese)
[12] ZHANG H J, YANG J, LI Y. Ship detection in polarimetric SAR images based on the conditional entropy and Parzen windows[J]. Journal of Tsinghua University (Science and Technology), 2012, 52(12): 1693-1697. (in Chinese)
[13] ZHANG X G. Pattern Recognition[M]. Beijing: Tsinghua University Press, 2010: 146-150. (in Chinese)
[14] CAO Y, MIAO Q G, LIU J C. Advance and prospects of AdaBoost algorithm[J]. Acta Automatica Sinica, 2013, 39(6): 745-758. doi: 10.1016/S1874-1029(13)60052-X (in Chinese)
[15] JIA R Y, LI J, WANG G. Optimization and choice of hard drive failure prediction models based on AdaBoost and genetic algorithm[J]. Journal of Computer Research and Development, 2014, 51(Suppl.): 148-154. (in Chinese)
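To make the training procedure in equations (9)-(14) concrete, here is a minimal sketch (my own illustration; the paper gives no code, and the choice of decision stumps as weak classifiers and of synthetic data is mine):

import numpy as np

def adaboost_train(X, y, M=10):
    """Boosting loop of Eqs. (9)-(14); y in {-1, +1}, weak learners are 1-D threshold stumps."""
    N = len(y)
    w = np.full(N, 1.0 / N)                       # Eq. (9): uniform initial weights
    model = []
    for _ in range(M):
        best = None
        for j in range(X.shape[1]):               # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()      # weighted error rate e_m
                    if best is None or err < best[0]:
                        best = (err, j, thr, s)
        err, j, thr, s = best
        err = float(np.clip(err, 1e-10, 1 - 1e-10))  # guard against log of 0
        c = np.log2((1 - err) / err)              # Eq. (10)
        pred = s * np.where(X[:, j] <= thr, 1, -1)
        w = w * np.exp(c * (pred != y))           # Eq. (11): upweight the mistakes
        w = w / w.sum()                           # Eq. (12): renormalize
        model.append((c, j, thr, s))
    return model

def adaboost_predict(model, X):
    # Eq. (14): sign of the weighted vote of the weak classifiers.
    score = sum(c * s * np.where(X[:, j] <= thr, 1, -1) for c, j, thr, s in model)
    return np.sign(score)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0.1 * X[:, 1], 1, -1)      # toy two-state labels
model = adaboost_train(X, y)
print("training accuracy:", (adaboost_predict(model, X) == y).mean())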
2021-05-14 11:10:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8057742118835449, "perplexity": 12783.505765176282}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990449.41/warc/CC-MAIN-20210514091252-20210514121252-00027.warc.gz"}
http://eprints.iisc.ernet.in/11789/
# Numerical Study of Interrupted Impinging Jets for Cooling of Electronics

Behera, Ramesh Chandra and Dutta, Pradip and Srinivasan, K (2007) Numerical Study of Interrupted Impinging Jets for Cooling of Electronics. In: IEEE Transactions on Components and Packaging Technologies, 30 (2). pp. 275-284.

The objective of this paper is to present the results of a numerical investigation of the effect of flow pulsations on the local, time-averaged Nusselt number of an impinging air jet. The problem was considered to provide inputs for augmenting heat transfer from electronic components. The solution is sought through the FLUENT (Version 6.0) platform. The standard $k$-$\epsilon$ model for the turbulence equations and the two-layer zonal model in the wall function are used in the problem. Pressure-velocity coupling is handled using the SIMPLEC algorithm. The model is first validated against some experimental results available in the literature. A parametric study is carried out to quantify the effect of the pulsating jets. The parameters considered are 1) average jet Reynolds number (5130 < Re < 8560), 2) sine and square wave pulsations, 3) frequencies of pulsation (25 < f < 400 Hz), and 4) height of impingement to jet diameter ratios (5 < H/d < 9). In the case of sine wave pulsations, the ratio of the root mean square value of the amplitude to the average value $(A_N)$ was varied from 18% to 53%. The studies are restricted to a constant wall heat flux condition. Parametric conditions for which enhancement in the time-averaged heat transfer from the surface can be expected are identified.
2013-06-20 07:36:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7527934312820435, "perplexity": 1422.7863110570026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710963930/warc/CC-MAIN-20130516132923-00085-ip-10-60-113-184.ec2.internal.warc.gz"}
https://physexams.com/lesson/Free-Fall-Practice-Problems-for-High-Schools_68
# Free Fall Practice Problems for High Schools: Complete Guide

In this article, we are going to practice some problems about a freely falling object in the absence of air resistance. All these questions are suitable for high school or college students, or even the AP Physics 1 exam.

## Freely Falling Motion Problems

Problem (1): A tennis ball is thrown vertically upward with an initial speed of $17\,\rm m/s$ and caught at the same level above the ground.
(a) How high does the ball rise?
(b) How long was the ball in the air?
(c) How long does it take to reach its highest point?

Solution: Take up as the positive direction and the throwing point to be the origin, so $y_0=0$.

(a) The ball rises until its vertical velocity becomes zero. For this ascending part of the motion, we can write the free fall kinematic equation $v^2-v_0^2=-2g(y-y_0)$. Substituting the known values into it and solving for $y$, we get \begin{gather*} v^2-v_0^2=-2g(y-y_0) \\\\ 0-17^2 = -2(10)(y_{max}-0) \\\\ \Rightarrow \boxed{y_{max}=14.45\,\rm m} \end{gather*}

(b) In all free fall practice problems, the best way to find the total flight time that the object was in the air is to use the kinematic equation $y-y_0=-\frac 12 gt^2+v_0t$ and substitute the coordinate of where the object landed. As a rule of thumb, if the object returns to the same point as the launch, its displacement vector is zero, so $y-y_0=0$. Therefore, we will have \begin{gather*} y-y_0=-\frac 12 gt^2+v_0t \\\\ 0=-\frac 12 (10)t^2+17t \\\\ \Rightarrow \boxed{5t^2-17t=0} \end{gather*} The expression obtained in the last step can be solved by factoring out the time and setting the remaining factor to zero. \begin{gather*} 5t^2-17t=0 \\\\ t(5t-17)=0 \\\\ \Rightarrow t=0 \quad ,\quad t=3.4\,\rm s \end{gather*} The first result corresponds to the initial time, and the other, $\boxed{t_{tot}=3.4\,\rm s}$, is the amount of time the ball is in the air until it reaches the ground.

(c) At the highest point the vertical velocity is always zero, $v=0$. Using the equation $v=v_0-gt$ and solving for $t$, we get \begin{gather*} v=v_0-gt \\ 0=17-(10)t \\ \Rightarrow t_{top}=1.7\,\rm s \end{gather*} As you can see, the duration of the ball's ascent, in the absence of air resistance, is always half the total flight time: $t_{top}=\frac 12 t_{tot}$

Problem (2): From a height of $\rm 45\,m$, a ball is thrown straight downward with an initial speed of $\rm 6\,m/s$. How many seconds later does it strike the ground?

Solution: Take up as the positive direction and the release point as the origin, so the initial height becomes $y_0=0$. The ball is moving downward, and because velocity is a vector quantity in physics, we must attach the proper sign to it. Hence, in this case, the correct input for the initial velocity in the freely falling kinematic equations is $v_0=-6\,\rm m/s$. The ball strikes the ground $45\,\rm m$ below the chosen origin, so its correct coordinate is $y=-45\,\rm m$. The only kinematic equation that relates all these variables to the time is $y-y_0=-\frac 12 gt^2+v_0t$. Substituting the numerical values into this equation yields \begin{gather*} y-y_0 =-\frac 12 gt^2+v_0t \\\\ -45-0 =-\frac 12 (10)t^2+(-6)t \\\\ \Rightarrow \boxed{5t^2+6t-45=0} \end{gather*} In the last step, after rearranging, we arrived at a quadratic equation of the form $at^2+bt+c=0$, whose solution is found using the formula $t=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$ where $a,b,c$ are constants.
In this case, we have $a=5$, $b=6$, $c=-45$. Substituting the values into the above formula, we will have \begin{gather*} t=\frac{-6\pm\sqrt{6^2-4(5)(-45)}}{2(5)} \\\\ t=2.46\,\rm s \quad , \quad t=-3.66\,\rm s \end{gather*} Keep in mind that in all free fall practice problems, we must choose the positive time. Therefore, the ball reaches the ground about $2.5\,\rm s$ after being thrown.

Problem (3): A coin is tossed vertically upward and remains in the air for $\rm 0.6\, s$ before it starts to come back down.
(a) How fast is the coin tossed?
(b) How high does the coin rise?

Solution: Let the tossing point be the origin, so $y_0=0$. The time the coin is in the air until it reaches the highest point is $t=0.6\,\rm s$. Recall that in all free fall problems, the object at the highest point has velocity $v=0$.

(a) Use the kinematic equation $v=v_0-gt$, substitute the above known values, and solve for the unknown initial speed $v_0$: \begin{align*} v&=v_0-gt \\ 0&=v_0-(10)(0.6) \\ \Rightarrow v_0&=6\,\rm m/s \end{align*} Therefore, the coin was tossed vertically upward with an initial speed of $6\,\rm m/s$.

(b) In this part, we need a kinematic equation that relates the distance traveled to the time taken, $y-y_0=-\frac 12 gt^2+v_0t$, or to the initial and final velocities, $v^2-v_0^2=-2g(y-y_0)$. We use the second equation as follows: \begin{align*} v^2-v_0^2&=-2g(y-y_0) \\\\ (0)^2-(6)^2 &= -2(10)(y-0) \\\\ y&=\frac{-(6)^2}{-20} \\\\ &=\boxed{1.8\,\rm m} \end{align*} You can also use the first equation and arrive at the same result. Check it out yourself.

Problem (4): A small stone is shot straight up with an initial speed of $\rm 15\,m/s$.
(a) How long does the stone take to reach its maximum height?
(b) How high does the stone go?
(c) With what speed does the stone hit the ground?

Solution: The known data are $y_0=0$ and $v_0=15\,\rm m/s$.

(a) When an object is thrown vertically upward, it rises until it reaches a point where its vertical velocity becomes zero; otherwise it would continue on its way. So, in this part, at the maximum height we have $v=0$. Substituting these known numerical values into the equation $v=v_0-gt$ and solving for the unknown time $t$, we get \begin{align*} 0 &= 15-(10)t \\ \Rightarrow t&=\boxed{1.5\,\rm s} \end{align*} Thus, it takes $1.5\,\rm s$ for the stone to reach the highest point.

(b) For this part, we have the time taken to reach the maximum height, $t=1.5\,\rm s$, and the initial and final velocities. Thus, we can use either the equation $y-y_0=-\frac 12 gt^2+v_0t$ or $v^2-v_0^2=-2g(y-y_0)$. We choose the second equation: \begin{align*} 0-(15)^2 &= -2(10)(y_{max}-0) \\\\ y_{max} &=\frac{-15^2}{-20} \\\\ \Rightarrow y_{max}&= \boxed{11.25\,\rm m} \end{align*} Thus, the stone reaches a maximum height of $11.25\,\rm m$.

(c) When the stone hits the ground at the same level as the throwing point, then, by the definition of displacement, its displacement between the initial and final points, $\Delta y=y-y_0$, is zero. Using this fact, we can use the equation $v^2-v_0^2=-2g(y-y_0)$ to get the velocity at the moment of hitting the ground: \begin{align*} v^2-(15)^2 &=-2(10)(0) \\ \Rightarrow v^2 = 15^2 \end{align*} Taking the square root of both sides yields the two roots $v=\pm 15\,\rm m/s$. Recall that velocity is a vector in physics, having both magnitude and direction. The stone is hitting the ground, so the correct choice for its velocity is the negative sign, i.e. $v=-15\,\rm m/s$.
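Since several of these problems end in the same quadratic, here is a small sketch that automates it (up taken as positive, launch point as origin, g = 10 m/s² by default; the function name is mine):

import math

def flight_time(dy, v0, g=10.0):
    """Positive root of dy = -(1/2) g t^2 + v0 t: time to reach displacement dy."""
    a, b, c = -0.5 * g, v0, -dy
    disc = math.sqrt(b * b - 4 * a * c)
    # Keep the physically meaningful (positive) time.
    return max((-b + disc) / (2 * a), (-b - disc) / (2 * a))

print(flight_time(0, 17))      # Problem 1(b): back to launch level -> 3.4 s
print(flight_time(-45, -6))    # Problem 2: thrown down at 6 m/s from 45 m -> ~2.46 s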
Problem (5): From the top of a high cliff, a stone is released. The stone strikes the ground $2.5\,\rm s$ later. How high is the cliff?

Solution: The stone is released, so its initial speed is zero, $v_0=0$. The total flight time is $t=2.5\,\rm s$. Taking the releasing point as the origin, we have $y_0=0$. Of the kinematic equations, only two contain the vertical displacement, i.e. $v^2-v_0^2=-2g(y-y_0)$ and $y-y_0=-\frac 12 gt^2+v_0t$. To use the first equation, we would need the final velocity with which the stone hits the ground, whereas the second equation needs the total flight time, which is given. Hence, it is simpler to apply the second equation and solve for the unknown vertical distance $y$: \begin{gather*} y-y_0=-\frac 12 gt^2+v_0t \\\\ y-0=-\frac 12 (10)(2.5)^2+(0)(2.5) \\\\ \Rightarrow \quad \boxed{y=-31.25\,\rm m} \end{gather*} The negative sign indicates that the stone strikes the ground $31.25\,\rm m$ below our chosen origin.

Problem (6): A stone is released from rest and falls freely for $4\,\rm s$.
(a) What is the stone's velocity $1.5\,\rm s$ after release?
(b) From what height was the stone released?

Solution: The stone is released from rest, so $v_0=0$. Take upward as the positive direction. The total flight time is $t_{tot}=4\,\rm s$.

(a) With this known information, use the equation $v=v_0-gt$ to find the stone's velocity at any instant of time: \begin{gather*} v=v_0-gt \\ v=0-(10)(1.5) \\ \Rightarrow \boxed{v=-15\,\rm m/s} \end{gather*} The negative sign indicates the direction of the stone's motion at that moment, which is downward.

(b) The time between releasing the stone and hitting the ground is given. With this known information, use the equation $y-y_0=-\frac 12 gt^2 +v_0t$ and solve for the total distance fallen by the stone: \begin{gather*} y-0=-\frac 12 (10)(4)^2+(0)(4) \\\\ \Rightarrow \quad \boxed {y=-80\,\rm m} \end{gather*} The minus sign reminds us that the stone strikes the ground $80\,\rm m$ below our chosen origin.

Problem (7): A person throws a light stone straight up and catches it $2.6\,\rm s$ later. With what speed did he throw the stone, and to what height does the stone rise?

Solution: As with any other free fall problem, let upward be the positive direction and the throwing point be the origin of the coordinate system, so that $y_0=0$. The stone is caught at the same level as the throw, so its vertical displacement is zero, $\Delta y=y-y_0=0$. The total flight time is also known. Hence, applying the vertical displacement kinematic equation, $y-y_0=-\frac 12 gt^2+v_0t$, and solving for the initial speed $v_0$ gives us \begin{gather*} y-y_0=-\frac 12 gt^2+v_0t \\\\ 0=-\frac 12 (10)(2.6)^2+v_0 (2.6) \end{gather*} Factoring out $2.6$ from the last expression, we get \begin{gather*} (2.6)(-5(2.6)+v_0)=0 \\\\ \Rightarrow \quad \boxed{v_0=13\,\rm m/s} \end{gather*} As a side note, if an object is thrown vertically upward and caught at the same level, then, knowing the time interval between these two moments, we can use the following formula to find its initial speed: $v_0=\frac 12 g t$

The stone rises until it reaches a point where its vertical velocity is zero, i.e. $v=0$. Now that we know the stone's initial speed, applying the equation $v^2-v_0^2=-2g(y-y_0)$ gives us \begin{gather*} 0-(13)^2=-2(10)(y-0) \\\\ \Rightarrow \quad \boxed{y=8.45\,\rm m} \end{gather*}

Problem (8): A baseball is thrown straight up with a speed of $25\,\rm m/s$.
(a) With what speed is it moving when it is at a height of $10\,\rm m$?
(b) How much time does it take to reach that point?

Solution: As usual, take up to be the positive $y$-direction and set $y_0=0$. The initial speed is also $v_0=25\,\rm m/s$.

(a) The only time-independent kinematic equation that relates these known values to each other is $v^2-v_0^2=-2g(y-y_0)$. Substituting the known numerical values into this, we get \begin{gather*} v^2-(25)^2=-2(9.8)(10-0) \\\\ v^2=429 \\\\ \Rightarrow \quad v=\pm 20.7\,\rm m/s \end{gather*} As you can see, we obtained a speed with two different signs. Here, the positive sign indicates that the baseball is moving upward as it passes that height, while the negative sign tells us that the baseball is at that height while moving down. Thus, we arrive at the fact that the baseball passes that height twice: once on the way up and once on the way down.

(b) In the previous part, we found that the ball is at that height twice. Thus, there are two corresponding times for this situation. Applying the equation $y-y_0=-\frac 12 gt^2+v_0t$ and solving for the required time $t$, we get \begin{gather*} 10-0=-\frac 12 (9.8)t^2+25t \\\\ \Rightarrow \quad 4.9t^2-25t+10=0 \end{gather*} The quadratic equation above has the two solutions \begin{gather*} t_1=0.44\,\rm s \quad , \quad t_2=4.66\,\rm s \end{gather*} The first time, $t_1$, is when the ball is going up, and the second corresponds to when it is moving down.

Note: the quadratic equation $at^2+bt+c=0$ has the solution formula $t=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$.

Problem (9): A helicopter is ascending vertically with a constant speed of $6.5\,\rm m/s$ and carrying a package of $2\,\rm kg$. When it reaches a height of $150\,\rm m$ above the surface, it drops the package. How much time does it take for the package to hit the surface?

Solution: Take up as the positive direction, as always. There is a subtle point in this case: at the moment the package is released, it has the velocity of its carrier. Since the helicopter is ascending, the package initially moves upward, in our positive direction, so the correct input for the initial velocity is $v_0=+6.5\,\rm m/s$. Applying the equation $y-y_0=-\frac 12 gt^2+v_0t$, we will have \begin{gather*} y-y_0=-\frac 12 gt^2+v_0t \\\\ -150-0=-\frac 12 (10)t^2+(6.5)t \\\\ \Rightarrow 5t^2-6.5t-150=0 \end{gather*} Using a graphing calculator or the formula in the previous question, we find that it takes about $6.2\,\rm s$ for the package to reach the ground. Note that in the above, we inserted a negative value for the vertical position, $y=-150\,\rm m$, since the package hits the ground $150\,\rm m$ below our chosen origin.

Problem (10): A stone is thrown vertically upward from a building $15\,{\rm m}$ high with an initial velocity of $10\,{\rm m/s}$. What is the stone's velocity just before hitting the ground?

Solution: In all free fall practice problems, an important step is choosing the origin. Usually, the throwing (releasing or dropping) point is the best choice. In this case, the hitting point is below the origin, so its vertical displacement ($y$) is negative. Applying the time-independent free fall kinematic equation, we have \begin{align*}v_f^{2}-v_i^{2}&=2\,(-g)\Delta y\\v_f^{2}-(10)^{2}&=2\,(-10)(-15) \\ \Rightarrow v_f&=\pm 20\,{\rm m/s}\end{align*} Since velocity is a vector quantity and just before striking the ground its direction is vertically downward, the negative value must be chosen, i.e., $v_f=-20\,{\rm m/s}$.
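A quick numerical cross-check of Problems (8) and (9), applying the quadratic formula from the note above (a standalone sketch; the variable names are mine):

```python
import math

def quad_roots(a, b, c):
    """Both roots of a t^2 + b t + c = 0 via the quadratic formula."""
    disc = math.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# Problem (8b): 4.9 t^2 - 25 t + 10 = 0
print(quad_roots(4.9, -25, 10))    # ~ (4.66, 0.44) s: down- and up-going passes

# Problem (9): 5 t^2 - 6.5 t - 150 = 0; keep the positive root
print(quad_roots(5, -6.5, -150))   # ~ (6.17, -4.87) s -> about 6.2 s
```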
Problem (11): A bullet is fired vertically downward with an initial speed of $15\,{\rm m/s}$ from the top of a tower $20\,{\rm m}$ high. What is its velocity at the instant of striking the ground?

Solution: Let the origin be the firing point, and note that the initial velocity is directed downward, so $v_i=-15\,{\rm m/s}$. Using the kinematic equation below, we have \begin{align*}v_f^{2}-v_i^{2}&=-2g(y-y_0) \\\\ v_f^{2}-(-15)^{2}&=-2(10)(-20)\\\\ v_f^{2}&=625 \\\\ \Rightarrow v_f&=\pm 25\,{\rm m/s}\end{align*} Since the direction of the velocity at the moment of striking the ground is downward, we must choose the negative sign, so we have $v_f=-25\,\rm m/s$.

Problem (12): There is a well with a depth of $34\,{\rm m}$. A person throws a stone vertically downward into it with an initial speed of $7\,{\rm m/s}$. What is the time interval between throwing the stone and hearing its impact sound? (Assume $g=10\,{\rm m/s^2}$ and the speed of sound in air is $340\,{\rm m/s}$.)

Solution: This motion has two parts. One is the stone descending into the well, which is constant-acceleration motion; the other is the impact sound ascending, which is uniform motion at constant speed. For the first part, use the kinematic equation $y-y_0=-\frac 12 gt^{2}+v_0 t$ to find the falling time: \begin{gather*}y-y_0=-\frac 12 gt^{2}+v_0 t \\\\ -34-0=-\frac 12 (10)t^{2}+(-7)t \\\\ \Rightarrow 5t^{2}+7t-34=0 \end{gather*} Since the initial velocity is a vector directed downward, a negative sign is included in front of it. The above quadratic equation has two solutions, $t_1=2\,{\rm s}$ and $t_2=-3.4\,{\rm s}$. A negative value for time is not accepted, because time in all kinematics questions must be a positive quantity. The second part is uniform motion, as the speed of sound is constant, so we can calculate the rising time of the sound using the definition of average velocity \begin{align*} t&=\frac{\Delta y}{v} \\\\ &=\frac{34}{340}=0.1\,{\rm s}\end{align*} Thus, the total time is obtained as $t_T=2+0.1=2.1\,{\rm s}$.

Problem (13): A stone is dropped from the top of a building with a height of $h$. What is its speed at the height of $\frac h2$?

Solution: Let the dropping point be the origin. Thus, in the kinematic equations, the vertical displacement must be negative, i.e., $y-y_0=-\frac h2$. Use the equation below to find the speed at the desired level \begin{align*}v_2^2-v_1^2&=-2g(y-y_0) \\\\ v_2^2 -0&=-2g(-\frac h2) \\\\ \Rightarrow v_2&=\sqrt{gh}\end{align*}

Practice Problem (14): A small bullet is released from rest from a tower and travels the last $80\,{\rm m}$ of its motion in $2\,{\rm s}$. What is the height of the tower?

Solution: This one is left to you as a practice problem.

Problem (15): A bullet is fired vertically upward from a height of $90\,{\rm m}$ and reaches the ground after $10\,{\rm s}$. Two seconds after firing, how far is the bullet from the surface? ($g=9.8\,{\rm m/s^2}$)

Solution: In a free fall problem, the vertical position of an object at an instant of time is given by the kinematic equation $y=-\frac 12 gt^{2}+v_0 t+y_0$. Let the firing point be the origin, so $y_0=0$. To complete the equation above, you must first find the initial speed.
To find the initial speed, apply the kinematic formula $y=-\frac 12 gt^{2}+v_0 t+y_0$ between the origin and the striking point. \begin{gather*}y-y_0=-\frac 12 gt^{2}+v_0 t \\\\ -90-0=-\frac 12 (9.8)(10)^{2}+v_0(10) \\\\ \Rightarrow \boxed{v_0=40\,\rm m/s}\end{gather*} Notice that the striking point is $90\,{\rm m}$ below the origin; that is why a negative value is entered for the vertical displacement $y$. Having the initial velocity, substitute it into the same equation again, but with time $t=2\,{\rm s}$, to find the position of the bullet after $2\,{\rm s}$ with respect to the firing point \begin{align*}y-y_0&=-\frac 12 gt^{2}+v_0 t\\\\ &=-\frac 12\,(9.8)(2)^{2}+(40)(2) \\\\ \Rightarrow y&=\boxed{60.4\,\rm m} \end{align*} That is, the bullet is $60.4\,\rm m$ above the firing point, which is $90+60.4=150.4\,\rm m$ above the surface.

Problem (16): A ball is thrown vertically upward with an initial velocity of $18\,\rm m/s$. How many seconds after throwing is the ball's speed $9\,\rm m/s$ downward?

Solution: In all kinematic equations, $x, y, v, v_0$, and $a$ are vector quantities, so their signs matter. A speed of $9\,{\rm m/s}$ downward means a velocity of $-9\,{\rm m/s}$; downward or upward indicates the direction of the velocity. Take up as the positive $y$ direction. Now use the equation $v=v_0-gt$ to find the velocity at any later time. \begin{gather*}v=v_0-gt\\-9=+18-(10)t\\ \Rightarrow \quad \boxed{t=2.7\,\rm s}\end{gather*}

Problem (17): An object is thrown vertically into the air from a $100\,{\rm m}$ height with an initial velocity of $v_0$. After $5\,{\rm s}$, it reaches the ground. Determine the magnitude and direction of the initial velocity.

Solution: Let the origin be the throwing point (so $y_0=0$) and take the upward direction as positive. Substitute the given total time into the vertical displacement kinematic equation $y-y_0=-\frac 12 gt^2+v_0t$ with $y-y_0=-100\,{\rm m}$ (since the impact point is below the origin). \begin{gather*} y-y_0 =-\frac 12 gt^2+v_0t \\\\ -100=-\frac 12\,(10)(5)^2+v_0 (5) \\\\ \Rightarrow \boxed{v_0=+5\,\rm m/s}\end{gather*} The plus sign indicates that the initial velocity is upward, with a magnitude (speed) of $5\,\rm m/s$.

Problem (18): From a height of $15\,{\rm m}$, a ball is kicked vertically up into the air with an initial speed of $v_0$. It reaches the highest point of its path at an elevation of $20\,{\rm m}$ from the surface. Find the initial velocity $v_0$.

Solution: The highest point is $5\,{\rm m}$ above the kicking point. Apply the time-independent kinematic equation below to find the initial velocity \begin{gather*}v^2-v_0^2=-2g(y-y_0) \\\\ 0-v_0^2=-2(10)(5) \\\\ \Rightarrow v_0=\pm 10\,{\rm m/s}\end{gather*} Because the ball is kicked upward, we must choose the plus sign, i.e., $v_0=+10\,{\rm m/s}$. In the above, we used the fact that in all free fall problems, the velocity at the highest point (apex) is zero ($v=0$). Recall that vertical free fall is a particular case of projectile motion, with a launch angle of $\theta=90^\circ$; projectiles have their own formulas.

Problem (19): From the bottom of a well $25\,{\rm m}$ deep, a stone is thrown vertically upward with an initial speed of $30\,{\rm m/s}$. (a) How high does the stone rise out of the well? (b) How many seconds is the stone outside the well before it falls back in?

Solution: (a) Let the bottom of the well be the origin, so $y_0=0$. First, we find how far the stone rises.
Recall that the highest point is where $v=0$, so we have \begin{gather*}v^2-v_0^2=-2g(y-y_0) \\\\ 0-(30)^2=-2(10)(y-0) \\\\ \Rightarrow \boxed{y=45\,\rm m}\end{gather*} Of this height, $25\,{\rm m}$ is the depth of the well, so the stone rises $20\,{\rm m}$ out of the well.

(b) We want the time interval between the stone exiting and reentering the well. During this time interval, the stone returns to its initial position (the mouth of the well), so its displacement vector is zero, i.e., $\Delta y=y-y_0=0$. To use the equation $y-y_0=-\frac 12 gt^2+v_0t$, we need the speed of the stone exactly when it leaves the well. The speed at the bottom of the well is known, so apply the equation $v^2-v_0^2=-2g\Delta y$ to find the speed just as the stone leaves the well. \begin{gather*}v^2-v_0^2=-2g\Delta y \\\\ v^2-(30)^{2}=-2(10)(25) \\\\ \Rightarrow v=+20\,{\rm m/s}\end{gather*} This speed serves as the initial speed for the part of the motion outside the well. Hence, the total time the stone is out of the well is obtained as below \begin{gather*} \Delta y=-\frac 12 gt^{2}+v_0 t\\ 0=-\frac 12 (10)t^{2}+(20)t \end{gather*} Solving for $t$, one obtains the required time, $t=4\,{\rm s}$.

Problem (20): From the top of a $20\,{\rm m}$-high tower, a small ball is thrown vertically upward and hits the ground $4\,{\rm s}$ later. How many seconds before striking the surface does the ball again pass the original throwing point? (Air resistance is neglected and $g=10\,{\rm m/s^2}$.)

Solution: Let the origin be the throwing point. The ball strikes the ground $20\,\rm m$ below our chosen origin, so its total displacement between the initial position $y_i=0$ and the final position is $\Delta y=y_f-y_i=-20\,\rm m$. The total time the ball is in the air is $4\,{\rm s}$. With these known values, one can find the initial velocity as \begin{gather*}\Delta y=-\frac 12 gt^{2}+v_0t \\\\ -20=-\frac 12 (10)(4)^{2}+v_0(4) \\\\ \Rightarrow \boxed{v_0=15\,\rm m/s} \end{gather*} When the ball returns to its initial position, its total displacement is zero, i.e., $\Delta y=0$, so we can use the following kinematic equation to find the time it takes the ball to return to the starting point \begin{gather*}\Delta y=-\frac 12 gt^{2}+v_0t \\\\ 0=-\frac 12 (10)t^{2}+(15)t \end{gather*} Rearranging and solving for $t$, we get $t=3\,{\rm s}$. Since the total flight time is $4\,{\rm s}$, the ball passes the throwing point $4-3=1\,{\rm s}$ before striking the surface.

Problem (21): A rock is thrown vertically upward into the air. It reaches a height of $40\,{\rm m}$ from the surface at times $t_1=2\,{\rm s}$ and $t_2$. Find $t_2$, and determine the greatest height reached by the rock (neglect air resistance and assume $g=10\,{\rm m/s^2}$).

Solution: Let the throwing point (the surface of the ground) be the origin. Between our chosen origin and the point with known values $\Delta y=40\,{\rm m}$, $t=2\,{\rm s}$, one can write down the kinematic equation $\Delta y=-\frac 12 gt^{2}+v_0\,t$ to find the initial velocity as \begin{gather*} \Delta y=-\frac 12 gt^{2}+v_0t \\\\ 40=-\frac 12 (10)(2)^2 +v_0(2) \\\\ \Rightarrow v_0=30\,{\rm m/s}\end{gather*} Now we are going to find the times when the rock is at the height of $40\,{\rm m}$ (recall that an object thrown upward passes through every point on its path twice). Applying the same equation as above, we get \begin{gather*} \Delta y=-\frac 12 gt^{2}+v_0t \\\\ 40=-\frac 12\,(10)t^2+30t \\\\ \Rightarrow \boxed{5t^2-30t+40=0} \end{gather*} Solving for $t$ using the quadratic formula, two times are obtained, i.e., $t_1=2\,{\rm s}$ and $t_2=4\,{\rm s}$.
Thus, at $t_2=4\,\rm s$ the rock is again at the height of $40\,\rm m$ above the surface. The greatest height is where the vertical velocity becomes zero, so we have \begin{gather*}v^2-v_0^2 =-2g\Delta y \\\\ 0-(30)^2=-2(10)\Delta y\\\\ \Rightarrow \boxed{\Delta y=45\,\rm m}\end{gather*} Thus, the highest point the rock can reach is located $H=45\,{\rm m}$ above the ground.

Problem (22): A ball is launched with an initial velocity of $30\,{\rm m/s}$ straight upward. How long will it take the ball to reach $20\,{\rm m}$ below the highest point for the first time? (Neglect air resistance and assume $g=10\,{\rm m/s^2}$.)

Solution: Between the origin (surface level) and the highest point ($v=0$), apply the time-independent kinematic equation below to find the greatest height $H$ the ball reaches. \begin{gather*}v^2-v_0^2=-2g\Delta y \\\\ 0-(30)^2=-2(10)H \\\\ \Rightarrow \boxed{H=45\,\rm m}\end{gather*} This is the maximum height the ball can reach. The point $20\,{\rm m}$ below this maximum height $H$ is at a height of $h=45-20=25\,{\rm m}$. Now use the vertical displacement kinematic equation between the throwing point and the desired position to find the required time. \begin{gather*} \Delta y=-\frac 12 gt^2+v_0t \\\\ 25=-\frac 12(10)t^2+30t\end{gather*} Solving for $t$ (using the quadratic formula), we get $t_1=1\,{\rm s}$ and $t_2=5\,{\rm s}$: the first on the way up and the second on the way down. Since the question asks for the first time, the answer is $t_1=1\,{\rm s}$.

Problem (23): A stone is launched directly upward from the surface level with an initial velocity of $20\,{\rm m/s}$. How many seconds after launch is the stone's velocity $5\,{\rm m/s}$ downward?

Solution: Let the origin be at the surface level and take the positive direction up. Therefore, we have initial velocity $v_0=+20\,{\rm m/s}$ and final velocity $v=-5\,{\rm m/s}$. Use the velocity kinematic equation $v=v_0-gt$ to find the desired time, as below \begin{gather*}v=v_0-gt \\-5=+20-10\times t \\ \Rightarrow \boxed{t=2.5\,\rm s}\end{gather*}

Problem (24): From a $25\,{\rm m}$ building, a ball is thrown vertically upward at an initial velocity of $20\,{\rm m/s}$. How long will it take the ball to hit the ground?

Solution: The origin is taken to be at the throwing point, so $y_0=0$. Apply the position kinematic equation below to find the desired time \begin{gather*} y-y_0=-\frac 12 gt^2+v_0 t \\\\ -25=-\frac 12(10)t^2+20t \\\\ \Rightarrow 5t^2-20t-25=0 \end{gather*} Dividing both sides by 5 puts this in the standard quadratic form $t^2-4t-5=0$, whose solutions are \begin{align*}t_{1,2}&=\frac{-b\pm \sqrt{b^2-4\,ac}}{2a} \\\\ &=\frac{-(-4)\pm\sqrt{(-4)^2-4(1)(-5)}}{2(1)}\\\\ &=5 \, \text{and} \, -1 \end{align*} Therefore, the time needed for the ball to hit the ground is $5\,{\rm s}$.

Problem (25): From the top of a building with a height of $60\,{\rm m}$, a rock is thrown directly upward at an initial velocity of $20\,{\rm m/s}$. What is the rock's velocity at the instant of hitting the ground?

Solution: Apply the time-independent kinematic equation as \begin{gather*} v^2-v_0^2 =-2g(y-y_0) \\\\ v^2-(20)^2 =-2(10)(-60) \\\\ v^2 =1600\\\\ \Rightarrow \quad \boxed{v=\pm 40\,\rm m/s} \end{gather*} Therefore, the rock's velocity when it hits the ground is $v=-40\,{\rm m/s}$, directed downward.

In this article, you learned how to solve free fall problems step by step using simple kinematic equations.

Author: Dr. Ali Nemati
Published: 8/11/2022
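All of the worked examples above run on the same few kinematic equations. As a closing illustration, here is a small sketch (assuming $g=10\,\rm m/s^2$ and upward positive, as in the text; the helper names are mine) that reproduces Problem (12), the well-and-sound problem:

```python
import math

G = 10.0         # m/s^2, the value used throughout this article
V_SOUND = 340.0  # m/s, speed of sound in air, as given in Problem (12)

def fall_time(depth, v0_down=0.0):
    """Time to fall `depth` metres when thrown downward at v0_down m/s.
    Positive root of (G/2) t^2 + v0_down t - depth = 0."""
    a, b, c = G / 2, v0_down, -depth
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

t_fall = fall_time(34, 7)   # 2.0 s for the stone to reach the bottom
t_sound = 34 / V_SOUND      # 0.1 s for the impact sound to travel back up
print(t_fall + t_sound)     # 2.1 s, matching Problem (12)
```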
https://brilliant.org/problems/how-many-notes/
# How many notes?

Algebra Level 1

A man has a combination of $5 and $10 notes. If the total number of notes is 20, and the total value of the money is $120, what is the quantity of $10 notes?
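One way to check the intended answer is brute force over the two-equation system (a sketch of mine, not part of the original problem page):

```python
# x = number of $5 notes, y = number of $10 notes:
#   x + y = 20  and  5x + 10y = 120
for y in range(21):
    x = 20 - y
    if 5 * x + 10 * y == 120:
        print(x, y)  # 16 fives and 4 tens -> the quantity of $10 notes is 4
```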
https://ncatlab.org/nlab/show/simple+foliation
# nLab simple foliation

A regular foliation $\mathcal{P} \hookrightarrow T X$ is called simple if the leaf space $X/\mathcal{P}$ is a smooth manifold and the quotient projection $X \to X/\mathcal{P}$ is a surjective submersion. Conversely, a simple foliation is exactly a foliation by the fibers of a surjective submersion.
https://www.instasolv.com/question/a-9-a-if-one-root-of-the-equation-ax2-bx-c-0-is-equal-to-nth-power-of-the-r44ux1
# A-9.a If one root of the equation $ax^2 + bx + c = 0$ is equal to the $n$th power of the other root, then show that $(ac^n)^{1/(n+1)} + (a^n c)^{1/(n+1)} + b = 0$.

A-10. If the sum of the roots of the quadratic equation $(a + 1)x^2 + (2a + 3)x + (3a + 4) = 0$ is $-1$, then find the product of the roots.

JEE/Engineering Exams Maths
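A quick numerical sanity check of the A-9.a identity (my own sketch, based on my reconstruction of the garbled statement above): build a quadratic whose roots are $\alpha$ and $\alpha^n$, then evaluate the claimed expression.

```python
# Roots alpha and alpha**n of a*x^2 + b*x + c = 0 (take a = 1):
alpha, n = 2.0, 3
a = 1.0
b = -a * (alpha + alpha**n)   # sum of roots = -b/a
c = a * alpha ** (n + 1)      # product of roots = c/a
lhs = (a * c**n) ** (1 / (n + 1)) + (a**n * c) ** (1 / (n + 1)) + b
print(lhs)  # ~0.0, as the identity predicts
```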
http://math.stackexchange.com/questions/3291/estimates-involving-sums-with-binomials
# Estimates involving sums with binomials

When calculating some probabilities, I got sums of the form $$\sum_{j=0}^c {j+a+b \choose a} p^j,$$ for integers $a, b, c > 0$. Does someone know closed forms for these values?

- It can be expressed in terms of hypergeometric functions according to Mathematica, but I doubt you'd want that closed form. – J. M. Aug 25 '10 at 16:59
- Do you need a fast algorithm to compute these, or do you absolutely need a closed form? Are there any constraints on a, b, c? – Aryabhata Aug 25 '10 at 17:09
- @J. Mangaldan: correct. Thanks for plugging it into Mathematica. – j.p. Aug 25 '10 at 17:21
- @Moron: I'm not sure about constraints on a, b and c yet, since the probabilities are supposed to help me find parameters for some algorithm. A closed form would be nice for the running time analysis of the algorithm, but maybe I can live without it. – j.p. Aug 25 '10 at 17:24
- @jug: So the sum you have is the runtime of some algorithm and you want to estimate it, i.e. a BigOh estimate will do? What are the variables? (I ask this based on your last sentence.) – Aryabhata Aug 25 '10 at 17:29

I don't have time right now to work out the exact answer, but the methods and identities in the middle third of this blog post will give a closed form for fixed $a$. Use the geometric series formula to find $\sum_{j=0}^c p^{a+b+j}$, differentiate $a$ times with respect to $p$, then divide by $a! p^b$. If a closed form for fixed $a$ isn't good enough, you should probably be more precise about which of your parameters are large and which are small.

Edit: Still don't have time to give a complete answer, but here's a fun trick. Instead of computing the answer for fixed $a$ we can write down a generating function $$\displaystyle P_{b,c}(x) = \sum_{a=0}^{\infty} x^a \sum_{j=0}^c {a+b+j \choose a} p^j$$ then exchange the order of summation, giving \begin{align} P_{b,c}(x) &= \sum_{j=0}^c p^j \sum_{a=0}^{\infty} {a+b+j \choose a} x^a \\ &= \sum_{j=0}^c p^j \frac{1}{(1 - x)^{b+j+1}} \\ &= \frac{1}{(1 - x)^{b+1}} \sum_{j=0}^c \left( \frac{p}{1-x} \right)^j \\ &= \frac{1}{(1 - x)^{b+1}} \frac{1 - \left( \frac{p}{1-x} \right)^{c+1}}{1 - \frac{p}{1-x}} \\ &= \frac{(1-x)^{c+1} - p^{c+1}}{(1 - x)^{b+c+1}(1 - p - x)}. \end{align} Then the coefficient of $x^a$ of this rational function is the number you want. Not sure how useful that is for you, but you might be able to extract a useful asymptotic from it. The identity in the second line is the binomial theorem for negative exponents.

- Thanks for the hint, I'll check if I can get it working. A closed form for fixed $a$ might be OK (not sure yet). – j.p. Aug 25 '10 at 17:19
- That's a great blog post. – Rasmus Aug 25 '10 at 19:22

If you don't need an exact closed form or just care about the asymptotic growth, Stirling's approximation might be useful. I've found it useful in the past for generating good asymptotic bounds on similar functions.

- If p is small then you should get a much better asymptotic just by taking c to infinity. The dominant term here is the exponential, not its coefficient (which is merely polynomial). – Qiaochu Yuan Aug 25 '10 at 17:02
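For what it's worth, the closed form in the answer is easy to sanity-check with a CAS; here is a short sympy sketch of mine comparing coefficients of $x^a$ against the direct double sum (the parameter values are arbitrary choices):

```python
from sympy import symbols, binomial, Rational, simplify

x = symbols('x')
p = Rational(1, 3)   # any fixed rational p works for the check
b, c = 2, 3

P = ((1 - x)**(c + 1) - p**(c + 1)) / ((1 - x)**(b + c + 1) * (1 - p - x))
expansion = P.series(x, 0, 5).removeO().expand()

for a in range(5):
    direct = sum(binomial(a + b + j, a) * p**j for j in range(c + 1))
    assert simplify(expansion.coeff(x, a) - direct) == 0
print("generating function matches the direct sum")
```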
https://ltwork.net/explain-the-role-of-need-and-want-in-the-process-of-analyzing--270
# Explain the role of need and want in the process of analyzing marketing management

###### Question: Explain the role of need and want in the process of analyzing marketing management
http://janss.kr/journal/article.php?code=46025
ISSN : 2093-5587 (Print) ISSN : 2093-1409 (Online)

Journal of Astronomy and Space Sciences Vol.33 No.4 pp.335-344 DOI : https://doi.org/10.5140/JASS.2016.33.4.335

# Performance Test for the SIGMA Communication System

Seonyeong Jeong1, Hyojeong Lee1, Seongwhan Lee1, Jehyuck Shin1, Jungkyu Lee1, Ho Jin1,2

1School of Space Research, Kyung Hee University, Yongin 17104, Korea
2Department of Astronomy and Space Science, Kyung Hee University, Yongin 17104, Korea

Corresponding Author: benho@khu.ac.kr, +82-31-201-3865, +82-31-206-2470

## Abstract

Scientific CubeSat with Instruments for Global Magnetic Fields and Radiations (SIGMA) is a 3-U size CubeSat that will be operated in low earth orbit (LEO). The SIGMA communication system uses a very high frequency (VHF) band for uplink and an ultra high frequency (UHF) band for downlink. Both frequencies belong to an amateur band. The ground station that communicates with SIGMA is located at Kyung Hee Astronomical Observatory (KHAO). For reliable communication, we carried out a laboratory (LAB) test and far-field tests between the CubeSat and a ground station. In the field test, we considered test parameters such as attenuation, antenna deployment, CubeSat body attitude, and Doppler frequency shift in transmitting commands and receiving data. In this paper, we present a communication performance test of SIGMA, a link budget analysis, and a field test process. We also compare the link budget with the field test results of transmitting commands and receiving data.

## 1. INTRODUCTION

The characteristics of the CubeSat are defined by Stanford University and California Polytechnic State University. The standard 1-U size is a 10 cm cube of very low weight (1 kg or less) (Puig-Suari et al. 2001). Developing a CubeSat increases the opportunities to explore space as technology develops rapidly, and CubeSat development is a good foundation for education and for the design of space research instruments in the future. The planned launches of CubeSats demonstrate that interest in satellites is growing. CubeSats serve various mission types, including military use, intelligence services, geospace research, atmospheric research, ionospheric plasma density, and electric field measurement (Straub 2012). Communications are an essential part of space-based activities such as receiving data from satellites and in-orbit operations (Toyoshima et al. 2005). People are interested in the payload of the spacecraft, which accounts for part of the funding; however, if the communication system malfunctions, the payload will be useless in space (Klofas & Leveque 2013). A satellite communication system consists of a space segment, a control segment, and a ground segment. The satellites constitute the space segment of the communication system; the control segment and the ground segment contain all the equipment to control and monitor the satellites and earth stations (Maral et al. 2009). In this paper, the communication system is divided into the satellite, which is the space segment, and the ground station, which contains the control segment and the ground segment. According to a survey of CubeSat communication systems over the last five years, most CubeSats employed very high frequency (VHF) band, ultra high frequency (UHF) band, S band, or X band frequencies (Klofas 2016). In addition, data rate and modulation depended on the mission type or frequencies of the satellites.
A number of CubeSats communicate at 1,200 bps or 9,600 bps, and a few at Mbps rates. Scientific CubeSat with Instruments for Global Magnetic Fields and Radiations (SIGMA), KHUSAT-3, is a 3-U CubeSat developed by Kyung Hee University (KHU) with international collaboration. It is planned to launch in the first quarter of 2017, and its launch vehicle is SpaceX's Falcon 9. SIGMA has two payloads: a Tissue Equivalent Proportional Counter (TEPC) and a Miniaturized Fluxgate Magnetometer (MAG). TEPC, the primary payload, measures the linear energy transfer (LET) spectra and investigates the equivalent dose for radiation fields in space environments (Nam et al. 2015). MAG is the secondary payload; it measures ultra low frequency (ULF) waves and has a resolution of 0.1 nT. SIGMA employs the VHF band for uplink and the UHF band for downlink. This paper presents the communication system of SIGMA and a telecommunication test to verify reliable communication between the CubeSat and the ground station. For this study, we carried out a link budget analysis and communication tests, and confirmed the reliability of communications. We also compared the field test results with the link budget calculation, and discuss the problems of the communication system.

## 2. COMMUNICATION SYSTEM OF SIGMA

### 2.1. CubeSat Segment

The transceiver module of SIGMA is the TRXVU from Innovative Solutions In Space (ISIS; Delft, Netherlands), which is assembled on top of the avionics stack. This full-duplex transceiver module is designed for CubeSat applications. The transmitter supports the BPSK modulation scheme; the scrambling polynomial of the BPSK modulation is $1+X^{12}+X^{17}$ (G3RUH scrambling). The transceiver also receives commands from the ground station; there is no scrambling in the AFSK modulation. The antenna module of ISIS is connected to the transceiver with RG 178 coaxial cables. The antenna system is deployed by using I2C commands. A dipole antenna performs radio telecommunication with two symmetrical radiating arms. The total length of the antenna for each band is a half wavelength of the carrier wave, and the feed point is at the center. This type of antenna is a basic form of all antenna types and is commonly used for the HF band (Lim et al. 2009). The CubeSat antenna module is mounted in the top chassis of the CubeSat, and the antennas deploy outward in each direction. Four whip antennas are rolled up and stowed inside the housing. Burn wires made of Dyneema hold each rolled-up antenna; the antennas are deployed when the wires burn.

### 2.2. Ground Station

#### 2.2.1. Hardware

A ground station consists of VHF and UHF Yagi antennas, a satellite tracking rotator, a transmitting section, and a receiving section. The ground station contains equipment for a terrestrial network interface, monitoring, and electrical connections. The ground station handles various subsystems and performs the satellite communication. The main equipment comprises a terminal node controller (TNC), transceivers, a receiver, and an antenna subsystem. A personal computer (PC) controls the G-5500 rotator through a GS-232B rotator controller from Yaesu (Tokyo, Japan). The antenna automatically tracks the satellite using tracking software. For transmitting commands, we use the Kenwood TS-790A and TS-2000; one of these is redundant equipment. The commands from the PC are sent to a TNC-X from Coastal ChipWorks (Fredonia, U.S.A.). A FUNcube Dongle Pro+ from Hanlincrest Ltd. (London, U.K.) connected to the UHF Yagi antenna is used only to receive data from the CubeSat.
For redundancy, a USRP B200 from Ettus Research (Santa Clara, U.S.A.) is able to transmit and receive data as well; however, for transmission, an amplifier is required. The USRP is mainly used within a short distance of the laboratory. The RF equipment for the KHU ground station is shown in Fig. 3.

#### 2.2.2. Software

Most of the communication software products used for SIGMA are freeware; a few computer programs were specifically designed for SIGMA. First, Ham Radio Deluxe (HRD) version 5.0 is software for satellite tracking and for controlling the rotator and the transceiver. For transmitting commands, Commander, which sends commands to the CubeSat, is coded in the C language and was developed by the SIGMA team. To receive data from the CubeSat, we use three software packages: SDR Sharp, Soundmodem, and a Telemetry Monitoring Program (TMP). SDR Sharp is PC-based software-defined radio made by Youssef Touil; this software generates down-converted sound wave data. Soundmodem is dual-port packet-radio TNC software developed by Andrei (UZ7HO). The freeware version did not include scrambled modulation; therefore, Andrei developed a SIGMA version that decodes scrambled BPSK data. Fig. 4 shows the beacon data of SIGMA in SDR Sharp and Soundmodem. On top of this freeware, we developed a TMP coded in Java, which allows us to confirm received data in real time. This is shown in Fig. 5.

## 3. LINK BUDGET ANALYSIS

In analyzing satellite communication, the link performance can be characterized by the signal-to-noise ratio (S/N) for analog communications and the bit error rate (BER) for digital communications. The carrier-to-noise ratio (C/N) is a basic variable covering both analog and digital communications; it is calculated by using the link budget. In the case of digital communications, we can express the link performance as a numerical value of the BER by converting C/N to the energy-per-bit to noise-spectral-density ratio (Eb/N0) (Lee 2013). Eb/N0 depends on the modulation and determines the bit-error probability (Moon et al. 2004). Satellite communication systems are designed for a link margin of 3–8 dB (Yoon 2012). Table 2 lists the input parameters used to calculate the link budget, based on the ground station system at KHAO and the transceiver and antenna module of the CubeSat. The specifications for each communication system and the average value of the loss were used in the calculations. We calculated the link margin, as shown in Table 3, by using the parameters listed in Table 2. The link margins of both the uplink and the downlink exceeded 3 dB, which is the threshold for reliable communication. This means that communication between the CubeSat and the ground station is theoretically possible.

## 4. COMMUNICATION TEST & RESULTS

We carried out two tests of radio telecommunication between the CubeSat and the ground station. The first test was an indoor laboratory test: we transmitted commands and received beacon signals from the CubeSat using the ground station equipment. Second, in a far-field test, we checked the basic data communication using the ground station equipment. In addition, we added several parameters, such as space loss and antenna pointing, to account for the space operational environment.

### 4.1. LAB Test

The LAB test was carried out indoors. In an initial performance test, we checked the data communication using the USRP, which serves as subsidiary equipment in the ground station.
We tested the transmitting and receiving of data-packet communication controlled by the PC, without environmental variables. On the CubeSat side, the transceiver and antenna module were connected to the PC through the OBC. On the ground station side, we used a PC, a TNC, a transceiver (TS-790A), a FUNcube Dongle Pro+, and a small portable antenna (since the UHF Yagi antenna has too high a gain at short distance). In the uplink test, we checked for status changes in the CubeSat after transmitting commands from the ground station. In the downlink test, beacon signals from the CubeSat were checked; at the ground station, decoded beacon data were displayed by SDR Sharp and Soundmodem. In the LAB test, there were no issues, and we confirmed the reliability of the SIGMA communication system in the near-field. The test environment is shown in Fig. 6.

### 4.2. Far-field Test

According to the results of the far-field distance calculations, we had to conduct the far-field test at a distance of more than 104.59 m from the ground station. The far-field test was therefore carried out on the rooftop of a building at a distance of 0.4 km. In addition, we carried out a far-field test at a distance of 8.7 km, because the test location at 0.4 km was lower than the position of the ground station antenna; the test location at 8.7 km was a mountain at a higher elevation than the ground station antenna. In both the uplink and downlink tests, the variable test parameters account for free space path loss, Doppler frequency shift, antenna status, and satellite body attitude. The equipment for the test was the same as that in the LAB test, except for the VHF and UHF Yagi antennas of the ground station. Fig. 7 shows the straight-line distance between the CubeSat and the ground station in the far-field tests.

#### 4.2.1. Field Distance and Free Space Path Loss (FSPL)

The strength of the radio signal transmitted by an antenna has different properties depending on the distance: the signal distribution is not uniform in the near-field but is omnidirectional in the far-field. There are three regions surrounding an antenna: the reactive near-field, radiating near-field, and far-field regions. The reactive near-field region is where the reactive fields predominate; its boundary is $R < 0.62\sqrt{D^3/\lambda}$ from the antenna surface. The radiating near-field region is the range $0.62\sqrt{D^3/\lambda} \le R < 2D^2/\lambda$, where the radiating fields predominate. Here $\lambda$ is the wavelength and $D$ is the largest cross-sectional diameter or dimension of the antenna. The far-field region is the range $R > 2D^2/\lambda$, where the angular field distribution is essentially independent of the distance (Balanis 2005). We considered the near-field to extend over the distance of the reactive near-field. The far-field distance was taken to be the maximum result (104.59 m), which is the far-field boundary of the UHF ground station antenna. Table 4 lists the calculation results for the far-field distance. We carried out the far-field tests at distances of more than 104.59 m from the ground station.

FSPL is the ratio of the received to transmitted powers between isotropic antennas (Maral et al. 2009). It is usually expressed in dB and depends on the frequency and distance. The equation is $$\mathrm{FSPL\,(dB)} = 32.44 + 20\log_{10} d + 20\log_{10} f \tag{1}$$ where $d$ is the distance from the transmitting antenna in kilometers, and $f$ is the frequency of the signal in megahertz (Maini & Agrawal 2007). The calculation results for the expected orbit of SIGMA are listed in Table 5.
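The boundary and loss formulas above are straightforward to evaluate; here is a small sketch (the function names and the 720 km overhead slant-range example are my own, chosen to match the altitude discussed in Section 4.2.2):

```python
import math

def far_field_m(D_m, f_mhz):
    """Far-field boundary R > 2 D^2 / lambda for an antenna of
    largest dimension D (metres) at frequency f (MHz)."""
    lam = 299.792458 / f_mhz       # wavelength in metres
    return 2 * D_m**2 / lam

def fspl_db(d_km, f_mhz):
    """Free space path loss, Eq. (1): d in km, f in MHz."""
    return 32.44 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

print(fspl_db(720, 145.210))   # ~132.8 dB, VHF uplink at 720 km
print(fspl_db(720, 435.780))   # ~142.4 dB, UHF downlink at 720 km
```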
#### 4.2.2. Variable: Attenuation (Free Space Path Loss)

This uplink test used attenuation to approximate the free space path loss. According to the results for an altitude of 720 km in Table 5, the CubeSat has to receive commands with an additional attenuation of more than 65 dB at a distance of 0.4 km, and 40 dB at a distance of 8.7 km, from the ground station. For this test, attenuators were added to the ground station transceiver. The transceiver operated at a frequency of 145.210 MHz, and the maximum transmission power was up to 50 W. The CubeSat received all commands up to an attenuation of 117.78 dB, but only partially received them at 122.78 dB, and did not receive them at all at 132.78 dB. In the downlink test, attenuators were added on the side of the FUNcube Dongle, and we successfully received all data up to 162.33 dB. The results are listed in Table 6.

We also carried out the uplink test at a distance of 8.7 km. As with the test at a distance of 0.4 km, the CubeSat did not receive the commands at attenuation below the requirement. The ground station received data from the CubeSat up to an attenuation of 159.08 dB. Table 7 lists the results. The results of the uplink test fall short of the 132.84 dB attenuation that is required at an altitude of 720 km. If the uplink transmitting power is increased to 90 W, the CubeSat will receive commands at higher attenuation.

#### 4.2.3. Variable: Antenna Status & CubeSat Attitude

The attitude of the CubeSat can affect communication performance, since the CubeSat VHF and UHF antenna directions are different from each other. Before antenna deployment, the whip antennas are rolled up and stowed inside the module; if the antennas fail to deploy, radio communication is seriously affected. The test results depending on antenna deployment at a distance of 0.4 km are listed in Table 8. When the antenna deployed on one side only, the CubeSat received all commands. When the antenna was not deployed, the CubeSat received only some commands; it was expected that the CubeSat would not receive commands at all when the antenna was not deployed. The variable parameters for the downlink test were the same as those for the uplink test. As with the uplink test, the FUNcube Dongle received partial data when the antenna was deployed on one side only. However, if the CubeSat were placed at a greater distance without antenna deployment, it would be unable to communicate with the ground station. The received signal strength indication (RSSI) value at the CubeSat decreased by approximately 30 dB compared with the antenna-deployed status.

The lower portion of Table 8 lists the results for the satellite body attitude. The CubeSat was set to receive commands at angles between 0° and 90° about the z-axis of the CubeSat frame (Fig. 8). As a result, all commands and data were received at every attitude of the CubeSat. Table 9 lists the results at a distance of 8.7 km from the ground station. In this test, a 35 dB attenuator was added to the Yagi antenna of the ground station. We changed the direction angle of the ground station antenna within 82° (uplink) and 60° (downlink) in our test; we then rotated the antenna until it was unable to receive commands or data. As a result, the communication allowance range was ±40° for the uplink VHF and ±5° for the downlink UHF (Fig. 9). The UHF Yagi antenna was less affected by the environment around the ground station, since the boom length of the UHF Yagi antenna was longer than that of the VHF Yagi antenna.
In addition, we changed the CubeSat attitude from -90° to +90°, as shown in Table 9 and Fig. 8. The uplink results differ from the downlink results; this difference depends on the positions of the CubeSat VHF and UHF antennas.

#### 4.2.4. Variable: Doppler Frequency Shift

A Doppler frequency shift arises between a satellite orbiting the earth and the ground station communicating with the moving satellite (Lim et al. 2009). The calculated Doppler frequency shift range is 145.206 MHz to 145.214 MHz for the uplink, and 435.769 MHz to 435.791 MHz for the downlink, at an altitude of 720 km. The test frequency range was therefore determined by the Doppler frequency shift range (a rough numerical check of this window appears in the sketch below). We controlled the uplink frequency using a transceiver at the ground station, and transmitted commands 30 times in 30 seconds. SDR Sharp connected to the FUNcube Dongle controlled the downlink frequency in steps of 1 kHz. We then checked that all commands from the ground station were received and that the data returned from the CubeSat were the same. Table 10 lists the results based on the Doppler frequency shift. In the test at a distance of 8.7 km, instead of considering a specific frequency range, we changed the frequencies of the ground station until the commands or data were received. An attenuation of 35 dB was adopted for the ground station, and the transmitting power of the uplink was 50 W.

#### 4.2.5. Field Test Results

Far-field tests were carried out under various conditions, such as variable attenuation, antenna deployment status, CubeSat attitude, and Doppler frequency shift. There was an issue with the uplink process in the 0.4 km test environment: the CubeSat did not receive the commands from the ground station at 55 dB attenuation, although the ground station received all signals from the CubeSat. In the 0.4 km uplink communication, we found that the main reason for this issue was the far-field test environment around KHAO. A dome structure behind the Yagi antenna interfered with the uplink signals, and there was signal interference in the same frequency band around the ground station. The average intensity of the transmitted signal is attenuated because the walls of buildings generate an increasing number of multi-reflections (Blaunstein & Levin 1996). In addition, there is significant vegetation covering the mountain in the Yagi antenna direction toward the far-field test spot. Signals propagating in an environment with many trees can undergo multiple scattering, diffraction, and absorption, since many discrete scatterers (such as randomly distributed leaves and branches) form a random medium (Meng et al. 2009). However, during the 8.7 km far-field test, we did not experience any problems. Furthermore, there were no issues in our voice communication with OSCAR. Comparing the analysis and test results, we confirmed that the communication system can communicate reliably between SIGMA and the ground station.

## 5. CONCLUSION

SIGMA will be launched in Q1 2017 aboard an American launch vehicle, Falcon 9. The flight model of SIGMA is ready to launch. The center frequency for the uplink is 145.210 MHz, and commands are transmitted using 1,200 bps AFSK signals. The downlink frequency is 435.780 MHz, and data are received from the CubeSat using 9,600 bps scrambled BPSK signals. A ground station for SIGMA was constructed at KHAO; it controls the CubeSat and receives data.
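As a cross-check of the Doppler window quoted in Section 4.2.4, the first-order shift is $\Delta f \approx f\,v_r/c$; the sketch below (my own, with an assumed worst-case radial velocity of about 7.5 km/s for a ~720 km LEO pass) reproduces the quoted ranges to within a fraction of a kilohertz:

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(f_hz, v_radial):
    """First-order Doppler shift for radial velocity v_radial (m/s)."""
    return f_hz * v_radial / C

for f in (145.210e6, 435.780e6):
    df = doppler_shift_hz(f, 7.5e3)
    print(f / 1e6, "MHz: +/-", round(df), "Hz")  # ~±3.6 kHz VHF, ~±10.9 kHz UHF
```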
By using HRD, we can predict contact times with the satellite and control the direction of the ground station antennas according to the orbit of the satellite. We use Commander to transmit commands with a simple click. In addition, a FUNcube Dongle Pro+ connected to the UHF Yagi antenna is used with SDR Sharp and Soundmodem for receiving data. The status of SIGMA is checked in real time by the TMP.

Before operating the CubeSat in orbit, we carried out tests for reliable telecommunication between the CubeSat and the ground station. In the LAB test, all attempts at communication were successful. In addition, in the far-field test, variables were considered for the space environment. The far-field tests were carried out at distances of 0.4 km and 8.7 km from the ground station. For satellite body attitude, CubeSat antenna deployment, and Doppler frequency range, the test results showed normal values; however, we had one issue with the uplink test at 65 dB attenuation at a distance of 0.4 km. This test gave an insufficient result compared with the theoretical values in the link budget analysis. We realized that the CubeSat could not receive uplink commands in the test environment surrounding KHAO. The mountain and the dome structure around the Yagi antenna affected the transmitted signal on the ground-based test field. Furthermore, signal interference in the same frequency band affected the transmission of signals. However, we confirmed that the ground station has sufficient performance by a space-based test with OSCAR. Our test method and analysis are practical approaches for CubeSat communication tests. From our results, a more precise measurement of the communications can be performed. This should be useful in designing a low-cost CubeSat communication system.

## ACKNOWLEDGMENTS

This work was supported by the BK21 Plus program and NRF-2013M1A3A4A01075960 from the National Research Foundation under the Ministry of Science, Information & Communication Technology (ICT) and Future Planning of Korea.

## Figure

(a) SIGMA flight model, (b) top view of TRXVU, (c) top view of CubeSat antenna module. Overview of SIGMA communication system. RF equipment for the ground station. Beacon signals in SDR Sharp and Soundmodem. TMP GUI frame. LAB test environment for CubeSat. Straight-line distance of far-field test (Image credit: NAVER). CubeSat attitude for far-field test. Yagi antenna direction at the ground station.

## Table

Specifications for SIGMA communication system. Far-field distance. FSPL calculation. Results (0.4 km): Attenuation. Results (8.7 km): Attenuation. Results (0.4 km): Antenna deployment & CubeSat attitude. Results (8.7 km): Antenna direction & CubeSat attitude. Results: Doppler frequency shift.

## Reference

1. Balanis CA (2005) Antenna theory: analysis and design, John Wiley & Sons.
2. Blaunstein N, Levin M (1996) VHF/UHF wave attenuation in a city with regularly spaced buildings, Radio Sci, Vol. 31, pp. 313-323.
3. Klofas B (2016) CubeSat communications system table [Internet], cited 2016 Jun 15, available from: http://www.klofas.com/comm-table/
4. Klofas B, Leveque K (2013) A survey of CubeSat communication systems 2009-2012, 10th Annual CubeSat Developers Workshop 2013.
5. Lee HS (2013) Satellite communication theory and system, Bogdoo Press.
6. Lim YH, Shin HT, Lim JH, Choi SS (2009) Satellite communication techniques, Hyunwoosa.
7. Maini AK, Agrawal V (2007) Satellite technology: principles and applications, John Wiley & Sons.
8. Maral G, Bousquet M, Sun Z (2009) Satellite communications systems: systems, techniques and technology, John Wiley & Sons 9. Meng YS, Lee YH, Ng BC (2009) Empirical near ground path loss modeling in a forest at VHF and UHF bands, IEEE Trans. Antennas Propag, Vol. 57, pp. 1461-1468 10. Moon BY, Kim YH, Chang YK (2004) Development of HAUSAT-1 picosatellite communication subsystem as a test bed for small satellite technology, Int. J. Aeronaut. Space Sci, Vol. 5, pp. 6-18 11. Nam UW, Park WK, Lee J, Pyo J, Moon BK (2015) Calibration of TEPC for CubeSat experiment to measure space radiation, J. Astron. Space Sci, Vol. 32, pp. 145-149 12. Puig-Suari J, Turner C, Twiggs RJ (2001) CubeSat: the development and launch support infrastructure for eighteen different satellite customers on one launch, 15th Annual AIAA/USU Conference on Small Satellites 13. Straub J (2012) CubeSats: a low-cost, very high-return space technology, AIAA Reinventing Space Conference 14. Toyoshima M, Yamakawa S, Yamawaki T, Arai K, Garcia-Talavera MR (2005) Long-term statistics of laser beam propagation in an optical ground-to-geostationary satellite communications link, IEEE Trans. Antennas Propag, Vol. 53, pp. 842-850 15. Yoon NY (2012) Study on communication system of CINEMA, Master Dissertation, Kyung Hee University
2017-03-26 03:22:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6190012693405151, "perplexity": 2935.554051681436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189092.35/warc/CC-MAIN-20170322212949-00280-ip-10-233-31-227.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/9111/detecting-etale-maps-on-reduced-points
# Detecting etale maps on reduced points

Suppose I have a morphism of schemes for which I know the relative cotangent complex is trivial, and the map on reduced subschemes is an isomorphism. Is the map an isomorphism?

More generally, given a morphism of schemes with zero relative cotangent complex, which is of finite presentation on the reduced points: is the map of finite presentation, and thus etale? (Maybe a better way to phrase this is: what's the reference for these statements? Are they in SGA or in Illusie somewhere?)

## 1 Answer

Illusie, Complexe cotangent et deformations I, Prop. 3.1.1 (p. 203) is essentially the second thing you asked. Just a technical point: I don't think people would use the term "etale" unless the morphism is locally finitely presented or something like that (you seem to be wanting to assume that only at the level of reduced schemes or something?). Without thinking too hard, though, Prop. 3.1.2 (same page) says that L_{X/Y} of perfect amplitude in [0,0] implies f is formally smooth... surely also your condition implies it's formally etale, which is what you're asking in general (without finiteness assumption), no?

Hmm... probably being dense, but what I really need is the first statement, which I can deduce from the second, though I don't immediately see how it follows from just formal smoothness. What's a reference saying that a formally smooth map which is an isomorphism on reduced points is an isomorphism? – David Ben-Zvi Dec 16 '09 at 18:05
2016-05-03 14:49:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9195694923400879, "perplexity": 365.26626441370814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121561.0/warc/CC-MAIN-20160428161521-00122-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.cuemath.com/data/combinations/
# Combinations

Go back to 'Permutations and Combinations'

Consider the word EDUCATION. This has 9 distinct letters. How many 3-letter permutations (words) can be formed using the letters of this word? We now know how to answer questions like this; the answer in this particular case will be $$^9{P_3}$$.

However, suppose that instead of 3-letter arrangements, you are asked to count the number of 3-letter selections. How will your answer change? Consider the following 3-letter permutations formed using the letters A, E, T from EDUCATION:

AET    ATE
EAT    ETA
TAE    TEA

These 6 different arrangements correspond to the same selection of letters, which is {A, E, T}. Thus, in the list of all 3-letter permutations, we will find that each unique selection corresponds to 6 different arrangements. To find the number of unique 3-letter selections, we divide the number of 3-letter permutations by 6. Hence, the number of 3-letter selections will be $$\frac{{^9{P_3}}}{6}$$.

Let us generalize this. Suppose that you have n different objects. You have to determine the number of unique r-selections (selections which contain r objects) which can be made from this group of n objects. Think of a group of n people: you have to find the number of unique sub-groups of size r which can be created from this group.

The number of permutations of size r will be $$^n{P_r}$$. In the list of $$^n{P_r}$$ permutations, each unique selection will be counted $$r!$$ times, because the objects in an r-selection can be permuted amongst themselves in $$r!$$ ways. Thus, the number of unique selections will be $$\frac{{^n{P_r}}}{{r!}}$$. A selection is also called a combination. We denote the number of unique r-selections or combinations (out of a group of n objects) by $$^n{C_r}$$. Thus, $^n{C_r} = \frac{{^n{P_r}}}{{r!}} = \frac{{\left\{ {\frac{{n!}}{{\left( {n - r} \right)!}}} \right\}}}{{r!}} = \frac{{n!}}{{r!\left( {n - r} \right)!}}$ Here are some examples:

(i) Out of a group of 5 people, a pair needs to be formed. The number of ways in which this can be done is $^5{C_2} = \frac{{5!}}{{2!\left( {5 - 2} \right)!}} = \frac{{5!}}{{2!3!}} = \frac{{120}}{{2 \times 6}} = 10$

(ii) The number of 4-letter selections which can be made from the letters of the word DRIVEN is $^6{C_4} = \frac{{6!}}{{4!\left( {6 - 4} \right)!}} = \frac{{6!}}{{4!2!}} = \frac{{720}}{{24 \times 2}} = 15$

(iii) The number of ways of selecting n objects out of n objects is $^n{C_n} = \frac{{n!}}{{n!\left( {n - n} \right)!}} = \frac{{n!}}{{n!0!}} = 1$

The number of ways of selecting 0 objects out of n objects is: $^n{C_0} = \frac{{n!}}{{0!\left( {n - 0} \right)!}} = \frac{{n!}}{{0!n!}} = 1$

The number of ways of selecting 1 object out of n objects is: $^n{C_1} = \frac{{n!}}{{1!\left( {n - 1} \right)!}} = \frac{{n \times \left( {n - 1} \right)!}}{{\left( {n - 1} \right)!}} = n$

These results are intuitively obvious.

(iv) The number of ways of selecting 2 objects out of n is $^n{C_2}=\frac{n!}{2!\left(n-2\right)!}=\frac{n\times\left(n-1\right)\times\left(n-2\right)!}{2\left(n-2\right)!}=\frac{n\left(n-1\right)}{2}$

We now summarize our discussion on combinations up to this point.

1. Just like you associated the word permutations with the word arrangements, you should associate the word combinations with the word selections. Whenever you read the phrase "number of combinations", think of the phrase "number of selections". When you are selecting objects, the order of the objects does not matter. 
For example, XYZ and XZY are different arrangements, but the same selection.

2. The number of combinations of n distinct objects, taken r at a time (where $$r \le n$$), is $$^n{C_r} = \frac{{^n{P_r}}}{{r!}} = \frac{{n!}}{{r!\left( {n - r} \right)!}}$$.

3. This result is derived from the fact that in the list of all permutations of size r, each unique selection is counted $$r!$$ times.

4. Out of n objects, the number of ways of selecting 0 or n objects is 1; the number of ways of selecting 1 object is n.

5. Out of n objects, the number of ways of selecting 2 objects is $$^n{C_2} = \frac{{n\left( {n - 1} \right)}}{2}$$.

We will now apply these results to some examples; a short computational check follows below. Before that, we once again highlight the following association:

\begin{align}& {\rm{PERMUTATIONS }} \equiv {\rm{ARRANGEMENTS}}\\ \\& {\rm{COMBINATIONS }} \equiv {\rm{ SELECTIONS}} \end{align}
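As a quick sanity check of the results above, here is a minimal Python sketch (my addition; standard library only, Python 3.8+) that reproduces the worked examples using $$^n{C_r} = {^n{P_r}}/r!$$:

```python
from math import comb, perm, factorial

# 3-letter selections from the 9 distinct letters of EDUCATION:
# dividing the permutation count by 3! collapses the 6 orderings
# of each selection (AET, ATE, EAT, ETA, TAE, TEA) into one.
n, r = 9, 3
assert comb(n, r) == perm(n, r) // factorial(r)
print(comb(n, r))  # 84

# The small worked examples from the text:
print(comb(5, 2))  # 10 ways to form a pair from 5 people
print(comb(6, 4))  # 15 four-letter selections from DRIVEN
```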
2020-10-25 13:40:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6445634961128235, "perplexity": 383.66861069633626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889173.38/warc/CC-MAIN-20201025125131-20201025155131-00494.warc.gz"}
http://doc.astro-wise.org/man_howto_qcastrom.html
# HOW-TO Inspect an astrometric solution

Astrometric solutions are created as a result of the AstrometricParametersTask and the GAstromTask. To determine the quality of the astrometric solution, several methods can be used. The primary method is to inspect the astrometric solution with the AstrometricParameters inspect() method. An alternate qualitative method is to view a calibrated catalog overlaid on the image.

Please note: not all inspection methods are currently available in the AWBASE checkout, but they are in the current checkout. These methods have been noted. To use them, see the Getting Started section for details on using a different checkout.

## AstrometricParameters and GAstrometric inspect() methods

### The Plots

An AstrometricParameters object can be inspected by plotting the residuals of the solution versus themselves and versus position. This is exactly what the inspect() method does. For a single chip (AstrometricParameters), one figure is created with 5 panels. These five panels plot (from top down):

• DDEC versus DRA with line-connected histograms of their distributions
• DRA versus RA
• DDEC versus DEC
• DRA versus XPOS
• DDEC versus YPOS

where DRA is $$RA_{reference} - RA_{extracted}$$ in arc-seconds, DDEC is $$Dec_{reference} - Dec_{extracted}$$ in arc-seconds, RA is in degrees, DEC is in degrees, XPOS is the X pixel position of the extracted source, and YPOS is the Y pixel position of the extracted source.

Also included at the top of the figure are: the DATE_OBS of the source ReducedScienceFrame; the mean RA and mean Dec, both calculated from the distribution plotted; the number of pairs plotted, which is the same as the number of pairs used in the astrometric solution (N); the chip name of the source ReducedScienceFrame (CHIP:); the mean RA residual, mean Dec residual, and sample standard deviation of each distribution (the values following the ±), all based on the distribution plotted; the RMS (root-mean-square) value of the distance of the residual pairs with respect to the DDEC/DRA origin (0,0); and the maximum distance of any residual pair from the DDEC/DRA origin (Max). There are also RMS and N values within the first panel. Their significance will be clear when seen in the context of the multi-chip solution.

The multi-chip case (GAstrometric) plots exactly the same information, but with multiple pointings per chip, one figure per chip. Each pointing has a different color to distinguish it from the other pointings. Also, the first panel includes the RMS and N values for each pointing individually. The values above the first panel are all calculated with respect to ALL the data plotted.

In addition to the multi-chip reference residuals, an entire set of overlap residuals figures is created. Instead of DRA being $$RA_{reference} - RA_{extracted}$$, it is $$RA_{1} - RA_{2}$$, both extracted from their respective frames. $$RA_{1}$$ and $$RA_{2}$$ will never be from the same chip and pointing. The Dec values are similar. There is no other difference between the previous multi-reference figures and these multi-overlap figures.

Lastly, two more figures are created: all reference residuals and all overlap residuals. These figures simply show all the data from all the chips of their respective data set. Each pointing is color-coded identically to the individual chip figures. The reference figure in this multi-chip, multi-pointing case is directly equivalent to the single-chip figure. 
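To make the RMS and Max statistics above concrete, here is a small illustrative Python sketch (my addition, not part of Astro-WISE; the residual arrays are assumed to be in arc-seconds):

```python
import numpy as np

def residual_stats(dra, ddec):
    """RMS and maximum distance of (DRA, DDEC) residual pairs from the
    DDEC/DRA origin (0,0), as reported above the inspection panels."""
    dist = np.hypot(np.asarray(dra, float), np.asarray(ddec, float))
    return np.sqrt(np.mean(dist**2)), dist.max()

# Hypothetical residuals in arc-seconds:
rms, dmax = residual_stats([0.05, -0.02, 0.01], [-0.03, 0.04, 0.00])
print(rms, dmax)
```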
### The Methods

There are currently two ways to run the inspect method for either case. The most straightforward of these is to simply set the inspect switch in either the AstrometricParametersTask or the GAstromTask when either is run:

awe> task = AstrometricParametersTask(red_filenames=['Sci-USER-WFI-#877-red-53664.5.fits'],
...                                   inspect=1, commit=0)

and similarly with inspect=1, commit=0 for the GAstromTask.

In the other method, an AstrometricParameters or GAstrometric object is instantiated from the database, and its inspect() method invoked:

awe> ap = (AstrometricParameters.reduced.filename == 'Sci-USER-WFI-#877-red-53664.5.fits')[0]
awe> DataObject(pathname=ap.residuals).retrieve()
awe> ap.inspect()

or

awe> gas = (GAstrometric.gasslist.filename == 'GAS-2df_I_5-53760.5784035')[0]
awe> DataObject(pathname=gas.residuals).retrieve()
awe> gas.inspect()

### Modifying the Default Output

The inspection figures described above are displayed to the screen and written to PNG files. This behavior can be modified, but explanation of these techniques is beyond the scope of this HOW-TO. For the latest documentation on these modifications, simply view the online help (docstrings) of the inspect method(s) and plot class(es) used to create these figures:

awe> help(AstrometricParameters.inspect)
awe> from astro.plot.AstrometryPlot import AstromResidualsPlot
awe> help(AstromResidualsPlot)

## Applied inspection methods

The previous sections described the built-in inspection methods showing predicted results of the derived solutions in AstrometricParameters and GAstrometric objects. This section describes extended inspection methods of the derived solutions applied to RegriddedFrames and CoaddedRegriddedFrames. All these inspection methods deliver a plot in the same 5-panel form as the built-in inspect() methods, but using different source catalogs for the residuals. Also, the details and latest usage information can be found in the methods' docstrings accessed via the help() command.

### AstrometricParameters plot_residuals_to_usno() method

• Temporarily in current checkout only.

This plot displays source position residuals between the corrected catalog positions produced by LDAC, or source positions extracted from a RegriddedFrame corrected with the same parameters, and the USNO-A2.0 reference catalog. Setting the source parameter to 'solution' or 'applied' selects either the catalog used in the solution or a catalog extracted from a RegriddedFrame to which the solution parameters have been applied, respectively.

### AstrometricParameters plot_residuals_to_regrid() method

• Temporarily in current checkout only.

This plot displays source position residuals between the corrected catalog positions produced by either LDAC or SExtractor and sources extracted from a RegriddedFrame corrected with the same parameters. Setting the derived_type parameter to 'solution' or 'sextractor' selects either the catalog used in the solution or a catalog extracted by SExtractor from a ReducedScienceFrame whose header has had the solution parameters applied, respectively.

• Temporarily in current checkout only.

This plot displays source position residuals between a given RegriddedFrame and all other overlapping frames that participate in a CoaddedRegriddedFrame. Setting the use_coadd switch (use_coadd=True) displays source position residuals between the CoaddedRegriddedFrame and all RegriddedFrames that went into its creation. 
It plots a given RegriddedFrame source position against the average source position from the CoaddedRegriddedFrame.

## Image inspection method

• Temporarily in current checkout only.

The multi-purpose inspect() method used by all frames inheriting from BaseFrame can create a plot that displays the qualitative residuals on the pixel level, using either difference images or multi-color images, via the same mechanism used for inspecting individual frames. The detailed usage of this method can be found in its docstring. The general idea for this purpose is to inspect one RegriddedFrame, setting the compare parameter (compare=True) and specifying the other RegriddedFrame with the other parameter. The routine automatically compares only the overlapping region of the two frames.

awe> reg0 = RegriddedFrame(pathname=filename0)
awe> reg1 = RegriddedFrame(pathname=filename1)
awe> reg0.inspect(compare=True, other=reg1) # common region of reg0-reg1

## Overlaying a calibrated catalog

This method requires a RegriddedFrame obtained from the RegridTask. It first needs to be loaded into SkyCat:

awe> q = RegriddedFrame.reduced.filename == 'filename.reduced.fits'
awe> os.system('skycat %s' % (q[0].filename))

First, set the desired cut level via the "Auto Set Cut Levels" button, or with "View: Cut Levels…". Next, overlay the catalog by choosing "Data-Servers", then "Catalogs", then "USNO at ESO" [1]. In the dialog that comes up, choose "Search" and all the sources known to the USNO survey will be plotted in circled cross-hairs. They can now be compared directly with the sources on the underlying frame. When inspecting the correlation, remember that the USNO catalog is accurate only to about 0.3 arc-sec.

If the RegriddedFrame was not created from the ReducedScienceFrame, the latter will need to be regridded with the RegridTask before the inspection above can be carried out. This is because projection effects (distortions) exist in the ReducedScienceFrame. The RegridTask can be run via the DPU or locally, as shown in the example below:

awe> dpu.run('Regrid', i='WFI', d='2001-01-01', f='#845', o='Science2', C=0)

awe> regrid = RegridTask(date='2000-01-01', chip='ccd50', filter='#845',
...                      object='Science2', commit=0)
awe> regrid.execute()

## Examine the AstrometricParameters values

To look at the AstrometricParameters for a given ReducedScienceFrame, the AstrometricParameters objects of interest must first be located in the database:

awe> q = AstrometricParameters.reduced.filename == 'WFI.2001-02-16T01:42:31.289_1.calibrated.fits'
awe> len(q)
1
awe> dt = datetime.datetime(2005,1,1)
awe> q = (AstrometricParameters.instrument.name == 'WFI')
awe> q &= (AstrometricParameters.filter.name == '#845')
awe> q &= (AstrometricParameters.chip.name == 'ccd50')
awe> q &= (AstrometricParameters.creation_date > dt)
awe> len(q)
1199

The first example shows a query for an AstrometricParameters object by its source ReducedScienceFrame's filename. The second shows a more general search based on instrument, filter, chip, and date.

NOTE: Dates and times in the Astro-WISE database environment are generally in the form of datetime objects. Therefore, when querying for them, a datetime object must be used. The main exception is the select() method, but this method is not universally implemented at this time.

[1] There are other catalogs available, but the USNO catalog was used in the astrometric solution and should fit well.
2018-11-16 14:26:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5295718908309937, "perplexity": 4691.373724694348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743046.43/warc/CC-MAIN-20181116132301-20181116154301-00496.warc.gz"}
https://mmiranda96blog.wordpress.com/2017/03/11/cramer/
### Introduction

This method finds all the solutions of a linear equation system. We define a linear equation system as a set of n equations of the form: $\sum_{i=1}^{n}a_{ji}x_i=b_j$ These systems can be represented as a product of matrices such as the following: $\begin{bmatrix} 2 & 1 & 3\\ 2 & 6 & 8\\ 6 & 8 & 18 \end{bmatrix} \begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix} = \begin{bmatrix} 1\\ 3\\ 5 \end{bmatrix}$ which can also be represented as: $\left\{\begin{matrix} 2x_1+x_2+3x_3=1\\ 2x_1+6x_2+8x_3=3\\ 6x_1+8x_2+18x_3=5 \end{matrix}\right.$ The solutions to this problem are the set of values x1 to xn that satisfy the equations. A linear equation system consists of two parts: an nxn (square) invertible matrix containing the coefficients of each equation, and an nx1 (column) matrix containing the independent terms of each equation.

### Method

Cramer's method relies on determinants. A determinant is a numeric value associated with a square matrix, which can be calculated in several ways. More information about determinants, their properties, and how to calculate them can be found here. Cramer's rule is the following: $x_j=\frac{|A_j|}{|A|}$ where A_j is the original matrix with its j-th column replaced by b.

### Examples

• A = [7., 2.; -21., -5.], B = [16.; 50.], R = [-25.7; 98.]
• A = [2., -3.; 4., 1.], B = [-2.; 24.], R = [5.; 4.]
• A = [3., -0.1, -0.2; 0.1, 7., -0.3; 0.3, -0.2, 10.], B = [7.85; -19.3; 71.4], R = [3.; -2.5; 7.]
• A = [0.3, 0.52, 1.; 0.5, 1., 1.9; 0.1, 0.3, 0.5], B = [-0.01; 0.67; -0.44], R=[-14.9; -29.5; -19.8]

### Singularities

This method is not an iterative method. Even though it performs its calculations with loops, the number of steps is determined by the size of the matrix, not by its contents. An iterative method performs an unknown number of iterations, stopping only when it meets a given error criterion; because of this, an iterative method can diverge. A non-iterative method, on the other hand, performs a fixed number of operations. It can have terrible complexity, but it is finite: it does not diverge, and it does not fail. Calculating the determinant of a matrix is a heavy operation: it requires a lot of memory and time. Solving big equation systems this way can become a hard task, since the Laplace determinant expansion takes O(n!).

### Conclusions

Cramer's method is really easy to understand and to program (given that we have a function to get the determinant of a matrix, since that part is not so easy to implement). But it is practical only for small systems; larger systems take much more computational time and space. A sketch of the method follows below.
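Here is a minimal Python sketch of the rule as stated (my own illustration; it uses NumPy's determinant, which avoids the O(n!) Laplace expansion mentioned above while keeping the structure of the method):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b via Cramer's rule: x_j = det(A_j) / det(A),
    where A_j is A with its j-th column replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("singular matrix: Cramer's rule does not apply")
    x = np.empty(b.size)
    for j in range(b.size):
        A_j = A.copy()
        A_j[:, j] = b              # replace the j-th column by b
        x[j] = np.linalg.det(A_j) / det_A
    return x

# First example from the text; expected R = [-25.7...; 98.]
print(cramer([[7., 2.], [-21., -5.]], [16., 50.]))
```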
2018-06-24 12:58:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6842776536941528, "perplexity": 1086.9766945520237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866937.79/warc/CC-MAIN-20180624121927-20180624141927-00232.warc.gz"}
https://hackage.haskell.org/package/digestive-functors-heist-0.8.8.1/docs/Text-Digestive-Heist-Compiled.html
digestive-functors-heist-0.8.8.1: Heist frontend for the digestive-functors library

Text.Digestive.Heist.Compiled

Description

This module provides a compiled Heist frontend for the digestive-functors library.

Disclaimer: this documentation requires very basic familiarity with digestive-functors. You might want to take a quick look at this tutorial first: https://github.com/jaspervdj/digestive-functors/blob/master/examples/tutorial.lhs

This module exports the formSplice function, and most users will not require anything else.

These splices are used to create HTML for different form elements. This way, the developer doesn't have to care about setting e.g. the previous values in a text field when something goes wrong. For documentation on the different splices, see the different functions exported by this module. All splices have the same name as given in digestiveSplices.

You can give arbitrary attributes to most of the elements (i.e. where it makes sense). This means you can do e.g.:

<dfInputTextArea ref="description" cols="20" rows="3" />

Synopsis

# Core methods

formSplice

Arguments:

:: MonadIO m
=> Splices (Splice m) -- Extra splices that you want to have available inside the form tag.
-> Splices (AttrSplice m) -- Attribute splices available inside the form tag.
-> RuntimeSplice m (View Text)
-> Splice m

A compiled splice for a specific form. You pass in a runtime action that gets the form's view and this function returns a splice that creates a form tag. In your HeistConfig you might have a compiled splice like this:

("customerForm" ## formSplice mempty mempty (liftM fst $ runForm "customer" custForm))

Then you can use the customerForm tag just like you would use the dfForm tag in interpreted templates anywhere you want to have a customer form.

formSplice' :: MonadIO m => (RuntimeSplice m (View Text) -> Splices (Splice m)) -> (RuntimeSplice m (View Text) -> Splices (AttrSplice m)) -> RuntimeSplice m (View Text) -> Splice m Source #

Same as formSplice except the supplied splices and attribute splices are applied to the resulting form view.

# Main splices

dfInput :: Monad m => RuntimeSplice m (View v) -> Splice m Source #

Generate an input field with a supplied type. Example:

<dfInput type="date" ref="date" />

dfInputList :: MonadIO m => RuntimeSplice m (View Text) -> Splice m Source #

This splice allows variable length lists. It binds several attribute splices providing functionality for dynamically manipulating the list. The following descriptions will use the example of a form named "foo" with a list subform named "items".

Splices:

dfListItem - This tag must surround the markup for a single list item. It surrounds all of its children with a div with id "foo.items" and class "inputList" and displays a copy for each of the list items, including a "template" item used for generating new items. If you supply the attribute "noTemplate", then the template item is not included and the generated list will not be dynamically updated by the add and remove actions.

Attribute Splices:

itemAttrs - Attribute you should use on the div, span, etc. that surrounds all the markup for a single list item. This splice expands to an id of "foo.items.ix" (where ix is the index of the current item) and a class of "inputListItem".

addControl - Use this attribute on the tag you use to define a control for adding elements to the list (usually a button or anchor). It adds an onclick attribute that calls a javascript function addInputListItem.

removeControl - Use this attribute on the control for removing individual items. 
It adds an onclick attribute that calls removeInputListItem. dfInputText :: Monad m => RuntimeSplice m (View v) -> Splice m Source # Generate a text input field. Example: <dfInputText ref="user.name" /> dfInputTextArea :: Monad m => RuntimeSplice m (View v) -> Splice m Source # Generate a text area. Example: <dfInputTextArea ref="user.about" /> dfInputPassword :: Monad m => RuntimeSplice m (View v) -> Splice m Source # <dfInputPassword ref="user.password" /> dfInputHidden :: Monad m => RuntimeSplice m (View v) -> Splice m Source # Generate a hidden input field. Example: <dfInputHidden ref="user.forgery" /> dfInputSelect :: Monad m => RuntimeSplice m (View Text) -> Splice m Source # Generate a select button (also known as a combo box). Example: <dfInputSelect ref="user.sex" /> Generate a select button (also known as a combo box). Example: <dfInputSelectGroup ref="user.sex" /> dfInputRadio :: Monad m => RuntimeSplice m (View Text) -> Splice m Source # Generate a number of radio buttons. Example: <dfInputRadio ref="user.sex" /> dfInputCheckbox :: Monad m => RuntimeSplice m (View v) -> Splice m Source # Generate a checkbox. Example: <dfInputCheckbox ref="user.married" /> dfInputFile :: Monad m => RuntimeSplice m (View v) -> Splice m Source # Generate a file upload element. Example: <dfInputFile ref="user.avatar" /> Generate a submit button. Example: <dfInputSubmit /> dfLabel :: Monad m => RuntimeSplice m (View v) -> Splice m Source # Generate a label for a field. Example: <dfLabel ref="user.married">Married: </dfLabel> <dfInputCheckbox ref="user.married" /> dfErrorList :: MonadIO m => RuntimeSplice m (View Text) -> Splice m Source # Display the list of errors for a certain field. Example: <dfErrorList ref="user.name" /> <dfInputText ref="user.name" /> Display the list of errors for a certain form and all forms below it. E.g., if there is a subform called "user": <dfChildErrorList ref="user" /> Or display all errors for the form: <dfChildErrorList ref="" /> Which is more conveniently written as: <dfChildErrorList /> dfSubView :: MonadIO m => RuntimeSplice m (View Text) -> Splice m Source # This splice allows reuse of templates by selecting some child of a form tree. While this may sound complicated, it's pretty straightforward and practical. Suppose we have: <dfInputText ref="user.name" /> <dfInputTextArea ref="comment.body" /> You may want to abstract the "user" parts in some other template so you Don't Repeat Yourself (TM). If you create a template called "user-form" with the following contents: <dfInputText ref="name" /> <dfInputText ref="password" /> You will be able to use: <dfSubView ref="user"> <apply template="user-form" /> </dfSubView> <dfInputTextArea ref="comment.body" /> # Utility splices dfIfChildErrors :: MonadIO m => RuntimeSplice m (View v) -> Splice m Source # Render some content only if there are any errors. This is useful for markup purposes. <dfIfChildErrors ref="user"> Content to be rendered if there are any errors... </dfIfChildErrors> The ref attribute can be omitted if you want to check the entire form. digestiveSplices :: MonadIO m => RuntimeSplice m (View Text) -> Splices (Splice m) Source # List of splices defined for forms. For most uses the formSplice function will be fine and you won't need to use this directly. But this is available if you need more customization.
2020-04-08 16:35:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20428422093391418, "perplexity": 12915.334483416282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371818008.97/warc/CC-MAIN-20200408135412-20200408165912-00517.warc.gz"}
https://wordassociations.net/en/words-associated-with/Title
# Associations to the word «Title»

## Wiktionary

TITLE, noun. A prefix (honorific) or suffix (post-nominal) added to a person's name to signify either veneration, official position or a professional or academic qualification. TITLE, noun. (legal) Legal right to ownership of a property; a deed or other certificate proving this. TITLE, noun. In canon law, that by which a beneficiary holds a benefice. TITLE, noun. A church to which a priest was ordained, and where he was to reside. TITLE, noun. The name of a book, film, musical piece, painting, or other work of art. TITLE, noun. A publication. TITLE, noun. A section or division of a subject, as of a law or a book. TITLE, noun. (mostly in the plural) A written title, credit, or caption shown with a film, video, or performance. TITLE, noun. (bookbinding) The panel for the name, between the bands of the back of a book. TITLE, noun. The subject of a writing; a short phrase that summarizes the entire topic. TITLE, noun. A division of an act of Congress or Parliament. TITLE, noun. (sports) The recognition given to the winner of a championship in sports. TITLE, verb. (transitive) To assign a title to; to entitle. TITLE CASE, noun. (computing) The capitalization of text in which the first letter of each major word is set in capital. TITLE CHARACTER, noun. A fictional character whose name or a short description is present in the title of the work where the character appears. TITLE CHARACTERS, noun. Plural of title character TITLE COMPANIES, noun. Plural of title company TITLE COMPANY, noun. Company that verifies, certifies, holds an escrow for, insures, and holds responsibility for the real estate transactions. TITLE DEED, noun. (legal) A deed or similar document by which the title to property is conveyed between parties TITLE DEEDS, noun. Plural of title deed TITLE DEFECT, noun. (legal) A problem with the chain of title to a parcel of real property that exposes the putative owner of the property to a potential legal dispute over the ownership of the property. TITLE DEFECTS, noun. Plural of title defect TITLE PAGE, noun. The page, near the front of a book, that gives its title and, normally, its author and publisher TITLE PAGES, noun. Plural of title page TITLE POLICY, noun. (US) An insurance policy on a piece of real estate for which the title deeds may be incomplete or not properly searched. TITLE TRACK, noun. (music) A track having the same name as the album or EP which it's from. TITLE TRACK, noun. (film) A track having the same name as the movie it's from. TITLE TRACK, noun. (sports) The pursuit of a title or the process of winning a title. Often used as "on title track".

## Dictionary definition

TITLE, noun. A heading that names a statute or legislative bill; may give a brief summary of the matters it deals with; "Title 8 provided federal help for schools". TITLE, noun. The name of a work of art or literary composition etc.; "he looked for books with the word 'jazz' in the title"; "he refused to give titles to his paintings"; "I can never remember movie titles". TITLE, noun. A general or descriptive heading for a section of a written work; "the novel had chapter titles". TITLE, noun. The status of being a champion; "he held the title for two years". TITLE, noun. A legal document signed and sealed and delivered to effect a transfer of property and to show the legal right to possess it; "he signed the deed"; "he kept the title to his car in the glove compartment". TITLE, noun. 
An identifying appellation signifying status or function: e.g. 'Mr.' or 'General'; "the professor didn't like his friends to use his formal title". TITLE, noun. An established or recognized right; "a strong legal claim to the property"; "he had no documents confirming his title to his father's estate"; "he staked his claim". TITLE, noun. (usually plural) Written material introduced into a movie or TV show to give credits or represent dialogue or explain an action; "the titles go by faster than I can read". TITLE, noun. An appellation signifying nobility; "'your majesty' is the appropriate title to use in addressing a king". TITLE, noun. An informal right to something; "his claim on her attentions"; "his title to fame". TITLE, verb. Give a title to. TITLE, verb. Designate by an identifying term; "They styled their nation 'The Confederate States'".
2019-10-16 19:27:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19681762158870697, "perplexity": 10637.929379138672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669546.24/warc/CC-MAIN-20191016190431-20191016213931-00480.warc.gz"}
https://firas.moosvi.com/oer/physics_bank/content/public/001.Math/Diagnostic/math_diagnostic16/math_diagnostic16.html
# Math Diagnostic 16

## Part 1

The statement which is INCORRECT is…

• $$\log (AB) = \log A \log B$$
• $$\log A + \log B = \log(AB)$$
• $$\log A - \log B = \log(A/B)$$
• $$\log A^2 = 2 \log A$$
• Do not know
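A one-line check (my addition, not part of the original diagnostic) confirms that the first option is the odd one out: taking $$A = B = 10$$,

$$\log(AB) = \log 100 = 2 = \log A + \log B, \qquad \text{while} \qquad \log A \cdot \log B = 1 \cdot 1 = 1 \neq 2.$$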
2023-01-29 00:01:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4578916132450104, "perplexity": 5136.752572551672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499695.59/warc/CC-MAIN-20230128220716-20230129010716-00606.warc.gz"}
https://zwaltman.wordpress.com/page/2/
# Today I Learned

Some of the things I've learned every day since Oct 10, 2016

## 182: final (Java)

In Java, the $\texttt{final}$ keyword has several different uses depending on whether it's used on a variable, method, or class. In all uses, it's generally used to prevent something from being modified.

A variable declared $\texttt{final}$ can't have its value modified after declaration. When the variable is primitive, this is straightforward; $\texttt{final int x = 5}$ will always have the value 5. If the variable is a reference type, however, it being final just means it will always point to the same object. The object itself, if mutable like a list, can still change.

A method declared $\texttt{final}$ can't be overridden. A class declared $\texttt{final}$ can't be extended.

## 181: Vectors (Java)

The Vector class in Java is essentially a legacy version of ArrayList (ArrayList was actually the Java 1.2 Collections replacement for it) which by default synchronizes each individual operation. While this makes the structure automatically thread-safe (unlike ArrayList), the better design is to lock sequences of operations rather than individual operations. The result is that in addition to not providing any control over synchronization, the Vector class performs considerably worse than alternatives, such as using Collections.synchronizedList to decorate a non-thread-safe collection such as ArrayList for concurrency. Thus it's generally recommended that use of the Vector class be avoided.

## 180: Software Aging and Rejuvenation

Software aging is the degradation of a software system's ability to perform correctly after running continuously for long periods of time. Common causes include memory bloating, memory leaks, and data corruption.

To prevent undesirable effects of software aging, software rejuvenation can be done proactively. This can take many forms, the most well-known of which is a simple system reboot. Other forms include flushing OS kernel tables, garbage collection, or Apache's method of killing and then recreating processes after serving a certain number of requests.

## 179: Universal Axiomatization and Substructures

Definition: A universal sentence is a first-order sentence of the form $\forall \bar{v} \phi (\bar{v})$, where $\phi$ has no quantifiers.

Definition: We say an $\mathcal{L}$-theory $T$ has a universal axiomatization if there's a set $\Gamma$ of universal $\mathcal{L}$-sentences such that $\mathcal{M} \vDash T$ if and only if $\mathcal{M} \vDash \Gamma$.

With the above definitions, an $\mathcal{L}$-theory $T$ has a universal axiomatization if and only if whenever $\mathcal{M} \vDash T$ and $\mathcal{N} \subseteq \mathcal{M}$ we have that $\mathcal{N} \vDash T$. In the words of Marker's Model Theory, "a theory is preserved under substructure if and only if it has a universal axiomatization."

## 178: The Residue Formula

In complex analysis, the residue formula states that where $f$ is a holomorphic function on an open set containing a circle $C$ and its interior except for finitely many poles $z_1, \dots, z_n$ in the interior of $C$, $\int _C f(z)\, dz = 2 \pi i \sum _{k = 1} ^n \text{res}_{z_k} (f)$. That is, the integral over the circle is completely determined by the residues of $f$ about the poles in its interior.
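A quick worked instance (my addition) in the simplest case: take $f(z) = 1/z$ on the unit circle $C$. The only pole inside $C$ is $z = 0$ with $\text{res}_0(f) = 1$, and parametrizing $z = e^{i\theta}$ gives

$\int_C \frac{dz}{z} = \int_0^{2\pi} \frac{i e^{i\theta}}{e^{i\theta}} \, d\theta = 2\pi i = 2 \pi i \cdot \text{res}_0(f)$,

matching the formula.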
## 177: Elementary Embeddings, Elementary Substructures

In logic and model theory, an embedding $\gamma : \mathcal{M} \rightarrow \mathcal{N}$ between $\mathcal{L}$-structures $\mathcal{M, N}$ is called an elementary embedding if $\mathcal{M} \vDash \phi(a_1, \dots, a_n) \Leftrightarrow \mathcal{N} \vDash \phi(\gamma(a_1), \dots, \gamma(a_n))$ for any $\mathcal{L}$-formula $\phi$ and $a_1, \dots, a_n \in M$.

A substructure $\mathcal{M} \subseteq \mathcal{N}$ is called an elementary substructure, denoted $\mathcal{M} \preceq \mathcal{N}$, if the inclusion map is an elementary embedding.

## 176: Removable Singularities vs. Poles

Let $f: \Omega \rightarrow \mathbb{C}$ be holomorphic on $\Omega$ except at isolated points where it's undefined. One such isolated point $z_0$ is said to be removable if $f$ can be extended to include $z_0$ such that the extension is holomorphic in a neighborhood of $z_0$. That is, if $z_0$ is "correctable".

By contrast, an isolated undefined point is said to be a pole if it is not removable, but there's a positive integer $n$ such that $z_0$ is a removable singularity of the function $f(z)(z - z_0)^n$. In this case, the least such $n$ is called the order of the pole.

## 175: Tarski's Theorem

In first-order logic, Tarski's Theorem states that where $\mathcal{M, N}$ are structures and $\mathcal{M} \subseteq \mathcal{N}$, the following are equivalent:

1. $\mathcal{M} \preceq \mathcal{N}$
2. For all $Y \subseteq N$ definable over $M$, if $Y \neq \varnothing$ then $Y \cap M \neq \varnothing$.

## 174: Decidability of Recursive, Complete, Satisfiable Theories

It's pretty much in the title. If $T$ is an $\mathcal{L}$-theory which is recursive, complete, and satisfiable, then $T$ is decidable. That is, there's an algorithm which takes an $\mathcal{L}$-sentence $\phi$ and outputs whether or not $T \vDash \phi$. ($T$ being recursive means that there's an algorithm which similarly takes $\phi$ as input but outputs whether $\phi \in T$.)

## 173: Vaught's Test

In model theory, Vaught's Test (also known as the Łoś–Vaught test) determines whether certain $\mathcal{L}$-theories $T$ are complete. It states that if $T$ is satisfiable, has no finite models, and is $\kappa$-categorical for some $\kappa \geq |\mathcal{L}|$, then $T$ is complete.

Proof outline: Assume $T$ isn't complete. Then there's a sentence $\phi$ such that $T \nvDash \phi, T \nvDash \neg \phi$, meaning $T \cup \{\phi\}, T \cup \{\neg \phi \}$ are satisfiable, meaning both have infinite models (since $T$ has no finite models), so by [172] we can find models $\mathcal{M}, \mathcal{N}$ of cardinality $\kappa$ which respectively satisfy these theories. But since $\mathcal{M}, \mathcal{N}$ disagree on $\phi$, they are not elementarily equivalent and hence not isomorphic, contradicting the assumption that $T$ is $\kappa$-categorical. Thus $T$ must be complete.
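A standard application of Vaught's Test (my added example, a textbook one): the theory DLO of dense linear orders without endpoints is satisfiable (witness $(\mathbb{Q}, <)$), has no finite models (density forces a third element between any two), and is $\aleph_0$-categorical by Cantor's back-and-forth argument. Since $\aleph_0 \geq |\mathcal{L}|$ for the language $\{<\}$, the test applies and DLO is complete.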
2018-03-23 16:55:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 78, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8191229701042175, "perplexity": 769.8450322855106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648404.94/warc/CC-MAIN-20180323161421-20180323181421-00138.warc.gz"}
https://brilliant.org/discussions/thread/sum-of-elements-in-subset/
# Sum of largest and smallest elements in a subset

## Question

In how many subsets of $\{ 1, 2, 3, 4, \ldots n\}$ is the sum of the largest element and the smallest element equal to $(n+1)$?

## Solution

I am able to deduce the following:

• If the smallest element is $1$, then the largest element has to be $n$. The rest of the $n-2$ elements can be "in" or "out", thus the total will be $2^{n-2}$
• Similarly, if the smallest element is $2$, then the largest element has to be $n-1$. And again, there will be $2^{n-4}$ such subsets
• $\ldots \ldots$
• If $n$ is odd the total subsets will be: $2^{n-2} + 2^{n-4} + \ldots + 2^{1}$
• If $n$ is even the total subsets will be: $2^{n-2} + 2^{n-4} + \ldots + 2^{0}$

Note by Mahdi Raza 3 months, 1 week ago

Sort by:

Can anyone suggest a cleaner and short-hand form for the above answer? - 3 months, 1 week ago

This is good. Nothing to improve on. - 3 months, 1 week ago

Ok. - 3 months, 1 week ago

The best answer possible is given. You can also write it as: $\sum_{n\geq k \geq2:2|k}2^{n-k}$ - 3 months, 1 week ago

Hmm.. it is pretty compact but could be hard to understand. Thanks though! - 3 months, 1 week ago

Pretty good! - 3 months, 1 week ago

Thanks! - 3 months, 1 week ago

No problem! For any integer $x$ and any prime $p$, can $\frac{x}{p}$ always be irreducible if $x$ isn't a factor of $p$? Make a note on it if you have proof. - 3 months, 1 week ago

If you know the definition of a prime number, this should be obvious to prove/disprove. - 3 months, 1 week ago

Yes. And I think the question should say "x isn't a multiple of p", in which case the answer should obviously be yes unless I am missing something very trivial. Here is a simple deduction: Let $\frac{x}{p} = k$ where $k$ is an integer. For $k$ to be an integer, $x = pk$. That means $x$ has to be a multiple of $p$. - 3 months, 1 week ago

Great! You did see my comment to @Hamza Anushath? 
- 3 months, 1 week ago
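For what it's worth, the two sums above are geometric with ratio 4, which gives the closed forms $(2^n-2)/3$ for odd $n$ and $(2^n-1)/3$ for even $n$. A tiny brute-force Python check (my addition; it counts subsets with at least two elements, matching the pairing argument in the solution — for odd $n$ the singleton $\{(n+1)/2\}$ would also qualify if min and max are allowed to coincide):

```python
from itertools import combinations

def count(n):
    """Subsets of {1,...,n} with |S| >= 2 whose min + max = n + 1."""
    return sum(1 for r in range(2, n + 1)
                 for s in combinations(range(1, n + 1), r)
                 if s[0] + s[-1] == n + 1)

for n in range(2, 12):
    closed = (2**n - 2) // 3 if n % 2 else (2**n - 1) // 3
    assert count(n) == closed, (n, count(n), closed)
print("closed form verified for n = 2..11")
```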
2020-09-29 11:19:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 34, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9816532731056213, "perplexity": 1753.497329909769}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401641638.83/warc/CC-MAIN-20200929091913-20200929121913-00790.warc.gz"}
https://byjus.com/physics/derivation-of-lens-formula/
# Derivation of Lens Formula

What is the lens formula? In optics, the relationship between the distance of an image (v), the distance of an object (u), and the focal length (f) of the lens is given by the formula known as the lens formula. The lens formula is applicable for convex as well as concave lenses, provided the lenses are thin (of negligible thickness). The formula is as follows:

$\frac{1}{v}-\frac{1}{u}=\frac{1}{f}$

## Lens formula derivation

Consider a convex lens with an object AB kept on the principal axis. Two rays are considered: one ray is parallel to the principal axis and, after refraction, passes through the focus; the second ray passes through the optical centre and is undeviated. A' is the point where the two rays intersect, and it is also the image of point A. The image of point B is obtained on the principal axis, since B itself lies on the principal axis. As the object is perpendicular to the principal axis, the image is also perpendicular to the principal axis. To get the location of the image formed by point B, we draw a perpendicular from point A' to the principal axis. The following relations are obtained from the figure:

$\frac{AB}{A'B'}=\frac{BO}{OB'}$ (from similar ΔABO and ΔA'B'O) (eq. 1)

$\frac{PO}{OF}=\frac{A'B'}{FB'}$ (from similar ΔPOF and ΔFB'A')

$\therefore \frac{AB}{OF}=\frac{A'B'}{FB'}$ (from the figure, PO = AB)

$\frac{AB}{A'B'}=\frac{OF}{FB'}$ (eq. 2)

$\frac{BO}{OB'}=\frac{OF}{FB'}$ (from eq. 1 and eq. 2)

$\therefore \frac{BO}{OB'}=\frac{OF}{OB'-OF}$

$\frac{-u}{+v}=\frac{+f}{v-f}$ (substituting signed optical distances)

$\therefore -uv+uf=fv$

$-\frac{1}{f}+\frac{1}{v}=\frac{1}{u}$ (dividing both sides by $uvf$)

$\frac{1}{v}=\frac{1}{u}+\frac{1}{f}$

$\frac{1}{f}=\frac{1}{v}-\frac{1}{u}$

This is the lens formula, which completes the derivation.
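As a quick numerical illustration (my own example values, using the same sign convention as the derivation: u is negative for a real object on the incoming side):

```python
def image_distance(u, f):
    """Solve 1/v - 1/u = 1/f for the image distance v."""
    return 1.0 / (1.0 / f + 1.0 / u)

# Convex lens with f = +10 cm and a real object 30 cm in front (u = -30):
v = image_distance(u=-30.0, f=10.0)
print(v)  # 15.0 -> a real image forms 15 cm beyond the lens
```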
2019-05-20 16:24:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7250401377677917, "perplexity": 1043.6635868044355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256082.54/warc/CC-MAIN-20190520162024-20190520184024-00117.warc.gz"}
https://www.gradesaver.com/textbooks/engineering/mechanical-engineering/engineering-mechanics-statics-and-dynamics-14th-edition/chapter-11-virtual-work-section-11-3-principle-of-virtual-work-for-a-system-of-connected-rigid-bodies-problems-page-595/15
## Engineering Mechanics: Statics & Dynamics (14th Edition)

$F=\frac{M}{2a\sin\theta}$

We can determine the required force as follows. Taking the position coordinate of point A as $x_A=2a\cos\theta$, the virtual displacement is

$\delta x_A=\frac{d(2a\cos\theta)}{d\theta}\,\delta\theta=-2a\sin\theta\,\delta\theta$

The virtual-work equation is $\delta U=0$:

$M\,\delta\theta+F\,\delta x_A=0$

$\implies M\,\delta\theta+F(-2a\sin\theta)\,\delta\theta=0$

This can be rearranged as:

$F=\frac{M}{2a\sin\theta}$
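A small symbolic check of this rearrangement (a sketch using sympy; the coordinate $x_A = 2a\cos\theta$ is the assumption carried over from the derivation above):

```python
import sympy as sp

a, M, F, theta, dtheta = sp.symbols('a M F theta delta_theta', positive=True)

x_A = 2 * a * sp.cos(theta)                  # assumed position coordinate of A
delta_xA = sp.diff(x_A, theta) * dtheta      # virtual displacement: -2a sin(theta) dtheta
virtual_work = M * dtheta + F * delta_xA     # delta U = 0

sol = sp.solve(sp.Eq(virtual_work, 0), F)[0]
print(sol)                                   # M/(2*a*sin(theta))
```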
2020-07-06 03:02:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9389663934707642, "perplexity": 850.9045592305648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890092.28/warc/CC-MAIN-20200706011013-20200706041013-00080.warc.gz"}
https://math.stackexchange.com/questions/2561912/to-show-that-x2y2-propto-xy
# To show that $x^2+y^2\propto xy$. Let $x$ and $y$ be variables satisfying $3x-4y\propto\sqrt{xy}$. Then show that $x^2+y^2\propto xy$, where $y\propto x$ means $y$ is directly proportional to $x$. My attempt: $3x-4y=k\sqrt{xy}$ for some real $k$. This implies $(3x-4y)^2=k^2xy$, which implies $9x^2+16y^2=(k^2+24)xy$, which implies $x^2+y^2=((k^2+24)x-7y)y/9$. So I now have to assert that $[(k^2+24)x-7y]\propto x$. But I can't proceed from here. Please help me to solve this. • Just an aside question: I've been seeing that infinity-ish symbol a lot lately. Would someone care to explain what it means? I guess it is some sort of asymptotic relation (?) @vadim123 Lol, it makes me feel better to know that I am not the only one. ;) – Jonatan B. Bastos Dec 11 '17 at 17:03 • $y\propto x$ means that $y$ is directly proportional to $x$. – dromastyx Dec 11 '17 at 17:04 • $x$ and $y$ are specific real numbers? In that case everything is proportional to everything. There are no variables....?? – John Brevik Dec 11 '17 at 17:17 • No, here $x$ and $y$ are variables, but they belong to the reals. @John Brevik – abcdmath Dec 11 '17 at 17:21 Let $t = \sqrt{\frac{x}{y}}$. Observe that $3t - \frac{4}{t} = k$, or $3t^2 -kt - 4 = 0$, giving $t = \frac{k \pm \sqrt{k^2 + 48}}{6}$, that is, $t$ is constant. On the other hand, $\frac{x^2 + y^2}{xy} = \frac{x}{y} + \frac{y}{x} = t^2 + t^{-2}$ is also constant, and the claim follows.
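A quick numerical check of the accepted argument (a sketch; k = 1 is an arbitrary assumed constant of proportionality):

```python
import math

k = 1.0                                    # assumed proportionality constant
t = (k + math.sqrt(k * k + 48)) / 6        # positive root of 3t^2 - k t - 4 = 0

for y in (1.0, 2.0, 5.0, 10.0):
    x = (t * t) * y                        # so that sqrt(x/y) = t
    assert abs(3 * x - 4 * y - k * math.sqrt(x * y)) < 1e-9
    print((x * x + y * y) / (x * y))       # same value every time: t^2 + t^{-2}
```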
2019-10-17 07:46:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8430746793746948, "perplexity": 262.6554828654749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986673250.23/warc/CC-MAIN-20191017073050-20191017100550-00002.warc.gz"}
https://codereview.stackexchange.com/questions/232412/average-row-id-volume
# Average row id volume

After a test run with LoadRunner I wanted to know the average time for each transaction at each VU volume step. LR lets you create a graph for this but doesn't give you the point values... so it's not possible to analyse the data further. As I'm learning Python I decided to write a script to do the job for me. This is my second Python script, but I've been a programmer for quite a few years and have used quite a few different languages, so I'd like to understand if I'm writing Python or "just translating code from another language to Python".

There is no test for the data because it's copied from the LoadRunner Analysis tool. It uses tab as the column separator, and it looks like this:

```
Vuser ID  Group Name  Transaction End Status  Location Name  Script Name  Transaction Hierarchical Path  Host Name  Scenario Elapsed Time  Transaction Response Time  Transaction Name
Vuser1  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  11,484  7,526  AR00_Homepage_AR
Vuser2  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  11,512  7,525  AR00_Homepage_AR
Vuser1  VC_AR  Pass  N/A  VC_AR_Nuova  Global_AREA_RISERVATA  localhost  48,607  35,334  AR01_Login_AR
Vuser2  VC_AR  Pass  N/A  VC_AR_Nuova  Global_AREA_RISERVATA  localhost  52,098  39,043  AR01_Login_AR
Vuser1  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  70,698  0,048  AR07_Logout_AR
Vuser2  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  70,768  0,009  AR07_Logout_AR
Vuser1  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  74,466  2,021  AR00_Homepage_AR
Vuser2  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  75,752  2,199  AR00_Homepage_AR
Vuser1  VC_AR  Pass  N/A  VC_AR_Nuova  Global_AREA_RISERVATA  localhost  78,169  1,825  AR01_Login_AR
Vuser2  VC_AR  Pass  N/A  VC_AR_Nuova  Global_AREA_RISERVATA  localhost  79,203  1,096  AR01_Login_AR
Vuser1  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  85,963  0,01  AR07_Logout_AR
Vuser2  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  86,571  0,009  AR07_Logout_AR
Vuser4  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  123,846  1,933  AR00_Homepage_AR
Vuser3  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  125,49  1,939  AR00_Homepage_AR
Vuser1  VC_AR  Pass  N/A  VC_AR_Nuova  Global_AREA_RISERVATA  localhost  125,58  1,25  AR01_Login_AR
Vuser4  VC_AR  Pass  N/A  VC_AR_Nuova  Global_AREA_RISERVATA  localhost  128,174  1,598  AR01_Login_AR
Vuser3  VC_AR  Pass  N/A  VC_AR_Nuova  Global_AREA_RISERVATA  localhost  128,715  1,67  AR01_Login_AR
Vuser2  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  132,251  0,325  AR07_Logout_AR
Vuser1  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  134,641  0,016  AR07_Logout_AR
Vuser4  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  135,899  0,011  AR07_Logout_AR
Vuser2  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  136,427  2,002  AR00_Homepage_AR
Vuser1  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  137,611  1,954  AR00_Homepage_AR
Vuser4  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  139,254  1,944  AR00_Homepage_AR
Vuser2  VC_AR  Pass  N/A  VC_AR_Nuova  Global_AREA_RISERVATA  localhost  139,437  1,239  AR01_Login_AR
Vuser1  VC_AR  Pass  N/A  VC_AR_Nuova  Global_AREA_RISERVATA  localhost  140,738  1,947  AR01_Login_AR
Vuser3  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  141,829  2,194  AR00_Homepage_AR
Vuser4  VC_AR  Pass  N/A  VC_AR_Nuova  Global_AREA_RISERVATA  localhost  142,508  1,096  AR01_Login_AR
Vuser2  VC_AR  Pass  N/A  VC_AR_Nuova  Highest Level  localhost  145,244  0,026  AR07_Logout_AR
Vuser3  VC_AR  Pass  N/A  VC_AR_Nuova  Global_AREA_RISERVATA  localhost  145,841  1,855  AR01_Login_AR
```

My code is here:

```python
"""
Script to calculate the average time per Vugen volume from the raw data.

LoadRunner Analysis creates the graph but doesn't show the base values,
which makes it difficult to do further analysis.

Parameters
----------
-l, --log : str, default 'WARNING'
    the log level
-i, --input : str, default 'raw.txt'
    the filepath of the input
-o, --output : str, default 'out.txt'
    the filepath for the output
"""
import pandas
import argparse
import logging

MAX_TIME_WINDOW = 10
"""The max time, in seconds, to count the change of Vugen volume as a
single event"""


def read_raw_data(file):
    # NOTE: the original 'def' line was lost when the page was scraped;
    # the function name here is a reconstruction.
    """
    Convert the input file to a table

    Read the input file and convert it to a DataFrame for future work.
    To reduce the memory load, the columns that will not be used are
    deleted.

    Parameters
    ----------
    file : str
        the filepath of the data

    Returns
    -------
    A DataFrame with the read data
    """
    df_raw = pandas.read_table(file, index_col='Scenario Elapsed Time',
                               decimal=',', thousands='.')
    df_raw.columns = df_raw.columns.str.strip().str.replace(' ', '_')
    logging.debug('Imported column list: %s', df_raw.columns)
    script_name = df_raw.iat[1, 4]
    logging.debug('Script name: %s', script_name)
    del df_raw['Group_Name']
    del df_raw['Transaction_End_Status']
    del df_raw['Location_Name']
    del df_raw['Host_Name']
    del df_raw['Script_Name']
    df_raw.sort_index(inplace=True)
    logging.debug('Removed not needed columns, current columns: %s',
                  df_raw.columns)
    logging.debug('Imported data sample:\n %s', df_raw)
    return df_raw


def generate_empty_row(input_df):
    """
    Generate the base row for the work DataFrame

    Generate a dictionary with a cell for each distinct value of the
    'Transaction_Name' column in the input DataFrame.

    Parameters
    ----------
    input_df : DataFrame
        The DataFrame with all the data

    Returns
    -------
    dict
        A dictionary with a default value for all the columns.
    """
    transactions = input_df['Transaction_Name'].unique()
    transactions.sort(axis=0)
    transactions = transactions.tolist()
    logging.debug('Transactions: %s', transactions)
    base_row = dict(zip(transactions, [0] * len(transactions)))
    return base_row


def __average_and_append(data_totals, data_counter, out_data, new_index):
    """
    Average the values and add them to out_data with index new_index

    Divides the totals by the relative counters to get the average for
    each transaction, then adds the new data to the out_data DataFrame
    with index new_index.

    Parameters
    ----------
    data_totals : dict
        A dict with the totals for each transaction,
        must have the same fields as data_counter
    data_counter : dict
        A dict that counts the number of items added to get the totals,
        must have the same fields as data_totals
    out_data : pandas.DataFrame
        The DataFrame to append the new row to
    new_index : int
        The index for the new row in the DataFrame

    Returns
    -------
    pandas.DataFrame
        The out_data DataFrame with the new row appended
    """
    data_avg = {key: total / max(data_counter[key], 1)
                for (key, total) in data_totals.items()}
    row = pandas.DataFrame(data=data_avg, index={new_index})
    out_data = out_data.append(row)
    return out_data


def calculate_response_time(input_data, base_row):
    """
    Calculate the response time for each transaction for each Vugen load

    Read each row of the input data; for each transaction add the
    response time to an accumulator and the number of added values to a
    counter. Every time the load counter increases outside the time
    window of MAX_TIME_WINDOW, the data will be averaged and written to
    the output DataFrame.

    Parameters
    ----------
    input_data : DataFrame
        The DataFrame with all the data
    base_row : dict
        A dictionary with a cell for each transaction in the data

    Returns
    -------
    DataFrame
        A DataFrame with the calculated average for each transaction,
        with the Vugen volume as index
    """
    data_totals = base_row.copy()   # reconstructed: line lost in extraction
    data_counter = base_row.copy()
    seen_id_set = set()
    # Set time_last_added_id base at the time of the first row, i.e. its index
    time_last_added_id = input_data.index[0]   # reconstructed
    out_data = pandas.DataFrame()
    vugen_volume = 0
    for index, row in input_data.iterrows():
        is_new_step = row['Vuser_ID'] not in seen_id_set
        is_in_window = ((index - time_last_added_id) < MAX_TIME_WINDOW)
        if (is_new_step and is_in_window):
            # Put the new Vugen ID in the set without changing the last time
            seen_id_set.add(row['Vuser_ID'])   # reconstructed
            vugen_volume = len(seen_id_set)
            logging.debug("New ID %s found within %i second"
                          % (row['Vuser_ID'], MAX_TIME_WINDOW))
        elif (is_new_step):
            # Calculate the averages and append the new row to output
            logging.debug("New ID %s found outside the %i second window, "
                          "adding a row to the output"
                          % (row['Vuser_ID'], MAX_TIME_WINDOW))
            out_data = __average_and_append(data_totals, data_counter,
                                            out_data, vugen_volume)
            # Set the last time a Vugen ID was added and put it in the set
            time_last_added_id = index           # reconstructed
            seen_id_set.add(row['Vuser_ID'])     # reconstructed
            vugen_volume = len(seen_id_set)      # reconstructed
            # Reset the counter and the adder
            data_totals = base_row.copy()        # reconstructed
            data_counter = base_row.copy()
        data_totals[row['Transaction_Name']] += \
            row['Transaction_Response_Time']     # reconstructed
        data_counter[row['Transaction_Name']] += 1
    # Add the data of the last step
    out_data = __average_and_append(data_totals, data_counter,
                                    out_data, vugen_volume)
    return out_data


def response_time_under_load(input_file, output_file):
    """
    Read the input file, calculate the response under load and save it
    in the output file

    Main procedure of the module. It uses the other functions to import
    the raw transactions; as the steps in the Vugen volume are not
    perfectly aligned, if the volume is changed multiple times in a
    window of XX seconds they will be counted as a single step. The
    number of seconds is defined as a constant.

    Parameters
    ----------
    input_file : string
        The file path of the data file, the data must be a response
        time raw data file generated by the LoadRunner Analysis tool
    output_file : string
        The file path where to save the calculated data
    """
    input_data = read_raw_data(input_file)   # reconstructed call
    base_row = generate_empty_row(input_data)
    working_data = calculate_response_time(input_data, base_row)
    working_data.to_csv(path_or_buf=output_file, sep=';',
                        index_label='Number of VUser')


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Tool that gets raw data from LoadRunner Analysis '
                    'and calculates the average per transaction and '
                    'Vugen running. LoadRunner generates the graph but '
                    'does not share the data.')
    # The three add_argument calls below are reconstructed from the
    # module docstring and the args.* attributes used further down;
    # the help strings were in Italian in the original and are
    # translated here.
    parser.add_argument('-l', '--log', dest='log_level',
                        default='WARNING',
                        help='Log level to use')
    parser.add_argument('-i', '--input', dest='input_file',
                        default='raw.txt',
                        help='Name of the file to load')
    parser.add_argument('-o', '--output', dest='output_file',
                        default='out.txt',
                        help='Name of the file to save the processed data to')
    args = parser.parse_args()
    log_level = getattr(logging, args.log_level.upper(), None)
    if not isinstance(log_level, int):
        raise ValueError('Invalid log level: %s' % args.log_level)
    logging.basicConfig(level=log_level,
                        format='%(asctime)s - %(levelname)s - %(message)s')
    logging.debug('Script called with parameters: -l %s, -i %s, -o %s'
                  % (args.log_level, args.input_file, args.output_file))
    response_time_under_load(args.input_file, args.output_file)
```

I'm not sure if pandas is really needed; in the end I just used it to read and write CSV files and remove columns that I don't need. Still, using a new module is fun :)

• High-level tip: iterrows() has some big disadvantages (see the docs), which similar methods like itertuples() do not suffer from. Of course, the best is to avoid explicit iteration altogether, whenever possible.
I will take a complete look at your code tomorrow :) – AMC Nov 15 '19 at 3:54

Unfortunately, I've never used pandas before (I've been meaning to try it for a while), so I can't comment on its usage. Looking at how it's being used here, though, it may help, but parsing a list of lines would still be fairly simple.

Honestly, this is some clean-looking code. You actually format things fairly similarly to how I like, so I don't have much to say in that regard. To nitpick though, here

```python
data_avg = {key: total/max(data_counter[key], 1)
            for (key, total) in data_totals.items()}
```

I wouldn't break up the for...in. It's not that long of a line.

I also wouldn't use a double underscore prefix for __average_and_append. If your intent was to mark it as "private", just use one leading underscore.

The one suggestion I can make though is to try out type hints. Right now, you're indicating the type in the docstring, and I don't think that it's in a format that IDEs can read readily. Type hints allow some type errors to be caught as you're writing, and show up in a more readable way in docs. For an example of their use, you could annotate response_time_under_load as:

```python
def response_time_under_load(input_file: str, output_file: str) -> None:
```

This does a few things:

• The types show up in the docs in the signature instead of buried in the doc string
• If you accidentally pass something of the wrong type, a good IDE will warn you
• From within the function, it knows input_file is a string, so it can give better autocomplete suggestions
• -> None means that the function doesn't return anything (AKA, it implicitly returns None). If you attempt to do

```python
some_var = response_time_under_load(inp, out)
```

You'll get a warning that response_time_under_load doesn't return anything.

You can also annotate the types that dictionaries and lists hold. This allows it to know the types when you do a lookup. For example, __average_and_append takes two dictionaries, a Dataframe, and an int. You don't say what the dictionary is holding though. The values are numbers, but I can't tell what the keys are. Pretend for the example that the keys and values are both integers.

```python
def _average_and_append(data_totals, data_counter, out_data, new_index):
```

Could be changed to

```python
from typing import Dict

def _average_and_append(data_totals: Dict[int, int],
                        data_counter: Dict[int, int],
                        out_data: Dataframe,  # Assuming Dataframe is imported
                        new_index: int
                        ) -> Dataframe:
```

Yes, this is quite verbose, but it conveys a lot of useful information. Dict[int, int] can also be aliased to reduce redundancy and neaten up:

```python
Data = Dict[int, int]  # Type alias

def _average_and_append(data_totals: Data,
                        data_counter: Data,
                        out_data: Dataframe,
                        new_index: int
                        ) -> Dataframe:
```

Dataframe may be generic like Dict is, so you may be able to specify the types that it holds as well. The docs for the class should mention that.

• Thanks, I'll try the type hints. Yes, __average_and_append is a private function; it started as part of calculate_response_time and was refactored out to get some more columns to write the code. That's the reason why the for..in is in two lines, initially that's the space I had. The two dict parameters are copies of base_row, so the keys are the distinct values of the 'Transaction Name' column of the input files; in data_totals the values are the sums of the response times, so dict[str,float](?), and in data_counter the values are the numbers of rows added, so dict[str,int](?)
– Serpiton Nov 14 '19 at 23:52 • @Serpiton Yes, those would be correct, except you need Dict instead of dict. Unfortunately, the built-in types aren't yet generic themselves. I think I read that this is going to be fixed in future updates though. Type hints are still fairly new. – Carcigenicate Nov 15 '19 at 0:01
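Following up on the iterrows()/itertuples() point above, here is a minimal sketch of what the swap looks like; the column names come from the question's script, while the three-row frame is made up for illustration:

```python
import pandas as pd

df = pd.DataFrame({'Vuser_ID': ['Vuser1', 'Vuser2', 'Vuser1'],
                   'Transaction_Name': ['AR00', 'AR00', 'AR01'],
                   'Transaction_Response_Time': [7.526, 7.525, 35.334]})

# itertuples() yields lightweight namedtuples instead of per-row Series
# objects, which avoids most of the overhead of iterrows().
for row in df.itertuples(index=False):
    print(row.Vuser_ID, row.Transaction_Name, row.Transaction_Response_Time)
```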
2020-02-18 23:33:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18123145401477814, "perplexity": 12704.348800482512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143815.23/warc/CC-MAIN-20200218210853-20200219000853-00010.warc.gz"}
https://publications.hse.ru/en/articles/49607648
## Optimal synthesis in an infinite-dimensional space

Manita L. A., Borisov V. F., Zelikin M. I.

For a class of optimal control problems and Hamiltonian systems generated by these problems in the space $l_2$, we prove the existence of extremals with a countable number of switchings on a finite time interval. The optimal synthesis that we construct in the space $l_2$ forms a fiber bundle with piecewise smooth two-dimensional fibers consisting of extremals with a countable number of switchings over an infinite-dimensional basis of singular extremals.
2019-11-14 20:07:38
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8331039547920227, "perplexity": 4759.0586512083355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668534.60/warc/CC-MAIN-20191114182304-20191114210304-00191.warc.gz"}
https://seos-project.eu/world-of-images/world-of-images-c02-p26b.html
## Why are time series so interesting?

A time series can tell us how a particular parameter (green cover, water area, etc.) varies with time. The idea is that if we can understand the processes that cause the changes, we may be able to model and predict future changes. This and much more is to be discovered in the SEOS tutorials Modelling of environmental processes and Time series analysis!

Lake Chad in Africa in 1972 and then later in October 2001. Source: UNEP/GRID-Sioux Falls.
2019-04-25 18:17:11
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8848212957382202, "perplexity": 1590.6349139932715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578732961.78/warc/CC-MAIN-20190425173951-20190425195951-00362.warc.gz"}
http://gams.cam.nist.gov/35.9
# §35.9 Applications

In multivariate statistical analysis based on the multivariate normal distribution, the probability density functions of many random matrices are expressible in terms of generalized hypergeometric functions of matrix argument ${}_{p}F_{q}$, with $p\leq 2$ and $q\leq 1$. See James (1964), Muirhead (1982), Takemura (1984), Farrell (1985), and Chikuse (2003) for extensive treatments.

For other statistical applications of ${}_{p}F_{q}$ functions of matrix argument see Perlman and Olkin (1980), Groeneboom and Truax (2000), Bhaumik and Sarkar (2002), Richards (2004) (monotonicity of power functions of multivariate statistical test criteria), Bingham et al. (1992) (Procrustes analysis), and Phillips (1986) (exact distributions of statistical test criteria). These references all use results related to the integral formulas (35.4.7) and (35.5.8). For applications of the integral representation (35.5.3) see McFarland and Richards (2001, 2002) (statistical estimation of misclassification probabilities for discriminating between multivariate normal populations). The asymptotic approximations of §35.7(iv) are applied in numerous statistical contexts in Butler and Wood (2002).

In chemistry, Wei and Eichinger (1993) express the probability density functions of macromolecules in terms of generalized hypergeometric functions of matrix argument, and develop asymptotic approximations for these density functions. In the nascent area of applications of zonal polynomials to the limiting probability distributions of symmetric random matrices, one of the most comprehensive accounts is Rains (1998).
2015-10-07 17:22:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 4, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9471422433853149, "perplexity": 1689.4820206674926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737875203.46/warc/CC-MAIN-20151001221755-00068-ip-10-137-6-227.ec2.internal.warc.gz"}
https://artofproblemsolving.com/wiki/index.php?title=1989_AIME_Problems/Problem_9&oldid=104246
# 1989 AIME Problems/Problem 9

## Problem

One of Euler's conjectures was disproved in the 1960s by three American mathematicians when they showed there was a positive integer $n$ such that $133^5+110^5+84^5+27^5=n^{5}$. Find the value of $n$.

## Solution 1

Note that $n$ is even, since the LHS consists of two odd and two even numbers. By Fermat's Little Theorem, we know ${n^{5}}$ is congruent to $n$ modulo 5. Hence,

$3 + 0 + 4 + 2 \equiv n\pmod{5}$

$4 \equiv n\pmod{5}$

Continuing, we examine the equation modulo 3:

$1 - 1 + 0 + 0 \equiv n\pmod{3}$

$0 \equiv n\pmod{3}$

Thus, $n$ is divisible by three and leaves a remainder of four when divided by 5. It's obvious that $n>133$, so the only possibilities are $n = 144$ or $n \geq 174$. It quickly becomes apparent that 174 is much too large, so $n$ must be $\boxed{144}$.

## Solution 2

We can cheat a little bit and approximate, since we are dealing with such large numbers. As above, $n^5\equiv n\pmod{5}$, and it is easy to see that $n^5\equiv n\pmod 2$. Therefore, $133^5+110^5+84^5+27^5\equiv 3+0+4+7\equiv 4\pmod{10}$, so the last digit of $n$ is 4. We notice that $133,110,84,$ and $27$ are all very close to or equal to multiples of 27. We can rewrite $n^5$ as approximately equal to $27^5(5^5+4^5+3^5+1^5) = 27^5(4393)$. This means $\left(\frac{n}{27}\right)^5$ must be close to $4393$. 134 will obviously be too small, so we try 144. $\left(\frac{144}{27}\right)^5=\left(\frac{16}{3}\right)^5$. Bashing through the division, we find that $\frac{1048576}{243}\approx 4315$, which is very close to $4393$. It is clear that 154 will not give any closer an answer, given the rate at which fifth powers grow, so we can safely assume that $\boxed{144}$ is the answer.
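A direct check of the claimed value (a quick sketch; Python's exact integer arithmetic makes the comparison trivial):

```python
lhs = 133**5 + 110**5 + 84**5 + 27**5
print(lhs)            # 61917364224
print(144**5)         # 61917364224
assert lhs == 144**5  # confirms n = 144
```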
2022-12-02 18:57:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 30, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9382911920547485, "perplexity": 116.40241930518778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710916.40/warc/CC-MAIN-20221202183117-20221202213117-00823.warc.gz"}
https://dsp.stackexchange.com/questions/9506/prony-method-for-frequency-estimation-in-matlab
# Prony method for frequency estimation in MATLAB

I am interested in where I can find MATLAB code for the Prony method for frequency estimation. There is a PDF article about Prony here: https://www.google.ge/#site=&source=hp&q=prony+method+for+distorted+signal&oq=prony+method+for+distorted+signal&gs_l=hp.3...2187.8079.0.8365.0.0.0.0.0.0.0.0..0.0.ernk_timediscountc..0.0...1.1.16.hp.enCwRCZuhe0&bav=on.2,or.&bvm=bv.47534661,d.bGE&fp=79e233fb1755afd8&biw=1280&bih=675 (the first result is about Prony). What I want is that, for a given signal, the Prony method should return the amplitudes, damping factors and phase shifts, and these parameters should minimize the sum of squared errors. On the MATLAB webpage there is prony for filter design, but I think that method is not what I am looking for. Could you please help me find code for the Prony method in MATLAB?

• Why don't you just use the signal processing toolbox method prony? – Peter K. Jun 8 '13 at 15:10
• Is it the same? As I understand it, it does a different job. – dato datuashvili Jun 8 '13 at 18:47
• Any parametric form of frequency estimation is trying to find the "filter" transfer function $H(z)$ that best approximates the signal's spectrum. You need to then extract the frequencies from $H(z)$. – Peter K. Jun 8 '13 at 22:02
• And the location of the peaks is used to find the frequencies, yes? – dato datuashvili Jun 9 '13 at 9:15
• Yes, that's correct... though it's not as easy as it sounds. :-) – Peter K. Jun 9 '13 at 14:20
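Since the comments only sketch the idea at a high level, here is a minimal Python/NumPy sketch of the classical polynomial Prony method; the model order p, the sampling step, the test signal, and all names are illustrative assumptions, not code from any toolbox:

```python
import numpy as np

def prony(x, p, dt):
    """Model x[n] as a sum of p complex exponentials h_k * z_k**n and
    return their amplitudes, damping factors (1/s), frequencies (Hz)
    and phases, fitted in the least-squares sense."""
    N = len(x)
    # 1) Linear prediction: x[n] = -a[0]x[n-1] - ... - a[p-1]x[n-p].
    T = np.column_stack([x[p - 1 - k:N - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(T, -x[p:], rcond=None)
    # 2) Roots of the prediction polynomial give the complex modes z_k.
    z = np.roots(np.r_[1.0, a])
    # 3) Least-squares fit of the mode weights h_k in x[n] = sum h_k z_k**n.
    V = np.vander(z, N, increasing=True).T   # rows are n = 0..N-1
    h, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    damping = np.log(np.abs(z)) / dt
    freq = np.angle(z) / (2 * np.pi * dt)
    return np.abs(h), damping, freq, np.angle(h)

# Illustrative test: one damped 50 Hz cosine sampled at 1 kHz.
dt = 1e-3
n = np.arange(100)
x = 2.0 * np.exp(-5.0 * n * dt) * np.cos(2 * np.pi * 50 * n * dt + 0.3)
amp, damp, freq, phase = prony(x, p=2, dt=dt)
print(freq)   # ~ [+50, -50] Hz (conjugate pair)
print(damp)   # ~ [-5, -5] per second
print(amp)    # ~ [1, 1]; |h_k| is A/2 for a real cosine split into a pair
```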
2019-12-07 14:36:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4875646233558655, "perplexity": 2060.5369259772283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540499439.6/warc/CC-MAIN-20191207132817-20191207160817-00338.warc.gz"}
https://amathew.wordpress.com/tag/acyclic-spaces/
I recently read E. Dror's paper "Acyclic spaces," which studies the category of spaces with vanishing homology groups. It turns out that this category has a fair bit of structure; in particular, it has a theory resembling the theory of Postnikov systems. In this post and the next, I'd like to explain how the results in Dror's paper show that the decomposition is really a special case of the notion of a Postnikov system, valid in a general ${\infty}$-category. Dror didn't have this language available, but his results fit neatly into it. Let ${\mathcal{S}}$ be the ${\infty}$-category of pointed spaces. We have a functor $\displaystyle \widetilde{C}_*: \mathcal{S} \rightarrow D( \mathrm{Ab})$ into the derived category of abelian groups, which sends a pointed space to its reduced chain complex. This functor preserves colimits, and it is in fact uniquely determined by this condition and the fact that ${\widetilde{C}_*(S^0)}$ is ${\mathbb{Z}[0]}$. We can look at the subcategory ${\mathcal{AC} \subset \mathcal{S}}$ consisting of spaces sent by ${\widetilde{C}_*}$ to zero (that is, to a contractible complex). Definition 1 Spaces in ${\mathcal{AC}}$ are called acyclic spaces. The subcategory ${\mathcal{AC} \subset \mathcal{S}}$ is closed under colimits (as ${\widetilde{C}_*}$ is colimit-preserving). It is in fact a very good candidate for a homotopy theory: that is, it is a presentable ${\infty}$-category. In other words, it is a homotopy theory that one might expect to describe by a sufficiently nice model category. I am not familiar with the details, but I believe that the process of right Bousfield localization (with respect to the class of acyclic spaces) can be used to construct such a model category. (more…)
2020-04-05 04:50:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9128835797309875, "perplexity": 153.08381828944425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370528224.61/warc/CC-MAIN-20200405022138-20200405052138-00324.warc.gz"}
http://parabix.costar.sfu.ca/changeset/1348/docs/HPCA2012/09-pipeline.tex
# Changeset 1348 for docs/HPCA2012/09-pipeline.tex

Ignore:
Timestamp: Aug 23, 2011, 1:02:30 AM (8 years ago)
Message: new abstract for new intro
File: 1 edited

### Legend: Unmodified r1347

Moreover, using multiple cores, we can further improve the performance of Parabix while keeping the energy consumption at the same level. A typical approach to parallelizing software, data parallelism, requires nearly independent data. However, the nature of XML files makes them hard to partition nicely for data parallelism. Several approaches have been used to address this problem. A preparsing phase has been proposed to help partition the XML document \cite{dataparallel}. The goal of this preparsing is to determine the tree structure of the XML document so that it can be used to guide the full parsing in the next phase. Another data-parallel algorithm is called ParDOM \cite{Shah:2009}. It first builds partial DOM node tree structures for each data segment and then links them using preorder numbers that have been assigned to each start element (to determine the ordering among siblings) and a stack to manage the parent-child relationship between elements.

Data parallelism approaches introduce a lot of overhead to resolve the data dependencies between segments. Therefore, instead of partitioning the data into segments and assigning different data segments to different cores, we propose a pipeline parallelism strategy that partitions the process into several stages and lets each core work on one single stage.

The interface between stages is implemented using a circular array, where each entry consists of all ten data structures for one segment as listed in Table \ref{pass_structure}. Each thread keeps an index of the array ($I_N$), which is compared with the index ($I_{N-1}$) kept by its previous thread before processing the segment. If $I_N$ is smaller than $I_{N-1}$, thread $N$ can start processing segment $I_N$; otherwise the thread keeps reading $I_{N-1}$ until $I_{N-1}$ is larger than $I_N$. The time consumed by continuously loading the value of $I_{N-1}$ and comparing it with $I_N$ will later be referred to as stall time. When a thread finishes processing the segment, it increases the index by one.

\begin{table*}[t]
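To make the index handshake concrete, here is a minimal Python sketch of the scheme described above; the two-stage split, the buffer size, and the toy per-stage work are illustrative assumptions (the actual implementation is C++ with ten data structures per ring entry):

```python
import threading

BUFFER_SIZE = 8                      # illustrative ring size
ring = [None] * BUFFER_SIZE          # circular array of per-segment data
index = [0, 0]                       # index[0] = I_0 (stage 0), index[1] = I_1 (stage 1)

def stage0(segments):
    for seg in segments:
        while index[0] - index[1] >= BUFFER_SIZE:
            pass                     # wrap-around guard: don't lap the next stage
        ring[index[0] % BUFFER_SIZE] = seg * 2   # stand-in for real stage work
        index[0] += 1                # publish: segment I_0 is ready

def stage1(n_segments, results):
    while index[1] < n_segments:
        while not (index[1] < index[0]):
            pass                     # stall time: wait until I_1 < I_0
        results.append(ring[index[1] % BUFFER_SIZE] + 1)   # second stage's work
        index[1] += 1

segments, results = list(range(20)), []
t0 = threading.Thread(target=stage0, args=(segments,))
t1 = threading.Thread(target=stage1, args=(len(segments), results))
t1.start(); t0.start(); t0.join(); t1.join()
print(results)   # [1, 3, 5, ...]: every segment went through both stages, in order
```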
2019-05-25 10:10:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38646700978279114, "perplexity": 909.8833735912013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257939.82/warc/CC-MAIN-20190525084658-20190525110658-00032.warc.gz"}
https://www.homebuiltairplanes.com/forums/threads/narfis-scratch-built-zenith-750-super-duty.34296/page-5
# Narfi's Scratch Built Zenith 750 Super Duty

#### Topaz

##### Super Moderator
Staff member
Log Member

What, you mean it gets cold where you are? ...

Hey, come on, VB! It gets cold here in SoCal, too! I mean, it's likely to get down into the mid-seventies this week! I'll need to break out the parka!

#### narfi

##### Well-Known Member
Log Member

Hey, come on, VB! It gets cold here in SoCal, too! I mean, it's likely to get down into the mid-seventies this week! I'll need to break out the parka!

Well it's been too cold for even me the last week or so, but it warmed up 20-30 degrees so it's warm enough to snow now and I'm back working in the tent

#### Topaz

##### Super Moderator
Staff member
Log Member

Well it's been too cold for even me the last week or so, but it warmed up 20-30 degrees so it's warm enough to snow now and I'm back working in the tent

Oh you guys with your Metric. So yes, 68-86°F. Lovely weather.

#### narfi

##### Well-Known Member
Log Member

As mentioned above it warmed up to around 25-28ish °F, which is warm enough to get our first snow to stick for the winter. I was able to take off my jacket and just wore a thick sweatshirt with a lab coat over it to keep most of the dust off. 2 more hours got the cutting templates and form blocks done for the rear wing ribs. My wife came out and helped me unroll the aluminum and get it into the rack. Left out a sheet of .016 to start cutting up as well as the .020 I have already cut into. I had made the two stabilizer end ribs, but for the Super Duty there are actually 2 end ribs on both sides, as the forward spar doesn't extend to the tip.

Total time spent building: 24 hours
Total Cost: $8276
Airplane + consumables + project specific costs: $5340
Tools, etc. I will keep for future projects: $2936

#### narfi

##### Well-Known Member
Log Member

Yes.... it had been just under 0°F all week, so when it warms up ~30 degrees it feels pretty nice

#### narfi

##### Well-Known Member
Log Member

About 10 more hours this weekend got some visible progress. Cut out and bent two more stabilizer end ribs, since each side uses 2 instead of one with the partial-length spar. Then cut and bent all the rest of the horizontal and elevator ribs as well as the rear wing ribs. Still need to cut and flange the lightening holes in the wing ribs; the tool kit that has the hole cutter from Aircraft Spruce is still backordered, so I may end up with another ghetto set-up since it worked so well making the dies. The 1/4" end bearing router bit is supposed to get here from Amazon this week; tracking says it was in state Friday. Then I can cut out the slat and flaperon ribs; they are smaller pieces with tighter radius flanges, so the 3/8" I have been using is too big for them. The cutting and bending of all the ribs went very quick and easy IMO. I probably spent more time making the templates and forms, but it was well worth the time and effort; I am quite pleased with how they look. (Especially knowing how much (Textron?) Piper, Cessna and Beech charge for each little piece.)

Total time spent building: 34 hours
Total Cost: $8276
Airplane + consumables + project specific costs: $5340
Tools, etc. I will keep for future projects: $2936

#### narfi

##### Well-Known Member
Log Member

Cut the lightening holes in all the rear wing ribs and flanged them. Used my router with a piece of plywood screwed to it as my hole cutter, with 1/4" holes and a bolt as my pivot point in the tooling holes. For the first hole I cut the center hole to the larger hole dimensions, so that rib was scrap..

After finishing, I cut out a replacement rib blank and bent it over the forming blocks. I had my ghetto hole cutter set to the smallest hole size so I cut it first...... however, it was only supposed to have the 2 larger holes cut in it so reinforcement could be riveted to it on the fuel bay.. so in the end I have 1 extra rib (I still flanged it, so it is a good completed rib) and one scrap rib.

Flanging with the dies was easy just using a rubber mallet. I was just careful to keep them square and hold pressure down on the male die once started to prevent it bouncing. I didn't just wail away at it with the mallet, but used more controlled hits, mostly centered to start, then finished off around the perimeter once it was mostly formed down. On the Zenith forum someone posted yesterday having issues with his ribs being warped after hand flanging (not using a die), so I paid close attention to mine to see the difference. You can see it is stretched some, but it snaps back straight when the edge flanges are held at 90°, so I think it is ok.

All in all, not bad for an evening after work

Total time spent building: 37.5 hours
Total Cost: $8276
Airplane + consumables + project specific costs: $5340
Tools, etc. I will keep for future projects: $2936

#### narfi

##### Well-Known Member
Log Member

2.5hrs last night. Smaller end bearing router bits came in, so I cut out all the slat and flaperon blanks.. Cut out all the flaperon blanks individually, which took most of the time, then got impatient and stacked up all the slat blanks to cut at the same time. Not a good idea with cheap bearing bits... I promptly burned it up and blew up the bearing. I'll probably need to redo one or two of them, as they weren't quite aligned in the stack and are missing some of the edges.. Lesson learned: all things in moderation. Maybe 3 or 4 stacked to save time, but don't try to cut 10 blanks at once with cheap tools. Still happy with the results though

Total time is up to 40hrs, or a standard work week. As a hobby it feels like I have got a lot done, but if I was on the clock I might be ashamed to have so little to show...... It is a hobby though so all good

Total time spent building: 40 hours
Total Cost: $8276
Airplane + consumables + project specific costs: $5340
Tools, etc. I will keep for future projects: $2936

#### Victor Bravo

##### Well-Known Member
HBA Supporter

Excellent progress Narfi! Keep pounding that aluminum at high speed!

#### rbarnes

##### Well-Known Member

Narfi, did they tell you when the complete dimensional drawings will be released? Ordered a set of plans and was disappointed when only assembly drawings showed up, without them even mentioning that the actual plans wouldn't be included yet.

#### narfi

##### Well-Known Member
Log Member

No, they said they are hoping to have them finished sometime this winter, but I'm not really sure. I called before ordering and talked to Roger. Told him I wasn't in a huge rush but wanted to get started making parts this winter to keep me out of trouble, then maybe start assembly next summer or winter. He included the blueprints for the 750 STOL as well as the 750 Super Duty assembly drawings when I ordered them.

Since I had talked to him about it, their order desk knew what to expect with my order, and he had highlighted some items on the STOL drawings for me to start with. I have then called him each week, and he has told me about more parts still in the STOL drawings that are compatible with the Super Duty, so I can build them. I think the biggest issues will be spars and some of those other things; it seems the rudder will probably be different as well.....

You should call Roger on Monday, explain you bought the plans hoping to start building, and ask if they could send you some drawings to get started on until they have the plans finished.

#### narfi

##### Well-Known Member
Log Member

2.5hrs got all the slat and flaperon ribs bent. 1hr yesterday got the wing nose rib forms and cutting templates ready, but while it was in the teens °F the wind was cutting through the tent pretty well and mechanics gloves just weren't enough :/ 5° this morning, so progress may slow down a bit if my body goes into hibernation mode. Will see

Total time spent building: 43.5 hours
Total Cost: $8276
Airplane + consumables + project specific costs: $5340
Tools, etc. I will keep for future projects: $2936

#### Little Scrapper

##### Well-Known Member
HBA Supporter
Log Member

Awesome project!

#### narfi

##### Well-Known Member
Log Member

1hr was all my fingers could take of the cold last night; hot coffee and rum don't seem to help the toes and fingers much. Got the wing butt nose rib formers done. Picture includes the main nose rib I did Sunday; still need to grind the fluting in them, but that's pretty quick.

Someone posted yesterday stating I would spend 10s of thousands of dollars and never have a flying airplane. I wanted to quickly address that in saying that while someone should be aware of their limitations before starting a project like this, I took all precautions before beginning to ensure I was prepared to see the project through. I set myself a reasonable timeline, 3 yrs (which I currently have a hard time seeing how I can drag it out that long), and a commitment to plug away at it one bite at a time. It is fun for me, and even when it is painfully cold, I still want to be working on it and will push to put in that hour or two each day, because I enjoy it and it is therapeutic for the mind and body to end a day with no stress, just focusing on a fun productive project.

Total time spent building: 44.5 hours
Total Cost: $8276
Airplane + consumables + project specific costs: $5340
Tools, etc. I will keep for future projects: $2936

Last edited:

#### Victor Bravo

##### Well-Known Member
HBA Supporter

Someone posted yesterday stating I would spend 10s of thousands of dollars and never have a flying airplane.

Two of my associates from Brooklyn, Louie the Fish hook and Golf Club Jimmy, want to have a word with whoever said that. Additionally, I have made special arrangements to give a pair of Cleco pliers to Moe Howard (formerly of the Three Stooges) to adjust that person's nose after the first two are done.

#### narfi

##### Well-Known Member
Log Member

2.5hrs. Made the form and cutting blocks for the wing root ribs, and ground out the fluting grooves in them and in the nose rib blocks I had forgotten to grind them into. The root rib's top flange goes from 60° at the nose to 90° at the rear spar. I puzzled over the best way to do that and decided to just make the forms like all the others and, freehand, not beat it down all the way at the front.

As easy as cutting out and bending is once the forms are made, I can try a couple if I mess up one or two, and I only need a good left and a good right, so it's not worth trying to overthink the formers..... We will see

Total time spent building: 47 hours
Total Cost: $8276
Airplane + consumables + project specific costs: $5340
Tools, etc. I will keep for future projects: $2936

#### narfi

##### Well-Known Member
Log Member

0 hours. Went to start building the rear bulkheads last night and found a contradiction in the dimensions, so I called it early and binge-watched some shows my son was wanting to watch. Called Roger this morning, and he is looking into it, as well as more parts for me to build.

Anyone know what scratch builders are doing for the tips of the flaperons, slats and wings? They all seem to be plastic or fiberglass parts. Just thinking ahead....
2021-02-28 22:38:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3584055006504059, "perplexity": 3784.4036177399435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361776.13/warc/CC-MAIN-20210228205741-20210228235741-00542.warc.gz"}
http://openstudy.com/updates/528fd939e4b0424725722642
## Emily778 one year ago

WHAT IS THE SOLUTION to 2x^2+x+2=0?

1. Emily778

2. Emily778

umm, hello?

it has imaginary roots $x=-0.25\pm 0.968i$

5. phi

For this problem, you should use the quadratic formula; see http://www.khanacademy.org/math/trigonometry/polynomial_and_rational/quad_formula_tutorial/v/using-the-quadratic-formula for how to do it. You will get a negative number inside the square root, which means a complex number.

6. Emily778

almost, the bottom should be 2*a = 4

$x= \frac{-1 \pm \sqrt{15}\, i}{4}$
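A quick check of the stated roots (a sketch using Python's cmath, which handles the negative discriminant directly):

```python
import cmath

a, b, c = 2, 1, 2
disc = cmath.sqrt(b * b - 4 * a * c)            # sqrt(-15) = 3.873...j
roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
print(roots)   # ((-0.25+0.968...j), (-0.25-0.968...j))
```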
2015-05-30 14:49:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7881744503974915, "perplexity": 1646.505457644586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207932182.13/warc/CC-MAIN-20150521113212-00114-ip-10-180-206-219.ec2.internal.warc.gz"}
https://jpt.spe.org/shale-ceo-parent-child-challenge-well-declines-we-know
# Shale CEO on Parent-Child Challenge, Well Declines: We Know

## Encana CEO Doug Suttles assures that shale executives are acutely aware of the parent-child well challenge, and he doesn't think it's "a big threat" to the sector.

Much has been made recently about the disparities in production between parent and child wells in US shale basins. The increased attention on the issue is part of broader concern among investors about the ability of operators to maintain high levels of output over the next few years.

However, Doug Suttles, Encana president and chief executive officer, assures that shale executives are acutely aware of the parent-child challenge. His company has been "very public about this for 5 years now," he said before an audience largely consisting of the investor community at CERAWeek by IHS Markit this week in Houston. He ultimately doesn't think it's "a big threat" to the shale sector.

Encana's approach to dealing with interwell communication, called "cube development," is now used across multiple basins, with the operator's first cube well on its Oklahoma STACK acreage slated to be spudded in the second quarter. The approach involves developing dozens of wells at a time with multiple rigs and frac spreads on a single pad. It entails viewing the subsurface from a 3D perspective because parent-child relationships are both lateral and vertical, Suttles explained in a recent investor call.

Similar development strategies have been deployed by fellow operators such as Occidental Petroleum and QEP Resources, particularly in the Permian Basin of West Texas and southeastern New Mexico, where acreage has become more crowded and decline rates have become steeper. Encana has 118,000 net acres in the basin.

When questioned about high shale decline rates, Suttles remarked that declines are "incredibly predictable," and for the bigger operators, "it's not that hard to manage." Encana expects to produce up to 600,000 BOE/D this year, "and every bit of it comes out of a horizontal wellbore," he said.

But some companies are handling these challenges better than others, he noted, and Encana can learn a lot from nearby operators such as Marathon Oil. "One of the things we're real big on is data trades because we get a level of data that's not public," he said. Most mature companies "have been working on [the parent-child issue] for a while, and we trade information with a number of those companies."

## Scale is Important

Encana and Marathon benefit from having large contiguous acreage positions that allow for big development projects, experimentation, and capital efficiency. "Certainly I think having scale in a basin is very important," said Lee Tillman, Marathon Oil president and chief executive officer, during the shale-focused panel discussion of executives.

Above the ground, scale gives operators the ability to more effectively work with their suppliers and midstream providers. "They know you're going to be around to pay your bills in the future. I think that's important. It gives you the ability to negotiate," Tillman said.

At the same time, having a multi-basin strategy is also necessary "because, at the end of the day, if you're going to generate free cash flow, you have to have some assets that are actually in that more mature phase that can drive cash flow," he said. Tillman named the Eagle Ford and Bakken shales, respectively in South Texas and North Dakota, as Marathon's most capital efficient positions.
“I would compare the returns in the Eagle Ford to anything,” he said, given its $4-5 million/well completion costs, oiliness, and Louisiana light sweet pricing. “There's really nothing today on a zone-by-zone basis that can touch the Eagle Ford.”

Those basins allow the operator to delineate and test concepts in its less mature areas in the Oklahoma SCOOP and STACK plays and the Permian's northern Delaware Basin, “while still meeting that mandate from our investors” to return them cash, he said.

That stability is critical because, Tillman said, “The worst thing you can do for capital efficiency is to cycle activity up and down, and I think as an industry that's where we've been.” For example, capital budgets increased last year as crude prices remained high and then decreased this year after last fall's price drop-off. “That's why we have an execution-level plan at a minimum across a full 2 years—our budget year and 1 more year.”

Marathon has planned for $50/bbl oil over the next couple of years and says its breakevens are well below that level. “The lowest-cost, highest-margin producer wins at any commodity price environment,” he said.

Shale's “only real limiter for the foreseeable future,” Suttles said, will be capital. Last year's rise in spending “pushed investors away from the sector for a while because they were concerned about discipline.” A lack of discipline means a lack of longer-term investment.

## Engineers and Geoscientists Matter

Another benefit of Marathon's multi-basin strategy is the wide-ranging development experience and perspective gained by its engineers and geoscientists who work across the different positions. “They see assets at different points in their development cycle,” Tillman explained. “They see different geologies. They see different completion designs, different spacing designs, different subsurface workflows that we use to be more predictive.” These learnings can then be transferred across the organization.

While “the rock” and technology are key drivers for operator success, people remain the most critical factor, he said.

Suttles agreed, noting that good organizational culture is needed to continually innovate and extract value from shale. He pointed out that Encana's cube approach is a result of a culture of collaboration within his company. “The first time we ever did multiple drilling rigs on a single pad followed by multiple frac spreads on a single pad was in the Duvernay [Shale] in Canada—the first time it had ever been done in the world.” Deployment of the cube wouldn't have evolved to where it is now without Encana's Eagle Ford team mastering high-intensity completions and its Permian team pioneering 3D development.

Every cube is required to have a cycle time of 90 days or fewer, which is financially efficient but also accelerates learnings in the development concept. “We get data back every 90 days so the next cube learns from the last one” and is better than the last one, he said.

Mark Gunnin, president of Hunt Oil, added that successful shale operators cultivate “a culture of risk-taking innovation” and personnel are empowered to maintain that culture. Suttles said the unique thing about shale development is that “you can constantly experiment. You get results back very quickly. The experiments are cheap, and if they work, you can replicate them thousands of times.”
2021-08-02 16:38:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18482355773448944, "perplexity": 6091.778664252174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154321.31/warc/CC-MAIN-20210802141221-20210802171221-00068.warc.gz"}
https://www.physicsforums.com/threads/a-math-problem-related-to-betting.775676/
Homework Help: A math problem related to betting

1. Oct 12, 2014

the_man

I just want to say this is nothing illegal just because I've used the word "betting". The thing is that I have a math problem and I need someone to help me find the best solution. I don't know if this is the right place to post it. If not, please don't delete it, just move it. Thanks.

1. The problem statement, all variables and given/known data

So for those who are not familiar with betting, here is an example:

STAKE x ODDS = PAYOUT ---- 10*3 = 30
PROFIT = PAYOUT - STAKE ---- PROFIT = 30 - 10 -> +20

But the problem is that my bookmaker takes a 7% commission on PROFIT. What I want is to adjust my numbers so that my winnings/profits/stats are like they would be if there were no commission. In the previous example my profit was +20; when the bookie takes 7%, I'm left with +18.6. But I don't want that, I want my profit to be +20, so that's why I made a little formula to adjust my STAKE so that my profit is +20 after the 7% commission. You can find the formula in the picture (bottom-right corner) or under "Relevant equations".

Example 1

But let's forget about my stake-adjusting formula for a second and let me show you an example. If I bet: 10*3 = +20 and I LOSE, my overall balance is -10, and I bet again 10*3 = +20 and I WIN, the bookie takes 7% and I have a profit of +18.6, which I add to my overall balance and I get +8.6.

Example 2

Now let's do the same thing but with my adjusted stake, which instead of 10 is now 10.75. That number allows me to win the original profit of +20 (in our example) after commission, instead of +18.6 when the stake is 10. So let's do this: If I bet 10.75*3 = +21.5 and I LOSE, my overall balance is -10.75, and I bet again 10.75*3 = +21.5 and I WIN, the bookie takes 7% and I have a profit of +20, which I add to my overall balance and I get +9.25.

Example 3

Now you see the problem. The right thing, if there were no commission, would look like this, and this is what I want to accomplish. If I bet: 10*3 = +20 and I LOSE, my overall balance is -10, and I bet again 10*3 = +20 and I WIN, there is no commission because it's a perfect world we live in, and I have a profit of +20, which I add to my overall balance and I get +10.

Now, is there any way that I can accomplish the same numbers as shown in Example 3 with commission included? The thing to remember is that the bookie only takes 7% on winning bets, when I make a profit. I am calculating my profit and doing my stats at the end of the month. I have around 180 bets a month with a winning rate of 38%-45%. Maybe I can somehow recover it over the long term.

2. Relevant equations

A) STAKE = PROFIT / ((1 - 7/100) * (ODDS - 1))

3. Picture attached HERE

2. Oct 12, 2014

RUber

It looks like you are asking if it is possible to risk $10 at 3:1 odds and receive a net $20 in winnings after the bookie gets his share. To that, I would say no.

If you are asking if there is a way for WIN + LOSS = $10, then probably.
WIN + LOSS = $10
WIN = (BET*3)(1 - .07) - BET
LOSS = BET
WIN = (3 LOSS)(.93) - LOSS
WIN = 1.79 LOSS
Plug this back into the original WIN + LOSS = 10 and there you have it.
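A quick numeric check of the formula in "Relevant equations" (a minimal sketch; the 7% commission, the 3.0 odds, and the +20 profit target are the numbers from the thread):

```python
def adjusted_stake(profit, odds, commission=0.07):
    """Stake needed so that profit after commission equals `profit`.

    The bookmaker takes `commission` on winning profit only, so the gross
    profit stake*(odds - 1) must satisfy gross*(1 - commission) = profit.
    """
    return profit / ((1 - commission) * (odds - 1))

stake = adjusted_stake(profit=20, odds=3)
print(round(stake, 2))                 # 10.75, as in Example 2

gross_profit = stake * (3 - 1)         # profit before the bookie's cut
print(round(gross_profit * 0.93, 2))   # 20.0 left after the 7% commission
```

Note that this only restores the profit on winning bets; a losing bet still costs the larger adjusted stake (10.75 instead of 10), which is exactly the imbalance Example 2 shows.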
2018-07-20 21:17:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.702809751033783, "perplexity": 1013.6822740092931}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591831.57/warc/CC-MAIN-20180720193850-20180720213850-00583.warc.gz"}
https://www.c-sharpcorner.com/interview-question/what-will-be-the-value-of-b-int-a-int-b-a-a
What will be the value of b?

```csharp
int a = 2;
int b = a++ + a++;
```

Asked Sep 18, 2013

• Sep 24, 2013: a = 2; b = a++ + a++; In C#, the operands of + are evaluated left to right. The left a++ yields the current value 2 and then increments a to 3; the right a++ then yields 3 and increments a to 4. The addition and assignment use the yielded values, so b = 2 + 3 = 5, and a ends up as 4.

• Sep 18, 2013:

```csharp
int a = 2;
int b = a++ + a++;
int c = ++a + a++ + a++;
```

+---+----+-----+------+----+
|   | C  | C++ | Java | C# |
+---+----+-----+------+----+
| a | 7  | 7   | 7    | 7  |
| b | 4  | 4   | 5    | 5  |
| c | 15 | 15  | 16   | 16 |
+---+----+-----+------+----+

(In C and C++ these expressions invoke undefined behavior, so those columns show only what one particular compiler produced.)

• Dec 1, 2016: Really sorry... it happened because of my poor internet connection...

• Dec 1, 2016: a = 4 and b = 5

• Feb 5, 2016: value of a = 4 and b = 5

• Jun 16, 2015: Depending on the compiler you use... This is a bad programming style.

• May 27, 2015: 5

• Apr 29, 2015: b = 5

• Apr 29, 2015:

```csharp
int a = 2;
int b = a++ + a++;
// left to right: the first a++ yields 2, then a = 3, so the second a++ yields 3, after which a = 4
// b = 2 + 3;
// b = 5;
```

• Mar 22, 2015: 4

• Sep 17, 2014: 4

• Jul 17, 2014: 5

• May 24, 2014: 5

• Mar 31, 2014: 5

• Feb 17, 2014: 6

• Jan 8, 2014: 5

• Dec 18, 2013:

```csharp
int a = 2;
int b = a++;
int c = a++;
int d = b + c;
Console.WriteLine("b={0}", b);
Console.WriteLine("c={0}", c);
Console.WriteLine("d={0}", d);
Console.ReadLine();
```

• Apr 2, 2019: int a = 3; int b = 4; a = a & 2; a = a | b;

• Jul 29, 2018: 6
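Since C# pins down the evaluation order, the b = 5 and a = 4 result can be modeled in any language. A small Python sketch of the pre- and post-increment semantics (pre_inc and post_inc are hypothetical helpers written for this illustration, since Python has no ++ operator):

```python
def pre_inc(state, key="a"):
    """Increment first, then return the new value -- models C#'s ++a."""
    state[key] += 1
    return state[key]

def post_inc(state, key="a"):
    """Return the current value, then increment -- models C#'s a++."""
    val = state[key]
    state[key] += 1
    return val

s = {"a": 2}
b = post_inc(s) + post_inc(s)               # 2 + 3
c = pre_inc(s) + post_inc(s) + post_inc(s)  # 5 + 5 + 6
print(s["a"], b, c)                         # 7 5 16, matching the C#/Java column
```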
2020-09-20 05:30:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32456570863723755, "perplexity": 12318.485674192547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400193391.9/warc/CC-MAIN-20200920031425-20200920061425-00386.warc.gz"}
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_General_Chemistry%3A_Principles_Patterns_and_Applications_(Averill)/01%3A_Introduction_to_Chemistry/1.05%3A_A_Brief_History_of_Chemistry
# 1.5: A Brief History of Chemistry

Learning Objectives

• To understand the development of the atomic model.

It was not until the era of the ancient Greeks that we have any record of how people tried to explain the chemical changes they observed and used. At that time, natural objects were thought to consist of only four basic elements: earth, air, fire, and water. Then, in the fourth century BC, two Greek philosophers, Democritus and Leucippus, suggested that matter was not infinitely divisible into smaller particles but instead consisted of fundamental, indivisible particles called atoms. Unfortunately, these early philosophers did not have the technology to test their hypothesis. They would have been unlikely to do so in any case because the ancient Greeks did not conduct experiments or use the scientific method. They believed that the nature of the universe could be discovered by rational thought alone.

Over the next two millennia, alchemists, who engaged in a form of chemistry and speculative philosophy during the Middle Ages and Renaissance, achieved many advances in chemistry. Their major goal was to convert certain elements into others by a process they called transmutation (Figure $$\PageIndex{1}$$). In particular, alchemists wanted to find a way to transform cheaper metals into gold. Although most alchemists did not approach chemistry systematically and many appear to have been outright frauds, alchemists in China, the Arab kingdoms, and medieval Europe made major contributions, including the discovery of elements such as quicksilver (mercury) and the preparation of several strong acids.

Figure $$\PageIndex{1}$$ An Alchemist at Work. Alchemy was a form of chemistry that flourished during the Middle Ages and Renaissance. Although some alchemists were frauds, others made major contributions, including the discovery of several elements and the preparation of strong acids.

## Modern Chemistry

The 16th and 17th centuries saw the beginnings of what we now recognize as modern chemistry. During this period, great advances were made in metallurgy, the extraction of metals from ores, and the first systematic quantitative experiments were carried out. In 1661, the Englishman Robert Boyle (1627–91) published The Sceptical Chymist, which described the relationship between the pressure and the volume of air. More important, Boyle defined an element as a substance that cannot be broken down into two or more simpler substances by chemical means. This led to the identification of a large number of elements, many of which were metals. Ironically, Boyle himself never thought that metals were elements.

In the 18th century, the English clergyman Joseph Priestley (1733–1804) discovered oxygen gas and found that many carbon-containing materials burn vigorously in an oxygen atmosphere, a process called combustion. Priestley also discovered that the gas produced by fermenting beer, which we now know to be carbon dioxide, is the same as one of the gaseous products of combustion. Priestley's studies of this gas did not continue as he would have liked, however. After he fell into a vat of fermenting beer, brewers prohibited him from working in their factories. Although Priestley did not understand its identity, he found that carbon dioxide dissolved in water to produce seltzer water. In essence, he may be considered the founder of the multibillion-dollar carbonated soft drink industry.

### Joseph Priestley (1733–1804)

Priestley was a political theorist and a leading Unitarian minister.
He was appointed to Warrington Academy in Lancashire, England, where he developed new courses on history, science, and the arts. During visits to London, Priestley met the leading men of science, including Benjamin Franklin, who encouraged Priestley's interest in electricity. Priestley's work on gases began while he was living next to a brewery in Leeds, where he noticed "fixed air" bubbling out of vats of fermenting beer and ale. His scientific discoveries included the relationship between electricity and chemical change, 10 new "airs," and observations that led to the discovery of photosynthesis. Due to his support for the principles of the French Revolution, Priestley's house, library, and laboratory were destroyed by a mob in 1791. He and his wife emigrated to the United States in 1794 to join their three sons, who had previously emigrated to Pennsylvania. Priestley never returned to England and died in his new home in Pennsylvania.

Despite the pioneering studies of Priestley and others, a clear understanding of combustion remained elusive. In the late 18th century, however, the French scientist Antoine Lavoisier (1743–94) showed that combustion is the reaction of a carbon-containing substance with oxygen to form carbon dioxide and water and that life depends on a similar reaction, which today we call respiration. Lavoisier also wrote the first modern chemistry text and is widely regarded as the father of modern chemistry. His most important contribution was the law of conservation of mass, which states that in any chemical reaction, the mass of the substances that react equals the mass of the products that are formed. That is, in a chemical reaction, mass is neither created nor destroyed. Unfortunately, Lavoisier invested in a private corporation that collected taxes for the Crown, and royal tax collectors were not popular during the French Revolution. He was executed on the guillotine at age 51, prematurely terminating his contributions to chemistry.

## The Atomic Theory of Matter

In 1803, the English schoolteacher John Dalton (1766–1844) expanded Proust's development of the law of definite proportions (Section 1.2 "The Scientific Method") and Lavoisier's findings on the conservation of mass in chemical reactions to propose that elements consist of indivisible particles that he called atoms (taking the term from Democritus and Leucippus). Dalton's atomic theory of matter contains four fundamental hypotheses:

1. All matter is composed of tiny indivisible particles called atoms.
2. All atoms of an element are identical in mass and chemical properties, whereas atoms of different elements differ in mass and fundamental chemical properties.
3. A chemical compound is a substance that always contains the same atoms in the same ratio.
4. In chemical reactions, atoms from one or more compounds or elements redistribute or rearrange in relation to other atoms to form one or more new compounds. Atoms themselves do not undergo a change of identity in chemical reactions.

This last hypothesis suggested that the alchemists' goal of transmuting other elements to gold was impossible, at least through chemical reactions. We now know that Dalton's atomic theory is essentially correct, with four minor modifications:

1. Not all atoms of an element must have precisely the same mass.
2. Atoms of one element can be transformed into another through nuclear reactions.
3. The compositions of many solid compounds are somewhat variable.
4. Under certain circumstances, some atoms can be divided (split into smaller particles).
These modifications illustrate the effectiveness of the scientific method; later experiments and observations were used to refine Dalton's original theory.

## The Law of Multiple Proportions

Despite the clarity of his thinking, Dalton could not use his theory to determine the elemental compositions of chemical compounds because he had no reliable scale of atomic masses; that is, he did not know the relative masses of elements such as carbon and oxygen. For example, he knew that the gas we now call carbon monoxide contained carbon and oxygen in the ratio 1:1.33 by mass, and a second compound, the gas we call carbon dioxide, contained carbon and oxygen in the ratio 1:2.66 by mass. Because 2.66/1.33 = 2.00, the second compound contained twice as many oxygen atoms per carbon atom as did the first. But what was the correct formula for each compound? If the first compound consisted of particles that contain one carbon atom and one oxygen atom, the second must consist of particles that contain one carbon atom and two oxygen atoms. If the first compound had two carbon atoms and one oxygen atom, the second must have two carbon atoms and two oxygen atoms. If the first had one carbon atom and two oxygen atoms, the second would have one carbon atom and four oxygen atoms, and so forth. Dalton had no way to distinguish among these or more complicated alternatives. However, these data led to a general statement that is now known as the law of multiple proportions: when two elements form a series of compounds, the ratios of the masses of the second element that are present per gram of the first element can almost always be expressed as the ratios of integers. (The same law holds for mass ratios of compounds forming a series that contains more than two elements.) Example $$\PageIndex{1}$$ shows how the law of multiple proportions can be applied to determine the identity of a compound.

Example $$\PageIndex{1}$$

A chemist is studying a series of simple compounds of carbon and hydrogen. The following table lists the masses of hydrogen that combine with 1 g of carbon to form each compound.

| Compound | Mass of Hydrogen (g) |
|----------|----------------------|
| A        | 0.0839               |
| B        | 0.1678               |
| C        | 0.2520               |
| D        |                      |

1. Determine whether these data follow the law of multiple proportions.
2. Calculate the mass of hydrogen that would combine with 1 g of carbon to form D, the fourth compound in the series.

Given: mass of hydrogen per gram of carbon for three compounds

Asked for:

1. ratios of masses of hydrogen to carbon
2. mass of hydrogen per gram of carbon for fourth compound in series

Strategy:

A Select the lowest mass to use as the denominator and then calculate the ratio of each of the other masses to that mass. Include other ratios if appropriate.

B If the ratios are small whole integers, the data follow the law of multiple proportions.

C Decide whether the ratios form a numerical series. If so, then determine the next member of that series and predict the ratio corresponding to the next compound in the series.

D Use proportions to calculate the mass of hydrogen per gram of carbon in that compound.

Solution

A Compound A has the lowest mass of hydrogen, so we use it as the denominator.
The ratios of the remaining masses of hydrogen, B and C, that combine with 1 g of carbon are as follows:

$$\frac{C}{A}=\frac{0.2520\text{ g}}{0.0839\text{ g}}=3.00=\frac{3}{1}\qquad\frac{B}{A}=\frac{0.1678\text{ g}}{0.0839\text{ g}}=2.00=\frac{2}{1}\qquad\frac{C}{B}=\frac{0.2520\text{ g}}{0.1678\text{ g}}=1.502\approx\frac{3}{2}$$

B The ratios of the masses of hydrogen that combine with 1 g of carbon are indeed composed of small whole integers (3/1, 2/1, 3/2), as predicted by the law of multiple proportions.

C The ratios B/A and C/A form the series 2/1, 3/1, so the next member of the series should be D/A = 4/1.

D Thus, if compound D exists, it would be formed by combining 4 × 0.0839 g = 0.336 g of hydrogen with 1 g of carbon. Such a compound does exist; it is methane, the major constituent of natural gas.

Exercise $$\PageIndex{1}$$

Four compounds containing only sulfur and fluorine are known. The following table lists the masses of fluorine that combine with 1 g of sulfur to form each compound.

| Compound | Mass of Fluorine (g) |
|----------|----------------------|
| A        | 3.54                 |
| B        | 2.96                 |
| C        | 2.36                 |
| D        | 0.59                 |

1. Determine the ratios of the masses of fluorine that combine with 1 g of sulfur in these compounds. Are these data consistent with the law of multiple proportions?
2. Calculate the mass of fluorine that would combine with 1 g of sulfur to form the next two compounds in the series: E and F.

Answers:

1. A/D = 6.0, or 6/1; B/D ≈ 5.0, or 5/1; C/D = 4.0, or 4/1; yes
2. Ratios of 3.0 and 2.0 give 1.8 g and 1.2 g of fluorine/gram of sulfur, respectively. (Neither of these compounds is yet known.)

In a further attempt to establish the formulas of chemical compounds, the French chemist Joseph Gay-Lussac (1778–1850) carried out a series of experiments using volume measurements. Under conditions of constant temperature and pressure, he carefully measured the volumes of gases that reacted to make a given chemical compound, together with the volumes of the products if they were gases. Gay-Lussac found, for example, that one volume of chlorine gas always reacted with one volume of hydrogen gas to produce two volumes of hydrogen chloride gas. Similarly, one volume of oxygen gas always reacted with two volumes of hydrogen gas to produce two volumes of water vapor (part (a) in Figure $$\PageIndex{2}$$).

Figure $$\PageIndex{2}$$ Gay-Lussac's Experiments with Chlorine Gas and Hydrogen Gas (a) One volume of chlorine gas reacted with one volume of hydrogen gas to produce two volumes of hydrogen chloride gas, and one volume of oxygen gas reacted with two volumes of hydrogen gas to produce two volumes of water vapor. (b) A summary of Avogadro's hypothesis, which interpreted Gay-Lussac's results in terms of atoms. Note that the simplest way for two molecules of hydrogen chloride to be produced is if hydrogen and chlorine each consist of molecules that contain two atoms of the element.

Gay-Lussac's results did not by themselves reveal the formulas for hydrogen chloride and water. The Italian chemist Amedeo Avogadro (1776–1856) developed the key insight that led to the exact formulas. He proposed that when gases are measured at the same temperature and pressure, equal volumes of different gases contain equal numbers of gas particles. Avogadro's hypothesis, which explained Gay-Lussac's results, is summarized here and in part (b) in Figure $$\PageIndex{2}$$:

$$\text{one volume (or particle) of hydrogen}+\text{one volume (or particle) of chlorine}\rightarrow\text{two volumes (or particles) of hydrogen chloride}$$

If Dalton's theory of atoms was correct, then each particle of hydrogen or chlorine had to contain at least two atoms of hydrogen or chlorine because two particles of hydrogen chloride were produced.
The simplest—but not the only—explanation was that hydrogen and chlorine contained two atoms each (i.e., they were diatomic) and that hydrogen chloride contained one atom each of hydrogen and chlorine. Applying this reasoning to Gay-Lussac's results with hydrogen and oxygen leads to the conclusion that water contains two hydrogen atoms per oxygen atom. Unfortunately, because no data supported Avogadro's hypothesis that equal volumes of gases contained equal numbers of particles, his explanations and formulas for simple compounds were not generally accepted for more than 50 years. Dalton and many others continued to believe that water particles contained one hydrogen atom and one oxygen atom, rather than two hydrogen atoms and one oxygen atom. The historical development of the concept of the atom is summarized in Figure $$\PageIndex{3}$$.

Figure $$\PageIndex{3}$$ A Summary of the Historical Development of the Concept of the Atom

### Summary

The ancient Greeks first proposed that matter consisted of fundamental particles called atoms. Chemistry took its present scientific form in the 18th century, when careful quantitative experiments by Lavoisier, Proust, and Dalton resulted in the law of definite proportions, the law of conservation of mass, and the law of multiple proportions, which laid the groundwork for Dalton's atomic theory of matter. In particular, Avogadro's hypothesis provided the first link between the macroscopic properties of a substance (in this case, the volume of a gas) and the number of atoms or molecules present.

### KEY TAKEAWAY

• The development of the atomic model relied on the application of the scientific method over several centuries.

### CONCEPTUAL PROBLEMS

1. Define combustion and discuss the contributions made by Priestley and Lavoisier toward understanding a combustion reaction.
2. Chemical engineers frequently use the concept of "mass balance" in their calculations, in which the mass of the reactants must equal the mass of the products. What law supports this practice?
3. Does the law of multiple proportions apply to both mass ratios and atomic ratios? Why or why not?
4. What are the four hypotheses of the atomic theory of matter?
5. Much of the energy in France is provided by nuclear reactions. Are such reactions consistent with Dalton's hypotheses? Why or why not?
6. Does 1 L of air contain the same number of particles as 1 L of nitrogen gas? Explain your answer.

### NUMERICAL PROBLEMS

Please be sure you are familiar with the topics discussed in Essential Skills 1 (Section 1.9 "Essential Skills 1") before proceeding to the Numerical Problems.

1. One of the minerals found in soil has an Al:Si:O atomic ratio of 0.2:0.2:0.5. Is this consistent with the law of multiple proportions? Why or why not? Is the ratio of elements consistent with Dalton's atomic theory of matter?

2. Nitrogen and oxygen react to form three different compounds that contain 0.571 g, 1.143 g, and 2.285 g of oxygen/gram of nitrogen, respectively. Is this consistent with the law of multiple proportions? Explain your answer.

3. Three binary compounds of vanadium and oxygen are known. The following table gives the masses of oxygen that combine with 10.00 g of vanadium to form each compound.

| Compound | Mass of Oxygen (g) |
|----------|--------------------|
| A        | 4.71               |
| B        | 6.27               |
| C        |                    |

   1. Determine the ratio of the masses of oxygen that combine with 3.14 g of vanadium in compounds A and B.
   2. Predict the mass of oxygen that would combine with 3.14 g of vanadium to form the third compound in the series.
4. Three compounds containing titanium, magnesium, and oxygen are known. The following table gives the masses of titanium and magnesium that react with 5.00 g of oxygen to form each compound.

| Compound | Mass of Titanium (g) | Mass of Magnesium (g) |
|----------|----------------------|-----------------------|
| A        | 4.99                 | 2.53                  |
| B        | 3.74                 | 3.80                  |
| C        |                      |                       |

   1. Determine the ratios of the masses of titanium and magnesium that combine with 5.00 g of oxygen in these compounds.
   2. Predict the masses of titanium and magnesium that would combine with 5.00 g of oxygen to form another possible compound in the series: C.
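As a quick arithmetic check of Example $$\PageIndex{1}$$ and Exercise $$\PageIndex{1}$$ above, a short Python sketch (nothing beyond the masses already given in the text):

```python
# masses of hydrogen (g) that combine with 1 g of carbon, from the example
h_masses = {"A": 0.0839, "B": 0.1678, "C": 0.2520}
base = h_masses["A"]                        # lowest mass, used as denominator
for name, mass in sorted(h_masses.items()):
    print(f"{name}/A = {mass / base:.2f}")  # 1.00, 2.00, 3.00

# the series 2/1, 3/1 suggests D/A = 4/1
print(f"D: {4 * base:.3f} g of hydrogen per g of carbon")  # 0.336 g (methane)

# exercise: masses of fluorine (g) that combine with 1 g of sulfur
f_masses = {"A": 3.54, "B": 2.96, "C": 2.36, "D": 0.59}
for name, mass in sorted(f_masses.items()):
    print(f"{name}/D = {mass / f_masses['D']:.1f}")  # 6.0, 5.0, 4.0, 1.0
```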
2021-09-16 15:56:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6181600093841553, "perplexity": 1275.310603082906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053657.29/warc/CC-MAIN-20210916145123-20210916175123-00076.warc.gz"}
https://mozilla.report/post/examples/new_report.kp/index.html
NOTE: In the TL,DR, optimize for clarity and comprehensiveness. The goal is to convey the post with the least amount of friction, especially since ipython/beakers require much more scrolling than blog posts. Make the reader get a correct understanding of the post's takeaway, and the points supporting that takeaway, without having to strain through paragraphs and tons of prose. Bullet points are great here, but are up to you. Try to avoid academic-paper-style abstracts.

• Having a specific title will help avoid having someone browse posts and only find vague, similar-sounding titles
• Having an itemized, short, and clear tl,dr will help readers understand your content
• Setting the reader's context with a motivation section makes someone understand how to judge your choices
• Visualizations that can stand alone, via legends, labels, and captions, are more understandable and powerful

Motivation

NOTE: optimize in this section for context setting, as specifically as you can. For instance, this post is generally a set of standards for work in the repo. The specific motivation is to have the least friction to the current workflow while being able to painlessly aggregate it later.

The knowledge repo was created to consolidate research work that is currently scattered in emails, blogposts, and presentations, so that people don't redo their work.

Putting Big Bold Headers with Clear Takeaways Will Help Us Aggregate Later

```python
import pandas as pd
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
from moztelemetry.dataset import Dataset
from moztelemetry import get_pings_properties, get_one_ping_per_client
```

Unable to parse whitelist (/home/hadoop/anaconda2/lib/python2.7/site-packages/moztelemetry/histogram-whitelists.json). Assuming all histograms are acceptable.

The goal of this example is to determine if Firefox has a similar startup time distribution on all Operating Systems. Let's start by fetching 10% of Telemetry submissions for a given submission date…

```python
Dataset.from_source("telemetry").schema
```

[u'submissionDate', u'sourceName', u'sourceVersion', u'docType', u'appName', u'appUpdateChannel', u'appVersion', u'appBuildId']

```python
pings = Dataset.from_source("telemetry") \
    .where(docType='main') \
    .where(submissionDate="20161101") \
    .where(appUpdateChannel="nightly") \
    .records(sc, sample=0.1)
```

… and extract only the attributes we need from the Telemetry submissions:

```python
subset = get_pings_properties(pings, ["clientId",
                                      "environment/system/os/name",
                                      "payload/simpleMeasurements/firstPaint"])
```

To prevent pseudoreplication, let's consider only a single submission for each client. As this step requires a distributed shuffle, it should always be run only after extracting the attributes of interest with get_pings_properties.

```python
subset = get_one_ping_per_client(subset)
```

Let's group the startup timings by OS:

```python
grouped = subset.map(lambda p: (p["environment/system/os/name"],
                                p["payload/simpleMeasurements/firstPaint"])).groupByKey().collectAsMap()
```

And finally plot the data:

```python
frame = pd.DataFrame({x: np.log10(pd.Series(list(y))) for x, y in grouped.items()})
plt.figure(figsize=(17, 7))
frame.boxplot(return_type="axes")
plt.ylabel("log10(firstPaint)")
plt.xlabel("Operating System")
plt.show()
```

NOTE: in graphs, optimize for being able to stand alone. Put enough labeling in your graph to be understood on its own. When aggregating and putting things in presentations, you won't have to recreate and add code to each plot to make it understandable without the entire post around it. Will it be understandable without several paragraphs?
Appendix

Put all the stuff here that is not necessary for supporting the points above. Good place for documentation without distraction.
2019-07-23 18:10:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3164043128490448, "perplexity": 3451.161425187343}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529481.73/warc/CC-MAIN-20190723172209-20190723194209-00394.warc.gz"}
http://slideplayer.com/slide/2397360/
# Equations of Motion (Lec02.ppt)

## Presentation on theme: "Equations of Motion"— Presentation transcript:

Source: http://www.physics.curtin.edu.au/teaching/units/2003/Avp201/?plain

Equations of Motion
Or: How the atmosphere moves

Objectives
To derive an equation which describes the horizontal and vertical motion of the atmosphere. Explain the forces involved. Show how these forces produce the equation of motion. Show how the simplified equation is produced.

Newton's Laws
The fundamental law used to try to determine motion in the atmosphere is Newton's 2nd Law: Force = Mass x Acceleration. Meteorology is a science that likes the KISS principle, and to further simplify matters we shall consider only a unit mass, i.e. Force = Acceleration.

Frame of Reference
If we were to consider absolute acceleration relative to "fixed" stars (i.e. a non-rotating Earth), then our equation would read something like this: "The rate of change of velocity with time is equal to the sum of the forces acting on the parcel."

Frame of Reference
For a non-rotating Earth, these forces are: the pressure gradient force (Pgf), the gravitational force (ga), and the friction force (F).

Frame of Reference
However, we don't live on a non-rotating Earth, and we have to consider the additional forces which arise due to this rotation, and these are: the centrifugal force (Ce) and the Coriolis force (Cof).

Equation of Motion
We now have a new equation which states that the acceleration relative to the Earth is equal to the real forces (pressure gradient, gravity, and friction) plus the "apparent" forces, centrifugal force and Coriolis force.

A Useable Form
If we now consider the centrifugal force, we can combine it with the gravitational force (ga) to produce a single gravitational force (g), since the centrifugal force depends only on position relative to the Earth. Hence, g = ga + Ce.

A Useable Form
We can now write our equation in its component forms, since we are considering the atmosphere as a 3-dimensional entity.

Conventions
In meteorology, the conventions for the components in the horizontal and vertical are: x = E-W flow, y = N-S flow, z = vertical motion. Also, the conventions for velocity are: u = velocity E-W, v = velocity N-S, w = vertical velocity.

Pressure Gradient Force
"Force acting on air by virtue of spatial variations of pressure." These changes in pressure (the pressure gradient) are given by the spatial derivatives ∂p/∂x, ∂p/∂y, ∂p/∂z.

Pressure Gradient Force
If we now consider this pressure gradient acting on a unit cubic mass of air with volume given by δx δy δz, we can say that the pressure gradient (Pg) on this cube is given by: Pg = Force/Volume.

Pressure Gradient Force
We can also say that the volume of this unit mass is the specific volume, which is given by 1/ρ, where ρ is density. This has the dimensions of Volume/Mass. The dimensions of the pressure gradient are Force/Volume. Therefore we have the pressure gradient force (Pgf) given by Pgf = (1/ρ) × (pressure gradient). This Pgf acts from high pressure to low pressure, and so we have a final equation which reads: Pgf = -(1/ρ) ∇p.

Pressure Gradient Force
Because we are dealing in 3D, there are components to this Pgf, given as follows: -(1/ρ) ∂p/∂x, -(1/ρ) ∂p/∂y, -(1/ρ) ∂p/∂z.

Pressure Gradient Force
Combining the components we get a total Pgf of -(1/ρ)(∂p/∂x i + ∂p/∂y j + ∂p/∂z k). The components i and j are the Pgf for horizontal motion and the k component is the Pgf for vertical motion.
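As a quick numeric illustration of the Pgf just derived, a minimal Python sketch (the density, pressure difference, and isobar spacing are assumed, chart-typical values, not taken from the lecture):

```python
rho = 1.2        # near-surface air density (kg/m^3), assumed
dp = 400.0       # pressure difference between adjacent isobars (Pa = 4 hPa), assumed
dx = 200_000.0   # horizontal spacing of those isobars (m), assumed

pgf = -(1.0 / rho) * (dp / dx)    # pressure gradient force per unit mass (m/s^2)
print(f"Pgf = {pgf:.2e} m/s^2")   # ~ -1.7e-3 m/s^2: tiny, but it acts continuously
```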
Horizontal Pgf
We can simplify matters still further if we take the x axis or y axis normal to the isobars, i.e. in the direction of the gradient. We then only have to consider one of the components, as the other one will be zero. (Slide diagram: isobars labelled 1016, 1018, 1020 with the Pgf arrow normal to them, pointing from high to low pressure.)

Vertical Pgf
On the synoptic scale (large-scale motions such as highs and lows), the vertical Pgf is almost exactly balanced by gravity, so we can say that ∂p/∂z = -ρg. This is known as the hydrostatic equation, and it basically states that for synoptic-scale motion there is no vertical acceleration.

Coriolis Force
This is an "apparent" force caused by the rotation of the Earth. It causes a change of direction of air parcels in motion. In the Southern Hemisphere this deflection is to the LEFT. It is proportional to sin(φ), where φ is the local latitude. Its magnitude is proportional to the wind strength.

Coriolis Force
It can be shown that the Coriolis force is given by 2Ω sin(φ) V. The term 2Ω sin(φ) is known as the Coriolis parameter and is often written in texts as f. Because of the relationship with the sine of the latitude, Cof has a maximum at the Poles and is zero at the equator (sin 0° = 0).

Coriolis Force
Frames of reference - Roundabouts (1): Our Earth is spinning rather slowly (i.e. once per day), and so any effects are hard to observe over short time periods. A rapidly spinning roundabout is better. From off the roundabout, a thrown ball travels in a straight line.

Frames of reference - Roundabouts (2): But if you're on the roundabout, the ball appears to take a curved path. And if the roundabout is spinning clockwise, the ball is deflected to the left.

Components of Cof
It can be shown that the horizontal and vertical components of the Cof are as follows: 2Ω sin(φ)v - 2Ω cos(φ)w in the x direction, -2Ω sin(φ)u in the y direction, and 2Ω cos(φ)u in the vertical.

The Complete Equation
We can now write the equation of motion which describes the motion of particles on a rotating Earth. Remembering that the equation states that Acceleration = Pgf + Cof + g + F, we can write the equation (ignoring frictional effects) as:

du/dt = -(1/ρ) ∂p/∂x + 2Ω sin(φ)v - 2Ω cos(φ)w
dv/dt = -(1/ρ) ∂p/∂y - 2Ω sin(φ)u
dw/dt = -(1/ρ) ∂p/∂z + 2Ω cos(φ)u - g

Scale Analysis
Even though the equation has been simplified by excluding frictional effects and combining the centrifugal force with the gravitational force, it is still a complicated equation. To further simplify, a process known as scale analysis is employed: we simply assign typical scale values to each element and then eliminate those values which are SIGNIFICANTLY smaller than the rest. (Table of typical scale values shown on slide.)

Scale Analysis (Horizontal Motion)
From the previous slide we can see that for the horizontal equations of motion, du/dt and dv/dt, the largest terms are the Pgf and the Coriolis terms involving u and v. The acceleration is an order of magnitude smaller, but it cannot be ignored.

Scale Analysis (Vertical Motion)
For the vertical equation we can see that there are two terms which are far greater than the other two.
The acceleration is of an order of magnitude so much smaller than the Pgf and gravity that it CAN be ignored. We can say, therefore, that for SYNOPTIC-scale motion, vertical acceleration can be ignored and that a state of balance called the hydrostatic equation exists.

Simplified Equation of Motion
Using the assumptions of no friction and negligible vertical motion, and using the Coriolis parameter f = 2Ω sin(φ), we can state the simplified equations of motion (the geostrophic balance together with the hydrostatic equation) as:

f v = (1/ρ) ∂p/∂x
f u = -(1/ρ) ∂p/∂y
∂p/∂z = -ρg

References
Wallace and Hobbs, Atmospheric Science, pp 365-375
Thom, Meteorology and Navigation, pp 6.3-6.4
Crowder, Wonders of the Weather, pp 52-53
http://www.shodor.org/metweb/session4/session4.html
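And a worked example of the geostrophic balance stated above, in Python (the latitude, density, and pressure gradient are made-up illustrative values):

```python
import math

OMEGA = 7.292e-5    # Earth's angular velocity (rad/s)
rho = 1.2           # air density (kg/m^3), assumed
lat_deg = -30.0     # latitude (deg); negative = Southern Hemisphere, assumed
dp_dy = -2.0e-3     # N-S pressure gradient (Pa/m), roughly 2 hPa per 100 km, assumed

f = 2 * OMEGA * math.sin(math.radians(lat_deg))  # Coriolis parameter
u_geo = -dp_dy / (rho * f)                       # from f u = -(1/rho) dp/dy
print(f"f = {f:.3e} 1/s, geostrophic u = {u_geo:.1f} m/s")
```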
2017-11-19 11:35:09
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8448476791381836, "perplexity": 677.3107910104158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805541.30/warc/CC-MAIN-20171119095916-20171119115916-00314.warc.gz"}
http://math.stackexchange.com/questions/86031/if-f-and-g-integrable-then-max-f-g-is?answertab=oldest
# If $f$ and $g$ are integrable, then is $\max\{f,g\}$? [duplicate]

Let $f$ and $g$ be two integrable real functions. Does it follow that $\max\{f,g\}$ is integrable too? Any proof? Thanks

- If you mean Riemann integrable, it was answered in this question: math.stackexchange.com/questions/72844/… – Martin Sleziak Nov 27 '11 at 13:08
- I voted to close as duplicate, but I might be mistaken. The OP has not clarified whether Riemann or Lebesgue integration is intended. [Future voters: Please wait for some clarification from the OP.] – Srivatsan Nov 27 '11 at 15:16
- @Martin, thanks, I had forgotten... – Did Nov 27 '11 at 22:40

## marked as duplicate by J. M., Srivatsan, Asaf Karagila, Jonas Meyer, t.b. Dec 1 '11 at 6:49

$\max (f,g) = (f+g + |f-g|)/2$, so in the Lebesgue theory $\max(f,g)$ is integrable because linear combinations and absolute values of integrable functions are integrable.

- Better to say: Because of this identity, it suffices to prove the special case: If $f$ is integrable, then $|f|$ is integrable. – GEdgar Nov 27 '11 at 17:53

$$|\max(f,g)|\leqslant\max(|f|,|g|)\leqslant|f|+|g|$$

so $\max(f,g)$ is dominated by the integrable function $|f|+|g|$ and, being measurable, is Lebesgue integrable.
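For completeness, the identity used in the first answer can be checked casewise:

$$\max(f,g)=\frac{(f+g)+|f-g|}{2}:\qquad f\ge g\ \Rightarrow\ \frac{(f+g)+(f-g)}{2}=f,\qquad f<g\ \Rightarrow\ \frac{(f+g)+(g-f)}{2}=g$$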
2013-12-20 17:29:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8571895360946655, "perplexity": 1398.849379811153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345772826/warc/CC-MAIN-20131218054932-00078-ip-10-33-133-15.ec2.internal.warc.gz"}
https://zbmath.org/?q=an%3A0619.68056
## $$\epsilon$$-nets and simplex range queries. (English) Zbl 0619.68056

The main problem may be described as follows: given a set of n points in d-dimensional Euclidean space, find a data structure that uses linear storage such that the number of points in any query half space can be determined in sublinear time $$O(n^{\alpha})$$. A data structure with $$\alpha =d(d-1)/(d(d-1)+1)+\gamma$$ for any $$\gamma >0$$ is exhibited. These bounds for $$\alpha$$ are better than those previously published for all $$d\geq 2$$ by A. Yao and F. Yao [A general approach to d-dimensional geometric queries. Proc. 17th Symp. Theory of Computing, 163-169 (1985)].

Let X be a set and R be a set of subsets of X which have finite dimension in the Vapnik-Chervonenkis sense [V. N. Vapnik and A. Ya. Chervonenkis: The theory of pattern recognition (Russian) (1974; Zbl 0284.68070)], let A be a finite subset of X, and let $$0\leq \epsilon \leq 1$$. A subset N of A is an $$\epsilon$$-net of A (for R) if N contains a point in each $$r\in R$$ such that $$| A\cap r| /| A| >\epsilon$$. The authors prove that for $$0<\epsilon, \delta <1$$, if N is a subset of A obtained by $$m\geq \max (4/\epsilon \log 2/\delta,8d/\epsilon \log 8d/\epsilon)$$ random independent draws, then N is an $$\epsilon$$-net of A with probability at least $$1-\delta$$. Using this result, a partition tree structure that achieves the above query time is constructed.

Reviewer: I. Molchanov

### MSC:

68P10 Searching and sorting
52A22 Random convex sets and integral geometry (aspects of convex geometry)
60C05 Combinatorial probability
05B99 Designs and configurations

Zbl 0284.68070

Full Text:

### References:

[1] Assouad, P., Densité et dimension, Ann. Inst. Fourier (Grenoble), 33, 233-282, (1983) · Zbl 0504.60006
[2] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. Warmuth, Classifying learnable geometric concepts with the Vapnik-Chervonenkis dimension, Proceedings of the 18th Symposium on Theory of Computation, 273-282, 1986.
[3] B. Chazelle, L. Guibas, and D. T. Lee, The power of geometric duality, Proceedings of the 24th Symposium on Foundations of Computer Science, 217-225, 1983.
[4] K. Clarkson, A probabilistic algorithm for the post office problem, Proceedings of the 17th Symposium on Theory of Computation, 175-185, 1985.
[5] K. Clarkson, Further applications of random sampling to computational geometry, Proceedings of the 18th Symposium on Theory of Computation, 414-423, 1986.
[6] R. Cole, Partitioning point sets in 4 dimensions, Proceedings of the Colloquium on Automata, Language and Programming, 111-119, Lecture Notes in Computer Science 194, Springer-Verlag, Berlin, 1985.
[7] D. Dobkin and H. Edelsbrunner, Organizing Points in Two and Three Dimensions, Technical Report F130, Technische Universität Graz, 1984.
[8] D. Dobkin, H. Edelsbrunner, and F. Yao, A 3-space partition and its applications, manuscript.
[9] Dudley, R. M., Central limit theorems for empirical measures, Ann. Probab., 6, 899-929, (1978) · Zbl 0404.60016
[10] Edelsbrunner, H., Problem P110, Bull. EATCS, 26, 239, (1985)
[11] H. Edelsbrunner, Algorithms in Combinatorial Geometry, EATCS Monographs in Theoretical Computer Science, Springer-Verlag, Berlin, 1987.
[12] H. Edelsbrunner and F. Huber, Dissecting Sets of Points in Two and Three Dimensions, Technical Report F138, Technische Universität Graz, 1984.
[13] Edelsbrunner, H.; Welzl, E., Constructing belts in two-dimensional arrangements with applications, SIAM J. Comput., 15, 271-284, (1986) · Zbl 0613.68043
[14] H. Edelsbrunner and E. Welzl, Halfplanar range search in linear space and $$O(n^{0.695})$$ query time, Inform. Process. Lett., to appear. · Zbl 0634.68064
[15] Branko Grünbaum, Convex Polytopes, Interscience, New York, 1967. · Zbl 0163.16603
[16] Sauer, N., On the density of families of sets, J. Combin. Theory Ser. A, 13, 145-147, (1972) · Zbl 0248.05005
[17] Vapnik, V. N.; Chervonenkis, A. Ya., On the uniform convergence of relative frequencies of events to their probabilities, Theory Probab. Appl., 16, 264-280, (1971) · Zbl 0247.60005
[18] V. N. Vapnik and A. Ya. Chervonenkis, The Theory of Pattern Recognition, Nauka, Moscow, 1974.
[19] Wenocur, R. S.; Dudley, R. M., Some special Vapnik-Chervonenkis classes, Discrete Math., 33, 313-318, (1981) · Zbl 0459.60008
[20] Willard, D., Polygon retrieval, SIAM J. Comput., 11, 149-165, (1982) · Zbl 0478.68060
[21] I. M. Yaglom and V. G. Bolyansky, Convex Figures, Holt, Rinehart and Winston, New York, 1961 (transl.). · Zbl 0098.35501
[22] F. Yao, A 3-space partition and its applications, Proceedings of the 15th Symposium on Theory of Computation, 258-263, 1983.
[23] A. Yao and F. Yao, A general approach to $$d$$-dimensional geometric queries, Proceedings of the 17th Symposium on Theory of Computation, 163-169, 1985.

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
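As an aside (not part of the review), the random-sampling theorem stated above is easy to probe empirically for halfplane ranges in the plane. A toy Python sketch, with all sizes chosen arbitrarily rather than from the stated bound, and with random halfplanes as probes rather than an exhaustive check:

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, m = 2000, 0.1, 200                    # |A|, epsilon, net size: arbitrary
A = rng.random((n, 2))                        # point set A in the unit square
N = A[rng.choice(n, size=m, replace=False)]   # candidate eps-net: m random draws

# Probe random halfplanes {x : w.x <= b}.  A violation is a halfplane that
# contains more than eps*|A| points of A but no point of N.
violations = 0
for _ in range(10_000):
    w = rng.normal(size=2)
    proj = A @ w
    b = rng.uniform(proj.min(), proj.max())
    if (proj <= b).mean() > eps and not (N @ w <= b).any():
        violations += 1
print("violating halfplanes among 10,000 probes:", violations)  # typically 0
```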
2022-08-09 04:58:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.673902153968811, "perplexity": 2251.3903031319915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570901.18/warc/CC-MAIN-20220809033952-20220809063952-00003.warc.gz"}
https://electronics.stackexchange.com/questions/299889/how-much-power-is-radiated-by-cell-towers/299898
# How much power is radiated by cell towers?

I want to know how much power is radiated by the cell towers of GSM (1.8 GHz), 3G (2.1 GHz), and 4G (2.6 GHz). I want links to references if they exist.

- I'm not an expert on the topic so I'm not going to give an actual response, but I believe this highly depends on the configuration - they might have a single, very powerful tower in areas with very low population density, and very weak (and many) towers where there is a lot of population. This has to do with the fact that every tower can only serve a maximum number of connections. I have seen amplifiers for LTE with rated powers of 200 W, if my memory serves me right – Joren Vaes Apr 16 '17 at 11:52
- Where I live, with a lot of trees, my understanding (from the timing of some dead spots, and word of mouth from someone who works in the business) is that in the fall after the leaves drop they lower the power, and in the spring when the leaves come back they raise it. – old_timer Apr 16 '17 at 12:54
- It depends how you define it. Input power? Effective radiated power (ERP) at 10 m? Try again. Some mobile providers use higher-power Tx and have 50% fewer towers or twice the spacing in urban areas. – Tony Stewart Sunnyskyguy EE75 Apr 16 '17 at 13:25
- @TonyStewart.EEsince'75 this is one of the rare cases where OP's lack of clarity is actually a feature of the question. I was about to vote to close it (due to misconceptions and ambiguity), but I took the chance and tried to explain what OP should have considered, in hopes of doing something that might be helpful for future readers. – Marcus Müller Apr 16 '17 at 13:37
- Thanks @TonyStewart.EEsince'75, and sorry for my ambiguous question. I was about to try to edit it, but I found that Marcus Müller answered it in great detail. Thanks to you all. – Abdelaziz Mokhnache Apr 16 '17 at 14:43

First of all, you have a misconception about GSM, 3G, 4G: the frequency bands you list are some of the frequency allocations for these networks. These are different between different operators and in different countries.

Then: cellular networks are not broadcast transmitters. They don't work with constant output powers. The power they transmit depends on what they need to achieve. As noted in the comments above, a cell tower that covers a huge rural area will blast out more power per user on average than a small-cell tower in a city centre.

Since power consumption is one of the biggest costs in operating a mobile network, carriers are extremely interested in keeping transmit power as low as possible. Also, lower maximum transmit power allows for a smaller coverage area – this sounds like an anti-feature, but it means that the next base station using the exact same frequencies can be closer, which becomes necessary as operators strive to serve very many users in densely populated areas, and thus need to divide these users among as many base stations as possible, to even be theoretically able to serve their cumulative data rate.

Then, as mentioned, the transmissions will be exactly as strong as necessary to offer optimal (under some economic definition of "optimal") service to the subscribers. Which means: when there are only a few devices basically idling in the cell, the power output will be orders of magnitude less than when the network is crowded and under heavy load. Loads have a very high dynamic. You can watch an LTE load monitor from a city centre live here.
This goes as far as shutting down base stations or reducing the number of subbands served at nighttime – something we were able to see very nicely happen every night from the uni lab where I spent a lot of my days (and, obviously, far too many nights). So there can't be a "this is how much power all towers emit" number, since it depends on usage.

Now, as also mentioned, there are completely different cell types. With 3G and 4G, we saw the proliferation of micro-, nano-, and femtocells. Those are just radioheads that can be placed nearly anywhere and serve a very restricted space – for example, a single room. These obviously use much less power than a single antenna mounted on a mast somewhere high.

Antenna systems can be very complex, too – a modern base station will make sure to use a combination of antennas to form something like a beam that hits your phone as precisely as possible. The motivation for that, again, is less necessary transmit power (lower cost) due to not illuminating anyone who's not interested in the signal you are receiving, and of course the possibility of denser networks.

Then, there are aspects like interoperability. A carrier might offer both 2G and 4G, often closely co-located in spectrum, on the same mast. Now, turning up the 2G downlink's power too much might lead to saturation in 4G receiver (phone) amplifiers – and to drastic reductions in the possible 4G rate for a slight improvement in 2G quality. This problem might get even more important as operators move to deprecate and shut down 2G, and might very soon be broadly adopting schemes where 2G service is "interweaved" into 4G operation in the same band (2G is very slow, and takes only very limited "useful" bandwidth, but still occupies very precious frequency bands, so it's only natural to use the very flexible 4G in a way that says "OK, dear handsets, this is our usage scheme, where we leave holes in time/frequency so that 2G can work 'in between'. Please ignore the content of these holes."). Then, the whole power/quality trade-off might become even more complicated.

In essence, it's also important to note that when you're carrying around a phone, most of the radio energy involved in the operation of the phone network that hits you is not the downlink power (i.e. base station -> phone) but the uplink power, simply because power goes down with the square of distance, and you're darn close to your phone compared to the base station antenna. An important corollary of that, by the way, and one I've seen dozens of people fail to understand, is that the more base stations there are, the less power you get hit by. Very simple: your phone will need more power to reach a base station far away, and the power that the base station needs to reach your phone will always be adjusted so that your phone will have good reception (if possible!), but not more.

- Important takeaway: building more towers will actually reduce field strengths, for both up- and downlink. – Simon Richter Apr 16 '17 at 18:02
- SO MUCH TEXT SO LITTLE useful info. – Tommixoft Jun 17 '19 at 16:52
- @Tommixoft better than one comment with zero insight. Sorry, things aren't any easier than I described. – Marcus Müller Jun 18 '19 at 6:45
- Do you have a source or reference or a back-of-the-envelope calculation for the power consumption being one of the biggest costs in operating a mobile network? Not saying it isn't, just curious; that would not have been my first idea, spontaneously. – jcaron Dec 27 '19 at 13:37
- @jcaron pheww, um, wait.
It depends how you define it. Input power? Effective radiated power (ERP) at 10 m? 1 km? Some mobile providers use higher-power transmitters and have 50% fewer towers, or twice the spacing, in urban areas. Define the question in terms of dBm, dBuV or dBW ERP at a given distance, or input watts versus technology, regional specs, etc. Cell towers usually transmit only around 10 watts, sometimes up to 50 or so in urban areas. Your phone can transmit up to 2 watts. Antenna gain depends on losses and diversity gain (sphere/beam coverage). Try again.

This small 4G antenna lists 20 W for GSM 900 and LTE 800, 10 W for GSM 1800 & UMTS, and 6 W for WLAN & LTE. And this 4/3/2G antenna has a maximum power rating of 50 W. Considering the inverse-square loss of power with distance, and that a microwave dinner requires about 10 minutes right next to a 630 W antenna in a box designed to focus all that power into it, I would be more worried about my own body heat than about sleeping several meters away from a busy cell tower with a roof and/or wall in between.

Even finding ERP at 10 m is difficult. Peak power allowed in the USA can be found to be near 500 watts per channel; typical power is likely near 100 watts per channel. Intensity measures such as watts per square meter are not so easy to obtain because the average number of channels is not included.

• Welcome to EE.SE, Don. You might want to proof-read your post before hitting the "Post Your Answer" button. You have spelling errors and are missing a verb. – Transistor Dec 27 '19 at 4:49
2020-08-14 23:52:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47139880061149597, "perplexity": 1299.2351689180057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740343.48/warc/CC-MAIN-20200814215931-20200815005931-00268.warc.gz"}
https://mathoverflow.net/questions/157270/what-does-a-zonal-sphere-harmonic-look-like
# What does a zonal spherical harmonic look like?

Let $\mathcal{H}_k\subset L^2(S^n)$ be the space of spherical harmonics of degree $k$, i.e., the eigenfunctions of $\Delta_{S^n}$. And let $\Pi_k:L^2(S^n)\to \mathcal{H}_k$ be the orthogonal projection operator. It has a kernel defined by $$\Pi_kf(x)=\int_{S^n}\Pi_k(x,y)f(y)\,dS(y)$$ A zonal spherical harmonic $Y_0^{k}(x)$ is defined as $$Y_0^{k}(x)=\frac{\Pi_k(x,y_0)}{\sqrt{\dim \mathcal{H}_k}}$$ where $y_0$ is the north pole. From the definition, we immediately get $$\|Y_0^{k}(x)\|_\infty\leq C\sqrt{\dim \mathcal{H}_k}\leq Ck^{\frac{n-1}{2}}$$ But I also want to get the $L^p$ norms, so I want to know what they look like, so that I can compute. Thanks for any reference.
• I would guess that the zonal spherical harmonics are the only spherical harmonics with the property that the value of the function only depends on the $n$th coordinate. So in that sense, they are functions of one variable only. Maybe that helps? Feb 11, 2014 at 7:10
• Is there an explicit formula for the $L^p$ norms of zonal spherical harmonics? The paper faculty.fiu.edu/~decarlil/Preprints/Proc4.pdf proves estimates for them, which suggests that no such formula is known. Feb 11, 2014 at 16:24
• I am also not aware of any explicit formula for the $L^p$-norms, but in general one can probably obtain estimates for them by means of certain Bernstein-type inequalities for the Gegenbauer polynomials. Perhaps the following article can also be of interest: [N.J. Kalton and L. Tzafriri, The behaviour of Legendre and ultraspherical polynomials in Lp-spaces, Canad. J. Math. 50 (1998), 1236-1252.] Feb 11, 2014 at 19:29
Since $S^n \cong \mathrm{SO}(n+1)/\mathrm{SO}(n)$, the zonal spherical harmonics you are asking for arise as the spherical functions for the compact Gelfand pair $(\mathrm{SO}(n+1),\mathrm{SO}(n))$, which are known to be the Gegenbauer (also called ultra-spherical) polynomials.
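Concretely, the zonal functions can be written in closed form. By the addition theorem for spherical harmonics (stated here in one common normalization; the constant varies between references), the projection kernel on $S^n$ is a Gegenbauer polynomial of the inner product, with $\lambda=\frac{n-1}{2}$ and $\omega_n=|S^n|$: $$\Pi_k(x,y)=\frac{\dim\mathcal{H}_k}{\omega_n}\,\frac{C_k^{\lambda}(\langle x,y\rangle)}{C_k^{\lambda}(1)},\qquad\text{so}\qquad Y_0^{k}(x)=\frac{\sqrt{\dim\mathcal{H}_k}}{\omega_n}\,\frac{C_k^{\lambda}(\langle x,y_0\rangle)}{C_k^{\lambda}(1)}.$$ In particular, writing $\langle x,y_0\rangle=\cos\theta$, the $L^p$ norms reduce to one-dimensional integrals of $|C_k^{\lambda}(\cos\theta)|^p\sin^{n-1}\theta$ over $[0,\pi]$, which is the starting point for the estimates in the references mentioned in the comments.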
2022-12-04 14:09:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9060909748077393, "perplexity": 182.2406529441518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710974.36/warc/CC-MAIN-20221204140455-20221204170455-00790.warc.gz"}
https://avatest.org/2022/09/24/special-relativity%E4%BB%A3%E8%80%83phys458-length-contraction/
# Physics Assignment Help | Special Relativity | PHYS458: Length Contraction

## Length Contraction

(a) Statement: According to this phenomenon, a moving body appears contracted along the direction of motion, while its length perpendicular to the direction of motion remains unchanged. Mathematically,
$$l=l_0\left(1-\frac{v^2}{c^2}\right)^{1 / 2}$$
where $l_0$ is the proper length of the object (measured in the proper frame of reference, in which there is no relative motion between object and observer, i.e. the frame in which the object is actually at rest), and $l$ is the improper length of the object (measured in an improper frame of reference, in which there is relative motion between object and observer).
(b) Derivation of Expression:
Step I: Let inertial frame $S^{\prime}$ move with velocity $v$ relative to inertial frame $S$. In the derivation of the expression for length contraction, the two endpoint measurements must be simultaneous in the improper frame (the frame in which the body is not at rest).
Step II: When the body is at rest in frame $S$, the inverse Lorentz transformation equations are used:
$$x_2-x_1=\frac{\left(x_2^{\prime}-x_1^{\prime}\right)+v\left(t_2^{\prime}-t_1^{\prime}\right)}{\left(1-\frac{v^2}{c^2}\right)^{1 / 2}}$$
With $t_2^{\prime}=t_1^{\prime}$ (simultaneous measurement in $S^{\prime}$), $x_2-x_1=l_0$ and $x_2^{\prime}-x_1^{\prime}=l$, this gives
$$l_0=\frac{l+0}{\left(1-\frac{v^2}{c^2}\right)^{1 / 2}} \qquad\Longrightarrow\qquad l=l_0\left(1-\frac{v^2}{c^2}\right)^{1 / 2}$$
This is the required expression; it shows that $l$ is less than $l_0$ (a moving object appears contracted along the direction of motion).

## Time Dilation

(a) Statement: According to this phenomenon, a moving clock appears to run slow. Mathematically:
$$\tau=\frac{\tau_0}{\left(1-\frac{v^2}{c^2}\right)^{1 / 2}}$$
where $\tau_0$ is the proper time interval between events, measured in the proper frame of reference (in which the events actually occur). It is the time interval measured by a single clock at one place, because both events occur at the same position. $\tau$ is the improper time interval between events, measured in an improper frame of reference (in which the events do not occur). It is the time interval measured by two different clocks at two different places, because the two events occur at different positions.
(b) Derivation of Expression: Let frame $S^{\prime}$ move with velocity $v$ relative to frame $S$. In the derivation of the expression for time dilation, the events should be co-local relative to frame $S$ (the proper frame, in which there is no relative motion between the events and the observer, i.e. where the events actually occur).
Step I: When the events occur in frame $S$, the direct Lorentz transformation equations are used:
$$t_1^{\prime}=\frac{t_1-\frac{v}{c^2} x_1}{\left(1-\frac{v^2}{c^2}\right)^{1 / 2}} \text { and } t_2^{\prime}=\frac{t_2-\frac{v}{c^2} x_2}{\left(1-\frac{v^2}{c^2}\right)^{1 / 2}}$$
$$\therefore\left(t_2^{\prime}-t_1^{\prime}\right)=\frac{\left(t_2-t_1\right)-\frac{v}{c^2}\left(x_2-x_1\right)}{\left(1-\frac{v^2}{c^2}\right)^{1 / 2}}$$
Since the events are co-local in $S$, $x_2=x_1$; with $t_2-t_1=\tau_0$ and $t_2^{\prime}-t_1^{\prime}=\tau$, this reduces to
$$\tau=\frac{\tau_0}{\left(1-\frac{v^2}{c^2}\right)^{1 / 2}}$$
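As a quick numerical illustration of both results (the value $v = 0.6c$ is chosen for convenience and is not from the source): $$\left(1-\frac{v^2}{c^2}\right)^{1 / 2}=(1-0.36)^{1 / 2}=0.8,$$ so a rod of proper length $l_0 = 1\ \text{m}$ is measured as $l = 0.8\ \text{m}$ by the moving observer, while a clock interval of proper duration $\tau_0 = 1\ \text{s}$ is measured as $\tau = 1/0.8 = 1.25\ \text{s}$.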
2023-03-27 08:24:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6103607416152954, "perplexity": 1584.6813240620168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948609.41/warc/CC-MAIN-20230327060940-20230327090940-00205.warc.gz"}
https://byjus.com/jee-questions/choose-the-correct-option-for-the-below-minus-given-statements-statement1-in-bohrs-model-the-velocity-of-electrons-increases-with-a-decrease-in-the-positive/
# Choose the correct option for the below-given statements.

Statement I: In Bohr's model, the velocity of electrons increases with a decrease in the positive charge of the nucleus, as electrons are not held tightly.
Statement II: Velocity decreases with an increase in principal quantum number.

a. Statement I is correct & Statement II is correct
b. Statement I is correct & Statement II is incorrect
c. Statement I is incorrect & Statement II is incorrect
d. Statement I is incorrect & Statement II is correct

Solution:
For Statement I: According to Bohr's model, the velocity of the electron is $V_{n} = v_{0}\tfrac{z}{n}$, where $v_0 = 2.18 \times 10^{6}$ m/s. So, with a decrease in $z$, $V_n$ will also decrease. Hence, Statement I is incorrect.
For Statement II: With an increase in $n$, the velocity will decrease, as per the relation given above. Hence, Statement II is correct.
Hence, option d) is correct.
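As a quick numerical check of the relation (illustrative values): for hydrogen ($z=1$), $V_1 = 2.18 \times 10^{6}$ m/s and $V_2 = \tfrac{1}{2}V_1 = 1.09 \times 10^{6}$ m/s, so the velocity indeed falls as $n$ increases (Statement II). For $He^{+}$ ($z=2$, $n=1$), $V_1 = 2 \times 2.18 \times 10^{6} = 4.36 \times 10^{6}$ m/s, i.e. a larger nuclear charge gives a faster electron, which is why Statement I is incorrect.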
2021-11-28 06:33:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.351012259721756, "perplexity": 2689.3996976270005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358469.34/warc/CC-MAIN-20211128043743-20211128073743-00279.warc.gz"}
https://2018.eswc-conferences.org/paper_80/
# Paper 80 (Research track): Semantic labeling for quantitative data using Wikidata

Author(s): Phuc Nguyen, Hideaki Takeda

Full text: submitted version

Abstract: Semantic labeling for quantitative data is a process of matching numeric columns in table data to a schema or an ontology structure. It is beneficial for table search, table extension or knowledge augmentation. There are several challenges of quantitative data matching, for example, a variety of data ranges or distribution, and especially, different measurement units. Previous systems use several similarity metrics to determine column numeric values and corresponding semantic labels. However, lack of measurement units can lead to incorrect labeling. Moreover, the attribute columns of different tables could be measured by units differently. In this paper, we tackle the problem of semantic labeling in various measurement units and scales by using Wikidata background knowledge base (WBKB). We apply hierarchical clustering for building WBKB with numeric data taken from Wikidata. The structure of WBKB follows the nature taxonomy concept of Wikidata, and it also has richness information about units of measurement. We considered two transformation methods: z-score-tran based on standard normalization technique and unit-tran based on restricted measurement units for each semantic label of WBKB. We tested two transformation methods on six similarity metrics to find the most robust metric for Wikidata quantitative data. Our experiment results show that using unit-tran and ks-test metric can effectively find corresponding semantic labels even when numeric columns are expressed in different units.

Keywords: semantic labeling; quantity; unit of measurement; tabular data; LOD; Wikidata

Decision: reject

Review 1 (by Dagmar Gromann)

(RELEVANCE TO ESWC) While the overall idea of schema annotation and the contribution of an instantiated unit of measurement knowledge base might be relevant to the conference, in the way this is presented I am not even sure that this is what the paper is trying to achieve.
(NOVELTY OF THE PROPOSED SOLUTION) The only novelty over the reference implementation seems to be the use of a different knowledge base, that is, Wikidata instead of DBpedia.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) It is hard to understand what exactly the proposed solution is.
(EVALUATION OF THE STATE-OF-THE-ART) The results are presented in tables without any proper description or discussion.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) The properties are not discussed and some points mentioned earlier as part of the method are never actually described in the paper.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) With the provided description of the approach and its results it would not be feasible to reproduce the study.
(OVERALL SCORE) SUMMARY If understood correctly, this paper seeks to provide an approach for annotating numeric data stored in tables with units of measurement from the Wikidata schema, thereby contributing a resource entitled Wikidata background knowledge base (WBKB). From the abstract I am left wondering what the paper exactly describes. Which data are labeled using which methods? Only after looking at Figure 1 does it become somewhat clearer what the paper is trying to propose. Unfortunately, this issue with the mode of presentation persists throughout the paper. It is hard to understand what the authors are trying to say.
Take for instance the following sentences: "In DBpedia, data extract from template matching from Wikipedia InfoBox." or "It clear that using the sample which is different scale with WNKB is really hard for making the correct labeling." Furthermore, if I understood correctly, the only innovative part in comparison to Neumaier et al.'s approach is the use of Wikidata instead of DBpedia and the testing of some additional similarity metrics. Neither the Wikidata queries nor the well-known similarity metrics require such a lengthy description. Instead, the results should be properly described and then also discussed. While the abstract and introduction claim to be using hierarchical clustering to build a knowledge graph, this is not described properly in the paper.
STRONG POINTS:
1) The overall idea and discussed problem might be relevant to this community
WEAK POINTS:
1) This paper is barely legible and highly unclear
2) Parts of the method are discussed in the introduction and never mentioned again anywhere in the paper, e.g. hierarchical clustering
3) Neither the results nor the evaluation are properly described or discussed
Formatting: The abstract seems very long and quite unclear. There should be a heading after the abstract stating "Introduction" with number 1, and the Section Related Work should not be numbered "0.1" but with an integer, i.e., 2. There are typesetting problems, such as p_v_alue (where only the v is subscript), or restrictedunits, KullbackLeibler, etc. Introduced acronyms suddenly change, e.g. WBKB suddenly becomes WNKB on page 8 and following. The text alternates between "figure" and "Figure" when referring to figures; one form should be used consistently. Figure 2 is barely readable and its axes should be labeled.
Overall evaluation: I advise to reject this paper since the proposed approach is, first of all, presented in a barely understandable way and, second, of little innovation. It uses already existing work and applies it to Wikidata, while the previous work used DBpedia.
I would like to thank the authors for addressing the questions raised in the review. I still do not think the paper is ready to be published in its current state.
Review 2 (by anonymous reviewer)
(RELEVANCE TO ESWC) The paper moderately overlaps the scope of the conference. Matching (a very specific form of) existing knowledge to standard ontologies is certainly relevant in general. The paper should be better motivated and related to the Semantic Web literature. An effort should be made to convince the reader of the value of the class of presented methods in the context of the conference topics.
(NOVELTY OF THE PROPOSED SOLUTION) The approach appears only moderately original: most of the framework is crafted along the lines of a cited work: [2].
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) It is very difficult to judge, also due to the problems with the quality of writing.
(EVALUATION OF THE STATE-OF-THE-ART) In its brevity, the paper does not discuss related work sufficiently. Most of the method is referred to [2]. An ad hoc section could be added, given the brevity of the current version of the paper.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) Because of the mentioned problem with the quality of writing, it is difficult to judge the technical quality of the proposal. The purpose of the two transformation methods should be better and more formally presented, also in comparison to the state of the art.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) The experiments, like the rest of the paper, refer to the setup and resources deriving from the targeted system [2]. The experiments do not seem to be comparative, e.g. with respect to the framework in [2], which is repeatedly referred to as a source of inspiration for this work. I suspect that the provided details would not be sufficient for a possible replication of the experiment. A comparative experiment would support the significance of this piece of work. At the current stage it appears to be a case study on a limited problem related to a single data source / knowledge base.
(OVERALL SCORE) The paper presents a method for matching numeric table contents from Wikimedia to their units as encoded by WBKB.
- the current version demands a total rewriting and a revision of the layout/organization
- marginally relevant problem to the conference topics
- not convincing about novelty and effectiveness w.r.t. state of the art
In general it was difficult to read the paper due to the poor quality of writing: a large number of grammar errors and linguistic problems should be solved before a re-submission. I'm afraid that the poor presentation does compromise the appreciation of the possible merits of the presented method. There are too many comments to be made and typos to be listed.
- the first section is missing its title, which is why the subsection numbering starts with a 0.
- the problems with open datasets should be better stated from the very beginning (first section).
- please revise the presentation of the notation in Sect. 1.1 and possibly provide examples of the notions you introduce (there is a lot of margin to extend the paper length)
- NKB == WNKB ?
- Please provide a better explanation for Fig. 2
I would suggest a thorough revision before a re-submission to a later conference. I also suggest broadening the scope of applicability of the method, showing it can compete with existing ones also on at least two other open datasets.
=== AFTER REBUTTAL I'd like to thank the authors for their answers.
Review 3 (by anonymous reviewer)
(RELEVANCE TO ESWC) The paper is related to the conference, as it targets semantic annotation of numerical data; however, it is not novel.
(NOVELTY OF THE PROPOSED SOLUTION) The paper is a replica of Neumaier et al. The only novel part is the use of unit conversion. The algorithm used for constructing the WBKB is also taken from Neumaier et al.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) A lot of detail is missing, especially for the WBKB construction and the schema mapping algorithm.
(EVALUATION OF THE STATE-OF-THE-ART) The paper states the previous state-of-the-art research; however, it does not compare against any of it, not even the work of Neumaier et al. that this work is based on.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) The paper does not discuss many details related to the hierarchical clustering algorithm.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) I doubt that the results of the paper can be replicated with the level of detail given.
(OVERALL SCORE) ** Summary of the Paper ** The paper proposes a method for constructing a background knowledge base for numerical data based on Wikidata, the Wikidata Background Knowledge Base (WBKB). The authors use WBKB in semantic labelling of tabular data with quantities. The inputs of the semantic labelling algorithm are: a list of numbers, a unit, a property label, and a context description.
The expected output is: measure, unit and object of measurement. For example: height, meter, person. However, the paper is a replica of [1], and the authors omitted many details, referring back to [1].
** Strong Points **
* Using the unit conversion in the matching stage of column values to the WBKB, which is the main contribution of the paper.
** Weak Points **
* The paper is poorly written, with many details omitted. Section 2.1 misses the details of the hierarchical clustering algorithm used, and only mentions a set of SPARQL queries. It was confusing to understand the construction of the knowledge base without referring back to [1]. Also, the experimental setting was not clear and the dataset is not well defined. It is also not clear why a training dataset is needed at all.
* The paper is a replica of the work presented in [1]. I see no novelty in the paper except for using Wikidata as the source for constructing the WBKB.
* The authors did not provide any diagrams or examples of the ontology of the WBKB.
* Many sections refer back to [1] without giving further details.
*** Questions to the Authors ***
* How do you use hierarchical clustering in the construction of WBKB?
* Why do you need a training dataset? What are the parameters you are training? What is your training procedure?
* Is your dataset only based on Wikidata?
* The only difference I can see between your work and [1] is using Wikidata and the unit conversion part; is that true?
[1] Multi-level Semantic Labelling of Numerical Values, Authors: Sebastian Neumaier, Jürgen Umbrich, Josiane Xavier Parreira, Axel Polleres.
Many thanks for addressing the comments raised in the review. Unfortunately, in its current state the paper is not mature enough to be accepted.
Review 4 (by anonymous reviewer)
(RELEVANCE TO ESWC) Semantic annotation of tabular data is a very relevant topic to lift CSV files into knowledge graphs. Annotation of numerical values has been addressed in previous work, but the problem is far from being solved yet.
(NOVELTY OF THE PROPOSED SOLUTION) The novelty of the proposed approach consists in two main contributions:
- the consideration of unit diversity and conversion among units (unit conversion), and the introduction of a normalization function before computing the similarity between two numerical sets.
- the use of Wikidata as a reference knowledge base for annotation of numerical values (WDKB)
Otherwise, the approach extends previous work, in the sense that the paper does not propose a radically new approach to compute the similarity between sets of numerical values.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) The paper presents an end-to-end solution to build the reference knowledge base and use it to annotate numerical values. The approach is principled and considers a reference knowledge base that covers a very large number of numerical data types.
(EVALUATION OF THE STATE-OF-THE-ART) To the best of my knowledge, the discussion of related work is complete.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) Discussion of experiment results is rather shallow. For example, it would be interesting to provide some examples that demonstrate when the approach is correct (or, better, improves on Neumaier et al.) and when it is not. This is particularly important because absolute performance is not very high, which would require a much more in-depth discussion.
In addition, the authors compare several approaches to compute the similarity between sets of numerical values but do not explain why it is not possible to use all (or a subset) of them to collect more evidence. Are there efficiency issues in adopting such a combinatorial approach? Finally, some insights on execution times would be useful. The efficiency of the proposed approach is not discussed at all in the paper. Overall, to add the above-mentioned explanations, you could reduce the space dedicated to similarity metrics, which are taken from previous work (maybe keeping just the few that perform better).
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) Code of the proposed approach is not shared. The paragraph describing the construction of the three sets used in evaluation is not very clear. For example, what does "we shuffle random select maximum 50 leave nodes" mean? Since you are constructing your own dataset, I suggest providing some examples used as ground truth. I also checked the supplementary material on GitHub and it is not very well organized. For example, in addition to property identifiers, you could add property labels. Also, the different datasets could be made available in a format that is easier to inspect (e.g., CSV files).
(OVERALL SCORE) SUMMARY The paper proposes two main technical contributions:
- the use of WBKB as background knowledge to annotate sets of numerical values with their type
- the use of unit transformation and normalization functions to improve results, while using similarity metrics proposed in the state of the art.
STRONG POINTS
- Using WDKB is very interesting for annotation of numerical values, because of its large coverage of types and units.
- The main contributions of the approach are (relatively) well explained (despite the very large number of grammatical errors and typos)
- Handling unit transformations is an important topic, and using normalization techniques is a principled approach to compare different distributions of values
- The approach seems to provide better results than a state-of-the-art approach (arguably the best one available as of today for this specific problem)
WEAK POINTS
- Evaluating the impact of one contribution of the paper, i.e., normalization and unit transformation, also on other datasets used in related work would have been useful to make the results more conclusive; for example, why not use the data used in Neumaier et al.?
- The absolute performance of the approach is still rather low (e.g., 0.11 on type inference when different units are considered - see Table 2); the improvement on Neumaier et al. is also quite limited in terms of absolute numbers (e.g., +0.07 on top-k prop; +0.04 on top-k type - see Table 3). This can be motivated by the use of WBKB, which has a very large number of numerical types, but this is a further argument for which an evaluation of unit_tran_ks_test_d also on the same dataset used by Neumaier et al. would have been useful to provide more conclusive results.
- An in-depth discussion of the results is missing; the description of the datasets used in the evaluation should also be improved to ensure repeatability
- The paper contains a very large number of typos and grammatical errors, and requires a thorough proof-check before being accepted for publication
- The authors compared their work only with Neumaier et al.; a better argument for choosing only this approach for comparison should be provided (I agree with the choice, but readers less familiar with the topic should be informed)
QUESTIONS
Are there efficiency issues in adopting such a combinatorial approach? Why not combine different similarity measures to improve on the results?
Can you better explain such low absolute performance numbers and discuss the slight improvement on Neumaier?
What does "we shuffle random select maximum 50 leave nodes" mean?
What does "second layer is called as a p-o hierarchy which is sub-nodes of type nodes" mean? (you can make an example)
Can you collect transformation rules automatically or did you need to implement the transformations yourselves?
Can you better specify what you mean with "We modify the type measure to the top k neighbors contain the correct type path" (Sec. 2.3)?
*** Typos and grammatical errors are literally too many to be listed: the paper requires a careful proof-check from a native English speaker before it can be published ***
Section 1.1
A definition of WBKB is missing; in particular, define a node in the WBKB.
v_q \in R --> you may want to use a symbol for real numbers instead of R (e.g., \mathcal{R}?)
"Semantic labeling system perform K-nearest neighbor to find a corresponding node p in WBKB with property label lvp," --> should be better explained
"Each node has the information about the canonical unit and other restrictedunits or scales." --> specify which nodes (all WBKB nodes? only numerical ones?)
In Query 1.2, why not use a variable instead of wdt:P2237? This would show the generality of your query given an input property identifier. The same suggestion applies to Query 1.3 and Query 1.4, where you want to emphasize variables.
Section 1.4
Eq. 7 has undefined variables.
Section 2.1
"We select the most of 50 properties for building WBKB" --> the most what? Please rewrite.
"In total, [...] dif-set" --> not clear, please explain better.
* After rebuttal *
I thank the authors for their replies. I think that the work is not mature enough to be accepted for publication yet. However, I encourage the authors to improve the experimental evaluation (possibly testing combinations of the different measures) and the presentation, and re-submit the paper.
Metareview by Oscar Corcho
As pointed out by the reviewers, the paper is generally difficult to follow and the relationship with some approaches in the state of the art is not completely clear.
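For readers unfamiliar with the technique the abstract and reviews keep referring to, here is a minimal sketch of the z-score-normalization plus KS-test column-matching idea (my own illustration under stated assumptions, not the authors' code; the label names and reference distributions are hypothetical):

```python
import numpy as np
from scipy import stats

def z_score(values):
    """Standard-normalize a numeric column so that columns expressed
    in different units/scales become comparable."""
    v = np.asarray(values, dtype=float)
    return (v - v.mean()) / v.std()

def best_label(column, labeled_references):
    """Return the semantic label whose reference value distribution is
    closest to the column under the two-sample Kolmogorov-Smirnov statistic."""
    normalized = z_score(column)
    distances = {
        label: stats.ks_2samp(normalized, z_score(ref)).statistic
        for label, ref in labeled_references.items()
    }
    return min(distances, key=distances.get)

# Hypothetical references: human heights in cm vs. country populations.
refs = {"height": np.random.normal(170, 10, 500),
        "population": np.random.lognormal(15, 2, 500)}
# A column of heights in feet still matches "height" after normalization.
print(best_label(np.random.normal(5.6, 0.33, 50), refs))
```

The point the abstract makes is visible here: z-scoring removes the unit/scale offset, so the KS statistic compares distribution shapes rather than raw magnitudes.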
2018-06-23 13:50:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5332674980163574, "perplexity": 1088.3059719097855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865081.23/warc/CC-MAIN-20180623132619-20180623152619-00392.warc.gz"}
https://labs.tib.eu/arxiv/?author=V.%20P.%20Goncalves
• ### $\eta_c$ photoproduction at LHC energies(1804.02345) April 6, 2018 hep-ph, hep-ex In this contribution, we study the inclusive and diffractive $\eta_{c}$ photoproduction in $pp$ and $pPb$ ultra-peripheral collisions (UPCs) at the LHC Run 2 energies. The quarkonium production is studied using the nonrelativistic quantum chromodynamics (NRQCD) formalism. We present predictions for rapidity and transverse momentum distributions for the $\eta_c$ photoproduction and present our estimate for the total cross sections at the Run 2 energies.
• ### On the intrinsic charm contribution to the prompt atmospheric neutrino flux(1803.01728) March 2, 2018 hep-ph, nucl-th, astro-ph.HE In this work we investigate the impact of intrinsic charm on the prompt atmospheric neutrino flux. The color dipole approach to heavy quark production is generalized to include the contribution of processes initiated by charm quarks. The prompt neutrino flux is calculated assuming the presence of intrinsic charm in the wave function of the projectile hadron. The predictions are compared with previous color dipole results which were obtained taking into account only the process initiated by gluons. In addition, we estimate the atmospheric (conventional + prompt) neutrino flux and compare our predictions with the ICECUBE results for the astrophysical neutrino flux. Our results demonstrate that the contribution of the charm quark initiated process is non - negligible and that the prompt neutrino flux can be enhanced by a factor $\approx$ 2 at large neutrino energies if an intrinsic charm component is present in the proton wave function.
• ### Exclusive vector meson photoproduction with proton dissociation in photon-hadron interactions at the LHC(1802.08517) Feb. 23, 2018 hep-ph, hep-ex At forward rapidities and high energies we expect to probe the non-linear regime of Quantum Chromodynamics (QCD). One of the most promising observables to constrain the QCD dynamics in this regime is exclusive vector meson photoproduction (EVMP). We study the EVMP in association with a leading baryon (product of the proton dissociation) in photon-hadron interactions that take place in $pp$ and $p$Pb collisions at large impact parameters. We present the rapidity distributions for $\rho$ and $J/\psi$ photoproduction in association with a leading baryon (neutron and delta states) at LHC run 2 energies. Our results show that the $V + \Delta$ cross section is almost 30 % of the $V + n$ one. Our results also show that a future experimental analysis of these processes is, in principle, feasible and can be useful to study the leading particle production.
• ### Exclusive and diffractive $\mu^+ \mu^-$ production in $pp$ collisions at the LHC(1802.07339) Feb. 16, 2018 hep-ph, hep-ex In this paper we estimate the production of dimuons ($\mu^+ \mu^-$) in exclusive photon -- photon ($\gamma \gamma$) and diffractive Pomeron - Pomeron $(I\!P I\!P)$, Pomeron - Reggeon $(I\!P I\!R)$ and Reggeon - Reggeon $(I\!R I\!R)$ interactions in $pp$ collisions at the LHC energy. The invariant mass, rapidity and transverse momentum distributions are calculated using the Forward Physics Monte Carlo (FPMC), which allows one to obtain realistic predictions for the dimuon production with two leading intact hadrons. In particular, predictions taking into account the CMS and LHCb acceptances are presented. Moreover, the contribution of single diffraction to the dimuon production is also estimated.
Our results demonstrate that the experimental separation of these different mechanisms is feasible. In particular, the events characterized by pairs with large squared transverse momentum are dominated by diffractive interactions, which allows one to investigate the underlying assumptions present in the description of these processes.
• ### Exclusive vector meson photoproduction in fixed - target collisions at the LHC(1802.04713) Feb. 13, 2018 hep-ph, hep-ex, nucl-ex, nucl-th The exclusive $\rho$, $\omega$ and $J/\Psi$ photoproduction in fixed - target collisions at the LHC is investigated. We estimate, for the first time, the rapidity and transverse momentum distributions of the vector meson photoproduction in $p He$, $p Ar$, $Pb He$ and $Pb Ar$ fixed - target collisions at the LHC using the STARlight Monte Carlo and present our results for the total cross sections. Predictions taking into account the kinematical range probed by the LHCb detector are also presented. Our results indicate that the experimental analysis of this process in fixed - target collisions at the LHC is feasible, which allows one to probe the QCD dynamics in a kinematical range complementary to that studied in the collider mode.
• ### $\eta_c$ production in photon - induced interactions at the LHC(1801.10501) Jan. 30, 2018 hep-ph, hep-ex In this paper we investigate the $\eta_c$ production by photon - photon and photon - hadron interactions in $pp$ and $pA$ collisions at the LHC energies. The inclusive and diffractive contributions for the $\eta_c$ photoproduction are estimated using the nonrelativistic quantum chromodynamics (NRQCD) formalism. We estimate the rapidity and transverse momentum distributions for the $\eta_c$ photoproduction in hadronic collisions at the LHC and present our estimate for the total cross sections at the Run 2 energies. A comparison with the predictions for the exclusive $\eta_c$ photoproduction, which is a direct probe of the Odderon, also is presented.
• ### Probing the BFKL dynamics in the Vector Meson Photoproduction at large -- $t$ in $pPb$ collisions at the CERN LHC(1710.06005) Dec. 6, 2017 hep-ph, hep-ex The photoproduction of vector mesons in $pPb$ collisions at LHC energies is investigated assuming that the color singlet $t$-channel exchange carries a large momentum transfer $t$. The rapidity distributions and total cross sections for the process $Pb \, p \rightarrow Pb \otimes V \otimes \,jet + X$, with $V = \rho, \, J/\Psi$ and $\otimes$ representing a rapidity gap in the final state, are estimated considering the non-forward solution of the BFKL equation at high energy and large -- $t$. A comparison with the predictions obtained at the Born level also is presented. We predict a large enhancement of the cross sections associated to the BFKL dynamics in the kinematical range probed by the LHCb Collaboration. Moreover, our results indicate that the experimental identification can be feasible at the LHC and that this process can be used to probe the BFKL dynamics.
• ### Investigating the Transverse Single Spin Asymmetry in the Inelastic $J/\Psi$ photoproduction in $p^\uparrow p$ and $p^\uparrow A$ collisions(1710.01674) Dec. 6, 2017 hep-ph, hep-ex, nucl-ex, nucl-th In this paper we propose to investigate the transverse single spin asymmetry in the inelastic $J/\Psi$ photoproduction in $p^\uparrow p$ and $p^\uparrow A$ collisions at RHIC energies. At leading order this process probes the gluon Sivers function.
We predict large values for the cross sections, which indicates that its experimental analysis is, in principle, feasible. The rapidity dependence of the single spin asymmetry is presented. We obtain that the asymmetry is strongly dependent on the model used for the gluon Sivers function and that it can be probed by the analysis of the $J/\Psi$ production at forward rapidities. Our results indicate that a future experimental analysis of this process can be useful to constrain the gluon Sivers function.
• ### Diffractive quarkonium photoproduction in $pp$ and $pA$ collisions at the LHC: Predictions of the Resolved Pomeron model for the Run 2 energies(1708.01498) Oct. 17, 2017 hep-ph, hep-ex The inclusive diffractive quarkonium photoproduction in $pp$ and $pA$ collisions is investigated considering the Resolved Pomeron Model to describe the diffractive interaction. We estimate the rapidity and transverse momentum distributions for the $J/\Psi$, $\Psi(2S)$ and $\Upsilon$ photoproduction in hadronic collisions at the LHC and present our estimate for the total cross sections at the Run 2 energies. A comparison with the predictions associated to the exclusive production also is presented. Our results indicate that the inclusive diffractive production is a factor $\gtrsim 10$ smaller than the exclusive one in the kinematical range probed by the LHC.
• ### Probing Saturation Physics in the Real Compton Scattering at Ultraperipheral $pPb$ Collisions(1707.02806) Oct. 17, 2017 hep-ph, hep-ex, nucl-ex, nucl-th The Real Compton Scattering in ultraperipheral $pPb$ collisions at RHIC and LHC energies is investigated and predictions for the squared transverse momentum ($t$) and rapidity ($Y$) distributions are presented. The scattering amplitude is assumed to be given by the sum of the Reggeon and Pomeron contributions, and the Pomeron one is described by the Color Dipole formalism taking into account the non - linear (saturation) effects in the QCD dynamics. We demonstrate that the behaviour of the cross sections at large -- $t$ and/or $Y$ is dominated by the Pomeron contribution and is strongly affected by the non -- linear effects present in the QCD dynamics. These results indicate that a future experimental analysis of this process can be useful to probe the QCD dynamics at high energies.
• ### Photon and Pomeron -- induced production of Dijets in $pp$, $pA$ and $AA$ collisions(1705.08834) Oct. 17, 2017 hep-ph, hep-ex, nucl-ex, nucl-th In this paper we present a detailed comparison of the dijet production by photon -- photon, photon -- pomeron and pomeron -- pomeron interactions in $pp$, $pA$ and ${\rm AA}$ collisions at the LHC energy. The transverse momentum, pseudo -- rapidity and angular dependencies of the cross sections are calculated at LHC energy using the Forward Physics Monte Carlo (FPMC), which allows one to obtain realistic predictions for the dijet production with two leading intact hadrons. We obtain that the $\gamma \pom$ channel is dominant at forward rapidities in $pp$ collisions and in the full kinematical range in the nuclear collisions of heavy nuclei. Our results indicate that the analysis of dijet production at the LHC can be useful to test the Resolved Pomeron model as well as to constrain the magnitude of the absorption effects.
• ### Investigating the impact of the gluon saturation effects on the momentum transfer distributions for the exclusive vector meson photoproduction in hadronic collisions(1701.04340) Jan.
16, 2017 hep-ph, hep-ex, nucl-ex, nucl-th The exclusive vector meson production cross section is one of the most promising observables to probe the high energy regime of the QCD dynamics. In particular, the squared momentum transfer ($t$) distributions are an important source of information about the spatial distribution of the gluons in the hadron and about fluctuations of the color fields. In this paper we complement previous studies on exclusive vector meson photoproduction in hadronic collisions by presenting a comprehensive analysis of the $t$ - spectrum measured in exclusive $\rho$, $\phi$ and $J/\Psi$ photoproduction in $pp$ and $PbPb$ collisions at the LHC. We compute the differential cross sections taking into account gluon saturation effects and compare the predictions with those obtained in the linear regime of the QCD dynamics. Our results show that gluon saturation suppresses the magnitude of the cross sections and shifts the position of the dips towards smaller values of $t$.
• ### Exclusive heavy vector meson photoproduction in hadronic collisions at the LHC: predictions of the Color Glass Condensate model for Run 2 energies(1612.06254) Dec. 19, 2016 hep-ph, hep-ex, nucl-ex, nucl-th In this letter we update our predictions for exclusive $J/\Psi$ and $\Upsilon$ photoproduction in proton-proton and nucleus - nucleus collisions at the Run 2 LHC energies obtained with the color dipole formalism and considering the impact parameter Color Glass Condensate model (bCGC) for the forward dipole - target scattering amplitude. A comparison with the LHCb data on rapidity distributions and photon - hadron cross sections is presented. Our results demonstrate that the current data can be quite well described by the bCGC model, which takes into account nonlinear effects in the QCD dynamics and reproduces the very precise HERA data, without introducing any additional effect or free parameter.
• ### Production of exotic charmonium in $\gamma \gamma$ interactions at hadronic colliders(1610.06604) Nov. 22, 2016 hep-ph, hep-ex, nucl-ex, nucl-th In this paper we investigate the Exotic Charmonium (EC) production in $\gamma \gamma$ interactions present in proton-proton, proton-nucleus and nucleus-nucleus collisions at the CERN Large Hadron Collider (LHC) energies as well as for the proposed energies of the Future Circular Collider (FCC). Our results demonstrate that the experimental study of these processes is feasible and can be used to constrain the theoretical decay widths and shed some light on the configuration of the considered multiquark states.
• The goal of this report is to give a comprehensive overview of the rich field of forward physics, with special attention to the topics that can be studied at the LHC. The report starts by presenting a selection of the Monte Carlo simulation tools currently available (chapter 2), then enters the rich phenomenology of QCD at low (chapter 3) and high (chapter 4) momentum transfer, while the unique scattering conditions of central exclusive production are analyzed in chapter 5. The last two experimental topics, Cosmic Ray and Heavy Ion physics, are presented in chapters 6 and 7, respectively. Chapter 8 is dedicated to the BFKL dynamics, multiparton interactions, and saturation. The report ends with an overview of the forward detectors at the LHC. Each chapter is accompanied by a comprehensive bibliography, attempting to provide the interested reader with ample opportunity for further study.
• ### Probing the diffractive production of gauge bosons at forward rapidities(1610.00779) Oct. 9, 2016 hep-ph, hep-ex The gauge boson production at forward rapidities in single diffractive events at the LHC is investigated considering $pp$ collisions at $\sqrt{s} =$ 8 and 13 TeV. The impact of gap survival effects is analysed using two different models for the soft rescattering contributions. We demonstrate that using the Forward Shower Counter Project at LHCb -- HERSCHEL, together with the Vertex Locator -- VELO, it is possible to discriminate diffractive production of the gauge bosons $W$ and $Z$ from the non-diffractive processes, and studies of the Pomeron structure and diffraction phenomenology are feasible. Moreover, we show that the analysis of this process can be useful to constrain the modelling of the gap survival effects.
• ### Exclusive vector meson photoproduction at the LHC and the FCC: A closer look on the final state(1609.09854) Sept. 30, 2016 hep-ph, hep-ex Over the past years the LHC experiments have reported experimental evidence for processes associated to photon-photon and photon-hadron interactions, showing their potential to investigate the production of low- and high-mass systems in exclusive events. In the particular case of the photoproduction of vector mesons, the experimental study of this final state is expected to shed light on the description of the QCD dynamics at small values of the Bjorken-$x$ variable. In this paper we extend previous studies for the exclusive $J/\Psi$ and $\Upsilon$ photoproduction in $pp$ collisions based on the nonlinear QCD dynamics by performing a detailed study of the final state distributions that can be measured experimentally at the LHC and at the Future Circular Collider. Predictions for the rapidity and transverse momentum distributions of the vector mesons and of final-state dimuons are presented for $pp$ collisions at $\sqrt{s} =$ 7, 13, and 100 TeV.
• ### Diffractive $\rho$ production at small $x$ in future Electron - Ion Colliders(1510.01512) June 28, 2016 hep-ph, hep-ex, nucl-ex, nucl-th The future Electron - Ion ($eA$) Collider is expected to probe the high energy regime of the QCD dynamics, with the exclusive vector meson production cross section being one of the most promising observables. In this paper we complement previous studies of exclusive processes by presenting a comprehensive analysis of diffractive $\rho$ production at small $x$. We compute the coherent and incoherent cross sections taking into account non-linear QCD dynamical effects and considering different models for the dipole - proton scattering amplitude and for the vector meson wave function. The dependence of these cross sections on the energy, photon virtuality, nuclear mass number and squared momentum transfer is analysed in detail. Moreover, we compare the non-linear predictions with those obtained in the linear regime. Finally, we also estimate the exclusive photon, $J/\Psi$ and $\phi$ production and compare with the results obtained for $\rho$ production. Our results demonstrate that the analysis of diffractive $\rho$ production in future electron - ion colliders will be important to understand the non-linear QCD dynamics.
• ### On the rapidity dependence of the average transverse momentum in hadronic collisions(1510.04737) May 10, 2016 hep-ph, hep-ex, nucl-ex, nucl-th The energy and rapidity dependence of the average transverse momentum $\langle p_T \rangle$ in $pp$ and $pA$ collisions at RHIC and LHC energies is estimated using the Colour Glass Condensate (CGC) formalism. We update previous predictions for the $p_T$ - spectra using the hybrid formalism of the CGC approach and two phenomenological models for the dipole - target scattering amplitude. We demonstrate that these models are able to describe the RHIC and LHC data for the hadron production in $pp$, $dAu$ and $pPb$ collisions at $p_T \le 20$ GeV. Moreover, we present our predictions for $\langle p_T \rangle$ and demonstrate that the ratio $\langle p_{T}(y)\rangle / \langle p_{T}(y = 0)\rangle$ decreases with the rapidity and has a behaviour similar to that predicted by hydrodynamical calculations.
• ### Exclusive processes with a leading neutron in $ep$ collisions(1512.06594) April 3, 2016 hep-ph, hep-ex, nucl-th In this paper we extend the color dipole formalism to the study of exclusive processes associated with a leading neutron in $ep$ collisions at high energies. The exclusive $\rho$, $\phi$ and $J/\Psi$ production, as well as the Deeply Virtual Compton Scattering, are analysed assuming a diffractive interaction between the color dipole and the pion emitted by the incident proton. We compare our predictions with the HERA data on $\rho$ production and estimate the magnitude of the absorption corrections. We show that the color dipole formalism is able to describe the current data. Finally, we present our estimate for the exclusive cross sections which can be studied at HERA and in future electron-proton colliders.
• ### Phenomenological implications of the intrinsic charm in the $Z$ boson production at the LHC(1512.06007) April 3, 2016 hep-ph, hep-ex In this paper we study the $Z$, $Z+$ jet, $Z+c$ and $Z+c+$ jet production in $pp$ collisions at the LHC considering different models for an intrinsic charm content of the proton. We analyse the impact of the intrinsic charm in the rapidity and transverse momentum distributions for these different processes. Our results indicate that, differently from the other processes, the $Z+c$ cross section is strongly affected by the presence of the intrinsic charm. Moreover, we propose the analysis of the ratios $R(Z+c/Z) \equiv \sigma(Z+c)/\sigma(Z)$ and $R(Z+c/Z+\mbox{jet}) \equiv \sigma(Z+c)/\sigma(Z+\mbox{jet})$ and demonstrate that these observables can be used as a probe of the intrinsic charm.
• ### Bottom production in Photon and Pomeron -- induced interactions at the LHC(1511.07688) April 3, 2016 hep-ph, hep-ex In this paper we present a detailed comparison of the bottom production in gluon -- gluon, photon -- gluon, photon -- photon, pomeron -- gluon, pomeron -- pomeron and pomeron -- photon interactions at the LHC. The transverse momentum, pseudo -- rapidity and $\xi$ dependencies of the cross sections are calculated at LHC energy using the Forward Physics Monte Carlo (FPMC), which allows one to obtain realistic predictions for the bottom production with one or two leading intact protons. Moreover, predictions for the kinematical range probed by the LHCb Collaboration are also presented. Our results indicate that the analysis of the single diffractive events is feasible using the Run I LHCb data.
• ### Probing the gluon density of the proton in the exclusive photoproduction of vector mesons at the LHC: A phenomenological analysis(1511.00494) April 3, 2016 hep-ph, hep-ex The current uncertainty on the gluon density extracted from the global parton analysis is large in the kinematical range of small values of the Bjorken - $x$ variable and low values of the hard scale $Q^2$. An alternative to reduce this uncertainty is the analysis of the exclusive vector meson photoproduction in photon - hadron and hadron - hadron collisions. This process offers a unique opportunity to constrain the gluon density of the proton, since its cross section is proportional to the gluon density squared. In this paper we consider current parametrizations for the gluon distribution and estimate the exclusive vector meson photoproduction cross section at HERA and the LHC using the leading logarithmic formalism. We perform a fit of the normalization of the $\gamma h$ cross section and the value of the hard scale for the process and demonstrate that the current LHCb experimental data are better described by models that assume a slow increase of the gluon distribution at small - $x$ and low $Q^2$.
• ### Double vector meson production in $\gamma \gamma$ interactions at hadronic colliders(1512.07482) Dec. 23, 2015 hep-ph, hep-ex, nucl-ex, nucl-th In this paper we revisit the double vector meson production in $\gamma \gamma$ interactions at heavy ion collisions and present, for the first time, predictions for the $\rho\rho$ and $J/\Psi J/\Psi$ production in proton -- nucleus and proton -- proton collisions. In order to obtain realistic predictions for rapidity distributions and total cross sections for the double vector production in ultra peripheral hadronic collisions, we take into account the description of the $\gamma \gamma \rightarrow VV$ cross section at low energies as well as its behaviour at large energies, associated to the gluonic interaction between the color dipoles. Our results demonstrate that the double $\rho$ production is dominated by the low energy behaviour of the $\gamma \gamma \rightarrow VV$ cross section. In contrast, for the double $J/\Psi$ production, the contribution associated to the description of the QCD dynamics at high energies contributes significantly, mainly in $pp$ collisions. Predictions for the RHIC, LHC, FCC and CEPC - SPPC energies are shown.
• ### Investigating the effects of the QCD dynamics in the neutrino absorption by the Earth's interior at ultrahigh energies(1510.03186) Nov. 22, 2015 hep-ph, hep-ex, astro-ph.HE The opacity of the Earth to incident ultra high energy neutrinos is directly connected with the behaviour of the neutrino - nucleon ($\sigma^{\nu N}$) cross sections in an utterly unexplored kinematic range. In this work we investigate how the uncertainties in $\sigma^{\nu N}$ due to the different QCD dynamic models modify the neutrino absorption as the neutrinos travel across the Earth. In particular, we compare the predictions of two extreme scenarios for the high energy behaviour of the cross section, which are consistent with the current experimental data. The first scenario considered is based on the solution of the linear DGLAP equations at small-$x$ and large-$Q^2$, while the second one takes into account the unitarity effects in the neutrino - nucleon cross section by the imposition of the Froissart bound behaviour in the nucleon structure functions at large energies.
Our results indicate that the probability of absorption and the angular distribution of neutrino events are sensitive to the QCD dynamics at ultra high energies.
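As a rough illustration of the kind of calculation behind these statements, consider the standard exponential attenuation of a neutrino flux along a column of matter. This is only a minimal sketch, not the paper's method: the cross section value and the column depth standing in for an Earth-crossing chord are placeholder numbers, not results from the abstract.

```python
import math

N_A = 6.022e23  # Avogadro's number: roughly the number of nucleons per gram

def absorption_probability(sigma_cm2, column_depth_g_cm2):
    """Probability that a neutrino interacts somewhere along its path.

    sigma_cm2          -- neutrino-nucleon cross section in cm^2 (model dependent)
    column_depth_g_cm2 -- integrated matter density along the path in g/cm^2
    """
    return 1.0 - math.exp(-sigma_cm2 * N_A * column_depth_g_cm2)

# Placeholder values for illustration only: a cross section of 1e-33 cm^2 and
# the rough column depth of a diameter-crossing chord through the Earth
# (~7e9 g/cm^2, using a mean density of ~5.5 g/cm^3).
print(absorption_probability(1e-33, 7e9))   # ~0.985: the Earth is nearly opaque
```

The point the abstract makes is visible here: since the exponent is linear in $\sigma^{\nu N}$, different QCD models for the cross section translate directly into different absorption probabilities and hence different angular distributions of surviving events.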
2019-12-12 08:49:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8561076521873474, "perplexity": 1112.9208092075148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540542644.69/warc/CC-MAIN-20191212074623-20191212102623-00206.warc.gz"}
https://projecteuclid.org/info/euclid.nmj
Nagoya Mathematical Journal Beginning in 2016, the Nagoya Mathematical Journal changed publishers. For information about accessing current content, please visit this Nagoya Mathematical Journal webpage. Top downloads over the last seven days: • On square integrable martingales • Class-number problems for cubic number fields • On the power series representation of smooth conformal martingales • A classification of irreducible prehomogeneous vector spaces and their relative invariants • Duality between $D(X)$ and $D(\hat X)$ with its application to Picard sheaves • ISSN: 0027-7630 (print) • Publisher: Nagoya Mathematical Journal • Discipline(s): Mathematics • Full text available in Euclid: 1950--2015 • Access: Articles older than 4 years are open • Euclid URL: https://projecteuclid.org/nmj
2019-05-19 12:33:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4127084016799927, "perplexity": 12183.115881491112}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254882.18/warc/CC-MAIN-20190519121502-20190519143502-00342.warc.gz"}
https://www.ethnophysics.org/chapter3/quarks/positive-quarks/
positive quarks represent blue visual sensations. The ordinary sort of positive-quark represents a blue sensation that is perceived on the right side. On the other hand, positive anti-quarks are objectified from blue sensations experienced on the left. So these quarks are defined by two different kinds of experience: visual sensation and somatic sensation. Different experiences are described by comparison with various reference sensations. Since antiquity these reference sensations have been objectified as seeds. Blue sensations are objectified as positive-seeds, symbolized using a capital Roman letter G without serifs. Somatic sensations felt on the right or left sides are objectified as ordinary or odd seeds; ordinary seeds are noted by O. Positive quarks can be defined from pairs of seeds because they represent pairs of sensations. The matchmaking is formally stated as follows. A seed-aggregate composed from one positive-seed and one ordinary-seed is called an ordinary positive quark. It is represented symbolically using the lowercase Roman letter g without serifs, so the definition may be expressed as G + O → g. The positive anti-quark is defined as the union of one positive-seed with an odd-seed. You can usually click on icons to go to a page with more detail. positive quarks have the following attributes.

## positive Quark Characteristics

| characteristic | value |
| --- | --- |
| charge quantum number | zero |
| lepton quantum number | ±1/8 |
| baryon quantum number | zero |
| angular momentum number | zero |
| temperature | -1,185 ℃ |
| internal energy | 298 MeV |
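Since the page defines a quark purely as a pairing of two seeds plus a table of attributes, the structure can be captured in a few lines of code. This is a minimal sketch of my own; the class names and dictionary layout are illustrative, and only the symbols G, O, g and the tabulated values come from the text above.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Seed:
    kind: str  # "positive" (G), "ordinary" (O) or "odd"

@dataclass(frozen=True)
class Quark:
    seeds: Tuple[Seed, Seed]  # a quark objectified as a pair of seeds

def ordinary_positive_quark() -> Quark:
    # g: one positive-seed (G) joined with one ordinary-seed (O)
    return Quark((Seed("positive"), Seed("ordinary")))

def positive_antiquark() -> Quark:
    # the union of one positive-seed with an odd-seed
    return Quark((Seed("positive"), Seed("odd")))

# Attributes tabulated above for positive quarks.
POSITIVE_QUARK_ATTRIBUTES = {
    "charge quantum number": 0,
    "lepton quantum number": "±1/8",
    "baryon quantum number": 0,
    "angular momentum number": 0,
    "temperature (°C)": -1185,
    "internal energy (MeV)": 298,
}

print(ordinary_positive_quark())
```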
2021-04-23 00:06:47
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8255912661552429, "perplexity": 4104.352273790069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039563095.86/warc/CC-MAIN-20210422221531-20210423011531-00012.warc.gz"}
http://mathoverflow.net/questions/56462/asymptotic-upper-bounds-for-some-convolution-sums
# Asymptotic upper bounds for some convolution sums Roman Holowinsky proved (see arXiv:0809.1640v3, Theorem 2, page 3) some nice asymptotic upper bounds for sums $$S(d,x) = \sum_{1 \leq n \leq x} \vert f(n)g(n+d) \vert$$ for given multiplicative functions $f,g$ and a given fixed integer $d$ with $0 < \vert d \vert \leq x.$ Question: What is known about the analogous convolution sums (which, however, do not seem to be a generalization of the above sums) $$S(a,b,h,x) = \sum_{1 \leq n,m \leq x,\; an + bm=h} \vert f(n)g(m) \vert$$ for given multiplicative functions $f,g$, fixed positive integers $$a >0,\;b >0,$$ real $x$, and a fixed appropriate integer $h$? Thanks! ## 1 Answer There's a lot of work on such problems. One significant paper is Peter Shiu's "A Brun-Titchmarsh theorem for multiplicative functions" in Crelle (1980). See also Mohan Nair's paper in Acta Arithmetica 1992, which is available at http://matwbn.icm.edu.pl/ksiazki/aa/aa62/aa6234.pdf
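For small parameters, $S(a,b,h,x)$ can be evaluated directly from its definition, which is sometimes useful as a sanity check against asymptotic bounds. Below is a brute-force sketch; the divisor-counting function is just an illustrative stand-in for the multiplicative functions $f$ and $g$, not part of the question.

```python
def divisor_count(n):
    """d(n): a simple multiplicative function, used here only as a stand-in."""
    count = 0
    i = 1
    while i * i <= n:
        if n % i == 0:
            count += 2 if i * i != n else 1
        i += 1
    return count

def S(a, b, h, x, f=divisor_count, g=divisor_count):
    """Brute-force S(a,b,h,x) = sum over 1<=n,m<=x with a*n+b*m=h of |f(n)g(m)|."""
    total = 0
    for n in range(1, int(x) + 1):
        rem = h - a * n          # for each n there is at most one valid m
        if rem <= 0 or rem % b != 0:
            continue
        m = rem // b
        if 1 <= m <= x:
            total += abs(f(n) * g(m))
    return total

print(S(2, 3, 100, 50))  # small example
```

Note that the linear constraint $an + bm = h$ makes the sum one-dimensional: for each $n$ the value of $m$ is determined, which is why the sketch runs in $O(x)$ evaluations of $f$ and $g$.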
2016-07-24 22:25:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.97249436378479, "perplexity": 1207.8205014552166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824185.14/warc/CC-MAIN-20160723071024-00177-ip-10-185-27-174.ec2.internal.warc.gz"}
http://ms-lambert.blogspot.com/2013/10/weekly-class-summary-oct-7-11.html
## Friday, October 11, 2013 ### Weekly Class Summary: Oct. 7-11 SOLs Covered:  SOL 7.13 Substitution; SOL 7.1 Negative Exponents & Scientific Notation Math Dictionary Sections:  7 Order of Operations; 8 Scientific Notation Upcoming Assessments:  Comparing/Ordering FDPSciNot & Review Quiz (Fri. 10/18) In my humble opinion, this has been an awesomely magical week!  We started the week off by reviewing algebraic substitution as it applies to our previous work from last week with the order of operations.  I've been teaching the students my math magical (forevermore known as "mathical") tricks as I showed them how to make variables "disappear" only to be "turned into" numbers!   (Imagine a mysterious voice as you read that.)  Personally, I've never had so much fun teaching thus far in my career, and I'm pretty sure the students got into the goofiness with me, some possibly not even realizing we were still learning at the same time.  The kids and I really hammed it up and had a few good laughs while they did an awesome job applying their own "mathical abilities" to the problems. Once we wrapped up our quick review of substitution, we jumped right into working with negative exponents as well as scientific notation.   We continued working our mathical abilities here by turning negative exponents into positives by rewriting them as fractions (ex. ${ 2 }^{ -3 }=\frac { 1 }{ { 2 }^{ 3 } }=\frac {1}{8}$) and by shrinking down really large numbers (i.e. the approx. distance to the sun) and really small numbers (i.e. the size of a grain of sand) by writing them in scientific notation (i.e. $a\times { 10 }^{ x }$).  In their groups, students practiced putting numbers written in scientific notation in order from least to greatest as well as converting them back into standard form. Today after a quick review, students took a quiz on everything from the past two weeks: order of operations, substitution, negative exponents, and scientific notation, along with a few review questions.   Some students will again need to finish their test on Monday, but this is in part because they are starting each quiz or test by making their Smart Charts again, which we've been practicing every week.  Students are asked to keep practicing their charts at home as part of their studying each week in hopes that they'll have them memorized by the end of the year.  By the time they take the SOLs, if they have the Smart Chart memorized, they'll be able to create it (completely from memory) right before they take the test.  They will be able to then use it throughout the test to help give them reminders when they get stuck on questions, as well as hopefully alleviating some of their test anxieties.  With each new unit, we're coming up with short mnemonics to add to the chart, so they should have something new to practice each week.
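For anyone who wants to check the "mathical" tricks outside class, here is a small Python sketch (my own illustration, not part of the class materials). The distance and grain-size figures are rough approximations, used only to show the notation.

```python
from fractions import Fraction

# Negative exponents "turn into" fractions: 2^-3 = 1/2^3 = 1/8
print(2 ** -3)               # 0.125, the decimal form
print(Fraction(2) ** -3)     # 1/8, the fraction form we wrote in class

# Scientific notation shrinks very large and very small numbers:
distance_to_sun_m = 149_600_000_000   # approximate, in meters
grain_of_sand_m = 0.0005              # approximate diameter, in meters
print(f"{distance_to_sun_m:.3e}")     # 1.496e+11, i.e. 1.496 x 10^11
print(f"{grain_of_sand_m:.1e}")       # 5.0e-04,  i.e. 5.0 x 10^-4
```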
2017-06-23 18:51:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.53310626745224, "perplexity": 1898.3709523176703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320130.7/warc/CC-MAIN-20170623184505-20170623204505-00688.warc.gz"}
http://dzielnica24.pl/na7uoduo/do-higher-frequencies-travel-further-f800d2
# do higher frequencies travel further

January 12, 2021

The short answer is no: it is not true that higher frequencies always travel or penetrate further than lower ones. Several different effects are at work, and which one dominates depends on the situation.

Penetration depends on both the frequency and the material. Gamma rays and X-rays (very high frequency, high energy) penetrate almost anything, and so do very low frequency (ELF) signals; in between there are so many factors that it is hard to write general rules. For electromagnetic radiation in the atmosphere (AM/FM radio, visible light, IR, UV) there are certain atmospheric windows where radiation propagates very easily; the physics is complicated, but there is a graph at http://en.wikipedia.org/wiki/File:Atmospheric_electromagnetic_opacity.svg. When an electromagnetic wave encounters a barrier, it can be reflected, pass through (transmittance), or be absorbed (absorbance), with the absorbed energy typically generating heat. Water absorbs strongly near 2.4 GHz, which is why microwave ovens typically operate around that frequency. Attenuation, the gradual loss of a wave's energy, will in most cases happen over distance, and how quickly depends on the medium the wave is travelling in.

In a perfect vacuum all electromagnetic waves travel at the same speed; frequency is nothing but the number of waves generated per second. Since $c = f\lambda$, the wavelength must get shorter as the frequency increases, and vice versa, to keep the speed constant. Long-wavelength (low-frequency) waves diffract around obstacles more easily, which is why they can find a path around partial obstructions and reach non-line-of-sight (NLOS) areas. It is also why AM broadcast signals, which in America sit at lower carrier frequencies than FM, can follow the curve of the Earth and reflect off the ionosphere, travelling much farther than FM signals, and why low frequencies are used for fog horns. Strictly speaking, AM and FM refer to amplitude modulation and frequency modulation, not to frequency bands: either can be used at any carrier frequency. AM is simpler to encode and decode, FM results in a clearer signal and can also encode stereo broadcasts, and higher carrier frequencies allow a wider band for modulating signals.

As for the urban question: no, 2.4 GHz does not travel further than 433 MHz where walls must be penetrated. Higher frequencies are absorbed more by common building materials and are more sensitive to reflection, which is important for mobile devices. Higher frequencies also suffer greater free-space path loss; this is not due to attenuation of the wave itself but to how the physics of antennas works. Many higher-frequency transmitters are in addition designed with frequency hopping and encryption of some sort.

Sound behaves analogously in some respects. Frequency does not change during propagation, and high- and low-pitched sounds travel at the same speed in a given medium; a high-frequency sound has a shorter wavelength than a low-frequency sound, not a greater one. Low-frequency (long-wavelength) sound waves in the atmosphere (or water, as any whale can attest) are absorbed less and bend around corners better, which is why, when there is a party going on nearby, all you hear through the walls is the bass.
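To put one of the points above in numbers, here is a minimal free-space path loss sketch comparing the two carrier frequencies from the question. It ignores walls, reflections and antenna gains, so it only captures the antenna-physics part of the story; the 100 m distance is an arbitrary example.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

for f in (433e6, 2.4e9):
    print(f"{f / 1e6:6.0f} MHz: {fspl_db(100, f):5.1f} dB over 100 m")
```

Running this gives roughly 65 dB at 433 MHz versus 80 dB at 2.4 GHz, about a 15 dB penalty for the higher frequency before any wall losses are counted.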
2021-08-04 09:18:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3751991093158722, "perplexity": 2513.1716211849994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154798.45/warc/CC-MAIN-20210804080449-20210804110449-00119.warc.gz"}
https://math.stackexchange.com/questions/2903443/mean-and-variance-of-non-linear-function-of-multiple-gaussian-distributed-variab
# Mean and variance of non-linear function of multiple Gaussian distributed variables Given a random variable vector $X=[X_1,...,X_n]$, where $X_i \sim N(\mu_i,\sigma_i^2)$, and a non-linear function $f(X) \in \Re^1$, is there a generalized method for finding $E(f),Var(f)$? Specifically, I am interested in finding $var(f)$ where $$f([\theta_1, \theta_2, \theta_3]) = -\cos\left(\theta_{1}\right)\,\left(\cos\left(\theta_{2}\right)-\cos\left(\theta_{2}-\theta_{3}\right)\right)$$ Is this analytically determinable? Or would a Monte Carlo based approximation be more appropriate? Thanks! • You can use the multivariate delta method. – Joda Sep 3 '18 at 4:53
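A Monte Carlo estimate is straightforward for this particular $f$, and it can be cross-checked against the delta-method approximation the comment suggests. Below is a minimal sketch; the means and standard deviations are placeholders, since the question gives no numerical values.

```python
import numpy as np

def f(theta):
    t1, t2, t3 = theta
    return -np.cos(t1) * (np.cos(t2) - np.cos(t2 - t3))

rng = np.random.default_rng(0)
mu = np.array([0.1, 0.5, -0.3])      # placeholder means
sigma = np.array([0.05, 0.1, 0.02])  # placeholder standard deviations

# Draw independent Gaussian samples for each theta_i and push them through f.
samples = rng.normal(mu, sigma, size=(1_000_000, 3)).T
values = f(samples)
print("E[f]   ~", values.mean())
print("Var(f) ~", values.var())
```

The delta method would instead linearize $f$ around $\mu$ and give $Var(f) \approx \nabla f(\mu)^T \Sigma \, \nabla f(\mu)$; for small $\sigma_i$ the two estimates should agree closely.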
2019-06-19 16:43:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8236269950866699, "perplexity": 502.8359787890991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999003.64/warc/CC-MAIN-20190619163847-20190619185847-00354.warc.gz"}
https://eurocentrica.ro/wp-content/uploads/alsobia-care-topwgt/viewtopic.php?cc496f=where-is-the-hundredths-place-in-a-decimal
So 5.12 means 5 whole dollars and 12 hundredths of a dollar, or 12 cents. We use a decimal point to separate the whole from the parts of a whole, and decimal place value is based on powers of 10: the first digit after the decimal point is the tenths place, the second is the hundredths place, and the third is the thousandths place. The hundredths place is therefore nothing but the position of the second digit after the decimal point. In 1.2347 the digit in the hundredths place is 3; in 2.98 and in 54.18 it is 8; in .675 the 7 holds the hundredths place; and in 5.078 the digit in the hundredths place is a 7, so the quiz statement about 5.078 is TRUE. Do not confuse this with the hundreds place, which is the second digit before the decimal point: in 5678.92 the digit in the hundreds place is 6, while in 52.761 the digit in the tens place is 5 and the digit in the hundredths place is 6.

One whole unit can be split into 100 equal parts, each part representing one-hundredth, written as the fraction $\frac{1}{100}$ and read as zero point zero one, 0.01. The fraction 32/100 is read as thirty-two hundredths and represented in decimals as 0.32; it is more than 30/100 (or 3/10) and less than 40/100 (or 4/10), so on a number line it would be placed near that value. To represent $\frac{35}{100}$, take a square sheet representing one whole, divide it into 100 equal parts, and colour 35 of them; we write this as 0.35 in decimal form, where 3 represents 3 tenths and 5 represents 5 hundredths, and in the place value chart the 3 is written in the tenths column and the 5 in the hundredths column. Like mixed numbers, such decimals can have a whole part and a decimal part: 13.76 is read as "13 and 7 tenths and 6 hundredths", where the whole number part 13 is made up of one ten and three ones.

A decimal does not change when zeros are added at the end: 3.45 is equal to 3.450 is equal to 3.4500, and so on. Decimals with equal numbers of decimal places are called like decimals; unlike decimals can be changed to like decimals by adding as many zeros as required, so 13.183, 341.43 and 1.04 become 13.183, 341.430 and 1.040. Decimal numbers can also be expressed in expanded form using the place value chart: 5,325 in expanded notation is 5,000 + 300 + 20 + 5, and 389/1000 = .389 is read as three hundred eighty-nine thousandths.

The same place value ideas carry over to arithmetic. To add or subtract decimal fractions, convert them to like decimals, write the numbers vertically so that the decimal points lie on one vertical line, and proceed as with whole numbers, placing the decimal point in the answer directly under the others. To multiply decimals, take the two numbers as whole numbers (remove the decimal points) and multiply, then place the decimal point in the product so that it has as many decimal places as the two factors combined. To multiply or divide a decimal by 10, 100 or 1000, move the decimal point to the right or to the left by as many places as there are zeros in the multiplier or divisor.

Rounding decimals is very similar to rounding other numbers (see the sketch below). To round to the nearest hundredth, observe the thousandths place: if it is five through nine, the hundredths digit is increased by one; if it is four or less, it is dropped and the hundredths digit does not change. So rounding 3.141 to the nearest hundredth gives 3.14, and rounding 0.843 gives 0.84. To round to the nearest tenth, check the hundredths digit in the same way: 0.598 rounds up to 0.6, 0.549 rounds down to 0.5, and 0.74 rounds to 0.7. With money amounts, choose hundredths to round to the nearest cent and ones to round to the nearest dollar.
And in the worksheet on subtraction of decimal fractions are called like decimals quotient, same like whole. The foot of this Page to common core standards for Grades K-8 of second digit before decimal. On decimals involving order of operations decimal does not change 20-25 minutes hundredths using decimals we use a point! Number of digits in tens place and ten thousandths place is equal 3.4500!: ( 4F6b ) Recognise and write decimals using hundredths: Exclusive, limited offer! Place value columns nearest cent … decimal place decimal fraction recurring decimal equivalent tenth. To, the hundredths in decimal notation, as outlined on slides 8-10 of the decimal point first 2020-04-20. Decimal form like mixed numbers, such decimals have a whole part and a decimal point Fact about decimals decimal! Numbers and the hundredths place domains *.kastatic.org and *.kasandbox.org are unblocked ( i ) take the two as... ( 4F6b ) Recognise and write decimals using hundredths: Exclusive, limited time offer may written! The two numbers as whole numbers using fractions, and more every month one, two places, get. Can be used math questions given in the image below and 190 times this week and times... Exponential of 10 and 100 nine is the second digit tells you how many hundredths are! Its decimal number square sheet the internet knowledge and confidence ( a ) math worksheet library you! Decimals up to hundredths ) practice with your class today ) take the two numbers as whole numbers as numbers... Whole from the decimal one, two places away from the decimal point number in base ten values is... Subtract whole numbers or hundredths 4/10 ): 0 ) as decimal numbers to hundredths. And hundredth place, as outlined on slides 11-13 is called the tenths, hundredths so. Notation, as outlined on slides 8-10 of the decimal ) and multiply September resource. Places remains unchanged the hundreds place is two places to the right, it is the second number to right. As thirty-nine hundredths change when zeros are added at the foot of decimal. The quotient, same like dividing whole numbers and has been viewed 40 times this month lasting about minutes! Is 13 and 7 tenths and 1 hundredth, or 0.5 + 0.01 on dividing decimals worksheets Page Math-Drills.com! So 5.12 means 5 whole dollars and 12 hundredths of a larger resource pack - grade 4 the cards... The Converter at the end digit tells you how many hundredths there in... The whole number is two places, we get hundredths by simply dividing each interval of into! Remains unchanged divided by 42 competency with decimal numbers to the right...! Digit on the left side is 13 '', that is last... Is checked whether it is closer to 30/100 so it would be on! Math worksheet was created on 2014-12-05 and has been viewed 6 times month. Like dividing whole numbers unit can be expressed in expanded notation form is 5,000 + 300 + 20 5! Care about it chart 3 is written in the worksheet on subtraction of decimal division to the right of following! Activity / activities sheet to develop children 's knowledge of fraction and in decimal form, where 3 3... An elementary math practice website. ) to show this of PEMDAS Rule external on... Separate the whole from the place value to round a number line, are. Citroen Berlingo Xl Van, Adjustment Of Status Lawyer Near Me, Reset Service Engine Soon Light Nissan Maxima, Forever Lyrics Kari Jobe Chords, Land Rover Wolf For Sale, Burgundy Bouquet Wedding, Autonomous Desk Manual, Which Direction Should You Roll A Ceiling,
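The rounding rule above can be checked with a few lines of Python. This is only an illustrative sketch: it uses the decimal module with half-up rounding, which matches the "five or more rounds up" rule taught here (Python's built-in round() breaks ties differently):

from decimal import Decimal, ROUND_HALF_UP

def round_to(value, place):
    # Quantize a decimal string to the given place value, rounding halves up,
    # exactly as the "5 or greater rounds up" rule says.
    return Decimal(value).quantize(Decimal(place), rounding=ROUND_HALF_UP)

print(round_to("0.74", "0.1"))    # 0.7  : nearest tenth
print(round_to("0.843", "0.01"))  # 0.84 : thousandths digit 3 is dropped
print(round_to("0.598", "0.01"))  # 0.60 : thousandths digit 8 rounds up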
2021-06-20 03:35:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40281957387924194, "perplexity": 1316.2492357257815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487655418.58/warc/CC-MAIN-20210620024206-20210620054206-00188.warc.gz"}
http://www.classiccat.net/dictionary/figured_bass.php
## Figured bass

Melody from the opening of Henry Purcell's "Thy Hand, Belinda", Dido and Aeneas (1689), with figured bass below.

Figured bass, or thoroughbass, is a kind of integer musical notation used to indicate intervals, chords, and nonchord tones in relation to a bass note. Figured bass is closely associated with basso continuo, an accompaniment used in almost all genres of music in the Baroque period, though rarely in modern music. Other systems for denoting or representing chords include:[1] plain staff notation, used in classical music; Roman numerals, commonly used in harmonic analysis;[2] macro symbols, sometimes used in modern musicology; and various names and symbols used in jazz and popular music.

## Basso continuo

Basso continuo parts, almost universal in the Baroque era (1600-1750), provided the harmonic structure of the music. The phrase is often shortened to continuo, and the instrumentalists playing the continuo part, if more than one, are called the continuo group. The titles of many Baroque works make mention of the continuo section, such as J. S. Bach's Concerto for 2 violins, strings and continuo in D minor.

The makeup of the continuo group is often left to the discretion of the performers, and practice varied enormously within the Baroque period. At least one instrument capable of playing chords must be included, such as a harpsichord, organ, lute, theorbo, guitar, or harp. In addition, any number of instruments which play in the bass register may be included, such as cello, double bass, bass viol, or bassoon. The most common combination, at least in modern performances, is harpsichord and cello for instrumental works and secular vocal works, such as operas, and organ for sacred music. Only rarely in the Baroque period did the composer specifically request a certain instrument (or instruments) to play the continuo. In addition, the very makeup of certain works seems to require certain kinds of instruments (for instance, Vivaldi's Stabat Mater seems to require an organ, and not a harpsichord).

The keyboard (or other chording instrument) player realizes a continuo part by playing, in addition to the indicated bass notes, upper notes to complete chords, either determined ahead of time or improvised in performance. The player can also "imitate" the soprano (the name for the solo instrument or singer) and elaborate on themes in the soprano musical line. The figured bass notation, described below, is a guide, but performers are also expected to use their musical judgment and the other instruments or voices as a guide. Modern editions of music usually supply a realized keyboard part, fully written out for the player, eliminating the need for improvisation. With the rise of historically informed performance, however, the number of performers who improvise their parts, as Baroque players would have done, has increased.

Basso continuo, though an essential structural and identifying element of the Baroque period, continued to be used in many works, especially sacred choral works, of the classical period (up to around 1800). An example is C. P. E. Bach's Concerto in D minor for flute, strings and basso continuo. Examples of its use in the 19th century are rarer, but they do exist: masses by Anton Bruckner, Beethoven, and Franz Schubert, for example, have a basso continuo part for an organist to play.
## Figured bass notation

A part notated with figured bass consists of a bass line notated with notes on a musical staff, plus added numbers and accidentals beneath the staff indicating at what intervals above the bass the notes should be played, and therefore which inversions of which chords are to be played. The phrase tasto solo indicates that only the bass line (without any upper chords) is to be played for a short period, usually until the next figure is encountered. Composers were inconsistent in the usages described below. Especially in the 17th century, the numbers were omitted whenever the composer thought the chord was obvious. Early composers such as Claudio Monteverdi often specified the octave by the use of compound intervals such as 10, 11, and 15.

Contemporary figured bass, as taught at university level, may be summarized as follows for memorization.

• root position = blank
• 1st inversion = 6
• 2nd inversion = 6/4

For dominant structures:

• root position = 7
• 1st inversion = 6/5
• 2nd inversion = 4/3
• 3rd inversion = 4/2

### Numbers

The numbers indicate the number of scale steps above the given bass line that a note should be played. For example, if the bass note is a C and the figures are 4 and 6, notes a fourth and a sixth above it should be played, that is an F and an A. In other words, the second inversion of an F major chord is to be played.

In cases where the numbers 3 or 5 would normally be indicated, these are usually (though not always) left out, owing to the frequency with which these intervals occur. For example, a note with no numbers accompanying it has both the 3 and the 5 omitted: notes a third above and a fifth above should be played, in other words a root position chord. A note carrying only a 6 indicates that a note a sixth above it should be played, with the 3 omitted, in other words a first-inversion chord. A note with only a 7 accompanying it has, as in the first case, both the 3 and the 5 omitted; the 7 indicates the chord is a seventh chord. The performer may choose which octave to play the notes in and will often elaborate them in some way rather than play only chords, depending on the tempo and texture of the music.

Sometimes, other numbers are omitted: a 2 on its own or 4/2 indicates 6/4/2, for example.

Sometimes the figured bass number changes but the bass note itself does not. In these cases the new figures are written wherever in the bar they are meant to occur. When the bass note changes but the notes in the chord above it are to be held, a line is drawn next to the figure or figures to indicate this; the line extends for as long as the chord is to be held.

### Accidentals

When an accidental is shown on its own without a number, it applies to the note a third above the lowest note; most commonly, this is the third of the chord. Otherwise, if a number is shown, the accidental affects the said interval. Sometimes the accidental is placed after the number rather than before it. Alternatively, a cross placed next to a number indicates that the pitch of that note should be raised by a semitone (so that if it is normally a flat it becomes a natural, and if it is normally a natural it becomes a sharp). A different way to indicate this is to draw a bar through the number itself; these notations all indicate the same thing.
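These figure-to-interval rules are mechanical enough to sketch in code. The following is a minimal, purely diatonic illustration (an assumption-laden sketch, not a historical realization method: it ignores accidentals, octave placement, voicing and doubling), with the figure table mirroring the summary given above:

# Common figured-bass signatures and the intervals they imply, counted
# inclusively in scale steps above the bass (a blank figure means 3 and 5).
FIGURES = {
    "":    (3, 5),     # root-position triad
    "6":   (3, 6),     # first inversion
    "6/4": (4, 6),     # second inversion
    "7":   (3, 5, 7),  # root-position seventh chord
    "6/5": (3, 5, 6),  # first-inversion seventh chord
    "4/3": (3, 4, 6),  # second-inversion seventh chord
    "4/2": (2, 4, 6),  # third-inversion seventh chord (sometimes just "2")
}

def realize(bass_degree, figure=""):
    # Return the scale degrees (1-7) sounding above the given bass degree.
    return [(bass_degree - 1 + interval - 1) % 7 + 1
            for interval in FIGURES[figure]]

# A C bass (degree 1 in C major) carrying 6/4 figures yields degrees 4 and 6,
# i.e. F and A: the second inversion of an F major chord, as in the example above.
print(realize(1, "6/4"))  # [4, 6]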
When sharps or flats are used with key signatures they may have a slightly different meaning, especially in 17th-century music. A sharp might be used to cancel a flat in the key signature, or vice versa, instead of a natural sign.

Example of figured bass in context: taken from "Beschränkt, ihr Weisen", by J.S. Bach (R. 47/69).

## History

The origins of basso continuo practice are somewhat unclear. Improvised organ accompaniments for choral works were common by the late 16th century, and separate organ parts showing only a bass line date back to at least 1587. In the mid-16th century, some Italian church composers began to write polychoral works. These pieces, for two or more choirs, were created in recognition of particularly festive occasions, or else to take advantage of certain architectural properties of the buildings in which they were performed. With eight or more parts to keep track of in performance, works in polychoral style required some sort of instrumental accompaniment. They were also known as cori spezzati, since the choirs were structured in musically independent or interlocking parts, and may sometimes also have been placed in physically different locations.

The concept of allowing two or more concurrently performing choirs to be structurally independent could almost certainly not have arisen had there not been an already existing practice of choral accompaniment in church. Financial and administrative records indicate that the presence of organs in churches dates back to the 15th century. Although their precise use is not known, it stands to reason that it was to some degree in conjunction with singers. Indeed, there exist many first-person accounts of church services from the 15th and 16th centuries that imply organ accompaniment in some portions of the liturgy, as well as indicating that the a cappella-only practice of the Vatican's Cappella Sistina was somewhat unusual. By early in the 16th century, it seems that accompaniment by organ, at least in smaller churches, was commonplace, and commentators of the time occasionally lamented the declining quality of church choirs. Even more tellingly, many manuscripts, especially from the middle of the century and later, feature written-out organ accompaniments.

This last observation leads directly to the foundations of continuo practice, via a somewhat similar practice called basso seguente or "following bass." Written-out accompaniments are found most often in early polychoral works (those composed, obviously, before the onset of concerted style and its explicit instrumental lines), and generally consist of a complete reduction (to what would later be called the "grand staff") of one choir's parts. In addition, for those parts of the music during which that choir rested, a single line was presented consisting of the lowest note being sung at any given time, which could be in any vocal part. Even in early concerted works by the Gabrielis (Andrea and Giovanni), Monteverdi and others, the lowest part, that which modern performers colloquially call "continuo", is actually a basso seguente, though slightly different, since with separate instrumental parts the lowest note of the moment is often lower than any being sung.
The first known published instance of a basso seguente was a book of Introits and Alleluias by the Venetian Placido Falconio from 1575. What is known as "figured" continuo, which also features a bass line that because of its structural nature may differ from the lowest note in the upper parts, developed over the next quarter-century. The composer Lodovico Viadana is often credited with the first publication of such a continuo, in a 1602 collection of motets that according to his own account had been originally written in 1594. Viadana's continuo, however, did not actually include figures. The earliest extant part with sharp and flat signs above the staff is a motet by Giovanni Croce, also from 1594.

Following and figured basses developed concurrently in secular music; madrigal composers such as Emilio de' Cavalieri and Luzzasco Luzzaschi began in the late 16th century to write works explicitly for a soloist with accompaniment, following an already standing practice of performing multi-voice madrigals this way, and also responding to the rising influence at certain courts of particularly popular individual singers. This tendency toward solo-with-accompaniment texture in secular vocal music culminated in the genre of monody, just as in sacred vocal music it resulted in the sacred concerto for various forces, including few voices and even solo voices. The use of numerals to indicate accompanying sonorities began with the earliest operas, composed by Cavalieri and Giulio Caccini.

These new genres, just as the polychoral one probably was, were indeed made possible by the existence of a semi- or fully independent bass line. In turn, the separate bass line, with figures added above to indicate other chordal notes, shortly became "functional," as the sonorities became "harmonies" (see harmony and tonality), and music came to be seen in terms of a melody supported by chord progressions, rather than interlocking, equally important lines as in polyphony. The figured bass, therefore, was integral to the development of the Baroque, by extension the "classical", and by further extension most subsequent musical styles. Many composers and theorists of the 16th, 17th, and 18th centuries wrote how-to guides to realizing figured bass, including Gregor Aichinger, Georg Philipp Telemann, C.P.E. Bach, and Michael Praetorius.

## Contemporary uses

Figured bass is also sometimes used by classical musicians as a shorthand way of indicating chords (though it is not generally used in modern musical compositions, save neo-Baroque pieces). A form of figured bass is used in the notation of accordion music; another simplified form is used to notate guitar chords. Today the most common use of figured bass notation is to indicate the inversion, often without the staff notation, using letter note names followed by the figure: for instance, the bass note C carrying 6/4 figures would be written $\mbox{C}_4^6$. The symbols can also be used with Roman numerals in analyzing functional harmony, a usage called figured Roman; see chord symbol.

## Notes

1. Benward & Saker (2003). Music: In Theory and Practice, Vol. I, p. 77. Seventh Edition. ISBN 978-0-07-294262-0.
2. Arnold Schoenberg, Structural Functions of Harmony, Faber and Faber, 1983, pp. 1-2.
2022-06-27 15:59:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5903112292289734, "perplexity": 3537.530939257883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103334753.21/warc/CC-MAIN-20220627134424-20220627164424-00525.warc.gz"}
https://community.broadcom.com/enterprisesoftware/communities/community-home/digestviewer/viewthread?GroupId=1435&MID=781700&CommunityKey=2e1b01c9-f310-4635-829f-aead2f6587c4&tab=digestviewer
## UC4 MSSQL Agent & SSIS

• #### 1. UC4 MSSQL Agent & SSIS
Posted 11-26-2013 03:28 PM
We recently purchased the SQL agent for MSSQL db's (we already had the SQL agent for Oracle). I have a user now asking about support for SSIS (SQL Server Integration Services). I honestly have no idea. I'm the UC4 Administrator - I install the agent, I get it functional and know how to create a job that uses it and connects to an MSSQL db, but beyond that - not really my area of expertise. Is anyone familiar with SSIS who can explain what it is and/or how (or even if) it integrates or can be used with the MSSQL agent? Thanks.

• #### 2. UC4 MSSQL Agent & SSIS
Posted 03-11-2015 12:22 PM
@Stephanie, now I do the same thing with the SSIS packages and SSRS. Works like a charm!!

• #### 3. UC4 MSSQL Agent & SSIS
Posted 01-15-2014 10:25 AM
Thanks guys. The people involved managed to figure it out. They are running SSIS packages from a SQL job. Thank you for the explanations. Every little bit helps.

• #### 4. UC4 MSSQL Agent & SSIS
Posted 01-13-2014 10:20 AM
Hi Laura, I know exactly how you feel. I've recently become the point of contact for our Automic instances, but have a long way to go to reach the same level of comprehension as the previous individual. To your question, I will second what Matthew stated. We have several teams that leverage a Windows agent to execute SSIS packages from the command line, using dtexec. My understanding is that these packages can be created/exported from MS SQL Server. So far I've not heard/seen anyone directly executing SSIS packages from Automic (without dtexec in the middle) with the SQL agent. Hope this helps!

• #### 5. UC4 MSSQL Agent & SSIS
Posted 02-09-2015 12:01 PM
JGi604607 I would invoke the SSRS from a SSIS package. Invoke the SSIS package via the DTExec on a Windows server (agent). i.e.
"c:\Program Files (x86)\Microsoft SQL Server\example\Binn\DTExec.exe" /file "\\servername\ssis$\Path\to\dtsxfile\Example.dtsx" /ConfigFile "\\servername\ssis$\Path\to\configfile\Example.dtsConfig"

• #### 6. UC4 MSSQL Agent & SSIS
Posted 03-10-2015 09:15 AM
1. I run SSIS packages using Automic (UC4) with a dtexec command. We chose to run them in UC4 for several reasons. One, we can run packages from several different SQL servers in one place and monitor them all. We had issues with the error alerts on the SQL server, so having them run in UC4, we know we will get notified if there is an error.
2. SSRS - I set up subscriptions for the reports, run a query to find the SQL job that runs the subscription, then I set up a UC4 job to run the SQL job. Works great!

• #### 7. UC4 MSSQL Agent & SSIS (Best Answer)
Posted 01-13-2014 07:31 AM
Hi Laura. SSIS packages can be executed from the Windows command line using the dtexec utility. You can find more details in http://technet.microsoft.com/en-us/library/ms162810(v=sql.105).aspx. Since you are executing a command-line utility, you will be using the Automic Windows Agent instead of the SQL Agent.

• #### 8. UC4 MSSQL Agent & SSIS
Posted 02-09-2015 12:07 PM
How do I do that if the SSRS is in .rdl format?

• #### 9. UC4 MSSQL Agent & SSIS
Posted 02-09-2015 11:43 AM
I have a similar question, but I am working with SSRS. Does anyone know how to set it up in UC4?

• #### 10. UC4 MSSQL Agent & SSIS
Posted 02-09-2015 12:09 PM
Please don't open the attachment. I am not sure what it is. I didn't attach anything, but it showed up in my post.

• #### 11.
UC4 MSSQL Agent & SSIS
Posted 02-09-2015 02:02 PM
This is from an older blog, so there may be updated info available. https://msbiblog.wordpress.com/2009/05/19/run-and-export-ssrs-reports-from-ssis-sql-server-2005/
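For anyone scripting the same pattern outside of a job object, the dtexec call described above is straightforward to wrap. The sketch below is only an illustration; the dtexec path and package locations are placeholders, not a tested Automic setup:

import subprocess

# Placeholder path; adjust for the installed SQL Server version.
DTEXEC = r"C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\DTExec.exe"

def run_ssis_package(dtsx_path, config_path=None):
    # Build the same command line the posts above describe: /file for the
    # package, optionally /ConfigFile for its configuration file.
    cmd = [DTEXEC, "/file", dtsx_path]
    if config_path:
        cmd += ["/ConfigFile", config_path]
    result = subprocess.run(cmd, capture_output=True, text=True)
    # dtexec exits non-zero when the package fails, so surface that as an
    # error; this is what lets a scheduler flag the job, as noted in post 6.
    if result.returncode != 0:
        raise RuntimeError("SSIS package failed:\n" + result.stdout + result.stderr)
    return result.stdout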
2021-10-20 13:30:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3828599452972412, "perplexity": 11789.392033384529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585321.65/warc/CC-MAIN-20211020121220-20211020151220-00431.warc.gz"}
https://www.aimsciences.org/journal/1930-5346/2019/13/3
# American Institute of Mathematical Sciences

ISSN: 1930-5346 eISSN: 1930-5338

## Advances in Mathematics of Communications

August 2019, Volume 13, Issue 3

2019, 13(3): 373-391 doi: 10.3934/amc.2019024

Abstract: In this paper, some general properties of the Zeng-Cai-Tang-Yang cyclotomy are studied. As its applications, two constructions of frequency-hopping sequences (FHSs) and two constructions of FHS sets are presented, where the length of sequences can be any odd integer larger than 3. The FHSs and FHS sets generated by our construction are (near-)optimal with respect to the Lempel–Greenberger bound and Peng–Fan bound, respectively. By choosing appropriate indexes and index sets, a lot of (near-)optimal FHSs and FHS sets can be obtained by our construction. Furthermore, some of them have new parameters which are not covered in the literature.

2019, 13(3): 393-404 doi: 10.3934/amc.2019025

Abstract: We investigate subspace codes whose codewords are subspaces of ${\rm PG}(4,q)$ having non-constant dimension. In particular, examples of optimal mixed-dimension subspace codes are provided, showing that $\mathcal{A}_q(5,3) = 2(q^3+1)$.

2019, 13(3): 405-420 doi: 10.3934/amc.2019026

Abstract: In communication networks theory the concepts of networkness and network surplus have recently been defined. Together with transmission and betweenness centrality, they were based on the assumption of equal communication between vertices. Generalised versions of these four descriptors were presented, taking into account that communication between vertices $u$ and $v$ decreases as the distance between them increases. Therefore, we weight the quantity of communication by $\lambda^{d(u,v)}$ where $\lambda \in \langle 0,1 \rangle$. Extremal values of these descriptors are analysed.

2019, 13(3): 421-434 doi: 10.3934/amc.2019027

Abstract: Let $R = \mathbb{F}_q[u,v]/\langle u^{2}-1, v^{2}-v, uv-vu\rangle$ be a finite non-chain ring, where $q$ is an odd prime power and $u^{2} = 1$, $v^2 = v$, $uv = vu$. In this paper, we construct new non-binary quantum codes from ($\alpha+\beta u+\gamma v+\delta uv$)-constacyclic codes over $R$. We give the structure of ($\alpha+\beta u+\gamma v+\delta uv$)-constacyclic codes over $R$ and obtain self-orthogonal codes over $\mathbb{F}_q$ by the Gray map. By using the Calderbank-Shor-Steane (CSS) construction and the Hermitian construction from dual-containing ($\alpha+\beta u+\gamma v+\delta uv$)-constacyclic codes over $R$, some new non-binary quantum codes are obtained.

2019, 13(3): 435-455 doi: 10.3934/amc.2019028

Abstract: At Eurocrypt 2015, Barbulescu et al.
introduced two new methods of polynomial selection, namely the Conjugation and the Generalised Joux-Lercier methods, for the number field sieve (NFS) algorithm as applied to the discrete logarithm problem over finite fields. A sequence of subsequent works have developed and applied these methods to the multiple and the (extended) tower number field sieve algorithms. This line of work has led to new asymptotic complexities for various cases of the discrete logarithm problem over finite fields. The current work presents a unified polynomial selection method which we call Algorithm $\mathcal{D}$. Starting from the Barbulescu et al. paper, all the subsequent polynomial selection methods can be seen as special cases of Algorithm $\mathcal{D}$. Moreover, for the extended tower number field sieve (exTNFS) and the multiple extended TNFS (MexTNFS), there are finite fields for which using the polynomials selected by Algorithm $\mathcal{D}$ provides the best asymptotic complexity. Suppose $Q = p^n$ for a prime $p$ and further suppose that $n = \eta\kappa$ such that there is a $c_{\theta}>0$ for which $p^{\eta} = L_Q(2/3, c_{\theta})$. For $c_{\theta}>3.39$, the complexity of exTNFS-$\mathcal{D}$ is lower than the complexities of all previous algorithms; for $c_{\theta}\notin (0, 1.12)\cup[1.45, 3.15]$, the complexity of MexTNFS-$\mathcal{D}$ is lower than that of all previous methods.

2019, 13(3): 457-475 doi: 10.3934/amc.2019029

Abstract: We show that there is a binary subspace code of constant dimension 3 in ambient dimension 7, having minimum subspace distance 4 and cardinality 333, i.e., $333 \le A_2(7, 4;3)$, which improves the previous best known lower bound of 329. Moreover, if a code with these parameters has at least 333 elements, its automorphism group is in one of 31 conjugacy classes. This is achieved by a more general technique for an exhaustive search in a finite group that does not depend on the enumeration of all subgroups.

2019, 13(3): 477-503 doi: 10.3934/amc.2019030

Abstract: There are two standard approaches to the construction of $t$-designs. The first one is based on permutation group actions on certain base blocks. The second one is based on coding theory. The objective of this paper is to give a spectral characterisation of all $t$-designs by introducing a characteristic Boolean function of a $t$-design. The spectra of the characteristic functions of $(n-2)/2$-$(n, n/2, 1)$ Steiner systems are determined and properties of such designs are proved. Delsarte's characterisations of orthogonal arrays and $t$-designs, which are two special cases of Delsarte's characterisation of $T$-designs in association schemes, are slightly extended into two spectral characterisations. Another characterisation of $t$-designs by Delsarte and Seidel is also extended into a spectral one.
These spectral characterisations are then compared with the new spectral characterisation of this paper.

Nian Li and

2019, 13(3): 505-512 doi: 10.3934/amc.2019031

Abstract: In this paper, by analyzing the quadratic factors of an $11$-th degree polynomial over the finite field ${\mathbb F}_{2^n}$, a conjecture on permutation trinomials over ${\mathbb F}_{2^n}[x]$ proposed very recently by Deng and Zheng is settled, where $n = 2m$ and $m$ is a positive integer with $\gcd(m,5) = 1$.

2019, 13(3): 513-516 doi: 10.3934/amc.2019032

Abstract: We show that under certain conditions every maximal symmetric subfield of a central division algebra with positive unitary involution $(B, \tau)$ will be a Galois extension of the fixed field of $\tau$ and will "real split" $(B, \tau)$. As an application we show that a sufficient condition for the existence of positive involutions on certain crossed product division algebras over number fields, considered by Berhuy in the context of unitary space-time coding, is also necessary, proving that Berhuy's construction is optimal.

2019 Impact Factor: 0.734
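For reference, the subexponential $L$-notation used in the number field sieve abstract above is conventionally defined, for $0 \le a \le 1$ and $c > 0$, by

$L_Q(a, c) = \exp\big( (c + o(1))\, (\ln Q)^{a} (\ln \ln Q)^{1-a} \big),$

so the hypothesis $p^{\eta} = L_Q(2/3, c_{\theta})$ places the characteristic in the boundary case $a = 2/3$.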
2020-10-30 19:26:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.849457323551178, "perplexity": 1121.1133523649376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107911229.96/warc/CC-MAIN-20201030182757-20201030212757-00116.warc.gz"}
https://netorigin.manitz.org/reference/plot_ptn.html
A plot method for public transportation networks (PTNs).

plot_ptn(
  g,
  color.coding = NULL,
  color.scheme = rev(sequential_hcl(5)),
  legend = FALSE,
  ...
)

## Arguments

g: igraph object, network graph representing the public transportation network; vertices represent stations, which are linked by an edge if there is a direct transfer between them

color.coding: numeric vector with length equal to the number of network nodes

color.scheme: character vector of length 5 indicating the vertex.color; default is rev(sequential_hcl(5))

legend: logical indicating whether a legend for the color-coding should be added or not

...: further arguments to be passed to plot.igraph

Other network helper: analyze_ptn()

## Examples

data(ptnAth)
plot_ptn(ptnAth)  # assumed usage: plot the bundled Athens PTN with default settings
2021-09-26 13:23:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2370254099369049, "perplexity": 2657.255962984505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057861.0/warc/CC-MAIN-20210926114012-20210926144012-00192.warc.gz"}
https://www.undocumented-features.com/2016/09/08/using-a-scriptblock-to-pass-a-variable-into-another-variable/
# Using a ScriptBlock to pass a variable into another variable

So, during the course of my current project, I've been able to re-use a lot of scripts that I've spent years developing and reworking. This time through, though, I've found that in trying to make them consumable by other people, I need to update them with command-line parameters (you know, like some sort of grown-up scripter).

One of the environments I'm working with is fairly large, and it's very impractical to do client-side filtering (a Get-Mailbox -ResultSize Unlimited, for example, takes about 8 hours to complete, if my session doesn't time out). There are very few instances where I actually need *all* mailboxes returned to perform an operation; I'm usually working with a single department or subset of users at a time, and can filter on a domain name.

Server-side filtering is the answer. However, making a function that I can use repeatedly is almost just as important (if for no other reason than I constantly forget how I did something).

Normally, I could just use something like:

Get-Mailbox -ResultSize Unlimited -Filter { WindowsEmailAddress -like "*subdomain1.domain.com" }

and call it a day. However, if I want to create something that I can continue to reuse, I need to be able to specify data in the -Filter parameter on-the-fly, such as in a parameter.

Here's the solution I came up with. In this example, I want to select mailboxes for the domain "subdomain1.domain.com" in my gigundous tenant.

Function MakeFilter($FilterValue)
{
    If ($FilterValue.StartsWith("*"))
    {
        # Value already starts with an asterisk, so use it as-is
    }
    Else
    {
        # Prepend a wildcard so the filter matches any address ending in the domain
        $FilterValue = "*" + $FilterValue
    }
    # Build the filter text as a script block; the doubled quotes escape the
    # inner quotes inside the double-quoted string, so $FilterValue still expands
    $Filter = [scriptblock]::Create("{WindowsEmailAddress -like ""$FilterValue""}")
    Write-Host $Filter
}

So, giving it a try with MakeFilter "subdomain1.domain.com" should print the filter with the wildcard prepended:

{WindowsEmailAddress -like "*subdomain1.domain.com"}
2020-05-29 00:07:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7424021363258362, "perplexity": 1559.937655488317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347401004.26/warc/CC-MAIN-20200528232803-20200529022803-00494.warc.gz"}