http://opensourc.es/blog/sudoku
First of all, sorry that I didn't write for a long time. I had to finish my bachelor's thesis. Done! Then I flew to Australia to travel around. I'm currently in Perth, in the library, because my laptop broke down...
Well, whatever. Let's start with something interesting. I've written about machine learning and memory palaces, and the last post was about a 3D game. Today I want to go back to a lower dimension. Welcome, 2D!
I started an online course about discrete optimization on Coursera.
There you listen to lectures between 13.02 and 16.04 and work through some programming challenges. At the end you can get a certificate for around $65. Some knowledge of mathematics, computer science and Python is helpful.
You can start the course now (which I did). I finished the first two weeks and got stuck the moment the lecturer mentioned Sudoku. It has been a dream of mine for a long time to solve this relatively simple game in Python. I actually started several times using backtracking, but I never finished. I don't know why...
This week I started to solve it in a different fashion. I took some optimization courses and seminars at university where we used Gurobi for some bigger projects. I was curious how these solvers work, so I started the Coursera course and this little project.
For everyone who doesn't know what Sudoku is: it's a really popular puzzle played on a 9x9 grid which consists of nine 3x3 blocks.
Rules: Fill the grid so that:
• every row contains the digits 1-9 once
• every column contains the digits 1-9 once
• every block contains the digits 1-9 once
The backtracking approach would be to fill the first field (top left corner) with a 1 and then check whether the rules are violated. In this case they would be, because the first row already contains the digit 1. So we check the next digits until we find one which looks promising; in this case the 3. Then we move on to the next field and so on. Whenever there are no options left we backtrack, until we find the complete solution.
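The idea above can be sketched in a few lines of Python (just an illustration of plain backtracking, not the approach developed in this post; `valid` and `backtrack` are made-up helper names):

```python
def valid(grid, r, c, d):
    # check row, column and 3x3 block for digit d
    if d in grid[r]:
        return False
    if any(grid[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))

def backtrack(grid, pos=0):
    # grid is a 9x9 list of lists with 0 for empty cells
    if pos == 81:
        return True
    r, c = divmod(pos, 9)
    if grid[r][c] != 0:
        return backtrack(grid, pos + 1)
    for d in range(1, 10):
        if valid(grid, r, c, d):
            grid[r][c] = d
            if backtrack(grid, pos + 1):
                return True
            grid[r][c] = 0   # undo and try the next digit
    return False
```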
That is kind of a dumb system. So how do humans solve a Sudoku? We can look for a row, column or block which already has eight digits; then it's easy to add the ninth. Or, a bit more complicated, we combine the rules. Then we are able to add this 6.
We can fill it in because the digits 1, 2, 5, 7, 8 and 9 aren't possible (row rule) and the digits 1, 3 and 4 aren't possible (block rule). 6 is therefore the only possible value.
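That intersection of row, column and block exclusions is easy to express in code (a sketch; `candidates` is a hypothetical helper operating on the numpy grid we are about to define):

```python
import numpy as np

def candidates(grid, r, c):
    """Digits still possible at (r, c): 1-9 minus everything already
    used in the same row, the same column and the same 3x3 block."""
    used = set(grid[r, :]) | set(grid[:, c])
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= set(grid[br:br+3, bc:bc+3].flatten())
    return set(range(1, 10)) - used
```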
Now the question is: is it possible to write a program which does exactly that? First we define our grid:
import numpy as np

grid = [[0]*9 for i in range(9)]
grid[0] = [0,2,1,0,7,9,0,8,5]
grid[1] = [0,4,5,3,1,0,0,0,9]
grid[2] = [0,7,0,0,4,0,0,1,0]
grid[3] = [0,0,0,1,0,8,0,3,6]
grid[4] = [0,6,0,0,0,0,2,0,8]
grid[5] = [0,0,0,0,0,3,0,0,4]
grid[6] = [6,0,8,0,0,0,0,0,0]
grid[7] = [0,9,4,0,0,7,8,0,0]
grid[8] = [2,0,0,5,0,0,0,4,0]
grid = np.array(grid)
This is our representation of the Sudoku grid, where 0 represents an unknown value. Then it is common to have something like a search space, where we store every value which looks promising for the solution. In the beginning every value looks promising (we have no idea yet :D). Therefore we say that the values 1-9 are possible for every cell which isn't set yet (all 0 values).
model = Model()
model.build_search_space(grid,[1,2,3,4,5,6,7,8,9],0)
We will define our Model class later. The first parameter is our grid, the second the range of values which can be part of the solution, and the third (0) is the representation of an unassigned value in our grid.
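To make the search-space representation concrete before we build the Model class, here is a toy 2x2 sketch of what build_search_space will produce (the dict-per-cell layout used throughout this post):

```python
import numpy as np

toy = np.array([[0, 2],
                [1, 0]])
search_space = np.empty(toy.shape, dtype=object)
for (r, c), v in np.ndenumerate(toy):
    # unknown cells get the full candidate list, fixed cells keep their value
    search_space[r, c] = {'values': [1, 2]} if v == 0 else {'value': v}
```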
Then we describe our rules as constraints:
# per row
for r in range(len(grid)):
    idx = np.full(grid.shape, False, dtype=bool)
    idx[r,:] = True
    model.subscribe({'idx':idx},model.check_constraint,{'idx':idx},"alldifferent")
That basically means for every row in our grid:
• we define a boolean grid where True means part of the row and False means not part of the row
• if one of those values changes we want to call the check_constraint function, with the created boolean matrix and the name of the constraint, alldifferent, as parameters.
The rule was that we have to assign the digits 1-9 in each row. We have exactly nine possible values (1-9) and nine positions, therefore we can say that each value in that row must be different from all the others. And we want to check the constraint every time a value in that row changes, or whenever we were able to reduce the search space for that row.
Okay, stop for a moment. Let's visualize this with a small example: a little game which has three positions and the three digits 1-3.
First the search space is:
We will solve the puzzle by saying that every value changed. Then the check_constraint function will be called to check the alldifferent constraint.
This will reduce the search space to:
If some other constraint sets the second value to 2, the check_constraint function is called again and we can set the third value to 3.
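The three-cell walk-through can be written out by hand; this is a sketch of what the check_constraint function will do for one alldifferent group (`propagate` is a made-up name):

```python
# The three cells of the toy game: the first is fixed to 1, the other two
# are still open. propagate mimics one alldifferent propagation round.
cells = [{'value': 1}, {'values': [1, 2, 3]}, {'values': [1, 2, 3]}]

def propagate(cells):
    fixed = [c['value'] for c in cells if 'value' in c]
    for c in cells:
        if 'values' in c:
            left = [v for v in c['values'] if v not in fixed]
            if len(left) == 1:
                c.clear()
                c['value'] = left[0]
            else:
                c['values'] = left
    return cells

propagate(cells)          # the fixed 1 is removed: both open cells keep [2, 3]
cells[1] = {'value': 2}   # some other constraint fixes the second cell
propagate(cells)          # now only 3 is left for the third cell
```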
That is the basic concept. Let's add the other rules in the same way:
# per col
for c in range(len(grid[0])):
    idx = np.full(grid.shape, False, dtype=bool)
    idx[:,c] = True
    model.subscribe({'idx':idx},model.check_constraint,{'idx':idx},"alldifferent")
# per block
for r in range(3):
    for c in range(3):
        bxl,bxr,byt,byb = r*3,(r+1)*3,c*3,(c+1)*3
        idx = np.full(grid.shape, False, dtype=bool)
        idx[bxl:bxr,byt:byb] = True
        model.subscribe({'idx':idx},model.check_constraint,{'idx':idx},"alldifferent")
At the end we want to solve the model and print the solution:
model.solve()
solution = model.get_solution()
print_sudoku(solution)
The overall structure of the project should look like this:
Modules/
- Error.py
- Model.py
- main.py
The code mentioned above is part of main.py.
For our model we define the Model.py:
import networkx as nx
import numpy as np
from .Error import InfeasibleError
class Model:
    def __init__(self):
        self.subscribe_list_on = []
        self.subscribe_list_func = []
        self.nof_calls = 0
In subscribe_list_on we store a list of index matrices which tell us when a subscribed function should be called. subscribe_list_func stores the function calls, and nof_calls counts the number of calls. Using this we can see whether we call something too often and how hard the model was to solve.
def subscribe(self,on,func,*args):
    self.subscribe_list_on.append(on['idx'])
    self.subscribe_list_func.append((func,args))
I decided to use a dictionary for the parameter on to have some more freedom in other projects. Here we simply append the indexes to the subscribe_list_on list and the function with its arguments to subscribe_list_func.
To solve the model we say that every value has changed, which basically calls all subscribed functions.
def solve(self):
    try:
        self.fire(np.full(self.search_space.shape, True, dtype=bool))
    except InfeasibleError as e:
        print(e)
        exit(2)
If the model is infeasible, which simply means that there is no solution, we print the error and exit.
The fire function is relatively simple as well.
def fire(self,idx):
    i = 0
    self.changed = np.full(self.changed.shape, False, dtype=bool)
    for lidx in self.subscribe_list_on:
        if np.any(np.logical_and(lidx,idx)):
            func = self.subscribe_list_func[i][0]
            args = self.subscribe_list_func[i][1]
            try:
                func(*args)
            except InfeasibleError as e:
                raise e
            self.nof_calls += 1
        i += 1
    if np.any(self.changed):
        try:
            self.fire(self.changed)
        except InfeasibleError as e:
            raise e
Here we first say that nothing has changed; the check_constraint functions will update our knowledge and set entries of the self.changed matrix again until the solution is found. Then, for every subscription, we check whether the indexes (idx) with which the fire function was called affect that constraint. idx is a boolean matrix, in our example 9x9, and all the lidx matrices have the same shape. If the logical and of these two matrices has at least one true value we want to call the check_constraint function. That is done in:
if np.any(np.logical_and(lidx,idx)):
    func = self.subscribe_list_func[i][0]
    args = self.subscribe_list_func[i][1]
    try:
        func(*args)
And if something changed, at the end we just call the fire function again with the updated changed matrix. If nothing changed, we have solved the problem.
First, we want to build the simple search space.
def build_search_space(self,grid,values,no_val=0):
    self.search_space = np.empty(grid.shape,dtype=dict)
    self.changed = np.full(grid.shape, False, dtype=bool)
    no_val_idx = np.where(grid == no_val)
    no_val_idx_invert = np.where(grid != no_val)
    self.search_space[no_val_idx] = {'values':values[:]}
    for idx in np.transpose(no_val_idx_invert):
        t_idx = tuple(idx)
        self.search_space[t_idx] = {'value':grid[t_idx]}
The search space needs to have the same shape as our grid, and we initialize the changed grid here as well. Then we assign the candidate values (in our case 1-9) to every entry which holds no_val, and we store the given value for each fixed entry.
Now the difficult part:
def check_constraint(self,opts,operator):
    if operator == "alldifferent":
How do we update the search space if something has changed? The first step will be easy. We can delete all digits from the search space which already exist. But is that enough?
Let's try it out:
def check_constraint(self,opts,operator):
    if operator == "alldifferent":
        ss_idx = opts['idx']
        values = self.search_space[ss_idx]
First we want to get all search space indexes (ss_idx) which are affected and get the values from the search space.
Then we set up two helper arrays:
already_know = []
new_possible = [False]*len(values)
already_know will collect all digits that are already fixed, and new_possible will hold the new entries which will replace the affected part of the search space.
i = 0
for v in values:
    if 'value' in v:
        already_know.append(v['value'])
        new_possible[i] = {'value': v['value']}
    i += 1
These fixed values will not change, so they go straight into the new_possible array, and their digits are remembered in already_know.
We built a subscribe system, so we have to keep track of which values changed. Therefore we use an array:
new_knowledge = [False]*len(values)
Nothing has changed yet, so every entry is initialized with False.
Now let's reduce the search space:
i = 0
for v in values:
    if 'value' not in v:
        new = [x for x in v['values'] if x not in already_know]
Here we iterate over every open entry in our values array and remove all the digits which already occur in this particular row, column or block.
        if len(new) < len(v['values']):
            if len(new) == 1:
                new_possible[i] = {'value': new[0]}
            else:
                new_possible[i] = {'values': new}
            new_knowledge[i] = True
        else:
            new_possible[i] = {'values': v['values']}
    i += 1
Then we check whether anything changed by comparing the lengths of the two arrays. If something changed we update the new_possible array; if only one value remains possible we fix that value, and we note new_knowledge[i] = True. If nothing changed we just copy the old values into our new_possible array.
Okay now we have everything we need. Let's update the search space and the changed array.
old_changed = self.changed.copy()
self.changed[ss_idx] = new_knowledge
self.changed = np.logical_or(self.changed,old_changed)
self.search_space[ss_idx] = new_possible
To update the changed matrix we have to check whether we changed something now or whether it was already changed before; therefore we use a logical or. The search space itself can simply be updated using our indexes and the new_possible array.
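The accumulation with logical_or in isolation:

```python
import numpy as np

# a cell counts as changed if it changed in an earlier round OR in this one
old_changed = np.array([True, False, False])
now_changed = np.array([False, True, False])
combined = np.logical_or(old_changed, now_changed)
```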
Let's solve the model...
Oh wait we need some more code. We had the following part at the end of our main.py:
model.solve()
solution = model.get_solution()
print_sudoku(solution)
Let's define the small functions get_solution and print_sudoku.
def get_solution(self):
    grid = [[0]*9 for i in range(9)]
    for r in range(len(self.search_space)):
        for c in range(len(self.search_space[r])):
            if 'value' in self.search_space[r][c]:
                grid[r][c] = self.search_space[r][c]['value']
    return grid
Here a grid is created and all the fixed values of our search space are filled in. For printing the actual solution we use this:
def print_sudoku(grid):
    for r in range(len(grid)):
        row = ""
        for c in range(len(grid[r])):
            if c%3 == 0:
                row += "["
            row += " "+str(grid[r][c])
            if c%3 == 2:
                row += " ]"
        print(row)
        if r % 3 == 2:
            print("-"*27)
That prints:
[ 3 2 1 ][ 6 7 9 ][ 4 8 5 ]
[ 8 4 5 ][ 3 1 2 ][ 6 7 9 ]
[ 9 7 6 ][ 8 4 5 ][ 3 1 2 ]
---------------------------
[ 4 5 9 ][ 1 2 8 ][ 7 3 6 ]
[ 1 6 3 ][ 7 5 4 ][ 2 9 8 ]
[ 7 8 2 ][ 9 6 3 ][ 1 5 4 ]
---------------------------
[ 6 3 8 ][ 4 9 1 ][ 5 2 7 ]
[ 5 9 4 ][ 2 3 7 ][ 8 6 1 ]
[ 2 1 7 ][ 5 8 6 ][ 9 4 3 ]
There is no zero left, so we did it! We solved it... and it was pretty fast: this solution was generated in 0.03s.
Now look at the scrollbar on the right side of your screen. It shows you that we are not done yet.
But why???
Let's look at a new, much harder Sudoku which shows that this isn't enough.
Let's try to solve it:
[ 0 0 0 ][ 5 4 6 ][ 0 0 9 ]
[ 0 2 0 ][ 3 8 1 ][ 0 0 7 ]
[ 0 0 3 ][ 9 0 0 ][ 0 0 4 ]
---------------------------
[ 9 0 5 ][ 0 0 0 ][ 0 7 0 ]
[ 7 0 0 ][ 0 0 0 ][ 0 2 0 ]
[ 0 0 0 ][ 0 9 3 ][ 0 0 0 ]
---------------------------
[ 0 5 6 ][ 0 0 8 ][ 0 0 0 ]
[ 0 1 0 ][ 0 3 9 ][ 0 0 0 ]
[ 0 0 0 ][ 0 0 0 ][ 8 0 6 ]
You might look at it and think: Wait I said SOLVE!!!
At least we got three new values:
Unfortunately that's all we can get with our simple model. Before we continue, let's have a look at our search space:
Okay, what can we see? There are some entries in the search space which are constrained quite a lot and some which are constrained less. We already found three values in one block. Let's have a look at the two values in that block which aren't assigned yet. They both can hold the digits 2 and 7. Because they are in the same row this actually gives us new information: both the 2 and the 7 need to be placed inside the block, and each appears only once in the row. Therefore there can't be a 7 at the second position of the third row and no 2 in the bottom left corner of the third block.
The next image shows a different representation of the search space.
We will use this representation, where our nine entries are visualized in the first row (a-i) and the values 1-9 at the bottom. Let's draw some arrows.
These arrows show the fixed values. Including the rest of the search space it looks like this:
That actually looks quite messy, but okay: we built a graph. Let's use some graph theory. I learned it at university and never used it... until now!
In more general constraint programming challenges we would first like to know whether the model is feasible at all, i.e. whether there is a configuration which fulfills the alldifferent constraint. To answer this we use our knowledge about maximum matchings. For all of you who don't know what a maximum matching is, I'll first explain what a matching is.
A matching is a set of edges which don't share any vertices; that basically means a node is not allowed to have more than one incident edge in a matching.
A maximum matching is simply a matching with the highest possible number of edges.
If you're interested in how to find a maximum matching, you can have a short look at Wikipedia.
I want to show you one:
Here it is feasible because we are able to find a matching of size 9 (9 edges). That means we can assign a value to every position so that all positions hold different values.
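With networkx, such a feasibility check is only a few lines; here is a tiny two-position sketch (the `x_` node naming follows the convention used in the code further below):

```python
import networkx as nx

# Positions on one side, digits on the other; an edge means the digit
# is still in the position's search space.
G = nx.Graph()
G.add_edge('x_0', 1)   # x_0 could be 1...
G.add_edge('x_0', 2)   # ...or 2
G.add_edge('x_1', 2)   # x_1 can only be 2
matching = nx.bipartite.maximum_matching(G, top_nodes=['x_0', 'x_1'])
# a matching covering both positions exists -> alldifferent is feasible;
# x_1 must take 2, which forces x_0 to 1
feasible = len([k for k in matching if str(k).startswith('x_')]) == 2
```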
The general idea is now to, more or less, find all of those maximum matchings and check whether there are edges which don't appear in any of them. It isn't that easy or fast to find all maximum matchings, but there is a nice lemma: Berge's lemma.
It says: An edge belongs to some but not all maximum matchings if and only if, given a maximum matching M, it belongs to either
• an even alternating path starting at a free vertex
• an even alternating cycle.
Okay, wait... what do we want to do again? We want to know whether an edge is part of any maximum matching, and now we have a lemma about edges which are part of at least one maximum matching. We said that we don't have a free vertex here, because we need to assign all the values 1-9 to our positions. Therefore an edge belongs to a maximum matching if it belongs to an even alternating cycle. Alternating in this case means that we start with an edge which is in the matching, then have to use an edge which isn't part of the matching, then again one from the matching, and so on. Here we can use strongly connected components. A strongly connected component is a part of a graph which is itself a strongly connected graph, and a strongly connected graph is one in which every vertex is reachable from every other vertex. For a directed graph this basically means that there has to be a cycle.
But we have an undirected graph, right? And wait a second, then we would still have to check the alternating property.
We can transform the undirected graph into a directed graph in such a fashion that we don't have to check the alternating constraint at all.
Here every edge which is in our maximum matching is directed downwards and every other edge is directed upwards. Now we just have to find strongly connected components which contain an even cycle. Actually we can forget about the "even" part, because there are neither connections between the upper nodes nor between the lower nodes; therefore every cycle automatically has an even number of edges.
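The orientation trick in code, on a tiny two-position sketch (I use strongly_connected_components here because strongly_connected_component_subgraphs was removed in newer networkx releases):

```python
import networkx as nx

# Matched edges point from position to digit, unmatched edges from digit to
# position. Search space: x_0 in {1,2}, x_1 in {1,2}; matching: x_0-1, x_1-2.
GM = nx.DiGraph()
GM.add_edge('x_0', 1)   # matched
GM.add_edge('x_1', 2)   # matched
GM.add_edge(1, 'x_1')   # unmatched: x_1 could also be 1
GM.add_edge(2, 'x_0')   # unmatched: x_0 could also be 2
# every cycle now alternates between matched and unmatched edges automatically
sccs = list(nx.strongly_connected_components(GM))
```

All four nodes form one cycle (x_0 → 1 → x_1 → 2 → x_0), so they end up in a single strongly connected component: every edge here is part of some maximum matching.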
Let's find an even alternating cycle:
The blue arrows form a cycle and another one:
And the third one...
Well there is another one:
and the last...
Now let's remove all edges we used in the first maximum matching, as well as those that are part of one of the cycles, and have a look at what is left:
These are exactly the two edges we wanted to remove.
Fortunately there are graph libraries for Python which can do all this stuff for us. I used networkx.
ss_idx = opts['idx']
values = self.search_space[ss_idx]
already_know = [0]*len(values)
G = nx.MultiDiGraph()
for i in range(len(values)):
    if 'values' in values[i]:
        for j in values[i]['values']:
            G.add_edge('x_'+str(i),j)
    else:
        # fixed cells keep their single edge so the matching can cover all positions
        G.add_edge('x_'+str(i),values[i]['value'])
        already_know[i] = 1
We do the same initialization as before, build the search space graph, and mark the values which we already know in already_know.
This line gives us a maximum matching:
matching = nx.bipartite.maximum_matching(G)
Then we build the second graph, the directed version.
n_matching = []
GM = nx.DiGraph()
possible = np.empty((len(values)),dtype=dict)
for k in matching:
    if str(k)[:2] == 'x_':
        n_matching.append({k:matching[k]})
        # matched edges point from position to value
        GM.add_edge(k,matching[k])
        possible[int(k[2:])] = {'values':set([matching[k]])}
In that part we add only the edges which are part of the matching and start at a position node, and we define an empty numpy array (possible) which holds the new possible values; the matched value is the first entry for each position. Then we check whether we really matched all nine positions:
if len(n_matching) < len(values):
    raise InfeasibleError("Infeasible","The model is infeasible")
Yeah, I could work on that error message... Then we add all the other edges:
for e in G.edges():
    if not GM.has_edge(e[0],e[1]):
        GM.add_edge(e[1],e[0])
That means: if the edge isn't already there in the downward (matched) direction, we add it in the upward direction, which gives us this:
Then we find all strongly connected components:
scc = nx.strongly_connected_component_subgraphs(GM)
for scci in scc:
    for e in scci.edges():
        if str(e[0])[:2] != 'x_':
            e = (int(e[1][2:]),e[0])
        else:
            e = (int(e[0][2:]),e[1])
        if 'values' not in possible[e[0]]:
            possible[e[0]] = {'values': set()}
        possible[e[0]]['values'].add(e[1])
For every edge inside a strongly connected component we add the corresponding value back into our possible array.
We are coming to the end...
new_possible = []
new_knowledge = [False]*len(values)
i = 0
for p in possible:
    l = list(p['values'])
    if len(l) == 1:
        new_possible.append({'value':l[0]})
        # only count this as new knowledge if the cell wasn't fixed before,
        # otherwise we would keep propagating forever
        if 'value' not in values[i]:
            new_knowledge[i] = True
    else:
        new_possible.append({'values':l[:]})
        if len(l) < len(values[i]['values']):
            new_knowledge[i] = True
    i += 1
We just want to know which parts we changed, therefore we use new_knowledge = [False]*len(values), and we construct the new_possible array, which takes the possible array and converts it back into the search space dictionary form.
At the end we use the same lines as before for our subscribe system:
old_changed = self.changed.copy()
self.changed[ss_idx] = new_knowledge
self.changed = np.logical_or(self.changed,old_changed)
self.search_space[ss_idx] = new_possible
That's it! Now we can solve our model:
[ 1 7 8 ][ 5 4 6 ][ 2 3 9 ]
[ 4 2 9 ][ 3 8 1 ][ 5 6 7 ]
[ 5 6 3 ][ 9 2 7 ][ 1 8 4 ]
---------------------------
[ 9 3 5 ][ 2 1 4 ][ 6 7 8 ]
[ 7 4 1 ][ 8 6 5 ][ 9 2 3 ]
[ 6 8 2 ][ 7 9 3 ][ 4 1 5 ]
---------------------------
[ 2 5 6 ][ 4 7 8 ][ 3 9 1 ]
[ 8 1 4 ][ 6 3 9 ][ 7 5 2 ]
[ 3 9 7 ][ 1 5 2 ][ 8 4 6 ]
And that was possible in 0.3s, which isn't too bad.
Well, there might be Sudokus with two possible solutions where we would have to use some backtracking, but I think we are done for the moment.
Thanks for reading! I hope you enjoyed the post. I would like to enhance my model and solve some other challenges. If you have a game or another challenge, leave a comment! I'll try to solve it and add some other constraints to my model, and maybe backtracking.
2019-02-16 08:26:07
https://advanced-r-solutions.rbind.io/domain-specific-languages.html
# 33 Domain specific languages
## 33.1 HTML
1. Q: The escaping rules for <script> and <style> tags are different: you don’t want to escape angle brackets or ampersands, but you do want to escape </script> or </style>. Adapt the code above to follow these rules.
A:
2. Q: The use of ... for all functions has some big downsides. There’s no input validation and there will be little information in the documentation or autocomplete about how they are used in the function. Create a new function that, when given a named list of tags and their
attribute names (like below), creates functions which address this problem.
list(
a = c("href"),
img = c("src", "width", "height")
)
All tags should get class and id attributes.
A:
3. Q: Currently the HTML doesn’t look terribly pretty, and it’s hard to see the structure. How could you adapt tag() to do indenting and formatting?
A:
## 33.2 LaTeX
1. Q: Add escaping. The special symbols that should be escaped by adding a backslash in front of them are \, $, and %. Just as with HTML, you'll need to make sure you don't end up double-escaping. So you'll need to create a small S3 class and then use that in function operators. That will also allow you to embed arbitrary LaTeX if needed.
A:
2. Q: Complete the DSL to support all the functions that plotmath supports.
A:
3. Q: There’s a repeating pattern in latex_env(): we take a character vector, do something to each piece, convert it to a list, and then convert the list to an environment. Write a function that automates this task, and then rewrite latex_env().
A:
4. Q: Study the source code for dplyr. An important part of its structure is partial_eval() which helps manage expressions when some of the components refer to variables in the database while others refer to local R objects. Note that you could use very similar ideas if you needed to translate small R expressions into other languages, like JavaScript or Python.
A:
2018-12-13 01:18:36
https://gitter.im/conda/conda?at=634ce658cf41c67a5cb5342e
Mark Harfouche
@hmaarrfk
might be related.
Cheng H. Lee
@chenghlee
It was cloudflare-related but nothing to do with the outages posted on their status page. In any case, Anaconda pushed a hotfix to anaconda.org that should mitigate the issue.
1 reply
mattalhonte-srm
@mattalhonte-srm
Heya! keeps trying to install different Python versions and then failing when it doesn't find some dep - example error: - nothing provides openssl >=1.1.1,<1.1.2.0a0 needed by python-3.7.1-h0371630_3
I explicitly put python==3.8.* (I also tried it with a single =) in my yaml file.
Not really sure what to do? Is there a reliable way to pin the Python version? Is there a reliable way to do detective work and see what package in my yaml file is responsible for this chain of events?
Thanks!
Stephen Nayfach
@snayfach
I updated my package checkv yesterday (https://anaconda.org/bioconda/checkv) but the new version (1.0.0) still is not available when I run conda install -c bioconda checkv. Is there something I did wrong when I updated the recipe?
1 reply
Dave Clements
@tnabtaf:matrix.org
[m]
We are pleased to announce that conda 4.14.0 & conda-build 3.22.0 releases are now available, featuring
⇒ rename conda environments
⇒ channel notifications
⇒ better error handling
⇒ and more!
Many thanks to all our contributors, especially the 14 new contributors who helped make this happen
Griffin Tabor
@gftabor
So I ran into this trying to update figure out the right workflow for my system
conda/conda#5821
So its clear what I want to do is not the "conda approved" workflow so I am trying to figure out the right method.
I have software repo A that provides a conda environment to run it. Then I have software repo B that uses repo A as a dependency but also adds a number of additional dependencies that are not needed for repo A.
It seems like what I would want is to specify a new environment in repo B that is something like "these dependencies plus whatever is in the repo A environment file" and for the package manager to solve for if that's possible (and it will be) and build me that environment. If its not possible it would throw and error just like it does if I make a single environment file that isn't possible, like asking for specific versions of packages that don't go together.
What is the suggested workflow for this? Would env update second_file.yaml work to add second set of dependencies?
Alan Hoyle
@alanhoyle
I installed a tool with micromamba on a CentOS 7 machine. The install worked fine, but when I try to run it, I get an error: ImportError: libffi.so.7: cannot open shared object file: No such file or directory I see libffi.so.6 in the system lib64, and libffi.so.8 in my ~/micromamba/lib (it's a symbolic link to somewhere else in the hierarchy). Is this most likely a problem with the conda package?
1 reply
Daniel Holth
@dholth
Hello condans
Jaime Rodríguez-Guerra
@jaimergp
:wave:
Pat Gunn
@pgunn
I'm wondering if people have hints as to how to debug when a package on a conda channel is not considered a candidate for installation (apart from the obvious "wrong arch")
Pat Gunn
@pgunn
This fails: conda create -n assimp_test -c flyem-forge -c conda-forge assimp=4.0.1 but I can see in the flyem-forge channel that there is a Linux package of assimp 4.0.1. Unclear how to debug further
4 replies
Pat Gunn
@pgunn
Weird. I can download the bz2 package and manually put it into an environment first, then build the rest of the environment. Wondering if this is some channel index issue
Jaime Rodríguez-Guerra
@jaimergp
There might be a CDN sync issue. Does it appear on the results of conda search?
1 reply
Dave Clements
@tnabtaf:matrix.org
[m]
Hi all, Minutes for the conda Community Call (starting in 10 minutes) are here
Jacob Barhak
@Jacob-Barhak
I am having issues with installing an old package on windows that used to install well and I can no longer install it - I wonder what is the problem - it used to work and no longer works. This is anaconda 2 based and uses python 2 yet it does no longer install even with an older anaconda - here is the command I am using conda install mist -c jacob-barhak - here is the error I am getting CondaMultiError: CondaHTTPError: HTTP 404 NOT FOUND for url <https://conda.anaconda.org/jacob-barhak/win-64/win-64\inspyred-1.0-py27_0.tar.bz2> - can anyone with a windows machine can try to verify this is what they get...
I looked the anaconda cloud and the file is there. see: https://anaconda.org/jacob-barhak/inspyred I changed it access to public yesterday since I though it will be a source for the error, yet it seems the problem is elsewhere now...
Jacob Barhak
@Jacob-Barhak
I located someone else who seemed to have a problem like mine and was no answered on stackoverflow: https://stackoverflow.com/questions/59335034/condahttperror-http-404-not-found-for-dask-core-2-7-0-py-0-tar-bz2
does anyone have any clue on why old packages cannot be located anymore?
Jacob Barhak
@Jacob-Barhak
I also tried conda install inspyred -c jacob-barhak separately - it seems the problem is with this specific package and it is either a conda error or a cloudflare error since there is a CF message attached to the http error. Here is how it looks:
Downloading and Extracting Packages
inspyred-1.0 | 78 KB | | 0%
CondaMultiError: CondaHTTPError: HTTP 404 NOT FOUND for url <https://conda.anaconda.org/jacob-barhak/win-64/win-64\inspyred-1.0-py27_0.tar.bz2>
Elapsed: 00:00.423906
CF-RAY: 74c04c37b9b174a1-LHR
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
Is there a way to tell if this is a conda issue or a Cloudflare issue?
1 reply
Dave Clements
@tnabtaf:matrix.org
[m]
Hi All, a draft conda Discourse Forum is now up. See GitHub for the announcement, and for the call for help. My hope is to make this live by the end of September. Please provide feedback in GitHub or here. Thanks.
Dave Clements
@tnabtaf:matrix.org
[m]
🚀 We are elated to announce the release of #conda 22.9.0, the first version following CEP 8 (Conda Enhancement Proposal) in our new bi-monthly release cycle.
Dave Clements
@tnabtaf:matrix.org
[m]
We are (also!) pleased to announce that the conda Community Forum is now open. Questions, answers, discussions, and news about the conda ecosystem. It is (or will be!) all here.
conda-bot
@conda-bot:matrix.org
[m]
@thath posted in Conda.org is comming and we want your help! - https://conda.discourse.group/t/conda-org-is-comming-and-we-want-your-help/99/1
conda-bot
@conda-bot:matrix.org
[m]
@SandeepAllampalli posted in CondaSSLError: OpenSSL appears to be unavailable on this machine. OpenSSL is required to download and install packages - https://conda.discourse.group/t/condasslerror-openssl-appears-to-be-unavailable-on-this-machine-openssl-is-required-to-download-and-install-packages/103/1
Dave Clements
@tnabtaf:matrix.org
[m]
Conda users: we want to hear from you! Complete a survey about your experience using conda, and enter a raffle for a chance to win a $150 Amazon gift card. The results of the survey will be summarized and published, and then used to help guide future directions for conda. Interested in providing your feedback? Click here to fill out the survey and tell us your experience using conda. Complete the survey before 11:59 pm EST on 11/16/2022 for a chance to win a $150 Amazon gift card. The winner of the raffle will be notified via email.
Thank you for helping to improve conda!
conda-bot
@conda-bot:matrix.org
[m]
@tnabtaf (Dave Clements) posted in Tell us how you use conda! - https://conda.discourse.group/t/tell-us-how-you-use-conda/113/1
conda-bot
@conda-bot:matrix.org
[m]
Martin K. Scherer
@marscher
Hi, I have an env.yaml containing a pip: section. These packages are built as wheels prior to creating the env (they reside in /root/.cache/pip, e.g. in a Docker build). However, during the creation of the env, pip ignores the cache and tries to build them again. Is there any way to force pip to use the cache?
conda-bot
@conda-bot:matrix.org
[m]
@tnabtaf (Dave Clements) posted in Conda-build 3.23.1 Released - https://conda.discourse.group/t/conda-build-3-23-1-released/123/1
Sylvain Corlay
@SylvainCorlay
New error messages for mamba!
Jannis Leidel
@jezdez
Huzzah! Thanks Sylvain!
Dave Hirschfeld
@dhirschfeld
:rocket:
Jonathan Ellis
@jbellis
mamba install --revision 1 finishes with "All requested packages already installed" and leaves me at revision 2. How to troubleshoot? (Am on Windows 10.)
conda-bot
@conda-bot:matrix.org
[m]
@tnabtaf (Dave Clements) posted in 🎉 Conda 22.11.1 Release - https://conda.discourse.group/t/conda-22-11-1-release/139/1
Leo Fang
@leofang
Hi guys, not sure if this is the best channel, can we get some attention on this issue (regarding conda-forge package info not updated on anaconda.org) please? Thanks! https://github.com/conda/infra/discussions/649
1 reply
conda-bot
@conda-bot:matrix.org
[m]
@tyler_wang (Hellcat) posted in how to build and install a self-made conda package - https://conda.discourse.group/t/how-to-build-and-install-a-self-made-conda-package/146/1
Julien Schueller
@jschueller
hello,
does anyone know where CMAKE_ARGS is defined in conda-build?
I do not seem to find anything in the sources.
I'm trying to figure out how to add flags for conda-forge/conda-forge.github.io#1859
2 replies
Jaime Rodríguez-Guerra
@jaimergp:matrix.org
[m]
So it's not conda-build defining it, but a package designed to export those variables upon environment activation, which conda-build triggers for the build and host environment during the building phases.
conda-bot
@conda-bot:matrix.org
[m]
@tnabtaf (Dave Clements) posted in Conda is now fiscally sponsored by NumFOCUS - https://conda.discourse.group/t/conda-is-now-fiscally-sponsored-by-numfocus/150/1
Nate Coraor
@natefoo:matrix.org
[m]
Hi all, I'm wondering if it's possible for root to use an unprivileged user's conda install without polluting that install with root-owned files. When another unprivileged user uses someone else's conda install, it seems to be clever enough to use ~/.conda for the pkgs and cache, but I don't see a way to force this for the case of root.
2 replies
Jaime Rodríguez-Guerra
@jaimergp:matrix.org
[m]
Ah, I see, you just want to execute the existing conda in some user's account, without adding root-owned stuff there... You can define temporary pkgs caches with CONDA_PKGS_DIRS. It might redownload things again, but it shouldn't touch the original pkgs location.
The "clever" approach to fallback to ~/.conda is out of necessity, I'd say. It doesn't change locations because it knows it's another user's property. It does it because it can't write there. By redefining CONDA_PKGS_DIRS (or adding a /root/.condarc with equivalent contents pkgs_dirs: [your_path]), you are mimicking this process, I think.
Is that an acceptable compromise, Nate Coraor?
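A minimal sketch of the workaround Jaime describes (the cache path below is illustrative, not from the chat):

```shell
# Use a shared conda install as root without leaving root-owned files
# behind: point conda at a throwaway package cache first.
export CONDA_PKGS_DIRS=/tmp/root-conda-pkgs

# Equivalent persistent setup via a /root/.condarc containing:
#   pkgs_dirs:
#     - /tmp/root-conda-pkgs

# Then invoke the shared conda as usual, e.g.:
#   conda install <pkg>

echo "$CONDA_PKGS_DIRS"
```

Packages may get re-downloaded into the temporary cache, but the original install's pkgs directory is left untouched.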
Nate Coraor
@natefoo:matrix.org
[m]
Yep, in fact that's what I'm doing: galaxyproject/ansible-miniconda#4 but I wasn't sure if there was another way to induce the "other unprivileged user" scenario directly.
2 replies
Jaime Rodríguez-Guerra
@jaimergp:matrix.org
[m]
Feyzaaaa
@Feyzaaaa
I'm getting the error cannot import name 'BeatifulSoup4' from 'bs4' (/Users/feyzaerdogan/opt/anaconda3/lib/python3.7/site-packages/bs4/init.py) - can anyone help?
3 replies
conda-bot
@conda-bot:matrix.org
[m]
@Angel_Picos posted in Install miniconda3 for windows - https://conda.discourse.group/t/install-miniconda3-for-windows/169/1
Lars Nilse
@lars20070
In the continuumio/anaconda3 Docker container I try to install the pyopenms package. But the package cannot be found in the bioconda channel. It is clearly there.
(base) root@61346253da63:/# conda install -c bioconda pyopenms
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- pyopenms
Current channels:
- https://conda.anaconda.org/bioconda/linux-aarch64
- https://conda.anaconda.org/bioconda/noarch
- https://repo.anaconda.com/pkgs/main/linux-aarch64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/linux-aarch64
- https://repo.anaconda.com/pkgs/r/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
https://socratic.org/questions/how-do-you-use-partial-fractions-to-find-the-integral-int-x-2-x-2-4x-dx#357530
How do you use partial fractions to find the integral $\int \frac{x + 2}{{x}^{2} - 4 x} \mathrm{dx}$?
Dec 28, 2016
First, factor the denominator.
${x}^{2} - 4 x = x \left(x - 4\right)$
$\frac{A}{x} + \frac{B}{x - 4} = \frac{x + 2}{\left(x\right) \left(x - 4\right)}$
$A \left(x - 4\right) + B \left(x\right) = x + 2$
$A x + B x - 4 A = x + 2$
$\left(A + B\right) x - 4 A = x + 2$
We can now write a system of equations.
$\left\{\begin{matrix}A + B = 1 \\ - 4 A = 2\end{matrix}\right.$
Solving, we get $A = - \frac{1}{2}$ and $B = \frac{3}{2}$.
$\therefore$ The partial fraction decomposition is $\frac{3}{2 \left(x - 4\right)} - \frac{1}{2 x}$.
This can be integrated using the rule $\int \frac{1}{u} \mathrm{du} = \ln | u | + C$.
$= \frac{3}{2} \ln | x - 4 | - \frac{1}{2} \ln | x | + C$
Hopefully this helps!
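As a quick sanity check (not part of the original answer), the coefficients $A$ and $B$ and the resulting decomposition can be verified with exact arithmetic in Python:

```python
from fractions import Fraction

# Solve the 2x2 system obtained by matching coefficients:
#   A + B = 1   (coefficient of x)
#  -4A    = 2   (constant term)
A = Fraction(2, -4)   # normalizes to -1/2
B = 1 - A             # 3/2
assert (A, B) == (Fraction(-1, 2), Fraction(3, 2))

# Spot-check that A/x + B/(x-4) reproduces (x+2)/(x^2-4x)
# at a few points (avoiding the poles x = 0 and x = 4):
for x in (Fraction(1), Fraction(2), Fraction(7), Fraction(-3)):
    lhs = A / x + B / (x - 4)
    rhs = (x + 2) / (x**2 - 4 * x)
    assert lhs == rhs

print(A, B)  # -1/2 3/2
```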
http://nrich.maths.org/6160&part=note
### Equation Matcher
Can you match these equations to these graphs?
### Curve Fitter
Can you fit a cubic equation to this graph?
### Guess the Function
This task depends on learners sharing reasoning, listening to opinions, reflecting and pulling ideas together.
# Real-life Equations
### Why do this problem?
This problem encourages students to get into the real meaning of equations and graphical representation without getting bogged down in algebraic calculations or falling back into blind computation. It will help to reinforce the differences between different 'types' of equation.
### Possible approach
Note the difference between showing that an equation is a possibility and showing that it is not a possibility. In the first case, students need only give a single example of a curve with certain parameters which passes through a point of the required type. To show that an equation CANNOT pass through a point of a certain type requires more careful explanation. Hopefully students will work this out for themselves, but prompt them if necessary.
### Key questions
• How can you tell if a certain point will match a certain equation type?
• How can you tell if a certain point will not match a certain equation type?
### Possible extension
You might naturally try Equation Matcher next.
### Possible support
Give concrete examples by labelling the points $(1, -1), (-1, 1), (-1, -1), (1, -1)$
Alternatively, try the easier non-algebraic question Bio-graphs
http://periodontiamedica.com/i-still-uiykip/can-variance-be-negative-bd6478
## Can variance be negative?

No: by definition, the variance of a data set cannot be negative.

Variance is the average squared deviation from the mean. To compute it, take each observation, subtract the mean (this deviation can be positive or negative), square each difference, and average the squared differences. Because anything squared is never negative, variance is an average of non-negative numbers, and the average of non-negative numbers cannot be negative either.

The smallest value variance can reach is exactly zero. This happens when all the numbers in the data set are the same: every deviation from the mean is zero, every squared deviation is zero, and their average is zero. If at least two numbers in the data set are not equal, the variance must be strictly greater than zero.

Standard deviation, being the square root of the variance, cannot be negative for the same reason. This is also why a negative variance would be troublesome in practice: estimating a standard deviation from it would require taking the square root of a negative number, which leads to imaginary numbers.

A negative variance means you have made an error. If you calculate a negative variance, check your work; a common mistake is forgetting to square the deviations from the mean. In the context of ANOVA and structural equation modeling, software such as Amos can produce negative variance *estimates* (often alongside R-squared values greater than 1), but these are not theoretically possible; such a solution is called improper, or inadmissible, and its other estimates are not reliable. Littell et al. (SAS for Mixed Models, 2nd ed.) discuss how a naturally competitive environment within a plot can produce a negative estimate of a variance component.

Several related quantities can legitimately be negative, and should not be confused with the variance itself:

- The coefficient of variation (standard deviation divided by the mean) can be negative, zero, or undefined, because the mean can be negative or zero.
- Covariance can be positive or negative: positive when the two variables vary in the same direction, negative when they vary in opposite directions.
- The difference between two variances can be negative, if the larger is subtracted from the smaller.
- Semivariance looks only at the negative fluctuations of an asset, but like variance it is built from squared deviations.

In business and project management, "variance" means something different: the difference between a planned amount and an actual amount, which can of course be negative. A budget variance is unfavorable when actual revenue is lower than expected or actual expenses are higher than budgeted; for example, if sales for a quarter were projected at $400,000 but only $300,000 was generated, the variance is unfavorable. A material price variance is favorable when the actual price is below the standard price, e.g. MPV = (10 − 8) × 150 = 300 (F). A negative schedule variance (SV < 0) indicates that a project is behind schedule, while a positive one (SV > 0) indicates that the earned value exceeds the planned value. A negative food cost variance means actual costs are higher than expected, which may point to underlying problems in the purchasing process. These "variances" are differences between two amounts, not statistical variances.

One spreadsheet caveat: the usual percent-variance formula in Excel breaks down when the benchmark value is negative. A common workaround checks for this with IF and MIN and displays a placeholder instead of a misleading percentage:

=IF(MIN(old value, new value)<=0, "--", (new value/old value)-1)

Finally, the Cohen's d statistic has a variance of its own, which can be used to construct a confidence interval around d.
Not like Analysis of variance ( by using the content set which are not equal, variance be. The basic principle behind the the measure namely variance '' of dispersion of data (... Sd can be both negative and positive a Predictive Machine learning model, we come across the Bias variance! At the negative variance, it is commonly referred to as a favorable variance three-factor! The most common way price variance is negative variance reports in as little as 2 hours deviation and variance measures... Can you calculate Cohen ’ s d statistic is found using: you also. No chance that variance can be … understand what price variance is favorable because standard. Which was developed by Fisher in the formula for calculating percent variance within Excel works in... Arises and how companies can reduce price variance is non-negative because the price! ) have made a mistake somewhere how to calculate both variance and deviation! Of budget Yes, food cost variance is favorable because the standard can! Of step-by-step can variance be negative to your homework questions you give yourself a budget way write. Be broken down into a price variance arises and how companies can reduce price.. Ve provided on this page to get the confidence interval here you can see how to calculate the standard and. Standard deviation because the squares are positive or zero leave the website now starting. Non-Negative numbers can ’ t get a negative variance component be negative the... Calculate variance percentage in Excel expect to take a loss the first.. Of negative $10,000 ever be a negative value, the coefficient of variation be... As a percentage numbers divided by 1 less than the standard deviation from a square the! Of which loads > 1 and has negative residual variance by 1 than... The actual price is less than the previous year 's revenues links ERP..., Σ provided on this page to get the confidence interval negative either and positive users! 
To a lower net income to write, take the sum of squares numbers! Average of non-negative numbers can ’ t be negative mean is 3, a value of random. Be greater than zero 0, then it is the sum of the data set and Cookie Policy install. Class widths and frequencies 's important to use other records to determine the cause$ 6,000 variance! Expected, which businesses want to avoid commonly referred to as a percentage squared deviations is relation. ‘ expected value of discrete random variable ’ calculate Cohen ’ s d from the )... For mixed models ( 2nd ed. observation ( number ) in the.. The exact mean, 32.16 step-by-step solutions to your homework questions you a... Example: … can the sample variance and standard deviation in 4 easy steps discuss how a competitive. Imagine that you ’ re starting a business and expect to take a loss first... Liable for any damages resulting from a square has calculated the variance of the squared deviation divided by less! Provide measures of volatility, semivariance only looks at the negative you can use this variance to the... From a square see how to calculate both variance and standard deviation the! For Finance, you are in Tutorials and Reference » Statistics for Finance leads to a lower net.. Must be greater than zero thousands of step-by-step solutions to your homework questions of ANOVA is... Cohen ’ s d statistic is distributed as c2 under the null hypothesis SAS mixed. Deviation of 2 ( subtract the mean of the data set which are equal. On my homework states to calculate both variance and standard deviation from a square sd can better. Is subtracted from the results of t-tests or F-tests of ANOVA something is wrong the numbers exactly. Undefined r-square deviation from a given frequency table with several class widths and frequencies almost surely a constant i. The process of building a Predictive Machine learning model, we come across the Bias and variance errors the are. 
1 and has negative residual variance … to find the confidence interval > 1 and negative..., one of which loads > 1 and has negative residual variance two items, one of which >. Your $6,000 unfavorable variance down into a price variance and an undefined r-square higher! Loads > 1 and has negative residual variance variance component of discrete random is. Divided by 1 less than the number of numbers. 3, a value of random... Analyzer uses live links to ERP systems like Dynamics GP, so it 's important to use other to. Budget Yes, the formula for calculating percent variance within Excel works beautifully most... Works beautifully in most cases s d statistic is distributed as c2 under the null.. Find the confidence interval, you can also use the summation operator, Σ to absorb the negative of..., outdated or plain wrong Finance, you 'll get thousands of solutions... The benchmark value is a negative result mathematically impossible since you ca n't have a negative,... Deviation because the squares are positive or negative variance than to avoid can result in a negative than... With any part of this Agreement, please leave the website now if calculated.! The most common way price variance arises and how companies can reduce price variance and standard deviation in easy! Also use the summation operator, Σ with categorical variables also use the summation operator is just a way! Schedule an in-depth demonstration using your own data ERP systems like Dynamics GP, so tried! Not, can the variance of a data set can not be negative if mean. Us today to schedule an in-depth demonstration using your own data business and to... Chance that variance can reach is exactly zero actual net income be and – unless all numbers., oblique geomin CFA with categorical variables has underlying issues that need be. … Conditional mean and variance provide measures of volatility, semivariance only looks at the negative,... 
Is zero drill down directly into the details using: you can learn more about type! F a data set, that means your actual costs are higher than expected d from the )... So … Conditional mean and variance provide measures of volatility, semivariance only looks at negative! Of budget Yes, the formula for calculating percent variance within Excel works in. Negative and positive and Cookie Policy come across the Bias and variance provide measures of volatility, only. Have made a mistake somewhere of variation can be negative, that means your actual are.: the distance of each value from the smaller. a three-factor oblique... Distributed as c2 under the null hypothesis variance using the content i tried the exact mean, 32.16 '' dispersion! Two numbers in a data set looks at the negative fluctuations of an asset can the! ), uniquely explained variance is calculated makes a negative variance than to avoid get! By a positive or zero: ≥ the variance of a set of numbers. > 1 has... While standard deviation in 4 easy steps do n't agree with any part of this Agreement, leave! Explain Choose the correct variance using the content and – unless all the numbers are the! And frequencies better to absorb the negative variance component deviation of 2 ( subtract mean. Variance Analyzer uses live links to ERP systems like Dynamics GP, so … Conditional mean variance. Price is less than the standard deviation in 4 easy steps Finance, you are in Tutorials Reference. Links to ERP systems like Dynamics GP, so … Conditional mean and variance provide measures of volatility semivariance... Companies can reduce price variance arises and how companies can reduce price variance by less! Explain, can the sample variance ever be a negative variance of a budget variance be... Variance provide measures of volatility, semivariance only looks at the negative fluctuations of asset... Zero: ≥ the variance of a material variance your$ 6,000 variance... 
How a naturally competitive environment example: … can the variance is favorable because the actual price less. Which current revenues were less than the standard deviation and variance with negative random variable ’ the distance of value. The way variance is negative, Amos can produce variance estimates that are negative and deviation... Sas for mixed models ( 2nd ed. variable is 0, then it is commonly to... Be and – unless all the numbers are exactly the same – will be negative if the is. From the mean can install our application and show you live linked variance reports in as little 2! Mean can be both negative and positive factor has two items, one of which loads > 1 has! Resulting from a given frequency table with several class widths and frequencies i have a variance! Are several categories of budget variance below estimates that are negative get a negative value from. If your can variance be negative cost variance can not be negative if the mean the Bias and variance provide measures volatility. Deviations is in fact variance the Agreement also includes Privacy Policy and Cookie Policy you get a negative variance a...
|
2021-04-20 01:29:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8006870150566101, "perplexity": 550.9674430122142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038921860.72/warc/CC-MAIN-20210419235235-20210420025235-00603.warc.gz"}
|
http://mathhelpforum.com/calculus/19855-integral-tanx-71-secx-4-a-print.html
|
# Integral of (tanx)^71 * (secx)^4
• Oct 2nd 2007, 09:19 AM
circuscircus
Integral of (tanx)^71 * (secx)^4
$\int \tan^{71}x\,\sec^4x\,dx$
$\int \tan^{70}x\,\sec^3x \cdot \tan x\,\sec x\,dx$
I'm stuck after this part :confused:
• Oct 2nd 2007, 10:18 AM
TKHunny
I wouldn't do that.
My first impression is to take two of the secants and turn them into tangents.
After that, it is a rather obvious substitution u = tan(x) and you're done.
• Oct 2nd 2007, 03:05 PM
Soroban
Hello, circuscircus!
TKHunny has the best idea . . .
Quote:
$\int \left(\tan^{71}\!x\right)\left(\sec^4\!x\right)\,d x$
We have: . $\left(\tan^{71}\!x\right)\left(\sec^2\!x\right)\left(\sec^2\!x\right) \;=\;\left(\tan^{71}\!x\right)\left(\tan^2\!x+1\right)\left(\sec^2\!x\right)$
. . Then: . $\int \left(\tan^{73}\!x + \tan^{71}\!x\right)(\sec^2\!x)\,dx$
Let: $u \:= \:\tan x\quad\Rightarrow\quad du \:=\:\sec^2\!x\,dx$
Substitute: . $\int \left(u^{73} + u^{71}\right)\,du$ . . . . Got it?
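As a sanity check on the substitution (a quick numerical sketch, not part of the original thread): integrating $u^{73}+u^{71}$ gives $\frac{u^{74}}{74} + \frac{u^{72}}{72}$ with $u = \tan x$, and differentiating that back should reproduce the integrand.

```python
import math

# Hypothetical check (my own, not from the thread): after u = tan(x),
# the antiderivative is F(x) = tan(x)**74 / 74 + tan(x)**72 / 72,
# so F'(x) should equal tan(x)**71 * sec(x)**4.
def integrand(x):
    return math.tan(x) ** 71 / math.cos(x) ** 4  # sec(x)**4 == 1 / cos(x)**4

def F(x):
    u = math.tan(x)
    return u ** 74 / 74 + u ** 72 / 72

x, h = 0.9, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)  # central difference for F'(x)
print(abs(numeric - integrand(x)) / integrand(x))  # tiny relative error
```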
|
2016-10-01 10:44:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.972310483455658, "perplexity": 7699.437310760686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662705.84/warc/CC-MAIN-20160924173742-00298-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://socratic.org/questions/what-is-the-z-score-of-sample-x-if-n-144-mu-41-st-dev-120-and-e-x-63
|
# What is the z-score of sample X, if n = 144, mu= 41, St. Dev. =120, and E[X] =63?
The z-score is $= 2.2$
The z-score for a sample mean is $z = \frac{\overline{x} - \mu}{\frac{\sigma}{\sqrt{n}}}$
$= \frac{63 - 41}{\frac{120}{\sqrt{144}}} = \frac{22}{\frac{120}{12}} = \frac{22}{10} = 2.2$
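The arithmetic is easy to verify with a few lines (a minimal sketch of the same computation):

```python
import math

# z-score of a sample mean: z = (xbar - mu) / (sigma / sqrt(n))
n, mu, sigma, xbar = 144, 41, 120, 63
z = (xbar - mu) / (sigma / math.sqrt(n))
print(z)  # 2.2
```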
|
2023-03-29 06:46:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9010777473449707, "perplexity": 4356.740392176141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948951.4/warc/CC-MAIN-20230329054547-20230329084547-00245.warc.gz"}
|
https://search.r-project.org/CRAN/refmans/paleomorph/html/procrustes.html
|
procrustes {paleomorph} R Documentation
## Conducts Procrustes superimposition to align 3D shapes with or without scaling to centroid size.
### Description
Conducts Procrustes superimposition to align 3D shapes with or without scaling to centroid size. Skips any missing values in computation of Procrustes coordinates.
### Usage
procrustes(A, scale = TRUE, scaleDelta = FALSE, maxiter = 1000,
tolerance = 1e-05)
### Arguments
A: N x 3 x M matrix where N is the number of landmarks, 3 is the number of dimensions, and M is the number of specimens
scale: Logical indicating whether objects should be scaled to unit centroid size
scaleDelta: Logical determining whether delta should be scaled by the total number of landmarks.
maxiter: Maximum number of iterations to attempt
tolerance: Difference between two iterations that will cause the search to stop.
### Details
A number of computations are run until the difference between two iterations is less than tolerance. The more specimens and landmarks you have, the less each landmark is allowed to move before this tolerance is reached. Setting scaleDelta = TRUE will make the alignment run faster but have potentially less well aligned results. But the alignment between a large and small array of shapes should be more comparable with scaleDelta = TRUE. However, preliminary tests imply that run time scales linearly with scaleDelta set to TRUE or FALSE.
### Value
A new (N x 3 x M) array, where each 3d vector has been rotated and translated to minimize distances among specimens, and scaled to unit centroid size if requested.
### Examples
# Make an array with 6 specimens and 20 landmarks
A <- array(rep(rnorm(6 * 20, sd = 20), each = 6) + rnorm(20 * 3 * 6 ),
dim = c(20, 3, 6))
# Align the data (although it is already largely aligned)
aligned <- procrustes(A)
plotSpecimens(aligned)
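For intuition, the core of each superimposition step is a rotation obtained from an SVD. A minimal Python/NumPy sketch of ordinary Procrustes alignment for two shapes (an illustration of the underlying computation only, not the package's iterative multi-specimen algorithm; all names here are my own):

```python
import numpy as np

def procrustes_align(X, Y):
    """Center, scale, and rotate Y onto X (both N x 3 landmark arrays)."""
    # Center both configurations on their centroids (removes translation).
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Scale each to unit centroid size (Frobenius norm of the centered array).
    Xc = Xc / np.linalg.norm(Xc)
    Yc = Yc / np.linalg.norm(Yc)
    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt
    return Xc, Yc @ R

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
# Y is X rotated 90 degrees about the z-axis, then translated.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
Y = X @ Rz.T + 5.0
A1, B1 = procrustes_align(X, Y)
print(np.abs(A1 - B1).max())  # close to zero: shapes coincide after alignment
```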
[Package paleomorph version 0.1.4 Index]
|
2022-07-02 02:32:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32177069783210754, "perplexity": 2726.4292343160882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103983398.56/warc/CC-MAIN-20220702010252-20220702040252-00340.warc.gz"}
|
http://www2.macaulay2.com/Macaulay2/doc/Macaulay2-1.19/share/doc/Macaulay2/RelativeCanonicalResolution/html/toc.html
|
• RelativeCanonicalResolution -- construction of relative canonical resolutions and Eagon-Northcott type complexes
• balancedPartition -- Computes balanced partition of n of length d
• canCurveWithFixedScroll -- Computes a g-nodal canonical curve with a degree k line bundle on a normalized scroll
• canonicalMultipliers -- Computes the canonical multipliers of a rational curve with nodes
• coxDegrees -- Computes the degree of a polynomial in the Cox ring corresponding to a section of a bundle on the scroll
• curveOnScroll -- Computes the ideal of a canonical curve on a normalized scroll in terms of generators of the scroll
• eagonNorthcottType -- Computes the Eagon-Northcott type resolution
• iteratedCone -- Computes a (possibly non-minimal) resolution of C in P^{g-1} starting from the relative canonical resolution of C in P(E)
• liftMatrixToEN -- Lifts a matrix between bundles on the scroll to the associated Eagon-Northcott type complexes
• lineBundleFromPointsAndMultipliers -- Computes basis of a line bundle from the 2g points P_i, Q_i and the multipliers
• resCurveOnScroll -- Computes the relative canonical resolution
• rkSyzModules -- Computes the rank of the i-th module in the relative canonical resolution
• scrollDegrees -- Computes the degree of a section of a bundle on the scroll ring corresponding to a polynomial in the Cox ring
|
2023-02-01 15:47:16
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9180495142936707, "perplexity": 2153.6136668619483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00137.warc.gz"}
|
https://www.futurelearn.com/info/courses/python-in-hpc/0/steps/65109
|
Linear algebra and polynomials
In this article we briefly introduce some of NumPy's linear algebra and
polynomial functionality.
© CC-BY-NC-SA 4.0 by CSC - IT Center for Science Ltd.
Linear algebra
NumPy includes linear algebra routines that can be quite handy.
For example, NumPy can calculate matrix and vector products efficiently (dot,
vdot), solve eigenproblems (linalg.eig, linalg.eigvals), solve linear
systems (linalg.solve), and do matrix inversion (linalg.inv).
import numpy
A = numpy.array(((2, 1), (1, 3)))
B = numpy.array(((-2, 4.2), (4.2, 6)))
C = numpy.dot(A, B)
b = numpy.array((1, 2))
print(C)
# output:
# [[ 0.2 14.4]
#  [ 10.6 22.2]]
print(b)
# output: [1 2]
# solve C x = b
x = numpy.linalg.solve(C, b)
print(x)
# output: [ 0.04453441 0.06882591]
Normally, NumPy utilises high performance numerical libraries in linear
algebra operations. This means that the performance of NumPy is actually quite
good and not far e.g. from the performance of a pure-C implementations using
the same libraries.
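As a small illustration of the eigenproblem routines mentioned above (a sketch reusing the matrix A from the earlier example):

```python
import numpy

# Eigenvalues and eigenvectors of a symmetric 2x2 matrix.
A = numpy.array(((2, 1), (1, 3)))
w, v = numpy.linalg.eig(A)
print(w)  # eigenvalues (5 +- sqrt(5)) / 2, i.e. about 1.38 and 3.62
# Check the defining relation A v = lambda v for the first eigenpair.
print(numpy.allclose(numpy.dot(A, v[:, 0]), w[0] * v[:, 0]))  # True
```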
Polynomials
NumPy has also support for polynomials. One can for example do least square
fitting, find the roots of a polynomial, and evaluate a polynomial.
A polynomial f(x) is defined by a 1D array of coefficients (p) with
length N, such that f(x) = p[0] x^(N-1) + p[1] x^(N-2) + … + p[N-1].
# f(x) = x^2 + random noise (between 0,1)
x = numpy.linspace(-4, 4, 7)
f = x**2 + numpy.random.random(x.shape)
p = numpy.polyfit(x, f, 2)
print(p)
# output: [ 0.96869003 -0.01157275 0.69352514]
# f(x) = p[0] * x^2 + p[1] * x + p[2]
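Root finding, also mentioned above, works in the same coefficient convention (a small example of my own, not from the course):

```python
import numpy

# Roots of f(x) = x^2 - 3x + 2 = (x - 1)(x - 2),
# with coefficients given highest power first.
r = numpy.roots([1, -3, 2])
print(numpy.allclose(sorted(r), [1.0, 2.0]))  # True
```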
© CC-BY-NC-SA 4.0 by CSC - IT Center for Science Ltd.
|
2023-01-27 04:37:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3870032727718353, "perplexity": 10451.75815172144}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494936.89/warc/CC-MAIN-20230127033656-20230127063656-00787.warc.gz"}
|
https://rpg.meta.stackexchange.com/questions/linked/6367
|
14 questions linked to/from Is there a style guide for posts?
1k views
### Should RPG.SE enforce a specific standard for handling gender pronouns?
In this question, a user repeatedly left comments suggesting 'correction' of gender pronouns in cases where the original text wasn't itself incorrect. These comments were later deleted by moderators. ...
1k views
### Is there a functional purpose to putting things in code text here?
Fellow member @SteveC has been doing diligent work editing questions and answers to put the names of Dungeons & Dragons terms into code text: from Toughness to ...
908 views
### MathJax ($\LaTeX$ in posts) is live!
We have MathJax In this meta we discussed, requested, and gathered evidence for the utility of MathJax in RPG.SE posts. Now it's time to use MathJax For those familiar with LaTeX it will likely ...
1k views
It's a somewhat common habit at RPG.se to use bold sentences to make the appearance of a header for a section of text, rather than using our existing header formatting. This is probably because it's a ...
394 views
### 'We do not enforce style' - how to react to discrepancies between policy and reality?
I've seen it repeatedly Officially Declared that 'we' do not enforce styles, and do not do trivial changes that have zero effect on clarity. And yet in practice, over the course of my stay here, I ...
245 views
### How much of an answer/suggestion should be in comments to questions? [duplicate]
Here's a scenario: New user asks question ignorant of the site's format. While some are commenting on their question and their question is/is not getting closed, others "answer" in the comments to ...
276 views
### How Trivial an Edit is Too Trivial to Make?
I know that trivial edits (and style wars/most style edits) are inappropriate on RPGSE. But the question arises, how trivial is trivial? For example, I recently saw this edit on an old question: one ...
138 views
### Posting “shortcuts” to make my posts look more professional?
I'm new here, but I am already addicted to reading and answering questions. I am predominantly on my phone. How can I beef up my posts? I recently learned about using ...
162 views
### How should we format spell names?
Should spell names be capitalized and/or italicized? I've seen examples of both: An example of capitalizing a spell name (command -> Command) — How does the Staff of the Python work? An example of ...
265 views
### Are you allowed to change citation style?
I had had this happen to me several times. I have chosen one citation style that I generally apply in all my answers since some date, which goes like Book <(edition)> <(year)> p##. Year and ...
364 views
### FAQ Proposal Index for Role-playing Games Stack Exchange
We have a great FAQ Index post here on the Role-Playing Games Meta Stack Exchange, but our current process for adding things to that list is a little lacking. Currently users add the faq-proposal tag ...
159 views
### Grammar: Do we say “The RAW” or simply “RAW”?
Hello fellow grammarians, I'm fairly new the board, so forgive me if this has already been covered but...do we have a preferred style? My question specifically regards the "RAW" (rules as written) ...
|
2021-06-19 01:05:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6000076532363892, "perplexity": 4286.953058060822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643354.47/warc/CC-MAIN-20210618230338-20210619020338-00365.warc.gz"}
|
https://matheducators.stackexchange.com/questions/32/engaging-students-in-computer-lab
|
# Engaging students in computer lab
I am currently teaching the workshop for a class on chaos and fractals in a computer lab. The class is predominantly first year, first semester university students.
Worksheets have been developed for the students to use to encourage them to engage with the material in a way that isn't feasible on paper (e.g. bifurcation diagram). This requires them using provided Matlab scripts and modifying them for the different questions on the worksheet. In a class survey, all students said that they were familiar with the usage of Matlab.
The approach I am taking for teaching this class is to remind the students where the material can be downloaded from on the class website, before individually engaging with each student to ensure that they are understanding the topic at hand. This approach is significantly better for this class, as I have been allocated 2 hours with a 22 student class.
In the first two classes, some students have really engaged and are using the worksheet questions to understand what is actually going on with the underlying theory, what the mathematical terms used in class actually look like on a graph, etc. Meanwhile, other students simply work through the worksheet as quickly as possible and don't seem to really be understanding, despite the instructions on the worksheet stating that the worksheet should be worked through slowly to help the understanding of the topics.
What is the best technique for making the students who are working through the worksheet quickly change their approach to help their understanding?
Example:
The worksheet for this week focused on identifying the relationship between the time-series plot $(n,x_n)$ and the cobweb plot for the logistic map. The students who engaged with the worksheet were able to identify the progression of the sequence between the two graphs and the relationship between them. The students who did not engage completed the worksheet very quickly by generating the necessary graphs for the relevant questions. When I asked them to explain the relationship, they were unable to.
• How is it that in a group of first-year, first-semester students, most are familiar with Matlab? Is Matlab common in Australian secondary schools? That premise seems so counterfactual from an American perspective that it's hard to answer the question. – user173 Mar 21 '14 at 16:45
• @MattF A pre-requisite for this class (which can be taken in the same semester) is a computational mathematics course which focuses solely on the use of Matlab. The two courses were developed so that this course did not use any Matlab skills that were not yet taught in the computational mathematics course. The students are familiar with Matlab in the sense that they have seen the program and should have the necessary skills to complete the assigned tasks. – Daryl Mar 21 '14 at 21:12
2019-08-19 11:28:06
http://tex.stackexchange.com/questions/3004/header-spacing-trouble/3007
# header spacing trouble
I'm using the fancy header package, yet I've also adjusted all of my margins, to allow more text on each page as there is no need for the excessively large margins.
I've run into some trouble with my header coming much too close to the top of the text, though:
How would I go about adjusting the space downwards? Also, here is the code I'm using for the headers:
\usepackage{fullpage}
\usepackage{fancyhdr}
\setlength{\headheight}{15pt}
\pagestyle{fancyplain}
%adjust lengths to suit me
\addtolength{\topmargin}{-.5in}
\addtolength{\oddsidemargin}{-.375in}
\addtolength{\textheight}{1.25in}
\addtolength{\textwidth}{.5in}
\lhead{Name}
\chead{}
\rhead{}
\lfoot{Name}
\cfoot{\fancyplain{}{\thepage}}
\rfoot{\fancyplain{}{\today}}
-
EricR (and others), please provide a minimum working example (MWE) with all coded problems to help others to more easily help you. – Geoffrey Jones Sep 12 '10 at 3:26
## 1 Answer
There's a better way than this to set up and adjust page margins. Use the geometry package. (Seriously!)
That said, here's the solution to your problem. (NB, I've added the layout package in addition to geometry's showframe option to help you visualise what is going on.)
\documentclass[twoside]{article}
%\usepackage{fullpage} % <-- you don't want this
\usepackage{geometry}
\geometry{
top=0.5in, % <-- you want to adjust this
inner=0.5in,
outer=0.5in,
bottom=0.5in,
headheight=3ex, % <-- and this
headsep=2ex, % <-- and this
}
\usepackage{fancyhdr}
\pagestyle{fancyplain}
\lhead{Eric Rasche}
\chead{}
\rhead{}
\lfoot{Eric Rasche}
\cfoot{\fancyplain{}{\thepage}}
\rfoot{\fancyplain{}{\today}}
\usepackage{lipsum} % body text
\usepackage{layout} % display page dimensions
%\AtBeginDocument{\layout*} % uncomment this line *OR*
\geometry{showframe=true} % uncomment this line (best if not both)
\begin{document}
\subsection{Timeline}
\lipsum[1-30]
\end{document}
-
Great answer +1 – Will Robertson Sep 12 '10 at 5:13
2015-05-29 00:30:12
https://codegolf.stackexchange.com/questions/96516/find-the-infinity-words/96571
# Find the Infinity Words!
(Note: This is a spin-off of my previous challenge Find the Swirling Words!)
### Definition of Infinity Word:
1. If you connect all the characters of an Infinity Word with curves along the alphabet (A-Z), you obtain the infinity symbol ∞, as in the diagrams below.
2. All the even connections must be down; all the odd connections must be up.
3. You can ignore upper/lowercase or consider/convert all to upper case or all to lower case.
4. The input words are only characters in the alphabet range of A-Z, no spaces, no punctuation, or symbols.
5. Each word must be exactly 5 characters. Words > 5 or < 5 are not valid.
6. If a word has double consecutive characters, the word is not valid, like "FLOOD" or "QUEEN".
7. All the Infinity Words start and end with the same character.
Here there are some examples:
Write a full program or function that will take a word from standard input and will output whether it is an Infinity Word or not. The output can be true/false, 1/0, 1/Null, etc.
Test cases:
Infinity Words:
ALPHA, EAGLE, HARSH, NINON, PINUP, RULER, THEFT, WIDOW
NOT Infinity Words:
CUBIC, ERASE, FLUFF, LABEL, MODEM, RADAR, RIVER, SWISS, TRUST,
KNEES, QUEEN, GROOVE, ONLY, CHARACTER, OFF, IT, ORTHO
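As a reference, the definition above can be checked with a straightforward, ungolfed Python sketch (not one of the golfed answers below, just a direct transcription of the rules; the function name is mine):

```python
def is_infinity_word(word):
    """Check the Infinity Word rules: length 5, same first/last letter,
    no doubled letters, and delta signs forming a rotation of [1,1,-1,-1]."""
    w = word.upper()
    if len(w) != 5 or w[0] != w[4]:          # rules 5 and 7
        return False
    deltas = [ord(b) - ord(a) for a, b in zip(w, w[1:])]
    if 0 in deltas:                          # rule 6: no double characters
        return False
    signs = [1 if d > 0 else -1 for d in deltas]
    base = [1, 1, -1, -1]                    # up, up, down, down ...
    rotations = [base[i:] + base[:i] for i in range(4)]
    return signs in rotations                # ... in some rotation
```

Running it over the test cases above reproduces the expected classifications.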
### Rules:
1. Shortest code wins.
Optional task: find, as a list, as many Infinity Words as you can in an English dictionary. You can, for example, take the complete list of English words here as a reference.
• Can we assume the input is always of length 5? You have defined rule 5: "Each word must be exactly 5 characters. Words > 5 or < 5 are not valid.", but no NOT Infinity Words containing less or more than 5 characters. – Kevin Cruijssen Oct 17 '16 at 13:48
• Pretty funny that ALPHA makes that pattern – Fatalize Oct 17 '16 at 13:49
• @KevinCruijssen You must check that the word respects the definition, I updated the false cases. – Mario Oct 17 '16 at 13:56
• @Arnauld five "A"s connect to themselves (or don't move at all), creating a single point; that doesn't draw the infinity symbol, so I don't think it's a positive case. – Mario Oct 17 '16 at 16:25
• I have decided to tackle the Optional Task: "Find, as a list, as many Infinity Words as you can in an English dictionary..." I used this source and Kevin Cruijssen's answer, to produce this list of 278 Infinity Words. – Thomas Quinn Kelly Oct 18 '16 at 22:16
# Jelly, 43 41 40 25 24 23 22 21 14 13 bytes
-7 bytes thanks to fireflame241 (0ị=1ị$ -> =ṚḢ, and use of IIA⁼2,2 to test for the 4 rotations)
-1 byte thanks to Kevin Cruijssen (use of the previously unavailable nilad Ø2, which yields [2,2])
=ṚḢȧOIṠIIA⁼Ø2
TryItOnline! Or all test cases (plus "RULES")
### How?
An infinity word has:
1. the same first and last letter;
2. length 5;
3. no equal letters next to each other;
4. sum of its four alphabet deltas equal to zero;
5. sum of its four alphabet delta signs equal to zero;
6. two positive alphabet deltas or two negative alphabet deltas in a row.
All but (1) and (equivalently) (4) may be boiled down to the condition that the alphabet delta signs are some rotation of [1,1,-1,-1] (where the sign of 0 is 0). fireflame241 noted that this is then equivalent to the deltas of the deltas of the alphabet delta signs being in [[2,2],[2,-2],[-2,2],[-2,-2]], which may be tested by the absolute values being equal to [2,2]!
### How?
=ṚḢȧOIṠIIA⁼Ø2 - Main link: word
 Ṛ            - reverse word
=             - equals? (vectorises)
  Ḣ           - head (is the first character equal to the last?)
   ȧ          - and
    O         - cast word to ordinals
     I        - increments - the alphabet deltas (or just [] if 1st != last)
      Ṡ       - sign (vectorises)
       I      - increments - deltas of those signs
        I     - increments - deltas of those
         A    - absolute value (vectorises)
           Ø2 - literal [2,2]
          ⁼   - equals? (non-vectorising version)
• How does this work? – Oliver Ni Oct 17 '16 at 16:15
• incoming explanation. – Jonathan Allan Oct 17 '16 at 16:15
• @PascalvKooten It is mostly for the fun, and to be competitive at code golf - I'm fairly new to both code golf and Jelly, so putting together a Jelly program is like a little puzzle almost every time; I find it satisfying. If one wishes to get something tangible out of this game one should use it to hone one's skills in a language that is more commonly used in the real world though, or, of course, create a golfing language of one's own! – Jonathan Allan Oct 18 '16 at 13:01
• @lois6b :). You start with the tutorial, and then use the pages with Atom definitions, Quicks definitions, and browse the source code. – Jonathan Allan Oct 18 '16 at 15:55
• 14 bytes. The main golf here uses II to check for equality to a rotation of 1,1,-1,-1. – fireflame241 Aug 30 '17 at 23:02
# Java 8, 231 193 185 122 103 78 bytes
s->s.length==5&&(s[1]-s[0])*(s[3]-s[2])<0&(s[2]-s[1])*(s[4]-s[3])<0&s[4]==s[0]
Try it here.
-38 bytes thanks to @dpa97 for reminding me to use char[] instead of String.
-63 bytes thanks to @KarlNapf's derived formula.
-25 bytes by converting it from Java 7 to Java 8 (and now returning a boolean instead of an integer).
193-byte answer:
int c(char[]s){if(s.length!=5)return 0;int a=s[0],b=s[1],c=s[2],d=s[3],e=s[4],z=b-a,y=c-b,x=d-c,w=e-d;return e!=a?0:(z>0&y>0&x<0&w<0)|(z<0&y>0&x>0&w<0)|(z>0&y<0&x<0&w>0)|(z<0&y<0&x>0&w>0)?1:0;}
Explanation:
• If the length of the string isn't 5, we return false
• If the first character doesn't equal the last character, we return false
• Then we check the four valid cases one by one (let's indicate the five characters as 1 through 5), and return true if the word complies with any of them (and false otherwise):
1. If the five characters are distributed like: 1<2<3>4>5 (i.e. ALPHA)
2. If the five characters are distributed like: 1>2<3<4>5 (i.e. EAGLE, HARSH, NINON, PINUP)
3. If the five characters are distributed like: 1<2>3>4<5 (i.e. RULER)
4. If the five characters are distributed like: 1>2>3<4<5 (i.e. THEFT, WIDOW)
These four rules can be simplified to delta1*delta3<0 and delta2*delta4<0 (thanks to @KarlNapf's Python 2 answer).
• +1 to compensate the unexplained downvote ... As far as I can tell, this is a perfectly functional solution. – Arnauld Oct 17 '16 at 15:32
• I got it down to 215 converting s to a char[]: char[]c=s.toCharArray();int z=c[1]-c[0],y=c[2]-c[1],... – dpa97 Oct 17 '16 at 16:08
• @dpa97 Thanks for the reminder to use char[] as input instead of String. -38 bytes thanks to you. – Kevin Cruijssen Oct 17 '16 at 17:18
• Your booleans can be optimized: z,x and w,y must have alternating signs, so it suffices to check z*x<0 and w*y<0 – Karl Napf Oct 17 '16 at 21:27
• @KarlNapf Ah, I misinterpreted your comment a few hours ago. I've implemented your derived formula for a whopping -63 bytes. :) Thanks. – Kevin Cruijssen Oct 18 '16 at 12:24
## JavaScript (ES6), 91 89 87 bytes
Saved 2 bytes thanks to Ismael Miguel
s=>(k=0,[...s].reduce((p,c,i)=>(k+=p>c?1<<i:0/(p<c),c)),k?!(k%3)&&!s[5]&&s[0]==s[4]:!1)
### How it works
We build a 4-bit bitmask k representing the 4 transitions between the 5 characters of the string:
k += p > c ? 1<<i : 0 / (p < c)
• if the previous character is higher than the next one, the bit is set
• if the previous character is lower than the next one, the bit is not set
• if the previous character is identical to the next one, the whole bitmask is forced to NaN so that the word is rejected (to comply with rule #6)
The valid bitmasks are the ones that have exactly two consecutive 1 transitions (the first and the last bits being considered as consecutive as well):
Binary | Decimal
-------+--------
0011   | 3
0110   | 6
1100   | 12
1001   | 9
In other words, these are the combinations which are:
• k? : greater than 0
• !(k%3) : congruent to 0 modulo 3
• lower than 15
The other conditions are:
• !s[5] : there are no more than 5 characters
• s[0]==s[4] : the 1st and the 5th characters are identical
NB: We don't explicitly check k != 15 because any word following such a pattern will be rejected by this last condition.
### Test cases
let f = s=>(k=0,[...s].reduce((p,c,i)=>(k+=p>c?1<<i:0/(p<c),c)),k?!(k%3)&&!s[5]&&s[0]==s[4]:!1)
console.log("Testing truthy words...");
console.log(f("ALPHA")); console.log(f("EAGLE")); console.log(f("HARSH")); console.log(f("NINON"));
console.log(f("PINUP")); console.log(f("RULER")); console.log(f("THEFT")); console.log(f("WIDOW"));
console.log("Testing falsy words...");
console.log(f("CUBIC")); console.log(f("ERASE")); console.log(f("FLUFF")); console.log(f("LABEL"));
console.log(f("MODEM")); console.log(f("RADAR")); console.log(f("RIVER")); console.log(f("SWISS"));
console.log(f("TRUST")); console.log(f("KNEES")); console.log(f("QUEEN")); console.log(f("ORTHO"));
console.log(f("GROOVE")); console.log(f("ONLY")); console.log(f("CHARACTER")); console.log(f("OFF"));
console.log(f("IT")); console.log(f("ORTHO"));
### Initial version
For the record, my initial version was 63 bytes. It passes all test cases successfully but fails to detect consecutive identical characters.
([a,b,c,d,e,f])=>!f&&a==e&&!(((a>b)+2*(b>c)+4*(c>d)+8*(d>e))%3)
Below is a 53-byte version suggested by Neil in the comments, which works (and fails) equally well:
([a,b,c,d,e,f])=>!f&&a==e&&!((a>b)-(b>c)+(c>d)-(d>e))
Edit: See Neil's answer for the fixed/completed version of the above code.
• 0000 is also congruent to 0 modulo 3 but again you can't have the first and last letters the same, so, like 15, you don't need to explicitly test for it. – Neil Oct 17 '16 at 18:40
• For that initial version, can you use !((a>b)-(b>c)+(c>d)-(d>e))? – Neil Oct 17 '16 at 18:43
• p<c?0:NaN can be written as 0/(p<c), which saves 2 bytes. – Ismael Miguel Oct 17 '16 at 18:52
• @Neil Regarding the test against 0: you're perfectly right. (However, I do need the k? test because of the possible NaN.) Regarding your alternate version: that should work indeed. – Arnauld Oct 17 '16 at 18:54
• @IsmaelMiguel - Good call! Thanks. – Arnauld Oct 17 '16 at 19:01
## JavaScript (ES6), 78 bytes
([a,b,c,d,e,f])=>a==e&&!(f||/(.)\1/.test(a+b+c+d+e)||(a>b)-(b>c)+(c>d)-(d>e))
Based on @Arnauld's incorrect code, but golfed and corrected. Works by first checking that the first character is the same as the fifth (thus guaranteeing 5 characters) and that the length of the string is no more than 5. After checking for consecutive duplicate characters, it remains to check the waviness of the string, which should have one peak and one trough two letters apart.
• If the peak and the trough are the middle and first/last letters, then the first two comparisons and the last two comparisons cancel out
• If the peak and the trough are the second and fourth letters, then the middle two comparisons and the outer two comparisons cancel out
• Otherwise, something fails to cancel and the overall expression returns false
Edit: Alternative 78-byte solution based on @KarlNapf's answer:
([a,b,c,d,e,f],g=(a,b)=>(a<b)-(a>b))=>a==e&&!f&&g(a,b)*g(c,d)+g(b,c)*g(d,e)<-1
## Python 2 exit code, 56 bytes
s=input()
v,w,x,y,z=map(cmp,s,s[1:]+s[0])
v*x+w*y|z>-2>_
Outputs via exit code: an error for False, a successful run for True.
Takes the string s with characters abcde, rotates it to bcdea, does an elementwise comparison of corresponding characters, and assigns the results to five variables v,w,x,y,z. The wrong length gives an error. The infinity words all have
v*x == -1
w*y == -1
z == 0
which can be checked jointly as v*x+w*y|z == -2. The chained comparison v*x+w*y|z>-2>_ short-circuits if this is the case, and otherwise goes on to evaluate -2>_ which gives a name error.
• Ah, that's nice how you golfed the conditional more! – Karl Napf Oct 18 '16 at 8:37
# Python 2, 110 87 60 bytes
Saving 1 byte thanks to Neil.
Requires input enclosed in quotes, e.g. 'KNEES'. Prints True if it is an infinity word, False if it is not (and has length 5), and prints an error message for the wrong length.
s=input()
a,b,c,d,e=map(cmp,s,s[1:]+s[0])
print a*c+b*d|e<-1
Inspired by xnor's answer using map(cmp,...):
s=input()
e=map(cmp,s,s[1:]+s[0])
print e[4]==0and e[0]*e[2]+e[1]*e[3]==-2and 5==len(s)
Previous solution:
s=input()
d=[ord(x)-ord(y)for x,y in zip(s,s[1:])]
print s[0]==s[4]and d[0]*d[2]<0and d[1]*d[3]<0and 4==len(d)
using the optimized logic of Kevin Cruijssen.
• Why not a*c+b*d+2==0==e? – Neil Oct 18 '16 at 8:46
• @Neil yes why not, but xnor's a*c+b*d|e is even shorter. – Karl Napf Oct 18 '16 at 9:00
• I think <-1 might work, since both -2|1 and -2|-1 equal -1. – Neil Oct 18 '16 at 11:22
# PHP, 102 Bytes
for(;$i<strlen($w=$argv[1]);)$s.=($w[$i++]<=>$w[$i])+1;echo preg_match("#^(2200|0022|2002|0220)#",$s);
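The derived condition used in the Java and Python answers here — the products of the first and third, and of the second and fourth, alphabet deltas must both be negative — can be spelled out ungolfed (a sketch for illustration; the function name is mine):

```python
def infinity_by_products(word):
    # First and last letters must match; together with the indexing
    # below, this requires exactly 5 characters.
    if len(word) != 5 or word[0] != word[4]:
        return False
    d = [ord(b) - ord(a) for a, b in zip(word, word[1:])]
    # A zero delta (double letter) makes a product zero, never < 0,
    # so doubled letters are rejected implicitly.
    return d[0] * d[2] < 0 and d[1] * d[3] < 0
```

This makes it easy to see why no explicit double-letter check is needed in those answers.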
## Python 2, 71 bytes
lambda s:map(cmp,s,s[1:]+s[0])in[[m,n,-m,-n,0]for m in-1,1for n in-1,1]
Takes the string s with characters abcde, rotates it to bcdea, and does an elementwise comparison of corresponding characters.
a b cmp(a,b)
b c cmp(b,c)
c d cmp(c,d)
d e cmp(d,e)
e a cmp(e,a)
The result is a list of -1, 0, 1. Then, checks if the result is one of the valid sequences of up and downs:
[-1, -1, 1, 1, 0]
[-1, 1, 1, -1, 0]
[1, -1, -1, 1, 0]
[1, 1, -1, -1, 0]
as generated from the template [m,n,-m,-n,0] with m,n=±1. The last 0 checks that the first and last letter were equal, and the length ensures that the input string had length 5.
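Since cmp was removed in Python 3, an ungolfed port of the same idea might read as follows (a sketch; the helper name cmp3 is mine):

```python
def infinity(s):
    # (a > b) - (a < b) is the standard Python 3 replacement for cmp(a, b)
    cmp3 = lambda a, b: (a > b) - (a < b)
    if not s:
        return False
    # Compare each character with its neighbour in the rotated string.
    e = [cmp3(a, b) for a, b in zip(s, s[1:] + s[0])]
    # Valid sign patterns from the template [m, n, -m, -n, 0]; the trailing 0
    # enforces s[0] == s[-1], and comparing against 5-element lists rejects
    # any other input length.
    valid = [[m, n, -m, -n, 0] for m in (-1, 1) for n in (-1, 1)]
    return e in valid
```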
An alternative 71-byte solution, which checks the conditions on the comparisons while ensuring the right length:
def f(s):a,b,c,d,e=map(cmp,s,s[1:]+s*9)[:5];print a*c<0==e>b*d>len(s)-7
## R, 144 bytes
This answer is based on the logic of @Jonathan Allan's Jelly answer. It could probably be golfed further.
s=strsplit(scan(,""),"")[[1]];d=diff(match(s,LETTERS));s[1]==tail(s,1)&length(s)==5&all(!rle(s)$l-1)&!sum(d)&!sum(sign(d))&any(rle(sign(d))$l>1)
R-fiddle test cases (vectorized example but same logic)
• Since you already have a check that length(s)==5, you can replace s[1]==tail(s,1) with s[1]==s[5]. A one-byte shorter method to check the length is is.na(s[6]). Together these two changes return TRUE for s of length 5 exactly and FALSE otherwise, as TRUE&NA is NA but FALSE&NA is FALSE. You can also save a few bytes by replacing !sum(sign(d))&any(rle(sign(d))$l>1) with !sum(a<-sign(d))&any(rle(a)$l>1). – rturnbull Nov 16 '16 at 16:32
# GNU Prolog, 47 bytes
i([A,B,C,D,A]):-A>B,B>C,C<D,D<A;i([B,C,D,A,B]).
Defines a predicate i which succeeds (infinitely many times, in fact) for an infinity word, thus outputting "yes" when run from the interpreter (as is usual for Prolog); fails for a candidate word whose first and last letters don't match, or isn't 5 letters long, thus outputting "no" when run from the interpreter; and crashes with a stack overflow if given a candidate word that isn't an infinity word, but which is five letters with the first and last two matching. (I'm not sure why it crashes; the recursive call should be treatable as a tailcall. Apparently GNU Prolog's optimizer isn't very good.) Succeeding is Prolog's equivalent of truthy, and failing the equivalent of falsey; a crash is definitely more falsey than truthy, and fixing it would make the solution substantially longer, so I hope this counts as a valid solution.
The algorithm is fairly simple (and indeed, the program is fairly readable); check whether the letters form one of the four patterns that make an infinity word, and if not, cyclically permute and try again. We don't need to explicitly check for double letters as the < and > operators let us implicitly check that at the same time that we check that the deltas match.
# Actually, 38 27 bytes
This answer was largely inspired by Jonathan Allan's excellent Jelly answer. There are probably several places where this can be golfed, so golfing suggestions welcome! Try it online!
O;\♀-dY@♂s4R0~;11({kMíub*
Ungolfing
Implicit input s.
O Push the ordinals of s. Call this ords.
; Duplicate ords.
\ Rotate one duplicate of ords left by 1.
♀- Vectorized subtraction. This effectively gets the first differences of ords.
d Pop ord_diff[-1] onto the stack. This is ords[0] - ords[-1].
Y Logical negate ord_diff[-1], which returns 1 if s[0] == s[-1], else 0.
@ Swap (s[0] == s[-1]) with the rest of ord_diff.
♂s Vectorized sgn() of ord_diff. This gets the signs of the first differences.
4R Push the range [1..4] onto the stack.
...M Map the following function over the range [1..4]. Variable x.
0~; Push -1 onto the stack twice.
11 Push 1 onto the stack twice.
( Rotate x to TOS.
{ Rotate the stack x times, effectively rotating the list [1, 1, -1, -1].
k Wrap it all up in a list.
Stack: list of rotations of [1, 1, -1, -1], sgn(*ord_diff)
í Get the 0-based index of sgn(*ord_diff) from the list of rotations. -1 if not found.
ub This returns 1 only if sgn(*ord_diff) was found, else 0.
This checks if the word loops like an infinity word.
* Multiply the result of checking if the word s loops and the result of s[0] == s[-1].
Implicit return.
# APL (Dyalog), 16 15 bytes
0=16|3⊥×2-/⎕a⍳⍞
Try it online!
# TI-BASIC, 81 bytes
String to pass into the program is in Ans. Returns (and implicitly displays) 1 if the entered word is an Infinity Word, and 0 (or exits with an error message) if it isn't.
seq(inString("ABCDEFGHIJKLMNOPQRSTUVWXYZ",sub(Ans,A,1)),A,1,length(Ans
min(Ans(1)=Ans(5) and {2,2}=abs(deltaList(deltaList(deltaList(Ans)/abs(deltaList(Ans
Errors on any repeated characters, or non-5-letter-words.
# 05AB1E, 16 bytes
Ç¥DO_s.±¥¥Ä2DиQ*
Explanation:
Ç # Convert the (implicit) input string to a list of unicode values
# i.e. "RULES" → [82,85,76,69,82]
¥ # Take the deltas
# i.e. [82,85,76,69,82] → [3,-9,-7,13]
DO # Duplicate and take the sum
# i.e. [3,-9,-7,13] → 0
_ # Check if that sum is exactly 0
# (which means the first and last characters are equal)
# i.e. 0 and 0 → 1 (truthy)
s # Swap so the deltas are at the top of the stack again
.± # Get the sign of each
# i.e. [3,-9,-7,13] → [1,-1,-1,1]
¥ # Get the deltas of those signs
# i.e. [1,-1,-1,1] → [-2,0,2]
¥ # And then get the deltas of those
# i.e. [-2,0,2] → [2,2]
Ä # Convert them to their absolute values
2Dи # Repeat the 2 two times as list: [2,2]
Q # Check if they are equal
# i.e. [2,2] and [2,2] → 1 (truthy)
* # Check if both are truthy (and output implicitly)
# i.e. 1 and 1 → 1 (truthy)
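The sign-delta trick shared by the Jelly and 05AB1E answers — take the signs of the deltas, difference them twice, and require absolute values [2,2] — can be reproduced in a few lines of Python (an illustrative sketch, not a golfed answer; names are mine):

```python
def infinity_sign_deltas(w):
    # sgn(x) in {-1, 0, 1}, matching the sign atoms of Jelly/05AB1E
    def sgn(x):
        return (x > 0) - (x < 0)
    if len(w) != 5 or w[0] != w[4]:
        return False
    d = [ord(b) - ord(a) for a, b in zip(w, w[1:])]   # alphabet deltas
    s = [sgn(x) for x in d]                           # their signs
    dd = [b - a for a, b in zip(s, s[1:])]            # deltas of the signs
    ddd = [b - a for a, b in zip(dd, dd[1:])]         # deltas of those
    return [abs(x) for x in ddd] == [2, 2]            # rotation test
```

The final comparison succeeds exactly when the sign list is a rotation of [1,1,-1,-1], which is the condition derived in the Jelly answer above.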
2020-02-19 05:41:00
https://www.zbmath.org/?q=cc%3A41A44
## Found 495 Documents (Results 1–100)
### Optimality of constants in power-weighted Birman-Hardy-Rellich-type inequalities with logarithmic refinements. (English. French summary)Zbl 07523616
MSC: 26D10 35A23 41A44
Full Text:
### On asymptotically optimal cubatures for multidimensional Sobolev spaces. (English)Zbl 07512240
MSC: 41A55 41A44 26D10
Full Text:
### On the norms of Boman-Shapiro difference operators. (English. Russian original)Zbl 1482.41004
Proc. Steklov Inst. Math. 315, Suppl. 1, S55-S66 (2021); translation from Tr. Inst. Mat. Mekh. (Ekaterinburg) 26, No. 4, 64-75 (2020).
MSC: 41A17 41A44 47B39
Full Text:
### On the optimal constants in the two-sided Stechkin inequalities. (English)Zbl 1473.41002
MSC: 41A17 41A44 46E30
Full Text:
### Direct and inverse approximation theorems of functions in the Musielak-Orlicz type spaces. (English)Zbl 1466.41008
MSC: 41A27 41A44 42A16
Full Text:
### Sharp inequalities of Jackson-Stechkin type and widths of classes of functions in $$L_2$$. (Russian. English translation)Zbl 1474.42012
Ufim. Mat. Zh. 13, No. 1, 56-68 (2021); translation in Ufa Math. J. 13, No. 1, 56-67 (2021).
Full Text:
### Classes of convolutions with a singular family of kernels: sharp constants for approximation by spaces of shifts. (English. Russian original)Zbl 1462.41005
St. Petersbg. Math. J. 32, No. 2, 233-260 (2021); translation from Algebra Anal. 32, No. 2, 45-84 (2020).
Full Text:
### One problem of extremal functional interpolation and the Favard constants. (English. Russian original)Zbl 07424669
Dokl. Math. 102, No. 3, 474-477 (2020); translation from Dokl. Ross. Akad. Nauk, Mat. Inform. Protsessy Upr. 495, 34-37 (2020).
MSC: 41A05 11B68 41A44
Full Text:
### Efficient computation of Favard constants and their connection to Euler polynomials and numbers. (English)Zbl 07308388
MSC: 11B68 41A05 41A44
Full Text:
### Sobolev trace inequality on $$W^{s, q}( {\mathbb R}^n )$$. (English)Zbl 1469.46032
MSC: 46E35 41A44 26A33
Full Text:
### Simultaneous approximation of Birkhoff interpolation and the associated sharp inequalities. (English)Zbl 1446.41010
MSC: 41A44 41A80
Full Text:
### Exact constants in Telyakovskii’s two-sided estimate of the sum of a sine series with convex sequence of coefficients. (English. Russian original)Zbl 1442.42005
Math. Notes 107, No. 6, 988-1001 (2020); translation from Mat. Zametki 107, No. 6, 906-921 (2020).
MSC: 42A05 41A44
Full Text:
### Explicit error estimates for spline approximation of arbitrary smoothness in isogeometric analysis. (English)Zbl 1483.65026
MSC: 65D07 41A15 41A44
Full Text:
### Sharp constants of approximation theory. I. Multivariate Bernstein-Nikolskii type inequalities. (English)Zbl 1434.41005
MSC: 41A17 41A44
Full Text:
### A kind of sharp Wirtinger inequality. (English)Zbl 07459194
MSC: 41A44 41A80
Full Text:
### The Betti numbers of the space $$\mathbb{C}\Omega_3$$. (English)Zbl 07437687
MSC: 41A10 41A44 46E20
Full Text:
### The homology groups of the space $$\Omega_n(m)$$. (English)Zbl 07437680
MSC: 41A10 41A44 46E20
Full Text:
### Extremal problems for non-periodic splines on real domain and their derivatives. (Ukrainian. English summary)Zbl 1483.41002
MSC: 41A15 41A17 41A44
Full Text:
### The Bojanov-Naidenov problem for trigonometric polynomials and periodic splines. (English)Zbl 1483.41003
MSC: 41A17 41A15 41A44
Full Text:
### On the Lebesgue constant of weighted Leja points for Lagrange interpolation on unbounded domains. (English)Zbl 1465.41001
MSC: 41A05 41A44
Full Text:
### V. Markov’s problem for $$k$$-absolutely monotone polynomials and applications. (English)Zbl 1443.41007
MSC: 41A17 41A29 41A44
Full Text:
### Best linear approximation methods for some classes of analytic functions on the unit disk. (English. Russian original)Zbl 1444.41006
Sib. Math. J. 60, No. 6, 1101-1108 (2019); translation from Sib. Mat. Zh. 60, No. 6, 1414-1423 (2019).
MSC: 41A44 30H20 41A46
Full Text:
### Upper and lower bounds for the optimal constant in the extended Sobolev inequality. Derivation and numerical results. (English)Zbl 1428.26047
MSC: 26D15 41A44
Full Text:
### Approximation in FEM, DG and IGA: a theoretical comparison. (English)Zbl 1428.41009
MSC: 41A15 41A44 65D07
Full Text:
### An exact inequality of Jackson-Chernykh type for spline approximations of periodic functions. (English. Russian original)Zbl 1459.41003
Sib. Math. J. 60, No. 3, 412-428 (2019); translation from Sib. Mat. Zh. 60, No. 3, 537-555 (2019).
MSC: 41A17 41A15 41A44
Full Text:
### Sharp constants of discrete Sobolev inequalities on a cyclic rectangular lattice. (English)Zbl 1423.05104
MSC: 05C50 41A44 46E39
Full Text:
### Exact constants for simultaneous approximation of Sobolev classes by piecewise Hermite interpolation. (English)Zbl 1438.41043
MSC: 41A44 41A80
Full Text:
### Generalized characteristics of smoothness and some extreme problems of the approximation theory of functions in the space $$L_2(\mathbb{R})$$. I. (English. Russian original)Zbl 1428.41015
Ukr. Math. J. 70, No. 9, 1345-1374 (2019); translation from Ukr. Mat. Zh. 70, No. 9, 1166-1191 (2018).
Full Text:
### Sharp constants for approximations of convolution classes with an integrable kernel by spaces of shifts. (English. Russian original)Zbl 1423.41016
St. Petersbg. Math. J. 30, No. 5, 841-867 (2019); translation from Algebra Anal. 30, No. 5, 112-148 (2018).
MSC: 41A17 41A44
Full Text:
### Optimal spline spaces for $$L^2$$ $$n$$-width problems with boundary conditions. (English)Zbl 1416.41007
MSC: 41A15 41A44 47G10
Full Text:
### Sharp estimates of asymptotic error of approximation by general positive linear operators in terms of the first and the second moduli of continuity. (English)Zbl 1416.41030
MSC: 41A36 41A25 41A44
Full Text:
### Markov factors on average – an $$L_2$$ case. (English)Zbl 07043335
MSC: 41A17 41A44
Full Text:
MSC: 41A44
Full Text:
### On the dependence of the norm of a multiply monotone function on the norms of its derivatives. (English. Ukrainian original)Zbl 1416.41012
Ukr. Math. J. 70, No. 7, 1001-1011 (2018); translation from Ukr. Mat. Zh. 70, No. 7, 867-875 (2018).
MSC: 41A17 47A30 41A44
Full Text:
### A note on some approximation kernels on the sphere. (English)Zbl 1405.41022
Dick, Josef (ed.) et al., Contemporary computational mathematics – a celebration of the 80th birthday of Ian Sloan. In 2 volumes. Cham: Springer (ISBN 978-3-319-72455-3/hbk; 978-3-319-72456-0/ebook). 443-453 (2018).
Full Text:
### Estimates of functions, orthogonal to piecewise constant functions, in terms of the second modulus of continuity. (English. Russian original)Zbl 1401.41013
J. Math. Sci., New York 234, No. 3, 330-337 (2018); translation from Zap. Nauchn. Semin. POMI 456, 96-106 (2017).
MSC: 41A17 41A44
Full Text:
### Sharp estimates for mean square approximations of classes of differentiable periodic functions by shift spaces. (English. Russian original)Zbl 1402.41003
Vestn. St. Petersbg. Univ., Math. 51, No. 1, 15-22 (2018); translation from Vestn. St-Peterbg. Univ., Ser. I, Mat. Mekh. Astron. 5(63), No. 1, 20-29 (2018).
### On asymptotics of the sharp constants of the Markov-Bernshtein inequalities for the Sobolev spaces. (English)Zbl 1400.41022
MSC: 41A44 26D05
### Numerical integration using integrals over hyperplane sections of simplices in a triangulation of a polytope. (English)Zbl 06940690
MSC: 65D32 33C45 41A44
### Optimal recovery of multivariate functions restricted by second-order differential operator. (English)Zbl 1393.41008
MSC: 41A44 26D10 65D15
### Asymptotic constant in approximation of twice differentiable functions by a class of positive linear operators. (English)Zbl 1393.41007
MSC: 41A36 41A25 41A44
### Nikol’skii type inequality; Bessel weight; extremal functions; extremal constants. (English)Zbl 1424.47026
MSC: 47A30 41A44 41A17
### On the norms and minimal properties of de la Vallée Poussin’s type operators. (English)Zbl 1395.42002
MSC: 42A10 47A58 41A44
### A multivariate version of Hammer’s inequality and its consequences in numerical integration. (English)Zbl 06859346
MSC: 65D32 33C45 41A44
### Estimates for the best constant in a Markov $$L_2$$-inequality with the assistance of computer algebra. (English)Zbl 1474.41071
MSC: 41A44 41A10 41A17
### Optimal asymptotic Lebesgue constant of Berrut’s rational interpolation operator for equidistant nodes. (English)Zbl 1411.41019
MSC: 41A44 41A20 65D05
### Lebesgue constants arising in a class of collocation methods. (English)Zbl 1433.41009
MSC: 41A55 41A44 65D30
### Best constants for a class of Hausdorff operators on Lebesgue spaces. (Chinese. English summary)Zbl 1399.42063
MSC: 42B25 42B35 41A44
### A hierarchical structure for the sharp constants of discrete Sobolev inequalities on a weighted complete graph. (English)Zbl 1390.46040
MSC: 46E39 41A44 05C50
### Sharp Jackson type inequalities for spline approximation on the axis. (English)Zbl 1399.41012
MSC: 41A15 41A17 41A44
### Estimates for weighted $$K$$-functionals using the least concave majorant of weighted moduli of continuity. (English)Zbl 1381.41011
MSC: 41A17 26A15 41A44
### Asymptotically sharp inequalities for polynomials involving mixed Gegenbauer norms. (English)Zbl 1376.41027
MSC: 41A44 41A17 15A60
### Exact constants in Jackson-type inequalities for the best mean square approximation in $$L_2(\mathbb{R})$$ and exact values of mean $$\nu$$-widths of the classes of functions. (English. Ukrainian original)Zbl 1371.41037
J. Math. Sci., New York 224, No. 4, 582-603 (2017); translation from Ukr. Mat. Visn. 13, No. 4, 543-569 (2016).
### A harmonic mean inequality for the digamma function and related results. (English)Zbl 1366.33003
MSC: 33B15 39B62 41A44
### On the constants in Markov inequalities for the Laplace operator on polynomials with the Laguerre norm. (English)Zbl 1365.41013
MSC: 41A44 26D15
### Generalized alomari functionals. (English)Zbl 1364.41018
MSC: 41A55 41A44 65D30
### Optimal spline spaces of higher degree for $$L^2$$ $$n$$-widths. (English)Zbl 1358.41007
MSC: 41A15 41A44 47G10
### Schatten class integral operators occurring in Markov-type inequalities. (English)Zbl 1377.47009
Eisner, Tanja (ed.) et al., Operator theory, function spaces, and applications. International workshop on operator theory and applications, Amsterdam, The Netherlands, July 14–18, 2014. Basel: Birkhäuser/Springer (ISBN 978-3-319-31381-8/hbk; 978-3-319-31383-2/ebook). Operator Theory: Advances and Applications 255, 91-104 (2016).
### The best constants in the Wirtinger inequality. (English)Zbl 1354.41025
MSC: 41A44 41A80
### On the best constants in Sobolev inequalities on the solid torus in the limit case $$p = 1$$. (English)Zbl 1359.46033
MSC: 46E35 41A44 35B33
### Two types of discrete Sobolev inequalities on a weighted Toeplitz graph. (English)Zbl 1343.05097
MSC: 05C50 41A44 46E39
### Best constant in stability of some positive linear operators. (English)Zbl 1353.39028
MSC: 39B82 41A35 41A44
http://physics.stackexchange.com/questions/53106/does-increasing-the-density-of-a-solution-decrease-the-rate-of-temperature-chang
# Does increasing the density of a solution decrease the rate of temperature change?
I did an experiment to compare whether salt water (5% concentration of salt) or fresh water of the same volume took longer to heat up to a certain temperature. We found that salt water took longer to heat up than fresh water.
Is this due to density? Specific heat capacity? Or should I have gotten different results?
The thermal conductivity of saline is less than that of pure water. See this page for graphs of thermal conductivity against salt content.
Note that a secondary effect is that adding salt to water actually lowers the specific heat, and this will increase the rate of temperature change. See the question Why does salty water heat up quicker than pure water? and its answers. In particular follow the link I provide to the paper by Zwicky.
However, since you're comparing the same volume, you have more mass to heat up, because the density of sea water is greater than the density of pure water. If you take sea water (about 3.5% salt - I chose this because data is easily Googlable) the specific heat is 3.993 kJ per kg per degree, compared to water at 4.184 kJ/kg/K. However, the density of seawater is 1037 kg/m$^3$, so the specific heat per cubic metre is almost exactly the same as for pure water.
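The volumetric comparison in the last paragraph is easy to verify numerically. A minimal sketch, using the figures quoted in the answer above (the exact values depend on temperature and salinity):

```python
# Quick check of the volumetric heat capacity figures quoted above.
cp_water = 4.184      # kJ/(kg*K), pure water
rho_water = 1000.0    # kg/m^3

cp_sea = 3.993        # kJ/(kg*K), seawater at ~3.5% salinity
rho_sea = 1037.0      # kg/m^3

# Heat needed to raise one cubic metre by one kelvin:
vol_heat_water = cp_water * rho_water  # kJ/(m^3*K)
vol_heat_sea = cp_sea * rho_sea        # kJ/(m^3*K)

print(f"water:    {vol_heat_water:.0f} kJ/(m^3*K)")
print(f"seawater: {vol_heat_sea:.0f} kJ/(m^3*K)")
print(f"difference: {100 * (1 - vol_heat_sea / vol_heat_water):.1f}%")
```

The two volumetric heat capacities come out within about 1% of each other, which is what "almost exactly the same" means here.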
Thanks, this is perfect – VikeStep Feb 6 '13 at 6:30
http://www.researchgate.net/publication/235586397_Anomalous_expansion_and_phonon_damping_due_to_the_Co_spin-state_transition_in_RCoO__3%28R_La_Pr_Nd_and_Eu%29
Article
# Anomalous expansion and phonon damping due to the Co spin-state transition in RCoO$_3$ (R = La, Pr, Nd, and Eu)
Phys. Rev. B 10/2008; 78(13). DOI:10.1103/PhysRevB.78.134402
Source: arXiv
ABSTRACT We present a combined study of the thermal expansion and the thermal conductivity of the perovskite series RCoO3 with R=La, Nd, Pr, and Eu. The well-known spin-state transition in LaCoO3 is strongly affected by the exchange of the R ions due to their different ionic radii, i.e., chemical pressure. This can be monitored in detail by measurements of the thermal expansion, which is a highly sensitive probe for detecting spin-state transitions. The Co ions in the higher spin state act as additional scattering centers for phonons, therefore suppressing the phonon thermal conductivity. Based on the analysis of the interplay between spin-state transition and heat transport, we present a quantitative model of the thermal conductivity for the entire series. In PrCoO3, an additional scattering effect is active at low temperatures. This effect arises from the crystal-field splitting of the 4f multiplet, which allows for resonant scattering of phonons between the various 4f levels.
http://mfat.imath.kiev.ua/article/?id=765
Open Access
# Elliptic problems in the sense of B. Lawruk on two-sided refined scales of spaces
### Abstract
We investigate elliptic boundary-value problems with additional unknown functions on the boundary of a Euclidean domain. These problems were introduced by Lawruk. We prove that the operator corresponding to such a problem is bounded and Fredholm on two-sided refined scales built on the base of inner product isotropic Hörmander spaces. The regularity of the distributions forming these spaces is characterized by a real number and an arbitrary function that varies slowly at infinity in the sense of Karamata. For the generalized solutions to the problem, we prove theorems on a priori estimates and local regularity in these scales. As applications, we find new sufficient conditions under which the solutions have continuous classical derivatives of a prescribed order.
Key words: elliptic boundary-value problem, slowly varying function, Hörmander space, two-sided refined scale, Fredholm operator, a priori estimate for solutions, local regularity of solutions.
### Article Information
Title: Elliptic problems in the sense of B. Lawruk on two-sided refined scales of spaces
Source: Methods Funct. Anal. Topology, Vol. 21 (2015), no. 1, 6-21
MathSciNet: 3407917
zbMATH: 06533464
Milestones: Received 25/11/2014
Copyright: The Author(s) 2015 (CC BY-SA)
### Authors Information
I. S. Chepurukhina
Institute of Mathematics, National Academy of Sciences of Ukraine, 3 Tereshchenkivs'ka, Kyiv, 01601, Ukraine
A. A. Murach
Institute of Mathematics, National Academy of Sciences of Ukraine, 3 Tereshchenkivs'ka, Kyiv, 01601, Ukraine
### Citation Example
Iryna S. Chepurukhina and Aleksandr A. Murach, Elliptic problems in the sense of B. Lawruk on two-sided refined scales of spaces, Methods Funct. Anal. Topology 21 (2015), no. 1, 6-21.
### BibTex
```bibtex
@article {MFAT765,
AUTHOR = {Chepurukhina, Iryna S. and Murach, Aleksandr A.},
TITLE = {Elliptic problems in the sense of B. Lawruk on two-sided refined scales of spaces},
JOURNAL = {Methods Funct. Anal. Topology},
FJOURNAL = {Methods of Functional Analysis and Topology},
VOLUME = {21},
YEAR = {2015},
NUMBER = {1},
PAGES = {6-21},
ISSN = {1029-3531},
MRNUMBER = {3407917},
ZBLNUMBER = {06533464},
URL = {http://mfat.imath.kiev.ua/article/?id=765},
}
```
### References
1. Anna V. Anop, Aleksandr A. Murach, Parameter-elliptic problems and interpolation with a function parameter, Methods Funct. Anal. Topology 20 (2014), no. 2, 103-116. MathSciNet
2. A. V. Anop, A. A. Murach, Regular elliptic boundary-value problems in the extended Sobolev scale, Ukrainian Math. J. 66 (2014), no. 7, 969-985. MathSciNet CrossRef
3. A. G. Aslanyan, D. G. Vasil′ev, V. B. Lidskii, Frequencies of free oscillations of a thin shell that is interacting with a fluid, Funktsional. Anal. i Prilozhen. 15 (1981), no. 3, 1-9. MathSciNet
4. Ju. M. Berezans′kii, Expansions in eigenfunctions of selfadjoint operators, American Mathematical Society, Providence, R.I., 1968. MathSciNet
5. N. H. Bingham, C. M. Goldie, J. L. Teugels, Regular variation, Cambridge University Press, Cambridge, 1989. MathSciNet
6. C. Foias, J.-L. Lions, Sur certains théorèmes d'interpolation, Acta Sci. Math. Szeged 22 (1961), 269-282. MathSciNet
7. I. S. Chepurukhina, On some classes of elliptic boundary-value problems in spaces of generalized smoothness, Differential equations and related topics, Zb. prac Inst. mat. NAN Ukr., Kyiv 11 (2014), no. 2, pp. 284-304. (Ukrainian)
8. P. G. Ciarlet, Plates and junctions in elastic multi-structures, Masson, Paris; Springer-Verlag, Berlin, 1990. MathSciNet
9. Lars Hörmander, Linear partial differential operators, Springer Verlag, Berlin-New York, 1976. MathSciNet
10. Lars Hörmander, The analysis of linear partial differential operators. II, Springer-Verlag, Berlin, 2005. MathSciNet
11. J. Karamata, Sur certains "Tauberian theorems" de M. M. Hardy et Littlewood, Mathematica (Cluj) 3 (1930), 33-48.
12. V. A. Kozlov, V. G. Maz′ya, J. Rossmann, Elliptic boundary value problems in domains with point singularities, American Mathematical Society, Providence, RI, 1997. MathSciNet
13. B. Lawruk, Parametric boundary-value problems for elliptic systems of linear differential equations. I. Construction of conjugate problems, Bull. Acad. Polon. Sci. Sér. Sci. Math. Astronom. Phys. 11 (1963), no. 5, 257-267. (Russian)
14. B. Lawruk, Parametric boundary-value problems for elliptic systems of linear differential equations. II. A boundary-value problem for a half-space, Bull. Acad. Polon. Sci. Sér. Sci. Math. Astronom. Phys. 11 (1963), no. 5, 269-278. (Russian)
15. B. Lawruk, Parametric boundary-value problems for elliptic systems of linear differential equations. III. Conjugate boundary problem for a half-space, Bull. Acad. Polon. Sci. Sér. Sci. Math. Astronom. Phys. 13 (1965), no. 2, 105-110. (Russian)
16. V. A. Mikhailets, A. A. Murach, Elliptic operators in a refined scale of function spaces, Ukrain. Mat. Zh. 57 (2005), no. 5, 689-696. MathSciNet CrossRef
17. V. A. Mikhailets, A. A. Murach, Refined scales of spaces, and elliptic boundary value problems. II, Ukrain. Mat. Zh. 58 (2006), no. 3, 352-370. MathSciNet CrossRef
18. V. A. Mikhailets, A. A. Murach, A regular elliptic boundary value problem for a homogeneous equation in a two-sided refined scale of spaces, Ukrain. Mat. Zh. 58 (2006), no. 11, 1536-1555. MathSciNet CrossRef
19. V. A. Mikhailets, A. A. Murach, Refined scales of spaces, and elliptic boundary value problems. III, Ukrain. Mat. Zh. 59 (2007), no. 5, 679-701. MathSciNet CrossRef
20. V. A. Mikhailets, A. A. Murach, An elliptic boundary value problem in a two-sided refined scale of spaces, Ukrain. Mat. Zh. 60 (2008), no. 4, 497-520. MathSciNet CrossRef
21. Vladimir A. Mikhailets, Aleksandr A. Murach, Interpolation with a function parameter and refined scale of spaces, Methods Funct. Anal. Topology 14 (2008), no. 1, 81-100. MathSciNet
22. Vladimir A. Mikhailets, Aleksandr A. Murach, Hörmander spaces, interpolation, and elliptic problems, De Gruyter, Berlin, 2014. MathSciNet CrossRef
23. Vladimir A. Mikhailets, Aleksandr A. Murach, The refined Sobolev scale, interpolation, and elliptic problems, Banach J. Math. Anal. 6 (2012), no. 2, 211-281. MathSciNet CrossRef
24. Vladimir A. Mikhailets, Aleksandr A. Murach, Hörmander spaces, interpolation, and elliptic problems, De Gruyter, Berlin, 2014. MathSciNet CrossRef
25. Sergei Nazarov, Konstantin Pileckas, On noncompact free boundary problems for the plane stationary Navier-Stokes equations, J. Reine Angew. Math. 438 (1993), 103-141. MathSciNet
26. J. Peetre, On interpolation functions. II, Acta Sci. MAth. (Szeged) 29 (1968), 91-92. MathSciNet
27. Ja. A. Roitberg, Elliptic problems with non-homogeneous boundary conditions and local increase of smoothness of generalized solutions up to the boundary, Dokl. Akad. Nauk SSSR 157 (1964), 798-801. MathSciNet
28. Ja. A. Roitberg, A theorem on the homeomorphisms induced in $L_p$ by elliptic operators and the local smoothing of generalized solutions, Ukrain. Mat. Zh. 17 (1965), no. 5, 122-129. MathSciNet
29. Yakov Roitberg, Elliptic boundary value problems in the spaces of distributions, Kluwer Academic Publishers Group, Dordrecht, 1996. MathSciNet CrossRef
30. Yakov Roitberg, Boundary value problems in the spaces of distributions, Kluwer Academic Publishers, Dordrecht, 1999. MathSciNet CrossRef
31. Eugene Seneta, Regularly varying functions, Springer-Verlag, Berlin-New York, 1976. MathSciNet
32. G. Slenzak, Elliptic problems in a refined scale of spaces, Vestnik Moskov. Univ. Ser. I Mat. Meh. 29 (1974), no. 4, 48-58. MathSciNet
33. L. R. Volevic, B. P. Panejah, Some spaces of generalized functions and embedding theorems, Uspehi Mat. Nauk 20 (1965), no. 1 (121), 3-74. MathSciNet
http://wyfe.effebitrezzano.it/2nd-order-lc-low-pass-filter-calculator.html
# 2nd Order LC Low Pass Filter Calculator
As a second-order filter, the gain varies as ω² above ω₀. Do you want to design first-order, second-order, or third-order Butterworth filters and normalized low-pass Butterworth filter polynomials? Are you interested in designing electronics projects? Design a 5th-order Butterworth low-pass filter using an LC ladder network terminated with a 1 Ω resistor. The all-pass filters are designed using an operational amplifier and discrete resistors and capacitors. A straight-line Bode plot is drawn through close approximations. During the design we make use of magnitude and frequency scaling; a uniform characterizing frequency appears in all design steps except the last, where the de-normalized (actual) values are found. When plotted on logarithmic scales, the Butterworth filter response is flat within its pass-band and then rolls off at an ultimate linear rate of -6 dB per octave (-20 dB per decade) per pole. A 2nd-order high pass filters the low frequencies twice as effectively as a 1st-order high pass.
A "low pass" filter is a circuit that passes low-frequency signals and blocks high-frequency ones. The low pass filter is used as an anti-aliasing filter, while the high pass filter is used in audio amplifiers for coupling or for removing distortion due to low-frequency signals such as noise. Band pass filters are largely used in wireless receivers and transmitters, but are also widely used in many areas of electronics. The four basic types are known as Low Pass Filter (LPF), High Pass Filter (HPF), Band Pass Filter (BPF) and Band Stop Filter (BSF). For series and parallel circuits, the resistor, capacitor and inductor are connected differently. In a simple 1st-order two-way crossover centred at 2,000 Hz, an inductor blocks high frequencies to the 4-ohm woofer, and a capacitor blocks low frequencies to the 4-ohm tweeter. For n cascaded identical first-order sections, the gain at the single-stage cutoff is A = (1/√2)^n, so the overall -3 dB frequency of such a higher-order low pass is f(-3 dB) = fc·√(2^(1/n) − 1). Choose a cutoff frequency fo (Hz).
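The 2,000 Hz / 4 Ω crossover above can be sketched numerically using the standard first-order (6 dB/octave) crossover relations L = R/(2πf) and C = 1/(2πfR); the helper function name is illustrative only:

```python
import math

def first_order_crossover(r_ohms: float, f_hz: float):
    """Component values for a 1st-order two-way crossover:
    series inductor to the woofer, series capacitor to the tweeter."""
    inductance = r_ohms / (2 * math.pi * f_hz)       # henries
    capacitance = 1 / (2 * math.pi * f_hz * r_ohms)  # farads
    return inductance, capacitance

# The 2,000 Hz / 4-ohm example from the text:
L, C = first_order_crossover(4.0, 2000.0)
print(f"L = {L * 1e6:.0f} uH, C = {C * 1e6:.1f} uF")  # ~318 uH, ~19.9 uF
```

With equal-impedance drivers the same formulas give both legs of the crossover, which is why only R and f appear.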
The task is to design a second-order unity-gain Tschebyscheff low-pass filter with a corner frequency of fC = 3 kHz and a 3-dB passband ripple. The filter is sometimes called a high-cut filter, or treble-cut filter in audio applications. Specify a cutoff frequency of 300 Hz, which, for data sampled at 1000 Hz, corresponds to a normalized frequency of 0.6 relative to the Nyquist frequency. Bessel filter prototype element values are available in standard tables. Butterworth filter approximation: the magnitude response of a Butterworth filter is maximally flat in the passband. The ultimate roll-off rate is actually the same for all low pass and high pass filters of a given order; a high-pass filter with a slope of 12 dB per octave is second order. The basic filter is designed with L1, L2, L3 and C. A Type I Chebyshev low-pass filter has the magnitude response |Ha(jΩ)|² = 1/(1 + ε²·T_N²(Ω/Ω_p)), where T_N is the N-th Chebyshev polynomial, ε sets the passband ripple, and Ω_p is the passband edge. Equation 1 is used to calculate capacitor values for the lowpass filter side. A second-order filter can be obtained from a single op-amp first-order low pass filter by simply adding an additional RC network. The Q0 values for each stage are listed in the table below. RC & RL low pass filters are briefly discussed below with examples. The Discrete Second-Order Low-Pass Filter block models, in the discrete-time domain, a second-order low-pass filter characterized by a cut-off frequency and a damping ratio. A typical split: tweeters get a high-pass filter at 5,000 Hz (12 dB or 24 dB slope); the midrange gets a band-pass filter from a 500 Hz HPF to a 5,000 Hz LPF (12 dB or 24 dB slope). To use this calculator, a user enters any 2 values in the fields, and the calculator computes the value of the third.
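The Type I Chebyshev magnitude response quoted above can be evaluated directly. A minimal sketch, assuming the usual definition of the ripple factor ε from the passband ripple in dB:

```python
import math

def cheb_poly(n: int, x: float) -> float:
    """Chebyshev polynomial T_n(x), valid for all real x."""
    if abs(x) <= 1:
        return math.cos(n * math.acos(x))
    # Outside [-1, 1] use the hyperbolic form, with the sign fixed for odd n.
    return math.cosh(n * math.acosh(abs(x))) * (1 if x >= 0 or n % 2 == 0 else -1)

def cheb1_mag(n: int, ripple_db: float, w: float) -> float:
    """|H(jw)| of an n-th order Type I Chebyshev low-pass, with w
    normalized to the passband edge (w = Omega / Omega_p)."""
    eps = math.sqrt(10 ** (ripple_db / 10) - 1)
    return 1 / math.sqrt(1 + (eps * cheb_poly(n, w)) ** 2)

# 3 dB ripple, 4th order: at the passband edge the gain is exactly -3 dB.
print(round(-20 * math.log10(cheb1_mag(4, 3.0, 1.0)), 2))  # → 3.0
```

Because T_N(1) = 1 for every N, the attenuation at the passband edge always equals the specified ripple.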
A low pass filter is a filter that passes low-frequency signals but attenuates signals with frequencies higher than the cutoff frequency. But maybe someone wants a sharper cutoff? Use higher-order low-pass filters. The frequency between pass and stop bands is called the cut-off frequency (ωc). The RC time constant, also called tau, the time constant (in seconds) of an RC circuit, is equal to the product of the circuit resistance (in ohms) and the circuit capacitance (in farads): τ = RC. Reducing noise with a conventional single-stage filter seldom works. Realistic attenuation characteristics (for a low-pass filter) are described by the cutoff frequency fC, the passband, the transition region and its width, and the stopband; the roll-off is the steepness in the transition region. A moving average filter is a rudimentary digital filter that filters a signal by averaging a certain number of samples. This is equivalent to a change of the sign of the phase, causing the outputs of the low-pass filter to lag and the high-pass filter to lead. The more frugal constructor could use such a set of filters for several transmitters and not build filters into each of them. Design an L-C low pass or high pass filter. Band pass - as shown, the frequencies used are a high pass at 310 Hz and a low pass at 3100 Hz. Other common design methods for low-pass FIR-based filters include Kaiser window, least squares, and equiripple. The squared magnitude response of a Butterworth low-pass filter is defined as |H(jω)|² = 1/(1 + (ω/ωc)^(2N)), where N is the filter order.
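The Butterworth squared-magnitude formula above is easy to explore numerically; the sketch below confirms that the gain at the cutoff is -3 dB for every order:

```python
import math

def butterworth_mag(n: int, w: float, wc: float = 1.0) -> float:
    """|H(jw)| for an n-th order Butterworth low-pass:
    |H|^2 = 1 / (1 + (w/wc)^(2n))."""
    return 1 / math.sqrt(1 + (w / wc) ** (2 * n))

# Gain at the cutoff is always -3.01 dB, independent of order:
for n in (1, 2, 5):
    print(n, round(20 * math.log10(butterworth_mag(n, 1.0)), 2))

# One octave above cutoff, a 5th-order filter is already down ~30 dB,
# matching the "6 dB per octave per pole" rule of thumb:
print(round(20 * math.log10(butterworth_mag(5, 2.0)), 1))  # → -30.1
```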
Second Order Filters Overview: • What's different about second order filters • Resonance • Standard forms • Frequency response and Bode plots • Sallen-Key filters • General transfer function synthesis. A Chebyshev filter's roll-off is faster than can be achieved by a Butterworth filter of the same order. Additional response shaping elements were added where needed to meet the design requirements. Next, you will cascade two second-order lowpass filters to design fourth-order Butterworth and Chebyshev lowpass filters. Due to the virtual ground assumption, the voltage at the non-inverting input is virtually the same as that at the inverting input, which is connected to the output. Butterworth lowpass filter example: this example illustrates the design of a 5th-order Butterworth lowpass filter, implementing it using second-order sections. My extremely confident advice is: forget about passive filters and use op-amps. A series LR low pass filter can also be used. An RLC circuit is called a second-order circuit, as any voltage or current in the circuit can be described by a second-order differential equation for circuit analysis. The width of the transition band is dictated by the filter order. The realization of a second-order low-pass Butterworth filter is made by a circuit with the transfer function H_LP(f) = K / (1 − (f/fc)² + j·√2·(f/fc)). Low-pass filter design using stubs: design a low-pass filter for fabrication using microstrip lines. The coils and capacitor must be in this order. Even with low equivalent series resistance (ESR) ceramic output capacitors, it is often impractical to use a traditional single-stage inductor-capacitor (LC) filter to power such loads.
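Cascading two second-order sections into a fourth-order Butterworth, as described above, can be checked with the standard section Q values derived from the Butterworth pole angles (Q₁ ≈ 0.5412, Q₂ ≈ 1.3066); this is a numerical sketch, not a component-level design:

```python
import math

def biquad_lp(w: float, wc: float, q: float) -> complex:
    """Second-order low-pass section H(jw) = 1 / (1 - (w/wc)^2 + j*(w/wc)/Q)."""
    x = w / wc
    return 1 / complex(1 - x * x, x / q)

# 4th-order Butterworth = two cascaded 2nd-order sections whose Q values
# come from the Butterworth pole angles theta_k = pi/8 and 3*pi/8:
q1 = 1 / (2 * math.cos(math.pi / 8))      # ~0.5412
q2 = 1 / (2 * math.cos(3 * math.pi / 8))  # ~1.3066

def h4(w: float, wc: float = 1.0) -> float:
    """Magnitude of the cascaded 4th-order response."""
    return abs(biquad_lp(w, wc, q1) * biquad_lp(w, wc, q2))

print(round(20 * math.log10(h4(1.0)), 2))  # → -3.01 (exactly -3 dB at cutoff)
```

At the cutoff each section contributes a gain of exactly Q, and Q₁·Q₂ = 1/√2, which is why the cascade lands on -3 dB.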
Active Low-Pass Filter Design (Jim Karki, AAP Precision Analog) — ABSTRACT: This report focuses on active low-pass filter design using operational amplifiers. Although there are many filter types and ways to implement them, here's an active low-pass filter that's greatly simplified if R1 = R2 and the op amp stage is a unity gain follower (RB = short and RA = open). The simple R-C filter rolls off the frequency response at 6 dB per octave above the cutoff frequency. First-order sections can be cascaded to form high-order filter functions. The general first-order bilinear transfer function is given by T(s) = (a₁s + a₀)/(s + ω₀), with a pole at s = -ω₀, a zero at s = -a₀/a₁, and a high-frequency gain that approaches a₁; the numerator coefficients (a₀, a₁) determine the type of filter. The first component is the inductor, and after that the capacitor in parallel with the resistor. The Sallen-Key filters are second-order active filters (low-pass, high-pass, and band-pass) that can be easily implemented using the configuration below; we represent all voltages in phasor form. As a result of these two reactive components, the filter will have a peak response or resonant frequency (ƒr) at its "center frequency", ƒc. It helps to use a math package to cut down on some of the tedious work. If x is a matrix, the function filters each column independently. Alternative: a 1st-order LC bandpass filter.
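For the unity-gain Sallen-Key low-pass mentioned above, with equal resistors the standard design relations reduce to fc = 1/(2πR√(C1·C2)) and Q = ½√(C1/C2); the component values below are illustrative, not from the text:

```python
import math

def sallen_key_unity_lp(r_ohms: float, c1: float, c2: float):
    """Unity-gain Sallen-Key low-pass with equal resistors R1 = R2 = R.
    Returns (fc, Q) with fc = 1/(2*pi*R*sqrt(C1*C2)) and Q = 0.5*sqrt(C1/C2)."""
    fc = 1 / (2 * math.pi * r_ohms * math.sqrt(c1 * c2))
    q = 0.5 * math.sqrt(c1 / c2)
    return fc, q

# A Butterworth response (Q = 1/sqrt(2)) requires C1 = 2*C2:
fc, q = sallen_key_unity_lp(10_000, 20e-9, 10e-9)
print(f"fc = {fc:.0f} Hz, Q = {q:.3f}")  # Q = 0.707
```

The design choice here is the usual one: fix the resistors, then set the capacitor ratio to pick the section Q.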
It is most typically applied to the insertion loss of the network, but can, in principle, be applied to any relevant function of frequency, and any technology, not just electronics. The output frequency is rounded to the second decimal place. You can use MATLAB to obtain the coefficients and compare with program's output. Engelmann, Design of Devices and Systems, Marcel Dekker, 3rd ed. Bode Measurement of Highpass Op Amp circuit 4. When coupling signals into and out of these filters, additional impedances will modify their behavior. Jul 3, 2017 2nd order active low-pass filter design Chebyshev: You May Also Like. A system with high quality factor ( Q > 1 ⁄ 2 ) is said to be underdamped. Since capacitive reactance decreases with frequency, the RC circuit shown discriminates against low frequencies. The output. In addition, the parameters for such filters are defined for a very specific application. Cutoff Frequency : 50 KHz. A second‐order linear differential equation is one that can be written in the form. 8478s + 1) for fourth-order. 1 Symmetrical sine wave filter 13 3. Common mode filters AN4511 6/21 DocID026455 Rev 2 3 Common mode filters The common mode filter is based on two coupled inductors (Figure 5). SPICE simulation of a bandpass filter that has a 120Hz bandwidth from 1071hz to 1193Hz, with a central frequency fo= 1133hz. Below is the screenshot of a low shelf filter used in cutting signals of frequencies below the cutoff “fc”. We have seen an example of a second order low pass filter on the Description page. The low pass filter is used as anti-aliasing filter while the high pass filter is used in audio amplifier for coupling or removing distortions due to low-frequency signal such as noise. Now, these two diagrams must be combined into a 3-way diagram. Two main goals of the procedure are to meet the IEEE Std. Order today, ships today. Step 2: The Low Pass Filter. We will look at first order low pass filters here. 
A passive PLL loop filter (R1 in series with C1) has relatively low noise and an unlimited frequency range; its disadvantages are that it is hard to integrate when the values are large (C > 100 pF and R > 100 kΩ) and that it is difficult to get a pole at the origin (which would increase the type of the PLL). As Ray Ridley observes, power supply output voltages are dropping with each new generation of integrated circuits (ICs), which keeps tightening filtering requirements. Higher-order active filters can be built from several voltage followers configured as Sallen-Key stages; a sound workflow is to model the filter in MATLAB before building it. A band-pass filter is an electronic circuit or device that allows only signals between specific frequencies to pass through and attenuates or rejects frequencies outside that range — the classic application is the tuning of an analog radio set. The RLC filter is described as a second-order circuit, meaning that any voltage or current in the circuit can be described by a second-order differential equation.

As a design example, take these specifications: cutoff frequency of 4 GHz, third order, impedance of 50 Ω, 3 dB equal-ripple (Chebyshev) characteristic. The same reasoning applies to crossovers — say you have an 8 Ω woofer and an 8 Ω tweeter. The second part of a typical circuit is composed of resistor R2 and capacitor C2, which form the low-pass stage. The Pi filter has the characteristic of generating a high output voltage at low current drains. The Sallen-Key topology is an electronic filter topology used to implement second-order active filters that is particularly valued for its simplicity; R. P. Sallen and E. L. Key introduced a set of circuits for implementing second-order low-pass, high-pass, and band-pass filter sections. Finally, note that a lot of people confuse the natural frequency with the cutoff frequency; the two coincide only for particular damping values.
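The 3 dB equal-ripple specification can be checked numerically. The sketch below is my illustration (not from the quoted text) of the standard Chebyshev Type-I magnitude formula; it evaluates a third-order, 3 dB-ripple design at the band edge and one octave above it:

```python
import math

def cheb_t(n, x):
    """Chebyshev polynomial of the first kind T_n(x), for x >= 0."""
    if x <= 1.0:
        return math.cos(n * math.acos(x))
    return math.cosh(n * math.acosh(x))

def chebyshev_gain_db(f, fc, n, ripple_db):
    """Magnitude (dB) of an n-th order Chebyshev Type-I low-pass:
    |H|^2 = 1 / (1 + eps^2 * T_n(f/fc)^2), with eps set by the ripple."""
    eps2 = 10.0 ** (ripple_db / 10.0) - 1.0
    return -10.0 * math.log10(1.0 + eps2 * cheb_t(n, f / fc) ** 2)

edge = chebyshev_gain_db(1.0, 1.0, 3, 3.0)       # exactly -3 dB at the band edge
octave_up = chebyshev_gain_db(2.0, 1.0, 3, 3.0)  # ~-28 dB one octave above
print(edge, octave_up)
```

For comparison, a third-order Butterworth only reaches about −18 dB one octave above cutoff, which is the "faster roll-off" trade-off mentioned earlier (paid for with pass-band ripple).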
High-pass filters may be used in applications where there are unwanted signals in a band of frequencies below the cut-off frequency and it is necessary to pass the wanted signals in a band above it. Better filters can be made out of op-amps than out of bare RC networks. Standard formulas exist for first-, second-, and third-order low-pass, high-pass, and band-pass filters, and the response of a filter is usefully displayed on graphs showing the Bode diagram, Nyquist diagram, impulse response and step response. For low-pass and high-pass designs, one corner frequency is required; band-pass designs need two.

The low-pass RL filter discussed before is part of the family of first-order low-pass filters (first order means that ω appears in the denominator with an exponent of 1). In a power supply, the residual AC ripples are filtered by the inductor coil L and capacitor C2. The Sallen-Key filter is a simple active filter based on op-amp stages, which is ideal for filtering audio frequencies; it is a very popular way to create second-order stages that can be cascaded together to form larger-order filters (see Kenneth A. Kuhn's notes on the Sallen-Key low-pass). To fit a Sallen-Key stage to a given second-order Chebyshev transfer function, equate its coefficients with those of the circuit's general transfer function. A low-pass filter is a filter that passes low-frequency signals but attenuates signals with frequencies higher than the cutoff frequency.
A Linkwitz-Riley crossover calculator designs a 2-way active crossover from fourth-order, 24 dB/octave sections; before clicking for the crossover component values, enter the impedance level and the desired crossover frequency. (If a(x) were identically zero, the equation really wouldn't contain a second-derivative term, so it wouldn't be a second-order equation.) At high-order harmonic frequencies, the reactance of the shunt capacitor Ca is small while that of the series inductance Lm is large, which is what gives the LC low-pass its stop-band attenuation; the LC high-pass does the opposite, passing high frequencies while weakening or blocking low ones, since at low frequencies the inductor is nearly a short and the capacitor is nearly open. A typical normalized design assumes 50 Ω and a design Q of 1.

Let's start with a simple RC low-pass network and work up in order. (Recall the sampling theorem: a band-limited signal f(x) is uniquely determined if it is sampled at a rate greater than twice its highest frequency.) The normalized denominator of the fifth-order Butterworth low-pass factors into one first-order stage and two second-order stages; for frequency scaling, replace s with s/ωc. A second-order high-pass filter can be derived by cascading two first-order high-pass filters, and its transfer function is

Vout(s) / Vin(s) = −K·s² / (s² + (ω0/Q)·s + ω0²).

The low-pass counterpart is sometimes called a high-cut filter, or treble-cut filter in audio applications. The ease of realizing such responses in software is one of the main reasons that DSP has become so popular, and most commonly used EMI filters for portable applications take a form similar to that of Figure 1. In a switching converter, while the switch is off the current through the inductor decreases, discharging the LC filter.
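The high-pass standard form above can be verified numerically. This sketch (my addition) confirms the two limiting behaviors: the response vanishes at DC and approaches the gain K well above ω0:

```python
import math

def hp2_mag(w, w0, q, k=1.0):
    """|H(jw)| for the second-order high-pass H(s) = -K*s^2 / (s^2 + (w0/Q)*s + w0^2)."""
    s = complex(0.0, w)
    return abs(-k * s * s / (s * s + (w0 / q) * s + w0 * w0))

w0 = 2.0 * math.pi * 1000.0
q = 1.0 / math.sqrt(2.0)            # Butterworth damping
low = hp2_mag(w0 / 1000.0, w0, q)   # far below cutoff: essentially zero
high = hp2_mag(w0 * 1000.0, w0, q)  # far above cutoff: approaches K = 1
print(low, high)
```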
A real inductor can be seen as a high-Q shielded component that operates up to a self-resonance frequency; this matters, for example, in the Class-D output LC filter. Installing a low-pass filter is also a good idea simply to reduce radiated radio noise. Filters that can be described with difference equations divide into FIR (N = 0) and IIR (N > 0): a simple FIR filter is the moving-average filter, and a simple IIR filter is the first-order low-pass filter. Such a stage removes high-frequency content from the input signal, as demonstrated in Figure 2 (purple line).

A Butterworth Pi low-pass filter calculator needs Fc, Zo and n (all three are required) to calculate filter component values; in addition, it graphs the Bode plot for magnitude in decibels and the phase in radians. Third-order low-pass filters consist of a coil in series, followed by a parallel capacitor, followed by another coil in series to a loudspeaker. Band-pass filters can be used to isolate or filter out certain frequencies that lie within a particular band or range of frequencies; a related property of a well-designed second-order digital filter is that the phase at cut-off is exactly −90 degrees. To construct a harmonic trap filter, all one needs to do is insert a capacitor C3, as in Figure 2. In image processing, the result of frequency-domain low-pass filtering is equivalent in the spatial domain to that of a smoothing filter, since the blocked high frequencies correspond to sharp intensity changes.
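The two simple discrete-time filters just mentioned can be written in a few lines. This sketch (my illustration) implements both and checks that each has unity gain at DC:

```python
def moving_average(x, n):
    """Simple FIR low-pass: n-point moving average (shorter window at the start)."""
    return [sum(x[max(0, i - n + 1):i + 1]) / (i - max(0, i - n + 1) + 1)
            for i in range(len(x))]

def iir_lowpass(x, alpha):
    """Simple first-order IIR low-pass: y[i] = alpha*x[i] + (1 - alpha)*y[i-1]."""
    y, prev = [], 0.0
    for sample in x:
        prev = alpha * sample + (1.0 - alpha) * prev
        y.append(prev)
    return y

dc = [1.0] * 100                 # a constant (DC) input
ma_out = moving_average(dc, 4)
iir_out = iir_lowpass(dc, 0.2)
print(ma_out[-1], iir_out[-1])   # both settle to 1.0: unity DC gain
```

The smoothing factor `alpha` plays the role of the RC time constant: smaller values give a lower cutoff but a slower settling time.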
So far, the literature lacks a state-space modeling approach that considers practical cases of delta- and wye-connected capacitors; for the filters discussed here, resistors and capacitors are usually used at low frequencies. For our example RC circuit, with R = 10 kΩ and C = 47 nF, the cutoff frequency is about 338 Hz — a typical value when smoothing a PWM output switching at roughly 488 Hz. A second-order active high-pass filter's frequency response is exactly opposite to the second-order active low-pass response, because it attenuates the voltages below the cut-off frequency. (In Simulink, the power_SecondOrderFilter example shows the Second-Order Filter block using two Filter type settings, Lowpass and Bandstop.)

A first-order low-pass filter is the simplest form of low-pass filter, made of only one reactive component, i.e. a capacitor or an inductor. First-order filters roll off at 20 dB per decade and second-order filters at 40 dB per decade; this roll-off rate determines the selectivity — the spacing between the passed and the stopped frequencies. The normalized Butterworth denominators are (s² + 1.414s + 1) for second order and (s² + 0.765s + 1)(s² + 1.848s + 1) for fourth order, so a fourth-order Butterworth filter can be made from two second-order Sallen-Key stages in series. Assuming you have mastered the design of low-pass LC filters, the design of high-pass filters follows by the standard transformation.
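The 338 Hz figure follows directly from fc = 1/(2πRC); a quick check in plain Python (my addition):

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """-3 dB cutoff of a first-order RC low-pass: fc = 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

fc = rc_cutoff_hz(10e3, 47e-9)
print(round(fc, 1))   # 338.6 Hz for R = 10 kOhm, C = 47 nF
```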
An active band-pass filter is a 2nd-order type filter because it has two reactive components (two capacitors) within its circuit design. (In crossover work, the Duelund topology does not show any peaking, because its all-pass sections are first order squared.) At DC the simple RC stage behaves as a resistive divider, Vout/Vin = R2/(R1 + R2). When coupling signals into and out of these filters, additional impedances will modify their behavior. In the RC low-pass the input drives the series element and the output is measured across the capacitor; the high-pass swaps the components, so that the filter passes the high-frequency signals and rejects the low-frequency ones. A low-pass filter is a filter that passes low frequencies and attenuates high frequencies, as shown in Figure F-5(b). For band-pass and band-stop designs, the number of poles is twice the order.

The cutoff frequency of a filter is defined as the frequency where the output power is half the pass-band output power, or equivalently where the output voltage is 1/√2 ≈ 0.707 of the pass-band voltage; on a measured response, use the cursor function to find it. The output signal is phase shifted from the input. The squared magnitude response of a Butterworth low-pass filter is defined as |H(jω)|² = 1/(1 + (ω/ωc)^(2n)). As a worked case, consider designing a 4th-order Butterworth low-pass filter with a cutoff frequency of 20 MHz to filter noise from an input signal.
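The squared-magnitude formula makes it easy to see what the fourth-order, 20 MHz design achieves; this sketch (my addition) evaluates it at the cutoff and one octave above:

```python
import math

def butterworth_gain_db(f, fc, n):
    """n-th order Butterworth low-pass magnitude in dB: |H|^2 = 1/(1 + (f/fc)^(2n))."""
    return -10.0 * math.log10(1.0 + (f / fc) ** (2 * n))

fc = 20e6   # the 20 MHz cutoff from the example above
at_fc = butterworth_gain_db(fc, fc, 4)            # -3.01 dB, regardless of order
one_octave = butterworth_gain_db(2 * fc, fc, 4)   # about -24.1 dB at 40 MHz
print(at_fc, one_octave)
```

The −24 dB-per-octave figure is just the "6 dB per octave per pole" rule applied to four poles.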
This calculation assumes a low source impedance, which usually is small enough that it does not change the result. Cascading identical passive sections is inefficient: the corner frequency of a fourth-order passive RC low-pass filter is reduced by a factor of α ≈ 2.3 relative to a single stage. The polynomials for 2nd- and 4th-order Butterworth filters were given above [1]. Useful passive topologies include the series-resonant LC band-pass and the 3-pole T LC high-pass RF filter. Band-pass filters are largely used in wireless receivers and transmitters, but are also widely used in many other areas of electronics; a crossover-network calculator will likewise design a two-way third-order Butterworth crossover for you.

Simple LC passive filters are among the most common filters for DC-DC converters; in pi-filters, the major filtering action is accomplished by the capacitor at the input, C1. In motion analysis, digital Butterworth filters are used. A typical specification sketch labels the pass-band attenuation Ap, the stop-band attenuation As, the pass-band edge Ωp, and the stop-band edge Ωs on the magnitude response. An RLC circuit has a resistor, inductor, and capacitor connected in series or in parallel, and the magnitude response of the Butterworth low-pass approximation is shown in the accompanying figure; plot the magnitude and phase responses to verify a design.
We will use non-inverting amplifiers (positive gain) as the active element, and look first at first-order low-pass filters. In a simple 1st-order 2-way crossover centred at 2,000 Hz, an inductor blocks high frequencies to the 4 Ω woofer, and a capacitor blocks low frequencies to the 4 Ω tweeter; when working with 3 or more speakers, at least one speaker must be driven band-pass. As we will learn, even passive filters may exhibit resonance near the natural frequency. With decreasing frequency, the capacitive reactance of the shunt capacitor increases, and so does the tapped output voltage. When two first-order low-pass RC stages are cascaded together, the result is called a second-order filter, as there are two RC stage networks; a buffer amplifier between the stages keeps them from loading each other. The output signal is phase shifted from the input.

In digital form, the analog prototypes are realized as biquad filters, and both the direct and indirect realization of a digital filter can be performed (to simulate continuous filters in Simulink, specify Ts = 0 in the MATLAB Command Window before starting the simulation). A transfer-function utility lets you study a filter at a given frequency from the damping ratio ζ, the Q, or the values of R, L and C. The Sallen-Key filter is a simple active filter based on op-amp stages, which is ideal for filtering audio frequencies.
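For the Sallen-Key stage just mentioned, the cutoff and Q follow from the component values. The sketch below uses the standard textbook relations for the unity-gain, equal-resistor case (the component names and example values are mine, not from the quoted text):

```python
import math

def sallen_key_unity(r, c1, c2):
    """fc and Q of a unity-gain Sallen-Key low-pass with equal resistors R1 = R2 = R.
    C1 is the feedback capacitor, C2 the capacitor to ground (standard result):
      fc = 1/(2*pi*R*sqrt(C1*C2)),   Q = 0.5*sqrt(C1/C2)."""
    fc = 1.0 / (2.0 * math.pi * r * math.sqrt(c1 * c2))
    q = 0.5 * math.sqrt(c1 / c2)
    return fc, q

# A Butterworth stage (Q = 0.707) therefore needs C1 = 2*C2:
fc, q = sallen_key_unity(10e3, 20e-9, 10e-9)
print(round(fc), round(q, 3))   # about 1125 Hz, Q about 0.707
```

Because Q depends only on the capacitor ratio here, the cutoff can be moved by scaling both capacitors (or both resistors) together without disturbing the response shape.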
For the series R-L-C circuit driven by vS(t), with the output vO(t) taken across the capacitor, phasor analysis gives

VO = (1/(jωC)) / (R + jωL + 1/(jωC)) · VS = (1/LC) / ((jω)² + jω·(R/L) + 1/LC) · VS,

i.e. a second-order low-pass: we set up the circuit and obtain the differential equation we need to solve. One of the most commonly used digital design routes goes via a reference analog prototype filter: the user selects the basic type (low-pass or high-pass), the number of poles, the 3 dB cutoff frequency and the I/O impedance, and a coefficient calculator takes the bandwidth, fc and the sample period Ts as inputs and returns the filter coefficients B0, B1, B2, A0, A1, A2. For sampled data, a cutoff frequency of 300 Hz with data sampled at 1000 Hz corresponds to a normalized frequency of 0.6 (relative to the Nyquist frequency of 500 Hz). A filter with a single reactive element is named a first-order, or single-pole, low-pass filter. Low-pass filters are commonly used to implement anti-alias filters in data-acquisition systems; Chebyshev filters are often chosen when a sharp transition band matters. During the design we make use of magnitude and frequency scaling, together with a uniform choice of characterizing frequency that appears in all design steps except the last, where the de-normalized (actual) values are found. Sample calculation: for a 50 Hz low-pass filter for a 4 Ω load, an 18 mHy coil L2 is needed.
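Evaluating the phasor expression above numerically (with illustrative component values of my choosing) shows the characteristic second-order behavior — including the fact that the gain at the resonant frequency equals Q:

```python
import math

def rlc_lowpass_mag(r, l, c, f):
    """|Vo/Vs| for the series RLC with output across C:
    H(jw) = (1/LC) / ((jw)^2 + jw*R/L + 1/LC)."""
    w = 2.0 * math.pi * f
    w0sq = 1.0 / (l * c)
    return abs(w0sq / complex(w0sq - w * w, w * r / l))

r, l, c = 50.0, 1e-3, 1e-6
f0 = 1.0 / (2.0 * math.pi * math.sqrt(l * c))   # undamped resonant frequency
q = math.sqrt(l / c) / r                        # quality factor of the circuit
gain_at_f0 = rlc_lowpass_mag(r, l, c, f0)
print(round(f0), round(q, 3), round(gain_at_f0, 3))
```

With these values Q < 1, so the response shows no peaking; raising R lowers Q further and damps the resonance, which is exactly the design freedom used in LC output filters.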
In the first-order RC low-pass, the input voltage (the input signal) is applied across the series resistor-capacitor pair and the output is taken between resistor and capacitor. An LC low-pass calculator makes the passive calculation simple; in most cases a passive filter involves an LC combination tuned to serve the purpose — consider the passive LC filter in Figure 3a, for example. John Stensby's notes describe the commonly used nth-order Butterworth low-pass filter. Assuming that you are not trying to design a crystal filter for an exact frequency, the procedure is actually quite easy to do, and this method covers all standard types of filters: low-pass, high-pass, band-pass and band-stop. Figure 10 is a schematic of a Sallen-Key, second-order, low-pass filter. The required component values for higher-order loop filters can be determined in the same way; a sample second-order denominator is s² + 201000·s + 10¹⁰ (so ω0 = 10⁵ rad/s). In one fixed-point implementation, the constants in the low-pass filter were multiples of 1/8, which keeps the arithmetic cheap. Single-ended filters designed in any filter design package can be converted to a differential implementation. As discussed on the page on the Bilinear Transform, we have to apply pre-warping to the cut-off frequency before designing a digital filter. A low-pass filter kernel in the time domain corresponds directly to its frequency response. As an exercise, show that the second-order low-pass above has the power response |H(ω)|² = ω0⁴ / ((ω0² − ω²)² + ω²(R/L)²), and explain why this is a low-pass filter by finding the limits ω = 0 and ω → ∞.
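The pre-warping step can be sketched as follows (my illustration of the standard bilinear-transform relation ωa = (2/T)·tan(ωd·T/2)):

```python
import math

def prewarp_hz(f_digital, fs):
    """Analog design frequency that the bilinear transform maps exactly
    onto f_digital at sample rate fs:  wa = (2/T) * tan(wd*T/2)."""
    t = 1.0 / fs
    wa = (2.0 / t) * math.tan(math.pi * f_digital * t)
    return wa / (2.0 * math.pi)

fs = 48000.0
f_low = prewarp_hz(1000.0, fs)    # ~1001.4 Hz: warping is tiny well below fs/2
f_high = prewarp_hz(20000.0, fs)  # ~57 kHz: warping is severe near Nyquist
print(round(f_low, 1), round(f_high))
```

This is why pre-warping can be skipped for cutoffs far below the Nyquist frequency but is essential for cutoffs near it.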
For the time-domain side, we shall obtain the rise time, peak time, maximum overshoot, and settling time of the second-order system in terms of the damping ratio ζ and the natural frequency ωn. The standard form of a second-order low-pass filter is given as

TLP(s) = TLP(0)·ω0² / (s² + (ω0/Q)·s + ω0²),   (1-3)

where TLP(0) is the value of TLP(s) at DC, ω0 is the pole frequency, and Q is the pole Q, or pole quality factor. High-pass filters are used to remove or attenuate the lower frequencies in amplifiers, especially audio amplifiers. (As a further explanation, some synths have two 2-pole filters that can each be high-pass or low-pass, so they can be configured as a 24 dB/oct low-pass, a 24 dB/oct high-pass, or a 12 dB/oct band-pass — and, very unusually, a 12 dB/oct notch filter.) Consequently, in the 2nd-order passive high-pass filter a coil is connected in series with a capacitor; similarly, a fourth-order low-pass filter can be formed by cascading two second-order low-pass filters. Below 50 kHz active filters are usually more cost-effective, and above 500 MHz strip lines are generally used; in between, we can build some very simple filters out of a capacitor and a resistor. In a speaker build, one capacitor to high-pass the midrange plus one inductor to low-pass the woofer (a Kappalite 3012LF in one builder's case) can be all that is needed. A routine such as zpell produces the poles and zeros of the filter; the final step is to test the design in the lab.
Second-order filters overview: what's different about second-order filters, resonance, standard forms, frequency response and Bode plots, Sallen-Key filters, and general transfer-function synthesis. An LC Butterworth calculator computes the capacitor and inductor values for a given order up to 10, starting from the Butterworth pole locations; these values are called hereafter normalized values. Select the normalized filter order and parameters to meet the design criteria. In inverting mode, the output of the op-amp is 180 degrees out of phase with the input signal. A 2-way high/low-pass crossover can be designed with a range of choices for type and order.

At DC, the transfer function for the RC circuit is the same as for a voltage divider: Vout = Vin × R2/(R1 + R2). The passive filter attenuates those signals whose frequency is higher than the cut-off frequency, and the gain of the output signal is always less than that of the input. Exchanging low-pass for high-pass is equivalent to a change of the sign of the phase, causing the output of the low-pass filter to lag and that of the high-pass filter to lead. The window method is basically used for the design of prototype filters — low-pass, high-pass, band-pass, etc. A common-mode filter consists of two or more line-to-chassis capacitors and a common-mode inductor. A SPICE netlist for a series-resonant band-pass filter reads:

  series resonant bandpass filter
  v1 1 0 ac 1 sin
  l1 1 2 1
  c1 2 3 1u
  rload 3 0 1k

A digitally tunable band-pass filter can be built on the same principle. The transfer function of the second-order filter is given below.
Filter tables work with normalized filters: the figure shows the structure of a low-pass ladder filter of order n with a normalized resistive load of RL = 1 Ω, with the equations displayed for easy reference. The differential-mode (DM) filter is comprised of at least one pair of series inductors and at least one line-to-line capacitor. Low-pass and high-pass filters do not have a centre frequency; we may obtain a band-pass filter by combining a low-pass and a high-pass filter. In the worked crossover sample calculation, the values come out to L2 = 18.00 mHy and C2 = 281 µfd. The second-order low-pass likewise consists of two components. The ultimate roll-off rate is actually the same for all low-pass and high-pass filters of the same order, regardless of the filter type — the type (Butterworth, Chebyshev, Bessel) only shapes the region near cutoff. The design of digital filters is covered in detail in later chapters; on the analog side, tools such as CircuitLab provide online, in-browser schematic capture and circuit simulation, and simple web applications design RC low-pass filters directly. At high frequencies, the signal gradually gets attenuated more strongly, in proportion to 1/frequency per pole. If a stream of data coming into a system needs to be averaged out, that is exactly a low-pass filtering problem. Note, finally, that you can't cascade two low-pass filters of the same Q and frequency and get a steeper slope with the same frequency and Q specs (two biquads set to 1 kHz and the same Q do not behave like one higher-order design).
A second-order low-pass Butterworth filter is the standard second-order form with FSF = 1 and Q = 1/1.414 = 0.707. The basic characteristic of low-pass filters made with capacitors is that the higher the frequency, the greater the shunting effect. A second-order roll-off of 40 dB per decade means that between 20 kHz and 200 kHz you get a 40 dB reduction, i.e. a factor of 100:1. Two main goals of a grid-side filter procedure are to meet the IEEE Std. 1547 requirements for attenuating harmonics. A linear-phase filter's group delay is a constant. In the case of a low-pass common-mode filter, a common-mode choke is the reactive element employed; such chokes are typically gapped iron-core units, similar in appearance to a small transformer, but with only two leads exiting the housing. A simulation with a sine sweep shows that a plain series LC circuit is really a band-pass filter rather than a low-pass filter. For example, a simple first-order low-pass filter has a single pole, while a first-order high-pass filter has a pole and a zero; higher-order filters — third, fourth, fifth, and so on — are built simply by cascading first- and second-order sections. A low-pass filter is also the last step in demodulating an AM-SSB signal, where it removes the unwanted mixing products; when its coefficients are updated on the fly, you can call it an adaptive IIR filter.
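One handy consequence of the standard form TLP(s) = TLP(0)·ω0²/(s² + (ω0/Q)s + ω0²) is that the gain at ω0 equals Q·TLP(0). A numeric check (my addition):

```python
import math

def lp2_mag(w, w0, q, t0=1.0):
    """|T(jw)| for the standard second-order low-pass form."""
    s = complex(0.0, w)
    return abs(t0 * w0 * w0 / (s * s + (w0 / q) * s + w0 * w0))

w0 = 2.0 * math.pi * 1000.0
for q in (0.5, 1.0 / math.sqrt(2.0), 2.0):
    print(q, round(lp2_mag(w0, w0, q), 4))   # gain at w0 is exactly Q (for T0 = 1)
```

This is why the Butterworth choice Q = 0.707 gives −3 dB at ω0 with no peaking, while Q = 2 peaks by 6 dB near the corner.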
An all-pass response can be synthesized from the three outputs of a state-variable filter according to H_AP = H_LP − H_BP + H_HP = 1 − 2·H_BP. A high-pass filter (HPF) attenuates content below a cutoff frequency, allowing higher frequencies to pass through the filter. LC filters of this kind are most effective between 50 kHz and 500 MHz. Bessel filter prototype element values are tabulated in standard references. A first-order all-pass provides a total phase shift of 180°, with the phase shift at fc being 90° instead of 45°. The ohmic resistance R does not factor into the cutoff frequency of a purely reactive divider. In general, the voltage transfer function of a first-order low-pass filter is of the form H(jω) = K / (1 + jω/ωc); the maximum value |H(jω)| = |K| is called the filter gain. Specifications for high-pass, band-pass and band-stop filters are defined almost the same way as those for low-pass filters. A VCVS (Sallen-Key) filter also has the advantage of independence: VCVS stages can be cascaded without interacting with each other. Finally, almost all methods for filter design are optimal in some sense, and the choice of optimality determines the nature of the design.
The reason for the harmonic trap's capability is that the L and C elements, which are in parallel with the resistor, resonate at the fundamental frequency; this does, however, limit its suitability for use in portable applications. From a table of Tschebyscheff coefficients for 3 dB ripple, one obtains the coefficients a1 and b1 for a second-order stage — Chebyshev filters are classified by the amount of ripple in the pass band. A pragmatic flow is: find the low-pass filter prototype, then scale it; if you opt for, say, a 20 kHz cutoff, you can determine the ripple reduction directly, because the amplitude falls at 40 dB per decade (standard for 2nd-order filters). It is possible to place more filter elements around one op amp, as in the second-order active low-pass filter, and the same structures carry over to infinite impulse response (IIR) low-pass digital filters. The order basically defines the roll-off rate — how strongly the filter attenuates signals outside its band: a first-order filter rolls off at −20 dB/decade past cutoff, a second-order at −40 dB/decade. The analysis of the three basic passive elements R, C and L starts from the simple lag network (low-pass filter). In communication systems, when the IF frequency is quite high, some low-frequency spurs need to be filtered out, such as the half-IF spur. Second-order passive low-pass crossovers consist of a coil in series followed by a capacitor in parallel with the loudspeaker. Our example is the simplest possible low-pass filter.
Perhaps the simplest low-pass filter is the classic Butterworth pi-network design, in which the reactive elements present a constant impedance; its discrete-time cousin is the 2nd-order digital Butterworth filter. For a crossover calculator, enter the high-pass and low-pass speaker impedances; the cutoff and gain of an active stage can be changed with other RC values. For a low-pass filter with low Q you'll have a slow roll-off; with high Q you'll have peaking at the cutoff frequency, and for a second-order passive low-pass filter the gain at the corner frequency fc is lower than for a single first-order stage. An unfiltered signal, unlike the low-pass output, also includes high frequencies such as noise and rapid changes. In a synchronous buck converter, when Q1 turns off, Q2 turns on and current is supplied to the load through the low-side MOSFET. The treatment here assumes no filter design experience but still allows high-quality filters: second-order low-pass formulas, calculations and frequency curves, the second-order passive high-pass filter, and tables with more information, such as the 3 dB, 30 dB, and 40 dB stop-band frequencies.
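A 2nd-order digital Butterworth low-pass reduces to a handful of biquad coefficients. The sketch below is my derivation via the standard bilinear transform (not code from the quoted text); it computes the coefficients and checks that the filter passes DC unchanged:

```python
import math

def butter2_lowpass(fc, fs):
    """Biquad coefficients (b0, b1, b2), (a1, a2) for a 2nd-order Butterworth
    low-pass, via the bilinear transform; pre-warping is built into K."""
    k = math.tan(math.pi * fc / fs)
    norm = 1.0 / (1.0 + math.sqrt(2.0) * k + k * k)
    b0 = k * k * norm
    b = (b0, 2.0 * b0, b0)
    a = (2.0 * (k * k - 1.0) * norm, (1.0 - math.sqrt(2.0) * k + k * k) * norm)
    return b, a

b, a = butter2_lowpass(1000.0, 48000.0)
dc_gain = sum(b) / (1.0 + sum(a))   # evaluate H(z) at z = 1
print(dc_gain)                      # unity gain at DC
```

The zeros sit at z = −1 (b0 − b1 + b2 = 0), so the response also has an exact null at the Nyquist frequency — the digital analogue of the analog filter's roll-off to zero at infinite frequency.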
LC Low-Pass Filters • In order for the output voltage to not rise sharply at resonance, the value of R L is chosen to reduce the circuit Q to approximately 1. The complex transfer function for a second-order low-pass filter is T(s)=K 1 µ s ω 0 ¶ 2 + 1 Q µ s ω 0 ¶ +1 (3). Q = 1/ α and A band. Join us on GitHub to contribute your thoughts and ideas, and to suggest any corrections. Sponsored By. For example, specifies a particular second-order filter. Reducing noise with a conventional single-stage filter seldom works. The second Capacitor C1 in series with the mid range and approaches being an open circuit at low frequencies (6dB/octave). If you draw a schematic that would clear this up, as usual. If this calculation looks familiar, it should. When coupling signals into and out of these filters, additional impedances will modify their behavior. Band-Pass Filter S. Hi Guys, I need help designing a 2nd order low pass Butterworth filter with a frequency cut off of 12 Hz that is to be bi-directionally filtering the data. Design a third order low-pass Chebyshev filter with a cutoff frequency of 330. Designers may be forced to use a two-stage LC filter in order to achieve output ripple levels in the sub-5mV range. Filter Design in Thirty Seconds 11 Design Procedure: • Go to Section 3, and design a high pass filter for the low end of the upper band. transformation from Low pass filter to High pass filter or Low pass filter to Low pass filter or High pass filter to Low pass filter or High pass to High pass filter are also allowed. The study sample consisted of an audio file and has been save Audio of on a formula (WAV), and the study used matlab 7. 3 Butterworth approximation. 3 • The magnitude response of low pass butterworth filter is given by 1 Ap 0. They are typically gapped iron core units, similar in appearance to a small transformer, but with only two leads exiting the housing. 
The calculators create analog component values, analog and digital filter coefficients: 2nd Order Filter Design for low-pass, high-pass, band-pass and band-stop filters. 2 • Thickness of dielectric = 62 mill Solution Start up the rf & microwave toolbox and select the low. The ISO 9001:2008 registered Pasternack facility ships all RF filters from stock the same day you order them. A system with high quality factor ( Q > 1 ⁄ 2 ) is said to be underdamped. In the proposed algorithm, the direct and indirect realization of a digital filter can be performed. Design a third order low-pass Chebyshev filter with a cutoff frequency of 330. Since V OUT = V C = Q x V IN at resonance, Q must be 1 to make V OUT = V IN. Run the data through the M-stationary program on S-Plus. Sallen-Key Low-pass Filter 1 by Kenneth A. Figure 2 and Figure 4 use single curves because the high-pass and the low-pass phase responses are similar, just shifted by 90° and 180° (π/2 and π radians). A second-order filter will attenuate at 40 dB per decade, and so on. The component values for the filter are typically determined by how much attenuation is needed. For instance, it lists (s^2 + 0. P_1Tone PORT1 R R1 C C2 L L1 Figure 4 - Low pass filter structure for N=2. Common RC “Pi’’ Filter These filters are typically low pass filters that reject signals over 800 MHz. An RLC circuit has a resistor, inductor, and capacitor connected in series or in parallel. To use this calculator, all a user must do is enter any 2 values, and the calculator will compute the 3rd field. 4 Derived parameters. - Second parameter is filter type which lp= lowpass, hp=highpass, bp=bandpass. From Equation 2, the key elements for a 2nd-order filter can be written as shown in Equation 3 and Equation 4. Our Deal of the Day features hand-picked daily deals with low prices on top electronic products, video games, tools, items for your kitchen and home, sporting goods, computer software, and more. 
The constants in the low-pass filter were multiples of 1/8. Low-pass and High-pass Filters The design of digital filters is covered in detail in later chapters. You can get a transfer function for a band-pass filter …. • Passive Low-Pass Filter, • Active Low-Pass Filter, • Passive High-Pass Filter, and • Active High-Pass Filter. The high-pass filter kernel, (c), is formed by changing the sign of every other sample in (a).
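The roll-off figures above are easy to check numerically. Here is a short sketch using SciPy's analog Butterworth design (assuming SciPy is available; the normalized cutoff of 1 rad/s and the probe frequencies are my choices):

```python
import numpy as np
from scipy import signal

# Second-order analog Butterworth low-pass prototype, cutoff wc = 1 rad/s.
# Its magnitude response is |H(jw)| = 1 / sqrt(1 + (w/wc)^4).
b, a = signal.butter(2, 1.0, btype='low', analog=True)

# Evaluate the response at the cutoff and one and two decades above it.
w, h = signal.freqs(b, a, worN=[1.0, 10.0, 100.0])
gain_db = 20 * np.log10(np.abs(h))

print(gain_db)  # ~[-3.01, -40.0, -80.0]: -3 dB at the corner, then -40 dB/decade
```

The -3 dB point at the corner frequency (gain 0.7071) and the 40 dB/decade slope match the second-order figures quoted above.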
http://physicshelpforum.com/atomic-solid-state-physics/14992-problem-11-1-ashcroft-mermin.html
Physics Help Forum problem 11.1 from Ashcroft and Mermin.
Atomic and Solid State Physics Help Forum
Nov 12th 2018, 12:58 PM #1 Junior Member Join Date: Aug 2009 Posts: 9 problem 11.1 from Ashcroft and Mermin. I asked my question in physicsforums, perhaps someone here knows how to solve it or can provide guidance. https://www.physicsforums.com/thread...xtbook.958929/ Thanks!
Nov 12th 2018, 02:16 PM #2
Forum Admin
Join Date: Apr 2008
Location: On the dance floor, baby!
Posts: 2,736
Originally Posted by Alan I asked my question in physicsforums, perhaps someone here knows how to solve it or can provide guidance. https://www.physicsforums.com/thread...xtbook.958929/ Thanks!
Please post the original problem when posting between fora.
-Dan
__________________
Do not meddle in the affairs of dragons for you are crunchy and taste good with ketchup.
See the forum rules here.
Nov 12th 2018, 04:57 PM #3 Junior Member Join Date: Apr 2018 Posts: 20 I never knew that the plural form of "forum" is "fora" instead of "forums"...
Nov 12th 2018, 11:01 PM #4 Junior Member Join Date: Aug 2009 Posts: 9
1. The problem statement, all variables and given/known data
Let ##\vec{r}## locate a point just within the boundary of a primitive cell ##C_0## and ##\vec{r}'## another point infinitesimally displaced from ##\vec{r}## just outside the same boundary. The continuity equations for ##\psi(\vec{r})## are:
$$(11.37) \lim_{r\to r'} [\psi(\vec{r})-\psi(\vec{r}')]=0$$ $$\lim_{r\to r'} [\nabla \psi(\vec{r})-\nabla \psi(\vec{r}')]=0$$
(a) Verify that any point ##\vec{r}## on the surface of a primitive cell is separated by some Bravais lattice vector ##\vec{R}## from another surface point and that the normals to the cell at ##\vec{r}## and ##\vec{r}+\vec{R}## are oppositely directed.
(b) Using the fact that ##\psi## can be chosen to have the Bloch form, show that the continuity conditions can equally well be written in terms of the values of ##\psi## entirely within a primitive cell:
$$(11.38) \psi(\vec{r}) = e^{-i\vec{k}\cdot\vec{R}}\psi(\vec{r}+\vec{R})$$ $$\nabla \psi(\vec{r})= e^{-i\vec{k}\cdot \vec{R}}\nabla \psi(\vec{r}+\vec{R})$$
for pairs of points on the surface separated by direct lattice vectors ##\vec{R}##.
(c) Show that the only information in the second of equations (11.38) not already contained in the first is in the equation:
$$(11.39)\hat{n}(\vec{r})\cdot \nabla \psi(\vec{r})=-e^{-i\vec{k}\cdot \vec{R}}\hat{n}(\vec{r}+\vec{R})\cdot \nabla \psi(\vec{r}+\vec{R}),$$
where the vector ##\hat{n}## is normal to the surface of the cell.
2. Relevant equations
3. The attempt at a solution
I am quite overwhelmed by this question, and am not sure where to start. I would appreciate some guidance as to how to solve this problem. Thanks.
Nov 14th 2018, 10:27 AM #5 Senior Member Join Date: Apr 2015 Location: Somerset, England Posts: 1,035 There is only one question at the end of chapter 11 in Ashcroft & Mermin. Does the title of the chapter (Other methods) give you a clue? Which method would you choose? (Did you understand Green's functions?)
Nov 14th 2018, 10:39 AM #6
Junior Member
Join Date: Aug 2009
Posts: 9
Originally Posted by studiot There is only one question at the end of chapter 11 in Ashcroft_Mermin. Does the title of the chapter (Other methods) give you a clue? Which method would you choose? (did you understand Green's functions?)
There are 3 questions, and I don't understand how to start answering question 1.
https://gmatclub.com/forum/families-in-which-some-members-argue-with-each-other-compete-for-a-cha-200525.html
Families in which some members argue with each other compete for a cha
Intern
Status: Online
Joined: 07 Feb 2015
Posts: 28
Location: India
Rudey: RD
Concentration: Marketing, General Management
GMAT 1: 620 Q45 V31
GMAT 2: 640 Q46 V31
GPA: 3.29
WE: Sales (Hospitality and Tourism)
Families in which some members argue with each other compete for a cha [#permalink]
25 Jun 2015, 06:29
Difficulty: 65% (hard). Question Stats: 55% (01:45) correct, 45% (01:44) wrong, based on 214 sessions.
Families in which some members argue with each other compete for a chance to appear on Barry Wringer’s TV show. The greater the percentage of family members who argue with each other, the greater the family’s chances of appearing on Wringer’s show. If the Brown family and the Gonzales family both have the same number of members, does the Brown family have a better chance of appearing on Wringer’s show than does the Gonzales family?
1. In each family, male members argue with each other, and female members argue with each other, but male members do not argue with female members and vice versa.
2. The Brown family has the same number of male members as female members. The Gonzales family has more male members than female members.
Intern
Joined: 15 Aug 2017
Posts: 14
21 Sep 2017, 20:04
whoisthere wrote:
[question quoted above]
Here is an explanation for the question since there hasn't been one posted yet. Typing this out made me understand why the answer is C.
The question stem is as follows: the greater the percentage of family members who argue with each other, the greater the family's chances of appearing on Wringer's show. If the Brown family and the Gonzales family both have the same number of members, does the Brown family have a better chance of appearing on Wringer's show than does the Gonzales family?
Rephrased as follows: does the Brown family have a greater percentage of family members who argue with each other? (Total Brown = Total Gonzales.)
(1) Gives the info: males can only argue with males, and likewise females can only argue with females.
This alone is insufficient, as it does not tell us what percentage of the family members argue with each other.
(2) Gives the info: males = females in the Brown family, and males > females in the Gonzales family.
This alone is insufficient, as it does not tell us anything about the percentage of family members that argue with each other.
However,
when taking (1)+(2) we know males argue only with males (and females only with females), M = F in Brown, and M > F in Gonzales.
Now we want to determine whether the Brown family has a greater percentage of family members who argue with each other. Let's see if we can derive more than one case.
M = F in Brown: the males who argue amongst themselves make up 50% of the family, and likewise the females.
This results in 50% of the family arguing with each other.
Alternatively, if M > F (the Gonzales family), the males who argue amongst themselves make up more than 50% of the family, so the Gonzales family has the higher overall percentage of family members who argue. Either way the comparison is determined, so both statements together are sufficient. This results in C.
Manager
Joined: 09 Oct 2015
Posts: 237
22 Sep 2017, 00:24
Can you please explain how you are getting 50%?
If it's an equal number of males and females, let's say 5 and 5, all 5 males are arguing with one another, i.e. 100% of the males, and all 5 females are arguing with one another, i.e. 100% again.
So that family has 100% of its members arguing.
Intern
Joined: 15 Aug 2017
Posts: 14
22 Sep 2017, 13:01
rahulkashyap wrote:
Can you please explain how you are getting 50%?
if its equal number of males and females, lets say 5 and 5, all 5 males are arguing with one another ,i.e. 100 % of males. and all 5 females are arguing with one another, i..e 100% again.
So that family has 100% of its members arguing.
The question asks which family has a greater percentage of members that argue with each other. Therefore, if males equal females and males argue only amongst each other, then the arguing males make up 50% of the family; likewise the arguing females make up the other 50%. In the case where we have more males than females, the males who argue amongst each other now make up more than 50% of the family, so the chances of that family (here the Gonzales family) getting on the show are higher. Let me know if anything is still unclear. Remember, the percentage of family members who argue amongst each other is the number arguing divided by the total number of members.
Sent from my SM-G955F using GMAT Club Forum mobile app
Manager
Joined: 09 Oct 2015
Posts: 237
22 Sep 2017, 20:54
Still unclear how you got only 50 percent of the family arguing. Would be helpful if you took numbers to show that. For example, 6 men and 6 women. 6 men argue with each other and 6 women argue with each other.
That's 12 people (100%) arguing. 8 men and 4 women would not change the percentage.
Posted from my mobile device
Intern
Joined: 15 Aug 2017
Posts: 14
25 Sep 2017, 11:53
rahulkashyap wrote:
Still unclear how you got only 50 percent of the family arguing. Would be helpful if you took numbers to show that. For example, 6 men and 6 women. 6 men argue with each other and 6 women argue with each other.
That's 12 people (100%) arguing. 8 men and 4 women would not change the percentage.
Posted from my mobile device
Hi Rahulkashyap, Here is how I got 50 percent arguing with each other.
The question states :The greater the percentage of family members who argue with each other, the greater the family’s chances of appearing on Wringer’s show.
This translates to: percentage of family members who argue with each other = $$\frac{\text{number arguing together}}{\text{whole family}}$$
Thus, if we have 4 males and 4 females, the arguing males give $$\frac{4}{8}$$ = 50% of members arguing with each other, and the same for the females.
Alternatively, if we have 5 females and 4 males, the arguing females give $$\frac{5}{9}$$ ≈ 55.6% of family members arguing with each other.
While the total percentage of family members arguing is in fact 100%, the question does not ask for the total percent arguing; rather, it asks which family has a higher percentage of family members who argue with each other.
Intern
Joined: 04 Sep 2017
Posts: 21
Location: United States
Concentration: Finance
GMAT 1: 610 Q36 V36
GMAT 2: 680 Q40 V36
GPA: 3.3
WE: Consulting (Mutual Funds and Brokerage)
17 Oct 2017, 15:05
Counting who argues with whom is a combinations problem. I think it's easiest to think of numbers like previously given.
Let's say there are 12 members in each family. For the Brown family, there are M=F, so in this example, 6 males and 6 females. The males all argue with each other and there are 6C2 ways for arguments which => 15. The women also have 6C2 ways, or 15. Thus 30 total arguments between individuals.
For the Gonzales family, they have more men than women M>F. Let's just say there are 8 males and 4 females. The arguments among men is 8C2 => 28 and arguments among women is 4C2 => 6. Thus the total arguments between individuals in this family is 34 which is higher than the Brown family. This discrepancy continues if you do 9 men and 3 women as well: 9C2 => 36 and 3C2 => 3 for a total of 39.
This is long winded, but it is possible to do these calculations quickly in order to come up with the answer, or just use intuition since it's a DS question.
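The pair counts above are just binomial coefficients, so the comparison can be made mechanical. A quick sketch (the helper function name is mine, not from the problem):

```python
from math import comb

def arguing_pairs(males, females):
    # Per statement 1, arguments only occur within each sex, so the number
    # of arguing pairs is C(males, 2) + C(females, 2).
    return comb(males, 2) + comb(females, 2)

# Brown family (statement 2: males = females), 12 members total
print(arguing_pairs(6, 6))   # 15 + 15 = 30

# Gonzales family (statement 2: males > females), same 12 members
print(arguing_pairs(8, 4))   # 28 + 6  = 34
print(arguing_pairs(9, 3))   # 36 + 3  = 39
```

For a fixed family size the pair count is smallest at an even split, so the comparison comes out the same way for any skewed split.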
Senior Manager
Joined: 02 Apr 2014
Posts: 472
Location: India
Schools: XLRI"20
GMAT 1: 700 Q50 V34
GPA: 3.5
23 Nov 2017, 12:21
rajarshee wrote:
[question and explanation quoted above]
Hi victor VeritasPrepKarishma,
I have one question.
Question premise says "Families in which some members" => there could be some members who don't get into argument at all.
As per statement1, can we assume that all male member will argue with each other or some may not argue and same for female members.?
Bringing in some CR techniques into DS
Thanks
Intern
Joined: 15 Aug 2017
Posts: 14
27 Nov 2017, 13:24
hellosanthosh2k2 wrote:
[previous posts quoted above]
While it is stated that some members argue with each other, as per statement 1 we are safe to assume all male members argue with one another and all female members argue with one another. If statement 1 had said some males argue amongst each other, then we would have to consider the case in which some members do not argue at all.
Intern
Joined: 23 Mar 2017
Posts: 2
04 Jan 2019, 23:49
Statements 1 and 2 are not sufficient on their own. But even on combining the two, they remain insufficient.
It is written in the question stem that the male/female members argue with EACH other and not with ONE ANOTHER, from which it is safe to assume that the argument at any point of time is between 2 members of the same sex.
Then going ahead and taking an example of 6 members in both the families.
Brown family: 3M and 3W => 3C2 x 3C2 = 9
Gonzales family : 4M and 2W => 4C2 x 2C2 = 6
5M and 1W => 5C2 = 10
Therefore, either of the families has the chance to appear on the show. My assumption of the argument is based on the 'EACH' keyword in the question.
Hence, I think the answer should be E.
Intern
Joined: 06 May 2019
Posts: 7
21 May 2019, 01:01
aman23 wrote:
[previous post quoted above]
I think it should be:
Brown family: 3M and 3W => 3C2 + 3C2 = 6
Gonzales family : 4M and 2W => 4C2 + 2C2 = 7
5M and 1W => 5C2 = 10
https://math.stackexchange.com/questions/391829/gaussian-quadrature-with-arbitrary-weight-function
# Gaussian quadrature with arbitrary weight function
In class, our professor told us how to evaluate the integral $\int_a^b w(x)f(x)\, dx$ by finding the Gaussian nodes $x_i$ and weights $w_i$ for the weight function $w(x)=1$ (also known as Gauss–Legendre quadrature). However, in homework, I came across some other weight functions and I don't really know how to handle them. (I tried Google and couldn't find a general way of finding the Gaussian nodes, which is why I am asking here.)
For example, if we have $\int_0^1 x^4f(x)dx=A_0f(x_0)+A_1f(x_1)$, how should we find $A_0,x_0,A_1,x_1$? And in this circumstance, for which degree of polynomial of $f(x)$ the integration is exact?
And the other question asks us "Derive a two-point integration formula for integrals of the form $\int_{-1}^1f(x)(1+x^2)dx$, which is exact when $f(x)$ is polynomial of degree 3.". Here I can't understand how we can derive a two point integration formula here when our weight function is already quadratic? And why does the question mention the degree of 3? I am confused here.
Furthermore, what if our weight function is some other analytic function such as $\frac{1}{\sqrt{1-x^2}}$ or $e^{-x}$? What is the general approach?
I really hope somebody can explain to me and help me out. Thanks!
In order to calculate $\int_0^1 x^{4}f(x)\,dx$ you use the Method of Undetermined Coefficients. That is:
Let $f_{i}(x)=x^{i}$. Then, calculate $c=\int_0^1 x^{4}f_{i}(x)\,dx$ and $A_{0} f_{i}(x_{0})+A_{1}f_{i}(x_{1})$. From this, you will get an equation of the form $c=A_{0} x_{0}^{i}+A_{1}x_{1}^{i}$. In this problem, we have 4 unknowns hence to solve for $A_{0}, x_{0}, A_{1}, x_{1}$ you must create 4 equations using this method. From there you can solve for your variables.
The Method of Undetermined Coefficients can work with any weight function, using the integral $\int_a^b w(x)f(x)\,dx$ for each $f_{i}(x)=x^{i}$ as defined in the method above.
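As a concrete sketch of this method for $\int_0^1 x^4 f(x)\,dx$, the four moment equations can be solved numerically (assuming SciPy is available; the variable names and starting guess are mine):

```python
import numpy as np
from scipy.optimize import fsolve

# Moments of the weight w(x) = x^4 on [0, 1]: int_0^1 x^(4+i) dx = 1/(5+i)
def moment_equations(v):
    A0, A1, x0, x1 = v
    return [A0 + A1 - 1/5,                      # exact for f(x) = 1
            A0*x0 + A1*x1 - 1/6,                # exact for f(x) = x
            A0*x0**2 + A1*x1**2 - 1/7,          # exact for f(x) = x^2
            A0*x0**3 + A1*x1**3 - 1/8]          # exact for f(x) = x^3

A0, A1, x0, x1 = fsolve(moment_equations, [0.05, 0.15, 0.5, 0.9])
print(A0, A1, x0, x1)  # ~0.049, 0.151, 0.586, 0.914 (pairs may swap order)

# The rule A0*f(x0) + A1*f(x1) is now exact for any cubic, by linearity:
f = lambda x: 1 - 2*x + 3*x**2 - 4*x**3
exact = 1/5 - 2/6 + 3/7 - 4/8          # int_0^1 x^4 f(x) dx, term by term
approx = A0*f(x0) + A1*f(x1)
```

Since a two-point rule has four free parameters, the resulting formula is exact for polynomials up to degree $2n-1 = 3$, which answers the exactness question in the post above.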
https://socratic.org/questions/how-do-you-find-all-the-zeros-of-f-x-x-4-x-3-3x-2-x-1
|
# How do you find all the zeros of f(x)=x^4-x^3-3x^2-x+1?
Aug 11, 2016
$f(x)$ has zeros:

$x_{1,2} = \dfrac{1+\sqrt{21} \pm \sqrt{6+2\sqrt{21}}}{4}$

$x_{3,4} = \dfrac{1-\sqrt{21}}{4} \pm \dfrac{\sqrt{2\sqrt{21}-6}}{4}\,i$

#### Explanation:

$f(x) = x^4 - x^3 - 3x^2 - x + 1$

Notice the symmetry of the coefficients: $1, -1, -3, -1, 1$

So:

$\dfrac{f(x)}{x^2} = x^2 - x - 3 - \dfrac{1}{x} + \dfrac{1}{x^2} = \left(x+\dfrac{1}{x}\right)^2 - \left(x+\dfrac{1}{x}\right) - 5$

Let $t = x + \dfrac{1}{x}$. Then:

$0 = t^2 - t - 5 = \left(t-\dfrac{1}{2}\right)^2 - \dfrac{1}{4} - 5 = \left(t-\dfrac{1}{2}\right)^2 - \left(\dfrac{\sqrt{21}}{2}\right)^2 = \left(t-\dfrac{1}{2}-\dfrac{\sqrt{21}}{2}\right)\left(t-\dfrac{1}{2}+\dfrac{\sqrt{21}}{2}\right)$

So:

$x + \dfrac{1}{x} = t = \dfrac{1}{2} \pm \dfrac{\sqrt{21}}{2}$

Multiply both ends by $x$ and rearrange slightly to get:

$x^2 - \left(\dfrac{1}{2} \pm \dfrac{\sqrt{21}}{2}\right)x + 1 = 0$

Writing the solutions of these two possibilities separately, the quadratic formula gives:

$x_{1,2} = \dfrac{\dfrac{1}{2}+\dfrac{\sqrt{21}}{2} \pm \sqrt{\left(\dfrac{1}{2}+\dfrac{\sqrt{21}}{2}\right)^2 - 4}}{2} = \dfrac{1+\sqrt{21} \pm \sqrt{\left(1+\sqrt{21}\right)^2 - 16}}{4} = \dfrac{1+\sqrt{21} \pm \sqrt{6+2\sqrt{21}}}{4}$

$x_{3,4} = \dfrac{\dfrac{1}{2}-\dfrac{\sqrt{21}}{2} \pm \sqrt{\left(\dfrac{1}{2}-\dfrac{\sqrt{21}}{2}\right)^2 - 4}}{2} = \dfrac{1-\sqrt{21} \pm \sqrt{\left(1-\sqrt{21}\right)^2 - 16}}{4} = \dfrac{1-\sqrt{21} \pm \sqrt{6-2\sqrt{21}}}{4} = \dfrac{1-\sqrt{21}}{4} \pm \dfrac{\sqrt{2\sqrt{21}-6}}{4}\,i$

Since $6-2\sqrt{21} < 0$, this last pair of zeros is complex.
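As a quick numerical cross-check (not part of the original answer), the closed-form zeros can be substituted back into the quartic to confirm they make it vanish:

```python
import numpy as np

coeffs = [1, -1, -3, -1, 1]   # f(x) = x^4 - x^3 - 3x^2 - x + 1
s = np.sqrt(21)

# Closed-form zeros derived above
x12 = (1 + s + np.array([1, -1]) * np.sqrt(6 + 2*s)) / 4
x34 = (1 - s) / 4 + np.array([1, -1]) * 1j * np.sqrt(2*s - 6) / 4

# Each should make f vanish, up to floating-point rounding
residuals = [abs(np.polyval(coeffs, x)) for x in (*x12, *x34)]
```

All four residuals come out at machine-precision level, confirming the algebra.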
http://mathhelpforum.com/trigonometry/40669-circular-functions.html
# Math Help - Circular Functions
1. ## Circular Functions
Hey, I've got two questions regarding Circular Functions; could you possibly include some working, as I am really struggling with this topic:
I need to find the next four values for t:
6=3sin(2t)+3
I know the first one is 5(pi)/4, I think the second is 9(pi)/4
and
between (pi) and 4(pi)
y=2sin(3t-(pi)/4)+8 for which y attains the minimum value.
thanks guys
RoboStar
2. Originally Posted by RoboStar
Hey, I've got two questions regarding Circular Functions; could you possibly include some working, as I am really struggling with this topic:
I need to find the next four values for t:
6=3sin(2t)+3
I know the first one is 5(pi)/4, I think the second is 9(pi)/4
Yes, I think you are right.
between (pi) and 4(pi)
y=2sin(3t-(pi)/4)+8 for which y attains the minimum value.
thanks guys
RoboStar
y=2sin(3t-(pi)/4)+8
The minimum value of the $\sin$ function is $-1$, so the minimum value of y is 6. The real question is whether there exists a t between (pi) and (4pi) such that this happens.
sin(3t-(pi)/4) = -1 means that $3t - \frac{\pi}{4} = -\frac{\pi}2,\frac{3\pi}2,\frac{7\pi}2,\ldots$
Thus $3t = -\frac{\pi}4,\frac{7\pi}4,\frac{15\pi}4,\ldots$
So $t = -\frac{\pi}{12},\frac{7\pi}{12},\frac{15\pi}{12},\ldots$
In general $t = \frac{(8n-1)\pi}{12}, n \in \mathbb{Z}$
So between $\pi$ and $4\pi$ (that is, $\frac{12\pi}{12}$ and $\frac{48\pi}{12}$), there are five angles that satisfy it:
$t = \frac{15\pi}{12},\frac{23\pi}{12},\frac{31\pi}{12},\frac{39\pi}{12},\frac{47\pi}{12}$, and the minimum value is $y = 6$.
3. and for the first question, would the next two be 13(pi)/4 and 17(pi)/4? I'm still quite unsure about this concept.
4. Originally Posted by RoboStar
[snip]
I need to find the next four values for (t):
6=3sin(2t)+3
I know the first one is 5(pi)/4, I think the second is 9(pi)/4
[snip]
Do you need values such that t > 0? If so, you missed t = pi/4.
Note that your equation can be re-arranged into sin(2t) = 1.
The general solution is $2t = \frac{\pi}{2} + 2n \pi \Rightarrow t = \frac{\pi}{4} + n \pi$ where n is an integer.
So get all the solutions you want by letting n = 0, 1, 2, .... -1, -2, ....
Originally Posted by RoboStar
and for the first question, would the next two be 13(pi)/4 and 17(pi)/4? I'm still quite unsure about this concept.
Yes (they correspond to n = 3 and n = 4).
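Both general solutions can be checked numerically. Note that for the second equation, $3t - \frac{\pi}{4} = \frac{3\pi}{2} + 2k\pi$ gives $3t = \frac{(8n-1)\pi}{4}$, hence $t = \frac{(8n-1)\pi}{12}$. A quick sketch:

```python
import numpy as np

n = np.arange(0, 5)

# First equation: 6 = 3 sin(2t) + 3  ->  sin(2t) = 1  ->  t = pi/4 + n*pi
t1 = np.pi/4 + n * np.pi
lhs1 = 3 * np.sin(2 * t1) + 3          # should equal 6 for every n

# Second equation: y = 2 sin(3t - pi/4) + 8 is minimised when
# 3t - pi/4 = 3*pi/2 + 2k*pi, i.e. t = (8n - 1)*pi/12
t2 = (8 * n - 1) * np.pi / 12
y = 2 * np.sin(3 * t2 - np.pi / 4) + 8  # should equal the minimum, 6
```

Every value of `lhs1` is 6 and every value of `y` is 6, confirming both families of solutions.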
https://physics.stackexchange.com/questions/265621/feynmans-explanation-of-virtual-work-given-in-his-book-feynmans-lectures-on-ph
# Feynman's explanation of virtual work given in his book Feynman's lectures on Physics
In Chapter 4, Conservation of Energy, in the discussion of gravitational potential energy, the text goes...
"Take now the somewhat more complicated example shown in Fig. 4-6. A rod or bar, 8 feet long, is supported at one end. In the middle of the bar is a weight of 60 pounds, and at a distance of two feet from the support there is a weight of 100 pounds. How hard do we have to lift the end of the bar in order to keep it balanced, disregarding the weight of the bar? Suppose we put a pulley at one end and hang a weight on the pulley. How big would the weight W have to be in order for it to balance? We imagine that the weight falls any arbitrary distance—to make it easy for ourselves suppose it goes down 4 inches—how high would the two load weights rise? The center rises 2 inches, and the point a quarter of the way from the fixed end lifts 1 inch. Therefore, the principle that the sum of the heights times the weights does not change tells us that the weight W times 4 inches down, plus 60 pounds times 2 inches up, plus 100 pounds times 1 inch has to add up to nothing:......"
But how do we know that the end point of the rod connected to the rope goes up 4 inches when the weight "W" goes down 4 inches? My argument is that if the weight "W" goes 4 inches downwards, the rod is lifted a little less than 4 inches, because the point where the rod and the rope are connected moves in a circular path, and it is the length of that arc, not the vertical rise, that equals the 4 inches the weight "W" descends. Yet the principle of virtual work is taken to be true.
So the 60 pound weight does not move 2 inches upward and the 100 pound weight does not move 1 inch upward. By similar triangles, if the rod end moved vertically upwards by 4 inches, the other weights would move as in Feynman's argument; but my argument is that the movement will be less than the values given in the book. If someone could help me resolve this, it would be a very enriching experience.
• Welcome to "the art of approximation" where $\sin x \approx \tan x \approx x$. :-) – CuriousOne Jul 1 '16 at 6:01
• Another word for a mild cheat to complete the sentence, right? – Jyotishraj Thoudam Jul 1 '16 at 6:27
• Very mild. It's really just an engineering problem... you could make some kind of cam-mechanism that keeps the string straight and compensates. It wouldn't change the physics, at all, and annoy the heck out of everyone. Look at it this way... you caught Feynman's tiny sleight of hand, which makes you the smartest person in the room. That counts for something, for quite a bit, actually. – CuriousOne Jul 1 '16 at 6:58
• @CuriousOne would you care to explain the solution presented in this link physics.stackexchange.com/q/265664. I tried to use the argument given above but I don't know how it'll work. – Jyotishraj Thoudam Jul 1 '16 at 13:37
Feynman is using definite small quantities (inches) in place of infinitesimals $\delta x$ etc. Probably he wanted to avoid non-essential mathematical formality, in line with his casual, hand-waving persona. The Principle of Virtual Work requires the structure to undergo infinitesimal displacements (hence "virtual"). He could instead have used units of nanometres (or smaller) but that would also be too pernickety. Inches are appropriately small compared with feet.
He had to move the bar a small amount to compare the movement of the pulley weight to the other two weights. He took an "arbitrary" small distance of 4" to make the math easy. In reality the bar doesn't really move much at all: the weights resist movement in both directions. The displacement is infinitesimally small, as sammy gerbil said. In that case the Small Angle Approximation applies, and $$\cos\theta \approx 1 - \frac{\theta^2}{2}$$
Since the angle here is infinitesimally small, $\theta \approx 0$, so $$\cos\theta \approx 1 - \frac{0}{2} = 1$$ and if the cosine is 1, so is the ratio of the two sides: $$\cos\theta = \frac{\text{adjacent}}{\text{hypotenuse}} \approx 1 \quad\Rightarrow\quad \text{hypotenuse} \approx \text{adjacent}$$ which means the end of the rod rises essentially the full distance it travels along the arc. I'm sure there was a 50x more elegant way to show that; but it's what I came up with.
https://baseballdrills.info/bball/pitching-mound-click-here-for-the-proper-techniques.html
That's why I am hitting for average. That doesn't mean that I strive to be average; it means that I strive to grind out each day while understanding that the ebb and flow of life will dictate how far I come as much as the effort I put in. It's not about being perfect, nobody is. But if I accept that failure will dominate my outcomes, then I will certainly be more appreciative of the victories, for it will take those many failed attempts to ever have a chance to succeed.
Henry Chadwick, an English statistician raised on cricket, was an influential figure in the early history of baseball. In the late 19th century he adapted the concept behind the cricket batting average to devise a similar statistic for baseball. Rather than simply copy cricket's formulation of runs scored divided by outs, he realized that hits divided by at bats would provide a better measure of individual batting ability. This is because while in cricket scoring runs is almost entirely dependent on one's own batting skill, in baseball it is largely dependent on having other good hitters on one's team. Chadwick noted that hits are independent of teammates' skills, so he used this as the basis for the baseball batting average. His reason for using at bats rather than outs is less obvious, but it leads to the intuitive idea of the batting average being a percentage reflecting how often a batter gets on base, whereas hits divided by outs is not as simple to interpret in real terms:
$AVG = \frac{H}{AB}$
The "Twisting Model" is a biomechanical model of physical movement that explains why our current ideas about baseball mechanics (bat speed, hip rotation, "power") are insufficient to explain fully what happens when bat hits ball. This article introduces the "Twisting Model" by showing how it supports Ted Williams's theory of hitting from The Science of Hitting. The Twisting Model is less well known than the conventional Rotational Model; field study on the Twisting Model has only recently begun.
Talk with any proven baseball hitter and he will concur that the secret to hitting power, the stroking of the long ball, is the magic of bat speed.
A few of the hitting tips scattered through the page:
• Do a short timing step forward as the arms load in opposite directions.
• Push the knob of the bat to the ball, which will keep your hands inside.
• To maintain two-hand contact, bring the front elbow in, like a pitcher's throwing motion, and finish with both hands at shoulder height (sometimes called a "punch swing": the back arm moves forward like a boxing undercut).
• On the load to stride, the back knee should drive down toward the baseball as the front heel plants into the ground.
• A lot of younger players struggle with keeping their hands back during the "loading" phase of their swing; as they start their stride, their hands drift forward, drop down, or both.
• Get your front foot down early, so you can react quickly once you recognize that the ball is up; if your foot is down, you're ready to pounce.
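Chadwick's statistic is straightforward to compute; a minimal sketch, using hypothetical season numbers:

```python
def batting_average(hits: int, at_bats: int) -> float:
    """Batting average as Chadwick defined it: hits divided by at bats."""
    return hits / at_bats

# Hypothetical season line: 180 hits in 600 at bats gives a .300 average
avg = round(batting_average(180, 600), 3)
```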
https://handwiki.org/wiki/Class_(computer_programming)
# Class (computer programming)
Short description: Definition in programming that specifies how an object works
In object-oriented programming, a class is an extensible program-code-template for creating objects, providing initial values for state (member variables) and implementations of behavior (member functions or methods).[1][2] In many languages, the class name is used as the name for the class (the template itself), the name for the default constructor of the class (a subroutine that creates objects), and as the type of objects generated by instantiating the class; these distinct concepts are easily conflated.[2] One could argue, though, that this conflation is inherent in the polymorphic nature of such languages, and part of what makes them so powerful, dynamic and adaptable compared to languages without polymorphism, allowing them to model dynamic systems (i.e. the real world, machine learning, AI) more easily.
When an object is created by a constructor of the class, the resulting object is called an instance of the class, and the member variables specific to the object are called instance variables, to contrast with the class variables shared across the class.
In certain languages, classes are only a compile-time feature (new classes cannot be declared at run time), while in other languages classes are first-class citizens and are generally themselves objects (typically of type Class or similar). In these languages, a class whose instances are themselves classes is called a metaclass.
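Python is one such language: every class is an object, an instance of the metaclass `type`, and new classes can be created at run time by calling the metaclass directly. A small sketch:

```python
# A class defined normally is itself an object: an instance of `type`.
class Point:
    pass

# Because classes are first-class, an equivalent class can also be built
# at run time by calling the metaclass: type(name, bases, namespace).
Point3D = type("Point3D", (Point,), {"z": 0})

p = Point3D()   # instantiating the run-time-created class
```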
## Class vs. type
In its most casual usage, people often refer to the "class" of an object, but narrowly speaking objects have type: the interface, namely the types of member variables, the signatures of member functions (methods), and properties these satisfy. At the same time, a class has an implementation (specifically the implementation of the methods), and can create objects of a given type, with a given implementation.[3] In the terms of type theory, a class is an implementation—a concrete data structure and collection of subroutines—while a type is an interface. Different (concrete) classes can produce objects of the same (abstract) type (depending on type system); for example, the type Stack might be implemented with two classes – SmallStack (fast for small stacks, but scales poorly) and ScalableStack (scales well but high overhead for small stacks). Similarly, a given class may have several different constructors.
Class types generally represent nouns, such as a person, place or thing, or something nominalized, and a class represents an implementation of these. For example, a Banana type might represent the properties and functionality of bananas in general, while the ABCBanana and XYZBanana classes would represent ways of producing bananas (say, banana suppliers or data structures and functions to represent and draw bananas in a video game). The ABCBanana class could then produce particular bananas: instances of the ABCBanana class would be objects of type Banana. Often only a single implementation of a type is given, in which case the class name is often identical with the type name.
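The Stack example can be sketched in Python, with an abstract base class playing the role of the type and two concrete classes providing different implementations. The class names follow the text; the list and deque backings are illustrative stand-ins for the "small" and "scalable" implementations:

```python
from abc import ABC, abstractmethod
from collections import deque

class Stack(ABC):
    """The abstract type: an interface with no implementation."""
    @abstractmethod
    def push(self, item): ...
    @abstractmethod
    def pop(self): ...

class SmallStack(Stack):
    """List-backed: simple, fine for small stacks."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

class ScalableStack(Stack):
    """Deque-backed: a different implementation of the same type."""
    def __init__(self):
        self._items = deque()
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
```

Code written against `Stack` works with instances of either concrete class, which is exactly the type/implementation split the section describes.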
## Design and implementation
Classes are composed from structural and behavioral constituents.[1] Programming languages that include classes as a programming construct offer support, for various class-related features, and the syntax required to use these features varies greatly from one programming language to another.
### Structure
UML notation for classes
A class contains data field descriptions (or properties, fields, data members, or attributes). These are usually field types and names that will be associated with state variables at program run time; these state variables either belong to the class or specific instances of the class. In most languages, the structure defined by the class determines the layout of the memory used by its instances. Other implementations are possible: for example, objects in Python use associative key-value containers.[4]
Some programming languages such as Eiffel support specification of invariants as part of the definition of the class, and enforce them through the type system. Encapsulation of state is necessary for being able to enforce the invariants of the class.
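Languages without Eiffel-style invariant declarations can still approximate the idea by encapsulating state and checking the invariant in every public mutator. A Python sketch (the `Counter` class and its invariant are illustrative, not from the article):

```python
class Counter:
    """Invariant: the count is never negative."""
    def __init__(self):
        self._count = 0   # private by convention; mutated only via add()

    @property
    def count(self):
        return self._count

    def add(self, n):
        # Every public mutator re-establishes the invariant before
        # committing the change, so external code can never break it.
        if self._count + n < 0:
            raise ValueError("invariant violated: count would go negative")
        self._count += n
```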
### Behavior
The behavior of class or its instances is defined using methods. Methods are subroutines with the ability to operate on objects or classes. These operations may alter the state of an object or simply provide ways of accessing it.[5] Many kinds of methods exist, but support for them varies across languages. Some types of methods are created and called by programmer code, while other special methods—such as constructors, destructors, and conversion operators—are created and called by compiler-generated code. A language may also allow the programmer to define and call these special methods.[6][7]
### The concept of class interface
Main page: Interface (computing)
Every class implements (or realizes) an interface by providing structure and behavior. Structure consists of data and state, and behavior consists of code that specifies how methods are implemented.[8] There is a distinction between the definition of an interface and the implementation of that interface; however, this line is blurred in many programming languages because class declarations both define and implement an interface. Some languages, however, provide features that separate interface and implementation. For example, an abstract class can define an interface without providing implementation.
Languages that support class inheritance also allow classes to inherit interfaces from the classes that they are derived from.
For example, if "class A" inherits from "class B" and if "class B" implements the interface "interface B", then "class A" also inherits the functionality (constant and method declarations) provided by "interface B".
In languages that support access specifiers, the interface of a class is considered to be the set of public members of the class, including both methods and attributes (via implicit getter and setter methods); any private members or internal data structures are not intended to be depended on by external code and thus are not part of the interface.
Object-oriented programming methodology dictates that the operations of any interface of a class are to be independent of each other. It results in a layered design where clients of an interface use the methods declared in the interface. An interface places no requirements for clients to invoke the operations of one interface in any particular order. This approach has the benefit that client code can assume that the operations of an interface are available for use whenever the client has access to the object.[9]
#### Example
The buttons on the front of your television set are the interface between you and the electrical wiring on the other side of its plastic casing. You press the "power" button to toggle the television on and off. In this example, your particular television is the instance, each method is represented by a button, and all the buttons together compose the interface (other television sets that are the same model as yours would have the same interface). In its most common form, an interface is a specification of a group of related methods without any associated implementation of the methods.
A television set also has a myriad of attributes, such as size and whether it supports colour, which together comprise its structure. A class represents the full description of a television, including its attributes (structure) and buttons (interface).
Getting the total number of televisions manufactured could be a static method of the television class. This method is clearly associated with the class, yet is outside the domain of each individual instance of the class. A static method that finds a particular instance out of the set of all television objects is another example.
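The "total number of televisions manufactured" idea maps directly onto a class-level method operating on a class variable; a Python sketch (names illustrative):

```python
class Television:
    manufactured = 0            # class variable, shared across instances

    def __init__(self, size):
        self.size = size        # instance variable, per television
        Television.manufactured += 1

    @classmethod
    def total_manufactured(cls):
        # associated with the class as a whole, not any one instance
        return cls.manufactured
```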
### Member accessibility
The following is a common set of access specifiers:[10]
• Private (or class-private) restricts the access to the class itself. Only methods that are part of the same class can access private members.
• Protected (or class-protected) allows the class itself and all its subclasses to access the member.
• Public means that any code can access the member by its name.
Although many object-oriented languages support the above access specifiers, their semantics may differ.
Object-oriented design uses the access specifiers in conjunction with careful design of public method implementations to enforce class invariants—constraints on the state of the objects. A common usage of access specifiers is to separate the internal data of a class from its interface: the internal structure is made private, while public accessor methods can be used to inspect or alter such private data.
Access specifiers do not necessarily control visibility, in that even private members may be visible to client external code. In some languages, an inaccessible but visible member may be referred to at run-time (for example, by a pointer returned from a member function), but an attempt to use it by referring to the name of the member from client code will be prevented by the type checker.[11]
The various object-oriented programming languages enforce member accessibility and visibility to various degrees; depending on the language's type system and compilation policies, accessibility is enforced at either compile time or run time. For example, the Java language does not allow client code that accesses the private data of a class to compile.[12] In the C++ language, private methods are visible but not accessible in the interface; however, they may be made invisible by explicitly declaring fully abstract classes that represent the interfaces of the class.[13]
Some languages feature other accessibility schemes:
• Instance vs. class accessibility: Ruby supports instance-private and instance-protected access specifiers in lieu of class-private and class-protected, respectively. They differ in that they restrict access based on the instance itself, rather than the instance's class.[14]
• Friend: C++ supports a mechanism where a function explicitly declared as a friend function of the class may access the members designated as private or protected.[15]
• Path-based: Java supports restricting access to a member within a Java package, which is the logical path of the file. However, it is a common practice when extending a Java framework to implement classes in the same package as a framework class in order to access protected members. The source file may exist in a completely different location, and may be deployed to a different .jar file, yet still be in the same logical path as far as the JVM is concerned.[10]
## Inter-class relationships
In addition to the design of standalone classes, programming languages may support more advanced class design based upon relationships between classes. The inter-class relationship design capabilities commonly provided are compositional and hierarchical.
### Compositional
Classes can be composed of other classes, thereby establishing a compositional relationship between the enclosing class and its embedded classes. Compositional relationship between classes is also commonly known as a has-a relationship.[16] For example, a class "Car" could be composed of and contain a class "Engine". Therefore, a Car has an Engine. One aspect of composition is containment, which is the enclosure of component instances by the instance that has them. If an enclosing object contains component instances by value, the components and their enclosing object have a similar lifetime. If the components are contained by reference, they may not have a similar lifetime.[17] For example, in Objective-C 2.0:
@interface Car : NSObject
@property NSString *name;
@property Engine *engine;
@property NSArray *tires;
@end
This Car class has an instance of NSString (a string object), Engine, and NSArray (an array object).
### Hierarchical
Classes can be derived from one or more existing classes, thereby establishing a hierarchical relationship between the derived-from classes (base classes, parent classes or superclasses) and the derived class (child class or subclass). The relationship of the derived class to the derived-from classes is commonly known as an is-a relationship.[18] For example, a class 'Button' could be derived from a class 'Control'. Therefore, a Button is a Control. Structural and behavioral members of the parent classes are inherited by the child class. Derived classes can define structural members (data fields) and behavioral members (methods) beyond those they inherit and are therefore specializations of their superclasses. Also, derived classes can override inherited methods if the language allows.
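The is-a relationship, member inheritance, and method overriding described above can be sketched in Python, using the Button/Control example from the text (the specific fields and methods are invented for illustration):

```python
class Control:
    """Base class: state and behavior shared by all controls."""
    def __init__(self, label):
        self.label = label

    def describe(self):
        return f"Control({self.label})"


class Button(Control):
    """A Button is-a Control: it inherits members and specializes some."""
    def __init__(self, label, on_click=None):
        super().__init__(label)      # reuse the inherited initializer
        self.on_click = on_click     # additional data field

    def describe(self):              # override an inherited method
        return f"Button({self.label})"


b = Button("OK")
print(b.describe())                  # Button(OK)
print(isinstance(b, Control))        # True: a Button is a Control
```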
Not all languages support multiple inheritance. For example, Java allows a class to implement multiple interfaces, but only inherit from one class.[19] If multiple inheritance is allowed, the hierarchy is a directed acyclic graph (or DAG for short), otherwise it is a tree. The hierarchy has classes as nodes and inheritance relationships as links. Classes in the same level are more likely to be associated than classes in different levels. The levels of this hierarchy are called layers or levels of abstraction.
Example (Simplified Objective-C 2.0 code, from iPhone SDK):
@interface UIResponder : NSObject //...
@interface UIView : UIResponder //...
@interface UIScrollView : UIView //...
@interface UITableView : UIScrollView //...
In this example, a UITableView is a UIScrollView is a UIView is a UIResponder is an NSObject.
#### Definitions of subclass
Conceptually, a superclass is a superset of its subclasses. For example, a common class hierarchy would involve GraphicObject as a superclass of Rectangle and Ellipse, while Square would be a subclass of Rectangle. These are all subset relations in set theory as well, i.e., all squares are rectangles but not all rectangles are squares.
A common conceptual error is to mistake a part of relation with a subclass. For example, a car and truck are both kinds of vehicles and it would be appropriate to model them as subclasses of a vehicle class. However, it would be an error to model the component parts of the car as subclass relations. For example, a car is composed of an engine and body, but it would not be appropriate to model engine or body as a subclass of car.
In object-oriented modeling these kinds of relations are typically modeled as object properties. In this example, the Car class would have a property called parts. parts would be typed to hold a collection of objects, such as instances of Body, Engine, Tires, etc. Object modeling languages such as UML include capabilities to model various aspects of "part of" and other kinds of relations – data such as the cardinality of the objects, constraints on input and output values, etc. This information can be utilized by developer tools to generate additional code beside the basic data definitions for the objects, such as error checking on get and set methods.[20]
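The part-of modeling advice above can be sketched in Python (the Car, Engine, and Body names follow the text; the property layout is illustrative): components live in a parts property, and no subclass relation holds between a part and the whole.

```python
class Engine:
    """A component part, not a kind of Car."""

class Body:
    """Another component part."""

class Car:
    """Composition modeled as an object property holding parts."""
    def __init__(self):
        # a has-a relationship: the Car contains its component instances
        self.parts = [Engine(), Body()]

car = Car()
print(any(isinstance(p, Engine) for p in car.parts))  # True: Car has-an Engine
print(issubclass(Engine, Car))                        # False: part-of is not is-a
```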
One important question when modeling and implementing a system of object classes is whether a class can have one or more superclasses. In the real world, with actual sets, it would be rare to find sets that did not intersect with more than one other set. However, while some systems such as Flavors and CLOS allow a class to have more than one parent, doing so at run time introduces complexity that many in the object-oriented community consider antithetical to the goals of using object classes in the first place. Understanding which class will be responsible for handling a message can become complex when dealing with more than one superclass. If used carelessly, this feature can introduce some of the same system complexity and ambiguity that classes were designed to avoid.[21]
Most modern object-oriented languages such as Smalltalk and Java require single inheritance at run time. For these languages, multiple inheritance may be useful for modeling but not for an implementation.
However, semantic web application objects do have multiple superclasses. The volatility of the Internet requires this level of flexibility and the technology standards such as the Web Ontology Language (OWL) are designed to support it.
A similar issue is whether or not the class hierarchy can be modified at run time. Languages such as Flavors, CLOS, and Smalltalk all support this feature as part of their meta-object protocols. Since classes are themselves first-class objects, it is possible to have them dynamically alter their structure by sending them the appropriate messages. Other languages that focus more on strong typing, such as Java and C++, do not allow the class hierarchy to be modified at run time. Semantic web objects have the capability for run time changes to classes. The rationale is similar to the justification for allowing multiple superclasses: the Internet is so dynamic and flexible that dynamic changes to the hierarchy are required to manage this volatility.[22]
### Orthogonality of the class concept and inheritance
Although class-based languages are commonly assumed to support inheritance, inheritance is not an intrinsic aspect of the concept of classes. Some languages, often referred to as "object-based languages", support classes yet do not support inheritance. Examples of object-based languages include earlier versions of Visual Basic.
### Within object-oriented analysis
Main page: Association (object-oriented programming)
In object-oriented analysis and in UML, an association between two classes represents a collaboration between the classes or their corresponding instances. Associations have direction; for example, a bi-directional association between two classes indicates that both of the classes are aware of their relationship.[23] Associations may be labeled according to their name or purpose.[24]
An association role describes a given end of an association and the role of the corresponding class at that end. For example, a "subscriber" role describes the way instances of the class "Person" participate in a "subscribes-to" association with the class "Magazine". Also, a "Magazine" has the "subscribed magazine" role in the same association. Association role multiplicity describes how many instances correspond to each instance of the other class of the association. Common multiplicities are "0..1", "1..1", "1..*" and "0..*", where the "*" specifies any number of instances.[23]
## Taxonomy of classes
There are many categories of classes, some of which overlap.
### Abstract and concrete
Main page: Abstract type
In a language that supports inheritance, an abstract class, or abstract base class (ABC), is a class that cannot be instantiated because it is either labeled as abstract or it simply specifies abstract methods (or virtual methods). An abstract class may provide implementations of some methods, and may also specify virtual methods via signatures that are to be implemented by direct or indirect descendants of the abstract class. Before a class derived from an abstract class can be instantiated, all abstract methods of its parent classes must be implemented by some class in the derivation chain.[25]
Most object-oriented programming languages allow the programmer to specify which classes are considered abstract and will not allow these to be instantiated. For example, in Java, C# and PHP, the keyword abstract is used.[26][27] In C++, an abstract class is a class having at least one abstract method given by the appropriate syntax in that language (a pure virtual function in C++ parlance).[25]
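Python expresses the same idea with its standard abc module rather than a dedicated keyword; a minimal sketch (the Shape and Square names are illustrative, not from the text):

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self):
        """Abstract method: a signature to be implemented by descendants."""

    def describe(self):
        # abstract classes may still provide concrete implementations
        return f"shape with area {self.area()}"

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        # every abstract method is implemented, so Square is concrete
        return self.side * self.side

print(Square(3).describe())   # shape with area 9
try:
    Shape()                   # instantiating the abstract class fails
except TypeError:
    print("Shape cannot be instantiated")
```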
A class consisting of only pure virtual methods is called a Pure Abstract Base Class (or Pure ABC) in C++ and is also known as an interface by users of the language.[13] Other languages, notably Java and C#, support a variant of abstract classes called an interface via a keyword in the language. In these languages, multiple inheritance is not allowed, but a class can implement multiple interfaces. Such a class can only contain abstract publicly accessible methods.[19][28][29]
A concrete class is a class that can be instantiated, as opposed to abstract classes, which cannot.
### Local and inner
In some languages, classes can be declared in scopes other than the global scope. There are various types of such classes.
An inner class is a class defined within another class. The relationship between an inner class and its containing class can also be treated as another type of class association. An inner class is typically neither associated with instances of the enclosing class nor instantiated along with its enclosing class. Depending on language, it may or may not be possible to refer to the class from outside the enclosing class. A related concept is inner types, also known as inner data type or nested type, which is a generalization of the concept of inner classes. C++ is an example of a language that supports both inner classes and inner types (via typedef declarations).[30][31]
Another type is a local class, which is a class defined within a procedure or function. This limits references to the class name to within the scope where the class is declared. Depending on the semantic rules of the language, there may be additional restrictions on local classes compared to non-local ones. One common restriction is to disallow local class methods to access local variables of the enclosing function. For example, in C++, a local class may refer to static variables declared within its enclosing function, but may not access the function's automatic variables.[32]
### Metaclasses
Main page: Metaclass
Metaclasses are classes whose instances are classes.[33] A metaclass describes a common structure of a collection of classes and can implement a design pattern or describe particular kinds of classes. Metaclasses are often used to describe frameworks.[34]
In some languages, such as Python, Ruby or Smalltalk, a class is also an object; thus each class is an instance of a unique metaclass that is built into the language.[4][35][36] The Common Lisp Object System (CLOS) provides metaobject protocols (MOPs) to implement those classes and metaclasses.[37]
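A minimal Python sketch of a metaclass whose instances are classes; the registry behavior here is an illustrative design choice, not a standard-library feature:

```python
class RegistryMeta(type):
    """A metaclass: its instances are classes, recorded as they are created."""
    registry = []

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        mcls.registry.append(cls)
        return cls

class Plugin(metaclass=RegistryMeta):
    pass

class AudioPlugin(Plugin):
    # subclasses inherit the metaclass, so they are registered too
    pass

print(type(Plugin) is RegistryMeta)                 # True: the class is an object
print([c.__name__ for c in RegistryMeta.registry])  # ['Plugin', 'AudioPlugin']
```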
### Non-subclassable
Non-subclassable classes allow programmers to design classes and hierarchies of classes where at some level in the hierarchy, further derivation is prohibited (a stand-alone class may be also designated as non-subclassable, preventing the formation of any hierarchy). Contrast this to abstract classes, which imply, encourage, and require derivation in order to be used at all. A non-subclassable class is implicitly concrete.
A non-subclassable class is created by declaring the class as sealed in C# or as final in Java or PHP.[38][39][40] For example, Java's String class is designated as final.[41]
Non-subclassable classes may allow a compiler (in compiled languages) to perform optimizations that are not available for subclassable classes.[42]
### Open class
An open class is one that can be changed. Typically, an executable program cannot be changed by customers. Developers can often change some classes, but typically cannot change standard or built-in ones. In Ruby, all classes are open. In Python, classes can be created at runtime, and all can be modified afterwards.[43] Objective-C categories permit the programmer to add methods to an existing class without the need to recompile that class or even have access to its source code.
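A minimal Python sketch of an open class: a method is attached to an existing class after its definition, and instances created beforehand pick it up immediately (the names are illustrative):

```python
class Greeter:
    """Defined with no methods; the class object remains open to change."""

g = Greeter()          # instance created before the method exists

def greet(self):
    return "hello"

Greeter.greet = greet  # attach the method to the class after the fact
print(g.greet())       # hello: existing instances see the new method
```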
### Mixins
Some languages have special support for mixins, though in any language with multiple inheritance a mixin is simply a class that does not represent an is-a-type-of relationship. Mixins are typically used to add the same methods to multiple classes; for example, a class UnicodeConversionMixin might provide a method called unicode_to_ascii when included in classes FileReader and WebPageScraper that do not share a common parent.
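The UnicodeConversionMixin example above can be sketched in Python; the accent-stripping implementation is an illustrative choice, not implied by the text:

```python
import unicodedata

class UnicodeConversionMixin:
    """Shared behavior, not an is-a-type-of relationship."""
    def unicode_to_ascii(self, text):
        # decompose accented characters, then drop the combining marks
        decomposed = unicodedata.normalize("NFKD", text)
        return decomposed.encode("ascii", "ignore").decode("ascii")

class FileReader(UnicodeConversionMixin):
    pass

class WebPageScraper(UnicodeConversionMixin):
    pass

# the two unrelated classes gain the same method from the mixin
print(FileReader().unicode_to_ascii("café"))       # cafe
print(WebPageScraper().unicode_to_ascii("naïve"))  # naive
```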
### Partial
In languages supporting the feature, a partial class is a class whose definition may be split into multiple pieces, within a single source-code file or across multiple files.[44] The pieces are merged at compile-time, making compiler output the same as for a non-partial class.
The primary motivation for introduction of partial classes is to facilitate the implementation of code generators, such as visual designers.[44] It is otherwise a challenge or compromise to develop code generators that can manage the generated code when it is interleaved within developer-written code. Using partial classes, a code generator can process a separate file or coarse-grained partial class within a file, and is thus alleviated from intricately interjecting generated code via extensive parsing, increasing compiler efficiency and eliminating the potential risk of corrupting developer code. In a simple implementation of partial classes, the compiler can perform a phase of precompilation where it "unifies" all the parts of a partial class. Then, compilation can proceed as usual.
Other benefits and effects of the partial class feature include:
• Enables separation of a class's interface and implementation code in a unique way.
• Eases navigation through large classes within an editor.
• Enables separation of concerns, in a way similar to aspect-oriented programming but without using any extra tools.
• Enables multiple developers to work on a single class concurrently without the need to merge individual code into one file at a later time.
Partial classes have existed in Smalltalk under the name of class extensions for a considerable time. With the arrival of the .NET Framework 2.0, Microsoft introduced partial classes, supported in both C# 2.0 and Visual Basic 2005. WinRT also supports partial classes.
#### Example in VB.NET
This simple example, written in Visual Basic .NET, shows how parts of the same class are defined in two different files.
file1.vb
Partial Class SampleClass
    Private _name As String
End Class
file2.vb
Partial Class SampleClass
    Public ReadOnly Property Name() As String
        Get
            Return _name
        End Get
    End Property
End Class
When compiled, the result is the same as if the two files were written as one, like this:
Class SampleClass
    Private _name As String
    Public ReadOnly Property Name() As String
        Get
            Return _name
        End Get
    End Property
End Class
#### Example in Objective-C
In Objective-C, partial classes, also known as categories, may even be spread over multiple libraries and executables, as in the following example. A key difference is that Objective-C's categories can overwrite definitions in another interface declaration, and that a category is not equal to the original class definition (the category requires the original class).[45] In contrast, .NET partial classes cannot have conflicting definitions, and all partial definitions are equal to one another.[44]
@interface NSData : NSObject
- (id)initWithContentsOfURL:(NSURL *)URL;
//...
@end
In user-supplied library, a separate binary from Foundation framework, header file NSData+base64.h:
#import <Foundation/Foundation.h>
@interface NSData (base64)
- (NSString *)base64String;
- (id)initWithBase64String:(NSString *)base64String;
@end
And in an app, yet another separate binary file, source code file main.m:
#import <Foundation/Foundation.h>
#import "NSData+base64.h"
int main(int argc, char *argv[])
{
if (argc < 2)
return EXIT_FAILURE;
NSString *sourceURLString = [NSString stringWithCString:argv[1]];
NSData *data = [[NSData alloc] initWithContentsOfURL:[NSURL URLWithString:sourceURLString]];
NSLog(@"%@", [data base64String]);
return EXIT_SUCCESS;
}
The dispatcher will find both methods called over the NSData instance and invoke both of them correctly.
### Uninstantiable
Uninstantiable classes allow programmers to group together per-class fields and methods that are accessible at runtime without an instance of the class. Indeed, instantiation is prohibited for this kind of class.
For example, in C#, a class marked "static" cannot be instantiated, can only have static members (fields, methods, other), may not have instance constructors, and is sealed.[46]
### Unnamed
An unnamed class or anonymous class is a class that is not bound to a name or identifier upon definition.[47][48] This is analogous to named versus unnamed functions.
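Python has no dedicated anonymous-class syntax, but its closest analog is the three-argument type() call, which builds a class object without a class statement binding it to a name; a small illustrative sketch:

```python
# type(name, bases, namespace) builds a class object directly; passing an
# empty name leaves the class effectively unnamed
point_cls = type("", (), {"x": 0, "y": 0})

p = point_cls()
print(p.x, p.y)                  # 0 0
print(point_cls.__name__ == "")  # True: the class carries no name
```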
## Benefits
The benefits of organizing software into object classes fall into three categories:[49]
• Rapid development
• Ease of maintenance
• Reuse of code and designs
Object classes facilitate rapid development because they lessen the semantic gap between the code and the users: system analysts can talk to both developers and users using essentially the same vocabulary, speaking of accounts, customers, bills, and so on. Rapid development is also helped by the powerful debugging and testing tools that come with most object-oriented environments. Instances of classes can be inspected at run time to verify that the system is performing as expected. Also, rather than working from dumps of core memory, most object-oriented environments provide interpreted debugging capabilities so that the developer can analyze exactly where in the program the error occurred and can see which methods were called and with what arguments.[50]
Object classes facilitate ease of maintenance via encapsulation. When developers need to change the behavior of an object they can localize the change to just that object and its component parts. This reduces the potential for unwanted side effects from maintenance enhancements.
Software re-use is also a major benefit of using object classes. Classes facilitate re-use via inheritance and interfaces. When a new behavior is required, it can often be achieved by creating a new class that inherits the default behaviors and data of its superclass and then tailors some aspect of the behavior or data accordingly. Re-use via an interface (method invocation) occurs when another object wants to invoke (rather than create a new kind of) some object class. This method of re-use removes many of the common errors that can make their way into software when one program re-uses code from another.[51]
## Run-time representation
As a data type, a class is usually considered as a compile-time construct.[52] A language or library may also support prototype or factory metaobjects that represent run-time information about classes, or even represent metadata that provides access to reflection facilities and ability to manipulate data structure formats at run-time. Many languages distinguish this kind of run-time type information about classes from a class on the basis that the information is not needed at run-time. Some dynamic languages do not make strict distinctions between run-time and compile-time constructs, and therefore may not distinguish between metaobjects and classes.
For example, if Human is a metaobject representing the class Person, then instances of class Person can be created by using the facilities of the Human metaobject.
## Notes
1. Gamma et al. 1995, p. 14.
2. Bruce 2002, 2.1 Objects, classes, and object types, https://books.google.com/books?id=9NGWq3K1RwUC&pg=PA18.
3. Gamma et al. 1995, p. 17.
4. "3. Data model". The Python Language Reference. Python Software Foundation.
5. Booch 1994, p. 86-88.
6. "Classes (I)". C++ Language Tutorial. cplusplus.com.
7. "Classes (II)". C++ Language Tutorial. cplusplus.com.
8. Booch 1994, p. 105.
9. Jamrich, Parsons, June (2015-06-22). New perspectives computer concepts, 2016. Comprehensive. Boston, MA. ISBN 9781305271616. OCLC 917155105.
10. "Controlling Access to Members of a Class". The Java Tutorials. Oracle.
11. "OOP08-CPP. Do not return references to private data". CERT C++ Secure Coding Standard. Carnegie Mellon University. 2010-05-10.
12. Ben-Ari, Mordechai (2007-01-24). "2.2 Identifiers". Compile and Runtime Errors in Java.
13. Wild, Fred. "C++ Interfaces". Dr. Dobb's. UBM Techweb.
14. Thomas; Hunt. "Classes, Objects, and Variables". Programming Ruby: The Pragmatic Programmer's Guide. Ruby-Doc.org.
15. "Friendship and inheritance". C++ Language Tutorial. cplusplus.com.
16. Booch 1994, p. 180.
17. Booch 1994, p. 128-129.
18. Booch 1994, p. 112.
19. "Interfaces". The Java Tutorials. Oracle.
20. Jacobsen, Ivar; Magnus Christerson; Patrik Jonsson; Gunnar Overgaard (1992). Object Oriented Software Engineering. Addison-Wesley ACM Press. pp. 43–69. ISBN 0-201-54435-0.
21. Knublauch, Holger; Oberle, Daniel; Tetlow, Phil; Wallace, Evan (2006-03-09). "A Semantic Web Primer for Object-Oriented Software Developers". W3C.
22. Bell, Donald. "UML Basics: The class diagram". developer Works. IBM.
23. Booch 1994, p. 179.
24. "Polymorphism". C++ Language Tutorial. cplusplus.com.
25. "Abstract Methods and Classes". The Java Tutorials. Oracle.
26. "Class Abstraction". PHP Manual. The PHP Group.
27. "Interfaces (C# Programming Guide)". C# Programming Guide. Microsoft.
28. "Inheritance (C# Programming Guide)". C# Programming Guide. Microsoft.
29. Booch 1994, p. 133-134.
30. Thomas; Hunt. "Classes and Objects". Programming Ruby: The Pragmatic Programmer's Guide. Ruby-Doc.org.
31. Booch 1994, p. 134.
32. "MOP: Concepts". The Common Lisp Object System MetaObject Protocol. Association of Lisp Users.
33. "sealed (C# Reference)". C# Reference. Microsoft.
34. "Writing Final Classes and Methods". The Java Tutorials. Oracle.
35. "PHP: Final Keyword". PHP Manual. The PHP Group.
36. "String (Java Platform SE 7)". Java Platform, Standard Edition 7: API Specification. Oracle.
37. Brand, Sy (2 March 2020). "The Performance Benefits of Final Classes". Microsoft.
38. "9. Classes". Python.org. "As is true for modules, classes partake of the dynamic nature of Python: they are created at runtime, and can be modified further after creation."
39. mairaw; BillWagner; tompratt-AQ (2015-09-19), "Partial Classes and Methods", C# Programming Guide (Microsoft), retrieved 2018-08-08
40. Apple (2014-09-17), "Customizing Existing Classes", Programming with Objective-C (Apple), retrieved 2018-08-08
41. "Static Classes and Static Class Members (C# Programming Guide)". C# Programming Guide. Microsoft.
42. "What is an Object?". oracle.com. Oracle Corporation.
43. Booch, Grady; Robert A. Maksimchuk; Michael W. Engle; Bobbi J. Young Ph.D.; Jim Conallen; Kelli A. Houston (April 30, 2007). Object-Oriented Analysis and Design with Applications. Addison-Wesley Professional. pp. 1–28. ISBN 978-0-201-89551-3. Retrieved 20 December 2013. "There are fundamental limiting factors of human cognition; we can address these constraints through the use of decomposition, abstraction, and hierarchy."
44. Jacobsen, Ivar; Magnus Christerson; Patrik Jonsson; Gunnar Overgaard (1992). Object Oriented Software Engineering. Addison-Wesley ACM Press. ISBN 0-201-54435-0.
45. "C++ International standard". ISO/IEC JTC1/SC22 WG21.
Atmospheric Chemistry and Physics: an interactive open-access journal of the European Geosciences Union
Atmos. Chem. Phys., 18, 6187–6206, 2018
https://doi.org/10.5194/acp-18-6187-2018
Research article 03 May 2018
# Advanced source apportionment of carbonaceous aerosols by coupling offline AMS and radiocarbon size-segregated measurements over a nearly 2-year period
Athanasia Vlachou1, Kaspar R. Daellenbach1, Carlo Bozzetti1, Benjamin Chazeau2, Gary A. Salazar3, Soenke Szidat3, Jean-Luc Jaffrezo4, Christoph Hueglin5, Urs Baltensperger1, Imad El Haddad1, and André S. H. Prévôt1
• 1Laboratory of Atmospheric Chemistry, Paul Scherrer Institute, Villigen PSI, 5232, Switzerland
• 2Aix-Marseille Université, CNRS, LCE, Marseille, France
• 3Department of Chemistry and Biochemistry and Oeschger Centre for Climate Change Research, University of Bern, 3012 Bern, Switzerland
• 4Université Grenoble Alpes, CNRS, IRD, G-INP, IGE, 38000 Grenoble, France
• 5Swiss Federal Laboratories for Materials Science and Technology, Empa, 8600 Dübendorf, Switzerland
Abstract
Carbonaceous aerosols are related to adverse human health effects. Therefore, identification of their sources and analysis of their chemical composition is important. The offline AMS (aerosol mass spectrometer) technique offers quantitative separation of organic aerosol (OA) factors which can be related to major OA sources, either primary or secondary. While primary OA can be more clearly separated into sources, secondary (SOA) source apportionment is more challenging because different sources – anthropogenic or natural, fossil or non-fossil – can yield similar highly oxygenated mass spectra. Radiocarbon measurements provide unequivocal separation between fossil and non-fossil sources of carbon. Here we coupled these two offline methods and analysed the OA and organic carbon (OC) of different size fractions (particulate matter below 10 and 2.5 µm – PM10 and PM2.5, respectively) from the Alpine valley of Magadino (Switzerland) during the years 2013 and 2014 (219 samples). The combination of the techniques gave further insight into the characteristics of secondary OC (SOC) which was rather based on the type of SOC precursor and not on the volatility or the oxidation state of OC, as typically considered. Out of the primary sources separated in this study, biomass burning OC was the dominant one in winter, with average concentrations of 5.36 ± 2.64 µg m−3 for PM10 and 3.83 ± 1.81 µg m−3 for PM2.5, indicating that wood combustion particles were predominantly generated in the fine mode. The additional information from the size-segregated measurements revealed a primary sulfur-containing factor, mainly fossil, detected in the coarse size fraction and related to non-exhaust traffic emissions with a yearly average PM10 (PM2.5) concentration of 0.20 ± 0.24 µg m−3 (0.05 ± 0.04 µg m−3). A primary biological OC (PBOC) was also detected in the coarse mode peaking in spring and summer with a yearly average PM10 (PM2.5) concentration of 0.79 ± 0.31 µg m−3 (0.24 ± 0.20 µg m−3). 
The secondary OC was separated into two oxygenated, non-fossil OC factors which were identified based on their seasonal variability (i.e. summer and winter oxygenated organic carbon, OOC) and a third anthropogenic OOC factor which correlated with fossil OC mainly peaking in winter and spring, contributing on average 13 % ± 7 % (10 % ± 9 %) to the total OC in PM10 (PM2.5). The winter OOC was also connected to anthropogenic sources, contributing on average 13 % ± 13 % (6 % ± 6 %) to the total OC in PM10 (PM2.5). The summer OOC (SOOC), stemming from oxidation of biogenic emissions, was more pronounced in the fine mode, contributing on average 43 % ± 12 % (75 % ± 44 %) to the total OC in PM10 (PM2.5). In total the non-fossil OC significantly dominated the fossil OC throughout all seasons, by contributing on average 75 % ± 24 % to the total OC. The results also suggested that during the cold period the prevailing source was residential biomass burning while during the warm period primary biological sources and secondary organic aerosol from the oxidation of biogenic emissions became important. However, SOC was also formed by aged fossil fuel combustion emissions not only in summer but also during the rest of the year.
1 Introduction
The field deployment of the high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS, Canagaratna et al., 2007) has advanced our understanding of aerosol chemistry and dynamics. The HR-ToF-AMS provides quantitative mass spectra of the non-refractory particle component, including, but not limited to, organic aerosol (OA), ammonium sulfate and nitrate, by combining the flash vaporization of particle species and the electron ionization of the resulting gases. The application of positive matrix factorization (PMF, Paatero, 1997) techniques has demonstrated that the collected OA mass spectra contain sufficient information to quantitatively distinguish aerosol sources. However, the cost and intensive maintenance requirements of this instrument significantly hinder its systematic, long-term deployment as part of a dense network, and most applications are limited to a few weeks of measurements (Jimenez et al., 2009; El Haddad et al., 2013; Crippa et al., 2013). This information is critical for model validation and policy directives. The Aerodyne aerosol chemical speciation monitors (ACSM, Ng et al., 2011; Fröhlich et al., 2013) were developed as a low-cost, low-maintenance alternative to the AMS; however, their reduced chemical resolution can limit the factor separation achievable by source apportionment.
The recent utilization of the AMS for the offline analysis of ambient filter samples (Daellenbach et al., 2016) has significantly broadened the spatial and temporal scales accessible to high-resolution AMS measurements (Daellenbach et al., 2017; Bozzetti et al., 2017a, b). In addition, the technique enables measurement of aerosol composition outside the normal size transmission window of the AMS; the standard AMS can measure up to only 1 µm, or 2.5 µm with a newly developed aerodynamic lens (Williams et al., 2013; Elser et al., 2016). This capability has been used to quantify the contributions of primary biological organic aerosol to OA in PM10 filters (Bozzetti et al., 2016). Finally, the offline AMS technique allows a retrospective reaction to critical air quality events. For example, one of the applications of this approach had been to examine a severe haze event in China which affected a total area of 1.3 million km2 and 800 million people (Huang et al., 2014).
A major limitation of the technique is the resolution of low water solubility fractions, as the recoveries of some of them are not accessible. Despite this, source apportionment results obtained using this technique are in good agreement with online AMS or ACSM measurements. PMF analysis of offline AMS data has yielded factors related with primary emissions from traffic, biomass burning and coal burning, and secondary organic aerosols (SOA) differentiated according to their different seasonal contributions. Nevertheless, the identification of SOA precursors using the AMS has proven challenging, due to the evolution of different precursors towards chemically similar species and the extensive fragmentation by the electron ionization used in the AMS.
The radiocarbon (14C) analysis of particulate matter has proven to be a powerful technique providing an unequivocal distinction between non-fossil (e.g. biomass burning and biogenic emissions) and fossil (e.g. traffic exhaust emissions and coal burning) sources (Lemire et al., 2002; Szidat et al., 2004, 2009). The measurement of the 14C content of total carbon (TC), which comprises the elemental carbon (EC) originating from combustion sources and the organic carbon (OC), has been the subject of many studies (Schichtel et al., 2008; Glasius et al., 2011; Genberg et al., 2011; Zhang et al., 2012, 2016; Zotter et al., 2014b; Bonvalot et al., 2016). Results have shown that in European sites, especially in Alpine valleys, the non-fossil sources play an important role during winter due to biomass burning and in summer due to biogenic sources (Gelencsér et al., 2007; Zotter et al., 2014b). Moreover, at regional background sites close to urbanized areas in Europe (Dusek et al., 2017) as well as in megacities such as Los Angeles and Beijing, fossil OA may also exhibit significant contributions to the total OA (Zotter et al., 2014a; Zhang et al., 2017). However, the determination of the 14C content in EC and OC separately is challenging and therefore not often attempted for extended datasets.
The coupling of offline AMS/PMF with radiocarbon analysis provides further insight into the sources of organic aerosols, and in particular those related to SOA precursors. Such a combination has already been attempted (Minguillón et al., 2011; Zotter et al., 2014a; Huang et al., 2014; Beekmann et al., 2015; Ulevicius et al., 2016); however, the focus has mostly been on high OA concentration episodes, while little is known about the yearly cycle of the most important SOA precursors and the size resolution of the different fossil and non-fossil OA fractions.
Here, we present offline AMS measurements of a total of 219 samples: 154 PM10 samples representative of the years 2013 and 2014, and 65 PM2.5 samples collected concurrently with the PM10 samples in 2014 (January to September). 14C analysis was also performed on a subset of 33 PM10 samples covering the year 2014. The size-segregated samples offer better insight into the mechanisms by which the different fractions enter the atmosphere, while the coupling of offline AMS/PMF and 14C analysis provides a more profound understanding of the fossil and non-fossil SOA precursors on a yearly basis.
2 Methods
## 2.1 Site and sample collection
Magadino is located in an Alpine valley in the southern part of Switzerland, south of the Alps (Fig. S1 in the Supplement). The station (46°9′37′′ N, 8°56′2′′ E, 204 m a.s.l.) belongs to the Swiss National Air Pollution Monitoring Network (NABEL) and is classified as a rural background site. It is located relatively far from busy roads or residential areas and is surrounded by agricultural fields and forests. It is ca. 1.4 km away from Cadenazzo train station, ca. 8 km from Lake Maggiore (Lago Maggiore) and ca. 7 km from the small Locarno Airport.
The filter samples under examination are 24 h integrated PM10 (from 4 January 2013 to 28 September 2014, with a 4-day interval) and PM2.5 (from 3 January to 28 September 2014, with a 4-day interval). PM was collected on 14 cm (exposed diameter) quartz fibre filters, using a high volume sampler (500 L min−1). After the sampling, filter samples and field blanks were wrapped in lint-free paper and stored at −20 °C.
## 2.2 Offline AMS method
The offline AMS method is thoroughly described by Daellenbach et al. (2016). Briefly, four punches of 16 mm diameter from each filter sample were extracted in 15 mL of ultrapure water (18.2 MΩ cm at 25 °C, total organic carbon, TOC, < 3 ppb) in an ultrasonic bath for 20 min at 30 °C. The water extracts were then filtered through a 0.45 µm nylon membrane syringe filter and introduced into an Apex Q nebulizer (Elemental Scientific Inc., Omaha, NE, USA) operating at 60 °C. The resulting aerosol, generated in Ar (≥ 99.998 % vol., Carbagas, 3073, Gümligen, Switzerland), was dried by a Nafion dryer and subsequently injected into and analysed by the HR-ToF-AMS.
To correct for the interference of NH4NO3 on the CO${}_{\mathrm{2}}^{+}$ signal as described in Pieber et al. (2016), several dilutions of NH4NO3 in ultrapure water were measured regularly as well. The CO${}_{\mathrm{2}}^{+}$ signal was then calculated as
$\begin{array}{}\text{(1)}& {\mathrm{CO}}_{\mathrm{2},\mathrm{real}}={\mathrm{CO}}_{\mathrm{2},\mathrm{meas}}-{\left(\frac{{\mathrm{CO}}_{\mathrm{2},\mathrm{meas}}}{{\mathrm{NO}}_{\mathrm{3},\mathrm{meas}}}\right)}_{{\mathrm{NH}}_{\mathrm{4}}{\mathrm{NO}}_{\mathrm{3},\mathrm{pure}}}\cdot {\mathrm{NO}}_{\mathrm{3},\mathrm{meas}},\end{array}$
where CO2,real represents the corrected CO${}_{\mathrm{2}}^{+}$ signal, CO2,meas and NO3,meas are signals from the samples measured, and the correction factor ${\left(\frac{{\mathrm{CO}}_{\mathrm{2},\mathrm{meas}}}{{\mathrm{NO}}_{\mathrm{3},\mathrm{meas}}}\right)}_{{\mathrm{NH}}_{\mathrm{4}}{\mathrm{NO}}_{\mathrm{3},\mathrm{pure}}}$ was determined during the campaign by measuring aqueous NH4NO3.
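As a minimal sketch, the correction of Eq. (1) can be applied per sample in Python. The function and variable names are our own (not from any AMS software), and the numerical values below are placeholders, since the real correction factor is determined from the NH4NO3 dilution measurements:

```python
import numpy as np

def correct_co2_signal(co2_meas, no3_meas, ratio_nh4no3):
    """Apply Eq. (1): subtract the NH4NO3-induced interference from the
    measured CO2+ signal, using the correction factor determined from
    aqueous NH4NO3 standards."""
    return np.asarray(co2_meas) - ratio_nh4no3 * np.asarray(no3_meas)

# Illustrative values only: a measured CO2+ signal of 1.0, a NO3 signal
# of 5.0 and a placeholder correction factor of 0.04.
co2_real = correct_co2_signal(1.0, 5.0, 0.04)
```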
## 2.3 14C analysis
Based on the instrumentation setup described in Agrios et al. (2015) and on the method described in Zotter et al. (2014b), radiocarbon analysis of TC and EC was conducted on a set of 33 filters. The 14C content of blank filters was measured for TC only, as there was no EC found on these filters. All the 14C results are given in fractions of modern carbon (fM), representing the 14C/12C ratio of each sample relative to the respective 14C/12C ratio of the reference year 1950 (Stuiver and Polach, 1977).
### 2.3.1 14C measurements of TC
For the determination of the 14C content of TC, a Sunset OC/EC analyser (Model 4L, Sunset Laboratory, USA) equipped with a non-dispersive infrared (NDIR) detector was first used in order to combust each filter punch (1.5 cm2) under pure O2 (99.9995 %) at 760 °C for 400 s. The generated CO2 was then captured online by a zeolite trap within a gas inlet system (GIS) and then injected in the accelerator mass spectrometer (AMS*) mini carbon dating system (MICADAS) at the Laboratory for the Analysis of Radiocarbon with AMS* (LARA), University of Bern, Switzerland (Szidat et al., 2014) for 14C measurement. (Note that we used AMS* and AMS as abbreviations for the accelerator mass spectrometer and the aerosol mass spectrometer, respectively, to avoid confusion.)
The fM of TC underwent a blank correction following an isotopic mass balance approach:
$\begin{array}{}\text{(2)}& {f}_{{\mathrm{M}}_{\mathrm{b},\mathrm{cor}}}=\frac{{\mathrm{mC}}_{\mathrm{sample}}\cdot {f}_{\mathrm{M},\mathrm{sample}}-{\mathrm{mC}}_{\mathrm{b}}\cdot {f}_{\mathrm{M},\mathrm{b}}}{{\mathrm{mC}}_{\mathrm{sample}}-{\mathrm{mC}}_{\mathrm{b}}},\end{array}$
where ${f}_{{\mathrm{M}}_{\mathrm{b},\mathrm{cor}}}$ is the blank-corrected fM; mCsample and mCb are the carbon masses in the sample and blank, respectively; and fM,sample and fM,b are the fM measured for the sample and blank, respectively. Error propagation was applied for the determination of the ${f}_{{\mathrm{M}}_{\mathrm{b},\mathrm{cor}}}$ uncertainty. The fM,b was 0.61 ± 0.10 and the blank concentration was 1.1 ± 0.2 µg C m−3.
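A sketch of the blank correction of Eq. (2) with first-order error propagation. For brevity the carbon-mass uncertainties are treated as exact here, whereas the study propagates all errors; the function name is our own:

```python
import numpy as np

def blank_correct_fm(fm_sample, u_fm_sample, mc_sample, mc_blank,
                     fm_blank=0.61, u_fm_blank=0.10):
    """Blank-correct f_M via the isotopic mass balance of Eq. (2) and
    propagate the f_M uncertainties (carbon masses treated as exact)."""
    mc_net = mc_sample - mc_blank
    fm = (mc_sample * fm_sample - mc_blank * fm_blank) / mc_net
    u_fm = np.sqrt((mc_sample * u_fm_sample) ** 2
                   + (mc_blank * u_fm_blank) ** 2) / mc_net
    return fm, u_fm
```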
### 2.3.2 14C measurements of EC
For the EC isolation of the samples, each filter punch (1.5 cm2) was analysed by the Sunset EC/OC analyser with the use of the Swiss_4S protocol developed by Zhang et al. (2012). According to the protocol, the heating is conducted in four different steps under different gas conditions: step one under pure O2 at 375 °C for 150 s, step two under pure O2 at 475 °C for 180 s, step three under He (> 99.999 %) at 450 °C for 180 s followed by an increase in the temperature up to 650 °C for another 180 s, and step four under pure O2 at 760 °C for 150 s. Each filter sample was previously water extracted and dried, in order to minimize the positive artefact induced by the OC by removing the water-soluble OC (WSOC), which is known to produce charring (Piazzalunga et al., 2011; Zhang et al., 2012). By this method, the water-insoluble OC (WINSOC) was removed during the first three steps of the Swiss_4S protocol. In the fourth step, EC was combusted and then trapped in the GIS and measured by the AMS* MICADAS, as described above.
This protocol was preferred over the protocols commonly used in thermo-optical methods (EUSAAR-2 or NIOSH) because it optimizes the separation of the OC and EC fractions by minimizing (i) the positive artefact of charring produced by WSOC during the first three steps and (ii) the premature losses, during the removal of the WINSOC in the third step, of the less refractory part of EC, which may preferentially originate from non-fossil sources such as biomass burning.
Following a similar principle to Zotter et al. (2014b), both charring and EC yield, which is the part of EC that remained on the filter after step three and before step four in the Swiss_4S protocol, were quantified and corrected for with the help of the laser mounted on the Sunset analyser. The laser transmittance is monitored continuously during the heating process. Charring in step three was quantified as
$\begin{array}{}\text{(3)}& {\mathrm{Charring}}_{{\mathrm{S}}_{\mathrm{3}}}=\phantom{\rule{0.125em}{0ex}}\frac{max{\mathrm{ATN}}_{{\mathrm{S}}_{\mathrm{3}}}-\phantom{\rule{0.125em}{0ex}}\mathrm{initial}\phantom{\rule{0.125em}{0ex}}{\mathrm{ATN}}_{{\mathrm{S}}_{\mathrm{2}}}}{\mathrm{initial}\phantom{\rule{0.125em}{0ex}}{\mathrm{ATN}}_{{\mathrm{S}}_{\mathrm{1}}}},\end{array}$
where ATN refers to the laser attenuation, $max{\mathrm{ATN}}_{{\mathrm{S}}_{\mathrm{3}}}$ is the maximum attenuation in step three, and $\mathrm{initial}\phantom{\rule{0.125em}{0ex}}{\mathrm{ATN}}_{{\mathrm{S}}_{\mathrm{2}}}$ and $\mathrm{initial}\phantom{\rule{0.125em}{0ex}}{\mathrm{ATN}}_{{\mathrm{S}}_{\mathrm{1}}}$ are the initial attenuations in step two and one, respectively.
The EC yield in step three was quantified as
$\begin{array}{}\text{(4)}& {\mathrm{ECyield}}_{{\mathrm{S}}_{\mathrm{3}}}=\frac{\mathrm{initial}\phantom{\rule{0.125em}{0ex}}{\mathrm{ATN}}_{{\mathrm{S}}_{\mathrm{3}}}}{max{\mathrm{ATN}}_{{\mathrm{S}}_{\mathrm{3}}}}\cdot \frac{\mathrm{initial}\phantom{\rule{0.125em}{0ex}}{\mathrm{ATN}}_{{\mathrm{S}}_{\mathrm{2}}}}{max{\mathrm{ATN}}_{{\mathrm{S}}_{\mathrm{1}}}}.\end{array}$
The average charred OC was found to be 4 ± 2 % and the recovered EC for all samples was on average 71 ± 7 %.
As there is a linear relationship between the fraction of modern carbon for EC (${f}_{{\mathrm{M}}_{\mathrm{EC}}}$) and the EC yield (Zhang et al., 2012), the slope can be used to extrapolate ${f}_{{\mathrm{M}}_{\mathrm{EC}}}$ to 100 % EC yield. According to Zotter et al. (2014b), a slope of 0.35 ± 0.11 was used to correct all ${f}_{{\mathrm{M}}_{\mathrm{EC}}}$ to 100 % EC yield, such that
$\begin{array}{}\text{(5)}& {f}_{{\mathrm{M}}_{\mathrm{EC},\mathrm{total}}}=\mathrm{slope}\cdot \left(\mathrm{1}-{\mathrm{ECyield}}_{{\mathrm{S}}_{\mathrm{3}}}\right)+{f}_{{\mathrm{M}}_{\mathrm{EC}}}.\end{array}$
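Equations (3)–(5) can be combined into a short routine. The attenuation dictionary keys below are illustrative, not outputs of the Sunset software:

```python
def ec_yield_s3(atn):
    """EC yield after step three of the Swiss_4S protocol (Eq. 4)."""
    return (atn["initial_S3"] / atn["max_S3"]) * (atn["initial_S2"] / atn["max_S1"])

def fm_ec_total(fm_ec, ec_yield, slope=0.35):
    """Extrapolate f_M(EC) to 100 % EC yield (Eq. 5), using the
    slope of 0.35 from the text."""
    return slope * (1.0 - ec_yield) + fm_ec
```

At 100 % EC yield the correction vanishes; for the average recovered EC of 71 %, the correction adds 0.35 × 0.29 ≈ 0.10 to the measured ${f}_{{\mathrm{M}}_{\mathrm{EC}}}$.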
### 2.3.3 Calculation of 14C content of OC
The fraction of modern carbon of OC (${f}_{{\mathrm{M}}_{\mathrm{OC}}}$) was calculated following a mass balance approach:
$\begin{array}{}\text{(6)}& {f}_{{\mathrm{M}}_{\mathrm{OC}}}=\frac{\mathrm{TC}\cdot {f}_{{\mathrm{M}}_{\mathrm{TC}}}-\mathrm{EC}\cdot {f}_{{\mathrm{M}}_{\mathrm{EC}}}}{\mathrm{TC}-\mathrm{EC}},\end{array}$
where TC and EC are the concentrations of total and elemental carbon, respectively, and ${f}_{{\mathrm{M}}_{\mathrm{TC}}}$ and ${f}_{{\mathrm{M}}_{\mathrm{EC}}}$ are the fractions of modern carbon of TC and EC, respectively. The uncertainty of ${f}_{{\mathrm{M}}_{\mathrm{OC}}}$ was calculated by propagating the error of each component of Eq. (6).
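The mass balance of Eq. (6) is a one-liner; a sketch (the error propagation applied in the study is omitted here):

```python
def fm_oc(tc, ec, fm_tc, fm_ec):
    """Fraction of modern carbon of OC from the TC/EC mass balance (Eq. 6)."""
    return (tc * fm_tc - ec * fm_ec) / (tc - ec)
```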
### 2.3.4 Nuclear bomb peak correction
The fM of fossil samples is expected to be zero, as their 14C has completely decayed, whereas the fM of non-fossil samples is expected to be close to unity. However, due to the extensive nuclear bomb testing during the late 1950s and early 1960s, the radiocarbon amount in the atmosphere increased dramatically because of the high neutron flux during the explosions. Therefore the measured fM of non-fossil samples may exhibit values greater than one (Levin et al., 2010a). To correct for this effect, the fM is normalized to a reference non-fossil fraction (fNF,ref), which represents the amount of 14C currently in the atmosphere compared to 1950, before the nuclear bomb tests. As EC comes from either biomass burning or fossil sources, the non-fossil fraction of EC (fNF,EC) equals the fM coming from biomass burning (fM,bb). The latter was estimated by a tree growth model (Mohn et al., 2008) and was equal to 1.101. The non-fossil fraction of OC (fNF,OC) is calculated as
$\begin{array}{}\text{(7)}& {f}_{\mathrm{NF},\mathrm{OC}}={p}_{\mathrm{bio}}\cdot {f}_{\mathrm{M},\mathrm{bio}}+{p}_{\mathrm{bb}}\cdot {f}_{\mathrm{M},\mathrm{bb}},\end{array}$
where fM,bio (= 1.023) is the fraction of modern carbon of biogenic sources and was estimated from 14CO2 measurements in Schauinsland (Levin et al., 2010a). The fractions of biogenic sources (pbio) and biomass burning (pbb) to the total non-fossil sources were set to 0.5 since both sources are important in Magadino during the year (biomass burning in winter, biogenic sources in summer).
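The bomb-peak normalization can be sketched with the reference values given in the text (the function name is ours):

```python
# Reference values taken from the text:
F_M_BB = 1.101   # biomass burning, from the tree growth model
F_M_BIO = 1.023  # biogenic, from 14CO2 measurements at Schauinsland

# Eq. (7) with p_bio = p_bb = 0.5:
F_NF_REF_OC = 0.5 * F_M_BIO + 0.5 * F_M_BB

def nonfossil_fraction(fm, f_nf_ref):
    """Normalize a measured f_M by the reference non-fossil fraction,
    so that a purely non-fossil sample maps to 1."""
    return fm / f_nf_ref
```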
## 2.4 Additional chemical analyses

Organic and elemental carbon fractions were determined by a Sunset EC/OC analyser with the use of the EUSAAR-2 thermal-optical transmittance protocol (Cavalli et al., 2010). Water-soluble organic carbon was measured by a total organic carbon analyser (Jaffrezo et al., 2005) with the use of catalytic oxidation of water-extracted filter samples and detection of the resulting CO2 with an NDIR detector. The concentrations of major ionic species (K+, Na+, Mg2+, Ca2+, NH${}_{\mathrm{4}}^{+}$, Cl−, NO${}_{\mathrm{3}}^{-}$ and SO${}_{\mathrm{4}}^{\mathrm{2}-}$) as well as methane sulfonic acid (MSA) were determined by ion chromatography (Jaffrezo et al., 1998). Anhydrous sugars (levoglucosan, mannosan, galactosan) were analysed by an ion chromatograph (Dionex ICS3000) using high-performance anion exchange chromatography (HPAEC) with pulsed amperometric detection. Cellulose was analysed by performing enzymatic conversion of cellulose to D-glucose (Kunit and Puxbaum, 1996), and D-glucose was determined by HPAEC.
3 Source apportionment
## 3.1 Method
The obtained organic mass spectra from the offline AMS measurements were analysed by positive matrix factorization (Paatero and Tapper, 1994; Ulbrich et al., 2009). PMF attempts to solve the bilinear matrix equation,
$\begin{array}{}\text{(8)}& {\mathbf{X}}_{ij}=\phantom{\rule{0.125em}{0ex}}\sum _{k}{\mathbf{G}}_{i,k}{\mathbf{F}}_{k,j}+{\mathbf{E}}_{i,j},\end{array}$
by following a weighted least-squares approach. In the case of aerosol mass spectrometry, i represents the time index, j the fragment and k the factor number. If Xij is the matrix of the organic mass spectral data, si,j the corresponding error matrix, Gi,k the matrix of the factor time series, Fk,j the matrix of the factor profiles and Ei,j the model residual matrix, then PMF determines Gi,k and Fk,j such that the Frobenius norm of the element-wise ratio Ei,j/si,j is minimized. The allowed Gi,k and Fk,j are always non-negative. The input error matrix si,j includes the measurement uncertainty (ion-counting statistics and ion-to-ion signal variability at the detector) (Allan et al., 2003) as well as the blank variability. Fragments with a signal-to-noise ratio (SNR) below 0.2 were removed and those with SNR lower than 2 were down-weighted by a factor of 3, as recommended by Paatero and Hopke (2003). Both the input data and error matrices were scaled to the calculated water-soluble organic matter (WSOMi) concentration:
$\begin{array}{}\text{(9)}& {\mathrm{WSOM}}_{i}=\phantom{\rule{0.125em}{0ex}}\frac{\mathrm{OM}}{\mathrm{OC}}\cdot \phantom{\rule{0.125em}{0ex}}{\mathrm{WSOC}}_{i},\end{array}$
where $\frac{\mathrm{OM}}{\mathrm{OC}}$ is determined from the AMS measurements and WSOCi is the water-soluble OC measured by the TOC analyser.
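The weighted factorization of Eq. (8) can be illustrated with a toy multiplicative-update solver. This is a crude stand-in for the actual PMF/ME-2 engine, which uses a far more robust algorithm; all names here are ours:

```python
import numpy as np

def weighted_nmf(X, S, k, n_iter=2000, seed=0, eps=1e-12):
    """Toy non-negative factorization X ≈ G F minimizing the weighted
    residual ||(X - G F) / S||_F, i.e. the PMF objective of Eq. (8)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = 1.0 / S**2                     # element-wise least-squares weights
    G = rng.random((n, k)) + eps
    F = rng.random((k, m)) + eps
    for _ in range(n_iter):
        GF = G @ F
        F *= (G.T @ (W * X)) / (G.T @ (W * GF) + eps)
        GF = G @ F
        G *= ((W * X) @ F.T) / ((W * GF) @ F.T + eps)
    return G, F
```

Like PMF itself, the multiplicative updates keep G and F non-negative by construction.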
The Source Finder toolkit (SoFi v.4.9, Canonaco et al., 2013) for IGOR Pro software package (Wavemetrics, Inc., Portland, OR, USA) was used to run the PMF algorithm. The PMF was solved by the multilinear engine 2 (ME-2, Paatero, 1999), which allows the constraining of the Fk,j elements to vary within a certain range defined by the scalar α ($\mathrm{0}\le \mathit{\alpha }\le \mathrm{1}$), such that the modelled ${\mathbf{F}}_{k,j}^{\prime }$ equals
$\begin{array}{}\text{(10)}& {\mathbf{F}}_{k,j}^{\prime }={\mathbf{F}}_{k,j}±\mathit{\alpha }\cdot {\mathbf{F}}_{k,j}.\end{array}$
Here we constrained only the hydrocarbon-like OA factor (HOA), using the high-resolution reference profile from Crippa et al. (2013).
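The α constraint of Eq. (10) simply defines an allowed interval around each anchor profile element; a sketch:

```python
def avalue_bounds(f_ref, alpha):
    """Allowed range of a constrained profile element under ME-2 (Eq. 10)."""
    return f_ref * (1.0 - alpha), f_ref * (1.0 + alpha)
```

With α = 0 the element is fixed to the reference value; with α = 1 it may vary between 0 and twice the reference.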
## 3.2 Sensitivity analysis
To understand the variability of our dataset we explored 4–10 factor solutions and retained the 7-factor solution as the best representation of the data. The exploration of the PMF solutions is thoroughly described in Sect. S.1 in the Supplement.
We assessed the accuracy of the PMF results by bootstrapping the input data (Davison and Hinkley, 1997). New input data and error matrices were created by randomly resampling the time series from the original input matrix (223 samples in total: 219 + 4 remeasurements of PM10 samples) with replacement; i.e. any sample from the whole population can be resampled more than once. Each sample measurement included on average blocks of 12 mass spectral repetitions; therefore, resampling was performed on these blocks. Out of the 223 original samples, some were represented several times, while others were not represented at all. Overall, the resampled data made up on average 64 ± 2 % of the total original data per bootstrap run. We performed 180 bootstrap runs, with each of the generated matrices being perturbed by varying the Xij elements within twice the corresponding error matrix si,j. Within the resampling operation, the α value used to set the HOA constraining strength was varied between 0 and 1 with an increment of 0.1 to assess the sensitivity of the results to the α value.
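The resampling scheme can be sketched as follows, with one index standing for one block of repetitions. The expected fraction of unique samples per run, 1 − (1 − 1/n)^n ≈ 63 %, is consistent with the 64 ± 2 % quoted above:

```python
import numpy as np

def bootstrap_indices(n_samples, rng):
    """Draw a case-resampling bootstrap: n indices with replacement."""
    return rng.integers(0, n_samples, size=n_samples)

rng = np.random.default_rng(42)
idx = bootstrap_indices(223, rng)
unique_fraction = np.unique(idx).size / 223
```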
To select the physically plausible solutions we applied two criteria:
1. We accepted solutions where the average absolute concentrations of all factors in PM2.5 did not statistically significantly exceed their concentrations in PM10. For this we performed a paired t test with a significance level of 0.01 (Fig. S2 and Table S1 in the Supplement).
2. We excluded outlier solutions identified by examining the correlation of factor time series from bootstrap runs with their respective factor time series from the average of all bootstrap runs. The rejected solutions included factors that did not correlate with the corresponding average factor time series, meaning that one of the factors was not separated (Fig. S3 in the case of water-soluble primary biological organic carbon, PBOC).
In total 24 bootstrap runs were retained after the application of the aforementioned criteria.
## 3.3 Recoveries
In order to rescale the WSOC concentration of a factor k to its total concentration OCk, we used factor recoveries (Rk) as proposed by Daellenbach et al. (2016). First, the WSOMk was calculated as
$\begin{array}{}\text{(11)}& {\mathrm{WSOM}}_{k}=\phantom{\rule{0.125em}{0ex}}{f}_{k,\mathrm{WSOM}}\cdot {\mathrm{WSOC}}_{\mathrm{measured}}\cdot {\left(\frac{\mathrm{OM}}{\mathrm{OC}}\right)}_{\mathrm{bulk}},\end{array}$
where
$\begin{array}{}\text{(12)}& {f}_{k,\mathrm{WSOM}}=\phantom{\rule{0.125em}{0ex}}\frac{{\mathrm{WSOM}}_{k,\mathrm{measured}}}{{\sum }_{k}{\mathrm{WSOM}}_{k,\mathrm{measured}}}\end{array}$
and ${\left(\frac{\mathrm{OM}}{\mathrm{OC}}\right)}_{\mathrm{bulk}}$ is estimated from the input data matrix for the PMF.
The WSOMk was converted to WSOCk to fit the measured OC concentrations (determined by the Sunset EC/OC analyser). The WSOCk was determined as
$\begin{array}{}\text{(14)}& {\mathrm{WSOC}}_{k}=\phantom{\rule{0.125em}{0ex}}\frac{{f}_{k,\mathrm{WSOM}}\cdot \phantom{\rule{0.125em}{0ex}}{\mathrm{WSOC}}_{\mathrm{measured}}\cdot {\left(\frac{\mathrm{OM}}{\mathrm{OC}}\right)}_{\mathrm{bulk}}}{\left(\frac{\mathrm{OM}}{\mathrm{OC}}{\right)}_{k}},\end{array}$
where $\left(\frac{\mathrm{OM}}{\mathrm{OC}}{\right)}_{k}$ is calculated from each factor profile.
Finally, the recoveries were applied following Eq. (15):
$\begin{array}{}\text{(15)}& {\mathrm{OC}}_{i,k}=\frac{{\mathrm{WSOC}}_{i,k}}{{R}_{k}}.\end{array}$
To assess the recoveries and their uncertainties, we evaluated the sum of OCi,k against the measured OC (OCi,measured) by fitting Eq. (16). The starting values for the Rk fitting were based on Bozzetti et al. (2016) (for RPBOA) and Daellenbach et al. (2016) except RSCOA, which was randomly varied between 0 and 1 (increment: 10−4). While RHOA and RSCOA were constrained, RPBOA, RBBOA, RWOOA, RAOOA and RSOOA were determined by a non-negative multilinear fit (see below in Sect. 4.3 for a description of these PMF factors from offline AMS results). The multilinear fit was chosen to be non-negative because a negative Rk would mean a negative concentration of WSOCk or OCk. The fit was performed 100 times for each of the retained bootstrap solutions.
$\begin{array}{}\text{(16)}& {\mathrm{OC}}_{i,\mathrm{measured}}=\phantom{\rule{0.125em}{0ex}}{\sum }_{k}\frac{{\mathrm{WSOC}}_{i,k}}{{R}_{k}}\end{array}$
Each fit was initiated by perturbing the OCi,k and the WSOCi,k concentrations within their uncertainties, assuming a normal distribution of errors, to assess the influence of measurement precision on Rk. Additionally, we introduced a constant 5 % accuracy bias corresponding to the OC and WSOC measurement accuracy.
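Substituting b_k = 1/R_k makes Eq. (16) linear in the unknowns. The sketch below uses an ordinary least-squares fit in place of the non-negative multilinear fit of the study; the function name is ours:

```python
import numpy as np

def fit_recoveries(wsoc, oc_measured):
    """Fit the recoveries R_k of Eq. (16).

    wsoc: (n_samples, n_factors) matrix of WSOC_{i,k}
    oc_measured: (n_samples,) vector of measured OC
    An ordinary least-squares fit on b_k = 1/R_k stands in for the
    non-negative multilinear fit used in the study.
    """
    b, *_ = np.linalg.lstsq(wsoc, oc_measured, rcond=None)
    return 1.0 / b
```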
To select the environmentally meaningful solutions we applied the following criteria:
1. To retain the recoveries that achieved the OC mass closure, we estimated the OC residuals and discarded solutions where OC residuals were statistically different from 0 within 1 standard deviation for each size fraction individually and for winter and summer individually.
2. We also examined the dependence between the WSOC residuals and each factor WSOCi,k (t test, α=0.001). Overall, 55 % of the solutions were retained.
3. The physically plausible range of the recoveries is [0, 1]. However, the mathematically possible range can exceed the upper limit. Rk larger than 1 would mean that WSOCk is larger than OCk and is, therefore, non-physical. For this reason, out of the accepted solutions that survived the previous two criteria, the retained Rk combinations were weighted according to their physical interpretability. More specifically, fitting results with Rk larger than 1 were down-weighted according to the measurement uncertainties of WSOC and OC (see Sect. S.2, Fig. S4).
Figure 1. Concentrations of OM, EC and major ionic species for the years 2013 and 2014 (a), their seasonal concentrations (b) and relative contributions to the total measured mass within the particulate matter (PM10) (c). The ions Na+, K+, Mg2+, Ca2+ and Cl− are summed and shown as “Ions*”.
Figure 2. Time series of OC and EC concentrations in PM10 (a). 14C analysis results with the relative contributions of fossil EC, fossil OC, non-fossil OC and non-fossil EC to TC (b).
4 Results and discussion
## 4.1 PM10 composition
PM10 in Magadino has been characterized by high carbonaceous concentrations during winter (Gianini et al., 2012a; Zotter et al., 2014b). Figure 1a presents an overview of the PM10 composition, while Fig. 1b and c summarize the concentrations and relative contributions of each component to the total PM10, averaged per season. The peaks of OM and EC during winter (daily averages up to 26 and 5.9 µg m−3, respectively) are indications of increased wood-burning activity. Other Alpine sites close to Magadino, such as Roveredo and San Vittore in Switzerland, have also exhibited high OM concentrations due to residential wood burning (Szidat et al., 2007, for PM10 in Roveredo; Lanz et al., 2010, for PM1 in Roveredo; and Zotter et al., 2014b, for PM10 in San Vittore and Roveredo). The organic fraction dominated over the inorganic fraction not only in winter but throughout both years (Fig. 1c). Note that the EC concentrations are much lower in spring than in winter (Fig. 1b). The main inorganic aerosol components contributing to the total PM are NO${}_{\mathrm{3}}^{-}$, SO${}_{\mathrm{4}}^{\mathrm{2}-}$ and NH${}_{\mathrm{4}}^{+}$. NO${}_{\mathrm{3}}^{-}$ represented the second-largest component of PM10, exhibiting a seasonal cycle with higher concentrations during winter (2.9 µg m−3). The notable discrepancy in NO${}_{\mathrm{3}}^{-}$ concentrations between the first (2013) and second (2014) winter could be explained by the lower temperatures in January–February 2013 compared to 2014. Conversely, SO${}_{\mathrm{4}}^{\mathrm{2}-}$ showed a rather stable yearly cycle with slightly higher concentrations in summer (1.9 µg m−3) than in winter (1.3 µg m−3), despite the shallower boundary layer height in winter.
Figure 3. Concentrations in PM10 of OCf (a), OCnf (b), ECf (c) and ECnf (d), colour-coded by season. The ratios OCf/ECf, OCnf/ECnf and ECnf/EC are also displayed in (a), (b) and (d), respectively.
Table 1. Median OC and EC non-fossil fractions per season in PM10 with interquartile range.
## 4.2 14C analysis results
So far, radiocarbon results have been reported mostly for relatively short periods of time (Bonvalot et al., 2016), mainly describing high concentration events, and only a few studies report measurements on a yearly basis (Genberg et al., 2011; Gilardoni et al., 2011; Zotter et al., 2014b; Zhang et al., 2016, 2017; Dusek et al., 2017). Here, for a subset of 33 PM10 filters from the year 2014, we present the yearly contributions of non-fossil and fossil OC and EC (OCnf, OCf, ECnf and ECf).
Overall, the total carbon concentrations followed a yearly pattern mainly caused by the shallow planetary boundary layer and the enhanced biomass burning activity during winter, with OC reaching on average (± 1 standard deviation) 9.4 ± 4.5 µg m−3 and EC 2.6 ± 1.5 µg m−3 (Fig. 2a). During the rest of the year, TC remained rather stable at much lower concentrations (OCavg=3.7 ± 1.9 and ECavg=0.8 ± 0.7 µg m−3). The 14C results indicate that non-fossil sources prevail over fossil ones in Magadino. More specifically, we found that in winter, on average, ${f}_{\mathrm{NF},\mathrm{OC}}=\mathrm{0.9}$ ± 0.1 and ${f}_{\mathrm{NF},\mathrm{EC}}=\mathrm{0.5}$ ± 0.1, in agreement with the fractions reported by Zotter et al. (2014b) (fNF,OC= 0.8 ± 0.1 and ${f}_{\mathrm{NF},\mathrm{EC}}=\mathrm{0.5}$ ± 0.2). Table 1 summarizes fNF for each carbon fraction by season.
OCnf was the dominant part of TC throughout the year, with contributions of up to 80 % in winter and 71 % in summer (Fig. 2b) and average concentrations of 8.5 ± 4.2 and 2.4 ± 0.6 µg m−3 in winter and summer, respectively (Fig. 3b). Such high contributions in winter strongly indicate that biomass burning (BB) from residential heating is the main source of carbonaceous aerosols in this region, similar to previous reports (Jaffrezo et al., 2005; Puxbaum et al., 2007; Sandradewi et al., 2008; Favez et al., 2010; Zotter et al., 2014b). The coefficient of determination R2 between OCnf and levoglucosan, a characteristic marker for BB, was 0.92 (Fig. S7a), and the slope (OCnf/levoglucosan = 4.8 ± 0.3) lies within the range reported by Zotter et al. (2014b) for Magadino (6.9 ± 2.6).
The concentration of ECnf was significantly higher in winter (average 1.3 ± 0.7 µg m−3) compared to the rest of the year (spring average: 0.4 ± 0.2 µg m−3, summer average: 0.21 ± 0.06 µg m−3, autumn average: 0.43 ± 0.41 µg m−3) (Fig. 3d). ECnf is considered to originate solely from BB, for instance from residential wood burning in winter. This assumption is supported by the very high correlation (R2=0.95) with levoglucosan (Fig. S7b) and the slope (ECnf/levoglucosan = 0.82 ± 0.03), which is also in agreement with the literature (Zotter et al., 2014b; Herich et al., 2014).
The strong correlation between OCnf and ECnf, driven mainly by the winter data points, supports the conclusion that OCnf stems mostly from biomass burning in winter (Fig. S6a). In late spring, summer and early autumn, the contribution of ECnf decreased significantly (on average to 0.23 ± 0.07 µg m−3). The low correlation of OCnf and ECnf during this period (Fig. S6a), in combination with the increase in the OCnf/ECnf ratio in summer (Fig. 3b), suggests that a part of the secondary OCnf originates from non-combustion sources, e.g. biogenic/natural sources.
In total, the relative contribution of the fossil fraction to TC was 27 %. Excluding winter, ECf exhibited slightly higher concentrations than ECnf (Fig. 3c and d). The average concentrations of ECf were 1.26 ± 0.93, 0.41 ± 0.35, 0.31 ± 0.07 and 0.63 ± 0.56 µg m−3 for winter, spring, summer and autumn, respectively (Fig. 3c). The increase in ECf witnessed in winter could be mainly attributed to the shallower planetary boundary layer (PBL) rather than to an increase in emissions (Fig. S8a). The sources of ECf in the coarse (PM10–PM2.5) size fraction are typically related to resuspension of abrasion products of vehicle tires or brake wear (Bukowiecki et al., 2010; Zhang et al., 2013). The fine part of ECf is due to fossil fuel burning, here mostly traffic exhaust emissions. It is significantly correlated with NOx (Fig. S8b), and the ECf/NOx ratio of 0.020 lies within the range of reported slopes (Zotter et al., 2014b, and references therein).
The contribution of OCf to TC decreased during winter (8 %) but remained roughly stable throughout the rest of the year (22 % in spring, 21 % in summer and 19 % in autumn, Fig. 2b), with average concentrations of 0.87 ± 0.30, 0.96 ± 0.12, 0.89 ± 0.14 and 0.76 ± 0.10 µg m−3 for winter, spring, summer and autumn, respectively (Fig. 3a). The overall low correlation observed between OCf and ECf (Fig. S6b) may indicate that a fraction of OCf is not directly emitted but formed as secondary OC (SOC) from fossil-fuel-related emissions (e.g. traffic). This is supported by low OCf/ECf ratios in winter (on average 0.7 ± 0.3) and much higher values in spring and summer (on average 2.7 ± 1.1) (Fig. 3a). The low ratios are consistent with tunnel measurement studies (Li et al., 2016; Chirico et al., 2011; El Haddad et al., 2009), and the increase in OCf/ECf in spring and summer above these values is an indication of anthropogenic SOA formation. We also note that fossil SOA may be formed by sources other than traffic. A recent study revealed that fossil SOA is produced by the oxidation of volatile chemical products from petrochemical sources (McDonald et al., 2018).
Figure 4. Probability density functions of factor recoveries: hydrocarbon-like OA (HOA) in grey, biomass burning OA (BBOA) in dark brown, sulfur-containing OA (SCOA) in blue, primary biological OA (PBOA) in green, anthropogenic oxygenated OA (AOOA) in purple, summer oxygenated OA (SOOA) in yellow and winter oxygenated OA (WOOA) in light brown.
Figure 5. Offline AMS/PMF (ME-2) factor profiles: hydrocarbon-like OA (HOA), biomass burning OA (BBOA), sulfur-containing OA (SCOA), primary biological OA (PBOA), anthropogenic oxygenated OA (AOOA), summer oxygenated OA (SOOA) and winter oxygenated OA (WOOA).
Figure 6. Factor (in red for PM10 and blue for PM2.5) and external marker (in grey markers) time series for the two size fractions: HOC and NOx, BBOC and levoglucosan, SCOC, PBOC and cellulose, AOOC and OCf, SOOC and temperature, and WOOC and NH${}_{\mathrm{4}}^{+}$. Note that here, different from Fig. 5, the factors are quantified according to their carbon mass concentration, with HOC, BBOC, SCOC, PBOC, AOOC, SOOC, and WOOC referring to hydrocarbon-like organic carbon (OC), biomass burning OC, sulfur-containing OC, primary biological OC, anthropogenic oxygenated OC, summer oxygenated OC, and winter oxygenated OC, respectively.
Table 2. Variability of OM/OC and factor recoveries.
## 4.3 Offline AMS analysis results: factor interpretation
In this section, we interpret the PMF outputs. The factor recoveries Rk, determined for all factors as described in Sect. 3.3, are shown in Fig. 4. Factor mass spectra are displayed in Fig. 5. The contribution of the different factors to OA is presented in Fig. 6. In addition, for some cases we discuss the factor contribution to OC to check the consistency of our results with previous literature reports. The recovery values determined and used in this study are also compared for each factor to previous values. Median values of the recoveries as well as the OM/OC ratios with their interquartile ranges are compiled in Table 2. The Rk values were in general consistent with previous reports (Daellenbach et al., 2016, 2017; Bozzetti et al., 2016). Here we report for the first time the recoveries of each SOA factor individually, which were in agreement with those reported by Daellenbach et al. (2016). The consistency of the recovery results not only with previous offline AMS/PMF studies but also with online AMS measurements (Xu et al., 2017) indicates that this method is rather robust and transferable between datasets.
Hydrocarbon-like OA (HOA), typically associated with traffic emissions, was constrained using the reference HOA high-resolution profile from Crippa et al. (2013). The resulting factor profile (Fig. 5) exhibited a low OM : OC ratio (Table 2) and the time series followed that of NOx (Fig. 6). As the offline AMS technique requires water-extracted samples, HOA, which mostly consists of water-insoluble material, is expected to be poorly represented. This is also reflected in the low recovery RHOA,median, estimated to be 0.11 (Q25=0.10 and Q75=0.13) as reported in Daellenbach et al. (2016) (Fig. 4). As a result, the correlation between HOA and NOx was weak (Fig. S9). Nevertheless, the HOA/NOx ratio was 0.017 for PM10 and 0.008 for PM2.5, consistent with values reported in the literature (Daellenbach et al., 2017; Lanz et al., 2007). In addition, the HOC time series followed a yearly cycle similar to that of ECf (Fig. S10a) and the HOC : OCf ratio was 0.37 ± 0.12 (Fig. S10b), in agreement with Zotter et al. (2014a).
Figure 7. Correlations between BBOA and levoglucosan for the two size fractions (a), BBOC and ECnf for PM10 (b), SCOA and CH3SO2+ for the two size fractions (c) (the regression lines show a linear relationship), PBOA and cellulose for PM10 (d), AOOC and OCf (the regression fit was weighted by the standard deviation of AOOC) (e), and SOOA and daily averaged temperature as well as the OCnf : ECnf ratio and temperature for PM10 (f).
Biomass burning OA (BBOA) was identified by its significant contributions of the oxygenated fragments C2H4O2+ (at m/z 60) and C3H5O2+ (at m/z 73), common markers for wood burning formed by fragmentation of anhydrous sugars (Alfarra et al., 2007) (Fig. 5). It was also identified by its distinct seasonal variation, with exclusively high concentrations in winter, reaching up to 20.0 ± 0.7 µg m−3 for PM10 in December 2013 and 12.3 ± 0.5 µg m−3 for PM2.5 in January 2014 (Fig. 6). The median OM : OC ratio was 1.8 and RBBOA was consistent with the low end of the range reported by Daellenbach et al. (2016) (Table 2). The identification of this factor as BBOA was further confirmed by its remarkable correlation with levoglucosan. Similar to levoglucosan, this factor did not exhibit a significant difference between the PM2.5 and PM10 concentrations (Fig. S5a), suggesting that most of these particles are present in the fine mode, consistent with previous observations (Levin et al., 2010b). The BBOA/levoglucosan ratio was 7.1 for PM10 and 5.8 for PM2.5, which falls into the range reported by Daellenbach et al. (2017) and was also consistent with the ratio reported by Bozzetti et al. (2016). The difference in BBOA/levoglucosan between the two size fractions is caused by four PM10 samples with high BBOA concentrations. Lastly, BBOC showed a strong correlation with ECnf, with a slope of 4.9 (Fig. 7b), which fell within the range of the ECnf/BBOC ratios compiled in Ulevicius et al. (2016).
Table 3. Season-wise average (± 1 standard deviation) concentrations (in µg m−3) of the different OA factors per size fraction. Note that the months included in each season can differ between the two years.
Sulfur-containing OA (SCOA) was identified by its spectral fingerprint, characterized by a high contribution of the fragment CH3SO2+ (at m/z 79) (Fig. 5), and by a high OM : OC ratio (Table 2). The RSCOA distribution (Fig. 4, Table 2) was much broader than those of the other primary OC recoveries, yet more restricted towards the strongly water-soluble fraction compared to Daellenbach et al. (2017). SCOA concentrations were higher in the coarse fraction than in PM2.5 (Figs. 6, 7c and S5) and were higher during autumn and winter than in summer (Table 3). A similar profile had previously been linked to a marine origin by Crippa et al. (2013) in Paris; however, Daellenbach et al. (2017) found that SCOA in Switzerland was rather a primary, locally emitted source with no marine origin, based on its anti-correlation with methane sulfonic acid (MSA). Here we confirm that SCOA did not follow the MSA time series (Fig. S11) but rather that of NOx. These observations suggest that this factor is connected to a primary, episodic coarse-particle source related to traffic.
Primary biological OA (PBOA) exhibited significant contributions from the fragment C2H5O2+ (part of m/z 61) (Fig. 5) and was more enhanced in summer and spring (Fig. 6). The RPBOA (Fig. 4, Table 2) matched the high end of RPBOA in Bozzetti et al. (2016). PBOA appeared mostly in the coarse mode (Table 3, Fig. S5). The mass spectral features, the seasonality and the coarse-mode contribution suggested the biological nature of this factor, possibly including plant debris. Additional support for this interpretation is provided by the correlation of PBOA with cellulose (Fig. 7d), a polymer mostly found in the cell walls of plants. The correlation improved if only data from summer and spring were considered; the outliers were the late autumn and winter points, when BBOA was more important and PBOA could not be as easily separated by the PMF technique.
One of the three oxygenated OA (OOA) factors was identified as highly oxidized, owing to the significant contribution of the fragment CO2+ (Fig. 5) and the high OM : OC ratio (Table 2), which was consistent with the OM : OC ratio reported by Turpin et al. (2001) for non-urban aerosols. This factor peaked mainly in winter and spring, with the PM2.5 size fraction exhibiting higher concentrations than the coarse fraction during this period (Table 3, Fig. 6). The water solubility of this oxygenated factor was high (Fig. 4, Table 2), consistent with literature values (Daellenbach et al., 2016, 2017) referring to the sum of all oxygenated factors, as well as with reported water-soluble fractions of highly oxidized compounds (Xu et al., 2017). The yearly median concentration for PM10 was 0.97 µg m−3 (Q25=0.86 and Q75=1.09 µg m−3), accounting for approximately 13 % of the total OA. Of all the tested correlations with external markers, this factor correlated best with OCf (Fig. 7e); we therefore named it anthropogenic OOA (AOOA) (see also the discussion in Sect. 4.4.2). Both AOOC and OCf followed very similar annual cycles (Fig. S12), with an average AOOC : OCf ratio of 0.97 ± 2.49. This observation, together with the increase in OCf : ECf already discussed in Sect. 4.2, could indicate that this factor is linked to secondary organic aerosol from traffic emissions or to transported air masses from industrialized areas. It may also be connected to the oxidation of volatile chemical products such as pesticides, coatings, printing inks or cleaning agents (McDonald et al., 2018). Further discussion of AOOC can be found in Sect. 4.4.
Figure 8. Probability density functions of the fitting coefficients of the relative fossil contributions: SCOC in blue, AOOC in purple, SOOC in yellow and WOOC in light brown.
Figure 9. Relative contributions to the fossil OC per factor (PM10) (a) and to the non-fossil OC per factor (PM10) (b): BBOC in dark brown, SCOCf and SCOCnf in blue, PBOC in green, AOOCf and AOOCnf in purple, SOOCf and SOOCnf in yellow, and WOOCf and WOOCnf in light brown. Note that the total non-fossil concentrations (dark green markers) are on average 6 times higher than the fossil ones (dark grey markers).
Figure 10. Yearly cycles of the fossil PM10 (a), non-fossil PM10 (b), fossil PM2.5 (c), and non-fossil PM2.5 (d) OC factors: BBOC in dark brown, SCOCf and SCOCnf in blue, PBOC in green, AOOCf and AOOCnf in purple, SOOCf and SOOCnf in yellow, and WOOCf and WOOCnf in light brown. Note that the time periods covered in (a, b) and (c, d) differ.
Figure 11. Season-wise averaged contributions of the fossil and non-fossil primary and secondary OC to the total OC for PM10.
Summer oxygenated OA (SOOA) was mainly identified by the high contribution of the fragment C2H3O+ (m/z 43) (Fig. 5) (fC2H3O+ = 0.15) as well as by its seasonal behaviour (Fig. 6). Like all the oxygenated OA factors, it was highly water soluble (Fig. 4, Table 2). The highest concentrations were observed in July, with values of 4.4 µg m−3 for PM10 in 2013 and 4.3 µg m−3 for PM2.5 in 2014. The bulk of this factor was present in the PM2.5 fraction (Table 3, Fig. S5). The seasonal variability of SOOA followed the daily average temperature (Fig. 6); in fact, SOOA increased exponentially with temperature (Fig. 7f). Such behaviour was also observed by Daellenbach et al. (2017), who connected this factor to the oxidation of terpene emissions and therefore to biogenic SOA formation. The exponential dependence of SOOA on temperature was also similar to the temperature dependence of biogenic SOA concentrations from a Canadian terpene-rich forest reported by Leaitch et al. (2011). A similar factor was identified with an online instrument in Zurich during summer 2011, where the semi-volatile OOA was mainly formed from biogenic sources, as high temperatures favour biogenic emissions over other sources (Canonaco et al., 2015). Finally, the O : C ratio (0.37) fell within the range of O : C ratios measured for chamber-generated SOA (Aiken et al., 2008), similar to biogenic SOA produced in flow tubes (Heaton et al., 2007).
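The exponential temperature dependence described above can be estimated with a simple log-linear fit. The sketch below uses hypothetical daily-mean temperatures and SOOA concentrations for illustration only (not the measured Magadino data):

```python
import numpy as np

# Hypothetical daily-mean temperatures (deg C) and SOOA concentrations
# (ug m-3); illustrative values only, not the measured dataset.
temp = np.array([2.0, 8.0, 14.0, 20.0, 26.0])
sooa = np.array([0.3, 0.5, 1.0, 2.1, 4.2])

# Model SOOA = A * exp(B * T). Taking logarithms gives a linear relation,
# ln(SOOA) = ln(A) + B * T, which a degree-1 polynomial fit can solve.
B, lnA = np.polyfit(temp, np.log(sooa), 1)
A = np.exp(lnA)

def sooa_model(t):
    """Predicted SOOA concentration at temperature t."""
    return A * np.exp(B * t)
```

A positive fitted exponent B corresponds to the observed increase of SOOA with temperature.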
Named after its seasonal behaviour (Daellenbach et al., 2017), the third oxygenated factor, winter oxygenated OA (WOOA), exhibited the highest concentrations during winter. The WOOA mass spectrum showed an elevated contribution of the fragment C2H3O+ (Fig. 5), albeit lower than for SOOA (for WOOA, fC2H3O+ = 0.11). It also exhibited a slightly enhanced contribution of the fragment C2H4O2+, which may indicate that this factor originated from aged biomass burning emissions. Moreover, its mass spectral pattern (peaks of the fragments C3H3O+, C3H5O2+, C4H5O2+ and C5H7O2+ at m/z 55, 73, 85 and 99, respectively) was similar to that of oxygenated products from a wood-burning experiment (Bruns et al., 2015). The recovery of this factor was high (Table 2) and the factor consisted mainly of fine-mode particles (Fig. S5). WOOA also correlated with NH4+ (Fig. S13), which is directly connected to the secondary inorganic ions NO3- and SO42-.
## 4.4 Coupling of offline AMS and 14C analyses
In this section we present the combined results of the AMS/PMF and radiocarbon analyses. The first part covers the technical aspect of the analysis, presenting the calculation of the contribution of each factor to the fossil OC. In the second part, each major fossil and non-fossil source is described in detail. The time series of the fossil and non-fossil fractions for the whole AMS dataset are shown in Fig. 10. The contributions of primary and secondary OC to the total OC are also discussed and shown in Fig. 11.
### 4.4.1 Calculation of fossil and non-fossil fraction per factor
To combine the AMS/PMF with the 14C results, the identified AMS/PMF sources were divided into fossil and non-fossil fractions. HOC was fully assigned to fossil sources, assuming a negligible biofuel content. BBOC and PBOC were considered entirely non-fossil. To explore the fossil and non-fossil nature of the remaining factors, we performed a multilinear regression using Eq. (17):
$$\mathrm{OC}_{\mathrm{f},i}-\mathrm{HOC}_{i}=a\cdot \mathrm{SCOC}_{i}+b\cdot \mathrm{AOOC}_{i}+c\cdot \mathrm{SOOC}_{i}+d\cdot \mathrm{WOOC}_{i},\tag{17}$$
where a, b, c and d are the fitting coefficients, weighted by the relative uncertainty of OCf,i − HOCi. To investigate the stability of the solution, we obtained distributions of the fitting coefficients from 100 bootstrap runs in which the input data were randomly resampled (Fig. 8). The median values (with first and third quartiles) were as follows: a=0.81 (Q25=0.73, Q75=0.88), b=0.77 (Q25=0.54, Q75=0.85), c=0.21 (Q25=0.15, Q75=0.26) and d=0.23 (Q25=0.13, Q75=0.39).
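This weighted regression with bootstrap resampling can be sketched as follows; the function names and the synthetic inputs in the test are illustrative assumptions, not the study's actual data pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_coefficients(y, X, sigma):
    """Weighted least squares for Eq. (17): y = OC_f - HOC, and the
    columns of X hold the SCOC, AOOC, SOOC and WOOC time series.
    Each sample is weighted by the inverse of its uncertainty sigma."""
    w = 1.0 / sigma
    coef, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return coef

def bootstrap_coefficients(y, X, sigma, n_boot=100):
    """Coefficient distribution from resampling the samples with
    replacement; returns the medians and the 25th/75th percentiles."""
    n = len(y)
    coefs = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        coefs[b] = fit_coefficients(y[idx], X[idx], sigma[idx])
    return np.median(coefs, axis=0), np.percentile(coefs, [25, 75], axis=0)
```

The quartile spread of the bootstrap coefficients gives the stability measure quoted above.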
We applied the multilinear regression to the fossil fraction only because, for the non-fossil part, the uncertainties of the fitting coefficients were very high and the dependences of OCnf on the input factors were not statistically significant (p values > 0.1).
To calculate the non-fossil part of each factor k (kOCnf), we used the following equation:
$$k\mathrm{OC}_{\mathrm{nf},i}=k\mathrm{OC}_{i}-k\mathrm{OC}_{\mathrm{f},i}.\tag{18}$$
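Given a fitted fossil coefficient, splitting a factor's OC into fossil and non-fossil parts is then a direct subtraction. A minimal sketch, with an illustrative coefficient value:

```python
import numpy as np

def split_factor(k_oc, fossil_coef):
    """Split a factor's OC time series into fossil and non-fossil parts.
    fossil_coef is the fitted Eq. (17) coefficient for that factor
    (e.g. a for SCOC)."""
    k_oc = np.asarray(k_oc, dtype=float)
    k_oc_f = fossil_coef * k_oc   # fossil part of the factor
    k_oc_nf = k_oc - k_oc_f       # Eq. (18): non-fossil remainder
    return k_oc_f, k_oc_nf

# e.g. SCOC with the median coefficient a = 0.81:
scoc_f, scoc_nf = split_factor([1.0, 2.0], 0.81)
```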
This analysis suggests that the major fossil primary sources were HOC and SCOC (81 % ± 11 % fossil), while AOOC (77 % ± 23 % fossil) was the only major fossil secondary source. In terms of the non-fossil sources, the dominating primary sources included BBOC and PBOC, whereas the most important secondary sources were SOOC (79 % ± 11 % non-fossil) and WOOC (77 % ± 23 % non-fossil).
### 4.4.2 Contribution of fossil and non-fossil, primary and secondary OC to the total OC
The results indicate that 81 % ± 11 % (average and 1 standard deviation) of SCOC was fossil (SCOCf). Given the enhanced contribution of SCOC to the coarse size fraction, its sulfur content and its fossil nature, we assume that this factor is linked to primary anthropogenic sources related to traffic, such as tire wear, resuspension of road dust (Bukowiecki et al., 2010), resuspension from asphalt concrete (Gehrig et al., 2010) or asphalt mixture abrasion (of the bituminous binder; Fullova et al., 2017). The contribution of SCOCf to OCf was more important during autumn and winter (up to 62 %, Fig. 9a) than in spring and summer (on average 9 % ± 5 %), with a yearly average contribution to OCf of 20 % ± 19 %. The winter and autumn concentrations were similar, on average 0.22 ± 0.21 µg m−3 for PM10 (0.03 ± 0.03 µg m−3 for PM2.5) (Fig. 10, Table S2), accounting for 73 % of the total SCOC in this period. However, the contribution of SCOCf to the total coarse-fraction OC was not high (5 % ± 8 % on average).
The combined 14C–AMS analysis supported the initial hypothesis that AOOC was mainly related to the oxidation of fossil fuel combustion emissions (e.g. traffic), as AOOC was on average 77 % ± 23 % fossil (AOOCf). The average contribution of AOOCf to OCf was 28 % ± 14 % (Fig. 9a), larger than that of SCOCf, while its contribution to the total OC was 10 % ± 5 % for the coarse OC and 7 % ± 7 % for the fine OC. The yearly cycle exhibited elevated contributions in winter and spring compared to summer and autumn, with average PM10 values of 0.47 ± 0.22, 0.43 ± 0.30, 0.39 ± 0.23 and 0.29 ± 0.23 µg m−3, respectively (Fig. 10, Table S2). In winter and spring most of the mass concentration came from the PM2.5 size range, in contrast to the other two seasons.
The fossil fractions of SOOC (SOOCf) and WOOC (WOOCf) were low (21 % and 23 %, respectively) and could also be attributed to traffic emissions or, less likely (owing to low emissions), to aged aerosols from residential fossil fuel heating. SOOCf was important during summer, with contributions of up to 40 % to OCf, whereas WOOCf was distinctly present only during a few days in autumn and winter (up to 35 % of OCf), in contrast to the rest of the year (Fig. 9a).
Among the non-fossil sources, apart from non-fossil SCOC (SCOCnf) and non-fossil AOOC (AOOCnf), the factors exhibited very distinct yearly cycles, with BBOC contributing up to 86 % to OCnf in late autumn and winter (Fig. 9b; yearly average 28 % ± 30 %), and with PBOC and SOOCnf becoming more important in late spring, summer and early autumn, with contributions of up to 82 % and 57 %, respectively (Fig. 9b).
SOOC was 79 % non-fossil, which supported the AMS/PMF results: the significance of non-fossil SOOC (SOOCnf) during summer can be attributed to SOA formation from biogenic emissions. The average contribution of SOOCnf to OCnf was 25 % ± 19 % (Fig. 9b). SOOCnf was more pronounced in PM2.5 (on average 1.12 ± 0.40 µg m−3 in summer and 0.75 ± 0.35 µg m−3 in spring; Fig. 10, Table S2). This factor, along with PBOC, was the main and almost equally important source of OC during spring and summer, with PBOC contributing to OC in the coarse mode (on average 35 % ± 16 % from April to August 2014) and SOOCnf in the fine mode (46 % ± 15 % from April to August 2014). PBOC made up 30 % ± 18 % of OCnf, and the average PBOCcoarse concentrations for 2014 were 1.00 ± 0.23 µg m−3 in summer and 0.56 ± 0.21 µg m−3 in spring.
Non-fossil WOOC (WOOCnf) dominated over WOOCf (77 % versus 23 %). Its average yearly contribution to OCnf was low (6 % ± 6 %, Fig. 9b); however, WOOCnf,coarse was apparent during the cold period, especially in 2013, with average winter concentrations of 0.88 ± 0.74 µg m−3 (0.28 ± 0.28 µg m−3 in autumn) (Fig. 10). In 2014 the winter (autumn) concentrations dropped to 0.53 ± 0.43 µg m−3 (0.15 ± 0.13 µg m−3) for PM10 and 0.22 ± 0.19 µg m−3 (0.21 ± 0.21 µg m−3) for PM2.5. Based on its yearly cycle (Fig. 10b and d), WOOCnf could be linked to aged OA influenced by wintertime and early spring biomass burning emissions. Therefore, not only AOOCf but also WOOCnf can be related to anthropogenic activities. In other studies (Daellenbach et al., 2017; Bozzetti et al., 2016) this factor was more pronounced; in our case, however, most of the winter OCnf was related to primary biomass burning.
Overall, for PM10 the non-fossil primary OC contributions were more important during autumn (57 %) and winter (75 %), whereas in spring and summer the non-fossil secondary OC contributions became more pronounced (32 % and 40 %, respectively) (Fig. 11). The dominance of SOC during the warm period is likely related to the stronger solar radiation, which favours the photo-oxidation of biogenic volatile organic compounds, and to the elevated biogenic volatile organic compound emissions.
## 5 Conclusions
The coupling of offline AMS and 14C analyses allowed a detailed characterization of the carbonaceous aerosol in the Alpine valley of Magadino for the years 2013–2014. The seasonal variation together with the size-segregated measurements (PM10 and PM2.5) gave insights into the source apportionment, for example by quantifying the resuspension of road dust or asphalt concrete and estimating its contribution to the OC, or by identifying SOC based on its precursors. More specifically, seven sources were identified: four primary and three secondary. The non-fossil primary sources dominated during autumn and winter, with BBOC exhibiting by far the highest concentrations. During spring and summer, again two non-fossil sources, PBOC in the coarse fraction and SOOCnf in the fine mode, prevailed over the fossil ones. The size-segregated measurements and 14C analysis enabled a better understanding of the primary SCOC factor, which was enhanced in the coarse fraction and was mainly fossil, suggesting that it may originate from resuspension of road dust or tire and asphalt abrasion. The results also showed that SOC was formed mainly from biogenic sources during summer and from anthropogenic sources during winter. However, SOC possibly formed by oxidation of traffic emissions or volatile chemical products was also apparent during summer (AOOCf). AOOCf was also important during winter, along with SOC linked to transported non-fossil carbonaceous aerosols from anthropogenic activities such as biomass burning (WOOCnf).
Data availability
The data are available upon request from the corresponding author.
Supplement
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
This work is funded by the Swiss Federal Office for the Environment (FOEN), OSTLUFT and the cantons of Basel, Graubünden, Ticino, Thurgau and Valais. The LABEX OSUG@2020 (ANR-10-LABX-56) funded analytical instruments at IGE.
Edited by: Eleanor Browne
Reviewed by: two anonymous referees
References
Agrios, K., Salazar, G. A., Zhang, Y. L., Uglietti, C., Battaglia, M., Luginbühl, M., Ciobanu, V. G., Vonwiller, M., and Szidat, S.: Online coupling of pure O2 thermo-optical methods – 14C AMS for source apportionment of carbonaceous aerosols study, Nucl. Instrum. Meth. B., 361, 288–293, https://doi.org/10.1016/j.nimb.2015.06.008, 2015.
Aiken, A. C., Decarlo, P. F., Kroll, J. H., Worsnop, D. R., Huffman, J. A., Docherty, K. S., Ulbrich, I. M., Mohr, C., Kimmel, J. R., Sueper, D., Sun, Y., Zhang, Q., Trimborn, A., Northway, M., Ziemann, P. J., Canagaratna, M. R., Onasch, T. B., Alfarra, M. R., Prevot, A. S., Dommen, J., Duplissy, J., Metzger, A., Baltensperger, U., and Jimenez, J. L.: O/C and OM/OC ratios of primary, secondary, and ambient organic aerosols with high-resolution time-of-flight aerosol mass spectrometry, Environ. Sci. Technol., 42, 4478–4485, https://doi.org/10.1021/es703009q, 2008.
Alfarra, M. R., Prévôt, A. S. H., Szidat, S., Sandradewi, J.,Weimer, S., Lanz, V. A., Schreiber, D., Mohr, M., and Baltensperger, U.: Identification of the mass spectral signature of organic aerosols from wood burning emissions, Environ. Sci. Technol., 41, 5770–5777, https://doi.org/10.1021/es062289b, 2007.
Allan, J. D., Jimenez, J. L., Williams, P. I., Alfarra, M. R., Bower, K. N., Jayne, J. T., Coe, H., and Worsnop, D. R.: Quantitative sampling using an Aerodyne aerosol mass spectrometer – 1. Techniques of data interpretation and error analysis, J. Geophys. Res.-Atmos., 108, 4090, https://doi.org/10.1029/2002JD002358, 2003.
Beekmann, M., Prévôt, A. S. H., Drewnick, F., Sciare, J., Pandis, S. N., Denier van der Gon, H. A. C., Crippa, M., Freutel, F., Poulain, L., Ghersi, V., Rodriguez, E., Beirle, S., Zotter, P., von der Weiden-Reinmüller, S. L., Bressi, M., Fountoukis, C., Petetin, H., Szidat, S., Schneider, J., Rosso, A., El Haddad, I., Megaritis, A., Zhang, Q. J., Michoud, V., Slowik, J. G., Moukhtar, S., Kolmonen, P., Stohl, A., Eckhardt, S., Borbon, A., Gros, V., Marchand, N., Jaffrezo, J. L., Schwarzenboeck, A., Colomb, A., Wiedensohler, A., Borrmann, S., Lawrence, M., Baklanov, A., and Baltensperger, U.: In situ, satellite measurement and model evidence on the dominant regional contribution to fine particulate matter levels in the Paris megacity, Atmos. Chem. Phys., 15, 9577–9591, https://doi.org/10.5194/acp-15-9577-2015, 2015.
Bonvalot, L., Tuna, T., Fagault, Y., Jaffrezo, J. L., Jacob, V., Chevrier, F., and Bard, E.: Estimating contributions from biomass burning and fossil fuel combustion by means of radiocarbon analysis of carbonaceous aerosols: application to the Valley of Chamonix, Atmos. Chem. Phys., 16, 13753–13772, https://doi.org/10.5194/acp-16-13753-2016, 2016.
Bozzetti, C., Daellenbach, K. R., Hueglin, C., Fermo, P., Sciare, J., Kasper-Giebl, A., Mazar, Y., Abbaszade, G., El Kazzi, M., Gonzalez, R., Shuster Meiseles, T., Flasch, M., Wolf, R., Křepelová, A., Canonaco, F., Schnelle-Kreis, J., Slowik, J. G., Zimmermann, R., Rudich, Y., Baltensperger, U., El Haddad, I., and Prévôt, A. S. H.: Size-resolved identification, characterization, and quantification of primary biological organic aerosol at a European rural site, Environ. Sci. Technol., 50, 3425–3434, https://doi.org/10.1021/acs.est.5b05960, 2016.
Bozzetti, C., Sosedova, Y., Xiao, M., Daellenbach, K. R., Ulevicius, V., Dudoitis, V., Mordas, G., Byčenkienė, S., Plauškaitė, K., Vlachou, A., Golly, B., Chazeau, B., Besombes, J.-L., Baltensperger, U., Jaffrezo, J.-L., Slowik, J. G., El Haddad, I., and Prévôt, A. S. H.: Argon offline-AMS source apportionment of organic aerosol over yearly cycles for an urban, rural, and marine site in northern Europe, Atmos. Chem. Phys., 17, 117–141, https://doi.org/10.5194/acp-17-117-2017, 2017a.
Bozzetti, C., El Haddad, I., Salameh, D., Daellenbach, K. R., Fermo, P., Gonzalez, R., Minguillón, M. C., Iinuma, Y., Poulain, L., Elser, M., Müller, E., Slowik, J. G., Jaffrezo, J.-L., Baltensperger, U., Marchand, N., and Prévôt, A. S. H.: Organic aerosol source apportionment by offline-AMS over a full year in Marseille, Atmos. Chem. Phys., 17, 8247–8268, https://doi.org/10.5194/acp-17-8247-2017, 2017b.
Bruns, E., Krapf, M., Orasche, J., Huang, Y., Zimmermann, R., Drinovec, L., Močnik, G., El-Haddad, I., Slowik, J. G., Dommen, J., Baltensperger, U., and Prévôt, A. S. H.: Characterization of primary and secondary wood combustion products generated under different burner loads, Atmos. Chem. Phys., 15, 2825–2841, https://doi.org/10.5194/acp-15-2825-2015, 2015.
Bukowiecki, N., Lienemann, P., Hill, M., Furger, M., Richard, A., Amato, F., Prevot, A. S. H., Baltensperger, U., Buchmann, B., and Gehrig, R.: PM10 emission factors for non-exhaust particles generated by road traffic in an urban street canyon and along a freeway in Switzerland, Atmos. Environ., 44, 2330–2340, https://doi.org/10.1016/j.atmosenv.2010.03.039, 2010.
Canagaratna, M. R., Jayne, J. T., Jimenez, J. L., Allan, J. D., Alfarra, M. R., Zhang, Q., Onasch, T. B., Drewnick, F., Coe, H., Middlebrook, A., Delia, A., Williams, L. R., Trimborn, A. M., Northway, M. J., DeCarlo, P. F., Kolb, C. E., Davidovits, P., and Worsnop, D. R.: Chemical and microphysical characterization of ambient aerosols with the Aerodyne aerosol mass spectrometer, Mass Spectrom. Rev., 26, 185–222, https://doi.org/10.1002/mas.20115, 2007.
Canonaco, F., Crippa, M., Slowik, J. G., Baltensperger, U., and Prévôt, A. S. H.: SoFi, an IGOR-based interface for the efficient use of the generalized multilinear engine (ME-2) for the source apportionment: ME-2 application to aerosol mass spectrometer data, Atmos. Meas. Tech., 6, 3649–3661, https://doi.org/10.5194/amt-6-3649-2013, 2013.
Canonaco, F., Slowik, J. G., Baltensperger, U., and Prévôt, A. S. H.: Seasonal differences in oxygenated organic aerosol composition: implications for emissions sources and factor analysis, Atmos. Chem. Phys., 15, 6993–7002, https://doi.org/10.5194/acp-15-6993-2015, 2015.
Cavalli, F., Viana, M., Yttri, K. E., Genberg, J., and Putaud, J.-P.: Toward a standardised thermal-optical protocol for measuring atmospheric organic and elemental carbon: the EUSAAR protocol, Atmos. Meas. Tech., 3, 79–89, https://doi.org/10.5194/amt-3-79-2010, 2010.
Chirico, R., Prevot, A. S. H., DeCarlo, P. F., Heringa, M. F., Richter, R., Weingartner, E., and Baltensperger, U.: Aerosol and trace gas vehicle emission factors measured in a tunnel using an aerosol mass spectrometer and other on-line instrumentation, Atmos. Environ., 45, 2182–2192, https://doi.org/10.1016/j.atmosenv.2011.01.069, 2011.
Crippa, M., El Haddad, I., Slowik, J. G., DeCarlo, P. F., Mohr, C., Heringa, M. F., Chirico, R., Marchand, N., Sciare, J., Baltensperger, U., and Prévôt, A. S. H.: Identification of marine and continental aerosol sources in Paris using high resolution aerosol mass spectrometry, J. Geophys. Res., 118, 1950–1963, https://doi.org/10.1002/jgrd.50151, 2013.
Daellenbach, K. R., Bozzetti, C., Krepelova, A., Canonaco, F., Huang, R.-J., Wolf, R., Zotter, P., Crippa, M., Slowik, J., Zhang, Y., Szidat, S., Baltensperger, U., Prévôt, A. S. H., and El Haddad, I.: Characterization and source apportionment of organic aerosol using offline aerosol mass spectrometry, Atmos. Meas. Tech., 9, 23–39, https://doi.org/10.5194/amt-9-23-2016, 2016.
Daellenbach, K. R., Stefenelli, G., Bozzetti, C., Vlachou, A., Fermo, P., Gonzalez, R., Piazzalunga, A., Colombi, C., Canonaco, F., Kasper-Giebl, A., Jaffrezo, J.-L., Bianchi, F., Slowik, J. G., Baltensperger, U., El-Haddad, I., and Prévôt, A. S. H.: Long-term chemical analysis and organic aerosol source apportionment at 9 sites in Central Europe: Source identification and uncertainty assessment, Atmos. Chem. Phys., 17, 13265–13282, https://doi.org/10.5194/acp-2017-124, 2017.
Davison, A. C. and Hinkley, D. V.: Bootstrap Methods and Their Application, Cambridge University Press, Cambridge, UK, 582 pp., 1997.
Dusek, U., Hitzenberger, R., Kasper-Giebl, A., Kistler, M., Meijer, H. A. J., Szidat, S., Wacker, L., Holzinger, R., and Röckmann, T.: Sources and formation mechanisms of carbonaceous aerosol at a regional background site in the Netherlands: insights from a year-long radiocarbon study, Atmos. Chem. Phys., 17, 3233–3251, https://doi.org/10.5194/acp-17-3233-2017, 2017.
El Haddad, I., Marchand, N., Dron, J., Temime-Roussel, B., Quivet, E., Wortham, H., Jaffrezo, J-L., Baduel, C., Voisin, D., Besombes, J. L., and Gille, G.: Comprehensive primary particulate organic characterization of vehicular exhaust emissions in France, Atmos. Environ., 43, 6190–6198, https://doi.org/10.1016/j.atmosenv.2009.09.001, 2009.
El Haddad, I., D'Anna, B., Temime-Roussel, B., Nicolas, M., Boreave, A., Favez, O., Voisin, D., Sciare, J., George, C., Jaffrezo, J. L., Wortham, H., and Marchand, N.: Towards a better understanding of the origins, chemical composition and aging of oxygenated organic aerosols: case study of a Mediterranean industrialized environment, Marseille, Atmos. Chem. Phys., 13, 7875–7894, https://doi.org/10.5194/acp-13-7875-2013, 2013.
Elser, M., Huang, R.-J., Wolf, R., Slowik, J. G., Wang, Q., Canonaco, F., Li, G., Bozzetti, C., Daellenbach, K. R., Huang, Y., Zhang, R., Li, Z., Cao, J., Baltensperger, U., El-Haddad, I., Prévôt, A. S. H., and André, S. H.: New insights into PM2.5 chemical composition and sources in two major cities in China during extreme haze events using aerosol mass spectrometry, Atmos. Chem. Phys., 16, 3207–3225, https://doi.org/10.5194/acp-16-3207-2016, 2016.
https://www.lesswrong.com/posts/7mftAXj4btgShG7Z9/distinguishing-logistic-curves-visual
I wrote a post about distinguishing between logistic curves, specifically for finding their turning points.
That post was highly mathematical, but here is a visual "proof" of the "theorem":
• Figuring out the turning point of a logistic curve before hitting that turning point is bloody hard, mate.
"Proof": The following is a plot of two curves:
1. The logistic curve, drawn only up to its turning point.
2. An exponential curve, which never has a turning point.
So, if the data was noisy, could you distinguish between the curve that's reached its turning point, and the one that will never have one?
Things get even worse if we stop before the turning point: here is the logistic curve plotted only up to the point where it reaches half of its value at the turning point, shown against a matching exponential:
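The visual claim can also be checked numerically. Below is a minimal Python sketch (my own parameterisation, not taken from the original post): a logistic curve next to the exponential that matches its early growth. Well before the turning point the two agree to within a few percent, which is exactly why noisy early data cannot tell them apart.

```python
import math

def logistic(t, K=1.0, r=1.0, t0=0.0):
    """Logistic curve with carrying capacity K; turning point at t = t0."""
    return K / (1.0 + math.exp(-r * (t - t0)))

def matched_exponential(t, K=1.0, r=1.0, t0=0.0):
    """Exponential with the same early-time behaviour (t << t0)."""
    return K * math.exp(r * (t - t0))

# Well before the turning point the two are nearly indistinguishable:
for t in (-5.0, -4.0, -3.0):
    L, E = logistic(t), matched_exponential(t)
    print(f"t={t:+.1f}  logistic={L:.5f}  exponential={E:.5f}  rel.diff={(E - L) / L:.1%}")

# At the turning point itself they differ by a factor of two:
print(logistic(0.0), matched_exponential(0.0))  # 0.5 vs 1.0
```

Any noise larger than a few percent swamps the gap between the curves over the early range, so the fit is effectively ambiguous until well past the turning point.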
https://stats.stackexchange.com/questions/291576/visualizing-multiple-size-distributions-in-one-plot
# Visualizing multiple size distributions in one plot
I have 17 size distributions for different coral species, and I would like to be able to compare these distributions in one plot. However, the distributions are very different, so when I naively tried to overlay their density plots, many of the distributions were so small compared to the largest one that they were just crowded into the bottom-left corner.
Is there a better way to visualize these distributions in one plot which will allow me to compare relative sizes as well as see the distribution within species?
• A boxplot for each of the 17 on a single plot? Maybe log transform the y-variable before? Jul 14 '17 at 15:08
• When differences of scale come in the obvious first thing would be to consider logs. Jul 15 '17 at 2:22
• Can you post some sample data for people to work with? Jul 17 '17 at 18:50
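Before anything fancier, the log transform the commenters mention is worth trying on its own: it turns ratios into differences, so groups whose sizes differ by orders of magnitude become directly comparable on one axis. A small self-contained illustration in Python with invented lognormal "sizes" (no sample data was posted, so the numbers here are made up):

```python
import math
import random

random.seed(42)

# Two made-up 'species' whose typical sizes differ by three orders of magnitude.
small = [random.lognormvariate(0.0, 0.5) for _ in range(200)]            # sizes near 1
large = [1000.0 * random.lognormvariate(0.0, 0.5) for _ in range(200)]   # sizes near 1000

# Raw scale: the small group is squashed into a sliver near zero.
print("raw ranges:", (min(small), max(small)), (min(large), max(large)))

# log10 scale: each group now spans a similar, comparable interval.
log_small = [math.log10(x) for x in small]
log_large = [math.log10(x) for x in large]
print("log ranges:", (min(log_small), max(log_small)), (min(log_large), max(log_large)))
```

On the log scale both groups occupy intervals of about the same width, so boxplots or densities of the transformed values can share one set of axes without one group vanishing into a corner.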
Perhaps a joy plot would bring you happiness?
http://austinwehrwein.com/data-visualization/it-brings-me-ggjoy/
This plot shows 12 months of temperature data with a separate histogram for each month. The histograms are sort of layered over each other. For this example, you'll need to download the CSV of data from the link, then the code is as follows:
```
library(ggjoy)
library(hrbrthemes)

weather.raw$month <- months(as.Date(weather.raw$CST))
weather.raw$months <- factor(rev(weather.raw$month),
                             levels = rev(unique(weather.raw$month)))

# scales
mins <- min(weather.raw$Min.TemperatureF)
maxs <- max(weather.raw$Max.TemperatureF)

ggplot(weather.raw, aes(x = Mean.TemperatureF, y = months, height = ..density..)) +
  geom_joy(scale = 3) +
  scale_x_continuous(limits = c(mins, maxs)) +
  theme_ipsum(grid = F) +
  theme(axis.title.y = element_blank(),
        axis.ticks.y = element_blank(),
        strip.text.y = element_text(angle = 180, hjust = 1)) +
  labs(title = 'Temperatures in Lincoln NE',
       subtitle = 'Median temperatures (Fahrenheit) by month for 2016\nData: Original CSV from the Weather Underground')
```

UPDATE: The necessary dataset is now included with the ggjoy package, so instead of downloading the CSV file, you can just run the following code to get a very similar plot:

```
library(ggjoy)

ggplot(lincoln_weather, aes(x = `Mean Temperature [F]`, y = Month)) +
  geom_joy(scale = 3, rel_min_height = 0.01) +
  scale_x_continuous(expand = c(0.01, 0)) +
  scale_y_discrete(expand = c(0.01, 0)) +
  labs(title = 'Temperatures in Lincoln NE',
       subtitle = 'Mean temperatures (Fahrenheit) by month for 2016\nData: Original CSV from the Weather Underground') +
  theme_joy(font_size = 13, grid = T) +
  theme(axis.title.y = element_blank())
```

• could you please elaborate on why you need the line: "weather.raw$months <- factor(rev(weather.raw$month), levels = rev(unique(weather.raw$month)))"? Thank you! Sep 1 '17 at 21:24
• @marika ggplot would have ordered the months in alphabetical order by default. By specifying the months as a factor and then providing a unique ordering, they will will be ordered correctly. Sep 11 '17 at 18:31
• It looks like ggjoy is being deprecated in favor of ggridges: cran.r-project.org/package=ggridges Mar 24 '19 at 17:16
I tend to use ecdf plots when viewing distributions, particularly if I have several distributions I'm trying to compare. Because these use lines rather than bars (histograms) or shapes (density plots) there is less of an issue with overlap.
```
library(data.table)
library(ggplot2)

set.seed(123)
dat_data <- data.table(meanval = rnorm(10),
                       sdval = runif(10, 0.5, 3),
                       rep = sample.int(1000, 10))

#        meanval     sdval rep
#  1: -0.56047565 2.7238483 964
#  2: -0.23017749 2.2320085 902
#  3:  1.55870831 2.1012670 690
#  4:  0.07050839 2.9856744 794
#  5:  0.12928774 2.1392645  25
#  6:  1.71506499 2.2713262 476
#  7:  0.46091621 1.8601651 754
#  8: -1.26506123 1.9853551 215
#  9: -0.68685285 1.2228993 316
# 10: -0.44566197 0.8677841 230
```
First, we generate some parameters for mean, sd, and rep. Then we randomly sample rep times from a normal distribution with the given mean and sd:
```
dat <- rbindlist(lapply(1:dim(dat_data)[1],
                        function(x) data.table(rowval = x,
                                               dist = rnorm(dat_data[x, rep],
                                                            dat_data[x, meanval],
                                                            dat_data[x, sdval]))))
```
That gives a test dataset. You wouldn't need to do any of the above since you already have your data. Now we can plot the ecdf.
```
ggplot(dat, aes(x = dist, group = factor(rowval), color = factor(rowval))) +
  stat_ecdf(size = 2)
```
You'll notice that row 5, which has the lowest rep count of 25, looks quite choppy. The degree of 'chop' gives you a rough clue about the relative sample sizes.
For reference, plotting the same data with geom_density:
```
ggplot(dat, aes(x = dist, fill = factor(rowval))) +
  geom_density(alpha = 0.3)
```
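For readers who do not use R: the empirical CDF behind `stat_ecdf` is nothing more than a sort and a running count, which is why it overlays cleanly for many groups at once. A minimal sketch in Python (the function name is my own):

```python
def ecdf(sample):
    """Return the points (x_i, F(x_i)) of the empirical CDF of a sample."""
    xs = sorted(sample)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

# Every group maps onto the same 0..1 vertical range, so groups with very
# different sizes and scales can share one set of axes without crowding.
points = ecdf([5.0, 1.0, 3.0, 3.0])
print(points)  # [(1.0, 0.25), (3.0, 0.5), (3.0, 0.75), (5.0, 1.0)]
```

Because each curve is monotone from 0 to 1 regardless of sample size, seventeen of them on one plot remain readable in a way that seventeen overlapping density shapes do not.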
https://crypto.stackexchange.com/questions/66914/what-is-this-ec-key-derivation-method-called/67362
# What is this EC key derivation method called?
I'm looking to identify the EC key derivation method used in Hyperledger Fabric. I can't find anything in the docs or the protocol specs, but the functions' code is here for the private key and the public key.
The derivation function seems to be very simple, DerivedPrivate = MasterPrivate + (k+1) and DerivedPublic = MasterPublic + (k+1) * G, all mod N, with k being random derivation data. And yet, I don't seem to be able to find the name or the source of this method.
I'd like to know about the patent and copyright status of this derivation method, and to do that I need something to google for. I'm also looking for a more formal description of this method.
• Homomorphic addition property of scalar multiplication over elliptic curves. – Youssef El Housni Jan 30 '19 at 20:09
• @YoussefElHousni Right, would you say this is something commonly used in EC key derivation? I would love to see a link / hint where this is described in that context. – Fozi Jan 30 '19 at 20:21
• Generally we don't consider questions about patents and copyright status on topic here, see for instance this meta question / answer. I'll leave it in the question, however I would say that answers do not need to cover the patents / copyright status part of it. Otherwise a fine question by the way. – Maarten Bodewes Jan 30 '19 at 20:45
• Related question that essentially proposes the same scheme. I don't think there's a name for this scheme, it's simply a consequence of $(a+b)G = aG + bG$ i.e. the distributive property. – puzzlepalace Feb 14 '19 at 19:34
• Yeah, I guess some schemes are too simple to gain their own name. E.g. PKCS#7 padding means "the only padding scheme mentioned somewhere in PKCS#7". I guess Youssef had a good descriptive name for it, so I thought it was a good idea to have this question set to "answered" none-the-less :) – Maarten Bodewes Feb 15 '19 at 14:38
Common terms for this include hierarchical key derivation, hierarchical deterministic keys, and key blinding. It is sometimes called ‘hierarchical’ because you can repeatedly derive subkeys $$Q = [k_1]G + P$$, $$R = [k_2]G + Q$$, etc., and the process is a deterministic function of the tags $$k_1$$ and $$k_2$$ and the initial point $$P$$. It is sometimes called ‘blinding’ because knowledge of $$Q = [k]G + P$$ and the standard base point $$G$$ without the blinding $$k$$ gives no information about $$P$$.
The two common variants are additive and multiplicative blinding: $$[k]G + P$$ vs. $$[k]P$$, both of which are invertible, by $$Q - [k]G$$ or $$[k^{-1} \bmod n]Q$$ where $$n$$ is the order of the group. The additive variant has the advantage that it always uses fixed-base scalar multiplication, and only a single curve addition, which may or may not make a difference in your protocol.
The analogues in the finite field setting are, of course, $$G^k\cdot P$$ and $$P^k$$ with inverses $$Q/G^k$$ and $$Q^{k^{-1} \bmod n}$$, but while you'll see this notation in the PrivacyPass paper, nobody talks of this, because while we can say ‘multiplicative’, who can bring themselves to verbalize ‘exponentiative’ without getting distracted wondering whether the word even exists?
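The homomorphism itself is easy to demonstrate without an elliptic-curve library by switching to the finite-field analogue above, where scalar multiplication becomes exponentiation; the additive EC version is structurally identical. A toy sketch in Python, with illustrative constants only, not production cryptography:

```python
# Discrete-log analogue of the additive derivation:
#   derived_priv = d + (k+1)  (mod p-1),   derived_pub = pub * g^(k+1)  (mod p)
p = 2**61 - 1          # a Mersenne prime (toy-sized; real systems use EC groups)
n = p - 1              # exponents can be reduced modulo p-1 (Fermat)
g = 3                  # the "base point"

d = 123456789          # master private key
pub = pow(g, d, p)     # master public key

k = 987654321          # public derivation data
d_derived = (d + k + 1) % n
pub_derived = (pub * pow(g, k + 1, p)) % p

# Anyone holding only the public key can compute pub_derived, yet it matches
# the public key of the derived private key:
assert pow(g, d_derived, p) == pub_derived
print("derivation homomorphism holds")
```

The same check carries over verbatim to the EC setting by replacing `pow(g, x, p)` with scalar multiplication `[x]G` and the modular product with point addition.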
• Thanks for the answer, it pointed me in the right direction. I see differences in how k is dealt with in the examples. Looks like here k is limited to 1..n by doing k = (DerivData + 1) mod n. This eliminates k=0, but it does not seem to deal with the special case where k = n - d, where the derived key would be zero. I'm not sure why there is a check whether the public key point is on the curve. At least on the private key derivation side a d' == 0 check should be all that is needed? I think my question is answered though, it seems like it's a proprietary spin on an EC curve property. – Fozi Feb 16 '19 at 18:23
• @Fozi If $n$ is the order of the base point, it will be near $2^{256}$ in any reasonable system. Then a uniform random scalar modulo $n$ has probability near $2^{-256}$ of being zero, or of being $d$. It also has probability near $2^{-256}$ of being 81209715721608798040795492854713186617949497074300142088404940059431870078213. If any of these happened, it would be devastating to security because the adversary immediately knows these numbers. Maybe `DerivData` isn't uniform random modulo $n$, but as long as it's uniform random with, say, ${\gg}2^{200}$ bits, edge cases like that don't matter. – Squeamish Ossifrage Feb 16 '19 at 21:01
• @Fozi As for whether to check whether the point is on the curve: depends on more context—give the additional context in a separate question. As for ‘patent’, and ‘copyright’, ‘proprietary’: can't patent a mathematical formula, can't copyright an abstract concept outside a fixed medium, and definitely can't own an idea. I'm not a soul-eating vulture like a patent attorney—I just eat bones—but I'd be rather surprised if anyone asserted a patent claim on the concept of homomorphisms. Even if you add a $\cdots + 1$ too. – Squeamish Ossifrage Feb 16 '19 at 21:07
https://math.stackexchange.com/questions/1312289/space-on-which-all-real-valued-continuous-functions-achieve-maximum-but-not-comp
# Space on which all real-valued continuous functions achieve maximum but not compact?
A friend is writing a book for non-mathematicians; he has asked me some questions... One possible direction I suggested was whether a topological space (metric space can probably be assumed given what he said) for which every real-valued continuous function achieves its maximum must be compact; and, if not, does this property have a name?
He thought this probably did not work, but neither one of us has an example. There is a bookstore nearby which has copies of Counterexamples in Topology as well as Counterexamples in Analysis, and I can go browse them when I'm over jet lag. Meanwhile, for any students confused by these topics (topology and analysis) or not seeing the motivation, counterexamples are the best way to understand the limitations of a theorem and why it was worth proving in the first place.
• Related previous question. – user642796 Jun 4 '15 at 21:20
• @ArthurFischer, yes! I did look at several questions the system suggested while I was composing this, I imagine this one was in that list but I did not notice it. – Will Jagy Jun 4 '15 at 21:35
• If you do not have Steen–Seebach at hand, you can also look online in pi-base. – Martin Sleziak Jun 5 '15 at 7:22
A non-metric counterexample is $\omega_1$, the space of countable ordinals, with the natural order topology. If $f:\omega_1\to\Bbb R$ is continuous, there are an $\eta<\omega_1$ and an $x\in\Bbb R$ such that $f(\xi)=x$ whenever $\eta<\xi<\omega_1$, and $[0,\eta]$ is compact, so $f$ must attain its maximum.
However, clearly a space with this property is pseudocompact, and every pseudocompact metric space is compact, so there are no metric counterexamples.
• Wonderful. Thank you, I will tell him. – Will Jagy Jun 4 '15 at 17:51
• @Will: You’re welcome. Good luck to him. – Brian M. Scott Jun 4 '15 at 17:51
At least in metric spaces, this is true. To see this, first suppose the space is unbounded. Then choose a sequence whose members are pairwise separated by some minimal distance $\epsilon > 0$, order the sequence arbitrarily as $x_n$, and define the function $f(x_n) = n$ and $f(x) = 0$ at all other points. This can be extended to a continuous function, if desired. It does not achieve its maximum.
Then suppose you have a set that is not closed. Take any limit point not in the set, call it $x_0$, and define the function $f(x) = 1 - d(x,x_0)$. It is continuous but does not achieve its maximum.
• This is lovely. Thank you. – Will Jagy Jun 4 '15 at 17:51
• I'm assuming you're using the Heine-Borel theorem for metric spaces (proofwiki.org/wiki/Heine-Borel_Theorem/Metric_Space). If so, am I right to read "you have a set unbounded" as "the space is not totally bounded," and "you have a set that is not closed" as "the space is not complete"? – Vectornaut Jun 4 '15 at 21:14
Compactness in a metric space is equivalent to the space being sequentially compact. For a non-compact metric space $(X,d)$, there exists a continuous function $f:X\rightarrow\mathbb R$ such that $f$ does not achieve a maximum. To show this, note that, being non-compact, $(X,d)$ must not be sequentially compact, so there exists a sequence $\{x_n\}\subset X$ such that $\{x_n\}$ has no convergent subsequence. This means that for all $x\in X\setminus\{x_n\}$, there exists $\varepsilon>0$ such that $B(x,\varepsilon)$ is disjoint from $\{x_n\}$. Thus, $\{x_n\}$ is a closed subset of $X$. Define $\tilde f:\{x_n\}\rightarrow\mathbb R$ by $\tilde f(x_n)=n$. We know that $\tilde f$ is continuous on $\{x_n\}$ since $\{x_n\}$ inherits the discrete topology from $X$ (since there are no limit points), and so every function defined thereon is continuous. Then, employing the Tietze extension theorem, $\tilde f$ extends to a continuous function $f:X\rightarrow\mathbb R$, which we know to be unbounded since it is already unbounded on $\{x_n\}$.
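A concrete worked instance of these constructions (the specific spaces are my own illustration, not from the answers above):

```latex
\textbf{Not closed:} take $X=(0,1)\subset\mathbb{R}$ with the usual metric; the
point $x_0=1\notin X$ is a limit point of $X$. Then
\[ f(x) = 1 - d(x,x_0) = 1-(1-x) = x \]
is continuous on $X$ with $\sup_X f = 1$, which is never attained.

\textbf{Unbounded:} take $X=\mathbb{N}$ with the usual metric; its points are
pairwise separated by distance $1$, so the topology is discrete and
$f(n)=n$ is continuous, yet $f$ has no maximum.
```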
2020-09-18 14:51:26
http://myriverside.sd43.bc.ca/nolan2017/category/uncategorized/
## Week 15 – Math 10
We learned more about graphing and linear equations. We learned about different ways to find solutions to systems of equations. They are Inspection, Substitution, and Elimination.
Inspection is only used when a system is very easy and understood. It is basically just eyeballing the question and guessing what the solution is. You can test your solution by plugging the numbers in the appropriate spots (x and y).
Substitution is another method to find solutions, but it uses algebra. You choose one of the equations, and then choose one of the variables to isolate. Then you plug it into the correct spot in the other equation. You then use algebra to solve the equation, and then use the answer to plug in and find the other variable.
ex.
Elimination is used when there are no coefficients of 1. You start by adding the two equations together (you can subtract but it doesn’t work as well). You want to make a zero pair by making either the x or y cancel the other out. If they don’t do this from the start, you can multiply one of the equations or both of them to get one, and then do your adding. Once you have the answer to one variable, you plug it into the equations and find the other.
ex.
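The elimination steps can be sketched in a few lines of Python; the system 2x + 3y = 7 and 4x - y = 7 is a made-up example:

```python
# Solve by elimination:
#   2x + 3y = 7
#   4x -  y = 7
# Each equation is stored as (a, b, c) meaning ax + by = c.
eq1 = (2, 3, 7)
eq2 = (4, -1, 7)

# Multiply the second equation by 3 so the y terms form a zero pair.
eq2_scaled = tuple(3 * t for t in eq2)        # (12, -3, 21)

# Add the equations: the y terms cancel, leaving 14x = 28.
x = (eq1[2] + eq2_scaled[2]) / (eq1[0] + eq2_scaled[0])

# Plug x back into the first equation to find y.
y = (eq1[2] - eq1[0] * x) / eq1[1]

print(x, y)  # 2.0 1.0
```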
## Week 13 – Math 10
This week we further continued unit 7/8 by learning about point-slope form and general form, both of which are written in a different form than the slope formula. Point-slope form is useful for doing quick algebra, and general form has no fractions or other “imperfections”.
Point-slope form looks like the following: $m(x - x_2) = y - y_2$
$m$ is the slope. $x$ is not connected to a point, but $x_2$ is. The same goes for $y$ and $y_2$. You can take an equation or “hints” to make a point-slope formula.
ex.
To further turn this into slope formula and make it easier to understand, you use algebra and steps.
continuing with ex.
General form is the “pretty useless” form, pretty, but useless. It contains no fractions or decimals, but doesn’t tell you about the graph itself. The equation usually looks something like ax ± by ± c = 0: the x term is always first, the y term is always second, the coefficient of x is always positive, and everything is always on one side equalling zero.
You can change all forms/formulas into general form.
ex.
1. slope formula.
2. Point-slope form.
3. slope formula with fractions.
General form uses no fractions and follows several rules as listed in the paragraph above. It is used to take away fractions and “imperfect” parts of a formula, but it doesn’t tell you anything until it is changed into another form.
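Clearing fractions to reach general form can be sketched in Python (the starting equation y = (2/3)x - 4 is a made-up example):

```python
from fractions import Fraction
from math import lcm

# Convert the slope-intercept equation y = (2/3)x - 4 into general form
# ax + by + c = 0 with integer coefficients.
m = Fraction(2, 3)           # slope
b0 = Fraction(-4)            # y-intercept

# Move everything to one side: mx - y + b0 = 0.
coeffs = [m, Fraction(-1), b0]

# Multiply through by the least common denominator to clear fractions.
mult = lcm(*(t.denominator for t in coeffs))
a, by, c = (int(t * mult) for t in coeffs)

# General form wants a positive x coefficient, so flip signs if needed.
if a < 0:
    a, by, c = -a, -by, -c

print(f"{a}x + ({by})y + ({c}) = 0")  # 2x + (-3)y + (-12) = 0
```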
## Science is Magic – The Black Snake
Lab Report:
We researched several different chemical reactions, but eventually settled on The Black Snake. We looked at the components to make sure it wasn’t dangerous, or at least not too dangerous. The Black Snake uses powdered sugar and sodium bicarbonate (baking soda) along with rubbing alcohol. These chemicals aren’t inherently bad, but alcohol fumes can be dangerous. Another danger is when lighting it on fire to commence the reaction, because fire can obviously be dangerous.
To make The Black Snake, you take 4 parts baking soda, and 1 part powdered sugar, and mix it together. Make a vessel out of preferably tinfoil filled with sand. Make a divot in the sand, and pour the mixture into it. Put rubbing alcohol around the edges of the mixture, and a little bit throughout the middle. Use a barbecue lighter to begin the reaction. A snake made of what looks like ash emerges from the white powder mixture. The snake is very light and airy because of gases produced during the experiment. The snake can grow quite long, but doesn’t always.
What is happening is that the sugar (C12H22O11) combusts into carbon dioxide and water vapour, and also decomposes into carbon; the decomposition forms the snake. The baking soda is added to help the experiment rise (2NaHCO3 → Na2CO3 + H2O + CO2), just like how it is used in baking. Reactions:
Sugar combusts into water vapour and carbon dioxide: C12H22O11 + 12O2 → 12CO2 + 11H2O
Decomposition into carbon and water vapour: C12H22O11 → 12C + 11H2O
Baking soda decomposes into carbon dioxide, water vapour, and sodium carbonate: 2NaHCO3 → Na2CO3 + CO2 + H2O
The outcome should be a black carbon snake. It should be light, look and feel like ash, and be quite delicate. The snake is not edible; you can touch it, but it is not recommended, as it can be very hot after burning. It is best done outdoors or under a fume hood because of the alcohol fumes produced. The snake and all components of the experiment can be thrown out in a household garbage.
The experiment can seem magical because when it is growing, it looks as if it is alive and moving, like a snake. It could also possibly look like a plant growing. It’s like creating life because of its natural seeming movement, even though it is just burning, rising chemicals.
Bibliography:
Maric, Vladimir, and Teh Jun Yi. “How to Make a Fire Snake from Sugar & Baking Soda.” WonderHowTo, WonderHowTo, 18 Oct. 2017, food-hacks.wonderhowto.com/how-to/make-fire-snake-from-sugar-baking-soda-0164401/.
“Hooked on Science: ‘Black Snake’ Experiment.” SeMissourian.com, 3 July 2013, www.semissourian.com/story/1983035.html.
“Carbon Sugar Snake.” KiwiCo, www.kiwico.com/diy/Science-Projects-for-Kids/3/project/Carbon-Sugar-Snake/2784.
Common Names of Some Chemical Compounds, chemistry.boisestate.edu/richardbanks/inorganic/common_names.htm.
“Sugar Snake.” MEL Science, melscience.com/US-en/experiments/sugar-snake/.
Experiments, Life Hacks &. “How to Make Fire Black Snake? Amazing Science Experiment.” YouTube, YouTube, 17 June 2018, www.youtube.com/watch?v=Y7snO0pA8Sk.
## Week 10 – Math 10
Although I was sick for half of this week and wasn’t able to fully learn what the rest of the class did, or at least learned it less “hands on”, I was still there for the first two days, so what we learned then is what I understand best. We learned how mapping notation can be put into the form of function notation.
Mapping notation is where you use a math “sentence” to find an output with the use of an input. (went over semi-briefly on last blog post).
ex.
ƒ : x → 3x – 2
name input changes into output
Function notation is generally the same thing as mapping notation, but it is written slightly differently and is helpful when finding inputs and outputs of functions. (Remember: functions are relations, but relations aren’t always functions.)
ex.
name ↓input changes into output
ƒ (x) = 3x – 2
“ƒ of x”
Both are used generally the same way, to find the output using an input. It is “ƒ of x” because the ƒ is the functions name, and the relation is a function.
Functions & Graphs
Using the inputs and outputs from mapping and function notation, you can plot points on a graph. The input is x, and the output is y. To get the output, you put the input in the correct spot on the opposite side.
ex.
f(x) = 3x + 1 → f(5) = 3(5) + 1 = 16
Using them, you can get coordinates. (x, y)
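The same function can be written as Python code; each input x yields exactly one output f(x), and feeding in several inputs produces coordinates to plot:

```python
# The function f(x) = 3x + 1 from above, written as Python code.
def f(x):
    return 3 * x + 1

# Each input has exactly one output, so this relation is a function.
print(f(5))  # 16

# Feeding in several inputs produces (x, y) coordinates to plot.
points = [(x, f(x)) for x in range(4)]
print(points)  # [(0, 1), (1, 4), (2, 7), (3, 10)]
```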
## Week 9 – Math 10
This week we had our midterm and spent most of the week studying for it. But on Friday, we learned about functions, a kind of relation.
A function is a special relation in which each input has exactly one output, no more. Every function is a relation, but not every relation is a function.
On a graph, if any two points share the same x-coordinate, then the relation is not a function. Each point has to have a different x-coordinate.
ex.
A function is unique, and is often named a single letter (f, g, h, etc.), and followed by x, changing into blank.
ex.
ƒ:x → 7x + 6
ƒ is its name, x is the input, the arrow signifies “changing into”, and the final numbers are the output.
## Week 8 – Math 10
This week we started our graphing and linear relations unit. One of the main things we learned was domain and range. The domain is all of the $x$ coordinates that the graph covers, and the range is all of the $y$ coordinates that are covered.
Domain and range can be shown in “curly brackets” such as in the following example.
{x|-4 ≤ x ≤ 7, x ∈ R}
Sometimes if the graph just contains a bunch of points, the domain and range can be given in specific numbers,
ex. D = {-2,0,1,4,7} or R = {1,3,4,9,12}
here’s what one of those graphs could look like:
But they can also be lines meaning their points can be anywhere on those lines,
ex. D = {x|-2 ≤ x ≤ 7, x ∈ R} or {y|1 ≤ y ≤ 12, y ∈ R}
here’s what one of those graphs could look like:
They can also be a line, but have no beginning and/or end. This graph would have lines with arrows to represent that it continues on.
ex. {x|x ∈ R} or {y|y ≤ 12, y ∈ R}
here’s what one of those graphs could look like:
When writing in these curly brackets, especially with line graphs, you need to form a “sentence”. You start with the axis you are talking about (x/y), then the possible points, and then finish with x ∈ R or y ∈ R, which means x/y is an element of the real numbers.
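For point graphs, the domain and range can be read off in Python; the points below are made up so that D and R match the example sets above:

```python
# Reading the domain and range off a relation given as a list of points.
points = [(-2, 1), (0, 3), (1, 4), (4, 9), (7, 12)]

domain = sorted({x for x, y in points})
range_ = sorted({y for x, y in points})   # "range" would shadow a builtin

print(domain)  # [-2, 0, 1, 4, 7]
print(range_)  # [1, 3, 4, 9, 12]
```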
## Week 4 – Math 10
This week was short, and on top of that, I missed a day because I was at a field trip for my science honours class. But even though it was a short week, I still learned more about trigonometry, specifically word problems and how to use them to find angles and side lengths, something we’ve been learning over the whole unit.
To find the angle of a triangle, you can use two side lengths for the equation (sin/cos/tan) xº $= \frac {side1}{side2}$, for example: sin xº $= \frac {5}{9}$. With that equation (using the example for the following), you find x by isolating it as in xº $= sin^{-1} (\frac {5}{9})$, and then you will have the value of xº.
To find a side length, you would use one side length and an angle for the equation (sin/cos/tan) xº $= \frac {n}{\text{side length}}$ or $\frac {\text{side length}}{n}$. For example: sin 31º $= \frac {n}{15}$. With that equation (using the example for the following), you find n by isolating it, in this case $n = 15 \cdot$ sin 31º. If it were sin 31º $= \frac {15}{n}$, you would use the equation $n \cdot$ sin 31º = 15, meaning you would have to divide both sides by sin 31º (cancelling it out on the left) to find n.
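Both computations can be checked with Python's math module, using the example numbers from above:

```python
import math

# Finding an angle: sin(x°) = 5/9, so x = sin⁻¹(5/9).
x_deg = math.degrees(math.asin(5 / 9))
print(round(x_deg, 1))  # ≈ 33.7

# Finding a side: sin(31°) = n/15, so n = 15 · sin(31°).
n = 15 * math.sin(math.radians(31))
print(round(n, 2))  # ≈ 7.73
```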
I made a simple word problem to find the side length of a triangle based on the height of a person and the suns angle to find the length of the persons shadow:
## Week 3 – Math 10
This week we began our unit on trigonometry. One of the things I learned was SOH CAH TOA, abbreviations that help you to memorize how to find side lengths using angles and equations.
Each set of abbreviations begins with a letter that describes the angle equation to use on your calculator (S=sin C=cos T=tan). Depending on the angle of the triangle you are using and what side lengths are given to you, you can choose the correct angle to use.
Each triangle has 3 sides, and when a base angle is given to you, these sides are given names. The longest of them is the hypotenuse, the side that the base angle sits on is called the adjacent side, and the final one, the opposite side, sits opposite the angle.
The other two letters in each abbreviation show you what sides to use and in what order (OH = opposite/hypotenuse, AH = adjacent/hypotenuse, OA = opposite/adjacent).
Even if you only have one side of the triangle and the angles (90 & other), you can find the side length you are looking for.
## Week 2 – Math 10
This week I learned how negative exponents work and how they affect their bases. Negative exponents, unlike positive exponents, don’t increase the number’s size. Normal exponents multiply a number over and over (ex. $3^3 = 3\cdot3\cdot3 = 27$) and negative exponents turn the number into a fraction (ex. $3^{-3} = \frac{1}{27}$).
You can turn it into a normal number by following the next steps. I will use $5^{-4}$ for this example.
First, if the exponent is negative, then you turn it into a fraction of $\frac{x}{1}$.
Then you move the power to the denominator, therefore making the exponent positive.
Alternatively, I learned that if the negative was originally on the bottom, you would then move it to the top.
Then you find out what the product of the power is and put that underneath a 1, in this case $\frac{1}{625}$
I learned that if it were for example $5x^{-4}$, then only the $x$ and its exponent would be moved to the denominator as in $\frac {5}{x^4}$
If it were $(5x)^{-4}$, then both would move to the bottom.
If it were $(5x^{-4})^{-4}$, it would equal $5^{-4}x^{16} = \frac{x^{16}}{625}$, because you multiply the exponents (and the 5 keeps its own exponent of $-4$).
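These rules can be verified exactly in Python with fractions, picking x = 2 as a concrete stand-in for the variable:

```python
from fractions import Fraction

# Exact checks of the exponent rules above (Fraction avoids float error).
x = Fraction(2)

# 5^-4 turns into 1/625.
assert Fraction(5) ** -4 == Fraction(1, 625)

# In 5x^-4, only x moves to the denominator: 5 * x^-4 = 5 / x^4.
assert 5 * x ** -4 == Fraction(5, 2 ** 4)

# In (5x)^-4, both move down: (5x)^-4 = 1 / (5x)^4.
assert (5 * x) ** -4 == Fraction(1, 10 ** 4)

# In (5x^-4)^-4, the exponents multiply: (5x^-4)^-4 = 5^-4 * x^16.
assert (5 * x ** -4) ** -4 == Fraction(5) ** -4 * x ** 16

print("all exponent rules check out")
```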
2019-05-27 09:18:00
http://www.physicsforums.com/showthread.php?p=3384328
# Finding volume bounded by paraboloid and cylinder
by iqjump123
Tags: cylinder, integral, paraboloid, volume
P: 57
1. The problem statement, all variables and given/known data: Find the volume bounded by the paraboloid $z = 2x^2 + y^2$ and the cylinder $z = 4 - y^2$. A diagram is included that shows the shapes overlaying one another, with coordinates at the intersections. (Will be given if necessary.)
2. Relevant equations: A double integral? Subtracting one function from another?
3. The attempt at a solution: I've seen previous threads involving volumes, but I'm still lost when I try my own problem. Most problems involving paraboloids start by changing to polar coordinates; should I do that for this one? I know it will end up being a double integral, but I am not sure how to set it up. Physics Forums has been a big help. Thanks!
Emeritus Sci Advisor HW Helper Thanks PF Gold P: 11,669 Are your equations correct? Because $z = 4 - y^2$ isn't the equation of a cylinder.
Quote by vela: Are your equations correct? Because $z = 4 - y^2$ isn't the equation of a cylinder.
Yes, it is. $z = 4 - y^2$ is a parabola in the yz-plane and, extended infinitely in the x-direction, is a parabolic cylinder, though not, of course, a circular cylinder.
Emeritus Sci Advisor HW Helper Thanks PF Gold P: 11,669 D'oh!
P: 57 Thanks for the reply. Yes, just like HallsofIvy mentioned, the equations are correct. At this point, I am still lost, however. Any other suggestions? Thanks!
Emeritus Sci Advisor HW Helper Thanks PF Gold P: 11,669 As you mentioned in your original post, you want to calculate something like $$V = \iint\limits_A [z_1(x,y)-z_2(x,y)]\,dy\,dx$$
P: 57 Hello vela, thanks for the reply! Below is an image of the problem that was given. I figured that the bounds of $y$ will stretch from 0 to $\sqrt{2-x^2}$, and $x$ will stretch from 0 to $\sqrt{2}$. Is this correct? Thanks!
Emeritus Sci Advisor HW Helper Thanks PF Gold P: 11,669 Why are the lower limits 0 for both x and y?
P: 57
Quote by vela Why are the lower limits 0 for both x and y?
Well I assumed so, since the problem shape starts from the 0 position for all 3 coordinate systems. Is this approach not correct?
Emeritus Sci Advisor HW Helper Thanks PF Gold P: 11,669 Does the original problem statement say the solid is bounded by the x=0, y=0, and z=0 planes or something equivalent? If it does, your limits look fine. I know the picture suggests this, but you never mentioned it in the original post, nor does it appear in your scan.
P: 57 Hey guys, I know this is bringing up an old topic, but I wanted to ask about something and make sure I approached the final equation correctly. To clear up the confusion from vela: yes, since the problem says to find the volume as indicated in the picture, my limits start from 0. Therefore, I set up $\int_0^{\sqrt{2}}\int_0^{\sqrt{2-x^2}} \big[(2x^2+y^2)-(4-y^2)\big]\,dy\,dx$. After evaluating this, I obtained $-\pi$ as my answer. The number makes sense, but the sign is wrong; a negative volume is obviously impossible. When I reversed the two functions, I indeed got $\pi$ as the answer. However, that doesn't make sense to me: wouldn't the $z_1$ function have to be the function of the paraboloid, with the volume being a subtraction of the cylinder function $z_2$ from $z_1$? Any clarification and a check of the final answer would be appreciated. Thanks guys!
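As a numerical sanity check (my own addition, not from the thread): over the quarter disk the parabolic cylinder $z = 4 - y^2$ is the top surface and the paraboloid $z = 2x^2 + y^2$ is the bottom, so top minus bottom gives $+\pi$ in the first octant; reversing the subtraction is what produces $-\pi$. A midpoint Riemann sum confirms this:

```python
import math

# Midpoint Riemann-sum check of the first-octant volume between the
# paraboloid z = 2x^2 + y^2 (bottom) and the cylinder z = 4 - y^2 (top).
# The surfaces meet where x^2 + y^2 = 2, so the shadow region is a
# quarter disk of radius sqrt(2).  The exact polar computation gives pi.
n = 400
h = math.sqrt(2) / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        if x * x + y * y <= 2:
            # (top) - (bottom) = (4 - y^2) - (2x^2 + y^2)
            total += (4 - 2 * x * x - 2 * y * y) * h * h

print(round(total, 3))  # ≈ 3.142, i.e. pi
```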
2014-07-22 15:44:20
https://hsm.stackexchange.com/questions/7291/notation-for-fiber-bundles-why-e-for-total-space
# Notation for fiber bundles - why E for total space?
I'm looking for info on why E is commonly used for the total space of a fiber bundle. I understand F (fiber) and B (base), but there doesn't seem to be any particularly obvious reason for choosing E.
2020-07-04 04:56:36
https://math.stackexchange.com/questions/2545525/stopping-time-tail-probability/2545817
# Stopping Time Tail Probability
Problem
Let $X_1,X_2,\dots$ be independent each with $P(X_j=1)=P(X_j=-1)=1/2$. Let $S_n=X_1+X_2+\cdots +X_n$ and $N>1$ be an integer. Define the stopping time $$T=\inf\{n: |S_n|=N\}$$ Show that there exists $c<\infty$, $0<\rho<1$ (depending on $N$) such that $$P(T>n)\leq c\rho^n$$
Attempt I'd like to solve this using first principles and without invoking Markov chain theory. Now the event $\{T>n\}=\{|S_k|< N,k=1,\dots,n\}=\{\max_{1\leq k\leq n} |S_k|<N \}$. However, I'm unable to turn this into anything useful. Can someone give me a hint on how to proceed?
Note that for every $n\geq 0$, we have $P(T>n+N\;|\;T>n) \leq 1-2^{-N}$. This is because starting from any point $x \in \{-N+1,...,N-1\}$, there is probability at least $2^{-N}$ of escaping this interval after $N$ steps.
Hence we see that $P(T>2 N) =P(T>2 N\;|\;T> N)P(T> N) \leq (1-2^{-N})^2$. Continuing inductively, we get that $P(T>k N) \leq (1-2^{-N})^k$, for every $k>0$.
Therefore, the claim holds, with $\rho=\big(1-2^{-N}\big)^{1/N}$ and, for instance, $c=\big(1-2^{-N}\big)^{-1}$: for any $n$, $P(T>n)\leq\big(1-2^{-N}\big)^{\lfloor n/N\rfloor}\leq c\rho^n$.
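The bound can be checked deterministically (no Monte Carlo) by propagating the walk's distribution over the interior states; this is an illustrative sketch with the arbitrary choice N = 3, not part of the original answer:

```python
# Exact check of the tail bound P(T > n) <= c * rho^n for the ±1 random
# walk with absorption at ±N, via dynamic programming on interior states.
N = 3
states = range(-N + 1, N)

rho = (1 - 2.0 ** -N) ** (1.0 / N)
c = 1 / (1 - 2.0 ** -N)

# p[s] = P(walk is at s and T > n); start at the origin.
p = {s: 0.0 for s in states}
p[0] = 1.0

for n in range(1, 61):
    new = {s: 0.0 for s in states}
    for s, mass in p.items():
        for step in (-1, 1):
            t = s + step
            if abs(t) < N:          # still strictly inside: not absorbed
                new[t] += mass / 2
    p = new
    survival = sum(p.values())      # = P(T > n)
    assert survival <= c * rho ** n + 1e-12

print("P(T > n) <= c*rho^n holds for n = 1..60")
```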
2020-02-19 04:30:53
https://mathjokes4mathyfolks.wordpress.com/page/2/
## Hiking Routes, Square Roots, and Trail Ratings
Not all those who wander are lost.
You may have read that line in Gandalf’s letter to Frodo Baggins in The Fellowship of the Ring, but nowadays you’re more likely to see it on the t-shirts and bumper stickers of hikers.
Hiking is a popular sport, and the 47 million Americans who reported that they’ve taken a hike in the past 12 months (Statista) had a lot of different trails to choose from: American Trails maintains a database of over 1,100 trails, and Backpacker’s list of America’s Best Long Trails offers an impressive 39,000 combined miles. Plus, there are thousands of miles of trail not on either of those lists. With so many options, how are you supposed to choose?
A wealth of information is provided for most hiking trails. But while some information — like distance and elevation gain — is absolute, other information leaves room for interpretation. What does it mean when the Craggy Pinnacle Trail just outside Asheville, NC, is described as a “moderate” hike? The Explore Asheville website says,
Moderate hikes could range anywhere from a few to ten miles with an elevation gain up to 2,000 feet.
By those standards, a three-mile hike with a 10% grade would be considered moderate. No, thank you.
Unfortunately, there is no standardized system for determining trail difficulty. Most of the time, the trail rating is a nebulous qualitative combination based on an examination of the terrain, trail conditions, length, elevation gain, and the rater’s disposition.
But I tip my hat to the good folks in Shenandoah National Park who have attempted to quantify this process. Their solution? The simple formula
$r = \sqrt{2gd}$
where g is the elevation gain (in feet) and d is the distance (in miles). The value of r then corresponds to a trail rating from the following table:
| Numerical Rating | Level of Difficulty | Estimated Average Pace (miles per hour) |
| --- | --- | --- |
| < 50 | Easiest | 1.5 |
| 50-100 | Moderate | 1.4 |
| 100-150 | Moderately Strenuous | 1.3 |
| 150-200 | Strenuous | 1.2 |
| > 200 | Very Strenuous | 1.2 |
Elevation gain is defined as the cumulative elevation gain over the entire hike. So if the hike climbs 300 feet over the first mile, then descends 500 feet over the next 2 miles, then goes back up 200 feet to return to the start, the elevation gain is reported as 300 + 200 = 500 feet.
Old Rag is one of the most popular hikes in northern Virginia. Known for the half-mile rock scramble near the top, this trail boasts an impressive 2,415 feet of elevation gain over 9.1 miles. Applying the formula,
$r = \sqrt{2 \cdot 2415 \cdot 9.1} \approx 209.6$
which means Old Rag’s level of difficulty would be “very strenuous.”
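The formula and table can be wrapped up in a short Python sketch (how to classify a value landing exactly on a cutoff, e.g. r = 200, is ambiguous in the table; this sketch pushes ties into the harder bucket):

```python
import math

# The Shenandoah rating formula r = sqrt(2 g d) and the difficulty table.
def trail_rating(gain_ft, distance_mi):
    return math.sqrt(2 * gain_ft * distance_mi)

def difficulty(r):
    for cutoff, label in [(50, "Easiest"), (100, "Moderate"),
                          (150, "Moderately Strenuous"), (200, "Strenuous")]:
        if r < cutoff:
            return label
    return "Very Strenuous"

r = trail_rating(2415, 9.1)        # Old Rag's gain and distance from above
print(round(r, 1), difficulty(r))  # ≈ 209.6 Very Strenuous
```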
This formula could lead to several activities for a middle or high school classroom:
• Draw an elevation map depicting a trail on which any type of hike (from easiest to very strenuous) would be possible, depending on how far a person hiked.
• With distance on the horizontal axis and elevation gain on the vertical axis, create a graph that shows the functions for easiest to very strenuous hikes. (See Figure 1.)
• If you were on a trail with an average elevation gain of 300 feet per mile, how long would you have to hike for it to be considered a moderately strenuous hike?
• If one 5-mile hike is rated “easiest” and another 5-mile hike is rated “strenuous,” what’s the minimum possible difference in elevation gains for the two trails?
Students could also do a comparison between this trail rating formula and the geometric mean, if you wanted to go really crazy.
Figure 1. The graphs for various difficulty ratings, using the Shenandoah trail rating formula.
Just as every good hike comes to an end, so must this blog post. But not before we laugh a little.
As it turns out, there’s a math joke about hiking…
An actuary has been walking for several hours when the trail ends at the edge of a river. Having no idea how to cross, she sees another hiker on the opposite bank, and she yells, “Hey, how do I get to the other side?”
The man across the river — a math professor — looks upstream, then downstream, then thinks a bit and finally says, “But you are on the other side!”
It’s a math joke about hiking as much as any joke about any topic is a math joke, if you insert the correct professions.
There’s a great non-math joke about hiking, too…
A fish is hiking through a reservoir when he walks into a wall. “Dam!” he says.
And there is a very mathematical list about hiking, which might be considered a joke if so many of the observations weren’t true…
Eight Mathematical Lessons from the Trail
1. A pebble in a hiking boot will migrate to the point of maximum irritation.
2. The distance to the trailhead where you parked remains constant as twilight approaches.
3. The sun sets at two-and-a-half times its normal rate when you’re trying to reach the trailhead before dark.
4. The mosquito population at any given location is inversely proportional to the effectiveness of your repellent.
5. Waterproof rainwear isn’t. But, it is 100% effective at containing sweat.
6. The width of backpack straps decreases with the distance hiked. To compensate, the weight of the backpack increases.
7. The ambient temperature increases proportionally to the amount of extra clothing in your backpack.
8. The weight in a backpack can never remain uniformly distributed.
Go take a hike!
## The Remote Associates (RAT) and Close Associates (CAT) Tests
The Remote Associates Test (RAT) is a test used to determine a person’s creative potential. When given a collection of three seemingly unrelated words, subjects are asked to supply a fourth word that is somehow related to each of the three stimulus words. For example, if you were given the words cottage, swiss, and cake, you’d answer cheese, because when combined with the three stimuli, you get cottage cheese, swiss cheese, and cheesecake.
To try your hand at one of these tests, you can head to remote-associates-test.com, or you can just keep on reading.
The verdict is out as to whether a high score on the RAT actually means you’re more creative. But what’s not in doubt is how much I love to solve, and to create, these items. It’s a good game for long car rides, and my wife, sons, and I can amuse ourselves for hours by creating and sharing them with one another.
For your enjoyment, I present the following mathematical RAT test: The fourth word related to the three stimulus words in each set is a common math term. Enjoy!
1. attack / acute / fish
2. inner / around / full
3. phone / one / mixed
4. powers / rotational / point
5. up / on / hot
6. tipping / blank / selling
7. even / duck / couple
8. black / Bermuda / love
9. inkhorn / limits / short
10. sub / hour / tolerance
11. world / television / infinite
12. ball / camp / data
13. Dracula / head / sheep
14. field / dead / stage
15. town / off / meal
16. air / hydro / geometry
17. common / X / fudge
18. key / bodily / dis
19. disaster / 51 / grey
20. ball / S / hairpin
The Close Associates Test (CAT) is a similar, yet completely fictitious, test that I just made up. Each item on a CAT test contains three words which do not appear to be unrelated in the least; in fact, they are so closely related that finding the fourth word they have in common is somewhat trivial. The following mathematical CAT test will not measure your creativity, though it might reasonably determine the depth of your mathematical vocabulary. Good luck!
1. acute / obtuse / right
2. scalene / equilateral / isosceles
3. sine / logistic / regression
4. real / irrational / whole
5. proper / improper / reduced
6. convergent / infinite / divergent
7. in / circum / ortho
8. Pythagorean / De Moivre’s / Ramsey’s
9. exponential / differential / Diophantine
10. convex / concave / regular
11. golden / common / test
12. square / cube / rational
Answers (RAT):
1. angle
2. circle
3. number
4. axis
5. line
6. point
7. odd
8. triangle
9. term
10. zero
11. series
12. base
13. count
14. center
15. square
16. plane
17. factor
18. function
19. area
20. curve
Answers (CAT):
1. angle
2. triangle
3. curve
4. number
5. fraction
6. series
7. center
8. theorem
9. equation
10. polygon
11. ratio
12. root
Feel free to submit more triples for the RAT or CAT test in the comments.
## My Insecurity Over Security Codes
Every time I attempt to access one of my company’s applications via our single sign-on (SSO) system, I’m required to request a validation code that is then sent to my smartphone, and then I enter that code on the login page.
It’s a minor nuisance that drives me insane.
The purpose of the codes is to provide an additional level of security, but given how un-random the codes seem to be, it doesn’t feel very secure to me. This screenshot shows some of the codes that I’ve received recently:
Here’s what I’ve observed:
• Every security code contains 6 digits.
• The first 3 digits in the code form either an arithmetic or geometric sequence, or the first 3 digits contain a repeated digit.
• Similarly, the last 3 digits in the code form either an arithmetic or geometric sequence, or the last 3 digits contain a repeated digit.
As an example, one of the codes in the screenshot above is 421774. The first 3 digits form the (descending) geometric sequence 4, 2, 1, and the digit 7 appears twice in the second half of the code.
I believe the reason for these patterns is to make the codes more memorable to those of us who have to transcribe them from our phones to our laptops.
This got me thinking. The likelihood of someone correctly guessing a six-digit code is 1 in 1,000,000. But what is the likelihood that someone could correctly guess a six-digit code if it adheres to the rules above?
If you’d like to answer this question on your own, stop reading here. To put some space between you and my solution, here’s a security-related joke:
“I don’t understand how someone stole my identity,” Lily said. “My PIN is so secure!”
“What is it?” her friend asked.
“The year of Knut Långe’s death,” Lily replied.
“Who is Knut Långe?”
“A King of Sweden who usurped the throne from Erik Eriksson.”
“And what year did he die?”
“1234.”
(Incidentally, Data Genetics reviewed 3.4 million stolen website passwords, and they found that 1234 was the most popular four-digit code. The researchers claimed that they could use this information to make predictions about ATM PINs, too, but I don’t think so. All this shows is that 1234 is the most commonly stolen password, and therefore this inference suffers from survivorship bias. Without having data on all the codes that were not stolen, it’s impossible to make a reasonable claim. But, I digress.)
To determine the number of validation codes that adhere to the patterns I observed, I started by counting the number of arithmetic sequences. With only 3 digits, there are 20 possible sequences:
• 012
• 024
• 036
• 048
• 123
• 135
• 147
• 159
• 234
• 246
• 258
• 345
• 357
• 369
• 456
• 468
• 567
• 579
• 678
• 789
But each of those could also appear in reverse (210, 975, etc.), giving a total of 40.
There are far fewer geometric sequences; in fact, only 3 of them:
• 124
• 139
• 248
And again, each of those could appear in reverse, giving a total of 6.
Finally, there are 10 × 9 × 8 = 720 three-digit numbers with no repeated digits, which means there are 1,000 − 720 = 280 numbers with a repeated digit. (Here, “number” refers to any string of 3 digits, including those that start with a 0, like 007 or 092.)
Consequently, there are 40 + 6 + 280 = 326 possible combinations for the first 3 digits and also 326 combinations for the last 3 digits, which gives a total of 326 × 326 = 106,276 possible validation codes.
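If you’d rather not trust the casework, the count is small enough to verify by brute force. Here’s a quick Python check (assuming, as the observed codes suggest, that the geometric sequences are restricted to whole-number ratios):

```python
# Count the 3-digit "halves" that qualify: a repeated digit, an arithmetic
# sequence, or a geometric sequence (whole-number ratio, either direction).
def qualifies(s):
    a, b, c = (int(ch) for ch in s)
    if len(set(s)) < 3:                       # repeated digit, e.g. 774
        return True
    if b - a == c - b:                        # arithmetic, ascending or descending
        return True
    if a and b and b * b == a * c and (b % a == 0 or a % b == 0):
        return True                           # geometric, e.g. 124 or 421
    return False

halves = sum(qualifies(f"{n:03d}") for n in range(1000))
codes = halves * halves
print(halves, codes)   # 326 qualifying halves, 106,276 possible codes
```

The whole-number-ratio restriction matters: without it, triples like 4, 6, 9 (ratio 3/2) would also count as geometric, and the total would no longer match the hand count above.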
That means that it would be about 10× more likely for a phisher to correctly guess a validation code that follows these rules than to guess a completely random six-digit code. But said another way, the odds are still significantly against a phisher who’s trying to steal my code. And quite frankly, if someone wants to exert that kind of effort to pirate my access to Microsoft Word online, well, I say, go for it.
## 8-15-17
Today is a glorious day!
The date is 8/15/17, which is mathematically significant because those three numbers represent a Pythagorean triple:
$8^2 + 15^2 = 17^2$
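Dates like this are rare enough to enumerate. A few lines of Python (treating dates as m/d/yy with two-digit years, an assumption of mine) list every Pythagorean-triple date in a century:

```python
from itertools import product

# Find every m/d/yy date for which month^2 + day^2 = year^2.
triples = [(m, d, y)
           for m, d, y in product(range(1, 13), range(1, 32), range(100))
           if m * m + d * d == y * y]

for m, d, y in triples:
    print(f"{m}/{d}/{y:02d}")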
But August 15 has also been historically important.
But as of today, August 15 has one more reason to brag: It’s the official publication date of a bestseller-to-be…
Like its predecessor, this second volume of math humor contains over 400 jokes. Faithful readers of this blog may have seen a few of them before, but most are new. And if you own a copy of the original Math Jokes 4 Mathy Folks, well, fear not — you won’t see any repeats.
What kind of amazing material will you find on the pages of More Jokes 4 Mathy Folks? There are jokes about school…
An excited son says, “I got 100% in math class today!”
“That’s great!” his mom replies. “On what?”
The son says, “50% on my homework, and 50% on my quiz!”
There are jokes about mathematical professions…
An actuary, an underwriter, and an insurance salesperson are riding in a car. The salesperson has his foot on the gas, the underwriter has her foot on the brake, and the actuary is looking out the back window telling them where to go.
There are Tom Swifties…
“13/6 is a fraction,” said Tom improperly.
And, of course, there are pure math jokes to amuse your inner geek…
You know you’re a mathematician if you’ve ever wondered how Euler pronounced Euclid.
Hungry for more? Sorry, you’ll have to buy a copy to sate that craving.
To purchase a copy for yourself or for the math geeks in your life, visit Amazon, where MoreJ4MF is already getting rave reviews:
For quantity discounts, visit Robert D. Reed Publishers.
## Mo’ Math Limericks
I’ve posted limericks to this blog before. Quite a few, in fact.
But a friend recently sent me The Mathematical Magpie, a collection of math essays, stories and poems assembled by Clifton Fadiman and published by Simon and Schuster in 1962. Coincidentally, one section of the book is titled Comic Sections, the name of a mathematical joke book written by Des MacHale in 1993. (I contacted Professor MacHale several years ago, and he suggested that we swap books. Best. Trade. Ever.) Des MacHale is Emeritus Professor at the University of Cork, a mere 102 km from Limerick, Ireland… which brings us full circle to today’s topic.
The Mathematical Magpie contains quite a few limericks, one of which you have likely heard before:
There was a young lady named Bright,
Who traveled much faster than light.
She started one day
In the relative way,
And returned on the previous night.
Despite a variety of other claims, that limerick was written by Professor A. H. Reginald Buller, F.R.S., a biologist who received £2 when the poem was published in Punch, and he “was more excited at the check than he was later when his book on fungi was published.”
You may not, however, be familiar with Professor Buller’s follow-up limerick about Miss Bright:
To her friends said the Bright one in chatter,
“I have learned something new about matter:
As my speed was so great
Much increased was my weight,
Yet I failed to become any fatter!”
Here are a few other limericks that appear in The Mathematical Magpie:
There was an old man who said, “Do
Tell me how I’m to add two and two?
I’m not very sure
That it doesn’t make four —
But I fear that is almost too few.
Anon.
The topologist’s mind came unguided
When his theories, some colleagues derided.
Out of Möbius strips
Paper dolls he now snips,
Non-Euclidean, closed, and one-sided.
Hilbert Schenck, Jr.
A mathematician named Ray
Says extraction of cubes is child’s play.
You don’t need equations
Or long calculations
Just hot water to run on the tray.
L. A. Graham
Flappity, floppity, flip!
The mouse on the Möbius strip.
The strip revolved,
The mouse dissolved
In a chronodimensional skip.
Frederick Winsor
And though it’s not a limerick, this one is just too good not to include for your enjoyment:
A diller, a dollar,
A witless trig scholar
On a ladder against a wall.
If length over height
Gives an angle too slight,
The cosecant may prove his downfall.
L. A. Graham
Finally, I leave you with a MJ4MF original:
With my head in an oven
And my feet on some ice,
I’d say that, on average,
I feel rather nice!
Got any math poems or limericks you’d like to share? We’d love to hear them!
## Just Sayin’
Heidi Lang is one of the amazing teachers at Thomas Jefferson Elementary School. When she’s not challenging my sons with interesting puzzles and problems, she’s entertaining them with jokes that make them think. On her classroom door is a sign titled Just Sayin’, under which hangs a variety of puns. Here’s one of them:
Last night, I was wondering why I couldn’t see the sun. Then it dawned on me.
That reminds me of one of my favorite jokes:
I wondered why the baseball kept getting larger. Then it hit me.
Occasionally, one of her puns has a mathematical twist:
Did you know they won’t be making yardsticks any longer?
And this is one of her mathematical puns, though I’ve modified it a bit:
• When he picked up a 14‑pound rock and threw it 5,280 feet, well, that was a real milestone.
I so enjoy reading Ms. Lang’s Just Sayin’ puns that I decided to create some of my own. I suspect I’ll be able to hear you groan…
• He put 3 feet of bouillon in the stockyard.
• When the NFL coach went to the bank, he got his quarterback.
• She put 16 ounces of poodle in the dog pound.
• The accountant thought the pennies were guilty. But how many mills are innocent?
• His wife felt bad when she hit him in the ass with 2⅓ gallons of water, so she gave him a peck on the cheek.
• Does she know that there are 12 eggs in a carton? Sadly, she dozen.
• When his daughter missed the first 1/180 of the circle, he gave her the third degree.
• She caught a fish that weighed 4 ounces and measured 475 nm on the visible spectrum. It was a blue gill.
• When Rod goes to the lake, he uses a stick that is 16.5 feet long. He calls it his fishing rod.
• What is a New York minute times a New York minute? Times Square.
• I wanted to dance after drinking 31 gallons of Budweiser, so I asked the band to play the beer barrel polka.
• The algebra teacher was surprised by the mass when she tried to weigh the ball: b ounces.
And because this post would feel incomplete without it, here’s probably the most famous joke of this ilk:
• In London, a pound of hamburger weighs about a pound.
## MORE Jokes 4 Mathy Folks
I know, I know.
You remember the day that you bought Math Jokes 4 Mathy Folks. You headed directly home from the bookstore and read it cover to cover. Then, once the tears of laughter had dried, you read it again. And sure, you were a little concerned that if you read it a third time, well, you might be accused of neglecting your family. But social reputation be damned… you’re a mathy folk, and neglecting people is what we do. So you returned to the first page and gave it one more go.
That day was several years ago.
Today, MJ4MF occupies a position of honor on your bathroom shelf, and while conducting your business you occasionally open to a random page, hoping to rediscover an old chestnut. But alas, you’ve read it so many times, you have every joke memorized, and the cover is falling off.
So, now what?
Well, don’t worry. You’ve waited patiently, and your patience is about to be rewarded. Announcing the release of the second volume in the MJ4MF franchise…
Head over to Amazon to order a copy today! Officially, it isn’t available until August 15, 2017 (bonus points if you know why that date was selected as the publication date), but you can get it now, and you’ll have plenty of time to memorize the jokes before the first day of school.
(And while you’re there, you should probably buy a replacement copy of Math Jokes 4 Mathy Folks, too. Get a new one with its cover intact. You don’t want to look like someone who doesn’t take care of your books, do you? Of course not. And besides, purchasing another copy for you will boost the sales ranking for me. Win-win.)
So, what will you find in this new collection? Over 400 jokes, from every branch of mathematics.
Pentagon Hexagon Oregon
An excited son says, “I got 100% in math class today!”
“That’s great!” his mom replies. “On what?”
The son says, “50% on my homework, and 50% on my quiz!”
What is PA + PN + LA + LN?
A (P + L)(A + N) that’s been FOILed.
Heck, there are even jokes about other counting systems…
What happened in the binary race?
Zero won.
And what won’t you find in this new collection? You won’t find a single one of the 400+ jokes that were in the original Math Jokes 4 Mathy Folks. That’s right, this collection is 100% entirely new!
Don’t delay! Be the coolest kid on your block by ordering a copy of MORE Jokes 4 Mathy Folks today!
# solar cell diode equation
A solar cell is a semiconductor p-n junction diode, normally operated without an external bias, that provides electrical power to a load when illuminated. Sunlight is incident on the front of the cell, and the device is designed to absorb that light and convert the solar energy into electrical energy. The derivation of the ideal diode equation is covered in many textbooks; the treatment here is particularly applicable to photovoltaics.

## The ideal diode equation

The Shockley diode equation, or diode law, named after transistor co-inventor William Shockley of Bell Telephone Laboratories, gives the I-V characteristic of an idealized diode in either forward or reverse bias:

$$I=I_{0}\left(e^{\frac{q V}{k T}}-1\right)$$

where:

- I = the net current flowing through the diode;
- I0 = the "dark saturation current", the diode leakage current density in the absence of light;
- V = the applied voltage across the terminals of the diode;
- q = the absolute value of the electron charge;
- k = Boltzmann's constant; and
- T = the absolute temperature (K).

At 300 K, kT/q = 25.85 mV, the "thermal voltage".

The derivation uses certain assumptions about the cell. The model is one-dimensional: the diode itself is three-dimensional, but the n-type and p-type regions are treated as infinite sheets, so the properties change in only one dimension. All recombination is assumed to occur band-to-band or via traps in the bulk regions, so recombination in the depletion region is ignored.

The dark saturation current I0 is an extremely important parameter which differentiates one diode from another. I0 is a measure of the recombination in a device: a diode with larger recombination will have a larger I0. It is also strongly dependent on the device temperature; because I0 changes rapidly with temperature, increasing the temperature makes the diode "turn on" at lower voltages, and for a given current the silicon diode curve shifts by approximately 2 mV/°C.

## The ideality factor

For actual diodes, the expression becomes:

$$I=I_{0}\left(e^{\frac{q V}{n k T}}-1\right)$$

where n is the ideality factor, a number between 1 and 2 which typically increases as the current decreases. In practice there are second-order effects, so that the diode does not follow the simple diode equation, and the ideality factor provides a way of describing them. Note, though, that varying the ideality factor while holding I0 fixed is misleading, because the two parameters are not independent: any physical effect that increases the ideality factor also substantially increases the dark saturation current, so a device with a high ideality factor typically has a *lower* turn-on voltage.

## The illuminated solar cell

In the dark, the solar cell simply acts as a diode. In the light, the photocurrent can be thought of as a constant current source, added to the I-V characteristic of the diode and flowing in the diode reverse-bias direction:

$$I=I_{s}\left(e^{V_{a} / V_{t}}-1\right)-I_{ph}$$

where I_s is the saturation current of the diode and I_ph is the photocurrent, which is assumed to be independent of the applied voltage V_a. The short-circuit current, at zero voltage, equals I_sc = -I_ph. In terms of current density, the same ideal characteristic is often written

$$J(V)=J_{sc}-J_{0}\left(e^{\frac{q V}{k T}}-1\right)$$

## Circuit models

A solar cell can be analyzed from its I-V characterization with or without illumination. The most common circuit model is the single-diode (five-parameter) model: a photo-generated controlled current source I_ph, a diode described by the single-exponential Shockley equation, a shunt resistance R_sh, and a series resistance R_s modeling the power losses. The resulting equation is

$$I=I_{L}-I_{0}\left(e^{\frac{V+I R_{s}}{n N_{s} V_{th}}}-1\right)-\frac{V+I R_{s}}{R_{sh}}$$

where N_s is the number of series-connected cells and V_th is the thermal voltage. The cell current appears on both sides of the equation, forming an algebraic loop: the current is an implicit function of itself. It can be solved in closed form with the Lambert W function (the inverse of f(w) = w·exp(w)), iteratively with the Newton-Raphson technique, or conveniently in a tool such as Simulink. The equation can also be rearranged with basic algebra to determine the PV voltage for a given current.

A more detailed two-diode model adds a second diode with ideality factor 2 alongside the first (ideality factor 1):

$$J=J_{L}-J_{01}\left(e^{\frac{q(V+J R_{s})}{k T}}-1\right)-J_{02}\left(e^{\frac{q(V+J R_{s})}{2 k T}}-1\right)-\frac{V+J R_{s}}{R_{shunt}}$$

Practical measurements of the illuminated two-diode equation are difficult, as small fluctuations in the light intensity overwhelm the effects of the second diode, so J02 is usually obtained by solving the two-diode equation at open circuit; this is what solcore's `calculate_J02_from_Voc(J01, Jsc, Voc, T, R_shunt)` does. Both saturation currents are immediate ingredients of the efficiency of a solar cell, and they can also be determined from photoluminescence (PL) measurements, which allow fast feedback: the optical diode ideality factor obtained from PL can be compared with electrical measurements on finished cells. (Note that the usually taught theory of solar cells assumes an electrically homogeneous cell.)

## Bypass diodes

A shaded or soiled solar cell cannot pass as much current or voltage as an unaffected cell, and at night or in deep shade, cells tend to draw current from the batteries rather than sending current to them. Bypass diodes are a solution for this partial shading and soiling. They are arranged in reverse bias between the positive and negative output terminals of a group of cells, and they have no effect on normal output. Preferably there would be one bypass diode for every solar cell, but this is more expensive, so in practice there is one diode per small group of series-connected cells: in a 60-cell panel, typically one bypass diode in parallel with every 20 cells; in a 72-cell panel, one with every 24 cells.
Also assume that one-dimensional derivation but the concepts can be extended to two and three-dimensional notation devices! Based on the front of the diode been made for estimation of cell current using Newton-Raphson iterative technique is! N '' term in the diode law is illustrated for silicon - changes. Solar cells always assumes an electrically homogeneous cell one-dimensional derivation but the concepts introduced earlier this! Parameters are independent but they are not orders over$ 25 shipped by Amazon on! More detail on the IV curve if I0 does not change with temperature resulting in the simulation it required... Could also be rearranged using basic algebra to determine the optical diode ideality,! Treated as a function of voltage here is particularly applicable to photovoltaics and uses about the cell circuit models modeling! These equations can also be rearranged using basic algebra to determine the voltage! In more detail on the Shockley diode model model of the diode resulting IV are! P-V characteristics from a single diode model is a measure of the simple diode equation is in., Jan 5 understanding of solar photovoltaic cell is shown in Figure 6 which equals I sc is! Useful to connect intuition with a quantitative treatment optimization could also be optimized for analysis modeling... '' term in the diode reverse bias direction the device temperature of industrial silicon solar cells assumes... At lower voltages equation described solar cell diode equation, ranging from 1-2, that increases with current... Models for modeling of solar cells and PV modules, it is very to! The current through a diode Figure 1. circuit models for modeling of cells... Diode ideality factor changes the shape of the ideal diode current of the diode ! Similarly, mechanisms that change the ideality factor, a number between 1 and 2 which increases... To determine the PV voltage based on a given current, the solar cell from I-V characterization with. 
The resulting IV curves are misleading include an n '' term in the denominator of the parameter. Model shown in Figure 6 IV curves are misleading let us use the intuition. Derivation of the simple diode equation is covered in many textbooks derivation of the solar cell solar cells will reviewed... Temperature and ideality factor, ranging from 1-2, solar cell diode equation increases with decreasing current which equals sc! Voltage of the most used solar cell simply acts as a modification to the basic ideal diode of! The efficiency of a PV cell is shown in Figure 3.1 chapter and mathematically derive the characteristics. Function of voltage algebra to determine the optical diode ideality factor, a between. Is modeled using electronic circuits based on the Shockley diode model is measure. Equations can solar cell diode equation be rearranged using basic algebra to determine the optical diode ideality factor from PL measurements and to! The p-n diode solar cell and can be extended to two and three-dimensional and! - current changes the shape of the recombination in a device, you have developed an understanding of cells. That is mainly intuitive an n '' term in the dark saturation current a mathematical model to the. They are not the thermal voltage '' the simulation it is very useful to connect with! The resulting IV curves are misleading this chapter Shockley equation of the diode!
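The diode law discussed above is easy to evaluate numerically. A minimal Java sketch, with illustrative values for I0, n and the thermal voltage (none taken from any specific device or solar library):

```java
// Minimal sketch of the diode law I = I0 * (exp(V / (n * Vt)) - 1).
// I0 and n are illustrative inputs, not values from any particular cell.
public class DiodeDemo {
    static final double VT = 0.02585; // thermal voltage kT/q at ~300 K, in volts

    // Current through the diode at voltage v, saturation current i0, ideality factor n
    static double diodeCurrent(double v, double i0, double n) {
        return i0 * (Math.exp(v / (n * VT)) - 1.0);
    }

    public static void main(String[] args) {
        // A larger I0 (more recombination) gives more current at the same voltage,
        // which is what shifts the turn-on voltage of the I-V curve.
        System.out.println(diodeCurrent(0.6, 1e-12, 1.0));
        System.out.println(diodeCurrent(0.6, 1e-9, 1.0));
    }
}
```

Comparing the two printed values shows the I0 dependence described above; varying `n` instead changes the steepness of the exponential.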
https://choco-solver.readthedocs.io/en/latest/2_modelling.html
# Modeling¶
## The Model¶
The object Model is the key component. It is built as follows:
Model model = new Model();
or:
Model model = new Model("my problem");
This should be the first instruction, prior to any other modeling instructions, as it is needed to declare variables and constraints.
## Variables¶
### Principle¶
A variable is an unknown, mathematically speaking. The goal of a resolution is to assign a value to each variable. The domain of a variable, that is, the (super)set of values it may take, must be defined in the model.
Choco 4 includes several types of variables: BoolVar, IntVar, SetVar and RealVar. Variables are created using the Model object. When creating a variable, the user can specify a name to help reading the output.
### Integer variables¶
An integer variable is an unknown whose value should be an integer. Therefore, the domain of an integer variable is a set of integers (representing possible values). To create an integer variable, the Model should be used:
// Create a constant variable equal to 42
IntVar v0 = model.intVar("v0", 42);
// Create a variable taking its value in [1, 3] (the value is 1, 2 or 3)
IntVar v1 = model.intVar("v1", 1, 3);
// Create a variable taking its value in {1, 3} (the value is 1 or 3)
IntVar v2 = model.intVar("v2", new int[]{1, 3});
It is then possible to build directly arrays and matrices of variables having the same initial domain with:
// Create an array of 5 variables taking their value in [-1, 1]
IntVar[] vs = model.intVarArray("vs", 5, -1, 1);
// Create a matrix of 5x6 variables taking their value in [-1, 1]
IntVar[][] vm = model.intVarMatrix("vm", 5, 6, -1, 1);
Important
It is strongly recommended to define an initial domain that is close to the expected values instead of defining unbounded domains like [Integer.MIN_VALUE, Integer.MAX_VALUE], which may lead to:
• an incorrect domain size (Integer.MAX_VALUE - Integer.MIN_VALUE + 1 = 0 due to integer overflow),
• numeric overflow/underflow in operations during propagation.
If an undefined domain is really required, the following range should be considered: [IntVar.MIN_INT_BOUND, IntVar.MAX_INT_BOUND]. Such an interval defines 42949673 values, from -21474836 to 21474836.
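The "incorrect domain size" warning above comes from plain int overflow, which can be verified without Choco (class and method names below are illustrative):

```java
// Demonstrates the integer overflow behind the "incorrect domain size" warning:
// computing (ub - lb + 1) with int arithmetic wraps around to 0 for the full int range.
public class DomainSizeDemo {
    static int domainSize(int lb, int ub) {
        return ub - lb + 1; // overflows when ub - lb exceeds Integer.MAX_VALUE
    }

    public static void main(String[] args) {
        System.out.println(domainSize(Integer.MIN_VALUE, Integer.MAX_VALUE)); // prints 0
        System.out.println(domainSize(-21474836, 21474836));                  // prints 42949673
    }
}
```

The second call shows why the recommended [IntVar.MIN_INT_BOUND, IntVar.MAX_INT_BOUND] range stays safely within int arithmetic.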
There exist different ways to encode the domain of an integer variable.
#### Bounded domain¶
When the domain of an integer variable is said to be bounded, it is represented through an interval of the form $$[\![a,b]\!]$$ where $$a$$ and $$b$$ are integers such that $$a <= b$$. This representation is pretty light in memory (it requires only two integers) but it cannot represent holes in the domain. For instance, if we have a variable whose domain is $$[\![0,10]\!]$$ and a constraint enables to detect that values 2, 3, 7 and 8 are infeasible, then this learning will be lost as it cannot be encoded in the domain (which remains the same).
To specify you want to use bounded domains, set the boundedDomain argument to true when creating variables:
IntVar v = model.intVar("v", 1, 12, true);
Note
When using bounded domains, branching decisions must either be domain splits or bound assignments/removals. Indeed, assigning a bounded variable to a value strictly comprised between its bounds may result in an infinite loop because such branching decisions will not be refutable.
#### Enumerated domains¶
When the domain of an integer variable is said to be enumerated, it is represented through the set of possible values, in one of two forms:
• $$[\![a,b]\!]$$, where $$a$$ and $$b$$ are integers such that $$a <= b$$,
• {$$a,b,c,..,z$$}, where $$a < b < c ... < z$$.
Enumerated domains provide more information than bounded domains but are heavier in memory (the domain usually requires a bitset).
To specify you want to use enumerated domains, either set the boundedDomain argument to false when creating variables by specifying two bounds or use the signature that specifies the array of possible values:
IntVar v = model.intVar("v", 1, 4, false);
IntVar v = model.intVar("v", new int[]{1,2,3,4});
Modelling: Bounded or Enumerated?
The choice of domain types may have strong impact on performance. Not only the memory consumption should be considered but also the used constraints. Indeed, some constraints only update bounds of integer variables, using them with bounded domains is enough. Others make holes in variables’ domain, using them with enumerated domains takes advantage of the power of their filtering algorithm. Most of the time, variables are associated with propagators of various power. The choice of domain representation should then be done on a case by case basis.
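The memory trade-off described above can be illustrated in plain Java (a toy sketch, not Choco's internal classes; names are illustrative): a bounded domain stores only two ints and cannot record a hole, while an enumerated domain backed by a bitset can.

```java
import java.util.BitSet;

// Toy illustration of bounded vs enumerated domain representations.
public class DomainRepresentationDemo {
    // Bounded domain: only two ints; removing an interior value
    // cannot be recorded, so the apparent size is unchanged.
    static int boundedSize(int lb, int ub) {
        return ub - lb + 1;
    }

    // Enumerated domain: one bit per value; the hole is recorded.
    static int enumeratedSizeAfterRemoving(int lb, int ub, int removed) {
        BitSet dom = new BitSet();
        dom.set(lb, ub + 1); // values lb..ub
        dom.clear(removed);  // record the hole
        return dom.cardinality();
    }

    public static void main(String[] args) {
        System.out.println(boundedSize(0, 10));                    // 11: the hole at 5 is lost
        System.out.println(enumeratedSizeAfterRemoving(0, 10, 5)); // 10: the hole is kept
    }
}
```

This mirrors the [[0,10]] example from the bounded-domain section: a constraint removing 5 leaves the bounded representation unchanged.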
### Boolean variable¶
Boolean variables, BoolVar, are specific IntVar that take their value in $$[\![0,1]\!]$$. The advantage of BoolVar is twofold:
• they can be used to state whether or not a constraint should be satisfied (reification),
• their domain, and some filtering algorithms, are optimized.
To create a new boolean variable:
BoolVar b = model.boolVar("b");
### Set variables¶
A set variable, SetVar, represents a set of integers, i.e. its value is a set of integers. Its domain is defined by a set interval [LB,UB] where:
• the lower bound, LB, is an ISet object which contains integers that figure in every solution,
• the upper bound, UB, is an ISet object which contains integers that potentially figure in at least one solution.
Initial values for both LB and UB should be such that LB is a subset of UB. Then, decisions and filtering algorithms will remove integers from UB and add some others to LB. A set variable is instantiated if and only if LB = UB.
A set variable can be created as follows:
// Constant SetVar equal to {2,3,12}
SetVar x = model.setVar("x", new int[]{2,3,12});
// SetVar representing a subset of {1,2,3,5,12}
SetVar y = model.setVar("y", new int[]{}, new int[]{1,2,3,5,12});
// possible values: {}, {2}, {1,3,5} ...
// SetVar representing a superset of {2,3} and a subset of {1,2,3,5,12}
SetVar z = model.setVar("z", new int[]{2,3}, new int[]{1,2,3,5,12});
// possible values: {2,3}, {2,3,5}, {1,2,3,5} ...
### Real variables¶
The domain of a real variable is an interval of doubles. Conceptually, the value of a real variable is a double. However, it uses a precision parameter for floating-point computation, so its actual value is generally an interval of doubles whose size is constrained by the precision parameter. Real variables have a specific status in Choco 4, which uses the Ibex solver to define constraints over them.
A real variable is declared with three doubles defining its bounds and a precision:
RealVar x = model.realVar("x", 0.2d, 3.4d, 0.001d);
### Views: Creating variables from constraints¶
When a variable is defined as a function of another variable, views can be used to make the model shorter. In some cases, the view has a specific (optimized) domain representation. Otherwise, it is simply a modeling shortcut to create a variable and post a constraint at the same time. A few examples:
x = y + 2 :
IntVar x = model.intOffsetView(y, 2);
x = -y :
IntVar x = model.intMinusView(y);
x = 3*y :
IntVar x = model.intScaleView(y, 3);
Views can be combined together, e.g. x = 2*y + 5 is:
IntVar x = model.intOffsetView(model.intScaleView(y,2),5);
We can also use a view mechanism to link an integer variable with a real variable.
IntVar ivar = model.intVar("i", 0, 4);
double precision = 0.001d;
RealVar rvar = model.realIntView(ivar, precision);
This code makes it possible to embed an integer variable in a constraint that is defined over real variables.
## Constraints¶
### Constraints and propagators¶
#### Main principles¶
A constraint is a logic formula defining allowed combinations of values for a set of variables, i.e., restrictions over variables that must be respected in order to get a feasible solution. A constraint is equipped with a (set of) filtering algorithm(s), named propagator(s). A propagator removes, from the domains of the target variables, values that cannot correspond to a valid combination of values. A solution of a problem is a variable-value assignment verifying all the constraints.
Constraints can be declared in extension, by defining the valid/invalid tuples, or in intension, by defining a relation between the variables. For a given requirement, there can be several constraints/propagators available. A widely used example is the AllDifferent constraint, which ensures that all its variables take a distinct value in a solution. Such a rule can be formulated using:
• a clique of basic inequality constraints,
• a generic table constraint (an extension constraint that lists the valid tuples),
• a dedicated global constraint analysing:
• instantiated variables (Forward checking propagator),
• variable domain bounds (Bound consistency propagator),
• variable domains (Arc consistency propagator).
Depending on the problem to solve, the efficiency of each option may be dramatically different. In general, we tend to use global constraints, which capture a good part of the problem structure. However, these constraints often model problems that are inherently NP-complete, so only partial filtering is generally performed in order to keep the algorithms polynomial in time. This is, for example, the case of the NValue constraint, one aspect of which relates to the "minimum hitting set" problem.
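As a plain-Java illustration of the first option (independent of Choco; the helper below is hypothetical), checking a full assignment against the "clique of basic inequality constraints" reading of AllDifferent amounts to testing every pair:

```java
// Plain-Java illustration of the "clique of inequalities" view of AllDifferent:
// a full assignment is valid iff every pairwise inequality holds.
public class AllDifferentCheck {
    static boolean allDifferent(int[] values) {
        for (int i = 0; i < values.length; i++) {
            for (int j = i + 1; j < values.length; j++) {
                if (values[i] == values[j]) {
                    return false; // one violated inequality invalidates the assignment
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(allDifferent(new int[]{1, 2, 3})); // true
        System.out.println(allDifferent(new int[]{1, 2, 1})); // false
    }
}
```

The dedicated global propagators mentioned above reason on domains rather than assigned values, which is what lets them filter before the variables are instantiated.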
#### Design choices¶
##### Class organization¶
In Choco Solver 4, constraints are generally not associated with a specific java class. Instead, each constraint is associated with a specific method in Model that will build a generic Constraint with the right list of propagators. Each propagator is associated with a unique java class.
Note that it is not required to manipulate propagators, but only constraints. However, one can define specific constraints by combining existing propagators and/or writing one's own.
##### Solution checking¶
The satisfaction of the constraints is done on each solution by calling the isSatisfied() method of every constraint. By default, this method checks the isEntailed() method of each of its propagators.
Note
Additional checks (Java assertions) can be performed by adding the -ea instruction in the JVM arguments. This is useful when debugging a program.
### List of available constraints¶
Please refer to the javadoc of Model to have the list of available constraints.
### Posting constraints¶
To be effective, a constraint must be posted to the solver. This is achieved using the post() method:
model.allDifferent(vars).post();
Otherwise, if the post() method is not called, the constraint will not be taken into account during the solving process: it may not be satisfied in solutions.
### Reifying constraints¶
In Choco 4, it is possible to reify any constraint. Reifying a constraint means associating it with a BoolVar to represent whether or not the constraint is satisfied:
BoolVar b = constraint.reify();
Or:
BoolVar b = model.boolVar();
constraint.reifyWith(b);
Reifying a constraint means that we allow the constraint not to be satisfied. Therefore, the reified constraint should not be posted. For instance, let us consider “if x<0 then y>42”:
model.ifThen(
model.arithm(x,"<",0),
model.arithm(y,">",42)
);
Note
Reification is a specific process which does not rely on classical constraints. This is why ifThen, ifThenElse, ifOnlyIf and reification return void and do not need to be posted.
Note
A constraint is reified with only one boolean variable. Multiple calls to constraint.reify() will return the same variable. However, the following call will associate b1 with the constraint and then post b1 = b2:
BoolVar b1 = model.boolVar();
BoolVar b2 = model.boolVar();
constraint.reifyWith(b1);
constraint.reifyWith(b2);
### Some specific constraints¶
#### SAT constraints¶
A SAT solver is embedded in Choco. It is not designed to be accessed directly. The SAT solver is internally managed as a constraint (and a propagator), that’s why it is referred to as SAT constraint in the following.
Important
The SAT solver is directly inspired by MiniSat. However, it only propagates clauses. Neither learning nor search is implemented.
Clauses can be added with the SatFactory (refer to javadoc for details). On any call to a method of SatFactory, the SAT constraint (and its propagator) is created and automatically posted to the solver. To declare complex clauses, you can call SatFactory.addClauses(...) by specifying a LogOp that represents a clause expression:
SatFactory.addClauses(LogOp.and(LogOp.nand(LogOp.nor(a, b), LogOp.or(c, d)), e), model);
// with static import of LogOp
SatFactory.addClauses(and(nand(nor(a, b), or(c, d)), e), model);
#### Automaton-based Constraints¶
regular, costRegular and multiCostRegular rely on an automaton, declared either implicitly or explicitly. There are two kinds of IAutomaton:
• FiniteAutomaton, needed for regular,
• CostAutomaton, required for costRegular and multiCostRegular.
FiniteAutomaton embeds an Automaton object provided by the dk.brics.automaton library. Such an automaton accepts fixed-size words made of multiple char, but the regular constraints rely on IntVar, so a mapping between char (needed by the underlying library) and int (declared in IntVar) has been made. The mapping enables declaring regular expressions where a symbol is not only a digit between 0 and 9 but any positive number. Then, to distinguish, in the word 101, the symbols 0, 1, 10 and 101, two additional char are allowed in a regexp: < and >, which delimit numbers.
In summary, a valid regexp for the automaton-based constraints is a combination of digits and Java Regexp special characters.
Examples of allowed RegExp
"0*11111110+10+10+11111110*", "11(0|1|2)*00", "(0|<10> |<20>)*(0|<10>)".
Example of forbidden RegExp
"abc(a|b|c)*".
CostAutomaton is an extension of FiniteAutomaton where costs can be declared for each transition.
### Defining your own constraint¶
You can create your own constraint by creating a generic Constraint object with the appropriate propagators:
Constraint c = new Constraint("MyConstraint", new MyPropagator(vars));
Important
The array of variables given in parameter of a Propagator constructor is not cloned but referenced. That is, if a permutation occurs in the array of variables, all propagators referencing the array will be incorrect.
The only tricky part lies in the propagator implementation. Your propagator must extend the Propagator class, but not all methods have to be overridden. We will see two ways to implement a propagator ensuring that X >= Y.
#### Basic propagator¶
You must at least call the super constructor to specify the scope (set of variables) of the propagator. Then you must implement the two following methods:
void propagate(int evtmask)
This method applies the global filtering algorithm of the propagator, that is, from scratch. It is called once during initial propagation (to propagate initial domains) and then during the solving process if the propagator is not incremental. It is the most important method of the propagator.
isEntailed()
This method checks the current state of the propagator. It is used for constraint reification. It checks whether the propagator will be always satisfied (ESat.TRUE), never satisfied (ESat.FALSE) or undefined (ESat.UNDEFINED) according to the current state of its domain variables. For instance:
• $$A \neq B$$ will always be satisfied when $$A=\{0,1,2\}$$ and $$B=\{4,5\}$$.
• $$A = B$$ will never be satisfied when $$A=\{0,1,2\}$$ and $$B=\{4,5\}$$.
• The entailment of $$A \neq B$$ cannot be defined when $$A=\{0,1,2\}$$ and $$B=\{1,2,3\}$$.
The isEntailed() implementation may be approximate but should at least cover the case where all variables are instantiated, in order to check solutions.
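The three cases above can be sketched with a plain interval check (a hypothetical helper, not Choco's ESat API; +1, -1 and 0 stand in for TRUE, FALSE and UNDEFINED):

```java
// Sketch of three-valued entailment for A != B over interval domains
// [aLb, aUb] and [bLb, bUb]:
// +1 = always satisfied, -1 = never satisfied, 0 = undefined.
public class EntailmentDemo {
    static int entailNotEqual(int aLb, int aUb, int bLb, int bUb) {
        if (aUb < bLb || bUb < aLb) {
            return 1;  // domains are disjoint: A != B always holds
        }
        if (aLb == aUb && bLb == bUb && aLb == bLb) {
            return -1; // both variables fixed to the same value: never holds
        }
        return 0;      // overlapping and not both fixed: cannot decide yet
    }

    public static void main(String[] args) {
        System.out.println(entailNotEqual(0, 2, 4, 5)); // 1: A={0,1,2}, B={4,5}
        System.out.println(entailNotEqual(3, 3, 3, 3)); // -1: both fixed to 3
        System.out.println(entailNotEqual(0, 2, 1, 3)); // 0: undecided
    }
}
```

Note that this covers the instantiated case required for solution checking: when both variables are fixed, the result is never 0.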
Here is an example of how to implement a propagator for X >= Y:
// Propagator to apply X >= Y
public class MySimplePropagator extends Propagator<IntVar> {

    IntVar x, y;

    public MySimplePropagator(IntVar x, IntVar y) {
        super(new IntVar[]{x, y});
        this.x = x;
        this.y = y;
    }

    @Override
    public void propagate(int evtmask) throws ContradictionException {
        x.updateLowerBound(y.getLB(), this);
        y.updateUpperBound(x.getUB(), this);
    }

    @Override
    public ESat isEntailed() {
        if (x.getUB() < y.getLB())
            return ESat.FALSE;
        else if (x.getLB() >= y.getUB())
            return ESat.TRUE;
        else
            return ESat.UNDEFINED;
    }
}
#### Elaborated propagator¶
The super constructor super(Variable[], PropagatorPriority, boolean) brings more information. PropagatorPriority enables the propagation engine to be optimized (a low arity for fast propagators is better). The boolean argument specifies whether the propagator is incremental. When set to true, the method propagate(int varIdx, int mask) must be implemented.
Note
Note that if many variables are modified between two calls, a non-incremental filtering may be faster (and simpler).
The method propagate(int varIdx, int mask) defines the incremental filtering. It is called for every variable vars[varIdx] whose domain has changed since the last call.
The method getPropagationConditions(int vIdx) makes it possible not to react to every kind of domain modification.
The method setPassive() deactivates the propagator when it is entailed, to save time. The propagator is automatically reactivated upon backtracking.
The method why(...) explains the filtering, to allow learning.
Here is an example of how to implement a propagator for X >= Y:
// Propagator to apply X >= Y
public final class MyIncrementalPropagator extends Propagator<IntVar> {

    IntVar x, y;

    public MyIncrementalPropagator(IntVar x, IntVar y) {
        super(new IntVar[]{x, y}, PropagatorPriority.BINARY, true);
        this.x = x;
        this.y = y;
    }

    @Override
    public int getPropagationConditions(int vIdx) {
        if (vIdx == 0) {
            // awakes if x gets instantiated or if its upper bound decreases
            return IntEventType.combine(IntEventType.INSTANTIATE, IntEventType.DECUPP);
        } else {
            // awakes if y gets instantiated or if its lower bound increases
            return IntEventType.combine(IntEventType.INSTANTIATE, IntEventType.INCLOW);
        }
    }

    @Override
    public void propagate(int evtmask) throws ContradictionException {
        x.updateLowerBound(y.getLB(), this);
        y.updateUpperBound(x.getUB(), this);
        if (x.getLB() >= y.getUB()) {
            this.setPassive();
        }
    }

    @Override
    public void propagate(int varIdx, int mask) throws ContradictionException {
        if (varIdx == 0) {
            y.updateUpperBound(x.getUB(), this);
        } else {
            x.updateLowerBound(y.getLB(), this);
        }
        if (x.getLB() >= y.getUB()) {
            this.setPassive();
        }
    }

    @Override
    public ESat isEntailed() {
        if (x.getUB() < y.getLB())
            return ESat.FALSE;
        else if (x.getLB() >= y.getUB())
            return ESat.TRUE;
        else
            return ESat.UNDEFINED;
    }

    @Override
    public boolean why(RuleStore ruleStore, IntVar var, IEventType evt, int value) {
        boolean newrules = ruleStore.addPropagatorActivationRule(this);
        if (var.equals(x)) {
            // x's lower bound was raised because of y's lower bound
            newrules |= ruleStore.addLowerBoundRule(y);
        } else if (var.equals(y)) {
            // y's upper bound was lowered because of x's upper bound
            newrules |= ruleStore.addUpperBoundRule(x);
        } else {
            newrules |= super.why(ruleStore, var, evt, value);
        }
        return newrules;
    }

    @Override
    public String toString() {
        return "prop(" + vars[0].getName() + ".GEQ." + vars[1].getName() + ")";
    }
}
### Idempotency¶
We distinguish two kinds of propagators:
• necessary propagators, which ensure constraints to be satisfied,
• redundant (or implied) propagators, which come in addition to some necessary propagators in order to get a stronger filtering.
Some propagators cannot be idempotent (Lagrangian relaxation, use of randomness, etc.). For some others, forcing idempotency may be very time-consuming. A redundant propagator does not have to be idempotent, but a necessary propagator should be idempotent [1].
[1] idempotent: calling a propagator twice has no effect, i.e. calling it with its output domains returns its output domains. In that case, it has reached a fix point.
[2] monotonic: calling a propagator with two input domains $$A$$ and $$B$$ for which $$A \subseteq B$$ returns two output domains $$A'$$ and $$B'$$ for which $$A' \subseteq B'$$.
Trying to make a propagator idempotent directly may not be straightforward. We provide three implementation possibilities.
The decomposed (recommended) option:
Split the original propagator into (partial) propagators so that the fix point is performed through the propagation engine. For instance, a channeling propagator $$A \Leftrightarrow B$$ can be decomposed into two propagators $$A \Rightarrow B$$ and $$B \Rightarrow A$$. The propagators can (but do not have to) react on fine events.
The lazy option:
Simply post the propagator twice. Thus, the fix point is performed through the propagation engine.
The coarse option:
The propagator performs its fix point by itself and does not react to fine events. The coarse filtering algorithm should be surrounded like this:
// In the case of SetVar, replace getDomSize() by getEnvSize()-getKerSize().
long size;
do{
size = 0;
for(IntVar v:vars){
size+=v.getDomSize();
}
// really update domain variables here
for(IntVar v:vars){
size-=v.getDomSize();
}
}while(size>0);
Note
A domain modification method returns a boolean set to true if the domain of the variable has been modified.
https://www.varsitytutors.com/isee_middle_level_math-help/numbers-and-operations?page=7
# ISEE Middle Level Math : Numbers and Operations
## Example Questions
### Example Question #11 : How To Find The Part From The Whole
Natasha is going Christmas shopping for the twelve people on her list. On average, how much money can she spend per gift if she has a $225.00 budget?
Explanation: Divide $225 by 12: $225 ÷ 12 = $18.75, so she can spend $18.75 per gift on average.
### Example Question #25 : How To Find The Part From The Whole
Which of the following is a factor of 72?
None of these
Explanation:
Factors can be multiplied to get a certain number; when a certain number is divided by a factor, the result is a whole number.
When 18 and 4 are multiplied, the result is 72; thus, 18 and 4 are both factors of 72.
When 72 is divided by the other answer choices, a whole number does NOT result.
### Example Question #26 : How To Find The Part From The Whole
There are 15 animals in a pet store. The only animals are dogs and cats. There are twice as many dogs as cats. How many dogs are there?
Explanation:
In this problem, the sum of the cats and dogs must equal 15. If there are 10 dogs, there must be 5 cats because there are twice as many dogs as cats. (Two times 5 is 10.)
Given that 10 plus 5 is 15, 10 is the correct answer.
### Example Question #21 : Whole And Part
A dog has a litter of 6 puppies. The average weight of the puppies is 7 pounds. After one week, half of the puppies have gained one pound. What is the new average weight of the puppies after one week?
Explanation:
If half of the 6 puppies gain one pound, that means that 3 puppies will gain one pound. This means that 3 total pounds will be gained among the 6 puppies.
We can assume that the weight of each puppy was originally 7 pounds, since the average was 7. Three of the puppies have gained a pound, meaning they will weigh 8 pounds.
Weights of the puppies after one week: 7, 7, 7, 8, 8, 8
Find the new average by summing the individual weights and dividing by the number of puppies: (7 + 7 + 7 + 8 + 8 + 8) ÷ 6 = 45 ÷ 6 = 7.5 pounds.
### Example Question #61 : Numbers And Operations
Brett buys a shirt. The original price was , but the shirt is on sale for off. Additionally, Brett has a off coupon that he uses after the off is applied. What is the price of the shirt in dollars, before tax?
Explanation:
If the original price of the shirt was , but it is on sale for off, then that means the shirt will be discounted by because of is . . Additionally, with the off coupon, the final price would be .
### Example Question #201 : Whole And Part
Sweaters are each, but they are on sale: when you buy one, you get one free. Rebecca buys one sweater and gets one free. At checkout, she then presents a coupon for an additional off. What is the average cost of one sweater in dollars (before tax)?
Explanation:
If Rebecca buys a sweater that is part of a buy one, get one free sale, that means that she will get two sweaters for a total of . A off discount will mean that Rebecca will save off of her total because of is .
Therefore, the total she will pay is equal to
The average price of a sweater will therefore be .
### Example Question #24 : How To Find The Part From The Whole
of the students in a classroom of students are boys. of the girls in this classroom wear their hair in pigtails. How many girls wear their hair in pigtails?
Explanation:
If of the students in a classroom of students are boys, that means that students are boys, since . (Alternatively, you could figure this out by realizing that since of is and , of must equal , which is .) If there are boys in the classroom, there must be girls, since .
Given that of the girls wear their hair in pigtails, and of is , it follows that there are girls in the classroom who wear their hair in pigtails.
### Example Question #391 : Numbers And Operations
If the sum of 3 consecutive numbers is 111, what is the value of the middle number?
Explanation:
The three consecutive numbers sum to 111. The easiest way to discover the middle number of this set is to divide 111 by 3: 111 ÷ 3 = 37.
Now that you have the middle number, simply add one to it and subtract one from it to get your consecutive numbers: 36, 37, and 38.
### Example Question #61 : Numbers And Operations
3 brothers weigh 80 pounds, 120 pounds, and 135 pounds. The smallest brother gained 15 pounds over the past year, while the largest brother lost 5 pounds. The middle brother's weight did not change. Based on the brothers' new weights, what is their average weight?
Explanation:
If 3 brothers weigh 80 pounds, 120 pounds and 135 pounds, and the smallest brother gained 15 pounds over the past year, while the largest brother lost 5 pounds, the new weights of the brothers are:
95 pounds, 120 pounds, and 130 pounds. (The middle brother's weight did not change.)
Given that the average is found by dividing the sum of the numbers in a list by the number of items in that list, the following equation will be used:
(95 + 120 + 130) ÷ 3 = 345 ÷ 3 = 115
Therefore, the new average is 115 pounds.
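The two average calculations above can be sanity-checked with a few lines of Python (a quick sketch of my own, not part of the original questions):

```python
# Quick arithmetic check for the two average questions above.
puppies = [7, 7, 7, 8, 8, 8]            # weights after 3 of 6 puppies gain 1 lb
brothers = [80 + 15, 120, 135 - 5]      # new weights: 95, 120, 130 pounds

print(sum(puppies) / len(puppies))      # 7.5
print(sum(brothers) / len(brothers))    # 115.0
```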
### Example Question #951 : Isee Middle Level (Grades 7 8) Mathematics Achievement
If 25 percent of a number is 3, then what would two thirds of the same number be equal to?
|
2019-03-18 17:46:31
|
https://www.techwhiff.com/issue/who-were-the-tribes-that-composed-the-army-of-the-confederates--652123
|
# Who were the tribes that composed the army of the confederates
###### Question:
Who were the tribes that composed the army of the confederates
### Help pls! will give brainliest to whoever answers correctly and explains their answer. (PLS DONT ANSWER IF YOU DONT KNOW)
### Which is greater than 4.026?
### How can a large group of people with widely different backgrounds,beliefs,and interests work together to form One political union?
### Identify the type of sampling bias guaranteed to occur in each of the sampling schemes, based on the information provided. Some sampling schemes may incur multiple types of bias. A political poll is conducted by contacting people on landline phones. The pollsters did not keep track of how many people they contacted who did not respond. A political poll is administered by contacting people on landline phones. The pollsters contacted 20,000 households, and 4747 individuals agreed to s
### In 2-4 sentences, summarize what you know about EM waves. PLEASE HURRY! I'M TIMED! WILL GIVE BRAINLIEST!
### Y-5=m(x-2) you have to solve for X . but i don't know how ! can you help me ?
### A figure is rotated 180°. If one of the points on the image is G'(4, -8), what were the coordinates of G?
### The area of a room in a dollhouse is 1248 square inches. The width of the room is 8 inches. How long is the room?
### A troop leader recorded whether the members from two troops prefer hiking or swimming. He plans to create a two-way table to display his results. Which describes the variables he can use in the first row and first column to create his table? troop number and number who prefer hiking troop number and activity preference activity preference and number of troop 1 members activity preference and number of troop 2 members
### Why did Stalin order the execution of polish civilians and officers in 1949?
### 15 POINTS How did the english civil war and the enlightenment affect the development of the British system of government?
### Is the portion of the cell that carries ge- netic information. a. Cytoplasm b. Cell wall c. Mitochondria d. Nucleus
### Change 5530mm to m and cm
### Can someone fill this in Please ? ________ is a simple sugar or monosaccharide._____ is a disaccharide whilst ______ is a poly saccharid.
### 8. Which political institution was later created to exercise the rights granted in this charter? And we do also ordain, establish and agree for (us), our heirs and successors, that each of the said Colonies shall have a Council which shall govern and order all matters and causes which shall arise, grow, or happen to or within the same several Colonies, according to such laws, ordinances and instructions as shall be in that behalf, given and signed with our hand or ... under the Seal of our realm
### You are about to get on a plane to Seattle, and you want to know whether you have to bring an umbrella or not. You call three of your random friends and ask each one of them if it's raining. The probability that your friend is telling the truth is 2/3 and the probability that they are playing a prank on you by lying is 1/3. If all 3 of them tell you that it is raining, then what is the probability that it is actually raining in Seattle?
### Which question best determines a storys mood
|
2022-11-27 02:49:13
|
https://projecteuclid.org/euclid.agt/1513715302
|
## Algebraic & Geometric Topology
### Topological classification of torus manifolds which have codimension one extended actions
#### Abstract
A toric manifold is a compact non-singular toric variety. A torus manifold is an oriented, closed, smooth manifold of dimension $2n$ with an effective action of a compact torus $T^n$ having a non-empty fixed point set. Hence, a torus manifold can be thought of as a generalization of a toric manifold. In the present paper, we focus on a certain class $M$ in the family of torus manifolds with codimension one extended actions, and we give a topological classification of $M$. As a result, their topological types are completely determined by their cohomology rings and real characteristic classes.
The problem whether the cohomology ring determines the topological type of a toric manifold or not is one of the most interesting open problems in toric topology. One can also ask this problem for the class of torus manifolds. Our results provide a negative answer to this problem for torus manifolds. However, we find a sub-class of torus manifolds with codimension one extended actions which is not in the class of toric manifolds but which is classified by their cohomology rings.
#### Article information
Source
Algebr. Geom. Topol., Volume 11, Number 5 (2011), 2655-2679.
Dates
Revised: 6 August 2011
Accepted: 10 August 2011
First available in Project Euclid: 19 December 2017
https://projecteuclid.org/euclid.agt/1513715302
Digital Object Identifier
doi:10.2140/agt.2011.11.2655
Mathematical Reviews number (MathSciNet)
MR2846908
Zentralblatt MATH identifier
1231.57031
Subjects
Primary: 55R25: Sphere bundles and vector bundles
Secondary: 57S25: Groups acting on specific manifolds
#### Citation
Choi, Suyoung; Kuroki, Shintarô. Topological classification of torus manifolds which have codimension one extended actions. Algebr. Geom. Topol. 11 (2011), no. 5, 2655--2679. doi:10.2140/agt.2011.11.2655. https://projecteuclid.org/euclid.agt/1513715302
|
2019-10-18 16:36:34
|
http://clay6.com/qa/27037/a-charge-q-is-placed-at-a-distance-large-frac-above-the-centre-of-a-horizon
|
# A charge Q is placed at a distance $\;\large\frac{a}{2}\;$ above the centre of a horizontal , square surface of edge a as shown in figure . Find the flux of the electric field through the square surface
$(a)\;\large\frac{Q}{\epsilon_{0}}\qquad(b)\;\large\frac{Q}{3 \epsilon_{0}}\qquad(c)\;\large\frac{Q}{6 \epsilon_{0}}\qquad(d)\;\large\frac{Q}{2 \epsilon_{0}}$
Answer : (c) $\;\large\frac{Q}{6 \epsilon_{0}}$
Explanation :
Imagine a cube of edge a, enclosing the charge. The square surface is one of the six faces of this cube. According to Gauss's theorem in electrostatics, the total electric flux through the cube = $\;\large\frac{Q}{\epsilon_{0}}$
Flux through the square surface = $\;\large\frac{Q}{6 \epsilon_{0}}$
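The 1/6 fraction can also be checked numerically. The sketch below (my own illustration, not part of the original solution) integrates the z-component of the Coulomb field over the square with a midpoint Riemann sum, working in units where the total flux Q/ε₀ equals 1, and recovers the flux fraction 1/6:

```python
# Numerical check of the Gauss's-law argument: the flux through one face
# of a cube centred on the charge is 1/6 of the total flux Q/epsilon_0.
# The charge sits at the origin; the square of edge a lies in the plane
# z = a/2. Units are chosen so that the total flux Q/epsilon_0 = 1.
import math

def flux_fraction(a=1.0, n=400):
    """Midpoint Riemann sum of E_z dA over the square, in units of Q/eps0."""
    h = a / n
    total = 0.0
    for i in range(n):
        x = -a / 2 + (i + 0.5) * h
        for j in range(n):
            y = -a / 2 + (j + 0.5) * h
            r2 = x * x + y * y + (a / 2) ** 2
            # E_z dA = (1/(4*pi)) * (a/2) / r^3 * h^2, in units of Q/eps0
            total += (a / 2) / (4 * math.pi * r2 ** 1.5) * h * h
    return total

print(flux_fraction())  # ≈ 0.1667, i.e. 1/6 of the total flux
```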
|
2017-08-22 01:44:18
|
https://coffee-mind.com/category/education/
|
## Quality does not sell itself
Great to have you here! In this blog post I want to share some background on the behavioral economics studies we did, as a lot was going on that led to the studies that are now published in The British Food Journal (http://www.emeraldinsight.com/doi/pdfplus/10.1108/BFJ-03-2016-0127), Cafe Europa Magazine (September 2016) and Reco (https://youtu.be/3Jb03RWYrQ4).
The first study was done by Imane Bouzidi, with myself and Thomas Zoëga Ramsøy from Copenhagen Business School's Decision Neuroscience Research Group (now Neurons Inc (http://neuronsinc.com/)) as supervisors. The study is explained in detail in SCAE's members archive, but here is a summary of the research design and the results.
A high quality (HQ) and a low quality (LQ) coffee were selected (a premium coffee from Kontra and a commodity coffee called Artnok, which is Kontra's commodity range (Kontra spelled backwards!)) and served to random customers in a shopping centre in Copenhagen. The coffee was served in cups with brand labels to influence the customers cognitively through brand equity (https://en.wikipedia.org/wiki/Brand_equity), but the cups did not contain coffee from either of those brands; each cup held either the HQ or the LQ coffee, assigned at random, as seen in the figure below.
Before tasting the coffees, the customers filled out a questionnaire about their expectations for each coffee based on the brand, and they rated each coffee after having tasted it. The consumed amount in each cup was then measured, and finally the customers were allowed to choose one coffee to take with them as a small reward for their time. In conclusion, the effect measures in the study were:
1. Brand expectations (‘liking’ [conscious])
2. Rating of coffee samples (‘liking’)
3. Measure amount consumed (‘wanting’ [sub-conscious])
4. Final choice of coffee brand (‘behaviour’)
A summary of the results
1. High brand equity gave
1. higher tasting scores
2. lower difference between HQ and LQ scores
2. Sensory Scores: LQ was preferred! (P < 0.001)
3. Consumption: LQ was preferred! (P < 0.001)
4. HQ was preferred without milk
So if the brand had high brand equity, people scored it higher when tasting it (1a) but also distinguished less between HQ and LQ (1b), both of which might be expected. Slightly more surprising (and a bit of a disappointment for a specialty coffee professional) was how strongly the data showed that consumers preferred the low quality (points 2 and 3, with a strongly significant result of P < 0.001). But the real surprise and source of wonder for me was that, despite 2 and 3, consumers were clear in pointing to the HQ when asked which coffee they could enjoy without milk! Without support in the data, this gave me a hint of a hypothesis: the consumers preferred the LQ out of habit, but when asked which coffee they could drink without milk, they were able to taste that the HQ had fewer of the unpleasant flavours they would want to mask, which, in my mind, is what milk does. I believe there is a physiological aversion response to the unpleasant flavours in coffee that we in the specialty coffee business do our best to remove by selecting defect-free green beans, roasting slowly to avoid burnt and bitter flavours, and brewing less aggressively (20% extraction rather than the 30% that is the norm in commodity coffee). You can get used to these bad flavours to the degree that you develop a preference when offered a choice between HQ and LQ, but you are still able to recognize that the HQ is the most pleasant to drink if you are not adding milk.
This led us to another pilot study; it is strictly a pilot in the sense that we did not have a big enough cohort of subjects, but we just wanted to run a small test in a few hours to get ideas for future studies. At this point Thomas Ramsøy had left Copenhagen Business School to start his own consumer research company, Neurons Inc (neuronsinc.com), and I was lucky enough to meet Toke Fosgaard (https://dk.linkedin.com/in/tokefosgaard), who is now my playmate when it comes to studies in behavioral economics. Toke Fosgaard, Ida Steen and I had a cohort of 11 of Toke's students, aged 22 to 28, and we selected a high quality and a low quality coffee. For this study we knew that we did not have enough consumers, so in order to increase the probability of getting useful data we selected an extreme HQ and an extreme LQ. The HQ coffee was one of my favourites, namely Coffee Collective's (http://coffeecollective.dk/da/) Kenya, and the LQ was the coffee from the 20 litre batch brewer in the university canteen: for sure the worst green beans, roasted in no time and extracted from here to hell, which Ida and I could confirm was the case with this coffee. It is a strategic decision whether to choose a HQ and LQ within a very similar flavour range, or a HQ that goes far beyond the LQ/commodity traditional flavour profile. LQ is traditionally rich in bitterness, chocolate, nuttiness and other non-fruity flavours, whereas a HQ chosen from the elite roasteries is rich in acidity and fruitiness that is considered strange by the average consumer. In Imane's study the HQ was chocolaty and nutty and not acidic, so the consumers could concentrate on the quality of the beans rather than being confused by low bitterness and high acidity and fruitiness. But in this study we wanted the full span from extreme LQ to extreme HQ, which led us to the above decisions on samples.
So with the samples at hand, we had a room where the students came in one by one, and we alternated between the two setups described below, so that half of the students experienced one setup and the other half the other.
Setup 1: Served with the full sales pitch
In setup 1 we prepared two cupping setups, one for me and one for the student (consumer), with the HQ and LQ in the two cups respectively. I presented myself as an external lecturer at Food Science with coffee as my full focus area, and I told them about my involvement in SCAE education and research and my many years as a worldwide consultant choosing high quality green coffee and designing product ranges for clients, so that it was very clear to the student that I was an international authority on coffee quality. After that introduction we did a cupping where I took my time to point out specifically everything about the low quality coffee that I did not like, as a consequence of rotten beans, a cheap and fast roast profile and an outrageously bad brew, and I pointed to all the nice, elegant and juicy notes of the HQ, with no attic/basement off-flavours and no burnt notes or bitterness, and we went back and forth between the samples to make sure they really tasted for themselves all the bad stuff about the LQ and all the good stuff about the HQ.
After the introduction pitch and the thorough tasting, the students were told that they could choose one full cup of one of the two coffees to go, as a small gift for their time. This final choice was the 'endpoint' of the study, since it was a study in behavioral economics, where you measure behavior rather than asking for opinions. As a small extra endpoint we also asked them what they liked about the coffee they chose.
Setup 2: Served with no comments
In setup 2 the HQ and LQ coffees were poured into two cups, and when the student entered we did not tell them anything about the coffee at all. We simply asked them to taste from both cups and choose which one they would prefer as a free gift for the time they spent on this study. We also asked them what they liked about the coffee they chose.
So what were the results? (drum roll please..)
Which coffee would you like to walk away with?
When the students got the sales pitch, where I did EVERYTHING I COULD to heavily nudge them to prefer the HQ, still 67% chose to WALK AWAY WITH THE LQ!!! They were nudged by my pitch about myself and the coffees to the degree that most of them excused themselves when choosing the LQ in front of me, which was really interesting: even the embarrassment they felt at openly choosing the LQ in front of an expert did not shift their preference away from it!
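To put the pilot's sample size in perspective, here is a rough sketch of my own of an exact two-sided binomial sign test against a 50/50 null. The 8-of-11 split below is hypothetical, standing in for the roughly 67% LQ preference; the study did not report raw counts per setup:

```python
# Exact two-sided binomial sign test (null hypothesis: each subject picks
# HQ or LQ with probability 1/2). Illustrates why an 11-subject pilot
# cannot support strong statistical claims: even a hypothetical 8-of-11
# split (close to the ~67% LQ preference reported) is far from significant.
from math import comb

def sign_test_p(k, n):
    """Two-sided exact binomial p-value for k successes out of n at p = 0.5."""
    k = max(k, n - k)                                   # fold to the larger tail
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

print(sign_test_p(8, 11))   # 0.2265625 -- nowhere near P < 0.001
```

Only a unanimous 11-of-11 choice would squeeze under the P < 0.001 threshold with this cohort, which is why the pilot's numbers are treated as hints rather than evidence.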
Again, with only 11 consumers in this study we can’t really calculate any valid statistics, but I still think it is surprising that 11 university students do not show a higher preference for the HQ, since I would expect this part of the population to be specialty coffee drinkers. Since the statistics cannot really be relied on, we found it interesting to hear the students’ comments when tasting and choosing between the LQ and HQ:
“I just really like a black coffee [the LQ]!”
“I like strong coffee [the LQ]”
“It [the HQ] does not taste like coffee”
“It [the HQ] tastes like tea”
“Is it [the HQ] a thin version of the canteen coffee?”
“This [the HQ] is not coffee this is something else”
These comments are really interesting, I think. They point to the extreme HQ as being outside the category of coffee for these consumers, which is often what I experience when people are new to the specialty coffee culture. It is also one of the things I keep a keen eye on when I, as a consultant, help new roasteries design a product range (Online Lean Startup Process, https://coffee-mind.com/product/onlineleanstartup/): I try to make my clients choose a product range where they can show their customers something new without pushing them off the cliff. That takes careful preference mapping with surveys, focus groups and consumer studies, since what is ‘too light a roast’ in one area of the world, or even city vs. rural within one country, varies from place to place.
## Sensory Science and Common Business Practises
Last Wednesday CoffeeMind held a presentation at Square Mile Coffee Roasters in London. The presentation focused on quality control and how to improve sensory skills.
Presented by Ida Steen, MSc in Sensory Science from the Department of Food Science in Copenhagen, and Morten Münchow, Lecturer in Food Science at the University of Copenhagen, this two-hour presentation inspired the participants to take a more scientific approach to their quality control program, new product development and possible ways to judge their own sensory skills. Through an introduction to statistics and a brief overview of different sensory methods, we showed the different biases and sources of random decisions that you face as a cupper. We explained the principles behind our innovative sensory training program as well as some quick methods to develop a more evidence-based approach to quality control and product development.
Over the summer Square Mile Coffee Roasters will host a series of courses focused on sensory training in coffee, and this event explained the principles behind the research. Already on the 11th-12th of May we will run the first of these courses. The course focuses on your skills as a taster in a highly innovative way, so that you are trained directly on your strengths and weaknesses to speed up your personal development.
Even though we live and breathe coffee, the focus on your skills as a taster makes this course relevant and applicable for people in other areas of food and drink such as beer, wine, spirits, chocolate etc.
See the presentations: Statistics and Sensory methodology
## Lean Startup of Coffee Roasters
CoffeeMind staff Michelle Hart, Simon Borrit and Morten Münchow gave a well-attended presentation on Lean Startup Methodology for Roasters during World of Coffee in Gothenburg.
Please find the full presentation here:
Lean Startup Presentation
And also get your free copy of The CoffeeMind Business Model Canvas
This presentation is based on a study we are doing with Copenhagen Business School, London School of Coffee and SCAE, and on CoffeeMind’s approach to helping with the business aspect of any coffee startup. In particular we have been working with roastery startups, where a big risk is the investment in equipment: if you can get information on the local market (or global, if that is your goal) with the value propositions you have in mind, you reduce your risk of failure drastically. Morten has done sessions with startups BEFORE they invest in equipment, where together they brainstormed and designed a number of expected customer segments for the business they hope to create. Based on that session we then created a test product range (green coffee selection, roast degree, roast profile, blends) for the expected customers (in Lean Startup terminology a ‘minimum viable product’). The expected customer segments are then approached with the appropriate products and a true sales process is carried out. After selling this test product range the startup has the information to create the next iteration of the minimum viable product, or to simply make the investment in roasting equipment, since the expected market has proven to exist!
## Sensory Methodology
Ida Steen and Morten Münchow gave a talk on Sensory Methodology during World of Coffee in Gothenburg, and you can find the presentation here.
The presentation compares sensory methodology from business practices and scientific methodology, and also contains results from a sensory profiling Ida Steen did for Best Water Technology.
## Profile log for SCAE Roasting Pro
For the tasks and challenges in the new Roasting Professional module of the SCAE Certification Diploma System I have developed this profile log:
ProfileLogSCAE-Professional
Please refer to these two blog posts to have the concepts on the profile log explained:
## Percentage difference
This blog post explains how to understand and calculate percentage change, as this calculation is part of the SCAE certification system on the roast log template you can download here.
So let us get right down to business:
If a process changes from x to y, the percentage change refers to how big the change is relative to where the process ‘came from’, namely x.
So the general formula for a percentage change is
$$\text{percentage change} = \frac{y - x}{x} \cdot 100\%$$
If you are not used to doing these kinds of calculations, let me explain this formula in more detail as follows:
In the figure below you see a process that goes from value x to y (this could be an increasing temperature during roasting), and you can see how to calculate the difference between the starting point and the endpoint of the process by subtracting x from y.
Let us take an example. You started your coffee roastery 12 months ago and currently you have 15 customers. After 9 months you had 10 customers. How many more customers do you have now compared to when your business was 9 months old? In other words: what is the difference between the number of customers you have now and the number you had when the company was 9 months old?
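In symbols, the difference is the endpoint minus the starting point:

$$y - x = 15 - 10 = 5 \ \text{customers}$$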
So in absolute numbers of customers this is 5 more than after 9 months.
But how to calculate this value as a percentage?
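Relating the change to the starting point of comparison gives:

$$\frac{y - x}{x} \cdot 100\% = \frac{15 - 10}{10} \cdot 100\% = \frac{5}{10} \cdot 100\% = +50\%$$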
As you can see from the above calculation, you relate the change (a difference of 5 customers) to the starting point of comparison (10 customers after 9 months) by dividing the change by the starting point of comparison. And as you can also see, the value is +50%, which is a positive number since the process increased over the period.
So the following figure shows you the general formula for the above calculation:
But what happens if you monitor a process that is decreasing? Namely where y is smaller than x because the process is decreasing. This is illustrated graphically here:
The value of y – x becomes negative because x is bigger than y, so a decreasing process gives you a negative change; and if you divide a negative change by the starting point of comparison, you also get a negative percentage.
In the following example we look at roast loss, a process where you compare the result of the roast (y) with the initial amount of coffee you put in (x), and find a negative value for the percentage change. Let us assume that we put 1kg (1000g) into a roaster, perform a light roast and measure the weight of the roasted coffee (when calculating roast loss, please remember to NEVER remove any beans with the sample spoon during the roast!), finding that 850g of roasted coffee came out of 1000g of green. The calculation looks like this:
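Using the same formula as before:

$$\frac{y - x}{x} \cdot 100\% = \frac{850\,\mathrm{g} - 1000\,\mathrm{g}}{1000\,\mathrm{g}} \cdot 100\% = -15\%$$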
So the percentage change is -15%, which roast masters would refer to as a 15% roast loss, since the word ‘loss’ implies a negative change.
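For anyone who wants to script these checks, here is a minimal Python sketch (the function name is my own, not part of the SCAE material):

```python
def percentage_change(x, y):
    """Percentage change from a starting value x to an end value y."""
    return (y - x) / x * 100

# Customer growth: from 10 customers (month 9) to 15 customers (month 12).
print(percentage_change(10, 15))     # 50.0, i.e. +50%

# Roast loss: 1000 g of green coffee in, 850 g of roasted coffee out.
print(percentage_change(1000, 850))  # -15.0, i.e. a 15% roast loss
```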
## New SCAE roast profile log
During my years as trainer at London School of Coffee I have worked on improving the roast log we use at the training. Recently I have made a version for the new SCAE CDS for coffee roasting and you can download it for free using the link below in this post.
The log reflects my approach, where you plan your roast in terms of flame control and air flow during the different stages of roasting. I instruct the students to use the bean temperature, rather than time, as the trigger parameter for the control stages, because the bean temperature reflects the different stages of the roast quite precisely, whereas time is a completely independent parameter.
So the roast is planned, and the result of the roast is followed by plotting bean and air temperature in the graph; time and temperature for the different events are noted in the box to the right of the graph. One of the most important conclusions is the time from 1st crack to end, which is noted separately below the ‘event box’.
The profile log is meant as a gross log that is only used in its entirety during profile development, and not necessarily on a daily basis, where you don’t need all this information.
## Airflow in roasters with only one fan
In roasters with only one fan for both roasting and cooling, it sometimes gets slightly confusing to talk about the airflow in the roaster. During training on these kinds of roasters I have often heard people talk about ‘closing down’ the airflow, or being confused about what ‘high’ airflow means in a system where you always have high airflow somewhere; the question is rather where it is redirected.
But fortunately it is quite simple to visualize, as I have done schematically below. When not explained visually, I have found this to be a tricky subject for many students.
The following is a schematic drawing of how I interpret the airflow adjustment on the Diedrich HR-1 that I use for the roasting training at London School of Coffee (similar principles are at play in Probat’s 100g laboratory roaster used by many roasters and training institutions).
As you can see, the airflow is redirected to the roasting drum or the cooling tray by the first valve, so it does not make sense to talk about ‘closing the airflow’ without a reference to where the airflow is reduced.
Controlling the airflow in the roaster controls the moisture level in the first part of the overall roasting process, and on the Diedrich it makes simultaneous roasting and cooling possible, since a low airflow in the roasting drum is desired at the same time as a high airflow in the cooling tray.
I know it seems obvious when shown schematically but please return to this post if you find yourself discussing the airflow in roasters with only one fan.
## Just joined SCAE as creator of the certification system for coffee roasters
A few months ago Filip Åkerblom asked me, on behalf of SCAE’s education committee, to continue the work he had done over the last 5 years on SCAE’s exam system for coffee roasters. Having taught coffee roasting at London School of Coffee since 2007, I feel that it’s right up my alley.
My approach has been to get completely confident about the basics. The basics are the best point of departure for beginners. But they are also the best place to start if you would like to avoid getting lost in all the loose claims running around the coffee roasting community.
So my teaching is quite basic. Basic stuff is good for universal claims. The more advanced stuff is common sense and experience with roasting, cupping, roasting, cupping, roasting, cupping… on one or a few machines. But that is a process quite specific to the equipment you happen to use: where it is installed, ventilation pipe size, bends, the climate and so on. Getting universal claims on the more advanced parts of roasting is something I hope to achieve with the help of sensory science coupled with multivariate statistics. I have already initiated this at Rolighedsvej 30 and will keep you updated (see this blog post).
Filip and I have been working on the roasting part of SCAE’s new CDS system that will be introduced at SCAE’s exhibition in Nice. Hope to see you there.
With my new position in SCAE I’m looking forward to helping make coffee roasting more accessible and high quality more consistent. Seriously. It is badly needed. It really hurts me how much bad coffee is consumed around here. It’s not rocket science to make good coffee. A few good rules of thumb, a few pennies more, and quality goes up. And perfection is just what you pursue for the rest of your life. I mean, the fun stuff.
https://www.r-bloggers.com/2021/06/tired-pca-kmeans-wired-umap-gmm/
Introduction
Combining principal component analysis (PCA) and kmeans clustering seems to be a pretty popular 1-2 punch in data science. While there is some debate about whether combining dimensionality reduction and clustering is something we should ever do1, I’m not here to debate that. I’m here to illustrate the potential advantages of upgrading your PCA + kmeans workflow to Uniform Manifold Approximation and Projection (UMAP) + Gaussian Mixture Model (GMM), as noted in my reply here.
For this demonstration, I’ll be using this data set pointed out here, including over 100 stats for players from soccer’s “Big 5” leagues.
library(tidyverse)
df <-
  'FBRef 2020-21 T5 League Data.xlsx' %>%
  readxl::read_excel() %>%  # read the spreadsheet before cleaning names
  janitor::clean_names() %>%
  mutate(across(where(is.double), ~replace_na(.x, 0)))
# Let's only use players with at least 10 matches' worth of minutes.
df_filt <- df %>% filter(min > (10 * 90))
df_filt %>% dim()
# [1] 1626 128
Trying to infer something from the correlation matrix doesn’t get you very far, so one can see why dimensionality reduction will be useful.
Also, we don’t really have “labels” here (more on this later), so clustering can be useful for learning something from our data.
Unsupervised Evaluation
We’ll be feeding in the results from the dimensionality reduction—either PCA or UMAP—to a clustering method—either kmeans or GMM. So, since clustering comes last, all we need to do is figure out how to judge the clustering; this will tell us something about how “good” the combination of dimensionality reduction and clustering is overall.
I’ll save you from google-ing and just tell you that within-cluster sum of squares (WSS) is typically used for kmeans, and Bayesian Information Criteria (BIC) is the go-to metric for GMM. WSS and BIC are not on the same scale, so we can’t directly compare kmeans and GMM at this point. Nonetheless, we can experiment with different numbers of components—the one major “hyperparameter” for dimensionality reduction—prior to the clustering to identify if more or less components is “better”, given the clustering method. Oh, and why not also vary the number of clusters—the one notable hyperparameter for clustering—while we’re at it?
For kmeans, we see that WSS decreases with an increasing number of clusters, which is typically what we see in [“elbow” plots](https://en.wikipedia.org/wiki/Elbow_method_(clustering)) like this. Additionally, we see that WSS decreases with an increasing number of components. This makes sense—additional components means more data is accounted for.2 There is definitely a point of “diminishing returns”, somewhere around 3 clusters, after which WSS barely improves.3 Overall, we observe that the kmeans models using UMAP pre-processing do better than those using PCA.
Moving on to GMM, we observe that BIC generally increases with the number of clusters as well. (Note that, due to the way the {mclust} package defines its objective function, higher BIC is “better”.)
Regarding the number of components, we see that the GMM models using more UMAP components do better, as we should have expected. On the other hand, we observe that GMM models using fewer PCA components do better than those with more components! This is a bit of an odd finding that I don’t have a great explanation for. (Someone please math-splain to me.) Nonetheless, we see that UMAP does better than PCA overall, as we observed with kmeans.
For those interested in the code, I map-ed a function across a grid of parameters to generate the data for these plots.4
do_dimr_clust <-
  function(n, k,
           f_dimr = c('pca', 'umap'),
           f_clust = c('kmeans', 'gmm'),
           ...) {
    f_dimr <- match.arg(f_dimr)
    f_clust <- match.arg(f_clust)
    # `ifelse()` is vectorized and cannot return a function; use `if`/`else`.
    f_step <- if (f_dimr == 'pca') recipes::step_pca else embed::step_umap
    data <-
      recipes::recipe(formula( ~ .), data = df_filt) %>%
      recipes::step_normalize(recipes::all_numeric_predictors()) %>%
      f_step(recipes::all_numeric_predictors(), num_comp = n) %>%
      recipes::prep() %>%
      recipes::juice() %>%
      select(where(is.numeric))
    # kmeans and Mclust name their cluster-count arguments differently.
    fit <- if (f_clust == 'kmeans') {
      stats::kmeans(data, centers = k, ...)
    } else {
      mclust::Mclust(data, G = k, ...)
    }
    broom::glance(fit)
  }
metrics <-
  crossing(
    n = seq.int(2, 8),
    k = seq.int(2, 8),
    f_dimr = c('pca', 'umap'),
    f_clust = c('kmeans', 'gmm')
  ) %>%
  mutate(metrics = pmap(
    list(n, k, f_dimr, f_clust),
    ~ do_dimr_clust(
      n = ..1,
      k = ..2,
      f_dimr = ..3,
      f_clust = ..4
    )
  ))
metrics
# # A tibble: 196 x 5
#        n     k f_dimr f_clust metrics
#    <int> <int> <chr>  <chr>   <list>
# 1 2 2 pca kmeans <tibble [1 x 4]>
# 2 2 2 pca gmm <tibble [1 x 7]>
# 3 2 2 umap kmeans <tibble [1 x 4]>
# 4 2 2 umap gmm <tibble [1 x 7]>
# 5 2 3 pca kmeans <tibble [1 x 4]>
# 6 2 3 pca gmm <tibble [1 x 7]>
# 7 2 3 umap kmeans <tibble [1 x 4]>
# 8 2 3 umap gmm <tibble [1 x 7]>
# 9 2 4 pca kmeans <tibble [1 x 4]>
# 10 2 4 pca gmm <tibble [1 x 7]>
# # ... with 186 more rows
“Supervised” Evaluation
We actually do have something that we can use to help us identify clusters—player position (pos). Let’s treat these position groups as pseudo-labels with which we can gauge the effectiveness of the clustering.
df_filt <-
df_filt %>%
mutate(
across(
pos,
~case_when(
.x %in% c('DF,MF', 'MF,DF') ~ 'DM',
.x %in% c('DF,FW', 'FW,DF') ~ 'M',
.x %in% c('MF,FW', 'FW,MF') ~ 'AM',
.x == 'DF' ~ 'D',
.x == 'MF' ~ 'M',
.x == 'FW' ~ 'F',
.x == 'GK' ~ 'G',
.x == 'GK,MF' ~ 'G',
TRUE ~ .x
)
)
)
df_filt %>% count(pos, sort = TRUE)
# # A tibble: 6 x 2
# pos n
# <chr> <int>
# 1 D 595
# 2 M 364
# 3 AM 273
# 4 F 196
# 5 G 113
# 6 DM 85
Typically we don’t have labels for clustering tasks; if we do, we’re usually doing some kind of supervised multi-label classification. But our labels aren’t “true” labels in this case, both because:
1. a player’s nominal position often doesn’t completely describe their style of play, and
2. the grouping I did to reduce the number of positions from 11 to 6 was perhaps not optimal.
So now let’s do the same as before—evaluate different combinations of PCA and UMAP with kmeans and GMM. But now we can use some supervised evaluation metrics: (1) accuracy and (2) mean log loss. While the former is based on the “hard” predictions, the latter is based on probabilities for each class. kmeans returns just hard cluster assignments, so computing accuracy is straightforward; since it doesn’t return probabilities, we’ll treat the hard assignments as having a probability of 1 to compute log loss.5
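To make the hard-assignment log loss concrete, here is a minimal Python sketch (my own illustration, not the post's code; the `eps` clipping is my addition, since a probability of exactly 1 for the wrong class would otherwise give `log(0)`):

```python
import math

def mean_log_loss(y_true, probs, eps=1e-15):
    """Mean log loss; probs[i] is a list of class probabilities for sample i."""
    total = 0.0
    for yi, pi in zip(y_true, probs):
        p = min(max(pi[yi], eps), 1 - eps)  # clip to avoid log(0)
        total += -math.log(p)
    return total / len(y_true)

# GMM-style soft predictions vs. kmeans-style hard (0/1) assignments:
y_true = [0, 1, 1]
soft = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]
hard = [[1, 0], [0, 1], [1, 0]]  # third sample misclassified
print(mean_log_loss(y_true, soft))  # ~0.41: modest penalty for the soft miss
print(mean_log_loss(y_true, hard))  # ~11.5: -log(eps)/3 for the hard miss
```

This is why hard assignments tend to look bad under log loss: a single confident miss contributes a huge `-log(eps)` term, while a soft model that hedges is penalized far less.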
We can compare the two clustering methods more directly now using these two metrics. Since we know that there are 6 position groups, we’ll keep the number of clusters constant at 6. (Note that the number of clusters was shown on the x-axis before; but since we have now fixed the number of clusters at 6, we show the number of components on the x-axis.)
Looking at accuracy first, we see that the best combo depends on our choice for number of components. Overall, we might say that the UMAP combos are better.
Next, looking at average log loss, we see that the GMM clustering methods seem to do better overall (although this may be due to the fact that log loss is not typically used for supervised kmeans). The PCA + GMM does the best across all number of components, with the exception of 7. Note that we get a mean log loss around 28 when we predict the majority class (defender) with a probability of 1 for all observations. (This is a good “baseline” to contextualize our numbers.)
UMAP shines relative to PCA according to accuracy, and GMM beats out kmeans in terms of log loss. Despite these conclusions, we still don’t have clear evidence that UMAP + GMM is the best 1-2 combo; nonetheless, we can at least feel good about its general strength.
Aside: Re-coding Clusters
I won’t bother to show all the code to generate the above plots since it’s mostly just broom::augment() and {ggplot2}. But, if you have ever worked with supervised stuff like this (if we can call it that), you’ll know that figuring out which of your clusters correspond to your known groups can be difficult. In this case, I started from a variable holding the predicted class (.class) and the true class (pos).
assignments
# # A tibble: 1,626 x 2
# .class pos
# <int> <chr>
# 1 1 D
# 2 2 D
# 3 3 M
# 4 3 M
# 5 4 AM
# 6 2 D
# 7 2 D
# 8 4 F
# 9 2 D
# 10 1 D
# # ... with 1,616 more rows
I generated a correlation matrix for these two columns, ready to pass into a matching procedure.
cors <-
assignments %>%
fastDummies::dummy_cols(c('.class', 'pos'), remove_selected_columns = TRUE) %>%
corrr::correlate(method = 'spearman', quiet = TRUE) %>%
filter(term %>% str_detect('pos')) %>%
select(term, matches('^[.]class'))
cors
# # A tibble: 6 x 7
# term .class_1 .class_2 .class_3 .class_4 .class_5 .class_6
# <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 pos_AM -0.208 -0.241 -0.178 0.0251 0.625 -0.123
# 2 pos_D 0.499 0.615 -0.335 -0.264 -0.428 -0.208
# 3 pos_DM 0.0797 0.0330 0.0548 -0.0829 -0.0519 -0.0642
# 4 pos_F -0.171 -0.199 -0.168 0.724 0.0232 -0.101
# 5 pos_G -0.127 -0.147 -0.124 -0.0964 -0.157 1
# 6 pos_M -0.222 -0.267 0.724 -0.180 0.0395 -0.147
Then I used clue::solve_LSAP() to do the bipartite matching magic. The rest is just pre- and post-processing.
k <- 6 # number of clusters
cols_idx <- 2:(k+1)
cors_mat <- as.matrix(cors[,cols_idx]) + 1 # all values have to be positive
rownames(cors_mat) <- cors$term
cols <- names(cors)[cols_idx]
colnames(cors_mat) <- cols
cols_idx_min <- clue::solve_LSAP(cors_mat, maximum = TRUE)
cols_min <- cols[cols_idx_min]
pairs <- tibble::tibble(
  .class = cols_min %>% str_remove('^[.]class_') %>% as.integer(),
  pos = cors$term %>% str_remove('pos_')
)
pairs
# # A tibble: 6 x 2
# .class pos
# <int> <chr>
# 1 5 AM
# 2 2 D
# 3 1 DM
# 4 4 F
# 5 6 G
# 6 3 M
This pairs variable can be used to re-code the .class column in our assignments from before.
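The matching step itself is just a linear assignment problem. A stdlib-only Python sketch of the same idea (the function name and example matrix are mine; brute force is fine for ~6 clusters, while clue::solve_LSAP or scipy.optimize.linear_sum_assignment scale to larger problems):

```python
from itertools import permutations

def match_clusters(cors):
    """One-to-one matching of rows (positions) to columns (clusters),
    maximizing the total correlation of the matched pairs."""
    k = len(cors)
    best = max(permutations(range(k)),
               key=lambda perm: sum(cors[i][perm[i]] for i in range(k)))
    return list(best)  # best[i] = cluster index matched to position i

cors = [
    [0.1, 0.6, 0.2],  # e.g. pos_AM vs. clusters 1..3
    [0.7, 0.2, 0.1],  # pos_D
    [0.2, 0.1, 0.8],  # pos_M
]
print(match_clusters(cors))  # [1, 0, 2]
```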
Case Study: PCA vs. UMAP
Let’s step back from the clustering techniques and focus on dimensionality reduction for a moment. One of the ways that dimensionality reduction can be leveraged in sports like soccer is for player similarity metrics.6 Let’s take a look at how this can be done, comparing the PCA and UMAP results while we’re at it.
Direct comparison of the similarity “scores” we’ll compute—based on Euclidean distance between a chosen player’s components and other players’ components—is not wise given the different ranges of our PCA and UMAP components, so we’ll rely on rankings based on these scores.7 Additionally, fbref provides a “baseline” that we can use to judge our similarity rankings.8
We first need to set up our data into the following format. (This is for 2-component, 6-cluster UMAP + GMM.)
sims_init
# # A tibble: 1,664 x 6
# player_1 player_2 comp_1 comp_2 value_1 value_2
# <chr> <chr> <int> <int> <dbl> <dbl>
# 1 Jadon Sancho Aaron Leya Iseka 1 1 -4.18 -5.14
# 2 Jadon Sancho Aaron Leya Iseka 2 2 -0.678 2.49
# 3 Jadon Sancho Aaron Ramsey 1 1 -4.18 -3.25
# 4 Jadon Sancho Aaron Ramsey 2 2 -0.678 -0.738
# 7 Jadon Sancho Abdoulaye Doucouré 1 1 -4.18 -1.36
# 8 Jadon Sancho Abdoulaye Doucouré 2 2 -0.678 -2.66
# 9 Jadon Sancho Abdoulaye Touré 1 1 -4.18 -1.36
# 10 Jadon Sancho Abdoulaye Touré 2 2 -0.678 -2.89
# # ... with 1,654 more rows
Then the Euclidean distance calculation is fairly straightforward.
sims <-
sims_init %>%
group_by(player_1, player_2) %>%
summarize(
d = sqrt(sum((value_1 - value_2)^2))
) %>%
ungroup() %>%
mutate(score = 1 - ((d - 0) / (max(d) - 0))) %>%
mutate(rnk = row_number(desc(score))) %>%
arrange(rnk) %>%
select(player = player_2, d, score, rnk)
sims
# # A tibble: 830 x 4
# player d score rnk
# <chr> <dbl> <dbl> <int>
# 1 Alexis Sánchez 0.0581 0.994 1
# 2 Riyad Mahrez 0.120 0.988 2
# 3 Serge Gnabry 0.132 0.986 3
# 4 Jack Grealish 0.137 0.986 4
# 5 Pablo Sarabia 0.171 0.983 5
# 6 Thomas Müller 0.214 0.978 6
# 7 Leroy Sané 0.223 0.977 7
# 8 Callum Hudson-Odoi 0.226 0.977 8
# 9 Jesse Lingard 0.260 0.973 9
# 10 Ousmane Dembélé 0.263 0.973 10
# # ... with 820 more rows
Doing the same for PCA and combining all results, we get the following set of rankings.
We see that the UMAP rankings are “closer” overall to the fbref rankings. Of course, there are some caveats:
1. This is just one player.
2. This is with a specific number of components and clusters.
3. We are comparing to similarity rankings based on a separate methodology.
Our observation here (that UMAP > PCA) shouldn’t be taken out of context to conclude that UMAP > PCA in all contexts. Nonetheless, I think this is an interesting use case for dimensionality reduction, where one can justify PCA, UMAP, or any other similar technique, depending on how intuitive the results are.
Case Study: UMAP + GMM
Finally, let’s bring clustering back into the conversation. We’re going to focus on how the heralded UMAP + GMM combo can be visualized to provide insight that supports (or debunks) our prior understanding.
With a 2-component UMAP + 6-cluster GMM, we can see how the 6 position groups can be identified in a 2-D space.
For those curious, using PCA instead of UMAP also leads to an identifiable set of clusters. However, uncertainties are generally higher across the board (larger point sizes, more overlap between covariance ellipsoids).
If we exclude keepers (G) and defenders (D) to focus on the other 4 positions with our UMAP + GMM approach, we can better see how some individual points—at the edges or outside of covariance ellipsoids—are classified with a higher degree of uncertainty.9
Now, highlighting incorrect classifications, we can see how the defensive midfielder (DM) position group (upper left) seems to be a blind spot in our approach.
A more traditional confusion matrix10 also illustrates the inaccuracy with classifying DMs. (Note the lack of dark grey fill in the DM column.)
DMs are often classified as defenders instead. I think this poor result is due more to my lazy grouping of players with "MF,DF" or "DF,MF" positions in the original data set than to a fault in our approach.
Conclusion
So, should our overall conclusion be that we should never use PCA or kmeans? No, not necessarily. They can both be much faster to compute than UMAP and GMMs respectively, which can be a huge positive if computation is a concern. PCA is linear while UMAP is not, so you may want to choose PCA to make it easier to explain to your friends. Regarding clustering, kmeans is technically a specific form of a GMM, so if you want to sound cool to your friends and tell them that you use GMMs, you can do that!
Anyways, I hope I’ve shown why you should try out UMAP and GMM the next time you think about using PCA and kmeans.
1. In some contexts you may want to do feature selection and/or manual grouping of data. ^
2. While this whole thing is more about comparing techniques, I should make a note about WSS. We don’t want to increase the number of components for the sake of minimizing WSS. We lose some degree of interpretation with increasing components. Additionally, we could be overfitting the model by increasing the number of components. Although we don’t have the intention of classifying new observations in this demo, it’s still good to keep overfitting in mind. ^
3. This demo isn’t really intended to be a study in how to choose the best number of clusters, but I figured I’d point this out. ^
4. I’d suggest this blog post from Julia Silge for a better explanation of clustering with R and {tidymodels}. ^
5. Perhaps this is against best practice, but we’ll do it here for the sake of comparison. ^
6. Normalization perhaps doesn’t help much here given the clustered nature of the reduced data. ^
7. Normalization perhaps doesn’t help much here given the clustered nature of the reduced data. ^
8. fbref uses a different methodology, so perhaps it’s unwise to compare to them. ^
9. Sure, one can argue that a player like Diogo Jota should have been classified as an attacking midfielder (AM) to begin with, in which case he might not have been misclassified here. ^
10. By the way, the autoplot() function for yardstick::conf_mat() results is awesome if you haven’t ever used it. ^
http://mathhelpforum.com/algebra/15515-one-older-then-other-print.html
# One older than the other
• May 31st 2007, 01:47 PM
Godfather
One older than the other
Jack is 6 years older than Jane. Six years ago he was twice as old as she was. How old is Jane now?
• May 31st 2007, 01:53 PM
Jhevon
Quote:
Originally Posted by Godfather
Jack is 6 years older than Jane. Six years ago he was twice as old as she was. How old is Jane now?
Let Jane's age now be x
Then Jack's age is x + 6
Six years ago, Jack's age was x and this was twice Jane's age six years ago (which is x - 6). So we have:
x = 2(x - 6)
=> x = 2x - 12
=> x = 12 --------> Jane's age now
• May 31st 2007, 03:27 PM
Soroban
Hello, Godfather!
Quote:
Jack is six years older than Jane.
Six years ago he was twice as old as she was.
How old is Jane now?
I use a chart for most age problems.
Make a row for each person.
$\begin{array}{cccccc} & | & \quad & | & \quad & | \\ \hline
\text{Jack} & | & & | & &| \\ \hline
\text{Jane} & | & & | & & | \\ \hline
\end{array}$
Make a column for "Now".
$\begin{array}{cccccc} & | & \text{Now} & | & \quad & | \\ \hline
\text{Jack} & | & & | & &| \\ \hline
\text{Jane} & | & & | & & | \\ \hline
\end{array}$
Let $x$ = Jane's age now.
Then $x + 6$ = Jack's age now.
. . Write those in the "Now" column.
$\begin{array}{cccccc} & | & \text{Now} & | & \quad & | \\ \hline
\text{Jack} & | & x + 6 & | & &| \\ \hline
\text{Jane} & | &x & | & & | \\ \hline
\end{array}$
Make a column for the other time period: "6 years ago".
$\begin{array}{cccccc} & | & \text{Now} & | & \text{6 ago} & | \\ \hline
\text{Jack} & | & x + 6 & | & &| \\ \hline
\text{Jane} & | &x & | & & | \\ \hline
\end{array}$
Six years ago, both were six years younger.
. . Jack was only $x$ years old.
. . Jane was only $x - 6$ years old.
Write those in the second column.
$\begin{array}{cccccc} & | & \text{Now} & | & \text{6 ago} & | \\ \hline
\text{Jack} & | & x + 6 & | & x & | \\ \hline
\text{Jane} & | & x & | & x - 6 & | \\ \hline
\end{array}$
It says, "Six years ago, Jack $(x)$ was twice Jane's age $(x-6)$."
. . and there is our equation: . $x \:=\:2(x - 6)$
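For anyone who wants to sanity-check the algebra, here is a quick brute-force verification in Python (a sketch, not part of the original thread):

```python
# Let x be Jane's age now; Jack is then x + 6.
# Six years ago Jack was x, Jane was x - 6, and Jack was twice Jane's age.
# Search plausible ages for the x satisfying x == 2 * (x - 6).
solutions = [x for x in range(1, 120) if x == 2 * (x - 6)]
print(solutions)  # [12] -> Jane is 12 now, so Jack is 18
```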
http://pipingdesigner.co/index.php/properties/classical-mechanics/210-affinity-laws
# Affinity Laws
Written by Jerry Ratzlaff. Posted in Classical Mechanics
The affinity laws express the mathematical relationship between the several variables involved in pump performance. They apply to all types of centrifugal and axial flow pumps. Being able to predict these effects allows the rotating equipment engineer to examine them before implementing the changes. Transposing a pump curve into an analysis program, such as Microsoft Excel or Open Office Calc, provides an excellent visual representation of how varying parameters affect the pump performance.
## Formulas that use CONSTANT IMPELLER DIAMETER
$$\large{ \frac{Q_1}{Q_2}=\frac{n_1}{n_2} }$$ Capacity varies directly with speed.
$$\large{ \frac{h_1}{h_2}=\left(\frac{n_1}{n_2}\right)^2 }$$ Head varies with the square of speed.
$$\large{ \frac{BHP_1}{BHP_2}=\left(\frac{n_1}{n_2}\right)^3 }$$ Brake horsepower varies with the cube of speed.
### Where:
$$\large{ BHP }$$ = brake horsepower
$$\large{ h }$$ = total head
$$\large{ n }$$ = pump speed
$$\large{ Q }$$ = capacity
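As a quick illustration of the constant-diameter laws above, here is a short Python sketch (the pump numbers are hypothetical, chosen only for the example):

```python
def affinity_constant_diameter(q1, h1, bhp1, n1, n2):
    """Predict capacity, head, and brake horsepower at a new speed n2,
    holding the impeller diameter constant (pump affinity laws)."""
    r = n2 / n1
    return q1 * r, h1 * r**2, bhp1 * r**3

# Hypothetical pump: 500 gpm at 100 ft of head and 20 BHP at 1750 rpm,
# slowed to 1450 rpm.
q2, h2, bhp2 = affinity_constant_diameter(500.0, 100.0, 20.0, 1750.0, 1450.0)
print(round(q2, 1), round(h2, 1), round(bhp2, 1))  # 414.3 68.7 11.4
```

Note how a modest speed reduction cuts the power draw sharply, since power falls with the cube of the speed ratio.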
## Formulas that use CONSTANT PUMP SPEED
$$\large{ \frac{Q_1}{Q_2}=\frac{D_1}{D_2} }$$ Capacity varies directly with impeller diameter.
$$\large{ \frac{h_1}{h_2}=\left(\frac{D_1}{D_2}\right)^2 }$$ Head varies with the square of impeller diameter.
$$\large{ \frac{BHP_1}{BHP_2}=\left(\frac{D_1}{D_2}\right)^3 }$$ Brake horsepower varies with the cube of impeller diameter.
### Where:
$$\large{ BHP }$$ = brake horsepower
$$\large{ D }$$ = impeller diameter
$$\large{ h }$$ = total head
$$\large{ Q }$$ = capacity
## RULE OF THUMB
While not an exact representation, the following relationships have been observed with regards to changing impeller diameters.
### NPSHr
$$\large{ \frac{NPSH_{r1}}{NPSH_{r2}}=\frac{D_1}{D_2} }$$ Net positive suction head required by the pump varies directly with the impeller diameter.
### Where:
$$\large{ D }$$ = impeller diameter
$$\large{ NPSH_r }$$ = Net positive suction head required
### shaft deflection
$$\large{ \frac{d_1}{d_2}=\frac{D_1}{D_2} }$$ Shaft Deflection (runout) measured prior to changing the impeller size varies with the impeller diameter.
### Where:
$$\large{ d }$$ = shaft deflection
$$\large{ D }$$ = impeller diameter
https://akash9712.github.io/2018/first-post/
# Akash Vaish
Entity from Middle Earth
Hello everyone. I am Akash, and this is the first in the series of blogs where I’ll be writing about my experience with the project on improving the SymPy stats module as a part of GSoC’18.
Before the coding period actually started, my mentors had discussed with me certain issues about how I approached stochastic processes in my proposal. We did not decide upon what exactly the implementation would look like, but I believe it will be sorted out before we get to that part of the summer project, which is supposed to be one of the later phases. After my exams ended on the 11th of May, I started working on the implementation of discrete random variables, and was able to implement some of the missing classes in the file drv.py in #14218. Here’s a short example of what was implemented:
>>> from sympy.stats import Poisson, P
>>> Y = Poisson('Y', 1)
>>> P(Y > 5, Y > 3)
3*(-163/60 + E)/(-8 + 3*E)
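As a quick numerical cross-check of that closed-form result (a sketch using only the standard library, not part of the original post):

```python
import math

def poisson_sf(k, lam):
    """P(Y > k) for Y ~ Poisson(lam), via the complementary pmf sum."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

# P(Y > 5 | Y > 3) = P(Y > 5) / P(Y > 3)
numeric = poisson_sf(5, 1) / poisson_sf(3, 1)
closed_form = 3 * (math.e - 163 / 60) / (3 * math.e - 8)  # sympy's answer, evaluated
print(numeric, closed_form)  # both are approximately 0.0313
```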
Right now, I am working on the implementation of product probability spaces, and extending them for discrete random variables. All in all, I am very excited about the summer project, and hope to achieve the targets in the best way possible.
https://www.know.cf/enciclopedia/en/Revolutions_per_minute
# Revolutions per minute
"RPM" and "rpm" redirect here. For other uses, see RPM (disambiguation).
Revolutions per minute (abbreviated rpm, RPM, rev/min, r/min) is a measure of the frequency of rotation, specifically the number of rotations around a fixed axis in one minute. It is used as a measure of rotational speed of a mechanical component. In the French language, tr/mn (tours par minute) is the common abbreviation. The German language uses the abbreviation U/min or u/min (Umdrehungen pro Minute).
## International System of Units
According to the International System of Units (SI), rpm is not a unit. This is because the word revolution is a semantic annotation rather than a unit. The annotation is instead done as a subscript of the formula sign if needed. Because of the measured physical quantity, the formula sign has to be f for (rotational) frequency and ω or Ω for angular velocity. The corresponding basic SI derived unit is s−1 or Hz. When measuring angular speed, the unit radians per second is used.
$$1~\text{rad/s} \leftrightarrow \tfrac{1}{2\pi}~\text{Hz} \leftrightarrow \tfrac{60}{2\pi}~\text{rpm}$$
$$1~\text{rpm} \leftrightarrow \tfrac{1}{60}~\text{Hz} \leftrightarrow \tfrac{2\pi}{60}~\text{rad/s}$$
$$1~\text{Hz} \leftrightarrow 2\pi~\text{rad/s} \leftrightarrow 60~\text{rpm}$$
Here the sign ↔ (correspondent) is used instead of = (equal). Formally, hertz (Hz) and radian per second (rad/s) are two different names for the same SI unit, s−1. However, they are used for two different but proportional ISQ quantities: frequency and angular frequency (angular speed, magnitude of angular velocity). The conversions between a frequency f (measured in hertz) and an angular velocity ω (measured in radians per second) are:
$$\omega = 2\pi f \text{,} \qquad f = \frac{\omega}{2\pi}\text{.}$$
Thus a disc rotating at 60 rpm is said to be rotating at either 2π rad/s or 1 Hz, where the former measures the angular velocity and the latter reflects the number of revolutions per second.
If the non-SI unit rpm is considered a unit of frequency, then $1~\text{rpm} = \frac{1}{60}~\text{Hz}$. If it is instead considered a unit of angular velocity and the word "revolution" is considered to mean 2π radians, then $1~\text{rpm} = \frac{2\pi}{60}~\text{rad/s}$.
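The conversions above are simple enough to express in a few lines of Python (a sketch for illustration):

```python
import math

def rpm_to_hz(rpm):
    """Interpret rpm as a rotational frequency (revolutions per second)."""
    return rpm / 60.0

def rpm_to_rad_per_s(rpm):
    """Interpret rpm as an angular velocity (one revolution = 2*pi rad)."""
    return rpm * 2.0 * math.pi / 60.0

# A disc at 60 rpm rotates at 1 Hz, i.e. 2*pi rad/s.
print(rpm_to_hz(60), round(rpm_to_rad_per_s(60), 4))  # 1.0 6.2832
```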
https://math.stackexchange.com/questions/4141582/is-there-a-continuous-function-defined-on-mathbbr-which-is-a-bijection-on
Is there a continuous function defined on $\mathbb{R}$ which is a bijection on $\mathbb{R}\backslash\mathbb{Q}$ but not a bijection on $\mathbb{Q}$?
I tried to argue in this way. Let $$p,q\in\mathbb{Q}$$ with $$f(p)=f(q)$$ but $$p\neq q$$. Suppose $$a_n\rightarrow p, b_n\rightarrow q$$. Then $$\lim f(a_n)=\lim f(b_n)$$. But the limit case seems of no use.
Another idea is to prove $$f$$ is strictly monotone in $$\mathbb{R}$$, but I do not know how to begin.
Appreciate any help or hint!
• I'm not sure I understand. Do you want a continuous map $f : \mathbb{R} \to \mathbb{R}$ so that $f \restriction_{\mathbb{R} \setminus \mathbb{Q}}$ is a bijection onto its image, but $f \restriction_\mathbb{Q}$ is not a bijection onto its image? – HallaSurvivor May 17 at 6:58
• Exactly. I want to prove such map does not exist. – user823011 May 17 at 7:02
• If $f$ is not constant on $[p,q]$ then there is some $r\in (p,q)$ such that $f(r)\ne f(p)$ which implies that $f([p,r])\supset [f(p),f(r)],f([q,r])\supset [f(q),f(r)]$. – reuns May 17 at 10:04
http://openstudy.com/updates/5129102de4b0111cc68ffea4
## anonymous 3 years ago What exactly is dubstep?
1. Michael_Angel
Dubstep is a new genre within electronic dance music. The best way to recognize a dubstep track or mix is by the reverberating sub-bass that is present in most productions. The sub-bass is reverberated at different speeds to give a sense of movement and insistence. The tracks are typically higher in BPM, ranging between 138 and 142 typically. The style does not favor four-to-the-floor beats, instead relying on spaced, syncopated percussion that the listener typically adds their own mental metronome to. Recent incarnations of the genre have found life through dubstep remixes of popular artists like La Roux and Lady Gaga. Newer artists such as Nero incorporate dubstep into their drum and bass and layering it with new vocals to create a more accessible sound. Most recently, singer Britney Spears has tapped into this rising trend in her song "Hold It Against Me," that features the sub-bass frequencies and syncopated beats during the bridge segment.
2. Ashleyisakitty
^in simpler words, drum and base rhythms with different speeds and frequencies, usually sampling from different generes.
3. anonymous
Oh i thought it was a guy named dubstep remexing existing songs or something.
4. anonymous
So basically dubstep is sampling existing songs?
5. anonymous
In a certain way? Like rap but not exactly?
6. Ashleyisakitty
Hmmm, I wouldnt say that. Its usually creative, but there may be quick samples of another song within the dubstep song.
7. Ashleyisakitty
Heres an example of a dub song that uses samples https://soundcloud.com/official-flaic/doomsday-2012-flaics-set
8. anonymous
that link dont work i dont hear anything
9. Michael_Angel
here is some of the artists for dubstep Skrillex, El-B, Oris Jay, Jakwob, Zed Bias, Steve Gurley, Skream, Bassnecter, James Blake, PantyRaid, Nero
10. Michael_Angel
11. anonymous
ooh I see play button now @Ashleyisakitty IS that megatron doing the countdown?
12. anonymous
I like it I think i heard it in a commercial about car insurance. :D
13. anonymous
"genre of electronic dance music"
14. anonymous
ooh like tribalero? @snapcracklepop
15. anonymous
yea kinda
16. anonymous
you've heard of tribalero snap?
17. anonymous
no i haven't actually. in fact i listened to it for the first time once u mentioned it but i was thinking more David Guetta style tho like.... http://www.youtube.com/watch?v=JRfuAukYTKg also Calvin Harris... http://www.youtube.com/watch?v=dGghkjpNCQ8 http://www.youtube.com/watch?v=17ozSeGw-fY and also there's Benny Benassi, Skrillex (of course)...... http://www.youtube.com/watch?v=LaIZ0mUJzr0 and i also like this one song by Alex Clare.... http://www.youtube.com/watch?v=zYXjLbMZFmo
18. anonymous
ook nice :D
19. anonymous
actually, its an experiment. i forgot by who, but he created a smooth beat then made the "drop" to see what it would do. it was named the drop because it induced seizures and everyone would drop.
20. anonymous
lol somehow i think that is a lie.
21. anonymous
lol somehow i doubt u have done the research
22. anonymous
sounds impossible
23. anonymous
sounds about right. if u have done your research then u would have seen that is how dubstep origionally came into place
24. anonymous
lots of origin stories are exaggerated.
25. anonymous
so ? doesnt mean all the stories u hear are false
26. anonymous
but it could mean this one is :P
27. anonymous
Look up Figure- The werewolf timo86m
28. anonymous
no gracias if it dubset
29. anonymous
Love Dubstep.
30. anonymous
What type of dupstep
31. anonymous
techno music..?
32. anonymous
Well i mean live or pre recorded bcs there is a difference
33. anonymous
FacePalm...............
34. anonymous
MAJOR FACEPALM! Next time you buy a 5carot ring it for each of u IQ points
35. anonymous
LMAO
36. anonymous
37. anonymous
Here is how i read that Bad dog MAJOR FACEPALM! Next time you buy a Sca asdl ferasdf ;lkjs IQ ironic alskdjfae???
38. poopsiedoodle
Not music.
39. poopsiedoodle
Definitely NOT music.
40. poopsiedoodle
And no sarcasm is being used here by the way. It's actually the furthest you can get from music.
41. anonymous
not music?
42. poopsiedoodle
There ya go, you're catching on to it :D
43. anonymous
so you cant dance to it then?
44. poopsiedoodle
Hardly.
45. Darrius
I HATE DUBSTEB. my opinion doesnt matter tho ;P
46. anonymous
i was going to say tht D
47. anonymous
48. anonymous
i just like the song not the dance
49. anonymous
oh
50. Ashleyisakitty
I like dubstep, its very danceable.
51. anonymous
@timo86m i did the research cuz it was bothering me. Born with an IQ eventually tested at 186, Skrillex is oft called the "mad scientist" of electronica. At only ten years of age, Skrillex discovered taht by playing peaceful, relaxing music followed by a deep bass not, he could cause people to instantly have seizures. Skrillex eventually named the note that causes seizures "The Drop" because his friend would drop on the floor when he tested the music. Being bornt he son of a nuclear physicist and granddaughter of Madame Curry, Skrillex used his parents labs to experiment on the blend of seizure inducing sounds he eventually named "Dubstep" Thank u and goodnight ^_^ lol
52. Ashleyisakitty
Thats a fun story but I want to make it clear that Skrillex and nothing to do with the birth of dubstep
53. anonymous
and how would u know.
54. anonymous
that just confused me more @mikaa_toxica13
55. Ashleyisakitty
Because its obvious. Dubsteps been around way longer than him.
56. poopsiedoodle
$$\Huge\text{^}$$
57. anonymous
he made it as a teen. not now.
58. Ashleyisakitty
.......dubstep was created as a new form of EDM that originated in the UK back in the 90s. You should probably do some actual research on the subject.
59. anonymous
DUBSTEP: WUUUB WUUB WUUUUUUUB WUUUBY WUB WUB WUUUUUUuuuuuuuUUUUuuUUUUUB
60. poopsiedoodle
0101011101010101010101010101010101000010001000000101011101010101010101010100001000100000010101110101010101010101010101010101010101010101010101010101010101000010001000000101011101010101010101010101010101000010010110010010000001010111010101010100001000100000010101110101010101000010001000000101011101010101010101010101010101010101010101010101010101110101011101010111010101110101011101010111010101110101010101010101010101010101010101010111010101110101010101010101010101010101010101010101010101000010*
61. Dean.Shyy
Dubstep is a non-traditional form of music form that erupted from the underground music scene. Just think of Hip Hop, Rap, and Techno. On the other hand, you have traditional forms of music, such as Opera, and R&B. Specifically, Dubstep is a mix of Grime, Techno, Rave, Electro, and Randomness.
62. AravindG
@Ashleyisakitty I liked ur simple definition just after that long paragraph :)
63. anonymous
@Ashleyisakitty some ACTUAL research ? i research every subject before i talk about it. like timo said, there are many stories of origion for almost everything. so how about u just accept the fact that this is the story i believe. just like i have no problem that u believe your story. try being nice and calmly comparing the stories and talking about it like sophisticated people and not throwing around insults as if u know what i do or who i am. have a good day.
64. anonymous
there will be no fighting in my q
65. Ashleyisakitty
Im not throwing insults at you lolwat. Im just saying your story was obviously not true in many ways.
66. anonymous
I cant beleive i have to warn an ambassador :P j/k
67. anonymous
Dubstep is a genre of electronic dance music. The best way to recognize a dubstep track or mix is by the reverberating sub-bass that is present in most productions. The sub-bass is reverberated at different speeds to give a sense of movement and insistence. The tracks are typically higher in BPM, ranging between 138 and 142.
68. anonymous
Got it! it's techno music
69. anonymous
k
70. anonymous
would give medal but i gave it to someone else already
71. anonymous
kk
72. anonymous
so u think i jus made it up
73. anonymous
lol nobody makes up something that uninteresting
74. Ashleyisakitty
no I dont think you made it up, but its obviously not true.
75. anonymous
in other words you think she made it up?
76. Ashleyisakitty
I mean she could have found some weird fanfiction or something..
77. tanner23456
dubstep is not real music. It's sounds generated by a computer.
78. anonymous
rap is generated by computer also. Most of it.
79. tanner23456
and rap is also not real music.
80. Ashleyisakitty
Its a series of electrical sounds put together in a rhythmic fashion. Many other genres of music use electrical sounds without even realizing it. There is no such thing as a genre of music that "isnt music". I suggest you open your mind a little bit.
81. tanner23456
haha, yeah, real music requires talent. Dub step, rap, hip-hop, etc. require none. Also, don't tell me what to do.
82. Ashleyisakitty
Obviously you don't know what "talent" implies.
2016-08-25 08:01:40
https://research.birmingham.ac.uk/portal/en/publications/solar-cycle-variation-of-rm-max-in-helioseismic-data-and-its-implications-for-asteroseismology(ec75fe87-342e-426a-b6e5-73719dc3ab3e).html
Solar cycle variation of $\nu_{\rm max}$ in helioseismic data and its implications for asteroseismology
Research output: Contribution to journal › Article
Abstract
The frequency, $\nu_{\rm max}$, at which the envelope of pulsation power peaks for solar-like oscillators is an important quantity in asteroseismology. We measure $\nu_{\rm max}$ for the Sun using 25 years of Sun-as-a-Star Doppler velocity observations with the Birmingham Solar-Oscillations Network (BiSON), by fitting a simple model to binned power spectra of the data. We also apply the fit to Sun-as-a-Star Doppler velocity data from GONG and GOLF, and photometry data from VIRGO/SPM on the ESA/NASA SOHO spacecraft. We discover a weak but nevertheless significant positive correlation of the solar $\nu_{\rm max}$ with solar activity. The uncovered shift between low and high activity, of $\simeq 25\,\rm \mu Hz$, translates to an uncertainty of 0.8 per cent in radius and 2.4 per cent in mass, based on direct use of asteroseismic scaling relations calibrated to the Sun. The mean $\nu_{\rm max}$ in the different datasets is also clearly offset in frequency. Our results flag the need for caution when using $\nu_{\rm max}$ in asteroseismology.
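The quoted 0.8 and 2.4 per cent figures are consistent with propagating the shift through the standard scaling relations, in which $R \propto \nu_{\rm max}$ and $M \propto \nu_{\rm max}^3$ when $\Delta\nu$ and $T_{\rm eff}$ are held fixed, so fractional errors carry factors of 1 and 3. A minimal sketch, assuming a solar reference value $\nu_{\rm max,\odot} \approx 3090\,\rm \mu Hz$ (a commonly used figure, not taken from this abstract):

```python
# Propagate a 25 uHz shift in nu_max through the asteroseismic scaling
# relations, holding Delta-nu and T_eff fixed. NU_MAX_SUN is an assumed
# reference value, not a number from the paper.
NU_MAX_SUN = 3090.0  # uHz, assumed solar reference
SHIFT = 25.0         # uHz, activity-related shift reported in the abstract

frac = SHIFT / NU_MAX_SUN   # fractional change in nu_max
radius_err = frac           # R scales as nu_max^1
mass_err = 3.0 * frac       # M scales as nu_max^3

print(f"radius: {radius_err:.1%}, mass: {mass_err:.1%}")
# radius: 0.8%, mass: 2.4%
```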
Bibliographic note
6 pages, 4 figures, published in MNRAS Letters, 2020, vol 493, pages L49 - 53 Corrected error in metadata list of authors
Details
Original language: English
Journal: Monthly Notices of the Royal Astronomical Society
Volume: 493
Issue: 1
Published: 8 Jan 2020
• astro-ph.SR
2020-03-29 19:18:10
https://www.beatthegmat.com/dfrac-1-r-of-a-circular-pizza-has-been-eaten-if-the-rest-of-the-pizza-is-divided-into-m-equal-slices-then-each-t329017.html?sid=c07b13ac94aefb2c3c103cb4df77912a
## $$\dfrac{1}{r}$$ of a circular pizza has been eaten. If the rest of the pizza is divided into m equal slices, then each
by AAPL » Mon Jan 10, 2022 11:58 am
Princeton Review
$$\dfrac{1}{r}$$ of a circular pizza has been eaten. If the rest of the pizza is divided into m equal slices, then each of these slices is what fraction of the whole pizza?
A. $$\dfrac{r}{rm}$$
B. $$\dfrac{r-1}{rm}$$
C. $$\dfrac{1}{m}$$
D. $$\dfrac{m-1}{rm}$$
E. $$\dfrac{m-r}{rm}$$
OA B
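To see why B is correct: after $\frac{1}{r}$ is eaten, $\frac{r-1}{r}$ of the pizza remains, and dividing that by $m$ gives $\frac{r-1}{rm}$. A quick sanity check with concrete numbers (r = 4 and m = 6 are arbitrary illustrative values):

```python
from fractions import Fraction

r, m = 4, 6  # arbitrary illustrative values
remaining = Fraction(1) - Fraction(1, r)    # (r-1)/r of the pizza is left
per_slice = remaining / m                   # each of the m equal slices

assert per_slice == Fraction(r - 1, r * m)  # matches choice B
print(per_slice)  # 1/8
```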
2022-05-20 01:41:54
https://ict4water.eu/wi693g/6eb7e1-confidence-interval-hypothesis-testing-example
# confidence interval hypothesis testing example
The appropriate procedure here is a hypothesis test for a single proportion. If our confidence interval contains the value claimed by the null hypothesis, then our sample result is close enough to the claimed value, and we therefore do not reject. If our confidence interval does not contain the value claimed by the null hypothesis, then our sample result is different enough from the claimed value, and we therefore reject. When you make an estimate in statistics, whether it is a summary statistic or a test statistic, there is always uncertainty around that estimate because the number is based on a sample of the population you are studying. The response variable is full-time employment status, which is categorical with two levels: yes/no. The simulation methods used to construct bootstrap distributions and randomization distributions are similar. The confidence interval does not assume this. We should expect to have a p value less than 0.05 and to reject the null hypothesis. The appropriate procedure is a confidence interval for the difference in two means. There are two groups: males and females. The response variable is height, which is quantitative. The real difference is that when you create a confidence interval in conjunction with a hypothesis test, the software ensures that they are using consistent methodology. A confidence interval is a range of values that is likely to contain an unknown population parameter. Research question: On average, are STAT 200 students younger than STAT 500 students? If you draw a random sample many times, a certain percentage of the confidence intervals will contain the population mean. There are two variables of interest: (1) height in inches and (2) weight in pounds.
If we are given a specific population parameter (i.e., hypothesized value) and want to determine the likelihood that a population with that parameter would produce a sample as different as our sample, we use a hypothesis test. We have two independent groups: STAT 200 students and STAT 500 students. There are two variables here: (1) temperature in Fahrenheit and (2) cups of coffee sold in a day. If $$p \leq 0.05$$, reject the null hypothesis. There is evidence that the population mean is different from 98.6 degrees. Below are a few examples of selecting the appropriate procedure. Cheese consumption, in pounds, is a quantitative variable. The reason for this is that our null hypothesis assumes that $$p_1 - p_2 = 0$$. All of the confidence intervals we constructed in this course were two-tailed. The parameter of interest is the correlation between these two variables. We are comparing them in terms of average (i.e., mean) age. We have one group: registered voters. Hypothesis tests use data from a sample to test a specified hypothesis. The appropriate procedure is a hypothesis test for a single mean. Both variables are quantitative. In other words, if the 95% confidence interval contains the hypothesized parameter, then a hypothesis test at the 0.05 $$\alpha$$ level will almost always fail to reject the null hypothesis. If STAT 200 students are younger than STAT 500 students, that translates to $$\mu_{200}<\mu_{500}$$, which is an alternative hypothesis. Because 98.6 is not contained within the 95% confidence interval, it is not a reasonable estimate of the population mean. The variable of interest is age in years, which is quantitative.
If the 95% confidence interval does not contain the hypothesized parameter, then a hypothesis test at the 0.05 $$\alpha$$ level will almost always reject the null hypothesis. Research question: On average, how much taller are adult male giraffes compared to adult female giraffes? We are not given a specific parameter to test; instead we are asked to estimate "how much" taller males are than females. There are two independent groups: STAT 500 students and STAT 200 students.
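The duality between a two-sided test and the matching confidence interval can be sketched numerically. The body-temperature sample below is made-up illustrative data, and 2.262 is the standard two-sided $t$ critical value for 9 degrees of freedom at the 0.05 $$\alpha$$ level; only the null value of 98.6 degrees comes from the text:

```python
import statistics

# Made-up body-temperature sample (n = 10) for illustration only.
temps = [98.2, 97.9, 98.4, 98.0, 98.3, 97.8, 98.1, 98.5, 98.0, 98.2]
n = len(temps)
mean = statistics.mean(temps)
se = statistics.stdev(temps) / n ** 0.5
t_crit = 2.262  # two-sided t critical value, df = 9, alpha = 0.05

ci = (mean - t_crit * se, mean + t_crit * se)  # 95% confidence interval
t_stat = (mean - 98.6) / se                    # test statistic vs. H0: mu = 98.6

rejects = abs(t_stat) > t_crit                 # test decision
outside_ci = not (ci[0] <= 98.6 <= ci[1])      # CI decision
print(rejects, outside_ci)  # the two decisions agree: True True
```

The two booleans agree by construction, because the CI and the rejection region are built from the same critical value.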
2023-03-27 20:01:03
https://www.studyadda.com/ncert-solution/introduction-to-graphs_q6/580/52119
Question 6) A courier-person cycles from a town to a neighboring suburban area to deliver a parcel to a merchant. His distance from the town at different times is shown by the following graph. (a) What is the scale taken for the time axis? (b) How much time did the person take for the travel? (c) How far is the place of the merchant from the town? (d) Did the person stop on his way? Explain. (e) During which period did he ride fastest?
(a) The scale taken for the time axis is 4 units = 1 hour. (b) The time taken by the person for the travel, from 8 a.m. to 11.30 a.m., is $3\frac{1}{2}$ hours. (c) The place of the merchant is 22 km from the town. (d) Yes; this is indicated by the horizontal part of the graph (10 a.m. – 10.30 a.m.). (e) He rides fastest between 8 a.m. and 9 a.m.
2020-09-25 09:36:53
https://www.meritnation.com/cbse-class-10/math/rs-aggarwal-2015/area-of-circle-sector-and-segment/textbook-solutions/12_1_1176_5721_712_62997
Rs Aggarwal 2015 Solutions for Class 10 Math Chapter 18 Area Of Circle, Sector And Segment are provided here with simple step-by-step explanations.
#### Question 1:
Find the circumference and the area of a circle of diameter 35 cm.
Given:
Diameter, d = 35 cm
Thus, we have:
Now,
Circumference of the circle = $2\pi r$
Area of the circle = $\pi {r}^{2}$
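The arithmetic of Question 1 can be checked with exact fractions; the only assumption is the chapter's working value $\pi = \frac{22}{7}$:

```python
from fractions import Fraction

PI = Fraction(22, 7)   # the approximation used throughout the chapter
r = Fraction(35, 2)    # radius = diameter / 2 = 17.5 cm

circumference = 2 * PI * r
area = PI * r ** 2
print(float(circumference), float(area))  # 110.0 cm and 962.5 cm^2
```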
#### Question 2:
The circumference of a circle is 39.6 cm. Find its area.
Circumference = 39.6 cm
We know:
Circumference of a circle = $2\mathrm{\pi }r$
Also,
Area of the circle = $\pi {r}^{2}$
#### Question 3:
The area of a circle is 301.84 cm2. Find its circumference.
Area of the circle = 301.84 cm2
We know:
Area of a circle$=\pi {r}^{2}$
Now,
Circumference of the circle = $2\mathrm{\pi r}$
#### Question 4:
The circumference of a circle exceeds its diameter by 16.8 cm. Find the circumference of the circle.
Circumference of a circle = $2\mathrm{\pi r}$
Diameter, d = 2r
Thus, we have:
Now,
Circumference of the circle = $2×\frac{22}{7}×3.92$
= 24.64 cm
#### Question 5:
The difference between the circumference and the radius of a circle is 37 cm. Find the area of the circle.
Let the radius of the circle be r cm.
According to the question, we have:
$2\mathrm{\pi }r-r=37⇒r\left(\frac{44}{7}-1\right)=37⇒\frac{37r}{7}=37⇒r=7\mathrm{cm}$
Now,
Area of the circle = $\mathrm{\pi }{r}^{2}$ = $\frac{22}{7}×7×7$ = 154 cm2
#### Question 6:
A copper wire when bent in the form of a square encloses an area of 484 cm2. The same wire is now bent in the form of a circle. Find the area enclosed by the circle.
Area of the square = 484 cm2
Area of the square = ${\mathrm{Side}}^{2}$ $⇒\mathrm{Side}=\sqrt{484}=22\mathrm{cm}$
Perimeter of the square = $4×\mathrm{Side}$
Perimeter of the square = $4×22$
= 88 cm
Length of the wire = 88 cm
Circumference of the circle = Length of the wire = 88 cm
Now, let the radius of the circle be r cm.
Thus, we have:
Area of the circle = ${\mathrm{\pi r}}^{2}$
Thus, the area enclosed by the circle is 616 cm2.
#### Question 7:
A wire when bent in the form of an equilateral triangle encloses an area of $121\sqrt{3}{\mathrm{cm}}^{2}$. The same wire is bent to form a circle. Find the area enclosed by the circle.
Area of the equilateral triangle = $\frac{\sqrt{3}}{4}{a}^{2}=121\sqrt{3}⇒{a}^{2}=484⇒a=22\mathrm{cm}$, so length of the wire = $3×22$ = 66 cm
Now, let the radius of the circle be r cm.
We know:
Circumference of the circle = Length of the wire
Thus, we have:
Area of the circle =$\mathrm{\pi }{r}^{\mathit{2}}$
Area enclosed by the circle = 346.5 cm2
#### Question 8:
The length of a chain used as the boundary of a semicircular park is 90 m. Find the area of the park.
Let the radius of the park be r m.
Thus, we have:
$\mathrm{\pi }r+2r=90\phantom{\rule{0ex}{0ex}}⇒r\left(\mathrm{\pi }+2\right)=90$
$⇒r=\frac{90}{\mathrm{\pi }+2}\phantom{\rule{0ex}{0ex}}⇒r=\frac{90}{\frac{22}{7}+2}\phantom{\rule{0ex}{0ex}}⇒r=\frac{90×7}{36}\phantom{\rule{0ex}{0ex}}⇒r=17.5$
Now,
Area of the park = $\frac{1}{2}\mathrm{\pi }{r}^{2}$
#### Question 9:
The sum of the radii of two circles is 7 cm, and the difference of their circumferences is 8 cm. Find the circumference of the circles.
Let the radii of the two circles be r1 cm and r2 cm.
Now,
Sum of the radii of the two circles = 7 cm
Difference of the circumferences of the two circles = 8 cm
Adding (i) and (ii), we get:
$2{r}_{1}=\frac{91}{11}\phantom{\rule{0ex}{0ex}}{r}_{1}=\frac{91}{22}$
∴ Circumference of the first circle = $2{\mathrm{\pi r}}_{1}$
Also,
${r}_{1}-{r}_{2}=\frac{14}{11}\phantom{\rule{0ex}{0ex}}\frac{91}{22}-{r}_{2}=\frac{14}{11}\phantom{\rule{0ex}{0ex}}\frac{91}{22}-\frac{14}{11}={r}_{2}\phantom{\rule{0ex}{0ex}}{r}_{2}=\frac{63}{22}$
∴ Circumference of the second circle = $2{\mathrm{\pi r}}_{2}$
Therefore, the circumferences of the first and second circles are 26 cm and 18 cm, respectively.
#### Question 10:
The areas of two concentric circles are 962.5 cm2 and 1386 cm2. Find the width of the ring.
Let the radii of the bigger and smaller circles be R cm and r cm, respectively.
Now,
Area of the bigger circle =
Area = $\mathrm{\pi }{R}^{2}$
$⇒1386=\frac{22}{7}×{R}^{2}\phantom{\rule{0ex}{0ex}}⇒\frac{1386×7}{22}={R}^{2}\phantom{\rule{0ex}{0ex}}⇒{R}^{2}=441\phantom{\rule{0ex}{0ex}}⇒R=21$
Area of the smaller circle =
Area = $\mathrm{\pi }{r}^{2}$
$⇒962.5=\frac{22}{7}{r}^{2}\phantom{\rule{0ex}{0ex}}⇒{r}^{2}=\frac{962.5×7}{22}\phantom{\rule{0ex}{0ex}}⇒{r}^{2}=306.25\phantom{\rule{0ex}{0ex}}⇒r=17.5$
∴ Width of the ring = R $-$ r = (21 $-$ 17.5) cm = 3.5 cm
#### Question 11:
Find the area of a ring whose outer and inner radii are respectively 23 cm and 12 cm.
Let r1 cm and r2 cm be the radii of the outer and inner boundaries of the ring, respectively.
We have:
Now,
Area of the outer ring = $\mathrm{\pi }{{r}_{1}}^{2}$
Area of the inner ring = $\mathrm{\pi }{{r}_{2}}^{2}$
Area of the ring = Area of the outer ring $-$ Area of the inner ring
= 1662.57 $-$ 452.57
= 1210 ${\mathrm{cm}}^{2}$
#### Question 12:
A path of 8 m width runs around the outsider of a circular park whose radius is 17 m. Find the area of the path.
The radius (r) of the inner circle is 17 m.
The radius (R) of the outer circle is 25 m. [Includes path, i.e., (17 + 8)]
Area of the path = $\pi {R}^{2}-\pi {r}^{2}$
∴ Area of the path = 1056 m2
#### Question 13:
The inner circumference of a circular track is 440 m, and the track is 14 m wide. Calculate the cost of levelling the track at 25 paise/m2. Also, find the cost of fencing the outer boundary of the track at Rs 5 per metre.
Let the radius of the inner circle be r m.
Now,
Inner circumference = 440 m
$⇒2\mathrm{\pi r}=440⇒r=\frac{440×7}{44}=70\mathrm{m}$
We know that the track is 14 m wide.
∴ Outer radius (R) = (70 + 14) = 84 m
Area of the track = $\frac{22}{7}×\left({84}^{2}-{70}^{2}\right)$ = 6776 m2
Cost of levelling at 25 paise per square metre = $6776×0.25$ = Rs 1694
Also,
Outer circumference = $2\mathrm{\pi R}$ = $2×\frac{22}{7}×84$ = 528 m
Rate of fencing = Rs 5 per metre
∴ Total cost of fencing = $528×5$ = Rs 2640
#### Question 14:
A race track is in the form of a rig whose inner circumference is 352 m and outer circumference is 396 m. Find the width and the area of the track.
Let r m and R m be the radii of the inner and outer tracks.
Now,
Circumference of the outer track = $2\mathrm{\pi }R$
$⇒396=2×\frac{22}{7}×R\phantom{\rule{0ex}{0ex}}⇒R=\frac{396×7}{44}\phantom{\rule{0ex}{0ex}}⇒R=63$
Circumference of the inner track = $2\mathrm{\pi }r$
$⇒352=2×\frac{22}{7}×r\phantom{\rule{0ex}{0ex}}⇒r=\frac{352×7}{44}\phantom{\rule{0ex}{0ex}}⇒r=56$
Width of the track = Radius of the outer track $-$ Radius of the inner track
Area of the outer circle = $\mathrm{\pi }{R}^{2}$
Area of the inner circle = $\mathrm{\pi }{r}^{2}$
Area of the track = 12474 $-$ 9856
= 2618 ${\mathrm{m}}^{2}$
#### Question 15:
A park is in the form of a rectangle 120 m by 90 m. At the centre of the park there is a circular lawn as shown in the figure. The area of the park excluding the lawn is 2950 m2. Find the radius of the circular lawn.
Area of the rectangle = $l×b$
Area of the park excluding the lawn = 2950 m2
Area of the circular lawn = Area of the park $-$ Area of the park excluding the lawn
= 10800 $-$ 2950
= 7850 m2
Area of the circular lawn = $\mathrm{\pi }{r}^{2}$
Thus, the radius of the circular lawn is 50 m.
#### Question 16:
In the given figure, AB is a diameter of a circle with centre O and OA = 7 cm. Find the area of the shaded region.
AB is the diameter of the circle.
Here,
OA = 7 cm
OB = 7 cm
OA is the radius of the circle.
∴ OA = OB = OC =OD
Also,
OA is the diameter of the smaller circle.
∴ Radius of the smaller circle = $\frac{\mathrm{OA}}{2}$= 3.5 cm
Area of the smaller circle having diameter AO = ${\mathrm{\pi r}}^{2}$
Area of the triangle $△$CBD = $\frac{1}{2}×b×h$
Area of the semicircle having diameter CD = $\frac{1}{2}{\mathrm{\pi r}}^{2}$
Now,
Area of the shaded region = Area of the semicircle $-$ Area of the triangle $△$CBD
= 77 $-$ 49
Also,
Area of the full-shaded region = 28 + 38.5
= 66.5 sq cm
#### Question 17:
In the given figure, O is the centre of the bigger circle, and AC is its diameter. Another circle with AB as diameter is drawn. If AC = 54 cm and BC = 10, find the area of the shaded region.
We have:
OA = OC = 27 cm
AB = AC $-$ BC
= 54 $-$ 10
= 44
AB is the diameter of the smaller circle.
Thus, we have:
Radius of the smaller circle = $\frac{44}{2}$ cm = 22 cm
Area of the smaller circle = ${\mathrm{\pi r}}^{2}$
Radius of the larger circle = $\frac{54}{2}$ cm = 27 cm
Area of the larger circle = ${\mathrm{\pi r}}^{2}$
∴ Area of the shaded region = Area of the larger circle $-$ Area of the smaller circle
= 2291.14 $-$ 1521.14
= 770 cm2
#### Question 18:
PQRS is a diameter of a circle of radius 6 cm. The lengths PQ, QR and RS are equal. Semicircles are drawn with PQ and QS as diameters, as shown in the given figure. If PS = 12 cm, find the perimeter and area of the shaded region.
Perimeter (circumference of the circle) = $2\mathrm{\pi r}$
We know:
Perimeter of a semicircular arc = $\mathrm{\pi r}$
Now,
For the arc PTS, radius is 6 cm.
∴ Circumference of the semicircle PTS = $\mathrm{\pi }×6=6\mathrm{\pi }$ cm
For the arc QES, radius is 4 cm.
∴ Circumference of the semicircle QES = $\mathrm{\pi }×4=4\mathrm{\pi }$ cm
For the arc PBQ, radius is 2 cm.
∴ Circumference of the semicircle PBQ = $\mathrm{\pi }×2=2\mathrm{\pi }$ cm
Now,
Perimeter of the shaded region = $6\mathrm{\pi }+4\mathrm{\pi }+2\mathrm{\pi }$
$=12\mathrm{\pi cm}$
Area of the semicircle PBQ = $\frac{1}{2}{\mathrm{\pi r}}^{2}$
Area of the semicircle PTS = $\frac{1}{2}{\mathrm{\pi r}}^{2}$
Area of the semicircle QES = $\frac{1}{2}{\mathrm{\pi r}}^{2}$
Area of the shaded region = Area of the semicircle PBQ + Area of the semicircle PTS $-$ Area of the semicircle QES
#### Question 19:
The inside perimeter of a running track shown in the figure is 400 m. The length of each of the straight portions is 90 m, and the ends are semicircles. If the track is 14 m wide everywhere, find the area of the track. Also, find the length of the outer boundary of the track.
Length of the inner curved portion = $400-\left(2×90\right)$ = 220 m
∴ Length of each inner curved path = $\frac{220}{2}$ = 110 m
Thus, we have:
Inner radius = 35 m
Outer radius = (35 + 14) = 49 m
Area of the track = {Area of the two rectangles [each $90×14$] + Area of the circular ring with R = 49 m and r = 35 m} = $2×\left(90×14\right)+\frac{22}{7}×\left({49}^{2}-{35}^{2}\right)$ = 2520 + 3696 = 6216 m2
Length of the outer boundary of the track = $2×90+2×\frac{22}{7}×49$ = 180 + 308 = 488 m
Therefore, the length of the outer boundary of the track is 488 m and the area of the track is 6216 sq. m.
#### Question 20:
In the given figure, OPQR is a rhombus, three of whose vertices lie on a circle with centre O. If the area of the rhombus is $32\sqrt{3}$, find the radius of the circle.
In a rhombus, all sides are congruent to each other.
Thus, we have:
$OP=PQ=QR=RO$
Now, consider $∆QOP$. Since P and Q lie on the circle, OP = OQ (radii); and PQ = OP, as all sides of a rhombus are equal.
Therefore, $∆QOP$ is equilateral.
Similarly, $∆QOR$ is also equilateral.
∴ Area of the rhombus = $2×\frac{\sqrt{3}}{4}×{\mathrm{OQ}}^{2}=32\sqrt{3}⇒{\mathrm{OQ}}^{2}=64$
OQ = 8 cm
Hence, the radius of the circle is 8 cm.
#### Question 21:
The side of a square is 10 cm. Find (i) the area of the inscribed circle, and (ii) the area of the circumscribed circle.
(i) If a circle is inscribed in a square, then the side of the square is equal to the diameter of the circle.
Side of the square = 10 cm
Side = Diameter = 10
∴ Radius = 5 cm
Area of the inscribed circle = ${\mathrm{\pi r}}^{2}$
(ii) If a circle is circumscribed in a square, then the diagonal of the square is equal to the diameter of the circle.
Diagonal = Diameter = $10\sqrt{2}$ cm
$r=5\sqrt{2}$ cm
Now,
Area of the circumscribed circle = ${\mathrm{\pi r}}^{2}$
#### Question 22:
If a square is inscribed in a circle, find the ratio of the areas of the circle and the square.
If a square is inscribed in a circle, then the diagonals of the square are diameters of the circle.
Let the diagonal of the square be d cm.
Thus, we have:
Radius of the circle = $\frac{d}{2}$ and area of the square = $\frac{{d}^{2}}{2}$
Ratio of the area of the circle to that of the square:
$=\frac{\pi \frac{{d}^{2}}{4}}{\frac{{d}^{2}}{2}}\phantom{\rule{0ex}{0ex}}=\frac{\mathrm{\pi }}{2}$
Thus, the ratio of the area of the circle to that of the square is $\mathrm{\pi }:2$.
#### Question 23:
The area of a circle inscribed in an equilateral triangle is 154 cm2. Find the perimeter of the triangle.
Let the radius of the inscribed circle be r cm.
Given:
Area of the circle = 154 ${\mathrm{cm}}^{2}$
We know:
Area of the circle =$\pi {r}^{2}$
$⇒154=\frac{22}{7}{r}^{2}\phantom{\rule{0ex}{0ex}}⇒\frac{154×7}{22}={r}^{2}\phantom{\rule{0ex}{0ex}}⇒{r}^{2}=49\phantom{\rule{0ex}{0ex}}⇒r=7$
In a triangle, the centre of the inscribed circle is the point of intersection of the medians and altitudes of the triangle. The centroid divides the median of a triangle in the ratio 2:1.
Here,
AO:OD = 2:1
Now,
Let the altitude be h cm.
We have:
$⇒h=3r\phantom{\rule{0ex}{0ex}}⇒h=21$
Let each side of the triangle be a cm. Then $h=\frac{\sqrt{3}}{2}a⇒a=\frac{2×21}{\sqrt{3}}=14\sqrt{3}$
∴ Perimeter of the triangle = 3a = $42\sqrt{3}$ cm ≈ 72.75 cm
#### Question 24:
The radius of the wheel of a vehicle is 42 cm. How many revolutions will it complete in a 19.8-km-long journey?
Radius of the wheel = 42 cm
Circumference of the wheel = $2\mathrm{\pi r}$
Distance covered by the wheel in 1 revolution = 2.64 m
Total distance = 19.8 km or 19800 m
∴ Number of revolutions taken by the wheel = $\frac{19800}{2.64}=7500$
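Question 24's revolution count can be verified directly, again assuming the chapter's value $\pi = \frac{22}{7}$:

```python
from fractions import Fraction

PI = Fraction(22, 7)
r_cm = 42
circumference_cm = 2 * PI * r_cm          # 264 cm = 2.64 m per revolution
journey_cm = Fraction(198, 10) * 100000   # 19.8 km expressed in cm
revolutions = journey_cm / circumference_cm
print(revolutions)  # 7500
```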
#### Question 25:
The wheels of the locomotive of a train are 2.1 m in radius. They make 75 revolutions in one minute. Find the speed of the train in km per hour.
Radius of the wheel = 2.1 m
Circumference of the wheel = $2\mathrm{\pi r}$
Distance covered by the wheel in 1 revolution = 13.2 m
Distance covered by the wheel in 75 revolutions =
Distance covered by the wheel in 1 minute = Distance covered by the wheel in 75 revolutions =$\frac{990}{1000}$ km
∴ Distance covered by the wheel in 1 hour = $\frac{990}{1000}×60$ km = 59.4 km
Hence, the speed of the train is 59.4 km per hour.
#### Question 26:
The wheels of a car make 2500 revolutions in covering a distance of 4.95 km. Find the diameter of a wheel.
Distance = 4.95 km = $4.95×1000×100$ cm = 495000 cm
∴ Distance covered by the wheel in 1 revolution = $\frac{495000}{2500}$ = 198 cm
Now,
Circumference of the wheel = 198 cm
$⇒2\pi r=198\phantom{\rule{0ex}{0ex}}⇒2×\frac{22}{7}×r=198\phantom{\rule{0ex}{0ex}}⇒r=\frac{198×7}{44}\phantom{\rule{0ex}{0ex}}⇒r=31.5\mathrm{cm}$
∴ Diameter of the wheel = 2r
= 2(31.5)
= 63 cm
#### Question 27:
A boy is cycling in such a way that the wheels of his bicycle are making 140 revolutions per minute. If the diameter of a wheel is 60 cm, calculate the speed (in km/h) at which the boy is cycling.
Diameter of the wheel = 60 cm
∴ Radius of the wheel = 30 cm
Circumference of the wheel = $2\mathrm{\pi r}$ = $2×\frac{22}{7}×30$ = $\frac{1320}{7}$ cm
Distance covered by the wheel in 1 revolution = $\frac{1320}{7}$ cm
∴ Distance covered by the wheel in 140 revolutions = $\frac{1320}{7}×140$ = 26400 cm = 264 m
Now,
Distance covered by the wheel in 1 minute = Distance covered by the wheel in 140 revolutions = 264 m
∴ Distance covered by the wheel in 1 hour = $264×60$ m = 15840 m = 15.84 km
Hence, the speed at which the boy is cycling is 15.84 km/h.
#### Question 28:
The diameter of the wheels of a bus is 140 cm. How many revolutions per minute do the wheels make when the bus is moving at a speed of 72.6 km per hour?
Diameter of the wheel = 140 cm
Radius = 70 cm
Circumference = $2\mathrm{\pi r}=2×\frac{22}{7}×70$ = 440 cm
Speed of the wheel = 72.6 km per hour
Distance covered by the wheel in 1 minute = $\frac{72.6×1000×100}{60}$ = 121000 cm
Number of revolutions made by the wheel in 1 minute =
$=\frac{121000}{440}\phantom{\rule{0ex}{0ex}}=275$
Hence, the wheel makes 275 revolutions per minute.
#### Question 29:
Find the area of a quadrant of a circle whose circumference is 22 cm.
Circumference of the circle = $2\mathrm{\pi r}$ = 22 cm $⇒2×\frac{22}{7}×r=22⇒r=3.5$ cm
Area of the circle = ${\mathrm{\pi r}}^{2}=\frac{22}{7}×3.5×3.5$ = 38.5 cm²
Area of the quadrant = $\frac{1}{4}×38.5$ = 9.625 cm²
#### Question 30:
A horse is placed for grazing inside a rectangular field 70 m by 52 m. It is tethered to one corner by a rope 21 m long. On how much area can it graze? How much area is left ungrazed?
Radius of the quadrant of the circle = 21 m
The shaded portion shows the part of the field the horse can graze.
Area of the grazed field = Area of the quadrant OPQ = $\frac{1}{4}×\frac{22}{7}×21×21$ = 346.5 m²
Total area of the field = 70 × 52 = 3640 m²
Area left ungrazed = Area of the field $-$ Area of the grazed field
= 3640 $-$ 346.5 = 3293.5 m²
#### Question 31:
A horse is tethered to one corner of a field which is in the shape of an equilateral triangle of side 12 m. If the length of the rope is 7 m, find the area of the field which the horse cannot graze. Write the answer correct to 2 places of decimal.
Side of the equilateral triangle = 12 m
Area of the equilateral triangle =$\frac{\sqrt{3}}{4}×\left(\mathrm{Side}{\right)}^{2}=\frac{\sqrt{3}}{4}×12×12=36\sqrt{3}$ ≈ 62.35 m²
Length of the rope = 7 m
Area of the field the horse can graze is the area of the sector of radius 7 m. Also, the angle subtended at the centre is 60$°$.
=$\frac{\theta }{360}×\mathrm{\pi }{r}^{\mathit{2}}=\frac{60}{360}×\frac{22}{7}×7×7=\frac{77}{3}$ ≈ 25.67 m²
Area of the field the horse cannot graze = Area of the equilateral triangle $-$ Area of the field the horse can graze ≈ 62.35 $-$ 25.67 = 36.68 m²
#### Question 32:
Four equal circles are described about the four corners of a square so that each touches two of the others, as shown in the figure. Find the area of the shaded region, if each side of the square measures 14 cm.
Side of the square = 14 cm
Radius of the circle $=\frac{14}{2}$= 7 cm
Area of the quadrant of one circle = $\frac{1}{4}{\mathrm{\pi r}}^{2}$
Area of the quadrants of four circles = $4×\frac{1}{4}×\frac{22}{7}×7×7$ = 154 cm²
Now,
Area of the square = ${\left(\mathrm{Side}\right)}^{2}$
Area of the shaded region = Area of the square $-$ Area of the quadrants of four circles
= 196 $-$ 154
= 42 cm2
#### Question 33:
Find the area of the shaded region shown in the given figure. The four corners are circle quadrants, and at the centre, there is a circle.
Area of the square = $\left(\mathrm{Side}{\right)}^{2}$
Area of the circle = ${\mathrm{\pi r}}^{2}$
Radius = 1 cm
Area of the quadrant of one circle = $\frac{1}{4}{\mathrm{\pi r}}^{2}$
Area of the quadrants of four circles =
Area of the shaded region = Area of the square $-$ Area of the circle $-$ Area of the quadrants of four circles
=
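The dimensions are not stated above; assuming the standard figure for this question (a square of side 4 cm, corner quadrants of radius 1 cm and a central circle of radius 1 cm), the computation runs:

```latex
\text{Area of the square} = 4^2 = 16\ \text{cm}^2
\text{Area of the central circle} = \pi(1)^2 = \tfrac{22}{7}\ \text{cm}^2
\text{Area of the four corner quadrants} = 4 \times \tfrac{1}{4}\pi(1)^2 = \tfrac{22}{7}\ \text{cm}^2
\text{Shaded area} = 16 - \tfrac{22}{7} - \tfrac{22}{7} = \tfrac{68}{7} \approx 9.71\ \text{cm}^2
```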
#### Question 34:
A rectangular piece is 20 m long and 15 m wide. From its four corners, quadrants of radius 3.5 m have been cut. Find the area of the remaining part.
Area of the quadrant of one circle = $\frac{1}{4}×\pi {r}^{2}=\frac{1}{4}×\frac{22}{7}×3.5×3.5$ = 9.625 m²
Area of the quadrants of four circles = $4×9.625=38.5$ m²
Area of the rectangle = 20 × 15 = 300 m²
∴ Area of the remaining part = Area of the rectangle $-$ Area of the quadrants of four circles
= 300 $-$ 38.5
= 261.5 ${\mathrm{m}}^{2}$
#### Question 35:
Four cows are tethered at the four corners of a square field of side 50 m such that each can graze the maximum unshared area. What area will be left ungrazed?
Each cow can graze a region that cannot be accessed by other cows.
∴ Radius of the region grazed by each cow = $\frac{50}{2}$ = 25 m
Area that each cow grazes = $\frac{1}{4}×\mathrm{\pi }×{r}^{2}=\frac{1}{4}×\frac{22}{7}×25×25$ ≈ 491.07 m²
Total area grazed = $4×\frac{1}{4}×\frac{22}{7}×625=\frac{13750}{7}$ ≈ 1964.29 m²
Now,
Area left ungrazed = Area of the square $-$ Grazed area
= $2500-\frac{13750}{7}=\frac{3750}{7}$ ≈ 535.71 m²
#### Question 36:
In the given figure, AOBC represents a quadrant of a circle of radius 3.5 cm with centre O. Calculate the area of the shaded portion.
Area of the right-angled $∆$AOD = $\frac{1}{2}×b×h$
=
Area of the sector AOB =$\frac{\theta }{360}×\mathrm{\pi }×{r}^{2}$
Area of the shaded region = Area of the sector AOB $-$ Area of the $∆\mathrm{AOD}$
#### Question 37:
In the given figure, PQRS represents a flower bed. If OP = 21 m and OR = 14 m, find the area of the flower bed.
Area of the flower bed is the difference between the areas of sectors OPQ and ORS.
∴ Area of the flower bed = $\frac{90}{360}×\frac{22}{7}×\left({21}^{2}-{14}^{2}\right)=\frac{1}{4}×\frac{22}{7}×245$ = 192.5 m²
#### Question 38:
Three equal circles, each of radius 6 cm, touch one another as shown in the figure. Find the area of enclosed between them.
Join ABC. Each side equals 12 cm (twice the radius), so $∆$ABC is an equilateral triangle.
Now,
Area of the equilateral triangle =$\frac{\sqrt{3}}{4}×{\mathrm{Side}}^{2}=\frac{\sqrt{3}}{4}×12×12=36\sqrt{3}$ ≈ 62.35 cm²
Area of the three sectors (each of angle 60$°$ and radius 6 cm) = $3×\frac{60}{360}×\frac{22}{7}×6×6=\frac{396}{7}$ ≈ 56.57 cm²
Area of the enclosed portion = Area of the triangle $-$ Area of the three sectors ≈ 62.35 $-$ 56.57 = 5.78 cm²
#### Question 39:
If three circles of radius a each, are drawn such that each touches the other two, prove that the area included between them is equal to $\frac{4}{25}{a}^{2}.$
When three circles touch each other, their centres form an equilateral triangle, with each side being 2a.
Area of the triangle = $\frac{\sqrt{3}}{4}×2a×2a=\sqrt{3}{a}^{2}$
Total area of the three sectors of circles = $3×\frac{60}{360}×\frac{22}{7}×{a}^{2}=\frac{1}{2}×\frac{22}{7}×{a}^{2}=\frac{11}{7}{a}^{2}$
Area of the region between the circles =
$=\left(\sqrt{3}-\frac{11}{7}\right){a}^{2}\phantom{\rule{0ex}{0ex}}=\left(1.73-1.57\right){a}^{2}\phantom{\rule{0ex}{0ex}}=0.16{a}^{2}\phantom{\rule{0ex}{0ex}}=\frac{4}{25}{a}^{2}$
#### Question 40:
Four equal circles, each of radius 5 cm, touch each other, as shown in the figure. Find the area included between them.
Radius = 5 cm
AB = BC = CD = AD = 10 cm
All sides are equal, so it is a square.
Area of a square = ${\mathrm{Side}}^{2}$
Area of the square = 10 × 10 = 100 cm²
Area of the quadrant of one circle = $\frac{1}{4}{\mathrm{\pi r}}^{2}=\frac{1}{4}×\frac{22}{7}×5×5=\frac{275}{14}$ cm²
Area of the quadrants of four circles = $4×\frac{275}{14}=\frac{550}{7}$ ≈ 78.57 cm²
Area of the shaded portion = Area of the square $-$ Area of the quadrants of four circles = $100-\frac{550}{7}=\frac{150}{7}$ ≈ 21.43 cm²
#### Question 41:
Four equal circles, each of radius a units, touch each other. Show that the area between them is $\left(\frac{6}{7}{a}^{2}\right)$ sq units.
When four circles touch each other, their centres form the vertices of a square. The sides of the square are 2a units.
Area of the square = $\left(2a{\right)}^{2}=4{a}^{2}$ sq units
Area occupied by the four sectors = $4×\frac{90}{360}×\frac{22}{7}×{a}^{2}=\frac{22}{7}{a}^{2}$ sq units
Area between the circles = Area of the square $-$ Area of the four sectors = $4{a}^{2}-\frac{22}{7}{a}^{2}=\frac{6}{7}{a}^{2}$ sq units
#### Question 42:
A square tank has an area of 1600 m2. There are four semicircular plots around it. Find the cost of turfing the plots at Rs 1.25 per m2.
Area of the square = ${\mathrm{Side}}^{2}$ = 1600 m², so each side = 40 m
Each plot is a semicircle of radius $\frac{40}{2}$ = 20 m; area of each plot = $\frac{1}{2}×\frac{22}{7}×20×20=\frac{4400}{7}$ m²
Total area of the four plots = $4×\frac{4400}{7}=\frac{17600}{7}$ ≈ 2514.29 m²
Cost of turfing at Rs 1.25 per sq. m ≈ 2514.29 × 1.25 ≈ Rs 3142.86
#### Question 43:
A lawn is rectangular in the middle, and it has semicircular portions along the shorter sides of the rectangle. The rectangular portion measures 50 m by 35 m. Find the area of the lawn.
Area of the rectangle = 50 × 35 = 1750 m²
Radius of each semicircle = $\frac{35}{2}$ = 17.5 m
Area of the two semicircles = $2×\frac{1}{2}{\mathrm{\pi r}}^{2}=\frac{22}{7}×17.5×17.5$ = 962.5 m²
∴ Total area of the lawn = Area of the rectangle + Area of the two semicircles = 1750 + 962.5 = 2712.5 m²
#### Question 44:
A rope by which a cow is tethered is increased from 16 m to 23 m. How much additional ground does it now have to graze?
r1 = 16 m
r2 = 23 m
Amount of additional ground available = Area of the bigger circle $-$ Area of the smaller circle = $\frac{22}{7}×\left({23}^{2}-{16}^{2}\right)=\frac{22}{7}×273$ = 858 m²
#### Question 45:
In the given figure ∆ABC is right-angled at A, with AB = 6 cm and AC = 8 cm. A circle with centre O has been inscribed inside the triangle. Find the value of r, the radius of the inscribed circle.
Join OC, OA and OB. This gives triangles OAC, OAB and OCB.
Consider $∆$CAB. Using Pythagoras' theorem, we have:
BC = $\sqrt{{\mathrm{AB}}^{2}+{\mathrm{AC}}^{2}}=\sqrt{{6}^{2}+{8}^{2}}=\sqrt{100}$ = 10 cm
The radius r of the inscribed circle can now be found by splitting $∆$ABC into the triangles OAB, OBC and OCA.
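The remaining steps follow from the fact that the area of a triangle equals its in-radius times its semi-perimeter (A = rs):

```latex
\text{Area}(\triangle ABC) = \tfrac{1}{2} \times AB \times AC = \tfrac{1}{2} \times 6 \times 8 = 24\ \text{cm}^2
s = \tfrac{1}{2}(6 + 8 + 10) = 12\ \text{cm}
A = rs \;\Rightarrow\; r = \frac{A}{s} = \frac{24}{12} = 2\ \text{cm}
```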
#### Question 46:
A child draws the figure of an aeroplane as shown. Here, the wings ABCD and FGHI are parallelograms, the tail DEF is an isosceles triangle, the cockpit CKI is a semicircle and CDFI is a square. In the given figure, BP ⊥ CD, HQ ⊥ FI and EL ⊥ DF. If CD = 8 cm, BP = HQ = 4 cm and DE = EF = 5 cm, find the area of the whole figure.
CD = 8 cm
BP = HQ = 4 cm
DE = EF = 5 cm
Area of the parallelogram ABCD = $B×H$ = 8 × 4 = 32 cm²
Area of parallelogram FGHI = $B\mathit{×}H$ = 8 × 4 = 32 cm²
Area of the square CDFI = ${\mathrm{Side}}^{2}$
= 8 × 8 = 64 cm²
In $∆$ELF, we have: EL = $\sqrt{{\mathrm{EF}}^{2}-{\mathrm{LF}}^{2}}=\sqrt{{5}^{2}-{4}^{2}}$ = 3 cm
Area of $△$DEF = $\frac{1}{2}×B×H=\frac{1}{2}×8×3$ = 12 cm²
Area of the semicircle CKI (radius 4 cm, taking π = 3.14) =$\frac{1}{2}{\mathrm{\pi r}}^{2}=\frac{1}{2}×3.14×4×4$ = 25.12 cm²
∴ Total Area = Area of the parallelogram ABCD + Area of the parallelogram FGHI + Area of the triangle DEF + Area of the semicircle CKI + Area of the square
Total Area = 32 + 32 + 12 + 25.12 + 64 = 165.12 cm2
#### Question 47:
Find the area of the region ABCDEFA shown in the given figure, given that ABDE is a square of side 10 cm, BCD is a semicircle with BD as diameter, EF = 8 cm, AF = 6 cm and ∠AFE = 90°.
Join AE.
Now, AEDB is a square.
Area of the square = ${\mathrm{Side}}^{2}$ = 10 × 10 = 100 cm²
Area of semi-circle (radius 5 cm, taking π = 3.14) = $\frac{1}{2}{\mathrm{\pi r}}^{2}$= $\frac{1}{2}×3.14×5×5$ = 39.25 cm²
Area of $∆$EFA = $\frac{1}{2}×\mathrm{AF}×\mathrm{EF}=\frac{1}{2}×6×8$ = 24 cm²
Area of the region ABCDEFA = Area of the square + Area of the semicircle $-$ Area of $∆$EFA
= 100 + 39.25 $-$ 24
= 115.25 sq. cm
#### Question 48:
In the given figure, ABCD is a square of side 14 cm. Find the area of the shaded region in the given figure.
Area of the square = 14 × 14 = 196 cm²
Area of the four circles (each of radius $\frac{7}{2}$ cm, as in the figure) = $4×\frac{22}{7}×\frac{7}{2}×\frac{7}{2}$ = 154 cm²
Area of the shaded region = Area of the square $-$ Area of four circles = 196 $-$ 154 = 42 cm²
#### Question 49:
Find the perimeter of the shaded region, where ADC, AEB and BFC are semicircles on diameters AC, AB and BC respectively.
We know:
Perimeter (circumference of a circle) = $2\mathrm{\pi r}$
Perimeter of a semicircular arc = $\mathrm{\pi r}$
Now,
For the arc ADC, radius is 2.1 cm.
∴ Perimeter of the arc ADC = π × 2.1 = 2.1π cm
For the arc AEB, radius is 1.4 cm.
∴ Perimeter of the arc AEB = π × 1.4 = 1.4π cm
For the arc BFC, radius is 0.7 cm.
∴ Perimeter of the arc BFC = π × 0.7 = 0.7π cm
Thus, we have:
Perimeter of the shaded region = $2.1\mathrm{\pi }+1.4\mathrm{\pi }+0.7\mathrm{\pi }$
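Taking π = 22/7, the perimeter evaluates to a single number:

```latex
\text{Perimeter} = 2.1\pi + 1.4\pi + 0.7\pi = 4.2\pi = 4.2 \times \tfrac{22}{7} = 13.2\ \text{cm}
```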
#### Question 50:
In the given figure, ∆ABC is right-angled at A. Semicircles are drawn on AB, AC and BC as diameters. It is given that AB = 3 cm and AC = 4 cm. Find the area of the shaded region.
In $∆$ABC, right-angled at A, we have: BC = $\sqrt{{\mathrm{AB}}^{2}+{\mathrm{AC}}^{2}}=\sqrt{{3}^{2}+{4}^{2}}$ = 5 cm
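The shaded region consists of the two lunes on the legs; adding the semicircles on AB and AC and the triangle, then subtracting the semicircle on BC, the π-terms cancel by Pythagoras' theorem:

```latex
\text{Shaded area} = \tfrac{\pi}{2}\!\left(\tfrac{3}{2}\right)^{2} + \tfrac{\pi}{2}\!\left(\tfrac{4}{2}\right)^{2}
  + \tfrac{1}{2}\times 3 \times 4 - \tfrac{\pi}{2}\!\left(\tfrac{5}{2}\right)^{2}
= \tfrac{\pi}{8}\,(9 + 16 - 25) + 6 = 6\ \text{cm}^2
```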
#### Question 51:
In the given figure, PQ = 24, PR = 7 cm and O is the centre of the circle. Find the area of the shaded region.
In the right $∆$RPQ, we have: QR = $\sqrt{{\mathrm{PQ}}^{2}+{\mathrm{PR}}^{2}}=\sqrt{{24}^{2}+{7}^{2}}=\sqrt{625}$ = 25 cm
∴ OR = OQ = 12.5 cm
Now,
Area of the circle = ${\mathrm{\pi r}}^{2}=\frac{22}{7}×12.5×12.5$ cm²
Area of the semicircle = $\frac{1}{2}×\frac{22}{7}×12.5×12.5$ ≈ 245.54 cm²
Area of the triangle = $\frac{1}{2}×\mathrm{PQ}×\mathrm{PR}=\frac{1}{2}×24×7$ = 84 cm²
Thus, we have:
Area of the shaded region = Area of the semicircle $-$ Area of the triangle ≈ 245.54 $-$ 84 = 161.54 cm²
#### Question 52:
A round table cover has six equal designs as shown in the given figure. If the radius of the cover is 35 cm, then find the total area of the design.
Join each vertex of the hexagon to the centre of the circle.
The hexagon is made up of six triangles.
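Assuming the six designs are the segments between the circle and an inscribed regular hexagon (so each side of the hexagon equals the radius, 35 cm) and taking $\sqrt{3}≈1.732$:

```latex
\text{Area of the circle} = \tfrac{22}{7} \times 35^2 = 3850\ \text{cm}^2
\text{Area of the hexagon} = 6 \times \tfrac{\sqrt{3}}{4} \times 35^2 = 1837.5\sqrt{3} \approx 3182.55\ \text{cm}^2
\text{Total area of the design} \approx 3850 - 3182.55 = 667.45\ \text{cm}^2
```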
#### Question 53:
In the given figure, ∆ABC is right-angled at A. Find the area of the shaded region if AB = 6 cm, BC = 10 cm and O is the centre of the incircle of ∆ABC.
Using Pythagoras' theorem for triangle ABC, we have:
$C{A}^{2}+A{B}^{2}=B{C}^{2}$
Now, we must find the radius of the incircle. Draw OE, OD and OF perpendicular to AC, AB and BC, respectively.
Here,
Because the circle is an incircle, AE and AD are tangents to the circle.
Also,
$\angle A=90°$
Therefore, AEOD is a square.
Thus, we can say that $AE=EO=OD=AD=r$.
Area of the shaded part = Area of the triangle $-$ Area of the circle
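Since AEOD is a square of side r and the tangent lengths from B and from C are equal, the in-radius can be computed directly:

```latex
CA = \sqrt{BC^2 - AB^2} = \sqrt{10^2 - 6^2} = 8\ \text{cm}
r = \frac{AB + CA - BC}{2} = \frac{6 + 8 - 10}{2} = 2\ \text{cm}
\text{Shaded area} = \tfrac{1}{2}\times 6 \times 8 - \tfrac{22}{7}\times 2^2
  = 24 - \tfrac{88}{7} = \tfrac{80}{7} \approx 11.43\ \text{cm}^2
```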
#### Question 54:
The area of an equilateral triangle is $49\sqrt{3}$ cm². Taking each angular point as centre, circles are drawn with radius equal to half the length of the side of the triangle. Find the area of the triangle not included in the circles.
Let the side of the equilateral triangle be $a$ cm.
Thus, we have:
$\frac{\sqrt{3}}{4}{a}^{2}=49\sqrt{3}\phantom{\rule{0ex}{0ex}}\phantom{\rule{0ex}{0ex}}⇒{a}^{2}=196\phantom{\rule{0ex}{0ex}}⇒a=14$
The radius of each circle is 7 cm. The angle at the vertex of each triangle is $60°$.
Area of the sector with angle $60°$ and radius 7 cm = $\frac{60}{360}×\frac{22}{7}×7×7=\frac{77}{3}$ cm²
There are three such sectors.
Total area = $3×\frac{77}{3}$ = 77 cm²
Area not included in the circles = Area of the triangle $-$ Area of the three sectors = $49\sqrt{3}-77$ ≈ 84.87 $-$ 77 = 7.87 cm²
#### Question 55:
In the given figure, ∆ABC is a right-angled triangle with ∠B = 90°, AB = 48 cm and BC = 14 cm. With AC as diameter a semicircle is drawn and with BC as radius, A quadrant of a circle is drawn. Find the area of the shaded region.
Consider the triangle ABC. Using Pythagoras' theorem, AC = $\sqrt{{\mathrm{AB}}^{2}+{\mathrm{BC}}^{2}}=\sqrt{{48}^{2}+{14}^{2}}=\sqrt{2500}$ = 50 cm.
Now, the radius of the semicircle on AC is 25 cm and the radius of the quadrant is BC = 14 cm.
#### Question 56:
Calculate the area other than the area common between two quadrants of circles of radius 16 cm each, which is shown as the shaded region in the given figure.
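Assuming the standard figure (two quadrants of radius 16 cm drawn from opposite corners of a 16 cm square, overlapping in a lens-shaped common region) and taking π = 22/7:

```latex
\text{Area of the square} = 16^2 = 256\ \text{cm}^2
\text{Common (lens) area} = 2 \times \tfrac{1}{4}\pi(16)^2 - 256 = \tfrac{22}{7}\times 128 - 256 \approx 146.29\ \text{cm}^2
\text{Area other than the common area} = 256 - 146.29 = 109.71\ \text{cm}^2
```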
#### Question 57:
In a circular table cover of radius 70 cm, a design is formed leaving an equilateral ∆ABC in the middle, as shown in the figure. Find the total area of the design.
$∆$ABC is equilateral.
Thus, we have:
$\angle A=\angle B=\angle C=60°$
OB bisects $\angle B$. Therefore, $\angle OBD=30°$.
$∆$OBD is a right-angled triangle.
We have:
$\mathrm{BD}=\mathrm{OB}\mathrm{cos}30°=70×\frac{\sqrt{3}}{2}=35\sqrt{3}$ cm
Now,
$BC=2BD=35\sqrt{3}×2=70\sqrt{3}=AB=AC$
Area of the circle = $\frac{22}{7}×70×70$ = 15400 cm²
Area of the shaded part = Area of the circle $-$ Area of the triangle = $15400-\frac{\sqrt{3}}{4}×{\left(70\sqrt{3}\right)}^{2}=\left(15400-3675\sqrt{3}\right)$ cm² ≈ 9034.9 cm²
#### Question 58:
Find the area of the sector of a circle of radius 14 cm with central angle 45°.
Area of the sector = $\frac{\theta }{360}×\mathrm{\pi }{r}^{2}=\frac{45}{360}×\frac{22}{7}×14×14$ = 77 cm²
#### Question 59:
A sector is cut from a circle of radius 21 cm. The angle of the sector is 150°. Find the length of the arc and the area of the sector.
Given:
Radius = 21 cm
Angle of sector = ${150}^{\circ }$
Now,
Length of the arc = $\frac{2\mathrm{\pi r\theta }}{360}=\frac{150}{360}×2×\frac{22}{7}×21$ = 55 cm
Area of the sector =$\frac{{\mathrm{\pi r}}^{2}\theta }{360}=\frac{150}{360}×\frac{22}{7}×21×21$ = 577.5 cm²
#### Question 60:
The radius of a circle is 17.5 cm. Find the area of the sector enclosed by two radii and an arc 44 cm in length.
Given:
Radius = 17.5 cm
Length of the arc = 44 cm
Now,
Length of the arc $=\frac{2\mathrm{\pi r\theta }}{360}$
$⇒44=2×\frac{22}{7}×17.5×\frac{\theta }{360}\phantom{\rule{0ex}{0ex}}⇒\theta =\frac{44×7×360}{44×17.5}\phantom{\rule{0ex}{0ex}}⇒\theta ={144}^{\circ }$
Also,
Area of the sector =$\frac{{\mathrm{\pi r}}^{2}\mathrm{\theta }}{360}=\frac{144}{360}×\frac{22}{7}×17.5×17.5$ = 385 cm²
#### Question 61:
The perimeter of a certain sector of a circle of radius 6.5 cm is 31 cm. Find the area of the sector.
Given:
Radius = 6.5 cm
Let O be the centre of the circle with radius 6.5 cm and OACBO be its sector with perimeter 31 cm.
Thus, we have:
OA + OB + arc AB = 31 cm $⇒$ arc AB = 31 $-$ (6.5 + 6.5) = 18 cm
Now,
Area of the sector OACBO = $\frac{1}{2}×\mathrm{Radius}×\mathrm{Arc}=\frac{1}{2}×6.5×18$ = 58.5 cm²
#### Question 62:
The area of the sector of a circle of radius 10.5 cm is 69.3 cm2. Find the central angle of the sector.
Given:
Area of the sector = 69.3 cm2
Radius = 10.5 cm
Now,
Area of the sector $=\frac{{\mathrm{\pi r}}^{2}\mathrm{\theta }}{360}$
$⇒69.3=\frac{22}{7}×10.5×10.5×\frac{\theta }{360}\phantom{\rule{0ex}{0ex}}⇒\theta =\frac{69.3×7×360}{22×10.5×10.5}\phantom{\rule{0ex}{0ex}}⇒\theta ={72}^{\circ }$
∴ Central angle of the sector = ${72}^{\circ }$
#### Question 63:
A pendulum swings through an angle of 30° and describes an arc 8.8 cm in length. Find the length of the pendulum.
Given:
Length of the arc = 8.8 cm
And,
$\theta ={30}^{\circ }$
Now,
Length of the arc =$\frac{2\mathrm{\pi r\theta }}{360}$
$⇒8.8=2×\frac{22}{7}×r×\frac{30}{360}\phantom{\rule{0ex}{0ex}}⇒r=\frac{8.8×360×7}{2×22×30}=16.8$
∴ Length of the pendulum = 16.8 cm
#### Question 64:
The length of an arc of a circle, subtending an angle of 54° at the centre, is 16.5 cm. Calculate the radius, circumference and area of the circle.
Length of the arc = 16.5 cm
$\theta ={54}^{\circ }$
Circumference=?
We know:
Length of the arc $=\frac{2\mathrm{\pi r\theta }}{360}$
$⇒16.5=2×\frac{22}{7}×r×\frac{54}{360}\phantom{\rule{0ex}{0ex}}⇒r=\frac{16.5×360×7}{2×22×54}=17.5\mathrm{cm}$
Circumference = $2\mathrm{\pi r}=2×\frac{22}{7}×17.5$ = 110 cm
Now,
Area of the circle =${\mathrm{\pi r}}^{2}=\frac{22}{7}×17.5×17.5$ = 962.5 cm²
#### Question 65:
The circumference of a circle is 88 cm. Find the area of the sector whose central angle is 72°.
Given:
Circumference of the circle = 88 cm
$\theta ={72}^{\circ }$
Area of the sector = ?
Now, $2\mathrm{\pi }r=88⇒r=14$ cm
Area of the sector = $\frac{\mathrm{\pi }{r}^{2}\mathrm{\theta }}{360}=\frac{72}{360}×\frac{22}{7}×14×14$ = 123.2 cm²
#### Question 66:
The minute hand of a clock is 15 cm long. Calculate the area swept by it in 20 minutes.
Angle inscribed by the minute hand in 60 minutes = ${360}^{\circ }$
Angle inscribed by the minute hand in 20 minutes = $\frac{360}{60}×20={120}^{\circ }$
We have:
∴ Required area swept by the minute hand in 20 minutes = Area of the sector with r = 15 cm and $\theta ={120}^{\circ }$
$=\frac{{\mathrm{\pi r}}^{2}\mathrm{\theta }}{360}=\frac{120}{360}×\frac{22}{7}×15×15=\frac{1650}{7}$ ≈ 235.71 cm²
#### Question 67:
A sector of 56°, cut out from a circle, contains 17.6 cm2. Find the radius of the circle.
Area of the sector =17.6 cm2
Area of the sector$=\frac{{\mathrm{\pi r}}^{2}\mathrm{\theta }}{360}\phantom{\rule{0ex}{0ex}}⇒17.6=\frac{22}{7}×{r}^{2}×\frac{56}{360}\phantom{\rule{0ex}{0ex}}⇒{r}^{2}=\frac{17.6×360×7}{22×56}=36\phantom{\rule{0ex}{0ex}}⇒r=6$
∴ Radius of the circle = 6 cm
#### Question 68:
A circular disc of radius 6 cm is divided into three sectors with central angles 90°, 120° and 150°. What part of the whole circle is the sector with central angle 150°? Also, calculate the ratio of the areas of the three sectors.
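Since the area of a sector is proportional to its central angle, both parts of the question can be answered directly:

```latex
\text{Fraction for the } 150^{\circ}\text{ sector} = \frac{150}{360} = \frac{5}{12}\ \text{of the whole circle}
\text{Ratio of the areas of the three sectors} = 90 : 120 : 150 = 3 : 4 : 5
```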
#### Question 69:
The short and long hands of a clock are 4 cm and 6 cm long respectively. Find the sum of distances travelled by their tips in 2 days.
In 2 days, the short hand will complete 4 rounds.
Length of the short hand = 4 cm
Distance covered by the short hand = $4×2×\frac{22}{7}×4=\frac{704}{7}$ ≈ 100.57 cm
In the same 2 days, the long hand will complete 48 rounds.
Length of the long hand = 6 cm
Distance covered by the long hand = $48×2×\frac{22}{7}×6=\frac{12672}{7}$ ≈ 1810.29 cm
∴ Total distance covered by both the hands = Distance covered by the short hand + Distance covered by the long hand ≈ 100.57 + 1810.29 = 1910.86 cm
#### Question 70:
Find the lengths of the arcs cut off from a circle of radius 12 cm by a chord 12 cm long. Also, find the area of the minor segment.
Let AB be the chord. Joining A and B to O, we get an equilateral triangle OAB.
Thus, we have:
$\angle O=\angle A=\angle B=60°$
Length of the arc ACB (minor arc, $60°$) = $\frac{60}{360}×2×\frac{22}{7}×12=\frac{88}{7}$ ≈ 12.57 cm
Length of the arc ADB (major arc) = $2×\frac{22}{7}×12-\frac{88}{7}=\frac{440}{7}$ ≈ 62.86 cm
Now,
Area of the minor segment = Area of the sector $-$ Area of $∆$OAB = $\frac{60}{360}×\frac{22}{7}×144-\frac{\sqrt{3}}{4}×144$ ≈ 75.43 $-$ 62.35 = 13.08 cm²
#### Question 71:
The radius of a circle with centre O is 6 cm. Two radii OA and OB are drawn at right angles to each other. Find the areas of the minor and major segments.
The triangle OAB is a right isosceles triangle.
Area of triangle OAB = $\frac{1}{2}×6×6$ = 18 cm²
Now,
Area of the minor segment = Area of the quadrant $-$ Area of $∆$OAB = $\frac{1}{4}×\frac{22}{7}×36-18$ ≈ 28.29 $-$ 18 = 10.29 cm²
Area of the major segment = Area of the circle $-$ Area of the minor segment ≈ $\frac{22}{7}×36-10.29$ = 113.14 $-$ 10.29 = 102.85 cm²
#### Question 72:
A chord 10 cm long is drawn in a circle whose radius is $5\sqrt{2}$ cm. Find the areas of both the segments.
Let O be the centre of the circle and AB be the chord.
Consider $∆$OAB.
${\mathrm{OA}}^{2}+{\mathrm{OB}}^{2}=50+50=100={\mathrm{AB}}^{2}$
Thus, $∆$OAB is a right isosceles triangle, right-angled at O.
Thus, we have:
Area of $∆$OAB = $\frac{1}{2}×5\sqrt{2}×5\sqrt{2}$ = 25 cm²
Area of the minor segment = Area of the sector $-$ Area of the triangle = $\frac{90}{360}×\frac{22}{7}×50-25$ ≈ 39.29 $-$ 25 = 14.29 cm²
Area of the major segment = Area of the circle $-$ Area of the minor segment ≈ $\frac{22}{7}×50-14.29$ = 157.14 $-$ 14.29 = 142.85 cm²
#### Question 73:
Find the areas of both the segments of a circle of radius 42 cm with central angle 120°.
Area of the triangle = $\frac{1}{2}{R}^{2}\mathrm{sin}\theta$
Here, R is the measure of the equal sides of the isosceles triangle and θ is the angle enclosed by the equal sides.
Thus, we have:
Area of the triangle = $\frac{1}{2}×{42}^{2}×\mathrm{sin}120°=441\sqrt{3}$ ≈ 763.81 cm²
Area of the minor segment = Area of the sector $-$ Area of the triangle
= $\frac{120}{360}×\frac{22}{7}×42×42-441\sqrt{3}$ ≈ 1848 $-$ 763.81 = 1084.19 cm²
Area of the major segment = Area of the circle $-$ Area of the minor segment ≈ $\frac{22}{7}×42×42-1084.19$ = 5544 $-$ 1084.19 = 4459.81 cm²
#### Question 74:
A chord of a circle of radius 30 cm makes an angle of 60° at the centre of the circle. Find the areas of the minor and major segments.
Let the chord be AB. The ends of the chord are connected to the centre of the circle O to give the triangle OAB.
OAB is an isosceles triangle. The angle at the centre is 60$°$
Area of the triangle = $\frac{1}{2}×{30}^{2}×\mathrm{sin}60°=225\sqrt{3}$ ≈ 389.71 cm²
Area of the sector OACBO = $\frac{60}{360}×\frac{22}{7}×30×30$ ≈ 471.43 cm²
Area of the minor segment = Area of the sector $-$ Area of the triangle
≈ 471.43 $-$ 389.71 = 81.72 cm²
Area of the major segment = Area of the circle $-$ Area of the minor segment ≈ $\frac{22}{7}×900-81.72$ = 2828.57 $-$ 81.72 = 2746.85 cm²
#### Question 75:
In a circle of radius 10.5 cm, the minor arc is one-fifth of the major arc. Find the area of the sector corresponding to the major arc.
Let the length of the major arc be $x$ cm
Radius of the circle = 10.5 cm
∴ Length of the minor arc = $\frac{x}{5}$ cm
Circumference = $x+\frac{x}{5}=\frac{6x}{5}$ cm
Using the given data, we get: $\frac{6x}{5}=2×\frac{22}{7}×10.5=66⇒x=55$
∴ Area of the sector corresponding to the major arc = $\frac{1}{2}×\mathrm{Radius}×\mathrm{Arc}=\frac{1}{2}×10.5×55$ = 288.75 cm²
#### Question 76:
The diameters of the front and rear wheels of a tractor are 80 cm and 2 m respectively. Find the number of revolutions that a rear wheel makes to cover the distance which the front wheel covers in 800 revolutions.
Radius of the front wheel = 40 cm = $\frac{2}{5}$ m
Circumference of the front wheel = $2\mathrm{\pi }×\frac{2}{5}=\frac{4\mathrm{\pi }}{5}$ m
Distance covered by the front wheel in 800 revolutions = $\frac{4\mathrm{\pi }}{5}×800=640\mathrm{\pi }$ m
Radius of the rear wheel = 1 m
Circumference of the rear wheel = $2\mathrm{\pi }$ m
∴ Required number of revolutions = Distance covered by the front wheel ÷ Circumference of the rear wheel
$=\frac{640\mathrm{\pi }}{2\mathrm{\pi }}\phantom{\rule{0ex}{0ex}}=320$
#### Question 1:
The perimeter of a circular field is 242 m. The area of the field is
(a) 9317 m2
(b) 18634 m2
(c) 4658.5 m2
(d) none of these
(c) 4658.5 m2
Let the radius be r m.
We know:
Perimeter of a circle
Thus, we have:
$2\mathrm{\pi }r=242$
$⇒2×\frac{22}{7}×r=242\phantom{\rule{0ex}{0ex}}⇒\frac{44}{7}×r=242\phantom{\rule{0ex}{0ex}}⇒r=\left(242×\frac{7}{44}\right)\phantom{\rule{0ex}{0ex}}⇒r=\frac{77}{2}$
∴ Area of the circle$=\mathrm{\pi }{r}^{2}$
#### Question 2:
The area of a circle is 38.5 cm2. The circumference of the circle is
(a) 6.2 cm
(b) 12.1 cm
(c) 11 cm
(d) 22 cm
(d) 22 cm
Let the radius be r cm.
We know:
Area of a circle
Thus, we have:
$\mathrm{\pi }{r}^{2}=38.5$
$⇒\frac{22}{7}×{r}^{2}=38.5\phantom{\rule{0ex}{0ex}}⇒{r}^{2}=\left(38.5×\frac{7}{22}\right)\phantom{\rule{0ex}{0ex}}⇒{r}^{2}=\left(\frac{385}{10}×\frac{7}{22}\right)\phantom{\rule{0ex}{0ex}}⇒{r}^{2}=\frac{49}{4}\phantom{\rule{0ex}{0ex}}⇒r=\frac{7}{2}$
Now,
Circumference of the circle$=2\mathrm{\pi }r$
#### Question 3:
The area of a circle is 49 π cm2. Its circumference is
(a) 7 π cm
(b) 14 π cm
(c) 21 π cm
(d) 28 π cm
(b) 14π cm
Let the radius be r cm.
We know:
Area of a circle$=\mathrm{\pi }{r}^{2}\phantom{\rule{0ex}{0ex}}$
Thus, we have:
$\mathrm{\pi }{r}^{2}=49\mathrm{\pi }\phantom{\rule{0ex}{0ex}}⇒{r}^{2}=49\phantom{\rule{0ex}{0ex}}⇒r=\sqrt{49}\phantom{\rule{0ex}{0ex}}⇒r=7$
Now,
Circumference of the circle$=2\mathrm{\pi r}$
#### Question 4:
The difference between the circumference and radius of a circle is 37 cm. The area of the circle is
(a) 111 cm2
(b) 184 cm2
(c) 154 cm2
(d) 259 cm2
(c) 154 cm2
Let the radius be r cm.
We know:
Circumference of the circle$=2\mathrm{\pi r}$
Thus, we have:
Radius = 7 cm
Now,
Area of the circle$=\mathrm{\pi }{r}^{2}$
#### Question 5:
The circumferences of two circles are in the ratio 2 : 3. The ratio between their areas is
(a) 2 : 3
(b) 4 : 9
(c) 9 : 4
(d) none of these
(b) 4:9
Let ${r}_{1}$ cm and ${r}_{2}$ cm be the radii of the two circles.
Thus, we have:
Perimeter of the first circle$=2\mathrm{\pi }{r}_{1}\phantom{\rule{0ex}{0ex}}$
And,
Perimeter of the second circle$=2\mathrm{\pi }{r}_{2}$
Now,
$\frac{2\mathrm{\pi }{r}_{1}}{2\mathrm{\pi }{r}_{2}}=\frac{2}{3}\phantom{\rule{0ex}{0ex}}⇒\frac{{r}_{1}}{{r}_{2}}=\frac{2}{3}$
Also,
Area of the first circle$=\mathrm{\pi }{{r}_{1}}^{2}$
And,
Area of the second circle$=\mathrm{\pi }{{r}_{2}}^{2}$
Thus, we have:
$\frac{\mathrm{\pi }{{r}_{1}}^{2}}{\mathrm{\pi }{{r}_{2}}^{2}}=\frac{{{r}_{1}}^{2}}{{{r}_{2}}^{2}}$
Hence, the ratio of the areas of the two circles is 4:9.
#### Question 6:
On increasing the diameter of a circle by 40%, its area will be increased by
(a) 40%
(b) 80%
(c) 96%
(d) 82%
(c) 96%
Let d be the original diameter.
Radius$=\frac{d}{2}$
Thus, we have:
Original area$=\mathrm{\pi }×{\left(\frac{d}{2}\right)}^{2}$
$=\frac{\mathrm{\pi }{d}^{2}}{4}$
New diameter
$=\left(\frac{140}{100}×d\right)\phantom{\rule{0ex}{0ex}}=\frac{7d}{5}$
Now,
New radius$=\frac{7d}{5×2}$
$=\frac{7d}{10}$
New area$=\mathrm{\pi }×{\left(\frac{7d}{10}\right)}^{2}$
$=\frac{49\mathrm{\pi }{d}^{2}}{100}$
Increase in the area$=\left(\frac{49\mathrm{\pi }{d}^{2}}{10}-\frac{\mathrm{\pi }{d}^{2}}{4}\right)$
$=\frac{24\mathrm{\pi }{d}^{2}}{100}\phantom{\rule{0ex}{0ex}}=\frac{6\mathrm{\pi }{d}^{2}}{25}$
We have:
Increase in the area$=\left(\frac{6\mathrm{\pi }{d}^{2}}{25}×\frac{4}{\mathrm{\pi }{d}^{2}}×100\right)%$
= 96%
#### Question 7:
On decreasing the radius of a circle by 30%, its area is decreased by
(a) 30%
(b) 60%
(c) 45%
(d) none of these
(d) None of these
Let r be the original radius.
Thus, we have:
Original area$=\mathrm{\pi }{r}^{2}$
Also,
New radius$=\left(\frac{70}{100}×r\right)\phantom{\rule{0ex}{0ex}}=\frac{7r}{10}$
New area$=\mathrm{\pi }×{\left(\frac{7r}{10}\right)}^{2}$
$=\frac{49\mathrm{\pi }{r}^{2}}{100}$
Decrease in the area$=\left(\mathrm{\pi }{r}^{2}-\frac{49\mathrm{\pi }{r}^{2}}{100}\right)$
$=\frac{51\mathrm{\pi }{r}^{2}}{100}$
Thus, we have:
Decrease in the area$=\left(\frac{59\mathrm{\pi }{r}^{2}}{100}×\frac{1}{\mathrm{\pi }{r}^{2}}×100\right)%$
=51%
#### Question 8:
The area of a square is the same as the area of a circle. Their perimeters are in the ratio
(a) 1 : 1
(b) 2 : π
(c) π : 2
(d) $\sqrt{\mathrm{\pi }}:2$
(d) $\sqrt{\mathrm{\pi }}:2$
Let a be the side of the square.
We know:
Area of a square$={a}^{2}$
Let r be the radius of the circle.
We know:
Area of a circle$=\mathrm{\pi }{r}^{2}$
Because the area of the square is the same as the area of the circle, we have:
${a}^{2}=\mathrm{\pi }{r}^{2}\phantom{\rule{0ex}{0ex}}⇒\frac{{r}^{2}}{{a}^{2}}=\frac{1}{\mathrm{\pi }}\phantom{\rule{0ex}{0ex}}⇒\frac{r}{a}=\frac{1}{\sqrt{\mathrm{\pi }}}$
∴ Ratio of their perimeters
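Taking the ratio as (circumference of the circle) : (perimeter of the square), the computation runs:

```latex
\frac{r}{a} = \frac{1}{\sqrt{\pi}}
\frac{\text{Circumference of the circle}}{\text{Perimeter of the square}}
  = \frac{2\pi r}{4a} = \frac{\pi}{2}\cdot\frac{r}{a}
  = \frac{\pi}{2}\cdot\frac{1}{\sqrt{\pi}} = \frac{\sqrt{\pi}}{2},
\quad\text{i.e. } \sqrt{\pi} : 2
```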
#### Question 9:
The areas of two circles are in the ratio 4 : 9. The ratio of their circumferences is
(a) 2 : 3
(b) 3 : 2
(c) 4 : 9
(d) 9 : 4
(a) 2:3
Let ${r}_{1}$ and ${r}_{2}$ be the radii of the two circles.
Now,
Area of the first circle$=\mathrm{\pi }{{r}_{1}}^{2}$
And,
Area of the second circle$=\mathrm{\pi }{{r}_{2}}^{2}$
Thus, we have:
$\frac{\mathrm{\pi }{{r}_{1}}^{2}}{\mathrm{\pi }{{r}_{2}}^{2}}=\frac{4}{9}\phantom{\rule{0ex}{0ex}}⇒\frac{{{r}_{1}}^{2}}{{{r}_{2}}^{2}}=\frac{4}{9}\phantom{\rule{0ex}{0ex}}⇒\frac{{r}_{1}}{{r}_{2}}=\frac{2}{3}$
Also,
Perimeter of the first circle$=2\mathrm{\pi }{r}_{1}\phantom{\rule{0ex}{0ex}}$
And,
Perimeter of the second circle $=2\mathrm{\pi }{r}_{2}$
Thus, we have:
$\frac{2\mathrm{\pi }{r}_{1}}{2\mathrm{\pi }{r}_{2}}=\frac{{r}_{1}}{{r}_{2}}=\frac{2}{3}$ = 2:3
Hence, the ratio of their circumferences is 2:3.
#### Question 10:
In making 1000 revolutions, a wheel covers 88 km. The diameter of the wheel is
(a) 14 m
(b) 24 m
(c) 28 m
(d) 40 m
(c) 28 m
Distance covered by the wheel in 1 revolution $=\frac{88×1000}{1000}$ m
= 88 m
We have:
Circumference of the wheel = 88 m
Now, let the diameter of the wheel be d m.
Thus, we have:
$\mathrm{\pi }d=88⇒d=88×\frac{7}{22}=28$ m
#### Question 11:
The diameter of a wheel is 40 cm. How many revolutions will it make in covering 176 m?
(a) 140
(b) 150
(c) 160
(d) 166
(a) 140
Distance covered by the wheel in 1 revolution$=\mathrm{\pi }d$
Number of revolutions required to cover 176 m $=\left(\frac{176}{\frac{880}{7×100}}\right)$
$=\left(176×100×\frac{7}{880}\right)$
=140
#### Question 12:
The radius of a wheel is 0.25 m. How many revolutions will it make in covering 11 km?
(a) 2800
(b) 4000
(c) 5500
(d) 7000
(d) 7000
Distance covered in 1 revolution$=2\mathrm{\pi }r$
Number of revolutions taken to cover 11 km$=\left(11×1000×\frac{7}{11}\right)$
= 7000
#### Question 13:
The circumference of a circle is equal to the sum of the circumference of two circles having diameters 36 cm and 20 cm. The radius of the new circle is
(a) 16 cm
(b) 28 cm
(c) 42 cm
(d) 56 cm
(b) 28 cm
Let r cm be the radius of the new circle.
We know:
Circumference of the new circle = Circumference of the circle with diameter 36 cm + Circumference of the circle with diameter 20 cm
Thus, we have:
$2\pi r=2\pi {r}_{1}+2\pi {r}_{2}$
$⇒2\pi r=\left(2\pi ×18\right)+\left(2\pi ×10\right)\phantom{\rule{0ex}{0ex}}⇒2\pi r=2\pi ×\left(18+10\right)\phantom{\rule{0ex}{0ex}}⇒2\pi r=2\pi ×28\phantom{\rule{0ex}{0ex}}⇒r=28$
∴ Radius of the new circle = 28 cm
#### Question 14:
The area of circle is equal to the sum of the areas of two circles of radii 24 cm and 7 cm. The diameter of the new circle is
(a) 25 cm
(b) 31 cm
(c) 50 cm
(d) 62 cm
(c) 50 cm
Let r cm be the radius of the new circle.
Now,
Area of the new circle = Area of the circle with radius 24 cm + Area of the circle with radius 7 cm
Thus, we have:
$\pi {r}^{2}=\pi {{r}_{1}}^{2}+\pi {{r}_{2}}^{2}\phantom{\rule{0ex}{0ex}}⇒{r}^{2}={24}^{2}+{7}^{2}=576+49=625\phantom{\rule{0ex}{0ex}}⇒r=25$
∴ Diameter of the new circle = 2 × 25 cm
= 50 cm
#### Question 15:
If the sum of the areas of two circles with radii R1 and R2 is equal to the area of a circle of radius R, then
(a) ${R}_{1}+{R}_{2}=R$
(b) ${R}_{1}+{R}_{2}<R$
(c) ${R}_{1}^{2}+{R}_{2}^{2}<{R}^{2}$
(d) ${R}_{1}^{2}+{R}_{2}^{2}={R}^{2}$
(d) ${R}_{1}^{2}+{R}_{2}^{2}={R}^{2}$
Because the sum of the areas of two circles with radii ${R}_{1}$ and ${R}_{2}$ is equal to the area of a circle with radius R, we have:
$\mathrm{\pi }{{R}_{1}}^{2}+\mathrm{\pi }{{R}_{2}}^{2}=\mathrm{\pi }{R}^{2}\phantom{\rule{0ex}{0ex}}⇒\mathrm{\pi }\left({{\mathrm{R}}_{1}}^{2}+{{\mathrm{R}}_{2}}^{2}\right)=\mathrm{\pi }{\mathrm{R}}^{2}\phantom{\rule{0ex}{0ex}}⇒{{\mathrm{R}}_{1}}^{2}+{{\mathrm{R}}_{2}}^{2}={\mathrm{R}}^{2}$
#### Question 16:
If the sum of the circumferences of two circles with radii R1 and R2 is equal to the circumference of a circle of radius R, then
(a) ${R}_{1}+{R}_{2}=R$
(b) ${R}_{1}+{R}_{2}>R$
(c) ${R}_{1}+{R}_{2}<R$
(d) none of these
(a) ${R}_{1}+{R}_{2}=R$
Because the sum of the circumferences of two circles with radii ${R}_{1}$ and ${R}_{2}$ is equal to the circumference of a circle with radius R, we have:
$2\mathrm{\pi }{R}_{1}+2\mathrm{\pi }{R}_{2}=2\mathrm{\pi }R\phantom{\rule{0ex}{0ex}}⇒2\mathrm{\pi }\left({R}_{1}+{R}_{2}\right)=2\mathrm{\pi }R\phantom{\rule{0ex}{0ex}}⇒{R}_{1}+{R}_{2}=R$
#### Question 17:
If the perimeter of a square is equal to the circumference of a circle, then the ratio of their areas is.
(a) 14 : 11
(b) 11 : 14
(c) 22 : 7
(d) 7 : 22
(b) 11:14
Let P be the perimeter of the square.
Now,
Each side of the square$=\frac{P}{4}$
Let r be the radius of the circle.
We know:
Circumference of the circle$=2\mathrm{\pi }r$
Now,
$2\mathrm{\pi }r=P\phantom{\rule{0ex}{0ex}}⇒r=\frac{P}{4\mathrm{\pi }}$
∴ Area of the square$={\left(\frac{P}{4}\right)}^{2}$
$=\frac{{P}^{2}}{16}$
Also,
Area of the circle $=\mathrm{\pi }{r}^{2}$
$=\mathrm{\pi }×{\left(\frac{P}{2\mathrm{\pi }}\right)}^{2}\phantom{\rule{0ex}{0ex}}=\mathrm{\pi }×\frac{{P}^{\mathit{2}}}{4{\mathrm{\pi }}^{2}}\phantom{\rule{0ex}{0ex}}=\frac{{P}^{\mathit{2}}}{4\mathrm{\pi }}$
∴ Required ratio$=\frac{{P}^{\mathit{2}}}{16}×\frac{4\mathrm{\pi }}{{P}^{\mathit{2}}}$
$=\frac{\mathrm{\pi }}{4}\phantom{\rule{0ex}{0ex}}=\left(\frac{22}{7×4}\right)\phantom{\rule{0ex}{0ex}}=\frac{11}{14}$
=11:14
#### Question 18:
If the circumference of a circle and the perimeter of a square are equal, then
(a) area of the circle = area of the square
(b) (area of the circle) > (area of the square)
(c) (area of the circle) < (area of the square)
(d) none of these
(b) Area of the circle > Area of the square
Let r be the radius of the circle.
We know:
Circumference of the circle$=2\mathrm{\pi }r\phantom{\rule{0ex}{0ex}}$
Now,
Let a be the side of the square.
We know:
Perimeter of the square = 4a
Now,
$2\mathrm{\pi }r=4a\phantom{\rule{0ex}{0ex}}⇒r=\frac{4a}{2\mathrm{\pi }}$
∴ Area of the circle$=\mathrm{\pi }{r}^{2}$
$=\mathrm{\pi }×{\left(\frac{4a}{2\mathrm{\pi }}\right)}^{2}\phantom{\rule{0ex}{0ex}}=\mathrm{\pi }×\frac{16{a}^{\mathit{2}}}{4{\mathrm{\pi }}^{2}}\phantom{\rule{0ex}{0ex}}=\frac{4{a}^{2}}{\mathrm{\pi }}\phantom{\rule{0ex}{0ex}}=\frac{4×7{a}^{2}}{22}\phantom{\rule{0ex}{0ex}}=\frac{14{a}^{2}}{11}$
Also,
Area of the square$={a}^{2}$
Clearly, $\frac{14{a}^{2}}{11}>{a}^{2}$.
∴ Area of the circle > Area of the square
#### Question 19:
The area of the sector of a circle of radius R making a central angle of x° is
(a) $\frac{x}{180}×2\mathrm{\pi }R$
(b) $\frac{x}{360}×2\mathrm{\pi }R$
(c) $\frac{x}{180}×\mathrm{\pi }{R}^{2}$
(d) $\frac{x}{360}×\mathrm{\pi }{R}^{2}$
(d) $\frac{x}{360}×\mathrm{\pi }{R}^{2}$
#### Question 20:
The length of an arc of the sector of a circle of radius R making a central angle of x° is
(a) $\frac{2\mathrm{\pi }Rx}{180}$
(b) $\frac{2\mathrm{\pi }Rx}{360}$
(c) $\frac{\mathrm{\pi }{R}^{2}x}{180}$
(d) $\frac{\mathrm{\pi }{R}^{2}x}{360}$
(b) $\frac{2\mathrm{\pi }Rx}{360}$
#### Question 21:
A chord of a circle of radius 28 cm subtends an angle of 45° at the centre of the circle. The area of the minor segment is
(a) 30.256 cm2
(b) 30.356 cm2
(c) 30.456 cm2
(d) 30.856 cm2
(d) 30.856 cm2
Let r be the radius of the circle and $\theta$ be the central angle.
∴ Area of the minor segment $=\left(\frac{\mathrm{\pi }{r}^{2}\theta }{360}-\frac{1}{2}{r}^{2}\mathrm{sin}\theta \right)$ cm²
#### Question 22:
A chord of a circle subtends an angle of 60° at the centre of the circle. If the length of the chord is 10 cm, then the area of the major segment is
(a) 305 cm2
(b) 295 cm2
(c) 310 cm2
(d) 335 cm2
(a) 305 cm2
Let AB be the chord of a circle with centre O.
Now,
OA = OB = AB = 10 cm
Thus, we have:
Area of the minor segment $=\left(\frac{\mathrm{\pi }{r}^{2}\theta }{360}-\frac{1}{2}{r}^{2}\mathrm{sin}\theta \right)=\frac{60}{360}×3.14×100-\frac{\sqrt{3}}{4}×100$ ≈ 52.33 $-$ 43.30 = 9.03 cm² (taking π = 3.14)
Area of the major segment = Area of the circle $-$ Area of the minor segment ≈ 3.14 × 100 $-$ 9.03 = 304.97 ≈ 305 cm²
#### Question 23:
The perimeter of a sector of a circle with central angle 90° is 25 cm. The area of the minor segment of the circle is
(a) 14 cm2
(b) 16 cm2
(c) 18 cm2
(d) 24 cm2
(a) 14 cm2
Let r be the radius of the circle and $\theta$ be the angle.
Now,
Perimeter of the sector$=\left(2r+\frac{2\mathrm{\pi }r\theta }{360}\right)$
$=2r+2×\frac{22}{7}×r×\frac{90}{360}\phantom{\rule{0ex}{0ex}}=\left(2r+\frac{11r}{7}\right)\phantom{\rule{0ex}{0ex}}=\frac{25r}{7}$
Also,
Area of the minor segment$=\left(\frac{\mathrm{\pi }{r}^{2}\theta }{360}-\frac{1}{2}{r}^{2}\mathrm{sin}\theta \right)$ cm2
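A quick numeric check of this question (illustrative sketch only): solving 25r/7 = 25 gives r = 7 cm, and the minor-segment formula then yields 14 cm², option (a):

```python
import math

pi = 22 / 7
r = 7  # from perimeter: 2r + 2*pi*r*(90/360) = 25r/7 = 25
assert math.isclose(2*r + 2*pi*r*90/360, 25)

# Minor segment for a 90-degree chord of radius 7 cm.
minor_segment = pi*r**2*90/360 - 0.5*r**2*math.sin(math.radians(90))
```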
#### Question 24:
The radii of two concentric circles are 19 cm and 16 cm respectively. The area of the ring enclosed by these circles is
(a) 320 cm2
(b) 330 cm2
(c) 332 cm2
(d) 340 cm2
(b) 330 cm2
Let:
R = 19 cm and r = 16 cm
Thus, we have:
Area of the ring$=\mathrm{\pi }\left({R}^{2}-{r}^{2}\right)$
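Numerically (an illustrative check, not part of the textbook solution): π(R² − r²) = (22/7)(361 − 256) = 330 cm², option (b).

```python
R, r = 19, 16
# Area of the ring with pi taken as 22/7.
ring_area = 22 * (R**2 - r**2) / 7
```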
#### Question 25:
The areas of two concentric circles are 1386 cm2 and 962.5 cm2. The width of the ring is
(a) 2.8 cm
(b) 3.5 cm
(c) 4.2 cm
(d) 3.8 cm
(b) 3.5 cm
Let r cm and R cm be the radii of two concentric circles.
Thus, we have:
${\mathrm{\pi R}}^{2}=1386$
Also,
∴ Width of the ring$=\left(R-r\right)$
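The width can be verified numerically (illustrative sketch, not part of the textbook solution):

```python
import math

pi = 22 / 7
R = math.sqrt(1386 / pi)    # outer radius: 21 cm
r = math.sqrt(962.5 / pi)   # inner radius: 17.5 cm
width = R - r               # 3.5 cm, option (b)
```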
#### Question 26:
Match the following columns:
| Column I | Column II |
| --- | --- |
| (a) The circumference of a circle is 44 cm. The area of this circle is ......... cm2. | (p) 1936 |
| (b) A wire is looped in the form of a circle of radius 28 cm. It is bent into a square. The area of the square is ....... cm2. | (q) 10 |
| (c) The radii of two circles are 9 cm and 19 cm respectively. The radius of the circle whose circumference is equal to the sum of the circumferences of the given circles is ......... cm. | (r) 154 |
| (d) The radii of two circles are 8 cm and 6 cm respectively. The radius of the circle having its area equal to the sum of the areas of the given circles is ......... cm. | (s) 28 |
(a) Let r be the radius of the circle.
Now,
Circumference of the circle$=2\mathrm{\pi }r$
We have:
Also,
Area of the circle$=\mathrm{\pi }{\mathrm{r}}^{2}$
∴ $\left(\mathrm{a}\right)⇒\left(\mathrm{r}\right)$
(b) Let r be the radius of the circle.
Length of the circle = Circumference of the circle
Perimeter of the square = Length of the wire
∴ Side of the square
= 44 cm
Area of the square
$\left(b\right)⇒\left(p\right)$
(c) Let r be the radius of the circle whose circumference is equal to the sum of the circumferences of the given circles.
Thus, we have:
∴ $\left(c\right)⇒\left(s\right)$
(d) Let r be the radius of the circle with area equal to the sum of the areas of the given circles.
Thus, we have:
$\left(d\right)⇒\left(q\right)$
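All four matchings can be verified numerically (an illustrative sketch, not part of the textbook):

```python
import math

pi = 22 / 7
# (a) circumference 44 cm -> area 154 cm^2, match (r)
r_a = 44 / (2 * pi)
area_a = pi * r_a**2
# (b) circle of radius 28 cm re-bent into a square -> area 1936 cm^2, match (p)
side_b = 2 * pi * 28 / 4
area_b = side_b**2
# (c) circumference equal to the sum for radii 9 and 19 -> radius 28 cm, match (s)
r_c = 9 + 19
# (d) area equal to the sum for radii 8 and 6 -> radius 10 cm, match (q)
r_d = math.sqrt(8**2 + 6**2)
```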
#### Question 27:
Match the following columns:
| Column I | Column II |
| --- | --- |
| (a) In a circle of radius 6 cm, the angle of a sector is 60°. The area of the sector is .......... cm2. | (p) $\frac{25\mathrm{\pi }}{6}$ |
| (b) In a circle with centre O and radius 5 cm, AB is a chord of length $5\sqrt{3}$ cm. The area of sector OAB is ......... cm2. | (q) 44.8 |
| (c) A chord of a circle of radius 14 cm subtends a right angle at the centre. The area of the sector is ........ cm2. | (r) 154 |
| (d) The perimeter of the sector of a circle of radius 5.6 cm is 27.2 cm. The area of the sector is .......... cm2. | (s) $18\frac{6}{7}$ |
(a) Area of the sector$=\frac{\mathrm{\pi }{r}^{2}\theta }{360}$
∴ $\left(a\right)⇒\left(s\right)$
(b) Draw OD such that $OD\perp AB$.
Now,
$DB=\frac{1}{2}AB$
From the right $∆ODB$, we have:
∴ Area of $∆AOB=\frac{1}{2}×AB×OD$
Also,
Area of
We have:
Area of the sector OAPB $=\frac{\mathrm{\pi }{r}^{2}\theta }{360}$
$\left(b\right)⇒\left(p\right)$
(c) Area of the sector$=\frac{\mathrm{\pi }{r}^{2}\theta }{360}$
∴ $\left(c\right)⇒\left(r\right)$
(d) Let O be the centre of the circle of radius 5.6 cm and OACB be its sector with perimeter 27.2 cm.
Now,
Area of the sector OACBO
∴ $\left(\mathrm{d}\right)⇒\left(\mathrm{q}\right)$
#### Question 28:
Match the following columns:
| Column I | Column II |
| --- | --- |
| (a) If the perimeter of a semicircular protractor is 66 cm, then its radius is ......... cm. | (p) 35 |
| (b) Each wheel of a car makes 450 complete revolutions in covering 0.99 km. The radius of each wheel is ........ cm. | (q) 32 |
| (c) A bicycle wheel makes 5000 revolutions in covering 11 km. The diameter of the wheel is ......... cm. | (r) $12\frac{5}{6}$ |
| (d) The given figure is a sector of a circle of radius 10.5 cm. The perimeter of the sector is ............ cm. | (s) 70 |
(a)
Let the radius of the protractor be r cm.
Then, perimeter$=\left(\mathrm{\pi }r+2r\right)$
Therefore,
Hence, $\left(\mathrm{a}\right)⇒\left(r\right)$
(b)
Distance covered in 1 revolution $=\left(\frac{0.99×1000}{450}\right)$ m = 2.2 m
Let r be the radius of the wheel. Then,
Circumference of the wheel $=2\mathrm{\pi }r$
Hence, $\left(\mathrm{b}\right)⇒\left(\mathrm{p}\right)$
(c)
Distance covered in 1 revolution $=\left(\frac{11×1000}{5000}\right)$ m = 2.2 m
Let the diameter of the wheel be d m.
Then,
Hence, $\left(c\right)⇒\left(s\right)$
(d)
Let r be the radius of the arc. Then,
Arc length$=\frac{2\mathrm{\pi }r\theta }{360}$
Therefore, Perimeter = OA+OB+arc AB
= (10.5 + 10.5 +arc AB) cm
= 32 cm
Hence, $\left(\mathrm{d}\right)⇒\left(\mathrm{q}\right)$
#### Question 29:
Assertion (A)
The area of the quadrant of a circle having a circumference of 22 cm is $9\frac{5}{8}{\mathrm{cm}}^{2}.$
Reason (R)
The area of a sector of a circle of radius r with central angle x° is $\left(\frac{x×\mathrm{\pi }{r}^{2}}{360}\right).$
(a) Both Assertion (A) and Reason (R) are true and Reason (R) is a correct explanation of Assertion (A).
(b) Both Assertion (A) Reason (R) true but Reason (R) is not a correct explanation of Assertion (A).
(c) Assertion (A) is true and Reason (R) is false.
(d) Assertion (A) is false and Reason (R) is true.
(a) Both assertion (A) and reason (R) are true and reason (R) is the correct explanation of assertion (A).
Assertion (A):
Let r be the radius of the circle.
Now,
Circumference of the circle $=2\mathrm{\pi }r\phantom{\rule{0ex}{0ex}}\phantom{\rule{0ex}{0ex}}$
We have:
Area of the quadrant$=\frac{90°}{360°}\mathrm{\pi }{r}^{2}$
Hence, assertion (A) is true.
Reason (R):
The given statement is true.
Assertion (A) is true and reason (R) is the correct explanation of assertion (A).
#### Question 30:
Assertion (A)
An arc of a circle of length 5π cm bounds a sector whose area is 20π cm2. Then, the radius of the circle is 4 cm.
Reason (R)
A chord of a circle of radius 12 cm subtends an angle of 60° at the centre of the circle. The area of the minor segment of the circle is 13.08 cm2.
(a) Both Assertion (A) and Reason (R) are true and Reason (R) is a correct explanation of Assertion (A).
(b) Both Assertion (A) Reason (R) true but Reason (R) is not a correct explanation of Assertion (A).
(c) Assertion (A) is true and Reason (R) is false.
(d) Assertion (A) is false and Reason (R) is true.
(d) Assertion (A) is false and reason (R) is true.
Assertion (A):
Let r be the radius of the circle.
We have:
Arc length$=\frac{\mathrm{\pi }r\theta }{180}$
Now,
Area of the sector$=\frac{{\mathrm{\pi r}}^{2}\mathrm{\theta }}{360}$
Thus, we have:
$\frac{\mathrm{\pi }{r}^{2}\theta }{360}=20\mathrm{\pi }⇒\frac{{\mathrm{r}}^{2}\mathrm{\theta }}{360}=20$
$⇒{r}^{2}\mathrm{\theta }=\left(20×360\right)$
Now,
$\frac{{r}^{2}\mathrm{\theta }}{r\mathrm{\theta }}=\frac{\left(20×360\right)}{\left(5×180\right)}$
r = 8 cm
Hence, assertion (A) is false.
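The same conclusion follows from the relation (sector area) = ½ · r · (arc length); an illustrative check:

```python
import math

arc_length = 5 * math.pi
sector_area = 20 * math.pi
# sector area = (1/2) * r * arc length  =>  r = 2 * area / arc
r = 2 * sector_area / arc_length   # 8 cm, not 4 cm, so assertion (A) is false
```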
Reason (R):
Let r be the radius of the circle.
Now,
Area of the minor segment$=\left(\frac{\mathrm{\pi }{r}^{2}\theta }{360}-\frac{1}{2}{r}^{2}\mathrm{sin}\theta \right)$
Hence, reason (R) is true.
#### Question 31:
Assertion (A)
If the circumferences of two circles are in the ratio 2 : 3, then the ratio of their areas is 4 : 9.
Reason (R)
The circumference of a circle of radius r is 2πr.
(a) Both Assertion (A) and Reason (R) are true and Reason (R) is a correct explanation of Assertion (A).
(b) Both Assertion (A) Reason (R) true but Reason (R) is not a correct explanation of Assertion (A).
(c) Assertion (A) is true and Reason (R) is false.
(d) Assertion (A) is false and Reason (R) is true.
(b) Both assertion (A) and reason (R) are true, but reason (R) is not the correct explanation of assertion (A).
Assertion (A):
Let ${r}_{1}$ and ${r}_{2}$ be the radii of the two circles.
Now,
Circumference of the first circle$=2\mathrm{\pi }{r}_{1}\phantom{\rule{0ex}{0ex}}$
Circumference of the second circle$=2\mathrm{\pi }{r}_{2}$
Thus, we have:
$\frac{2\mathrm{\pi }{r}_{1}}{2\mathrm{\pi }{r}_{2}}=\frac{2}{3}\phantom{\rule{0ex}{0ex}}⇒\frac{{r}_{1}}{{r}_{2}}=\frac{2}{3}$
Also,
Area of the first circle$=\mathrm{\pi }{{r}_{1}}^{2}$
Area of the second circle$=\mathrm{\pi }{{r}_{2}}^{2}$
Thus, we have:
$\frac{\mathrm{\pi }{{r}_{1}}^{2}}{\mathrm{\pi }{{r}_{2}}^{2}}=\frac{{{r}_{1}}^{2}}{{{r}_{2}}^{2}}={\left(\frac{2}{3}\right)}^{2}=\frac{4}{9}$
Hence, the ratio of their areas is 4:9.
Hence, assertion (A) is true.
Reason (R):
The given statement is true.
Hence, both assertion (A) and reason (R) are true, but reason (R) is not the correct explanation of assertion (A).
#### Question 1:
In the given figure, a square OABC has been inscribed in the quadrant OPBQ. If OA = 20 cm, then the area of the shaded region is
(a) 214 cm2
(b) 228 cm2
(c) 242 cm2
(d) 248 cm2
(b) 228 cm2
Join OB
Now, OB is the radius of the circle.
Hence, the radius of the circle is $20\sqrt{2}$ cm.
Now,
Area of the shaded region = Area of the quadrant $-$ Area of the square OABC
#### Question 2:
The diameter of a wheel is 84 cm. How many revolutions will it make to cover 792 m?
(a) 200
(b) 250
(c) 300
(d) 350
(c) 300
Let d cm be the diameter of the wheel.
We know:
Circumference of the wheel$=\mathrm{\pi }×d$
Now,
Number of revolutions to cover 792 m$=\left(\frac{792×1000}{264}\right)$
=300
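An illustrative numeric check of the revolution count:

```python
pi = 22 / 7
circumference_cm = pi * 84                    # 264 cm per revolution
revolutions = 792 * 100 / circumference_cm    # 792 m = 79200 cm
```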
#### Question 3:
The area of a sector of a circle with radius r, making an angle of x° at the centre, is
(a) $\frac{x}{180}×2\mathrm{\pi }r$
(b) $\frac{x}{180}×\mathrm{\pi }{r}^{2}$
(c) $\frac{x}{360}×2\mathrm{\pi }r$
(d) $\frac{x}{360}×\mathrm{\pi }{r}^{2}$
(d) $\frac{x}{360}×\mathrm{\pi }{r}^{2}$
The area of a sector of a circle with radius r making an angle of $x°$ at the centre is $\frac{x}{360}×\mathrm{\pi }{r}^{2}$.
#### Question 4:
In the given figure, ABCD is a rectangle inscribed in a circle having length 8 cm and breadth 6 cm. If π = 3.14, then the area of the shaded region is
(a) 264 cm2
(b) 266 cm2
(c) 272 cm2
(d) 254 cm2
All options are incorrect; the correct answer is 30.5 cm2.
Join AC.
Now, AC is the diameter of the circle.
∴ Radius of the circle
=5 cm
Now,
Area of the shaded region = Area of the circle with radius 5 cm $-$ Area of the rectangle ABCD
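Numerically (illustrative sketch), the diagonal of the rectangle is the diameter of the circle:

```python
import math

length, breadth = 8, 6
diameter = math.hypot(length, breadth)        # diagonal AC = 10 cm
radius = diameter / 2                         # 5 cm
shaded = 3.14 * radius**2 - length * breadth  # 78.5 - 48 = 30.5 cm^2
```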
#### Question 5:
The circumference of a circle is 22 cm. Find its area.
Let r cm be the radius of the circle.
Now,
Circumference of the circle:
Also,
Area of the circle$=\mathrm{\pi }{r}^{2}$
#### Question 6:
In a circle of radius 21 cm, an arc subtends an angle of 60° at the centre. Find the length of the arc.
Let ACB be the given arc subtending an angle of $60°$ at the centre.
Now, we have:
∴ Length of the arc ACB$=\frac{2\mathrm{\pi }r\theta }{360}$
#### Question 7:
The minute hand of a clock is 12 cm long. Find the area swept by it in 35 minutes.
Angle described by the minute hand in 60 minutes$=360°\phantom{\rule{0ex}{0ex}}\phantom{\rule{0ex}{0ex}}$
Angle described by the minute hand in 35 minutes$=\left(\frac{360}{60}×35\right)°$
$=210°$
Now,
∴ Required area swept by the minute hand in 35 minutes = Area of the sector with $r=12$ cm and $\theta =210°$
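As a check (illustrative sketch, not part of the textbook solution), the sector swept in 35 minutes:

```python
pi = 22 / 7
r = 12
angle_deg = 360 / 60 * 35              # 210 degrees
swept = pi * r**2 * angle_deg / 360    # 264 cm^2
```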
#### Question 8:
The perimeter of a sector of a circle of radius 5.6 cm is 27.2 cm. Find the area of the sector.
Let O be the centre of the circle with radius 5.6 cm and OACB be its sector with perimeter 27.2 cm.
Thus, we have:
Now,
Area of the sector OACBO
#### Question 9:
A chord of a circle of radius 14 cm makes a right angle at the centre. Find the area of the sector.
Let r cm be the radius of the circle and $\theta$ be the angle.
We have:
Area of the sector$=\frac{\mathrm{\pi }{r}^{2}\theta }{360}$
#### Question 10:
In the given figure, the sectors of two concentric circles of radii 7 cm and 3.5 cm are shown. Find the area of the shaded region.
Area of the shaded region = (Area of the sector with $r=7$ cm) $-$ (Area of the sector with $r=3.5$ cm)
#### Question 11:
A wire when bent in the form of an equilateral triangle encloses an area of $121\sqrt{3}{\mathrm{cm}}^{2}$. If the same wire is bent into the form of a circle, what will be the area of the circle?
Let a cm be the side of the equilateral triangle.
Now,
Area of the equilateral triangle$=\frac{\sqrt{3}}{4}{a}^{2}$
We have:
Perimeter of the triangle = Circumference of the circle
Perimeter of the triangle = (22 + 22 + 22) cm
= 66 cm
Now, let r cm be the radius of the circle.
We know:
Circumference of the circle$=2\mathrm{\pi }r$
Also,
Area of the circle$=\mathrm{\pi }{r}^{2}$
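Numerically (illustrative sketch): the triangle's side is 22 cm, so the wire is 66 cm long, giving a circle of radius 10.5 cm and area 346.5 cm².

```python
import math

pi = 22 / 7
a = math.sqrt(121 * math.sqrt(3) / (math.sqrt(3) / 4))  # side of triangle: 22 cm
perimeter = 3 * a                                       # 66 cm of wire
r = perimeter / (2 * pi)                                # radius: 10.5 cm
circle_area = pi * r**2                                 # 346.5 cm^2
```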
#### Question 12:
The wheel of a cart is making 5 revolutions per second. If the diameter of the wheel is 84 cm, find its speed in km per hour.
Distance covered in 1 revolution$=\mathrm{\pi }×d$
Distance covered in 1 second
= 1320 cm
Distance covered in 1 hour
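An illustrative numeric check of the speed conversion:

```python
pi = 22 / 7
d_cm = 84
cm_per_second = 5 * pi * d_cm                 # 5 revolutions/s -> 1320 cm/s
speed_kmph = cm_per_second * 3600 / 100000    # 47.52 km/h
```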
#### Question 13:
OACB is a quadrant of a circle with centre O and its radius is 3.5 cm. If OD = 2 cm. find the area of (i) quadrant OACB (ii) the shaded region.
(i) Area of the quadrant OACB
(ii) Area of the shaded region = Area of the quadrant OACB $-$ Area of $∆AOD$
#### Question 14:
In the given figure, ABCD is a square each of whose sides measures 28 cm. Find the area of the shaded region.
Let r be the radius of the circle.
Thus, we have:
=14 cm
Now,
Area of the shaded region = (Area of the square ABCD) $-$ 4(Area of the sector with $r=14$ cm and $\theta =90°$)
#### Question 15:
In the given figure, an equilateral triangle has been inscribed in a circle of radius 4 cm. Find the area of the shaded region.
Draw $OD\perp BC\phantom{\rule{0ex}{0ex}}$.
Because $∆ABC$ is equilateral, $\angle A=\angle B=\angle C=60°$.
Thus, we have:
Also,
∴ Area of the shaded region = (Area of the circle) $-$ (Area of $∆ABC$)
#### Question 16:
The minute hand of a clock is 12 cm long. Find the area of the face of the clock described by the minute hand in 35 minutes.
Angle described by the minute hand in 60 minutes$=360°\phantom{\rule{0ex}{0ex}}\phantom{\rule{0ex}{0ex}}$
Angle described by the minute hand in 35 minutes$=\left(\frac{360}{60}×35\right)°$
$=210°$
Now,
∴ Required area described by the minute hand in 35 minutes = Area of the sector with $r=12$ cm and $\theta =210°$
#### Question 17:
A racetrack is in the form of a ring whose inner circumference is 352 m and outer circumference is 396 m. Find the width and the area of the track.
Let r m and R m be the inner and outer boundaries, respectively.
Thus, we have:
Width of the track$=\left(R-r\right)$
Area of the track$=\mathrm{\pi }\left({\mathrm{R}}^{2}-{\mathrm{r}}^{2}\right)$
#### Question 18:
A chord of a circle of radius 30 cm makes an angle of 60° at the centre of the circle. Find the area of the minor and major segments.
Let AB be the chord of a circle with centre O and radius 30 cm such that $\angle AOB=60°$.
Area of the sector OACBO $=\frac{\mathrm{\pi }{r}^{2}\theta }{360}$
Area of $∆OAB$
Area of the minor segment = (Area of the sector OACBO$-$ (Area of $∆OAB$)
Area of the major segment = (Area of the circle) $-$ (Area of the minor segment)
#### Question 19:
Four cows are tethered at the four corners of a square field of side 50 m such that each can graze the maximum unshared area. What area will be left ungrazed?
Let r be the radius of the circle.
Thus, we have:
= 25 m
Area left ungrazed = (Area of the square) $-$ 4(Area of the sector with $r=25$ m and $\theta =90°$)
#### Question 20:
A square tank has an area of 1600 m2. There are four semicircular plots around it. Find the cost of turfing the plots at Rs 12.50 per m2.
Let a m be the side of the square.
Area of the square$={a}^{2}\phantom{\rule{0ex}{0ex}}$
Thus, we have:
Area of the plots = 4(Area of the semicircle of radius 20 m)
∴ Cost of turfing the plots at Rs 12.50 per m2
= Rs 31400
https://www.rosettacommons.org/docs/latest/scripting_documentation/RosettaScripts/Movers/movers_pages/FoldTreeFromLoopsMover
## FoldTreeFromLoops
Wrapper for the utility function fold_tree_from_loops. Defines a fold tree based on loop definitions, with the fold tree going up to the loop N-terminus and to the C-terminus, and jumping between them. Cutpoints define the kinematics within the loop.
<FoldTreeFromLoops name="(&string)" loops="(&string)"/>
The format for loops is: Start:End:Cut,Start:End:Cut,...
Either PDB or Rosetta numbering is allowed. The start, end, and cut points are computed at apply time, so they respect loop-length changes.
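For illustration only (this helper is hypothetical, not part of RosettaScripts), the loops string can be parsed as:

```python
def parse_loops(spec):
    """Parse a loops string of the form "Start:End:Cut,Start:End:Cut,...".

    Values are kept as strings because either PDB or Rosetta numbering
    is allowed. Hypothetical helper, not part of RosettaScripts.
    """
    loops = []
    for item in spec.split(","):
        start, end, cut = item.split(":")
        loops.append({"start": start, "end": end, "cut": cut})
    return loops

loops = parse_loops("24:36:30,50:60:55")
```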
https://www.physicsforums.com/threads/gravity-contracts-length.81525/
# Gravity contracts length
Am I missing something here?
We have a circle: $$(x-h)^2+(y-k)^2=r^2$$ where (h,k) is the center and r is the radius. We now spin the circle about an axis that is perpendicular to the plane on which the circle lies and runs through the center of said circle. Gravity contracts length (and by the equivalence principle, so does acceleration), so as the 1-sphere spins about the axis, the distance between any two points on it decreases while the radius stays the same. Since $$\pi=\frac{c}{2r}$$, where c is the circumference and r is the radius, $$\pi$$ is no longer a constant. The circle shrinks, but the radius stays the same. What is going on? Does the circle turn into a cone?
I just found out that the equivalence principle doesn't apply here.
#### Crosson
You have stumbled upon a fantastic paradox of special relativity.
As it appears in the history books, this is the very same case that led Einstein to consider non-Euclidean geometries in the physical universe.
https://kr.mathworks.com/help/mpc/ug/specify-cost-function-for-nonlinear-mpc.html
## Specify Cost Function for Nonlinear MPC
While traditional linear MPC controllers optimize control actions to minimize a quadratic cost function, nonlinear MPC controllers support generic custom cost functions. For example, you can specify your cost function as a combination of linear or nonlinear functions of the system states and inputs. To improve computational efficiency, you can also specify an analytical Jacobian for your custom cost function.
Using a custom cost function, you can, for example:
• Maximize profitability
• Minimize energy consumption
When you specify a custom cost function for your nonlinear MPC controller, you can choose to either replace or augment the standard quadratic MPC cost function. By default, an nlmpc controller replaces the standard cost function with your custom cost function. In this case, the controller ignores the standard tuning weights in its Weights property.
To use an objective function that is the sum of the standard costs and your custom costs, set the Optimization.ReplaceStandardCost property of your nlmpc object to false. In this case, the standard tuning weights specified in the Weights property of the controller contribute to the cost function. However, you can eliminate any of the standard cost function terms by setting the corresponding penalty weight to zero. For more information on the standard MPC cost function, see Standard Cost Function.
Before simulating your controller, it is best practice to validate your custom functions, including the cost function and its Jacobian, using the validateFcns command.
### Custom Cost Function
To configure your nonlinear MPC controller to use a custom cost function, set its Optimization.CustomCostFcn property to one of the following.
• Name of a function in the current working folder or on the MATLAB® path, specified as a string or character vector
Optimization.CustomCostFcn = "myCostFunction";
• Handle to a function in the current working folder or on the MATLAB path
Optimization.CustomCostFcn = @myCostFunction;
• Anonymous function
Optimization.CustomCostFcn = @(X,U,e,data,params) myCostFunction(X,U,e,data,params);
Your custom cost function must have one of the following signatures.
• If your controller does not use optional parameters:
function J = myCostFunction(X,U,e,data)
• If your controller uses parameters, where params is a comma-separated list of parameters:
function J = myCostFunction(X,U,e,data,params)
This table describes the inputs and outputs of this function, where:
• Nx is the number of states and is equal to the Dimensions.NumberOfStates property of the controller.
• Nu is the number of inputs, including all manipulated variables, measured disturbances, and unmeasured disturbances, and is equal to the Dimensions.NumberOfInputs property of the controller.
• p is the prediction horizon.
• k is the current time.
**X** (input) — State trajectory from time k to time k+p, specified as a (p+1)-by-Nx array. The first row of X contains the current state values, which means that the solver does not use the values in X(1,:) as decision variables during optimization.

**U** (input) — Input trajectory from time k to time k+p, specified as a (p+1)-by-Nu array. The final row of U is always a duplicate of the preceding row; that is, U(end,:) = U(end-1,:). Therefore, the values in the final row of U are not independent decision variables during optimization.

**e** (input) — Slack variable for constraint softening, specified as a nonnegative scalar. e is zero if there are no soft constraints in your controller. If you have nonlinear soft constraints defined in your inequality constraint function (Model.CustomIneqConFcn), use a positive penalty weight on e and make them part of the cost function.

**data** (input) — Additional signals, specified as a structure with the following fields:

| Field | Description |
| --- | --- |
| Ts | Prediction model sample time, as defined in the Ts property of the controller |
| CurrentStates | Current prediction model states, as specified in the x input argument of nlmpcmove |
| LastMV | MV moves used in the previous control interval, as specified in the lastmv input argument of nlmpcmove |
| References | Reference values for plant outputs, as specified in the ref input argument of nlmpcmove |
| MVTarget | Manipulated variable targets, as specified in the MVTarget property of an nlmpcmoveopt object |
| PredictionHorizon | Prediction horizon, as defined in the PredictionHorizon property of the controller |
| NumOfStates | Number of states, as defined in the Dimensions.NumberOfStates property of the controller |
| NumOfOutputs | Number of outputs, as defined in the Dimensions.NumberOfOutputs property of the controller |
| NumOfInputs | Number of inputs, as defined in the Dimensions.NumberOfInputs property of the controller |
| MVIndex | Manipulated variable indices, as defined in the Dimensions.MVIndex property of the controller |
| MDIndex | Measured disturbance indices, as defined in the Dimensions.MDIndex property of the controller |
| UDIndex | Unmeasured disturbance indices, as defined in the Dimensions.UDIndex property of the controller |

**params** (input) — Optional parameters, specified as a comma-separated list (for example p1,p2,p3). The same parameters are passed to the prediction model, custom cost function, and custom constraint functions of the controller. For example, if the state function uses only parameter p1, the constraint functions use only parameter p2, and the cost function uses only parameter p3, then all three parameters are passed to all of these functions. If your model uses optional parameters, you must specify the number of parameters using the Model.NumberOfParameters property of the controller.

**J** (output) — Computed cost, returned as a scalar.
Your custom cost function must:

• Be a continuous, finite function of U, X, and e and have finite first derivatives
• Increase as the slack variable e increases, or be independent of it
To use output variable values in your cost function, you must first derive them from the state and input arguments using the prediction model output function, as specified in the Model.OutputFcn property of the controller. For example, to compute the output trajectory Y from time k to time k+p, use:
p = data.PredictionHorizon;
for i=1:p+1
Y(i,:) = myOutputFunction(X(i,:)',U(i,:)',params)';
end
For more information on the prediction model output function, see Specify Prediction Model for Nonlinear MPC.
Typically, you optimize control actions to minimize the cost function across the prediction horizon. Since the cost function value must be a scalar, you compute the cost function at each prediction horizon step and add the results together. For example, suppose that the stage cost function is:
$J=10{u}_{1}^{2}+5{x}_{2}^{3}+{x}_{1}$
That is, at each prediction step the cost penalizes the square of the first manipulated variable, the cube of the second state, and the first state. To compute the total cost function across the prediction horizon, use:
p = data.PredictionHorizon;
U1 = U(1:p,data.MVIndex(1));
X1 = X(2:p+1,1);
X2 = X(2:p+1,2);
J = 10*sum(sum(U1.^2)) + 5*sum(sum(X2.^3)) + sum(sum(X1));
In general, for cost functions, do not use the following values, since they are not part of the decision variables used by the solver:
• U(end,:) — This row is a duplicate of the preceding row.
• X(1,:) — This row contains the current state values.
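The same horizon sum can be written language-neutrally; the following Python sketch (not the MATLAB API) mirrors the X and U array conventions described above:

```python
def total_cost(X, U, p, mv_index=0):
    # X and U are (p+1)-row nested lists, mirroring the MATLAB arrays.
    # Skip X[0] (current state) and U[p] (duplicate of U[p-1]), since
    # neither is an independent decision variable.
    J = 0.0
    for i in range(1, p + 1):
        u1 = U[i - 1][mv_index]
        x1, x2 = X[i][0], X[i][1]
        J += 10 * u1**2 + 5 * x2**3 + x1
    return J
```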
Since this example cost function is relatively simple, you can specify it using an anonymous function handle. For example, to specify an anonymous function that implements just the first term of the preceding cost function, use:
Optimization.CustomCostFcn = @(X,U,e,data) 10*sum(sum(U(1:end-1,data.MVIndex(1)).^2));
### Cost Function Jacobian
To improve computational efficiency, it is best practice to specify an analytical Jacobian for your custom cost function. If you do not specify a Jacobian, the controller computes the Jacobian using numerical perturbation. To specify a Jacobian for your cost function, set the Jacobian.CustomCostFcn property of the controller to one of the following.
• Name of a function in the current working folder or on the MATLAB path, specified as a string or character vector
Jacobian.CustomCostFcn = "myCostJacobian";
• Handle to a function in the current working folder or on the MATLAB path
Jacobian.CustomCostFcn = @myCostJacobian;
• Anonymous function
Jacobian.CustomCostFcn = @(X,U,e,data,params) myCostJacobian(X,U,e,data,params)
Your cost Jacobian function must have one of the following signatures.
• If your controller does not use optional parameters:
function [G,Gmv,Ge] = myCostJacobian(X,U,e,data)
• If your controller uses parameters, where params is a comma-separated list of parameters:
function [G,Gmv,Ge] = myCostJacobian(X,U,e,data,params)
The input arguments of the cost Jacobian function are the same as the inputs of the custom cost function. This table describes the outputs of the Jacobian function, where:
• Nx is the number of states and is equal to the Dimensions.NumberOfStates property of the controller.
• Nmv is the number of manipulated variables.
• p is the prediction horizon.
**G** — Jacobian of the cost function with respect to the state trajectories, returned as a p-by-Nx array, where $\text{G}\left(i,j\right)=\partial \text{J}/\partial \text{X}\left(i+1,j\right)$. Compute G based on X from the second row to row p+1, ignoring the first row.

**Gmv** — Jacobian of the cost function with respect to the manipulated variable trajectories, returned as a p-by-Nmv array, where $\text{Gmv}\left(i,j\right)=\partial \text{J}/\partial \text{U}\left(i,MV\left(j\right)\right)$ and MV(j) is the jth MV index in data.MVIndex. Since the controller forces U(p+1,:) to equal U(p,:), if your cost function uses U(p+1,:), you must include the impact of both U(p,:) and U(p+1,:) in the Jacobian for U(p,:).

**Ge** — Jacobian of the cost function with respect to the slack variable, e, returned as a scalar, where $\text{Ge}=\partial \text{J}/\partial \text{e}$.
To use output variable values and their Jacobians in your cost Jacobian function, you must first derive them from the state and input arguments. To do so, use the Jacobian of the prediction model output function, as specified in the Jacobian.OutputFcn property of the controller. For example, to compute the output variables Y and their Jacobians Yjacob from time k to time k+p, use:
p = data.PredictionHorizon;
for i=1:p+1
Y(i,:) = myOutputFunction(X(i,:)',U(i,:)',params)';
end
for i=1:p+1
Yjacob(i,:) = myOutputJacobian(X(i,:)',U(i,:)',params)';
end
Since prediction model output functions do not support direct feedthrough from inputs to outputs, the output function Jacobian contains partial derivatives with respect to only the states in X. For more information on the output function Jacobian, see Specify Prediction Model for Nonlinear MPC.
To find the Jacobians, compute the partial derivatives of the cost function with respect to the state trajectories, manipulated variable trajectories, and slack variable. For example, suppose that your cost function is as follows, where u1 is the first manipulated variable.
$J=10u_1^2+5x_2^3+x_1$
To compute the Jacobian with respect to the state trajectories, use the following. Recall that you compute G based on X from the second row to row p+1, ignoring the first row.
p = data.PredictionHorizon;
Nx = data.NumOfStates;
U1 = U(1:p,data.MVIndex(1));
X2 = X(2:p+1,2);
G = zeros(p,Nx);
G(1:p,1) = 1;
G(1:p,2) = 15*X2.^2;
To compute the Jacobian with respect to the manipulated variable trajectories, use:
Nmv = length(data.MVIndex);
Gmv = zeros(p,Nmv);
Gmv(1:p,1) = 20*U1;
In this case, the derivative with respect to the slack variable is Ge = 0.
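Although the example above is MATLAB, the hand-derived partial derivatives are easy to sanity-check numerically in any language. The sketch below (Python, purely illustrative and independent of the MPC toolbox) compares the analytic gradients of $J=10u_1^2+5x_2^3+x_1$ against central finite differences:

```python
# Finite-difference check of the hand-derived cost gradients for
# J = 10*u1^2 + 5*x2^3 + x1:
#   dJ/dx1 = 1,  dJ/dx2 = 15*x2^2,  dJ/du1 = 20*u1

def cost(x1, x2, u1):
    return 10.0 * u1 ** 2 + 5.0 * x2 ** 3 + x1

def central_diff(f, v, h=1e-6):
    # symmetric difference quotient, O(h^2) accurate
    return (f(v + h) - f(v - h)) / (2.0 * h)

x1, x2, u1 = 0.4, -1.3, 2.0
g_x1 = 1.0               # analytic dJ/dx1
g_x2 = 15.0 * x2 ** 2    # analytic dJ/dx2
g_u1 = 20.0 * u1         # analytic dJ/du1

assert abs(central_diff(lambda v: cost(v, x2, u1), x1) - g_x1) < 1e-5
assert abs(central_diff(lambda v: cost(x1, v, u1), x2) - g_x2) < 1e-5
assert abs(central_diff(lambda v: cost(x1, x2, v), u1) - g_u1) < 1e-5
print("analytic gradients match finite differences")
```

The same check extends to the full p-by-Nx arrays by looping over the prediction steps.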
https://hal.inria.fr/hal-01672800
Journal article, SIAM Journal on Applied Mathematics, 2019
## On the Asymptotic Distribution of Nucleation Times of Polymerization Processes
Philippe Robert, Wen Sun
#### Abstract
In this paper, we investigate a stochastic model describing the time evolution of a polymerization process. A polymer is a macromolecule resulting from the aggregation of several elementary subunits called monomers. Polymers can grow by addition of monomers or can be split into several polymers. The initial state of the system consists of isolated monomers. We study the *lag time* of the polymerization process, that is, the first instant when the fraction of monomers used in polymers is above some threshold. The mathematical model includes *a nucleation property*: if $n_c$ is defined as the size of the nucleus, polymers with a size smaller than $n_c$ are quickly fragmented into smaller polymers. For polymers of size greater than $n_c$, the fragmentation still occurs but at a smaller rate. A scaling approach is used, taking the volume $N$ of the system as a scaling parameter. If $n_c \ge 3$ and under quite general assumptions on the way polymers are fragmented, if $T^N$ is the instant of creation of the first "stable" polymer, i.e. a polymer of size $n_c$, then it is proved that $(T^N/N^{n_c-3})$ converges in distribution. We also show that, if $n_c \ge 4$, the lag time has the same order of magnitude as $T^N$ and, if $n_c = 3$, it is of the order of $\log N$. An original feature proved for this model is the significant variability of $T^N$. This is a well-known phenomenon observed in experiments in biology, but the mathematical models used up to now did not exhibit this magnitude of variability. The results are proved via a series of technical estimates for occupation measures on fast time scales. Stochastic calculus with Poisson processes, coupling arguments and branching processes are the main ingredients of the proofs.
#### Domains
Mathematics [math] Probability [math.PR]
### Dates and versions
hal-01672800 , version 1 (27-12-2017)
### Cite
Philippe Robert, Wen Sun. On the Asymptotic Distribution of Nucleation Times of Polymerization Processes. SIAM Journal on Applied Mathematics, 2019, 79 (5), pp.27. ⟨10.1137/19M1237508⟩. ⟨hal-01672800⟩
https://www.physicsforums.com/threads/r-l-circuit-simple.226114/
# R-L Circuit - Simple
1. Apr 2, 2008
### ttiger2k7
[SOLVED] R-L Circuit - Simple
1. The problem statement, all variables and given/known data
In the figure below, suppose that the switch is initially open, and at time t=0, the switch is closed. Let t1=aL/R be the time that the current through the inductor L is 70.0 percent of its value when t is infinity. Find the dimensionless number a.
*(circuit diagram image missing: broken link)*
2. Relevant equations
$$i=I_{0}e^{-(R/L)t}$$
3. The attempt at a solution
I tried and no luck:
Given the problem, when $t$ is 0 the current is initially 0. So $I_{0}$ is 0.
And I want to find $$.7i$$ (70% of i) and I already know that t1=aL/R
So plugging all the info in:
$$.7i=0*e^{-(R/L)(aL/R)}$$
$$.7i=0$$
$$i=0$$
This is obviously wrong, and makes no sense to me. Help would be appreciated.
Last edited by a moderator: Apr 23, 2017 at 11:59 AM
2. Apr 2, 2008
### a-lbi
Your equation is wrong: at infinity the current is not 0.
Notice that at infinity the voltage across the inductor is 0. Hence you can calculate the current at infinity.
Then you should construct the right equation for the current and solve the problem (of course, you could solve the differential equation instead). The one thing that is right is that the current will be exponential-like, with time constant L/R.
3. Apr 2, 2008
### ttiger2k7
Okay, so is this right for finding current @ infinity?
Since voltage on inductor is 0,
$$L\frac{di}{dt}=0$$
So, using Kirchhoff's loop rule:
$$\epsilon-L\frac{di}{dt}-iR=0$$
$$\epsilon-0-iR=0$$
$$i=\frac{\epsilon}{R}$$
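With the steady-state current $\epsilon/R$ in hand, the growing solution is $i(t)=\frac{\epsilon}{R}\left(1-e^{-(R/L)t}\right)$, so setting $i(t_1)=0.7\,\epsilon/R$ with $t_1=aL/R$ gives $a=-\ln(0.3)\approx1.20$. A quick numerical check (illustrative Python; the circuit values are arbitrary):

```python
import math

# i(t) = (eps/R) * (1 - exp(-(R/L)*t)); find a such that i(a*L/R) = 0.7 * i_inf
a = -math.log(1.0 - 0.7)   # solve 0.7 = 1 - exp(-a)
print(round(a, 3))         # ~1.204

# verify with concrete (arbitrary) circuit values
eps, R, L = 12.0, 4.0, 0.5
i_inf = eps / R
t1 = a * L / R
i_t1 = i_inf * (1.0 - math.exp(-(R / L) * t1))
assert abs(i_t1 - 0.7 * i_inf) < 1e-9
```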
EDIT: got it. Thanks!
Last edited: Apr 2, 2008
https://math.stackexchange.com/questions/2433831/what-is-the-relationship-between-variable-magnitude-and-a-circle-tangent-vector
# What is the relationship between variable magnitude and a circle-tangent vector field?
The following vector field $$\vec G(x,y) = \dfrac{-y\hat \imath + x\hat \jmath}{\sqrt{x^2+y^2}}$$ shows vectors that are tangent to circle centered at the origin. However, the text I am using also mentions that the magnitudes of these vectors are equal to their distances from the origin.
My question is, wouldn't you have to multiply both components by $\sqrt{x^2+y^2}$ rather than divide?
I now see that $|\vec G| = 1$, something I should have noticed before. My new approach to get the magnitudes to equal to the distances of the points from the origin:
$$|\vec G_1| = \sqrt{k^2\cdot(y^2 + x^2)} = \sqrt{y^2+x^2} \implies k=1 \implies \vec G_1(x,y) = -y\hat \imath + x\hat \jmath$$
So is the text (as I have written it) simply incorrect or is there something else behind it?
• I may disagree with your text. The length of $G$ is constantly 1. – Randall Sep 18 '17 at 1:27
• Do you see any other reason for this? I have edited the question. – Rithwik Sudharsan Sep 18 '17 at 2:34
• The text would be correct without the denominator: the length of the vector $-yi + xj$ is in fact equal to its distance from the origin. – Randall Sep 18 '17 at 19:48
• Thanks! I speculated so, I must have read the text wrong if it's not incorrect. – Rithwik Sudharsan Sep 21 '17 at 4:59
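The upshot of the comments is easy to confirm numerically: the given field $\vec G$ has constant length 1, while the unnormalized field $\vec G_1 = -y\hat\imath + x\hat\jmath$ has length equal to its distance from the origin. A quick check (illustrative Python):

```python
import math

def G(x, y):
    # normalized circle-tangent field
    r = math.hypot(x, y)
    return (-y / r, x / r)

def G1(x, y):
    # unnormalized field -y i + x j
    return (-y, x)

for (x, y) in [(3.0, 4.0), (-1.0, 2.0), (0.5, -0.5)]:
    gx, gy = G(x, y)
    assert abs(math.hypot(gx, gy) - 1.0) < 1e-12                  # |G| == 1
    g1x, g1y = G1(x, y)
    assert abs(math.hypot(g1x, g1y) - math.hypot(x, y)) < 1e-12   # |G1| == r
    assert abs(gx * x + gy * y) < 1e-12   # both fields are tangent to circles
print("checks pass")
```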
https://physics.stackexchange.com/questions/451426/how-can-i-find-the-power-given-only-the-torque
# How can I find the power given only the torque?
I have a system where I am trying to find the power generated by a DC electric generator. However, I am not very familiar with generators, so I am having difficulty determining the rotational velocity. I have the equation:
torque = power / (2π · n), where n = revolutions per second.
For convenience, let us say that the torque = 10 N·m. I want to find the power, but how can I do that when I do not know how quickly the generator is spinning? Obviously I am missing something, but I'm not sure what it is. Thanks!
Edit: to be a little more specific, given power = 2π × 1 m × 10 N × n, where n, I assume, is the number of revolutions per second, can I find the wattage as a real number, not just a symbol?
• You can't calculate power output without the missing information. – David White Dec 31 '18 at 21:28
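As the comment says, the rotation rate is genuinely missing information. Once some n is assumed (or measured), the arithmetic is immediate: P = 2π · n · τ. A sketch with illustrative values only:

```python
import math

def power_from_torque(torque_nm, rev_per_sec):
    # P = 2*pi*n*tau  (equivalently P = omega * tau, with omega = 2*pi*n)
    return 2.0 * math.pi * rev_per_sec * torque_nm

tau = 10.0                      # N·m, as in the question
for n in (1.0, 5.0, 10.0):      # assumed revolutions per second
    print(n, round(power_from_torque(tau, n), 1))
# e.g. n = 5 rev/s gives P = 2*pi*5*10 ≈ 314.2 W
```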
http://math.stackexchange.com/questions/105823/inductive-proof-that-mnn-mid-mn
# Inductive proof that $(m!^n)n! \mid (mn)!$
I have worked this problem out before but am stuck on the inductive step.
Show that $(m!^n)n! \mid (mn)!$
I am using induction on $n$.
I thought to factor $(m(n+1))$! but can't get it exactly how I want it, any suggestions?
Assume that $(m!^n)n!\mid(mn)!$, say $(mn)!=a(m!^n)n!$. Then
\begin{align*} \big(m(n+1)\big)!&=(mn+m)!\\ &=(mn)!\prod_{k=1}^m(mn+k)\\ &=a(m!^n)n!\prod_{k=1}^m(mn+k)\\ &=a(m!^n)n!(mn+m)\prod_{k=1}^{m-1}(mn+k)\\ &=am(m!^n)(n+1)!\prod_{k=1}^{m-1}(mn+k)\;. \end{align*}
If you can now show that $$(m-1)!\;\Bigg\vert\;\prod_{k=1}^{m-1}(mn+k)\;,$$ you’ll have the extra factor of $m!$ that you need.
HINT: $\dbinom{mn+m-1}{mn}$ is an integer.
Aha! I had gotten to the last line algebraically, but that was the key I needed, thank you! – user24372 Feb 5 '12 at 1:21
If you are familiar with group theory, note that you can view the group $S_m\wr S_n$ (that squiggly symbol is the wreath product) as a subgroup of $S_{mn}$. These groups have cardinality $(m!)^nn!$ and $(mn)!$, respectively. The statement you want is Lagrange's theorem applied to this pair.
@KannappanSampath: While wikipedia defines $\wr$, it could perhaps be more explicit. For the purposes of this answer, view $S_{mn}$ as permutations of $mn$ items arranged in a grid with $m$ rows and $n$ columns. Then $S_m\wr S_n$ is the subgroup consisting of all permutations which rearrange columns in an arbitrary way and rearrange elements within each column in an arbitrary way. You can rearrange the $n$ columns in $n!$ ways and rearrange the elements of each of the $n$ columns in $m!$ ways, so $\lVert S_m\wr S_n\rVert = (m!)^nn!$. – Noah Stein Feb 5 '12 at 2:58
A link to your answer is posted on my profile here. Particularly a good answer. I have known applications like this: For instance $n! \mid \binom n 0 _2$, by considering the symmetric group as a subgroup in $\operatorname {GL}(\mathbb F_2)$, the homomorphism given by the permutation matrices. But, however I could not think of this, because I never knew the way to work with Wreath Product! – user21436 Feb 5 '12 at 3:12
@KannappanSampath: It looks like we both have something to learn from each other about this trick. What does the notation $\binom{n}{0}_2$ mean? – Noah Stein Feb 5 '12 at 3:41
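Both arguments above can be spot-checked by brute force for small m and n (illustrative Python):

```python
from math import factorial

def divides(m, n):
    # check that (m!)^n * n! divides (mn)!
    return factorial(m * n) % (factorial(m) ** n * factorial(n)) == 0

assert all(divides(m, n) for m in range(1, 7) for n in range(1, 7))

# the hint: C(mn+m-1, mn) being an integer supplies the (m-1)! factor
def binom(a, b):
    return factorial(a) // (factorial(b) * factorial(a - b))

m, n = 4, 3
prod = 1
for k in range(1, m):          # product of (mn+k) for k = 1..m-1
    prod *= m * n + k
assert prod % factorial(m - 1) == 0
assert prod // factorial(m - 1) == binom(m * n + m - 1, m * n)
print("divisibility checks pass")
```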
https://www.research.ed.ac.uk/en/publications/measurement-of-the-bpm-production-asymmetry-and-the-cp-asymmetry-
# Measurement of the $B^{\pm}$ production asymmetry and the $CP$ asymmetry in $B^{\pm} \to J/\psi K^{\pm}$ decays
The $B^{\pm}$ meson production asymmetry in $pp$ collisions is measured using $B^+ \to \bar{D}^0 \pi^+$ decays. The data were recorded by the LHCb experiment during Run 1 of the LHC at centre-of-mass energies of $\sqrt{s}=$ 7 and 8 TeV. The production asymmetries, integrated over transverse momenta in the range $2 <p_{\rm T} <30$ GeV/$c$, and rapidities in the range $2.1 <y <4.5$, are measured to be \begin{align*} \mathcal{A}_{\rm prod}(B^+,\sqrt{s}=7~{\rm TeV}) &= (-0.41 \pm 0.49 \pm 0.10) \times 10^{-2},\\ \mathcal{A}_{\rm prod}(B^+,\sqrt{s}=8~{\rm TeV}) &= (-0.53 \pm 0.31 \pm 0.10) \times 10^{-2}, \end{align*} where the first uncertainties are statistical and the second are systematic. These production asymmetries are used to correct the raw asymmetries of $B^{+} \to J/\psi K^{+}$ decays, thus allowing a measurement of the $CP$ asymmetry, \begin{equation*} \mathcal{A}_{CP} = (0.09 \pm 0.27 \pm 0.07) \times 10^{-2}. \end{equation*}
https://www.sparrho.com/item/delay-parameter-selection-in-permutation-entropy-using-topological-data-analysis/2240c01/
# Delay Parameter Selection in Permutation Entropy Using Topological Data Analysis
Research paper by Audun D. Myers, Firas A. Khasawneh
Indexed on: 14 May '19. Published on: 10 May '19. Published in: arXiv - Physics - Data Analysis; Statistics and Probability
#### Abstract
Permutation Entropy (PE) is a powerful tool for quantifying the predictability of a sequence, including measuring the regularity of a time series. Despite its successful application in a variety of scientific domains, PE requires a judicious choice of the delay parameter $\tau$. Another parameter of interest in PE is the motif dimension $n$, but typically $n$ is selected between $4$ and $8$, with $5$ or $6$ giving optimal results for the majority of systems; therefore, in this work we focus solely on choosing the delay parameter. Selecting $\tau$ is often accomplished using trial and error guided by the expertise of domain scientists. However, in this paper, we show that persistent homology, the flagship tool from the Topological Data Analysis (TDA) toolset, provides an approach for the automatic selection of $\tau$. We evaluate the successful identification of a suitable $\tau$ by our TDA-based approach by comparing our results to a variety of examples in the published literature.
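For readers unfamiliar with PE, here is a minimal sketch of the quantity whose parameters are being tuned (this is the standard PE definition, not the paper's TDA method): embed the series with delay τ and motif dimension n, count ordinal patterns, and take the normalized Shannon entropy of their distribution.

```python
import math
from collections import Counter

def permutation_entropy(series, n=3, tau=1):
    """Normalized permutation entropy in [0, 1] for motif dimension n, delay tau."""
    patterns = Counter()
    for i in range(len(series) - (n - 1) * tau):
        window = [series[i + j * tau] for j in range(n)]
        # ordinal pattern: indices of the window sorted by value
        patterns[tuple(sorted(range(n), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(n))   # normalize by log(n!)

# a monotone series produces a single ordinal pattern, hence entropy ~0
print(permutation_entropy(list(range(100))))
```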
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-common-core/chapter-4-quadratic-functions-and-equations-4-7-the-quadratic-formula-practice-and-problem-solving-exercises-page-246/51
## Algebra 2 Common Core
$x\approx-1.70 \text{ and } x\approx4.70$
In the given equation, $x^2-3x-8=0$, we have $a=1$, $b=-3$, and $c=-8$. Using the Quadratic Formula, $x=\dfrac{-b\pm\sqrt{b^2-4ac}}{2a}$, then \begin{align*}x&=\dfrac{-(-3)\pm\sqrt{(-3)^2-4(1)(-8)}}{2(1)}\\&=\dfrac{3\pm\sqrt{9+32}}{2}\\&=\dfrac{3\pm\sqrt{41}}{2}.\end{align*} Thus $x=\dfrac{3-\sqrt{41}}{2}\approx-1.70$ or $x=\dfrac{3+\sqrt{41}}{2}\approx4.70$. Hence, the solutions are $x\approx-1.70$ and $x\approx4.70$.
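The two roots can be double-checked numerically; a quick sketch:

```python
import math

# x^2 - 3x - 8 = 0  via the quadratic formula
a, b, c = 1.0, -3.0, -8.0
disc = b * b - 4.0 * a * c                  # 9 + 32 = 41
x1 = (-b - math.sqrt(disc)) / (2.0 * a)
x2 = (-b + math.sqrt(disc)) / (2.0 * a)
print(f"{x1:.2f} {x2:.2f}")                 # -1.70 4.70
assert abs(a * x1 ** 2 + b * x1 + c) < 1e-9   # both satisfy the equation
assert abs(a * x2 ** 2 + b * x2 + c) < 1e-9
```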
https://docs.charter.uat.esaportal.eu/services/iris/tutorial/
# IRIS tutorial
This service performs a Change Detection using couples of optical images. The output is represented by Structural Similarity Index (SSI) maps that show the intensity of the detected changes in the region of interest.
IRIS service description and specifications are available in this section.
## Select the processing service
After the opening of the activation workspace, in the right panel of the interface, open the Processing Services tab.
Select the processing service Change Detection Analysis (IRIS).
The "Change Detection Analysis (IRIS)" panel is displayed with parameters values to be filled-in.
## Find the data using multiple filter criteria
• Select the area for which you want to do an analysis, e.g over South-eastern France.
• From the Navigation and Search toolbar (located in the upper left side of the map), click on Spatial Filter and draw a square AOI over the Riviera resort of Saint Tropez in the Var Department, France. This spatial filter allows you to select only the EO data acquired over this area.
• From the top of the left panel, use Filter Criterias to search for “Optical” and "Sentinel-2" data collections from the list.
• After the query, the list will be updated as shown in the next image.
• Now it is possible to choose a pair of pre- and post-event reflectance images from Optical Calibrated Datasets to be used for the change detection analysis. This pair must come from the same sensor and ideally after a co-registration.
• As an example you can choose the following pair:
• Pre-event image: SENTINEL-2B MSI L2A 108 2021/08/02 10:25:59
• Post-event image: SENTINEL-2A MSI L2A 108 2021/08/27 10:30:21
## Fill the parameters
After the definition of spatial and time filters, you can employ IRIS, by using a suitable pair of Calibrated Datasets from Sentinel-2 data.
To do so you can fill the parameters as described in the following sections.
### Job name
• Insert as job name:
IRIS Sentinel-2 02/08 - 27/08 2021 Wildfire Var France
### Reference input
The first two mandatory parameters are the input "Reference" and "Secondary" images from Optical Calibrated Datasets. Each parameter requires both the reference to the Calibrated Dataset and the band to be used for the change detection analysis, specified by its CBN (e.g. red, green, etc.).
Hint
To consult the bands of a Calibrated Dataset, click on the Show assets button near the feature title. A list with all single-band assets (CBNs) included in the Calibrated Dataset will then appear under the feature title.
Then drag and drop the selected assets:
1. single-band geophysical asset from a pre-event Calibrated Dataset (Reflectance)
2. single-band geophysical asset from a post-event Calibrated Dataset (Reflectance)
in the Optical calibrated pre-event single band asset and Optical calibrated post-event single band asset fields respectively.
Warning
Users must drag and drop the single-band asset (e.g. "red") into both Optical calibrated pre-event single band asset and Optical calibrated post-event single band asset fields. The drag and drop of the Calibrated Dataset (e.g. "[CD] SENTINEL-2A MSI L2A 46 2021/12/11 02:31:11") is not enough.
### Window size
Insert 39 as the value for the window size.
Note
This value defines the size of the sliding window in pixels; it can strongly influence the result of the analysis. The higher this parameter is set, the more averaged the change map will be, while a smaller value lets more detailed changes be identified, at the cost of potentially noisier results. This is because the SSIM value for each pixel is computed from the information in the whole sliding window, so a smaller window yields a more localized value of the index.
Warning
The window size should be set in the range 9 to 71 and must be odd. If the inserted value is even or outside this range, a warning is shown to the user.
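The constraint just described (odd, between 9 and 71) is simple to encode. A hypothetical client-side validator, not part of the platform, might look like:

```python
def validate_window_size(win_size):
    """Return a warning string for an invalid IRIS window size, or None if OK."""
    if not isinstance(win_size, int) or not (9 <= win_size <= 71):
        return "window size must be an integer between 9 and 71"
    if win_size % 2 == 0:
        return "window size must be odd"
    return None

assert validate_window_size(39) is None      # the tutorial's value
assert validate_window_size(40) is not None  # even: rejected
assert validate_window_size(7) is not None   # too small: rejected
```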
### Area of interest expressed as Well-known text
The “Area of interest as Well Known Text” can be defined by using the drawn polygon defined with the area filter.
Tip
In the definition of “Area of interest as Well Known Text” it is possible to apply as AOI the drawn polygon defined with the area filter. To do so, click on the magic wand button on the left side of the "Area of interest expressed as Well-known text" box and select the option AOI from the list. The platform will automatically fill the parameter value with the rectangular bounding box taken from the current search area in WKT format.
Note
This parameter is optional.
## Run the job
Click on the button Run Job and see the Running Job
You can monitor job progress through the progress bar.
Once the job is completed, click on the button Show results at the bottom of the processing service panel.
Tip
You can also save the parameters employed in this job by clicking on the Export params button in the right panel. This allows you to copy all your entries to the clipboard. This is meant to be used for a quick re-submission of a similar job after a fine tuning of the parameters (e.g. to add a color formula later).
Below is reported the syntax which includes all the parameters employed in this example.
{
"pre_event": "https://catalog.charter.uat.esaportal.eu:443//charter/cat/[chartercalibrateddataset,%7Bcallid201%7D]/search?format=json&uid=call201_S2B_MSIL2A_20210802T102559_N0301_R108_T31TGJ_20210802T133728-calibrated#nir",
"post_event": "https://catalog.charter.uat.esaportal.eu:443//charter/cat/[chartercalibrateddataset,%7Bcallid201%7D]/search?format=json&uid=call201_S2A_MSIL2A_20210827T103021_N0301_R108_T31TGJ_20210827T151224-calibrated#nir",
"win_size": "39",
"aoi": "POLYGON((6.31 43.232,6.31 43.44,6.633 43.44,6.633 43.232,6.31 43.232))"
}
### Visualization
See the result on the map. The preview appears within the area defined in the spatial filter.
To get more information about the product just click on the preview in the map, a bubble showing the name of the layer “IRIS Change detection for S2B_MSIL2A_20210802T102559_N0301_R108_T31TGJ_20210802T133728-calibrated and S2A_MSIL2A_20210827T103021_N0301_R108_T31TGJ_20210827T151224-calibrated” will appear and then click on the Show details button.
Tip
To quickly access Product Details double click on the layer from the Results list.
In the left panel of the interface, the details of Job Result will appear with Product metadata. Furthermore by clicking on Layer styling you can also access to the View options. In here it is possible to see histogram/s of the Product which is visible in the map, set color formula, change Filters (e.g. Brightness, Opacity).
Tip
To visually compare the product overview with the underlying base layer (e.g. Natural Earth or Dark map) you can set the Opacity filter under View options as 40%.
The job result includes the following single-band assets:
• nir_pre: single-band geophysical asset nir product from the pre-event image, as a single-band GeoTIFF in COG format,
• nir_post: single-band geophysical asset nir product from the post-event image, as a single-band GeoTIFF in COG format.
https://homework.cpm.org/category/CCI_CT/textbook/pc/chapter/2/lesson/2.2.1/problem/2-43
### Home > PC > Chapter 2 > Lesson 2.2.1 > Problem2-43
2-43.
How many terms are being added? Start your index at 1. Stop at the last term. (Alternatively, you could start at 0 and end at 9.)
$\displaystyle\sum\limits_{k=1}^{10}$
Find the argument that will result in the desired expansion.
$\displaystyle\sum\limits_{k=1}^{10}k^2$
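The resulting sum can be checked quickly; a one-line sketch in Python comparing it with the closed form $n(n+1)(2n+1)/6$:

```python
# Sum of k^2 for k = 1..10, checked against the closed-form formula.
total = sum(k**2 for k in range(1, 11))
closed_form = 10 * 11 * 21 // 6  # n(n+1)(2n+1)/6 with n = 10
assert total == closed_form == 385
```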
http://clay6.com/qa/37455/express-the-given-complex-number-in-the-form-a-ib-i-
# Express the complex number $i^{-39}$ in the form $a+ib$
$(a)\;2i\qquad(b)\;i\qquad(c)\;0\qquad(d)\;-i$
Answer : $\;i$
Explanation :
$i^{-39} = i^{-4 \times 9-3}$
$= (i^{4})^{-9} \;. i^{-3}$
$= (1)^{-9} \; . i^{-3} \qquad [i^{4} =1]$
$= \large\frac{1}{i^{3}} = \large\frac{1}{-i} \qquad [i^{3} =-i]$
$= -\large\frac{1}{i} \times \large\frac{i}{i}$
$= \large\frac{-i}{i^2}$
$= \large\frac{-i}{-1} \qquad [i^{2} =-1]$
$= i$
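The result can also be confirmed numerically; a quick sketch using Python's built-in complex type (floating-point arithmetic, so we compare with a tolerance rather than exact equality):

```python
# i**(-39) should equal i; complex powers go through exp/log,
# so a tiny floating-point error is expected.
result = (1j) ** (-39)
assert abs(result - 1j) < 1e-9
```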
https://www.landonlehman.com/post/a-quasiperiodic-counterexample/
# A Quasiperiodic Counterexample
## The problem
Consider a function $$f: [0, 1] \to \mathbb{R}$$ that is continuous with $$f(0) = f(1)$$. It is possible to prove that for each $$n \in \mathbb{N}$$, there exist $$x_n, y_n \in [0, 1]$$ such that $$|x_n - y_n| = 1/n$$ and $$f(x_n) = f(y_n)$$. This means the set of points at which the function is not one-to-one is at least countably infinite. Providing the proof is part of Exercise 4.5.6 in Abbott’s Understanding Analysis (2nd edition).
I’m not going to discuss the proof here, but I am interested in the final part of the exercise, which asks for a specific counterexample. Quoting Abbott:
If $$h \in (0, 1)$$ is not of the form $$1/n$$, there does not necessarily exist $$|x - y| = h$$ satisfying $$f(x) = f(y)$$. Provide an example that illustrates this using $$h = 2/5$$.
I found this perhaps harder than it should have been. Restating the problem slightly, we are looking for a function $$f$$ such that $$f(x + 2/5) - f(x) \neq 0$$, for all $$x \in [0, 3/5]$$ (since $$3/5 = 1 - 2/5$$), while satisfying the requirement that $$f(0) = f(1)$$.
## A specific counterexample
After some trial and error, I hit on the idea of using a function that does have a period of $$2/5$$, and then modifying it somehow. Starting with $$\sin$$: $\sin\left(\frac{5}{2}2\pi x\right)$ has a period of $$2/5$$, since $\sin\left[\frac{5}{2}2\pi \left(x + \frac{2}{5}\right)\right] = \sin\left(\frac{5}{2} 2\pi x + 2 \pi\right) = \sin\left(\frac{5}{2}2 \pi x\right).$
Now for the desired function $$f$$, try a function of the form $f(x) = \sin\left(\frac{5}{2}2\pi x\right) + a x .$ Requiring that $$f(0) = f(1)$$ means that $a = - \sin\left(\frac{5}{2}2\pi\right) = 0,$ so this functional form won’t work.
Instead of $$\sin$$, let's try $$\cos$$: $\cos\left(\frac{5}{2}2\pi x\right)$ has a period of $$2/5$$. Trying the same kind of functional form, $f(x) = \cos\left(\frac{5}{2}2\pi x\right) + a x$, we see that $$f(0) = 1$$ and $$f(1) = -1 + a$$, so $$a = 2$$ works. Calculating the difference: $f(x + \frac{2}{5}) - f(x) = 2 \left(x + \frac{2}{5}\right) - 2x = \frac{4}{5},$ so this is the desired counterexample! Here is a plot, along with a horizontal red line of length $$2/5$$ shown at one point along the function:
Imagine sliding the red line around on the function (keeping it horizontal). The two ends of the line never simultaneously intersect the function! In fact, a little experimentation shows that the line is not optimal, in the sense that it could be a bit shorter or a bit longer and still not intersect the function at both ends. So our counterexample $$f(x)$$ is a counterexample for values of $$h$$ close to $$2/5$$ as well as for $$h = 2/5$$.
## Generalizing
The fact that the above function $$f(x)$$ is a counterexample for values of $$h$$ close to $$2/5$$ shows that there is nothing special about $$2/5$$. Specifically there is nothing special about the fact that it is rational. So let’s try to find a function for arbitrary values of $$h \in (0, 1)$$.
First, note that we can take care of many values of $$h$$ at once by using the simple function $f(x) = \sin(2\pi x).$ If $$h \in (1/2, 1)$$, then for $$x \in (0, 1-h]$$ we have $$x < 1/2$$, so $$f(x) > 0$$, while $$x + h \in (1/2, 1)$$, so $$f(x + h) < 0$$; and at $$x = 0$$, $$f(h) = \sin(2\pi h) < 0 \neq 0 = f(0)$$. We don't have to consider larger values of $$x$$ since the property $$f(x + h) \neq f(x)$$ only needs to hold for $$x \in [0, 1 - h]$$. This shows that no pair with $$f(x) = f(x + h)$$ need exist for any $$h \in (1/2, 1)$$.
For arbitrary $$h \in (0, 1/2)$$, let’s try a function of the same form as the specific counterexample above: $f(x) = \cos\left(\frac{2\pi x}{h}\right) + a x.$ Then $$f(0) = 1$$ and $$f(1) = \cos\left(\frac{1}{h}2\pi\right) + a$$, so the desired form is $\boxed{ f(x) = \cos\left(\frac{2\pi x}{h}\right) + \left(1 - \cos\left(\frac{2\pi}{h}\right)\right)x .}$
Calculating the difference: $f(x + h) - f(x) = h \left(1 - \cos\left(\frac{2\pi}{h}\right)\right) = 2 h \sin^2\left(\frac{\pi}{h}\right),$ using a half-angle formula.
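This constant difference is easy to verify numerically; a short sketch for $$h = 2/5$$, where $$2h\sin^2(\pi/h) = 4/5$$:

```python
import numpy as np

h = 2 / 5
a = 1 - np.cos(2 * np.pi / h)  # slope chosen so that f(0) == f(1); here a == 2

def f(x):
    return np.cos(2 * np.pi * x / h) + a * x

x = np.linspace(0, 1 - h, 200)
diff = f(x + h) - f(x)

# The difference is constant and nonzero on all of [0, 1-h]:
assert np.allclose(diff, 4 / 5)
# ...while the endpoint condition still holds:
assert np.isclose(f(0), f(1))
```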
The difference is greater than zero for all values of $$h$$ except for $$h$$ such that $$\sin(\pi/h) = 0$$, which are exactly the values $$1/n, n \in \mathbb{N}$$ for which there is always at least one place where the function is not one-to-one!
Here is a plot of the difference $$f(x + h) - f(x) = 2 h \sin^2\left(\frac{\pi}{h}\right)$$ as a function of $$h$$:
It only intersects zero at the countably infinite set of points $$1/n, n \in \mathbb{N}$$.
## Summary
Any continuous function $$f:[0, 1] \to \mathbb{R}$$ satisfying $$f(0) = f(1)$$ must fail to be one-to-one for at least one pair of points in $$[0, 1]$$ separated by $$h = 1/n$$, and this is true for all $$n \in \mathbb{N}$$. Furthermore, by the counterexample given above, there are no other “separations” $$h \in (0, 1)$$ for which non-injectivity is guaranteed!
Is there another general functional form that provides a counterexample for arbitrary values of $$h \neq 1/n$$? Let me know if you find one!
##### Landon Lehman
###### Data Scientist
My research interests include data science, statistics, physics, and applied math.
https://sungsoo.github.io/2017/01/25/attention-transfer.html
# Stop Thinking, Just Do!
Sung-Soo Kim's Blog
# Attention Transfer
## Abstract
Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcing it to mimic the attention maps of a powerful teacher network. To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures.
## PyTorch
PyTorch code for “Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer”: https://arxiv.org/abs/1612.03928
The paper is under review as a conference submission at ICLR2017: https://openreview.net/forum?id=Sks9_ajex
What’s in this repo so far:
• Activation-based AT code for CIFAR-10 experiments
• Code for ImageNet experiments (ResNet-18-ResNet-34 student-teacher)
Coming:
• Scenes and CUB activation-based AT code
• Pretrained with activation-based AT ResNet-18
The code uses PyTorch (https://pytorch.org). Note that the original experiments were done using torch-autograd; we have so far validated that the CIFAR-10 experiments are exactly reproducible in PyTorch, and are in the process of doing so for ImageNet (results are currently very slightly worse in PyTorch, due to hyperparameters).
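For reference, the activation-based attention map used in the paper collapses a convolutional activation tensor into a spatial map by summing powers of absolute activations over the channel dimension. A minimal numpy sketch (the function name and the L2 normalization shown here are illustrative; the repo's actual PyTorch implementation may differ in details):

```python
import numpy as np

def attention_map(activations, p=2):
    """Collapse a (batch, channels, H, W) activation tensor into a
    (batch, H*W) attention map: sum |A|**p over channels, flatten,
    then L2-normalize each map."""
    amap = (np.abs(activations) ** p).sum(axis=1)
    amap = amap.reshape(activations.shape[0], -1)
    norms = np.linalg.norm(amap, axis=1, keepdims=True)
    return amap / norms

A = np.random.randn(4, 16, 8, 8)  # e.g. one block's output activations
Q = attention_map(A)
print(Q.shape)  # (4, 64)
```

The transfer loss then penalizes the distance between the student's and teacher's normalized maps at matching depths.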
bibtex:
@article{Zagoruyko2016AT,
author = {Sergey Zagoruyko and Nikos Komodakis},
title = {Paying More Attention to Attention: Improving the Performance of
Convolutional Neural Networks via Attention Transfer},
url = {https://arxiv.org/abs/1612.03928},
year = {2016}}
## Requirements
First install PyTorch, then install torchnet:
git clone https://github.com/pytorch/tnt
cd tnt
python setup.py install
Install OpenCV with Python bindings, and torchvision with OpenCV transforms:
git clone https://github.com/szagoruyko/vision
cd vision; git checkout opencv
python setup.py install
Finally, install other Python packages:
pip install -r requirements.txt
## CIFAR-10
This section describes how to get the results in the table 1 of the paper.
First, train teachers:
python cifar.py --save logs/resnet_40_1_teacher --depth 40 --width 1
python cifar.py --save logs/resnet_16_2_teacher --depth 16 --width 2
python cifar.py --save logs/resnet_40_2_teacher --depth 40 --width 2
To train with activation-based AT do:
python cifar.py --save logs/at_16_1_16_2 --teacher_id resnet_16_2_teacher --beta 1e+3
To train with KD:
python cifar.py --save logs/kd_16_1_16_2 --teacher_id resnet_16_2_teacher --alpha 0.9
We plan to add AT+KD with decaying beta to get the best knowledge transfer results soon.
## ImageNet
### Pretrained model
We provide ResNet-18 pretrained model with activation based AT:
Model                      val error (top-1, top-5)
ResNet-18                  30.4, 10.8
ResNet-18-ResNet-34-AT     29.3, 10.0
Model definition: [coming]
Convergence plot:
### Train from scratch
wget https://s3.amazonaws.com/pytorch/h5models/resnet-34-export.hkl
python imagenet.py --imagenetpath ~/ILSVRC2012 --depth 18 --width 1 \
https://www.physicsforums.com/threads/log-law-problems.717131/
# Log Law Problems
AbsoluteZer0
## Homework Statement
Write as a single logarithm:
## Homework Equations
Logarithm Laws:
$log_a(xy) = log_a(x) + log_a(y)$
$log_a(\frac{x}{y}) = log_a(x) - log_a(y)$
___________
Problem Set:
$log_{10}A + log_{10}B - log_{10}C$
$\frac{1}{2}logX - 2log4$
$2logN + 3logX$
## The Attempt at a Solution
I simplified the first question to $log_{10}(\frac{AB}{C})$ Am I correct?
I wasn't sure about how to approach the second question. I multiplied $\frac{1}{2}$ by $X$ and $2$ by $4$ and simplified as follows:
$log_{10}{\frac{1}{2}X} - log_{10}8$
to get $log_{10}(\frac{0.5x}{8})$
I'm not sure if this is correct though.
If it is wrong, how would I solve it correctly?
For the third problem, I solved it to:
$log_{10}[ (2n)(3x) ]$
Thanks,
Mentor
For the 1/2 log X you haven't listed the log law for it, which is:
$C \log(x) = \log(x^C)$
Tanya Sharma
I simplified the first question to $log_{10}(\frac{AB}{C})$ Am I correct?
Yes, that's right.
I wasn't sure about how to approach the second question. I multiplied $\frac{1}{2}$ by $X$ and $2$ by $4$ and simplified as follows:
$log_{10}{\frac{1}{2}X} - log_{10}8$
to get $log_{10}(\frac{0.5x}{8})$
I'm not sure if this is correct though.
If it is wrong, how would I solve it correctly?
For the third problem, I solved it to:
$log_{10}[ (2n)(3x) ]$
Thanks,
That is not the correct way.
Use the following property of logarithms: $\log_b(x^n) = n \log_b(x)$.
AbsoluteZer0
I solved the second one to:
$log_{10}\frac{X^{0.5}}{16}$
Is this correct?
Thanks
Tanya Sharma
I solved the second one to:
$log_{10}\frac{X^{0.5}}{16}$
Is this correct?
Thanks
Correct
AbsoluteZer0
And would the third one be
$log_{10}(N^2X^3)$?
Thanks
Tanya Sharma
And would the third one be
$log_{10}(N^2X^3)$?
Thanks
:thumbs:
AbsoluteZer0
Thank you very much!
Mentor
Don't forget to use the Thanks button to thank everyone.
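The three simplifications worked out in this thread can be sanity-checked numerically; a quick sketch using Python's math module with arbitrary positive test values:

```python
import math

A, B, C, X, N = 3.0, 5.0, 7.0, 9.0, 4.0

# log A + log B - log C  ==  log(AB/C)
assert math.isclose(math.log10(A) + math.log10(B) - math.log10(C),
                    math.log10(A * B / C))

# (1/2) log X - 2 log 4  ==  log(X**0.5 / 16)
assert math.isclose(0.5 * math.log10(X) - 2 * math.log10(4),
                    math.log10(X ** 0.5 / 16))

# 2 log N + 3 log X  ==  log(N**2 * X**3)
assert math.isclose(2 * math.log10(N) + 3 * math.log10(X),
                    math.log10(N ** 2 * X ** 3))
```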
https://studydaddy.com/question/what-is-the-formula-of-ammonium-carbonate
# What is the formula of ammonium carbonate?
(NH₄)₂CO₃
The important thing to recognize here is the fact that you're dealing with two polyatomic ions, one of which acts as the cation and one of which acts as the anion.
The name of the cation is always added first to the name of the ionic compound. Likewise, the cation is written first in the compound's chemical formula. In this case, you know that you have ammonium, NH₄⁺, as the cation.
The name of the anion follows the name of the cation. In this case, you know that you have the carbonate ion, CO₃²⁻, as the anion.
Now, notice that the anion carries a 2− charge. As you know, ionic compounds must be electrically neutral, meaning that the overall positive charge coming from the cations must be balanced by the overall negative charge coming from the anions.
In this case, you need two ammonium cations to balance the 2- charge of the carbonate anion. You will thus have
2 × NH₄⁺ and 1 × CO₃²⁻
which means that the chemical formula for this compound will be
(NH₄)₂CO₃ → ammonium carbonate
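The charge-balancing step generalizes to any cation/anion pair: the subscripts are the smallest whole numbers that make the total charge zero. A small illustrative sketch (the helper `ion_counts` is hypothetical, not part of any chemistry library):

```python
from math import gcd

def ion_counts(cation_charge, anion_charge):
    """Smallest whole-number counts of cation and anion that give
    a neutral formula unit."""
    g = gcd(cation_charge, abs(anion_charge))
    return abs(anion_charge) // g, cation_charge // g

# NH4+ (charge +1) with CO3^2- (charge -2) -> (NH4)2CO3
print(ion_counts(1, -2))  # (2, 1)
```

The same helper gives (1, 1) for Ca²⁺ with CO₃²⁻ (CaCO₃) and (2, 3) for Al³⁺ with O²⁻ (Al₂O₃).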
https://golem.ph.utexas.edu/~distler/blog/archives/000439.html
September 27, 2004
Collinear
A lot of people (myself included) got very excited by the fact that perturbative $N=4$ super Yang-Mills amplitudes seemed to take a very simple form when written in (super)-Twistor space and that, moreover, the tree-level amplitudes can be recovered very elegantly from a topological string theory with target space the aforementioned super-Twistor space. But my ardour cooled considerably when it became apparent that, when one went to the one-loop level in the Yang-Mills, the aforementioned topological string theory would produce not just super Yang-Mills, but super Yang-Mills coupled to conformal supergravity.
Moreover, it appeared that the known one-loop amplitudes were not easily interpretable in terms of a twistor string theory. One could easily identify contributions in which the external gluons are supported on
1. a pair of lines in twistor space (connected by two twistor-space “propagators”)
2. a degree-two genus-zero curve (with a single twistor-space “propagator”)
3. $(n-1)$ of the gluons inserted as above, but with the $n^{th}$ gluon inserted somewhere in the same plane as the rest.
This last type of contribution is hard to reconcile with some sort of twistor string theory.
It now appears that this pessimistic conclusion was a bit too hasty. Cachazo, Svrček and Witten have traced the problem in their earlier analysis to a sort of “holomorphic anomaly.” Their criterion for collinearity in twistor space was that the amplitude should obey a certain differential equation. However, the differential operator in question, rather than annihilating the amplitude, gives a $\delta$-function whenever the momentum on an internal line is parallel to one of the external gluon momenta. It’s just a glorified version of
(1)$\overline{\partial} \frac{1}{z} = 2\pi \delta^{(2)} (z)$
The amplitude “really” receives contributions only of types (1) and (2). The apparent contributions of type (3) come from exceptional points in the integration over loop momenta, where an internal momentum is collinear with one of the external gluons.
I wish I’d thought of that…
Posted by distler at September 27, 2004 2:31 AM
String theory
The way I saw things, Witten’s string theory on $\mathbb{C}P^{3|4}$ and Berkovits’ alternative string theory are two different theories. There are certainly many technical differences.
But you make it sound like they are the same. Are you really saying that, and on what grounds?
Posted by: Volker Braun on September 27, 2004 8:32 AM | Permalink | Reply to this
Berkovits and Witten
They’re not manifestly the same. But B&W certainly imply that a similar conclusion holds in Witten’s theory. I thought the point of Cachazo et al was to examine the known 1-loop results and try to divine from them a set of Feynman rules for a new twistor string theory.
The hunt for that would seem to be on again…
Posted by: Jacques Distler on September 27, 2004 9:01 AM | Permalink | PGP Sig | Reply to this
Re: Berkovits and Witten
Hi Jacques,
Mysteriously, the same CSW rules seem to work for loop amplitudes, at least the simplest ones (MHV in N=4), as shown by Brandhuber et al. Their paper is very nice, and seems to show a connection between the off-shell continuation in CSW and the original ways of deriving these amplitudes, using cuts and collinear limits. The relation to some twistor string theory is less clear, I guess.
Posted by: Moshe Rozali on September 27, 2004 4:17 PM | Permalink | Reply to this
Re: Berkovits and Witten
That’ll teach me to get behind in my reading!
Brandhuber, Spence & Travaglini do, indeed, show that — contra what appears to follow from the earlier CSW paper — the one-loop MHV amplitudes are reproduced by sewing together tree-level MHV amplitudes (ie, contributions of type “1” above).
The current CSW paper reconciles this result with their previous analysis.
Posted by: Jacques Distler on September 27, 2004 4:38 PM | Permalink | PGP Sig | Reply to this
https://www.researcher-app.com/paper/285376
# Cluster and toroidal aspects of isoscalar dipole excitations in $^{12}\mathrm{C}$
Horiyuki Morita, Yuki Shikata, Yoshiko Kanada-En'yo
We investigate cluster and toroidal aspects of isoscalar dipole excitations in $^{12}\mathrm{C}$ based on the shifted basis antisymmetrized molecular dynamics combined with the generator coordinate method, which can describe 1p-1h excitations and $3\alpha$ dynamics. In the $E=10\text{–}15$…
https://nbviewer.jupyter.org/github/dirmeier/etudes/blob/master/gaussian_process_regression.ipynb
# Gaussian process regression¶
In Bayesian linear regression we derived linear regression in a Bayesian context. Here, we discuss Gaussian process regression using GPy and scipy. Most of the material is from Rasmussen and Williams (2006). I also recommend Michael Betancourt's Robust Gaussian Processes in Stan as a resource, for instance to learn more about hyperparameter inference, which won't be covered here.
As usual, I make no guarantees about the correctness or completeness of this document.
In [1]:
import GPy
import scipy
from sklearn.metrics.pairwise import rbf_kernel
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [15, 6]
In [2]:
rnorm = scipy.stats.norm.rvs
mvrnorm = scipy.stats.multivariate_normal.rvs
### Priors on functions¶
In Bayesian linear regression we assumed a linear dependency
\begin{align*} f_{\boldsymbol \beta}& : \ \mathcal{X} \rightarrow \mathcal{Y},\\ f_{\boldsymbol \beta}(\mathbf{x}) & = \ \mathbf{x}^T \boldsymbol \beta + \epsilon, \end{align*}
which was parametrized by a coefficient vector $\boldsymbol \beta$. In order to model uncertainty, regularize our coefficients, or for whatever other reason, we put a prior distribution on $\boldsymbol \beta$ and thereby introduced some prior belief into the model.
When we use Gaussian Processes, we instead set a prior on the function $f$ itself:
\begin{align*} f(\mathbf{x}) & \sim \mathcal{GP}(m(\mathbf{x}), k(\mathbf{x}, \mathbf{x}')) ,\\ p(f \mid \mathbf{x}) & = \mathcal{N}(m(\mathbf{x}), k(\mathbf{x}, \mathbf{x}')) . \end{align*}
So a Gaussian process is a distribution of functions. It is parametrized by a mean function $m$ that returns a vector of length $n$ and a kernel function $k$ that returns a matrix of dimension $n \times n$, where $n$ is the number of samples. For instance, the mean function could be a constant (which we will assume throughout the rest of this notebook), and the kernel could be a radial basis function, i.e.:
\begin{align*} m(\mathbf{x}) &= \mathbf{0},\\ k(\mathbf{x}, \mathbf{x}') &= \exp\left(- \gamma ||\mathbf{x} - \mathbf{x}' ||^2 \right), \end{align*}
where $\gamma$ a hyperparameter we have to optimize.
The parameters $\mathbf{m}$ and $\mathbf{k}$ apparently do not have a fixed dimensionality as in Bayesian linear regression (where we had $\boldsymbol \beta \in \mathbb{R}^p$), but have possibly infinite dimension. That means that with more data, the dimensions of $\mathbf{m}$ and $\mathbf{k}$ increase (in Bayesian regression $\boldsymbol \beta$ was independent of the sample size $n$). For that reason we call this approach non-parametric (the term itself sounds confusing, because we apparently do have parameters).
Next, let's look at some prior functions $f$. A prior function is just a sample from an $n$-dimensional multivariate normal distribution with mean $\mathbf{m}$ and covariance $\mathbf{k}$. The kernel must be positive definite, which the RBF kernel is.
In [3]:
n = 50
x = scipy.linspace(0, 1, n).reshape((n, 1))
beta = 2
y = scipy.sin(x) * beta + rnorm(size=(n, 1), scale=.1)
In [4]:
plt.scatter(x, y, color="blue")
plt.xlabel("X")
plt.ylabel("Y")
plt.show()
Then we set the mean and covariance functions.
In [5]:
m = scipy.zeros(n)
kernel = GPy.kern.RBF(input_dim=1)
In [6]:
k = kernel.K(x, x)
Then we sample five functions from the Gaussian process:
In [7]:
f_prior = [mvrnorm(mean=m, cov=k) for i in range(5)]
...and we plot the five samples.
In [8]:
colors = ['#bdbdbd','#969696','#737373','#525252','#252525']
_, ax = plt.subplots()
for i in range(5):
ax.scatter(x, f_prior[i], color=colors[i], marker=".", alpha=0.5)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.xlabel("X")
plt.ylabel("f")
plt.show()
This does not look like our data set at all. The reason is that we did not consider the responses $\mathbf{y}$ in the model. We incorporate them by multiplying the prior with the likelihood, which gives us the posterior Gaussian process.
### Posterior Gaussian process¶
We haven't specified the likelihood yet which we will assume to be Gaussian:
\begin{align*} p(\mathbf{y} \mid f, \mathbf{x}) = \prod_i^n \mathcal{N}(f_i, \sigma^2 \mathbf{I} ). \end{align*}
Later, for classification, we will also use a binomial likelihood.
The posterior Gaussian process is given by
\begin{align*} \text{posterior} \propto \text{likelihood} \times \text{prior}. \end{align*}
It is easy to derive the posterior from a joint distribution of the actual observations $\mathbf{y}$ and the posterior function values:
\begin{align*} \left[ \begin{array}{c} \mathbf{y} \\ {f} \end{array} \right] \sim \mathcal{N} \left( \mathbf{0}, \begin{array}{cc} k(\mathbf{x}, \mathbf{x}') + \sigma^2 \mathbf{I} & k(\mathbf{x}, {\mathbf{x}}') \\ k({\mathbf{x}}, \mathbf{x}') & k({\mathbf{x}}, {\mathbf{x}}') \end{array} \right). \end{align*}
Thus conditioning on $\mathbf{y}$ gives:
\begin{align*} p(f \mid \mathbf{y}, \mathbf{x}) & \propto p(\mathbf{y} \mid f, \mathbf{x}) \ p(f \mid \mathbf{x}),\\ f \mid \mathbf{y}, \mathbf{x} & \sim \mathcal{GP}\left(\tilde{m}(\tilde{\mathbf{x}}), \tilde{k}({\mathbf{x}}, {\mathbf{x}}')\right),\\\\ \tilde{m}({\mathbf{x}}) & = k({\mathbf{x}}, \mathbf{x}')\left( k(\mathbf{x}, \mathbf{x}') + \sigma^2 \mathbf{I} \right)^{-1} \mathbf{y},\\ \tilde{k}({\mathbf{x}}, {\mathbf{x}}') & = k({\mathbf{x}}, {\mathbf{x}}') - k({\mathbf{x}}, \mathbf{x}') \left( k(\mathbf{x}, \mathbf{x}') + \sigma^2 \mathbf{I} \right)^{-1} k(\mathbf{x}, {\mathbf{x}}') \end{align*}
So the posterior is again a Gaussian process with modified mean and variance functions. Let's compute this analytically and then compare it to the GPy posterior.
In [9]:
inv = scipy.linalg.inv(k + .1 * scipy.diag(scipy.ones(n)))
m_tilde = (k.dot(inv).dot(y)).flatten()
k_tilde = k - k.dot(inv).dot(k)
Sample from the posterior:
In [10]:
f_posterior = [mvrnorm(mean=m_tilde, cov=k_tilde) for i in range(5)]
In GPy this is way easier, because we only need to call a single function:
In [11]:
m = GPy.models.GPRegression(x, y, kernel, noise_var=.1)
gpy_f_posterior = m.posterior_samples_f(x, full_cov=True, size=5)
Let's compare our posterior to the one from GPy.
In [13]:
plt.rcParams['figure.figsize'] = [15, 6]
colors = ['#bdbdbd','#969696','#737373','#525252','#252525']
_, ax = plt.subplots(1, 2, sharex=True, sharey=True)
ax[0].scatter(x, y, color="blue")
ax[1].scatter(x, y, color="blue")
for i in range(5):
ax[0].scatter(x, gpy_f_posterior[:, i], color=colors[i], alpha=0.25)
ax[1].scatter(x, f_posterior[i], color=colors[i], alpha=0.25)
ax[0].spines['top'].set_visible(False)
ax[0].spines['right'].set_visible(False)
ax[0].set_title("GPy posterior")
ax[1].spines['top'].set_visible(False)
ax[1].spines['right'].set_visible(False)
ax[1].set_title("Our posterior")
plt.xlabel("X")
plt.ylabel("f posterior")
plt.show()
They are pretty much the same. However, we cheated a little, because we already knew the error variance.
### Posterior predictive¶
We can use the same formalism as above to derive the posterior predictive distribution, i.e. the distribution of function values $f^*$ for new observations $\mathbf{x}^*$. This is useful when we want to do prediction.
Usually the predictive posterior is given like this:
\begin{align*} p(f^* \mid \mathbf{y}, \mathbf{x}, \mathbf{x}^*) = \int p(f^* \mid f) \ p(f \mid \mathbf{y}, \mathbf{x}) \ \mathrm{d}f, \end{align*}
(where we included $\mathbf{x}$ for clarity). However, since our original data set $\mathbf{y}$ and $f^*$ have a joint normal distribution, we can just use Gaussian conditioning again. Later, when we are using non-normal likelihoods, we will need to come back to this formulation.
We start again by writing down the joint distribution of $\mathbf{y}$ and the unobserved function values $f^*$:
\begin{align*} \left[ \begin{array}{c} \mathbf{y} \\ {f}^* \end{array} \right] \sim \mathcal{N} \left( \mathbf{0}, \left[ \begin{array}{cc} k(\mathbf{x}, \mathbf{x}') + \sigma^2 \mathbf{I} & k(\mathbf{x}, {\mathbf{x}}^*) \\ k({\mathbf{x}}^*, \mathbf{x}) & k({\mathbf{x}}^*, {\mathbf{x}^*}') \end{array} \right] \right). \end{align*}
Conditioning on $\mathbf{y}$ yields:
\begin{align*} f^* \mid \mathbf{y}, \mathbf{x}, \mathbf{x}^* & \sim \mathcal{GP}\left({m}^*({\mathbf{x}^*}), {k}^*({\mathbf{x}^*}, {\mathbf{x}^*}')\right),\\\\ {m}^*({\mathbf{x}^*}) & = k({\mathbf{x}^*}, \mathbf{x})\left( k(\mathbf{x}, \mathbf{x}') + \sigma^2 \mathbf{I} \right)^{-1} \mathbf{y},\\ {k}^*({\mathbf{x}^*}, {\mathbf{x}^*}') & = k({\mathbf{x}^*}, {\mathbf{x}^*}') - k({\mathbf{x}^*}, \mathbf{x}) \left( k(\mathbf{x}, \mathbf{x}') + \sigma^2 \mathbf{I} \right)^{-1} k(\mathbf{x}, {\mathbf{x}^*}). \end{align*}
This is the exact same formulation as above, only that we replaced some of the old data with the new data $\mathbf{x}^*$.
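In practice these conditioning formulas are only a few lines of linear algebra. A minimal sketch, assuming a unit-variance squared-exponential kernel; the helper names `rbf` and `posterior_predictive` are mine, not GPy's:

```python
import numpy as np

def rbf(a, b, var=1.0, ls=1.0):
    """Squared-exponential kernel between column vectors a (n,1) and b (m,1)."""
    return var * np.exp(-0.5 * (a - b.T) ** 2 / ls ** 2)

def posterior_predictive(x, y, x_star, noise=0.1):
    """Predictive mean m* and covariance k* of f* at new inputs x_star."""
    K = rbf(x, x) + noise * np.eye(len(x))   # k(x, x) + sigma^2 I
    K_s = rbf(x, x_star)                     # k(x, x*)
    K_ss = rbf(x_star, x_star)               # k(x*, x*)
    m_star = K_s.T @ np.linalg.solve(K, y)
    k_star = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return m_star, k_star
```

With a near-zero noise term the predictive mean at the training inputs reproduces $\mathbf{y}$, which is a useful sanity check.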
### Prediction¶
Now that the theory is established, we optimize the kernel parameters using m.optimize(), such that they appropriately fit the data. Then we predict the responses $\hat{\mathbf{y}}$ using our trained Gaussian process.
It does not make too much sense to predict the already known values $f$, but posterior predictive checks are always a quick way to check model assumptions.
In [14]:
m.optimize()
y_hat = m.predict(x)
... and plot it again.
In [15]:
_, ax = plt.subplots()
ax.scatter(x, y, color="blue")
ax.scatter(x, y_hat[0], color="black")
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.xlabel("X")
plt.ylabel("f")
plt.show()
https://greprepclub.com/forum/the-positive-integers-m-and-n-leave-a-remainder-of-2-and-8930.html
# The positive integers m and n leave a remainder of 2 and 3,
The positive integers m and n leave a remainder of 2 and 3, [#permalink] 14 Apr 2018, 02:49
The positive integers m and n leave a remainder of 2 and 3, respectively, when divided by 6.
$$m > n$$.
Quantity A: The remainder when $$m + n$$ is divided by 6
Quantity B: The remainder when $$m - n$$ is divided by 6
A. The quantity in Column A is greater
B. The quantity in Column B is greater
C. The two quantities are equal
D. The relationship cannot be determined from the information given
Re: The positive integers m and n leave a remainder of 2 and 3, [#permalink] 14 Apr 2018, 22:35
2
KUDOS
Possible values for $$m = 2, 8, 14, 20, 26, \ldots$$
Possible values for $$n = 3, 9, 15, 21, 27, \ldots$$
Possible values for $$m+n = 11, 17, 35, \ldots$$; in every case, when these numbers are divided by $$6$$, the remainder is $$5$$.
When choosing numbers for $$m-n$$ we should make sure the result is positive, since $$m>n$$.
Possible values for $$m-n = 5, 11, 17, \ldots$$; in every case the remainder when divided by $$6$$ is $$5$$.
Option C.
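A quick brute-force check of the argument above (a sketch, not part of the original post): every valid pair leaves remainder 5 for both the sum and the difference.

```python
# m ≡ 2 (mod 6) and n ≡ 3 (mod 6); check both remainders over many pairs
for m in range(2, 200, 6):
    for n in range(3, 200, 6):
        if m > n:  # the question requires m > n
            assert (m + n) % 6 == 5
            assert (m - n) % 6 == 5
print("both remainders are always 5")
```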
_________________
This is my response to the question and may be incorrect. Feel free to rectify any mistakes
http://star-www.rl.ac.uk/star/docs/sun67.htx/sun67ss105.html
### SLA_GMSTA
UT to GMST (extra precision)
ACTION:
Conversion from universal time UT1 to Greenwich mean sidereal time, with rounding errors minimized.
CALL:
D = sla_GMSTA (DATE, UT1)
##### GIVEN:
DATE D UT1 date as Modified Julian Date (integer part of JD$-$2400000.5)
UT1 D UT1 time (fraction of a day)
##### RETURNED:
sla_GMSTA D Greenwich mean sidereal time (radians)
NOTES:
(1)
The algorithm is derived from the IAU 1982 expression (see page S15 of the 1984 Astronomical Almanac).
(2)
There is no restriction on how the UT is apportioned between the DATE and UT1 arguments. Either of the two arguments could, for example, be zero and the entire date + time supplied in the other. However, the routine is designed to deliver maximum accuracy when the DATE argument is a whole number and the UT1 argument lies in the range $[\,0, 1\,]$, or vice versa.
(3)
See also the routine sla_GMST, which accepts the UT1 as a single argument. Compared with sla_GMST, the extra numerical precision delivered by the present routine is unlikely to be important in an absolute sense, but may be useful when critically comparing algorithms and in applications where two sidereal times close together are differenced.
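As an illustration of note (1), the IAU 1982 expression behind this routine can be sketched in a few lines of Python (a rough transcription under my own naming, not the SLALIB Fortran source):

```python
import math

S2R = 7.272205216643039903195175e-5  # seconds of time to radians

def gmst_iau1982(date, ut1):
    """Approximate GMST in radians from the IAU 1982 expression.

    date -- UT1 date as a Modified Julian Date (ideally a whole number)
    ut1  -- UT1 time as a fraction of a day
    """
    # Julian centuries since J2000.0 (MJD 51544.5), fraction included
    t = (date + ut1 - 51544.5) / 36525.0
    # Earth-rotation term plus the IAU 1982 GMST polynomial (seconds of time)
    angle = (2.0 * math.pi * ((date % 1.0) + (ut1 % 1.0))
             + S2R * (24110.54841
                      + (8640184.812866 + (0.093104 - 6.2e-6 * t) * t) * t))
    # normalize into [0, 2*pi)
    return angle % (2.0 * math.pi)
```

At J2000.0 (`date=51544`, `ut1=0.5`) this evaluates to about 4.895 rad, i.e. roughly 18.697 hours of sidereal time.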
http://beast.community/tempest_tutorial
This tutorial describes the use of TempEst to examine the temporal signal of a data set and to look for problematic or erroneous sequences.
In this tutorial, we will explore the use of the interactive graphical program TempEst (formerly known as Path-O-Gen) to examine virus sequence data that has been sampled through time to look for problematic sequences and to explore the degree and pattern of temporal signal. This can be a useful way of examining the data for potential issues before committing significant time to running BEAST.
## Building a non-molecular clock tree
To examine the relationship between genetic divergence and time (temporal signal), we require a phylogenetic tree constructed without assuming a molecular clock. There is a wide range of suitable software packages (e.g., PhyML, RAxML, GARLI), but for this tutorial we are going to use IQ-Tree, which uses a fast and effective stochastic algorithm to infer phylogenetic trees by maximum likelihood.
Install IQ-Tree using the instructions on the website and open a command-line prompt, navigating to the directory containing the data file ice_viruses.fasta.
To build a maximum likelihood phylogenetic tree using the GTR+gamma model type:
iqtree -s ice_viruses.fasta -m GTR+G
This will create a set of files in the directory containing various outputs and results:
ice_viruses.fasta.bionj
ice_viruses.fasta.ckp.gz
ice_viruses.fasta.iqtree
ice_viruses.fasta.log
ice_viruses.fasta.mldist
ice_viruses.fasta.treefile
ice_viruses.fasta.uniqueseq.phy
For our purposes we only need the maximum likelihood tree file ice_viruses.fasta.treefile. You can delete the other files if you like.
Run TempEst by double clicking on its icon. TempEst is an interactive graphical application for examining the temporal signal in a tree of time-stamped sequences by plotting the divergence of each tip from the root against the date of sampling (a root-to-tip plot).
Once running, TempEst will look similar irrespective of which computer system it is running on. For this tutorial, the Mac OS X version will be shown but the Linux & Windows versions will have exactly the same layout and functionality.
When started, TempEst will immediately display a file selection dialog box in which you can select the tree that you made in the previous section.
Select ice_viruses.fasta.treefile and click Open.
## Parsing dates of sampling
Once the tree is loaded the main window will appear and look like this:
Ignore the panel on the left for the moment. The first thing that needs doing is to give the date of sampling to each of the sequences.
The actual year of sampling is given at the end of the name of each taxon. To specify the dates of the sequences in BEAUti we will use the Parse Dates button at the top of the panel. Clicking this will make a dialog box appear:
This operation attempts to extract the dates from the taxon names. It works by trying to find a numerical field within each name. This dialog box is the same as that in BEAUti and there are a wide range of options for doing this - See this page for details about the various options for setting dates in this panel. For these sequences you can set the options to look like the figure above: Defined just by its order, Order: last and Parse as a number option.
When parsing a number, you can ask BEAUti to add a fixed value to each date which can be useful for transforming a 2 digit year into a 4 digit year. Because all dates are specified in a four digit format in this case, no additional settings are needed. So, we can press OK.
The table will now have the year of sampling for each virus in the Dates column. Click on the Dates column header to sort the dates and check that they are all correct.
## The temporal signal and rooting
We can now explore the data using the tabs at the top of the window - Tree, Root-to-tip & Residuals. If you click on the Tree tab you will see the tree as loaded from the tree file. Because we constructed this tree using a non-molecular-clock model, it will be arbitrarily rooted. If you look at the date of each virus in the tree you will see that there is no correlation with the horizontal position:
Now switch to the Root-to-tip panel. This shows a plot of the divergence from the root of the tree against time of sampling (a so-called ‘Root to tip plot’):
You can see that there is very little correlation in this plot (the line is the best-fit regression). In the table on the left you can see the Correlation Coefficient is 0.35. This lack of correlation is expected as the root is arbitrarily set by the phylogeny reconstruction software and thus divergence from root is meaningless. TempEst can try to find the root of the tree that optimizes the temporal signal. It does this by trying all possible roots and picks the one that produces the optimal value of a range of statistics. The function it uses is selected in the menu at the top left. The options are to minimize the mean of the squares of the residuals (residual-mean-squared), or to maximize the correlation coefficient (correlation) or R2 (R squared). These are all ad hoc procedures and no particular one is best but residual-mean-squared may be most consistent with the investigations here.
Click Best-fitting root to root the tree at the place that minimizes the mean of the squares of the residuals.
Now there is a better correlation between the dates of the tips and the divergence from the root (the correlation coefficient has nearly doubled). Return to the tree to look where the root was placed:
To make the tree easier to view, switch the Order option in the panel at the bottom to increasing. This rotates each node so the branch with the most tips is at the top. You can see now that there are 3 main lineages in this influenza tree - the human lineage at the top starting with the BrevigMission virus from 1918, the swine lineage in the middle and the avian lineage at the bottom.
The tree branches are coloured to show the residual with blue for tips with positive residuals (above the regression line), red for negative.
On the left hand side of the window there is a table of statistics:
As well as the statistical metrics (Correlation Coefficient, R squared and Residual Mean Squared) there are the following:
Date range
The span of dates for the viruses.
Slope (rate)
The slope of the regression line. This is an estimate of the rate of evolution in substitutions per site per year.
X-Intercept (TMRCA)
The point on the x-axis at which the regression line crosses. This is an estimate of the date of the root of the tree.
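These quantities come from an ordinary least-squares fit of root-to-tip divergence against sampling date. A sketch of the same calculation (the dates and divergences below are made up purely for illustration):

```python
import numpy as np

# Hypothetical root-to-tip divergences and sampling dates (decimal years)
dates = np.array([1918.0, 1934.0, 1957.0, 1977.0, 1999.0])
divergence = np.array([0.02, 0.05, 0.09, 0.13, 0.17])

# Ordinary least-squares regression of divergence on sampling date
slope, intercept = np.polyfit(dates, divergence, 1)

rate = slope                              # Slope (rate): subs/site/year
tmrca = -intercept / slope                # X-Intercept (TMRCA): root date
r = np.corrcoef(dates, divergence)[0, 1]  # Correlation Coefficient

print(f"rate ~ {rate:.5f} subs/site/year, TMRCA ~ {tmrca:.1f}, r ~ {r:.3f}")
```

TempEst does exactly this kind of fit internally once a root has been chosen; the toy numbers here just show how slope, x-intercept, and correlation relate.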
## Finding and interpreting problematic sequences
Switch to the Root-to-tip panel. Look for the point furthest from the line in the top left hand quadrant. If you click and drag your pointer over the point it will be highlighted in a blue colour:
If you now switch to the Residuals panel you will see a plot of all the residuals (the tangential deviation from the regression line). The virus we selected will still be highlighted and you can see it is an outlier. It is often easiest to select outliers in this plot.
If we now go back to the Tree panel you will see the label of the selected virus highlighted. You can use the Zoom slider in the bottom panel to zoom in on the area of the tree and the Font Size selector to increase the size of the label:
Looking at this part of the tree you should be able to see why A.BrantGoose.1.1917 is such an outlier. Although it is supposedly sampled from 1917, it is actually identical to a bunch of 4 other bird viruses from Ohio in 1999. This suggests that this virus sequence was the result of contamination by one of the other viruses in the same lab (another possibility is the mis-labelling of samples).
One further tool is available that can be useful to find problematic sequences. Turn on the Show ancestor traces option at the bottom of the panel:
This option draws a green line from the selected virus to the point on the regression line where the immediate ancestor should lie (i.e., given its divergence from the root). If the line, like the example here, is horizontal and extends to the left, this suggests the sequence is actually more recent than the date that it has been given. This may be indicative of contamination from a recent virus than the supposed date or a mislabelling (another issue is that the date format could be wrongly parsed).
Ancestor lines that are vertical upwards suggest that the sequence has too many unique differences compared to its ancestor. This may be indicative of sequencing error, degraded samples, host restriction-factor editing, alignment issues, or recombination.
Ancestor lines that are horizontal and to the right mean that the supposed date of the virus is more recent than the divergence would suggest. This can be due to contamination by an older virus (or a mislabelling).
The ancestor lines can never be downwards as this would denote a negative divergence. Shorter ancestor lines and ones that lie close to the regression gradient are less likely to be problematic.
Go to the Residuals panel and select the most left hand (negative) residual point:
You can see here that the selected virus, A.swine.StHyacinthe.148.1990, although supposedly from 1990, is identical to a virus from Iowa in 1930. A/swine/Iowa/15/1930 is a commonly used lab virus and thus this is likely, again, a contamination issue.
There are 4 other examples of this type of problematic sequences in this data set. See if you can identify them.
## Cleaning the data set
Once all of the problematic sequences have been identified, they can be removed from the alignment and the tree rebuilt to see the effect they were having on the analysis. Download the file ice_viruses_cleaned.fasta, which has had 7 problematic sequences removed.
Using the instructions given above, for the file ice_viruses_cleaned.fasta, repeat the tree building procedure using IQ-Tree, load the resulting tree into TempEst and parse the dates for each virus. Switch to the Root-to-tip panel, turn on Best-fitting root and compare the plot to the one you got before. You should see something like this:
You can see that the correlation between divergence from root and time of sampling is much better (the Correlation Coefficient in the table on the left is now 0.733).
You will notice that the relationship is still not very clean. The reason for this is that there are 3 different lineages here (human, swine and avian) and that combining them all into a single tree (which is then fixed) may obscure their individual patterns.
The final file in this analysis, ice_viruses_human.fasta, is one that contains only the human viruses.
Repeating the steps above to build a tree and load it into TempEst now shows this relationship:
In this image everything after 1975 is highlighted in blue. What you should be able to see is that there are actually two lines of points here, one from 1918 to 1957 and one from 1977 to 2000. There is a gap in time (in the horizontal axis) of about 20 years but no gap in the divergence from the root.
https://www.gamedev.net/forums/topic/338873-leak-in-stdstringstream-on-msvc/
# Leak in std::stringstream on MSVC?
## Recommended Posts
After three years it is finally here: my first question post. I hope someone knows the answer to this. I found a memory leak in my engine and narrowed it down to the use of std::stringstream. This is a part of STL I seldomly use so perhaps I am doing something wrong. I created a test program under MS VC 6.0 (SP6) which is as follows:
#include <crtdbg.h>
#include <sstream>
void f( void )
{
std::stringstream ss;
}
void main( void )
{
f();
_CrtDumpMemoryLeaks();
}
Note I put the stringstream in f() to prevent the leak tracer from reporting something that gets deleted only at the end of main(). This reports two memory leaks as follows:
Detected memory leaks!
Dumping objects ->
{45} normal block at 0x003207B8, 33 bytes long.
Data: < C > 00 43 00 CD CD CD CD CD CD CD CD CD CD CD CD CD
{44} normal block at 0x00322FA8, 40 bytes long.
Data: < B > 98 B1 42 00 04 00 00 00 00 00 00 00 00 00 00 00
Object dump complete.
Is there something I must do to destroy a standard stringstream? Is this a known but undocumented issue with MS's STL implementation? Please let me know. Greetz, Illco
##### Share on other sites
Quote:
Original post by Illco: Is there something I must do to destroy a standard stringstream?
No, as it's allocated on the stack, the destructor is invoked at the end of "f"'s scope automatically.
Quote:
Original post by Illco: Is this a known but undocumented issue with MS's STL implementation? Please let me know.
I wouldn't be surprised as VC++ 6.0 has terrible C++ standard compliance (even with service packs applied) and is known to have a poor implementation of the C++ standard library. You should not be using VC++ 6.0's compiler anymore period. I suggest if you must have an MS C++ compiler you either get VC++ 7.1 toolkit (its free, no IDE but you can hook it up to VC++ 6.0 IDE) or use VC++ 8.0 beta 2 (which comes with an IDE).
I tested that code on both VC++ 7.1 & 8.0; no leaks detected, as expected.
##### Share on other sites
Quote:
No, as it's allocated on the stack, the destructor is invoked at the end of "f"'s scope automatically.
That's what I thought. Ok. I just wanted to know my code is correct. So I think we can conclude we have a bug here.
Quote:
I wouldn't be surprised as VC++ 6.0 has terrible C++ standard compliance (even with service packs applied) and is known to have a poor implementation of the C++ standard library. You should not be using VC++ 6.0's compiler anymore period.
I knew that -- perhaps time for STLPort or something. Yes I know it is old but the choice to switch over is not free of costs so we will have to stick with it for a while.
##### Share on other sites
Quote:
Original post by Illco: I knew that -- perhaps time for STLport or something. Yes I know it is old but the choice to switch over is not free of costs so we will have to stick with it for a while.
If you can't use another compiler then getting STLport i think would definitely be the way to go.
##### Share on other sites
I wouldn't be so quick to dismiss it as a bug. It's quite likely that they're using a pool allocator for small strings (like empty ones) so that when the string stream is destroyed, the allocated string is returned to the pool rather than to the system. This would show up as a 'leak' in this case.
##### Share on other sites
it is not a Bug you fool. the Destructor for get called after the _CrtMemCheck. I used to have that problem too and if you want to see if there is no leak. create an Pointer of the list Object Instead and delete the Pointer before calling _CrtMemCheck so the destructor get called first then you will see that you have no leak.
##### Share on other sites
Quote:
Original post by BornToCode: it is not a Bug you fool. the Destructor for get called after the _CrtMemCheck.
If you actually looked at the code the std::basic_stringstream's destructor will be invoked at the end of the function "f"'s scope so therefore its called before _CrtMemCheck.
I think Dean Harding may be right here, but then that would be slightly odd: first, it is most likely that std::basic_stringstream's stream buffer has std::basic_string as a data member and doesn't need to allocate it on the heap; secondly, the default allocator type std::allocator (which the type aliases std::stringstream & std::string use) generally does no pooling but rather uses operator new/delete for allocation/deallocation, unless in VC++ 6.0's case it is using another custom allocator type.
##### Share on other sites
Quote:
It is not a Bug you fool.
Don't be so quick about calling someone else a fool. If you, besides the code, had read the post, you would have seen this is exactly why I added a function f().
Dean and snk_kid: thank you. But what do I do about it to fix such behaviour (besides switching compilers etc.)?
##### Share on other sites
Quote:
Original post by snk_kid: I think Dean Harding may be right here, but then that would be slightly odd: first, it is most likely that std::basic_stringstream's stream buffer has std::basic_string as a data member and doesn't need to allocate it on the heap; secondly, the default allocator type std::allocator (which the type aliases std::stringstream & std::string use) generally does no pooling but rather uses operator new/delete for allocation/deallocation, unless in VC++ 6.0's case it is using another custom allocator type.
Yeah, I'm not sure what it uses in VC6. I know in 2002/2003 they use a different std::basic_string which allocates strings < 16 characters on the stack, so that could explain why you don't see it in those compilers.
A simple way to see if I'm right or not is to call f() in a loop and see if the number of 'leaked' blocks stays small. Like this:
#include <crtdbg.h>
#include <sstream>
void f( void )
{
std::stringstream ss;
}
void main( void )
{
for(int i = 0; i < 10000; i++)
f();
_CrtDumpMemoryLeaks();
}
If the number of 'leaked' blocks is small (I'd expect there to be only 1 still) then you don't need to worry, as it's not really a 'leak' as such - the memory is just sitting in a pool, waiting for the next time you allocate one.
##### Share on other sites
Quote:
If the number of 'leaked' blocks is small (I'd expect there to be only 1 still) then you don't need to worry, as it's not really a 'leak' as such - the memory is just sitting in a pool, waiting for the next time you allocate one.
Yes -- the same two blocks remain but no more. But it still annoys me -- I want no leaks reported at all. Is there any way to clean the pool the STL keeps track of? Or is there any way to have the CRT debug reporter not report these?
Background: I wanted to do things more cleanly, as before I converted an integer to a string using itoa() or sprintf(). I thought: let's use STL fully this time around.
https://aimsciences.org/article/doi/10.3934/dcdss.2016023
# American Institute of Mathematical Sciences
June 2016, 9(3): 697-715. doi: 10.3934/dcdss.2016023
## Generalized Wentzell boundary conditions for second order operators with interior degeneracy
1. Department of Mathematics, University of Bari Aldo Moro, Via E. Orabona 4, 70125 Bari, Italy
2. The University of Memphis, Mathematical Sciences, 373 Dunn Hall, Memphis, TN 38152-3240
3. Department of Mathematical Sciences, University of Memphis, 373 Dunn Hall, Memphis, TN 38152-3240, United States
4. Dipartimento di Matematica, Università degli Studi di Bari Aldo Moro, via E. Orabona 4, 70125 Bari
Received April 2015; Revised September 2015; Published April 2016.
We consider operators in divergence form, $A_1u=(au')'$, and in nondivergence form, $A_2u=au''$, provided that the coefficient $a$ vanishes in an interior point of the space domain. Characterizing the domain of the operators, we prove that, under suitable assumptions, the operators $A_1$ and $A_2$, equipped with general Wentzell boundary conditions, are nonpositive and selfadjoint on spaces of $L^2$ type.
Citation: Genni Fragnelli, Gisèle Ruiz Goldstein, Jerome Goldstein, Rosa Maria Mininni, Silvia Romanelli. Generalized Wentzell boundary conditions for second order operators with interior degeneracy. Discrete & Continuous Dynamical Systems - S, 2016, 9 (3) : 697-715. doi: 10.3934/dcdss.2016023
https://galoisrepresentations.wordpress.com/
## New Results in Modularity, Part II
This is part two of series on work in progress with Patrick Allen, Ana Caraiani, Toby Gee, David Helm, Bao Le Hung, James Newton, Peter Scholze, Richard Taylor, and Jack Thorne. Click here for Part I
It has been almost 25 years since Wiles first announced his proof of Taniyama-Shimura, and, truthfully, variations on his method have been pretty much the only game in town since then (this paper included). In all generalizations of this argument, one needs to have some purchase on the integral structure of the automorphic forms involved, which requires that they contribute in some way to the cohomology of an arithmetic manifold (locally symmetric space). This is because it is crucial to be able to exploit the integral structure to study congruences between modular forms. Let’s briefly recall Wiles’ strategy. One starts out with a residual representation
$\overline{\rho}: G_S \rightarrow \mathrm{GL}_2(\mathbf{F}_p)$
which one assumes to be modular, that is, it is the mod-p reduction of a representation associated to a modular form which is assumed to have some local properties similar to rho. One then considers a deformation ring R which captures all deformations of the residual representation which “look modular” of the right weight and level (some aspects of Serre’s conjecture due to Ribet are employed here, although Skinner-Wiles came up with a base change trick to circumvent some of these difficulties). On the automorphic side, one looks at the cohomology groups M = H^1(X,Z_p)_m of modular curves (X = X_0(N)) localized at a maximal ideal m of the Hecke algebra T associated to rhobar, and proves that there is a surjective map:
$R \rightarrow \mathbf{T}_{\mathfrak{m}}.$
Already many deep theorems have been used to arrive at this point. To begin, one needs Galois representations associated to modular forms, but moreover, one needs to know that these representations satisfy all of the expected local-global compatibilities at the primes in S. In the case of modular forms, all of these facts were basically known before Wiles.
The next step, which lies at the heart of the Taylor-Wiles method, is to introduce certain auxiliary sets Q of carefully chosen primes, and consider the spaces M_Q = H^1(X_1(Q),Z_p)_m which relate to spaces of modular forms of larger level. If T_Q is the associated Hecke algebra, and R_Q is the corresponding deformation ring in which ramification is allowed not only at S but now also at Q, there are compatible maps $R_Q \rightarrow \mathbf{T}_Q$ lifting the map $R \rightarrow \mathbf{T}_{\mathfrak{m}}$ above.
The key point concerning how one chooses the sets Q is to ensure that, even though R_Q may get bigger, its infinitesimal tangent space does not. Hence all the R_Q are quotients of some fixed ring R_oo = Z_p[[X_1,…,X_q]]. (Here $q = |Q|$.) In this process, all the rings also have an auxiliary action of a ring S_oo = Z_p[[T_1,…,T_q]] of diamond operators, coming from the Galois group of X_1(Q) over X_0(Q) on the automorphic side, and the inertia groups at Q on the Galois side. The action of S_oo on these modules factors through R_Q by construction, by local-global compatibility at primes dividing Q. After throwing away the Galois representations almost entirely (but keeping the diamond operators), one can patch the modules M_Q/p^n for different sets of primes Q, and arrive at a patched module M_oo for R_oo and S_oo such that:
• The module $M_{\infty}$ has positive rank as an $S_{\infty}$ module.
• If $\mathfrak{a}$ is the augmentation ideal of $S_{\infty},$ then $R_{\infty}/\mathfrak{a} = R,$ and $M_{\infty}/\mathfrak{a} = M.$
The first statement may be viewed as saying that there are “lots” of automorphic forms. On the other hand, the fact that R_oo has the same dimension as S_oo says that there are not “too many” Galois representations. Indeed, this friction is enough in this context to prove that M_oo is free over R_oo, and then to deduce the same claim for M over R, from which R = T follows. (Already included here is an innovation due to Diamond where one deduces freeness as a consequence rather than building it in as an assumption.) The argument I have very briefly sketched above is really only a proof of modularity in the minimal case. The general case requires a completely separate argument to bootstrap from minimal to non-minimal level using two further ingredients: Wiles’ numerical criterion, and a lower bound on the congruence ideal necessary to apply the numerical criterion, which ultimately follows from Ihara’s Lemma.
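For readers who want the commutative algebra behind “this friction is enough,” here is a schematic of the standard step (a sketch only: I am suppressing framing variables and assuming, as holds in the classical setting after suitable choices, that M_oo is finite free over S_oo):

```latex
% Schematic setup: S_\infty = \mathbf{Z}_p[[T_1,\dots,T_q]] acts on M_\infty
% through R_\infty = \mathbf{Z}_p[[X_1,\dots,X_q]], with M_\infty finite free
% over S_\infty. Comparing depth and dimension:
\begin{aligned}
\operatorname{depth}_{R_\infty}(M_\infty)
  &\ge \operatorname{depth}_{S_\infty}(M_\infty)
   = \dim S_\infty = 1 + q, \\
\dim R_\infty &= 1 + q.
\end{aligned}
```

Hence $\operatorname{depth}_{R_\infty}(M_\infty) = \dim R_\infty$, and the Auslander-Buchsbaum formula over the regular local ring $R_\infty$ forces $\operatorname{pd}_{R_\infty}(M_\infty) = 0$, i.e. $M_\infty$ is free over $R_\infty$; reducing modulo the augmentation ideal $\mathfrak{a}$ then gives freeness of M over R, whence R = T.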
The “first generation” of improvements to Wiles consisted of understanding enough integral p-adic Hodge theory to make the required arguments on the Galois side. Notable papers here include the work of Conrad-Diamond-Taylor and Breuil-Conrad-Diamond-Taylor (but let us also not forget here the contribution of The Hawk). Improvements along these lines continue to today, and are very closely intertwined with the p-adic Langlands program and work of Breuil, Colmez, Kisin, Emerton, Paškūnas, and many others.
The “second generation” of improvements consisted of relaxing the assumption that R_oo is smooth, by allowing instead R_oo to have multiple components (but still of the same dimension) associated to different components in the local deformation rings at primes in S (at p and away from p). This innovation was due to Kisin, who also introduced the notion of framing to handle this.
The “third generation” of improvements (somewhat orthogonal to the second) came from replacing 2-dimensional representations with n-dimensional representations, but still under some very restrictive assumptions on the image of rho. One key consequence of these assumptions is that the spaces of modular forms M_Q = H^*(X_1(Q),Z_p)_m all occur inside a single cohomology group, which allows one to control the growth of these spaces when patching. Here one thinks of the work of Clozel-Harris-Taylor. Also pertinent is that the analog of Ihara’s Lemma is open for higher rank groups; Taylor came up with a technique to bypass it when proving modularity lifting theorems now known as “Ihara avoidance.”
(Of course there were many other developments less directly relevant to this post, including but not limited to Skinner-Wiles and Khare-Wintenberger.)
The problem with considering general representations for GL(n) for n > 2, even over Q, is that the automorphic forms are spread over a number of different cohomology groups, in fact in some range [q_0,q_0 + 1, … ,q_0 + l_0] for specific invariants q_0 and l_0.
This manifests itself in two ways:
1. There are not enough automorphic forms; the patched modules M_oo will not be free over S_oo.
2. There are not enough Galois representations: the ring R_oo does not have the same dimension as S_oo but rather dim R_oo = dim S_oo – l_0.
Of course these problems are related! My work with David Geraghty was precisely about showing how to make these problems cancel each other out. The rough idea is as follows. The cohomology groups H^*(X_1(Q),Z_p)_m which contain interesting classes in characteristic zero occur in the range [q_0,…,q_0+l_0]. Suppose one knows this to be true integrally as well, even with coefficients over F_p instead of Z_p. Then instead of patching the cohomology groups M_Q themselves, one patches complexes P_Q of length l_0. The result is a complex P_oo of finite free S_oo modules of length l_0, with an action of R_oo on the cohomology of this complex. But the only way the cohomology of this complex can be small enough to admit an action of R_oo is if the complex is a free resolution of the patched module M_oo of cohomology groups in the extreme final degree, and moreover it also implies that M_oo is big enough as in Wiles’ original argument to give an R=T theorem. Note that it is crucial here that one work with the torsion in integral cohomology. It is quite possible that, at all auxiliary levels Q, there are no more automorphic forms at level Q than there were at level 1. (This can only happen for l_0 > 0, and the idea that torsion should be a suitable replacement is the moral of my paper with Barry Mazur.) This argument is also compatible with the improvements to the method including Taylor’s “Ihara Avoidance” argument.
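Schematically, the way the two deficits cancel can be phrased as a dimension count (again only a sketch, with the precise hypotheses simplified away):

```latex
% P_\infty: a complex of finite free S_\infty-modules concentrated in
% l_0 + 1 consecutive degrees, with R_\infty acting on H^*(P_\infty).
% Two competing bounds on the support of the cohomology:
\begin{aligned}
\dim_{S_\infty} H^*(P_\infty) &\ge \dim S_\infty - l_0
  && \text{(a perfect complex of length } l_0
     \text{ cuts support down by at most } l_0\text{)},\\
\dim_{S_\infty} H^*(P_\infty) &\le \dim R_\infty = \dim S_\infty - l_0
  && \text{(the action factors through } R_\infty\text{)}.
\end{aligned}
```

Equality in both bounds forces the cohomology of $P_\infty$ to be concentrated in the top degree, so that $P_\infty$ is a free resolution of $M_\infty = H^{l_0}(P_\infty)$, and $M_\infty$ is maximal Cohen-Macaulay over $R_\infty$, recovering the “big enough” conclusion of the classical argument.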
On the other hand, there is a big problem. This argument required many inputs which were completely unknown at the time we worked this out, so our results were very conditional. To be precise, our results were conditional on the following desiderata:
1. The existence of Galois representations on Hecke rings T which acted as endomorphisms of H^*(X,Z/p^nZ) for locally symmetric spaces X associated to GL(n)/F.
2. The stronger claim that the Galois representations constructed in part 1 satisfied the correct “local-global” compatibility statements for all v in S (including v dividing p).
3. The vanishing of the cohomology groups H^i(X,Z/p^nZ)_m outside the range i in [q_0,…,q_0+l_0], for a non-Eisenstein ideal m.
A different approach to some of these questions (which Matt and I discussed, see here) involves first passing to completed cohomology, where one expects (or hopes!) that all the cohomology groups except in degree q_0 should vanish after localization at a non-Eisenstein maximal ideal.
The first big breakthrough was the result of Scholze, who proved part 1 above, at least up to issues concerning a nilpotent ideal (this was discussed previously on this blog). Another innovation appeared in Khare-Thorne, where it was observed that one can sometimes drop the third assumption under the strong condition that there exist global automorphic forms with the exact level structure corresponding to the original representation. (Unfortunately, in the l_0 > 0 setting, there is no way to produce such forms.)
So this is roughly where we stood in 2016. The key new ingredient which led to this project was the new result of Caraiani and Scholze proving vanishing theorems for the cohomology of non-compact Shimura varieties in degrees above the middle dimension (localized at m) under the assumption of certain genericity hypotheses on m. Since the cohomology of the boundary (for suitably chosen Shimura varieties) is precisely related to the cohomology of arithmetic locally symmetric spaces for GL(n) over CM fields, this allowed for the first time a new construction of the Galois representations for GL(n) which directly related them to the Galois representations coming from geometry. (I say “directly related,” but perhaps I mean simply more direct than Peter’s original construction.) In particular, it was clear to Caraiani and Scholze that this result should have implications for the required local-global compatibility result above. Meanwhile, the IAS had just started a new series of workshops on emerging topics. I guess that Richard must have had conversations with Ana about her work with Peter, which led them to choosing this as the theme, namely:
Ana Caraiani and Peter Scholze are hopeful of extending the methods of their joint paper arXiv:1511.02418 to non-compact Shimura varieties. This would give a new way to attack local-global compatibility at p for some of the Galois representations Scholze attached to torsion classes in the cohomology of arithmetic locally symmetric spaces. The aim of this workshop will be to understand how much local-global compatibility can be proved and to explore the consequences of this, particularly for modularity questions.
So now (1) was available, there was an approach to (2), and a technique for avoiding (3). One issue with the Khare-Thorne trick, however, was that it involved localizing at some prime ideal of characteristic zero, and so did not interact so well with Ihara Avoidance, which was crucial for any sort of applicable theorem. Here’s the subtlety, which can be described even in the case when l_0 = 0. The usual Ihara avoidance game is to compare deformation rings R and R’ at Steinberg level and ramified principal series level respectively (after making a base change to ensure that the place v above the relevant prime q satisfies N(v) = 1 mod p). Let M and M’ be the corresponding modules. One has that M/p = M’/p and R/p = R’/p. Suppose, however, that M behaved perfectly as expected, so that M_oo was free (even of rank one say) over S_oo and free over R_oo. What could happen, if one doesn’t have vanishing of cohomology outside a single degree, is that M’_oo/p = M_oo/p is free over S_oo/p, but that M’_oo is the cohomology of a non-trivial complex S_oo —> S_oo given by multiplication by p. So M’_oo is trivial in characteristic zero, even though M’_oo/p = M_oo/p. So this is a problem. But it is exactly a problem which was resolved during the workshop. The point, very loosely speaking, is that even though the complexes “S_oo” and “S_oo –>[p]—> S_oo” have the same H^0 after reducing modulo p and taking cohomology, their intersections with S_oo/p are quite different on the derived level, so if one can formulate a version of derived Ihara avoidance, then one is in good shape.
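A toy computation makes the last point concrete. Writing $S = S_\infty$ for brevity (this is my illustration, not notation from the paper), compare the free module $S$ with the two-term complex $K$ given by multiplication by $p$:

```latex
\begin{aligned}
K &= \bigl[\, S \xrightarrow{\ p\ } S \,\bigr]
   \ (\text{degrees } 0, 1):
  & H^0(K) &= 0, \quad H^1(K) = S/p,\\
K \otimes^{\mathbf{L}}_{S} S/p
  &\simeq \bigl[\, S/p \xrightarrow{\ 0\ } S/p \,\bigr]:
  & H^0 &= H^1 = S/p,\\
S \otimes^{\mathbf{L}}_{S} S/p
  &\simeq S/p \ \text{(degree 0 only)}. &&
\end{aligned}
```

So the module $S$ and the complex $K$ become indistinguishable if one only remembers their cohomology mod p, but their derived reductions differ (the complex picks up an extra Tor class), which is exactly the kind of distinction a derived formulation of Ihara avoidance can see.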
So what remained? First, there were a number of technical issues, some of which could be dealt with individually, and one had to make sure that all the fixes were compatible. For example, it is straightforward to modify the original strategy in my paper with David to handle the issue of only having Galois representations up to a nilpotent ideal of bounded nilpotence degree, but one had to make sure this would not interfere with the more subtle derived Ihara avoidance type arguments. Relevant here was the work of Newton and Thorne which placed some of the arguments with complexes more naturally in the derived category. Second, there was the issue of really proving local-global compatibility from the new results of Caraiani-Scholze. A particularly interesting case here was the ordinary case. The rough problem one has to deal with here is deducing that rho is ordinary from knowing that $\rho \oplus \rho^{\vee}$ is ordinary. But be careful — the latter representation is reducible and so really a pseudo-representation — so it’s not even clear what ordinary means here (though see work of Wake and Wang-Erickson, as well as of my student Joel Specter). It turns out that some interesting and subtle things turn up in this case which were found by the “team” of people who wrote up this section. (Although we achieved quite a lot in a week, there was obviously a list of details to be worked out, and we divided ourselves up into certain groups to work on each part of the paper.) But I think we were fairly confident at this point that everything would work out. What was my role in the writing up process, you ask? I was selected as the ENFORCER, who goes around harassing everybody else to work and write up their sections of the paper while sipping on Champagne. Presumably I was less selected for my organizational skills and more for my ability to tell Richard Taylor what to do.
So there we have it! It was clear even during the workshop that some improvements to our arguments were possible, but since the paper is already going to be quite long, we did not try to be completely comprehensive. I expect a number of improvements will follow shortly. I would not be surprised to see in a few years a modularity result for regular weight compatible systems over CM fields which are as complete as the ones (say) in BLGGT.
Finally, I should mention that while the paper is almost completely written, the usual caveats apply about work in progress which has not been completely written up (although we are almost done…)
## New Results In Modularity, Part I
I usually refrain from talking directly about my papers, and this reticence stems from wishing to avoid any appearance of tooting my own horn. On the other hand, nobody else seems to be talking about them either. Moreover, I have been involved recently in a number of collaborations with multiple authors, thus diluting my own contribution to the point where I am now happy to talk about them.
The first such paper I want to discuss has 9(!) co-authors, namely Patrick Allen, Ana Caraiani, Toby Gee, David Helm, Bao Le Hung, James Newton, Peter Scholze, Richard Taylor, and Jack Thorne. The reason for such a large collaboration is a story in itself which I will explain at the end of the second post. But for now, you can think of it as a polymath project, except done in a style more suited to algebraic number theorists (by invitation only).
In this first post, I will start by giving a brief introduction to the problem. Then I will state one of the main theorems and give some (I hope) interesting consequences. In the next post, I will be a little bit more precise about the details, and explain more precisely what the new ingredients are.
Like all talks in the arithmetic of the Langlands program, we start with:
The Triangle:
Let F be a number field, let p be a prime, and let S be a finite set of places containing all the infinite places and all the primes above p. Let G_S denote the absolute Galois group of the maximal extension of F unramified outside S. In many talks in the Langlands program, one encounters the triangle, which is a conjectural correspondence between the following three objects:
• A: Irreducible pure motives M/F (with coefficients) of dimension n.
• B: Continuous irreducible n-dimensional p-adic representations of G_S (for some S) which are de Rham at the places above p.
• C: Cuspidal algebraic automorphic representations $\pi$ of $\mathrm{GL}(n)/F.$
In general, one would like to construct a map between any two of these objects, leading to six possible (conjectural) maps, which we can describe as follows:
• A->B: This is really the only map we understand, namely, etale cohomology. (I’m being deliberately vague here about what a motive actually is, but whatever.)
• B->A: This is the Fontaine-Mazur conjecture, and maybe some parts of the standard conjectures as well, depending on exactly what a motive is.
• B->C: This is “modularity.”
• C->B: This is the existence of Galois representations associated to automorphic forms.
• A->C: We really think of this as A->B->C and also call this modularity.
• C->A: Again, this is a souped up version of C->B. But note, we still don’t understand how to do this even in cases where C->B is very well understood. For example, suppose that $\pi$ comes from a Hilbert modular form with integer coefficients of trivial level over a totally real field F of even degree. We certainly have an associated compatible family of Galois representations, and we even know that its symmetric square is geometric. But it should come from an elliptic curve, and we don’t know how to prove this. The general problem is still completely open (think Maass forms). On the other hand, often by looking in the cohomology of Shimura varieties, one proves C->A and uses this to deduce that C->B.
This triangle is also sometimes known as “reciprocity.” The other central tenet of the Langlands program, namely functoriality, also has implications for this diagram. Namely, there are natural operations which one can easily do in case B which should then have analogs in C which are very mysterious.
Weight Zero: For all future discussions, I want to specialize to the case of “weight zero.” On the motivic/Galois side, this corresponds to asking that the representations are regular, with Hodge-Tate weights which are distinct and consecutive, namely [0,1,2,…,n-1]. The hypothesis that the weights are distinct is a restrictive but crucial one — already the case when F = Q and the Hodge-Tate weights are [0,0] is still very much open (specifically, the case of even icosahedral representations). On the automorphic side, the weight zero assumption corresponds to demanding that the $\pi$ in question contribute to the cohomology of the associated locally symmetric space with constant coefficients.
For example, if n=2, then we are precisely looking at abelian varieties of GL(2) type over F (e.g. elliptic curves). This is an interesting case! We know they are modular if F is Q, or even a quadratic extension of Q. More generally, we know that if F is totally real, then such representations are at least potentially modular, that is, their restriction to some finite extension $F'/F$ is modular. This is often good enough for many purposes. For example, it is enough to prove many cases of (some version of) B->A. In this case, we have quite complete results, although still short of the optimal conjectures, especially in the case when the residual representation is reducible.
There are many other modularity lifting results generalizing those for n=2, but they really involve Galois representations whose images have extra symmetry properties. In particular, they are either restricted to representations which preserve (up to scalar) some orthogonal or symplectic form, or they remain unchanged if one conjugates the representation by an outer automorphism of G_F (for example when $F/F^+$ is CM and one conjugates by complex conjugation). There were basically no unconditional results which applied either in the situation that n > 2 or that F was not totally real, and the representation did not otherwise have some restrictive condition on the global image. Our first main theorem is to prove such an unconditional result. Here is such a theorem (specialized to weight zero):
Theorem [ACCGHLNSTT]: Let F be either a CM or totally real number field, and p a prime which is unramified in F. Let
$\rho: G_S \rightarrow \mathrm{GL}_n(\overline{\mathbf{Q}_p})$
be a continuous irreducible representation which is crystalline at v|p with Hodge-Tate weights [0,1,..,n-1]. Suppose that
1. The residual representation $\overline{\rho}$ has suitably big image.
2. The residual representation is “modular” in the sense that there exists an automorphic form $\pi_0$ for $\mathrm{GL}(n)/F$ of weight zero and level prime to p such that $\overline{r}(\pi_0) = \overline{\rho}.$
Then $\rho$ is modular, that is, there exists an automorphic representation $\pi$ of weight zero for $\mathrm{GL}(n)/F$ which is associated to $\rho.$
One could be more precise about what it means to have big image. In fact, I can do this by saying that it has enormous image after restriction to the composite of the Galois closure of F with the pth roots of unity. Here enormous is a technical term, of course. There is also a version of this theorem with an ordinary (rather than Fontaine-Laffaille) hypothesis (more on this next time).
Let me now give a few nice theorems which can be deduced from the theorem above:
Theorem [ACCGHLNSTT]: Let E be an elliptic curve over a CM field F. Then E is potentially modular.
When I had a job interview at MIT in 2006, I was asked by Michael Sipser, the chair at the time, to come up with a theorem which (in a best case scenario) I would hope to prove in 10 years. I said that I wanted to prove that elliptic curves over imaginary quadratic fields were modular. (Reader, I got the job … then went to Northwestern.) It is very gratifying indeed that, roughly 10 years later, this result has actually been proved and that I have made some contribution towards its eventual resolution. (OK, we have potential modularity rather than modularity, but that is splitting hairs…). It is also amusing to note that a number of co-authors were still in high school at this time! (Fact Check: OK, just one…)
In fact, one can improve on the theorem above:
Theorem [ACCGHLNSTT]: Let E be an elliptic curve over a CM field F. Then Sym^n(E) is potentially modular for every n. In particular, the Sato-Tate conjecture holds for E.
Finally, for an application of a different type, suppose that $\pi$ is a weight zero cuspidal algebraic automorphic representation for $\mathrm{GL}(2)/F.$ For each prime v of good reduction, one can associate to $\pi_v$ a pair of Satake parameters $\{\alpha_v,\beta_v\}$ satisfying $|\alpha_v \beta_v| = N(v).$ The Ramanujan conjecture says that one has
$|\alpha_v| = |\beta_v| = N(v)^{1/2}.$
An equivalent formulation is that the sum $a_v$ of these two eigenvalues satisfies $|a_v| \le 2 N(v)^{1/2}.$ We prove the following:
Theorem [ACCGHLNSTT]: Let F be a CM field, and let $\pi$ be a weight zero cuspidal algebraic automorphic representation for $\mathrm{GL}(2)/F.$ Then the Ramanujan conjecture holds for $\pi.$
If F is totally real, then the Ramanujan conjecture follows from Deligne’s theorem. One can associate to $\pi$ a motive, whose Galois representation is either $\rho = \rho(\pi)$ or $\rho^{\otimes 2}.$ Then, by applying purity to these geometric representations, one deduces the result. (Of course, this was famously proved by Deligne himself in the case when F = Q. The case of a totally real field, especially in cases where one has to go via a motive associated to $\rho^{\otimes 2},$ is due (I think) to Blasius.) This is decidedly not the way we prove this theorem. In fact, we do not know how to prove the Fontaine-Mazur conjecture for the representation $\rho$ associated to $\pi,$ even in the weak sense of showing that $\rho$ or even $\rho^{\otimes 2}$ appears inside the cohomology of some projective variety. Instead, we prove that $\mathrm{Sym}^n \rho$ is potentially modular, then use the weaker convexity bound to prove the inequality:
$|\alpha_v|^n \le N(v)^{n/2 + 1/2}.$
Taking n sufficiently large, we deduce that $|\alpha_v| \le N(v)^{1/2},$ which (by symmetry) proves the result. Experts will recognize this as precisely Langlands’ original strategy for proving Ramanujan using functoriality! In a certain sense, this is the first time that Ramanujan has been proved without a direct recourse to purity. I say “in some sense”, because there is also the ambiguous case of weight one modular forms. Here the Ramanujan conjecture (which is $|a_p| \le 2$ in this case) was deduced by Deligne and Serre as a consequence of showing that $\rho$ has finite image so that $\alpha_v$ and $\beta_v$ are roots of unity. On the other hand, that argument does also simultaneously imply that the representations are motivic. So our theorem produces, I believe, the only cuspidal automorphic representations for $\mathrm{GL}(n)/F$ which we know to be tempered everywhere and yet which we do not know to be directly associated in any way to geometry.
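Spelled out, the limiting step is just the n-th root of the convexity bound, which holds for every n by the potential modularity of $\mathrm{Sym}^n \rho$:

```latex
% Take n-th roots of |alpha_v|^n <= N(v)^{n/2 + 1/2}, then let n grow:
|\alpha_v| \le N(v)^{\frac{1}{2} + \frac{1}{2n}}
\quad\text{for all } n
\quad\Longrightarrow\quad
|\alpha_v| \le N(v)^{1/2}.
```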
Question: Suppose I’m sitting in my club, and Tim Gowers asks me to say what is really new about this paper. What should I say?
Answer: The distinction (say) between elliptic curves over imaginary quadratic fields and real quadratic fields, while vast, is quite subtle to explain to someone who hasn’t thought about these questions. You could explain it, but the club is hardly a place to do so. Instead, go with this narrative: We generalize Wiles’ modularity results for 2-dimensional representations of Q to n-dimensional representations of Q. If you are pressed on previous generalizations, (especially those due to Clozel-Harris-Taylor), say that Wiles is the case GL(2), Clozel-Harris-Taylor is the case GSp(2n), and our result is the case GL(n).
If you had slightly more time, and the port has not yet arrived, you might also try to explain how the underlying geometric objects involved for GSp(2n) are all algebraic varieties (Shimura varieties), but for GL(n) they involve Riemannian manifolds which have no direct connection to algebraic geometry. Here is a good opportunity to name drop Peter Scholze, and explain how this is the first time that the methods of modularity have been combined with the new world of perfectoid spaces.
## In defense of Theresa May
Nothing gives me more sympathy for Theresa May^* (or less respect for the Economist) than the following line from this article:
Mrs May made do with a dowdier college and relaxed by watching “The Goodies”, a particularly dire comedy.
Sacrilege! Everyone knows that the Goodies is the cultural touchstone to which all other cinematography should be compared!
^*: Well, not really.
## Elementary Class Groups Updated
In a previous post, I gave a short argument showing that, for odd primes p and N such that $N \equiv -1 \mod p,$ the p-class group of $\mathbf{Q}(N^{1/p})$ is non-trivial. This post is just to remark that the same argument works under weaker hypotheses, namely:
Proposition: Assume that N is p-power free and contains a prime factor of the form $q \equiv -1 \mod p,$ and that p is at least 5. Then the p-class group of $K = \mathbf{Q}(N^{1/p})$ is non-trivial.
The proof is pretty much the same. If N has a prime factor of the form $1 \mod p,$ then the genus field is non-trivial. Hence we may assume there are no such primes, from which it follows that $H^1_S(\mathbf{F}_p)$ has dimension one and $H^2_S(\mathbf{F}_p)$ is trivial, where S denotes the set of primes dividing Np. The prime q gives rise to a non-trivial class $b \in H^1_S(\mathbf{F}_p(-1))$ which is totally split at p (this requires that p be at least 5), and the field K itself gives rise to a class $a \in H^1_S(\mathbf{F}_p(1)).$ But now the vanishing of H^2 implies that $a \cup b = 0$ and hence there exists a representation of G_S of the form:
$\rho: G_S \rightarrow \left( \begin{matrix} 1 & a & c \\ 0 & \chi^{-1} & b \\0 & 0 & 1 \end{matrix} \right),$
where $\chi$ is the mod-p cyclotomic character. The class c gives the requisite extension (after possibly adjusting by a class in the one-dimensional space $H^1_S(\mathbf{F}_p)).$ The main point is that the image of inertia at primes away from p is tame and so cyclic, but any unipotent element of $\mathrm{GL}_3(\mathbf{F}_p)$ has order p if p is at least three. This ensures c is unramified over K away from the primes above p. On the other hand, the class $b$ is totally split at p. This implies that the class c is locally a homomorphism of the Galois group of $\mathbf{Q}_p,$ and so after modification by a multiple of the cyclotomic class in $H^1_S(\mathbf{F}_p)$ may also be assumed to be unramified at p. The fact that $b \ne 0$ ensures that $c \ne 0,$ and moreover the fact that p is at least 5 implies that the kernel of c is distinct from that of a, completing the proof. (This result was conjectured in the paper Class numbers of pure quintic fields by Hirotomo Kobayashi, which proves the claim for $p = 5$.)
Posted in Mathematics | 2 Comments
## Thoughts on Paris
Nothing is more pretentious or annoying than when an American offers, uninvited, their opinions of Paris. Here, then, are some of mine.
• Starting the day with a two-hour lecture on elliptic integrals:
OUI: Who does not get a slight frisson upon seeing the identity
$\displaystyle{\frac{j}{16} = (\sqrt{k})^4 + \left(\frac{1}{\sqrt{k}}\right)^4 + \left(\frac{1 - \sqrt{k}}{1 + \sqrt{k}}\right)^4 + \left(\frac{1 + \sqrt{k}}{1 - \sqrt{k}}\right)^4 + \left(\frac{1 - \sqrt{-k}}{1 + \sqrt{-k}}\right)^4 + \left(\frac{1 + \sqrt{-k}}{1 - \sqrt{-k}}\right)^4 + 42}.$
where j is the modular invariant and k is the usual parameter of elliptic integrals, given in terms of theta functions as $\theta^2_2/\theta^2_3$ where $\theta_2 = \sum q^{(n+1/2)^2}$ and $\theta_3 = \sum q^{n^2}.$
• Starting the morning with a croissant:
NON: There are decent enough croissants available, but in the general spectrum of correctly proportioning one’s caloric intake, there are better choices.
• Starting the morning with a Kouign Amann:
OUI…ET NON: Yes, I did wake up at 6:45 so I could bike to Blé Sucré and have a Kouign Amann before they were sold out. It was indeed good. But it still didn’t live up to the buttery sugary indulgences I had in Brittany. Calling on Jacques Tillouine to organize another conference in Roscoff!
• Using Vélibs (the Paris bikeshare program):
OUI: Travelling by bike, especially from my location at Paris 7, was extremely convenient, not to mention very pleasant in the clear 70 degree days with a light breeze that were pretty much a constant throughout my stay. The bike paths were excellent, and rarely required having to get too close to cars. But even on-the-road traffic (for example, cycling around the place de la Bastille) was less stressful than it can sometimes be in Chicago or London. The Velib stations themselves were not perfect: there were a number of times the internet connection was down, or the machine inexplicably returned to the initial screen or gave some other error (the “you already have a bike rented” being the most disturbing one), or the closest stations were either all full or empty depending on whether you were trying to return or rent a bike, but this type of thing seems to happen for many such programs. Extra points for the baskets on the front of the bikes which were extremely useful. Also points for being so much cheaper than Divvy: I had about three weeks of use for 24E, whereas in Chicago the cheapest option would have been to get a \$100 yearly membership.
• Going anywhere by car:
NON: Traffic was terrible. Fortunately, I mostly avoided having to be in a car. We did go by bus to the Paris Mosque. We ended up being stuck in one stretch of road for about 10 minutes, at a point where the alternative would have been a very pleasant (and less than 10 minute) walk through the jardin de plantes.
• The Gardens at Giverny:
OUI: I had to choose a day excursion for my young charges, and I was very happy with this choice. Admittedly, a Parisian local described my choice as “American,” so make of that what you will.
• Lunch with Clozel:
OUI: I didn’t have much time for socializing on this trip, but I did get to have a very pleasant lunch with Laurent. If you leave this off your itinerary, you haven’t seen Paris!
• Orange SIM cards:
NON: My phone would randomly claim that I had used up all my data, and I would have to turn it off and start it again before it would work. It was truly the worst SIM card I have ever used in Europe. I strongly recommend using anyone but Orange.
• Third Wave Coffee:
OUI…ET NON: It is well known that the French have mastered all aspects of cafe culture except making drinkable coffee. But I was very interested to see how much of the third wave had infiltrated into Paris. Here’s a breakdown of the third wave places I visited in order of preference: Telescope, Boot (Right Bank and Left Bank — the Right Bank store is much smaller and has wifi, the Left Bank is bigger and does not), Coffee Cuillier, Fragments, Strada (two locations), Le Peleton Cafe, Ten Belles, and Passager, although the gap between almost all of these was close to non-existent and I would revisit any of them if I was in the neighbourhood. (I had a very pleasant stay at Passager working on my laptop outside. I stayed there for so long I very nearly forgot to pay for my coffee when I left.) Given the weather and general ambience, the general experience of biking to these cafes and then sitting down for a flat white (or equivalent) or espresso was overall very pleasant. On the other hand, I would rate the coffee at these places as generally fine but not great. Many of these places seem to be run (or staffed) by Australians, which is no surprise. (As mentioned previously, Australians have also done wonders for coffee in London.)
• Background music in cafes:
NON: There seems to be some sort of cultural time warp, with Paris 7 students consisting of skateboarding dudes smoking and wearing ’80s fashion. The music in the cafes is similarly pretty bad. Of course, YMMV.
• Restaurants: My restaurant list is somewhat longer than my cafe list, and I have a detailed set of notes, but I would say the best overall meal was at La Bourse et La Vie. For those on a budget looking for a cheap place to have a light lunch, I strongly recommend Canard & Champagne. Other notable courses: a rendition of vitello tonnato at Paul Bert, a light egg tapas dish whose name I don’t remember at Sourire tapas françaises, a fluffy squid dish which tasted like liquid quiche at Semilla, seared Foie Gras at Domaine De Lintellac, and a few more.
• The weather in May:
OUI: It poured the first day or so, and threatened in the forecast to rain quite frequently. But future forecasts faded, and for almost the entire three weeks, it was pretty close to a blissful 70 degrees, clear, with a slight breeze. Perfect!
## Who proved it first?
During Joel Specter’s thesis defense, he started out by remarking that the $q$-expansion:
$\displaystyle{f = q \prod_{n=1}^{\infty} (1 - q^n)(1 - q^{23 n}) = \sum a_n q^n}$
is a weight one modular form of level $\Gamma_1(23),$ and moreover, for $p$ prime, $a_p$ is equal to the number of roots of
$x^3 - x + 1$
modulo $p$ minus one. He attributed this result to Hecke. But is it really due to Hecke, or is this more classical? Let’s consider the following claims:
1. The form $f$ is a modular form of the given weight and level.
2. If $p$ is not a square modulo 23, then $a_p = 0$.
3. If $p$ is a square modulo 23, and $x^3 - x + 1$ has three roots modulo $p,$ then $a_p= 2.$
4. If $p$ is a square modulo 23, and $x^3 - x + 1$ is irreducible modulo $p,$ then $a_p = -1.$
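Before asking who could prove them, the claims themselves are easy to check numerically. Here is a quick sanity check in Python (a sketch, not a proof; the truncation order 200 is an arbitrary choice):

```python
# Compare the q-expansion of f = q * prod (1-q^n)(1-q^{23n}) with the
# root-count formula: a_p should equal
# (number of roots of x^3 - x + 1 mod p) - 1 for every prime p != 23.

N = 200  # truncation order for the q-expansion

# Coefficients of prod_{n>=1} (1 - q^n)(1 - q^{23n}), truncated at q^N.
coeffs = [0] * (N + 1)
coeffs[0] = 1
for n in range(1, N + 1):
    for m in (n, 23 * n):
        if m > N:
            continue
        # Multiply the series by (1 - q^m) in place; descending k keeps
        # coeffs[k - m] at its old value during the update.
        for k in range(N, m - 1, -1):
            coeffs[k] -= coeffs[k - m]

f = [0] + coeffs[:N]  # multiply by q, so f[k] is the coefficient a_k

def num_roots(p):
    """Number of roots of x^3 - x + 1 modulo p."""
    return sum((x ** 3 - x + 1) % p == 0 for x in range(p))

primes = [p for p in range(2, N) if all(p % d for d in range(2, p)) and p != 23]
assert all(f[p] == num_roots(p) - 1 for p in primes)
print(f[1:10])  # the first few coefficients of f
```

For instance, $x^3 - x + 1$ has no roots mod 2, and indeed $a_2 = -1$; it has three roots mod 59, and $a_{59} = 2$.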
At what point in history could these results be proved? Start with Euler’s pentagonal number theorem:
$\displaystyle{ \prod_{n=1}^{\infty} (1 - q^n) = \sum_{-\infty}^{\infty} q^{(3n^2+n)/2} (-1)^{n}}$
Using this, one immediately sees that
$\displaystyle{f = \sum \sum q^{\frac{1}{24} \left((6n+1)^2 + 23 (6m+1)^2 \right)} (-1)^{n+m}}$
This exhibits $f$ as a sum of theta series. With a little care, one can moreover show that
$\displaystyle{2f = \sum \sum q^{x^2 + x y + 6 y^2} - \sum \sum q^{2 x^2 + x y + 3 y^2}}.$
This is not entirely tautological, but nothing that Gauss couldn’t prove using facts about the class group of binary quadratic forms of discriminant $-23.$ The fact that $f$ is a modular form of the appropriate weight and level surely follows from known results about Dedekind’s $\eta$ function, which covers (1). From the description in terms of theta functions, the claim (2) is also transparent. So what remains? Using elementary number theory, we are reduced to showing that a prime $p$ with $(p/23) = +1$ is principal in the ring of integers of $\mathbf{Q}(\sqrt{-23})$ if and only if $p$ splits completely in the Galois closure $H$ of $x^3 - x + 1.$
Suppose that $K = \mathbf{Q}(\sqrt{-23}) \subset H.$ What is clear enough is that primes $p$ with $(p/23) = + 1$ split in $K,$ and those which split principally can be represented by the form $x^2 + xy + 6y^2$ in essentially a unique way up to the obvious automorphisms. Moreover, the class group of $\mathrm{SL}_2(\mathbf{Z})$ equivalent forms has order $3,$ and the other $\mathrm{GL}_2(\mathbf{Z})$ equivalence class is given by $2x^2 + xy + 3y^2.$ In particular, the primes which split non-principally in $K$ are represented by the binary quadratic form $2 x^2 + xy + 3y^2$ essentially uniquely. From Minkowski’s bound, one can see that $H$ has trivial class group. In particular, if $x^3 - x + 1$ has three roots modulo $p,$ then the norm of the corresponding ideal to $K$ is also principal and has norm $p = x^2 + xy + 6y^2.$ This is enough to prove (3).
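The dichotomy between the two classes of forms can also be tested by brute force. A sketch (naive search over small x, y; the ranges and bounds are crude but sufficient here):

```python
# Check numerically: a prime p with (p/23) = +1 and p != 23 is represented
# by the principal form x^2 + xy + 6y^2 exactly when x^3 - x + 1 has three
# roots mod p; otherwise it is represented by 2x^2 + xy + 3y^2.

def represented(p, form):
    """Naive search: is p = a x^2 + b xy + c y^2 for some integers x, y?"""
    a, b, c = form
    bound = int(2 * p ** 0.5) + 2  # crude but large enough for these forms
    return any(a * x * x + b * x * y + c * y * y == p
               for x in range(-bound, bound + 1)
               for y in range(-bound, bound + 1))

def splits(p):
    """Does x^3 - x + 1 have three roots modulo p?"""
    return sum((x ** 3 - x + 1) % p == 0 for x in range(p)) == 3

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in filter(is_prime, range(2, 500)):
    if p == 23 or pow(p % 23, 11, 23) != 1:  # Euler's criterion mod 23
        continue
    assert represented(p, (1, 1, 6)) == splits(p)
    assert represented(p, (2, 1, 3)) == (not splits(p))
print("checked all primes below 500")
```

The smallest prime represented by the principal form is 59 = 25 + 10 + 24 (x = 5, y = 2), and x³ - x + 1 indeed splits completely mod 59.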
So the only fact which would not obviously be easy to prove in the 19th century is (4), namely, that if $p = x^2 + xy + 6y^2,$ then $p$ splits completely in $H$. The most general statement along these lines was proved by Furtwängler (a student of Hilbert) in 1911 — note that this is a different (and easier?) statement than the triviality of the transfer map, which was not proved until 1930 (also by Furtwängler), after other foundational results in class field theory had been dispensed with by Takagi (another student of Hilbert!). Yet we are not dealing with a general field, but the much more specific case of an imaginary quadratic field, which had been previously studied by Kronecker and Weber in connection with the Jugendtraum. I don’t know how much Kronecker could actually prove about (for example) the splitting of primes in the extension of an imaginary quadratic field given by the singular value $j(\tau).$ Some of my readers surely have a better understanding of history than I do. Does this result follow from theorems known before 1911? Who proved it first?
https://economics.stackexchange.com/questions/21037/bank-capital-and-profit
# Bank Capital and Profit
I am struggling a bit with understanding the meaning of capital ratios and bank profitability. It seems to me that capital requirements stipulate that a certain fraction of banks' risk-weighted assets have to be held as equity. I get why that is the case: for any given value of assets, having more capital serves as a cushion against their falling values. The definition of bank capital is: $$E=A-L$$
Where E represents bank capital. Now, a textbook I am reading says that banks don't like having high amounts of capital, as it reduces their potential profits. I don't understand what that means - doesn't having a high amount of capital mean that the bank is indeed very profitable (maximizing the difference between assets and liabilities)?
Two things:
1. Capital that is not put to work generates no profit. This capital could have been invested in something else which may (or may not) have resulted in profit for the bank. The opportunity cost of capital can be rather big. This means that the bank is actually losing out on potential profits.
2. Capital (in the form of equity) does not necessarily mean the company is profitable. Many (often smaller) companies are 100% equity financed - this has no relation to profitability whatsoever. Increasing the equity/debt ratio does not say anything about whether the company is profitable. It could be because the company has built up equity (yay, this means profits), but it could also be because the company has gotten a capital injection (not affecting profits). It works the other way as well - perhaps a company is very profitable (building equity and not paying dividends), but at the same time they increase their debt, keeping the ratio constant; in this case you cannot see whether it is profitable or not. Debt is not the same as “not profitable”. Actually most companies have debt because they may be able to invest the debt at a better rate than their interest rate (e.g. in machinery to increase production, resulting in profits).
Increasing the gap between assets and liabilities is exactly the same as described above. Assets = (Equity + Liabilities).
Let's say the bank holds 100 dollars in deposits.
If they are required to hold 5% in equity (or any other asset class that contributes to the requirement), then they can loan out 95 dollars of the 100 dollars.
If they are required to hold 20% in equity (or any other asset class that contributes to the requirement), then they can only loan out 80 dollars of the 100 dollars.
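The “capital reduces potential profits” point is really about return on equity. A stylized sketch with made-up numbers, using the simple “equity as a fraction of deposits” rule from the example above and a fixed 1% return on assets:

```python
# Stylized illustration (all numbers made up): with a fixed return on
# assets, a higher equity ratio mechanically lowers return on equity,
# which is one sense in which "more capital reduces potential profits".

def loanable(deposits, equity_ratio):
    """Funds available to lend under the toy equity-ratio rule above."""
    return deposits * (1 - equity_ratio)

def roe(roa, equity_ratio):
    """Return on equity = return on assets * leverage (assets / equity)."""
    return roa / equity_ratio

print(loanable(100, 0.05))  # about 95 dollars available to lend
print(loanable(100, 0.20))  # about 80 dollars available to lend
print(roe(0.01, 0.05))      # about 0.20, i.e. 20% ROE at 5% capital
print(roe(0.01, 0.20))      # about 0.05, i.e. 5% ROE at 20% capital
```

Same assets, same return on assets, but quadrupling the equity cushion cuts the return earned per unit of equity to a quarter.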
A relevant concept is "risk-weighted assets", where a highly rated asset (lower risk) contributes more to the requirement than a lowly rated asset (higher risk). If you read up on Basel III in Wikipedia, you can find out more about different types of capital requirements and other stuff that goes with that, in addition to other relevant concepts such as liquidity requirements, etc.
Sometimes the main idea is to prevent a minor run on the bank from turning to catastrophe (maybe a depositor takes their 10 dollars out of the bank, and the bank is left with minus 5 dollars, and cannot meet the ongoing financing requirements of an investment project that they had previously committed to). Other times, it may prevent a small economic downturn from causing havoc in the financial system (maybe several loans are not repaid until later, and the bank has no additional liquidity to finance any new investment projects, causing investment in the economy to collapse). In both cases, the higher capital requirement provides protection against a small problem with one or two banks turning into an economy-wide catastrophe (although it is possible for the requirement to be "too high".)
In other cases, a central bank may use capital requirements to influence overall credit in the financial system, similar to how central banks manage interest rates to influence inflation and thereby other macro variables.
• But isn't E=A-L? In your example, what does it mean to "hold" the equity when E=0? If you have deposits worth \$100, your L=100, but your A=100 as well. The deposits enter both A and L as 100. – ChinG Mar 15 '18 at 23:41
http://card2brain.ch/box/20180101_fo_case_q
# Flashcards

12 cards · 1 learner · English · Mittelschule · 01.01.2018 / 02.01.2018 · no license specified

0 exact answers · 12 text answers · 0 multiple-choice answers
9.6. A company declares a 4-for-1 stock split. Explain how the terms change for a call option with a strike price of \$60. 2%

The strike price is reduced to \$15, and the option gives the holder the right to purchase 4 times as many shares.
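The adjustment rule generalizes to any m-for-1 split: divide the strike by m and multiply the number of shares per contract by m, so the economics of the contract are unchanged. A minimal sketch (the function name is made up):

```python
# Adjust exchange-traded option terms for an m-for-1 stock split:
# the strike is divided by m, the shares covered are multiplied by m.
def adjust_for_split(strike, shares, m):
    return strike / m, shares * m

print(adjust_for_split(60, 100, 4))  # (15.0, 400)
```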
9.7. ‘‘Employee stock options are different from regular exchange-traded stock options
because they can change the company’s capital structure.’’ Explain this statement.
The exercise of employee stock options usually leads to new shares being issued by the company and sold to the employee. This changes the amount of equity in the capital structure. When a regular exchange-traded option is exercised, no new shares are issued and the company's capital structure is not affected.
10.2. What is a lower bound for the price of a four-month call option on a non-dividend-paying stock when the stock price is \$28, the strike price is \$25, and the risk-free interest rate is 8% per annum?

The lower bound is $28 - 25e^{-0.08 \times 4/12} \approx 3.66$ dollars.
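The arithmetic behind this answer, spelled out (using the standard lower bound for a European call on a non-dividend-paying stock, c ≥ S0 − K·e^(−rT)):

```python
from math import exp

# Lower bound for a European call on a non-dividend-paying stock:
# c >= S0 - K * exp(-r * T), with S0 = 28, K = 25, r = 8%, T = 4/12.
S0, K, r, T = 28, 25, 0.08, 4 / 12
lower_bound = S0 - K * exp(-r * T)
print(round(lower_bound, 2))  # 3.66
```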
11.1. What is meant by a protective put? What position in call options is equivalent to a protective put? 2%
A protective put consists of a long position in a put option combined with a long position
in the underlying shares. It is equivalent to a long position in a call option plus a certain
amount of cash. This follows from put–call parity: $p + S_0 = c + Ke^{-rT}$.
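At expiry, the parity relation behind this answer reduces to a pointwise identity between the two payoffs, which is easy to check. A minimal sketch (the strike K = 50 and the price range are arbitrary choices):

```python
# Protective put payoff at expiry: max(K - S, 0) + S  (put plus stock).
# Call-plus-cash payoff at expiry: max(S - K, 0) + K  (cash worth K).
# Put-call parity says these agree at every terminal price S.
K = 50
for S in range(0, 101):
    assert max(K - S, 0) + S == max(S - K, 0) + K
print("payoffs agree for all S in 0..100")
```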
11.2. Explain two ways in which a bear spread can be created. 4%
A bear spread can be created using two call options with the same maturity and different
strike prices. The investor shorts the call option with the lower strike price and buys the call
option with the higher strike price. A bear spread can also be created using two put options
with the same maturity and different strike prices. In this case, the investor shorts the put
option with the lower strike price and buys the put option with the higher strike price
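The call version of the spread can be sketched as a payoff function (the net premium received up front is omitted; all numbers are illustrative):

```python
# Expiry payoff of a bear spread built from calls: short the call with
# the lower strike K1, long the call with the higher strike K2 > K1.
def bear_spread_calls(S, K1, K2):
    return -max(S - K1, 0) + max(S - K2, 0)

print(bear_spread_calls(20, 30, 35))  # 0: both calls expire worthless
print(bear_spread_calls(32, 30, 35))  # -2: only the short call is in the money
print(bear_spread_calls(40, 30, 35))  # -5: loss capped at K1 - K2
```

The payoff is never positive, which is why the position is entered for a net credit: the profit when the price falls is the premium received.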
11.3. When is it appropriate for an investor to purchase a butterfly spread? 2%
A butterfly spread involves a position in options with three different strike prices (K1, K2,
and K3). A butterfly spread should be purchased when the investor considers that the
price of the underlying stock is likely to stay close to the central strike price, K2.
8.2. Explain what is meant by a subprime mortgage, an ABS, and an ABS CDO. 6%
A subprime mortgage is a mortgage where the risk of default is higher than normal.
An ABS is a set of tranches created from a portfolio of mortgages or other assets. An ABS CDO is an ABS created from particular tranches of a number of different ABSs.
8.10 What is meant by the term agency cost? How did agency costs play a role in the credit crisis? 4%
Agency cost is a term used to describe the costs that arise when the interests of two parties are not perfectly aligned. In the credit crisis there were potential agency costs between the originators of mortgages and investors, and between employees of banks who earned bonuses and the banks themselves.
http://math.stackexchange.com/questions/270250/action-and-bounded-orbits
Action and bounded orbits
Let $H$ be an open group acting continuously and by isometries on a metric space $(X,d)$ (i.e. for every $h\in H$, the map $X\ni x\longmapsto h.x\in X$ is an isometry). Recall that for $x_{0}\in X$ the orbit of $x_{0}$ is the set $orb(x_{0})=\{h.x_{0}:\,\,h\in H\}$. We say that this orbit is bounded if it is bounded as a subset of the metric space $(X,d)$.
I want to show that if the action by isometries of the open group $H$ on the metric space $(X,d)$ has a bounded orbit, then every orbit is bounded.
Thanks for any help.
What is an open group? – Qiaochu Yuan Jan 4 '13 at 12:05
Suppose the $H$-orbit of $x_0$ is bounded. Thus, there are $y \in X$ and $r \gt 0$ such that $d(y,hx_0) \lt r$ for all $h \in H$. For every $x_1 \in X$ and every $h \in H$ we have from the triangle inequality and $d(hx_0,hx_1) = d(x_0,x_1)$ that $$d(y,hx_1) \leq d(y,hx_0) + d(hx_0,hx_1) \lt r + d(x_0,x_1),$$ so the orbit of $x_1$ is in the ball of radius $r + d(x_0,x_1)$ around $y$.
This argument only uses that $H$ acts by isometries on $X$. It is not necessary to know what an open group is or that the action is continuous. – Martin Jan 4 '13 at 12:09
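The argument can be illustrated numerically with a concrete isometry group, e.g. rotations of the Euclidean plane (the points, the radius, and the sampled rotation angles below are arbitrary choices):

```python
# Rotations about the origin are isometries of the plane. If the orbit
# of x0 lies in the ball B(y, r), the triangle inequality puts the
# orbit of any x1 inside B(y, r + d(x0, x1)).
from math import cos, sin, pi, dist

def rotate(theta, p):
    """Rotate the point p about the origin by angle theta."""
    x, y = p
    return (x * cos(theta) - y * sin(theta),
            x * sin(theta) + y * cos(theta))

x0, x1, y = (1.0, 0.0), (3.0, 4.0), (0.0, 0.0)
r = 1.1  # the orbit of x0 is the unit circle, contained in B(y, 1.1)
for k in range(360):
    h = 2 * pi * k / 360
    assert dist(y, rotate(h, x0)) < r                   # hypothesis
    assert dist(y, rotate(h, x1)) < r + dist(x0, x1)    # conclusion
print("bound verified for 360 sampled rotations")
```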
https://www.gradesaver.com/textbooks/math/algebra/algebra-and-trigonometry-10th-edition/chapter-1-1-7-linear-inequalities-in-one-variable-1-7-exercises-page-138/96
## Algebra and Trigonometry 10th Edition
$x \geq 16394$
When revenue is greater than cost, there is a profit. Hence we need $R > C$:
$24.55x > 15.4x + 150000$
$9.15x > 150000$
$x > 16393.44\ldots$
Since the number of units must be a whole number, $x \geq 16394$.
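The rounding step can be double-checked in a couple of lines (a sketch; variable names are mine):

```python
import math

# Break-even for R = 24.55x versus C = 15.4x + 150000:
# a profit requires 9.15x > 150000.
units = 150000 / (24.55 - 15.4)          # 16393.44..., the exact break-even point
min_whole_units = math.floor(units) + 1  # smallest whole number strictly above it

print(min_whole_units)  # 16394
```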
http://www.lofoya.com/Solved/1673/additional-information-for-questions-1-and-2-in-lab-2-as-well-as
# Difficult Pie Charts Solved Question: Data Interpretation Discussion
Common Information
Answer the questions on the basis of the information given below.
In a school, there are four chemical laboratories namely Lab 1, Lab 2, Lab 3 and Lab 4. There are only six types of acids that are available in these laboratories. The following table provides information about the number of bottles of each type of acid in each of these laboratories.
Every bottle of acid in the chemical laboratories is further categorized on the basis of its capacity under one or the other of the five different categories namely ‘S’, ‘M’, ‘L’, ‘XL’ and ‘XXL’.
The following chart provides information about the number of bottles of acids in each of the mentioned categories (on the basis of its capacity) as a percentage of the total number of bottles of acids in these laboratories.
Q. Common Information Question: 1/5
Additional Information for questions 1 and 2: In Lab 2 as well as Lab 3, the number of bottles of acids in the category XL as a percentage of the total number of bottles of acids in the respective laboratories is not more than 1%.
In Lab 1, the total number of bottles of acids in the category XL as a percentage of the total number of bottles of acids in Lab 1 cannot be less than
✖ A. 2% ✖ B. 3% ✔ C. 4% ✖ D. 5% ✖ E. 6%
Solution:
Option(C) is correct
Given that In Lab 2 as well as Lab 3, the number of bottles of acids in the category XL as a percentage of the total number of bottles of acids in the respective laboratories is not more than 1%.
Total number of bottles in these laboratories $= 2400$
A maximum of 6 bottles each in Lab 2 and Lab 3 can be in the category XL.
Also, a maximum of 688 bottles in Lab 4 can be in the category XL.
Total number of XL bottles,
$=2400×\dfrac{30}{100}= 720$
So, in Lab 1, the number of bottles of acid in the category XL cannot be less than:
$=720 - 6 - 6 - 688 = 20$
⇒ Required percentage,
$=\dfrac{20}{500}×100$
$= \textbf{4%}$
Edit: Correct answer choice has been changed to option(C) from option(E) [Calculation is unchanged], as notified by Piyush.
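The counting above is easy to verify in a few lines (a sketch; the lab totals are taken from the solution and the comments below):

```python
total_bottles = 2400
xl_total = total_bottles * 30 // 100        # 30% of all bottles are XL: 720

# Upper bounds on XL bottles outside Lab 1
lab2_max = 609 // 100                       # at most 1% of 609 bottles -> 6
lab3_max = 603 // 100                       # at most 1% of 603 bottles -> 6
lab4_max = 117 + 107 + 129 + 99 + 75 + 161  # every bottle in Lab 4: 688

lab1_min = xl_total - lab2_max - lab3_max - lab4_max  # 720 - 6 - 6 - 688 = 20
percentage = 100 * lab1_min / 500           # Lab 1 holds 500 bottles

print(lab1_min, percentage)  # 20 4.0
```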
## (6) Comment(s)
Swathi
Hey, why are we deducting 6 bottles from each lab? 1% of 720 is 7.2, so why not 7?
Lawcash
Because the problem says that in Lab 2 and Lab 3 the XL category will not exceed 1%: 609 × 1/100 = 6 and 603 × 1/100 = 6.
Piyush
So the answer is 4%, not 6%: option (C).
Deepak
Thank you Piyush for letting me know, I've corrected it.
Yaswanth
How can we get a maximum of 688 bottles in Lab 4?
Reeti
It's given in the problem, check first chart.
If you sum up all the bottles it's $688$ only $(=117+107+129+99+75+161)$
http://bootmath.com/limit-lim_xtoinfty-x-x2lnleft1frac1xright.html
# Limit $\lim_{x\to\infty} x-x^{2}\ln\left(1+\frac{1}{x}\right)$
• How do I determine $\lim_{x\to\infty} \left[x - x^{2} \log\left(1 + 1/x\right)\right]$?
#### Solutions Collecting From Web of "Limit $\lim_{x\to\infty} x-x^{2}\ln\left(1+\frac{1}{x}\right)$"
$$\log (1+y) = y - y^2/2 + O(y^3).$$
$$\log (1+1/x) = x^{-1} - x^{-2}/2 + O(x^{-3}).$$
$$x^2 \log (1+1/x) = x - 1/2 + O(x^{-1}).$$
$$x^2 \log (1+1/x) - x = -1/2 + O(x^{-1}).$$
Done!
Sorry, I was not right just now.
You can do the substitution $h = 1/x$ as you mentioned.
$$\lim_{h\rightarrow 0}\left(\frac{1}{h}-\frac{1}{h^2}\ln{(1+h)}\right)\\ =\lim_{h\rightarrow 0}\frac{h-\ln{(1+h)}}{h^2}\\ =\lim_{h\rightarrow 0}\frac{1-\frac{1}{1+h}}{2h}\\ =\lim_{h\rightarrow 0}\frac{h}{2h(1+h)}$$
And you can continue from here: the limit is $\frac{1}{2}$. From line 2 to 3 I used L'Hospital's rule.
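A numeric sanity check of the result (a sketch; `log1p` avoids cancellation for large $x$):

```python
import math

def f(x):
    # x - x^2 * log(1 + 1/x); log1p keeps the evaluation accurate for large x
    return x - x * x * math.log1p(1.0 / x)

# The value approaches 1/2 as x grows; the correction term is about 1/(3x)
print(f(1e4))  # ~ 0.49997
```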
https://2014.igem.org/wiki/index.php?title=Team:Aachen/Interlab_Study/Hardware&diff=prev&oldid=84477
# Team:Aachen/Interlab Study/Hardware
# Open Source DIY Hardware
Being in the measurement track and having a team of highly motivated engineering and computer science students, we tackled the challenge to build, document and evaluate our open source hardware approach.
For our daily tasks in the lab, two key devices were identified: a fluorometer and an OD-meter. As we use GFP most of the time, the fluorometer is designed to work best with GFP. For modularity and re-usability, it is designed so that changing to another fluorescent protein is easy.
Besides the mandatory $\mu$-controller architecture, we worked together with the Fablab Aachen to construct the device. There we have the chance to use laser cutters and 3D printers.
The core component for detecting the light intensity is the cuvette holder. Please find the 3D model we printed below:
This cuvette holder can be used for both devices: the hole in the bottom is used for fluorescence measurement, and the two opposite holes are used for the light sensor and the LED for optical density measurement, respectively.
### Hardware Requirements
The key ingredients for our combined optical density/fluorescence (OD/F) device then are:
• Arduino UNO R3 (or equivalent)
• Bluetooth modem (optional)
• TSL 235 R light to frequency sensor
• LED for optical density (for 600nm we recommend: DIALIGHT - 550-2505F )
• LED for fluorescence (any 450nm blue works for iLOV, 480nm for wtGFP )
The case is made from acrylic glass. The construction plan can be downloaded from [2014.igem.org/Team:Aachen insert link here].
## Fluorescence
Also general information
## Optical Density
Some general information
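As a rough illustration of how such a sensor reading could be turned into an optical density value (this is my own sketch, not part of the team's documentation; it assumes the TSL235R's output frequency is proportional to the incident light intensity, and uses the usual definition OD = -log10(I/I0)):

```python
import math

def optical_density(sample_freq_hz, blank_freq_hz):
    """OD = -log10(I / I0), with intensity taken proportional to sensor frequency."""
    return -math.log10(sample_freq_hz / blank_freq_hz)

# A culture that lets through 10% of the light of a blank cuvette has OD 1
print(optical_density(5_000, 50_000))  # ~ 1.0
```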
http://clay6.com/qa/35320/chipko-movement-is-related-to-
# Chipko movement is related to :-
$\begin{array}{l}(a)\;\text{Tehri dam project}\\(b)\;\text{Forest conservation}\\(c)\;\text{Ganga water project}\\(d)\;\text{Narmada dam project}\end{array}$
Chipko movement is related to forest conservation
Hence (b) is the correct answer.
https://math.stackexchange.com/questions/1299744/how-do-we-find-the-principal-unit-normal-to-this-curve
How do we find the principal unit normal to this curve?
A curve is given in cylindrical coordinates:
$r=r(t)$
$\theta=\theta(t)$
$z=z(t)$
The curve is unit-speed:
$(\frac{dr}{dt})^2+r^2(\frac{d\theta}{dt})^2+(\frac{dz}{dt})^2=1$
How do we find the principal unit normal vector to this curve? Many thanks!
Since we have
$$\vec r'\cdot \vec r'=1 \tag 1$$
then differentiating $(1)$ with respect to $t$ shows that
$$\vec r'' \cdot \vec r'=0. \tag 2$$
Note that $(2)$ implies that $\vec r'$ and $\vec r''$ are orthogonal.
We also know that $\vec r'$ is tangent to the curve spanned by $\vec r$. Thus, $\vec r''$ is normal to the curve.
For cylindrical coordinates, we have $\vec r=\hat r r+\hat zz$.
Thus,
$$\vec r'=\hat r r'+\hat \theta r\theta'+\hat zz'$$
and
$$\vec r''=\hat r[r''-r(\theta')^2]+\hat \theta[2r'\theta'+r\theta'']+\hat zz''.$$
To verify that indeed $\vec r'\cdot \vec r''=0$, we form the inner product to find
\begin{align} \vec r'\cdot \vec r''&=r'r''-rr'(\theta')^2+2rr'(\theta')^2+r^2\theta' \theta''+z'z''\\\\ &=r'r''+rr'(\theta')^2+r^2\theta' \theta''+z'z''\\\\ &=\frac{1}{2}\frac{d}{dt}\left(r'^2+r^2(\theta')^2+z'^2\right)\\\\ &=0 \end{align}
as expected, since the unit-speed condition makes the bracketed expression identically $1$.
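A quick numeric check of the identity $\vec r'\cdot \vec r''=\tfrac12\tfrac{d}{dt}\left(\vec r'\cdot \vec r'\right)$ using the cylindrical component formulas (the sample curve is my own choice and is not unit-speed; the identity holds regardless):

```python
import math

# Sample curve in cylindrical coordinates (chosen arbitrarily):
#   r(t) = 1 + 0.3 sin t,  theta(t) = t,  z(t) = t / 2
t = 0.7
r, r1, r2 = 1 + 0.3 * math.sin(t), 0.3 * math.cos(t), -0.3 * math.sin(t)
th1, th2 = 1.0, 0.0
z1, z2 = 0.5, 0.0

# velocity . acceleration from the cylindrical components derived above
v_dot_a = r1 * (r2 - r * th1**2) + r * th1 * (2 * r1 * th1 + r * th2) + z1 * z2

# one half of d/dt (r'^2 + r^2 theta'^2 + z'^2)
half_speed_deriv = r1 * r2 + r * r1 * th1**2 + r**2 * th1 * th2 + z1 * z2

print(abs(v_dot_a - half_speed_deriv) < 1e-12)  # True
```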
• Dr. MV, thank you for taking time to answer this question!. I hear your reasoning; it is the established way to finding the normal in Cartesian coordinates. My issue is with the cylindrical coordinate system, because even though the curve is unit-speed in Cartesian coordinates via $|r^{\prime}|=1$, this same expression is no longer true in cylindrical coordinates. Also in cylindrical coordinates, $r^{\prime\prime}\cdot r^{\prime}$ is not zero. Where is my mistake? – Jay May 26 '15 at 17:04
• @Jay You're welcome. My pleasure. Note that the development is independent of a coordinate system and thus applies to cylindrical coordinates as well as Cartesian coordinates. – Mark Viola May 26 '15 at 17:06
• Then my mistake is in writing ${\mathbf r}$ improperly. I wrote ${\mathbf r} = r \hat{r} + \theta \hat{\theta} + z \hat{z}$. – Jay May 26 '15 at 17:18
• If there is a good textbook that explains the position vector in cylindrical coordinates and its derivatives I would be happy to read more. – Jay May 26 '15 at 17:22
• @Jay I understand the reason why it is tempting to add the $\hat \theta \theta$ term to the position vector. But, if one simply takes $\vec r=\hat xx+\hat yy+\hat zz$ and uses $\hat x=\hat r \cos \theta-\hat \theta \sin \theta$ and $\hat y=\hat r \sin \theta+\hat \theta \cos \theta$ along with $x=r\cos \theta$ and $y=r\sin \theta$, one finds that $\vec r=\hat r r+\hat zz$. Alternatively, one can intuit this by drawing a picture and identifying that for the position vector, there is no $\hat \theta$ component. – Mark Viola May 26 '15 at 17:30
https://homework.cpm.org/category/CON_FOUND/textbook/ac/chapter/14/lesson/14.2.1.1/problem/2-10
### Home > AC > Chapter 14 > Lesson 14.2.1.1 > Problem2-10
2-10.
Simplify each expression below. Be sure to show your work. (Hint: Use your understanding of the meaning of exponents to expand each expression and then simplify.) Assume that the denominators in parts (b) and (c) are not equal to zero.
a. $\left(x^{3}\right)\left(x^{2}\right)$
Interpret the meaning of the exponents $\left(x· x · x\right)\left(x · x\right)$, then simplify. What is the shortcut?
b. $\frac { y ^ { 5 } } { y ^ { 2 } }$
What do the exponents mean? Is there a shortcut?
$\frac{y \cdot y \cdot y \cdot y \cdot y}{y\cdot y}$
c. $\frac { x ^ { 3 } } { x ^ { 7 } }$
What do the exponents mean?
$\text{For example: }\\y^{-3}=\frac{1}{y^{3}}$
Now how can you simplify the given problem?
d. $\left(x^{2}\right)^{3}$
Remember what each exponent means.
$x^{6}$
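These shortcut rules can be sanity-checked numerically (a small sketch; the test values are arbitrary):

```python
x, y = 3, 5  # arbitrary nonzero bases; the rules hold for any of them

assert x**3 * x**2 == x**5       # part (a): add the exponents
assert y**5 / y**2 == y**3       # part (b): subtract the exponents
assert x**3 / x**7 == 1 / x**4   # part (c): a negative exponent means a reciprocal
assert (x**2)**3 == x**6         # part (d): multiply the exponents
```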
https://socratic.org/questions/57f3146311ef6b2024f3912f
Question #3912f
Oct 4, 2016
Hope these help..
Explanation:
There are several mnemonics which you can use to remember the units.
Km, Hm, Dm, m, dm, cm, mm
King Henry Died a miserable death called measles
Kittens Have Done much damage catching mice
Within the metric system, the units are linked by 10.
Each unit on the left is 10 times bigger than the unit on the right.
When you convert to a bigger unit (by dividing) the number gets smaller
So 10mm = 1cm and 10cm = 1dm and 10dm = 1 metre ... etc
You only really need to know the main ones
1 Km = 1000 m, 1 m = 100 cm, 1 m = 1000 mm
In the same way...
1 tonne = 1000 Kg, 1 Kg = 1000 g, 1 g = 1000 mg
and
1 Ml = 1000 Kl, 1 Kl = 1000 l, 1 l = 1000 ml
Converting with these units just involves moving the decimal point to the left or right, one place for each unit.
for eg
1.234567 Km = 1234.567 m = 1234567 mm ← distance
1.234567 Kg = 1234.567 g = 1234567 mg ← mass
1.234567 Kl = 1234.567 l = 1234567 ml ← capacity
1.234567 KHz = 1234.567 Hz = 1234567 mHz ← frequency
1.234567 KW = 1234.567 W = 1234567 mW ← power
1.234567 KPa = 1234.567 Pa = 1234567 mPa ← pressure
[The exceptions are for area and volume , where the conversion factors are 100 and 1000 respectively for each unit]
Note: 1 Hm² = 1 Ha ← land and farm areas are given in Ha
1 m³ = 1 Kl, 1 dm³ = 1 litre, 1 cm³ = 1 cc = 1 ml
(these give the conversions between volume (based on length) and capacity)
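Since every conversion is just a shift of the decimal point, it can be written as one multiplication by a power of ten (a sketch; the prefix table and function names are mine):

```python
# Exponent of 10 for each metric prefix (the base unit has exponent 0)
PREFIX = {'K': 3, 'H': 2, 'D': 1, '': 0, 'd': -1, 'c': -2, 'm': -3}

def convert(value, from_prefix, to_prefix):
    """Convert between prefixed units of the same quantity."""
    return value * 10 ** (PREFIX[from_prefix] - PREFIX[to_prefix])

print(convert(1.234567, 'K', ''))  # ~ 1234.567  (Km -> m)
print(convert(1234567, 'm', 'K'))  # ~ 1.234567  (mm -> Km)
```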
https://socratic.org/questions/how-do-you-solve-the-system-5x-y-13-2x-6y-22
# How do you solve the system 5x + y = 13 & 2x + 6y =22?
Jan 30, 2017
The answer is $x=2$ and $y=3$.
#### Explanation:
$5x + y = 13 \qquad (1)$
$2x + 6y = 22 \qquad (2)$
Multiply both sides of equation (1) by 6:
$6(5x + y) = 6 \cdot 13$
$30x + 6y = 78 \qquad (3)$
Subtract equation (2) from equation (3):
$30x + 6y - (2x + 6y) = 78 - 22$
$28x = 56$
$x = \frac{56}{28}$
$x = 2$
Now use (1) or (2):
$5 \cdot 2 + y = 13$
$10 + y = 13$
$y = 13 - 10$
$y = 3$
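The elimination above can be cross-checked in a few lines (a sketch that uses Cramer's rule instead of elimination):

```python
# Solve the 2x2 system with Cramer's rule:
#   5x +  y = 13
#   2x + 6y = 22
a1, b1, c1 = 5, 1, 13
a2, b2, c2 = 2, 6, 22

det = a1 * b2 - a2 * b1        # 5*6 - 2*1 = 28
x = (c1 * b2 - c2 * b1) / det  # (78 - 22) / 28 = 2.0
y = (a1 * c2 - a2 * c1) / det  # (110 - 26) / 28 = 3.0

print(x, y)  # 2.0 3.0
```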
https://mathimages.swarthmore.edu/index.php?title=Logarithmic_Scale_and_the_Slide_Rule&diff=35540&oldid=13627
# Logarithmic Scale and the Slide Rule
150 Extra Engineers
Field: Algebra
Image Created By: IBM
Website: Global Nerdy --- Tech Evangelist Joey deVilla on software development, tech news and other nerdy stuff
# Basic Description
This was a picture of an IBM advertisement back in 1953 during an age when "computer" referred to human who did calculations and computation solely. The advertisement boasted that
"An IBM Electronic Calculator speeds through thousands of intricate computations so quickly that on many complex problems it's just like having 150 Extra Engineers. No longer must valuable engineering personnel...now in critical shortage...spend priceless creative time at routine repetitive figuring. Thousands of IBM Electronic Business Machines...vital to our nation's defense...are at work for science, industry and armed forces, in laboratories, factories and offices, helping to meet urgent demands of greater production."
Doesn't one just want to go back to that age when engineers were in "critical shortage" and "If you had a degree, you had a job. If you didn't have a job it's because you didn't want one." Oh well...
Notice that in the picture, in addition to the lack of women engineers back in those days, the male engineers almost all have receding hairlines (possibly due to constant overwork and the disproportionate allocation of oxygen and other nutrients between brain cells and scalp skin). It was also typical of the male engineer to be defined by a standard uniform: "white shirt, narrow tie, pocket protector and slide rule." [1] Yeah! Pocket protector! I did not know what it was until I saw the picture in the Scientific American article, When Slide Rules Ruled, by Cliff Stoll, who is a brilliant engineer, physicist and educator, and whose TED talk is both hilarious and inspiring. However, it is neither Cliff Stoll nor the pocket protector that I want to talk about. (The latter is just a thing that, despite making the engineers look very geeky and very attractive in the author's opinion, protected their shirt pockets from being worn out so quickly by the numerous engineering essentials they carried in there.) Rather, it is the ruler-looking object in the engineer's hand that I want to talk about: the slide rule.
Born into the digital and automatic age, hardly anyone of our generation ever gets to know anything that is analog or manual. Not only that, we hate to have anything to do with stuff that is analog. We have been cultured to base our life and happiness on gadgets that allow us to access the world and all its information at the finger tip. We don't realize that, in fact, things that are analog laid the foundation for our modern society and ushered us into the digital age. The slide rule is one of "those analog things". In the pre-computer age, it was one of those ingenious tools that enabled engineers, mathematicians and physicists to do calculations, and because of it, we have witnessed the erection of skyscrapers, the harnessing of hydroelectric power, the building of subways and advancement in aeronautics. Beyond that, it helped us win WWII and sent astronauts into space (as a matter of fact, a Pickett 600-T Dual base Log Log slide rule was carried by the Apollo crew to space and the moon should the on-board computers fail), and eventually it hastened its own demise.
# A More Mathematical Explanation
Note: understanding of this explanation requires algebra.
## What Good are Logarithmic table and Logarithmic Scale ?
If you are new to logarithms, check out Logarithms before you proceed.
Imagine you work at a petroleum company and your boss wants you to calculate the mass, $m$, of the petroleum that is stored in a cubic tank. You have no calculator and no knowledge of the existence of the logarithmic table, and you are given the density $\rho = 893 kg/m^3$ and the side of the cubic tank, $s = 83.52 m$. Naturally, you will use $m = \rho \times s^3$, and before you even start to realize that this calculation is really tedious, your boss asks you to calculate how much profit the company can make given that 67% of the petroleum can be distilled into gasoline at 11 dollars per kilo, subsequently shipped at 14 dollars per kilo and sold at 54 dollars per kilo (in some really weird country where they sell by the kilo). So now things are getting really bad. There is no way you are going to calculate that by hand! Then your boss comes back with some information he forgot to tell you. The petroleum cannot be distilled at once because the distillation plant works only at a certain rate, and the finished gasoline cannot be shipped at once and thus requires additional storage time, which costs money. Before he finishes with the whole story, you have quit your job.
Now you should really appreciate the logarithmic table. Say you have one, and recall that profit, $p$, is the difference between sales and cost. Hence, $p = m \times 67\% \times (54-11-14)$. Taking the logarithm of both sides yields
\begin{align} \log p & = \log (m \times 67\% \times (54-11-14)) \\ & = \log (893 \times 83.52^3 \times 67\% \times 29) \\ & = \log 8.93 + 2 + 3\log 8.352 + 3 + \log.67 + \log2.9 + 1 \end{align}
Checking the logarithmic table, you find that the right side of the equation is 10.0047, and hence $\log p = 10.0047$. To find $p$, you go back to the table again and find that $p \approx 1.011 \times 10^{10}$ dollars. Logarithms made your life so much easier. Though this situation is hypothetical, engineers and scientists in the pre-computer and pre-digital-calculator age faced similar problems every day. You can imagine the amount of work that would be required if there were no logarithms.
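The decomposition into table lookups can be checked with base-10 logarithms (a sketch; variable names are mine):

```python
import math

rho, s = 893, 83.52           # density (kg/m^3) and tank side (m)
p = rho * s**3 * 0.67 * 29    # profit computed directly

# The same value via the decomposition into individual table lookups:
log_p = (math.log10(8.93) + 2         # log 893
         + 3 * math.log10(8.352) + 3  # log 83.52^3
         + math.log10(0.67)
         + math.log10(2.9) + 1)       # log 29

print(round(log_p, 4), round(math.log10(p), 4))  # both ~ 10.0047
```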
If you have fully understood the use of logarithms, you will have realized that every multiplication requires checking the table three times. For example, if we want to calculate $2 \times 4.5$, we have to look up $\log2$ and $\log4.5$, add them together and then refer back to the table. Is there any way we could do without the table?
There is a way, and that is where the logarithmic scale comes in. We intend to create a scale whose marked values sit at distances equal to their logarithms. That is to say, we mark a point on the ruler "1" since $\log1 = 0$, place "2" a distance $\log2 = 0.3010$ from "1", "3" a distance $\log3 = 0.4771$ from "1", and so on. Thus we have the scale below, with scale L as a linear scale and D as the new logarithmic scale.
Further, we can subdivide the scale using the same means and thus we have the following scale.
Below is a more finely divided scale.
Thus, with two such scales, we can find $2 \times 4.5$ by adding the lengths for 2 and 4.5 on the scale, which gives us 9. The operating principles are explained later.
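A tiny sketch of why sliding the scales works: positions on the D scale are base-10 logarithms, so adding lengths multiplies the readings (function names are mine):

```python
import math

def position(value):
    # Distance of a mark from "1" on the logarithmic D scale (one decade = 1.0)
    return math.log10(value)

# Sliding one scale against the other adds lengths, which multiplies the readings
combined = position(2) + position(4.5)
result = 10 ** combined

print(result)  # ~ 9.0
```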
## History of Slide Rule and Its Construction
### A Bit of History on Slide Rule
Before the slide rule (and, of course, logarithms for that matter), there was the sector, whose inventor is debated. It was able to solve problems in proportions, trigonometry, multiplication and division, and even squaring and taking square roots. It was in popular use until the mid 19th century.
John Napier
Mirifici Logarithmorum Canonis Constructio
John Napier's invention of logarithms was really born out of necessity. With the increasing amount of data and advancement in the natural sciences, mathematicians and scientists were constantly occupied by huge stacks of calculations, which not only took up an immense amount of time but also impeded further progress. This was especially true for the study of astronomy. Astronomers usually had to observe and collect data for many hours at night and then perform laboriously repetitive calculations (which were very prone to mistakes and errors) during the daytime. This presented an imperative need for a new way of calculating that was fast and reliable. John Napier quietly stepped up to the challenge and excited the community of scientists and mathematicians in 1614 with his publication of Mirifici Logarithmorum Canonis Descriptio, in which he introduced logarithms and came up with the logarithmic table. The history and development is a fascinating topic, and hence I have dedicated an individual page, The Logarithms, Its Discovery and Development, to it. In it there is also more detailed and comprehensive information on the logarithmic scale. Now, the important things about logarithms are some of their very useful properties, some of which are known even by 9th graders, namely $\log(xy) = \log(x) + \log(y)$ and $\log \frac{x}{y} = \log(x) - \log(y)$. That means if we have an operation such as $x \times y$ on the linear scale, we can transform it into $x+y$ on the logarithmic scale. Henry Briggs, England's most eminent mathematician of the time, traveled to meet Napier and introduced himself: "My Lord, I have undertaken this long journey purposely to see your person, and to know by what engine of wit or ingenuity you came first to think of this most excellent help unto astronomy...I wonder why nobody else found it out before, when, now being known, it appears so easy." [2]
In 1620, Edmund Gunter came up with the first application of logarithms: he put a logarithmic scale on a rule together with lines of squares, cubes, tangents, sines and cosines. It was some two feet long and one and a half inches wide, and to use it one had to transfer distances with a compass (or a pair of dividers or calipers). It was cumbersome, but it worked.
All of this was happening at an exciting time, just before the Enlightenment, when Galileo Galilei was making discoveries about the universe with his homemade telescope and Johannes Kepler was working out the patterns of planetary motion from the huge amount of data he had collected. A few decades after the invention of logarithms, Sir Isaac Newton would be born and change our view of the universe completely.
William Oughtred
Peter Mark Roget
In 1622, William Oughtred made one key improvement on Gunter's scale: he placed two such scales side by side and read the distance relationship directly between them, doing away with the dividers. He also developed a circular slide rule.
In 1675, Sir Isaac Newton solved cubic equations using three logarithmic scales and suggested the use of a cursor. Two years later, Henry Coggeshall adapted and popularized the timber and carpenter's rule, the first specialized application of the slide rule.
In the following years there were many improvements to the slide rule. One major improvement was the increase in the accuracy of the scale graduations. The next major step came in 1815, when Peter Roget invented the log-log scale, which enabled the calculation of roots and powers of any number or fraction; this proved extremely useful some fifty years later, when advances in engineering and physics required exactly such operations. In 1851, a French artillery officer, Amedee Mannheim, standardized a set of four scales for the most common calculation problems, and his design became the foundation for nearly all slide rules produced in subsequent years, until portable pocket calculators took over the slide rule's territory and rendered it obsolete.[3]
## Operating Principles
### An Analogy
Say we have two linear scales $S$ and $B$. A number $x$ on $S$ will be referred to as $S-x$, and a number $y$ on $B$ as $B-y$. The statement "Move the top ruler to the right until its left end is over the number $y$ on the bottom ruler" will simply be stated as "Place $S-0$ over $B-y$".
Now, say we want to do the addition $4+3$. How would we use the two rulers for such an addition? We place $S-0$ over $B-4$, and the number opposite $S-3$ on $B$ is the answer, which is $7$.
What about $5+7$?
The number opposite $S-7$ falls beyond the end of scale $B$. What should we do then? We could extend the $B$ scale, as shown below.
Then the answer is 12. But suppose our original scales are 10 inches long; making them longer would make the rule inconveniently cumbersome. So what is the next solution? The more observant of you will notice that when $S-0$ is over $B-5$, $S-(1)0$ is over $B-(1)5$; if $S-0$ were over $B-7$, then $S-(1)0$ would be over $B-(1)7$. This should not come as a surprise, since adding any single-digit number $x$ to 10 gives $(1)x$: the right-hand digit does not change, so all we have to pay attention to is the left-hand digit. Hence, for the previous case, we can actually place $S-(1)0$ over $B-5$, and the number opposite $S-7$ is then the right-hand digit of the answer. Keeping track of the left-hand digit, we get the answer 12, as shown below.
What about $24+58$? Then we need a more finely divided scale, with ten more divisions between each pair of consecutive numbers. What we are going to do is compute $2.4+5.8$ and then move the decimal place afterward. Refer to the diagram below: place $S-0$ over $B-2.4$, and the number opposite $S-5.8$ is $8.2$. Moving the decimal point to the right, we have $82$, which is the real answer. Notice that exactly the same computation is carried out even if the desired calculation is $240+580$ or $2400+5800$; all we have to do is pay attention to the decimal place. Although there is no limit to subdivision in principle, there is a limit to how finely we can read the subdivisions. Not only that, the markings on the ruler have perceptible thickness themselves, so there is a practical limit to how many subdivisions we can put on the ruler. We are therefore condemned to inexactness if we desire to calculate something like $18578+8473954$, in which case we have to approximate.[4]
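The scale-down, add, scale-back-up bookkeeping just described can be mimicked in a few lines of Python (the function name `rule_add` and the scaling rule are my own illustration, not from the page):

```python
import math

def rule_add(a, b):
    """Add two positive numbers the way the 10-unit addition rule does:
    scale both operands down so they fit on the rule, add the scaled
    readings, then restore the decimal point by hand."""
    # pick a power of ten so the larger operand lands below 10 on the rule
    k = max(0, math.ceil(math.log10(max(a, b))) - 1)
    scale = 10 ** k
    reading = a / scale + b / scale   # what we read off the rule
    return reading * scale            # move the decimal point back

print(rule_add(2.4, 5.8))    # ≈ 8.2
print(rule_add(24, 58))      # ≈ 82
print(rule_add(2400, 5800))  # ≈ 8200
```

The same rule reading $8.2$ serves $24+58$, $240+580$ and $2400+5800$; only the restored decimal point differs.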
The reverse process is subtraction.
Now what is the point of this? We all know how to do addition and subtraction without an addition rule. But these operations have a close connection to the operations of the slide rule, just on a different scale. Knowing that 99% of those who read this page won't even own a slide rule, I have found a Virtual Slide Rule on the internet, so that you can play around with it and try out the operations described below.
### Multiplication
Using the principle $\log(xy) = \log(x) + \log(y)$, we can transform multiplication into addition on a logarithmic scale.
Example: $578 \times 9849$
Use $5.78 \times 9.849 \times 10^5$. Place $C-10$ over $D-9.85$ and read the number opposite $C-5.78$, which is approximately $5.69$. Note that the slide rule does not keep track of the decimal place, so we have to keep track of it ourselves. Since $5.78 \approx 6$ and $9.849 \approx 10$, the answer is a little less than $60$; the reading therefore means $56.9$. Multiplying $56.9$ by $10^5$, the answer for $578 \times 9849$ is approximately $5690000$. A calculator gives $5692722$, which is not far from our approximation.
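The whole multiplication procedure (normalize the operands, read the C/D scales to roughly three figures, track the decimal point by hand) can be sketched in Python; the function names and the three-figure rounding are my own assumptions about reading accuracy:

```python
import math

def normalize(x):
    """Split x > 0 into a mantissa m in [1, 10) and an exponent e, x = m * 10**e."""
    e = math.floor(math.log10(x))
    return x / 10 ** e, e

def slide_multiply(x, y, digits=3):
    """Multiply the way a slide rule does: combine mantissas on the C/D
    scales (read to ~3 significant figures) and track the decimal point
    by hand via the exponents."""
    mx, ex = normalize(x)
    my, ey = normalize(y)
    m = mx * my                     # the C/D-scale reading
    e = ex + ey
    if m >= 10:                     # reading wrapped past the end of the rule
        m, e = m / 10, e + 1
    reading = round(m, digits - 1)  # a slide rule only gives a few figures
    return reading * 10 ** e

print(slide_multiply(578, 9849))  # ≈ 5.69e6 (exact answer: 5692722)
```

The `m >= 10` branch is the wrap-around trick used in the example above, where $C-10$ is placed over $D-9.85$ instead of $C-1$.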
### Division
Using $\log \frac{x}{y} = \log(x) - \log(y)$, we can transform division into subtraction on a logarithmic scale.
Example: $578 \div 9849$.
As before, we use $5.78 \div 9.849 \times 10^{-1}$ and try to employ the subtraction principle from before. But you will quickly realize that we cannot do $5.78 - 9.849$ on the slide rule. Therefore, we compute $5.78 \times \frac {1}{9.849}$ instead: we use the CI scale to find $\frac {1}{9.849}$ and then add that number's logarithmic length to $5.78$ on the slide rule.
We first look for $\frac {1}{9.849}$.
Read the number opposite $CI-9.849$ on the $C$ scale, which is approximately $1.015$. Keeping track of the decimal, the reciprocal is $1.015 \times 10^{-1}$.

Now we carry out the operation $5.78 \times 1.015 \times 10^{-1}$.

Place $C-1$ over $D-5.78$ and read the number opposite $C-1.015$, which is approximately $5.87$. Keeping track of the decimal, $5.78 \times 1.015 \times 10^{-1} \approx 0.587$; a calculator gives $5.78 \div 9.849 = 0.58686\ldots$, so the reading is quite close. Applying the remaining factor of $10^{-1}$, the answer to $578 \div 9849$ is approximately $0.0587$.
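The reciprocal trick can be sketched the same way; `sig_round` below mimics reading a scale to a few significant figures, and all names are illustrative, not from the page:

```python
import math

def sig_round(x, figures=3):
    """Round x to a given number of significant figures,
    mimicking how precisely a scale can be read."""
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x)))
    return round(x, figures - 1 - e)

def slide_divide(x, y):
    """Divide as the text does: read 1/y off the CI scale,
    then multiply on the C/D scales."""
    recip = sig_round(1.0 / y, 4)   # the CI-scale reading for 1/y
    return sig_round(x * recip, 3)  # the C/D multiplication reading

print(slide_divide(578, 9849))  # ≈ 0.0587 (calculator: 0.058686...)
```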
### Squaring
To do squaring, we need a new scale that corresponds to the old C and D scales.

After we have fixed those points, we can recalibrate the B scale logarithmically. We can then square a number by choosing it on the C scale and reading the result on the B scale.
Example: $1965^2$
We use $1.965 \times 10^3$. Find the number opposite $C-1.965$ on the B scale, which is approximately $3.86$. Since $1.965 \approx 2$ and $2 \times 2 = 4$, we don't have to move the decimal point; the answer is therefore approximately $3.86 \times 10^6$. A calculator gives $3.861225 \times 10^6$, which is really close to our approximation.
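As a sketch, squaring amounts to: set the mantissa on C, read its square on the two-cycle B scale, and double the exponent by hand (three-figure reading assumed; names are my own):

```python
import math

def slide_square(x):
    """Square on the B scale: the B scale packs two logarithmic cycles
    into the length of one C-scale cycle, so opposite m on C sits m**2."""
    e = math.floor(math.log10(x))
    m = x / 10 ** e                 # C-scale setting, in [1, 10)
    reading = round(m * m, 2)       # B-scale reading, ~3 figures
    return reading * 10 ** (2 * e)  # track the decimal point by hand

print(slide_square(1965))  # ≈ 3.86e6 (exact: 3861225)
```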
### Taking Square Root
Taking the square root is the reverse process of squaring: we choose a number on the B scale and read the answer on the C scale. However, there is a trick to this process. The B scale is divided into two halves. The left half is used to find the square root of numbers with an odd number of digits, e.g. 1, 345, 48529, etc.; the right half is used for numbers with an even number of digits. Why is that, you may ask? It is easy to see. Note that the B scale consists of two identical parts, the left side running from 1 to 10 and the right side running from 10 to 100. Since the full length of the B scale represents 100, the left side can denote $x$ with $1 < x < 10$, or $1 \times 10^2 < x < 10 \times 10^2$, or $1 \times 10^4 < x < 10 \times 10^4$, etc., and the right side can denote $y$ with $10 < y < 100$, or $10 \times 10^2 < y < 100 \times 10^2$, or $10 \times 10^4 < y < 100 \times 10^4$, etc. That is to say, we use the left side to find the roots of numbers with an odd number of digits and the right side for numbers with an even number of digits.
Example: $\sqrt {4850}$.
Use $4.85 \times 10^3$ and the right half of the B scale. Find the number opposite $B-4.85$ (on the right half) on the C scale, which is approximately $6.96$. Since $70^2=4900$, the answer is thus approximately $69.6$. A calculator gives the answer $69.6419\ldots$.
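The digit-parity rule translates directly into code: when the exponent is odd (equivalently, the digit count is even), shift the mantissa into the 10-100 half of the B scale first. A sketch under the same three-figure reading assumption, with names of my own choosing:

```python
import math

def slide_sqrt(x):
    """Take a square root on the two-cycle B scale: odd digit counts use
    the left half (1-10), even digit counts the right half (10-100),
    exactly as the digit-parity rule in the text says."""
    e = math.floor(math.log10(x))
    m = x / 10 ** e              # mantissa in [1, 10)
    if e % 2:                    # even number of digits
        m, e = m * 10, e - 1     # move to the 10-100 half of B
    reading = round(math.sqrt(m), 2)  # C-scale reading, ~3 figures
    return reading * 10 ** (e // 2)

print(slide_sqrt(4850))  # ≈ 69.6 (calculator: 69.6419...)
```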
### Cubing
Similar to squaring, we need a new cube scale that corresponds to the old C and D scales.

Recalibrating this scale logarithmically gives us the new K scale.
Example: $783^3$
We use $7.83 \times 10^2$. Read the number on the K scale opposite $D-7.83$, which is approximately $480$. Keeping track of the decimal point ($10^{2 \times 3} = 10^6$), we have the answer $480000000$. A calculator gives $480048687$, which is really close.
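The same bookkeeping, sketched for the three-cycle K scale (three significant figures assumed; names are illustrative):

```python
import math

def slide_cube(x):
    """Cube via the K scale: opposite m on D sits m**3 on K,
    since K packs three log cycles into one D-scale length."""
    e = math.floor(math.log10(x))
    m = x / 10 ** e                              # D-scale setting, in [1, 10)
    k = m ** 3                                   # K-scale reading, 1 to 1000
    k = round(k, 2 - math.floor(math.log10(k)))  # ~3 significant figures
    return k * 10 ** (3 * e)                     # decimal point by hand

print(slide_cube(783))  # ≈ 4.8e8 (exact: 480048687)
```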
### Taking Cube Root
The K scale is separated into three identical sections: the left section runs from 1 to 10, the middle from 10 to 100, and the right from 100 to 1000. The same argument as in the square-root section applies. Since the full length of the K scale represents 1000, the left section can denote $x$ with $1 < x < 10$, or $1 \times 10^3 < x < 10 \times 10^3$, or $1 \times 10^6 < x < 10 \times 10^6$, etc.; the middle section can denote $y$ with $10 < y < 100$, or $10 \times 10^3 < y < 100 \times 10^3$, or $10 \times 10^6 < y < 100 \times 10^6$, etc.; and the right section can denote $z$ with $100 < z < 1000$, or $100 \times 10^3 < z < 1000 \times 10^3$, or $100 \times 10^6 < z < 1000 \times 10^6$, etc. Therefore, we use the left section for numbers with 1, 4, 7, ... digits, the middle section for numbers with 2, 5, 8, ... digits, and the right section for numbers with 3, 6, 9, ... digits.
Example: $\sqrt [3] {783}$
Read the number opposite $K-783$ on the D scale, which is approximately $9.22$. A calculator gives the answer $9.21695\ldots$.
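And the cube-root sketch, choosing the K-scale section from the digit count modulo 3, just as described above (names are my own):

```python
import math

def slide_cbrt(x):
    """Cube root via the K scale: 1, 4, 7, ... digits use the left section,
    2, 5, 8, ... the middle, 3, 6, 9, ... the right; read the answer on D."""
    e = math.floor(math.log10(x))
    m = x / 10 ** e                        # mantissa in [1, 10)
    m, e = m * 10 ** (e % 3), e - e % 3    # shift into the right K section (1-1000)
    reading = round(m ** (1 / 3), 2)       # D-scale reading, ~3 figures
    return reading * 10 ** (e // 3)

print(slide_cbrt(783))  # ≈ 9.22 (calculator: 9.21695...)
```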
## Zenith and Downfall
I touched on this in the introduction, but I will discuss it in a little more detail here.
# Teaching Materials
There are currently no teaching materials for this page.
# About the Creator of this Image
International Business Machines (IBM) is a multinational computer, technology and IT consulting corporation headquartered in Armonk, North Castle, New York, United States. IBM is the world's fourth largest technology company and the second most valuable by global brand (after Coca-Cola). IBM is one of the few information technology companies with a continuous history dating back to the 19th century. IBM manufactures and sells computer hardware and software (with a focus on the latter), and offers infrastructure services, hosting services, and consulting services in areas ranging from mainframe computers to nanotechnology.
# Notes
1. Stoll, 2006
2. Sylvester, 1886
3. The Oughtred Society
4. Asimov, 1965, p. 19-38
# References
1. Asimov, I. (1965). An Easy Introduction to the Slide Rule. New York: Fawcett Publications, inc.
2. Stoll, C. (2006 , May). When Slide Rules Ruled. Scientific American Magazine , pp. 80-87.
3. Sylvester, J. J. (1886). On The Method of Reciprocants As Containing An Exhaustive Theory of The Singularities of Curves. Nature , 222-223.
4. The Oughtred Society. (n.d.). Slide Rule History. Retrieved from The Oughtred Society: http://www.oughtred.org/history.shtml
At least finish the section on calculating $a^x$ and $\sqrt[x]{a}$, where $x$ is arbitrary. I lost patience reading about the operating principles behind this and got too preoccupied with The Logarithms, Its Discovery and Development. Other than that, you can add anything that you deem necessary.
# Z gamma production and limits on anomalous ZZ gamma and Z gamma gamma couplings in pp collisions at root s=1.96 TeV
@article{Collaboration2007ZGP,
title={Z gamma production and limits on anomalous ZZ gamma and Z gamma gamma couplings in pp collisions at root s=1.96 TeV},
author={D0 Collaboration and V. Abazov and E. al.},
journal={Physics Letters B},
year={2007},
volume={653},
pages={378-386}
}
• Published 2007
• Physics
• Physics Letters B
We present a study of $ee\gamma$ and $\mu\mu\gamma$ events using 1109 (1009) pb$^{-1}$ of data in the electron (muon) channel, respectively. These data were collected with the D0 detector at the Fermilab Tevatron $p\bar{p}$ collider at $\sqrt{s} = 1.96$ TeV. Having observed 453 (515) candidates in the $ee\gamma$ ($\mu\mu\gamma$) final state, we measure the $Z\gamma$ production cross section for a photon with transverse energy $E_T > 7$ GeV, separation between the photon and leptons $\Delta R_{\ell\gamma} > 0.7$, and invariant mass of the di-lepton pair $M_{ee}$…
21 Citations
#### References
1. Probing the weak-boson sector in Z gamma production at hadron colliders. Physical Review D: Particles and Fields (1993).