url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
http://mathhelpforum.com/advanced-statistics/67949-i-need-help-how-set-up.html
|
# Math Help - I need HELP on how to set this up!
1. ## I need HELP on how to set this up!
A manufacturer of bolts wants to determine the length of the manufactured bolts. A random sample of 160 bolts shows that the sample mean length is 2.9 inches and that the sample standard deviation is 0.1 inches.
Part (a) (5 points)
Calculate a 95 percent confidence interval for the mean length of all the bolts (i.e., for the population mean).
Part (b) (2 points)
Consider the following statement: “It is always better to construct a 99-percent confidence interval instead of a 95-percent confidence interval.” Do you agree or disagree with this statement? Justify your answer
2. Originally Posted by nac123
A manufacturer of bolts wants to determine the length of the manufactured bolts. A random sample of 160 bolts shows that the sample mean length is 2.9 inches and that the sample standard deviation is 0.1 inches.
Part (a) (5 points)
Calculate a 95 percent confidence interval for the mean length of all the bolts (i.e., for the population mean).
Part (b) (2 points)
Consider the following statement: “It is always better to construct a 99-percent confidence interval instead of a 95-percent confidence interval.” Do you agree or disagree with this statement? Justify your answer
Part A:
For any confidence interval, call value of the confidence level $C$ (such as 90% or 95%, as in your case). Then the probability that the true population mean (called $\mu$) lies within a given interval with a confidence level of $C$ is:
$P(\bar{x} - z \frac{\sigma}{\sqrt{n}} \leq \mu \leq \bar{x} + z \frac{\sigma}{\sqrt{n}})$ where $\bar{x}, z, \sigma, n$ are your mean, z-score, standard deviation, and sample size, respectively.
So your confidence interval is between the points
$\bar{x} - z \frac{\sigma}{\sqrt{n}}, \bar{x} + z \frac{\sigma}{\sqrt{n}}$
To find a given confidence interval, plug in those values. In your case, [tex]\bar{x} = 2.9, n = 160, \sigma = 0.1[tex]. The z-score will be based on your confidence level. Look them up in a z-table. A 90% confidence interval will exclude 5% of the normal distribution on neither side (a total of 10% excluded). A 95% confidence interval will exclude 2.5% (for a total of 5% excluded).
Part B:
Consider what a confidence interval means. A 90% confidence interval means that repeated trials using those values of $z, \sigma, n$ will give you intervals that contain the true mean 90% of the time. A 95% confidence will do the same thing 95% of the time. But for a 95% confidence interval, the z-value is higher, so what happens to the width of the interval?
The answer is that both have their pros and cons - you are essentially trading off how certain you are about where the mean is with precise of a net you are casting. The higher the confidence level, the wider the interval is. It's sort of like: would you prefer "I am 95% that this guy is in America" vs. "I am 80% sure this guy is in New York?"
|
2015-05-04 11:12:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.636574923992157, "perplexity": 500.3730034087963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430454040787.28/warc/CC-MAIN-20150501042040-00055-ip-10-235-10-82.ec2.internal.warc.gz"}
|
http://www.math.ubc.ca/Dept/Events/colloquia/ChenNov7.html
|
COLLOQUIUM
3:00 p.m., Wednesday (November 7, 2007)
WMAX 110
## Xiuxiong Chen University of Wisconsin
### A recent update on the existence of extremal Kaehler metrics in Kaehler surfaces
Abstract: In this talk, we will first give a brief tour" of Kaehler geometry and discuss some key results obtained in recent years. For instance, the uniqueness of extremal Kaehler metric as well as the existence of a sharp lower bound of the Calabi energy in any Kaehler class. We then discuss the existence of extremal Kaehler metrics in Fano surface via continuous deformation method. In particular, we prove the existence of a new Einstein metric in CP2 \sharp 2 \bar{CP}2 with positive scalar curvature. Combining with famous theorem of G. Tian, this gives a final and affirmative answer to a long standing question: does every Fano surface admits an Einstein metric with positive scalar curvature?
Refreshments will be served at 2:45 p.m. (PIMS Lounge).
|
2013-06-18 20:50:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5791738629341125, "perplexity": 728.3638819029117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707186142/warc/CC-MAIN-20130516122626-00058-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://stackoverflow.com/questions/6921105/get-file-name-from-a-path-string-in-c-sharp
|
# Get file name from a path string in C#
I program in WPF C#. I have e.g. the following Path:
C:\Program Files\hello.txt
and I want to output "hello" from it.
The path is a string extract from database. Currently I'm using the following method (split from path by '\' then split again by a '.'):
string path = "C:\\Program Files\\hello.txt";
string[] pathArr = path.Split('\\');
string[] fileArr = pathArr.Last().Split('.');
string fileName = fileArr.Last().ToString();
It works, but I believe there should be shorter and smarter solution to that. Any idea?
• In my system, Path.GetFileName("C:\\dev\\some\\path\\to\\file.cs") is returning the same string and not converting it to "file.cs" for some reason. If I copy/paste my code into an online compiler (like rextester.com), it works...? – jbyrd Feb 21 '18 at 21:35
Path.GetFileName
Path.GetFileNameWithoutExtension
The Path class is wonderful.
try
fileName = Path.GetFileName (path);
http://msdn.microsoft.com/de-de/library/system.io.path.getfilename.aspx
try
System.IO.Path.GetFileNameWithoutExtension(path);
demo
string fileName = @"C:\mydir\myfile.ext";
string path = @"C:\mydir\";
string result;
result = Path.GetFileNameWithoutExtension(fileName);
Console.WriteLine("GetFileNameWithoutExtension('{0}') returns '{1}'",
fileName, result);
result = Path.GetFileName(path);
Console.WriteLine("GetFileName('{0}') returns '{1}'",
path, result);
// This code produces output similar to the following:
//
// GetFileNameWithoutExtension('C:\mydir\myfile.ext') returns 'myfile'
// GetFileName('C:\mydir\') returns ''
https://msdn.microsoft.com/en-gb/library/system.io.path.getfilenamewithoutextension%28v=vs.80%29.aspx
• It seems Path.GetFileNameWithoutExtension() is not working with a file extension > 3 characters. – Nolmë Informatique Aug 19 '18 at 21:12
You can use Path API as follow:
var filenNme = Path.GetFileNameWithoutExtension([File Path]);
var fileNameWithoutExtension = Path.GetFileNameWithoutExtension(path);
Path.GetFileNameWithoutExtension
Try this:
string fileName = Path.GetFileNameWithoutExtension(@"C:\Program Files\hello.txt");
This will return "hello" for fileName.
string Location = "C:\\Program Files\\hello.txt";
string FileName = Location.Substring(Location.LastIndexOf('\\') +
1);
• +1 since this might be Helpful in the case wherein this works as a Backup wherein the file name contains invalid characters [ <, > etc in Path.GetInvalidChars()]. – bhuvin Sep 8 '15 at 6:17
• This is actually quite useful when working with path on UNIX ftp servers. – s952163 Jun 3 at 22:59
Try this,
string FilePath=@"C:\mydir\myfile.ext";
string Result=Path.GetFileName(FilePath);//With Extension
string Result=Path.GetFileNameWithoutExtension(FilePath);//Without Extension
• So exactly like the highest voted answer says? – CodeCaster Jun 27 '17 at 7:44
• You used the exact same methods as mentioned in the highest voted answer. – CodeCaster Jun 27 '17 at 12:39
Namespace: using System.IO;
//use this to get file name dynamically
string filelocation = Properties.Settings.Default.Filelocation;
//use this to get file name statically
//string filelocation = @"D:\FileDirectory\";
string[] filesname = Directory.GetFiles(filelocation); //for multiple files
Your path configuration in App.config file if you are going to get file name dynamically -
<userSettings>
<ConsoleApplication13.Properties.Settings>
<setting name="Filelocation" serializeAs="String">
<value>D:\\DeleteFileTest</value>
</setting>
</ConsoleApplication13.Properties.Settings>
</userSettings>
string filepath = "C:\\Program Files\\example.txt";
FileVersionInfo myFileVersionInfo = FileVersionInfo.GetVersionInfo(filepath);
FileInfo fi = new FileInfo(filepath);
Console.WriteLine(fi.Name);
//input to the "fi" is a full path to the file from "filepath"
//This code will return the fileName from the given path
//output
//example.txt
|
2019-12-06 03:40:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18962830305099487, "perplexity": 12039.691619876945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540484477.5/warc/CC-MAIN-20191206023204-20191206051204-00196.warc.gz"}
|
http://www.gradesaver.com/textbooks/math/algebra/college-algebra-7th-edition/chapter-1-equations-and-graphs-section-1-10-modeling-variation-1-10-exercises-page-164/16
|
## College Algebra 7th Edition
$S=kr^{2}\theta^{2}$
Since $S$ is directly proportional (jointly) to the square of $r$ and the square of $\theta$, we have:: $S=kr^{2}\theta^{2}$ (and $k$ is a constant.)
|
2018-04-24 03:33:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9847777485847473, "perplexity": 556.1470869258003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946453.89/warc/CC-MAIN-20180424022317-20180424042317-00223.warc.gz"}
|
https://nebusresearch.wordpress.com/tag/a-to-z/
|
## My All 2020 Mathematics A to Z: John von Neumann
Mr Wu, author of the Singapore Maths Tuition blog, suggested another biographical sketch for this year of biographies. Once again it’s of a person too complicated to capture in full in one piece, even at the length I’ve been writing. So I take a slice out of John von Neumann’s life here.
# John von Neumann.
In March 1919 the Hungarian People’s Republic, strained by Austria-Hungary’s loss in the Great War, collapsed. The Hungarian Soviet Republic, the world’s second Communist state, replaced it. It was a bad time to be a wealthy family in Budapest. The Hungarian Soviet lasted only a few months. It was crushed by the internal tension between city and countryside. By poorly-fought wars to restore the country’s pre-1914 borders. By the hostility of the Allied Powers. After the Communist leadership fled came a new Republic, and a pogrom. Europeans are never shy about finding reasons to persecute Jewish people. It was a bad time to be a Jewish family in Budapest.
Von Neumann was born to a wealthy, (non-observant) Jewish family in Budapest, in 1903. He acquired the honorific “von” in 1913. His father Max Neumann was honored for service to the Austro-Hungarian Empire and paid for a hereditary appellation.
It is, once again, difficult to encompass von Neumann’s work, and genius, in one piece. He was recognized as genius early. By 1923 he published a logical construction for the counting numbers that’s still the modern default. His 1926 doctoral thesis was in set theory. He was invited to lecture on quantum theory at Princeton by 1929. He was one of the initial six mathematics professors at the Institute for Advanced Study. We have a thing called von Neumann algebras after his work. He gave the first rigorous proof of an ergodic theorem. He partly solved one of Hilbert’s problems. He studied non-linear partial differential equations. He was one of the inventors of the electronic computer as we know it, both the theoretical and the practical ideas.
And, the sliver I choose to focus on today, he made game theory into a coherent field.
The term “game theory” makes it sound like a trifle. We don’t call “genius” anyone who comes up with a better way to play tic-tac-toe. The utility of the subject appears when we notice what von Neumann thought he was writing about. Von Neumann’s first paper on this came in 1928. In 1944 he with Oskar Morgenstern published the textbook Theory Of Games And Economic Behavior. In Chapter 1, Section 1, they set their goals:
The purpose of this book is to present a discussion of some fundamental questions of economic theory which require a treatment different from that which they have found thus far in the literature. The analysis is concerned with some basic problems arising from a study of economic behavior which have been the center of attention of economists for a long time. They have their origin in the attempts to find an exact description of the endeavor of the individual to obtain a maximum of utility, or in the case of the entrepreneur, a maximum of profit.
Somewhere along the line von Neumann became interested in how economics worked. Perhaps because his family had money. Perhaps because he saw how one could model an “ideal” growing economy — matching price and production and demand — as a linear programming question. Perhaps because economics is a big, complicated field with many unanswered questions. There was, for example, little good idea of how attendees at an auction should behave. What is the rational way to bid, to get the best chances of getting the things one wants at the cheapest price?
In 1928, von Neumann abstracted all sorts of economic questions into a basic model. The model has almost no features, so very many games look like it. In this, you have a goal, and a set of options for what to do, and an opponent, who also has options of what to do. Also some rounds to achieve your goal. You see how this abstract a structure describes many things one could do, from playing Risk to playing the stock market.
And von Neumann discovered that, in the right circumstances, you can find a rational way to bid at an auction. Or, at least, to get your best possible outcome whatever the other person does. The proof has the in-retrospect obviousness of brilliance. von Neumann used a fixed-point theorem. Fixed point theorems came to mathematics from thinking of functions as mappings. Functions match elements in a set called the domain to those in a set called the range. The function maps the domain into the range. If the range is also the domain? Then we can do an iterated mapping. Under the right circumstances, there’s at least one point that maps to itself.
In the light of game theory, a function is the taking of a turn. The domain and the range are the states of whatever’s in play. In this type of game, you know all the options everyone has. You know the state of the game. You know what the past moves have all been. You know what you and your opponent hope to achieve. So you can predict your opponent’s strategy. And therefore pick a strategy that gets you the best option available given your opponent is trying to do the same. So will your opponent. So you both end up with the best attainable outcome for the both of you; this is the minimax theorem.
It may strike you that, given this, the game doesn’t need to be played anymore. Just pick your strategy, let your opponent pick one, and the winner is determined. So it would, if we played our strategies perfectly, and if we didn’t change strategies mid-game. I would chuckle at the mathematical view that we study a game to relieve ourselves of the burden of playing. But I know how many grand strategy video games I have that I never have time to play.
After this 1928 paper von Neumann went on to other topics for about a dozen years. Why create a field of mathematics and then do nothing with it? For one, we see it as a gap only because we are extracting, after the fact, this thread of his life. He had other work, particularly in quantum mechanics, operators, measure theory, and lattice theory. He surely did not see himself abandoning a new field. He saw, having found an interesting result, new interesting questions..
But Philip Mirowski’s 1992 paper What Were von Neumann and Morgenstern Trying to Accomplish? points out some context. In September 1930 Kurt Gödel announced his incompleteness proof. Any logical system complex enough has things which are true and can’t be proven. The system doesn’t have to be that complex. Mathematical rigor must depend on something outside mathematics. This shook von Neumann. He would say that after Gödel published, von Neumann never bothered reading another paper on symbolic logic. Mirowski believes this drove von Neumann into what we now call artificial intelligence. At least, into mathematics that draws from empirical phenomena. von Neumann needed time to recover from the shock. And needed the prodding of Morgenstern to return to economics.
After publishing Theory Of Games And Economic Behavior the book … well, Mirowski calls it more “cited in reverence than actually read”. But game theory, as a concept? That took off. It seemed to offer a way to rationalize the world.
von Neumann would become a powerful public intellectual. He would join the Manhattan Project. He showed that the atomic bomb would be more destructive if it exploded kilometers above the ground, rather than at ground level. He was on the target selection committee which, ultimately, slated Hiroshima and Nagasaki for mass murder. He would become a consultant for the Weapons System Evaluation Group. They advised the United States Joint Chiefs of Staff on developing and using new war technology. He described himself, to a Senate committee, as “violently anti-communist and much more militaristic than the norm”. He is quoted in 1950 as remarking, “if you say why not bomb [ the Soviets ] tomorrow, I say, why not today? If you say today at five o’clock, I say why not one o’clock?”
The quote sounds horrifying. It makes game-theory sense, though. If war is inevitable, it is better fought when your opponent is weaker. And while the Soviet Union had won World War II, it was also ruined in the effort.
There is another game-theory-inspired horror for which we credit von Neumann. This is Mutual Assured Destruction. If any use of an atomic, or nuclear, weapon would destroy the instigator in retaliation, then no one would instigate war. So the nuclear powers need, not just nuclear arsenals. They need such vast arsenals that the remnant which survives the first strike can destroy the other powers in the second strike.
Perhaps the reasoning holds together. We did reach the destruction of the Soviet Union without using another atomic weapon in anger. But it is hard to say that was rationally accomplished. There were at least two points, in 1962 and in 1983, when a world-ruining war could too easily have happened, by people following the “obvious” strategy.
Which brings a flaw of game theory, at least as applied to something as complicated as grand strategy. Game theory demands the rules be known, and agreed on. (At least that there is a way of settling rule disputes.) It demands we have the relevant information known truthfully. It demands we know what our actual goals are. It demands that we act rationally, and that our opponent acts rationally. It demands that we agree on what rational is. (Think of, in Doctor Strangelove, the Soviet choice to delay announcing its doomsday machine’s completion.) Few of these conditions obtain in grand strategy. They barely obtain in grand strategy games. von Neumann was aware of at least some of these limitations, though he did not live long enough to address them. He died of either bone, pancreatic, or prostate cancer, likely caused by radiation exposure working at Los Alamos.
Game theory has been, and is, a great tool in many fields. It gives us insight into human interactions. It does good work in economics, in biology, in computer science, in management. But we can come to very bad conditions when we forget the difference between the game we play and the game we modelled. And if we forget that the game is value-indifferent. The theory makes no judgements about the ethical nature of the goal. It can’t, any more than the quadratic equation can tell us whether ‘x’ is which fielder will catch the fly ball or which person will be killed by a cannonball.
It makes an interesting parallel to the 19th century’s greatest fusion of mathematics and economics. This was utilitarianism, the attempt to bring scientific inquiry to the study of how society should be set up. Utilitarianism offers exciting insights into, say, how to allocate public services. But it struggles to explain why we should refrain from murdering someone whose death would be convenient. We need a reason besides the maximizing of utility.
No war is inevitable. One comes about only after many choices. Some are grand choices, such as a head of government issuing an ultimatum. Some are petty choices, such as the many people who enlist as the sergeants that make an army exist. We like to think we choose rationally. Psychological experiments, and experience, and introspection tell us we more often choose and then rationalize.
von Neumann was a young man, not yet in college, during the short life of the Hungarian Soviet Republic, and the White Terror that followed. I do not know his biography well enough to say how that experience motivated his life’s reasoning. I would not want to say that 1919 explained it all. The logic of a life is messier than that. I bring it up in part to fight the tendency of online biographic sketches to write as though he popped into existence, calculated a while, inspired a few jokes, and vanished. And to reiterate that even mathematics never exists without context. Even what seem to be pure questions on an abstract idea of a game is often inspired by a practical question. And that work is always done in a context that affects how we evaluate it.
Thank you all for reading. This grew a bit more serious than I had anticipated. This and all the other 2020 A-to-Z essays should appear at this link. Both the 2020 and all past A-to-Z essays should be at this link.
I am hosting the Playful Math Education Blog Carnival at the end of September, so appreciate any educational or recreational or fun mathematics material you know about. I’m hoping to publish next week and so hope that you can help me this week.
And, finally, I am open for mathematics topics starting with P, Q, and R to write about next month. I should be writing about them this month and getting ahead of deadline, but that seems not to be happening.
## I’m looking for P, Q, and R topics for the All 2020 A-to-Z
And now I am at the actual halfway point in the year’s A-to-Z. I’m still not as far ahead of deadline as I want to be, but I am getting at least a little better.
As I continue to try to build any kind of publication buffer, I’d like to know of any mathematical terms starting with the letters P, Q, or R that you’d like me to try writing. I might write about anything, of course; my criteria is what topic I think I could write something interesting about. But that’s a pretty broad set of things. Part of the fun of an A-to-Z series is learning enough about a subject I haven’t thought about much, in time to write a thousand-or-more words about it.
So please leave a comment with any topics you’d like to see discussed. Also please leave a mention of your own blog or YouTube channel or Twitter account or anything else you do that’s worth some attention. I’m happy giving readers new things to pay attention to, even when it’s not me.
It hasn’t happened yet, but I am open to revisiting a topic I’ve written about before, in case I think I can do better. My list of past topics may let you know if something satisfactory’s already been written about, say, quaternions. But if you don’t like what I already have about something, make a suggestion. I might do better.
Topics I’ve already covered, starting with the letter ‘P’, are:
Topics I’ve already covered, starting with the letter ‘Q’, are:
Topics I’ve already covered, starting with the letter ‘R’, are:
Thanks for reading and thanks for your thoughts.
## My All 2020 Mathematics A to Z: Möbius Strip
Jacob Siehler suggested this topic. I had to check several times that I hadn’t written an essay about the Möbius strip already. While I have talked about it some, mostly in comic strip essays, this is a chance to specialize on the shape in a way I haven’t before.
# Möbius Strip.
I have ridden at least 252 different roller coasters. These represent nearly every type of roller coaster made today, and most of the types that were ever made. One type, common in the 1920s and again since the 70s, is the racing coaster. This is two roller coasters, dispatched at the same time, following tracks that are as symmetric as the terrain allows. Want to win the race? Be in the train with the heavier passenger load. The difference in the time each train takes amounts to losses from friction, and the lighter train will lose a bit more of its speed.
There are three special wooden racing coasters. These are Racer at Kennywood Amusement Park (Pittsburgh), Grand National at Blackpool Pleasure Beach (Blackpool, England), and Montaña Rusa at La Feria Chapultepec Magico (Mexico City). I’ve been able to ride them all. When you get into the train going up, say, the left lift hill, you return to the station in the train that will go up the right lift hill. These racing roller coasters have only one track. The track twists around itself and becomes a Möbius strip.
This is a fun use of the Möbius strip. The shape is one of the few bits of advanced mathematics to escape into pop culture. Maybe dominates it, in a way nothing but the blackboard full of calculus equations does. In 1958 the public intellectual and game show host Clifton Fadiman published the anthology Fantasia Mathematica. It’s all essays and stories and poems with some mathematical element. I no longer remember how many of the pieces were about the Möbius strip one way or another. The collection does include A J Deutschs’s classic A Subway Named Möbius. In this story the Boston subway system achieves hyperdimensional complexity. It does not become a Möbius strip, though, in that story. It might be one in reality anyway.
The Möbius strip we name for August Ferdinand Möbius, who in 1858 was the second person known to have noticed the shape’s curious properties. The first — to notice, in 1858, and to publish, in 1862 — was Johann Benedict Listing. Listing seems to have coined the term “topology” for the field that the Möbius strip would be emblem for. He wrote one of the first texts on the field. He also seems to have coined terms like “entrophic phenomena” and “nodal points” and “geoid” and “micron”, for a millionth of a meter. It’s hard to say why we don’t talk about Listing strips instead. Mathematical fame is a strange, unpredictable creature. There is a topological invariant, the Listing Number, named for him. And he’s known to ophthalmologists for Listing’s Law, which describes how human eyes orient themselves.
The Möbius strip is an easy thing to construct. Loop a ribbon back to itself, with an odd number of half-twist before you fasten the ends together. Anyone could do it. So it seems curious that for all recorded history nobody thought to try. Not until 1858 when Lister and then Möbius hit on the same idea.
An irresistible thing, while riding these roller coasters, is to try to find the spot where you “switch”, where you go from being on the left track to the right. You can’t. The track is — well, the track is a series of metal straps bolted to a base of wood. (The base the straps are bolted to is what makes it a wooden roller coaster. The great lattice holding the tracks above ground have nothing to do with it.) But the path of the tracks is a continuous whole. To split it requires the same arbitrariness with which mapmakers pick a prime meridian. It’s obvious that the “longitude” of a cylinder or a rubber ball is arbitrary. It’s not obvious that roller coaster tracks should have the same property. Until you draw the shape in that ∞-loop figure we always see. Then you can get lost imagining a walk along the surface.
And it’s not true that nobody thought to try this shape before 1858. Julyan H E Cartwright and Diego L González wrote a paper searching for pre-Möbius strips. They find some examples. To my eye not enough examples to support their abstract’s claim of “lots of them”, but I trust they did not list every example. One example is a Roman mosaic showing Aion, the God of Time, Eternity, and the Zodiac. He holds a zodiac ring that is either a Möbius strip or cylinder with artistic errors. Cartwright and González are convinced. I’m reminded of a Looks Good On Paper comic strip that forgot to include the needed half-twist.
Islamic science gives us a more compelling example. We have a book by Ismail al-Jazari dated 1206, The Book of Knowledge of Ingenious Mechanical Devices. Some manuscripts of it illustrate a chain pump, with the chain arranged as a Möbius strip. Cartwright and González also note discussions in Scientific American, and other engineering publications in the United States, about drive and conveyor belts with the Möbius strip topology. None of those predate Lister or Möbius, or apparently credit either. And they do come quite soon after. It’s surprising something might leap from abstract mathematics to Yankee ingenuity that fast.
If it did. It’s not hard to explain why mechanical belts didn’t consider Möbius strip shapes before the late 19th century. Their advantage is that the wear of the belt distributes over twice the surface area, the “inside” and “outside”. A leather belt has a smooth and a rough side. Many other things you might make a belt from have a similar asymmetry. By the late 19th century you could make a belt of rubber. Its grip and flexibility and smoothness is uniform on all sides. “Balancing” the use suddenly could have a point.
I still find it curious almost no one drew or speculated about or played with these shapes until, practically, yesterday. The shape doesn’t seem far away from a trefoil knot. The recycling symbol, three folded-over arrows, suggests a Möbius strip. The strip evokes the ∞ symbol, although that symbol was not attached to the concept of “infinity” until John Wallis put it forth in 1655.
Even with the shape now familiar, and loved, there are curious gaps. Consider game design. If you play on a board that represents space you need to do something with the boundaries. The easiest is to make the boundaries the edges of playable space. The game designer has choices, though. If a piece moves off the board to the right, why not have it reappear on the left? (And, going off to the left, reappear on the right.) This is fine. It gives the game board, a finite rectangle, the topology of a cylinder. If this isn’t enough? Have pieces that go off the top edge reappear at the bottom, and vice-versa. Doing this, along with matching the left to the right boundaries, makes the game board a torus, a doughnut shape.
A Möbius strip is easy enough to code. Make the top and bottom impenetrable borders. And match the left to the right edges this way: a piece going off the board at the upper half of the right edge reappears at the lower half of the left edge. Going off the lower half of the right edge brings the piece to the upper half of the left edge. And so on. It isn’t hard, but I’m not aware of any game — board or computer — that uses this space. Maybe there’s a backgammon variant which does.
Still, the strip defies our intuition. It has one face and one edge. To reflect a shape across the width of the strip is the same as sliding a shape along its length. Cutting the strip down the center unfurls it into a cylinder. Cutting the strip down, one-third of the way from the edge, divides it into two pieces, a skinnier Möbius strip plus a cylinder. If we could extract the edge we could tug and stretch it until it was a circle.
And it primes our intuition. Once we understand there can be shapes lacking sides we can look for more. Anyone likely to read a pop mathematics blog about the Möbius strip has heard of the Klein bottle. This is a three-dimensional surface that folds back on itself in the fourth dimension of space. The shape is a jug with no inside, or with nothing but inside. Three-dimensional renditions of this get suggested as gifts to mathematicians. This for your mathematician friend who’s already got a Möbius scarf.
Though a Möbius strip looks — at any one spot — like a plane, the four-color map theorem doesn’t hold for it. Even the five-color theorem won’t do. You need six colors to cover maps on such a strip. A checkerboard drawn on a Möbius strip can be completely covered by T-shape pentominoes or Tetris pieces. You can’t do this for a checkerboard on the plane. In the mathematics of music theory the organization of dyads — two-tone “chords” — has the structure of a Möbius strip. I do not know music theory or the history of music theory. I’m curious whether Möbius strips might have been recognized by musicians before the mathematicians caught on.
And they inspire some practical inventions. Mechanical belts are obvious, although I don’t know how often they’re used. More clever are designs for resistors that have no self-inductance. They can resist electric flow without causing magnetic interference. I can look up the patents; I can’t swear to how often these are actually used. There exist — there are made — Möbius aromatic compounds. These are organic compounds with rings of carbon and hydrogen. I do not know a use for these. That they’ve only been synthesized this century, rather than found in nature, suggests they are more neat than practical.
Perhaps this shape is most useful as a path into a particular type of topology, and for its considerable artistry. And, with its “late” discovery, a reminder that we do not yet know all that is obvious. That is enough for anything.
There are three steel roller coasters with a Möbius strip track. That is, the metal rail on which the coaster runs is itself braced directly by metal. One of these is in France, one in Italy, and one in Iran. One in Liaoning, China has been under construction for five years. I can’t say when it might open. I have yet to ride any of them.
This and all the other 2020 A-to-Z essays should be at this link. Both the 2020 and all past A-to-Z essays should be at this link. I am hosting the Playful Math Education Blog Carnival at the end of September, so appreciate any educational or recreational or simply fun mathematics material you know about. And, goodness, I’m actually overdue to ask for topics for the latters P through R; I’ll have a post for that tomorrow, I hope. Thank you for your reading and your help.
## My All 2020 Mathematics A to Z: Leibniz
Today’s topic suggestion was suggested by bunnydoe. I know of a project bunnydoe runs, but not whether it should be publicized. It is another biographical piece. Biographies and complex numbers, that seems to be the theme of this year.
# Gottfried Wilhelm Leibniz.
The exact suggestion I got for L was “Leibniz, the inventor of Calculus”. I can’t in good conscience offer that. This isn’t to deny Leibniz’s critical role in calculus. We rely on many of the ideas he’d had for it. We especially use his notation. But there are few great big ideas that can be truly credited to an inventor, or even a team of inventors. Put aside the sorry and embarrassing priority dispute with Isaac Newton. Many mathematicians in the 16th and 17th century were working on how to improve the Archimedean “method of exhaustion”. This would find the areas inside select curves, integral calculus. Johannes Kepler worked out the areas of ellipse slices, albeit with considerable luck. Gilles Roberval tried working out the area inside a curve as the area of infinitely many narrow rectangular strips. We still learn integration from this. Pierre de Fermat recognized how tangents to a curve could find maximums and minimums of functions. This is a critical piece of differential calculus. Isaac Barrow, Evangelista Torricelli (of barometer fame), Pietro Mengoli, and Stephano Angeli all pushed mathematics towards calculus. James Gregory proved, in geometric form, the relationship between differentiation and integration. That relationship is the Fundamental Theorem of Calculus.
This is not to denigrate Leibniz. We don’t dismiss the Wright Brothers though we know that without them, Alberto Santos-Dumont or Glenn Curtiss or Samuel Langley would have built a workable airplane anyway. We have Leibniz’s note, dated the 29th of October, 1675 (says Florian Cajori), writing out $\int l$ to mean the sum of all l’s. By mid-November he was integrating functions, and writing out his work as $\int f(x) dx$. Any mathematics or physics or chemistry or engineering major today would recognize that. A year later he was writing things like $d(x^n) = n x^{n - 1} dx$, which we’d also understand if not quite care to put that way.
Though we use his notation and his basic tools we don’t exactly use Leibniz’s particular ideas of what calculus means. It’s been over three centuries since he published. It would be remarkable if he had gotten the concepts exactly and in the best of all possible forms. Much of Leibniz’s calculus builds on the idea of a differential. This is a quantity that’s smaller than any positive number but also larger than zero. How does that make sense? George Berkeley argued it made not a lick of sense. Mathematicians frowned, but conceded Berkeley was right. By the mid-19th century they had a rationale for differentials that avoided this weird sort of number.
It’s hard to avoid the differential’s lure. The intuitive appeal of “imagine moving this thing a tiny bit” is always there. In science or engineering applications it’s almost mandatory. Few things we encounter in the real world have the kinds of discontinuity that create logic problems for differentials. Even in pure mathematics, we will look at a differential equation like $\frac{dy}{dx} = x$ and rewrite it as $dy = x dx$. Leibniz’s notation gives us the idea that taking derivatives is some kind of fraction. It isn’t, but in many problems we act as though it were. It works out often enough we forget that it might not.
Better, though. From the 1960s Abraham Robinson and others worked out a different idea of what real numbers are. In that, differentials have a rigorous logical definition. We call the mathematics which uses this “non-standard analysis”. The name tells something of its use. This is not to call it wrong. It’s merely not what we learn first, or necessarily at all. And it is Leibniz’s differentials. 304 years after his death there is still a lot of mathematics he could plausibly recognize.
There is still a lot of still-vital mathematics that he touched directly. Leibniz appears to be the first person to use the term “function”, for example, to describe that thing we’re plotting with a curve. He worked on systems of linear equations, and methods to find solutions if they exist. This technique is now called Gaussian elimination. We see the bundling of the equations’ coefficients he did as building a matrix and finding its determinant. We know that technique, today, as Cramer’s Rule, after Gabriel Cramer. The Japanese mathematician Seki Takakazu had discovered determinants before Leibniz, though.
Leibniz tried to study a thing he called “analysis situs”, which two centuries on would be a name for topology. My reading tells me you can get a good fight going among mathematics historians by asking whether he was a pioneer in topology. So I’ll decline to take a side in that.
In the 1680s he tried to create an algebra of thought, to turn reasoning into something like arithmetic. His goal was good: we see these ideas today as Boolean algebra, and concepts like conjunction and disjunction and negation and the empty set. Anyone studying logic knows these today. He’d also worked in something we can see as symbolic logic. Unfortunately for his reputation, the papers he wrote about that went unpublished until late in the 19th century. By then other mathematicians, like Gottlob Frege and Charles Sanders Peirce, had independently published the same ideas.
We give Leibniz’ name to a particular series that tells us the value of π:
$1 - \frac13 + \frac15 - \frac17 + \frac19 - \frac{1}{11} + \cdots = \frac{\pi}{4}$
(The Indian mathematician Madhava of Sangamagrama knew the formula this comes from by the 14th century. I don’t know whether Western Europe had gotten the news by the 17th century. I suspect it hadn’t.)
The drawback to using this to figure out digits of π is that it takes forever to use. Taking ten decimal digits of π demands evaluating about five billion terms. That’s not hyperbole; it just takes like forever to get its work done.
Which is something of a theme in Leibniz’s biography. He had a great many projects. Some of them even reached a conclusion. Many did not, and instead sprawled out with great ambition and sometimes insight before getting lost. Consider a practical one: he believed that the use of wind-driven propellers and water pumps could drain flooded mines. (Mines are always flooding.) In principle, he was right. But they all failed. Leibniz blamed deliberate obstruction by administrators and technicians. He even blamed workers afraid that new technologies would replace their jobs. Yet even in this failure he observed and had bracing new thoughts. The geology he learned in the mines project made him hypothesize that the Earth had been molten. I do not know the history of geology well enough to say whether this was significant to that field. It may have been another frustrating moment of insight (lucky or otherwise) ahead of its time but not connected to the mainstream of thought.
Another project, tantalizing yet incomplete: the “stepped reckoner”, a mechanical arithmetic machine. The design was to do addition and subtraction, multiplication and division. It’s a breathtaking idea. It earned him election into the (British) Royal Society in 1673. But it never was quite complete, never getting carries to work fully automatically. He never did finish it, and lost friends with the Royal Society when he moved on to other projects. He had a note describing a machine that could do some algebraic operations. In the 1690s he had some designs for a machine that might, in theory, integrate differential equations. It’s a fantastic idea. At some point he also devised a cipher machine. I do not know if this is one that was ever used in its time.
His greatest and longest-lasting unfinished project was for his employer, the House of Brunswick. Three successive Brunswick rulers were content to let Leibniz work on his many side projects. The one that Ernest Augustus wanted was a history of the Guelf family, in the House of Brunswick. One that went back to the time of Charlemagne or earlier if possible. The goal was to burnish the reputation of the house, which had just become a hereditary Elector of the Holy Roman Empire. (That is, they had just gotten to a new level of fun political intriguing. But they were at the bottom of that level.) Starting from 1687 Leibniz did good diligent work. He travelled throughout central Europe to find archival materials. He studied their context and meaning and relevance. He organized it. What he did not do, by his death in 1716, was write the thing.
It is always difficult to understand another person. Moreso someone you know only through biography. And especially someone who lived in very different times. But I do see a particular an modern personality type here. We all know someone who will work so very hard getting prepared to do a project Right that it never gets done. You might be reading the words of one right now.
Leibniz was a compulsive Society-organizer. He promoted ones in Brandenberg and Berlin and Dresden and Vienna and Saint Petersburg. None succeeded. It’s not obvious why. Leibniz was well-connected enough; he’s known to have over six hundred correspondents. Even for a time of great letter-writing, that’s a lot.
But it does seem like something about him offended others. Failing to complete big projects, like the stepped reckoner or the History of the Guelf family, seems like some of that. Anyone who knows of calculus knows of the dispute about the Newton-versus-Leibniz priority dispute. Grant that Leibniz seems not to have much fueled the quarrel. (And that modern historians agree Leibniz did not steal calculus from Newton.) Just being at the center of Drama causes people to rate you poorly.
There seems like there’s more, though. He was liked, for example, by the Electress Sophia of Hanover and her daughter Sophia Charlotte. These were the mother and the sister of Britain’s King George I. When George I ascended to the British throne he forbade Leibniz coming to London until at least one volume of the history was written. (The restriction seems fair, considering Leibniz was 27 years into the project by then.)
There are pieces in his biography that suggest a person a bit too clever for his own good. His first salaried position, for example, was as secretary to a Nuremberg alchemical society. He did not know alchemy. He passed himself off as deeply learned, though. I don’t blame him. Nobody would ever pass a job interview if they didn’t pretend to have expertise. Here it seems to have worked.
But consider, for example, his peace mission to Paris. Leibniz was born in the last years of the Thirty Years War. In that, the Great Powers of Europe battled each other in the German states. They destroyed Germany with a thoroughness not matched until World War II. Leibniz reasonably feared France’s King Louis XIV had designs on what was left of Germany. So his plan was to sell the French government on a plan of attacking Egypt and, from there, the Dutch East Indies. This falls short of an early-Enlightenment idea of rational world peace and a congress of nations. But anyone who plays grand strategy games recognizes the “let’s you and him fight” scheming. (The plan became irrelevant when France went to war with the Netherlands. The war did rope Brandenberg-Prussia, Cologne, Münster, and the Holy Roman Empire into the mess.)
And I have not discussed Leibniz’s work in philosophy, outside his logic. He’s respected for the theory of monads, part of the long history of trying to explain how things can have qualities. Like many he tried to find a deductive-logic argument about whether God must exist. And he proposed the notion that the world that exists is the most nearly perfect that can possibly be. Everyone has been dragging him for that ever since he said it, and they don’t look ready to stop. It’s an unfair rap, even if it makes for funny spoofs of his writing.
The optimal world may need to be badly defective in some ways. And this recognition inspires a question in me. Obviously Leibniz could come to this realization from thinking carefully about the world. But anyone working on optimization problems knows the more constraints you must satisfy, the less optimal your best-fit can be. Some things you might like may end up being lousy, because the overall maximum is more important. I have not seen anything to suggest Leibniz studied the mathematics of optimization theory. Is it possible he was working in things we now recognize as such, though? That he has notes in the things we would call Lagrange multipliers or such? I don’t know, and would like to know if anyone does.
Leibniz’s funeral was unattended by any dignitary or courtier besides his personal secretary. The Royal Academy and the Berlin Academy of Sciences did not honor their member’s death. His grave was unmarked for a half-century. And yet historians of mathematics, philosophy, physics, engineering, psychology, social science, philology, and more keep finding his work, and finding it more advanced than one would expect. Leibniz’s legacy seems to be one always rising and emerging from shade, but never being quite where it should.
And that’s enough for one day. All of the 2020 A-to-Z essays should be at this link. Both 2020 and all past A-to-Z essays should be at this link. And, as I am hosting the Playful Math Education Blog Carnival at the end of September, I am looking for any blogs, videos, books, anything educational or recreational or just interesting to read about. Thank you for your reading and your help.
## My All 2020 Mathematics A to Z: K-Theory
I should have gone with Vayuputrii’s proposal that I talk about the Kronecker Delta. But both Jacob Siehler and Mr Wu proposed K-Theory as a topic. It’s a big and an important one. That was compelling. It’s also a challenging one. This essay will not teach you K-Theory, or even get you very far in an introduction. It may at least give some idea of what the field is about.
# K-Theory.
This is a difficult topic to discuss. It’s an important theory. It’s an abstract one. The concrete examples are either too common to look interesting or are already deep into things like “tangent bundles of Sn-1”. There are people who find tangent bundles quite familiar concepts. My blog will not be read by a thousand of them this month. Those who are familiar with the legends grown around Alexander Grothendieck will nod on hearing he was a key person in the field. Grothendieck was of great genius, and also spectacular indifference to practical mathematics. Allegedly he once, pressed to apply something to a particular prime number for an example, proposed 57, which is not prime. (One does not need to be a genius to make a mistake like that. If I proposed 447 or 449 as prime numbers, how long would you need to notice I was wrong?)
K-Theory predates Grothendieck. Now that we know it’s a coherent mathematical idea we can find elements leading to it going back to the 19th century. One important theorem has Bernhard Riemann’s name attached. Henri Poincaré contributed early work too. Grothendieck did much to give the field a particular identity. Also a name, the K coming from the German Klasse. Grothendieck pioneered what we now call Algebraic K-Theory, working on the topic as a field of abstract algebra. There is also a Topological K-Theory, early work on which we thank Michael Atiyah and Friedrick Hirzebruch for. Topology is, popularly, thought of as the mathematics of flexible shapes. It is, but we get there from thinking about relationships between sets, and these are the topologies of K-Theory. We understand these now as different ways of understandings structures.
Still, one text I found described (topological) K-Theory as “the first generalized cohomology theory to be studied thoroughly”. I remember how much handwaving I had to do to explain what a cohomology is. The subject looks intimidating because of the depth of technical terms. Every field is deep in technical terms, though. These look more rarefied because we haven’t talked much, or deeply, into the right kinds of algebra and topology.
You find at the center of K-Theory either “coherent sheaves” or “vector bundles”. Which alternative depends on whether you prefer Algebraic or Topological K-Theory. Both alternatives are ways to encode information about the space around a shape. Let me talk about vector bundles because I find that easier to describe. Take a shape, anything you like. A closed ribbon. A torus. A Möbius strip. Draw a curve on it. Every point on that curve has a tangent plane, the plane that just touches your original shape, and that’s guaranteed to touch your curve at one point. What are the directions you can go in that plane? That collection of directions is a fiber bundle — a tangent bundle — at that point. (As ever, do not use this at your thesis defense for algebraic topology.)
Now: what are all the tangent bundles for all the points along that curve? Does their relationship tell you anything about the original curve? The question is leading. If their relationship told us nothing, this would not be a subject anyone studies. If you pick a point on the curve and look at its tangent bundle, and you move that point some, how does the tangent bundle change?
If we start with the right sorts of topological spaces, then we can get some interesting sets of bundles. What makes them interesting is that we can form them into a ring. A ring means that we have a set of things, and an operation like addition, and an operation like multiplication. That is, the collection of things works somewhat like the integers do. This is a comfortable familiar behavior after pondering too much abstraction.
Why create such a thing? The usual reasons. Often it turns out calculating something is easier on the associated ring than it is on the original space. What are we looking to calculate? Typically, we’re looking for invariants. Things that are true about the original shape whatever ways it might be rotated or stretched or twisted around. Invariants can be things as basic as “the number of holes through the solid object”. Or they can be as ethereal as “the total energy in a physics problem”. Unfortunately if we’re looking at invariants that familiar, K-Theory is probably too much overhead for the problem. I confess to feeling overwhelmed by trying to learn enough to say what it is for.
There are some big things which it seems well-suited to do. K-Theory describes, in its way, how the structure of a set of items affects the functions it can have. This links it to modern physics. The great attention-drawing topics of 20th century physics were quantum mechanics and relativity. They still are. The great discovery of 20th century physics has been learning how much of it is geometry. How the shape of space affects what physics can be. (Relativity is the accessible reflection of this.)
And so K-Theory comes to our help in string theory. String theory exists in that grand unification where mathematics and physics and philosophy merge into one. I don’t toss philosophy into this as an insult to philosophers or to string theoreticians. Right now it is very hard to think of ways to test whether a particular string theory model is true. We instead ponder what kinds of string theory could be true, and how we might someday tell whether they are. When we ask what things could possibly be true, and how to tell, we are working for the philosophy department.
My reading tells me that K-Theory has been useful in condensed matter physics. That is, when you have a lot of particles and they interact strongly. When they act like liquids or solids. I can’t speak from experience, either on the mathematics or the physics side.
I can talk about an interesting mathematical application. It’s described in detail in section 2.3 of Allen Hatcher’s text Vector Bundles and K-Theory, here. It comes about from consideration of the Hopf invariant, named for Heinz Hopf for what I trust are good reasons. It also comes from consideration of homomorphisms. A homomorphism is a matching between two sets of things that preserves their structure. This has a precise definition, but I can make it casual. If you have noticed that, every (American, hourlong) late-night chat show is basically the same? The host at his desk, the jovial band leader, the monologue, the show rundown? Two guests and a band? (At least in normal times.) Then you have noticed the homomorphism between these shows. A mathematical homomorphism is more about preserving the products of multiplication. Or it preserves the existence of a thing called the kernel. That is, you can match up elements and how the elements interact.
What’s important is Adams’ Theorem of the Hopf Invariant. I’ll write this out (quoting Hatcher) to give some taste of K-Theory:
The following statements are true only for n = 1, 2, 4, and 8:
a. $R^n$ is a division algebra.
b. $S^{n - 1}$ is parallelizable, ie, there exist n – 1 tangent vector fields to $S^{n - 1}$ which are linearly independent at each point, or in other words, the tangent bundle to $S^{n - 1}$ is trivial.
This is, I promise, low on jargon. “Division algebra” is familiar to anyone who did well in abstract algebra. It means a ring where every element, except for zero, has a multiplicative inverse. That is, division exists. “Linearly independent” is also a familiar term, to the mathematician. Almost every subject in mathematics has a concept of “linearly independent”. The exact definition varies but it amounts to the set of things having neither redundant nor missing elements.
The proof from there sprawls out over a bunch of ideas. Many of them I don’t know. Some of them are simple. The conditions on the Hopf invariant all that $S^{n - 1}$ stuff eventually turns into finding values of n for for which $2^n$ divides $3^n - 1$. There are only three values of ‘n’ that do that. For example.
What all that tells us is that if you want to do something like division on ordered sets of real numbers you have only a few choices. You can have a single real number, $R^1$. Or you can have an ordered pair, $R^2$. Or an ordered quadruple, $R^4$. Or you can have an ordered octuple, $R^8$. And that’s it. Not that other ordered sets can’t be interesting. They will all diverge far enough from the way real numbers work that you can’t do something that looks like division.
And now we come back to the running theme of this year’s A-to-Z. Real numbers are real numbers, fine. Complex numbers? We have some ways to understand them. One of them is to match each complex number with an ordered pair of real numbers. We have to define a more complicated multiplication rule than “first times first, second times second”. This rule is the rule implied if we come to $R^2$ through this avenue of K-Theory. We get this matching between real numbers and the first great expansion on real numbers.
The next great expansion of complex numbers is the quaternions. We can understand them as ordered quartets of real numbers. That is, as $R^4$. We need to make our multiplication rule a bit fussier yet to do this coherently. Guess what fuss we’d expect coming through K-Theory?
$R^8$ seems the odd one out; who does anything with that? There is a set of numbers that neatly matches this ordered set of octuples. It’s called the octonions, sometimes called the Cayley Numbers. We don’t work with them much. We barely work with quaternions, as they’re a lot of fuss. Multiplication on them doesn’t even commute. (They’re very good for understanding rotations in three-dimensional space. You can also also use them as vectors. You’ll do that if your programming language supports quaternions already.) Octonions are more challenging. Not only does their multiplication not commute, it’s not even associative. That is, if you have three octonions — call them p, q, and r — you can expect that p times the product of q-and-r would be different from the product of p-and-q times r. Real numbers don’t work like that. Complex numbers or quaternions don’t either.
Octonions let us have a meaningful division, so we could write out $p \div q$ and know what it meant. We won’t see that for any bigger ordered set of $R^n$. And K-Theory is one of the tools which tells us we may stop looking.
This is hardly the last word in the field. It’s barely the first. It is at least an understandable one. The abstractness of the field works against me here. It does offer some compensations. Broad applicability, for example; a theorem tied to few specific properties will work in many places. And pure aesthetics too. Much work, in statements of theorems and their proofs, involve lovely diagrams. You’ll see great lattices of sets relating to one another. They’re linked by chains of homomorphisms. And, in further aesthetics, beautiful words strung into lovely sentences. You may not know what it means to say “Pontryagin classes also detect the nontorsion in $\pi_k(SO(n))$ outside the stable range”. I know I don’t. I do know when I hear a beautiful string of syllables and that is a joy of mathematics never appreciated enough.
Thank you for reading. The All 2020 A-to-Z essays should be available at this link. The essays from all A-to-Z sequence, 2015 to present, should be at this link. And I am still open for M, N, and O essay topics. Thanks for your attention.
## My All 2020 Mathematics A to Z: Jacobi Polynomials
Mr Wu, author of the Singapore Maths Tuition blog, gave me a good nomination for this week’s topic: the j-function of number theory. Unfortunately I concluded I didn’t understand the function well enough to write about it. So I went to a topic of my own choosing instead.
The Jacobi Polynomials discussed here are named for Carl Gustav Jacob Jacobi. Jacobi lived in Prussia in the first half of the 19th century. Though his career was short, it was influential. I’ve already discussed the Jacobian, which describes how changes of variables change volume. He has a host of other things named for him, most of them in matrices or mathematical physics. He was also a pioneer in those elliptic curves you hear so much about these days.
# Jacobi Polynomials.
Jacobi Polynomials are a family of functions. Polynomials, it happens; this is a happy case where the name makes sense. “Family” is the name mathematicians give to a bunch of functions that have some similarity. This often means there’s a parameter, and each possible value of the parameter describes a different function in the family. For example, we talk about the family of sine functions, $S_n(z)$. For every integer n we have the function $S_n(z) = \sin(n z)$ where z is a real number between -π and π.
We like a family because every function in it gives us some nice property. Often, the functions play nice together, too. This is often something like mutual orthogonality. This means two different representatives of the family are orthogonal to one another. “Orthogonal” means “perpendicular”. We can talk about functions being perpendicular to one another through a neat mechanism. It comes from vectors. It’s easy to use vectors to represent how to get from one point in space to another. From vectors we define a dot product, a way of multiplying them together. A dot product has to meet a couple rules that are pretty easy to do. And if you don’t do anything weird? Then the dot product between two vectors is the cosine of the angle made by the end of the first vector, the origin, and the end of the second vector.
Functions, it turns out, meet all the rules for a vector space. (There are not many rules to make a vector space.) And we can define something that works like a dot product for two functions. Take the integral, over the whole domain, of the first function times the second. This meets all the rules for a dot product. (There are not many rules to make a dot product.) Did you notice me palm that card? When I did not say “the dot product is take the integral …”? That card will come back. That’s for later. For now: we have a vector space, we have a dot product, we can take arc-cosines, so why not define the angle between functions?
Mostly we don’t because we don’t care. Where we do care? We do like functions that are at right angles to one another. As with most things mathematicians do, it’s because it makes life easier. We’ll often want to describe properties of a function we don’t yet know. We can describe the function we don’t yet know as the sum of coefficients — some fixed real number — times basis functions that we do know. And then our problem of finding the function changes to one of finding the coefficients. If we picked a set of basis functions that are all orthogonal to one another, the finding of these coefficients gets easier. Analytically and numerically: we can often turn each coefficient into its own separate problem. Let a different computer, or at least computer process, work on each coefficient and get the full answer much faster.
The Jacobi Polynomials have three coefficients. I see them most often labelled α, β, and n. Likely you imagine this means it’s a huge family. It is huger than that. A zoologist would call this a superfamily, at least. Probably an order, possibly a class.
It turns out different relationships of these coefficients give you families of functions. Many of these families are noteworthy enough to have their own names. For example, if α and β are both zero, then the Jacobi functions are a family also known as the Legendre Polynomials. This is a great set of orthogonal polynomials. And the roots of the Legendre Polynomials give you information needed for Gaussian quadrature. Gaussian quadrature is a neat trick for numerically integrating a function. Take a weighted sum of the function you’re integrating evaluated at a set of points. This can get a very good — maybe even perfect — numerical estimate of the integral. The points to use, and the weights to use, come from a Legendre polynomial.
If α and β are both $-\frac{1}{2}$ then the Jacobi Polynomials are the Chebyshev Polynomials of the first kind. (There’s also a second kind.) These are handy in approximation theory, describing ways to better interpolate a polynomial from a set of data. They also have a neat, peculiar relationship to the multiple-cosine formulas. Like, $\cos(2\theta) = 2\cos^2(\theta) - 1$. And the second Chebyshev polynomial is $T_2(x) = 2x^2 - 1$. Imagine sliding between x and $cos(\theta)$ and you see the relationship. $cos(3\theta) = 4 \cos^3(\theta) - 3\cos(\theta)$ and $T_3(x) = 4x^3 - 3x$. And so on.
Chebyshev Polynomials have some superpowers. One that’s most amazing is accelerating convergence. Often a numerical process, such as finding the solution of an equation, is an iterative process. You can’t find the answer all at once. You instead find an approximation and do something that improves it. Each time you do the process, you get a little closer to the true answer. This can be fine. But, if the problem you’re working on allows it, you can use the first couple iterations of the solution to figure out where this is going. The result is that you can get very good answers using the same amount of computer time you needed to just get decent answers. The trade, of course, is that you need to understand Chebyshev Polynomials and accelerated convergence. We always have to make trades like that.
Back to the Jacobi Polynomials family. If α and β are the same number, then the Jacobi functions are a family called the Gegenbauer Polynomials. These are great in mathematical physics, in potential theory. You can turn the gravitational or electrical potential function — that one-over-the-distance-squared force — into a sum of better-behaved functions. And they also describe zonal spherical harmonics. These let you represent functions on the surface of a sphere as the sum of coefficients times basis functions. They work in much the way the terms of a Fourier series do.
If β is zero and there’s a particular relationship between α and n that I don’t want to get into? The Jacobi Polynomials become the Zernike Polynomials, which I never heard of before this paragraph either. I read they are the tools you need to understand optics, and particularly how lenses will alter the light passing through.
Since the Jacobi Polynomials have a greater variety of form than even poison ivy has, you’ll forgive me not trying to list them. Or even listing a representative sample. You might also ask how they’re related at all.
Well, they all solve the same differential equation, for one. Not literally a single differential equation. A family of differential equations, where α and β and n turn up in the coefficients. The formula using these coefficients is the same in all these differential equations. That’s a good reason to see a relationship. Or we can write the Jacobi Polynomials as a series, a function made up of the sum of terms. The coefficients for each of the terms depends on α and β and n, always in the same way. I’ll give you that formula. You won’t like it and won’t ever use it. The Jacobi Polynomial for a particular α, β, and n is the polynomial
$P_n^{(\alpha, \beta)}(z) = (n+\alpha)!(n + \beta)!\sum_{s=0}^n \frac{1}{s!(n + \alpha - s)!(\beta + s)!(n - s)!}\left(\frac{z-1}{2}\right)^{n-s}\left(\frac{z + 1}{2}\right)^s$
Its domain, by the way, is the real numbers from -1 to 1. We need something for the domain. It turns out there’s nothing you can do on the real numbers that you can’t fit into the domain from -1 to 1 anyway. (If you have to do something on, say, the interval from 10 to 54? Do a change of variable, scaling things down and moving them, and use -1 to 1. Then undo that change when you’re done.) The range is the real numbers, as you’d expect.
(You maybe noticed I used ‘z’ for the independent variable there, rather than ‘x’. Usually using ‘z’ means we expect this to be a complex number. But ‘z’ here is definitely a real number. This is because we can also get to the Jacobi Polynomials through the hypergeometric series, a function I don’t want to get into. But for the hypergeometric series we are open to the variable being a complex number. So many references carry that ‘z’ back into Jacobi Polynomials.)
Another thing which links these many functions is recurrence. If you know the Jacobi Polynomial for one set of parameters — and you do; $P_0^{(\alpha, \beta)}(z) = 1$ — you can find others. You do this in a way rather like how you find new terms in the Fibonacci series by adding together terms you already know. These formulas can be long. Still, if you know $P_{n-1}^{(\alpha, \beta)}$ and $P_{n-2}^{(\alpha, \beta)}$ for the same α and β? Then you can calculate $P_n^{(\alpha, \beta)}$ with nothing more than pen, paper, and determination. If it helps,
$P_1^{(\alpha, \beta)}(z) = (\alpha + 1) + (\alpha + \beta + 2)\frac{z - 1}{2}$
and this is true for any α and β. You’ll never do anything with that. This is fine.
There is another way that all these many polynomials are related. It goes back to their being orthogonal. We measured orthogonality by a dot product. Back when I palmed that card I told you was the integral of the two functions multiplied together. This is indeed a dot product. We can define others. We make those others by taking a weighted integral of the product of these two functions. That is, integrate the two functions times a third, a weight function. Of course there’s reasons to do this; they amount to deciding that some parts of the domain are more important than others. The weight function can be anything that meets a few rules. If you want to get the Jacobi Polynomials out of them, you start with the function $P_0^{(\alpha, \beta)}(z) = 1$ and the weight function
$w_n(z) = (1 - z)^{\alpha} (1 + z)^{\beta}$
As I say, though, you’ll never use that. If you’re eager and ready to leap into this work you can use this to build a couple Legendre Polynomials. Or Chebyshev Polynomials. For the full Jacobi Polynomials, though? Use, like, the command JacobiP[n, a, b, z] in Mathematica, or jacobiP(n, a, b, z) in Matlab. Other people have programmed this for you. Enjoy their labor.
In my work I have not used the full set of Jacobi Polynomials much. There’s more of them than I need. I do rely on the Legendre Polynomials, and the Chebyshev Polynomials. Other mathematicians use other slices regularly. It is stunning to sometimes look and realize that these many functions, different as they look, are reflections of one another, though. Mathematicians like to generalize, and find one case that covers as many things as possible. It’s rare that we are this successful.
I thank you for reading this. All of this year’s A-to-Z essays should be available at this link. The essays from every A-to-Z sequence going back to 2015 should be at this link. And I’m already looking ahead to the M, N, and O essays that I’ll be writing the day before publication instead of the week before like I want! I appreciate any nominations you have, even ones I can’t cover fairly.
## I’m looking for M, N, and O topics for the All 2020 A-to-Z
I have reached the halfway point in this year’s A-to-Z! Not in the number of essays written — this week I should hit the 10th — but in preparing for topics? We are almost halfway done.
So for this, as with any A-to-Z essay, I’d like to know some mathematical term starting with the letters M, N, or O that you would like to see me write about. While I reserve the right to talk about anything I do care, I usually will pick the nominated topic I think I can be most interesting about. Or that I want to hurriedly learn something about. Please put in a comment with whatever you’d like me to discuss. And, please, if you do suggest something let me know how to credit you, and of any project that you do that I can mention. This project may be a way for me to show off, but I’d like everybody to have a bit more attention.
Topics I’ve already covered, starting with the letter ‘M’, are:
Topics I’ve already covered, starting with the letter ‘N’, are:
Topics I’ve already covered, starting with the letter ‘O’, are:
Thank you all for what you nominate. Also for thinking about nominations; I appreciate the work you do for me.
All of this year’s A-to-Z essays should be at this link. And all of the A-to-Z essays, from all years, should be at this link. Thanks for reading.
## My All 2020 Mathematics A to Z: Imaginary Numbers
I have another topic today suggested by Beth, of the I Didn’t Have My Glasses On …. inspiration blog. It overlaps a bit with other essays I’ve posted this A-to-Z sequence, but that’s all right. We get a better understanding of things by considering them from several perspectives. This one will be a bit more historical.
# Imaginary Numbers.
Pop science writer Isaac Asimov told a story he was proud of about his undergraduate days. A friend’s philosophy professor held court after class. One day he declared mathematicians were mystics, believing in things they even admit are “imaginary numbers”. Young Asimov, taking offense, offered to prove the reality of the square root of minus one, if the professor gave him one-half pieces of chalk. The professor snapped a piece of chalk in half and gave one piece to him. Asimov said this is one piece of chalk. The professor answered it was half the length of a piece of chalk and Asimov said that’s not what he asked for. Even if we accept “half the length” is okay, how do we know this isn’t 48 percent the length of a standard piece of chalk? If the professor was that bad on “one-half” how could he have opinions on “imaginary numbers”?
This story is another “STEM undergraduates outwitting the philosophy expert” legend. (Even if it did happen. What we know is the story Asimov spun it into, in which a plucky young science fiction fan out-argued someone whose job is forming arguments.) Richard Feynman tells a similar story, befuddling a philosophy class with the question of how we can prove a brick has a interior. It helps young mathematicians and science majors feel better about their knowledge. But Asimov’s story does get at a couple points. First, that “imaginary” is a terrible name for a class of numbers. The square root of minus one is as “real” as one-half is. Second, we’ve decided that one-half is “real” in some way. What the philosophy professor would have baffled Asimov to explain is: in what way is one-half real? Or minus one?
We’re introduced to imaginary numbers through polynomials. I mean in education. It’s usually right after getting into quadratics, looking for solutions to equations like $x^2 - 5x + 4 = 0$. That quadratic has two solutions, but it’s possible to have a quadratic with only one, such as $x^2 + 6x + 9 = 0$. Or to have a quadratic with no solutions, such as, iconically, $x^2 + 1 = 0$. We might underscore that by plotting the curve whose x- and y-coordinates makes true the equation $y = x^2 + 1$. There’s no point on the curve with a y-coordinate of zero, so, there we go.
Having established that $x^2 + 1 = 0$ has no solutions, the course then asks “what if we go ahead and say there was one”? Two solutions, in fact, $\imath$ and $-\imath$. This is all right for introducing the idea that mathematics is a tool. If it doesn’t do something we need, we can alter it.
But I see trouble in teaching someone how you can’t take square roots of negative numbers and then teaching them how to take square roots of negative numbers. It’s confusing at least. It needs some explanation about what changed. We might do better introducing them in a more historical method.
Historically, imaginary numbers (in the West) come from polynomials, yes. Different polynomials. Cubics, and quartics. Mathematicians still liked finding roots of them. Mathematicians would challenge one another to solve sets of polynomials. This seems hard to believe, but many sources agree on this. I hope we’re not all copying Eric Temple Bell here. (Bell’s Men of Mathematics is an inspiring collection of biographical sketches. But it’s not careful differentiating legends from documented facts.) And there are enough nerd challenges today that I can accept people daring one another to find solutions of $x^3 - 15x - 4 = 0$.
Quadratics, equations we can write as $ax^2 + bx + c = 0$ for some real numbers a, b, and c, we’ve known about forever. Euclid solved these kinds of equations using geometric reasoning. Chinese mathematicians 2200 years ago described rules for how to find roots. The Indian mathematician Brahmagupta, by the early 7th century, described the quadratic formula to find at least one root. Both possible roots were known to Indian mathematicians a thousand years ago. We’ve reduced the formula today to
$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
With that filtering into Western Europe, the search was on for similar formulas for other polynomials. This turns into several interesting threads. One is a tale of intrigue and treachery involving Gerolamo Cardano, Niccolò Tartaglia, and Ludovico Ferrari. I’ll save that for another essay because I have to cut something out, so of course I skip the dramatic thing. Another thread is the search for quadratic-like formulas for other polynomials. They exist for third-power and fourth-power polynomials. Not (generally) for the fifth- or higher-powers. That is, there are individual polynomials you can solve by formulas, like, $x^6 - 5x^3 + 4 = 0$. But stare at it and you can see where that’s “really” a quadratic pretending to be sixth-power. Finding there was no formula to find, though, lead people to develop group theory. And group theory underlies much of mathematics and modern physics.
The first great breakthrough solving the general cubic, $ax^3 + bx^2 + cx + d = 0$, came near the end of the 14th century in some manuscripts out of Florence. It’s built on a transformation. Transformations are key to mathematics. The point of a transformation is to turn a problem you don’t know how to do into one you do. As I write this, MathWorld lists 543 pages as matching “transformation”. That’s about half what “polynomial” matches (1,199) and about three times “trigonometric” (184). So that can help you judge importance.
Here, the transformation to make is to write a related polynomial in terms of a new variable. You can call that new variable x’ if you like, or z. I’ll use z so as to not have too many superscript marks flying around. This will be a “depressed polynomial”. “Depressed” here means that at least one of the coefficients in the new polynomial is zero. (Here, for this problem, it means we won’t have a squared term in the new polynomial.) I suspect the term is old-fashioned.
Let z be the new variable, related to x by the equation $z = x - \frac{b}{3a}$. And then figure out what $z^2$ and $z^3$ are. Using all that, and the knowledge that $ax^3 + bx^2 + cx + d = 0$, and a lot of arithmetic, you get to one of these three equations:
$z^3 + pz = q \\ z^3 = pz + q \\ z^3 + q = pz$
where p and q are some new coefficients. They’re positive numbers, or possibly zeros. They’re both derived from a, b, c, and d. And so in the 15th Century the search was on to solve one or more of these equations.
From our perspective in the 21st century, our first question is: what three equations? How are these not all the same equation? And today, yes, we would write this as one depressed equation, most likely $z^3 + pz = q$. We would allow that p or q or both might be negative numbers.
And there is part of the great mysterious historical development. These days we generally learn about negative numbers. Once we are comfortable, our teachers hope, with those we get imaginary numbers. But in the Western tradition mathematicians noticed both, and approached both, at roughly the same time. With roughly similar doubts, too. It’s easy to point to three apples; who can point to “minus three” apples? We can arrange nine apples into a neat square. How big a square can we set “minus nine” apples in?
Hesitation and uncertainty about negative numbers would continue quite a long while. At least among Western mathematicians. Indian mathematicians seem to have been more comfortable with them sooner. And merchants, who could model a negative number as a debt, seem to have gotten the idea better.
But even seemingly simple questions could be challenging. John Wallis, in the 17th century, postulated that negative numbers were larger than infinity. Leonhard Euler seems to have agreed. (The notion may seem odd. It has echoes today, though. Computers store numbers as bit patterns. The normal scheme represents negative numbers by making the first bit in a pattern 1. These bit patterns make the negative numbers look bigger than the biggest positive numbers. And thermodynamics gives us a temperature defined by the relationship of energy to entropy. That definition implies there can be negative temperatures. Those are “hotter” — higher-energy, at least — than infinitely-high positive temperatures.) In the 18th century we see temperature scales designed so that the weather won’t give negative numbers too often. Augustus De Morgan wrote in 1831 that a negative number “occurring as the solution of a problem indicates some inconsistency or absurdity”. De Morgan was not an amateur. He coded the rules for deductive logic so well we still call them De Morgan’s laws. He put induction on a logical footing. And he found negative numbers (and imaginary numbers) a sign of defective work. In 1831. 1831!
But back to cubic equations. Allow that we’ve gotten comfortable enough with negative numbers we only want to solve the one depressed equation of $z^3 + pz = q$. How to do it? … Another transformation, then. There are a couple you can do. Modern mathematicians would likely define a new variable w, set so that $z = w - \frac{p}{3w}$. This turns the depressed equation into
$w^3 - \frac{p^3}{27 w^3} - q = 0$
And this, believe it or not, is a disguised quadratic. Multiply everything in it by $w^3$ and move things around a little. You get
$(w^3)^2 - q(w^3) - \frac{1}{27}p^3 = 0$
From there, quadratic formula to solve for $w^3$. Then from that, take cube roots and you get three values of z. From that, you get your three values of x.
You see why nobody has taught this in high school algebra since 1959. Also why I am not touching the quartic formula, the equivalent of this for polynomials of degree four.
There are other approaches. And they can work out easier for particular problems. Take, for example, $x^3 - 15x - 4 = 0$ which I introduced in the first act. It’s past the time we set it off.
Rafael Bombelli, in the 1570s, pondered this particular equation. Notice it’s already depressed. A formula developed by Cardano addressed this, in the form $x^3 = 15x + 4$. Notice that’s the second of the three sorts of depressed polynomial. Cardano’s formula says that one of the roots will be at
$x = \sqrt[3]{\frac{q}{2} + r} + \sqrt[3]{\frac{q}{2} - r}$
where
$r = \sqrt{\left(\frac{q}{2}\right)^2 - \left(\frac{p}{3}\right)^3}$
Put to this problem, we get something that looks like a compelling reason to stop:
$x = \sqrt[3]{2 + \sqrt{-121}} + \sqrt[3]{2 - \sqrt{-121}}$
Bombelli did not stop with that, though. He carried on as though these expressions of the square root of -121 made sense. And, if he did that he found these terms added up. You get an x of 4.
Which is true. It’s easy to check that it’s right. And here is the great surprising thing. Start from the respectable enough $x^3 = 15x + 4$ equation. It has nothing suspicious in it, not even negative numbers. Follow it through and you need to use negative numbers. Worse, you need to use the square roots of negative numbers. But keep going, as though you were confident in this, and you get a correct answer. And a real number.
We can get the other roots. Divide $(x - 4)$ out of $x^3 - 15x - 4$. What’s left is $x^2 + 4x + 1$. You can use the quadratic formula for this. The other two roots are $x = -2 + \frac{1}{2} \sqrt{12}$, about -0.268, and $x = -2 - \frac{1}{2} \sqrt{12}$, about -3.732.
So here we have good reasons to work with negative numbers, and with imaginary numbers. We may not trust them. But they get us to correct answers. And this brings up another little secret of mathematics. If all you care about is an answer, then it’s all right to use a dubious method to get there.
There is a logical rigor missing in “we got away with it, I guess”. The name “imaginary numbers” tells of the disapproval of its users. We get the name from René Descartes, who was more generally discussing complex numbers. He wrote something like “in many cases no quantity exists which corresponds to what one imagines”.
John Wallis, taking a break from negative numbers and his other projects and quarrels, thought of how to represent imaginary numbers as branches off a number line. It’s a good scheme that nobody noticed at the time. Leonhard Euler envisioned matching complex numbers with points on the plane, but didn’t work out a logical basis for this. In 1797 Caspar Wessel presented a paper that described using vectors to represent complex numbers. It’s a good approach. Unfortunately that paper too sank without a trace, undiscovered for a century.
In 1806 Jean-Robert Argand wrote an “Essay on the Geometrical Interpretation of Imaginary Quantities”. Jacques Français got a copy, and published a paper describing the basics of complex numbers. He credited the essay, but noted that there was no author on the title page and asked the author to identify himself. Argand did. We started to get some good rigor behind the concept.
In 1831 William Rowan Hamilton, of Hamiltonian fame, described complex numbers using ordered pairs. Once we can define their arithmetic using the arithmetic of real numbers we have a second solid basis. More reason to trust them. Augustin-Louis Cauchy, who proved about four billion theorems of complex analysis, published a new construction of them. This used a group theory approach, a polynomial ring we denote as $R[x]/(x^2 + 1)$. I don’t have the strength to explain all that today. Matrices give us another approach. This matches complex numbers with particular two-row, two-column matrices. This turns the addition and multiplication of numbers into what Hamilton described.
And here we have some idea why mathematicians use negative numbers, and trust imaginary numbers. We are pushed toward them by convenience. Negative numbers let us work with one equation, $x^3 + px + q = 0$, rather than three. (Or more than three equations, if we have to work with an x we know to be negative.) Imaginary numbers we can start with, and find answers we know to be true. And this encourages us to find reasons to trust the results. Having one line of reasoning is good. Having several lines — Argand’s geometric, Hamilton’s coordinates, Cauchy’s rings — is reassuring. We may not be able to point to an imaginary number of anything. But if we can trust our arithmetic on real numbers we can trust our arithmetic on imaginary numbers.
As I mentioned Descartes gave the name “imaginary number” to all of what we would now call “complex numbers”. Gauss published a geometric interpretation of complex numbers in 1831. And gave us the term “complex number”. Along the way he complained about the terminology, though. He noted “had +1, -1, and $\sqrt{-1}$, instead of being called positive, negative, and imaginary (or worse still, impossible) unity, been given the names say, of direct, inverse, and lateral unity, there would hardly have been any scope for such obscurity”. I’ve never heard them term “impossible numbers”, except as an adjective.
The name of a thing doesn’t affect what it is. It can affect how we think about it, though. We can ask whether Asimov’s professor would dismiss “lateral numbers” as mysticism. Or at least as more mystical than “three” is. We can, in context, understand why Descartes thought of these as “imaginary numbers”. He saw them as something to use for the length of a calculation, and that would disappear once its use was done. We still have such concepts, things like “dummy variables” in a calculus problem. We can’t think of a use for dummy variables except to let a calculation proceed. But perhaps we’ll see things differently in four hundred years. Shall have to come back and check.
Thank you for reading through all that. Once again a topic I figured would be a tight 1200 words spilled into twice that. This and the other A-to-Z topics for 2020 should be at this link. And all my A-to-Z essays, this year and past years, should be at this link.
I’m still looking for J, K, and L topics for coming weeks. I’m grateful for any subject nominations you’d care to offer.
## My All 2020 Mathematics A to Z: Hilbert’s Problems
Beth, author of the popular inspiration blog I Didn’t Have My Glasses On …. proposed this topic. Hilbert’s problems are a famous set of questions. I couldn’t hope to summarize them all in an essay of reasonable length. I’d have trouble to do them justice in a short book. But there are still things to say about them.
# Hilbert’s Problems.
It’s easy to describe what Hilbert’s Problems are. David Hilbert, at the 1900 International Congress of Mathematicians, listed ten important problems of the field. In print he expanded this to 23 problems. They covered topics like number theory, group theory, physics, geometry, differential equations, and more. One of the problems was solved that year. Eight of them have been resolved fully. Another nine have been partially answered. Four remain unanswered. Two have generally been regarded as too vague to resolve.
Everyone in mathematics agrees they were big, important questions. Things that represented the things mathematicians of 1900 would most want to know. Things that guided mathematical research for, so far, 120 years.
It does present us with a dilemma. Were Hilbert’s problems listed because he understood what mathematicians would find important? Or did mathematicians find them important because Hilbert listed them? Sadly, mathematicians know of no professionals who have studied questions like this and could offer insight.
There is reason to say that Hilbert’s judgement was good. He listed, for example, the Riemann hypothesis. The hypothesis is still unanswered. Many interesting results would follow from it being proved true, or proved false, or proved unanswerable. Hilbert did not list Fermat’s Last Theorem, unresolved then. Any mathematician would have liked an answer. But nothing of consequence depends on it. But then he also listed making advances in the calculus of variations. A good goal, but not one that requires particular insight to want.
So here is a related problem. Why hasn’t anyone else made such a list? A concise summary of the problems that guides mathematical research?
It’s not because no one tried. At the 1912 International Conference of Mathematicians, Edmund Landau identified four problems in number theory worth solving. None of them have been solved yet. Yutaka Taniyama listed three dozen problems in 1955. William Thurston put forth 24 questions in 1982. Stephen Smale, famous for work in chaos theory, gathered a list of 18 questions in 1998. Barry Simon offered fifteen of them in 2000. Also in 2000 the Clay Mathematics Institute put up seven problems, with a million-dollar bounty on each. Jair Minoro Abe and Shotaro Tanaka gathered 22 questions for a list for 2001. The United States Defense Advanced Research Projects Agency put out a list of 23 of them in 2007.
Apart from Smale’s and the Clay Mathematics lists I never heard of any of them either. Why not? What was special about Hilbert’s list?
For one, he was David Hilbert. Hilbert was a great mathematician, held in high esteem then and now. Besides his list of problems he’s known for the axiomatization of geometry. This built not just logical rigor but a new, formalist, perspective. Also, he’s known for the formalist approach to mathematics. In this, for example, we give up the annoyingly hard task of saying exactly what we mean by a point and a line and a plane. We instead talk about how points and lines and planes relate to each other, definitions we can give. He’s also known for general relativity: Hilbert and Albert Einstein developed its field equations at the same time. We have Hilbert spaces and Hilbert curves and Hilbert metrics and Hilbert polynomials. Fans of pop mathematics speak of the Hilbert Hotel, a structure with infinitely many rooms and used to explore infinitely large sets.
So he was a great mind, well-versed in many fields. And he was in an enviable position, professor of mathematics at the University of Göttingen. At the time, German mathematics was held in particularly high renown. When you see, for example, mathematicians using ‘Z’ as shorthand for ‘integers’? You are seeing a thing that makes sense in German. (It’s for “Zahlen”, meaning the counting numbers.) Göttingen was at the top of German mathematics, and would be until the Nazi purges of academia. It would be hard to find a more renowned position.
And he was speaking at a great moment. The transition from one century to another is a good one for ambitious projects and declarations to be remembered. But the International Congress of Mathematicians was of particular importance. This was only the second meeting of the International Congress of Mathematicians. International Congresses of anything were new in the late 19th century. Many fields — not only mathematics — were asserting their professionalism at the time. It’s when we start to see professional organizations for specific subjects, not just “Science”. It’s when (American) colleges begin offering elective majors for their undergraduates. When they begin offering PhD degrees.
So it was a field when mathematics, like many fields (and nations), hoped to define its institutional prestige. Having an ambitious goal is one way to define that.
It was also an era when mathematicians were thinking seriously about what the field was about. The results were mixed. In the last decades of the 19th century, mathematicians had put differential calculus on a sound logical footing. But then found strange things in, for example, mathematical physics. Boltzmann’s H-theorem (1872) tells us that entropy in a system of particles always increases. Poincaré’s recurrence theorem (1890) tells us a system of particles has to, eventually, return to its original condition. (Or to something close enough.) And therefore it returns to its original entropy, undoing any increase. Both are sound theorems; how can they not conflict?
Even ancient mathematics had new uncertainty. In 1882 Moritz Pasch discovered that Euclid, and everyone doing plane geometry since then, had been using an axiom no one had acknowledged. (If a line that doesn’t pass through any vertex of a triangle intersects one leg of the triangle, then it also meets one other leg of the triangle.) It’s a small and obvious thing. But if everyone had missed it for thousands of years, what else might be overlooked?
I wish now to share my interpretation of this background. And with it my speculations about why we care about Hilbert’s Problems and not about Thurston’s. And I wish to emphasize that, whatever my pretensions, I am not a professional historian of mathematics. I am an amateur and my training consists of “have read some books about a subject of interest”.
By 1900 mathematicians wanted the prestige and credibility and status of professional organizations. Who would not? But they were also aware the foundation of mathematics was not as rigorous as they had thought. It was not yet the “crisis of foundations” that would drive the philosophy of mathematics in the early 20th century. But the prelude to the crisis was there. And here was a universally respected figure, from the most prestigious mathematical institution. He spoke to all the best mathematicians in a way they could never have been addressed before. And presented a compelling list of tasks to do. These were good tasks, challenging tasks. Many of these tasks seemed doable. One was even done almost right away.
And they covered a broad spectrum of mathematics of the time. Everyone saw at least one problem relevant to their field, or to something close to their field. Landau’s problems, posed twelve years later, were all about number theory. Not even all number theory; about prime numbers. That’s nice, but it will only briefly stir the ambitions of the geometer or the mathematical physicist or the logician.
By the time of Taniyama, though? 1955? Times are changed. Taniyama is no inconsiderable figure. The Taniyama-Shimura theorem is a major piece of elliptic functions. It’s how we have a proof of Fermat’s last theorem. But by then, too, mathematics is not so insecure. We have several good ideas of what mathematics is and why it should work. It has prestige and institutional authority. It has enough Congresses and Associations and Meetings that no one can attend them all. It’s moreso by 1982, when William Thurston set up questions. I know that I’m aware of Stephen Smale’s list because I was a teenager during the great fractals boom of the 80s and knew Smale’s name. Also that he published his list near the time I finished my quals. Quals are an important step in pursuing a doctorate. After them you look for a specific thesis problem. I was primed to hear about great ambitious projects I could not possibly complete.
Only the Clay Mathematics Institute’s list has stood out, aided by its catchy name of Millennium Prizes and its offer of quite a lot of money. That’s a good memory aid. Any lay reader can understand that motivation. Two of the Millennium Prize problems were also Hilbert’s problems. One in whole (the Riemann hypothesis again). One in part (one about solutions to elliptic curves). And as the name states, it came out in 2000. It was a year when many organizations were trying to declare bold and fresh new starts for a century they hoped would be happier than the one before. This, too, helps the memory. Who has any strong associations with 1982 who wasn’t born or got their driver’s license that year?
These are my suppositions, though. I could be giving a too-complicated answer. It’s easy to remember that United States President John F Kennedy challenged the nation to land a man on the moon by the end of the decade. Space enthusiasts, wanting something they respect to happen in space, sometimes long for a president to make a similar strong declaration of an ambitious goal and specific deadline. President Ronald Reagan in 1984 declared there would be a United States space station by 1992. In 1986 he declared there would be by 2000 a National Aerospace Plane, capable of flying from Washington to Tokyo in two hours. President George H W Bush in 1989 declared there would be humans on the Moon “to stay” by 2010 and to Mars thereafter. President George W Bush in 2004 declared the Vision for Space Exploration, bringing humans to the moon again by 2020 and to Mars thereafter.
No one has cared about any of these plans. Possibly because the first time a thing is done, it has a power no repetition can claim. But also perhaps because the first attempt succeeded. Which was not due only to its being first, of course, but to the factors that made its goal important to a great number of people for long enough that it succeeded.
Which brings us back to the Euthyphro-like dilemma of Hilbert’s Problems. Are they influential because Hilbert chose well, or did Hlbert’s choosing them make them influential? I suspect this is a problem that cannot be resolved.
Thank you for reading. This and the other other A-to-Z topics for 2020 should be at this link. All my essays for this and past A-to-Z sequences are at this link. And I am taking nominations for J, K, and L topics. I’m grateful for anything you can offer me.
## I’m looking for J, K, and L topics for the All 2020 A-to-Z
As the subject line says, I’m looking at what the next couple of letters should be for my 2020 A-to-Z. Please put in a comment here with something you think it’d be interesting to see me explain. I’m up for most any topic with some mathematical connection, including biographies.
Please if you suggest something, let me know of any project that you have going on. I’m happy to share links to other blogs, teaching projects, YouTube channels, or whatever else you have going on that’s worth sharing.
I am open to revisiting a subject from past years, if I think I could do a better piece on it. Topics I’ve already covered, starting with the letter ‘J’, are:
Topics I’ve already covered, starting with the letter ‘K’, are:
Topics I’ve already covered, starting with the letter ‘L’, are:
The essays for my All 2020 Mathematics A to Z are at this link. Posts from all of the A-to-Z posts, this year and previous years, are at this link.
## My All 2020 Mathematics A to Z: J Willard Gibbs
Charles Merritt sugested a biographical subject for G. (There are often running themes in an A-to-Z and this year’s seems to be “biography”.) I don’t know of a web site or other project that Merritt has that’s worth sharing, but if I learn of it, I’ll pass it along.
# J Willard Gibbs.
My love and I, like many people, tried last week to see the comet NEOWISE. It took several attempts. When finally we had binoculars and dark enough sky we still had the challenge of where to look. Finally determined searching and peripheral vision (which is more sensitive to faint objects) found the comet. But how to guide the other to a thing barely visible except with binoculars? Between the silhouettes of trees and a convenient pair of guide stars we were able to put the comet’s approximate location in words. Soon we were experts at finding it. We could turn a head, hold up the binoculars, and see a blue-ish puff of something.
To perceive a thing is not to see it. Astronomy is full of things seen but not recognized as important. There is a great need for people who can describe to us how to see a thing. And this is part of the significance of J Willard Gibbs.
American science, in the 19th century, had an inferiority complex compared to European science. Fairly, to an extent: what great thinkers did the United States have to compare to William Thompson or Joseph Fourier or James Clerk Maxwell? The United States tried to argue that its thinkers were more practical minded, with Joseph Henry as example. Without downplaying Henry’s work, though? The stories of his meeting the great minds of Europe are about how he could fix gear that Michael Faraday could not. There is a genius in this, yes. But we are more impressed by magnetic fields than by any electromagnet.
Gibbs is the era’s exception, a mathematical physicist of rare insight and creativity. In his ability to understand problems, yes. But also in organizing ways to look at problems so others can understand them better. A good comparison is to Richard Feynman, who understood a great variety of problems, and organized them for other people to understand. No one, then or now, doubted Gibbs compared well to the best European minds.
Gibbs’s life story is almost the type case for a quiet academic life. He was born into an academic/ministerial family. Attended Yale. Earned what appears to be the first PhD in engineering granted in the United States, and only the fifth non-honorary PhD in the country. Went to Europe for three years, then came back home, got a position teaching at Yale, and never left again. He was appointed Professor of Mathematical Physics, the first such in the country, at age 32 and before he had even published anything. This speaks of how well-connected his family was. Also that he was well-off enough not to need a salary. He wouldn’t take one until 1880, when Yale offered him two thousand per year against Johns Hopkins’s three.
Between taking his job and taking his salary, Gibbs took time to remake physics. This was in thermodynamics, possibly the most vibrant field of 19th century physics. The wonder and excitement we see in quantum mechanics resided in thermodynamics back then. Though with the difference that people with a lot of money were quite interested in the field’s results. These were people who owned railroads, or factories, or traction companies. Extremely practical fields.
What Gibbs offered was space, particularly, phase space. Phase space describes the state of a system as a point in … space. The evolution of a system is typically a path winding through space. Constraints, like the conservation of energy, we can usually understand as fixing the system to a surface in phase space. Phase space can be as simple as “the positions and momentums of every particle”, and that often is what we use. It doesn’t need to be, though. Gibbs put out diagrams where the coordinates were things like temperature or pressure or entropy or energy. Looking at these can let one understand a thermodynamic system. They use our geometric sense much the same way that charts of high- and low-pressure fronts let one understand the weather. James Clerk Maxwell, famous for electromagnetism, was so taken by this he created plaster models of the described surface.
This is, you might imagine, pretty serious, heady stuff. So you get why Gibbs published it in the Transactions of the Connecticut Academy: his brother-in-law was the editor. It did not give the journal lasting fame. It gave his brother-in-law a heightened typesetting bill, and Yale faculty and New Haven businessmen donated funds.
Which gets to the less-happy parts of Gibbs’s career. (I started out with ‘less pleasant’ but it’s hard to spot an actually unpleasant part of his career.) This work sank without a trace, despite Maxwell’s enthusiasm. It emerged only in the middle of the 20th century, as physicists came to understand their field as an expression of geometry.
That’s all right. Chemists understood the value of Gibbs’s thermodynamics work. He introduced the enthalpy, an important thing that nobody with less than a Master’s degree in Physics feels they understand. Changes of enthalpy describe how heat transfers. And the Gibbs Free Energy, which measures how much reversible work a system can do if the temperature and pressure stay constant. A chemical reaction where the Gibbs free energy is negative will happen spontaneously. If the system’s in equilibrium, the Gibbs free energy won’t change. (I need to say the Gibbs free energy as there’s a different quantity, the Helmholtz free energy, that’s also important but not the same thing.) And, from this, the phase rule. That describes how many independently-controllable variables you can see in mixing substances.
In the 1880s Gibbs worked on something which exploded through physics and mathematics. This was vectors. He didn’t create them from nothing. Hermann Günter Grassmann — whose fascinating and frustrating career I hadn’t known of before this — laid much of the foundation. Building on Grassman and W K Clifford, though, let Gibbs present vectors as we now use them in physics. How to define dot products and cross products. How to use them to simplify physics problems. How they’re less work than quaternions are. Gibbs was not the only person to recast physics in vector form. Oliver Heaviside is another important mathematical physicist of the time who did. But Gibbs identified the tools extremely well. You can read his Elements of Vector Analysis. It’s not very different from what a modern author would write on the subject. It’s terser than I would write, but terse is also respectful of someone’s time and ability to reason out explanations of small points.
There are more pieces. They don’t all fit in a neat linear timeline; nobody’s life really does. Gibbs’s thermodynamics work, leading into statistical mechanics, foreshadows much of quantum mechanics. He’s famous for the Gibbs Paradox, which concerns the entropy of mixing together two different kinds of gas. Why is this different from mixing together two containers of the same kind of gas? And the answer is that we have to think more carefully about what we mean by entropy, and about the differences between containers.
There is a Gibbs phenomenon, known to anyone studying Fourier series. The Fourier series is a sum of sine and cosine functions. It approximates an arbitrary original function. The series is a continuous function; you could draw it without lifting your pen. If the original function has a jump, though? A spot where you have to lift your pen? The Fourier series for that represents the jump with a region where its quite-good approximation suddenly turns bad. It wobbles around the ‘correct’ values near the jump. Using more terms in the series doesn’t make the wobbling shrink. Gibbs described it, in studying sawtooth waves. As it happens, Henry Wilbraham first noticed and described this in 1848. But Wilbraham’s work went unnoticed until after Gibbs’s rediscovery.
And then there was a bit in which Gibbs was intrigued by a comet that prolific comet-spotter Lewis Swift observed in 1880. Finding the orbit of a thing from a handful of observations is one of the great problems of astronomical mathematics. Karl Friedrich Gauss started the 19th century with his work projecting the orbit of the newly-discovered and rapidly-lost asteroid Ceres. Gibbs put his vector notation to the work of calculating orbits. His technique, I am told by people who seem to know, is less difficult and more numerically stable than was earlier used.
Swift’s comet of 1880, it turns out, was spotted in 1869 by Wilhelm Tempel. It was lost after its 1908 perihelion. Comets have a nasty habit of changing their orbits on us. But it was rediscovered in 2001 by the Lincoln Near-Earth Asteroid Research program. It’s next to reach perihelion the 26th of November, 2020. You might get to see this, another thing touched by J Willard Gibbs.
This and the other other A-to-Z topics for 2020 should be at this link. All my essays for this and past A-to-Z sequences are at this link. I’ll soon be opening f or topics for J, K, and L, essays also. Thanks for reading.
## My All 2020 Mathematics A to Z: Fibonacci
Dina Yagodich suggested today’s A-to-Z topic. I thought a quick little biography piece would be a nice change of pace. I discovered things were more interesting than that.
# Fibonacci.
I realized preparing for this that I have never read a biography of Fibonacci. This is hardly unique to Fibonacci. Mathematicians buy into the legend that mathematics is independent of human creation. So the people who describe it are of lower importance. They learn a handful of romantic tales or good stories. In this way they are much like humans. I know at least a loose sketch of many mathematicians. But Fibonacci is a hard one for biography. Here, I draw heavily on the book Fibonacci, his numbers and his rabbits, by Andriy Drozdyuk and Denys Drozdyuk.
We know, for example, that Fibonacci lived until at least 1240. This because in 1240 Pisa awarded him an annual salary in recognition of his public service. We think he was born around 1170, and died … sometime after 1240. This seems like a dismal historical record. But, for the time, for a person of slight political or military importance? That’s about as good as we could hope for. It is hard to appreciate how much documentation we have of lives now, and how recent a phenomenon that is.
Even a fact like “he was alive in the year 1240” evaporates under study. Italian cities, then as now, based the year on the time since the notional birth of Christ. Pisa, as was common, used the notional conception of Christ, on the 25th of March, as the new year. But we have a problem of standards. Should we count the year as the number of full years since the notional conception of Christ? Or as the number of full and partial years since that important 25th of March?
If the question seems confusing and perhaps angering let me try to clarify. Would you say that the notional birth of Christ that first 25th of December of the Christian Era happened in the year zero or in the year one? (Pretend there was a year zero. You already pretend there was a year one AD.) Pisa of Leonardo’s time would have said the year one. Florence would have said the year zero, if they knew of “zero”. Florence matters because when Florence took over Pisa, they changed Pisa’s dating system. Sometime later Pisa changed back. And back again. Historians writing, aware of the Pisan 1240 on the document, may have corrected it to the Florence-style 1241. Or, aware of the change of the calendar and not aware that their source already accounted for it, redated it 1242. Or tried to re-correct it back and made things worse.
This is not a problem unique to Leonardo. Different parts of Europe, at the time, had different notions for the year count. Some also had different notions for what New Year’s Day would be. There were many challenges to long-distance travel and commerce in the time. Not the least is that the same sun might shine on at least three different years at once.
We call him Fibonacci. Did he? The question defies a quick answer. His given name was Leonardo, and he came from Pisa, so a reliable way to address him would have “Leonardo of Pisa”, albeit in Italian. He was born into the Bonacci family. He did in some manuscripts describe himself as “Leonardo filio Bonacci Pisano”, give or take a few letters. My understanding is you can get a good fun quarrel going among scholars of this era by asking whether “Filio Bonacci” would mean “the son of Bonacci” or “of the family Bonacci”. Either is as good for us. It’s tempting to imagine the “Filio” being shrunk to “Fi” and the two words smashed together. But that doesn’t quite say that Leonardo did that smashing together.
Nor, exactly, when it did happen. We see “Fibonacci” used in mathematical works in the 19th century, followed shortly by attempts to explain what it means. We know of a 1506 manuscript identifying Leonardo as Fibonacci. But there remains a lot of unexplored territory.
If one knows one thing about Fibonacci though, one knows about the rabbits. They give birth to more rabbits and to the Fibonacci Sequence. More on that to come. If one knows two things about Fibonacci, the other is about his introducing Arabic numerals to western mathematics. I’ve written of this before. And the subject is … more ambiguous, again.
Most of what we “know” of Fibonacci’s life is some words he wrote to explain why he was writing his bigger works. If we trust he was not creating a pleasant story for the sake of engaging readers, then we can finally say something. (If one knows three things about Fibonacci, and then five things, and then eight, one is making a joke.)
Fibonacci’s father was, in the 1290s, posted to Bejaia, a port city on the Algerian coast. The father did something for Pisa’s duana there. And what is a duana? … Again, certainty evaporates. We have settled on saying it’s a customs house, and suppose our readers know what goes on in a customs house. The duana had something to do with clearing trade through the port. His father’s post was as a scribe. He was likely responsible for collecting duties and registering accounts and keeping books and all that. We don’t know how long Fibonacci spent there. “Some days”, during which he alleges he learned the digits 1 through 9. And after that, travelling around the Mediterranean, he saw why this system was good, and useful. He wrote books to explain it all and convince Europe that while Roman numerals were great, Arabic numerals were more practical.
It is always dangerous to write about “the first” person to do anything. Except for Yuri Gagarin, Alexei Leonov, and Neil Armstrong, “the first” to do anything dissolves into ambiguity. Gerbert, who would become Pope Sylvester II, described Arabic numerals (other than zero) by the end of the 10th century. He added in how this system along with the abacus made computation easier. Arabic numerals appear in the Codex Conciliorum Albeldensis seu Vigilanus, written in 976 AD in Spain. And it is not as though Fibonacci was the first European to travel to a land with Arabic numerals, or the first perceptive enough to see their value.
Allow that, though. Every invention has precursors, some so close that it takes great thinking to come up with a reason to ignore them. There must be some credit given to the person who gathers an idea into a coherent, well-explained whole. And with Fibonacci, and his famous manuscript of 1202, the Liber Abaci, we have … more frustration.
It’s not that Liber Abaci does not exist, or that it does not have what we credit it for having. We don’t have any copies of the 1202 edition, but we do have a 1228 manuscript, at least, and that starts out with the Arabic numeral system. And why this system is so good, and how to use it. It should convince anyone who reads it.
If anyone read it. We know of about fifteen manuscripts of Liber Abaci, only two of them reasonably complete. This seems sparse even for manuscripts in the days they had to be hand-copied. This until you learn that Baldassarre Boncompagni published the first known printed version in 1857. In print, in Italian, it took up 459 pages of text. Its first English translation, published by Laurence E Sigler in 2002(!) takes up 636 pages (!!). Suddenly it’s amazing that as many as two complete manuscripts survive. (Wikipedia claims three complete versions from the 13th and 14th centuries exist. And says there are about nineteen partial manuscripts with another nine incomplete copies. I do not explain this discrepancy.)
He had other books. The Liber Quadratorum, for example, a book about algebra. Wikipedia seems to say we have it through a single manuscript, copied in the 15th century. Practica Geometriae, translated from Latin in 1442 at least. A couple other now-lost manuscripts. A couple pieces about specific problems.
So perhaps only a handful of people read Fibonacci. Ah, but if they were the right people? He could have been a mathematical Velvet Underground, read by a hundred people, each of whom founded a new mathematics.
We could trace those hundred readers by the first thing anyone knows Fibonacci for. His rabbits, breeding in ways that rabbits do not, and the sequence of whole numbers those provide. Fibonacci did not discover this sequence. You knew that. Nothing in mathematics gets named for its discoverer. Virahanka, an Indian mathematician who lived somewhere between the sixth and eighth centuries, described the sequence exactly. Gopala, writing sometime in the 1130s, expanded on this.
This is not to say Fibonacci copied any of these (and more) Indian mathematicians. The world is large and manuscripts are hard to read. The sequence can be re-invented by anyone bored in the right way. Ah, but think of those who learned of the sequence and used it later on, following Fibonacci’s lead. For example, in 1611 Johannes Kepler wrote a piece that described Fibonacci’s sequence. But that does not name Fibonacci. He mentions other mathematicians, ancient and contemporary. The easiest supposition is he did not know he was writing something already seen. In 1844, Gabriel Lamé used Fibonacci numbers in studying algorithm complexity. He did not name Fibonacci either, though. (Lamé is famous today for making some progress on Fermat’s last theorem. He’s renowned for work in differential equations and on ellipse-like curves. If you have thought what a neat weird shape the equation $x^4 + y^4 = 1$ can describe you have tread in Lamé’s path.)
Things picked up for Fibonacci’s reputation in 1876, thanks to Édouard Lucas. (Lucas is notable for other things. Normal people might find interesting that he proved by hand the number $2^{127} - 1$ was prime. This seems to be the largest prime number ever proven by hand. He also created the Tower of Hanoi problem.) In January of 1876, Lucas wrote about the Fibonacci sequence, describing it as “the series of Lamé”. By May, though in writing about prime numbers, he has read Boncompagni’s publications. He says how this thing “commonly known as the sequence of Lamé was first presented by Fibonacci”.
And Fibonacci caught Lucas’s imagination. Lucas shared, particularly, the phrasing of this sequence as something in the reproduction of rabbits. This captured mathematicians’, and then people’s imaginations. It’s akin to Émile Borel’s room of a million typing monkeys. By the end of the 19th century Leonardo of Pisa had both a name and fame.
We can still ask why. The proximate cause is Édouard Lucas, impressed (I trust) by Boncompagni’s editions of Fibonacci’s work. Why did Baldassarre Boncompagni think it important to publish editions of Fibonacci? Well, he was interested in the history of science. He edited the first Italian journal dedicated to the history of mathematics. He may have understood that Fibonacci was, if not an important mathematician, at least one who had interesting things to write. Boncompagni’s edition of Liber Abaci came out in 1857. By 1859 the state of Tuscany voted to erect a statue.
So I speculate, without confirming that at least some of Fibonacci’s good name in the 19th century was a reflection of Italian unification. The search for great scholars whose intellectual achievements could reflect well on a nation trying to build itself.
And so we have bundles of ironies. Fibonacci did write impressive works of great mathematical insight. And he was recognized at the time for that work. The things he wrote about Arabic numerals were correct. His recommendation to use them was taken, but by people who did not read his advice. After centuries of obscurity he got some notice. And a problem he did not create nor particularly advance brought him a fame that’s lasted a century and a half now, and looks likely to continue.
I am always amazed to learn there are people not interested in history.
And now I can try to get ahead of deadline for next week. This and all my other A-to-Z topics for the year should be at this link. All my essays for this and past A-to-Z sequences are at this link. And I am still taking topics to discuss in the coming weeks. Thank you for reading and please take care.
## My All 2020 Mathematics A to Z: Exponential
GoldenOj suggested the exponential as a topic. It seemed like a good important topic, but one that was already well-explored by other people. Then I realized I could spend time thinking about something which had bothered me.
In here I write about “the” exponential, which is a bit like writing about “the” multiplication. We can talk about $2^3$ and $10^2$ and many other such exponential functions. One secret of algebra, not appreciated until calculus (or later), is that all these different functions are a single family. Understanding one exponential function lets you understand them all. Mathematicians pick one, the exponential with base e, because we find that convenient. e itself isn’t a convenient number — it’s a bit over 2.718 — but it has some wonderful properties. When I write “the exponential” here, I am looking at this function where we look at $e^{t}$.
This piece will have a bit more mathematics, as in equations, than usual. If you like me writing about mathematics more than reading equations, you’re hardly alone. I recommend letting your eyes drop to the next sentence, or at least the next sentence that makes sense. You should be fine.
# Exponential.
My professor for real analysis, in grad school, gave us one of those brilliant projects. Starting from the definition of the logarithm, as an integral, prove at least thirty things. They could be as trivial as “the log of 1 is 0”. They could be as subtle as how to calculate the log of one number in a different base. It was a great project for testing what we knew about why calculus works.
And it gives me the structure to write about the exponential function. Anyone reading a pop-mathematics blog about exponentials knows them. They’re these functions that, as the independent variable grows, grow ever-faster. Or that decay asymptotically to zero. Some readers know that, if the independent variable is an imaginary number, the exponential is a complex number too. As the independent variable grows, becoming a bigger imaginary number, the exponential doesn’t grow. It oscillates, a sine wave.
That’s weird. I’d like to see why that makes sense.
To say “why” this makes sense is doomed. It’s like explaining “why” 36 is divisible by three and six and nine but not eight. It follows from what the words we have mean. The “why” I’ll offer is reasons why this strange behavior is plausible. It’ll be a mix of deductive reasoning and heuristics. This is a common blend when trying to understand why a result happens, or why we should accept it.
I’ll start with the definition of the logarithm, as used in real analysis. The natural logarithm, if you’re curious. It has a lot of nice properties. You can use this to prove over thirty things. Here it is:
$log\left(x\right) = \int_{1}^{x} \frac{1}{s} ds$
The “s” is a dummy variable. You’ll never see it in actual use.
So now let me summon into existence a new function. I want to call it g. This is because I’ve worked this out before and I want to label something else as f. There is something coming ahead that’s a bit of a syntactic mess. This is the best way around it that I can find.
$g(x) = \frac{1}{c} \int_{1}^{x} \frac{1}{s} ds$
Here, ‘c’ is a constant. It might be real. It might be imaginary. It might be complex. I’m using ‘c’ rather than ‘a’ or ‘b’ so that I can later on play with possibilities.
So the alert reader noticed that g(x) here means “take the logarithm of x, and divide it by a constant”. So it does. I’ll need two things built off of g(x), though. The first is its derivative. That’s taken with respect to x, the only variable. Finding the derivative of an integral sounds intimidating but, happy to say, we have a theorem to make this easy. It’s the Fundamental Theorem of Calculus, and it tells us:
$g'(x) = \frac{1}{c}\cdot\frac{1}{x}$
We can use the ‘ to denote “first derivative” if a function has only one variable. Saves time to write and is easier to type.
The other thing that I need, and the thing I really want, is the inverse of g. I’m going to call this function f(t). A more common notation would be to write $g^{-1}(t)$ but we already have $g'(x)$ in the works here. There is a limit to how many little one-stroke superscripts we need above g. This is the tradeoff to using ‘ for first derivatives. But here’s the important thing:
$x = f(t) = g^{-1}(t)$
Here, we have some extratextual information. We know the inverse of a logarithm is an exponential. We even have a standard notation for that. We’d write
$x = f(t) = e^{ct}$
in any context besides this essay as I’ve set it up.
What I would like to know next is: what is the derivative of f(t)? This sounds impossible to know, if we’re thinking of “the inverse of this integration”. It’s not. We have the Inverse Function Theorem to come to our aid. We encounter the Inverse Function Theorem briefly, in freshman calculus. There we use it to do as many as two problems and then hide away forever from the Inverse Function Theorem. (This is why it’s not mentioned in my quick little guide to how to take derivatives.) It reappears in real analysis for this sort of contingency. The inverse function theorem tells us, if f the inverse of g, that:
$f'(t) = \frac{1}{g'(f(t))}$
That g'(f(t)) means, use the rule for g'(x), with f(t) substituted in place of ‘x’. And now we see something magic:
$f'(t) = \frac{1}{\frac{1}{c}\cdot\frac{1}{f(t)}}$
$f'(t) = c\cdot f(t)$
And that is the wonderful thing about the exponential. Its derivative is a constant times its original value. That alone would make the exponential one of mathematics’ favorite functions. It allows us, for example, to transform differential equations into polynomials. (If you want everlasting fame, albeit among mathematicians, invent a new way to turn differential equations into polynomials.) Because we could turn, say,
$f'''(t) - 3f''(t) + 3f'(t) - f(t) = 0$
into
$c^3 e^{ct} - 3c^2 e^{ct} + 3c e^{ct} - e^{ct} = 0$
and then
$\left(c^3 - 3c^2 + 3c - 1\right) e^{ct} = 0$
by supposing that f(t) has to be $e^{ct}$ for the correct value of c. Then all you need do is find a value of ‘c’ that makes that last equation true.
Supposing that the answer has this convenient form may remind you of searching for the lost keys over here where the light is better. But we find so many keys in this good light. If you carry on in mathematics you will never stop seeing this trick, although it may be disguised.
In part because it’s so easy to work with. In part because exponentials like this cover so much of what we might like to do. Let’s go back to looking at the derivative of the exponential function.
$f'(t) = c\cdot f(t)$
There are many ways to understand what a derivative is. One compelling way is to think of it as the rate of change. If you make a tiny change in t, how big is the change in f(t)? So what is the rate of change here?
We can pose this as a pretend-physics problem. This lets us use our physical intuition to understand things. This also is the transition between careful reasoning and ad-hoc arguments. Imagine a particle that, at time ‘t’, is at the position $x = f(t)$. What is its velocity? That’s the first derivative of its position, so, $x' = f'(t) = c\cdot f(t)$.
If we are using our physics intuition to understand this it helps to go all the way. Where is the particle? Can we plot that? … Sure. We’re used to matching real numbers with points on a number line. Go ahead and do that. Not to give away spoilers, but we will want to think about complex numbers too. Mathematicians are used to matching complex numbers with points on the Cartesian plane, though. The real part of the complex number matches the horizontal coordinate. The imaginary part matches the vertical coordinate.
So how is this particle moving?
To say for sure we need some value of t. All right. Pick your favorite number. That’s our t. f(t) follows from whatever your t was. What’s interesting is that the change also depends on c. There’s a couple possibilities. Let me go through them.
First, what if c is zero? Well, then the definition of g(t) was gibberish and we can’t have that. All right.
What if c is a positive real number? Well, then, f'(t) is some positive multiple of whatever f(t) was. The change is “away from zero”. The particle will push away from the origin. As t increases, f(t) increases, so it pushes away faster and faster. This is exponential growth.
What if c is a negative real number? Well, then, f'(t) is some negative multiple of whatever f(t) was. The change is “towards zero”. The particle pulls toward the origin. But the closer it gets the more slowly it approaches. If t is large enough, f(t) will be so tiny that $c\cdot f(t)$ is too small to notice. The motion declines into imperceptibility.
What if c is an imaginary number, though?
So let’s suppose that c is equal to some real number b times $\imath$, where $\imath^2 = -1$.
I need some way to describe what value f(t) has, for whatever your pick of t was. Let me say it’s equal to $\alpha + \beta\imath$, where $\alpha$ and $\beta$ are some real numbers whose value I don’t care about. What’s important here is that $f(t) = \alpha + \beta\imath$.
And, then, what’s the first derivative? The magnitude and direction of motion? That’s easy to calculate; it’ll be $\imath b f(t) = -\beta + \alpha\imath$. This is an interesting complex number. Do you see what’s interesting about it? I’ll get there next paragraph.
So f(t) matches some point on the Cartesian plane. But f'(t), the direction our particle moves with a small change in t, is another poiat whatever complex number f'(t) is as another point on the plane. The line segment connecting the origin to f(t) is perpendicular to the one connecting the origin to f'(t). The ‘motion’ of this particle is perpendicular to its position. And it always is. There’s several ways to show this. An easy one is to just pick some values for $\alpha$ and $\beta$ and b and try it out. This proof is not rigorous, but it is quick and convincing.
If your direction of motion is always perpendicular to your position, then what you’re doing is moving in a circle around the origin. This we pick up in physics, but it applies to the pretend-particle moving here. The exponentials of $\imath t$ and $2 \imath t$ and $-40 \imath t$ will all be points on a locus that’s a circle centered on the origin. The values will look like the cosine of an angle plus $\imath$ times the sine of an angle.
And there, I think, we finally get some justification for the exponential of an imaginary number being a complex number. And for why exponentials might have anything to do with cosines and sines.
You might ask what if c is a complex number, if it’s equal to $a + b\imath$ for some real numbers a and b. In this case, you get spirals as t changes. If a is positive, you get points spiralling outward as t increases. If a is negative, you get points spiralling inward toward zero as t increases. If b is positive the spirals go counterclockwise. If b is negative the spirals go clockwise. $e^{(a + \imath b) t}$ is the same as $e^{at} \cdot e^{\imath b t}$.
This does depend on knowing the exponential of a sum of terms, such as of $a + \imath b$, is equal to the product of the exponential of those terms. This is a good thing to have in your portfolio. If I remember right, it comes in around the 25th thing. It’s an easy result to have if you already showed something about the logarithms of products.
Thank you for reading. I have this and all my A-to-Z topics for the year at this link. All my essays for this and past A-to-Z sequences are at this link. And I am still interested in topics to discuss in the coming weeks. Take care, please.
## I’m looking for G, H, and I topics for the All 2020 A-to-Z
When I look at how much I’m not getting ahead of deadline for these essays I’m amazed to think I should be getting topics for as much as five weeks out. Still, I should.
What I’d like is suggestions of things to write about. Any topics that have a name starting with the letters ‘G’, ‘H’, or ‘I’, that might be used for a topic. People with mathematical significance count, too. Please, with any nominations, let me know how to credit you for the topic. Also please mention any projects that you’re working on that could use attention. I try to credit and support people where I can.
These are the topics I’ve covered in past essays. I’m willing to revisit one if I realize I have fresh thoughts about it, too. I haven’t done so yet, but I’ll tell you, I was thinking hard about doing a rewrite on “dual”.
Topics I’ve already covered, starting with the letter ‘G’, are:
Topics I’ve already covered, starting with the letter ‘H’, are:
Topics I’ve already covered, starting with the letter ‘I’, are:
Thank you all for your thoughts, and for reading.
## My All 2020 Mathematics A to Z: Delta
I have Dina Yagodich to thank for my inspiration this week. As will happen with these topics about something fundamental, this proved to be a hard topic to think about. I don’t know of any creative or professional projects Yagodich would like me to mention. I’ll pass them on if I learn of any.
# Delta.
In May 1962 Mercury astronaut Deke Slayton did not orbit the Earth. He had been grounded for (of course) a rare medical condition. Before his grounding he had selected his flight’s callsign and capsule name: Delta 7. His backup, Wally Schirra, who did not fly in Slayton’s place, named his capsule the Sigma 7. Schirra chose sigma for its mathematical and scientific meaning, representing the sum of (in principle) many parts. Slayton said he chose Delta only because he would have been the fourth American into space and Δ is the fourth letter of the Greek alphabet. I believe it, but do notice how D is so prominent a letter in Slayton’s name. And S, Σ, prominent in both Slayton and Schirra’s.
Δ is also a prominent mathematics and engineering symbol. It has several meanings, with several of the most useful ones escaping mathematics and becoming vaguely known things. They blur together, as ideas that are useful and related and not identical will do.
If “Δ” evokes anything mathematical to a person it is “change”. This probably owes to space in the popular imagination. Astronauts talking about the delta-vee needed to return to Earth is some of the most accessible technical talk of Apollo 13, to pick one movie. After that it’s easy to think of pumping the car’s breaks as shedding some delta-vee. It secondarily owes to school, high school algebra classes testing people on their ability to tell how steep a line is. This gets described as the change-in-y over the change-in-x, or the delta-y over delta-x.
Δ prepended to a variable like x or y or v we read as “the change in”. It fits the astronaut and the algebra uses well. The letter Δ by itself means as much as the words “the change in” do. It describes what we’re thinking about, but waits for a noun to complete. We say “the” rather than “a”, I’ve noticed. The change in velocity needed to reach Earth may be one thing. But “the” change in x and y coordinates to find the slope of a line? We can use infinitely many possible changes and get a good result. We must say “the” because we consider one at a time.
Used like this Δ acts like an operator. It means something like “a difference between two values of the variable ” and lets us fill in the blank. How to pick those two values? Sometimes there’s a compelling choice. We often want to study data sampled at some schedule. The Δ then is between one sample’s value and the next. Or between the last sample value and the current one. Which is correct? Ask someone who specializes in difference equations. These are the usually numeric approximations to differential equations. They turn up often in signal processing or in understanding the flows of fluids or the interactions of particles. We like those because computers can solve them.
Δ, as this operator, can even be applied to itself. You read ΔΔ x as “the change in the change in x”. The prose is stilted, but we can understand it. It’s how the change in x has itself changed. We can imagine being interested in this Δ2 x. We can see this as a numerical approximation to the second derivative of x, and this gets us back to differential equations. There are similar results for ΔΔΔ x even if we don’t wish to read it all out.
In principle, Δ x can be any number. In practice, at least for an independent variable, it’s a small number, usually real. Often we’re lured into thinking of it as positive, because a phrase like “x + Δ x” looks like we’re making a number a little bigger than x. When you’re a mathematician or a quality-control tester you remember to consider “what if Δ x is negative”. From testing that learn you wrote your computer code wrong. We’re less likely to assume this positive-ness for the dependent variable. By the time we do enough mathematics to have opinions we’ve seen too many decreasing functions to overlook that Δ y might be negative.
Notice that in that last paragraph I faithfully wrote Δ x and Δ y. Never Δ bare, unless I forgot and cannot find it in copy-editing. I’ve said that Δ means “the change in”; to write it without some variable is like writing √ by itself. We can understand wishing to talk about “the square root of”, as a concept. Still it means something else than √ x does.
We do write Δ by itself. Even professionals do. Written like this we don’t mean “the change in [ something ]”. We instead mean “a number”. In this role the symbol means the same thing as x or y or t might, a way to refer to a number whose value we might not know. We might not care about. The implication is that it’s small, at least if it’s something to add to the independent variable. We use it when we ponder how things would be different if there were a small change in something.
Small but not tiny. Here we step into mathematics as a language, which can be as quirky and ambiguous as English. Because sometimes we use the lower-case δ. And this also means “a small number”. It connotes a smaller number than Δ. Is 0.01 a suitable value for Δ? Or for δ? Maybe. My inclination would be to think of that as Δ, reserving δ for “a small number of value we don’t care to specify”. This may be my quirk. Others might see it different.
We will use this lowercase δ as an operator too, thinking of things like “x + δ x”. As you’d guess, δ x connotes a small change in x. Smaller than would earn the title Δ x. There is no declaring how much smaller. It’s contextual. As with δ bare, my tendency is to think that Δ x might be a specific number but that δ x is “a perturbation”, the general idea of a small number. We can understand many interesting problems as a small change from something we already understand. That small change often earns such a δ operator.
There are smaller changes than δ x. There are infinitesimal differences. This is our attempt to make sense of “a number as close to zero as you can get without being zero”. We forego the Greek letters for this and revert to Roman letters: dx and dy and dt and the other marks of differential calculus. These are difficult numbers to discuss. It took more than a century of mathematicians’ work to find a way our experience with Δ x could inform us about dx. (We do not use ‘d’ alone to mean an even smaller change than δ. Sometimes we will in analysis write d with a space beside it, waiting for a variable to have its differential taken. I feel unsettled when I see it.)
Much of the completion of work we can credit to Augustin Cauchy, who’s credited with about 800 publications. It’s an intimidating record, even before considering its importance. Cauchy is, per Florian Cajori’s History Mathematical Notations, one of the persons we can credit with the use of Δ as symbol for “the change in”. (Section 610.) He’s not the only one. Leonhardt Euler and Johann Bernoulli (section 640) used Δ to represent a finite difference, the difference between two values.
I’m not aware of an explicit statement why Δ got the pick, as opposed to other letters. It’s hard to imagine a reason besides “difference starts with d”. That an etymology seems obvious does not make it so. It does seem to have a more compelling explanation than the use of “m” for the slope of a line, or $\frac{\Delta y}{\Delta x}$, though.
Slayton’s Mercury flight, performed by Scott Carpenter, did not involve any appreciable changes in orbit, a Δ v. No crewed spacecraft would until Gemini III. The Mercury flight did involve tests in orienting the spacecraft, in Δ θ and Δ φ on the angles of the spacecraft’s direction. These might have been in Slayton’s mind. He eventually flew into space on the Apollo-Soyuz Test Project, when an accident during landing exposed the crew to toxic gases. The investigation discovered a lesion on Slayton’s lung. A tiny thing, ultimately benign, which discovered earlier could have kicked him off the mission and altered his life so.
Thank you all for reading. I’m gathering all my 2020 A-to-Z essays at this link, and have all my A-to-Z essays of any kind at this link. Here is hoping there’s a good week ahead.
## My All 2020 Mathematics A to Z: Complex Numbers
Mr Wu, author of the Singapore Maths Tuition blog, suggested complex numbers for a theme. I wrote long ago a bit about what complex numbers are and how to work with them. But that hardly exhausts the subject, and I’m happy revisiting it.
# Complex Numbers.
A throwaway joke somewhere in The Hitchhiker’s Guide To The Galaxy has Marvin The Paranoid Android grumble that he’s invented a square root for minus one. Marvin’s gone and rejiggered all of mathematics while waiting for something better to do. Nobody cares. It reminds us while Douglas Adams established much of a particular generation of nerd humor, he was not himself a nerd. The nerds who read The Hitchhiker’s Guide To The Galaxy obsessively know we already did that, centuries ago. Marvin’s creation was as novel as inventing “one-half”. (It may be that Adams knew, and intended Marvin working so hard on the already known as the joke.)
Anyone who’d read a pop mathematics blog like this likely knows the rough story of complex numbers in Western mathematics. The desire to find roots of polynomials. The discovery of formulas to find roots. Polynomials with numbers whose formulas demanded the square roots of negative numbers. And the discovery that sometimes, if you carried on as if the square root of a negative number made sense, the ugly terms vanished. And you got correct answers in the end. And, eventually, mathematicians relented. These things were unsettling enough to get unflattering names. To call a number “imaginary” may be more pejorative than even “negative”. It hints at the treatment of these numbers as falsework, never to be shown in the end. To call the sum of a “real” number and an “imaginary” “complex” is to warn. An expert might use these numbers only with care and deliberation. But we can count them as numbers.
I mentioned when writing about quaternions how when I learned of complex numbers I wanted to do the same trick again. My suspicion is many mathematicians do. The example of complex numbers teases us with the possibilites of other numbers. If we’ve defined $\imath$ to be “a number that, squared, equals minus one”, what next? Could we define a $\sqrt{\imath}$? How about a $\log{\imath}$? Maybe something else? An arc-cosine of $\imath$?
You can try any of these. They turn out to be redundant. The real numbers and $\imath$ already let you describe any of those new numbers. You might have a flash of imagination: what if there were another number that, squared, equalled minus one, and that wasn’t equal to $\imath$? Numbers that look like $a + b\imath + c\jmath$? Here, and later on, a and b and c are some real numbers. $b\imath$ means “multiply the real number b by whatever $\imath$ is”, and we trust that this makes sense. There’s a similar setup for c and $\jmath$. And if you just try that, with $a + b\imath + c\jmath$, you get some interesting new mathematics. Then you get stuck on what the product of these two different square roots should be.
If you think of that. If all you think of is addition and subtraction and maybe multiplication by a real number? $a + b\imath + c\jmath$ works fine. You only spot trouble if you happen to do multiplication. Granted, multiplication is to us not an exotic operation. Take that as a warning, though, of how trouble could develop. How do we know, say, that complex numbers are fine as long as you don’t try to take the log of the haversine of one of them, or some other obscurity? And that then they produce gibberish? Or worse, produce that most dread construct, a contradiction?
Here I am indebted to an essay that ten minutes ago I would have sworn was in one of the two books I still have out from the university library. I’m embarrassed to learn my error. It was about the philosophy of complex numbers and it gave me fresh perspectives. When the university library reopens for lending I will try to track back through my borrowing and find the original. I suspect, without confirming, that it may have been in Reuben Hersh’s What Is Mathematics, Really?.
The insight is that we can think of complex numbers in several ways. One fruitful way is to match complex numbers with points in a two-dimensional space. It’s common enough to pair, for example, the number $3 + 4\imath$ with the point at Cartesian coordinates $(3, 4)$. Mathematicians do this so often it can take a moment to remember that is just a convention. And there is a common matching between points in a Cartesian coordinate system and vectors. Chaining together matches like this can worry. Trust that we believe our matches are sound. Then we notice that adding two complex numbers does the same work as adding ordered coordinate pairs. If we trust that adding coordinate pairs makes sense, then we need to accept that adding complex numbers makes sense. Adding coordinate pairs is the same work as adding real numbers. It’s just a lot of them. So we’re lead to trust that if addition for real numbers works then addition for complex numbers does.
Multiplication looks like a mess. A different perspective helps us. A different way to look at where point are on the plane is to use polar coordinates. That is, the distance a point is from the origin, and the angle between the positive x-axis and the line segment connecting the origin to the point. In this format, multiplying two complex numbers is easy. Let the first complex number have polar coordinates $(r_1, \theta_1)$. Let the second have polar coordinates $(r_2, \theta_2)$. Their product, by the rules of complex numbers, is a point with polar coordinates $(r_1\cdot r_2, \theta_1 + \theta_2)$. These polar coordinates are real numbers again. If we trust addition and multiplication of real numbers, we can trust this for complex numbers.
If we’re confident in adding complex numbers, and confident in multiplying them, then … we’re in quite good shape. If we can add and multiply, we can do polynomials. And everything is polynomials.
We might feel suspicious yet. Going from complex numbers to points in space is calling on our geometric intuitions. That might be fooling ourselves. Can we find a different rationalization? The same result by several different lines of reasoning makes the result more believable. Is there a rationalization for complex numbers that never touches geometry?
We can. One approach is to use the mathematics of matrices. We can match the complex number $a + b\imath$ to the sum of the matrices
$a \left[\begin{tabular}{c c} 1 & 0 \\ 0 & 1 \end{tabular}\right] + b \left[\begin{tabular}{c c} 0 & 1 \\ -1 & 0 \end{tabular}\right]$
Adding matrices is compelling. It’s the same work as adding ordered pairs of numbers. Multiplying matrices is tedious, though it’s not so bad for matrices this small. And it’s all done with real-number multiplication and addition. If we trust that the real numbers work, we can trust complex numbers do. If we can show that our new structure can be understood as a configuration of the old, we convince ourselves the new structure is meaningful.
The process by which we learn to trust them as numbers, guides us to learning how to trust any new mathematical structure. So here is a new thing that complex numbers can teach us, years after we have learned how to divide them. Do not attempt to divide complex numbers. That’s too much work.
## My All 2020 Mathematics A to Z: Butterfly Effect
It’s a fun topic today, one suggested by Jacob Siehler, who I think is one of the people I met through Mathstodon. Mathstodon is a mathematics-themed instance of Mastodon, an open-source microblogging system. You can read its public messages here.
# Butterfly Effect.
I take the short walk from my home to the Red Cedar River, and I pour a cup of water in. What happens next? To the water, anyway. Me, I think about walking all the way back home with this empty cup.
Let me have some simplifying assumptions. Pretend the cup of water remains somehow identifiable. That it doesn’t evaporate or dissolve into the riverbed. That it isn’t scooped up by a city or factory, drunk by an animal, or absorbed into a plant’s roots. That it doesn’t meet any interesting ions that turn it into other chemicals. It just goes as the river flows dictate. The Red Cedar River merges into the Grand River. This then moves west, emptying into Lake Michigan. Water from that eventually passes the Straits of Mackinac into Lake Huron. Through the St Clair River it goes to Lake Saint Clair, the Detroit River, Lake Erie, the Niagara River, the Niagara Falls, and Lake Ontario. Then into the Saint Lawrence River, then the Gulf of Saint Lawrence, before joining finally the North Atlantic.
If I pour in a second cup of water, somewhere else on the Red Cedar River, it has a similar journey. The details are different, but the course does not change. Grand River to Lake Michigan to three more Great Lakes to the Saint Lawrence to the North Atlantic Ocean. If I wish to know when my water passes the Mackinac Bridge I have a difficult problem. If I just wish to know what its future is, the problem is easy.
So now you understand dynamical systems. There’s some details to learn before you get a job, yes. But this is a perspective that explains what people in the field do, and why that. Dynamical systems are, largely, physics problems. They are about collections of things that interact according to some known potential energy. They may interact with each other. They may interact with the environment. We expect that where these things are changes in time. These changes are determined by the potential energies; there’s nothing random in it. Start a system from the same point twice and it will do the exact same thing twice.
We can describe the system as a set of coordinates. For a normal physics system the coordinates are the positions and momentums of everything that can move. If the potential energy’s rule changes with time, we probably have to include the time and the energy of the system as more coordinates. This collection of coordinates, describing the system at any moment, is a point. The point is somewhere inside phase space, which is an abstract idea, yes. But the geometry we know from the space we walk around in tells us things about phase space, too.
Imagine tracking my cup of water through its journey in the Red Cedar River. It draws out a thread, running from somewhere near my house into the Grand River and Lake Michigan and on. This great thin thread that I finally lose interest in when it flows into the Atlantic Ocean.
Dynamical systems drops in phase space act much the same. As the system changes in time, the coordinates of its parts change, or we expect them to. So “the point representing the system” moves. Where it moves depends on the potentials around it, the same way my cup of water moves according to the flow around it. “The point representing the system” traces out a thread, called a trajectory. The whole history of the system is somewhere on that thread.
Phase space, like a map, has regions. For my cup of water there’s a region that represents “is in Lake Michigan”. There’s another that represents “is going over Niagara Falls”. There’s one that represents “is stuck in Sandusky Bay a while”. When we study dynamical systems we are often interested in what these regions are, and what the boundaries between them are. Then a glance at where the point representing a system is tells us what it is doing. If the system represents a satellite orbiting a planet, we can tell whether it’s in a stable orbit, about to crash into a moon, or about to escape to interplanetary space. If the system represents weather, we can say it’s calm or stormy. If the system is a rigid pendulum — a favorite system to study, because we can draw its phase space on the blackboard — we can say whether the pendulum rocks back and forth or spins wildly.
Come back to my second cup of water, the one with a different history. It has a different thread from the first. So, too, a dynamical system started from a different point traces out a different trajectory. To find a trajectory is, normally, to solve differential equations. This is often useful to do. But from the dynamical systems perspective we’re usually interested in other issues.
For example: when I pour my cup of water in, does it stay together? The cup of water started all quite close together. But the different drops of water inside the cup? They’ve all had their own slightly different trajectories. So if I went with a bucket, one second later, trying to scoop it all up, likely I’d succeed. A minute later? … Possibly. An hour later? A day later?
By then I can’t gather it back up, practically speaking, because the water’s gotten all spread out across the Grand River. Possibly Lake Michigan. If I knew the flow of the river perfectly and knew well enough where I dropped the water in? I could predict where each goes, and catch each molecule of water right before it falls over Niagara. This is tedious but, after all, if you start from different spots — as the first and the last drop of my cup do — you expect to, eventually, go different places. They all end up in the North Atlantic anyway.
Except … well, there is the Chicago Sanitary and Ship Canal. It connects the Chicago River to the Des Plaines River. The result is that some of Lake Michigan drains to the Ohio River, and from there the Mississippi River, and the Gulf of Mexico. There are also some canals in Ohio which connect Lake Erie to the Ohio River. I don’t know offhand of ones in Indiana or Wisconsin bringing Great Lakes water to the Mississippi. I assume there are, though.
Then, too, there is the Erie Canal, and the other canals of the New York State Canal System. These link the Niagara River and Lake Erie and Lake Ontario to the Hudson River. The Pennsylvania Canal System, too, links Lake Erie to the Delaware River. The Delaware and the Hudson may bring my water to the mid-Atlantic. I don’t know the canal systems of Ontario well enough to say whether some water goes to Hudson Bay; I’d grant that’s possible, though.
Think of my poor cups of water, now. I had been sure their fate was the North Atlantic. But if they happen to be in the right spot? They visit my old home off the Jersey Shore. Or they flow through Louisiana and warmer weather. What is their fate?
I will have butterflies in here soon.
Imagine two adjacent drops of water, one about to be pulled into the Chicago River and one with Lake Huron in its future. There is almost no difference in their current states. Their destinies are wildly separate, though. It’s surprising that so small a difference matters. Thinking through the surprise, it’s fair that this can happen, even for a deterministic system. It happens that there is a border, separating those bound for the Gulf and those for the North Atlantic, between these drops.
But how did those water drops get there? Where were they an hour before? … Somewhere else, yes. But still, on opposite sides of the border between “Gulf of Mexico water” and “North Atlantic water”. A day before, the drops were somewhere else yet, and the border was still between them. This separation goes back to, even, if the two drops came from my cup of water. Within the Red Cedar River is a border between a destiny of flowing past Quebec and of flowing past Saint Louis. And between flowing past Quebec and flowing past Syracuse. Between Syracuse and Philadelphia.
How far apart are those borders in the Red Cedar River? If you’ll go along with my assumptions, smaller than my cup of water. Not that I have the cup in a special location. The borders between all these fates are, probably, a complicated spaghetti-tangle. Anywhere along the river would be as fortunate. But what happens if the borders are separated by a space smaller than a drop? Well, a “drop” is a vague size. What if the borders are separated by a width smaller than a water molecule? There’s surely no subtleties in defining the “size” of a molecule.
That these borders are so close does not make the system random. It is still deterministic. Put a drop of water on this side of the border and it will go to this fate. But how do we know which side of the line the drop is on? If I toss this new cup out to the left rather than the right, does that matter? If my pinky twitches during the toss? If I am breathing in rather than out? What if a change too small to measure puts the drop on the other side?
And here we have the butterfly effect. It is about how a difference too small to observe has an effect too large to ignore. It is not about a system being random. It is about how we cannot know the system well enough for its predictability to tell us anything.
The term comes from the modern study of chaotic systems. One of the first topics in which the chaos was noticed, numerically, was weather simulations. The difference between a number’s representation in the computer’s memory and its rounded-off printout was noticeable. Edward Lorenz posed it aptly in 1963, saying that “one flap of a sea gull’s wings would be enough to alter the course of the weather forever”. Over the next few years this changed to a butterfly. In 1972 Philip Merrilees titled a talk Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas? My impression is that these days the butterflies may be anywhere, and they alter hurricanes.
That we settle on butterflies as agents of chaos we can likely credit to their image. They seem to be innocent things so slight they barely exist. Hummingbirds probably move with too much obvious determination to fit the role. The Big Bad Wolf huffing and puffing would realistically be almost as nothing as a butterfly. But he has the power of myth to make him seem mightier than the storms. There are other happy accidents supporting butterflies, though. Edward Lorenz’s 1960s weather model makes trajectories that, plotted, create two great ellipsoids. The figures look like butterflies, all different but part of the same family. And there is Ray Bradbury’s classic short story, A Sound Of Thunder. If you don’t remember 7th grade English class, in the story time-travelling idiots change history, putting a fascist with terrible spelling in charge of a dystopian world, by stepping on a butterfly.
The butterfly then is metonymy for all the things too small to notice. Butterflies, sea gulls, turning the ceiling fan on in the wrong direction, prying open the living room window so there’s now a cross-breeze. They can matter, we learn.
## I’m looking for D, E, and F topics for the All 2020 A-to-Z
It does seem like I only just began the All 2020 Mathematics A-to-Z and already I need some fresh topics. This is because I really want to get to having articles done a week before publication, and while I’m not there yet, I can imagine getting there.
I’d like your nominations for subjects. Please comment here with some topic, name starting ‘D’, ‘E’, or ‘F’, that I might try writing about. This can include people, too. And also please let me know how to credit you for a topic suggestion, including any projects you’re working on that could do with attention.
These are the topics I’ve covered in past essays. I’m open to revisiting ones that I think I have new ideas about. Thanks all for your ideas.
Topics I’ve already covered, starting with the letter D, are:
Topics I’ve already covered, starting with the letter E, are:
Topics I’ve already covered, starting with the letter F, are:
Thank you for reading, and for your thoughts.
## My All 2020 Mathematics A to Z: Michael Atiyah
To start this year’s great glossary project Mr Wu, author of the MathTuition88.com blog, had a great suggestion: The Atiyah-Singer Index Theorem. It’s an important and spectacular piece of work. I’ll explain why I’m not doing that in a few sentences.
Mr Wu pointed out that a biography of Michael Atiyah, one of the authors of this theorem, might be worth doing. GoldenOj endorsed the biography idea, and the more I thought it over the more I liked it. I’m not able to do a true biography, something that goes to primary sources and finds a convincing story of a life. But I can sketch out a bit, exploring his work and why it’s of note.
# Michael Atiyah.
Theodore Frankel’s The Geometry of Physics: An Introduction is a wonderful book. It’s 686 pages, including the index. It all explores how our modern understanding of physics is our modern understanding of geometry. On page 465 it offers this:
The Atiyah-Singer index theorem must be considered a high point of geometrical analysis of the twentieth century, but is far too complicated to be considered in this book.
I know when I’m licked. Let me attempt to look at one of the people behind this theorem instead.
The Riemann Hypothesis is about where to find the roots of a particular infinite series. It’s been out there, waiting for a solution, for a century and a half. There are many interesting results which we would know to be true if the Riemann Hypothesis is true. In 2018, Michael Atiyah declared that he had a proof. And, more, an amazing proof, a short proof. Albeit one that depended on a great deal of background work and careful definitions. The mathematical community was skeptical. It still is. But it did not dismiss outright the idea that he had a solution. It was plausible that Atiyah might solve one of the greatest problems of mathematics in something that fits on a few PowerPoint slides.
So think of a person who commands such respect.
His proof of the Riemann Hypothesis, as best I understand, is not generally accepted. For example, it includes the fine structure constant. This comes from physics. It describes how strongly electrons and photons interact. The most compelling (to us) consequence of the Riemann Hypothesis is in how prime numbers are distributed among the integers. It’s hard to think how photons and prime numbers could relate. But, then, if humans had done all of mathematics without noticing geometry, we would know there is something interesting about π. Differential equations, if nothing else, would turn up this number. We happened to discover π in the real world first too. If it were not familiar for so long, would we think there should be any commonality between differential equations and circles?
I do not mean to say Atiyah is right and his critics wrong. I’m no judge of the matter at all. What is interesting is that one could imagine a link between a pure number-theory matter like the Riemann hypothesis and a physical matter like the fine structure constant. It’s not surprising that mathematicians should be interested in physics, or vice-versa. Atiyah’s work was particularly important. Much of his work, from the late 70s through the 80s, was in gauge theory. This subject lies under much of modern quantum mechanics. It’s born of the recognition of symmetries, group operations that you can do on a field, such as the electromagnetic field.
In a sequence of papers Atiyah, with other authors, sorted out particular cases of how magnetic monopoles and instantons behave. Magnetic monopoles may sound familiar, even though no one has ever seen one. These are magnetic points, an isolated north or a south pole without its opposite partner. We can understand well how they would act without worrying about whether they exist. Instantons are more esoteric; I don’t remember encountering the term before starting my reading for this essay. I believe I did, encountering the technique as a way to describe the transitions between one quantum state and another. Perhaps the name failed to stick. I can see where there are few examples you could give an undergraduate physics major. And it turns out that monopoles appear as solutions to some problems involving instantons.
This was, for Atiyah, later work. It arose, in part, from bringing the tools of index theory to nonlinear partial differential equations. This index theory is the thing that got us the Atiyah-Singer Index Theorem too complicated to explain in 686 pages. Index theory, here, studies questions like “what can we know about a differential equation without solving it?” Solving a differential equation would tell us almost everything we’d like to know, yes. But it’s also quite hard. Index theory can tell us useful things like: is there a solution? Is there more than one? How many? And it does this through topological invariants. A topological invariant is a trait like, for example, the number of holes that go through a solid object. These things are indifferent to operations like moving the object, or rotating it, or reflecting it. In the language of group theory, they are invariant under a symmetry.
It’s startling to think a question like “is there a solution to this differential equation” has connections to what we know about shapes. This shows some of the power of recasting problems as geometry questions. From the late 50s through the mid-70s, Atiyah was a key person working in a topic that is about shapes. We know it as K-theory. The “K” from the German Klasse, here. It’s about groups, in the abstract-algebra sense; the things in the groups are themselves classes of isomorphisms. Michael Atiyah and Friedrich Hirzebruch defined this sort of group for a topological space in 1959. And this gave definition to topological K-theory. This is again abstract stuff. Frankel’s book doesn’t even mention it. It explores what we can know about shapes from the tangents to the shapes.
And it leads into cobordism, also called bordism. This is about what you can know about shapes which could be represented as cross-sections of a higher-dimension shape. The iconic, and delightfully named, shape here is the pair of pants. In three dimensions this shape is a simple cartoon of what it’s named. On the one end, it’s a circle. On the other end, it’s two circles. In between, it’s a continuous surface. Imagine the cross-sections, how on separate layers the two circles are closer together. How their shapes distort from a real circle. In one cross-section they come together. They appear as two circles joined at a point. In another, they’re a two-looped figure. In another, a smoother circle. Knowing that Atiyah came from these questions may make his future work seem more motivated.
But how does one come to think of the mathematics of imaginary pants? Many ways. Atiyah’s path came from his first research specialty, which was algebraic geometry. This was his work through much of the 1950s. Algebraic geometry is about the kinds of geometric problems you get from studying algebra problems. Algebra here means the abstract stuff, although it does touch on the algebra from high school. You might, for example, do work on the roots of a polynomial, or a comfortable enough equation like $x^2 + y^2 = 1$. Atiyah had started — as an undergraduate — working on projective geometries. This is what one curve looks like projected onto a different surface. This moved into elliptic curves and into particular kinds of transformations on surfaces. And algebraic geometry has proved important in number theory. You might remember that the Wiles-Taylor proof of Fermat’s Last Theorem depended on elliptic curves. Some work on the Riemann hypothesis is built on algebraic topology.
(I would like to trace things farther back. But the public record of Atiyah’s work doesn’t offer hints. I can find amusing notes like his father asserting he knew he’d be a mathematician. He was quite good at changing local currency into foreign currency, making a profit on the deal.)
It’s possible to imagine this clear line in Atiyah’s career, and why his last works might have been on the Riemann hypothesis. That’s too pat an assertion. The more interesting thing is that Atiyah had several recognizable phases and did iconic work in each of them. There is a cliche that mathematicians do their best work before they are 40 years old. And, it happens, Atiyah did earn a Fields Medal, given to mathematicians for the work done before they are 40 years old. But I believe this cliche represents a misreading of biographies. I suspect that first-rate work is done when a well-prepared mind looks fresh at a new problem. A mathematician is likely to have these traits line up early in the career. Grad school demands the deep focus on a particular problem. Getting out of grad school lets one bring this deep knowledge to fresh questions.
It is easy, in a career, to keep studying problems one has already had great success in, for good reason and with good results. It tends not to keep producing revolutionary results. Atiyah was able — by chance or design I can’t tell — to several times venture into a new field. The new field was one that his earlier work prepared him for, yes. But it posed new questions about novel topics. And this creative, well-trained mind focusing on new questions produced great work. And this is one way to be credible when one announces a proof of the Riemann hypothesis.
Here is something I could not find a clear way to fit into this essay. Atiyah recorded some comments about his life for the Web of Stories site. These are biographical and do not get into his mathematics at all. Much of it is about his life as child of British and Lebanese parents and how that affected his schooling. One that stood out to me was about his peers at Manchester Grammar School, several of whom he rated as better students than he was. Being a good student is not tightly related to being a successful academic. Particularly as so much of a career depends on chance, on opportunities happening to be open when one is ready to take them. It would be remarkable if there wre three people of greater talent than Atiyah who happened to be in the same school at the same time. It’s not unthinkable, though, and we may wonder what we can do to give people the chance to do what they are good in. (I admit this assumes that one finds doing what one is good in particularly satisfying or fulfilling.) In looking at any remarkable talent it’s fair to ask how much of their exceptional nature is that they had a chance to excel.
## I turn out also to need B and C topics for the All 2020 A-to-Z
In looking over what I want my deadlines to be, and what the calendar looks like, I learn I should also be asking for topics that start with the letters B and C … yesterday, really. Today is probably close enough. And I am still interested in letter-A topics.
It all remains straightforward. Please comment with some mathematical topic, with a name that starts ‘B’ or ‘C’, that you’d like to see me try to write about. This can include people, too. I’m not likely to add to the depth of people’s biographical understandings. But I can share what I know, or have learned. Please also let me know how to credit you for a topic, and any project you’re working on that I can give some attention to.
I’m not afraid of re-visiting a topic, if I think I can improve it. But topics I have already covered, starting with the letter B, are:
And here are topics I’ve written about for the letter C before:
My actual deadline will remain “about three minutes before publication”.
|
2020-09-22 02:27:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 138, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5135002732276917, "perplexity": 930.4780208654798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400202686.56/warc/CC-MAIN-20200922000730-20200922030730-00361.warc.gz"}
|
https://datascience.stackexchange.com/questions/116520/cnn-for-time-series-input-0-of-layer-conv2d-5-is-incompatible-with-the-layer
|
# CNN for time series: Input 0 of layer "conv2d_5" is incompatible with the layer: expected min_ndim=4, found ndim=2. Full shape received: (None, 2)
I am trying to use CNN on multivariate time series instead the most common usage on images. The number of features are between 90 and 120, depending on which I need to consider and experiment. This is my code
scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)
X_train_s = X_train_s.reshape((X_train_s.shape[0], X_train_s.shape[1],1))
X_test_s = X_test_s.reshape((X_test_s.shape[0], X_test_s.shape[1],1))
batch_size = 1024
length = 120
n_features = X_train_s.shape[1]
generator = TimeseriesGenerator(X_train_s, pd.DataFrame.to_numpy(Y_train[['TARGET_KEEP_LONG',
'TARGET_KEEP_SHORT']]),
length=length,
batch_size=batch_size)
validation_generator = TimeseriesGenerator(X_test_s, pd.DataFrame.to_numpy(Y_test[['TARGET_KEEP_LONG', 'TARGET_KEEP_SHORT']]), length=length, batch_size=batch_size)
early_stop = EarlyStopping(monitor = 'val_accuracy', mode = 'max', verbose = 1, patience = 20)
CNN_model = Sequential()
Conv2D(
filters=64,
kernel_size=(1, 5),
strides=1,
activation="relu",
input_shape=(length, n_features, 1),
use_bias=True,
)
)
Conv2D(
filters=64,
kernel_size=(1, 5),
strides=1,
activation="relu",
use_bias=True,
)
)
CNN_model.summary()
CNN_model.compile(
)
CNN_model.fit_generator(
generator, steps_per_epoch=1,
validation_data=validation_generator,
epochs=200,
)
In other words, I take the features as one dimension and a certain number of rows as the other dimension. But I get this error
ValueError: Input 0 of layer "conv2d_5" is incompatible with the layer: expected min_ndim=4, found ndim=2. Full shape received: (None, 2)
that is referred to the first CNN layer as stated here
Cell In [26], line 50
25 CNN_model = Sequential()
28 # Conv1D(
29 # filters=128,
(...)
47 # )
48 # )
51 Conv2D(
52 filters=64,
53 kernel_size=(1, 5),
54 strides=1,
55 activation="relu",
57 input_shape=(batch_size, length, n_features, 1),
58 use_bias=True,
59 )
60 )
63 Conv2D(
64 filters=64,
• How model is initialized? Dec 6, 2022 at 15:12
• Based on the final code block in your question the expected input shape to the model is: input_shape=(batch_size, length, n_features, 1). When you pass data into the model, it should have 4 dimensions (batch dimension, length dimension, feature dimension, and padded dimension of 1). Your generator seems to produce data with 2 dimensions. To know how to fix this, we would have to see the generator code, but my guess is your generator only produces a single sample at a time (missing batch dimension) and does not add the padded dimension. Dec 7, 2022 at 14:45
After days of attempts and looking to post that gave some indirect insights, I found the trouble and I can share the solution for
• using 2DCNN models with time series and not images
• avoiding memory troubles for preparing the dataset using TimeseriesGenerator
As expected the trouble was in preparing the dataset with the proper shape. The main bug in my code was this
X_train_s = X_train_s.reshape((X_train_s.shape[0], X_train_s.shape[1],1))
X_test_s = X_test_s.reshape((X_test_s.shape[0], X_test_s.shape[1],1))
that should be replaced with this (I also changed the names of the series, but just to keep the original one untouched)
X_train_s_CNN = X_train_s.reshape(*X_train_s.shape, 1)
X_test_s_CNN = X_test_s.reshape(*X_test_s.shape, 1)
Here is the full working code
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Dense, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.layers import BatchNormalization
scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)
X_train_s_CNN = X_train_s.reshape(*X_train_s.shape, 1)
X_test_s_CNN = X_test_s.reshape(*X_test_s.shape, 1)
batch_size = 64
length = 300
n_features = X_train_s.shape[1]
generator = TimeseriesGenerator(X_train_s_CNN, pd.DataFrame.to_numpy(Y_train[['TARGET_KEEP_LONG',
'TARGET_KEEP_SHORT']]),
length=length,
batch_size=batch_size)
validation_generator = TimeseriesGenerator(X_test_s, pd.DataFrame.to_numpy(Y_test[['TARGET_KEEP_LONG',
'TARGET_KEEP_SHORT']]),
length=length,
batch_size=batch_size)
early_stop = EarlyStopping(monitor = 'val_accuracy', mode = 'max', verbose = 1, patience = 10)
CNN_model = Sequential()
Conv2D(
filters=64,
kernel_size=(2,2),
strides=1,
activation="relu",
input_shape=(length, n_features, 1),
use_bias=True,
)
)
Conv2D(
filters=128,
kernel_size=(2,2),
strides=1,
activation="relu",
)
)
# Conv2D(
# filters=256,
# kernel_size=(2,2),
# strides=1,
# activation="relu",
# )
# )
# Conv2D(
# filters=256,
# kernel_size=(2,2),
# strides=1,
# activation="relu",
# )
# )
CNN_model.compile(
|
2023-01-28 21:09:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32035690546035767, "perplexity": 11514.958754521735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499654.54/warc/CC-MAIN-20230128184907-20230128214907-00875.warc.gz"}
|
https://www.physicsforums.com/threads/how-to-show-bernoullis-equation-applies-on-same-streamline.907271/
|
How to show Bernoulli's equation applies on same streamline?
1. Mar 11, 2017
joshmccraney
When deriving Bernoulli's equation from Navier Stokes, how do we know it is only valid along a streamline? At the very end of my derivation, assuming Newtonian, incompressible, inviscid, irrotational flow I have $\nabla(\partial_t \phi + |\vec{u}|^2/2+p/\rho + g z) = \vec{0} \implies \partial_t \phi + |\vec{u}|^2/2+p/\rho + g z = const.$ where I think the constant implies something about which streamline you're on. Also, $\vec{u}=\nabla \phi$.
Thanks!
2. Mar 11, 2017
It's probably easiest if you start from the Euler equations cast in terms of streamline variables. If you do that, then you show that integrating along a streamline results in Bernoulli's equation.
Of course, it can also be applied globally if the flow is irrotational or across multiple streamlines if all of them originate with the same total pressure.
3. Mar 11, 2017
joshmccraney
That's a good way to do it. But my problem is, deriving it through NS as shown above, when I arrive at $\partial_t \phi + |\vec{u}|^2/2+p/\rho + g z = const.$ how would I know, or rather where is the assumption, that only validates this equation along a streamline?
I'm not really looking at how to derive Bernoulli's along streamline, I'm wondering why that equation is only valid along one. Thanks for your reply!
|
2018-02-24 17:01:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.859542727470398, "perplexity": 580.4174455579279}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815843.84/warc/CC-MAIN-20180224152306-20180224172306-00721.warc.gz"}
|
https://www.nature.com/articles/s41566-017-0007-1?error=cookies_not_supported&code=6033c668-5f5a-475f-9aae-041a45310c15
|
# Single-qubit quantum memory exceeding ten-minute coherence time
## Abstract
A long-time quantum memory capable of storing and measuring quantum information at the single-qubit level is an essential ingredient for practical quantum computation and communication1,2. Currently, the coherence time of a single qubit is limited to less than 1 min, as demonstrated in trapped ion systems3,4,5, although much longer coherence times have been reported in ensembles of trapped ions6,7 and nuclear spins of ionized donors8,9. Here, we report the observation of a coherence time of over 10 min for a single qubit in a 171Yb+ ion sympathetically cooled by a 138Ba+ ion in the same Paul trap, which eliminates the problem of qubit-detection inefficiency from heating of the qubit ion10,11. We also apply a few thousand dynamical decoupling pulses to suppress ambient noise from magnetic-field fluctuations and phase noise from the local oscillator8,9,12,13,14,15,16. The long-time quantum memory of the single trapped ion qubit would be the essential component of scalable quantum computers1,17,18, quantum networks2,19,20 and quantum money21,22.
## Access options
from\$8.99
All prices are NET prices.
## Change history
• ### 12 January 2018
In the version of this Letter originally published, in Fig. 2c legend, the entry ‘LO phase noise’ should not have been included. This has now been corrected in the online versions.
## References
1. 1.
Ladd, T. D. et al. Quantum computers. Nature 464, 45–53 (2010).
2. 2.
Duan, L.-M. & Monroe, C. Quantum networks with trapped ions. Rev. Mod. Phys. 82, 1209–1224 (2012).
3. 3.
Langer, C. et al. Long-lived qubit memory using atomic ions. Phys. Rev. Lett. 95, 060502 (2005).
4. 4.
Häffner, H. et al. Robust entanglement. Appl. Phys. B 81, 151–153 (2005).
5. 5.
Harty, T. et al. High-fidelity preparation, gates, memory, and readout of a trapped-ion quantum bit. Phys. Rev. Lett. 113, 220501 (2014).
6. 6.
Bollinger, J., Heizen, D., Itano, W., Gilbert, S. & Wineland, D. A 303-MHz frequency standard based on trapped Be+ ions. IEEE Trans. Instrum. Meas. 40, 126–128 (1991).
7. 7.
Fisk, P. et al. Very high Q microwave spectroscopy on trapped 171Yb+ ions: application as a frequency standard. IEEE Trans. Instrum. Meas. 44, 113–116 (1995).
8. 8.
Saeedi, K. et al. Room-temperature quantum bit storage exceeding 39 minutes using ionized donors in silicon-28. Science 342, 830–833 (2013).
9. 9.
Zhong, M. et al. Optically addressable nuclear spins in a solid with a six-hour coherence time. Nature 517, 177–180 (2015).
10. 10.
Epstein, R. J. et al. Simplified motional heating rate measurements of trapped ions. Phys. Rev. A 76, 033411 (2007).
11. 11.
Wesenberg, J. et al. Fluorescence during Doppler cooling of a single trapped atom. Phys. Rev. A 76, 053416 (2007).
12. 12.
Khodjasteh, K. et al. Designing a practical high-fidelity long-time quantum memory. Nat. Commun. 4, 2045 (2013).
13. 13.
Biercuk, M. J. et al. Optimized dynamical decoupling in a model quantum memory. Nature 458, 996–1000 (2009).
14. 14.
Kotler, S., Akerman, N., Glickman, Y. & Ozeri, R. Nonlinear single-spin spectrum analyzer. Phys. Rev. Lett. 110, 110503 (2013).
15. 15.
Souza, A. M., Álvarez, G. A. & Suter, D. Robust dynamical decoupling for quantum computing and quantum memory. Phys. Rev. Lett. 106, 240501 (2011).
16. 16.
Haeberlen, U. High Resolution NMR in Solids Selective Averaging (Elsevier, 1976).
17. 17.
Kielpinski, D., Monroe, C. & Wineland, D. J. Architecture for a large-scale ion-trap quantum computer. Nature 417, 709–711 (2002).
18. 18.
Lekitsch, B. et al. Blueprint for a microwave trapped ion quantum computer. Sci. Adv. 3, e1601540 (2017).
19. 19.
Monroe, C. & Kim, J. Scaling the ion trap quantum processor. Science 339, 1164–1169 (2013).
20. 20.
Nickerson, N. H., Fitzsimons, J. F. & Benjamin, S. C. Freely scalable quantum technologies using cells of 5-to-50 qubits with very lossy and noisy photonic links. Phys. Rev. X 4, 041041 (2014).
21. 21.
Wiesner, S. Conjugate coding. ACM SIGACT News 15, 78–88 (1983).
22. 22.
Pastawski, F., Yao, N. Y., Jiang, L., Lukin, M. D. & Cirac, J. I. Unforgeable noise-tolerant quantum tokens. Proc. Natl Acad. Sci. USA 109, 16079–16082 (2012).
23. 23.
Hite, D. A. et al. 100-fold reduction of electric-field noise in an ion trap cleaned with in situ argon-ion-beam bombardment. Phys. Rev. Lett. 109, 103001 (2012).
24. 24.
Deslauriers, L. et al. Scaling and suppression of anomalous heating in ion traps. Phys. Rev. Lett. 97, 103007 (2006).
25. 25.
Nielsen, M. A. & Chuang, I. L. Quantum Computation and Quantum Information (Cambridge Univ. Press, 2010).
26. 26.
Home, J. P. et al. Complete methods set for scalable ion trap quantum information processing. Science 325, 1227–1230 (2009).
27. 27.
Hanneke, D. et al. Realization of a programmable two-qubit quantum processor. Nat. Phys. 6, 13–16 (2010).
28. 28.
Duan, L. M., Blinov, B. B., Moehring, D. L. & Monroe, C. Scalable trapped ion quantum computation with a probabilistic ion-photon mapping. Quantum Inf. Comput. 4, 165–173 (2004).
29. 29.
Blinov, B., Moehring, D., Duan, L.-M. & Monroe, C. Observation of entanglement between a single trapped atom and a single photon. Nature 428, 153–157 (2004).
30. 30.
Moehring, D. L. et al. Entanglement of single atom quantum bits at a distance. Nature 449, 68–71 (2007).
31. 31.
Kurz, C. et al. Experimental protocol for high-fidelity heralded photon-to-atom quantum state transfer. Nat. Commun. 5, 5527 (2014).
32. 32.
Ball, H., Oliver, W. D. & Biercuk, M. J. The role of master clock stability in quantum information processing. Nat. Quantum Inf. 2, 16033 (2016).
33. 33.
Bylander, J. et al. Noise spectroscopy through dynamical decoupling with a superconducting flux qubit. Nat. Phys. 7, 565–570 (2011).
34. 34.
Knill, E. et al. Randomized benchmarking of quantum gates. Phys. Rev. A 77, 012307 (2008).
35. 35.
Kielpinski, D., Kafri, D., Woolley, M. J., Milburn, G. J. & Taylor, J. M. Quantum interface between an electrical circuit and a single atom. Phys. Rev. Lett. 108, 130504 (2012).
36. 36.
Daniilidis, N., Gorman, D. J., Tian, L. & Hffner, H. Quantum information processing with trapped electrons and superconducting electronics. New J. Phys. 251, 073017 (2013).
37. 37.
Ozeri, R. et al. Hyperfine coherence in the presence of spontaneous photon scattering. Phys. Rev. Lett. 95, 030403 (2005).
38. 38.
Uys, H. et al. Decoherence due to elastic Rayleigh scattering. Phys. Rev. Lett. 105, 200401 (2010).
39. 39.
Campbell, W. et al. Ultrafast gates for single atomic qubits. Phys. Rev. Lett. 105, 090502 (2010).
40. 40.
Fisk, P. T., Sellars, M. J., Lawn, M. A. & Coles, G. Accurate measurement of the 12.6 GHz clock transition in trapped 171Yb+ ions. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 44, 344–354 (1997).
41. 41.
Uhrig, G. S. Exact results on dynamical decoupling by π pulses in quantum information processes. New J. Phys. 10, 083024 (2008).
## Acknowledgements
This work was supported by the National Key Research and Development Program of China under grant 2016YFA0301900 (no. 2016YFA0301901) and the National Natural Science Foundation of China grants 11374178, 11504197 and 11574002.
## Author information
Authors
### Contributions
Y.W. and D.Y. developed the experimental system. Y.W., with the participation of M.U. and D.Y., collected and analysed the data. J.Z. and S.A. provided technical support. M.L., J.-N.Z., L.-M.D. and D.Y. provided theoretical support. K.K. supervised the project. All authors contributed to writing the manuscript.
### Corresponding authors
Correspondence to Dahyun Yum or Kihwan Kim.
## Ethics declarations
### Competing interests
The authors declare no competing financial interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Wang, Y., Um, M., Zhang, J. et al. Single-qubit quantum memory exceeding ten-minute coherence time. Nature Photon 11, 646–650 (2017). https://doi.org/10.1038/s41566-017-0007-1
• Accepted:
• Published:
• Issue Date:
• ### Submicrosecond entangling gate between trapped ions via Rydberg interaction
• Chi Zhang
• , Fabian Pokorny
• , Weibin Li
• , Gerard Higgins
• , Andreas Pöschl
• , Igor Lesanovsky
• & Markus Hennrich
Nature (2020)
• ### Logical performance of 9 qubit compass codes in ion traps with crosstalk errors
• Dripto M Debroy
• , Muyuan Li
• , Shilin Huang
• & Kenneth R Brown
Quantum Science and Technology (2020)
• ### Engineering of microfabricated ion traps and integration of advanced on-chip features
• Zak David Romaszko
• , Seokjun Hong
• , Martin Siegele
• , Reuben Kahan Puddy
• , Foni Raphaël Lebrun-Gallagher
• , Sebastian Weidt
• & Winfried Karl Hensinger
Nature Reviews Physics (2020)
• ### Photon-mediated entanglement scheme between a ZnO semiconductor defect and a trapped Yb ion
• Jennifer F. Lilieholm
• , Vasileios Niaouris
• , Alexander Kato
• , Kai-Mei C. Fu
• & Boris B. Blinov
Applied Physics Letters (2020)
• ### High-Rate, High-Fidelity Entanglement of Qubits Across an Elementary Quantum Network
• L. J. Stephenson
• , B. C. Nichol
• , S. An
• , P. Drmota
• , T. G. Ballance
• , K. Thirumalai
• , J. F. Goodwin
• , D. M. Lucas
• & C. J. Ballance
Physical Review Letters (2020)
|
2020-10-19 16:10:09
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8533430099487305, "perplexity": 13724.985359158556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107863364.0/warc/CC-MAIN-20201019145901-20201019175901-00706.warc.gz"}
|
http://www.math.wpi.edu/Course_Materials/MA1022A14/num_int/node1.html
|
Subsections
# Numerical Integration
## Purpose
The purpose of this lab is to give you some experience with using the trapezoidal rule and Simpson's rule to approximate integrals.
## Background
The trapezoidal rule and Simpson's rule are used for approximating area under a curve or the definite integral
Both methods start by dividing the interval into subintervals of equal length by choosing a partition
satisfying
where
is the length of each subinterval. For the trapezoidal rule, the integral over each subinterval is approximated by the area of a trapezoid. This gives the following approximation to the integral
For Simpson's rule, the function is approximated by a parabola over pairs of subintervals. When the areas under the parabolas are computed and summed up, the result is the following approximation.
## Maple commands
The commands for the trapezoidal rule and Simpson's rule are in the student package.
>with(student);
The following example will use the function
>f:=x->x^2*exp(x);
This computes the integral of the function from 0 to 2.
>int(f(x),x=0..2);
Using the evalf command provides a decimal approximation.
>evalf(int(f(x),x=0..2));
The command for using the trapezoidal rule is trapezoid. The syntax is very similar to that of the int command. the last argument specifies the number of subintervals to use. In the command below, the number of subintervals is set to 10, but you should experiment with increasing or decreasing this number. Note that Maple writes out the sum and doesn't evaluate it to a number.
>trapezoid(f(x),x=0..2,10);
Putting an evalf command on the outside computes the trapezoidal approximation.
>evalf(trapezoid(f(x),x=0..2,10));
The command for Simpson's rule is very similar.
>simpson(f(x),x=0..2,10);
>evalf(simpson(f(x),x=0..2,10));
## Error term
There is an error term associated with the trapezoidal rule that can be u sed to estimate the error. More precisely, we have
where
for some value between and that maximizes the second derivative in absolute value. Solving the error formula for guarantees a number of subintervals such that the error term is less than some desired tolerance . This gives:
The way to think about this result is that it gives a value for which guarantees that the error of the trapezoidal rule is less than the tolerance . It is generally a very conservative result.
Similarly, the number of subintervals for the simpson rule approximation to guarantee an error smaller than is
## Exercises
1. For the function , use the trapezoidal rule formula to approximate the area under over the interval ising . Verify your answer using the trapezoid command in Maple. Repeat this exercise with simpson's rule and Maple's simpson command.
2. For the function over the interval , complete the following steps.
(i)
By using Maple's int and possibly evalf commands, find a good approximation to the integral of the function over the given interval.
(ii)
Find the minimum number of subintervals necessary to approximate the value of the definite integral with error no greater than . Then do the same with Simpson's rule. Which is more accurate and why?
(iii)
Use the error estimate for the trapezoidal rule to find a value for , the number of subintervals, that ensures is within of the answer in part (i). How close is this answer to the number of subintervals you found in part (ii)?
(Hint: The part of the error term that is most difficult is . To find the maximum value of the second derivative of over the interval , an approximation based on the plot should suffice. To plot the second derivative of over the given interval, type the following command in Maple:
plot(abs(diff(f(x),x,x)),x=0..5);
Then, using your mouse, click on the part of the plot that appears to be the maximum. When you click on a plot, coordinates will appear in the upper left hand corner of your Maple window and the coordinate is used for the max.)
(a)
(iv)] Repeat part (iii) using Simpson's rule. (Hint: To maximize the fourth derivative of over the given interval, type the following command in Maple:
plot(abs(diff(f(x),x,x,x,x)),x=0..5);
|
2018-07-18 16:29:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.897229015827179, "perplexity": 344.6303456543063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590295.61/warc/CC-MAIN-20180718154631-20180718174631-00339.warc.gz"}
|
https://en.wikipedia.org/wiki/User:StevenD99
|
# User:StevenD99
RETIRED
This user is no longer active on Wikipedia.
I have officially lost almost all interest in editing Wikipedia and I thought I should retire after over 5 years of editing on this site. This place is much less interesting than it was when I first joined under my old account and I'm finally trying to move forward and away from this site which I first became interested in when I was a little 9 year old in 2009. So since I'm about done with editing Wikipedia and decided it's not for me anymore, wish me luck on my future endeavors. :) StevenD99 21:51, 26 November 2014 (UTC)
My archived userpage
Home Talk To Me Contribs E-Mail Me Sign My Guestbook Awards Edit count
Wikipedia, the free encyclopedia.
# Welcome to my user page!
Hello! · Hola! · Salut! · Cześć! · Hallo! · Privet! · Ciao! · Shalom! · Nî hâo! · Kon'nichiwa! · Namasté · Marhában! · Namaskar! · Olá! · Merhaba! · Γειά σας! · Salutare! · Sawatdee! · Habari! · Tungjatjeta!
Name — Wikipedian — Steven August 20, 1999 (age 16) American UTC−08:00 01:14 Caucasian Human 5'5 brown School student StevenD99
My Wikistress.
My Wikimood.
Introduction
Hey guys, and welcome to my userpage. For several years now, I have found many useful facts from Wikipedia. My username is StevenD99, and I'm a teenager who lives in Huntington Beach, California. I have been editing Wikipedia for 5 years, but I was only editing under this account since August 2013. Contribs from before that time are located here, under my old account of "Steve2011". Most of the time, I fight vandalism, do the new page patrol, do copyediting and grammar corrections on articles, use AutoEd to clean pages (especially the Today's Featured Article if it does need cleaning), and help with WikiProjects Tropical Cyclones and Amusement Parks. I use Twinkle to revert vandalism and tag pages for speedy deletion, but since I have the rollback right, I use Huggle to revert vandalism and plan on using STiki sometime soon. I also hope to become an admin sometime in the future. If you want to see more info about me, check out my userboxes. And also, I am a WikiCat, a WikiGnome, a WikiGoon, and a WikiGiant.
Things I do on here most of the time
• Reverting vandalism
• Voting in RfAs and AfDs
• Doing the new-page patrol
• Copyediting and correcting grammar
• AutoEd
• Helping out with WikiProjects Tropical Cyclones and Amusement Parks
• And finally, signing people's guestbooks and chatting with my wiki-friends (although I try not to do this very often or I will break this policy.)
What I hate seeing on Wikipedia
• People vandalizing articles and attacking others
• When people vandalize my userpage or talk page or attack me
• Seeing others semi-retire or retire
• When other people beat me to reverting vandalism
• When people create sockpuppets to vandalize Wikipedia
How did I get my username? + My interests
It's a combination of "Steven" (my first name), "D" (The first initial of my last name), and "99" (the last 2 digits of the year I was born in).
Here is my current signature: StevenD99 02:50, 2 June 2014 (UTC)
Here's some of my real-life interests (I know this doesn't have anything to do with Wikipedia but I'm just gonna include them anyway):
• Meteorology
• Astronomy
• Tropical cyclones
• Rollercoasters/Theme parks
• Extreme sports
• Highways
• Science
• Skyscrapers
• Music (More specifically Rock, Pop, Dance/Club, and Hip Hop)
• Computers
• Football
And much more!
My history on Wikipedia-pre account
I first started to read Wikipedia a long time ago (around 2006/2007). Reading articles back then helped me learn new things. In late 2008, I began editing Wikipedia under the IP of 75.83.148.71. I might have edited before that, but I can't find these edits. It later changed to 75.83.139.234 and 66.74.234.177.
First couple months
On July 18, 2009, I made my first account, December21st2012Freak. I chose that username because back then I was so scared that the world would 'end' on December 21, 2012, and I used to think it would actually happen. Why did I believe that? Well, first and most important of all was how old I was back then, I was only 9 years old when I created that account! It was only a month before my 10th birthday, more specifically. Even though I was that young, I still tried to constructively edit and revert vandalism. If you want to see my earliest edits under my old account, check these out. Once August 2009 came, I became more experienced, and I was using Twinkle to revert vandalism instead of the "undo" button. I also used to edit the Wikipedia:Sandbox a lot because I thought it was fun, and that was also where I did some very immature edits (since I was only 9/10 back then). Also in August, I got the rollback feature, started using Huggle, and began creating and improving articles, something a young kid probably won't do.
The rest of 2009
In September 2009, I continued to revert vandalism and began talking with some new wiki-friends such as Spongefrog and White Shadows. I continued with my original editing style until November 2009, when I stopped editing the Sandbox and improved more articles. In December, I revamped my userpage to a much cleaner style, since the older style was very cluttered. If you want to see my oldest userpage from late 2009, check this out.
Me in 2010
Once 2010 came, I wasn't as addicted to off-topic posting on user talk pages as I was in 2009, and I improved more articles and reverted more vandalism. Later that year, I started to copyedit lots of articles, and participated in these copyediting drives. I also stopped reverting vandalism by late 2010 for some reason. During 2010, I also matured a little bit compared to 2009.
Me in 2011 and my "retirement"
In 2011, I continued to improve articles and check for their grammar. In March 2011, I changed my username to Steve2011 because I didn't believe in that doomsday prediction anymore. The doomsday prediction eventually failed, like all end of the world predictions do. After that, I started to become less active, and soon became semi-retired. This was the last edit I made to an article under my old account before I began to take the long break between the last edit on my old account and the creation of my new account in February 2012 when I was 12 years old, when I posted a "retirement" message on my user and talk pages even though I wouldn't retire permanently.
In August 2013, I created this account called "StevenD99". Between the retirement of my old account and the creation of this account, I edited Wikipedia only 1 time, and it was this edit. That edit, in fact, happened only a couple weeks before the creation of this account. Since I created this account, I have started to revert vandalism again and do other stuff to help the wiki. However, beginning in September 2013, I started to become less active and started to take wikibreaks due to school and also being too busy on other parts of the internet.
Inactivity throughout late 2013-2014 and my retirement
Throughout late 2013 and the whole year of 2014 I was relatively inactive, editing only sometimes or when I felt like editing. I was too busy on other sites during this time. On November 26, 2014, after continuing to be inactive despite plans to return to very active editing, I officially retired from this site.
My goals
• Become an expert vandal fighter (once again)
• Create a few articles and maybe expand some articles to GA status if I can
• Join in on AfD, RFA, and other discussions
• Join more wikiprojects
• Work my way towards becoming an admin in a few years
Barnstars
My barnstars under this account are located here. If you want to see the barnstars of my old account, they are here.
Signature Archive
• StevenD99 (talk) (Just the plain old signature) (Used: On August 11 and 12, 2013)
• StevenD99 Chat (The same signature as my old account) (Used: August 12, 2013 - August 19, 2013)
• StevenD99 Talk | Stalk (The first tweaks) (Used: August 20, 2013 - September 15, 2013)
• StevenD99 Talk | Stalk | Sign! (Added "Sign!" to link to my guestbook) (Used: September 15, 2013 - October 10, 2013)
• StevenD99 Happy Halloween! (My Halloween 2013 sig) (Used: October 31, 2013 - November 2, 2013)
• StevenD99 Contribs Sign (Some more tweaks) (Used: October 11, 2013 - October 30, 2013, November 3, 2013 - April 18, 2014)
• StevenD99 (This signature is a major change from the last one, a black background with a blue border has been introduced and the signature text now includes different shades of blue.) (Used: April 19, 2014 - June 1, 2014)
• StevenD99 (After a signature complaint, I had to change it. This is the new signature I changed it too. It's similar to the last one except it has different colors and the unnecessary clutter's been removed.) (Used: June 1, 2014 - present)
Other stuff:
Picture of the day
Picture of the day
Current RfAs
RfA candidate S O N S% Ending (UTC) Time left Dups? Report
RfB candidate S O N S% Ending (UTC) Time left Dups? Report
No RfXs since 22:36, 11 July 2016 (UTC).—cyberbot ITalk to my owner:Online
|
2016-07-28 01:15:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2913118004798889, "perplexity": 4947.725537498181}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827781.76/warc/CC-MAIN-20160723071027-00020-ip-10-185-27-174.ec2.internal.warc.gz"}
|
http://en.wikipedia.org/wiki/Malliavin's_absolute_continuity_lemma
|
# Malliavin's absolute continuity lemma
In mathematics — specifically, in measure theoryMalliavin's absolute continuity lemma is a result due to the French mathematician Paul Malliavin that plays a foundational rôle in the regularity (smoothness) theorems of the Malliavin calculus. Malliavin's lemma gives a sufficient condition for a finite Borel measure to be absolutely continuous with respect to Lebesgue measure.
## Statement of the lemma
Let μ be a finite Borel measure on n-dimensional Euclidean space Rn. Suppose that, for every x ∈ Rn, there exists a constant C = C(x) such that
$\left| \int_{\mathbf{R}^{n}} \mathrm{D} \varphi (y) (x) \, \mathrm{d} \mu(y) \right| \leq C(x) \| \varphi \|_{\infty}$
for every C function φ : Rn → R with compact support. Then μ is absolutely continuous with respect to n-dimensional Lebesgue measure λn on Rn. In the above, Dφ(y) denotes the Fréchet derivative of φ at y and ||φ|| denotes the supremum norm of φ.
## References
• Bell, Denis R. (2006). The Malliavin calculus. Mineola, NY: Dover Publications Inc. pp. x+113. ISBN 0-486-44994-7. MR 2250060 (See section 1.3)
• Malliavin, Paul (1978). "Stochastic calculus of variations and hypoelliptic operators". Proceedings of the International Symposium on Stochastic Differential Equations (Res. Inst. Math. Sci., Kyoto Univ., Kyoto, 1976). New York: Wiley. pp. 195–263. MR 536013
|
2014-12-25 00:21:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9942769408226013, "perplexity": 944.7070249668574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447543436.8/warc/CC-MAIN-20141224185903-00040-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://bastian.rieck.me/blog/posts/2017/simple_unit_tests/
|
# Simple unit tests with C++ and CMake
## Tags: programming, projects, research
« Previous post: A technique for selection sampling … — Next post: The KISS principle in machine learning »
A warning upfront: this post is sort of an advertisement for Aleph, my C++ library for topological data analysis. In this blog post I do not want to cover any of the mathematical algorithms present in Aleph—I rather want to focus on small but integral part of the project, viz. unit tests.
If you are not familiar with the concept of unit testing, the idea is (roughly) to write a small, self-sufficient test for every new piece of functionality that you write. Proponents of the methodology of test-driven development (TDD) even go so far as to require you to write the unit tests before you write your actual code. In this mindset, you first think of the results your code should achieve and which outputs you expect prior to writing any “real” code. I am putting the word real in quotation marks here because it may seem strange to focus on the tests before doing the heavy lifting.
However, this way of approaching software development may actually be quite beneficial, in particular if you are working on algorithms with a nice mathematical flavour. Here, thinking about the results you want to achieve with your code ensures that at least a few known examples are processed correctly by your code, making it more probable that the code will perform well in real-world scenarios.
When I started writing Aleph in 2016, I also wanted to add some unit tests, but I did not think that the size of the library warranted the inclusion of one of the big players, such as Google Test or Boost.Test. While arguably extremely powerful and teeming with more features than I could possibly imagine, they are also quite heavy and require non-trivial adjustments to any project.
Thus, in the best tradition of the not-invented-here-syndrome, I decided to roll my own testing framework, base on pure CMake and small dash of C++. My design decisions were rather simple:
• Use CTest, the testing framework of CMake to run the tests. This framework is rather simple and just uses the return type of a unit test program to decide whether the test worked correctly.
• Provide a set of routines to check the correctness of certain calculations within a unit test, throwing an error if something unexpected happened.
• Collect unit tests for the “larger” parts of the project in a single executable program.
Yes, you read that right—my approach actually opts for throwing an error in order to crash the unit test program. Bear with me, though, for I think that this is actually a rather sane way of approaching unit tests. After all, if the tests fails, I am usually not interested in whether other parts of a test program—that may potentially depend on previous calculations—run through or not. As a consequence, adding a unit test to Aleph is as simple as adding the following lines to a CMakeLists.txt file, located in the tests subdirectory of the project:
While in the main CMakeLists.txt, I added the following lines:
ENABLE_TESTING()
So far, so good. A test now looks like this:
#include <tests/Base.hh>
void testBasic()
{
// do some nice calculation; store the results in foo and bar,
// respectively
ALEPH_ASSERT_THROW( foo != bar );
ALEPH_ASSERT_EQUAL( foo, 2.0 );
ALEPH_ASSERT_EQUAL( bar, 1.0 );
}
{
}
int main(int, char**)
{
testBasic();
}
That is basically the whole recipe for a simple unit test. Upon execution, main() will ensure that all larger-scale test routines, i.e. testSimple() and testAdvanced() are called. Within each of these routines, the calls to the corresponding macros—more on that in a minute— ensure that conditions are met, or certain values are equal to other values. Else, an error will be thrown, the test will abort, and CMake will throw an error upon test execution.
So, how do the macros look like? Here is a copy of the current version of Aleph:
#define ALEPH_ASSERT_THROW( condition ) \
{ \
if( !( condition ) ) \
{ \
throw std::runtime_error( std::string( __FILE__ ) \
+ std::string( ":" ) \
+ std::to_string( __LINE__ ) \
+ std::string( " in " ) \
+ std::string( __PRETTY_FUNCTION__ ) \
); \
} \
}
#define ALEPH_ASSERT_EQUAL( x, y ) \
{ \
if( ( x ) != ( y ) ) \
{ \
throw std::runtime_error( std::string( __FILE__ ) \
+ std::string( ":" ) \
+ std::to_string( __LINE__ ) \
+ std::string( " in " ) \
+ std::string( __PRETTY_FUNCTION__ ) \
+ std::string( ": " ) \
+ std::to_string( ( x ) ) \
+ std::string( " != " ) \
+ std::to_string( ( y ) ) \
); \
} \
}
Pretty simple, I would say. The ALEPH_ASSERT_EQUAL macro actually tries to convert the corresponding values to strings, which may not always work. Of course, you could use more complicated string conversion routines, as Boost.Test does. For now, though, these macros are sufficient to make up the unit test framework of Aleph, which at the time of me writing this, encompasses more than 4000 lines of code.
The only remaining question is how this framework is used in practice. By setting ENABLE_TESTING(), CMake actually exposes a new target called test. Hence, in order to run those tests, a simple make test is sufficient in the build directory. This is what the result may look like:
\$ make test
Running tests...
Test project /home/bastian/Projects/Aleph/build
Start 1: barycentric_subdivision
1/36 Test #1: barycentric_subdivision ............ Passed 0.00 sec
Start 2: beta_skeleton
[...]
34/36 Test #34: union_find ......................... Passed 0.00 sec
Start 35: witness_complex
35/36 Test #35: witness_complex .................... Passed 1.82 sec
Start 36: python_integration
36/36 Test #36: python_integration ................. Passed 0.07 sec
100% tests passed, 0 tests failed out of 36
Total Test time (real) = 5.74 sec
In addition to being rather lean, this framework can easily be integrated into an existing Travis CI workflow by adding
- make test
as an additional step to the script target in your .travis.yml file.
If you are interested in using this testing framework, please take a look at the following files:
That is all for now, until next time—may your unit tests always work the way you expect them to!
|
2021-01-21 03:58:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47818124294281006, "perplexity": 2347.007126140442}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703522242.73/warc/CC-MAIN-20210121035242-20210121065242-00085.warc.gz"}
|
http://honorscode.blogspot.com/2009/09/969-keybinds.html
|
## Friday, September 18, 2009
### 969 Keybinds
Crofe over at Crofe's Corner has an interesting article up on keybinding the 969 rotation.
Head over and give it a look see.
http://crofescorner.blogspot.com/2009/09/my-mind-just-exploded.html
Lanashara said...
Could you do a post, or possibly a recap on Paladin Tanking Macros?
I have a hard time figuring out what all the macro commands are (unless you can point me to a repository).
Lana
Crofe said...
Lanashara, http://www.wowwiki.com/Macro_API is a list of the commands you can use with macros. It contains more than just the combat specific ones though and it doesn't really explain *how* you would use them.
http://www.wowwiki.com/Useful_macros_for_paladins is a list of "Useful" Paladin macros. (Which apparently has the two I talk about in my post). I don't really find these macros helpful in a real world situation, but they are example macros which are semi explained.
|
2014-10-21 21:32:37
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8184487223625183, "perplexity": 2027.5314479861104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444829.13/warc/CC-MAIN-20141017005724-00241-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/show-that-an-area-function-is-constant-with-fund-thm-of-calc.630530/
|
Show that an area function is constant with fund. thm of calc
1. Aug 22, 2012
dustbin
1. The problem statement, all variables and given/known data
I need to show that the area function for a parabola in the first quadrant is constant.
2. Relevant equations
$$A(a) = \int^a_0 \frac{1}{a}-\frac{x^2}{a^3}\,dx$$
3. The attempt at a solution
Computing this integral gives an area of 2/3. Since the area will always be 2/3 and does not depend on the value of a, then the area function is constant. However, my question is about showing that the area function is constant using the Fundamental Theorem of Calculus.
Can I use the FTC, legitimately? Or is the hypothesis of the FTC not satisfied because of the division by a in the integrand?
Last edited: Aug 22, 2012
2. Aug 22, 2012
LCKurtz
Another way to show $A(a)$ is constant would be to show $A'(a)=0$.
3. Aug 22, 2012
dustbin
Right, but wouldn't that be using the FTC? Taking the derivative would result in:
$$= \frac{1}{a}-\frac{a^2}{a^3} = \frac{1}{a}-\frac{1}{a} = 0$$
NOTE: I made a mistake in my original area function above. I fixed the typo.
4. Aug 22, 2012
Staff: Mentor
If a = 0, then the area under the parabola between 0 and 0 is clearly zero, so it's reasonable to assume that a > 0. You don't have to consider a < 0, since you're concerned with the area in the first quadrant.
|
2018-01-22 19:07:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9439385533332825, "perplexity": 324.00834018309484}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891530.91/warc/CC-MAIN-20180122173425-20180122193425-00643.warc.gz"}
|
https://math.stackexchange.com/questions/2581422/using-jensens-inequality-to-prove-another-inequality
|
# Using Jensen's inequality to prove another inequality?
Suppose $u(\cdot)$ and $v(\cdot)$ are two differentiable, strictly increasing, and strictly concave real functions. Specifically, $v(\cdot)$ is "more concave" than $u(\cdot)$ in the sense that there exists an increasing and strictly concave function $\phi(\cdot)$ such that $v(x)=\phi(u(x))$ at all $x$. It is also equivalent to $$\frac{v''(x)}{v'(x)}<\frac{u''(x)}{u'(x)} \textrm{ for any }x\,.$$
Let $p_i\in(0,1), \sum_{i\in I}p_i=1$ be probabilities and $|I|>2$. Let $x_i$ and $y_i$ be strictly positive for all $i\in I$. Assume $$\sum_{i\in I}p_ix_i<\sum_{i\in I}p_iy_i,$$ and $$\sum_{i\in I}p_iu(x_i)=\sum_{i\in I}p_iu(y_i).$$
Conjecture: $$\sum_{i\in I}p_iv(x_i)>\sum_{i\in I}p_iv(y_i).$$ I believe this is right (after trying many numerical examples) and I think a clever use of Jensen's inequality (or its variants) will do this. But I'm stuck on doing it formally. Any hints/thoughts on providing a formal proof?
Remark: this is related to my other post: Proving an inequality of the expectation of concave functions?
Update: after some more attempts, I believe some techniques in convex analysis would be helpful. Geometrically, the middle equation represents a hyperplane in the $R^{|I|}$ space, and the desired result (very roughly) says that a concave transformation of that hyperplane should be separated from a convex transformation of it.
To be clear, I wasn't saying the conjecture should be generally true. Any thoughts on finding any sufficient conditions to make it work would be very helpful.
• Should it really be $|I| > 2$? – fourierwho Dec 27 '17 at 3:14
• Yes, I've already managed to prove the case of $|I|=2$, and I believe it's true for $|I|>2$ as well. – Tuzi Dec 27 '17 at 3:29
• Then it's induction. – fourierwho Dec 27 '17 at 3:30
• Could you be more specific? I don't think a usual induction argument works here because starting for $|I|=3$ one couldn't write the inequalities in a way that the $|I|=2$ case can handle, due to the non-linearity of $u(\cdot)$ and $v(\cdot)$. – Tuzi Dec 27 '17 at 3:38
• The cardinality of $I$ is finite, infinite countable or uncountable? – induction601 Dec 27 '17 at 5:45
The reverse inequality holds. A differentiable function of one variable is concave on an interval $J$ if and only if the function lies below all of its tangent, i.e., $$f(x) \leq f(y) + f'(y)(x-y)$$ for all $x,y\in J$. In particular, if $f′(c) =0$, then $c$ is a global maximum of $f(x)$.
Applying this for $v$, we have $$p_i(v(x_i) - v(y_i)) \le p_i v'(y_i)(x_i-y_i)$$ for all $p_i\in(0,1)$ and all $i\in I$.
Summing up over $i$ \begin{align} \sum_{i\in I}p_i(v(x_i) - v(y_i)) &\le \sum_{i\in I} p_i v'(y_i)(x_i-y_i) \\ &\le \sum_{i\in I} p_i |v'(y_i)|(x_i-y_i) \qquad \text{since} \,\, w\le |w|, \forall w \in \mathbb{R} \\ &\le \min_i |v'(y_i)|\sum_{i\in I} p_i (x_i-y_i) <0. \end{align} The second inequality holds since $v$ is increasing so that $|v'|=v'$ iff $v'>0$.
• Could you elaborate more on the second inequality in the last part? if $x_i-y_i < 0$, we can't conclude that $w(x_i-y_i) \le |w|(x_i-y_i)$. – induction601 Dec 27 '17 at 16:47
• Sorry but I'm not sure if the second inequality in your proposed answer is correct, as the above comment also suggested. More generally, as I stated in the conditions of my initial question, we can surely have $\sum_{i}p_iu(x_i)=\sum_{i}p_iu(y_i)$ for a concave function $u(\cdot)$. This means one can't generally prove the reserve inequality holds.'' – Tuzi Dec 27 '17 at 17:03
• @induction601 Right, the second inequality holds in one and only one case $v'>0$ which is already happen since $v$ is increasing i,e, $v'>0$ and this implies that $|v'|\ge v'\ge0$ – mwomath Dec 27 '17 at 18:27
• @mwomath But still the second inequality in your proposed answer is not true because $x_i-y_i$ is negative for some $i$'s... – Tuzi Dec 28 '17 at 1:53
|
2019-08-19 11:51:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 4, "x-ck12": 0, "texerror": 0, "math_score": 0.9839134216308594, "perplexity": 246.14101518429578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314732.59/warc/CC-MAIN-20190819114330-20190819140330-00010.warc.gz"}
|
http://www.talkstats.com/threads/discrete-probability-function-p-x-1-p-x-1-u-p-x-1.69275/
|
# [Discrete Probability Function] P(X²=1)= P(X=1) U P(X=-1)?
#### Ardêncio
##### New Member
I would like to know if this following form of interpretation is correct:
==========================
Suppose I have a discrete probability function P(X=k)=(7-k)/21, k between [1,6].
The question asks me to calculate P(X²=1).
In this case, may I do P(X²=1)= P(X=1) U P(X=-1)?
#### Dason
##### Ambassador to the humans
Close. I mean... you're right but your notation is a little off. You would want P(X²=1)= P(X=1 U X=-1) = P(X=1) + P(X = -1) since they're disjoint.
|
2018-02-19 23:32:56
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8623369932174683, "perplexity": 5882.1102093831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812855.47/warc/CC-MAIN-20180219231024-20180220011024-00613.warc.gz"}
|
https://eprints.utas.edu.au/9464/
|
# On Group Invariant Solutions to the Maxwell Dirac equations
Legg, GP 2007 , 'On Group Invariant Solutions to the Maxwell Dirac equations', Research Master thesis, University of Tasmania.
PDF (Front Matter) 01Front.pdf | Download (200kB) Available under University of Tasmania Standard License. PDF (Whole Thesis) 02Whole.pdf | Request a copy Full text restricted Available under University of Tasmania Standard License.
## Abstract
This work constitutes a study on group invariant solutions of the Maxwell Dirac
equations for a relativistic electron spinor in its own self-consistent electromagnetic field. First, the Maxwell Dirac equations are written in a gauge independent tensor form, in terms of bilinear Dirac currents and a gauge independent
total four-potential. A requirement of this form is that the length of the current
vector be non-zero. In this form they are amenable to the study of solutions invariant under subgroups of the Poincaré group without reference to the Abelian
gauge group. In particular, all subgroups of the Poincaré group that generate 4
dimensional orbits by transitive action on Minkowski space, and the corresponding invariant vector
fields are identifed, which will constitute invariant solutions
merely if various constants satisfy a set of algebraic equations. For each such
subgroup, the possibility of solutions to both the full Maxwell Dirac equations
and to a classical approximation to the self-field equations is determined. Of
the 19 classes of simply transitive subgroups, only one class yielded a solution.
Item Type: Thesis - Research Master Legg, GP The Author Copyright 2007 the author View statistics for this item
### Actions (login required)
Item Control Page
TOP
|
2018-08-20 08:37:19
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8120216727256775, "perplexity": 2362.8334015522555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221216051.80/warc/CC-MAIN-20180820082010-20180820102010-00093.warc.gz"}
|
https://www.physicsforums.com/threads/force-on-a-charge-from-a-line-of-uniform-charge-density.434012/
|
# Force on a charge from a line of uniform charge density
#### lonely_coyote
1. The problem statement, all variables and given/known data
Write an equation in vector component form for the force when a point charge q0 is located a distance d from a vertical rod of uniform charge density Q and length L on the 45o line that bisects the positive x and y axes.
basically if you can picture a rod going vertically on the z axis, then halfway down the rod and a distance d away is a point that i need to find the force on
2. Relevant equations
dF=1/(4$$\pi$$$$\epsilon$$0) x q0dq/r2
note that these equations are supposed to look like 4 x pi x epsilon 0, not for raised to that power
3. The attempt at a solution
I have determined that the force were the charge to be locate on the y axis would be
Fy=1/(4$$\pi$$$$\epsilon$$0) x q0q/(y x (y2 + L2/4)1/2)
Related Introductory Physics Homework News on Phys.org
"Force on a charge from a line of uniform charge density"
### Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving
|
2019-06-19 03:17:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48997148871421814, "perplexity": 1451.15713545642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998882.88/warc/CC-MAIN-20190619023613-20190619045613-00552.warc.gz"}
|
https://tex.stackexchange.com/questions/469617/utf-8-characters-not-rendered-properly?noredirect=1
|
# UTF-8 characters not rendered properly
I have a problem with some of the UTF-8 characters that are in my Tex file. Although I am saving the file in UTF-8 and my text editor (Atom) is displaying all of the characters properly, LaTeX throws errors like these:
``````[no file]:824: Package inputenc Error: Unicode character Δ (U+394) [...tes \$(x,y)\$ are \$x=x_0+Δx\$ and \$y=y_0+Δ]
[no file]:857: Package inputenc Error: Invalid UTF-8 byte sequence. [ # calculate ω]
[no file]:857: Package inputenc Error: Invalid UTF-8 byte 137. [ # calculate ω]
[no file]:889: Package inputenc Error: Unicode character θ (U+3B8) [...ce \$r\$. It was previously at an angle \$θ]
[no file]:889: Package inputenc Error: Unicode character ω (U+3C9) [...ngle \$θ\$ and is now at an angle \$θ + ω]
[no file]:976: Package inputenc Error: Invalid UTF-8 byte sequence. [ # calculate ω]
``````
I have tried using utf (and utfx) package to no avail:
``````\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
``````
Here is the entirety of the file: https://pastebin.com/dzzLtUmB.
The formatting might be a little rough on the eyes, because it's generated.
Is there something I could do to fix this?
• Welcome to TeX.SX! Please consider adding a shortened version of your example document as code directly to your question (we call that an MWE: tex.meta.stackexchange.com/q/228/35864). That makes it easier for other people to help you (because they don't have to go to a different site to find your code) and avoids the issue of link rot. – moewe Jan 10 at 20:04
• That said if you use pdfLaTeX (as opposed to the full Unicode engines LuaLaTeX and XeLaTeX) with `inputenc` only a limited subset of UTF-8 characters is set up for use (e.g. Latin-1 characters and other accented stuff). You'll have to add declarations for the one your are missing. See for example tex.stackexchange.com/q/429029/35864 or better tex.stackexchange.com/q/34604/35864, tex.stackexchange.com/q/83440/35864, tex.stackexchange.com/q/110042/35864 and tex.stackexchange.com/q/356958/35864 – moewe Jan 10 at 20:05
• Thanks for the advice, I will keep that in mind the next time I post. With that being said, isn't there a simpler way? This is looks quite tedious. – Tomáš Sláma Jan 10 at 20:12
• ... The alternative to declaring all the stuff is to use a Unicode engine (XeLaTeX or LuaLaTeX) with a font that has all the characters in it. Anyway, the `Invalid UTF-8 byte sequence.` and `Invalid UTF-8 byte 137.` errors don't sound good and probably need looking at. – moewe Jan 10 at 20:12
• I will look into it. Thank you for the quick response! – Tomáš Sláma Jan 10 at 20:13
If I add `draft` so latex does not stop at the missing figures, there are several errors due to incorrect math mode markup, and some utf8 characters that need to be redefined but you do not get the error
``````[no file]:976: Package inputenc Error: Invalid UTF-8 byte sequence. [ # calculate ω]
``````
I suspect that you have an older latex release and have a hit a bug in the UTF8 decoder, possibly
https://github.com/latex3/latex2e/pull/83
Make sure that you have an up to date latex.
After adding `[draft]` to skip past the missing images the remaining warnings are
``````\$ grep -i 'invalid\|error' bb251.log | sort | uniq
! Package amsmath Error: Erroneous nesting of equation structures;
! Package inputenc Error: Unicode character Δ (U+0394)
! Package inputenc Error: Unicode character θ (U+03B8)
! Package inputenc Error: Unicode character ω (U+03C9)
LaTeX Font Warning: Command \large invalid in math mode on input line 659.
LaTeX Font Warning: Command \large invalid in math mode on input line 788.
LaTeX Font Warning: Command \large invalid in math mode on input line 792.
LaTeX Font Warning: Command \large invalid in math mode on input line 794.
LaTeX Font Warning: Command \large invalid in math mode on input line 820.
LaTeX Font Warning: Command \large invalid in math mode on input line 822.
LaTeX Font Warning: Command \large invalid in math mode on input line 900.
LaTeX Font Warning: Command \large invalid in math mode on input line 904.
LaTeX Font Warning: Command \large invalid in math mode on input line 908.
LaTeX Font Warning: Command \large invalid in math mode on input line 916.
LaTeX Font Warning: Command \large invalid in math mode on input line 920.
LaTeX Font Warning: Command \large invalid in math mode on input line 922.
LaTeX Font Warning: Command \large invalid in math mode on input line 926.
LaTeX Font Warning: Command \large invalid in math mode on input line 928.
LaTeX Font Warning: Command \large invalid in math mode on input line 932.
LaTeX Font Warning: Command \large invalid in math mode on input line 934.
``````
• I have tried this with both Tex Live on my Linux laptop and MiKTeX on my PC and see the same errors. – Tomáš Sláma Jan 10 at 20:16
• @TomášSláma the operating system is not important, which version of latex did you use? I used the current release LaTeX2e <2018-12-01> – David Carlisle Jan 10 at 20:18
• I am relatively new to LaTeX, but running tex -v gives "TeX 3.14159265 (TeX Live 2018/Arch Linux)" – Tomáš Sláma Jan 10 at 20:23
• that is the tex system rather than latex, the top of the file log file will have a lien saying LaTeX2e <2018-12-01> or (since you have texlive 2018 208-04-01 there were some utf8 fixes in the December release, but actually I didn't get the invalid utf8 message even if i used an older latex) – David Carlisle Jan 10 at 20:37
|
2019-10-14 00:48:06
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8514217138290405, "perplexity": 7530.655175674527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986648481.7/warc/CC-MAIN-20191014003258-20191014030258-00301.warc.gz"}
|
https://www.vasp.at/wiki/index.php/WC
|
Requests for technical support from the VASP group should be posted in the VASP-forum.
# WC
WC = [real]
Default: WC = 1000.
Description: WC specifies the weight factor for each step in Broyden mixing scheme (IMIX=4).
Set all weights identical to WC (resulting in Pulay's mixing method), up to now Pulay's scheme was always superior to Broyden's 2nd method.
Switch to Broyden's 2nd method, i.e., set the weight for the last step equal to 1000 and all other weights equal to 0.
• WC<0 (implemented for test purposes: not recommended)
Try some automatic setting of the weights according to:
${\displaystyle W_{\rm {iter}}=0.01|{\rm {WC}}|/||\rho _{\rm {out}}-\rho _{\rm {in}}||_{\rm {precond.}}\,}$
in order to set small weights for the first steps and increasing weights for the last steps.
|
2021-09-16 16:45:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5276797413825989, "perplexity": 3885.1824442185584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053657.29/warc/CC-MAIN-20210916145123-20210916175123-00417.warc.gz"}
|
https://jkk.name/reading-notes/old-blog/2019-07-10_disentanglement/
|
# A Large-Scale Corpus for Conversation Disentanglement (Kummerfeld et al., 2019)
This post is about my own paper to appear at ACL later this month. What is interesting about this paper will depend on your research interests, so that’s how I’ve broken down this blog post.
A few key points first:
### You study discourse
We investigated discourse structure when multiple conversations are occurring in the same stream of communication. In our case, the stream is a technical support channel for Ubuntu on Internet Relay Chat (IRC). We annotated each message with which message(s) it was a response to. As far as we are aware, this is the first large-scale corpus with this kind of discourse structure in synchronous chat. Here is an example from the data, with annotations marked by edges and colours:
We don’t frame the paper as being about reply-structure though. Instead, we focus on a byproduct of these annotations - conversation disentanglement. Given our graph of reply-structure, each connected component is a single conversation (as shown by each colour in the example). The key prior work on the disentanglement problem is Elsner and Charniak (2008), who released the largest annotated resource for the task, with 2,500 messages manually separated into conversations. We annotated their data with our annotation scheme and 75,000 additional messages.
We built a set of simple models for predicting reply-structure and did some analysis of assumptions about discourse from prior disentanglement work, but there is certainly more scope for study here. One direction would be to develop better models for this task. Another would be to study patterns in the data to understand how people are able to follow the conversation.
### You work on dialogue
There has been a lot of work recently using the Ubuntu dataset from Lowe et al., (2015), which was produced by heuristically disentangling conversations from the same IRC channel we use. Their work opened up a fantastic research opportunity by providing 930,000 conversations for training and evaluating dialogue systems. However, they were unable to evaluate the quality of their conversations because they had no annotated data.
Using our data, we found that only 20% of their conversations are a true prefix of a conversation (since their next utterance classification task cuts the conversation off part-way, being a true prefix is all that matters). Many conversations are missing messages, and some have extra messages from other conversations. Unsurprisingly, our trained model does better, producing conversations that are a true prefix 81% of the time. We also noticed that their heuristic was incorrectly linking messages far apart in time. This is not tested by our evaluation set, so we constructed this figure, which shows the problem is quite common:
The purple results are based on the output of our model over the entire Ubuntu IRC logs. That output is the basis of DSTC 8 Track 2. Once the competition finishes (October 20th, 2019) we will release all of the conversations.
### You am interested in studying online communities
This is not my area of expertise, but our data and models could enable the exploration of interesting questions. For example:
• What is the structure of the community? By looking at who asks for help and who responds we could see patterns of behaviour.
• How does a community evolve over time? This data spans 15 years, during which there were many Ubuntu releases, Stackoverflow was created, other Ubuntu forums were created, etc. It seems likely that those events and more would be reflected in the data.
It would be interesting to apply the model to other communities, but that would require additional in-domain data to get good results. We have no plans to collect additional data at this stage, and for other channels there are copyright questions that might be difficult to resolve (the Ubuntu channels have an open access license).
### You mainly care about neural network architectures
We experimented with a bunch of ideas that didn’t improve performance, so our final model is very simple (a feedforward network with features representing the logs and sentences represented by averaging and max-pooling GloVe embeddings). Maybe that means there is an opportunity for you to improve on our results with a fancy model? One of our motivations for making such a large new resource was to make it possible to train sophisticated models.
### Acknowledgments
This project has been going since I started at Michigan as a postdoc funded by a grant from IBM. The final paper is the result of collaboration with a large group of people from Michigan and IBM. Thank you!
## Citation
Paper
@InProceedings{acl19disentangle,
author = {Kummerfeld, Jonathan K. and Gouravajhala, Sai R. and Peper, Joseph and Athreya, Vignesh and Gunasekara, Chulaka and Ganhotra, Jatin and Patel, Siva Sankalp and Polymenakos, Lazaros and Lasecki, Walter S.},
title = {A Large-Scale Corpus for Conversation Disentanglement},
title: = {A Large-Scale Corpus for Conversation Disentanglement},
booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
location = {Florence, Italy},
month = {July},
year = {2019},
url = {https://github.com/jkkummerfeld/irc-disentanglement/raw/master/acl19irc.pdf},
arxiv = {https://arxiv.org/abs/1810.11118},
software = {https://jkk.name/irc-disentanglement},
data = {https://jkk.name/irc-disentanglement},
}
|
2022-01-16 22:08:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24137119948863983, "perplexity": 1608.9911229933807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300244.42/warc/CC-MAIN-20220116210734-20220117000734-00420.warc.gz"}
|
https://nrich.maths.org/public/leg.php?code=12&cl=2&cldcmpid=5775
|
# Search by Topic
#### Resources tagged with Factors and multiples similar to ACE, TWO, THREE...:
Filter by: Content type:
Stage:
Challenge level:
### There are 143 results
Broad Topics > Numbers and the Number System > Factors and multiples
### What Numbers Can We Make?
##### Stage: 3 Challenge Level:
Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
### What Numbers Can We Make Now?
##### Stage: 3 Challenge Level:
Imagine we have four bags containing numbers from a sequence. What numbers can we make now?
### Special Sums and Products
##### Stage: 3 Challenge Level:
Find some examples of pairs of numbers such that their sum is a factor of their product. eg. 4 + 12 = 16 and 4 × 12 = 48 and 16 is a factor of 48.
### Repeaters
##### Stage: 3 Challenge Level:
Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13.
### Three Times Seven
##### Stage: 3 Challenge Level:
A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why?
### Dozens
##### Stage: 2 and 3 Challenge Level:
Do you know a quick way to check if a number is a multiple of two? How about three, four or six?
##### Stage: 3 Challenge Level:
List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it?
### Even So
##### Stage: 3 Challenge Level:
Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why?
### Counting Factors
##### Stage: 3 Challenge Level:
Is there an efficient way to work out how many factors a large number has?
### Ben's Game
##### Stage: 3 Challenge Level:
Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters.
##### Stage: 3 Challenge Level:
Make a set of numbers that use all the digits from 1 to 9, once and once only. Add them up. The result is divisible by 9. Add each of the digits in the new number. What is their sum? Now try some. . . .
### Have You Got It?
##### Stage: 3 Challenge Level:
Can you explain the strategy for winning this game with any target?
### Hidden Rectangles
##### Stage: 3 Challenge Level:
Rectangles are considered different if they vary in size or have different locations. How many different rectangles can be drawn on a chessboard?
### Tiling
##### Stage: 2 Challenge Level:
An investigation that gives you the opportunity to make and justify predictions.
### American Billions
##### Stage: 3 Challenge Level:
Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3...
### Stars
##### Stage: 3 Challenge Level:
Can you find a relationship between the number of dots on the circle and the number of steps that will ensure that all points are hit?
### Got It
##### Stage: 2 and 3 Challenge Level:
A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
### Cuboids
##### Stage: 3 Challenge Level:
Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all?
### Funny Factorisation
##### Stage: 3 Challenge Level:
Some 4 digit numbers can be written as the product of a 3 digit number and a 2 digit number using the digits 1 to 9 each once and only once. The number 4396 can be written as just such a product. Can. . . .
### Crossings
##### Stage: 2 Challenge Level:
In this problem we are looking at sets of parallel sticks that cross each other. What is the least number of crossings you can make? And the greatest?
### Product Sudoku
##### Stage: 3 Challenge Level:
The clues for this Sudoku are the product of the numbers in adjacent squares.
### Shifting Times Tables
##### Stage: 3 Challenge Level:
Can you find a way to identify times tables after they have been shifted up?
### Three Dice
##### Stage: 2 Challenge Level:
Investigate the sum of the numbers on the top and bottom faces of a line of three dice. What do you notice?
### Three Neighbours
##### Stage: 2 Challenge Level:
Look at three 'next door neighbours' amongst the counting numbers. Add them together. What do you notice?
### Gabriel's Problem
##### Stage: 3 Challenge Level:
Gabriel multiplied together some numbers and then erased them. Can you figure out where each number was?
### Always, Sometimes or Never? Number
##### Stage: 2 Challenge Level:
Are these statements always true, sometimes true or never true?
### Mystery Matrix
##### Stage: 2 Challenge Level:
Can you fill in this table square? The numbers 2 -12 were used to generate it with just one number used twice.
### Diagonal Product Sudoku
##### Stage: 3 and 4 Challenge Level:
Given the products of diagonally opposite cells - can you complete this Sudoku?
### Seven Flipped
##### Stage: 2 Challenge Level:
Investigate the smallest number of moves it takes to turn these mats upside-down if you can only turn exactly three at a time.
### Multiplication Square Jigsaw
##### Stage: 2 Challenge Level:
Can you complete this jigsaw of the multiplication square?
### A First Product Sudoku
##### Stage: 3 Challenge Level:
Given the products of adjacent cells, can you complete this Sudoku?
### Eminit
##### Stage: 3 Challenge Level:
The number 8888...88M9999...99 is divisible by 7 and it starts with the digit 8 repeated 50 times and ends with the digit 9 repeated 50 times. What is the value of the digit M?
### A Dotty Problem
##### Stage: 2 Challenge Level:
Starting with the number 180, take away 9 again and again, joining up the dots as you go. Watch out - don't join all the dots!
### AB Search
##### Stage: 3 Challenge Level:
The five digit number A679B, in base ten, is divisible by 72. What are the values of A and B?
### What Do You Need?
##### Stage: 2 Challenge Level:
Four of these clues are needed to find the chosen number on this grid and four are true but do nothing to help in finding the number. Can you sort out the clues and find the number?
### Star Product Sudoku
##### Stage: 3 and 4 Challenge Level:
The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid.
### Exploring Simple Mappings
##### Stage: 3 Challenge Level:
Explore the relationship between simple linear functions and their graphs.
### Times Tables Shifts
##### Stage: 2 Challenge Level:
In this activity, the computer chooses a times table and shifts it. Can you work out the table and the shift each time?
### Charlie's Delightful Machine
##### Stage: 3 and 4 Challenge Level:
Here is a machine with four coloured lights. Can you develop a strategy to work out the rules controlling each light?
### Light the Lights Again
##### Stage: 2 Challenge Level:
Each light in this interactivity turns on according to a rule. What happens when you enter different numbers? Can you find the smallest number that lights up all four lights?
### Factors and Multiples Game for Two
##### Stage: 2 Challenge Level:
Factors and Multiples game for an adult and child. How can you make sure you win this game?
### Missing Multipliers
##### Stage: 3 Challenge Level:
What is the smallest number of answers you need to reveal in order to work out the missing headers?
### Neighbours
##### Stage: 2 Challenge Level:
In a square in which the houses are evenly spaced, numbers 3 and 10 are opposite each other. What is the smallest and what is the largest possible number of houses in the square?
### The Remainders Game
##### Stage: 2 and 3 Challenge Level:
A game that tests your understanding of remainders.
##### Stage: 2 Challenge Level:
If you have only four weights, where could you place them in order to balance this equaliser?
### Beat the Drum Beat!
##### Stage: 2 Challenge Level:
Use the interactivity to create some steady rhythms. How could you create a rhythm which sounds the same forwards as it does backwards?
### The Moons of Vuvv
##### Stage: 2 Challenge Level:
The planet of Vuvv has seven moons. Can you work out how long it is between each super-eclipse?
### Venn Diagrams
##### Stage: 1 and 2 Challenge Level:
Use the interactivities to complete these Venn diagrams.
### LCM Sudoku II
##### Stage: 3, 4 and 5 Challenge Level:
You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku.
### Colour Wheels
##### Stage: 2 Challenge Level:
Imagine a wheel with different markings painted on it at regular intervals. Can you predict the colour of the 18th mark? The 100th mark?
|
2017-10-17 20:39:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34916865825653076, "perplexity": 1730.026811197098}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822488.34/warc/CC-MAIN-20171017200905-20171017220905-00563.warc.gz"}
|
https://mooseframework.inl.gov/source/kernels/PorousFlowPreDis.html
|
# PorousFlow PreDis
Precipitation-dissolution of chemical species
This Kernel implements the residual In this equation, is the porosity (only the old value is used), is the aqueous saturation, the sum over is a sum over all the precipitated-or-dissolved (PreDis) mineral species, are stoichiometric coefficients, is the density of a solid lump of the mineral, and is the mineral reaction rate (m(precipitate)/m(solution).s) which is computed by PorousFlowAqueousPreDisChemistry.
Details concerning precipitation-dissolution kinetic chemistry may be found in the chemical reactions module.
warning
The numerical implementation of the chemical-reactions part of PorousFlow is quite simplistic, with very few guards against strange numerical behavior that might arise during the non-linear iterative process that MOOSE uses to find the solution. Therefore, care must be taken to define your chemical reactions so that the primary species concentrations remain small, but nonzero, and that mineralisation does not cause porosity to become negative or exceed unity.
This Kernel is usually added to a PorousFlowMassTimeDerivative Kernel to simulate precipitation-dissolution of a mineral from some primary chemical species. For instance in the case of just one precipitation-dissolution kinetic reaction (1) and including diffusion and dispersion, the Kernels block looks like
[Kernels]
[./mass_a]
type = PorousFlowMassTimeDerivative
fluid_component = 0
variable = a
[../]
[./diff_a]
type = PorousFlowDispersiveFlux
variable = a
fluid_component = 0
disp_trans = 0
disp_long = 0
[../]
[./predis_a]
type = PorousFlowPreDis
variable = a
mineral_density = 1000
stoichiometry = 1
[../]
[./mass_b]
type = PorousFlowMassTimeDerivative
fluid_component = 1
variable = b
[../]
[./diff_b]
type = PorousFlowDispersiveFlux
variable = b
fluid_component = 1
disp_trans = 0
disp_long = 0
[../]
[./predis_b]
type = PorousFlowPreDis
variable = b
mineral_density = 1000
stoichiometry = 1
[../]
(modules/porous_flow/test/tests/chemistry/2species_predis.i)
Appropriate stoichiometric coefficients must be supplied to this Kernel. Consider the reaction system (2)
Then the stoichiometric coefficients for the PorousFlowPreDis Kernels would be:
- stoichiometry = '1 4' for Variable a - stoichiometry = '2 -5' for Variable b - stoichiometry = '-3 6' for Variable c
note
This Kernel lumps the mineral masses to the nodes. It also only uses the old values of porosity, which is an approximation: see porosity for a discussion.
See mass lumping for details.
## Input Parameters
• variableThe name of the variable that this Kernel operates on
C++ Type:NonlinearVariableName
Options:
Description:The name of the variable that this Kernel operates on
• mineral_densityDensity (kg(precipitate)/m^3(precipitate)) of each secondary species in the aqueous precipitation-dissolution reaction system
C++ Type:std::vector
Options:
Description:Density (kg(precipitate)/m^3(precipitate)) of each secondary species in the aqueous precipitation-dissolution reaction system
• stoichiometryA vector of stoichiometric coefficients for the primary species that is the Variable of this Kernel: one for each precipitation-dissolution reaction (these are one columns of the 'reactions' matrix)
C++ Type:std::vector
Options:
Description:A vector of stoichiometric coefficients for the primary species that is the Variable of this Kernel: one for each precipitation-dissolution reaction (these are one columns of the 'reactions' matrix)
• PorousFlowDictatorThe UserObject that holds the list of PorousFlow variable names.
C++ Type:UserObjectName
Options:
Description:The UserObject that holds the list of PorousFlow variable names.
### Required Parameters
• displacementsThe displacements
C++ Type:std::vector
Options:
Description:The displacements
• blockThe list of block ids (SubdomainID) that this object will be applied
C++ Type:std::vector
Options:
Description:The list of block ids (SubdomainID) that this object will be applied
### Optional Parameters
• enableTrueSet the enabled status of the MooseObject.
Default:True
C++ Type:bool
Options:
Description:Set the enabled status of the MooseObject.
• save_inThe name of auxiliary variables to save this Kernel's residual contributions to. Everything about that variable must match everything about this variable (the type, what blocks it's on, etc.)
C++ Type:std::vector
Options:
Description:The name of auxiliary variables to save this Kernel's residual contributions to. Everything about that variable must match everything about this variable (the type, what blocks it's on, etc.)
• use_displaced_meshFalseWhether or not this object should use the displaced mesh for computation. Note that in the case this is true but no displacements are provided in the Mesh block the undisplaced mesh will still be used.
Default:False
C++ Type:bool
Options:
Description:Whether or not this object should use the displaced mesh for computation. Note that in the case this is true but no displacements are provided in the Mesh block the undisplaced mesh will still be used.
• control_tagsAdds user-defined labels for accessing object parameters via control logic.
C++ Type:std::vector
Options:
Description:Adds user-defined labels for accessing object parameters via control logic.
• seed0The seed for the master random number generator
Default:0
C++ Type:unsigned int
Options:
Description:The seed for the master random number generator
• diag_save_inThe name of auxiliary variables to save this Kernel's diagonal Jacobian contributions to. Everything about that variable must match everything about this variable (the type, what blocks it's on, etc.)
C++ Type:std::vector
Options:
Description:The name of auxiliary variables to save this Kernel's diagonal Jacobian contributions to. Everything about that variable must match everything about this variable (the type, what blocks it's on, etc.)
• implicitTrueDetermines whether this object is calculated using an implicit or explicit form
Default:True
C++ Type:bool
Options:
Description:Determines whether this object is calculated using an implicit or explicit form
• vector_tagstimeThe tag for the vectors this Kernel should fill
Default:time
C++ Type:MultiMooseEnum
Options:nontime time
Description:The tag for the vectors this Kernel should fill
• extra_vector_tagsThe extra tags for the vectors this Kernel should fill
C++ Type:std::vector
Options:
Description:The extra tags for the vectors this Kernel should fill
• matrix_tagssystem timeThe tag for the matrices this Kernel should fill
Default:system time
C++ Type:MultiMooseEnum
Options:nontime system time
Description:The tag for the matrices this Kernel should fill
• extra_matrix_tagsThe extra tags for the matrices this Kernel should fill
C++ Type:std::vector
Options:
Description:The extra tags for the matrices this Kernel should fill
|
2019-04-20 18:44:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19307436048984528, "perplexity": 6662.19439848406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529962.12/warc/CC-MAIN-20190420180854-20190420202854-00160.warc.gz"}
|
https://mathematica.stackexchange.com/questions/88983/how-should-i-iteratively-refine-a-mixturedistribution
|
# How should I iteratively refine a MixtureDistribution?
I am trying to find a stopping time for a randomized simulation such that say 95% of trials that will be successful will have already finished. I would like to do this by creating and then refining some representative distribution dist and then calling InverseCDF[dist, 0.95].
### Some background
My simulation tests a variety of configurations to the same problem. Each configuration is essentially a set of parameters and for a given instance of the problem, each configuration is either mathematically plausible or not. Because of the complexity of the problem, I test each configuration with random simulation. When the simulation reaches a solution, it will terminate, but a simulation of a configuration that cannot be solved will run forever until it reaches a point where I kill it. Currently this kill point is a constant that I have chosen arbitrarily, but I would like to have this kill point move dynamically so that it does not kill many simulations that would eventually finish, but also does not waste too much time on configurations that have taken much longer than most successful configurations.
I'd like a distribution that represents the expected stopping time of a randomly selected configuration. Unfortunately, the distribution of stopping times for any given configuration is not normal, different configuration will have a different distributions, and the same configuration in different instances of the problem will have a different distribution. This seems to mean that the distribution will need to be constructed on the fly during testing.
Using normal approximations do not come close to workable answers. Using a KernelMixtureDistribution does produce good results, however it does not seem easy to refine based on new data.
### My plan
What I intend to do:
1. Do a few trials on a configuration, then if it is successful, create dist1 = KernelMixtureDistribution[len1]
2. Create a similar approximation dist2 for the next successful configuration
3. Merge these two distributions MixtureDistribution[{1,1}, {dist1,dist2}]
4. For each new successful trial, create an approximation and then mix the new distribution with the old one (weighting appropriately)
So:
Module[{stopT, cfg, failPoint, n = 0, dist, distNew, c = 0.95},
Reap[Do[
cfg = configs[[i]];
stopT = Reap[Do[
simulation[cfg, failPoint], {10}]][[2, 1]];
If[Mean[stopT] < failPoint, (*sim was sucessful*)
n++;
distNew = KernelMixtureDistribution[stopT];
dist = MixtureDistribution[{n - 1, 1}, {dist, distNew}];
failPoint = InverseCDF[dist, c]];
Sow[{stopT, cfg}],
{i, 1, Length[configs]}
]][[2, 1]]]
Mathematica does not simplify these distributions, so it turns out to be a nested mess that takes forever to do anything with after a just few successful trials have been mixed in.
I considered creating an InterpolatingFunction of the CDF of the mixture. This worked well enough, but if I use the function to then define a ProbabilityDistribution it can not evaluate a CDF or its inverse. I cannot just use the InterpolatingFunction because I will later need to mix this distribution with the distribution of the next successful configuration.
I think that I need to use these empirical distributions to get good results, so is there
• A way to simplify mixture distributions
• A way to approximate distributions (the way a SmoothKernelDistribution does) that is still able to be evaluated and mixed agian
• A functionality in Mathematica that I am overlooking
• I think you will need to give some data driven example to give us a better idea how to help you. – Andy Ross Jul 24 '15 at 1:52
• What Andy says. Additionally, you have the problem that you have a truncated distribution, i.e., you throw away all stopping values larger than your fixed waiting time. This will skew you distribution (check e.g. the second example of the TruncatedDistribution page). As an alternative, why don't you go for a EmpiricalDistribution of all the stopping times of all configurations? InverseCDF is defined for that – Sjoerd C. de Vries Jul 24 '15 at 11:29
• @AndyRoss What kind of information should I add to be more helpful? Actual data points that I generate? Or explicit code that shows what I am trying to do? – aschankler Jul 24 '15 at 15:58
• Part of the trouble I have is understanding what you mean by "configurations". If you could give an illustrative example of what you are attempting that would probably help. Data never hurts and short, clean, code always helps. – Andy Ross Jul 24 '15 at 16:01
Though this doesn't completely answer your question it may be helpful in solving your problem. The issue is that you want to extend a KernelMixtureDistribution which isn't a particularly efficient thing to do in the built in framework. To solve this I've put together a sort of "online" KernelMixtureDistribution that lets you extend the data and add new kernel functions and bandwidths as you go.
Online KernelMixtureDistribution:
(* Helper functions to convert numbers to lists of length 1 and
pack values for faster evaluation*)
to1d[x_?NumericQ] := DeveloperToPackedArray[{x}, Real]
to1d[x_List] := DeveloperToPackedArray[x, Real]
(* Construct an online kernel mixture distribution *)
kmd[data_List, bw_?NumericQ, ker_?DistributionParameterQ] :=
With[{d = to1d[data]}, kmd[{Length[d]}, d, {ker}, {bw}]]
(* Extend the kernel mixture by giving new data *)
kmd[n_, d_, ker_, bw_][new_] :=
With[{newd = to1d[new]},
kmd[ReplacePart[n, -1 -> n[[-1]] + Length[newd]], Join[d, newd],
ker, bw]]
(* Extend with both data and a different bandwidth and kernel *)
kmd[n_, d_, ker_, bw_][new_, nbw_, nker_] :=
With[{newd = to1d[new]},
kmd[Append[n, Length[newd]], Join[d, newd], Append[ker, nker],
Append[bw, nbw]]]
(* PDFs *)
kmd /: PDF[k : kmd[{n_}, d_, {ker_}, {bw_}], x_?NumericQ] :=
Mean[PDF[ker, (x - d)/bw]]/bw
kmd /: PDF[k : kmd[n_, d_, ker_, bw_], x_?NumericQ] :=
Mean[Join @@
InternalPartitionRagged[x - d, n], bw}]]
kmd /: PDF[k_kmd, x_List] := PDF[k, #] & /@ x
(* CDFs *)
kmd /: CDF[k : kmd[{n_}, d_, {ker_}, {bw_}], x_?NumericQ] :=
Mean[CDF[ker, (x - d)/bw]]
kmd /: CDF[k : kmd[n_, d_, ker_, bw_], x_?NumericQ] :=
InternalPartitionRagged[x - d, n]/bw}])]
kmd /: CDF[k_kmd, x_List] := CDF[k, #] & /@ x
(* Quantiles *)
kmd /: Quantile[k : kmd[n_, d_, ker_, bw_], q_ /; 0 <= q <= 1] :=
FindArgMin[(CDF[k, \[FormalX]] - q)^2, {\[FormalX], Quantile[d, q]},
Method -> "PrincipalAxis", Evaluated -> False][[1]]
kmd /: Quantile[k_kmd, q_List] := Quantile[k, #] & /@ q
(* Formatting so the data doesn't try to display and unpack *)
Format[HoldPattern[kmd[n_, d_, ker_, bw_]], StandardForm] := kmd[n]
Examples:
Define a distribution with 6 data points, bandwidth .5 and a Gaussian kernel.
k = kmd[{-4, -3, 0, 1, 3, 6}, .5, NormalDistribution[]]
(* kmd[{6}] *)
Plot[PDF[k, x], {x, -6, 12}]
Add a single data point to the distribution at 4.5.
k = k[4.5];
Plot[PDF[k, x], {x, -6, 12}]
Add multiple points at one time.
k = k[{0, 6}];
Add four more points and switch to a different kernel and bandwidth for those points.
k = k[{9, 9, 9, 9}, 1, CauchyDistribution[0, 1]]
(* kmd[{9,2}] *)
The PDF, CDF, and Quantile can be computed. It is fairly trivial to extend to other functions that work with distributions.
CDF[k, 5]
(* 0.633271 *)
Quantile[k, .95]
(* 9.42704 *)
This doesn't pass arguments by reference. In your case, the number of points doesn't seem like a big issue so you can afford to be a little wasteful making copies of the data. If it is an issue you could probably make clever use of Bag in the Internal context so that multiple copies of the data aren't made.
Also note that you use InverseCDF in your code which is equivalent to Quantile for distributions.
• How does this not have many upvotes? +1 from me. – ciao Aug 1 '15 at 22:14
• I've been experimenting with this for a bit and it does exactly what I need. It let's me slowly build up a distribution but doesn't add huge overhead at each step or slow down excessively with many data points – aschankler Aug 6 '15 at 15:30
• Great! Glad to help. – Andy Ross Aug 6 '15 at 15:36
|
2020-09-30 20:11:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35149064660072327, "perplexity": 2292.4388081372595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402127397.84/warc/CC-MAIN-20200930172714-20200930202714-00116.warc.gz"}
|
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1072.34014
|
Language: Search: Contact
Zentralblatt MATH has released its new interface!
For an improved author identification, see the new author database of ZBMATH.
Query:
Fill in the form and click »Search«...
Format:
Display: entries per page entries
Zbl 1072.34014
Infante, Gennaro; Webb, J.R.L.
Positive solutions of some nonlocal boundary value problems.
(English)
[J] Abstr. Appl. Anal. 2003, No. 18, 1047-1060 (2003). ISSN 1085-3375; ISSN 1687-0409/e
For two 4-point BVP $$u''(t)+g(t)f(u(t))=0\quad \text{a.e. on }[0,1],$$ $$u'(0)=0,\quad u(1)=\alpha_1u(\eta_1)+\alpha_2u(\eta_2),$$ or $$u(0)=0,\quad u(1)=\alpha_1u(\eta_1)+\alpha_2u(\eta_2),$$ the authors determine a region in the $(\alpha_1,\,\alpha_2)$-plane which ensures the existence of positive solutions. Further, they conclude that one can obtain the existence of positive solutions for an $m$-point boundary value problem under the weaker assumption that all parameters occurring in the boundary conditions are not required to be positive. Hence, their results allow more general behavior on $f$ than being either sub- or superlinear.
[Ruyun Ma (Lanzhou)]
MSC 2000:
*34B10 Multipoint boundary value problems
34B18 Positive solutions of nonlinear boundary value problems
47H10 Fixed point theorems for nonlinear operators on topol.linear spaces
34B15 Nonlinear boundary value problems of ODE
Keywords: m-point boundary value problem; positive solutions; existence
Highlights
Master Server
|
2013-05-19 21:43:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6026297807693481, "perplexity": 2414.847092013383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698090094/warc/CC-MAIN-20130516095450-00083-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://crypto.stackexchange.com/help/badges/86?page=4
|
# Help Center > Badges > Custodian
Complete at least one review task. This badge is awarded once per review type.
Awarded 457 times
Awarded apr 30 '14 at 21:00 to
for reviewing Close Votes
Awarded apr 29 '14 at 9:46 to
for reviewing Low Quality Posts
Awarded apr 28 '14 at 16:05 to
for reviewing Suggested Edits
Awarded apr 10 '14 at 14:41 to
for reviewing Suggested Edits
Awarded mar 27 '14 at 0:04 to
for reviewing Site Self-Evaluation
Awarded mar 20 '14 at 20:15 to
for reviewing Suggested Edits
Awarded mar 19 '14 at 21:49 to
for reviewing Suggested Edits
Awarded mar 13 '14 at 16:04 to
for reviewing Suggested Edits
Awarded mar 12 '14 at 19:47 to
for reviewing Suggested Edits
Awarded mar 6 '14 at 13:02 to
for reviewing Suggested Edits
Awarded feb 28 '14 at 5:04 to
for reviewing Suggested Edits
Awarded feb 23 '14 at 12:44 to
for reviewing Site Self-Evaluation
Awarded feb 23 '14 at 12:44 to
for reviewing Reopen Votes
Awarded feb 23 '14 at 12:44 to
for reviewing Late Answers
Awarded feb 23 '14 at 12:44 to
for reviewing First Posts
Awarded feb 23 '14 at 12:44 to
for reviewing Low Quality Posts
Awarded feb 23 '14 at 12:44 to
for reviewing Close Votes
Awarded feb 23 '14 at 12:44 to
for reviewing Suggested Edits
Awarded feb 22 '14 at 19:42 to
for reviewing Low Quality Posts
Awarded feb 20 '14 at 11:55 to
for reviewing Suggested Edits
Awarded feb 19 '14 at 0:20 to
for reviewing Suggested Edits
Awarded feb 16 '14 at 13:16 to
for reviewing Close Votes
Awarded feb 8 '14 at 15:48 to
for reviewing Low Quality Posts
Awarded feb 8 '14 at 11:57 to
for reviewing Late Answers
Awarded feb 8 '14 at 11:57 to
for reviewing First Posts
Awarded feb 5 '14 at 11:41 to
for reviewing Close Votes
Awarded feb 2 '14 at 19:30 to
for reviewing Close Votes
Awarded jan 30 '14 at 10:31 to
for reviewing Low Quality Posts
Awarded jan 29 '14 at 21:41 to
for reviewing Late Answers
Awarded jan 27 '14 at 4:29 to
for reviewing Suggested Edits
Awarded jan 18 '14 at 19:43 to
for reviewing Reopen Votes
Awarded jan 15 '14 at 12:52 to
for reviewing Suggested Edits
Awarded jan 12 '14 at 17:25 to
for reviewing Suggested Edits
Awarded jan 12 '14 at 7:13 to
for reviewing First Posts
Awarded jan 11 '14 at 1:52 to
for reviewing Suggested Edits
Awarded jan 7 '14 at 18:26 to
for reviewing Reopen Votes
Awarded jan 6 '14 at 8:37 to
for reviewing Suggested Edits
Awarded jan 6 '14 at 7:12 to
for reviewing Reopen Votes
Awarded jan 6 '14 at 5:46 to
for reviewing Suggested Edits
Awarded jan 5 '14 at 11:54 to
for reviewing Suggested Edits
Awarded jan 5 '14 at 11:34 to
for reviewing Suggested Edits
Awarded jan 4 '14 at 0:08 to
for reviewing Late Answers
Awarded jan 2 '14 at 2:14 to
for reviewing Close Votes
Awarded dec 26 '13 at 22:29 to
for reviewing Suggested Edits
Awarded dec 26 '13 at 21:22 to
for reviewing Site Self-Evaluation
Awarded dec 26 '13 at 15:21 to
for reviewing Site Self-Evaluation
Awarded dec 25 '13 at 19:48 to
for reviewing Site Self-Evaluation
Awarded dec 21 '13 at 19:22 to
for reviewing Site Self-Evaluation
Awarded dec 21 '13 at 7:13 to
for reviewing Site Self-Evaluation
Awarded dec 21 '13 at 3:48 to
for reviewing Site Self-Evaluation
Awarded dec 19 '13 at 9:44 to
for reviewing Low Quality Posts
Awarded dec 18 '13 at 23:49 to
for reviewing Suggested Edits
Awarded dec 17 '13 at 4:50 to
for reviewing Reopen Votes
Awarded dec 16 '13 at 21:33 to
for reviewing Low Quality Posts
Awarded dec 16 '13 at 6:06 to
for reviewing First Posts
Awarded dec 16 '13 at 5:51 to
for reviewing Close Votes
Awarded dec 8 '13 at 10:34 to
for reviewing Late Answers
Awarded dec 6 '13 at 10:26 to
for reviewing Low Quality Posts
Awarded dec 6 '13 at 10:21 to
for reviewing Suggested Edits
Awarded dec 5 '13 at 16:03 to
for reviewing Suggested Edits
|
2015-08-31 02:41:17
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.871590256690979, "perplexity": 8080.044209139962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065488.33/warc/CC-MAIN-20150827025425-00272-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://piping-designer.com/index.php/properties/fluid-mechanics/127-average-force
|
# Average Force
Written by Jerry Ratzlaff on . Posted in Fluid Dynamics
Average force, abbrevated as $$\bar F$$ or $$F_a$$, is used when the instantaneous velocity is not measured precisely between two points. It uses the starting velocity, final velocity and the object's mass to determine the average force. The equation and calculator for determining the average force is below. If there is no change in velocity, the object may be in static equilibrium.
## average force formulas
$$\large{ \bar F = m \; \frac { v_f \;-\; v_i } {t} }$$ $$\large{ \bar F = m \; \frac { \Delta v } {t} }$$
### Where:
$$\large{ \bar F }$$ = average force
$$\large{ m }$$ = mass
$$\large{ t }$$ = time
$$\large{ v_f }$$ = final velocity
$$\large{ v_i }$$ = initial velocity
$$\large{ \Delta v }$$ = velocity differential
|
2020-08-14 10:55:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8487709164619446, "perplexity": 2226.0961541018623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739211.34/warc/CC-MAIN-20200814100602-20200814130602-00455.warc.gz"}
|
https://astronomy.stackexchange.com/questions/26781/generate-an-uniform-distribution-on-the-sky-between-limits
|
# Generate an uniform distribution on the sky between limits
I have an analog question of this one : Generate an uniform distribution on the sky
I generate an uniform distribution on the sky, but just a patch of it, with $$\alpha_{min} \leq \alpha \leq \alpha_{max}$$ and $$\delta_{min} \leq \delta \leq \delta_{max}$$
for Right Ascention, I just have to take uniform distribution between $\alpha_{min}$and $\alpha_{max}$.
For declination it works if I take $$\delta_{random} = \sin^{-1}\left( \mathrm{Uniform}(\sin\delta_{min},\, \sin\delta_{max}) \right)$$
My problem is that I don't know how to demonstrate this formula for the generation of delinations. Any idea how to make the proof of this formula ?
I saw in Generate an uniform distribution on the sky @RobJeffries post which gives an expression of $P(\delta)$ and demonstrates that $\delta = \sin^{-1}(2P-1)$ where $P$ is a random number between 0 and 1, which is compatible to my formula with $\delta_{min} = -\frac{\pi}{2}$ and $\delta_{max} = +\frac{\pi}{2}$, but I was not able to generalize his demonstration.
• Hi, thanks for your answer, yes I am looking for a proof. I found this formula making tests, ans it works, but I have no clue how to make a proof of it. I mean how to prove $\delta_{random} = \sin^{-1}\left( \mathrm{Uniform}(\sin\delta_{min},\, \sin\delta_{max}) \right)$ Jun 28 '18 at 12:57
|
2021-09-25 00:45:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8687947988510132, "perplexity": 144.42458893329282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057584.91/warc/CC-MAIN-20210924231621-20210925021621-00243.warc.gz"}
|
https://www.physicsforums.com/threads/confusion-with-molecular-geometry-carbon-tetrachloride.517565/
|
# Confusion with molecular geometry [carbon tetrachloride]
1. Jul 28, 2011
### sinjan.j
Confusion with molecular geometry [carbon tetrachloride]
I'm trying to understand the molecular geometry of four different compounds:
$CCl_{4} ,CHCl_{3} ,CH_{2}Cl_{2} ,CH_{3}Cl$
Please tell me whether my thinking is right or not[conceptually]. If it's wrong kindly correct me.
for $CCl_{4}$
Carbon forms 4 different single bonds with chlorine. While deciding the shape of the molecule, we have to take into consideration the lone pairs and bond pairs of carbon. Since, lone pairs of carbon don't exist, we have to make sure that the angle between the bond pairs of electron of carbon is maximum and that is possible through tetrahedral arrangement. So, the shape is tetrahedral.
for $CHCl_{3} , CH_{2}Cl_{2} , CH_{3}Cl$
Same as the above arguement, therefore tetrahedral.
Am I right?
I have another doubt. While deciding the geomerty of the molecules, why do we have to consider the electrons of the valence shell of the central atom only, and why not the electrons of the valence shell of the peripheral atoms?
2. Jul 28, 2011
### Yanick
Yes you are correct.
You don't have to, it just doesn't make sense to talk about the geometry about the chlorine in carbon tetrachloride. Water adopts a bent geometry because of the lone pairs on the Oxygen, we don't really talk about the geometry around the Hydrogens.
When thinking about molecular geometry, you are trying to systematically model what a molecule may look like in space, hence we say that water adopts a bent geometry instead of saying that the geometry around the Hydrogen (in water) is X.
Hope that helps.
3. Jul 28, 2011
### sinjan.j
Thank You. got it.
|
2018-06-24 01:59:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42499715089797974, "perplexity": 1187.5279317711609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865995.86/warc/CC-MAIN-20180624005242-20180624025242-00456.warc.gz"}
|
https://cs.stackexchange.com/questions/55889/showing-that-an-algorithm-is-2-approximation
|
# Showing that an algorithm is 2-approximation
I'm having trouble showing that this algorithm is 2-approx.
We are given a set P of n points on the plane, and a positive integer k. We want to partition these points into k sets such that the largest distance between any two points which belong to the same part is minimized. Show that the following is a 2-factor approximation algorithm.
Set S = ∅.
For i = 1, . . . , k do
Select the farthest point from S and add it to S.
EndFor
Put each point p ∈ P in the same part as the closest point in S to p.
I honestly don't know where to start. I can see that after each iteration, the distance between any point in P and the points in S become smaller. But after that I can't really see any good observations.
• Revisiting the course material should give you an idea about where to start. Then flesh out your question. – Raphael Apr 13 '16 at 0:11
The place to start is at the simplest case: $k = 2$. Suppose that the points in $S$ are $x,y$. Consider some optimal solution of value $1$. We consider two cases.
• $x,y$ belong to the same cluster. Thus $d(x,y) \leq 1$. Since $y$ is the farthest point from $x$, all other points are at distance at most $1$ from $x$, and in particular at distance at most $1$ from $\{x,y\}$. This means that in the clusters generated by the algorithm, each point is at distance at most $1$ from its base (either $x$ or $y$), and so the diameter of each cluster is at most $2$.
• $x,y$ belong to different clusters. In this case, since the optimal solution has value $1$, each point other than $x,y$ is within distance $1$ of either $x$ or $y$ (the one which shares cluster with it). This means that in the clusters generated by the algorithm, each point is at distance at most $1$ from its base, and so again the diameter of each cluster is at most $2$.
See if you can generalize these ideas for $k > 2$.
|
2020-02-26 23:33:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.876594066619873, "perplexity": 102.57378812954987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146562.94/warc/CC-MAIN-20200226211749-20200227001749-00336.warc.gz"}
|
http://siddhartha-gadgil.github.io/
|
# Andrews-Curtis: First Run
Finally having run something in ProvingGround involving the main learning method, here are some observations.
## Simple Andrews-Curtis run
Starting with an initial distribution, evoution took place with the basic Andrews-Curtis moves. I ran two loops with two steps each, and then resumed and ran 3 more loops. The raw results are at first run results
## Conclusions
• In terms of the highest weights, the method worked well even with such a short run. For example, the presentation $\langle a, b; \bar{a}, b\rangle$ was one of those with high weight.
• In general, the order was according to expected behaviour.
• However, virtually all theorems were remembered.
• This stems from two weaknesses:
• two high a weight for theorems relative to proofs, leading to memos favoured over deduction.
• not enough methods of deduction, specifically exlusion of multiplication-inversion moves.
• These can be corrected by:
• adding more moves: multiplication-inversions, transpositions; or allowing composite moves.
• lowering the probability of word-continuation in generating presentations (just a runtime parameter).
# Lambda Islands
## A single lambda island
• We are given a type $T$.
• Generate a new variable $x : T$. The result and gradient do not depend on $x$ (upto equality).
• We have an inclusion map $i_x: FD(V) \to FD(V)$. This is almost the identity, except the gradient purges the weight of $x$.
• From this we get the initializer on distributions, introducing $x$ with weight $p(m)$ with $m$ the $\lambda$-parameter and scaling the rest by $1 - p(m)$.
• It is probably easiest to directly define the initializer, defining its application and gradient.
• We compose the initializer with a given dynamical system.
• Finally, export by mapping $y$ to $x \mapsto y$. This is composed with the initializer and the given dynamical system.
## Combining lambda islands.
• Given an initial distribution, we have weights $p(T)$ associated to each type $T$.
• Given a dynamical system $f$ and type $T$, we have a new system $\Lambda_T(f)$.
• We take the weighted sum of these, i.e., $\Lambda(f) = \sum_T w(T) \Lambda_t(f)$.
## Using recurrence
• Given a dynamical system, generate a family $g_k$ indexed by natural numbers $k$ corresponding to iterating $2^k$ times.
• Mix in lambda to get a new family $f_k$ with $f_k = g_k + \Lambda(f_{k-1})$, with the last term a priori vanishing for $k \leq 0$.
# Dynamical Actors
## Context
• Dynamical system $f: X\to X$ depending on parameters.
• Goals to check.
## Worker
• Called with state, parameter,iterations, goals.
• Iterates until success or loops completed.
## Database
• commits
• solutions: commit, goal.
• goals
### Commits
• State
• Hash-tag : string representing hex code of hash-tag
• Ancestor
• Meta-data: who committed, why.
## Hub
• Maintains all workers and communication with outside. Messages (and a few similar ones).
• For simplicity, assume that pausing on success is independent of actor, and pausing on query is part of the query string.
### Callback on success
• Commit the state.
• Record that the commit contains a success.
• Possibly update goals removing those attained.
• SSE log the goals.
## Channels
• from interface: post JSON to server.
• to interface: send SSE with type used for a switch to decide action.
# Feedbacks
## Feedbacks
• A feedback is a function $f: X \to X$, typically depending on parameters, but which is not differentiable (i.e., no gradient).
## Combinators
• Linear Structure: We can define a linear structure on A => B given one on B. This does not collide with that on differentiable functions as linear structures are not covariant.
• Conjugation: Given a differentiable function $f: X \to Y$ and a (feedback) function $g: Y\to Y$, we can define a conjugate function $g^f: X\to X$ by
• This behaves well with respect to composition of differentiable functions and linear combinations of feedbacks.
• Partial conjugation: Sometimes the feedback also depends on $x \in X$, so we have $g: X\to Y$. We are then back-propagating $g$ using $f$.
## Matching distributions:
• Given a target distribution on $Y$, we get a feedback towards the target.
• Given a distribution on $X$ and $f: X\to Y$, we have a pushforward distribution. We get an image feedback.
• To get a feedback on $X$, we need to distribute among all terms with a given image proportional to the term, or depending only on the coefficient of the image image.
• Question Can we define this shift on $X$ in one step correctly?
• Gradient view: Just view $f$ as a differentiable function and back-propagate by the gradient. This shifts weights independently.
## Terms matching types
• We compare the distribution of terms that are types with the map from terms to types.
• The difference between these gives a flow on types.
• We back-propagate to get a flow on terms.
## Approximate matches
• We can filter by having a specific type, and then an error matching having a refined type (after some function application).
• For a given term, we get the product of its weight with an error factor.
• In terms of combinators, we replace inclusion by a weighted inclusion, or a product of the inclusion with a conformal factor. We need a new basic differentiable function.
## Blending feedbacks.
• As in the case of matching types, suppose
• $X$ is a finite distribution.
• various feedbacks are based on atoms.
• we have weights for the various feedbacks.
• We give more credit for feedback components attained by fewer terms.
• This is achieved by sharing the feedback coming from one component, instead of just giving gradient
• The terms representing types feedback is a special instance of this.
# Blending Pruning and Backprop
## The problem
• One way to evolve a system is the genetic algorithm style, where we consider the fitness of the final elements and prune based on this. This however has trivial credit assignment.
• At the other extreme is a pure back-propogation. This has nice credit assignment, but done directly the support never changes.
## Solution
• Firstly, there must be an identity term in the evolution, so credit can go to objects just persisting.
• Note that while the result of a big sum does not depend on terms that are zero, specifically $\mu(v)f$ with $\mu(v) = 0$, the gradient still depends on such terms. Specifically we can get a flow back to add weight to $v$
• We decouple the big sum component, so that when we apply to an argument which is a distribution, we do not just take a set of functions depending on that distribution. We can instead take a specified support.
• We take a big sum over all generated elements, to allow some of them to acquire weight, while computing the gradient.
• We should periodically prune by weight, but only after flowing for long enough to allow for picking up weight.
## Code
• We consider differentiable functions that also depend on a subset of V, by taking linear combinations including bigsums that are the union of the support of a distribution with the given subset.
• For propagation, we just take an empty subset of V.
• Once we have the image, we take its (essential) support as the given subset of V. We recompute the differentiable function. Note that the value is unchanged.
• We use the gradient of the new function to back-propagate.
• This gives a single loop, which we repeat to evolve the system.
• Purge after looping for enough time.
## Using the identity
• The gradient of the identity does not depend on any form of a priori support.
• Hence the gradient of a persistence term, i.e., identity multiplied by a weigh, attributes weight to everything in the final distribution.
• Note: This lets us avoid the above complications.
# Differentiable Function Combinators
## Basic functions and Combinators (all implemented)
We build differentiable functions for our basic learning system using some basic ones and combinators:
• compositions (in class)
• sums and scalar products of differentiable functions. (done)
• product of a real valued and a vector valued function - the subtlest case (done).
• identity function (done).
• projections on (V, W) (done)
• inclusions to (V, W) (dome)
• evaluation of a finite distribution at a point (done).
• atomic distribution as a function of weight (done).
• point-wise multiplication of a finite distribution by a given function (done).
• sum of a set of functions, with even the set depending on argument (done).
• this can be interpreted as the sum of a fixed set of functions, but with all but finitely many zero.
• repeated squaring $k$ times, with $k=0$ and $k<0$ cases (done).
• recursive definitions for families indexed by integers - generic given zero. Done (for vector spaces).
## Derived from these
• (optional) moves
• (optional) pairings
• linear combinations.
## Convenience code
• Have implicit conversion from
• a type T implicitly depending on a linear structure on T to have methods ++ and *:
• DiffbleFunction[V, V] to a class with a multiplication method **:
### To Do:
• Construct differentiable functions for Finite Distributions - atom, evaluate and point-wise product.
# Gradient of Product
A subtle differentiable function is the scalar product,
for a vector space $V$.
## Derivative
By Liebniz rule, the total derivative is
Note that depends on the vector space structure on $(\mathbb{R}, V)$. For a vector $w$ in $V$, note that we can write
By definition of gradient (as adjoint derivative), we have
## Code
We need an additional implicit structure with combinators:
# Linear Structure, Differentiable Functions (Implicit) Combinators
The best way to build differentiable functions for the typical learning system in function-finder is using combinators. At present there is some ad hoc version of this. The better way is to use linear structures systematically.
## Linear structures
Linear Spaces can be built with
• Real numbers - Double
• Finite Distributions have ++ and *
• Pairs of linear spaces
• Differentiable functions between linear spaces
• (Eventually) Maps with values in Linear spaces - for representation learning.
### To Do:
• Add a field for the zero vector (done).
• Create a linear structure for differentiable functions(done).
• Have functions that implicitly use linear structures(done).
## More vector structures
In addition, we need inner products for computing certain gradients as well as totals for normalization. These are built for
• Finite distributions
• pairs
• Real numbers
• (Eventually) Maps with co-domain having inner products and totals.
## Differentiable functions
Differentiable functions are built from
• Composing and iterating differentiable functions.
• (Partial) Moves on X giving FD(X) -> FD(X)
• (Partial) Combinations on X giving FD(X) -> FD(X)
• Co-ordinate functions on FD(X).
• Inclusion of a basis vector.
• Projections and Inclusions for pairs.
• Scalar product of differentiable functions, $x\mapsto f(x) * g(x)$ with $f: V \to \mathbb{R}$ and $g: V\to W$.
• The sum of a set of differentiable functions X \to Y, with the set of functions depending on $x : X$.
From the above, we should be able to build.
• Islands: Recursive definitions in terms of the function itself - more precisely a sequence of functions at different depths (in a more rational form).
• Multiple islands: A recurrence corresponding to a lot of island terms.
### To Do:
• Clean up the present ad hoc constructors replacing them by combinators.
## Islands
• After adding an island, we get a family of differentiable functions $f_d$ indexed by depth.
• This can be simply viewed as a recursive definition.
• Given a function $g$, we get a function at depth $d$ by iterating $2^d$ times by repeated squaring.
• Determined by:
• An initial family of differentiable functions $g_d$, corresponding to iterating $2^d$ times.
• A transformation on differentiable functions $f \mapsto L(f)$.
• Recurrence relation:
• $f_d = g_d + L(f_{d-1})$, for $d \geq 0$, with $f_{-1} = 0$.
• Here $L(f) = i \circ f \circ j$.
• When $d=0$, the recurrence relation just gives just gives $id = id$.
• Then $d=1$, we get $f_1 = g_1 + L(id)$.
## Multiple Islands
• We have a set of islands associated to a collection of objects, with a differentiable function associated to each.
• We can assume the set to be larger, with functions for the rest being zero. So the set is really an a priori bound.
• We get a recurence relation:
• $f_d(\cdot) = g_d(\cdot) + \sum_v i_v(\cdot) * L_v(f_{d-1}(\cdot)).$
• Here $i_v$ is evaluation at $v$, for a pair of finite distributions.
# Quasi-literate Programming
This blog will now have various comments, often technical, concerning the issues I face while working, mainly programming for Automated theorem proving.
|
2018-12-19 07:47:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8834912776947021, "perplexity": 1636.8754116953357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376831715.98/warc/CC-MAIN-20181219065932-20181219091932-00538.warc.gz"}
|
http://openstudy.com/updates/4f331a3ee4b0fc0c1a0b57c1
|
## snowballhockey 2 years ago 12/4= /5 wat is the blank
1. UnkleRhaukus
15
2. Preetha
UnkleRhaukus, it may help her if you explained how you arrived at that solution.
3. Directrix
12/4 = x / 5 4x = 60 x = 15
4. Preetha
And does that make sense Snowball? Are you able to do one on your own? Thanks Directrix.
5. snowballhockey
i think u multiply 12 and 5 and then get answer that doesn' tmake sence
6. Preetha
Ok, I see. When you have an equation, you can do the same thing on both sides, it remains an equation. So if you add 10 to the right hand side, you can add 10 to the left hand side and it will still be an equation. With me?
7. Preetha
Now, we are trying to find out what is the number that is above 5, so we represent the blank by a x.
8. Preetha
The next step is to simplify it. We want all the terms with x on one side of the = and all the terms without x on the other side.
9. MSMR
I would cross multiply.
10. snowballhockey
i am with msmer
11. UnkleRhaukus
$12/4=3$ $x/5=3$$x=15$
12. Preetha
Multiply both sides by 5. On the left you have, 5 x (12/4) and on the other side you have 5 x (x/4) What do you get then ? What happens when you multiply 5 into x/4
13. MSMR
This means: $\frac{12}{4} = \frac{x}{5}$ To cross multiply, you multiply the numerator of one fraction with the denominator of the other fraction.
14. Preetha
I like Unkle's solution as well.
15. MSMR
5*12 = 4*x
16. UnkleRhaukus
why do it the hard way for?
17. MSMR
60 = 4x
18. MSMR
x = 15
19. snowballhockey
GIVE ME THE EASY WAY ONE STEP AT A TIME OK DRAW IT
20. Preetha
Ok, snowball, go back to my last post, Are you with me?
21. snowballhockey
no draw it
22. MSMR
unklerhaukus: for this particular problem, your solution would be easier because 12/4 simplifies to 3. However, in a case such as 13/4 where it does not simplify into a nice number, cross multiplying will be much easier - I thought it would be good to show snowballhockey how to set that up.
23. Preetha
|dw:1328749514080:dw|
24. MSMR
|dw:1328749565365:dw|
25. Preetha
MSMR's does look better.
26. snowballhockey
i getthe 60 part then after that u simplfy that with 5
27. Preetha
Yes.
28. Preetha
So you have three ways to do this. All are correct. You need to understand what Unkle did, what MSMR did and what I showed you. Can you try one on your own? We will help
29. snowballhockey
um ok
30. Not the answer you are looking for?
Search for more explanations.
Search OpenStudy
|
2014-12-19 03:24:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8708786368370056, "perplexity": 1506.0153273760764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768197.70/warc/CC-MAIN-20141217075248-00054-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://lewtun.github.io/blog/research/conference/2020/07/31/icml2020.html
|
This year I had the opportunity to attend the International Conference on Machine Learning (ICML) and decided to highlight some of the talks I found especially interesting. Although the conference was hosted entirely online, this provided two key benefits over attending in person:
• Clash resolution: with 1,088 papers accepted, it is inevitable that multiple talks of interest would clash in the timetable. Watching the pre-recorded presentations in my own time provided a simple solution, not to mention the ability to quickly switch to a new talk if desired.
• Better Q&A sessions: at large conferences it is not easy to get your questions answered directly after a talk, usually because the whole session is running overtime and the moderator wants to move onto the next speaker. By having two (!) dedicated Q&A sessions for each talk, I found the discussions to be extremely insightful and much more personalised.
Since I'm resigned to being in quarantine until 2050, I hope other virtual conferences will adopt a similar format. Conference highlights are below!
## Transformers
### Generative Pretraining from Pixels
Predicting the next pixel with a GPT-2 scale model yields high quality representations. The best representations lie in the middle of the network.
This talk showed that with enough compute, it is possible to adapt transformer architectures to images and achieve strong results in self-supervised learning benchmarks. Dubbed iGPT, this approach relies on a three-step process:
1. Downsize the images, cluster the RGB pixel values to create a 9-bit colour map, and reshape to 1D.1
2. Pre-train on either an autoregressive next pixel or masked pixel prediction task.
3. Evaluate the quality of the learned representations on downstream tasks.
One surprising result of the linear probe2 experiments is that representation quality tends to be highest in the middle of the network.
I think this work provides a compelling example of Sutton's "bitter lesson"
Early methods conceived of vision as searching for edges, or generalized cylinders, or in terms of SIFT features. But today all this is discarded. Modern deep-learning neural networks use only the notions of convolution and certain kinds of invariances, and perform much better.
but takes it one step further by discarding knowledge of the 2D structure in images entirely!
Although the iGPT models are 2-30 times larger than ResNet-152, I expect it is only a matter of time before people find ways to make this approach more efficient. In the meantime, it's nice to see that the pre-trained models have been open-sourced and a port to HuggingFace's transformers library is already underway.
### Retrieval Augmented Language Model Pre-Training
Augmenting language models with knowledge retrieval sets a new benchmark for open-domain question answering.
I liked this talk a lot because it takes a non-trivial step towards integrating world knowledge into language models and addresses Gary Marcus' common complaint that data and compute aren't enough to produce Real Intelligence™.
To integrate knowledge into language model pretraining, this talk proposes adding a text retriever that is learned during the training process. Unsurprisingly, this introduces a major computational challenge because the conditional probability now involves a sum over all documents in a corpus $\mathcal{Z}$:
$$p(y|x) = \sum_{z\in \mathcal{Z}} p(y|x,z)p(z)\,.$$
To deal with this, the authors compute an embedding for every document in the corpus and then use Maximum Inner Product Search algorithms to find the approximate top $k$ documents. The result is a hybrid model that significantly outperforms other approaches in open-domain question answering.
### Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
A clever choice of kernel reduces the computational complexity of attention from $O(N^2)$ to $O(N)$. Generate images 4000x faster than vanilla transformers .
It's refreshing to see a transformer talk that isn't about using a "bonfire worth of GPU-TPU-neuromorphic wafer scale silicon"4 to break NLP benchmarks. This talk observes that the main bottleneck in vanilla transformer models is the softmax attention computation
$$V' = \mathrm{softmax} \left(\frac{QK^T}{\sqrt{D}} \right) V$$
whose time and space complexity is $O(N^2)$ for sequence length $N$. To get around this, the authors first use a similarity function to obtain a generalised form of self-attention
$$V_i' = \frac{\sum_j \mathrm{sim}(Q_i, K_j)V_j}{\sum_j \mathrm{sim}(Q_i, K_j)}$$
which can be simplified via a choice of kernel and matrix associativity:
$$V_i' = \frac{\phi(Q_i)^T\sum_j\phi(K_j)V_j^T}{\phi(Q_i)^T\sum_j\phi(K_j)}\,.$$
The result is a self-attention step that is $O(N)$ because the sums in the above expression can be computed once and reused for every query. In practice, this turns out to be especially powerful for inference, with speed-ups of 4000x reported in the talk!
The authors go on to show that their formulation can also be used to express transformers as RNNs, which might be an interesting way to explore the shortcomings of these large langauge models.
### XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalisation
A new benchmark to test zero-shot cross-lingual transfer from English to 39 diverse languages.
In this talk, the authors introduce the XTREME benchmark to evaluate the ability of multilingual representations to generalise across 40 languages and 9 tasks. To evaluate a model in XTREME, the main idea is to follow a three-stage recipe:
1. Pre-train on a large corpus of multilingual text.
2. Fine-tune on English data for each task.
3. Evaluate the model on zero-shot transfer performance, e.g. evaluate the accuracy on a German text classification task.
English is chosen for fine-tuning because it's the langauge with the most labelled data, and the authors employ a neat trick using Google Translate to generate proxy test sets for the tasks where a pre-existing translation does not exist.
Although not strictly about Transformers, the baseline models for this benchmark are all variants of the Transformer architecture, and the authors find that XLM-R achieves the best zero-shot transfer performance across all languages in each task. What I especially like about XTREME is that the tasks are designed to be trainable on a single GPU for less than a day. This should make it possible for research labs with tight budgets to create competitive models, where the gains in performance are likely to come from architectural design rather than simply scaling-up the compute.
I'm excited about this benchmark because I expect it will produce models that have a direct impact on my professional work in Switzerland. With four national languages and a smattering of English, building natural language applications that serve the whole population is a constant challenge.
## Time series
### Set Functions for Time Series
High-performance classification for multivariate, irregularly sampled time series.
Time series seems to be the neglected child of machine learning research, so I was excited to see a talk that combines a lot of cool ideas like Deep Sets, attention, and positional encodings in a new architecture. The motivation for this work is based on the observation that:
• Imputation techniques for sparse or irregularly sampled time series introduce bias or don't make sense at all.5
• Many time series of practical interest are multivariate in nature, and often with unaligned measurements
The authors note that for time series classification tasks, the order of input measurements is not important and thus one can reframe the problem as classifing a set of observations. By representing each observation as a tuple $(t_i, z_i, m_i)$ of timestamp $t_i$, observation $z_i$ and indicator $m_i$, an entire time series can be written as
$$\mathcal{S} = \{(t_1,z_1,m_1), \ldots , (t_M, z_M, m_M) \}$$
The goal is then to learn a function $f: \mathcal{S} \to \mathbb{R}^C$ which the authors do via the Deep Sets approach to obtain a highly-scalable architecture. One aspect I especially liked in this talk is the use of attention to visualise which observations contributed to the model output.
In industry it is quite common for domain experts to have a different mental model on how to interpret the predictions from your model, and visualisations like these could be really handy as a common discussion point. I'm quite excited to see if I can use this approach to tackle some thorny time series problems at work!
### Interpretable, Multidimensional, Multimodal Anomaly Detection with Negative Sampling for Detection of Device Failure
A new unsupervised anomaly detection algorithm for IoT devices.
This talk proposes a new technique to distinguish "normal" from "abnormal" events in streams of telemetry data from IoT devices. Like almost every real-world anomaly detection problem, one rarely has training data with labelled anomalies.6
The main novelty in this talk is a method to deal with the lack of labels by framing the problem as a binary classification task, where one class contains positive (mostly "normal") samples while the other contains negative samples that are supposed to represent the space of anomalies. A sample ratio parameter $r_s$ controls the ratio of negative to positive sample sizes and acts as a sort of hyperparameter or threshold that is tuned.
Although this method will generate false positive and false negative labelling errors, the author notes that the former are rare (by definition) and the latter decay exponentially for high-dimensional time series. Once the "labelled" dataset is created, it is then a simple matter to train a classifier and the talk notes that both neural nets and random forests perform comparably well.
One really neat aspect of this work is that it also introduces a novel way to interpret anomalies for root-cause analysis. The aim here is to figure out which dimensions contribute most to an anomaly score and the talk proposes a method based on integrated gradients. Here the basic idea is to identify which dimensions of the time series must be changed to transform an anomalous point into a normal one.
I think the methods in this paper can have a direct impact in my day job and I'm interested to see how it performs on the challenging Numenta Anomaly Benchmark. Since the code is open-sourced, this will be a nice weekend project!
## Physics
### Learning to Simulate Complex Physics with Graph Networks
A single architecture creates high-fidelity particle simulations of various interacting materials.
I'm a sucker for flashy demos and this talk from DeepMind didn't disappoint. They propose an "encode-process-decode" architecture to calculate the dynamics of physical systems, where particle states are represented as graphs and a graph neural network learns the particle interactions.
During training, the model predicts each particle's position and velocity one timestep into the future, and these predictions are compared against the ground-truth values of a simulator. Remarkably, this approach generalises to thousands of timesteps at test time, even under different initial conditions and an order of magnitude more particles!3
I think this work is a great example of how machine learning can help physicists build better simulations of complex phenomena. It will be interesting to see whether this approach can scale to systems with billions of particles, like those found in dark matter simulations or high-energy collisions at the Large Hadron Collider.
1. Downscaling is needed because naively training on a $224^2 \times 3$ sequence length would blow up the memory of the largest TPU!
2. A linear probe refers to using the model as a feature extractor and passing those features through a linear model like logistic regression.
3. The authors ascribe this generalisation power to the fact that each particle is only aware of local interactions in some 'connectivity radius', so the model is flexible enough to generalise to out-of-distribution inputs.
4. Quote from Stephen Merity's brilliant Single Headed Attention RNN: Stop Thinking With Your Head.
5. For example, in a medical context where a patient's vitals may only be measured if the doctor orders a test.
6. And even if you did, supervised approaches tend to experience 'model rot' quite quickly when dealing with vast streams of data.
|
2022-05-28 16:13:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48081937432289124, "perplexity": 1160.364670990423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016949.77/warc/CC-MAIN-20220528154416-20220528184416-00711.warc.gz"}
|
https://math.stackexchange.com/questions/1615593/a-topological-space-is-compact-iff-each-infinite-subset-has-a-complete-accumulat
|
# A topological space is compact iff each infinite subset has a complete accumulation point
This is based on the comments and answers provided in this post. However, I have some questions on the proof and the hint given in Kelleys book p.163. I will highlight the hint of the book. My own thoughts will be preceded by a dot.
If $X$ is not compact choose an open cover $A$ with no finite subcover such that the cardinal number $c$ of $A$ is as small as possible.
• This is possible since assuming $X$ not to be compact there is at least one open cover $\mathcal{O}$ without finite subcover.
• If one then considers the non empty set $\mathfrak{M}$ of all coverings without finite subcovers, the well-ordering principle implies the existence of $A$ as above, i.e. an element of $\mathfrak{M}$ with minimal cardinality.
Let $C$ be a well-ordered set of cardinal $c$ such that the set of predecessors of each member has a cardinal less than $c$ (It is shown in the appendix that $c$ is such a set.)
• If we take the ordinal number corresponding to the cardinality $c$ this should be satisfied.
Let $f$ be a one-to-one map of $C$ onto $A$. Then for each member $b$ of $C$ the union $$\left[U_b :=\right]\quad \bigcup \, \{f(a):a<b\}$$ does not cover $X$
• because if it did, there would exist an cover with cardinality strictly less than $c$.
and, in fact, the complement of this union must have cardinal number at least as great as $c$.
• Why?
It is therefore possible to choose $x_b$ from the complement such that $x_a \neq x_b$ for $a < b$.
• Why?
Consider the set of all $x_b$.
• Denote this set with $M$. It is infinite because for each Union $U_b$ there exists $x_b \in M$ with $x_a \neq x_b$ for $a < b$. This implies $|M| = c$.
• To show that $M$ has no limit point, consider $x \in X$. The covering property gives $O_x \in A$ such that $x \in O_x$. Then the following calculation should hold $$|M \cap O_x | \leq | \{x_b \mid b \leq a \} | \leq | a | < c = |M|.$$
• Consequently $M$ has no limit point.
Did I make any mistakes so far? Any suggestions on the points of the proof that still remain unclear to me?
• The fact that $c$ exists just follows by the fact that corresponding to the set of the covers (without a finite subcover) we have a set of their cardinal numbers (by the axiom of replacement) which are in particular ordinal numbers, so this non-empty set of cardinals has a minimum. But maybe this is what you meant. – Henno Brandsma Jan 17 '16 at 18:54
For $b \in C$ let $A_b := \{ f(a) : a < b \}$ (so that $U_b = \bigcup A_b$).
• I'll back up a bit, and give a fairly full justification for why $A_b$ cannot cover $X$.
If $A_b$ covers $X$ (so that $U_b = X$), since $A_b$ is a subfamily of $A$ it must be that $A_b$ does not have a finite subcover (otherwise $A$ would). But the choice of the well-ordered set $C$ implies that $| A_b | < c$, which contradicts the choice of the cardinal $c$! Therefore $A_b$ cannot cover $X$.
• If $| X \setminus U_b | < c$, then for each $x \in X \setminus U_b$ pick any $V_x \in A$ containing $x$. Then $A_b \cup \{ V_x : x \in X \setminus U_b \}$ covers $X$ and is a subfamily of $A$. Similarly to the above, this cannot have a finite subcover, and it has cardinality $< c$, which then contradicts our choice of $c$. Therefore $| X \setminus U_b | \geq c$.
• Given any $b \in C$, as $\{ x_a : a < b \}$ has cardinality $< c$ and as $| X \setminus U_b | \geq c$, it follows that $X \setminus ( U_b \cup \{ x_a : a < b \} )$ is nonempty, and so we may choose $x_b$ from this set arbitrarily to be distinct from $x_a$ for $a < b$.
• Note that the set $M = \{ x_b : b \in C \}$ has cardinality $c$. It has no complete accumulation point because for each $x \in X$ if $b \in C$ is least such that $x \in U_b$, then $U_b$ is an open neighbourhood of $x$ and $U_b \cap M \subseteq \{ x_a : a < b \}$, and so has cardinality $<c$.
|
2019-05-24 05:43:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9852535724639893, "perplexity": 109.42791786878705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257514.68/warc/CC-MAIN-20190524044320-20190524070320-00078.warc.gz"}
|
http://www.physicsforums.com/showthread.php?p=4216847
|
## Bochner-Weitzenbock formula (-> Laplacian)
Hi! I'm trying to understand a proof for the Bochner-Weitzenbock formular. I'm sorry I have to bother you with such a basic question but I've worked at this for more than an hour now, but I just don't get the very first step, i.e.:
$-\frac{1}{2} \Delta |\nabla f|^2 = \frac{1}{2} \sum_{i}X_iX_i \langle \nabla f, \nabla f \rangle$
Where we are in a complete Riemannian manifold, $f \in C^\infty(M)$ at a point $p \in M$, with a local orthonormal frame $X_1, ..., X_n$ such that $\langle X_i, X_j \rangle = \delta_{ij}, D_{X_i}X_j(p) = 0$, and of course
$\langle \nabla f, X \rangle = X(f) = df(X)$
$\textrm{Hess }f(X, Y) = \langle D_X(\nabla f), Y \rangle$
$\Delta f = - \textrm{tr(Hess )}f$
I've tried to use the Levi-Civita identities, but I'm getting entangled in these formulas and don't get anywhere.
Any help is appreciated.
PhysOrg.com science news on PhysOrg.com >> New language discovery reveals linguistic insights>> US official: Solar plane to help ground energy use (Update)>> Four microphones, computer algorithm enough to produce 3-D model of simple, convex room
I got it now :)
Blog Entries: 9 Recognitions: Homework Help Science Advisor You may try to post a solution/sketch of solution for the one interested. That would be nice of you.
## Bochner-Weitzenbock formula (-> Laplacian)
Sorry, i didn't notice the post. In case anyone ever finds this through google or the search function, here it is:
$-\frac{1}{2} \Delta\|\nabla f\|^2 = \frac{1}{2}\text{tr}(\text{Hess}(\langle \nabla f, \nabla f \rangle ))$
$= \frac{1}{2}\sum_{i=1}^n \langle \nabla_{X_i} \text{grad}\langle \nabla f, \nabla f \rangle, X_i\rangle$ (<- these are the diagonal entries of the representation matrix)
$= \frac{1}{2}\sum_{i=1}^n X_i \langle \text{grad}\langle \nabla f, \nabla f\rangle, X_i\rangle - \langle \text{grad}\langle \nabla f, \nabla f\rangle, \nabla_{X_i} X_i\rangle$ (where the second summand is zero)
$= \frac{1}{2} \sum_{i=1}^n X_i X_i \langle \nabla f, \nabla f\rangle$
Similar discussions for: Bochner-Weitzenbock formula (-> Laplacian) Thread Forum Replies Calculus & Beyond Homework 10 Set Theory, Logic, Probability, Statistics 1 Calculus 1 Calculus & Beyond Homework 0 Calculus 9
|
2013-06-18 21:45:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7511853575706482, "perplexity": 1575.9258230425598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707434477/warc/CC-MAIN-20130516123034-00032-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://www.maa.org/publications/periodicals/convergence/mathematical-treasures-van-heuraets-rectification-of-curves?device=mobile
|
# Mathematical Treasures - Van Heuraet's Rectification of Curves
Author(s):
Frank J. Swetz and Victor J. Katz
This is the title page of the brief work On the Transformation of Curves into Straight Lines, by Hendrick van Heuraet (1634 - 1660), published in the 1659 Latin edition of Descartes's Geometry, edited by van Schooten. Although van Heuraet was not the first to accomplish a rectification, a task that Descartes had said could not be done, this is the first publication of a general procedure, a procedure very close to our standard calculus procedure for finding the length of a curve.
On these two pages, van Heuraet describes his general procedure for rectification, one which tranforms the length into an integral, that is, the area under a curve. He then illustrates the procedure by calculating the length of the semi-cubical parabola, y2 = x3/a. (We can take a = 1 for simplicity.) Note that since the procedure for finding arc length involved first finding dy/dx (or the tangent to the curve), van Heuraet accomplishes this by using Descartes's normal method and Hudde's rule for finding a double root. Note also that van Heuratet uses Descartes's symbol for "equal" rather than our modern equal sign.
On this page, van Heuraet completes his calculation, noting that the answer is found by determining the area under the parabola. He further notes that he could also determine the lengths of the curves y4 = x5, y6 = x7, and so on. Finally, he shows that to find the length of a parabola he needs to be able to find the area under a hyperbola.
|
2015-04-27 13:59:45
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8599697351455688, "perplexity": 1282.7807891158627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246658376.88/warc/CC-MAIN-20150417045738-00031-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://golem.ph.utexas.edu/category/2021/03/can_we_understand_the_standard_1.html
|
## March 31, 2021
### Can We Understand the Standard Model Using Octonions?
#### Posted by John Baez
I gave two talks in Latham Boyle and Kirill Krasnov’s Perimeter Institute workshop Octonions and the Standard Model.
The first talk was on Monday April 5th at noon Eastern Time. The second was exactly one week later, on Monday April 12th at noon Eastern Time.
Here they are…
Can we understand the Standard Model? (video, slides)
Abstract. 40 years trying to go beyond the Standard Model hasn’t yet led to any clear success. As an alternative, we could try to understand why the Standard Model is the way it is. In this talk we review some lessons from grand unified theories and also from recent work using the octonions. The gauge group of the Standard Model and its representation on one generation of fermions arises naturally from a process that involves splitting 10d Euclidean space into 4+6 dimensions, but also from a process that involves splitting 10d Minkowski spacetime into 4d Minkowski space and 6 spacelike dimensions. We explain both these approaches, and how to reconcile them.
Can we understand the Standard Model using octonions? (video, slides)
Abstract. Dubois-Violette and Todorov have shown that the Standard Model gauge group can be constructed using the exceptional Jordan algebra, consisting of 3×3 self-adjoint matrices of octonions. After an introduction to the physics of Jordan algebras, we ponder the meaning of their construction. For example, it implies that the Standard Model gauge group consists of the symmetries of an octonionic qutrit that restrict to symmetries of an octonionic qubit and preserve all the structure arising from a choice of unit imaginary octonion. It also sheds light on why the Standard Model gauge group acts on 10d Euclidean space, or Minkowski spacetime, while preserving a 4+6 splitting.
You can see all the slides and videos and also some articles with more details here.
Posted at March 31, 2021 10:20 PM UTC
TrackBack URL for this Entry: https://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/3310
### Re: Can We Understand the Standard Model Using Octonions?
Typo in the second slide deck: On slide 15, “10” should be “10d” twice.
Posted by: Blake Stacey on April 1, 2021 5:02 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
And on the summary slide of the second deck, the algebra for the complex qubit should be $\mathfrak{h}_2(\mathbb{C})$ instead of $\mathfrak{h}_2(\mathbb{O})$.
Posted by: Blake Stacey on April 1, 2021 7:43 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
Thanks very much — I’ll fix these!
I want to improve my second batch of slides.
Right now they don’t explain why the subgroup of automorphisms of $\mathfrak{h}_3(\mathbb{O})$ preserving a copy of $\mathfrak{h}_2(\mathbb{O})$ is $Spin(9)$. I explained this in my series of posts.
The quick thing to say is that you can identify $\mathfrak{h}_2(\mathbb{O})$ with 10d Minkowski spacetime and think of $\mathfrak{h}_3(\mathbb{O})$ as the direct sum of 3 spaces:
• $\mathfrak{h}_2(\mathbb{O})$ (10d vectors)
• $\mathbb{O}^2$ (10d spinors)
• $\mathbb{R}$ (scalars).
Then the group of determinant-preserving transformations of $\mathfrak{h}_3(\mathbb{O})$ preserving $\mathfrak{h}_2(\mathbb{O})$ preserves this splitting and all the invariant operations involving vectors, spinor and scalars, so it’s $Spin(9,1)$. The automorphisms of $\mathfrak{h}_3(\mathbb{O})$ preserving $\mathfrak{h}_2(\mathbb{O})$ also preserve the ‘time axis’ in 10d Minkowski spacetime, so they form the subgroup $Spin(9)$.
I also don’t give any explanation of where $(SU(3) \times SU(3)) / \mathbb{Z}_3$ comes from. It’s not that hard to get an idea of where this comes from.
I don’t like how these important facts show up in my talk ‘as if by magic’, stated but not explained. But maybe explaining all this stuff would overload the talk. There’s a limit to how much information people can absorb in 50 minutes. I guess blogging and papers are better for detailed explanations. I want to continue my blog series and use it to expand on these talks.
Posted by: John Baez on April 2, 2021 4:20 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
Okay, I massively revised my second deck of slides:
I now give a rough explanation of why the subgroups
$(SU(3) \times SU(3))/\mathbb{Z}_3, \; Spin(9) \subset \mathrm{F}_4$
show up in this game, and why their intersection is the true gauge group of the Standard Model.
Posted by: John Baez on April 4, 2021 12:16 AM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
The fact of F$_4$ having the wrong dimensionality to act on $\mathbb{O}^3$ is interesting, and these slides seem to make the point more forcefully than I recall seeing before. This ties to something else I’ve thought a little about, the possibility of defining a Wigner function for the octonionic qutrit following the Wootters construction for the complex case. The idea goes like this: For simplicity, assume that $d$ is a power of an odd prime. Then we can construct a full set of $d+1$ mutually unbiased orthonormal bases, i.e., a set such that $\langle \psi | \phi \rangle = 1/d$ whenever $|\psi\rangle$ and $|\phi\rangle$ belong to different bases. Each basis defines a measurement with $d$ possible outcomes, and so a probability distribution over those outcomes has $d-1$ free parameters when we account for normalization. Together, all these probabilities provide $(d-1)(d+1) = d^2 - 1$ real parameters, which is just the dimensionality we need for density matrices on $\mathbb{C}^d$ (self-adjointness for $d^2$, less 1 for trace normalization). A discrete Wigner function can be defined as a thing that is summed over to get the probabilities for the MUB measurements, following a nice geometrical pattern. See Veitch et al. (2014) for some worked examples for the regular qutrit.
The interesting thing is that, per Theorem 8.3 of Cohn, Kumar and Minton (2013), the octonionic qutrit has a set of thirteen mutually unbiased bases. So, counting up the parameters, we get $13 \cdot (3 - 1) = 26$, which matches the dimension of the smallest nontrivial representation of F$_4$ (and, I guess, the dimension of the Albert algebra minus 1 for trace normalization).
Posted by: Blake Stacey on April 5, 2021 10:47 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
Whoops, that should be $|\langle \psi | \phi \rangle|^2 = 1/d$.
Posted by: Blake Stacey on April 5, 2021 10:51 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
Blake wrote:
The fact of F$_4$ having the wrong dimensionality to act on $\mathbb{O}^3$ is interesting, and these slides seem to make the point more forcefully than I recall seeing before.
I wouldn’t say $\mathrm{F}_4$ has the wrong dimensionality: it has dimension 52. I’d say the problem is that its lowest-dimensional nontrivial representation has dimension 26, while $\mathbb{O}^3$ just has dimension 24.
(By the way, 52 is twice 26.)
I wish I had time to get into these mutually unbiased bases. What do Cohn, Kumar and Minton mean when they say the octonionic qutrit has 13 mutually unbiased bases? What do they mean by an octonionic qutrit?
In my talk I’m mainly discussing the Jordan algebra of observables of the octonionic qutrit, which is the exceptional Jordan algebra $\mathfrak{h}_3(\mathbb{O})$. Associated to any Euclidean Jordan algebra we have a set of states and a set of pure states. The set of pure states for $\mathfrak{h}_3(\mathbb{O})$ is $\mathbb{O}\mathrm{P}^2$, so this is another important avatar of the octonionic qutrit. $\mathrm{F}_4$ is
• the automorphism group of the Jordan algebra $\mathfrak{h}_3(\mathbb{O})$
and also
• the isometry group of $\mathbb{O}\mathrm{P}^2$ (which is a Riemannian manifold).
So, when I say ‘octonionic qutrit’, I either mean the Jordan algebra of observables $\mathfrak{h}_3(\mathbb{O})$ or the space of pure states $\mathbb{O}\mathrm{P}^2$.
What’s ‘defective’ in this story is that $\mathrm{F}_4$ doesn’t act in any way (other than trivially) on the vector space $\mathbb{O}^3$. This is true despite the fact that
• elements of $\mathfrak{h}_3(\mathbb{O})$ give linear operators on $\mathbb{O}^3$ in the obvious way
and
• points of $\mathbb{O}\mathrm{P}^2$ correspond to ‘1-dimensional octonionic subspaces’ of $\mathbb{O}^3$ (which need to be defined with great care).
Posted by: John Baez on April 6, 2021 4:33 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
I wouldn’t say $\mathrm{F}_4$ has the wrong dimensionality: it has dimension 52. I’d say the problem is that its lowest-dimensional nontrivial representation has dimension 26, while $\mathbb{O}^3$ just has dimension 24.
Yes, sorry for being sloppy there; it was the mismatch between the dimensions of that representation and of $\mathbb{O}^3$ that I had in mind.
Here’s what Cohn, Kumar and Minton have to say. First, after Eq. (2.1) they define a metric on $K\mathbb{P}^{d-1}$ in terms of the inner product of projection matrices: $\rho(x_1, x_2) = \sqrt{1 - \langle \Pi_1, \Pi_2 \rangle}$. With that in mind, here is their Theorem 8.3:
There exists a tight code $\mathcal{C}$ of 39 points in $\mathbb{O}\mathbb{P}^2$. It consists of 13 orthogonal triples such that, for any two points $x_i, x_j$ in distinct triples, $\rho(x_i,x_j) = \sqrt{2/3}$. In other words, if $\Pi, \Pi'$ are the projection matrices corresponding to two distinct points in $\mathcal{C}$, then $\langle \Pi, \Pi' \rangle$ equals 0 if the two points are in the same triple and otherwise equals 1/3.
The construction is inspired by a 12-point set in $\mathbb{C}\mathbb{P}^2$, which starts with the basis vectors
$(1, 0, 0),\ (0, 1, 0),\ (0, 0, 1) \, ,$
and then includes the 9 vectors
$\frac{1}{\sqrt{3}}(1, \omega^a, \omega^b)$
where $\omega$ is the cube root of unity $e^{2\pi i/3}$ and the indices $a,b = 0, 1, 2$. To extend this into $\mathbb{O}\mathbb{P}^2$, they let $1,i,j,k$ be the standard basis of the quaternions, $l$ any of the four remaining standard basis elements of $\mathbb{O}$, and $n = j l$. Defining $\omega$ as before, they construct the vectors
$\begin{array}{cc}(1,\omega^a,\omega^b)/\sqrt{3}, & (1,\omega^a j, \omega^b l)/\sqrt{3}, \\ (1, \omega^a l, \omega^b n)/\sqrt{3}, & (1, \omega^a n, \omega^b j)/\sqrt{3}. \end{array}$
Hopefully that’s less ambiguous than what I wrote before. So, it’s $\mathbb{O}\mathbb{P}^2$ that they have in mind.
Posted by: Blake Stacey on April 6, 2021 8:48 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
Thanks! So I guess they’re thinking of a basis as an ‘orthogonal triple’ of points in $\mathbb{O}\mathrm{P}^2$, where a point in $\mathbb{O}\mathrm{P}^2$ corresponds to a rank-one projection in $\mathfrak{h}_3(\mathbb{O})$, and two of these, say $\Pi_1, \Pi_2$, are orthogonal iff
$\langle \Pi_1, \Pi_2 \rangle = 0$
where the inner product on $\mathfrak{h}_3(\mathbb{O})$ is
$\langle \Pi_1, \Pi_2 \rangle = tr(\Pi_1 \circ \Pi_2)$
where $\circ$ is the Jordan product and $tr$ is the trace on $\mathfrak{h}_3(\mathbb{O})$: just the sum of the diagonal entries.
Posted by: John Baez on April 12, 2021 5:59 AM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
Hi, I seem to recall you commenting a while back that you didn’t have a lot of confidence that the apparent links between octonions and the standard model would lead anywhere useful.
Of course, I realise that you have prepared these slides for a particular workshop … but are you perhaps slightly less pessimistic now by any chance?
Thanks!
Posted by: Bertie on April 4, 2021 1:32 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
I think any particular researcher’s optimism or pessimism is uninteresting, especially in fundamental physics where the chance of any one idea working out is very low, and the most optimistic people often come up with the worst ideas. So I’m not going to answer that question.
Posted by: John Baez on April 4, 2021 9:55 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
I might be wrong, but I think what Bertie mean by useful is to get the Lagrangian of the SM, with the “usual physics”. Kind of a first principles derivation.
Posted by: Daniel de França on April 4, 2021 11:04 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
That’s exactly what I’ll start out talking about in my first talk! I don’t feel like summarizing today what I will say tomorrow at 9 am (much too early). A video of the talk will appear eventually, and you can already get the slides.
Posted by: John Baez on April 4, 2021 11:20 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
Posted by: Blake Stacey on April 6, 2021 7:38 AM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
Posted by: Blake Stacey on April 13, 2021 10:07 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
A lovely talk!🙏 It’s obvious you will be a great loss to your university when the time comes to progress your retirement plans 😃
Posted by: Bertie on April 6, 2021 9:35 AM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
Thanks! I look forward to not teaching calculus and serving on university committees — it’ll give me more time to give talks and write.
Posted by: John Baez on April 6, 2021 4:37 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
I have an obnoxious kind of question. While 𝔥2(H) is a six dimensional hermitian matrix, why switching time for a spacial dimension ? It would live in the perpendicular 6d space.
Posted by: Daniel de França on April 6, 2021 6:53 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
I wasn’t able to make it to this week’s talk in real time, but I watched the video, and I should be there on Monday.
Your talk this week started out with a very enticing question: Why does $5 = 2+3$? As far as I can tell, your explanation is “Because $10=4+6$”, which is less than satisfying. Indeed, the whole $S(U(2)\times U(3)) = SU(5) \cap (SO(4) \times SO(6))$ is really just an unpacking of the (a?) definition of the LHS, isn’t it?
One other comment: We do know that $S(U(2)\times U(3))$ is compact and connected. If you know furthermore that it fits in a noncompact group like $SO(9,1)$, then you know that it fits in the maximal compact subgroup $SO(9) = SO(9) \times SO(1)$. The maximal compact is unique up to conjugacy, so there’s no further choice needed to refine $S(U(2)\times U(3)) \subset SO(9,1)$ through $SO(9)$.
I’m having trouble getting markdown to give me the formatting that I want. I have two follow-up questions, but I will post them as replies.
Posted by: Theo Johnson-Freyd on April 9, 2021 2:06 PM | Permalink | Reply to this
### Is the intersection special or generic?
It’s all well and good to realize $K = H_1 \cap_G H_2$ for some subgroups $H_1,H_2 \subset G$. But subgroups come in conjugacy classes. When you said that you had $SU(5) \subset SO(10)$ and $SO(4) \times SO(6) \subset SO(10)$, I think you had in mind some specific subgroups for a specific $SO(10)$, but a topologist or representation theorist would find it more natural just to declare the conjugacy classes of these inclusions. Now there is a problem: up to $SO(10)$-conjugacy, there is a unique $SU(5) \subset SO(10)$, but once you have chosen it, you have broken the $SO(10)$-symmetry. In particular, when you go looking for an $SO(4) \times SO(6) \subset SO(10)$, a priori you should look at all of these up to $SU(5)$-conjugacy, and there might very well be multiple orbits.
A typical thing that happens when there are multiple orbits is that there is a “generic orbit” and some “special” orbits. Actually, the “generic orbit” typically is not a single orbit. What it is is a family of orbits, but they are all qualitatively the same. In particular, you could imagine that, up to common $G$-conjugacy, perhaps there are lots of pairs of subgroups $H_1, H_2 \subset G$, but for most of them, the intersection $H_1 \cap H_2$ is always a copy of $K$.
Let’s do a trivial example. Let’s take $H_1 = H_2 = U(1)$ living inside $G = SU(2)$. Up to $SU(2)$-conjugacy, $H_1$ might as well be the standard $U(1) = diag(\lambda,\lambda^*)$. Then there is a 1-dimensional family of choices for $H_2$. There is a special choice: $H_1 = H_2$, in which case they intersect along a $U(1)$. All the other choices are generic, and the intersection is $\{\pm 1\} \subset SU(2)$.
In the Standard Model = Georgi-Glashow $\cap$ Pati-Salam example, what happens? Is this a generic intersection or a special one?
Posted by: Theo Johnson-Freyd on April 9, 2021 2:07 PM | Permalink | Reply to this
### Re: Is the intersection special or generic?
Theo wrote:
When you said that you had SU(5) ⊂ SO(10) and SO(4) × SO(6) ⊂ SO(10), I think you had in mind some specific subgroups for a specific SO(10), but a topologist or representation theorist would find it more natural just to declare the conjugacy classes of these inclusions.
I’m aware of this issue, but since this was a physics talk I didn’t want to bore and bewilder the audience by dwelling on it.
In both cases the specific subgroups chosen are the “blitheringly obvious” ones: we use the standard inclusion of $SU(5)$ in $SO(10)$ given by the usual isomorphism $\mathbb{C}^5 \cong \mathbb{R}^10$, and the usual inclusion of $SO(4) \times SO(6)$ in $SO(10)$ via block diagonal matrices with a $4 \times 4$ block and a $6 \times 6$ block.
When I wanted to discuss the issue you’re talking about, I did so by taking a 10d real inner product space and stacking structures on it until in the end it was a 5d complex inner product space with a 2+3 splitting. I described some ‘compatibility conditions’ between these structures that secretly pick out which sort of $SO(4) \times SO(6)$ subgroups of $SO(10)$ are allowed after we’ve chosen our $SU(5)$ subgroup. Namely: first we make our 10d real inner product space into a complex inner product space, and then we split it into an orthogonal 4d subspace and 6d subspace that are actually complex subspaces.
I’m not sure how generic or nongeneric this is, but it feels nongeneric, because we have to be ‘lucky’ to get the 4+6 real splitting to actually be a 2+3 complex splitting.
Posted by: John Baez on April 9, 2021 10:36 PM | Permalink | Reply to this
### Is there a quaternionic Pati-Salam?
One could just as well contemplate the embedding $S(U(2) \times U(3)) \subset SU(5) \subset Sp(5)$.
Here my topologist is showing. Topologists tend to use the name $Sp(n)$ for the rank-$n$ compact group of $Sp$ type, with Dynkin diagram $C_n$. It is a thing which acts on $\mathbb{H}^n$. [Namely, the group which commutes with the left $\mathbb{H}^\times$ action is an $\mathbb{R}_{\gt 0}$ central extension of $Sp(n)$.] Representation theorists tend to write “$Sp(n)$” only when $n$ is even, in which case it is the split form of the $Sp$-type group with rank $n/2$, and is a thing which acts on $\mathbb{R}^n$ preserving a symplectic form.
Anyway, $Sp(5)$ has the same dimension as $Spin(11)$, but really it is morally the same size as $Spin(10)$. This is because, if you have an $10$-dimensional complex vector space, then to realize it as $5$ quaternionic dimensional requires choosing an antilinear square root of $-\mathrm{id}$, whereas to realize it as $5$-real-dimensional tensored with $\bC$ requires choosing an antilinear square root of $+\mathrm{id}$.
Do you have any interesting stories to tell about splitting $\mathbb{H}^5 = \mathbb{H}^2 \times\mathbb{H}^3$ analogous to the splitting $\mathbb{R}^{10} = \mathbb{R}^4 \times \mathbb{R}^6$? Is there a “quaternionic Pati-Salam model”?
Posted by: Theo Johnson-Freyd on April 9, 2021 2:08 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
Theo wrote:
Your talk this week started out with a very enticing question: Why does 5=2+3? As far as I can tell, your explanation is “Because 10=4+6”, which is less than satisfying.
That wasn’t supposed to be the explanation of anything yet, though string theorists have thought a lot about 10=4+6. I was just reviewing work people have done on massaging the Standard Model and its representation on fermions into a mathematically simple form, which is a prerequisite for understanding it in a more conceptual way. The second part of my talk will describe an octonionic “explanation” of the 10=4+6 split… but I’m not claiming this explanation really works: there are problems with it that I don’t know how to solve. It’s really just a vague sketch of the sort of thing that might eventually count as an explanation: something based on math that’s not just pretty but has some physical content.
Posted by: John Baez on April 9, 2021 10:11 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
The copy of $(SU(3) \times SU(3)) / \mathbb{Z}_3 \subset F_4$ is one of my favourite groups. There are many reasons why it is fun.
A very basic reason is that it is the centralizer of an order-$3$ element in $F_4$. Up to conjugacy, I count three order-$3$ elements of $F_4$. Two of them have nonsemisimple centralizers, i.e. centralizers whose Lie algebras contain a $U(1)$ factor. This one has a semisimple centralizer.
A second reason is that the two $SU(3)$s are very different inside $F_4$, and in particular are not conjugate to each other. One way to see this is that the roots of the two copies have different lengths. (Said another way, the Killing form for $F_4$ restricts differently to the two $\mathfrak{su}(3)$s — off by a factor of $2$, I think.) This is presumably related to your comment about the strong force versus a different one.
A third reason is that $SU(3)$ contains a copy of the finite Heisenberg group $3^{1+2}_+$ (the unique noncommutative group of order $27$ in which all nontrivial elements have order $3$) as what a quantum computer programmer would call the “clock” and “shift” matrices, but if you take the diagonal copy $diag(3^{1+2}_+) \subset SU(3) \times SU(3)$ [or maybe the antidiagonal copy, depending on what coordinates you choose for the two $SU(3)$s], then its image in $(SU(3) \times SU(3)) / \mathbb{Z}_3$ is just a $\mathbb{Z}_3^2$. Together with the residual centre of $(SU(3) \times SU(3)) / \bZ_3$, you get a copy of $\mathbb{Z}_3^3 \subset F_4$ which turns out to be maximal abelian. In particular, it does not fit inside of any torus. Serre (I believe) explained that having a finite abelian $p$-subgroup of a Lie group which does not fit inside any torus exactly corresponds to having nontrivial $p$-local cohomology.
The normalizer of this $\mathbb{Z}_3^3 \subset F_4$ is a nonsplit extension $\mathbb{Z}_3^3 \cdot SL_3(\mathbb{F}_3)$. This explains why the number $26$ shows up so much in the representation theory of $F_4$: any representation must diagonalize over $\mathbb{Z}_3^3$, but the 26 nontrivial eigenspaces must have the same dimension because they are permuted by the $SL_3(\mathbb{F}_3)$. Going the other way, Griess explained how to build the exceptional Jordan algebra, and hence $F_4$, just from this group.
This $\mathbb{Z}_3^3 \subset F_4$ is part of a trilogy. There is are analogous subgroups $\mathbb{Z}_2^3 \cdot SL_3(\mathbb{F}_2) \subset G_2$ and $\mathbb{Z}_5^3 \cdot SL_3(\mathbb{F}_5) \subset E_8$. The former can be used to provide a construction of $\mathbb{O}$ directly from $\mathbb{Z}_2^3 \cdot SL_3(\mathbb{F}_2)$. It is an easier version of Griess’s construction of the exceptional Jordan algebra and hence of $G_2$.
There seems to be no such construction of $E_8$ starting with $\mathbb{Z}_5^3 \cdot SL_2(\mathbb{F}_5)$. The problem seems to be the following. The “Weyl group” $SL_3(\mathbb{F}_p)$ suggests that you look at the determinant form $\det$ on $\mathbb{Z}_p^3$. This is an alternating trilinear form, and so, after including $\mathbb{F}_p \subset U(1)$, you get a 3-cocycle for ordinary $U(1)$-valued cohomology of $\mathbb{Z}_p^3$. It gives you as a class in $H^4(B\mathbb{Z}_p^3; \mathbb{Z})$, which I believe that it is the restriction along $\mathbb{Z}_p^3 \subset G$ of the generator of $H^4(BG; \mathbb{Z}) \cong \mathbb{Z}$, where $G = G_2,F_4,E_8$ for $p=2,3,5$. Anyway, the problem is that for $p \geq 5$, this $U(1)$-valued cocycle is not a coboundary: the class is nontrivial. For $p \leq 3$, it is a coboundary. The reason is that in three dimensions $\det$ is a sum of $3! = 6$ terms, and these terms are themselves cocycles, all of which are cohomologous in $U(1)$-cohomology. So as a cohomology class $\det$ is “divisible by 6”. Indeed, $H^3(\mathbb{Z}_p^3; U(1))$ has a copy of $\mathbb{Z}_p$ inside of it (living as the “essential cohomology”, i.e. the kernel of all restrictions to proper subgroups), and this copy is generated by “$\frac16 \det$”. The reason this is a problem for constructing $E_8$ is because the constructions of $\mathbb{O}$ and the exceptional Jordan algebra both begin by choosing an antiderivative of this specific cocycle.
Posted by: Theo Johnson-Freyd on April 9, 2021 2:46 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
Neat! I think I’ve been coming at some of the same concepts from a different direction (and a background of near-total ignorance).
It looks like the Markdown processing glitched your link because it contained parentheses.
Terminological note: The “finite Heisenberg group” was conceived by Hermann Weyl in 1925, in the course of devising a finite-dimensional counterpart to the canonical commutation relation that was originally proposed by Max Born. It’s described in Weyl’s Gruppentheorie und Quantenmechanik (along with other neat things like an early recognition of the importance of entanglement). Accordingly, one will hear it called the “Weyl–Heisenberg group”; less frequently, in my experience, it is also termed the “generalized Pauli group”.
I just learned the other day that the normalizer in $SL_5(\mathbb{C})$ of the Weyl–Heisenberg group for dimension 5 is the symmetry group of the Horrocks–Mumford bundle. See He and McKay (2015), section 4. The Hesse SIC, which I care about for quantum-information reasons, has a symmetry group (the Hessian group) that is a semidirect product of the Weyl–Heisenberg group in dimension 3 with the binary tetrahedral group. It looks like the symmetry group of the Horrocks–Mumford bundle is analogous, except we go up to dimension 5 and use the binary icosahedral group instead.
Posted by: Blake Stacey on April 9, 2021 5:48 PM | Permalink | Reply to this
### Re: Can We Understand the Standard Model Using Octonions?
Theo wrote:
A very basic reason is that it is the centralizer of an order-3 element in $F_4$.
Yes, that’s great! Yokota gives a nice account of this in terms of the exceptional Jordan algebra here:
and that’s the basis of Dubois-Violette and Todorov’s work, which I’ll be discussing on Monday. Yokota constructs the compact exceptional Lie groups using octonions, and then describes their maximal subgroups as centralizers of elements of finite order… all very octonionically.
Thanks for all the other nice information about this $(SU(3) \times SU(3))/\mathbb{Z}_3$ subgroup. It could come in handy! It’s nice to hear that the two $SU(3)$’s are intrinsically different.
Posted by: John Baez on April 10, 2021 8:07 AM | Permalink | Reply to this
Post a New Comment
|
2021-04-19 00:07:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 219, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8195560574531555, "perplexity": 516.0554815284522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038862159.64/warc/CC-MAIN-20210418224306-20210419014306-00273.warc.gz"}
|
https://gamedev.stackexchange.com/questions/150054/get-type-of-the-object-where-attribute-was-declared-hide-in-derived-inspector
|
# Get type of the object where attribute was declared - Hide In Derived Inspector Attribute
I want to implement a custom Unity inspector attribute, like usual ones [HideInInspector] or [Range(0, 100f)].
This attribute should hide a field from the inspector if it's inherited from a base type, rather being defined directly in the type of the object we're inspecting.
So, for example, if I have these classes...
public class Enemy : MonoBehaviour {
[SuperField]
public int EnemyHealth;
}
public class GiantMonster : Enemy {
[SuperField]
public int Armor;
}
then the inspector for a base Enemy should show a control for the EnemyHealth variable, but when inspecting a GiantMonster we should not see a control for this variable.
So far I have my attribute:
public class SuperFieldAttribute : PropertyAttribute {}
which has its own PropertyDrawer:
[CustomPropertyDrawer(typeof(SuperFieldAttribute))]
public class SuperFieldDrawer : PropertyDrawer {
public override void OnGUI(Rect position, SerializedProperty property, GUIContent label) {
// ...
}
}
Inside OnGUI I need to access/find the type where the field we're drawing was declared with the [SuperFieldAttribute].
So if SuperFieldDrawer.OnGUI() is called for the SerializedProperty corresponding to GiantMonster.EnemyHealth, I should be able to determine that the type where EnemyHealth was declared is Enemy and not GiantMonster. (I'll use this to hide the field so it's not drawn)
And if SuperFieldDrawer.OnGUI() is called for the SerializedProperty corresponding to GiantMonster.Armor, I should be able to determine that the type where Armor was declared is GiantMonster itself. (Meaning I should proceed and draw the field)
So far I've found ways to access the type of the component we're inspecting, but not the type where the field was first declared, so that's the missing piece I need to fill in.
It's OK if I use reflection and so on; any method is good, performance isn't a concern.
• Do I understand correctly that you're always starting with a specific property with this attribute on an instance of a (possibly derived) class, and you just need to walk up its inheritance chain until you find the class that defines this property? You don't need to list all properties with this attribute on a particular object, or identify all types in the assembly that use the attribute, correct? – DMGregory Oct 29 '17 at 0:47
• @DMGregory yes, that is correct! I know how to list them, but since I'm doing it in PropertyDrawer and I can't use custom editor for that(to check every property if it should be hidden), well I can but custom editor would mean I had to rewrite sometimes whole inspertor every time I change or add a new class. I'm trying to make HideInDerived InspectorAttribute. For the sake of designer, and because it would allow not to rewrite much of unnecessary code parts. – Candid Moon _Max_ Oct 29 '17 at 0:58
• I think this question might be clearer if you edit it to ask about the HideInDerived application specifically. That tells us a lot about the context, what kinds of inputs we can expect (ie. SerializedProperty rather than System.Type or FieldInfo), and might help the question catch the eye of other Unity devs who want to implement this feature. :) – DMGregory Oct 29 '17 at 1:30
As luck would have it, the PropertyDrawer's fieldInfo has a handy DeclaringType property that tells us exactly where the field was defined, without needing to walk the inheritance hierarchy ourselves - at least for fields directly on the object we're inspecting. As we discovered in the comments, we still need to do a bit of manual walking for fields in a child object or struct.
Here's how we can implement that [HideInDerived] attribute drawer using this:
using UnityEngine;
using UnityEditor;
[CustomPropertyDrawer(typeof(HideInDerivedAttribute))]
public class HideInDerivedDrawer : PropertyDrawer {
bool? _chachedIsDerived;
// Cache this so we don't have to muck with strings
// or walk the type hierarchy on every repaint.
bool IsDerived(SerializedProperty property) {
if (_chachedIsDerived.HasValue == false) {
string path = property.propertyPath;
var type = property.serializedObject.targetObject.GetType();
if (path.IndexOf('.') > 0) {
// Field is in a nested type. Dig down to get that type.
var fieldNames = path.Split('.');
for(int i = 0; i < fieldNames.Length - 1; i++) {
var info = type.GetField(fieldNames[i]);
if (info == null)
break;
type = info.FieldType;
}
}
_chachedIsDerived = fieldInfo.DeclaringType != type;
}
return _chachedIsDerived.Value;
}
public override void OnGUI(Rect position, SerializedProperty property, GUIContent label) {
// If we're in a more derived type than where this field was declared,
// abort and draw nothing instead.
if (IsDerived(property))
return;
EditorGUI.PropertyField(position, property, label, true);
}
public override float GetPropertyHeight(SerializedProperty property, GUIContent label) {
if (IsDerived(property)) {
// Collapse the unseen derived property.
return -EditorGUIUtility.standardVerticalSpacing;
} else {
// Provision the normal vertical spacing for this control.
return EditorGUI.GetPropertyHeight(property, label, true);
}
}
}
## Known issues
• This does not stack with other property drawing attributes like [Range(min, max]. Only one attribute will take effect, depending on their order. Internally it seems Unity tracks only one inspector attribute per field.
It may be possible with some heavy-duty reflection to identify other attributes on the field with their own CustomPropertyDrawers, and construct a corresponding PropertyDrawer to delegate the drawing to, but it would get messy fast. It might be simpler to make a combined attribute like [HideInDerivedRange(min, max)] for specific cases.
• Types with their own custom property drawer will fall back to the default drawing style if marked [HideInDerived]. It works correctly for all of Unity's built-in property widgets though, showing colour pickers and vector/object fields as expected.
• This does not correctly hide the label, fold-out, and size controls of an Array or List, although it does hide the collection's contents.
As Candid Moon found, this is due to a change in Unity 4.3 to allow Attribute-targeted PropertyDrawers to apply to each element in a collection, which unfortunately makes it impossible to target the array itself in the present version.
A workaround suggested by slippdouglas here is to define a serializable struct containing your collection, so that the [HideInDerived] attribute hides the struct as a whole, like so:
.
[System.Serializable]
public struct HideableStringList {
public List<string> list;
// Optional, for convenience, so the rest of your code can
// continue to behave like this is just a List<string>
// without always referencing the .list field.
public static implicit operator List<string>(HideableStringList c) {
return c.list;
}
public static implicit operator HideableStringList(List<string> l) {
return new HideableStringList(){ list = l };
}
}
Because of the way Unity draws the inspector, we can't use a single generic type to handle all these cases, so you'll have to copy this boilerplate for each array type (yuck). But at least it lets this work:
[HideInDerived]
HideableStringList myStrings = new List<string>(3);
• Editor: PropertyDrawer attributes on members that are arrays are now applied to each element in the array rather than the array as a whole. This was always the intention since there is no other way to apply attributes to array elements, but it didn't work correctly before. Apologies for the inconvenience to anyone who relied on the unintended behavior for custom drawing of arrays. - from unity 4.3 release. – Candid Moon _Max_ Oct 29 '17 at 14:14
• answers.unity3d.com/questions/1150679/… - looks like it's better to post an issue or feature request from unity if in 2017 it's still not viable option. – Candid Moon _Max_ Oct 29 '17 at 14:22
• Also, my method doesn't work correctly if it's not the nested object. It hides it from inspector because the length is already 1 and it doesn't get the targetObject that you were using. That is why it might not have displayed correctly for you if you placed it inside your code. Here is my PropertyDrawer and all the other code that achieves it pastebin.com/KkUPyYu6 . Everything is working fine now I hope, except the arrays. – Candid Moon _Max_ Oct 29 '17 at 15:04
• I have found a way to do it with multiple attributes, though they all require deriving from a specific type. It is possible though to just rewrite all the common attributes like [Range], [HideInInspector]. I will post a solution in comments today. – Candid Moon _Max_ Oct 30 '17 at 9:14
• I recommend posting your solution as an Answer. Much easier to read that way! And you can get some upvotes. ;) – DMGregory Oct 30 '17 at 11:02
|
2020-11-23 22:45:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17364639043807983, "perplexity": 2884.8096179157665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141168074.3/warc/CC-MAIN-20201123211528-20201124001528-00380.warc.gz"}
|
https://blog.csdn.net/xiaojie_570/article/details/80323629
|
# Leetcode——63. Unique Paths II
### 题目原址
https://leetcode.com/problems/unique-paths-ii/description/
### 题目描述
A robot is located at the top-left corner of a m x n grid (marked ‘Start’ in the diagram below).
The robot can only move either down or right at any point in time. The robot is trying to reach the bottom-right corner of the grid (marked ‘Finish’ in the diagram below).
Now consider if some obstacles are added to the grids. How many unique paths would there be?
An obstacle and empty space is marked as 1 and 0 respectively in the grid.
Note: m and n will be at most 100.
Example:
Input:
[
[0,0,0],
[0,1,0],
[0,0,0]
]
Output: 2
Explanation:
There is one obstacle in the middle of the 3x3 grid above.
There are two ways to reach the bottom-right corner:
1. Right -> Right -> Down -> Down
2. Down -> Down -> Right -> Right
### 解题思路
• 因为机器人只能右走或者向下走,所以我们通过循环把二维数组中的障碍变为0,非障碍变为1
• 在第一行中,后面的元素值等于前面的元素值,为1表示可以同行;
• 在第一列中,下面的元素值等于上面的元素值,
• 除了第一行第一列的元素以外的元素的元素值 = 上面的元素 +左边的元素
• 最后返回最右边最下边的元素值
• 注意这里在第一行第一列的元素位置,其值等于1 - 本身的值,这样做主要是要考虑[[1]]这种情况是没有通路的
### AC代码
class Solution {
public int uniquePathsWithObstacles(int[][] obstacleGrid) {
int row = obstacleGrid.length;
int line = obstacleGrid[0].length;
for(int i = 0; i < row; i++) {
for(int j = 0; j < line; j++) {
if(i == 0 && j == 0) {
obstacleGrid[i][j] = 1 - obstacleGrid[i][j];
continue;
}
if(obstacleGrid[i][j] == 1) obstacleGrid[i][j] = 0;
else if(i == 0) obstacleGrid[i][j] = obstacleGrid[i][j - 1];
else if(j == 0) obstacleGrid[i][j] = obstacleGrid[i - 1][j];
else
obstacleGrid[i][j] = obstacleGrid[i - 1][j] + obstacleGrid[i][j - 1];
}
}
return obstacleGrid[row - 1][line - 1];
}
}
• 广告
• 抄袭
• 版权
• 政治
• 色情
• 无意义
• 其他
120
|
2019-01-21 08:25:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20139040052890778, "perplexity": 6874.407032322948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583763839.28/warc/CC-MAIN-20190121070334-20190121092334-00170.warc.gz"}
|
https://fourier.math.uoc.gr/~seminar/analysis1617.php
|
Analysis Seminar in Crete (2016-17)
Σεμιναριο Αναλυσης
In chronological ordering
DATE TIME ROOM SPEAKER FROM TITLE COMMENT
Tuesday, September 27, 2016 13:15 A303 Athanasios Pheidas University of Crete An analogue of Hilbert's Tenth Problem for the ring of holomorphic functions
Abstract: Consider polynomials $f(x)$ of an array $x$ of variables, with coefficients in $\mathbb{Z}[z]$, where $\mathbb{Z}$: integers, $z$: a variable. We ask: Question: Is there an algorithm (in the Caves of Plato) which, given any $f(x)$ as above, decides (with certainty) whether the (functional) equation $f(x)=0$ has solutions $x$, each of which is a holomorphic function of the variable $z$? Another way to ask the question is ''can one solve algorithmically algebraic differential equations of order zero over the ring of global analytic functions?'' Put that way the implications of the answer for the physical sciences should be obvious. The answer to the question is unknown. But we have evidence that it might be ''NO''. This evidence will be presented. It involves meromorphic parameterisations of elliptic curves and ''existentially definable subsets'' of the ring of holomorphic functions (i.e. properties of elements of that ring which can be defined by using only existential quantifiers and equations - no negations). Connections with other areas - especially Diophantine Geometry - will be mentioned. The talk will be dedicated to the memory of Lee Rubel, who was the first to ask these and other similar questions.
Tuesday, October 11, 2016 13:15 A303 Themis Mitsis University of Crete Carleman estimates with convex weights
Abstract:
Tuesday, October 25, 2016 13:15 A303 Ilya M. Spitkovsky NYU -- Abu Dhabi Factorization of almost periodic matrix functions: some recent results and open problems
Abstract: The set $AP$ of (Bohr) almost periodic functions is the closed sub-algebra of $L_\infty(\mathbb{R})$ generated by all the exponents $e_\lambda(x) := e^{i\lambda x}$ , $\lambda\in \mathbb{R}$. An $AP$ factorization of an $n$-by-$n$ matrix function $G$ is its representation as a product $G = G_+ \text{diag}[e_{\lambda_1} ,...,e_{\lambda_n} ]G_-$, where $G_+^{\pm 1}$ and $G_{-}^{\pm 1}$ have all entries in $AP$ with non-negative (resp., non-positive) Bohr-Fourier coefficients. This is a natural generalization of the classical Wiener-Hopf factorization of continuous matrix-functions on the unit circle, arising in particular when considering convolution type equations on finite intervals. The talk will be devoted to the current state of $AP$ factorization theory. Time permitting, problems still open will also be described.
Thursday, November 24, 2016 13:15 A303 Antonis Manoussakis Technical Univ. of Crete Banach spaces with few operators
Abstract:
Tuesday, November 29, 2016 13:15 A303 Kostas Tsaprounis University of Crete Large cardinal axioms
Abstract: The axiomatic system of ZFC set theory is nowadays generally accepted as the current foundation of mathematics. However, and despite the expressive and deductive power of this system, it is widely known that many problems and questions, coming from diverse mathematical areas, are (provably) independent from the ZFC axioms. In the direction of reinforcing this basic theory with additional assumptions, one dominant family of candidates for new axioms consists of the so-called large cardinal axioms. These postulates, which have been intensively studied during the last decades, assert, roughly speaking, the existence of stronger and stronger forms of infinity, thus creating a hierarchy of very potent assumptions beyond ZFC. In this talk, we will present (some of) the large cardinal axioms, while underlining their (very useful and) powerful reflection properties. Moreover, we will mention some connections and applications of these axioms in the context of other mathematical areas.
Tuesday, December 6, 2016 13:15 A303 Mihalis Kolountzakis University of Crete Infinitely many electrons on a line, at equilibrium
Abstract: We study equilibrium configurations of infinitely many identical particles on the real line or finitely many particles on the circle, such that the (repelling) force they exert on each other depends only on their distance. The main question is whether each equilibrium configuration needs to be an arithmetic progression. Under very broad assumptions on the force we show this for the particles on the circle. In the case of infinitely many particles on the line we show the same result under the assumption that the maximal (or the minimal) gap between successive points is finite (positive) and assumed at some pair of successive points. Under the assumption of analyticity for the force field (e.g., the Coulomb force) we deduce some extra rigidity for the configuration: knowing an equilibrium configuration of points in a half-line determines it throughout. Various properties of the equlibrium configuration are proved.
Joint work with Agelos Georgakopoulos (Warwick)
Tuesday, February 14, 2017 13:15 A303 Nikos Frantzikinakis University of Crete Ergodicity of the Liouville system implies the Chowla conjecture
Abstract: The Liouville function assigns the value one to integers with an even number of prime factors and minus one elsewhere. Its importance stems from the fact that several well known conjectures in number theory can be rephrased as conjectural properties of the Liouville function. A well known conjecture of Chowla asserts that the signs of the Liouville function are distributed randomly on the integers, that is, they form a normal sequence of plus and minus ones. Reinterpreted in the language of ergodic theory this conjecture asserts that the "Liouville dynamical system" is a Bernoulli system. We prove that a much weaker property, namely, ergodicity of the "Liouville dynamical system", implies the Chowla conjecture. Our argument combines techniques from ergodic theory, analytic number theory, and higher order Fourier analysis.
Tuesday, February 21, 2017 13:15 A303 Nikos Frantzikinakis University of Crete Ergodicity of the Liouville system implies the Chowla conjecture (part II)
Abstract: The Liouville function assigns the value one to integers with an even number of prime factors and minus one elsewhere. Its importance stems from the fact that several well known conjectures in number theory can be rephrased as conjectural properties of the Liouville function. A well known conjecture of Chowla asserts that the signs of the Liouville function are distributed randomly on the integers, that is, they form a normal sequence of plus and minus ones. Reinterpreted in the language of ergodic theory this conjecture asserts that the "Liouville dynamical system" is a Bernoulli system. We prove that a much weaker property, namely, ergodicity of the "Liouville dynamical system", implies the Chowla conjecture. Our argument combines techniques from ergodic theory, analytic number theory, and higher order Fourier analysis.
Tuesday, March 28, 2017 13:15 A303 George Costakis University of Crete Invariant sets for operators
Abstract: The existence of invariant (closed) sets for linear operators is established for various classes of operators. Operator theorists are interested to such sets because of the still open invariant subspace (and subset) problem on Hilbert spaces. For instance, we show that a Hilbert space operator of the form non-unitary isometry plus compact is far from being weakly supercyclic; therefore every closed projective orbit is invariant. This is in sharp contrast to the case: unitary plus compact.
Monday, May 15, 2017 13:15 A303 Bernard Host Universite Paris-Est Marne-la-Vallee Correlation sequences are nilsequences
Abstract: Many sequences defined as correlations appear to be nilsequences, up to the addition of a small error term. Results of this type hold both in the ergodic and in the finite setting. The proofs follow the same general strategy, although the context and the tools are completely different. I'll try to explain the ideas behind these results. This is a joint work with Nikos Frantzikinakis.
Tuesday, May 23, 2017 12:30 A303 Dirk Pauly Universität Duisburg-Essen Maxwell equations, and their solutions. Some characteristic examples.
Abstract: We will give a simple introduction to Maxwell equations. Concentrating on the static case, we will present a proper $L^2$-based solution theory for bounded weak Lipschitz domains in three dimensions. The main ingredients are a functional analysis toolbox and a sound investigation of the underlying operators gradient, rotation, and divergence. This FA-toolbox is useful for all kinds of partial differential equations as well.
Tuesday, June 13, 2017 15:15 A303 Ilias Tergiakidis Univ. Göttingen A local invariant for a four-dimensional Riemannian manifold
Abstract: The introduction of the Ricci flow equation by Hamilton in the late 80’s and the introduction of new techniques including Ricci flow with surgery by Perelman guided to very important results towards the understanding of the geometry and topology of $3$-dimensional manifolds. A crucial step was to understand the formation of singularities under the Ricci flow and the limits of their parabolic dilations. A natural question to ask is what happens in the four dimensional case. In this talk we will describe a geometric construction, which associates to every point of a $4$-dimensional Riemannian manifold an algebrogeometric object, called the branching curve, We hope that this construction will guide to new techniques for dealing with the formation of singularities in the $4$-dimensional Ricci flow. For any four-dimensional Riemannian manifold $(M,g)$, the exterior square of the tangent space at the point $x \in M$ denoted by $\Lambda^2 T_x M$, has three intrisically defined quadratic forms, $v_x$, $\Lambda^2 g_x$ and $R_x$. The first one is given by the exterior product evaluated at a volume form, the second by the second exterior square of the Riemannian metric $g_x$ and the third one by the Riemann curvature tensor. After complexifying $\Lambda^2 T_x M$ their projectivization defines three quadrics in $\mathbb{P} (\Lambda^2 T_x M \otimes \mathbb{C})$. Specifically, $\mathbb{P}(v_x)$ determines the Grassmann manifold of lines in $\mathbb{P}(T_x M \otimes \mathbb{C})$. For every $x \in M$ , the complete intersection of these three quadrics, corresponds to a singular $K3$ surface. Its resolution of singularities is as branched double cover for the quadric $\mathbb{P} (g_x)$ coming from the metric and living in $\mathbb{P}(T_x M \otimes \mathbb{C})$.
All seminars
Page maintained by Mihalis Kolountzakis, Nikos Frantzikinakis, Mihalis Papadimitrakis
|
2021-05-19 00:02:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.789746880531311, "perplexity": 495.36645644721693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989874.84/warc/CC-MAIN-20210518222121-20210519012121-00198.warc.gz"}
|
https://wp.sciviews.org/bookdown-test/special-blocks.html
|
## 1.11 Special blocks
This is a note.
This is an information.
This is a warning.
This block can be used in case of error.
This is related to Windows.
This is related to MacOS.
This is related to Linux.
This is related to the BioDataScience package.
This is a block2 construct related to SciViews or SciViews::R:
• item 1
• item 2
• item 3
This is a section related to the SciViews Box
|
2022-06-28 22:03:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7368515729904175, "perplexity": 7850.44030224994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103617931.31/warc/CC-MAIN-20220628203615-20220628233615-00005.warc.gz"}
|
https://www.physicsforums.com/threads/relationship-between-linear-and-rotational-variables.222633/
|
# Relationship between Linear and Rotational Variables
1. Mar 17, 2008
### killercatfish
My main confusion is in the proof my professor showed us just before break. He came up with a relationship of (angular acceleration)=radius*(linear acceleration) which doesnt make sense, we are assuming a 90 degree angle, so from r x (angular acceleration) = (linear acceleration) wouldnt we get r*aa=la --> aa=la/r.
I am in need of this to make the conversion in a problem where I can estimate the velocity of an action (and derive the velocity from the acceleration). But have to start with the basic Fnet=Iaa, and the only force is the torque.
Any clarification would be great. THANK YOU!
2. Mar 17, 2008
### rcgldr
You should use radians instead of degrees. Radian is a bit unusual in physics, in that it's a pseudo-unit. When converting from rotational movement to linear movement, "radian" can be simply dropped, and when converting from linear to rotational movement, "radian" can be added.
3. Mar 18, 2008
### tiny-tim
linear acceleration = radius x angular acceleration
Hi killercatfish!
Angular acceleration is 1/time^2.
Linear acceleration is length/time^2.
It should be linear acceleration (tangential, with fixed radius) = radius*angular acceleration.
4. Mar 18, 2008
### Staff: Mentor
No it doesn't. Sounds like you (or he) have it backwards. How did he "prove" this?
That's the correct relationship.
5. Mar 19, 2008
### killercatfish
Here is how I derived angular acceleration to linear acceleration:
here is a link for a clearer image:
http://killercatfish.com/RandomIsh/images/Derivation.png" [Broken]
And this is what I came up with for the impulse:
> tau = Radius*sinθ*Force;
> Fnet = I*alpha;
> Fnet = tau;
> MI := (1/2)*mass*(R^2+R[o]^2);
> Force = MI*ΔV/(Δt*Radius);
> Force*Δt = MI*ΔV/Radius;
here is a link for clearer image:
http://killercatfish.com/RandomIsh/images/Formula.png" [Broken]
Could someone A, let me know if this is correct, and B, help me to understand how this will give me the radius which is the point of breaking?
Thanks!
Last edited by a moderator: May 3, 2017
6. Mar 19, 2008
### tiny-tim
Breaking what? You haven't set out the question …
7. Mar 19, 2008
### killercatfish
HAHA! Oh man, thank you for pointing out the key flaw.. :)
This is to describe the breaking of a piece of toilet paper off the roll. I have read the other post on the site, but it wasnt heavily equation laden.
Thanks!
|
2017-06-22 12:22:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6729187369346619, "perplexity": 2473.5727333834297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319265.41/warc/CC-MAIN-20170622114718-20170622134718-00004.warc.gz"}
|
https://gamedev.stackexchange.com/questions/151145/libgdx-trying-to-create-a-matrix-of-circles
|
LibGDX trying to create a matrix of circles
I am trying to create a circular stage like so:
But I can only make a square:
• Welcome to the Gamedev stack exchange :) Is it possible to post some source code of your existing solution so that we can see where you might be going wrong? – user3797758 Nov 19 '17 at 22:30
Each layer adds 2 * π circles (you can derive this from the circumference = 2 * r * π formula). What you should do is create an array of positions and put the position of the circles in it.
First loop between 0 (inclusively) and R (exclusively) where R is the amount of layers you want and let r be the current layer's index. Then each time create floor(2 * r * π) circles. You can get the positiom of the i-th circle in a specific layer, by using the
x = cos(2 * π / floor(2 * r * π) * i) * r
y = sin(2 * π / floor(2 * r * π) * i) * r
formula.
Together the whole code is
int getPoints(int R) {
points = []
for each r between 0 and R, do
x = cos(2 * π / floor(2 * r * π) * i) * r
y = sin(2 * π / floor(2 * r * π) * i) * r
for each i between 0 and floor(2 * r * π) do
points.push((x, y))
end
end
return points
}
• @OvidioRodriguez This creates a circle with 18 points for r=3, while the image shows only 12. If you want a "real" circle this is the way to go, but it you want that exact board, it would be better to hard-code it. (Or to more clearly define how you want to generate your board. – Will Nov 20 '17 at 4:00
• @Will The OP's image shows 12 circles at r=2, not r=3. The single circle in the center is r=0. – bcrist Jun 5 '18 at 5:45
• @bcrist I don't really understand what you're trying to say with that. My version also generates 1 circle at r=0, 6 at r=1 and 12 at r=2 – Bálint Jun 5 '18 at 6:07
|
2019-01-21 11:38:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5110058188438416, "perplexity": 1024.600208509719}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583792338.50/warc/CC-MAIN-20190121111139-20190121133139-00003.warc.gz"}
|
https://community.hedgedoc.org/t/katex-extension-inconsistent-behavior/483
|
KaTeX Extension Inconsistent Behavior
My version of HedgeDoc is: 1.8.2
What I expected to happen:
Within a math expression, I can use \ce to write chemistry expressions like:
$\ce{H2O + H+ -> H3O+}$
What actually happened:
Sometimes it works, and sometimes it does not, and I have not discovered a pattern. Today, it was working correctly, but when I reloaded the page after switching wifi networks, it stopped working.
here is the display when it is not working:
I am using hedgedoc hosted by my employer at https://pad.gwdg.de/
I am not enough of a developer to investigate this problem on my own. From the KaTex website it seems that \ce requires an ‘extension’ to be installed, but I don’t know why an extension might get loaded sometimes and not others.
Hello @catmarx,
HedgeDoc 1.8.2 is still using MathJax, the switch to KaTeX will only occur with 2.0. It seems to me at least on a quick look, that MathJax also needs an extensions for this to work.
I’ll try to have a closer look at that in the next few days.
Greetings
DerMolly
I just tried to reproduce the mentioned behaviour on our demo instance and was not able to do so. Could you provide us with a working example note on https://pad.gwdg.de/ ?
I looked at your demo instance, and \ce does not appear to be working. Is it working for you? Would it depend on whether the mathjax extension was installed alongside the hedgedoc instance?
I am sorry to say that I cannot provide an example on https://pad.gwdg.de/, since this service is only available to signed-in members of my organization. I understand that, without a working (public) example, there’s probably not much that can be done. Thank you for your time and help anyhow.
No, it isn’t.
This could be the case, but I’m not too sure. Maybe you could ask the admins of https://pad.gwdg.de/?
I ran a GWDG Pad instance locally from the source code. Using \ce in math mode does not work for me, and there is no indication that the mhchem extension is loaded automatically.
What does work though (in every HedgeDoc instance) is adding $\require{mhchem}$ at the start of the document or adding the require in the first TeX-expression. That loads the extension globally and \ce can be used everywhere.
1 Like
|
2021-10-18 13:26:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5254780650138855, "perplexity": 1126.5340527567307}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585203.61/warc/CC-MAIN-20211018124412-20211018154412-00514.warc.gz"}
|
https://math.stackexchange.com/questions/3266593/suppose-that-f-mathbb-rn-to-mathbb-rn-is-a-bijection-and-n-geq2-can
|
# Suppose that $f: \mathbb R^n \to \mathbb R^n$ is a bijection and $n\geq2$. Can $f$ send every open set onto non-open set?
Suppose that $$f: \mathbb R^n \to \mathbb R^n$$ is a bijection and $$n\geq2$$. Can $$f$$ send every open set onto non-open set?
I do not know what exactly to write about this problem. Did I try anything? No, I do not have a good idea. Why am I asking this? Because here is this very highly upvoted question of Willie Wong that I think of, and, it seems to me as a good start to investigate what exactly can bijections "do" and what they can´t, so, as a starting point, I decided to ask this question.
Edit: Thomas wrote a useful comment that $$\emptyset$$ and $$\mathbb R^n$$ are mapped onto open sets. So to exclude trivialities, suppose that from consideration we exclude the empty set and the whole space $$\mathbb R^n$$, to make this more interesting.
• @ThomasAndrews I made an edit, to make this a little more interesting. But even now it could be trivial. – Grešnik Jun 18 at 16:20
• Consider the map which maps tuples of irrational elements $(i_1, \ldots, i_n)$ to $(i_1 + \sqrt{2}, \ldots, i_n + \sqrt{2})$, tuples of rational numbers $(q_1, \ldots, q_n)$ to $(q_1 + 1, \ldots, q_n + 1)$ and the other 'mixed' elements to themselves. It already does a good job at not mapping opens to opens, although there exist opens which do get mapped to opens. – Alexander Geldhof Jun 18 at 16:21
• @AlexanderGeldhof You can find any open set $U$ such that if $x\in U$ then $x+(\alpha,\alpha,\dots,\alpha)\in U$ for any real $\alpha.$ Then your $f$ would send $U$ to $U,$ so it doesn't work. – Thomas Andrews Jun 18 at 16:24
• The set $\mathbb R^n\setminus\{\mathbf v\}$ is always open, and always goes to the open set, $\mathbb R^n\setminus\{f(\mathbf v)\}.$ @AnteP. – Thomas Andrews Jun 18 at 16:27
• There exist bijections which map no open ball of finite radius to an open set. – Alexander Geldhof Jun 18 at 16:31
The set $$\mathbb R^n\setminus\{\mathbf v\}$$ is always open, and always goes to the open set, $$\mathbb R^n\setminus\{f(\mathbf v)\}.$$
More generally, if $$F\subseteq \mathbb R^n$$ is finite then $$\mathbb R^n\setminus F$$ is open, and its image $$\mathbb R^n\setminus f(F)$$ is open.
• Can we generalize that to: If $S$ is closed and bounded and such that $\mathbb R^n \setminus S$ is open then the image of $\mathbb R^n \setminus S$ is open? – Grešnik Jun 18 at 16:32
• @AnteP. No, you can't, unless $f$ is open. – user10354138 Jun 18 at 16:36
• Actually, for any closed set $S$, bounded or otherwise, $\mathbb R^n\setminus S$ is open. I n general topology, that's basically the definition of closed. But it is not true that $f(S)$ will be closed. You can rephrase your question to be for a bijection $f$ such that for each closed $S$ (other than $\emptyset,\mathbb R^n,$) $f(S)$ is not closed. – Thomas Andrews Jun 18 at 16:36
• Is there any book where there is material only about bijections? – Grešnik Jun 18 at 16:42
• I know of no books solely or even predominantly about bijections. Sorry. @AnteP. – Thomas Andrews Jun 18 at 18:15
Reading over the OP's question and their comments, it becomes evident that they are interested in 'scramble-it-up' bijections and @AlexanderGeldhof's comment. Here we want to offer some food for thought to the OP's curiosity; we'll only be examining bijections $$f: \Bbb R \to \Bbb R$$.
The first thing to note, is that if $$\tau: \Bbb R \to G$$ is a bijection onto a set $$G$$, then every bijection $$g: G \to G$$ can be mapped to a bijection
$${\tau}^{-1} \circ g \circ \tau: \Bbb R \to \Bbb R$$
and every bijection mapping $$\Bbb R$$ to $$\Bbb R$$ has this form.
A second point of interest is that by using the axiom of choice, the existence of bijections can be postulated yet you can't specify an algorithim to 'pin things down'; see
Hamel Basis
You can 'accept' the existence of bijective linear transformations of $$\Bbb R$$ over $$\Bbb Q$$ by matching up any two different Hamel Bases, but don't 'strain your brain' trying to 'see them' (c.f. this).
Finally, it seems only fair to define a transformation on $$\Bbb R$$ that 'rips apart' bounded open intervals.
Define $$f\colon\mathbb{R}\to\mathbb{R}$$ as follows:
$$f(x) = \left\{\begin{array}{lr} \;\;\;x+1\, \;\;\;\text{ |} & \text{when } x \text{ is a rational number}\\ \;\;\;x-1\, \;\;\;\text{ |} & \text{when } x \text{ is an irrational number} \end{array}\right\}$$
|
2019-07-23 07:42:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 23, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8449520468711853, "perplexity": 446.4335915824966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529007.88/warc/CC-MAIN-20190723064353-20190723090353-00019.warc.gz"}
|
http://mathoverflow.net/questions/143982/under-what-conditions-can-interval-exchanges-be-approximated-by-periodic-maps
|
Under what conditions can interval exchanges be approximated by periodic maps?
Under what conditions can an interval exchange be approximated by periodic maps? (in the weak topology for the Lebesgue measure on $[0,1]$ ).
Are there non-trivial examples of periodically approximable interval exchanges (besides circle rotations)
-
However, the condition that a transformation should be an interval exchange is far stronger than is necessary for it to be a weak limit of periodic transformations (without additional structure). Every invertible measure-preserving transformation is such a limit: if $T \colon [0,1] \to [0,1]$ is any invertible measure-preserving transformation then it can be approximated arbitrarily closely in the weak topology by periodic measure-preserving transformations. This result goes back to work of Halmos and Rokhlin in the 1940s: it can be found in Halmos' book Lectures on Ergodic Theory and is a direct consequence of Rokhlin's lemma. For an alternative treatment you could try Steve Alpern's article New Proofs that Weak Mixing is Generic, Inventiones Mathematicae 32 (1976) 263-279. You could view this result as saying that the weak topology does not tell you an awful lot about dynamics: in a way this is intuitive, since knowing that the sets $T_1A$ and $T_2A$ are similar does not help you to understand whether or not $T_1^nA$ and $T_2^nA$ are similar when $n$ is very large.
|
2015-11-26 08:45:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7931653261184692, "perplexity": 111.7925407992687}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446997.59/warc/CC-MAIN-20151124205406-00261-ip-10-71-132-137.ec2.internal.warc.gz"}
|
http://techtagg.com/standard-error/standard-error-of-p-hat-equation.html
|
Home > Standard Error > Standard Error Of P Hat Equation
# Standard Error Of P Hat Equation
## Contents
From Section 6.2, we know that the distribution of a binomial random variable becomes bell-shaped as n increases. Loading... Sign in to make your opinion count. Calculating standard error and p-hat values in statistics!?
The critical value is a factor used to compute the margin of error. jbstatistics 438,803 views 5:44 Statistics: Sample vs. Suppose the proportion of all college students who have used marijuana in the past 6 months is p = .40. Standard Error of Sample Estimates Sadly, the values of population parameters are often unknown, making it impossible to compute the standard deviation of a statistic. my company
## Standard Error Of P Hat Equation
Then our estimate is of the graduating class plan to go to graduate school. Population Mean - Duration: 6:42. Khan Academy 326,894 views 14:28 Stats: Finding Probability Using a Normal Distribution Table - Duration: 11:23.
The distribution of responses (yes, no) for this population are shown in the above figure as a bar graph. to seconds? 50 answers 15/48 changed to a percent? 16 answers Terms Privacy AdChoices RSS Skip navigation UploadSign inSearch Loading... Please try again later. How To Calculate Standard Error Without Standard Deviation The Variability of the Sample Proportion To construct a confidence interval for a sample proportion, we need to know the variability of the sample proportion.
Khan Academy 162,286 views 15:03 How to calculate Margin of Error Confidence Interval for a population proportion - Duration: 8:04. Standard Error Of P Hat Formula BurkeyAcademy 5,450 views 16:52 P-Value Easy Explanation - Duration: 10:20. Because the mean of the sampling distribution of (p hat) is always equal to the parameter p, the sample proportion (p hat) is an UNBIASED ESTIMATOR of (p). https://answers.yahoo.com/question/index?qid=20100313103237AAiMbs5 The survey polled 436 employers with workforces of all sizes. (Source: Chicago Tribune) Stem cell, marijuana proposals lead in Mich.
Loading... Calculate Standard Error Regression b. All Rights Reserved. We will use 10 for our discussions.] Using this, we can estimate the true population proportion, p, by $$\hat{p}$$ and the true standard deviation of p by $$s.e.(\hat{p})=\sqrt{\frac {p(1-p)}{n}}$$, where s.e.(
## Standard Error Of P Hat Formula
It has an approximate normal distribution with mean p = 0.38 and standard error equal to: (or about 1.5%). https://faculty.elgin.edu/dkernler/statistics/ch08/8-2.html The sampling method must be simple random sampling. Standard Error Of P Hat Equation Note the implications of the second condition. How To Calculate Standard Error In Excel Up next p vs phat - Duration: 3:30.
RumseyList Price: $19.99Buy Used:$0.01Buy New: $8.46Casio fx-9860GII Graphing Calculator, BlackList Price:$79.99Buy Used: $56.99Buy New:$60.95Approved for AP Statistics and Calculus About Us Contact Us Privacy Terms of Use Standard Error of the Sample Proportion$SE(\widehat{p})= \sqrt{\frac {p(1-p)}{n}}$If $$p$$ is unknown, estimate $$p$$ using $$\widehat{p}$$The box below summarizes the rule of sample proportions: Characteristics of the Distribution of Sample ProportionsGiven The standard error is important because it is used to compute other measures, like confidence intervals and margins of error. The standard deviation of the sampling distribution of is Now that we know how is distributed, we can use that information to find some probabilities. How To Calculate Standard Error In R
The approximate normal distribution works because the two conditions for the CLT are met: And because n is so large (1,000), the approximation is excellent. Using the t Distribution Calculator, we find that the critical value is 2.58. Suppose she shoots 100 free-throws during practice. http://techtagg.com/standard-error/linear-regression-standard-error-equation.html In the next section, we work through a problem that shows how to use this approach to construct a confidence interval for a proportion.
Test Your Understanding Problem 1 Which of the following statements is true. Calculate Standard Error Of Estimate The distribution of these sample proportions is shown in the above figure. Let's focus for a bit on x, the number with that characteristic.
## Since we don't know the population standard deviation, we'll express the critical value as a t statistic.
The standard error (SE) can be calculated from the equation below. Find the margin of error. Since we are trying to estimate a population proportion, we choose the sample proportion (0.40) as the sample statistic. Calculate Standard Error Confidence Interval The range of the confidence interval is defined by the sample statistic + margin of error.
Select a confidence level. You can only upload files of type PNG, JPG, or JPEG. Loading... Checking the distribution: So the distribution should be normally distributed, with mean and standard deviation Using StatCrunch, with p = 75/100 = 0.75: So the probability that she makes less than
In this analysis, the confidence level is defined for us in the problem. The three histograms below demonstrate the effect of the sample size on the distribution shape. Out of a random sample of 100 students, what is the probability that less than 60 receive a C or better? Consider estimating the proportion p of the current WMU graduating class who plan to go to graduate school.
Since we do not know the population proportion, we cannot compute the standard deviation; instead, we compute the standard error. Proportions are the number with that certain characteristics divided by the sample size. Find standard deviation or standard error. We then determine $$s.e.(\hat{p})=\sqrt{\frac {p(1-p)}{n}}=\sqrt{\frac{0.4(1-0.4)}{200}}=0.0346$$.
The standard error is computed from known sample statistics. Compute margin of error (ME): ME = critical value * standard error = 2.58 * 0.012 = 0.03 Specify the confidence interval. You can only upload a photo (png, jpg, jpeg) or a video (3gp, 3gpp, mp4, mov, avi, mpg, mpeg, rm). By how much?
Welcome to STAT 800! Published on Apr 28, 2015how to use the formula to find the standard error of the difference of two proportions Category Education License Standard YouTube License Show more Show less Loading... Then using Standard Normal Table Prob($$\hat{p}$$ < .32) = Prob(Z <. -2.31) = 0.0104. describe the sampling distribution of a sample proportion compute probabilities of a sample proportion The Sample Proportion Consider these recent headlines: Hispanics See Their Situation in U.S.
What is the 99% confidence interval for the proportion of readers who would like more coverage of local news? (A) 0.30 to 0.50 (B) 0.32 to 0.48 (C) 0.35 to 0.45 statisticsfun 578,461 views 5:05 7.2 Sampling Distribution of p-hat - Duration: 7:54. Let's see a couple examples. On the average, a random variable misses the mean by one SD.
Example 1 In a typical class, about 70% of students receive a C or better. You can only upload photos smaller than 5 MB. And since the population is more than 20 times larger than the sample, we can use the following formula to compute the standard error (SE) of the proportion: SE = sqrt Trending Now Blake Shelton Evan Peters Kanye West Illinois Lottery Cheap Airline Tickets Reverse Mortgage Andrea Tantaros Hillary Clinton Oakland Raiders Auto Insurance Quotes Answers Best Answer: Estimated p = 100
© 2017 techtagg.com
|
2017-10-22 20:58:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6886604428291321, "perplexity": 1183.059923863602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825464.60/warc/CC-MAIN-20171022203758-20171022223758-00702.warc.gz"}
|
https://proxies123.com/tag/fafunctional/
|
## fa.functional analysis – better lower (and upper) bound for \$i\$’s moment of function of binomial random variable with \$i = frac{1}{j}, j in mathbb{N}\$
I want to derive a lower bound for $$Eleft(left(frac{X}{k-X}right)^{i}right)$$ with $$X sim Bin_{(k-1),p}$$ and $$k in mathbb{N}$$. So far I could prove that
$$begin{equation} Eleft(frac{X}{k-X}right) = frac{p}{1-p}-frac{p^k}{1-p} end{equation}$$
by law of unconcious statistician and the identity $$binom{k-1}{l}=frac{k-l}{l}binom{k-1}{l-1}$$.
Now, I use as lower bound the fact that:
$$begin{equation} frac{1}{k^{i}}Eleft(X^{i}right)leq Eleft(left(frac{X}{k-X}right)^{i}right) leq Eleft(X^{i}right) end{equation}$$
Then:
$$begin{equation} Eleft(X^{i}right) = sum_{l=0}^{k-1}l^{i}binom{k-1}{l}p^l(1-p)^{k-1-l} \= sum_{l=1}^{k-1}l^{i}binom{k-1}{l}p^l(1-p)^{k-1-l}geq sum_{l=1}^{k-1}binom{k-1}{l}p^l(1-p)^{k-1-l} = 1- (1-p)^{k-1} end{equation}$$
So this is my lower bound. For the upper bound I use Jensen’s inequality on a concave function:
$$begin{equation} Eleft(left(frac{X}{k-X}right)^{i}right)leq left(Eleft(frac{X}{k-X}right)right)^{i} = left(frac{p}{1-p}-frac{p^k}{1-p}right)^{i} end{equation}$$
Thus, I have:
$$begin{equation} frac{1}{k^{i}}left(1- (1-p)^{k-1}right) < Eleft(left(frac{X}{k-X}right)^{i}right) < minBig{left(frac{p}{1-p}-frac{p^k}{1-p}right)^{i}, 1- (1-p)^{k-1}Big} end{equation}$$
Do you have anything better than that?
## fa.functional analysis – Condition on kernel convolution operator
I am studying a about O’Neil’s convolution inequality. It is stated that for $$Phi_1$$ and $$Phi_2$$ be $$N$$-functions, with $$Phi_i(2t)approx Phi_i(t), quad i=1,2$$ with $$tgg 1$$ and $$k in M_+(R^n)$$ is the kernel of a convolution operator.
The $$rho$$ is an r.i. norm on $$M_+(R^n)$$ given in terms of the r.i norm $$bar rho$$ on $$M_+(R_+)$$ by
$$rho(f)=bar rho(f^*), quad f in M_+(R_+)$$
Denote Orlicz gauge norms, $$rho_{Phi}$$, for which
$$(bar rho_{Phi})_dapprox bar rho_{Phi}left(int_0^t h/tright).$$
It is stated that
$$rho_{Phi_1}(k+f)leq C rho_{Phi_2}(f)$$
if
$$(i) quad bar rho_{Phi_1}left(frac 1t int_0^t k^*(s)int_0^sf^*right)leq C bar rho_{Phi_2}(f^*)$$
$$(ii) quad bar rho_{Phi_1}left (frac 1tint_0^t f^*(s)int_0^sk^*right)leq C bar rho_{Phi_2}(f^*)$$
$$(iii) quad bar rho_{Phi_1}left(int_t^{infty}k^*f^*right)leq C bar rho_{Phi_2}(f^*).$$
I cannot understand under which conditions on kernel those inequalities (i),(ii) and (iii) would hold.
## fa.functional analysis – Topological constraints on a compact convex set admitting a strictly convex and subdifferentiable real function
It is a theorem of Hervé that
A compact convex set $$K$$ admits a strictly convex and continuous real function only if $$K$$ is metrizable. (The converse is also true.)
I’m wondering if any results of this sort are known if continuity is replaced with subdifferentiability. That is,
What topological constraints must a compact convex set $$K$$ meet in order to admit a strictly convex function that has a subgradient at every point in $$K$$? Does $$K$$ have to be first countable, for instance?
## fa.functional analysis – Trying to recover a proof of the spectral mapping theorem from old thesis/paper with continuous functional calculus
In my research group in functional analysis and operator theory (where we do physics and computer science as well), we saw in an old Russian combination paper/PhD thesis in our library a nice claim about the spectral mapping theorem’s possible proof. Let me attempt to bring the context here. I should mention there are some nice results in this paper that I wanted to use and generalize for my own research, I hope to accurately bring the context below.
They bring up the continuos functional calculus $$phi: C(sigma(A)) rightarrow L(H)$$ for a bounded, self-adjoint operator on a Hilbert space A. This is an algebraic *-homomorphism from the continuous functions on the spectrum of $$A$$ to the bounded operators on $$H$$. The paper’s spectral mapping theorem basically says in this context $$sigma(phi(f)) =f(sigma(A))$$ and the paper says something nice about this. It does not actually give a proof but it says there is a nice way to prove it using both inclusions with the inclusion $$f(sigma(A)) subseteq sigma(phi(f))$$ sketched in the following way: the author supposes $$lambda in f(sigma(A))$$ and says “it is very obvious” that there exists a vector $$h in H$$ with $$|h|=1$$ such that $$|phi(f)-lambda)h|$$ is arbitrarily small which shows $$lambda in sigma(phi(f))$$ which shows the desired inclusion.
The author says that it is “very obvious” to show this but I am a bit stumped. The way I would construct the continuous functional calculus is to start with polynomials and then generalize to $$C(sigma(A))$$ based on the Weierstrass approximation theorem on the real compact set $$sigma(A)$$ and the BLT theorem. The inclusion $$sigma(phi(f)) subseteq f(sigma(A))$$ is, I think, quite obvious but the other one in the above context has me stumped. Since I am already working on generalizing some results, I would really love to know how the author proves the inclusion with the method of showing the mentioned vector exists. Maybe use approximation in some way, but even though I suspect it is simple, I still do not see the author’s proposed proof. Can someone here please help me recover it? I thank all interested persons.
## fa.functional analysis – A density question for the Hilbert transform
Let $$mathscr Hf$$ denote the Hilbert transform of a function $$f$$ defined on the real-line $$mathbb R$$. Are the set of functions
$${(f+mathscr Hf)_{|_{(0,1)}},:, f in C^{infty}(mathbb R)quad text{and}quad textrm{supp} f Subset (0,infty)}$$
dense in $$L^2((0,1))$$?
## fa.functional analysis – Dimensional scaling implementation
I’m not familiar with the concept of dimensional scaling at all. Anyone please help me with this problem.
I need to understand how they obtained eq(16) from eq(15) in the paper here so I can do the same when I change Q0 from a constant to a variable function in time. Summary as below:
Scaling:
$$ξ=x/l,ζ=z/h_* ,τ=t/t_* ,γ=l/l_* ,λ=h/h_* ,Π=p/p_* ,Ψ=q/q_* ,bar Ψ=bar q ̅/q_* ,Ω=w/w_* ,bar Ω = bar w /w_* (15)$$
“The six characteristic quantities that have been introduced are identified by setting to unity, six of the dimensionless groups that emerge from the governing equations when they are expressed in terms of the dimensionless variables. The remaining two groups are numbers that control the problem. After some algebraic manipulation, we obtain the following expressions for the characteristic quantities:”
$$l_*=frac{πH^{4}∆σ^{4}}{4E^{‘3} μQ_0 }$$, $$h_*=H$$, $$t_*=frac{π^{2} H^{6} ∆σ^{5}}{4μE^{‘4}Q_0^{2} }$$ , $$p_*=∆σ$$, $$q=frac{Q_0}{2H}$$, $$w=frac{πH∆σ}{2E^{‘}}=M_0∆σ (16)$$
Thanks a lot.
## fa.functional analysis – Is the restriction map \$C^1ni fmapstoleft.fright|_K\$ a continuous map?
Let $$E$$ be a $$mathbb R$$-Banach space, $$Thetasubseteq C^{0,:1}(E,E)$$ be a $$mathbb R$$-Banach space and $$iota$$ be a continuous embedding of $$Theta$$ into $$C^1(E,E)$$.
I would like to show that, given a compact $$Ksubseteq E$$, there is a $$cge0$$ with $$sup_{xin K}left|{rm D}(iota f)(x)right|_{mathfrak L(E)}le cleft|fright|_{Theta};;;text{for all }finTheta.tag1$$
If I’m not missing something, $$C^1(K,E):=left{left.gright|_K: gin C^1(U,E)text{ for some open neighborhood }Utext{ of }Kright}$$ equipped with $$left|gright|_{C^1(K,:E)}:=maxleft(sup_{xin K}left|g(x)right|_E,sup_{xin K}left|{rm D}g(x)right|_{mathfrak L(E)}right);;;text{for }gin C^1(K,E)$$ should be a $$mathbb R$$-Banach space. If that’s true, we may be able to show $$Thetani fmapstoleft.(iota f)right|_Ktag2$$ is a continuous embedding of $$Theta$$ into $$C^1(K,E)$$, from which the desired claim would follow.
Can we show this?
## fa.functional analysis – Analytic functions where all derivatives vanish at infinity and which are bounded
Yes.
Let $$phi$$ be any smooth function with compact support on the interval $$(-1,1)$$.
Set $$f$$ to be the inverse Fourier transform of $$phi$$.
Since $$phi$$ is in Schwartz class, so is $$f$$, and all of its derivatives decay to zero as one approach $$pminfty$$.
You can estimate
$$|f^{(k)}(x) | lesssim | |xi|^k phi(xi) |_{L^1} leq 2 |phi|_{L^infty} =: C$$
$$f$$ is analytic by Paley-Wiener.
## fa.functional analysis – Solution space of first order PDE
We consider the following first order PDE
$$(partial_x + ipartial_y) u(x,y)+ begin{pmatrix} 0 & cos(2x+y) \ cos(2x-y) & 0 end{pmatrix}u(x,y) =0,$$
where $$u in mathbb C^2$$ is vector-valued and periodic on $$(0,2pi)^2.$$
I ask: What is the dimension of the solution space to that equation?-My conjecture is that it is two-dimensional.
The reason is that if
the equation was
$$(partial_x + ipartial_y) u(x,y)+ begin{pmatrix} 0 & cos(2x+y) \ cos(2x+y) & 0 end{pmatrix}u(x,y) =0,$$
then we could explicitly state the two-dimensional solution space
$$u(x,y)=operatorname{exp}left( frac{i}{2i-1} begin{pmatrix} 0 & sin(2x+y) \ sin(2x+y) & 0 end{pmatrix}right)x_0$$
for any $$x_0 in mathbb{C}^2.$$
## fa.functional analysis – Definition question: asymptotic-\$ell_{p}\$ versus coordinate-free asymptotic-\$ell_{p}\$
Let $$(e_{j})_{j=1}^{infty}$$ be a basis for the Banach space $$X$$. If there exist constants $$zeta_{1},zeta_{2}>0$$ such that for all $$Ninmathbb{N}$$,
$$begin{equation*} zeta_{1}left(sum_{i=1}^{N}|x_{i}|^{p}right)^{frac{1}{p}}leqleftVertsum_{i=1}^{N}x_{i}rightVertleqzeta_{2}left(sum_{i=1}^{N}|x_{i}|^{p}right)^{frac{1}{p}} end{equation*}$$
for all block sequences $$(x_{i})_{i=1}^{N}$$ that satisfy $$M=M_{N}leqmintext{supp}(x_{1})$$, then $$X$$ is said to be (stabilized) asymptotic-$$ell_{p}$$ with respect to $$(e_{j})_{j=1}^{infty}$$.
There is also coordinate-free generalization of a Banach space being asymptotic-$$ell_{p}$$ without reference to a basis. In this situation, there exist $$zeta_{1},zeta_{2}>0$$ such that for all $$Ninmathbb{N}$$, there are subspaces $$Y_{1},ldots,Y_{N}$$ of finite-codimension such that
$$begin{equation*} zeta_{1}left(sum_{i=1}^{N}|y_{i}|^{p}right)^{frac{1}{p}}leqleftVertsum_{i=1}^{N}y_{i}rightVertleqzeta_{2}left(sum_{i=1}^{N}|y_{i}|^{p}right)^{frac{1}{p}} end{equation*}$$
for all $$y_{i}in Y_{i}$$. My question is the following: why do we want the subspaces $$Y_{i}$$ to have finite codimension? In particular, the block vectors $$x_{i}$$ in the first definition are members of finite-dimensional subspaces of $$X$$ (not finite co-dimensional subspaces) and I am wondering why a coordinate-free generalization of the first definition wouldn’t take the following form:
$$X$$ is coordinate-free asymptotic-$$ell_{p}$$ if there exist $$zeta_{1},zeta_{2}>0$$ such that for all $$Ninmathbb{N}$$ there exist pairwise disjoint finite dimensional subspaces $$Y_{1},ldots,Y_{N}$$ of $$X$$ such that
$$begin{equation*} zeta_{1}left(sum_{i=1}^{N}|y_{i}|^{p}right)^{frac{1}{p}}leqleftVertsum_{i=1}^{N}y_{i}rightVertleqzeta_{2}left(sum_{i=1}^{N}|y_{i}|^{p}right)^{frac{1}{p}} end{equation*}$$
for all $$y_{i}in Y_{i}$$.
|
2020-10-22 21:35:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 114, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9018983840942383, "perplexity": 232.03903038943048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880038.27/warc/CC-MAIN-20201022195658-20201022225658-00550.warc.gz"}
|
https://kops.uni-konstanz.de/entities/publication/d56dbd27-bf8e-44c7-8794-7abccce4c01d
|
## Problems in the Identification of Application Areas of Hollow Spheres and Hollow Sphere Structures
No Thumbnail Available
##### Files
There are no files associated with this item.
2009
##### Publication type
Contribution to a collection
Published
##### Published in
Multifunctional metallic hollow sphere structures : manufacturing, properties and application / Öchsner, Andreas; Augustin, Christian (ed.). - Berlin : Springer, 2009. - pp. 195-212. - ISBN 978-3-642-00490-2
##### Abstract
Victor Hugo (1802-1865) is attributed the verifiably false quotation that nothing is as powerful as an idea, whose time has come. Be that as it may, this quotation is at first a causality statement, which combines the presence of a certain idea in connection with specific basic conditions (time) as almost inevitable for a certain successful development. At first thought this linkage, that if an idea finds its perfect timing it will be successful, seems to be evident. By “time” we not only mean the chronological period, but as an abstract category of environmental conditions that occur at this time at a specific place.
900 History
##### Cite This
ISO 690AUGUSTIN, Christian, 2009. Problems in the Identification of Application Areas of Hollow Spheres and Hollow Sphere Structures. In: ÖCHSNER, Andreas, ed., Christian AUGUSTIN, ed.. Multifunctional metallic hollow sphere structures : manufacturing, properties and application. Berlin:Springer, pp. 195-212. ISBN 978-3-642-00490-2. Available under: doi: 10.1007/978-3-642-00491-9_11
BibTex
@incollection{Augustin2009Probl-58212,
year={2009},
doi={10.1007/978-3-642-00491-9_11},
title={Problems in the Identification of Application Areas of Hollow Spheres and Hollow Sphere Structures},
isbn={978-3-642-00490-2},
publisher={Springer},
booktitle={Multifunctional metallic hollow sphere structures : manufacturing, properties and application},
pages={195--212},
editor={Öchsner, Andreas and Augustin, Christian},
author={Augustin, Christian}
}
RDF
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:bibo="http://purl.org/ontology/bibo/"
xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:void="http://rdfs.org/ns/void#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#" >
<dc:creator>Augustin, Christian</dc:creator>
<dcterms:issued>2009</dcterms:issued>
<foaf:homepage rdf:resource="http://localhost:8080/"/>
<dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/32"/>
<dcterms:abstract xml:lang="eng">Victor Hugo (1802-1865) is attributed the verifiably false quotation that nothing is as powerful as an idea, whose time has come. Be that as it may, this quotation is at first a causality statement, which combines the presence of a certain idea in connection with specific basic conditions (time) as almost inevitable for a certain successful development. At first thought this linkage, that if an idea finds its perfect timing it will be successful, seems to be evident. By “time” we not only mean the chronological period, but as an abstract category of environmental conditions that occur at this time at a specific place.</dcterms:abstract>
<dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/32"/>
<dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-07-29T09:06:07Z</dc:date>
<void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
<dcterms:title>Problems in the Identification of Application Areas of Hollow Spheres and Hollow Sphere Structures</dcterms:title>
<dc:language>eng</dc:language>
<dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-07-29T09:06:07Z</dcterms:available>
<dc:contributor>Augustin, Christian</dc:contributor>
<bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/58212"/>
</rdf:Description>
</rdf:RDF>
|
2023-03-24 14:20:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5794397592544556, "perplexity": 4665.846803279694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945282.33/warc/CC-MAIN-20230324113500-20230324143500-00580.warc.gz"}
|
http://eprints.adm.unipi.it/1698/
|
# Non-Abelian duality from vortex moduli: a dual model of color-confinement
Eto, Minoru and Ferretti, Luca and Konishi, Kenichi and Marmorini, Giacomo and Nitta, Muneto and Ohashi, Keisuke and Vinci, Walter and Yokoi, Naoto (2007) Non-Abelian duality from vortex moduli: a dual model of color-confinement. Nuclear Physics B, 780 (3). pp. 161-187. ISSN 1873-1562
Full text not available from this repository.
## Abstract
It is argued that the dual transformation of non-Abelian monopoles occurring in a system with gauge symmetry breaking G -> H is to be defined by setting the low-energy H system in Higgs phase, so that the dual system is in confinement phase. The transformation law of the monopoles follows from that of monopole-vortex mixed configurations in the system (with a large hierarchy of energy scales, v_1 >> v_2) G -> H -> 0, under an unbroken, exact color-flavor diagonal symmetry H_{C+F} \sim {\tilde H}. The transformation property among the regular monopoles characterized by \pi_2(G/H), follows from that among the non-Abelian vortices with flux quantized according to \pi_1(H), via the isomorphism \pi_1(G) \sim \pi_1(H) / \pi_2(G/H). Our idea is tested against the concrete models -- softly-broken {\cal N}=2 supersymmetric SU(N), SO(N) and USp(2N) theories, with appropriate number of flavors. The results obtained in the semiclassical regime (at v_1 >> v_2 >> \Lambda) of these models are consistent with those inferred from the fully quantum-mechanical low-energy effective action of the systems (at v_1, v_2 \sim \Lambda).
Item Type: Article Imported from arXiv Area02 - Scienze fisiche > FIS/02 - Fisica teorica, modelli e metodi matematici Dipartimenti (from 2013) > DIPARTIMENTO DI FISICA dott.ssa Sandra Faita 04 Feb 2014 15:16 04 Feb 2014 15:16 http://eprints.adm.unipi.it/id/eprint/1698
### Repository staff only actions
View Item
|
2017-09-23 11:01:19
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8829380869865417, "perplexity": 7087.138469984693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689624.87/warc/CC-MAIN-20170923104407-20170923124407-00596.warc.gz"}
|
https://www.physicsforums.com/threads/primitive-root-of-unity.751370/
|
# Primitive Root of Unity
## Homework Statement
In F17, 2 is a primitive 8th root of unity. Evaluate f(x) = 7x3+8x2+3x+5 at the eight powers of 2 in F17. Verify that the method requires at most 16 multiplications in F17.
## Homework Equations
You can can more clearly see the theorem on page 376-378 and the problem is on page 382 #6:
http://igortitara.files.wordpress.com/2010/04/a-concrete-introduction-to-higher-algebra1.pdf
## The Attempt at a Solution
I was able to find that the d=3, but am unclear on how I evaluate f(x) based of Theorem 3.
Last edited:
STEMucator
Homework Helper
## Homework Statement
In F17, 2 is a primitive 8th root of unity. Evaluate f(x) = 7x3+8x2+3x+5 at the eight powers of 2 in F17. Verify that the method requires at most 16 multiplications in F17.
## Homework Equations
You can can more clearly see the theorem on page 376-378 and the problem is on page 382 #6:
http://igortitara.files.wordpress.com/2010/04/a-concrete-introduction-to-higher-algebra1.pdf
## The Attempt at a Solution
I was able to find that the d=3, but am unclear on how I evaluate f(x) based of Theorem 3.
The question states that ##2## is a primitive ##2^3##'th root of unity, that is ##2^8 = e = 1##.
You need to evaluate ##f(2), f(2^2), f(2^3), ... , f(2^8)##. This requires at most ##2^r(r-1)## multiplications, which works out to:
##2^r(r-1) = 2^3(3-1) = 8(2) = 16##
Last edited:
|
2021-12-01 18:43:32
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8863423466682434, "perplexity": 1220.885496097439}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360881.12/warc/CC-MAIN-20211201173718-20211201203718-00092.warc.gz"}
|
http://mathhelpforum.com/statistics/105222-how-fill-table.html
|
# Math Help - How to fill in this table?
1. ## How to fill in this table?
Hope it belongs to the right section....Following is a table.Here is the question:
Fill in the remaining areas by real no.s such that the sum of the numbers in any three neighboring squares is constant and sum of all no.s is 210.
|_|_|26|_|_|_|12|_|_|_|
Is it only by trial & error?
Sorry,dont know how to make tables
2. Originally Posted by anshulbshah
Hope it belongs to the right section....Following is a table.Here is the question:
Fill in the remaining areas by real no.s such that the sum of the numbers in any three neighboring squares is constant and sum of all no.s is 210.
|_|_|26|_|_|_|12|_|_|_|
Is it only by trial & error?
Sorry,dont know how to make tables
Filling in the blanks:
.a.b.26.c.d.e.12.f.g.h.
Resulting equations:
a+b+26 = x
b+26+c=x
26+c+d=x
c+d+e=x
d+e+12=x
e+12+f=x
12+f+g=x
f+g+h=x
a+b+26+c+d+e+12+f+g+h = 210
You have nine equations and nine unknowns.
There are several ways to attack this:
You can remove the x by subtracting the adjoining pairs.
a+b+26 = x
b+26+c=x
a - c = 0
26+c+d=x
c+d+e=x
26 - e =0
d+e+12=x
e+12+f=x
d - f =0
12+f+g=x
f+g+h=x
12 - h = 0
That's a start.
and continue with every other equation
a+b+26 = x
26+c+d=x
a+b = c+d
26+c+d=x
d+e+12=x
26+c = e +12
etc..
Or create an augmented matrix and reduce to a triangular matrix.
3. Hello, anshulbshah!
Fill in the remaining cells with real nnumbers
so that the sum of three consecutive numbers is constant
and sum of all numbers is 210.
. . $\begin{array}{|c|c|c|c|c|c|c|c|c|c|} \hline
\;\;& \;\;& 26 & \;\; & \;\; & \;\; & 12 & \;\; & \;\;& \;\; \\ \hline \end{array}$
We are given: . $\begin{array}{|c|c|c|c|c|c|c|c|c|c|} \hline
a & b & 26 & c & d & e & 12 & f & g & h \\ \hline \end{array}$
We have: . $a+b+26 \:=\:b+26+c \quad\Rightarrow\quad a \:=\:c$
We have: . $c+d+e \:=\:d+e+12 \quad\Rightarrow\quad c \:=\:12$
We have: . $12+f+g \:=\:f+g+h \quad\Rightarrow\quad h \:=\: 12$
. . Hence: . $a \:=\:c\:=\:h \:=\:12$
Answer (so far): . $\begin{array} {|c|c|c|c|c|c|c|c|c|c|} \hline 12 & b & 26 & 12 & d & e & 12 & f & g & 12 \\ \hline \end{array}$
We have: . $26 + 12 + d \:=\:12+d+e \quad\Rightarrow\quad e \:=\:26$
We have: . $e+12+f \:=\:12+f+g \quad\Rightarrow\quad e \:=\:g$
. . Hence: . $e \:=\:g\:=\:26$
Answer (so far): . $\begin{array}{|c|c|c|c|c|c|c|c|c|c|} \hline 12 & b & 26 & 12 & d & 26 & 12 & f & 26 & 12 \\ \hline \end{array}$
We have: . $b+26+12 \:=\:26 + 12 + d \quad\Rightarrow\quad b \:=\:d$
We have: . $d+e+12 \:=\:e+12+f \quad\Rightarrow\quad d \:=\:f$
. . Hence: . $b \:=\:d\:=\:f$
The sum of all numbers is 210.
. . $12 + b + 26 + 12 + d + 26 + 12 + f + 26 + 12 \:=\:210$
. . $b+d+f + 126 \:=\:210 \quad\Rightarrow\quad b+d+f \:=\:84$
Since $b=d=f$, we have: . $3b \:=\:84 \quad\Rightarrow\quad b \:=\:28$
. . Hence: . $b\:=\:d\:=\:f\:=\:28$
Answer: . ${\color{blue}\begin{array}{|c|c|c|c|c|c|c|c|c|c|} \hline 12 & 28 & 26 & 12 & 28 & 26 & 12 & 28 & 26 & 12 \\ \hline \end{array}}$
|
2015-07-04 07:58:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9262638092041016, "perplexity": 2091.8401146302117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096579.52/warc/CC-MAIN-20150627031816-00090-ip-10-179-60-89.ec2.internal.warc.gz"}
|
http://mathoverflow.net/feeds/question/39430
|
Algebraic Attacks on the Odd Perfect Number Problem - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-26T00:37:01Z http://mathoverflow.net/feeds/question/39430 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/39430/algebraic-attacks-on-the-odd-perfect-number-problem Algebraic Attacks on the Odd Perfect Number Problem Cam McLeman 2010-09-20T20:27:15Z 2012-08-23T12:04:48Z <p>The <a href="http://en.wikipedia.org/wiki/Perfect_number#Odd_perfect_numbers" rel="nofollow">odd perfect number</a> problem likely needs no introduction. Recent progress (where by recent I mean roughly the last two centuries) seems to have focused on providing restrictions on an odd perfect number which are increasingly difficult for it to satisfy (for example, congruence conditions, or bounding by below the number of distinct prime divisors it must have). By reducing the search space in this manner, and probably due to other algorithmic improvements (factoring, parallelizing, etc.), there has also been significant process improving lower bounds for the size of such a number. A link off of <a href="http://oddperfect.org" rel="nofollow">oddperfect.org</a> claims to have completed the search up to $10^{1250}$. </p> <p>But, assuming my admittedly cursory reading of the landscape is correct, none of the current research seems particularly equipped to prove non-existence. The only compelling argument I've seen on this front is "Pomerance's heuristic" (also described on <a href="http://oddperfect.org" rel="nofollow">oddperfect.org</a>). Worse, and maybe this is really the point of this question, it would be a little disappointing if the non-existence proof was an upper bound of $10^{1250}$ (depending on the techniques used to get the bound) combined with the above brute force search. </p> <p>On the other hand, maybe there's some hope that some insight can be gained into the sum-of-divisors function by modern techniques. For example, the values of the arithmetic functions $$\sigma_{k}(n):=\sum_{d\mid n}d^k,$$ for $k\geq 3$ odd, arise as coefficients of normalied Eisenstein modular forms, and the study of said forms gives amazing proofs of amazing identities between them. For $k=1$, the case of interest, the normalized Eisenstein series $E_2$ is only "quasi-modular", but such forms satisfy sufficiently nice transformation properties that I wonder if $E_2$ has anything to say about the problem. </p> <blockquote>Since no doubt many people on this site will be able to immediately address the previous idea (so please do!), my more general question is whether or not there are applications of the modern machinery of modular forms, mock modular forms, diophantine analysis, Galois representations, abc conjecture, etc., that have anything to say about the odd perfect number problem. Does it descend from or relate to any major open problems from modern algebraic/analytic number theory? </blockquote> <p><sub> Aside: I hope this does not come off as dismissive of "elementary" techniques, or of the algorithmic ones mentioned in the first paragraph. Indeed, they have, to my knowledge, been the only source of progress on this problem, and certainly contain interesting mathematics. Rather, this phrasing stems from my desire to find anything in the intersection of "odd perfect number theory" and "things I know anything about," and perhaps a desire to see the odd perfect number problem settled without the use of a beyond-gigantic brute force search. </sub></p> http://mathoverflow.net/questions/39430/algebraic-attacks-on-the-odd-perfect-number-problem/39439#39439 Answer by Pace Nielsen for Algebraic Attacks on the Odd Perfect Number Problem Pace Nielsen 2010-09-20T21:46:27Z 2010-09-20T21:46:27Z <p>This is a problem I have thought alot about. I have not seen any of the modern techniques in your list applied to the problem. Part of the issue is that if you represent $\sigma(n)=2n$ as a Diophantine equations in $k$ variables (corresponding to the prime factors--but allowing the powers to vary) then there are lots of solutions (just not where all the variables are simultaneously prime). So the usual methods of trying to show non-existence of solutions just don't cut it. Historically, this multiplicative approach is the one many people have taken, because at least some progress can be made on the problem. My personal feeling is that maybe someday these bounding computations will be tweaked to the point that they lead to the discovery of some principle that will solve the problem. For example, in one of my recent papers, I was led to consider the gcd of $a^m-1$ and $b^n-1$ (where $a$ and $b$ are distinct primes). I would conjecture that this gcd has small prime factors unless $m$ or $n$ is huge. If that happens, many of the computations related to bounding OPNs become much easier.</p> <p>I have occasionally thought about whether modular forms might say something about this topic (which is why I'm currently sitting in on my colleague's course). Instead of $\sigma(n)$, the `right' function to consider is $\sigma_{-1}(n)=\sigma(n)/n$ and I don't know off the top of my head if it appears in connection with (weakly holomorphic) modular forms. But I know there are some nice techniques about multiplicative functions that decrease over the primes, etc...</p> http://mathoverflow.net/questions/39430/algebraic-attacks-on-the-odd-perfect-number-problem/39779#39779 Answer by Jerald Jetson for Algebraic Attacks on the Odd Perfect Number Problem Jerald Jetson 2010-09-23T17:43:48Z 2010-09-23T17:43:48Z <p>The odd perfect number problem does have a connection to modular forms. the divisor funct can be written as a function of the tau function and sigma_{k}(n) = sum_{d|n} d^k. The earlier example is the van der Pol identity. This was used by Touchard to conclude that n = 36a + 9 or 12b + 1.</p> <p>The modular for should be normalized to use sigma(n)/n = 2 to see if it gives a contradiction. </p> http://mathoverflow.net/questions/39430/algebraic-attacks-on-the-odd-perfect-number-problem/39781#39781 Answer by Jamie Weigandt for Algebraic Attacks on the Odd Perfect Number Problem Jamie Weigandt 2010-09-23T18:00:35Z 2010-09-23T18:00:35Z <p>For links between perfect numbers and the ABC conjecture, see <a href="http://www.math.dartmouth.edu/~carlp/LucaPomeranceNYJMstyle.pdf" rel="nofollow">this paper</a> by Luca and Pomerance. </p> http://mathoverflow.net/questions/39430/algebraic-attacks-on-the-odd-perfect-number-problem/43942#43942 Answer by Jose Arnaldo Dris for Algebraic Attacks on the Odd Perfect Number Problem Jose Arnaldo Dris 2010-10-28T05:35:43Z 2011-01-05T14:11:33Z <p>I agree with Pace - the correct function to consider would be the abundancy index instead of the sigma function itself. In a certain sense, the abundancy index value of 2 for perfect numbers (odd or even) has served as a baseline on which various other properties and concepts related to number perfection were developed.</p> <p>Additionally, although Balth. Van der Pol was able to derive the congruence conditions mentioned, for an OPN N (which effectively gave us two cases: case (1) $3 | N$, case (2) $N$ is not divisible by $3$) using a recursion relation satisfied by the sigma function (and this was achieved via a nonlinear partial differential equation), the same result follows from eliminating the case $N \equiv 2\pmod 3$ and then using the Chinese Remainder Theorem afterwards.</p> <p>Though I have seen modular forms before (having done a [modest] exposition of elliptic curve theory and related topics) for my undergraduate thesis, I have likewise not seen any 'objects' used in Wiles' FLT proof applied directly to the OPN conjecture -- at least, not 'directly' in the literal sense of the word.</p> <p>Just some thoughts about OPNs, now that you've mentioned them:</p> <p>A closer look at the (multiplicative) forms of even and odd perfect numbers gives you:</p> <p>Even PN = ${2^{p - 1}}(2^p - 1)$ = (even power of a "small" prime) x ("big" prime)</p> <p>Odd PN = ${m^2}{q^k}$ = (even power of several primes) x ("another" prime "power")</p> <p>Ronald Sorli conjectured in his Ph. D. thesis (titled Algorithms in the Study of Multiperfect and Odd Perfect Numbers, completed in 2003) that for an OPN, it was in fact (numerically) plausible that $k = 1$. (Compare that with the Mersenne prime. Incidentally, we call $q$ the Euler prime!) </p> <p>If Sorli's conjecture is proven true, we will have (considerably) more information about an OPN's structure that could enable a quick resolution about its conjectured nonexistence.</p> <p>Let me know if you need more information.</p> http://mathoverflow.net/questions/39430/algebraic-attacks-on-the-odd-perfect-number-problem/48570#48570 Answer by Steve Elmore for Algebraic Attacks on the Odd Perfect Number Problem Steve Elmore 2010-12-07T15:53:52Z 2010-12-08T02:09:23Z <p>I had an opportunity to work with Dr. Beauregard Stubblefield 35 years ago (yep, 62 and proud of it) and he generously gave the credit for the group's result to Dr. Mary Buxton and me. Dr. Stubblefield of course did most of the work, and he and I discussed many things. It seems that Leopold Kronecker, the great German algebraist 1823-1891, had it right the whole time. He proved that X = p^e + p^(e+1) + ... + p^2 + p + 1, where e is one less than a prime and p is a prime, cannot be algebraically reduced. But we knew that. The big deal is that the numeric factors of the expression are either (e + 1) || X or (k(e + 1) + 1) | X. Dr. Stubblefield found the pattern and I found that Kronecker had proved it. Stubblefield used the result in his Proposition 11, which he proved. Thirty-five years ago we used the result to factor many sigma(p^e). We talked about extending the result back then so we both had deep input into the new theory. Recently, after an engineering career in the auto industry in Detroit, I extended Proposition 11 to apply to many more cases in several different ways. I think, Cam, that this is the road you are suggesting. Steve Elmore</p> http://mathoverflow.net/questions/39430/algebraic-attacks-on-the-odd-perfect-number-problem/105319#105319 Answer by Sylvain JULIEN for Algebraic Attacks on the Odd Perfect Number Problem Sylvain JULIEN 2012-08-23T12:04:48Z 2012-08-23T12:04:48Z <p>In fact, it might be possible to prove that there is no OPN provided one could prove that if $1+p+...p^k=q^n$, where $p$ and $q$ are odd primes and $k\ge 2$, then $p\lt q$. The trick is to write an OPN $m$ as $p_1^{e_1}...p_k^{e_k}$ and to consider the map $\tilde{\sigma}$ which maps each $p_{i}$ to $R(\sigma(p_i^{e_i}))$, with $R(n)$ the product of the primes dividing $n$. </p>
|
2013-05-26 00:37:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7569482326507568, "perplexity": 782.907962600475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706474776/warc/CC-MAIN-20130516121434-00073-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.gamedev.net/forums/topic/404897-2d-scrolling-map-problem/
|
# [java] 2D scrolling map problem
## Recommended Posts
I'm making a 2D game with a tilemap. Before I splitted my map into different mapareas and everytime you change to a different maparea it needed to be loaded again into a variable. Then I draw the visible part to the screen. One maparea is a bufferedimage of 2000*2000. The size of my tiles is 32*32. Now I want to make one big map. Without the loading screens in between. But I don't know where to start. First I thought of making my mapareas smaller and always store the current mapareaimage with 8 mapareasimages around and than draw only the needed part to the screen. But this will take to much memory I think. I also tried to render the needed images to a BufferedImage and than draw it to the screen but this didn't worked either. My framerates drops from more than 300 to 100. Drawing the tiles directly to the screen makes the screen flicker a bit. If anyone can put my on the right track or give me a good solution I would be very thankful.
##### Share on other sites
Hi.
This a typical question. To give an optimal suggestion, I first need to know how your game works. Does the map scroll consistently? (like a side-scroller with a constant angle and a constant pace?) or does your map move inconsisently (like in a map that scrolls based on user input, variation in speed etc).
There are multiple viable solutions. I usually prefer a tile-based approach. Not because it necessarily is better performance wise, but because it offers greater flexibility. Lets say some object explodes (what can I say, I am a fan of explosions), - perhaps you want to get inventive and add some crater-like graphics to your background. In such a case, it would be better to have a 32*32 tile overlay to make the crater than most other things.
If you want a simple background that scrolls (and possibly loops), like a side-scroller, the choice is much up to you, - the big image works as well. But if you need to think about hardware accelleration, a 2000*2000 image might be too huge to be accellerated. If you use tiles, at least *some* of the tiles will be accellerated.
There are so many cases here, I'll just refer you to this link:
(the pdf)
The chapter handles background scrolling, paralax scrolling, and tile-maps. It might give you some ideas.
Good luck ;)
##### Share on other sites
I've read through the whole book of "killer game programming in java" but
that doesn't solve my problem. Or maybe I missed something...
But it's not a side-scrolling game. Everything is based on the
users input.
This is how my game looked before.
The mapareas are stored in a two dimension array like this.
(X is the current maparea where the user is.)
0 0 0 0
0 X 0 0
0 0 0 0
Now I need a smooth change between two or more mapareas without a loading. Cause it is possible that you have parts of 4 mapareas on the screen at the same time,I can't use bufferedImages for all of them like I did before. It uses
way to much memory.
I used the tile-system like the next simplified code but that gave me a drop
down of my framerate. But I'll first try to render it directly to the screen.
I use double buffering.
private void drawMap(Graphics g){ BufferedImage bimage = new BufferedImage(ge.screen.getPWidth(), ge.screen.getPWidth(), BufferedImage.TYPE_INT_RGB); Graphics2D g2d = bimage.createGraphics(); int aantalx = ge.screen.getPWidth()/32 +1; int aantaly = ge.screen.getPHeight()/32 + 2; for(int j=0;j<aantaly;j++){ for(int i=0;i<aantalx-5;i++){ g2d.drawImage(image,i*32,j*32,32,32,null); } for(int i=aantalx-5;i<aantalx;i++){ g2d.drawImage(image1,i*32,j*32,32,32,null); } } g2d.dispose(); g.drawImage(bimage,startx,starty,ge.screen.getPWidth(),ge.screen.getPHeight(),null); }
##### Share on other sites
Hi.
The chapter describes how to seamlessly implement a looping background image. You can apply the same technique to seamlessly sow different background images together :)
##### Share on other sites
I would recommend you store your images as tiles since you don't appear to have any multiple tile objects. Then have an area be a 2D array of object references.
Image images[] = (your images) ;int area[][] = new int[20][20] ;int areax, areay, tilesize ;for(a=0;a<20;a++){for(b=0;b<20;b++){g.drawimage(images[area[a][b]],areax+tilesize*a,areay+tilesize*b,this) ;}}
Not having the same image data drawn over and over should allow you to use less memory and thus load more areas at once without memory problems. Perhaps I am misunderstanding, but I do not see why you would be having memory issues...
Also, on a hopefully less useless note, if you want your game to keep running while you load or initialize large levels then you may want to do so in a seperate thread. The runnable interface is quite easy to use.
I have some code for loading a similar area based game, maybe this will be useful.
region area[]= new double[4] ;//0 =upper left, 1 = upper right, 2 = lower left, 3= lower right .....int nx = (int)(x+0.5) ;//use actual x and y to determine the nearest area pointint ny = (int)(y+0.5) ; //reloads a portion of the level if necessaryyif(nx!=pnx){//if closest point moved in the x axisif(pnx<nx){// if moved rightarea[0] = area[1] ;area[2] = area[3] ;area[1] = getarea(nx,ny-1) ;area[3] = getarea(nx,ny) ;pnx = nx ;}if(pnx>nx){// if moved leftarea[1] = area[0] ;area[3] = area[2] ;area[0] = getarea(nx-1,ny-1) ;area[2] = getarea(nx-1,ny) ;pnx = nx ;}} if(ny!=pny){//if closest point moved in the y axis if(pny<ny){// if moved down area[0] = area[2] ; area[1] = area[3] ; area[2] = getarea(nx-1,ny) ; area[3] = getarea(nx,ny) ; pny = ny ; } if(pny>ny){// if moved up area[2] = area[0] ; area[3] = area[1] ; area[0] = getarea(nx-1,ny-1) ; area[1] = getarea(nx,ny-1) ; pny = ny ; }}
It loads four areas at once (they will need to be fairly large so that you rarely see multiples at once I use 10000*10000) . It loads the four closest areas to the current location(x,y). To use this wit your x and y measured in pixels just divide the x and y by the area size prior to this.
[Edited by - Alrecenk on July 21, 2006 2:54:47 AM]
## Create an account
Register a new account
• ### Forum Statistics
• Total Topics
627701
• Total Posts
2978697
• 21
• 14
• 12
• 10
• 12
|
2017-10-21 01:33:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24999892711639404, "perplexity": 3418.046862267458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824537.24/warc/CC-MAIN-20171021005202-20171021025202-00122.warc.gz"}
|
https://www.physicsforums.com/threads/question-about-momentum.127900/
|
1. Aug 3, 2006
### pete5383
Ok, so I was pondering something as I lie in bed...so in a system with no outside forces, momentum has to be conserved, correct? So let's say me and the earth are the system, and initially at rest, so we have zero momentum. I jump up in the air, giving myself momentum in the positive y direction, while the earth has momentum equal to mine in the negative y direction (going down, even though it's very very slight). No, once I reach the peak of my jump, I start coming down, that is, the sign of my momentum has changed. Does this mean that at that moment the earth starts to move back towards me to maintain a net momentum of zero? Also, it seems to me when I land I will be pushing the earth down even more, making the net momentum afterwards non-zero. Can anyone explain to me where my thinking is wrong?
2. Aug 3, 2006
### Andrew Mason
Yes
When you jump up, the earth recoils a very small amount. But as you move away from the earth, there is a gravitational force between you and the earth that slows and eventually reverses the motion of the earth and of you. You and the earth then move toward each other and collide, thereby providing an impulse to each sufficient to stop further motion so that each has 0 momentum. I am not sure why you think that when this collision occurs, you will be pushing the earth even more than when you jumped. You necessarily have the same speed when you left the earth as when you collided with it.
AM
3. Aug 3, 2006
### HallsofIvy
Staff Emeritus
When pete5382 said "pushing the earth down even more" he didn't mean that the force landing on the earth was greater than that jumping. He meant that there was a force, downward) when he jumped and then the same force(downward) again when he landed (so he was applying "even more" force). Of course, as you say, he is missing the pull of gravity. The reason he stops going up and starts coming down is because the force of gravity is pulling him toward the earth. At the same time, the mass of his body is pulling the earth toward him. It's very small of course, but then the push he gives the earth when he jumps is very small. Anyway, the two downward pushes he gives the earth when he jumps and when he lands are exactly offset by the gravitational force his mass applies to the earth.
4. Aug 3, 2006
### pete5383
Ahhhhhh, I was forgetting about the gravitational attraction between us...duh....hehe. That makes sense to me now. That's what I get for posting at 2 in the morning. But thank you all very much!
|
2017-02-27 22:36:15
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8303457498550415, "perplexity": 475.3194340994963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00205-ip-10-171-10-108.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions/421027/integral-of-a-vector-field-dotted-with-a-unit-normal
|
Integral of a vector field dotted with a unit normal
Find $\int_C{F \cdot \hat n ds}$ where $F= (2xy,-y^2)$ and $\hat n$ is the unit outward normal to the curve C in the xy-plane and C is the ellipse $\frac{x^2}{a^2}+ \frac{y^2}{b^2}=1$ traversed in the anticlockwise direction.
Its the $\hat n$ that is stuffing me up. I have successfully parameterized it as:
$c(t) = (a\cos(t),b\sin(t))$ and i differentiated it and I know from there I normally have to take the magnitude of it as this is a path integral not a line integral.
-
2 Answers
If I understood right, you have a $z=f(x,y)$ in the problem such that $z=0$ is the following flat parametrized curve: $$C: r(t)=(a\cos(t),b\sin(t),0), 0\le t\le2\pi$$ For example $z$ could be $$z+1=\frac{x^2}{a^2}+\frac{y^2}{b^2}$$ By the way, according to the assumptions we'd like to have the following integral. Of course, you can evaluate it by using Stokes' Theorem as well.
$$\oint_C \textbf{F}\big(a\cos(t),b\sin(t),0 \big)\cdot (-a\sin(t),b\cos(t),0)~dt\\\\ =\int_0^{2\pi}(2ab\cos(t)\sin(t),-b^2\sin^2(t))\cdot (-a\sin(t),b\cos(t),0)~dt=...$$
Please check if these points are what you have been looking for or not. I think the rest is easy.
-
Nice work, Babak, as usual! ;-) – amWhy Jun 15 '13 at 16:38
Thanks Babak, could you explain though what has happened to the $\hat n$? It gives the right answer – Jesse Ross Jun 16 '13 at 1:00
@JesseRoss: I used Stockes's theorem. Instead of evaluating the integral you noted. – Babak S. Jun 16 '13 at 4:28
I just use (dy/ds, -dx/ds) as the unit outward normal vector. And I hope it helps you.
-
So what would u do for this one and i dont fully understand this line of reasoning is that just a formula? – Jesse Ross Jun 15 '13 at 12:44
I think it is one way to switch between two kinds of line integrals. – eccstartup Jun 15 '13 at 13:30
|
2016-05-26 11:04:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8652631044387817, "perplexity": 294.5006491507061}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275835.98/warc/CC-MAIN-20160524002115-00167-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://bookdown.org/jhvdz1/dataanalytics/data-analysis-binomial-tests.html
|
# 4 Data Analysis: binomial tests
Data Analytics Module
Lecturer: Hans van der Zwan
Handout 04
Topic: binomial tests
Literature
Rumsey D. J. (2010). Statistical Essentials for Dummies. Hoboken: Wiley Publishing. Ismay C. & Kim A. Y. (2019). ModernDive. Statistical Inference for Data Science. https://moderndive.com.
Recommended literature
Preparation class
See module description
Ismay & Kim (2019), sections 10.1 to 10.4
## 4.1 Testing of hypotheses: the concepts
### 4.1.1 Introduction: example of formulating a research hypothesis
As part of a research on the economic effects of the outcome of the Brexit referendum in June 2016 the effect on house values in London is investigated. Because of the outcome of the Brexit referendum, the values of houses in London are expected to decrease. This is especially the case in the central districts of London, e.g. the City of Westminster district.
A sub research question is: what is the effect of the Brexit referendum on house values in the City of Westminster district in London.
To investigate the effect of the Brexit referendum on the values of houses in the City of the Westminster district, a comparison is made between the values in January 2016 and the values in January 2017, i.e. a half year before and a half year after the referendum.
As a measurement for the value of the houses, the selling prices of houses sold are used as an indicator. This means that the houses sold are considered to be a random sample of all houses in this district. Due to the outcome of the referendum, a decrease of the house values is expected.
There are many ways to operationalize this assumption. E.g. compare the average selling price in January 2016 with the average selling price in January 2017. Another possibility, and maybe a better choice, is to use the medians. It’s also possible to operationalize the assumption by comparing the proportion of houses sold for which the selling price is above 1 mln GBP. If house prices are decreasing it might be expected that the proportion of houses for which the selling price is more than 1 mln GPB, is decreasing as well. The price paid data for houses sold are collected from the open data at www.gov.uk. In the boxplots in Figure 1 the prices in January 2016 and January 2017 are compared. The median of the house prices in January 2017 is lower than in January 2016, the spread in the prices in January 2017 seems to be lower than in January 2016.
Table 3.1 gives an overview of some sample statistics. Indeed the average selling price in January 2017 is less than in January 2016.
Figure 1. Boxplots selling prices houses (GBP) in City of Westminster district in London. A comparison is made between the selling prices in January 2016 and in January 2017.
Table 2
Summary statistics prices houses sold in London, City of Westminster district in January 20016 and January 2017
year COUNT AVERAGE MEDIAN SD NUMBER_ABOVE_1MLN PERC_ABOVE_1MLN
2,016 333 1,490,100 999,000 2,042,500 164 49.2
2,017 236 1,478,800 1,087,500 1,394,700 123 52.1
If the proportion houses for which the selling price is above 1 mln GPB is used as metric for the house values, no support is found for a decrease in these values. After all the proportion with a selling price above 1 mln GPB increased.
The average selling price decreased. It is not unexpected that the two averages differ, because two more or less random groups are compared. Actually it would be very surprising if the two averages would be exactly the same. A lower mean selling price in January 2017, doesn’t necessarily mean that the average value of all houses in the district in January 2017 is lower than the average value in January 2016. The question is whether this difference between the two means is merely a matter of chance, or whether it is due to an underlying difference between the values in January 2017 and in January 2016. To find out if the difference is just a matter of chance, a so-called t-test can be used. The conclusion would be that the outcome of a t-test does not support the assumption that the mean value of houses in the city of Westminster district in January 2017 (M = 1,478,800; SD = 1,087,500) is less than in January 2016 (M = 1,490,100; SD = 999,000); t(566 ) = -.079; p = 0.469.
### 4.1.2 Hypotheses testing the concepts
#### 4.1.2.1 Hypotheses testing in court
In a trial there is a statement (hypothesis) the prosecutor wants to prove: ‘the defendant is guilty’. As long as there is no evidence, the opposite is accepted: ‘the defendant is not guilty’. The defendant is found guilty only if the evidence that he is, is ‘beyond reasonable doubt’. After the trial there are four possibilities:
1. The defendant is not guilty and is found not guilty (right decision)
2. The defendant is not guilty, but is found guilty (wrong decision)
3. The defendant is guilty but is not found guilty (wrong decision)
(note: ‘not found guilty’ is not the same as found ‘not guilty’)
4. The defendant is guilty and is found guilty (right decision)
WHAT IS REALLY TRUTH
not guilty guilty
DECISION Acquitted right decision wrong decision,
second order error
Sentenced wrong decision,
first order error
right decision
#### 4.1.2.2 Hypotheses testing in statistics
The principles for statistical testing of hypotheses are the same as these in court. The difference is that we are looking for statistical evidence instead of juridical. The researcher postulates a hypothesis he wants to prove, the so called HA (or H1) hypothesis. As long as there is no statistical evidence that this hypothesis is true, the opposite (the H0 hypothesis) is assumed to be true. After a testing procedure the researcher comes to the decision whether to reject or not reject the H0-hypothesis. In the same way as in court there are four possibilities, shown in the diagram below.
WHAT IS REALLY TRUTH
H0 is right H0 is not right
DECISION Do not reject H0 right decision wrong decision,
second order error
($$\beta$$-risk)
Reject H0 wrong decision,
first order error ($$\alpha$$-risk)
right decision
In a statistical procedure, we examine whether there is statistical evidence that the data contradict the H0-hypothesis in favor of the HA. If that is the case the H0-hypothesis is rejected. Statistical evidence means that the $$\alpha$$-risk may not be greater than a pre-agreed value (usually 0.05). If the statistical evidence is not strong enough to reject the H0, we stay at the starting point of the procedure (H0 is true) and we do not reject H0. This does not mean that there is statistical evidence that supports H0, it does mean that there is no statistical reason to reject H0.
The way these statistical procedures are used in various statistical tests is discussed in what follows.
### 4.1.3 Proportion tests or binomial tests
As can be seen in the above example (house values in London) it is sometimes possible to operationalize a research question by using proportions and formulate a hypothesis about these proportions.
#### 4.1.3.1 Proportion test against a standard
In a univariate analysis a test against a standard is in many cases a good technique to operationalize a research question. Some examples.
Example: predicting the outcome of flipping a coin (1)
Consider a person who claims to be clairvoyant (paranormal gifted) so that he can predict the outcome of flipping a coin.2
Experiment: ask the person to predict the outcome if the coin is flipped once.
Questions:
(i) If the predition is correct, would that be seen as support for the claim?
(ii) What if the experiment is flipping the coin twice and both predictions are correct?
(iii) What if the coin is flipped three times and all predictions are correct?
(iv) After how many correct predictions can the claim be honored?
Example: predicting the outcome of flipping a coin (2)
Consider a person who claims to be clairvoyant, and says he can better predict the outcome of flipping a coin than simply by guessing. In other words, the claim is that his predictions are correct in more than 50% of the cases.
Research experiment (data collection): the person is asked to make 100 predictions. Based on the number of correct predictions (k) it will be decided to honor his claim or not.
The hypothesis to test is HA: p > .5 where p is the probability of a correct prediction.
In the procedure we start with the assumption that the claim is not correct, in other words p = .5. To come to a decision the so called prob-value will be calculated, that is the probability to find a result as has been found, or even further away from what is expected, assuming p = .5 (i.e. assuming the claim is not correct). If this prob-value is low, than there are two options:
- the probability of a correct prediction is .5, the person was just lucky in guessing correct a great number of times;
- the probability of a correct prediction is more than .5, and that is why he predicted correct a great number of times.
If the prob-value is lower than a pre-agreed value, mostly .05 is used, the first option is rejected and it is said that the data support the HA-hypothesis.
For instance if he predicts correctly 60 out of 100 times the prob-value equals 0.028. The prob-value has been calculated using this webapplication.
Figure 2. Binomial distribution with n = 100, p = .50. The black area corresponds with the probability of 60 or more successes in 100 trials under the assumption that a probability of a success equals 0.50.
Example: filling packages of sugar
Packs of sugar are filled using a filling machine. Although on average the contents is 1000 gram there is always some variation in the contents of the packs of sugar. In the past 10% of the packages contain less than 995 grams. This was reason to buy a new packing machine. To test if this machine performs better than the old one, a random sample of 100 packs is drawn. From these packs, 6 contain less than 995 grams. Does this sample result give statistical evidence that the machine performs better? In other words: is this sample result enough evidence that in the population less than 10% of the packages contain less than 995 grams.
To answer this question we assume the null-hypothesis to be true and calculate the probability to find 6 or less packs with less than 995 grams under this assumption.
Figure 3 . Binomial distribution with n = 100, p = .10. The black area corresponds with the probability of 6 or less successes in 100 trials under the assumption that the probability of a success equals 0.10.
The prob-value in this case equals .117. In other words, at a .05 significance level, the data do not support the hypothesis that less than 10% op the packs contain less than 995 grams. So the data do not give support to the hypothesis that the new machine is better. This does not mean that the new machine is not better than the old one; it does say that the collected data do not give ‘statistical evidence’ that the new machine is better. If the data is expanded the conclusion can change. E.g. if n = 200 and k = 12 the result would be significant (prob-value = .032).
#### 4.1.3.2 Writing it up
Harvard and APA style do not only apply to references, but also, among other things, to reporting results from hypotheses testing.
In the last example the result can be reported as follows.
Based on a one-sided binomial test the observed values (N = 100, K = 6) do not give significant support to the assumption that less than 10 percent of the packages filled by this machine, contain less than 1000 grams, p = .117.
## 4.2 Exercise
Excercise 1
Research sub question: did house prices in London increase in January 2019 relative to the prices of houses sold in 2018.
In 2018 the average selling Price in London was 740,824 GBP, the median was 515,000 GPB and 15.4% of the houses sold had a selling price above 1 mln GBP.
(i) This research question has been operationalized by stating the hypothesis that the proportions of houses with a selling price of more than 1 mln GBP in the City of London district in January 2018 was more than 16.3%.
Perform this test and report the conclusion as it would be done in a scientific report.
(ii) This research question can also be operationalized by stating the hypothesis that the proportions of houses with a selling price of more than 510,000 GBP (the median in 2018) in London in January 2018 was more than 50%.
Perform this test and report the conclusion as it would be done in a scientific report.
## 4.3 Proportion test: comparing two groups
In a statistical analysis it is quite common to compare different groups. For instance the differences between incomes in the profit and the not-for-profit sector, the air quality in different cities, and so on.
Comparing proportions in two different groups can sometimes be used to operationalize a research question about differences between two groups.
Example: influence website background on users
In an experiment data is collected about the influence of background colors on the attractiveness of a website. Attractiveness has been operationalized by the proportion of visitors that click on a button for more information.
The same information was presented on two different backgrounds (I and II). From former research it was expected that website I is more attractive than website II.
Data collected: of the 250 visitors to the website with background I, 40 have pressed a click button to get more information; of the 225 visitors to the website with background II, 25 pressed the button.
The question is whether this data support the hypothesis that background I is more attractive.
In a statistical sence, the hypothesis to be tested is:
HA: pI > pII; where p1 is the proportion visitors of website I that clicks on the button and pII the proportion of website II that clicks on this button.
This is an example of a two sample proportion test.
To find out if the data support the HA-hypothesis, the probability is calculated that assuming H0 is true, a difference between the two proportions will be found as has been found in the collected data.3 With the aid of this webapp this P-value can be found (.061). The conclusion would be: a one-sided independent proportion test did not result in a significant difference between the proportion of visitors of website I that clicked on the button (N = 250, K = 40, proportion = 0.16) with the proportion of visitors of website II who did so (N = 225, K = 25, proportion = 0.111), Z = 1.548, p = .061.
Exercise 2
Use the data in the file HP_LONDON_JAN19.xlsx.
(i) Test if there is a significant difference between the proportion flats of all houses sold in the BARNET district and in the GREENWICH district.
(ii) “The proportion houses with a selling price of more than 1 mln GBP for the category terraced houses is higher than for the category semi-detached houses.”
Test if the data support this statement.
Exercise 3
Operationalize the following research questions using a binomial test.
(i) Do female students at THUAS score better for statistics assignments as male students?
(ii) Did the quality of the service of an IT helpdesk in the company increased after the intervention?
(iii) Did the number of accidents in the Netherlands in which electric bicycles were involved increase in 2018 compared with 2017?
## 4.4 Homework assignment
Option 1
Researching air quality in the major Dutch cities.
Use a two-sample proporion test to test if the proportion days with an average PM10-value above 30 $$\mu g$$ per m3 in Amsterdam is higher than in the city of your choice.
Use a proportion test to test if the air quality in Amsterdam in worse than in The Hague (Den Haag).
Option 2
|
2020-07-05 22:50:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4287836253643036, "perplexity": 971.2533741622516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655889877.72/warc/CC-MAIN-20200705215728-20200706005728-00399.warc.gz"}
|
https://physics.stackexchange.com/questions/631910/state-space-of-free-falling-object-is-time-variant/631923
|
# State space of free falling object is time variant?
In the case of a free fall object, if $$h$$ denotes distance from floor and $$v$$ the speed, then, given an input $$U = g$$, and $$X=[h \quad v]^T$$ and $$\dot{X}=[\dot{h} \quad \dot{v}]^T$$, I found that the state space equation of $$B$$ contains $$t$$, that is $$B=[t \quad 1]^T$$. I used $$\frac{dh}{dt}=v + gt$$ to get that result.
Does it mean that this system is not time-invariant but in fact, time variant? Am I correct, or did I do something wrong?
This is the model that I follow.
In your notation you are mixing up the integration of v. For the state space representation one would write: $$\begin{pmatrix} \dot h \\ \dot v \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} h \\ v \end{pmatrix} + \begin{pmatrix} 0 \\ \frac{1}{m} \end{pmatrix} F_\text{ext}.$$ This expresses the equations of motion of a free point mass $$m$$ in one dimension $$h$$ with external force $$F_\text{ext}=mg$$. The change of position / height is $$\dot h(t) = v(t)$$ and the change of velocity depends on the force: $$\dot v = \frac{1}{m} F_\text{ext} = g.$$ In the state-space terminology we have the state vector $$x=(h,v)$$ and the force is the input of the system. (Even for time-dependent $$F_\text{ext}(t)$$ the system is time-invariant.)
Writing $$\dot h = v + g t$$ is not correct. We have $$\dot v(t) = g$$ which integrates to $$v(t) = v_0 + g t$$ with integration constant $$v_0$$. Integrating again gives $$\dot h = h_0 + v_0 t + \tfrac 1 2 g t^2$$ with integration constant $$h(t=0)=h_0$$.
Supplement to answer comment: For a drag force that is proportional to velocity $$F_\text{drag}=-b\, v(t)$$, the velocity's equation of motion changes to $$\dot v=g - \frac{b}{m} v$$. To express this, one would add a constant matrix element to the system function, so the system is still time-invariant: $$\begin{pmatrix} \dot h \\ \dot v \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & \frac{-b}{m} \end{pmatrix} \begin{pmatrix} h \\ v \end{pmatrix} + \begin{pmatrix} 0 \\ \frac{1}{m} \end{pmatrix} F_\text{ext}.$$
• Than the $F_{ext}$ means exclude from (proportional to velocity) drag force? While the gravity force go to $F_{ext}$? Thank you, Apr 27, 2021 at 8:43
|
2022-08-10 08:19:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9109589457511902, "perplexity": 164.06137736310362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571150.88/warc/CC-MAIN-20220810070501-20220810100501-00550.warc.gz"}
|
https://artofproblemsolving.com/wiki/index.php?title=AA_similarity&oldid=72620
|
# AA similarity
(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)
Theorem: In two triangles, if two pairs of corresponding angles are congruent, then the triangles are similar.
Proof: Let ABC and DEF be two triangles such that $\angle A = \angle D$ and $\angle B = \angle E$. $\angle A + \angle B + \angle C = 180$ (Sum of all angles in a triangle is $180$) $\angle D + \angle E + \angle F = 180$ (Sum of all angles in a triangle is $180$) $\angle A + \angle B + \angle C=\angle D + \angle E + \angle F$ $\angle D + \angle E + \angle C = \angle D + \angle E + \angle F$ (since $\angle A = \angle D$ and $\angle B = \angle E$) $\angle C = \angle F$.
|
2021-04-14 23:54:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7564606666564941, "perplexity": 528.4423447614861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038078900.34/warc/CC-MAIN-20210414215842-20210415005842-00424.warc.gz"}
|
https://archive.lib.msu.edu/crcmath/math/math/i/i228.htm
|
## Irrational Number
A number which cannot be expressed as a Fraction for any Integers and . Every Transcendental Number is irrational. Numbers of the form are irrational unless is the th Power of an Integer.
Numbers of the form , where is the Logarithm, are irrational if and are Integers, one of which has a Prime factor which the other lacks. is irrational for rational . The irrationality of was proven by Lambert in 1761; for the general case, see Hardy and Wright (1979, p. 46). is irrational for Positive integral . The irrationality of was proven by Lambert in 1760; for the general case, see Hardy and Wright (1979, p. 47). Apéry's Constant (where is the Riemann Zeta Function) was proved irrational by Apéry (Apéry 1979, van der Poorten 1979).
From Gelfond's Theorem, a number of the form is Transcendental (and therefore irrational) if is Algebraic , 1 and is irrational and Algebraic. This establishes the irrationality of (since ), , and . Nesterenko (1996) proved that is irrational. In fact, he proved that , and are algebraically independent, but it was not previously known that was irrational.
Given a Polynomial equation
(1)
where are Integers, the roots are either integral or irrational. If is irrational, then so are , , and .
Irrationality has not yet been established for , , , or (where is the Euler-Mascheroni Constant).
Quadratic Surds are irrational numbers which have periodic Continued Fractions.
Hurwitz's Irrational Number Theorem gives bounds of the form
(2)
for the best rational approximation possible for an arbitrary irrational number , where the are called Lagrange Numbers and get steadily larger for each bad'' set of irrational numbers which is excluded.
The Series
(3)
where is the Divisor Function, is irrational for and 2. The series
(4)
where is the number of divisors of , is also irrational, as are
(5)
for an Integer other than 0 and , and a Rational Number other than 0 or (Guy 1994).
See also Algebraic Integer, Algebraic Number, Almost Integer, Dirichlet Function, Ferguson-Forcade Algorithm, Gelfond's Theorem, Hurwitz's Irrational Number Theorem, Near Noble Number, Noble Number, Pythagoras's Theorem, Quadratic Irrational Number, Rational Number, Segre's Theorem, Transcendental Number
References
Apéry, R. Irrationalité de et .'' Astérisque 61, 11-13, 1979.
Courant, R. and Robbins, H. Incommensurable Segments, Irrational Numbers, and the Concept of Limit.'' §2.2 in What is Mathematics?: An Elementary Approach to Ideas and Methods, 2nd ed. Oxford, England: Oxford University Press, pp. 58-61, 1996.
Guy, R. K. Some Irrational Series.'' §B14 in Unsolved Problems in Number Theory, 2nd ed. New York: Springer-Verlag, p. 69, 1994.
Hardy, G. H. and Wright, E. M. An Introduction to the Theory of Numbers, 5th ed. Oxford, England: Clarendon Press, 1979.
Manning, H. P. Irrational Numbers and Their Representation by Sequences and Series. New York: Wiley, 1906.
Nesterenko, Yu. Modular Functions and Transcendence Problems.'' C. R. Acad. Sci. Paris Sér. I Math. 322, 909-914, 1996.
Nesterenko, Yu. V. Modular Functions and Transcendence Questions.'' Mat. Sb. 187, 65-96, 1996.
Niven, I. M. Irrational Numbers. New York: Wiley, 1956.
Niven, I. M. Numbers: Rational and Irrational. New York: Random House, 1961.
Pappas, T. Irrational Numbers & the Pythagoras Theorem.'' The Joy of Mathematics. San Carlos, CA: Wide World Publ./Tetra, pp. 98-99, 1989.
van der Poorten, A. A Proof that Euler Missed... Apéry's Proof of the Irrationality of .'' Math. Intel. 1, 196-203, 1979.
|
2021-11-29 18:11:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9771170020103455, "perplexity": 1608.7624357988927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00632.warc.gz"}
|
http://mathhelpforum.com/calculus/46432-asymptote.html
|
# Math Help - asymptote
1. ## asymptote
y=1 is an asymptote of y2 = x(x-1)2 ? State true or false ? Give reason.
2. Originally Posted by puneet
y=1 is an asymptote of y2 = x(x-1)2 ? State true or false ? Give reason.
Do you mean $y^2 = x(x-1)^2$?
3. ## asymptote
Yes, I mean ?
4. Originally Posted by puneet
Yes, I mean ?
Do you know what a horizontal asymptote is ....? It's the horizontal line y = a that the the curve approaches as x --> +oo and/or x --> -oo.
If y = 1 is a horizontal asymptote of the given curve, then y --> 1 as x --> +oo and/or x --> -oo ..... Does that happen?
5. ## asymptote
Highest power of x and also y is independent(i.e.=1) .Therefore no asymptotes occur. Is that right.
As y-->1 then take x-->-+infinity then agian x-->1.
|
2016-07-23 09:50:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9511753916740417, "perplexity": 4468.293964725831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257821671.5/warc/CC-MAIN-20160723071021-00124-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://anhngq.wordpress.com/2010/03/20/wave-equations-discontinuous-solutions/
|
# Ngô Quốc Anh
## March 20, 2010
### Wave equations: Discontinuous solutions
Filed under: PDEs — Ngô Quốc Anh @ 16:42
Characteristics play a fundamental role in understanding how solutions to first order PDEs propagate. So far our study (see this and this) presupposed that solutions are smooth, or, at the worst, piecewise smooth and continuous. Now we take up the question of discontinuous solutions. As the following example shows, linear equations propagate discontinuous initial or boundary data into the region of interest along characteristics.
Example 1. Consider the advection equation
$\displaystyle u_t+cu_x=0, \quad x \in \mathbb R, t>0, c>0$
subject to the initial condition $u(x,0)=u_0(x)$ where
$\displaystyle {u_0}(x) = \left\{ \begin{gathered} 1, \quad x < 0, \hfill \\ 0, \quad x > 0. \hfill \\ \end{gathered} \right.$
Because the solution to the advection equation is $u(x)=u_0(x-ct)$, the initial condition is propagated along the characteristics $x - ct = {\rm const}$, and the discontinuity at $x = 0$ is propagated along the line $x= ct$, as shown in the following picture.
Thus the solution to the initial value problem is given by
$\displaystyle {u_0}(x) = \left\{ \begin{gathered} 0, \quad x >ct, \hfill \\ 1, \quad x
We now examine a simple nonlinear problem with the same initial data. In general, a hyperbolic system with piecewise constant initial data is called a Riemann problem.
Example 2. Consider the advection equation
$\displaystyle u_t+ u u_x=0, \quad x \in \mathbb R, t>0$
subject to the initial condition $u(x,0)=u_0(x)$ where
$\displaystyle {u_0}(x) = \left\{ \begin{gathered} 1, \quad x < 0, \hfill \\ 0, \quad x > 0. \hfill \\ \end{gathered} \right.$
Because $u$ is constant on the straight line characteristics having speed $u$. The characteristics emanating from the $x$ axis have speed $0$ (vertical) if $x > 0$, and they have speed unity ($1$) if $x < 0$. The characteristic diagram is shown below
Immediately, at $t > 0$, the characteristics collide and a contradiction is implied because $u$ must
be constant on characteristics. One way to avoid this impasse is to insert a straight line $x = mt$ of nonnegative speed along which the initial discontinuity at $x = 0$ is carried. The characteristic diagram is now changed to
For $x > mt$ we can take $u = 0$, and for $x < mt$ we can take $u = 1$, thus giving a solution to the PDE on both sides of the discontinuity. The only question is the choice of $m$; for any $m \geq 0$ it appears that a solution can be obtained away from the discontinuity and that solution also satisfies the initial condition. Shall we give up uniqueness for this problem? Is there some other solution that we have not discovered? Or, is there only one valid choice of $m$?
The answers to these questions lie at the foundation of what a discontinuous solution is. As it turns out, in the same way that discontinuities in derivatives have to propagate along characteristics, discontinuities in the solutions themselves must propagate along special loci in spacetime. These curves are called shock paths, and they are not, in general, characteristic curves. So the answer to the questions posed in the last example is that there is a special value of $m$ (in fact, $m =\frac{1}{2}$, this can be derived from the so-called jump conditions, or, the Rankine-Hugoniot conditions) for which conservation holds along the discontinuity.
Source: J.D. Logan, An introduction to nonlinear partial differential equations, 2nd, 2008; Section 3.1.
|
2022-01-24 17:34:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 31, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9974393248558044, "perplexity": 965.8449046600911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304572.73/warc/CC-MAIN-20220124155118-20220124185118-00524.warc.gz"}
|
http://www.ntg.nl/pipermail/ntg-context/2007/023141.html
|
# [NTG-context] Header problem in project
David Arnold dwarnold45 at cox.net
Mon Jan 8 06:32:36 CET 2007
Hans et all,
We have a project call book.tex, a product in that called
chapter1.tex, and a component in that called section1exercises.tex.
We have blocks that we save and then place at the end of
section1exercises.tex with a macro defined in our environment file:
\bgroup
\doifmodeelse{short}
{
\stopcolumnset
}
{
}
\egroup
}
The else part of the do above is an attempt to define a different
header when we compile with --mode=long, but just for the pages on
which we use \placeanswers. The other pages have a different header
defined in the environment file. Those headers have page numbers as
well with:
The file section1exercises ends like this:
%%% ENDTESTBANK
%%%================================================
\stopquestions
\stopcomponent
When we compile section1exercises.tex with texmfstart texexec --
mode=long section1exercises, all is well until the last page, where
the former header is used instead of the header defined in the
\placeanswers macro. But we want to finish out the document from the
point we put the \placeanswers with the second header.
Any suggestions?
|
2014-12-19 04:39:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9443304538726807, "perplexity": 6817.762296528195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768208.73/warc/CC-MAIN-20141217075248-00126-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://ask.libreoffice.org/en/answers/23227/revisions/
|
# Revision history [back]
I see at least to options:
• Set up the page style for two columns, and insert every table in one column.
. Create a table with two columns / one row, and insert every table in one cell.
I see at least to options:
• Set up the page style for two columns, and insert every table in one column.
. Create a table with two columns / one row, and insert every table in one cell.
I think the easier and flexible to manage is the second one.
|
2019-09-23 13:30:19
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8335407376289368, "perplexity": 1652.6866198819182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576965.71/warc/CC-MAIN-20190923125729-20190923151729-00342.warc.gz"}
|
https://zenodo.org/record/4322876/export/dcite4
|
Conference paper Open Access
# Towards a uniform process model for deploying and operating autonomous shuttles on public roads
Rehrl, Karl; Piribauer, Thomas; Schwieger, Klemens; Sedlacek, Norbert; Weissensteiner, Patrick
### DataCite XML Export
<?xml version='1.0' encoding='utf-8'?>
<identifier identifierType="DOI">10.5281/zenodo.4322876</identifier>
<creators>
<creator>
<creatorName>Rehrl, Karl</creatorName>
<givenName>Karl</givenName>
<familyName>Rehrl</familyName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-4052-5867</nameIdentifier>
<affiliation>Salzburg Research</affiliation>
</creator>
<creator>
<creatorName>Piribauer, Thomas</creatorName>
<givenName>Thomas</givenName>
<familyName>Piribauer</familyName>
<affiliation>Prisma solutions</affiliation>
</creator>
<creator>
<creatorName>Schwieger, Klemens</creatorName>
<givenName>Klemens</givenName>
<familyName>Schwieger</familyName>
<affiliation>Austrian Institute of Technology</affiliation>
</creator>
<creator>
<creatorName>Sedlacek, Norbert</creatorName>
<givenName>Norbert</givenName>
<familyName>Sedlacek</familyName>
<affiliation>HERRY Consult</affiliation>
</creator>
<creator>
<creatorName>Weissensteiner, Patrick</creatorName>
<givenName>Patrick</givenName>
<familyName>Weissensteiner</familyName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0001-6660-1990</nameIdentifier>
<affiliation>Virtual Vehicle Research</affiliation>
</creator>
</creators>
<titles>
<title>Towards a uniform process model for deploying and operating autonomous shuttles on public roads</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2020</publicationYear>
<subjects>
<subject>autonomous shuttles</subject>
<subject>deployment and operation</subject>
<subject>process model</subject>
</subjects>
<dates>
<date dateType="Issued">2020-11-09</date>
</dates>
<language>en</language>
<resourceType resourceTypeGeneral="ConferencePaper"/>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/4322876</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.4322875</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract"><p>Autonomous shuttle trials are carried out all over the world. So far, these trials are predominately based on trial and error approaches. During the last years, shuttle suppliers have developed their proprietary deployment and operation processes, whereas a more generalized process model is missing so far. In the Digibus&reg; Austria flagship project, a consortium of 13 partners joined forces to develop methods and technologies for deploying and operating autonomous shuttles in public transport. Among other goals, the project aims at the definition of a generalized process model for autonomous shuttle trials, building on existing models as well as individual learnings. The proposed model consists of actors, components, decisions and activities and has been tested in the context of several shuttle trials in Austria. Although the process model so far only reflects the state in Austria, most activities should be applicable to other countries as well. The model is considered as a first step towards future standardization.</p></description>
</descriptions>
</resource>
229
160
views
|
2022-06-28 08:50:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2244144082069397, "perplexity": 10691.657516245496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103360935.27/warc/CC-MAIN-20220628081102-20220628111102-00529.warc.gz"}
|
http://es.mathworks.com/help/control/ref/ctrbf.html?requestedDomain=es.mathworks.com&nocookie=true
|
# Documentation
### This is machine translation
Translated by
Mouse over text to see original. Click the button below to return to the English verison of the page.
# ctrbf
Compute controllability staircase form
## Syntax
`[Abar,Bbar,Cbar,T,k] = ctrbf(A,B,C)ctrbf(A,B,C,tol)`
## Description
If the controllability matrix of (A, B) has rank rn, where n is the size of A, then there exists a similarity transformation such that
`$\begin{array}{ccc}\overline{A}=TA{T}^{T},& \overline{B}=TB,& \overline{C}=C{T}^{T}\end{array}$`
where T is unitary, and the transformed system has a staircase form, in which the uncontrollable modes, if there are any, are in the upper left corner.
`$\begin{array}{ccc}\overline{A}=\left[\begin{array}{cc}{A}_{uc}& 0\\ {A}_{21}& {A}_{c}\end{array}\right],& \overline{B}=\left[\begin{array}{l}0\\ {B}_{c}\end{array}\right],& \overline{C}=\left[{C}_{nc}{C}_{c}\right]\end{array}$`
where (Ac, Bc) is controllable, all eigenvalues of Auc are uncontrollable, and ${C}_{c}{\left(sI-{A}_{c}\right)}^{-1}{B}_{c}=C{\left(sI-A\right)}^{-1}B$.
`[Abar,Bbar,Cbar,T,k] = ctrbf(A,B,C)` decomposes the state-space system represented by `A`, `B`, and `C` into the controllability staircase form, `Abar`, `Bbar`, and `Cbar`, described above. `T` is the similarity transformation matrix and `k` is a vector of length n, where n is the order of the system represented by `A`. Each entry of `k` represents the number of controllable states factored out during each step of the transformation matrix calculation. The number of nonzero elements in `k` indicates how many iterations were necessary to calculate `T`, and `sum(k)` is the number of states in Ac, the controllable portion of `Abar`.
`ctrbf(A,B,C,tol)` uses the tolerance `tol` when calculating the controllable/uncontrollable subspaces. When the tolerance is not specified, it defaults to `10*n*norm(A,1)*eps`.
## Examples
Compute the controllability staircase form for
```A = 1 1 4 -2 B = 1 -1 1 -1 C = 1 0 0 1 ```
and locate the uncontrollable mode.
```[Abar,Bbar,Cbar,T,k]=ctrbf(A,B,C) Abar = -3.0000 0 -3.0000 2.0000 Bbar = 0.0000 0.0000 1.4142 -1.4142 Cbar = -0.7071 0.7071 0.7071 0.7071 T = -0.7071 0.7071 0.7071 0.7071 k = 1 0 ```
The decomposed system `Abar` shows an uncontrollable mode located at -3 and a controllable mode located at 2.
collapse all
### Algorithms
`ctrbf` implements the Staircase Algorithm of [1].
## References
[1] Rosenbrock, M.M., State-Space and Multivariable Theory, John Wiley, 1970.
|
2016-09-29 17:00:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9260849356651306, "perplexity": 1505.600561124132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661905.96/warc/CC-MAIN-20160924173741-00172-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://itprospt.com/num/20610622/il-each-of-these-substances-costs-lhe-samc-anount-per-kilogram
|
5
# Il each of these substances costs lhe Samc anount per kilogram. which substance would be lhe most cost ellective way [0 lower the freezing point of waler? Assune al...
## Question
###### Il each of these substances costs lhe Samc anount per kilogram. which substance would be lhe most cost ellective way [0 lower the freezing point of waler? Assune all ionic COmpcxinds completely clissocialeKBrC6llnO NalCaCl} Calz
Il each of these substances costs lhe Samc anount per kilogram. which substance would be lhe most cost ellective way [0 lower the freezing point of waler? Assune all ionic COmpcxinds completely clissociale KBr C6llnO Nal CaCl} Calz
#### Similar Solved Questions
##### The circuit shown in the figure contains four capacitors whose values are as follows: C1=6 F, C2=2 F,and C3-C4-4F The battery voltage Vo-5 V.When the circuit is connected what will be voltage across Cz?Vo25V4,0V20V3.0V1.25V
The circuit shown in the figure contains four capacitors whose values are as follows: C1=6 F, C2=2 F,and C3-C4-4F The battery voltage Vo-5 V.When the circuit is connected what will be voltage across Cz? Vo 25V 4,0V 20V 3.0V 1.25V...
##### 4=-Y(+xly?) Find the trajectories of the system and sketch some of them_ 4 = #x(1+x2y?) (Hint: what does equal?) Be sure t0 include arrows showing the direction of flow and explain how You decided which way they point.
4=-Y(+xly?) Find the trajectories of the system and sketch some of them_ 4 = #x(1+x2y?) (Hint: what does equal?) Be sure t0 include arrows showing the direction of flow and explain how You decided which way they point....
##### Write the Ksp expression for the sparingly soluble compound nickel(II) carbonate; NiCO3If either the numerator O1 denominator is 1; please enterLsp
Write the Ksp expression for the sparingly soluble compound nickel(II) carbonate; NiCO3 If either the numerator O1 denominator is 1; please enter Lsp...
##### The room temperature density of hypothetical element Xis 1 gcm-3 and its longest-lived isotope has mass number of 18. It is assumed that a layer of cube in which the edge length of each cube is taken as equal to the diameter of a Xatom. Another cube is directly placed over each cube in the first layer and aligned with that cube, thereby forming second layer. The other cubes are directly over the second layer cubes to form a third layer of repeating structure. If one X atom is put into each cube
The room temperature density of hypothetical element Xis 1 gcm-3 and its longest-lived isotope has mass number of 18. It is assumed that a layer of cube in which the edge length of each cube is taken as equal to the diameter of a Xatom. Another cube is directly placed over each cube in the first lay...
##### A new chemical process is introduced by Duracell in the production of lithium-ion batteries. For batteries produced by the old process, the average life of a battery is 102.5 hours. To determine whether the new process affects the average life of the batteries positively, the manufacturer collects a random sample of 25 batteries produced by the new process and uses them until they run out. The sample mean life is found to be 107 hours, and the sample standard deviation is found to be 10 hours. A
A new chemical process is introduced by Duracell in the production of lithium-ion batteries. For batteries produced by the old process, the average life of a battery is 102.5 hours. To determine whether the new process affects the average life of the batteries positively, the manufacturer collects a...
##### A meteorologist is interested in finding a function that explains the relation between the height of a weather balloon (in kilometers) and the atmospheric pressure (measured in millimeters of mercury) on the balloon: She collects the following dataAtmospheric 760 740 725 700 650 630 600 580 550 Pressure Height 0.184 0.328 0.565 1.079 1.291 1.634 L862 2.235(4 points) Use the calculator to find a logarithmic model for the datapoints) Use the function to predict the height of the weather balloon if
A meteorologist is interested in finding a function that explains the relation between the height of a weather balloon (in kilometers) and the atmospheric pressure (measured in millimeters of mercury) on the balloon: She collects the following data Atmospheric 760 740 725 700 650 630 600 580 550 Pre...
|
2022-10-01 01:50:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5011383891105652, "perplexity": 2081.464540245658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335514.65/warc/CC-MAIN-20221001003954-20221001033954-00474.warc.gz"}
|
https://socratic.org/questions/xyz-1500-what-is-the-value-of-x-y-and-z
|
# Xyz=1500 what is the value of x,y and z?
Apr 28, 2018
Not enough information to determine.
#### Explanation:
The equation $x y z = 1500$ can be thought of as a system of equations. This particular system has $3$ unknowns and $1$ equation. As a result of this, there is not enough information to determine the exact values of $x , y$ and $z$.
Consider this (incorrect) argument. Let's suppose that $x = 1$, $y = 1$ and $z = 1500$. Then $x y z = \left(1\right) \left(1\right) \left(1500\right) = 1500$. This satisfies our given equation, so these must be the values of our variables.
But this is incorrect, because if $x = 10$, $y = 10$, and $z = 15$, then $x y z = \left(10\right) \left(10\right) \left(15\right) = 1500$. This still satisfies our equation, but none of the variables have the same value as previously concluded.
In general, if you have $n$ equations and $k$ unknowns with $k > n$, then there is not a unique solution to the system.
|
2020-02-25 00:25:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 16, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9594600796699524, "perplexity": 118.56161099554433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145989.45/warc/CC-MAIN-20200224224431-20200225014431-00185.warc.gz"}
|
https://amsi.org.au/ESA_Senior_Years/SeniorTopic3/3f/3f_2content_1.html
|
## Content
### The area under a graph
Suppose you have a function $$f\colon \mathbb{R} \to \mathbb{R}$$ and its graph $$y=f(x)$$. You want to find the area under the graph. For now we'll assume that the graph $$y=f(x)$$ is always above the $$x$$-axis, and we'll estimate the area between the graph $$y=f(x)$$ and the $$x$$-axis. We set left and right endpoints and estimate the area between those endpoints.
Below is the graph of $$f(x)=x^2 + 1$$. We'll try to find the area under the graph $$y=f(x)$$ between $$x=0$$ and $$x=5$$, which is shaded.
As a first approximation, we can divide the interval $$[0,5]$$ into five subintervals of width 1, i.e., $$[0,1]$$, $$[1,2]$$, $$\dots$$, $$[4,5]$$, and consider rectangles as shown on the following graph.
As we can see, the total area of these rectangles is a rough approximation to the area we are looking for, but a clear underestimate. The total area of the rectangles is calculated in the following table.
Approximating the area under the graph with 5 rectangles
Interval Width of rectangle Height of rectangle Area of rectangle
$$[0,1]$$ 1 $$f(0) = 0^2 + 1 = 1$$ 1
$$[1,2]$$ 1 $$f(1) = 1^2 + 1 = 2$$ 2
$$[2,3]$$ 1 $$f(2) = 2^2 + 1 = 5$$ 5
$$[3,4]$$ 1 $$f(3) = 3^2 + 1 = 10$$ 10
$$[4,5]$$ 1 $$f(4) = 4^2 + 1 = 17$$ 17
Total area 35
For a slightly better approximation, we can split the interval into 10 equal subintervals, each of length $$\frac{1}{2}$$. Using the same idea, we have the rectangles shown on the following graph. We clearly have a better estimate, but still an underestimate.
Approximating the area under the graph with 10 rectangles
Interval Width of rectangle Height of rectangle Area of rectangle
$$[0,\frac{1}{2}]$$ $$\frac{1}{2}$$ $$f(0) = 0 + 1 = 1$$ $$\frac{1}{2}$$
$$[\frac{1}{2},1]$$ $$\frac{1}{2}$$ $$f(\frac{1}{2}) = \frac{1}{4} + 1 = \frac{5}{4}$$ $$\frac{5}{8}$$
$$[1,\frac{3}{2}]$$ $$\frac{1}{2}$$ $$f(1) = 1 + 1 = 2$$ 1
$$[\frac{3}{2},2]$$ $$\frac{1}{2}$$ $$f(\frac{3}{2}) = \frac{9}{4} + 1 = \frac{13}{4}$$ $$\frac{13}{8}$$
$$[2,\frac{5}{2}]$$ $$\frac{1}{2}$$ $$f(2) = 4 + 1 = 5$$ $$\frac{5}{2}$$
$$[\frac{5}{2},3]$$ $$\frac{1}{2}$$ $$f(\frac{5}{2}) = \frac{25}{4} + 1 = \frac{29}{4}$$ $$\frac{29}{8}$$
$$[3,\frac{7}{2}]$$ $$\frac{1}{2}$$ $$f(3) = 9 + 1 = 10$$ 5
$$[\frac{7}{2},4]$$ $$\frac{1}{2}$$ $$f(\frac{7}{2}) = \frac{49}{4} + 1 = \frac{53}{4}$$ $$\frac{53}{8}$$
$$[4,\frac{9}{2}]$$ $$\frac{1}{2}$$ $$f(4) = 16 + 1 = 17$$ $$\frac{17}{2}$$
$$[\frac{9}{2},5]$$ $$\frac{1}{2}$$ $$f(\frac{9}{2}) = \frac{81}{4} + 1 = \frac{85}{4}$$ $$\frac{85}{8}$$
Total area $$\frac{325}{8} = 40.625$$
Splitting $$[0,5]$$ into more subintervals of smaller width, we can perform the same calculation and estimate the area. With more rectangles, the calculations become more tedious, but some results are given in the following table.
Approximating the area under the graph with more rectangles
Number of intervals Width of each interval Total area of rectangles
5 1 35
10 0.5 40.625
50 0.1 45.425
100 0.05 46.04375
1 000 0.005 46.6041875
10 000 0.0005 46.66041875
100 000 0.00005 46.66604166875
1 000 000 0.000005 46.6666041666875
With more and more thinner and thinner rectangles, the areas appear to be converging towards a limit. It turns out, as we will see, that the limit is $$46 \frac{2}{3}$$.
The area under a curve is defined to be this limit. Although we have an intuitive notion of what area is, for a mathematically rigorous definition we need to use integration.
Next page - Content - Estimates of area
|
2022-12-03 09:52:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9188641309738159, "perplexity": 213.0157724180382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710926.23/warc/CC-MAIN-20221203075717-20221203105717-00204.warc.gz"}
|
http://www.journaltocs.ac.uk/index.php?action=browse&subAction=pub&publisherID=822&journalID=178&pageb=1&userQueryID=&sort=&local_page=&sorType=&sorCol=
|
for Journals by Title or ISSN for Articles by Keywords help
Publisher: Springer-Verlag (Total: 2353 journals)
Acta Geotechnica [SJR: 1.818] [H-I: 22] [7 followers] Follow Hybrid journal (It can contain Open Access articles) ISSN (Print) 1861-1133 - ISSN (Online) 1861-1125 Published by Springer-Verlag [2353 journals]
• Stress–strain behavior of cement-improved clays: testing and
modeling
• Authors: Allison J. Quiroga; Zachary M. Thompson; Kanthasamy K. Muraleetharan; Gerald A. Miller; Amy B. Cerato
Pages: 1003 - 1020
Abstract: Abstract The results of a series of laboratory tests on unimproved and cement-improved specimens of two clays are presented, and the ability of a bounding surface elastoplastic constitutive model to predict the observed behavior is investigated. The results of the oedometer, triaxial compression, extension, and cyclic shear tests demonstrated that the unimproved soil behavior is similar to that of soft clays. Cement-improved specimens exhibited peak/residual behavior and dilation, as well as higher strength and stiffness over unimproved samples in triaxial compression. Two methods of accounting for the artificial overconsolidation effect created by cement improvement are detailed. The apparent preconsolidation pressure method is considerably easier to use, but the fitted OCR method gave better results over varied levels of confining stresses. While the bounding surface model predicted the monotonic behavior of unimproved soil very well, the predictions made for cyclic behavior and for improved soils were only of limited success.
PubDate: 2017-10-01
DOI: 10.1007/s11440-017-0529-1
Issue No: Vol. 12, No. 5 (2017)
• Field testing of one-way and two-way cyclic lateral responses of single
and jet-grouting reinforced piles in soft clay
• Authors: Ben He; Lizhong Wang; Yi Hong
Pages: 1021 - 1034
PubDate: 2017-10-01
DOI: 10.1007/s11440-016-0515-z
Issue No: Vol. 12, No. 5 (2017)
• Pile reinforcement mechanism of soil slopes
• Authors: Ga Zhang; Liping Wang; Yaliang Wang
Pages: 1035 - 1046
Abstract: Abstract Stabilizing piles are widely used as an effective and economic reinforcement approach for slopes. Reasonable designs of pile reinforcement depend on the understanding of reinforcement mechanism of slopes. A series of centrifuge model tests were conducted on the pile-reinforced slopes and corresponding unreinforced slopes under self-weight and vertical loading conditions. The deformation of the slope was measured using image-based analysis and employed to investigate the pile reinforcement mechanism. The test results showed that the piles significantly reduced the deformation and changed the deformation distribution of the slope, and prevented the failure occurred in the unreinforced slope. The pile influence zone was determined according to the inflection points on the distribution curves of horizontal displacement, which comprehensively described the features of the pile–slope interaction and the characteristics of reinforced slopes. The concepts of anti-shear effect and compression effect were proposed to quantitatively describe the restriction features of the piles on the deformation of the slope, namely the reduction in the shear deformation and the increase in the compression deformation, respectively. The pile reinforcement effect mainly occurred in the pile influence zone and decreased with increasing distance from the piles. There was a dominated compression effect in the vicinities of the piles. The compression effect developed upwards in the slope with a transmission to the anti-shear effect. The anti-shear effect became significantly dominated near the slip surface and prevented the failure that occurred in the unreinforced slope.
PubDate: 2017-10-01
DOI: 10.1007/s11440-017-0543-3
Issue No: Vol. 12, No. 5 (2017)
• Numerical simulations of the reuse of piled raft foundations in clay
• Authors: Brian Sheil
Pages: 1047 - 1059
Abstract: Abstract The development and growth of urban environments in recent years is requiring geotechnical engineers to consider foundation reuse as a more sustainable solution to inner city redevelopment. Two main phenomena associated with foundation reuse have been reported in the literature, namely ‘preloading effects’ and ‘ageing effects’. The aim of this paper is to investigate the relative merits of these effects on the reusability of both piled and unpiled raft foundations in clay. Finite element analysis, in conjunction with an isotropic elasto-viscoplastic soil model, is employed for this purpose. The study is presented in two phases: (1) evaluation of preloading effects only by using a very low creep coefficient and (2) evaluation of combined preloading and creep effects. The variables considered in the parametric study include the number of piles, pile spacing, pile length, and soil type. Results show that both unpiled and piled rafts can exhibit significant capacity and stiffness increases upon reloading even for moderate levels of preload. Moreover, these increases are strongly dependent on the piled raft load sharing where unpiled raft and free-standing pile group capacity gains serve as upper and lower bounds, respectively, for that of a piled raft. This study underlines foundations reuse as an effective and sustainable solution for inner city redevelopment.
PubDate: 2017-10-01
DOI: 10.1007/s11440-017-0522-8
Issue No: Vol. 12, No. 5 (2017)
• Testing and modeling the behavior of pre-bored grouting planted piles
under compression and tension
• Authors: Jia-jin Zhou; Xiao-nan Gong; Kui-hua Wang; Ri-hong Zhang; Jia-jia Yan
Pages: 1061 - 1075
Abstract: Abstract A group of field tests and three-dimensional finite element simulation were used to investigate the behavior of the pre-bored grouting planted pile under compression and tension; moreover, a group of shear tests of the concrete–cemented soil interface was carried out to study the frictional capacity of the pile–cemented soil interface. The load–displacement response, shaft resistance and mobilized base load were discussed based on the measured and computed results. The measured and computed results show that the frictional capacity of the cemented soil–soil interface is better than the frictional capacity of the concrete–soil interface. The frictional capacity of the concrete–cemented soil interface is mainly controlled by the properties of the cemented soil, and the ultimate skin friction of the concrete–cemented soil interface is much larger than that of the cemented soil–soil interface. The frictional capacity of the soil layer close to the enlarged base is also promoted because of the compaction of the enlarged base. The enlarged cemented soil base can promote the behavior of the pile foundation under tension, and the enlarged cemented soil base undertakes approximately 26.3% of the total uplift load under the ultimate bearing capacity in this research.
PubDate: 2017-10-01
DOI: 10.1007/s11440-017-0540-6
Issue No: Vol. 12, No. 5 (2017)
• Vertical bearing capacity behaviour of single T-shaped soil–cement
column in soft ground: laboratory modelling, field test, and calculation
• Authors: Yaolin Yi; Songyu Liu; Anand J. Puppala; Peisheng Xi
Pages: 1077 - 1088
Abstract: Abstract The T-shaped soil–cement column is a variable-diameter column, which has an enlarged column cap at the shallow depth, resulting in the column shape being analogous to the letter “T”. In this study, 1-g laboratory and full-scale field loading tests were employed to investigate the vertical bearing capacity behaviour of a single T-shaped column in soft ground. Pressure cells were set in a T-shaped column in the field to measure the vertical column stress above and below the column cap during the loading test. After the loading test, several columns were excavated to investigate their failure modes. The results indicated that, since the section area of the column cap was remarkably higher than that of the deep-depth column, the stress concentration occurred in the deep-depth column just under the cap, leading to column failure. Based on this failure mode, a simplified method was proposed to estimate the ultimate bearing capacity of a single T-shaped column; the comparison of estimated and measured results indicated the applicability of the proposed method.
PubDate: 2017-10-01
DOI: 10.1007/s11440-017-0555-z
Issue No: Vol. 12, No. 5 (2017)
• Failure modes and bearing capacity of strip footings on soft ground
reinforced by floating stone columns
• Authors: Haizuo Zhou; Yu Diao; Gang Zheng; Jie Han; Rui Jia
Pages: 1089 - 1103
Abstract: Abstract This study evaluates the failure modes and the bearing capacity of soft ground reinforced by a group of floating stone columns. A finite difference method was adopted to analyze the performance of reinforced ground under strip footings subjected to a vertical load. The investigation was carried out by varying the aspect ratio of the reinforced zone, the area replacement ratio, and the surface surcharge. General shear failure of the reinforced ground was investigated numerically without the surcharge. The results show the existence of an effective length of the columns for the bearing capacity factors N c and N γ. When certain surcharge was applied, the failure mode of the reinforced ground changed from the general shear failure to the block failure. The aspect ratio of the reinforced zone and the area replacement ratio also contributed to this failure mode transition. A counterintuitive trend of the bearing capacity factor N q can be justified with a shift in the critical failure mode. An upper-bound limit method based on the general shear failure mode was presented, and the results agree well with those of the previous studies of reinforced ground. Equivalent properties based on the area-weighted average of the stone columns and clay parameters were used to convert the individual column model to an equivalent area model. The numerical model produced reasonable equivalent properties. Finally, a theoretical method based on the comparison of the analytical equations for different failure modes was developed for engineering design. Good agreement was found between the theoretical and numerical results for the critical failure mode and its corresponding bearing capacity factors.
PubDate: 2017-10-01
DOI: 10.1007/s11440-017-0535-3
Issue No: Vol. 12, No. 5 (2017)
• Pressuremeter test parameters of a compacted illitic soil under thermal
cycling
• Authors: H. Eslami; S. Rosin-Paumier; A. Abdallah; F. Masrouri
Pages: 1105 - 1118
Abstract: Abstract The incorporation of heat exchangers in geostructures changes the temperature of the adjacent soil, raising important issues concerning the effect of temperature variations on hydro-mechanical soil behaviour. The objective of this paper is to improve the understanding and quantification of the impact of temperature variation on the bearing capacity of thermo-active piles. Currently, the design of deep foundations is based on the results of in situ penetrometer or pressuremeter tests. However, there are no published data on the effect of temperature on in situ soil parameters, preventing the specific assessment of the behaviour of thermo-active piles. In this study, an experimental device is developed to perform mini-pressuremeter tests under controlled laboratory conditions. Mini-pressuremeter tests are performed on an illitic soil in a thermo-regulated metre-scale container subjected to temperatures from 1 to 40 °C. The results reveal a slight decrease in the pressuremeter modulus (E p) and a significant decrease in the creep pressure (p f) and limit pressure (p l) with increasing temperature. The results also reveal the reversibility of this effect during a heating–cooling cycle throughout the investigated temperature range, whereas the effect of a cooling–heating cycle was only partially reversible. In the case of several thermal cycles, the effect of the first cycle on the soil parameters is decisive.
PubDate: 2017-10-01
DOI: 10.1007/s11440-017-0552-2
Issue No: Vol. 12, No. 5 (2017)
• Advance in the penetrometer test formulation to estimate allowable
pressure in granular soils
• Authors: Jesús Díaz-Curiel; Sandra Rueda-Quintero; Bárbara Biosca; Georgina Doñate-Matilla
Pages: 1119 - 1127
Abstract: Abstract In this paper, we present a modification of the existing mathematical formulation used to obtain the allowable bearing pressure from dynamic penetration tests in order to extend its applicability to the design of shallow foundations. The conventional relationships adopted to obtain the allowable bearing pressure from penetrometer tests have a discontinuous gradient, and they are limited to a depth less than the footing width. The aim of this work was to find a relationship that permits the estimation of this pressure in cohesionless soils, from the results of dynamic probing super heavy tests, through a single non-piecewise and continuous relationship that remains valid up to depths several times the footing width. This equation was applied as part of the geomechanical characterization survey undertaken for the construction of an elevated helipad in the centre of the Iberian Peninsula. The survey results were considered satisfactory, and the construction was completed without structural problems.
PubDate: 2017-10-01
DOI: 10.1007/s11440-017-0565-x
Issue No: Vol. 12, No. 5 (2017)
• Shear wave velocity as function of cone penetration resistance and grain
size for Holocene-age uncemented soils: a new perspective
• Authors: Mourad Karray; Mahmoud N. Hussien
Pages: 1129 - 1158
Abstract: Abstract For feasibility studies and preliminary design estimates, field measurements of shear wave velocity, V s, may not be economically adequate and empirical correlations between V s and more available penetration measurements such as cone penetration test, CPT, data turn out to be potentially valuable at least for initial evaluation of the small-strain stiffness of soils. These types of correlations between geophysical (Vs) and geotechnical (N-SPT, q c-CPT) measurements are also of utmost importance where a great precision in the calculation of the deposit response is required such as in liquefaction evaluation or earthquake ground response analyses. In this study, the stress-normalized shear wave velocity V s1 (in m/s) is defined as statistical functions of the normalized dimensionless resistance, Q tn-CPT, and the mean effective diameter, D 50 (in mm), using a data set of different uncemented soils of Holocene age accumulated at various sites in North America, Europe, and Asia. The V s1–Q tn data exhibit different trends with respect to grain sizes. For soils with mean grain size (D 50) < 0.2 mm, the V s1/Q tn 0.25 ratio undergoes a significant reduction with the increase in D 50 of the soil. This trend is completely reversed with further increase in D 50 (D 50 > 0.2 mm). These results corroborate earlier results that stressed the use of different CPT-based correlations with different soil types, and those emphasized the need to impose particle-size limits on the validity of the majority of available correlations.
PubDate: 2017-10-01
DOI: 10.1007/s11440-016-0520-2
Issue No: Vol. 12, No. 5 (2017)
• Seasonal effects on geophysical–geotechnical relationships and their
implications for electrical resistivity tomography monitoring of slopes
• Authors: R. M. Hen-Jones; P. N. Hughes; R. A. Stirling; S. Glendinning; J. E. Chambers; D. A. Gunn; Y. J. Cui
Pages: 1159 - 1173
Abstract: Abstract Current assessments of slope stability rely on point sensors, the results of which are often difficult to interpret, have relatively high costs and do not provide large-area coverage. A new system is under development, based on integrated geophysical–geotechnical sensors to monitor groundwater conditions via electrical resistivity tomography. So that this system can provide end users with reliable information, it is essential that the relationships between resistivity, shear strength, suction and water content are fully resolved, particularly where soils undergo significant cycles of drying and wetting, with associated soil fabric changes. This paper presents a study to establish these relationships for a remoulded clay taken from a test site in Northumberland, UK. A rigorous testing programme has been undertaken, integrating the results of multi-scalar laboratory and field experiments, comparing two-point and four-point resistivity testing methods. Shear strength and water content were investigated using standard methods, whilst a soil water retention curve was derived using a WP4 dewpoint potentiometer. To simulate seasonal effects, drying and wetting cycles were imposed on prepared soil specimens. Results indicated an inverse power relationship between resistivity and water content with limited hysteresis between drying and wetting cycles. Soil resistivity at lower water contents was, however, observed to increase with ongoing seasonal cycling. Linear hysteretic relationships were established between undrained shear strength and water content, principally affected by two mechanisms: soil fabric deterioration and soil suction loss between drying and wetting events. These trends were supported by images obtained from scanning electron microscopy.
PubDate: 2017-10-01
DOI: 10.1007/s11440-017-0523-7
Issue No: Vol. 12, No. 5 (2017)
• Particle breakage and deformation of carbonate sands with wide range of
• Authors: Yang Xiao; Hanlong Liu; Qingsheng Chen; Qifeng Ma; Yuzhou Xiang; Yingren Zheng
Pages: 1177 - 1184
Abstract: Abstract In this technical note, evolutions of the particle size distribution, particle breakage, volume deformation and input work of carbonate sands with varying relative densities were investigated through performing a series of one-dimensional compression tests. Loading stress levels ranged from 0.1 to 3.2 MPa. It was found that the initial relative density could greatly affect the magnitude of particle size distribution, particle breakage, volume deformation and input work. Particularly, it was observed that the specimen at a lower relative density underwent much more particle breakage than that at a higher relative density. This could be attributed to the change of the coordination number with the initial density. However, a unique linear relationship between the particle breakage and input work per volume could be obtained, which is independent of the initial relative density.
PubDate: 2017-10-01
DOI: 10.1007/s11440-017-0580-y
Issue No: Vol. 12, No. 5 (2017)
• Monotonic and cyclic tests on kaolin: a database for the development,
calibration and verification of constitutive models for cohesive soils
• Authors: Torsten Wichtmann; Theodoros Triantafyllidis
Abstract: Abstract A database with about 60 undrained monotonic and cyclic triaxial tests on kaolin is presented. In the monotonic tests, the influences of consolidation pressure, overconsolidation ratio, displacement rate and sample cutting direction have been studied. In the cyclic tests, the stress amplitude, the initial stress ratio and the control (stress vs. strain cycles) have been additionally varied. Isotropic consolidation leads to a failure due to large strain amplitudes with eight-shaped effective stress paths in the final phase of the cyclic tests, while a failure due to an excessive accumulation of axial strain and lens-shaped effective stress paths was observed in the case of anisotropic consolidation with $$q^{\text{ ampl }}< q^{\text{ av }}$$ . The rate of pore pressure accumulation grew with increasing amplitude and void ratio (i.e. decreasing consolidation pressure and overconsolidation ratio). The “cyclic flow rule” well known for sand has been confirmed also for kaolin: With increasing value of the average stress ratio $$\eta ^{\text{ av }} = q^{\text{ av }} /p^{\text{ av }},$$ the accumulation of deviatoric strain becomes predominant over the accumulation of pore water pressure. The tests on the samples cut out either horizontally or vertically revealed a significant effect of anisotropy. In the cyclic tests, the two kinds of samples exhibited an opposite inclination of the effective stress path. Furthermore, the horizontal samples showed a higher stiffness and could sustain a much larger number of cycles to failure. All data of the present study are available from the homepage of the first author. They may serve for the examination, calibration or improvement in constitutive models dedicated to cohesive soils under cyclic loading, or for the development of new models.
PubDate: 2017-09-14
DOI: 10.1007/s11440-017-0588-3
• A hypo-plastic approach for evaluating railway ballast degradation
• Authors: Arghya Das; Prashant Kumar Bajpai
PubDate: 2017-09-14
DOI: 10.1007/s11440-017-0584-7
• Influence of cementation level on the strength behaviour of bio-cemented
sand
• Abstract: Abstract Microbially induced calcite precipitation (MICP) is used increasingly to improve the engineering properties of granular soils that are unsuitable for construction. This shows MICP technique significant advantages such as low energy consumption and environmentally friendly feature. The objective of the present study is to assess the strength behaviour of bio-cemented sand with varying cementation levels, and to provide an insight into the mechanism of MICP treatment. A series of isotropic consolidated undrained compression tests, calcite mass measurement and scanning electron microscopy tests were conducted. The experimental results show that the strength of bio-cemented sand depends heavily on the cementation level (or calcite content). The variations of strength parameters, i.e. effective friction angle φ′ and effective cohesion c′, with the increase in calcite content can be well evaluated by a linear function and an exponential function, respectively. Based on the precipitation mechanism of calcite crystals, bio-clogging and bio-cementation of calcite crystals are correlated to the amount of total calcite crystals and effective calcite crystals, respectively, and contributed to the improvement in the effective friction angle and effective cohesion of bio-cemented sand, separately.
PubDate: 2017-09-08
DOI: 10.1007/s11440-017-0574-9
• Characterization of microstructural and physical properties changes in
biocemented sand using 3D X-ray microtomography
• Abstract: Abstract An experimental study has been performed to investigate the effect of the biocalcification process on the microstructural and the physical properties of biocemented Fontainebleau sand samples. The microstructural properties (porosity, volume fraction of calcite, total specific surface area, specific surface area of calcite, etc.) and the physical properties (permeability, effective diffusion) of the biocemented samples were computed for the first time from 3D images with a high-resolution images obtained by X-ray synchrotron microtomography. The evolution of all these properties with respect to the volume fraction of calcite is analysed and compared with success to experimental data, when it is possible. In general, our results point out that all the properties are strongly affected by the biocalcification process. Finally, all these numerical results from 3D images and experimental data were compared to numerical values or analytical estimates computed on idealized microstructures constituted of periodic overlapping and random non-overlapping arrangements of coated spheres. These comparisons show that these simple microstructures are sufficient to capture and to predict the main evolution of both microstructural and physical properties of biocemented sands for the whole range of volume fraction of calcite investigated.
PubDate: 2017-09-08
DOI: 10.1007/s11440-017-0578-5
• Experimental characterization and 3D DEM simulation of bond breakages in
artificially cemented sands with different bond strengths when subjected
to triaxial shearing
• Authors: Z. Li; Y. H. Wang; C. H. Ma; C. M. B. Mok
Abstract: Abstract This paper describes the mechanical behavior of artificially cemented sands with strong, intermediate, and weak bond strengths, using experimentation and 3D discrete element method (DEM) simulation. The focus is on the features of bond breakage and the associated influences on the stress–strain responses. Under triaxial shearing, the acoustic emission rate captured in the experiment and the bond breakage rate recorded in the simulations show resemblance to the stress–strain response, especially for strongly and intermediately cemented samples, where a strain softening response is observed. The simulations further reveal the shear band formation coincides with the development of bond breakage locations due to the local weakness caused by the bond breakages. Strain softening and volumetric dilation are observed inside the shear band, while the region outside the shear band undergoes elastic unloading. The weakly cemented sample exhibits a strain hardening response instead; bond breakages and the associated local weaknesses are always randomly formed such that no persistent shear band is observed. Note that in the DEM simulation, the flexible membrane boundary is established by a network of bonded membrane particles; the membrane particle network is further partitioned into finite triangular elements. The associated algorithm can accurately distribute the applied confining pressure onto the membrane particles and determine the sample volume.
PubDate: 2017-08-18
DOI: 10.1007/s11440-017-0593-6
• Lulu Zhang, Jinhui Li, Xu Li, Jie Zhang, Hong Zhu: Rainfall-induced soil
slope failure: stability analysis and probabilistic assessment
• Authors: Wei Wu
PubDate: 2017-08-17
DOI: 10.1007/s11440-017-0591-8
• Reply to “Discussion of ‘Numerical limit analysis of three-dimensional
slope stability problems in catchment areas’ by Camargo et al.
(DOI:10.1007/s11440-016-0459-3)” by Ukritchon et al.
(DOI:10.1007/s11440-017-0589-2)
• Authors: Júlia Camargo; Raquel Quadros Velloso; Euripedes A. Vargas
PubDate: 2017-08-17
DOI: 10.1007/s11440-017-0590-9
• Discussion of “numerical limit analysis of three-dimensional slope
stability problems in catchment areas” by Camargo et al.
(doi:10.1007/s11440-016-0459-3)
• Authors: Boonchai Ukritchon; Suraparb Keawsawasvong
Abstract: Abstract A paper recently published by Camargo et al. (Acta Geotech 11(6):1369–1383, 2016) (hereafter identified as “the authors”) presented the numerical limit analysis method (NLA) to compute the safety factor and collapse mechanism of three-dimensional (3D) slopes. For NLA, the authors employed the discrete three-dimensional lower bound formulation with pore water pressure consideration and Drucker–Prager yield criterion, and cast a slope problem as a second-order conic programming problem. The developed program was implemented in MATLAB and validated through three examples of slope problems, and was applied to solve a large-scale 3D slope problem of a failure case study. The discussion of this article focuses on the formulation of the developed 3D NLA and static admissibility of stress field solutions obtained from NLA.
PubDate: 2017-08-16
DOI: 10.1007/s11440-017-0589-2
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
Home (Search)
Subjects A-Z
Publishers A-Z
Customise
APIs
|
2017-09-26 14:51:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5317193269729614, "perplexity": 4117.357394033267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818696182.97/warc/CC-MAIN-20170926141625-20170926161625-00291.warc.gz"}
|
https://stackoverflow.com/questions/22387586/measuring-execution-time-of-a-function-in-c/22387757
|
# Measuring execution time of a function in C++
I want to find out how much time a certain function takes in my C++ program to execute on Linux. Afterwards, I want to make a speed comparison . I saw several time function but ended up with this from boost. Chrono:
process_user_cpu_clock, captures user-CPU time spent by the current process
Now, I am not clear if I use the above function, will I get the only time which CPU spent on that function?
Secondly, I could not find any example of using the above function. Can any one please help me how to use the above function?
P.S: Right now , I am using std::chrono::system_clock::now() to get time in seconds but this gives me different results due to different CPU load every time.
It is a very easy-to-use method in C++11. You have to use std::chrono::high_resolution_clock from <chrono> header.
Use it like so:
#include <chrono>
/* Only needed for the sake of this example. */
#include <iostream>
void long_operation()
{
/* Simulating a long, heavy operation. */
using namespace std::chrono_literals;
}
int main()
{
using std::chrono::high_resolution_clock;
using std::chrono::duration_cast;
using std::chrono::duration;
using std::chrono::milliseconds;
auto t1 = high_resolution_clock::now();
long_operation();
auto t2 = high_resolution_clock::now();
/* Getting number of milliseconds as an integer. */
auto ms_int = duration_cast<milliseconds>(t2 - t1);
/* Getting number of milliseconds as a double. */
duration<double, std::milli> ms_double = t2 - t1;
std::cout << ms_int.count() << "ms\n";
std::cout << ms_double.count() << "ms\n";
return 0;
}
This will measure the duration of the function long_operation.
Possible output:
150ms
150.068ms
Working example: https://godbolt.org/z/oe5cMd
• No. The processor of your computer can be used less or more. The high_resolution_clock will give you the physical and real time that your function takes to run. So, in your first run, your CPU was being used less than in the next run. By "used" I mean what other application work uses the CPU. Mar 13, 2014 at 18:50
• Yes, if you need the average of the time, that is a good way to get it. take three runs, and calculate the average. Mar 13, 2014 at 18:54
• Could you please post code without "using namespace" in general. It makes it easier to see what comes from where. Mar 19, 2019 at 17:38
• Shouldn't this be a steady_clock? Isn't it possible high_resolution_clock could be a non-monotonic clock? Aug 16, 2019 at 14:36
• BTW: I recommend changing long long number to volatile long long number. Otherwise, the optimizer will likely optimize away that loop and you will get a running time of zero. Feb 28, 2021 at 21:24
Here's a function that will measure the execution time of any function passed as argument:
#include <chrono>
#include <utility>
typedef std::chrono::high_resolution_clock::time_point TimeVar;
#define duration(a) std::chrono::duration_cast<std::chrono::nanoseconds>(a).count()
#define timeNow() std::chrono::high_resolution_clock::now()
template<typename F, typename... Args>
double funcTime(F func, Args&&... args){
TimeVar t1=timeNow();
func(std::forward<Args>(args)...);
return duration(timeNow()-t1);
}
Example usage:
#include <iostream>
#include <algorithm>
typedef std::string String;
//first test function doing something
int countCharInString(String s, char delim){
int count=0;
String::size_type pos = s.find_first_of(delim);
while ((pos = s.find_first_of(delim, pos)) != String::npos){
count++;pos++;
}
return count;
}
//second test function doing the same thing in different way
int countWithAlgorithm(String s, char delim){
return std::count(s.begin(),s.end(),delim);
}
int main(){
std::cout<<"norm: "<<funcTime(countCharInString,"precision=10",'=')<<"\n";
std::cout<<"algo: "<<funcTime(countWithAlgorithm,"precision=10",'=');
return 0;
}
Output:
norm: 15555
algo: 2976
• @RestlessC0bra : It's implementaion defined, high_resolution_clock may be an alias of system_clock (wall clock), steady_clock or a third independent clock. See details here. For cpu clock, std::clock may be used Jan 24, 2017 at 11:39
• Two macros and a global typedef - none of which safe a single keytroke - is certainly nothing I'd call elegant.Also passing a function object and perfectly forwarding the arguments separately is a bit of an overkill (and in the case of overloaded functions even inconvenient), when you can just require the timed code to be put in a lambda. But well, as long as passing arguments is optional. Mar 3, 2017 at 12:30
• And this is a justification for violating each and every guideline about the naming of macros? You don't prefix them, you don't use capital letters, you pick a very common name that has a high probability of colliding with some local symbol and most of all: Why are you using a macro at all (instead of a function)? And while we are at it: Why are you returning the duration as a double representing nanoseconds in the first place? We should probably agree that we disagree. My original opinion stands: "This is not what I'd call elegant code". May 8, 2017 at 20:44
• @MikeMB : Good point, making this a header would definitely be a bad idea. Though, in the end, it's just an example, if you have complex needs you gotta think about standard practices and adapt the code accordingly. For example, when writing code, I make it convenient for me when it's in the cpp file I am working right now, but when it's time to move it elsewhere I take every necessary steps to make it robust so that I don't have to look at it again. And I think that, every programmer out there who are not complete noobs think broadly when the time is due. Hope, I clarified my point :D. Jun 20, 2017 at 8:58
• @Jahid: Thanks. In that case consider my comments void and null. Jun 20, 2017 at 9:00
In Scott Meyers book I found an example of universal generic lambda expression that can be used to measure function execution time. (C++14)
auto timeFuncInvocation =
[](auto&& func, auto&&... params) {
// get time before function invocation
const auto& start = std::chrono::high_resolution_clock::now();
// function invocation using perfect forwarding
std::forward<decltype(func)>(func)(std::forward<decltype(params)>(params)...);
// get time after function invocation
const auto& stop = std::chrono::high_resolution_clock::now();
return stop - start;
};
The problem is that you are measure only one execution so the results can be very differ. To get a reliable result you should measure a large number of execution. According to Andrei Alexandrescu lecture at code::dive 2015 conference - Writing Fast Code I:
Measured time: tm = t + tq + tn + to
where:
tm - measured (observed) time
t - the actual time of interest
tq - time added by quantization noise
tn - time added by various sources of noise
to - overhead time (measuring, looping, calling functions)
According to what he said later in the lecture, you should take a minimum of this large number of execution as your result. I encourage you to look at the lecture in which he explains why.
Also there is a very good library from google - https://github.com/google/benchmark. This library is very simple to use and powerful. You can checkout some lectures of Chandler Carruth on youtube where he is using this library in practice. For example CppCon 2017: Chandler Carruth “Going Nowhere Faster”;
Example usage:
#include <iostream>
#include <chrono>
#include <vector>
auto timeFuncInvocation =
[](auto&& func, auto&&... params) {
// get time before function invocation
const auto& start = high_resolution_clock::now();
// function invocation using perfect forwarding
for(auto i = 0; i < 100000/*largeNumber*/; ++i) {
std::forward<decltype(func)>(func)(std::forward<decltype(params)>(params)...);
}
// get time after function invocation
const auto& stop = high_resolution_clock::now();
return (stop - start)/100000/*largeNumber*/;
};
void f(std::vector<int>& vec) {
vec.push_back(1);
}
void f2(std::vector<int>& vec) {
vec.emplace_back(1);
}
int main()
{
std::vector<int> vec;
std::vector<int> vec2;
std::cout << timeFuncInvocation(f, vec).count() << std::endl;
std::cout << timeFuncInvocation(f2, vec2).count() << std::endl;
std::vector<int> vec3;
vec3.reserve(100000);
std::vector<int> vec4;
vec4.reserve(100000);
std::cout << timeFuncInvocation(f, vec3).count() << std::endl;
std::cout << timeFuncInvocation(f2, vec4).count() << std::endl;
return 0;
}
EDIT: Ofcourse you always need to remember that your compiler can optimize something out or not. Tools like perf can be useful in such cases.
• Interesting -- what's the benefit of using a lambda here over a function template? Feb 11, 2019 at 19:23
• Main difference would be that it is a callable object but indeed you can get something very similar with variadic template and std::result_of_t. Feb 14, 2019 at 10:11
• @KrzysztofSommerfeld How to do this one for function methods , when I pass the timing(Object.Method1) it return error "non-standard syntax; use '&' to create a pointer to member" Dec 13, 2019 at 1:12
• timeFuncInvocation([&objectName](auto&&... args){ objectName.methodName(std::forward<decltype(args)>(args)...); }, arg1, arg2,...); or ommit & sign before objectName (then you will have a copy of the object) Dec 16, 2019 at 12:28
simple program to find a function execution time taken.
#include <iostream>
#include <ctime> // time_t
#include <cstdio>
void function()
{
for(long int i=0;i<1000000000;i++)
{
// do nothing
}
}
int main()
{
time_t begin,end; // time_t is a datatype to store time values.
time (&begin); // note time before execution
function();
time (&end); // note time after execution
double difference = difftime (end,begin);
printf ("time taken for function() %.2lf seconds.\n", difference );
return 0;
}
• it's very inaccurate, shows only seconds, but no milliseconds May 17, 2018 at 19:59
• You should rather use something like clock_gettime and process the results within a struct timespec result. But this is a C solution rather than a C++ one. Nov 19, 2020 at 10:24
Easy way for older C++, or C:
#include <time.h> // includes clock_t and CLOCKS_PER_SEC
int main() {
clock_t start, end;
start = clock();
// ...code to measure...
end = clock();
double duration_sec = double(end-start)/CLOCKS_PER_SEC;
return 0;
}
Timing precision in seconds is 1.0/CLOCKS_PER_SEC
• This is not portable. It measures processor time on Linux, and clock time on Windows. Mar 30, 2019 at 15:11
• start and end time are always the same, despite I add an array of 512 elements..... under Win64/Visual Studio 17 Aug 6, 2020 at 16:23
• I'm not sure what would cause that, but if you're using C++ then best to switch over to the standard <chrono> methods. Jun 15, 2021 at 23:30
#include <iostream>
#include <chrono>
void function()
{
// code here;
}
int main()
{
auto t1 = std::chrono::high_resolution_clock::now();
function();
auto t2 = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::microseconds>( t2 - t1 ).count();
std::cout << duration<<"/n";
return 0;
}
This Worked for me.
Note:
The high_resolution_clock is not implemented consistently across different standard library implementations, and its use should be avoided. It is often just an alias for std::chrono::steady_clock or std::chrono::system_clock, but which one it is depends on the library or configuration. When it is a system_clock, it is not monotonic (e.g., the time can go backwards).
For example, for gcc's libstdc++ it is system_clock, for MSVC it is steady_clock, and for clang's libc++ it depends on configuration.
Generally one should just use std::chrono::steady_clock or std::chrono::system_clock directly instead of std::chrono::high_resolution_clock: use steady_clock for duration measurements, and system_clock for wall-clock time.
Here is an excellent header only class template to measure the elapsed time of a function or any code block:
#ifndef EXECUTION_TIMER_H
#define EXECUTION_TIMER_H
template<class Resolution = std::chrono::milliseconds>
class ExecutionTimer {
public:
std::chrono::high_resolution_clock,
private:
const Clock::time_point mStart = Clock::now();
public:
ExecutionTimer() = default;
~ExecutionTimer() {
const auto end = Clock::now();
std::ostringstream strStream;
strStream << "Destructor Elapsed: "
<< std::chrono::duration_cast<Resolution>( end - mStart ).count()
<< std::endl;
std::cout << strStream.str() << std::endl;
}
inline void stop() {
const auto end = Clock::now();
std::ostringstream strStream;
strStream << "Stop Elapsed: "
<< std::chrono::duration_cast<Resolution>(end - mStart).count()
<< std::endl;
std::cout << strStream.str() << std::endl;
}
}; // ExecutionTimer
#endif // EXECUTION_TIMER_H
Here are some uses of it:
int main() {
{ // empty scope to display ExecutionTimer's destructor's message
// displayed in milliseconds
ExecutionTimer<std::chrono::milliseconds> timer;
// function or code block here
timer.stop();
}
{ // same as above
ExecutionTimer<std::chrono::microseconds> timer;
// code block here...
timer.stop();
}
{ // same as above
ExecutionTimer<std::chrono::nanoseconds> timer;
// code block here...
timer.stop();
}
{ // same as above
ExecutionTimer<std::chrono::seconds> timer;
// code block here...
timer.stop();
}
return 0;
}
Since the class is a template we can specify real easily in how we want our time to be measured & displayed. This is a very handy utility class template for doing bench marking and is very easy to use.
• Personally, the stop() member function isn't needed because the destructor stops the timer for you. Feb 22, 2018 at 13:59
• @Casey The design of the class doesn't necessarily need the stop function, however it is there for a specific reason. The default construct when creating the object before your test code starts the timer. Then after your test code you explicitly use the timer object and call its stop method. You have to invoke it manually when you want to stop the timer. The class doesn't take any parameters. Also if you used this class just as I've shown you will see that there is a minimal elapse of time between the call to obj.stop and its destructor. Feb 23, 2018 at 3:15
• @Casey ... This also allows to have multiple timer objects within the same scope, not that one would really need it, but just another viable option. Feb 23, 2018 at 3:17
• This example cannot be compiled in the presented form. The error is related to "no match for operator<< ..."! Apr 16, 2019 at 14:52
• @Celdor do you have to appropriate includes; such as <chrono>? Apr 16, 2019 at 20:57
If you want to safe time and lines of code you can make measuring the function execution time a one line macro:
a) Implement a time measuring class as already suggested above ( here is my implementation for android):
class MeasureExecutionTime{
private:
const std::string caller;
public:
~MeasureExecutionTime(){
LOGD("ExecutionTime")<<"For "<<caller<<" is "<<std::chrono::duration_cast<std::chrono::milliseconds>(duration).count()<<"ms";
}
};
b) Add a convenient macro that uses the current function name as TAG (using a macro here is important, else __FUNCTION__ will evaluate to MeasureExecutionTime instead of the function you wanto to measure
#ifndef MEASURE_FUNCTION_EXECUTION_TIME
#define MEASURE_FUNCTION_EXECUTION_TIME const MeasureExecutionTime measureExecutionTime(__FUNCTION__);
#endif
c) Write your macro at the begin of the function you want to measure. Example:
void DecodeMJPEGtoANativeWindowBuffer(uvc_frame_t* frame_mjpeg,const ANativeWindow_Buffer& nativeWindowBuffer){
MEASURE_FUNCTION_EXECUTION_TIME
// Do some time-critical stuff
}
Which will result int the following output:
ExecutionTime: For DecodeMJPEGtoANativeWindowBuffer is 54ms
Note that this (as all other suggested solutions) will measure the time between when your function was called and when it returned, not neccesarily the time your CPU was executing the function. However, if you don't give the scheduler any change to suspend your running code by calling sleep() or similar there is no difference between.
• It is a very easy to use method in C++11.
• We can use std::chrono::high_resolution_clock from header
• We can write a method to print the method execution time in a much readable form.
For example, to find the all the prime numbers between 1 and 100 million, it takes approximately 1 minute and 40 seconds. So the execution time get printed as:
Execution Time: 1 Minutes, 40 Seconds, 715 MicroSeconds, 715000 NanoSeconds
The code is here:
#include <iostream>
#include <chrono>
using namespace std;
using namespace std::chrono;
typedef high_resolution_clock Clock;
typedef Clock::time_point ClockTime;
void findPrime(long n, string file);
void printExecutionTime(ClockTime start_time, ClockTime end_time);
int main()
{
long n = long(1E+8); // N = 100 million
ClockTime start_time = Clock::now();
// Write all the prime numbers from 1 to N to the file "prime.txt"
findPrime(n, "C:\\prime.txt");
ClockTime end_time = Clock::now();
printExecutionTime(start_time, end_time);
}
void printExecutionTime(ClockTime start_time, ClockTime end_time)
{
auto execution_time_ns = duration_cast<nanoseconds>(end_time - start_time).count();
auto execution_time_ms = duration_cast<microseconds>(end_time - start_time).count();
auto execution_time_sec = duration_cast<seconds>(end_time - start_time).count();
auto execution_time_min = duration_cast<minutes>(end_time - start_time).count();
auto execution_time_hour = duration_cast<hours>(end_time - start_time).count();
cout << "\nExecution Time: ";
if(execution_time_hour > 0)
cout << "" << execution_time_hour << " Hours, ";
if(execution_time_min > 0)
cout << "" << execution_time_min % 60 << " Minutes, ";
if(execution_time_sec > 0)
cout << "" << execution_time_sec % 60 << " Seconds, ";
if(execution_time_ms > 0)
cout << "" << execution_time_ms % long(1E+3) << " MicroSeconds, ";
if(execution_time_ns > 0)
cout << "" << execution_time_ns % long(1E+6) << " NanoSeconds, ";
}
I recommend using steady_clock which is guarunteed to be monotonic, unlike high_resolution_clock.
#include <iostream>
#include <chrono>
using namespace std;
unsigned int stopwatch()
{
auto delta = chrono::duration_cast<chrono::microseconds>(end_time - start_time);
start_time = end_time;
return delta.count();
}
int main() {
stopwatch(); //Start stopwatch
std::cout << "Hello World!\n";
cout << stopwatch() << endl; //Time to execute last line
for (int i=0; i<1000000; i++)
cout << stopwatch() << endl; //Time to execute for loop
}
Output:
Hello World!
62
163514
Since none of the provided answers are very accurate or give reproducable results I decided to add a link to my code that has sub-nanosecond precision and scientific statistics.
Note that this will only work to measure code that takes a (very) short time to run (aka, a few clock cycles to a few thousand): if they run so long that they are likely to be interrupted by some -heh- interrupt, then it is clearly not possible to give a reproducable and accurate result; the consequence of which is that the measurement never finishes: namely, it continues to measure until it is statistically 99.9% sure it has the right answer which never happens on a machine that has other processes running when the code takes too long.
https://github.com/CarloWood/cwds/blob/master/benchmark.h#L40
You can have a simple class which can be used for this kind of measurements.
class duration_printer {
public:
duration_printer() : __start(std::chrono::high_resolution_clock::now()) {}
~duration_printer() {
using namespace std::chrono;
high_resolution_clock::time_point end = high_resolution_clock::now();
duration<double> dur = duration_cast<duration<double>>(end - __start);
std::cout << dur.count() << " seconds" << std::endl;
}
private:
std::chrono::high_resolution_clock::time_point __start;
};
The only thing is needed to do is to create an object in your function at the beginning of that function
void veryLongExecutingFunction() {
duration_calculator dc;
for(int i = 0; i < 100000; ++i) std::cout << "Hello world" << std::endl;
}
int main() {
veryLongExecutingFunction();
return 0;
}
and that's it. The class can be modified to fit your requirements.
C++11 cleaned up version of Jahid's response:
#include <chrono>
void long_operation(int ms)
{
/* Simulating a long, heavy operation. */
}
template<typename F, typename... Args>
double funcTime(F func, Args&&... args){
std::chrono::high_resolution_clock::time_point t1 =
std::chrono::high_resolution_clock::now();
func(std::forward<Args>(args)...);
return std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::high_resolution_clock::now()-t1).count();
}
int main()
{
std::cout<<"expect 150: "<<funcTime(long_operation,150)<<"\n";
return 0;
}
This is a very basic timer class which you can expand on depending on your needs. I wanted something straightforward which can be used cleanly in code. You can mess with it at coding ground with this link: http://tpcg.io/nd47hFqr.
class local_timer {
private:
std::chrono::_V2::system_clock::time_point start_time;
std::chrono::_V2::system_clock::time_point stop_time;
std::chrono::_V2::system_clock::time_point stop_time_temp;
std::chrono::microseconds most_recent_duration_usec_chrono;
double most_recent_duration_sec;
public:
local_timer() {
};
~local_timer() {
};
void start() {
this->start_time = std::chrono::high_resolution_clock::now();
};
void stop() {
this->stop_time = std::chrono::high_resolution_clock::now();
};
double get_time_now() {
this->stop_time_temp = std::chrono::high_resolution_clock::now();
this->most_recent_duration_usec_chrono = std::chrono::duration_cast<std::chrono::microseconds>(stop_time_temp-start_time);
this->most_recent_duration_sec = (long double)most_recent_duration_usec_chrono.count()/1000000;
return this->most_recent_duration_sec;
};
double get_duration() {
this->most_recent_duration_usec_chrono = std::chrono::duration_cast<std::chrono::microseconds>(stop_time-start_time);
this->most_recent_duration_sec = (long double)most_recent_duration_usec_chrono.count()/1000000;
return this->most_recent_duration_sec;
};
};
The use for this being
#include <iostream>
#include "timer.hpp" //if kept in an hpp file in the same folder, can also before your main function
int main() {
//create two timers
local_timer timer1 = local_timer();
local_timer timer2 = local_timer();
//set start time for timer1
timer1.start();
//wait 1 second
while(timer1.get_time_now() < 1.0) {
}
//save time
timer1.stop();
//print time
std::cout << timer1.get_duration() << " seconds, timer 1\n" << std::endl;
timer2.start();
for(long int i = 0; i < 100000000; i++) {
//do something
if(i%1000000 == 0) {
//return time since loop started
std::cout << timer2.get_time_now() << " seconds, timer 2\n"<< std::endl;
}
}
return 0;
}
|
2023-02-03 08:26:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25984111428260803, "perplexity": 6910.903636961732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500044.16/warc/CC-MAIN-20230203055519-20230203085519-00793.warc.gz"}
|
https://pygsp.readthedocs.io/en/stable/tutorials/intro.html
|
# Introduction to the PyGSP¶
This tutorial will show you the basic operations of the toolbox. After installing the package with pip, start by opening a python shell, e.g. a Jupyter notebook, and import the PyGSP. We will also need NumPy to create matrices and arrays.
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from pygsp import graphs, filters, plotting
We then set default plotting parameters. We’re using the matplotlib backend to embed plots in this tutorial. The pyqtgraph backend is best suited for interactive visualization.
>>> plotting.BACKEND = 'matplotlib'
>>> plt.rcParams['figure.figsize'] = (10, 5)
## Graphs¶
Most likely, the first thing you would like to do is to create a graph. In this toolbox, a graph is encoded as an adjacency, or weight, matrix. That is because it’s the most efficient representation to deal with when using spectral methods. As such, you can construct a graph from any adjacency matrix as follows.
>>> rs = np.random.RandomState(42) # Reproducible results.
>>> W = rs.uniform(size=(30, 30)) # Full graph.
>>> W[W < 0.93] = 0 # Sparse graph.
>>> W = W + W.T # Symmetric graph.
>>> np.fill_diagonal(W, 0) # No self-loops.
>>> G = graphs.Graph(W)
>>> print('{} nodes, {} edges'.format(G.N, G.Ne))
30 nodes, 60 edges
The pygsp.graphs.Graph class we just instantiated is the base class for all graph objects, which offers many methods and attributes.
Given, a graph object, we can test some properties.
>>> G.is_connected()
True
>>> G.is_directed()
False
We can retrieve our weight matrix, which is stored in a sparse format.
>>> (G.W == W).all()
True
>>> type(G.W)
<class 'scipy.sparse.lil.lil_matrix'>
We can access the graph Laplacian
>>> # The graph Laplacian (combinatorial by default).
>>> G.L.shape
(30, 30)
We can also compute and get the graph Fourier basis (see below).
>>> G.compute_fourier_basis()
>>> G.U.shape
(30, 30)
Or the graph differential operator, useful to e.g. compute the gradient or smoothness of a signal.
>>> G.compute_differential_operator()
>>> G.D.shape
(60, 30)
Note
Note that we called pygsp.graphs.Graph.compute_fourier_basis() and pygsp.graphs.Graph.compute_differential_operator() before accessing the Fourier basis pygsp.graphs.Graph.U and the differential operator pygsp.graphs.Graph.D. Doing so is however not mandatory as those matrices would have been computed when requested (lazy evaluation). Omitting to call the compute functions does print a warning to tell you that a potentially heavy computation is taking place under the hood (that’s also the reason those matrices are not computed when the graph object is instantiated). It is thus encouraged to call them so that you are aware of the involved computations.
To be able to plot a graph, we need to embed its nodes in a 2D or 3D space. While most included graph models define these coordinates, the graph we just created do not. We only passed a weight matrix after all. Let’s set some coordinates with pygsp.graphs.Graph.set_coordinates() and plot our graph.
>>> G.set_coordinates('ring2D')
>>> G.plot()
While we created our first graph ourselves, many standard models of graphs are implemented as subclasses of the Graph class and can be easily instantiated. Check the pygsp.graphs module to get a list of them and learn more about the Graph object.
## Fourier basis¶
As in classical signal processing, the Fourier transform plays a central role in graph signal processing. Getting the Fourier basis is however computationally intensive as it needs to fully diagonalize the Laplacian. While it can be used to filter signals on graphs, a better alternative is to use one of the fast approximations (see pygsp.filters.Filter.filter()). Let’s compute it nonetheless to visualize the eigenvectors of the Laplacian. Analogous to classical Fourier analysis, they look like sinuses on the graph. Let’s plot the second and third eigenvectors (the first is constant). Those are graph signals, i.e. functions $$s: \mathcal{V} \rightarrow \mathbb{R}^d$$ which assign a set of values (a vector in $$\mathbb{R}^d$$) at every node $$v \in \mathcal{V}$$ of the graph.
>>> G = graphs.Logo()
>>> G.compute_fourier_basis()
>>>
>>> fig, axes = plt.subplots(1, 2, figsize=(10, 3))
>>> for i, ax in enumerate(axes):
... G.plot_signal(G.U[:, i+1], vertex_size=30, ax=ax)
... _ = ax.set_title('Eigenvector {}'.format(i+2))
... ax.set_axis_off()
>>> fig.tight_layout()
The parallel with classical signal processing is best seen on a ring graph, where the graph Fourier basis is equivalent to the classical Fourier basis. The following plot shows some eigenvectors drawn on a 1D and 2D embedding of the ring graph. While the signals are easier to interpret on a 1D plot, the 2D plot best represents the graph.
>>> G2 = graphs.Ring(N=50)
>>> G2.compute_fourier_basis()
>>> fig, axes = plt.subplots(1, 2, figsize=(10, 4))
>>> G2.plot_signal(G2.U[:, 4], ax=axes[0])
>>> G2.set_coordinates('line1D')
>>> G2.plot_signal(G2.U[:, 1:4], ax=axes[1])
>>> fig.tight_layout()
## Filters¶
To filter signals on graphs, we need to define filters. They are represented in the toolbox by the pygsp.filters.Filter class. Filters are usually defined in the spectral domain. Given the transfer function
$g(x) = \frac{1}{1 + \tau x},$
let’s define and plot that low-pass filter:
>>> tau = 1
>>> def g(x):
... return 1. / (1. + tau * x)
>>> g = filters.Filter(G, g)
>>>
>>> fig, ax = plt.subplots()
>>> g.plot(plot_eigenvalues=True, ax=ax)
>>> _ = ax.set_title('Filter frequency response')
The filter is plotted along all the spectrum of the graph. The black crosses are the eigenvalues of the Laplacian. They are the points where the continuous filter will be evaluated to create a discrete filter.
Note
You can put multiple functions in a list to define a filter bank!
Note
The pygsp.filters module implements various standard filters.
Let’s create a graph signal and add some random noise.
>>> # Graph signal: each letter gets a different value + additive noise.
>>> s = np.zeros(G.N)
>>> s[G.info['idx_g']-1] = -1
>>> s[G.info['idx_s']-1] = 0
>>> s[G.info['idx_p']-1] = 1
>>> s += rs.uniform(-0.5, 0.5, size=G.N)
We can now try to denoise that signal by filtering it with the above defined low-pass filter.
>>> s2 = g.filter(s)
>>>
>>> fig, axes = plt.subplots(1, 2, figsize=(10, 3))
>>> G.plot_signal(s, vertex_size=30, ax=axes[0])
>>> _ = axes[0].set_title('Noisy signal')
>>> axes[0].set_axis_off()
>>> G.plot_signal(s2, vertex_size=30, ax=axes[1])
>>> _ = axes[1].set_title('Cleaned signal')
>>> axes[1].set_axis_off()
>>> fig.tight_layout()
While the noise is largely removed thanks to the filter, some energy is diffused between the letters. This is the typical behavior of a low-pass filter.
So here are the basics for the PyGSP. Please check the other tutorials and the reference guide for more. Enjoy!
Note
Please see the review article The Emerging Field of Signal Processing on Graphs: Extending High-Dimensional Data Analysis to Networks and Other Irregular Domains for an overview of the methods this package leverages.
|
2018-10-21 21:04:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4177972972393036, "perplexity": 2615.761936066855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514355.90/warc/CC-MAIN-20181021203102-20181021224602-00483.warc.gz"}
|
https://3v4l.org/J2qfK
|
# 3v4l.org
run code in 200+ php & hhvm versions
```<?php \$i=1; \$k=1; while (i<100) { \$k=\$k+1; \$i=\$i*\$k; } ?>```
Output for 7.2.0
Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 Warning: Use of undefined constant i - assumed 'i' (this will throw an Error in a future version of PHP) in /in/J2qfK on line 5 W
Process exited with code 137.
Output for 4.3.0 - 7.1.10
Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2qfK on line 5 Notice: Use of undefined constant i - assumed 'i' in /in/J2
Process exited with code 137.
|
2018-03-23 07:16:13
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8211390972137451, "perplexity": 3695.4886584621217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648198.55/warc/CC-MAIN-20180323063710-20180323083710-00320.warc.gz"}
|
http://arxivsorter.org/
|
Arxivsorter uses the network of co-authorship to estimate a proximity between people.
It then ranks a list of publications using a friends-of-friends algorithm.
It is not a filter and therefore does not lose any information.
J.P. Magué & B. Ménard
[1]
Title: Atomic data and spectral modeling constraints from high-resolution X-ray observations of the Perseus cluster with Hitomi
Comments: 46 pages, 25 figures, 11 tables. Accepted for publication in PASJ
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
The Hitomi SXS spectrum of the Perseus cluster, with $\sim$5 eV resolution in the 2-9 keV band, offers an unprecedented benchmark of the atomic modeling and database for hot collisional plasmas. It reveals both successes and challenges of the current atomic codes. The latest versions of AtomDB/APEC (3.0.8), SPEX (3.03.00), and CHIANTI (8.0) all provide reasonable fits to the broad-band spectrum, and are in close agreement on best-fit temperature, emission measure, and abundances of a few elements such as Ni. For the Fe abundance, the APEC and SPEX measurements differ by 16%, which is 17 times higher than the statistical uncertainty. This is mostly attributed to the differences in adopted collisional excitation and dielectronic recombination rates of the strongest emission lines. We further investigate and compare the sensitivity of the derived physical parameters to the astrophysical source modeling and instrumental effects. The Hitomi results show that an accurate atomic code is as important as the astrophysical modeling and instrumental calibration aspects. Substantial updates of atomic databases and targeted laboratory measurements are needed to get the current codes ready for the data from the next Hitomi-level mission.
[2]
Title: Stellar Population Synthesis of star forming clumps in galaxy pairs and non-interacting spiral galaxies
Comments: Accepted for publication in ApJS. 20 pages, 10 figures, 6 tables
Subjects: Astrophysics of Galaxies (astro-ph.GA)
We have identified 1027 star forming complexes in a sample of 46 galaxies from the Spirals, Bridges, and Tails (SB&T) sample of interacting galaxies, and 693 star forming complexes in a sample of 38 non-interacting spiral (NIS) galaxies in $8\rm{\mu m}$ observations from the Spitzer Infrared Array Camera. We have used archival multi-wavelength UV-to IR observations to fit the observed spectral energy distribution (SED) of our clumps with the Code Investigating GALaxy Emission (CIGALE) using a double exponentially declined star formation history (SFH). We derive SFRs, stellar masses, ages and fractions of the most recent burst, dust attenuation, and fractional emission due to an AGN for these clumps. The resolved star formation main sequence holds on 2.5kpc scales, although it does not hold on 1kpc scales. We analyzed the relation between SFR, stellar mass, and age of the recent burst in the SB&T and NIS samples, and we found that the SFR per stellar mass is higher in the SB&T galaxies, and the clumps are younger in the galaxy pairs. We analyzed the SFR radial profile and found that SFR is enhanced through the disk and in the tidal features relative to normal spirals.
[3]
Title: The multiplicity and anisotropy of galactic satellite accretion
Subjects: Astrophysics of Galaxies (astro-ph.GA); Cosmology and Nongalactic Astrophysics (astro-ph.CO)
We study the incidence of group and filamentary dwarf galaxy accretion into Milky Way (MW) mass haloes using two types of hydrodynamical simulations: EAGLE, which resolves a large cosmological volume, and the AURIGA suite, which are very high resolution zoom-in simulations of individual MW-sized haloes. The present-day 11 most massive satellites are predominantly (75%) accreted in single events, 14% in pairs and 6% in triplets, with higher group multiplicities being unlikely. Group accretion becomes more common for fainter satellites, with 60% of the top 50 satellites accreted singly, 12% in pairs, and 28% in richer groups. A group similar in stellar mass to the Large Magellanic Cloud (LMC) would bring on average 15 members with stellar mass larger than $10^4{~\rm M_\odot}$. Half of the top 11 satellites are accreted along the two richest filaments. The accretion of dwarf galaxies is highly anisotropic, taking place preferentially perpendicular to the halo minor axis, and, within this plane, preferentially along the halo major axis. The satellite entry points tend to be aligned with the present-day central galaxy disc and satellite plane, but to a lesser extent than with the halo shape. Dwarfs accreted in groups or along the richest filament have entry points that show an even larger degree of alignment with the host halo than the full satellite population. We also find that having most satellites accreted as a single group or along a single filament is unlikely to explain the MW disc of satellites.
[4]
Title: Intra-cluster Globular Clusters in a Simulated Galaxy Cluster
Comments: accepted for publication in ApJ
Subjects: Astrophysics of Galaxies (astro-ph.GA)
Using a cosmological dark matter simulation of a galaxy-cluster halo, we follow the temporal evolution of its globular cluster population. To mimic the red and blue globular cluster populations, we select at high redshift $(z\sim 1)$ two sets of particles from individual galactic halos constrained by the fact that, at redshift $z=0$, they have density profiles similar to observed ones. At redshift $z=0$, approximately 60\% of our selected globular clusters were removed from their original halos building up the intra-cluster globular cluster population, while the remaining 40\% are still gravitationally bound to their original galactic halos. Since the blue population is more extended than the red one, the intra-cluster globular cluster population is dominated by blue globular clusters, with a relative fraction that grows from 60\% at redshift $z=0$ up to 83\% for redshift $z\sim 2$. In agreement with observational results for the Virgo galaxy cluster, the blue intra-cluster globular cluster population is more spatially extended than the red one, pointing to a tidally disrupted origin.
[5]
Title: Wandering off the centre: A characterisation of the random motion of intermediate-mass black holes in star clusters
Comments: 14 pages, 9 figures, 2 tables; accepted for publication in MNRAS
Subjects: Astrophysics of Galaxies (astro-ph.GA)
Despite recent observational efforts, unequivocal signs for the presence of intermediate-mass black holes (IMBHs) in globular clusters (GCs) have not been found yet. Especially when the presence of IMBHs is constrained through dynamical modeling of stellar kinematics, it is fundamental to account for the displacement that the IMBH might have with respect to the GC centre. In this paper we analyse the IMBH wandering around the stellar density centre using a set of realistic direct N-body simulations of star cluster evolution. Guided by the simulation results, we develop a basic yet accurate model that can be used to estimate the average IMBH radial displacement ($\left<r_\mathrm{bh}\right>$) in terms of structural quantities as the core radius ($r_\mathrm{c}$), mass ($M_\mathrm{c}$), and velocity dispersion ($\sigma_\mathrm{c}$), in addition to the average stellar mass ($m_\mathrm{c}$) and the IMBH mass ($M_\mathrm{bh}$). The model can be expressed by the equation $\left<r_\mathrm{bh}\right>/r_\mathrm{c}=A(m_\mathrm{c}/M_\mathrm{bh})^\alpha[\sigma_\mathrm{c}^2r_\mathrm{c}/(GM_\mathrm{c})]^\beta$, in which the free parameters $A,\alpha,\beta$ are calculated through comparison with the numerical results on the IMBH displacement. The model is then applied to Galactic GCs, finding that for an IMBH mass equal to 0.1% of the GC mass, the typical expected displacement of a putative IMBH is around $1''$ for most Galactic GCs, but IMBHs can wander to larger angular distances in some objects, including a prediction of a $2.5''$ displacement for NGC 5139 ($\omega$ Cen), and $>10''$ for NGC 5053, NGC 6366 and ARP2.
[6]
Title: $M_*/L$ gradients driven by IMF variation: Large impact on dynamical stellar mass estimates
Comments: 12 pages, 6 figures, submitted to MNRAS
Subjects: Astrophysics of Galaxies (astro-ph.GA); Cosmology and Nongalactic Astrophysics (astro-ph.CO)
Within a galaxy the stellar mass-to-light ratio $\Upsilon_*$ is not constant. We show that ignoring $\Upsilon_*$ gradients can have a more dramatic effect on dynamical ($M_*^{\rm dyn}$) compared to stellar population ($M_*^{\rm SP}$) based estimates of early-type galaxy stellar masses, because $M_*^{\rm dyn}$ is usually calibrated using the velocity dispersion measured in the central regions. If $\Upsilon_*$ is greater there, then ignoring the gradient will lead to an overestimate of $M_*^{\rm dyn}$. Spatially resolved kinematics of nearby early-type galaxies suggests that these gradients are driven by gradients in the initial mass function (IMF). Accounting for recent estimates of the IMF-driven $\Upsilon_*$ gradient reduces $M_*^{\rm dyn}$ substantially ($\sim$ a factor of two), and may be accompanied by a (smaller) change in $M_*^{\rm SP}$. Our results suggest that $M_*^{\rm dyn}$ estimates in the literature should be revised downwards, rather than revising $M_*^{\rm SP}$ estimates upwards. This has three consequences. First, if gradients in $\Upsilon_*$ are present, then $M_*^{\rm dyn}$ cannot be estimated independently of stellar population synthesis models. Second, accounting for $\Upsilon_*$ gradients changes the slope of the stellar mass function $\phi(M_*^{\rm dyn})$, and reduces the associated stellar mass density, especially at high masses. Third, if gradients are stronger in more massive galaxies, then accounting for this reduces the slope of the correlation between the ratio of the dynamical and stellar population mass estimates of a galaxy with its velocity dispersion. These conclusions potentially impact estimates of the need for feedback and adiabatic contraction, so our results highlight the importance of measurements of $\Upsilon_*$ gradients in larger samples.
[7]
Title: Prospects for detecting gravitational waves at 5 Hz with ground-based detectors
Subjects: Instrumentation and Methods for Astrophysics (astro-ph.IM); High Energy Astrophysical Phenomena (astro-ph.HE); General Relativity and Quantum Cosmology (gr-qc)
We propose an upgrade of Advanced LIGO (aLIGO), named LIGO-LF, that focuses on improving the sensitivity in the 5-30 Hz low-frequency band, and we explore the upgrade's astrophysical applications. We present a comprehensive study of the detector's technical noises, and show that with the new technologies such as interferometrically-sensed seismometers and balanced-homodyne readout, LIGO-LF can reach the fundamental limits set by quantum and thermal noises down to 5 Hz. These technologies are also directly applicable to the future generation of detectors. LIGO-LF can observe a rich array of astrophysical sources such as binary black holes with total mass up to 2000 M_\odot. The horizon distance of a single LIGO-LF detector will be z ~ 6, greatly exceeding aLIGO's reach. Additionally, for a given source the chirp mass and total mass can be constrained 2 times better, and the effective spin 3-5 times better, than aLIGO. The total number of detected merging black holes will increase by a factor of 16 compared with aLIGO. Meanwhile, LIGO-LF will also significantly enhance the probability of detecting other astrophysical phenomena including the gravitational memory effects and the neutron star r-mode resonances.
[8]
Title: Exploring The Effects Of Disk Thickness On The Black Hole Reflection Spectrum
Comments: 19 pages, 10 figures, Submitted for publication in ApJ
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Astrophysics of Galaxies (astro-ph.GA)
The relativistically-broadened reflection spectrum, observed in both AGN and X-ray binaries, has proven to be a powerful probe of the properties of black holes and the environments in which they reside. Being emitted from the inner-most regions of the accretion disk, this X-ray spectral component carries with it information not only about the plasma that resides in these extreme conditions, but also the black hole spin, a marker of the formation and accretion history of these objects. The models currently used to interpret the reflection spectrum are often simplistic, however, approximating the disk as an infinitely thin, optically thick plane of material orbiting in circular Keplerian orbits around the central object. Using a new relativistic ray tracing suite (Fenrir) that allows for more complex disk approximations, we examine the effects that disk thickness may have on the reflection spectrum. We find that finite disk thickness can have a variety of effects on the reflection spectrum, including a truncation of the blue wing (from self-shadowing of the accretion disk) and an enhancement of the red wing (from the irradiation of the central 'eye wall' of the inner disk). We make a first estimate on the systematic errors on black hole spin and height that may result from neglecting these effects.
[9]
Title: Astrometry with the WFIRST Wide-Field Imager
Authors: The WFIRST Astrometry Working Group: Robyn E. Sanderson (1), Andrea Bellini (2), Stefano Casertano (2), Jessica R. Lu (3), Peter Melchior (4), David Bennett (5), Michael Shao (6), Jason Rhodes (6), Sangeeta Malhotra (5), Scott Gaudi (7), Michael Fall (2), Ed Nelan (2), Puragra Guhathakurta (8), Jay Anderson (2), Shirley Ho (3 and 9), Mattia Libralato (2) ((1) TAPIR, California Institute of Technology, (2) Space Telescope Science Institute, (3) Department of Astronomy, University of California, Berkeley, (4) Department of Astrophysical Sciences, Princeton University, (5) Astrophysics Science Division, NASA Goddard Space Flight Center, (6) Jet Propulsion Laboratory, California Institute of Technology, (7) Department of Astronomy, Ohio State University, (8) University of California Santa Cruz, (9) Department of Physics, University of California, Berkeley)
Comments: 25 pages, 15 figures; submitted to PASP
Subjects: Instrumentation and Methods for Astrophysics (astro-ph.IM); Earth and Planetary Astrophysics (astro-ph.EP); Astrophysics of Galaxies (astro-ph.GA); Solar and Stellar Astrophysics (astro-ph.SR)
The Wide-Field InfraRed Space Telescope (WFIRST) will be capable of delivering precise astrometry for faint sources over the enormous field of view of its main camera, the Wide-Field Imager (WFI). This unprecedented combination will be transformative for the many scientific questions that require precise positions, distances, and velocities of stars. We describe the expectations for the astrometric precision of the WFIRST WFI in different scenarios, illustrate how a broad range of science cases will see significant advances with such data, and identify aspects of WFIRST's design where small adjustments could greatly improve its power as an astrometric instrument.
[10]
Title: Are ultracompact minihalos really ultracompact?
Comments: 7 pages, 4 figures; to be submitted to PRD
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
Ultracompact minihalos (UCMHs) have emerged as a valuable probe of the primordial power spectrum of density fluctuations at small scales. UCMHs are expected to form at early times in regions with $\delta\rho/\rho \gtrsim 10^{-3}$, and they are theorized to possess an extremely compact $\rho\propto r^{-9/4}$ radial density profile, which enhances their observable signatures. Non-observation of UCMHs can thus constrain the primordial power spectrum. Using N-body simulations to study the collapse of extreme density peaks at $z \simeq 1000$, we show that UCMHs forming under realistic conditions do not develop the $\rho\propto r^{-9/4}$ profile, and instead develop either $\rho\propto r^{-3/2}$ or $\rho\propto r^{-1}$ inner density profiles depending on the shape of the power spectrum. We also demonstrate via idealized simulations that self-similarity -- the absence of a scale length -- is necessary to produce a halo with the $\rho\propto r^{-9/4}$ profile, and we argue that this implies such halos cannot form from a Gaussian primordial density field. Prior constraints derived from UCMH non-observation must be reworked in light of this discovery. Although the shallower density profile reduces UCMH visibility, our findings reduce their signal by as little as $\mathcal O(10^{-2})$ while allowing later-forming halos to be considered, which suggests that new constraints could be significantly stronger.
[11]
Title: Interaction in the dark sector: a Bayesian analysis with latest observations
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc)
By combining cosmological probes at low, intermediate and high redshifts, we investigate the observational viability of a class of models with interaction in the dark sector. We perform a Bayesian analysis using the latest data sets of type Ia supernovae, baryon acoustic oscillations, the angular acoustic scale of the cosmic microwave background, and measurements of the expansion rate. When combined with the current measurement of the local expansion rate obtained by the Hubble Space Telescope, we find that these observations provide evidence in favour of interacting models with respect to the standard cosmology.
[12]
Title: Gravity mode offset and properties of the evanescent zone in red-giant stars
Comments: accepted for publication in A&A
Subjects: Solar and Stellar Astrophysics (astro-ph.SR)
The wealth of asteroseismic data for red-giant stars and the precision with which these data have been observed over the last decade calls for investigations to further understand the internal structures of these stars. The aim of this work is to validate a method to measure the underlying period spacing, coupling term and mode offset of pure gravity modes that are present in the deep interiors of red-giant stars. We subsequently investigate the physical conditions of the evanescent zone between the gravity mode cavity and the pressure mode cavity. We implement an alternative mathematical description to analyse observational data and to extract the underlying physical parameters that determine the frequencies of mixed modes. This description takes the radial order of the modes explicitly into account, which reduces its sensitivity to aliases. Additionally, and for the first time, this method allows us to constrain the gravity mode offset for red-giant stars. We determine the period spacing and the coupling term for the dipole modes within a few percent of literature values. Additionally, we find that the gravity mode offset varies on a star by star basis and should not be kept fixed in the analysis. Furthermore, we find that the coupling factor is logarithmically related to the physical width of the evanescent region normalised by the radius at which the evanescent zone is located. Finally, the local density contrast at the edge of the core of red giant branch models shows a tentative correlation with the offset. (abstract abriged)
[13]
Title: ATCA observations of the MACS-Planck Radio Halo Cluster Project II. Radio observations of an intermediate redshift cluster sample
Comments: 12 pages, 7 figures, accepted for publication in A&A
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); Astrophysics of Galaxies (astro-ph.GA)
A fraction of galaxy clusters host diffuse radio sources whose origins are investigated through multi-wavelength studies of cluster samples. We investigate the presence of diffuse radio emission in a sample of seven galaxy clusters in the largely unexplored intermediate redshift range (0.3 < z < 0.44). In search of diffuse emission, deep radio imaging of the clusters are presented from wide band (1.1-3.1 GHz), full resolution ($\sim$ 5 arcsec) observations with the Australia Telescope Compact Array (ATCA). The visibilities were also imaged at lower resolution after point source modelling and subtraction and after a taper was applied to achieve better sensitivity to low surface brightness diffuse radio emission. In case of non-detection of diffuse sources, we set upper limits for the radio power of injected diffuse radio sources in the field of our observations. Furthermore, we discuss the dynamical state of the observed clusters based on an X-ray morphological analysis with XMM-Newton. We detect a giant radio halo in PSZ2 G284.97-23.69 (z=0.39) and a possible diffuse source in the nearly relaxed cluster PSZ2 G262.73-40.92 (z=0.421). Our sample contains three highly disturbed massive clusters without clear traces of diffuse emission at the observed frequencies. We were able to inject modelled radio halos with low values of total flux density to set upper detection limits; however, with our high-frequency observations we cannot exclude the presence of RH in these systems because of the sensitivity of our observations in combination with the high z of the observed clusters.
[14]
Title: The Photon in Dense Nuclear Matter I: Random Phase Approximation
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Other Condensed Matter (cond-mat.other); Nuclear Theory (nucl-th)
We present a comprehensive and pedagogic discussion of the properties of photons in cold and dense nuclear matter based on the resummed one-loop photon self energy. Correlations between electrons, muons, protons and neutrons in beta equilibrium that arise due to electromagnetic and strong interactions are consistently taken into account within the random phase approximation. Screening effects and damping as well as collective excitations are systematically studied in a fully relativistic setup. Our study is relevant to linear response theory of dense nuclear matter, calculations of transport properties of cold dense matter and to investigations of the production and propagation of hypothetical vector bosons such as the dark photons.
[15]
Title: Quasi-Periodic Behavior of Mini-Disks in Binary Black Holes Approaching Merger
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Astrophysics of Galaxies (astro-ph.GA); General Relativity and Quantum Cosmology (gr-qc)
We present the first magnetohydrodynamic simulation in which a circumbinary disk around a relativistic binary black hole feeds mass to individual accretion disks ("mini-disks") around each black hole. Mass flow through the accretion streams linking the circumbinary disk to the mini-disks is modulated quasi-periodically by the streams' interaction with a nonlinear $m=1$ density feature, or "lump", at the inner edge of the circumbinary disk: the stream supplying each mini-disk comes into phase with the lump at a frequency $0.74$ times the binary orbital frequency. Because the binary is relativistic, the tidal truncation radii of the mini-disks are not much larger than their innermost stable circular orbits; consequently, the mini-disks' inflow times are shorter than the conventional estimate and are comparable to the stream modulation period. As a result, the mini-disks are always in inflow disequilibrium, with their masses and spiral density wave structures responding to the stream's quasi-periodic modulation. The fluctuations in each mini-disk's mass are so large that as much as $75\%$ of the total mini-disk mass can be contained within a single mini-disk. Such quasi-periodic modulation of the mini-disk structure may introduce distinctive time-dependent features in the binary's electromagnetic emission.
[16]
Title: Exoplanet Radius Gap Dependence on Host Star Type
Comments: 2pages, 1 figure, RNAAS 2017
Subjects: Earth and Planetary Astrophysics (astro-ph.EP)
Exoplanets smaller than Neptune are numerous, but the nature of the planet populations in the 1-4 Earth radii range remains a mystery. The complete Kepler sample of Q1-Q17 exoplanet candidates shows a radius gap at ~ 2 Earth radii, as reported by us in January 2017 in LPSC conference abstract #1576 (Zeng et al. 2017). A careful analysis of Kepler host stars spectroscopy by the CKS survey allowed Fulton et al. (2017) in March 2017 to unambiguously show this radius gap. The cause of this gap is still under discussion (Ginzburg et al. 2017; Lehmer & Catling 2017; Owen & Wu 2017). Here we add to our original analysis the dependence of the radius gap on host star type.
[17]
Title: Multi-wavelength scaling relations in galaxy groups: a detailed comparison of GAMA and KiDS observations to BAHAMAS simulations
Comments: 18 pages, 11 figures, submitted to MNRAS
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
We study the scaling relations between the baryonic content and total mass of groups of galaxies, as these systems provide a unique way to examine the role of non-gravitational processes in structure formation. Using Planck and ROSAT data, we conduct detailed comparisons of the stacked thermal Sunyaev-Zel'dovich (tSZ) effect and X-ray scaling relations of galaxy groups found in the the Galaxy And Mass Assembly (GAMA) survey and the BAHAMAS hydrodynamical simulation. We use weak gravitational lensing data from the Kilo Degree Survey (KiDS) to determine the average halo mass of the studied systems. We analyse the simulation in the same way, using realistic weak lensing, X-ray, and tSZ synthetic observations. Furthermore, to keep selection biases under control, we employ exactly the same galaxy selection and group identification procedures to the observations and simulation. Applying this careful comparison, we find that the simulations are in agreement with the observations, particularly with regards to the scaling relations of the lensing and tSZ results. This finding demonstrates that hydrodynamical simulation have reached the level of realism that is required to interpret observational survey data and study the baryon physics within dark matter haloes, where analytical modelling is challenging. Finally, using simulated data, we demonstrate that our observational processing of the X-ray and tSZ signals is free of significant biases. We find that our optical group selection procedure has, however, some room for improvement.
[18]
Title: Thermodynamic Profiles of Galaxy Clusters from a Joint X-ray/SZ Analysis
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
We jointly analyze Bolocam Sunyaev-Zeldovich (SZ) effect and Chandra X-ray data for a set of 45 clusters to derive gas density and temperature profiles without using spectroscopic information. The sample spans the mass and redshift range $3 \times 10^{14} M_{\odot} \le M_{500} \le 25 \times 10^{14} M_{\odot}$ and $0.15\le z \le 0.89$. We define cool-core (CC) and non-cool core (NCC) subsamples based on the central X-ray luminosity, and 17/45 clusters are classified as CC. In general, the profiles derived from our analysis are found to be in good agreement with previous analyses, and profile constraints beyond $r_{500}$ are obtained for 34/45 clusters. In approximately 30% of the CC clusters our analysis shows a central temperature drop with a statistical significance of $>3\sigma$; this modest detection fraction is due mainly to a combination of coarse angular resolution and modest S/N in the SZ data. Most clusters are consistent with an isothermal profile at the largest radii near $r_{500}$, although 9/45 show a significant temperature decrease with increasing radius. The sample mean density profile is in good agreement with previous studies, and shows a minimum intrinsic scatter of approximately 10% near $0.5 \times r_{500}$. The sample mean temperature profile is consistent with isothermal, and has an intrinsic scatter of approximately 50% independent of radius. This scatter is significantly higher compared to earlier X-ray-only studies, which find intrinsic scatters near 10%, likely due to a combination of unaccounted for non-idealities in the SZ noise, projection effects, and sample selection.
[19]
Title: Supernova Ejecta in Ocean Cores Used as Time Constraints for Nearby Stellar Groups
Comments: 9 Pages, 3 figures, 8 tables
Subjects: Solar and Stellar Astrophysics (astro-ph.SR)
Evidence of a supernova event, discussed in Wallner et al., was discovered in the deep-sea crusts with two signals dating back to 2-3 and 7-9 Myr ago. In this contribution, we place constraints on the birth-site of the supernova progenitors from the ejecta timeline, the initial mass function, and the ages of nearby stellar groups. We investigated the Scorpius-Centaurus OB Association, the nearest site of recent massive star formation, and the moving group Tucana-Horologium. Using the known stellar mass of the remaining massive stars within these subgroups and factoring in travel time for the ejecta, we have constrained the ages and masses of the supernova progenitors by using the initial mass function and then compared the results to the canonical ages of each subgroup. Our results identify the Upper Scorpius and Lower Centaurus-Crux subgroups as unlikely birth-sites for these supernovae. We find that Tucana-Horologium is the likely birth-site of the supernova 7-9 Myr ago and Upper Centaurus-Lupus is the likely birth-site for the supernova 2-3 Myr ago.
[20]
Title: An Optical and Infrared Time-Domain Study of the Supergiant Fast X-ray Transient Candidate IC 10 X-2
Comments: 15 pages, 4 figures. Submitted to ApJ on Sep 26 2017
Subjects: Solar and Stellar Astrophysics (astro-ph.SR); High Energy Astrophysical Phenomena (astro-ph.HE)
We present an optical and infrared (IR) study of IC 10 X-2, a high-mass X-ray binary in the galaxy IC 10. Previous optical and X-ray studies suggest X-2 is a Supergiant Fast X-ray Transient: a large-amplitude (factor of $\sim$ 100), short-duration (hours to weeks) X-ray outburst on 2010 May 21. We analyze R- and g-band light curves of X-2 from the intermediate Palomar Transient Factory taken between 2013 July 15 and 2017 Feb 14 show high-amplitude ($\gtrsim$ 1 mag), short-duration ($\lesssim8$ d) flares and dips ($\gtrsim$ 0.5 mag). Near-IR spectroscopy of X-2 from Palomar/TripleSpec show He I, Paschen-$\gamma$, and Paschen-$\beta$ emission lines with similar shapes and amplitudes as those of luminous blue variables (LBVs) and LBV candidates (LBVc). Mid-IR colors and magnitudes from Spitzer/IRAC photometry of X-2 resemble those of known LBV/LBVcs. We suggest that the stellar companion in X-2 is an LBV/LBVc and discuss possible origins of the optical flares. Dips in the optical light curve are indicative of eclipses from optically thick clumps formed in the winds of the stellar counterpart. Given the constraints on the flare duration ($0.02 - 0.8$ d) and the time between flares ($15.1\pm7.8$ d), we estimate the clump volume filling factor in the stellar winds, $f_V$, to be $0.01 < f_V < 0.71$, which overlaps with values measured from massive star winds. In X-2, we interpret the origin of the optical flares as the accretion of clumps formed in the winds of an LBV/LBVc onto the compact object.
[21]
Title: Properties of Two-Temperature Dissipative Accretion Flow Around Black Holes
Comments: 15 pages, 13 figures, accepted for publication in MNRAS
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
We study the properties of two-temperature accretion flow around a non-rotating black hole in presence of various dissipative processes where pseudo-Newtonian potential is adopted to mimic the effect of general relativity. The flow encounters energy loss by means of radiative processes acted on the electrons and at the same time, flow heats up as a consequence of viscous heating effective on ions. We assumed that the flow is exposed with the stochastic magnetic fields which leads to Synchrotron emission of electrons and these emissions are further strengthen by Compton scattering. We obtain the two-temperature global accretion solutions in terms of dissipation parameters, namely, viscosity ($\alpha$) and accretion rate (${\dot m}$), and find for the first time in the literature that such solutions may contain standing shock waves. Solutions of this kind are multi-transonic in nature as they simultaneously pass through both inner critical point ($x_{\rm in}$) and outer critical point ($x_{\rm out}$) before crossing the black hole horizon. We calculate the properties of shock induced global accretion solutions in terms of the flow parameters. We further show that two-temperature shocked accretion flow is not a discrete solution, instead such solution exists for wide range of flow parameters. We identify the effective domain of the parameter space for standing shock and observe that parameter space shrinks as the dissipation is increased. Since the post-shock region is hotter due to the effect of shock compression, it naturally emits hard X-rays and therefore, the two-temperature shocked accretion solution has the potential to explain the spectral properties of the black hole sources.
[22]
Title: Estimation of the gravitational wave polarizations from a non template search
Authors: I. Di Palma, M. Drago
Subjects: Instrumentation and Methods for Astrophysics (astro-ph.IM)
Gravitational wave astronomy is just beginning, after the recent success of the four direct detections of binary black hole (BBH) mergers, the first observation from a binary neutron star inspiral and with the expectation of many more events to come. Given the possibility to detect waves from not perfectly modeled astrophysical processes, it is fundamental to be ready to calculate the polarization waveforms in the case of searches using non-template algorithms. In such case, the waveform polarizations are the only quantities that contain direct information about the generating process. We present the performance of a new valuable tool to estimate the inverse solution of gravitational wave transient signals, starting from the analysis of the signal properties of a non-template algorithm that is open to a wider class of gravitational signals not covered by template algorithms. We highlight the contributions to the wave polarization associated with the detector response, the sky localization and the polarization angle of the source. In this paper we present the performances of such method and its implications by using two main classes of transient signals, resembling the limiting case for most simple and complicated morphologies. Performances are encouraging, for the tested waveforms: the correlation between the original and the reconstructed waveforms spans from better than 80% for simple morphologies to better than 50% for complicated ones. For a not-template search this results can be considered satisfactory to reconstruct the astrophysical progenitor.
[23]
Title: Data release of UV to submm broadband fluxes for simulated galaxies from the EAGLE project
Comments: 20 pages, 10 figures, accepted for publication in ApJS
Subjects: Astrophysics of Galaxies (astro-ph.GA)
We present dust-attenuated and dust emission fluxes for sufficiently resolved galaxies in the EAGLE suite of cosmological hydrodynamical simulations, calculated with the SKIRT radiative transfer code. The post-processing procedure includes specific components for star formation regions, stellar sources, and diffuse dust, and takes into account stochastic heating of dust grains to obtain realistic broad-band fluxes in the wavelength range from ultraviolet to sub-millimeter. The mock survey includes nearly half a million simulated galaxies with stellar masses above 10^8.5 solar masses across six EAGLE models. About two thirds of these galaxies, residing in 23 redshift bins up to z=6, have a sufficiently resolved metallic gas distribution to derive meaningful dust attenuation and emission, with the important caveat that the same dust properties were used at all redshifts. These newly released data complement the already publicly available information about the EAGLE galaxies, which includes intrinsic properties derived by aggregating the properties of the smoothed particles representing matter in the simulation. We further provide an open source framework of Python procedures for post-processing simulated galaxies with the radiative transfer code SKIRT. The framework allows any third party to calculate synthetic images, SEDs, and broadband fluxes for EAGLE galaxies, taking into account the effects of dust attenuation and emission.
[24]
Title: Nonradial and nonpolytropic astrophysical outflows. X. Relativistic MHD rotating spine jets in Kerr metric
Comments: Accepted for publication in Astronomy and Astrophysics
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
High resolution radio imaging of AGN have revealed that some sources present motion of superluminal knots and transverse stratification of their jet. Recent observational projects have provided new observational constraints on the central region of rotating black holes in AGN, suggesting that there is an inner- or spine-jet surrounded by a disk wind. This relativistic spine-jet is likely to be composed of electron - positron pairs extracting energy from the black hole. In this article we present an extension and generalization to relativistic jets in Kerr metric of the meridional self similar mechanism. We aim at modeling the inner spine-jet of AGN as the relativistic light outflow emerging from a spherical corona surrounding a Kerr black hole. The model is built by expanding the metric and the forces with colatitude to first order in the magnetic flux function. Conversely to previous models, effects of the light cylinder are not neglected. Solutions with high Lorentz factor are obtained and provide spine-jet models up to the polar axis. As in previous publications, we calculate the magnetic collimation efficiency parameter, which measures the variation of the available energy across the field lines. This collimation efficiency is an integral of the model, generalizing to Kerr metric the classical magnetic rotator efficiency criterion. We study the variation of the magnetic efficiency and acceleration with the spin of the black hole and show their high sensitivity to this integral. These new solutions model collimated or radial, relativistic or ultra-relativistic outflows. We discuss the relevance of our solutions to model the M87 spine-jet. We study the efficiency of the central black hole spin to collimate a spine-jet and show that the jet power is of the same order with that determined by numerical simulations.
[25]
Title: Joint Fit of Warm Absorbers in COS & HETG Spectra of NGC 3783
Journal-ref: Research in Astronomy and Astrophysics, Volume 17, Issue 9, article id. 095 (2017)
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
Warm Absorbers (WAs), as an important form of AGN outflows, show absorption in both UV and X-ray band. Using XSTAR generated photoionization models, for the first time we present a joint fit to the simultaneous observations of HST/COS and Chandra/HETG on NGC 3783. Totally five. As explain well all absorption features from the AGN outflows, which spread a wide range of ionization parameter log{\xi} from 0.6 to 3.8, column density logNH from 19.5 to 22.3 cm^{-2}, velocity v from 380 to 1060 km s^{-1}, and covering factors from 0.33 to 0.75, respectively. Not all the five WAs are consistent in pressure. Two of them are likely different parts of the same absorbing gas, and two of the other WAs may be smaller discrete clouds that are blown out from the inner region of the torus at different periods. The five WAs suggest a total mass outflowing rate within the range of 0.22-4.1 solar mass per year.
[26]
Title: Gamma-ray emission from the black hole's vicinity in AGN
Comments: Talk presented at the 7th Fermi Symposium, Garmisch-Partenkirchen, October 2017
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
Non-thermal magnetospheric processes in the vicinity of supermassive black holes have attracted particular attention in recent times. Gap-type particle acceleration accompanied by curvature and Inverse Compton radiation could in principle lead to variable gamma-ray emission that may be detectable with current instruments. We shortly comment on the occurrence of magnetospheric gaps and the realisation of different potentials. The detection of rapid variability becomes most instructive by imposing a constraint on possible gap sizes, thereby limiting extractable gap powers and allowing to assess the plausibility of a magnetospheric origin. The relevance of this is discussed for the radio galaxies Cen A, M87 and IC310. The detection of magnetospheric gamma-ray emission generally allows for a sensitive probe of the near-black-hole region and is thus of prime interest for advancing our understanding of the (astro)physics of extreme environments
[27]
Title: On fragmentation of turbulent self-gravitating discs in the long cooling time regime
Authors: Ken Rice (1), Sergei Nayakshin (2) ((1) University of Edinburgh, (2) University of Leicester)
Comments: 12 pages, 12 figures, accepted for publication in Monthly Notices of the Royal Astronomical Society
Subjects: Earth and Planetary Astrophysics (astro-ph.EP); Astrophysics of Galaxies (astro-ph.GA); Solar and Stellar Astrophysics (astro-ph.SR)
It has recently been suggested that in the presence of driven turbulence discs may be much less stable against gravitational collapse than their non turbulent analogs, due to stochastic density fluctuations in turbulent flows. This mode of fragmentation would be especially important for gas giant planet formation. Here we argue, however, that stochastic density fluctuations due to turbulence do not enhance gravitational instability and disc fragmentation in the long cooling time limit appropriate for planet forming discs. These fluctuations evolve adiabatically and dissipate away by decompression faster than they could collapse. We investigate these issues numerically in 2D via shearing box simulations with driven turbulence and also in 3D with a model of instantaneously applied turbulent velocity kicks. In the former setting turbulent driving leads to additional disc heating that tends to make discs more, rather than less, stable to gravitational instability. In the latter setting, the formation of high density regions due to convergent velocity kicks is found to be quickly followed by decompression, as expected. We therefore conclude that driven turbulence does not promote disc fragmentation in protoplanetary discs and instead tends to make the discs more stable. We also argue that sustaining supersonic turbulence is very difficult in discs that cool slowly.
[28]
Title: A changing wind collision
Authors: Yael Naze (1), Gloria Koenigsberger (2), Julian M. Pittard (3), Elliot Ross Parkin, Gregor Rauw (1), Michael F. Corcoran (4), D. John Hillier (5) ((1) ULg, (2) UNAM, (3) Univ. of Leeds, (4) GSFC, (5) PITT PACC)
Subjects: Solar and Stellar Astrophysics (astro-ph.SR); High Energy Astrophysical Phenomena (astro-ph.HE)
We report on the first detection of a global change in the X-ray emitting properties of a wind-wind collision, thanks to XMM-Newton observations of the massive SMC system HD5980. While its lightcurve had remained unchanged between 2000 and 2005, the X-ray flux has now increased by a factor of ~2.5, and slightly hardened. The new observations also extend the observational coverage over the entire orbit, pinpointing the lightcurve shape. It has not varied much despite the large overall brightening, and a tight correlation of fluxes with orbital separation is found, without any hysteresis effect. Moreover, the absence of eclipses and of absorption effects related to orientation suggests a large size for the X-ray emitting region. Simple analytical models of the wind-wind collision, considering the varying wind properties of the eruptive component in HD5980, are able to reproduce the recent hardening and the flux-separation relationship, at least qualitatively, but they predict a hardening at apastron and little change in mean flux, contrary to observations. The brightness change could then possibly be related to a recently theorized phenomenon linked to the varying strength of thin-shell instabilities in shocked wind regions.
[29]
Title: Interior Structures and Tidal Heating in the TRAPPIST-1 Planets
Comments: 34 pages, 3 tables, 4 figures. Accepted for publication in Astronomy & Astrophysics
Subjects: Earth and Planetary Astrophysics (astro-ph.EP)
With seven planets, the TRAPPIST-1 system has the largest number of exoplanets discovered in a single system so far. The system is of astrobiological interest, because three of its planets orbit in the habitable zone of the ultracool M dwarf. Assuming the planets are composed of non-compressible iron, rock, and H$_2$O, we determine possible interior structures for each planet. To determine how much tidal heat may be dissipated within each planet, we construct a tidal heat generation model using a single uniform viscosity and rigidity for each planet based on the planet's composition. With the exception of TRAPPIST-1c, all seven of the planets have densities low enough to indicate the presence of significant H$_2$O in some form. Planets b and c experience enough heating from planetary tides to maintain magma oceans in their rock mantles; planet c may have eruptions of silicate magma on its surface, which may be detectable with next-generation instrumentation. Tidal heat fluxes on planets d, e, and f are lower, but are still twenty times higher than Earth's mean heat flow. Planets d and e are the most likely to be habitable. Planet d avoids the runaway greenhouse state if its albedo is $\gtrsim$ 0.3. Determining the planet's masses within $\sim0.1$ to 0.5 Earth masses would confirm or rule out the presence of H$_2$O and/or iron in each planet, and permit detailed models of heat production and transport in each planet. Understanding the geodynamics of ice-rich planets f, g, and h requires more sophisticated modeling that can self-consistently balance heat production and transport in both rock and ice layers.
[30]
Title: The Pluto System After New Horizons
Journal-ref: Annual Reviews of Astronomy ans Astrophysics 2018
Subjects: Earth and Planetary Astrophysics (astro-ph.EP)
The discovery of Pluto in 1930 presaged the discoveries of both the Kuiper Belt and ice dwarf planets, which are the third class of planets in our solar system. From the 1970s to the 19990s numerous fascinating attributes of the Pluto system were discovered, including multiple surface volatile species, Pluto's large satellite Charon, and its atmosphere. These attributes, and the 1990s discovery of the Kuiper Belt and Pluto's cohort of small Kuiper Belt planets, motivated the exploration of Pluto. That mission, called New Horizons (NH), revolutionized knowledge of Pluto and its system of satellites in 2015. Beyond providing rich geological, compositional, and atmospheric data sets, New Horizons demonstrated that Pluto itself has been surprisingly geologically active throughout the past 4 billion years, and that the planet exhibits a surprisingly complex range of atmospheric phenomenology and geologic expression that rival Mars in their richness.
[31]
Title: Interstellar communication. V. Introduction to photon information efficiency (in bits per photon)
Authors: Michael Hippke
Comments: 3 pages, 1 figure. Useful introduction for the previous parts of this series: arXiv:1706.03795, arXiv:1706.05570, arXiv:1711.05761, arXiv:1711.07962
Subjects: Instrumentation and Methods for Astrophysics (astro-ph.IM); Quantum Physics (quant-ph)
How many bits of information can a single photon carry? Intuition says "one", but this is incorrect. With an alphabet based on the photon's time of arrival, energy, and polarization, several bits can be encoded. In this introduction to photon information efficiency, we explain how to calculate the maximum number of bits per photon depending on the number of encoding modes, noise, and losses.
[32]
Title: On the Optimal Choice of Nucleosynthetic Yields, IMF and Number of SN Ia for Chemical Evolution Modelling
Comments: 17 pages, 7 figures, submitted to ApJ
Subjects: Astrophysics of Galaxies (astro-ph.GA); Solar and Stellar Astrophysics (astro-ph.SR)
To fully harvest the rich library of stellar elemental abundance data, we require reliable models that facilitate our interpretation of them. Galactic chemical evolution (GCE) models are one example, and a key part is the selection of chemical yields from different nucleosynthetic enrichment channels, commonly asymptotic giant branch (AGB) stars, type Ia and core-collapse supernovae (SN Ia & CC-SN). We present a scoring system for yield tables based on their ability to reproduce proto-solar abundances within a simple parametrisation of the GCE modelling software Chempy. This marginalises over five galactic parameters, describing simple stellar populations (SSP) and interstellar medium (ISM) physics. Two statistical scoring methods, based on Bayesian evidence and leave-one-out cross-validation, are applied to four CC-SN tables; (a) for all mutually available elements and (b) for the 9 most abundant elements. We find that the yields of West & Heger (2017, in prep.) and Chieffi & Limongi (2004) (C04) are preferred for the two cases. For (b) the inferred best-fit parameters using C04 are $\alpha_\mathrm{IMF}=-2.45^{+0.15}_{-0.11}$ for the IMF high-mass slope and $\mathrm{N}_\mathrm{Ia}=1.29^{+0.45}_{-0.31}\times10^{-3}$ M$_\odot^{-1}$ for the SN Ia normalisation. These are broadly consistent across tested yield tables and elemental subsets, whilst not simply reproducing the priors. For (b) all yield tables consistently over- (under-)predict Si (Mg) which can be mitigated by lowering CC-SN explosion energies. Additionally, we show that Chempy can dramatically improve abundance predictions of hydrodynamical simulations by plugging tailored best-fit SSP parameters into a Milky Way analogue from Gutcke & Springel (2017). Our code, including a comprehensive tutorial, is freely available and can also provide SSP enrichment tables for any set of parameters and yield tables.
[33]
Title: The dynamics of the de Sitter resonance
Comments: Accepted for publication on Celestial Mechanics and Dynamical Astronomy
Subjects: Earth and Planetary Astrophysics (astro-ph.EP)
We study the dynamics of the de Sitter resonance, namely the stable equilibrium configuration of the first three Galilean satellites. We clarify the relation between this family of configurations and the more general Laplace resonant states. In order to describe the dynamics around the de Sitter stable equilibrium, a one-degree of freedom Hamiltonian normal form is constructed and exploited to identify initial conditions leading to the two families.
The normal form Hamiltonian is used to check the accuracy in the location of the equilibrium positions. Besides, it gives a measure of how sensitive it is with respect to the different perturbations acting on the system. By looking at the phase-plane of the normal form, we can identify a \sl Laplace-like \rm configuration, which highlights many substantial aspects of the observed one.
[34]
Title: On the frequency dependence of p-mode frequency shifts induced by magnetic activity in Kepler solar-like stars
Comments: Accepted for publication in A&A
Subjects: Solar and Stellar Astrophysics (astro-ph.SR)
The variations of the frequencies of the low-degree acoustic oscillations in the Sun induced by magnetic activity show a dependence with radial order. The frequency shifts are observed to increase towards higher-order modes to reach a maximum of about 0.8 muHz over the 11-yr solar cycle. A comparable frequency dependence is also measured in two other main-sequence solar-like stars, the F-star HD49933, and the young 1-Gyr-old solar analog KIC10644253, although with different amplitudes of the shifts of about 2 muHz and 0.5 muHz respectively. Our objective here is to extend this analysis to stars with different masses, metallicities, and evolutionary stages. From an initial set of 87 Kepler solar-like oscillating stars with already known individual p-mode frequencies, we identify five stars showing frequency shifts that can be considered reliable using selection criteria based on Monte Carlo simulations and on the photospheric magnetic activity proxy Sph. The frequency dependence of the frequency shifts of four of these stars could be measured for the l=0 and l=1 modes individually. Given the quality of the data, the results could indicate that a different physical source of perturbation than in the Sun is dominating in this sample of solar-like stars.
[35]
Title: Cosmology with the pairwise kinematic SZ effect: Calibration and validation using hydrodynamical simulations
Comments: 15 pages, 13 figures, 2 tables; submitted to MNRAS
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); Astrophysics of Galaxies (astro-ph.GA)
We study the potential of the kinematic SZ effect as a probe for cosmology, focusing on the pairwise method. The main challenge is disentangling the cosmologically interesting mean pairwise velocity from the cluster optical depth and the associated uncertainties on the baryonic physics in clusters. Furthermore, the pairwise kSZ signal might be affected by internal cluster motions or correlations between velocity and optical depth. We investigate these effects using the Magneticum cosmological hydrodynamical simulations, one of the largest simulations of this kind performed to date. We produce tSZ and kSZ maps with an area of $\simeq 1600~\mathrm{deg}^2$, and the corresponding cluster catalogues with $M_{500c} \gtrsim 3 \times 10^{13}~h^{-1}M_\odot$ and $z \lesssim 2$. From these data sets we calibrate a scaling relation between the average Compton-$y$ parameter and optical depth. We show that this relation can be used to recover an accurate estimate of the mean pairwise velocity from the kSZ effect, and that this effect can be used as an important probe of cosmology. We demonstrate that the residual systematic effects seen in our analysis are well below the remaining uncertainties on the sub-grid feedback models implemented in hydrodynamical simulations.
[36]
Title: Cosmic Neutrinos
Authors: Ofelia Pisanti
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Phenomenology (hep-ph)
Neutrinos are key astronomical messengers, because they are undeflected by magnetic field and unattenuated by electromagnetic interaction. After the first detection of extraterrestrial neutrinos in the TeV-PeV region by Neutrino Telescopes we are entering a new epoch where neutrino astronomy becomes possible. In this paper I briefly review the main issues concerning cosmological neutrinos and their experimental observation.
[37]
Title: Observational Constraints on Oscillating Dark-Energy Parametrizations
Comments: 22 pages, 5 Tables, 18 figures
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc)
We perform a detailed confrontation of various oscillating dark-energy parame-trizations with the latest sets of observational data. In particular, we use data from Joint Light Curve analysis (JLA) sample from Supernoave Type Ia, Baryon Acoustic Oscillations (BAO) distance measurements, Cosmic Microwave Background (CMB) observations, redshift space distortion, weak gravitational lensing, Hubble parameter measurements from cosmic chronometers, and the local Hubble constant value, and we impose constraints on four oscillating models. We find that all models are bent towards the phantom region, nevertheless in the three of them the quintessential regime is also allowed within 1$\sigma$ confidence-level. Furthermore, the deviations from $\Lambda$CDM cosmology are small, however for two of the models they could be visible at large scales, through the impact on the temperature anisotropy of the CMB spectra and on the matter power spectra.
[38]
Title: Spatial correlations of Si III detections in the local interstellar medium
Authors: M.K. Kuassivi
Comments: 3 pages, 2 figures, 1 table
Subjects: Astrophysics of Galaxies (astro-ph.GA)
Since the developpment of astronephography, we know that the Sun is about to exit the Local Interstellar Cloud (LIC). To date, because of its rare absorption signatures and the paucity of suitable neighbour targets, the LIC interface has proved to be elusive to extensive investigations. Comparing the spatial distribution of Si III detections found in the litterature along with 3 new sigtlines, I show that most detections seem to arise from a cone whose axis is parallel to the LIC heliocentric velocity vector. I interpret this result as an evidence that the heliosphere is actually interacting with the LIC frontier.
[39]
Title: Optimization of Photospheric Electric Field Estimates for Accurate Retrieval of Total Magnetic Energy Injection
Journal-ref: Solar Phys., 292:191, 2017
Subjects: Solar and Stellar Astrophysics (astro-ph.SR)
Estimates of the photospheric magnetic, electric and plasma velocity fields are essential for studying the dynamics of the solar atmosphere, for example through the derivative quantities of Poynting and relative helicity flux and by using of the fields to obtain the lower boundary condition for data-driven coronal simulations. In this paper we study the performance of a data processing and electric field inversion approach that requires only high-resolution and high-cadence line-of-sight or vector magnetograms -- which we obtain from Helioseismic and Magnetic Imager (HMI) onboard Solar Dynamics Observatory (SDO). The approach does not require any photospheric velocity estimates, and the lacking velocity information is compensated using ad hoc assumptions. We show that the free parameters of these assumptions can be optimized to reproduce the time evolution of the total magnetic energy injection through the photosphere in NOAA AR 11158, when compared to the recent estimates for this active region. However, we find that the relative magnetic helicity injection is reproduced poorly reaching at best a modest underestimation. We discuss also the effect of some of the data processing details on the results, including the masking of the noise-dominated pixels and the tracking method of the active region, both of which have not received much attention in the literature so far. In most cases the effect of these details is small, but when the optimization of the free parameters of the ad hoc assumptions is considered a consistent use of the noise mask is required. The results found in this paper imply that the data processing and electric field inversion approach that uses only the photospheric magnetic field information offers a flexible and straightforward way to obtain photospheric magnetic and electric field estimates suitable for practical applications such as coronal modeling studies.
[40]
Title: First detection of a virial shock with SZ data: implication for the mass accretion rate of Abell 2319
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
Shocks produced by the accretion of infalling gas in the outskirt of galaxy clusters are expected in the hierarchical structure formation scenario, as found in cosmological hydrodynamical simulations. Here, we report the detection of a shock front at a large radius in the pressure profile of the galaxy cluster A2319 at a significance of $8.6\sigma$, using Planck thermal Sunyaev-Zel'dovich data. The shock is located at $(2.93 \pm 0.05) \times R_{500}$ and is not dominated by any preferential radial direction. Using a parametric model of the pressure profile, we derive a lower limit on the Mach number of the infalling gas, $\mathcal{M} > 3.25$ at 95\% confidence level. These results are consistent with expectations derived from hydrodynamical simulations. Finally, we use the shock location to constrain the accretion rate of A2319 to $\dot{M} \simeq (1.4 \pm 0.4) \times 10^{14}$ M$_\odot$ Gyr$^{-1}$, for a total mass, $M_{200} \simeq 10^{15}$ M$_\odot$.
[41]
Title: The Abacus Cosmos: A Suite of Cosmological N-body Simulations
Comments: 11 pages, 5 figures, 3 tables. ApJS submitted
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
We present a public data release of halo catalogs from a suite of 125 cosmological $N$-body simulations from the Abacus project. The simulations span 40 $w$CDM cosmologies centered on the Planck 2015 cosmology at two mass resolutions, $4\times 10^{10}\;h^{-1}M_\odot$ and $1\times 10^{10}\;h^{-1}M_\odot$, in $1.1\;h^{-1}\mathrm{Gpc}$ and $720\;h^{-1}\mathrm{Mpc}$ boxes, respectively. The boxes are phase-matched to suppress sample variance and isolate cosmology dependence. Additional volume is available via 16 boxes of fixed cosmology and varied phase; a few boxes of single-parameter excursions from Planck 2015 are also provided. Catalogs spanning $z=1.5$ to $0.1$ are available for friends-of-friends and Rockstar halo finders and include particle subsamples. All data products are available at https://lgarrison.github.io/AbacusCosmos
[42]
Title: Sunspot number second differences as a precursor of the following 11-year sunspot cycle
Comments: 16 pages, 10 figures, published in ApJ
Journal-ref: ApJ. 850 (2017) 81
Subjects: Solar and Stellar Astrophysics (astro-ph.SR)
Forecasting the strength of the sunspot cycle is highly important for many space weather applications. Our previous studies have shown the importance of sunspot number variability in the declining phase of the current 11-year sunspot cycle to predict the strength of the next cycle when the minimum of the current cycle has been observed. In this study we continue this approach and show that we can remove the limitation of having to know the minimum epoch of the current cycle, and that we can already provide a forecast of the following cycle strength in the early stage of the declining phase of the current cycle. We introduce a method to reliably calculate sunspot number second differences (SNSD) in order to quantify the short-term variations of sunspot activity. We demonstrate a steady relationship between the SNSD dynamics in the early stage of the declining phase of a given cycle and the strength of the following sunspot cycle. This finding may bear physical implications on the underlying dynamo at work. From this relation, a relevant indicator is constructed that distinguishes whether the next cycle will be stronger or weaker compared to the current one. We demonstrate that within 24-31 months after reaching the maximum of the cycle, it can be decided with high probability (0.96) whether the next cycle will be weaker or stronger. We predict that sunspot cycle 25 will be weaker than the current cycle 24.
[43]
Title: Testing the Detection Significance on the Large Scale Structure by a JWST Deep Field Survey
Comments: 10 pages, 9 figures, submitted to ApJ
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
In preparation for deep extragalactic imaging with the James Webb Space Telescope, we explore the clustering of massive halos at $z=8$ and $10$ using a large N-body simulation. We find that halos with masses $10^9$ to $10^{11}$ $h^{-1}\;M_\odot$, which are those expected to host galaxies detectable with JWST, are highly clustered with bias factors ranging from 5 and 30 depending strongly on mass, as well as on redshift and scale. This results in correlation lengths of 5--10$h^{-1}\;{\rm Mpc}$, similar to that of today's galaxies. Our results are based on a simulation of 130 billion particles in a box of $250h^{-1}\;{\rm Mpc}$ size using our new high-accuracy ABACUS simulation code, the corrections to cosmological initial conditions of (Garrison et al. 2016, 2016MNRAS.461.4125G), and the Planck 2015 cosmology. We use variations between sub-volumes to estimate the detectability of the clustering. Because of the very strong inter-halo clustering, we find that surveys of order 25$h^{-1}\;{\rm Mpc}$ comoving transverse size may be able to detect the clustering of $z=8$--$10$ galaxies with only 500-1000 survey objects if the galaxies indeed occupy the most massive dark matter halos.
|
2017-12-18 14:21:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6234403252601624, "perplexity": 1960.239730905038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948617816.91/warc/CC-MAIN-20171218141805-20171218163805-00075.warc.gz"}
|
http://lists.gnu.org/archive/html/lilypond-user/2002-06/msg00136.html
|
lilypond-user
[Top][All Lists]
## Re: tuplets
From: Jule Slootbeek Subject: Re: tuplets Date: Tue, 11 Jun 2002 19:45:52 -0400 User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0.0) Gecko/20020529
Laura Conrad wrote:
>>>>>>"Jule" == Jule Slootbeek <address@hidden> writes:
>>>>>
>
> >>
> >> If you want three half notes, printed as half notes,
> >> why not use
> >>
> >> \times 2/3 {c2 c2 c2}
> >>
> >> /Mats
> >>
> >>
> >>
> >>
> Jule> Yeah I have that but it doesn't align the other ones right..
> Jule> i made a diagram but it didn't come out right
>
> Jule> o o
> Jule> o o o
>
> Jule> is what i get, while
>
> Jule> o o
> Jule> o o o
>
> Jule> is what i need
>
> But what you get looks right, because when you're playing them, the
> second one on top should come between the second and third one on the
> bottom, not aligned with the third one.
>
> To get what you want instead of what you say, you might have to play
> with spacing rests, like:
>
>
> soprano = \notes \times 2/3 {c''2 s2 c2}
>
>
yeah, but if i have a triplet of halfnotes, than those should take up the same
time
as 2 half notes...just like a triplet takes up 2 quarter notes..
So if i do
soprano = \notes c''2 c2
alto = \notes \times 2/3 { a'2 a'2 a'2 }
it should just come out with the soprano line playing 2 half notes in a bar,
and the
alto line playing 3 half notes in the bar, taking up exzctly the same amout
of
beats as the 2 half notes. It works fine with the s in between the two notes,
but if
i do \time 2/3 {} it puts a bracket ontop of the 2 half notes, and that makes it
seem to the player like it's supposed to be a triplet, while i had written it
as the
alto line to be a triplet and not the alto line, which should use normal
notation..
so is it a lilypond error?
Jule
--
Jule Slootbeek
http://blindtheory.cjb.net
reply via email to
|
2016-10-27 15:14:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.907106876373291, "perplexity": 12211.045164681631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721347.98/warc/CC-MAIN-20161020183841-00251-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://clay6.com/qa/46326/h-2s-a-toxic-gas-with-rotten-egg-like-smell-is-used-for-the-qualitative-ana
|
# $H_2S$, a toxic gas with rotten egg like smell, is used for the qualitative analysis. If the solubility of $H_2S$ in water at STP is 0.195 m, calculate Henry’s law constant.
282bar
Hence (A) is the correct answer.
|
2018-06-23 02:01:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.835835874080658, "perplexity": 1388.1851501058775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864919.43/warc/CC-MAIN-20180623015758-20180623035758-00337.warc.gz"}
|
http://math.stackexchange.com/questions/335602/infinite-derivative-of-ex
|
# infinite derivative of $e^x$
i have started thinking about one topic a few days ago and i am confused if i am wrong or what happens,generally we know that function $e^x$ is somehow 'magic',which means that derivative and integral of this function is again $e^x$(let reject constant term during the integral).but on the other hand we can say that (here let's use d as sign of derivative)
$$d(e^x)=d(e*e^{x-1})$$
which is equal to $d(e)*e^{x-1}+ e*d(e^{x-1})$
because of feature of derivative of two function(in this case our function $f(x)=e$ is constant),clearly first term is zero,so we have $e*d(e^{x-1})$,if we continue it to infinite time, we can see that in derivative sign power approaches $x$,or something like this
$$d(e^{x-1}),d(e^{x-2}),d(e^{x-3})$$ and at the same time power of constant $e$ is increasing corresponding,but my confusion is that does never power in
$$d(e^{x-c})$$ where $c$ is some constant is changed from -infinite to +infinity,but does it never make $e^{x-c}$ as a constant? or does never equal $e^{x-c}$ never equal to $1$? meaning that $x=c$? if ti makes constant ,then we know that derivative of constant is zero and whole multiplication becomes zero,which is contradiction what $$d(e^x)=e^x$$
sorry if my idea seems stupid,but i am curious in this topic and please help me to clarify everything
-
Instead of indenting, which can prevent latex from rendering, use double dollar signs to center a math formula on its own line :) – Jim Mar 20 '13 at 6:48
thanks $Jim$ for editing – dato datuashvili Mar 20 '13 at 6:48
When we write $f(x) = e^x$ the $x$ is a variable, it does not have a prescribed value. So $e^{x-c}$ is not a constant.
It is true that there is a particular value of $x$ for which $e^{x - c} = 1$ because if you pick, for example, $c = 3$ and look at the graph of $e^{x-3}$ it does intersect the horizontal line $y = 1$ (it happens when $x = 3$). But that's only one value of $x$. The graph of $e^{x-c}$ is not a horizontal line so the function $e^{x-c}$ is not constant.
The thing to remember here is that the derivative of a function does not depend on the value of that function at a single point. Instead it depends on how the function behaves around that point. So even though for every $x$ we could pick a $c$ such that $e^{x - c} = 1$, without changing that $c$ it still would not be the case that $e^{x-c}$ is constant around that particular $x$ so it's derivative will still not be $0$.
Also, remember you can't really say $c$ is infinity because infinity is not a real number that you can plug into an equation. You can take limits as values tend towards infinity but that's not the same thing as plugging it in.
-
ok thanks @Jim now it makes sense for me – dato datuashvili Mar 20 '13 at 7:04
Hmmmmmmmmmm well \begin{eqnarray*} d(e^x)&=&d(e\cdot e^{x-1})\\ &=&d(e)\cdot e^{x-1}+e\cdot d(e^{x-1})\\ &=&0\cdot e^{x-1}+e\cdot d(e^{x-1})\\ &=&e\cdot d(e^{x-1})\\ &=&e\cdot (e^{x-1})\\ &=&e^x. \end{eqnarray*}
-
we are not stoped @Fixed Point ,i meant when we continue derivation procedure – dato datuashvili Mar 20 '13 at 7:00
Yep repeat this as many times as you want. It'll work. – Fixed Point Mar 20 '13 at 7:01
|
2015-08-02 14:42:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8982142806053162, "perplexity": 329.6650113113839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989126.22/warc/CC-MAIN-20150728002309-00279-ip-10-236-191-2.ec2.internal.warc.gz"}
|
https://indico.in2p3.fr/event/4083/contributions/29230/
|
# Cosmic rays & their interstellar medium environment CRISM-2011
26 June 2011 to 1 July 2011
Montpellier, France
Europe/Paris timezone
## SNR interacting with molecular clouds as seen by HESS
27 Jun 2011, 16:45
20m
Main amphitheater (Polytech)
### Speaker
Mr Jérémie Méhault (LUPM Monptellier)
### Description
Supernova from massive stars are exploding in giant molecular cloud (MC). Thus it is possible to see supernova remnants (SNR) expending in dense material. The physical interaction between SNR and MC can produce OH maser (1720 MHz) emission tracing the shocked surrounding medium. High and very-high energy (HE and VHE) gamma rays have been detected in coincidence with OH maser, SNR and shocked MC. Neutral pions decay is the best model to explain the origin of the gamma-ray emission. I will present some joined results of HE and VHE gamma-ray experiments of known cases as IC443, W44, W28 and W51C as seen by H.E.S.S. and Fermi-LAT to study in an easier way the morphology and the spectra. The very good angular resolution of H.E.S.S. analysis reconstruction methods is usefull to model the morphology in the GeV range, and the HE spectra is helpful to constrain the gamma-ray origin.
Slides
|
2019-12-11 21:30:49
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8661035895347595, "perplexity": 6080.646179419966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540533401.22/warc/CC-MAIN-20191211212657-20191212000657-00457.warc.gz"}
|
http://tex.stackexchange.com/questions/35267/same-distance-between-chapter-head-and-text-toc
|
# Same distance between chapter head and text / toc
With the following example, I get a different vertical distance between "Contents" and "1 bar 5" and "foo" and "Lorem". How do I get the first entry of the TOC to the same height as "Lorem ipsum", that is the base line should be the red rule in the image below:
\documentclass{book}
\begin{document}
\tableofcontents
\chapter*{foo}
Lorem ipsum
\chapter{bar}
Lorem ipsum
\end{document}
-
## 1 Answer
The command \l@chapter in book.cls typesets the chapter entry in the table of contents. It adds 1.0 em vertically before the entry. So, to remove that space before the first entry, you could redefine that macro or add a negative vertical skip before you start the first chapter, such as
\addtocontents{toc}{\vskip -1.0em}
\tableofcontents
\chapter*{foo}
Actually, the macro contains also a glue of 1 pt. From book.cls:
\newcommand*\l@chapter[2]{%
\ifnum \c@tocdepth >\m@ne
\addpenalty{-\@highpenalty}%
\vskip 1.0em \@plus\p@
\setlength\@tempdima{1.5em}%
\begingroup
\parindent \z@ \rightskip \@pnumwidth
\parfillskip -\@pnumwidth
\leavevmode \bfseries
\advance\leftskip\@tempdima
\hskip -\leftskip
#1\nobreak\hfil \nobreak\hb@xt@\@pnumwidth{\hss #2}\par
\penalty\@highpenalty
\endgroup
\fi}
-
|
2016-05-25 14:56:42
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9235799908638, "perplexity": 5194.791875736019}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274994.48/warc/CC-MAIN-20160524002114-00233-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/1777228/bounding-the-principal-complex-logarithm
|
# Bounding the principal complex logarithm
Find an upper bound for
$$\left\vert{\rm Log}(z+ai)\right\vert$$
where $z\in\left\{Re^{i\theta}\,\colon 0\le\theta\le\pi, R>0 {\rm , R \,fixed}\right\}$, $a>0$ and ${\rm Log}(w) = \log{\vert w\vert} + i{\rm Arg}(w)$.
So I started by following this method:
\begin{align} \left\vert{\rm Log}(z+ai)\right\vert &= \left\vert\log{|z+ai|} + i{\rm Arg}(z+ai)\right\vert \\ &\le \left\vert\log{|z+ai|}\right\vert + \left\vert i{\rm Arg}(z+ai)\right\vert \\ &= \left\vert\log{|z+ai|}\right\vert + {\rm Arg}(z+ai) \\ &\le \left\vert\log{(|z|+a)}\right\vert + {\rm Arg}(z+ai) \\ &= \left\vert\log{(R+a)}\right\vert + {\rm Arg}(z+ai) \end{align}
But i'm not really sure how to bound the argument. Is this the best way of going about it or is there a nicer way to hand this?
• If you fix a branch for the logarithm, the argument is already bounded. – Starfall May 8 '16 at 20:32
• @Starfall What do you mean by fix a branch? – user2850514 May 8 '16 at 20:37
• The complex logarithm is a multivalued function, furthermore even if we restrict it to be single valued there is a discontinuity when we loop around the origin (the argument increases by $2\pi$ even if we get back to the same point.) To alleviate this, we introduce a branch cut, i.e we cut the complex plane by a ray through the origin. This amounts to restricting the argument of the complex logarithm to a certain interval. – Starfall May 8 '16 at 20:40
• For instance, define $\log(z) = \log|z| + i \arg(z)$ where $\arg(z)$ is taken to be in $(-\pi, \pi]$. Now, it is pretty clear that your argument is bounded. – Starfall May 8 '16 at 20:56
• You don't need to. The question asks you to find an upper bound, and you found an upper bound. – Starfall May 8 '16 at 21:02
|
2019-07-19 21:34:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998878240585327, "perplexity": 434.16183789566514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526359.16/warc/CC-MAIN-20190719202605-20190719224605-00362.warc.gz"}
|
http://spmphysics.onlinetuition.com.my/2013/07/electric-charge.html
|
# Electric Charge
### Electric Charge
1. There are two kind of electric charge, namely the positive charge and the negative charge.
2. Like charge repel each other.
3. Unlike charge attract each other.
4. A neutral body can be attracted by another body which has either positive or negative charge.
5. The SI unit of electric charge is Coulomb (C).
Example
Charge of 1 electron = -1.6 x 10-19C
Charge of 1 proton = +1.6 x 10-19C
#### Sum of Charge
Sum of charge
= number of charge particles × charge of 1 particle
$Q=ne$
Example:
Find the charge of 2.5 x 1019 electrons.
(Charge of 1 electron is -1.6 x 10-19C)
$Q=ne Q=(2.5× 10 19 )(−1.6× 10 −19 ) Q=−4C$
|
2019-05-26 15:16:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8041684031486511, "perplexity": 3722.4707923374463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259316.74/warc/CC-MAIN-20190526145334-20190526171334-00553.warc.gz"}
|
http://forcedtoadmin.blogspot.ru/2015/02/
|
## среда, 11 февраля 2015 г.
### Increase NUnit performance on AppVeyor
I think you're familiar with AppVeyor - continuous integration cloud system, which allows to make your builds and run unit-tests on windows-based VM. It's free for open-source projects and very useful when you need to be sure that your program will be compilable and runnable against Windows platform (for Linux there is Travis.CI or docker.io, for OS X you can use Travis.CI again).
When I converted MSTest to NUnit for Microsoft Bond project I have found that nunit tests run 4x-5x times slower than similar MS Tests on appveyor. That was strange because my measurements show that slowness was directly inside the tested methods, not in the nunit framework itself. I started to investigate the issue and found that there is NUnitLite nuget package which do the same things as nunit-console but much faster. What was bad that before using NUnitLite you have to create new console project and reference NUnitLite nuget package. That is not always possible if you don't want to add to your sources some unrelated to your project staff.
So at first I made an AppVeyor script which builds NUnitLite
install_script:
- nuget install NUnitLite -version 3.0.0-alpha-5 -pre
- mkdir nunit
- copy NUnitLite.3.0.0-alpha-5\lib\net45\nunitlite.dll nunit
- copy NUnit.3.0.0-alpha-5\lib\net45\nunit.framework.dll nunit
- csc /platform:anycpu32bitpreferred /out:nunit\nulite.exe /optimize+ NUnitLite.3.0.0-alpha-5\content\Program.cs /r:nunit\nunitlite.dll /r:nunit\nunit.framework.dll
To test with NUnitLite, you can use two ways.
test_script:
#use this way if you installed NUnitLite version greater than 3.0.0-alpha-5
- nunit\nulite.exe path\to\your\tests\TestAssembly.dll
#use this way if you installed NUnitLite version 3.0.0-alpha-5 or lower
- copy nunit\* path\to\your\tests
- path\to\your\tests\nulite.exe TestAssembly
Please note that in the second case we start nulite.exe in the folder where your tests are located and pass an assembly name (without extension) as nulite.exe argument
What was interested that without /platform:anycpu32bitpreferred argument nulite.exe works slow on AppVeyor. This argument says to execute generated exe file in x86 mode on the systems which support 32bit. I tried to run nunit-console in x86 mode and this increased speed a lot! To do it just specify --x86 argument for nunit-console version 3.0 or run nunit-console-x86 for nunit 2.6.3 (which is preinstalled by default on AppVeyor)
## вторник, 3 февраля 2015 г.
### CoreCLR is on github! How to build it on linux.
Today Microsoft announced that CoreCLR (cross-platform runtime for .NET Core) is on github. At first I tried to build it on my Ubuntu 14.04 box. There was some prerequisites issues which prevented build to be completed, but after some investigation I've found the working set of packages: This is the final script:
#Installing Prerequisites
sudo apt-get install git cmake clang-3.5 make llvm-3.5 gcc
#build.sh is working only on 64 bit Linux!
git clone https://github.com/dotnet/coreclr
cd coreclr
./build.sh
|
2017-05-25 03:00:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3239736258983612, "perplexity": 9667.266891336581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607963.70/warc/CC-MAIN-20170525025250-20170525045250-00548.warc.gz"}
|
http://boffosocko.com/tag/statistical-mechanics/
|
## Statistical Physics, Information Processing, and Biology Workshop at Santa Fe Institute
Information Processing and Biology by John Carlos Baez(Azimuth)
The Santa Fe Institute, in New Mexico, is a place for studying complex systems. I’ve never been there! Next week I’ll go there to give a colloquium on network theory, and also to participate in this workshop.
I just found out about this from John Carlos Baez and wish I could go! How have I not managed to have heard about it?
## Stastical Physics, Information Processing, and Biology
### Workshop
November 16, 2016 – November 18, 2016
9:00 AM
Noyce Conference Room
Abstract.
This workshop will address a fundamental question in theoretical biology: Does the relationship between statistical physics and the need of biological systems to process information underpin some of their deepest features? It recognizes that a core feature of biological systems is that they acquire, store and process information (i.e., perform computation). However to manipulate information in this way they require a steady flux of free energy from their environments. These two, inter-related attributes of biological systems are often taken for granted; they are not part of standard analyses of either the homeostasis or the evolution of biological systems. In this workshop we aim to fill in this major gap in our understanding of biological systems, by gaining deeper insight in the relation between the need for biological systems to process information and the free energy they need to pay for that processing.
The goal of this workshop is to address these issues by focusing on a set three specific question:
1. How has the fraction of free energy flux on earth that is used by biological computation changed with time?;
2. What is the free energy cost of biological computation / function?;
3. What is the free energy cost of the evolution of biological computation / function.
In all of these cases we are interested in the fundamental limits that the laws of physics impose on various aspects of living systems as expressed by these three questions.
Purpose: Research Collaboration
SFI Host: David Krakauer, Michael Lachmann, Manfred Laubichler, Peter Stadler, and David Wolpert
Syndicated copies to:
## Network Science by Albert-László Barabási
Network Science by Albert-László Barabási(Cambridge University Press)
I ran across a link to this textbook by way of a standing Google alert, and was excited to check it out. I was immediately disappointed to think that I would have to wait another month and change for the physical textbook to be released, but made my pre-order directly. Then with a bit of digging around, I realized that individual chapters are available immediately to quench my thirst until the physical text is printed next month.
The textbook is available for purchase in September 2016 from Cambridge University Press. Pre-order now on Amazon.com.
Syndicated copies to:
## What is Information? by Christoph Adami
What is Information? [1601.06176] by Christoph Adami(arxiv.org)
Information is a precise concept that can be defined mathematically, but its relationship to what we call "knowledge" is not always made clear. Furthermore, the concepts "entropy" and "information", while deeply related, are distinct and must be used with care, something that is not always achieved in the literature. In this elementary introduction, the concepts of entropy and information are laid out one by one, explained intuitively, but defined rigorously. I argue that a proper understanding of information in terms of prediction is key to a number of disciplines beyond engineering, such as physics and biology.
A proper understanding of information in terms of prediction is key to a number of disciplines beyond engineering, such as physics and biology.
Comments: 19 pages, 2 figures. To appear in Philosophical Transaction of the Royal Society A
Subjects: Adaptation and Self-Organizing Systems (nlin.AO); Information Theory (cs.IT); Biological Physics (physics.bio-ph); Quantitative Methods (q-bio.QM)
Cite as:arXiv:1601.06176 [nlin.AO] (or arXiv:1601.06176v1 [nlin.AO] for this version)
[v1] Fri, 22 Jan 2016 21:35:44 GMT (151kb,D) [.pdf]
Source: Christoph Adami [1601.06176] What is Information? on arXiv
Syndicated copies to:
## The Information Universe Conference
"The Information Universe" Conference in The Netherlands in October hits several of the sweet spots for areas involving information theory, physics, the origin of life, complexity, computer science, and microbiology.
Yesterday, via a notification from Lanyard, I came across a notice for the upcoming conference “The Information Universe” which hits several of the sweet spots for areas involving information theory, physics, the origin of life, complexity, computer science, and microbiology. It is scheduled to occur from October 7-9, 2015 at the Infoversum Theater in Groningen, The Netherlands.
I’ll let their site speak for itself below, but they already have an interesting line up of speakers including:
### Keynote speakers
• Erik Verlinde, Professor Theoretical Physics, University of Amsterdam, Netherlands
• Alex Szalay, Alumni Centennial Professor of Astronomy, The Johns Hopkins University, USA
• Gerard ‘t Hooft, Professor Theoretical Physics, University of Utrecht, Netherlands
• Gregory Chaitin, Professor Mathematics and Computer Science, Federal University of Rio de Janeiro, Brasil
• Charley Lineweaver, Professor Astronomy and Astrophysics, Australian National University, Australia
• Lude Franke, Professor System Genetics, University Medical Center Groningen, Netherlands
### Conference synopsis from their homepage:
Additional details about the conference including the participants, program, venue, and registration can also be found at their website.
Syndicated copies to:
## NIMBioS Workshop: Information Theory and Entropy in Biological Systems
Web resources for participants in the NIMBioS Worshop on Information Theory and Entropy in Biological Systems.
Over the next few days, I’ll be maintaining a Storify story covering information related to and coming out of the Information Theory and Entropy Workshop being sponsored by NIMBios at the Unviersity of Tennessee, Knoxville.
For those in attendance or participating by watching the live streaming video (or even watching the video after-the-fact), please feel free to use the official hashtag #entropyWS, and I’ll do my best to include your tweets, posts, and material into the story stream for future reference.
For journal articles and papers mentioned in/at the workshop, I encourage everyone to join the Mendeley.com group ITBio: Information Theory, Microbiology, Evolution, and Complexity and add them to the group’s list of papers. Think of it as a collaborative online journal club of sorts.
Those participating in the workshop are also encouraged to take a look at a growing collection of researchers and materials I maintain here. If you have materials or resources you’d like to contribute to the list, please send me an email or include them via the suggestions/submission form or include them in the comments section below.
## Resources for Information Theory and Biology
Syndicated copies to:
## BIRS Workshop on Biological and Bio-Inspired Information Theory | Storify Stream
Over the span of the coming week, I'll be updating (and archiving) the stream of information coming out of the BIRS Workshop on Biological and Bio-Inspired Information Theory.
Over the span of the coming week, I’ll be updating (and archiving) the stream of information coming out of the BIRS Workshop on Biological and Bio-Inspired Information Theory.
## Information Theory is the New Central Discipline
Information Theory is the new central discipline. by Nassim Nicholas Taleb(facebook.com)
I’m coming to this post a bit late as I’m playing a bit of catch up, but agree with it wholeheartedly.
In particular, applications to molecular biology and medicine are really beginning to come to a heavy boil in just the past five years. This particular year is the progenitor of what appears to be the biggest renaissance for the application of information theory to the area of biology since Hubert Yockey, Henry Quastler, and Robert L. Platzman’s “Symposium on Information Theory in Biology at Gatlinburg, Tennessee” in 1956.
Upcoming/recent conferences/workshops on information theory in biology include:
At the beginning of September, Christoph Adami posted an awesome and very sound paper on arXiv entitled “Information-theoretic considerations concerning the origin of life” which truly portends to turn the science of the origin of life on its head.
I’ll note in passing, for those interested, that Claude Shannon’s infamous master’s thesis at MIT (in which he applied Boolean Algebra to electric circuits allowing the digital revolution to occur) and his subsequent “The Theory of Mathematical Communication” were so revolutionary, nearly everyone forgets his MIT Ph.D. Thesis “An Algebra for Theoretical Genetics” which presaged the areas of cybernetics and the current applications of information theory to microbiology and are probably as seminal as Sir R.A Fisher’s applications of statistics to science in general and biology in particular.
For those commenting on the post who were interested in a layman’s introduction to information theory, I recommend John Robinson Pierce’s An Introduction to Information Theory: Symbols, Signals and Noise (Dover has a very inexpensive edition.) After this, one should take a look at Claude Shannon’s original paper. (The MIT Press printing includes some excellent overview by Warren Weaver along with the paper itself.) The mathematics in the paper really aren’t too technical, and most of it should be comprehensible by most advanced high school students.
For those that don’t understand the concept of entropy, I HIGHLY recommend Arieh Ben-Naim’s book Entropy Demystified The Second Law Reduced to Plain Common Sense with Seven Simulated Games. He really does tear the concept down into its most basic form in a way I haven’t seen others come remotely close to and which even my mother can comprehend (with no mathematics at all). (I recommend this presentation to even those with Ph.D.’s in physics because it is so truly fundamental.)
For the more advanced mathematicians, physicists, and engineers Arieh Ben-Naim does a truly spectacular job of extending ET Jaynes’ work on information theory and statistical mechanics and comes up with a more coherent mathematical theory to conjoin the entropy of physics/statistical mechanics with that of Shannon’s information theory in A Farewell to Entropy: Statistical Thermodynamics Based on Information.
For the advanced readers/researchers interested in more at the intersection of information theory and biology, I’ll also mention that I maintain a list of references, books, and journal articles in a Mendeley group entitled “ITBio: Information Theory, Microbiology, Evolution, and Complexity.”
Syndicated copies to:
## How to Sidestep Mathematical Equations in Popular Science Books
In the publishing industry there is a general rule-of-thumb that every mathematical equation included in a book will cut the audience of science books written for a popular audience in half – presumably in a geometric progression. This typically means that including even a handful of equations will give you an effective readership of zero – something no author and certainly no editor or publisher wants.
I suspect that there is a corollary to this that every picture included in the text will help to increase your readership, though possibly not by as proportionally a large amount.
In any case, while reading Melanie Mitchell’s text Complexity: A Guided Tour [Cambridge University Press, 2009] this weekend, I noticed that, in what appears to be a concerted effort to include an equation without technically writing it into the text and to simultaneously increase readership by including a picture, she cleverly used a picture of Boltzmann’s tombstone in Vienna! Most fans of thermodynamics will immediately recognize Boltzmann’s equation for entropy, $S = k log W$, which appears engraved on the tombstone over his bust.
I hope that future mathematicians, scientists, and engineers will keep this in mind and have their tombstones engraved with key formulae to assist future authors in doing the same – hopefully this will help to increase the amount of mathematics that is deemed “acceptable” by the general public.
## Book Review: John Avery’s “Information Theory and Evolution”
Information Theory and Evolution
John Avery
Non-fiction, Popular Science
World Scientific
January 1, 2003
paperback
217
This highly interdisciplinary book discusses the phenomenon of life, including its origin and evolution (and also human cultural evolution), against the background of thermodynamics, statistical mechanics, and information theory. Among the central themes is the seeming contradiction between the second law of thermodynamics and the high degree of order and complexity produced by living systems. This paradox has its resolution in the information content of the Gibbs free energy that enters the biosphere from outside sources, as the author shows. The role of information in human cultural evolution is another focus of the book. One of the final chapters discusses the merging of information technology and biotechnology into a new discipline — bio-information technology.
This is a fantastic book which, for the majority of people, I’d give a five star review. For my own purposes, however, I was expecting far more on the theoretical side of information theory and statistical mechanics as applied to microbiology that it didn’t live up to, so I’m giving it three stars from a purely personal perspective.
I do wish that someone had placed it in my hands and forced me to read it when I was a freshman in college entering the study of biomedical and electrical engineering. It is far more an impressive book at this level and for those in the general public who are interested in the general history of science and philosophy of the topics. The general reader may be somewhat scared by a small amount of mathematics in chapter 4, but there is really no loss of continuity by skimming through most of it. For those looking for a bit more rigor, Avery provides some additional details in appendix A, but for the specialist, the presentation is heavily lacking.
The book opens with a facile but acceptable overview of the history of the development for the theory of evolution whereas most other texts would simply begin with Darwin’s work and completely skip the important philosophical and scientific contributions of Aristotle, Averroes, Condorcet, Linnaeus, Erasmus Darwin, Lamarck, or the debates between Cuvier and St. Hilaire.
For me, the meat of the book was chapters 3-5 and appendix A which collectively covered molecular biology, evolution, statistical mechanics, and a bit of information theory, albeit from a very big picture point of view. Unfortunately the rigor of the presentation and the underlying mathematics were skimmed over all too quickly to accomplish what I had hoped to gain from the text. On the other hand, the individual sections of “suggestions for further reading” throughout the book seem well researched and offer an acceptable launching pad for delving into topics in places where they may be covered more thoroughly.
The final several chapters become a bit more of an overview of philosophy surrounding cultural evolution and information technology which are much better covered and discussed in James Gleick’s recent book The Information.
Overall, Avery has a well laid out outline of the broad array of subjects and covers it all fairly well in an easy to read and engaging style.
|
2017-03-27 08:38:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29865574836730957, "perplexity": 1549.4662359039075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189466.30/warc/CC-MAIN-20170322212949-00491-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://www.miniphysics.com/uy1-electric-potential-of-an-infinite-line-charge.html
|
# UY1: Electric Potential Of An Infinite Line Charge
Find the potential at a distance r from a very long line of charge with linear charge density $\lambda$.
We have derived the potential for a line of charge of length 2a in Electric Potential Of A Line Of Charge.
$$V = \frac{\lambda}{4 \pi \epsilon_{0}} \text{ln} \left( \frac{\sqrt{a^{2}+ r^{2}}+a}{\sqrt{a^{2} + r^{2}} – a} \right)$$
We shall use the expression above and observe what happens as a goes to infinity. But first, we have to rearrange the equation.
$$V = \frac{\lambda}{4 \pi \epsilon_{0}} \text{ln} \left( \frac{\sqrt{1 + \left( \frac{r}{a} \right)^{2}}+1}{\sqrt{1 + \left( \frac{r}{a} \right)^{2}} – 1} \right)$$
We note that $\sqrt{1 \pm x} \approx 1 \pm \frac{1}{2} x$ (binomial expansion). Hence,
\begin{aligned} V &\approx \frac{\lambda}{4 \pi \epsilon_{0}} \, \text{ln} \left( \frac{1 + \frac{r^{2}}{2a^{2}}+1}{1 + \frac{r^{2}}{2a^{2}} – 1} \right) \\ &= \frac{\lambda}{4 \pi \epsilon_{0}} \, \text{ln} \left( \frac{2 + \frac{r^{2}}{2a^{2}}}{\frac{r^{2}}{2a^{2}}} \right) \\ &= \frac{\lambda}{4 \pi \epsilon_{0}} \, \text{ln} \left( \frac{1 + \frac{r^{2}}{4a^{2}}}{\frac{r^{2}}{4a^{2}}} \right) \\ &= \frac{\lambda}{4 \pi \epsilon_{0}} \left[ \text{ln} \left( 1 + \frac{r^{2}}{4 a^{2}} \right) \, – \, \text{ln} \left( \frac{r^{2}}{4 a^{2}} \right) \right] \end{aligned}
Note that $\text{ln} (1 + x) \approx x$. Hence,
$$V \approx \frac{\lambda}{4 \pi \epsilon_{0}} \left[ \frac{r^{2}}{4 a^{2}} + 2 \, \text{ln} \left( \frac{2a}{r} \right) \right]$$
As $a \rightarrow \infty$, $\frac{r^{2}}{4 a^{2}} \rightarrow 0$. Hence,
$$V \approx \frac{\lambda}{2 \pi \epsilon_{0}} \text{ln} \left( \frac{2a}{r} \right)$$
And.. We’re done.
We can find the electric field of an infinite line charge as well:
Potential of any point a with respect to any other point b,
\begin{aligned} V_{a} – V_{b} &\approx \frac{\lambda}{2 \pi \epsilon_{0}} \left[ \text{ln} \left( \frac{2a}{r_{a}} \right) – \text{ln} \left( \frac{2a}{r_{b}} \right) \right] \\ &= \frac{\lambda}{2 \pi \epsilon_{0}} \text{ln} \frac{r_{b}}{r_{a}} \end{aligned}
Suppose $V_{b} = 0$ at $r_{b} = r_{0}$ and $V_{a} = V$ at $r_{a} = r$, then:
$$V = \frac{\lambda}{2 \pi \epsilon_{0}} \text{ln} \frac{r_{0}}{r}$$
Note that $\frac{d}{dr} ( \text{ln} \, r_{0} – \text{ln} \, r) = 0-\frac{1}{r}$. Hence,
$$E = \, – \frac{\partial V}{\partial r} = \frac{\lambda}{2 \pi \epsilon_{0} r}$$
You can find the electric field using Gauss’s Law as well, as shown here.
Next: Equipotential Surfaces
Previous: Electric Potential Of A Line Of Charge
Back To Electromagnetism (UY1)
### 9 thoughts on “UY1: Electric Potential Of An Infinite Line Charge”
1. Plz tell me ,that ,is the charge bringing from infinity to such a point on that wire ?? Whose displacement is “dx” ??
2. I don’t understand why ln(1 + x) ≈ x. Is it another case of binomial expansion?
thank you.
3. how did an infinite line have a finite length 2a????
4. hi , you calculated the electric potential due to ” an infinite line charge ” but yet your answer is a function of ” a ” which i don’t understand what does it mean?! in fact i think it’s reasonable for ” a ” to vanish.
• Hello. $2a$ is the length of the very long line of charge. Have a look at the final equation for the electric potential of the line of charge.
It makes sense that the electric potential will increase if $2a$ increases. If $a$ vanishes, this means that the electric potential will remain the same if the length increases! (which does not make any sense)
• But if a approaches infinity, won’t your final equation for potential also become infinity? lim as a approaches infinity of ln(2a/r) -> ln(infinity) -> infinity.
However, I do understand your dilemma (infinity always screws with minds). You can think of $a$ going to a sufficiently large number, such that $\frac{r^{2}}{4 a^{2}} \rightarrow 0$ goes to zero. Does this help?
|
2020-06-05 03:57:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999921321868896, "perplexity": 1151.843082867604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348492427.71/warc/CC-MAIN-20200605014501-20200605044501-00127.warc.gz"}
|
https://cs3110.github.io/textbook/chapters/hop/higher_order.html
|
4.1. Higher-Order Functions¶
Consider these functions double and square on integers:
let double x = 2 * x
let square x = x * x
val double : int -> int = <fun>
val square : int -> int = <fun>
Let’s use these functions to write other functions that quadruple and raise a number to the fourth power:
let quad x = double (double x)
let fourth x = square (square x)
val quad : int -> int = <fun>
val fourth : int -> int = <fun>
There is an obvious similarity between these two functions: what they do is apply a given function twice to a value. By passing in the function to another function twice as an argument, we can abstract this functionality:
let twice f x = f (f x)
val twice : ('a -> 'a) -> 'a -> 'a = <fun>
The function twice is higher-order: its input f is a function. And—recalling that all OCaml functions really take only a single argument—its output is technically fun x -> f (f x), so twice returns a function hence is also higher-order in that way.
Using twice, we can implement quad and fourth in a uniform way:
let quad x = twice double x
let fourth x = twice square x
val quad : int -> int = <fun>
val fourth : int -> int = <fun>
4.1.1. The Abstraction Principle¶
Above, we have exploited the structural similarity between quad and fourth to save work. Admittedly, in this toy example it might not seem like much work. But imagine that twice were actually some much more complicated function. Then if someone comes up with a more efficient version of it, every function written in terms of it (like quad and fourth) could benefit from that improvement in efficiency, without needing to be recoded.
Part of being an excellent programmer is recognizing such similarities and abstracting them by creating functions (or other units of code) that implement them. Bruce MacLennan names this the Abstraction Principle in his textbook Functional Programming: Theory and Practice (1990). The Abstraction Principle says to avoid requiring something to be stated more than once; instead, factor out the recurring pattern. Higher-order functions enable such refactoring, because they allow us to factor out functions and parameterize functions on other functions.
Besides twice, here are some more relatively simple examples, indebted also to MacLennan:
Apply. We can write a function that applies its first input to its second input:
let apply f x = f x
val apply : ('a -> 'b) -> 'a -> 'b = <fun>
Of course, writing apply f is a lot more work than just writing f.
Pipeline. The pipeline operator, which we’ve previously seen, is a higher-order function:
let pipeline x f = f x
let (|>) = pipeline
let x = 5 |> double
val pipeline : 'a -> ('a -> 'b) -> 'b = <fun>
val ( |> ) : 'a -> ('a -> 'b) -> 'b = <fun>
val x : int = 10
Compose. We can write a function that composes two other functions:
let compose f g x = f (g x)
val compose : ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b = <fun>
This function would let us create a new function that can be applied many times, such as the following:
let square_then_double = compose double square
let x = square_then_double 1
let y = square_then_double 2
val square_then_double : int -> int = <fun>
val x : int = 2
val y : int = 8
Both. We can write a function that applies two functions to the same argument and returns a pair of the result:
let both f g x = (f x, g x)
let ds = both double square
let p = ds 3
val both : ('a -> 'b) -> ('a -> 'c) -> 'a -> 'b * 'c = <fun>
val ds : int -> int * int = <fun>
val p : int * int = (6, 9)
Cond. We can write a function that conditionally chooses which of two functions to apply based on a predicate:
let cond p f g x =
if p x then f x else g x
val cond : ('a -> bool) -> ('a -> 'b) -> ('a -> 'b) -> 'a -> 'b = <fun>
4.1.2. The Meaning of “Higher Order”¶
The phrase “higher order” is used throughout logic and computer science, though not necessarily with a precise or consistent meaning in all cases.
In logic, first-order quantification refers primarily to the universal and existential ($$\forall$$ and $$\exists$$) quantifiers. These let you quantify over some domain of interest, such as the natural numbers. But for any given quantification, say $$\forall x$$, the variable being quantified represents an individual element of that domain, say the natural number 42.
Second-order quantification lets you do something strictly more powerful, which is to quantify over properties of the domain. Properties are assertions about individual elements, for example, that a natural number is even, or that it is prime. In some logics we can equate properties with sets of individual, for example the set of all even naturals. So second-order quantification is often thought of as quantification over sets. You can also think of properties as being functions that take in an element and return a Boolean indicating whether the element satisfies the property; this is called the characteristic function of the property.
Third-order logic would allow quantification over properties of properties, and fourth-order over properties of properties of properties, and so forth. Higher-order logic refers to all these logics that are more powerful than first-order logic; though one interesting result in this area is that all higher-order logics can be expressed in second-order logic.
In programming languages, first-order functions similarly refer to functions that operate on individual data elements (e.g., strings, ints, records, variants, etc.). Whereas higher-order function can operate on functions, much like higher-order logics can quantify over over properties (which are like functions).
4.1.3. Famous Higher-order Functions¶
In the next few sections we’ll dive into three of the most famous higher-order functions: map, filter, and fold. These are functions that can be defined for many data structures, including lists and trees. The basic idea of each is that:
• map transforms elements,
• filter eliminates elements, and
• fold combines elements.
|
2022-05-23 10:42:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4111782908439636, "perplexity": 821.5033266108636}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558015.52/warc/CC-MAIN-20220523101705-20220523131705-00118.warc.gz"}
|
https://www.physicsforums.com/threads/conservation-of-angular-momentum.175635/
|
# Conservation of angular momentum
1. ### pardesi
339
well i saw a proof and therea re hundreds of them where conservation of angular momentum is used to solve the problems.let me state an example and then my question.
a ball with some initial velocity is put on a rough plane find the speed of the c.m when it stops pure rolling.well of course on solution is via the torque equation.but other solution is by conserving the angular momentum about the point of contact since there no net torque acts about that point.
my question is this point itself is moving how can we conserve momentum about two diff. points(since the point initially with which the floor was in contact with is in general not same as the point it contacts after the ball starts pure rolling)
2. ### WMGoBuffs
33
I don't think I understand your question: the speed of the center of mass when the ball stops pure rolling is 0.
3. ### pardesi
339
well what i meant was consider this ismple problem u hav aball rolling with speed $$v_{0}$$ initially and u put it ona rough surface and conserve momentuma about the point of contact(???) say that no torque acts about thsi point.thenn finally when it starts pure rolling we put
$$mv_{0}r=I\frac{v}{r}+mvr$$
but my question is the point about which we are conserving omentum is itself moving and finally when the sphere starts pure rolling then the point may not be the same as the one with which it was put in contact withthe ground
hope i was clear
4. ### siddharth
1,197
Yes, but the frame you're looking at is not "attached" to a particular physical part of the sphere which is rotating. The frame is located at the point of contact at all times. Now, this frame isn't inertial and is accelerating. However, if you calculate the "apparent torque" in this system about the point of contact at all times, you'll find it's zero. So, you can apply the conservation of momentum.
5. ### pardesi
339
yes that' true i can always conserve angular momentum about the point of contact but this point is itself alwys changing.here the point about which i conserve is itself changing with time so how do i conserve angular momentum about two apparently different points
6. ### lugita15
No you can't. In this case, there will be a torque due to fictitious forces which arise since you're in an accelerating frame. Therefore, conservation of angular momentum is invalid in this case.
7. ### jee.anupam
2
@pardesi
Hey buddy, you are not conserving angular momentum about the moving point, you are conserving angular momentum about any stationary point on the ground, which is in line with the frictional force (even the initial point will do), so that the net torque of friction about the stationary point is zero. You cannot conserve angular momentum about a moving point.
8. ### rcgldr
7,451
Assuming no losses, then angular moementum isn't conserved, but total energy is. So the initial condition could be 0 angular rotation, and some velocity as the ball starts off initially sliding on the surface. The initial total engery is all linear kinetic energy. The final state will be the sum of kinetic energy due to rotation and linear movement, and the rate of rotation will be proportional to the linear movement.
9. ### siddharth
1,197
Yeah, that's right. Nice catch.
10. ### pardesi
339
well that won't do because there will be torque due to normal force exerted by the ground. in fact no other point except the nstantaneous point of contact would supposedly do
Last edited: Jul 6, 2007
11. ### siddharth
1,197
This will cancel with the torque due to the force of gravity about the stationary point on the ground. wouldn't it?
Last edited: Jul 6, 2007
12. ### pardesi
339
yes i think this is settled now but not so convincingly as i had thought seems too conditional thanks everyone
Last edited: Jul 6, 2007
13. ### rcgldr
7,451
Shouldn't that be when the ball starts pure rolling? The initial condition is a sliding ball that isn't rolling and friction is converting some the linear kinetic energy into rotational energy and some into heat (losses). This thread shows the math:
My previous most mentioned "assume no losses", but this could only happen in a case where no sliding occurred. The friction surface would have to immediate grip the sphere, and flex horizontally with a spring like reaction applied to the bottom surface of the sphere to convert the linear energy into a combination of linear and angular energy with no losses.
Last edited: Jul 7, 2007
### Staff: Mentor
As jee.anupam explained, angular momentum is conserved about any point along the line of action of the friction force. That fact, plus the no-slipping condition ($v = \omega r$), allows you to compute the speed at which rolling without slipping takes place.
(Of course, angular momentum about the cm of the ball is not conserved, otherwise the ball will never start spinning.)
You can also solve this using Newton's 2nd law, which takes longer but is instructive.
Energy is not conserved: Until rolling without slipping is attained, the ball scrapes along the floor, dissipating mechanical energy. Assuming energy conservation gives you the wrong answer.
15. ### jee.anupam
2
The torque due to the normal force will be canceled by the torque due to gravity. Please consider all the forces acting on the rolling body before jumping to any conclusion.Thank you.
16. ### rcgldr
7,451
True, I was referring to an idealized case. I saw in an older thread where you and another posted what the losses would be for this situation. In this thread the losses are taken into account.
Last edited: Jul 7, 2007
### Staff: Mentor
You must have kinetic friction in order to end up with rolling without slipping; thus you must have energy loss. You can't assume energy conservation as an ideal case for this problem: you end up creating an essentially different problem.
18. ### rcgldr
7,451
I corrected my previous post. It was too late to edit my first post. Regarding the essentially different problem:
The idealized case would be one where the friction surface flexes elastically and without any slippage. A close approximation of this would be a marble sliding at a reasonably slow speed from a very slick surface onto a surface composed of table tennis rubber which is very sticky and can flex significantly along the surface with a very high energy retention factor (it would spring back, temporarily increasing angular velocity, but then settle), assume this is done in a vacuum chamber so that aerodynamic drag isn't a factor.
Another idealized case would be a rolling gear sliding onto a geared plane. The result would be similar as obove, spring like flexing of the surfaces (gears), and no sliding. The more generalized case as I pointed out would be a very high coefficient of friction combined with a horizontally flexible surface.
In real life, dynamic friction varies with speed (coefficient of friction is reduced as speed differences in the surfaces increase). I'm not sure what affect this would have on the results. The transition from dynamic to static friction isn't instantaneous, but assuming the flex in the friction surface can be ignored, this shouldn't matter.
A more resonable example of a lossless case involving linear and angular energy would be to have a sphere resting on a sticky plane and then accelerate the sticky plane horizontally with a constant rate of acceleration. As velocity of the plane increases, what is the ratio of the velocity of the sphere versus the velocity of the plane the sphere is rolling on?
Last edited: Jul 8, 2007
19. ### rcgldr
7,451
Corrections and clarifications to above:
Constant acceleration of horizontal plane:
What is the ratio of the acceleration of a non-sliding sphere versus the acceleration of the plane the sphere is rolling on?
What is the ratio of the acceleration of frictionless block versus the acceleration of the plane the block is sliding on?
Constant force:
On an inclined plane what is the ratio of acceleration of a sphere rolling on the plane versus the acceleration of a frictionless block sliding on the plane?
|
2015-04-18 23:43:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.832736611366272, "perplexity": 687.8841880384466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246636255.43/warc/CC-MAIN-20150417045716-00286-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://www.gamedev.net/forums/topic/618019-check-if-enemy-is-in-range-to-fire/
|
check if enemy is in range to fire.
This topic is 2842 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
Recommended Posts
Hello
I'm working on a TD game and I'm trying to write a code to check if the enemy enters the tower range to fire.
The problem is that the tower is shooting at the enemy even though the enemy is out of range (the red circle is the range)
here is a video that shows my problem.
[media]
[/media]
here what I have so far
Bullet::Bullet() { Radius = 100.0f; } void Bullet::CheckIfInRange() { DistanceX = tower.GetXPosition() - Enemy2.EnemyXPosition; DistanceY = tower.GetYPosition() - Enemy2.EnemyYPosition; if((DistanceX<= Radius && DistanceY<=Radius) || (DistanceX>= Radius && DistanceY>=Radius)) return true; else return false; }
Share on other sites
Why do you think this is the correct way to check if something is in range? Try looking at your if statement and interpret what it actually is that you're checking for.
A hint for the correct way to approach this problem: check if the squared distance to the enemy is smaller or equal to the squared bullet radius.
Share on other sites
Why do you think this is the correct way to check if something is in range? Try looking at your if statement and interpret what it actually is that you're checking for.
A hint for the correct way to approach this problem: check if the squared distance to the enemy is smaller or equal to the squared bullet radius.
thanks i solved the problem. here is the code.
if(Enemy.GetPosition().x + Radius> tower.GetPosition().x && Enemy.GetPosition().x < tower.GetPosition().x + Radius && Enemy.GetPosition().y + Radius> tower.GetPosition().y && Enemy.GetPosition().y < tower.GetPosition().y + Radius) //shoot
if there is a better way to do this please do let me know.
thank you again.
Share on other sites
That is not what you're looking for. You want the tower to shoot when the enemy is in range of the tower right? To calculate the distance between two positions you'll have to use Pythagora's theorem. To keep things simple, to calculate the distance from point A to point B you get this:
[source]D = A - B;
distanceSquared = D.x*D.x + D.y*D.y;
distance = sqrt(distanceSquared);[/source]
Now if you want to check if something is in range, you don't need to calculate the actual distance, you only need to compare the squared distances. In your case you only have to check whether distanceSquared <= radius*radius.
If any of that is unclear, let me know and I'll try to explain in more detail.
Share on other sites
That is not what you're looking for. You want the tower to shoot when the enemy is in range of the tower right? To calculate the distance between two positions you'll have to use Pythagora's theorem. To keep things simple, to calculate the distance from point A to point B you get this:
[source]D = A - B;
distanceSquared = D.x*D.x + D.y*D.y;
distance = sqrt(distanceSquared);[/source]
Now if you want to check if something is in range, you don't need to calculate the actual distance, you only need to compare the squared distances. In your case you only have to check whether distanceSquared <= radius*radius.
If any of that is unclear, let me know and I'll try to explain in more detail.
thanks this also work, even better.
thank you again.
• Game Developer Survey
We are looking for qualified game developers to participate in a 10-minute online survey. Qualified participants will be offered a \$15 incentive for your time and insights. Click here to start!
• 16
• 11
• 23
• 42
• 75
|
2019-10-16 20:12:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33195874094963074, "perplexity": 657.7172500820033}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669546.24/warc/CC-MAIN-20191016190431-20191016213931-00179.warc.gz"}
|
http://34.212.143.74/apps/s201913/tc2006/programming_problem_set_3/
|
# Problem Set #3
### Objectives
During this activity, students should be able to:
• Write higher-order functions using the Clojure programming language.
This activity helps the student develop the following skills, values and attitudes: ability to analyze and synthesize, capacity for identifying and solving problems, and efficient use of computer systems.
## Activity Description
Individually or in pairs, solve the following set of programming exercises using Clojure. Place all your functions and unit tests in a file called problemset3.clj.
### Note:
The tests in problems 3, 5, and 6 require comparing floating point numbers. In order to avoid rounding problems, we need to define the following function called aprox=:
(require '[clojure.math.numeric-tower :refer [abs]])
(defn aprox=
"Checks if x is approximately equal to y. Returns true
if |x - y| < epsilon, or false otherwise."
[epsilon x y]
(< (abs (- x y)) epsilon))
1. The function there-exists-one takes two arguments: a one argument predicate function pred and a list lst. Returns true if there is exactly one element in lst that satisfies pred, otherwise returns false.
Unit tests:
(deftest test-there-exists-one
(is (not (there-exists-one pos?
())))
(is (there-exists-one pos?
'(-1 -10 4 -5 -2 -1)))
(is (there-exists-one neg?
'(-1)))
(is (not (there-exists-one symbol?
'(4 8 15 16 23 42))))
(is (there-exists-one symbol?
'(4 8 15 sixteen 23 42))))
2. The function my-drop-while takes two arguments: a function f and a list lst. It returns a list of items from lst dropping the initial items that evaluate to true when passed to f. Once a false value is encountered, the rest of the list is returned. Function f should accept one argument. Do not use the predefined drop-while function.
Unit tests:
(deftest test-my-drop-while
(is (= () (my-drop-while neg? ())))
(is (= '(0 1 2 3 4)
(my-drop-while
neg?
'(-10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4))))
(is (= '(2 three 4 five)
(my-drop-while
symbol?
'(zero one 2 three 4 five))))
(is (= '(0 one 2 three 4 five)
(my-drop-while
symbol?
'(0 one 2 three 4 five)))))
3. The bisection method is a root-finding algorithm which works by repeatedly dividing an interval in half and then selecting the subinterval in which the root exists.
Suppose we want to solve the equation $$f(x) = 0$$. Given two points $$a$$ and $$b$$ such that $$f(a)$$ and $$f(b)$$ have opposite signs, $$f$$ must have at least one root in the interval $$[a, b]$$ as long as $$f$$ is continuous on this interval. The bisection method divides the interval in two by computing $$c = \frac{a+b}{2}$$. There are now two possibilities: either $$f(a)$$ and $$f(c)$$ have opposite signs, or $$f(c)$$ and $$f(b)$$ have opposite signs. The bisection algorithm is then applied to the sub-interval where the sign change occurs.
Write the function bisection, that takes a, b, and f as arguments. It finds the corresponding root using the bisection method. The algorithm must stop when a value of $$c$$ is found such that: $$\left | f(c) \right | < 1.0\times 10^{-15}$$.
Unit tests:
(deftest test-bisection
(is (aprox= 0.0001
3.0
(bisection 1 4 (fn [x] (* (- x 3) (+ x 4))))))
(is (aprox= 0.0001
-4.0
(bisection -5 0 (fn [x] (* (- x 3) (+ x 4))))))
(is (aprox= 0.0001
Math/PI
(bisection 1 4 (fn [x] (Math/sin x)))))
(is (aprox= 0.0001
(* 2 Math/PI)
(bisection 5 10 (fn [x] (Math/sin x)))))
(is (aprox= 0.0001
1.618033988749895
(bisection 1 2 (fn [x] (- (* x x) x 1)))))
(is (aprox= 0.0001
-0.6180339887498948
(bisection -10 1 (fn [x] (- (* x x) x 1))))))
4. The function linear-search takes three arguments: a vector vct, a data value x, and an equality function eq-fun. It sequentially searches for x in vct using eq-fun to compare x with the elements contained in vct. The eq-fun should accept two arguments, $$a$$ and $$b$$, and return true if $$a$$ is equal to $$b$$, or false otherwise.
The linear-search function returns the index where the first occurrence of x is found in vct (the first element of the vector is at index 0), or nil if not found.
Unit tests:
(deftest test-linear-search
(is (nil? (linear-search [] 5 =)))
(is (= 0 (linear-search [5] 5 =)))
(is (= 4 (linear-search
[48 77 30 31 5 20 91 92
69 97 28 32 17 18 96]
5
=)))
(is (= 3 (linear-search
["red" "blue" "green" "black" "white"]
"black"
identical?)))
(is (nil? (linear-search
[48 77 30 31 5 20 91 92
69 97 28 32 17 18 96]
96.0
=)))
(is (= 14 (linear-search
[48 77 30 31 5 20 91 92
69 97 28 32 17 18 96]
96.0
==)))
(is (= 8 (linear-search
[48 77 30 31 5 20 91 92
69 97 28 32 17 18 96]
70
#(<= (abs (- %1 %2)) 1)))))
5. The derivative of a function $$f(x)$$ with respect to variable $$x$$ is defined as:
$$f'(x) \equiv \lim_{h\rightarrow 0}\frac{f(x+h)-f(x)}{h}$$
Where $$f$$ must be a continuous function. Write the function deriv that takes f and h as its arguments, and returns a new function that takes x as argument, and which represents the derivative of $$f$$ given a certain value for $$h$$.
The unit tests verify the following derivatives:
\begin{align*} f(x) &= x^3 \\ f'(x) &= 3x^2 \\ f''(x) &= 6x \\ f'''(x) &= 6 \\ f'(5) &= 75 \\ f''(5) &= 30 \\ f'''(5) &= 6 \end{align*}
Unit tests:
(defn f [x] (* x x x))
(def df (deriv f 0.001))
(def ddf (deriv df 0.001))
(def dddf (deriv ddf 0.001))
(deftest test-deriv
(is (aprox= 0.05 75 (df 5)))
(is (aprox= 0.05 30 (ddf 5)))
(is (aprox= 0.05 6 (dddf 5))))
6. Newton’s method is another root-finding algorithm that is used to find successively better approximations. It can be summarized as follows:
$$x_n = \begin{cases} 0 & \text{ if } n=0 \\ x_{n-1}-\frac{f(x_{n-1})}{f'(x_{n-1})} & \text{ if } n > 0 \end{cases}$$
A few things worth noting:
• $$f$$ must be a differentiable real-valued function.
• Larger values of $$n$$ produce better approximations.
• $$x_0$$ is the initial guess, which is recommended to be a value that is close to the solution. This allows getting sooner a better approximation. Yet, for simplicity, we always assume here that $$x_0 = 0$$.
Write the function newton that takes f and n as its arguments, and returns the corresponding value of $$x_n$$. Use the deriv function from the previous problem to compute $$f'$$, with $$h = 0.0001$$.
Unit tests:
(deftest test-newton
(is (aprox= 0.00001
10.0
(newton (fn [x] (- x 10))
1)))
(is (aprox= 0.00001
-0.5
(newton (fn [x] (+ (* 4 x) 2))
1)))
(is (aprox= 0.00001
-1.0
(newton (fn [x] (+ (* x x x) 1))
50)))
(is (aprox= 0.00001
-1.02987
(newton (fn [x] (+ (Math/cos x)
(* 0.5 x)))
5))))
7. Simpson’s rule is a method for numeric integration:
$$\int_{a}^{b}f=\frac{h}{3}(y_0 + 4y_1 + 2y_2 + 4y_3 + 2y_4 + \cdots + 2y_{n-2} + 4y_{n-1} + y_n)$$
Where $$n$$ is an even positive integer (if you increment the value of $$n$$ you get a better approximation), and $$h$$ and $$y_k$$ are defined as follows:
$$h = \frac{b - a}{n}$$ $$y_k = f(a + kh)$$
Write the function integral, that takes as arguments a, b, n, and f. It returns the value of the integral, using Simpson’s rule. The unit tests verify the following single and double integrals (with n = 10):
$$\int_{0}^{1} x^3\textit{dx} = \frac{1}{4}$$ $$\int_{1}^{2} \int_{3}^{4} xy \cdot \textit{dx} \cdot \textit{dy} = \frac{21}{4}$$
Unit tests:
(deftest test-integral
(is (= 1/4 (integral 0 1 10 (fn [x] (* x x x)))))
(is (= 21/4
(integral 1 2 10
(fn [x]
(integral 3 4 10
(fn [y]
(* x y))))))))
8. The function binary-search takes three arguments: a vector vct sorted in ascending order and with no repeated elements, a data value x, and a less than function lt-fun. It implements the binary search algorithm, searching for x in vct using the lt-fun to compare x with the elements contained in vct. The lt-fun should accept two arguments, $$a$$ and $$b$$, and return true if $$a$$ is less than $$b$$, or false otherwise.
The binary-search function returns the index where x is found in vct (the first element of the vector is at index 0), or nil if not found.
Binary search consists in searching a sorted vector by repeatedly dividing the search interval in half. You begin with an interval covering the whole vector. If the value being searched is less than the item in the middle of the interval, narrow the interval to the lower half. Otherwise narrow it to the upper half. Repeatedly check until the value is found or the interval is empty.
Unit tests:
(def small-list [4 8 15 16 23 42])
(def big-list [0 2 5 10 11 13 16 20 24 26
29 30 31 32 34 37 40 43 44
46 50 53 58 59 62 63 66 67
70 72 77 79 80 83 85 86 94
95 96 99])
(def animals ["dog" "dragon" "horse" "monkey" "ox"
"pig" "rabbit" "rat" "rooster" "sheep"
"snake" "tiger"])
(defn str<
"Returns true if a is less than b, otherwise
returns false. Designed to work with strings."
[a b]
(< (compare a b) 0))
(deftest test-binary-search
(is (nil? (binary-search [] 5 <)))
(is (= 3 (binary-search small-list 16 <)))
(is (= 0 (binary-search small-list 4 <)))
(is (= 5 (binary-search small-list 42 <)))
(is (nil? (binary-search small-list 7 <)))
(is (nil? (binary-search small-list 2 <)))
(is (nil? (binary-search small-list 99 <)))
(is (= 17 (binary-search big-list 43 <)))
(is (= 0 (binary-search big-list 0 <)))
(is (= 39 (binary-search big-list 99 <)))
(is (nil? (binary-search big-list 12 <)))
(is (nil? (binary-search big-list -1 <)))
(is (nil? (binary-search big-list 100 <)))
(is (= 5 (binary-search animals "pig" str<)))
(is (= 0 (binary-search animals "dog" str<)))
(is (= 11 (binary-search animals "tiger" str<)))
(is (nil? (binary-search animals "elephant" str<)))
(is (nil? (binary-search animals "alligator" str<)))
(is (nil? (binary-search animals "unicorn" str<))))
## Deliverables
The program source file must include at the top the authors’ personal information (name and student id) within comments. For example:
;----------------------------------------------------------
; Problem Set #3
; Date: October 04, 2019.
; Authors:
; A01166611 Pepper Pots
; A01160611 Anthony Stark
;----------------------------------------------------------
Also, each function should include a documentation string (docstring) with a brief description of its behavior. For example:
(defn max2
"Returns the largest of the two numbers x and y."
[x y]
(if (> x y) x y))
To deliver the problemset3.clj file, please provide the following information:
|
2019-12-09 03:26:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9033190011978149, "perplexity": 3161.2864585509637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540517156.63/warc/CC-MAIN-20191209013904-20191209041904-00231.warc.gz"}
|
https://math.stackexchange.com/questions/363091/limits-of-tetrations-of-infinite-height
|
# Limits of tetrations of infinite height
We know that tetrations of infinite height converge for $x$ such that $e^{-e} \le x \le e^{1/e}$. Which real numbers are limits of some tetration of infinite height? what is the complete set of such limits? thanks.
|
2019-05-25 08:01:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.993985116481781, "perplexity": 845.564450625603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257920.67/warc/CC-MAIN-20190525064654-20190525090654-00299.warc.gz"}
|
https://emacs.stackexchange.com/questions/50553/ess-knitr-workflow-with-natbib?answertab=votes
|
# ESS knitr workflow with natbib
This solution explains ESS knitr workflow and it works as explained. However, it does not explain how to knit an Rnw file containing natbib references. Similar question have been asked here and the answer is to related to ess-swv library which is obselete.
How can I knit with ESS an Rnw file containing a natbib reference like in the example bellow. The exporter I am using in ESS is pdflatex and the weaver is knitr.
\documentclass{article}
\usepackage{natbib}
\begin{document}
<<setup, include=FALSE, cache=FALSE>>=
library(knitr)
@
<<normal-sample, echo=FALSE>>=
x <- rnorm(100)
plot(x)
@
The moon in June is like a big balloon \citep{smith2012}.
\bibliographystyle{apalike}
\bibliography{/path/to/my/references/refs}
\end{document}
|
2019-09-21 13:14:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.528519332408905, "perplexity": 8163.907004021142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574501.78/warc/CC-MAIN-20190921125334-20190921151334-00439.warc.gz"}
|
http://codeforces.com/blog/entry/79152
|
### Peace_789's blog
By Peace_789, history, 8 months ago,
Hello guys , I am having trouble solving this problem
You can read problem statement here also.
Statement
So , any hint to solve this problem will be grateful.
• » » » 8 months ago, # ^ | 0 Initially, I tried the cycle approach which let to TLE for me. But if you notice it clearly its plain binary Lifting. Precompute jumps for each node. and then in query node walk or jump in binary value of k. You will get the answer. Spoilerint walk(int i,int k,vector& dp) { for (int d = 0; d <= D;++d) { if(k &(1<> i >> k; cout << walk(i, k, dp) << '\n'; }
|
2021-03-03 21:48:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5594041347503662, "perplexity": 3561.251078607198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367790.67/warc/CC-MAIN-20210303200206-20210303230206-00024.warc.gz"}
|
https://math.stackexchange.com/questions/3240671/decay-of-fourier-transform-of-a-schwartz-function
|
Decay of Fourier Transform of a Schwartz Function
Suppose we have a function $$f(x)\in \mathcal{S}(\mathbb R)$$; that is, it is a function in Schwartz space. Further, suppose we know that $$|f(x)|\leq Ce^{-|x|}.$$If it is helpful, we can actually replace the exponent on $$|x|$$ by any $$1 (in other words, it doesn't seem to be "too far" from a Gaussian). With this information, is there anything we can say about the decay of the Fourier transform of $$f(x)$$ beyond the fact that it is in the Schwartz class? In particular, does it necessarily decay like $$f(x)$$, or could it decay much slower, say like $$\exp(-(1+x^2)^c)$$ for some arbitrarily small $$c>0$$?
I've tried looking online and haven't found much. What I have found is:
• The Fourier transform is a linear isomorphism of the Schwartz space; in particular, we know that the Fourier transform is also in the Schwartz space
• The Gaussian, $$g(x)=e^{-x^2}$$, is essentially a fixed point of this isomorphism (we introduce some constants, but the decay of the function and the decay of the transform is identical - since I'm only worried about the decay, I'm using the term "fixed point" a bit loosely).
Some more information that might be helpful, though I couldn't find any way to use it specifically:
• $$f(x)$$ is essentially the characteristic function of a given random variable, which means that the Fourier transform is the corresponding density function. Specifically, this means the Fourier transform takes a maximum value at $$0$$ (which is equal to $$1$$) and decreases to $$0$$ as $$|x|\to\infty$$.
Even without anything specific, references would be appreciated. I've tried looking in Folland's book as well as Stein/Shakarchi's books, but these have not offered any insight for this problem.
• Came up with an actual counterexample... – David C. Ullrich Jun 1 at 17:12
• To rephrase David : if $f$ is complex analytic on $\Im(z) \in (-c,c)$ and Schwartz on horitonal lines then $\mathcal{F}[f(x)](\omega)$ is Schwartz and $O(e^{-(c-\epsilon)|\omega|})$. The analycity condition on $f$ says nothing of its decay – reuns Jun 1 at 17:49
• @reuns Indeed. I don't know exactly who first stated that version, possibly Bochner - the result with $L^2$ in place of $\mathcal S$ is Bochner. (Just doing the special case seemed simpler than stating a precisely correct general version and verifying the hypotheses...) – David C. Ullrich Jun 2 at 1:04
1 Answer
That condition says nothing about the decay of $$\hat f$$. As a general rule conditions on the decay of $$f$$ give smoothness for $$\hat f$$ (here for example it follows that $$\hat f$$ is holomorphic in a horizontal strip).
Edit: Previous version had a gap. On reflection I realized that an example filling the gap would also be a counterexample to the question, so that answer actually had very little content.
Actual counterexample: Let $$(z+2i)^{1/4}$$ denote the principal branch of the fourth root, so in particular if $$y\ne-2$$ is fixed then $$(x+iy+2i)^{1/4}\sim x^{1/4}\quad(x\to+\infty).$$Note that $$\Re(-x+iy+2i)^{1/4}\sim \frac1{\sqrt 2}|x|^{1/4}\quad(x\to+\infty).$$Let $$F(z)=e^{-(z+2i)^{1/4}}$$and define $$F_y(x)=F(x+iy).$$Then $$F_y(x)=O(|x|^{-N}),$$uniformly for $$|y|\le1$$; so a little bit of complex analysis shows that $$F_0\in\mathcal S.$$
So there exists $$f\in\mathcal S$$ with $$F_0=\hat f$$, and we're done if we show that $$f(t)=O(e^{-|t|})$$. But, assuming $$2\pi=1$$, Cauchy's Theorem shows that $$f(t)=\int F_0(x)e^{ixt}\,dx =\int F_1(t)e^{i(x+i)t}\,dx=e^{-t}\int F_1(x)e^{ixt}\,dx,$$hence $$f(t)=O(e^{-t})$$. Similarly $$f(t)=O(e^t)$$.
• You've written $F_y(x)=O(|x|^{-N})$, but isn't more true? Specifically that $F_y(x)=O\big(\exp(-|x|^{1/4})\big)$? Which causes this to fail to be a counterexample? – CuriousStudent1234 Jun 3 at 17:29
• Yes, more is true (although I think there may be a constant missing in your "more"). So what? Nothing causes it to fail to be a counterexample - it is a counterexample. That's why I called it a counterexample. – David C. Ullrich Jun 3 at 17:39
• Yes, there would be a constant, but the exponent on $|x|$ cannot be made arbitrarily small so the function still appears to have exponential decay (slower exponential decay, granted, but it still decays exponentially). Just to be clear you're saying you have a function that decays like $e^{-|t|}$ whose Fourier transform decays like $\exp(-|x|^{1/4})$, correct? Not that it decays at a rate like $\exp(-|x|^{1/\log\log(10+x^2)})$. – CuriousStudent1234 Jun 3 at 17:48
• "Which causes this to fail to be a counterexample?" I misread that a minute ago. The $f$ such that $\hat f=F_0$ is a counterexample to the statement that $f=O(e^{-|t|})$ implies $\hat f(x)=O(e^{-|x|})$. You must be missing a minus sign or something; $e^{=|t|^{1/4}}$ is not $O(e^{-|t|})$. – David C. Ullrich Jun 3 at 17:51
• We never defined "exponential decay". I wouldn't say $e^{-|t|^{1.4}}$ has exponential decay. If you feel it does, what about $e^{-\ln|t|}$? – David C. Ullrich Jun 3 at 17:54
|
2019-11-17 17:47:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 34, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987390637397766, "perplexity": 4003.4073543127397}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669225.56/warc/CC-MAIN-20191117165616-20191117193616-00427.warc.gz"}
|
https://electronics.stackexchange.com/questions/209618/voltage-regulator-current-regulation
|
# Voltage regulator Current regulation
I want to understand the concept of current regulation in the circuit, In my research I came across the following link Boosting Regulator Current for IC 78xx by MJ2955
where in the circuit schematic in link above, the author claims he has obtained 12V,5A as output with 20V input voltage.
I tried to simulate the same circuit but with input of 12V, 0.1A and 7805 regulator as shown in this schematic:
I see that when input current is 0.1A the output is 94mA approx. which agrees with ohms law V = I*R = 94.086mA * 53.33 ohm = 5.01V
But when I have my input current as 0.06A the output voltage is 3V (why not 5V? )and current is 57mA, as shown in this schematic
This also agrees with ohms law V = I*R = 57.267 mA*53.33 ohm = 3.05V
The results are same as the circuit without the PNP transistor. schematic:
So what is the impact of the PNP transistor in the circuit? How can I boost the current in my circuit? If I want to see 5W output across my load(53.33ohm in the schematic) what are the changes I should make in the circuit?
Note: I assume capacitors are only for smoothing the input and output and does not have much effect in simulation (correct me if wrong). I tried with 470uF in all three cases the results were same. (sorry was bit lazy to copy new screenshots and I didnt have enough reputation to embed schematics in my question)
• please copy links and paste to view schematics, sorry did not have enough reputation to embed links – sristisravan Jan 5 '16 at 14:57
• The PNP allows the circuit to handle more current. It can't pull more current out of its supply than the supply is willing to provide. – brhans Jan 5 '16 at 15:13
With any linear voltage regulator (current boosted or not current boosted) you are going to get this: -
Output current = Input current - a few mA to power the regulator
You are never (ever) going to see an output current greater than the input current. If you want that then you must consider a buck voltage regulator.
So what is the impact of the PNP transistor in the circuit? How can I boost the current in my circuit? If I want to see 5W output across my load(53.33ohm in the schematic) what are the changes I should make in the circuit?
If you want 53.33 ohms to dissipate 5 watts then the voltage needed is governed by the formula: -
Power = $V^2\div R$ or
Voltage = $\sqrt{P\cdot R}$ = 16.329 volts
I tried to simulate the same circuit but with input of 12V, 0.1A and 7805 regulator
The 7805 produces 5V on its output and not 16.329 volts.
• I have my input current as 0.06A the output voltage is 3V (why not 5V? ) – sristisravan Jan 5 '16 at 15:10
• You can't cheat ohm's law (that easily). V=IR, so 0.06 x 53.33 = 3.2V. Now take into account that a little of that 0.06A is lost to the 7805 and doesn't flow through the load and it all works out... – brhans Jan 5 '16 at 15:12
• So what does the article eleccircuit.com/… mean boosting current for 78XX? – sristisravan Jan 5 '16 at 15:14
• A 78xx can only output a maximum of about 1A but if you need a regulated voltage, 3 amp supply you need to add components to it (or around it) so that the 3A is possible. – Andy aka Jan 5 '16 at 15:16
• @SristiSravan, the usual way to supply a 7805 is with a fixed voltage supply. You are supplying it with a fixed current supply. The 12 V supply in your model has no effect because it's in series with the current supply. Your model is trying to predict the results of a gross abuse of the 7805. – The Photon Jan 5 '16 at 16:27
|
2020-03-30 08:20:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5195927619934082, "perplexity": 1542.0835570712402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496669.0/warc/CC-MAIN-20200330054217-20200330084217-00188.warc.gz"}
|
https://www.nag.com/numeric/nl/nagdoc_26.1/nagdoc_fl26.1/html/f01/f01ctf.html
|
# NAG Library Routine Document
## 1Purpose
f01ctf adds two real matrices, each one optionally transposed and multiplied by a scalar.
## 2Specification
Fortran Interface
Subroutine f01ctf ( m, n, a, lda, beta, b, ldb, c, ldc,
Integer, Intent (In) :: m, n, lda, ldb, ldc Integer, Intent (Inout) :: ifail Real (Kind=nag_wp), Intent (In) :: alpha, a(lda,*), beta, b(ldb,*) Real (Kind=nag_wp), Intent (Inout) :: c(ldc,*) Character (1), Intent (In) :: transa, transb
#include nagmk26.h
void f01ctf_ (const char *transa, const char *transb, const Integer *m, const Integer *n, const double *alpha, const double a[], const Integer *lda, const double *beta, const double b[], const Integer *ldb, double c[], const Integer *ldc, Integer *ifail, const Charlen length_transa, const Charlen length_transb)
## 3Description
f01ctf performs one of the operations
• $C≔\alpha A+\beta B$,
• $C≔\alpha {A}^{\mathrm{T}}+\beta B$,
• $C≔\alpha A+\beta {B}^{\mathrm{T}}$ or
• $C≔\alpha {A}^{\mathrm{T}}+\beta {B}^{\mathrm{T}}$,
where $A$, $B$ and $C$ are matrices, and $\alpha$ and $\beta$ are scalars. For efficiency, the routine contains special code for the cases when one or both of $\alpha$, $\beta$ is equal to zero, unity or minus unity. The matrices, or their transposes, must be compatible for addition. $A$ and $B$ are either $m$ by $n$ or $n$ by $m$ matrices, depending on whether they are to be transposed before addition. $C$ is an $m$ by $n$ matrix.
None.
## 5Arguments
1: $\mathbf{transa}$ – Character(1)Input
2: $\mathbf{transb}$ – Character(1)Input
On entry: transa and transb must specify whether or not the matrix $A$ and the matrix $B$, respectively, are to be transposed before addition.
transa or ${\mathbf{transb}}=\text{'N'}$
The matrix will not be transposed.
transa or ${\mathbf{transb}}=\text{'T'}$ or $\text{'C'}$
The matrix will be transposed.
Constraint: ${\mathbf{transa}}\text{ or }{\mathbf{transb}}=\text{'N'}$, $\text{'T'}$ or $\text{'C'}$.
3: $\mathbf{m}$ – IntegerInput
On entry: $m$, the number of rows of the matrices $A$ and $B$ or their transposes. Also the number of rows of the matrix $C$.
Constraint: ${\mathbf{m}}\ge 0$.
4: $\mathbf{n}$ – IntegerInput
On entry: $n$, the number of columns of the matrices $A$ and $B$ or their transposes. Also the number of columns of the matrix $C$.
Constraint: ${\mathbf{n}}\ge 0$.
5: $\mathbf{alpha}$ – Real (Kind=nag_wp)Input
On entry: the scalar $\alpha$, by which matrix $A$ is multiplied before addition.
6: $\mathbf{a}\left({\mathbf{lda}},*\right)$ – Real (Kind=nag_wp) arrayInput
Note: the second dimension of the array a must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$ if ${\mathbf{transa}}=\text{'N'}$, and at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$ otherwise.
On entry: if $\alpha =0.0$, the elements of array a need not be assigned. If $\alpha \ne 0.0$, then if ${\mathbf{transa}}=\text{'N'}$, the leading $m$ by $n$ part of a must contain the matrix $A$, otherwise the leading $n$ by $m$ part of a must contain the matrix $A$.
7: $\mathbf{lda}$ – IntegerInput
On entry: the first dimension of the array a as declared in the (sub)program from which f01ctf is called.
Constraints:
• if ${\mathbf{transa}}=\text{'N'}$, ${\mathbf{lda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$;
• otherwise ${\mathbf{lda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
8: $\mathbf{beta}$ – Real (Kind=nag_wp)Input
On entry: the scalar $\beta$, by which matrix $B$ is multiplied before addition.
9: $\mathbf{b}\left({\mathbf{ldb}},*\right)$ – Real (Kind=nag_wp) arrayInput
Note: the second dimension of the array b must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$ if ${\mathbf{transb}}=\text{'N'}$, and at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$ otherwise.
On entry: if $\beta =0.0$, the elements of array b need not be assigned. If $\beta \ne 0.0$, then if ${\mathbf{transa}}=\text{'N'}$, the leading $m$ by $n$ part of b must contain the matrix $B$, otherwise the leading $n$ by $m$ part of b must contain the matrix $B$.
10: $\mathbf{ldb}$ – IntegerInput
On entry: the first dimension of the array b as declared in the (sub)program from which f01ctf is called.
Constraints:
• if ${\mathbf{transb}}=\text{'N'}$, ${\mathbf{ldb}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$;
• otherwise ${\mathbf{ldb}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
11: $\mathbf{c}\left({\mathbf{ldc}},*\right)$ – Real (Kind=nag_wp) arrayOutput
Note: the second dimension of the array c must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
On exit: the elements of the $m$ by $n$ matrix $C$.
12: $\mathbf{ldc}$ – IntegerInput
On entry: the first dimension of the array c as declared in the (sub)program from which f01ctf is called.
Constraint: ${\mathbf{ldc}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$.
13: $\mathbf{ifail}$ – IntegerInput/Output
On entry: ifail must be set to $0$, $-1\text{ or }1$. If you are unfamiliar with this argument you should refer to Section 3.4 in How to Use the NAG Library and its Documentation for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{ or }1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this argument, the recommended value is $0$. When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
## 6Error Indicators and Warnings
If on entry ${\mathbf{ifail}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
${\mathbf{ifail}}=1$
On entry, one or both of transa or transb is not equal to 'N', 'T' or 'C'.
${\mathbf{ifail}}=2$
On entry, one or both of m or n is less than $0$.
${\mathbf{ifail}}=3$
On entry, ${\mathbf{lda}}<\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,P\right)$, where $\mathrm{P}={\mathbf{m}}$ if ${\mathbf{transa}}=\text{'N'}$, and $\mathrm{P}={\mathbf{n}}$ otherwise.
${\mathbf{ifail}}=4$
On entry, ${\mathbf{ldb}}<\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,P\right)$, where $\mathrm{P}={\mathbf{m}}$ if ${\mathbf{transb}}=\text{'N'}$, and $\mathrm{P}={\mathbf{n}}$ otherwise.
${\mathbf{ifail}}=5$
On entry, ${\mathbf{ldc}}<\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$.
${\mathbf{ifail}}=-99$
See Section 3.9 in How to Use the NAG Library and its Documentation for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 3.8 in How to Use the NAG Library and its Documentation for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 3.7 in How to Use the NAG Library and its Documentation for further information.
## 7Accuracy
The results returned by f01ctf are accurate to machine precision.
## 8Parallelism and Performance
f01ctf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
The time taken for a call of f01ctf varies with m, n and the values of $\alpha$ and $\beta$. The routine is quickest if either or both of $\alpha$ and $\beta$ are equal to zero, or plus or minus unity.
## 10Example
The following program reads in a pair of matrices $A$ and $B$, along with values for transa, transb, alpha and beta, and adds them together, printing the result matrix $C$. The process is continued until the end of the input stream is reached.
### 10.1Program Text
Program Text (f01ctfe.f90)
### 10.2Program Data
Program Data (f01ctfe.d)
### 10.3Program Results
Program Results (f01ctfe.r)
© The Numerical Algorithms Group Ltd, Oxford, UK. 2017
|
2021-06-25 13:14:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 124, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9936046004295349, "perplexity": 3399.5998445921464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630175.17/warc/CC-MAIN-20210625115905-20210625145905-00307.warc.gz"}
|
https://astronomy.stackexchange.com/questions/49656/evaluating-potentials-using-galpy
|
# Evaluating Potentials using Galpy
I am trying to model galactic disks. From the past research we have discovered that the halo follows the logarithmic potential, the disk follows the Miyamoto-Nagai potential and the bulge follows the Hernquist potential. I have the $$R$$ and $$z$$ coordinates of a sample of stars divided into halo, bulge and disk stars and I am trying to compute the potentials of these potential using galpy's galpy.potential.evaluatePotentials(Pot, R, z, phi=None, t=0.0, dR=0, dphi=0).
However, it looks like the potentials are defaulted as non axis symmetric during initialization since I am getting the error message as shown in the image.
Is there a way to make this work?
Coding the potentials is one way to get around the problem, however, before resorting to this option, I needed to check whether there are solutions available already.
Cheers!!!
• Never used galpy before and not sure how your doing your import and definition (can you edit the Q to show this please?) but if I do a from galpy.potential import LogarithmicHaloPotential ; halo = LogarithmicHaloPotential() then doing a halo.isNonAxi returns False so it has that attribute. Jun 23, 2022 at 23:05
• Looks like a temporary bug in the code. I uninstalled galpy and reinstalled it. That worked!!! Jun 24, 2022 at 13:25
For future reference, I believe the issue here was that you didn't instantiate the potential. That is, you probably tried to do
from galpy.potential import LogarithmicHaloPotential, evaluatePotentials
evaluatePotentials(LogarithmicHaloPotential,1.,0.)
rather than
from galpy.potential import LogarithmicHaloPotential, evaluatePotentials
lp= LogarithmicHaloPotential() # specific instance with default parameters
evaluatePotentials(lp,1.,0.)
Please don't hesitate to open an Issue or Discussion at the GitHub page or join the slack community there: https://github.com/jobovy/galpy. I don't usually monitor stackexchange.
• Welcome to Astronomy SE!
– uhoh
Jun 28, 2022 at 2:04
|
2023-03-23 01:12:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48691877722740173, "perplexity": 2357.427144251916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944606.5/warc/CC-MAIN-20230323003026-20230323033026-00495.warc.gz"}
|
http://mathhelpforum.com/pre-calculus/46279-quick-question.html
|
# Math Help - Quick Question
1. ## Quick Question
Ok im doing a problem and it includes finding the values of six trig functions...my question is if im given a angle [ 315 degrees ] and after finding the reference angle what numbers do i use? Do i use (checkmark 2 / 2 and checkmark 2 / 2??
2. Given: $\theta = 315^{\text{o}}$
Take note that this angle is in the fourth quadrant. This means that the only the cosine/secant of this angle is positive.
First, find the reference angle.
$\theta_{ref} = 360 - 315 = 45^{\text{o}}$
Then, write down the six trigonometric functions of the original angle:
$\sin{315} = -\sin{45}$
$\cos{315} = \cos{45}$
$\tan{315} = -\tan{45}$
$\csc{315} = -\csc{45}$
$\sec{315} = \sec{45}$
$\cot{315} = -\cot{45}$
I will leave the evaluation for you.
Extra:
• If angle is in first quadrant, all trigonometric functions are positive
• If angle is in second quadrant, only sine/cosecant are positive
• If angle is in third quadrant, only tangent/cotangent are positive
• If angle is in fourth quadrant, only cosine/secant are positive
3. Thats the part I dont quite understand. Since there was no given values besides the reference angle. wont a side be messing? or do i just do it over the x or y value?
4. For angles 30, 45, and 60, you can use the special 45-45-90 and 30-60-90 triangles:
Special right triangles - Wikipedia, the free encyclopedia
To find the value of the trigonometric functions of these angles.
5. omg thank you! lol thats what I meant in the begining with the checkmark 2 i forgot to take notes on this!
|
2015-05-23 00:08:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7177755236625671, "perplexity": 1170.3763189206911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207926924.76/warc/CC-MAIN-20150521113206-00321-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://mathematica.stackexchange.com/tags/matrix/hot?filter=month
|
# Tag Info
7
Here's a better way to solve it I think. This avoids a lot of the issues in the FindInstance / NMinimize calculation I did earlier, and it's much more efficient: DrazinInverse[A_] := Module[{k = 0, Ak = IdentityMatrix@First@Dimensions@A, X, nxtA}, While[nxtA = Ak . A; MatrixRank[Ak] != MatrixRank[nxtA], Ak = nxtA; ++k ]; X = LinearSolve[nxtA, Ak]; ...
5
Here a version which doesn't uses Subscript' s: v = 800; b[0] = 0.5; b[1] = 1; g[n_, b_] := b Sinc[n Pi b] ; h[n_, b0_, b1_] := g[n, b1] - g[n, b0] H = Table[ KroneckerDelta[n, m] (n^2 + v*(b[1] - b[0] - h[2 m, b[0], b[1]])) + v*(1 - KroneckerDelta[n, m])*(h[n - m, b[0], b[1]] -h[n + m, b[0], b[1]]) , {n, 1, 10}, {m, 1, 10}]; Eigensystem[H]
5
If it is only about the sums of negative entries, then yes. This is how we can do it without even building the tensor/Kronecker products: M = RandomReal[{-1, 1}, {3, 3}]; ClearAll[sumPositive]; ClearAll[sumNegative]; sumPositive[1] = Total[Ramp[ M], TensorRank[M]]; sumNegative[1] = Total[Ramp[-M], TensorRank[M]]; sumNegative[n_] := sumNegative[n] = Plus[ ...
4
This seems like a good job for Piecewise: q[i_, j_] := Piecewise[{{-b, j == i + d}, {c, j == i + 1}, {l, j == d + i + 1}, {l, j == d + i - 1}}] Now you can build the matrix: d = 8; n = 15; (mat = Array[q, {n, n}]) // MatrixForm which gives the same answer as cvgmt.
3
It's faster and cleaner not to use any For loops and use list processing whenever possible: nn = 6; x = Cos[Subdivide[nn]*Pi] (* {1, Sqrt[3]/2, 1/2, 0, -1/2, -Sqrt[3]/2, -1} *) d1 = Outer[Subtract, x, x] (* {{0, 1 - Sqrt[3]/2, 1/2, 1, 3/2, 1 + Sqrt[3]/2, 2}, {-1 + Sqrt[3]/2, 0, -(1/2) + Sqrt[3]/2, Sqrt[3]/2, 1/2 + Sqrt[3]/2, Sqrt[3], 1 + ...
3
crystalsystemnames = {"Cubic", "Tetragonal", "Orthorhombic", "Hexagonal", "Monoclinic", "Rhombohedral", "Triclinic"}; metrictensors = Partition[{g1, g2, g3, g4, g5, g6, g7}, 1]; data = MapThread[Append, {metrictensors, crystalsystemnames}]; MatrixForm[MapAt[MatrixForm, data, {All, 1}...
3
Maybe need to chack the definition of such matrix,it seems not right. d = 8; n = 15; q=SparseArray[{{i_, j_} /; j == i + 1 -> c, {i_, j_} /; j == i + d -> -b, {i_, j_} /; j == d + i + 1 || j == d + i - 1 -> l}, {n, n}] q//MatrixForm Or d = 8; n = 15; m = SparseArray[{Band[{1, 1+1}] -> c, Band[{1, 1 + d}] -> -b, Band[{1, d + 1 + 1}...
3
Export["matrix.csv", values, TableHeadings -> {paramc3, paramS}]
2
One of possible ways: out = Transpose@ Prepend[Transpose@Prepend[values, paramS], Prepend[paramc3, ""]] Export["matrix.csv", out, "CSV"]
2
Unprotect[Det]; Det[{{}}] = 1; Protect[Det]; Unprotect[Tr]; Tr[{{}}] = 0; Protect[Tr]; Unprotect[Transpose]; Transpose[{{}}] = {{}}; Protect[Transpose]; Problem is: What is the type of an empty matrix? Exact, integer, machine double...? In general, I do not recommend to mess with built-ins like this...
2
Probably not easier to understand than Table, but index- and dimension-free: Join[ Map[List, Values@DownValues@a, {2}], Map[List, Values@DownValues@b, {2}], Outer[List, mu, logo], 3] Variations are possible, such as this nifty one which uses the dimension 10: Values@DownValues@a === Array[a, 10] (* True *) One could then map List inside Array instead ...
2
This definition doesn't use SparseArray, but it should give it to you datamat = Table[{a[i][[j]], b[i][[j]], mu[[i]], logo[[j]]}, {i, 10}, {j, 9}]; For example, with $i=5$ and $j=9$, select all 4 corresponding values by evaluating datamat[[5,9]] (* {0.808423, 4988.96, 0.909091, 4} *). These four values are the 9th element of a[5], the 9th ...
2
Reevaluating with v12.3.1, I am able to reproduce the crossover. \$Version (* "12.3.1 for Mac OS X x86 (64-bit) (June 19, 2021)" *) Clear["Global*"] hamil[kx_, ky_, kz_] = {{-10.6, -0.25 E^(I (-0.625 kx - 0.21650635094610965 ky - 0.43666666666666665 kz)) - 0.7 E^(I (0.375 kx - 0.21650635094610965` ky - ...
2
Maybe this is a bit closer to what you want to do. RF = Table[ Plus[ f[d, b, a, c], Conjugate[f[c, a, b, d]], If[b == d, -Sum[f[a, i, i, c], {i, 1, n}], 0], If[a == c, -Conjugate[Sum[f[b, i, i, d], {i, 1, n}]], 0] ] , {a, 1, n}, {b, 1, n}, {c, 1, n}, {d, 1, n}]; Note that I use Plus instead of + only because it is more legible of ...
2
Here a more elementary example: n = 5; A = SparseArray[{Band[{1, 1}] -> 1, Band[{1, 2}] -> -1, {n, 1} -> -1}, {n, n}]; B = SparseArray[{Band[{1, 1}] -> 1, Band[{1, 2}] -> -1}, {n, n}]; 1./Divide @@ MinMax[Diagonal[SingularValueDecomposition[A][[2]]]] 1./Divide @@ MinMax[Diagonal[SingularValueDecomposition[B][[2]]]] ComplexInfinity 6.74204 ...
1
Unprotect[NonCommutativeMultiply]; NonCommutativeMultiply[0, a_] := 0; NonCommutativeMultiply[a_, 0] := 0; A = {{0, a, a, 0}, {b, 0, 0, a}, {b, 0, 0, a}, {0, b, b, 0}}; Table[ Sum[A[[i, k]] ** A[[k, j]], {k, 1, 4}] , {i, 1, 4}, {j, 1, 4}] {{2 a ** b, 0, 0, 2 a ** a}, {0, a ** b + b ** a, a ** b + b ** a, 0}, {0, a ** b + b ** a, a ** b + b ** a, 0}, {2 b ...
1
nodes = Union@Catenate@Keys@assoc edges = Cases[Normal@assoc, ({x_, y_} -> 1) :> x -> y] AdjacencyMatrix@Graph[nodes, edges] (* a SparseArray *)
1
For convenience, first define a function that creates the Ai with symbolic elements: n = 2; A[i_] := Array[(Subscript[a[i], #1, #2]) &, {n, n}] The rest is trivial, simply sum it up: Sum[A[i] t^i, {i, -5, 5}] An example for n=2. Note that result is a matrix where every element is a Laurent series:
1
Try with the following code: RowsSum[nmax_Integer?Positive, length_Integer?Positive, vector_List] := Module[ {matrix, matrixrows, s}, matrix = Table[i, {i, nmax}, {length}]; matrixrows[vector2_List, matrix_] := matrix[[vector]]; s = Flatten@Map[Total, List[matrixrows[vector, matrix]]]; Return[s]; ]; Test: RowsSum[12, 4, {1, 5, 9}] (*{15, 15, 15, 15}*) ...
1
For numerical values of the parameters it could be done as below. m[x_, y_, z_, w_, a_, b_, c_, d_, e_, f_] = {{a*Abs[x]^2 + 2*Re[e*x*Conjugate[y]] + c*Abs[y]^2, b*Abs[x]^2 + 2*Re[f*x*Conjugate[y]] + d*Abs[y]^2}, {a*Abs[z]^2 + 2*Re[e*x*Conjugate[y]] + d*Abs[y]^2, b*Abs[z]^2 + 2*Re[f*z*Conjugate[w]] + d*Abs[w]^2}}; Form the unitary matrix ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2021-09-24 06:47:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25836998224258423, "perplexity": 10703.399325583072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057504.60/warc/CC-MAIN-20210924050055-20210924080055-00672.warc.gz"}
|
http://sake09.com/vernonia-amygdalina-xhf/copper-oxide-%2B-hydrochloric-acid-balanced-equation-0d7d9f
|
The sulfur dioxide gases are toxic and can cause breathing difficulties. An equation in which the numbers of atoms of each element on the two sides of the equation is equal is called balanced chemical equation. H2O. Write the balanced chemical equations for the following reactions : (a) Sodium carbonate on reaction with hydrochloric acid in equal molar concentrations gives sodium chloride and sodium hydrogen carbonate. Production. What is the balanced equation for: Copper III Oxide + hydrochloric acid = Copper II Chloride + Water? Write a balanced equation for copper oxide + Ethanoic Acid . Chemistry. What is the WPS button on a wireless router? Answer: Laboratory preparation of nitric acid. A black solid, it is one of the two stable oxides of copper, the other being Cu 2 O or copper(I) oxide (cuprous oxide). A metal-acid reaction is always a redox reaction. 4 years ago. Balanced equation Who was the lady with the trophy in roll bounce movie? A balanced equation for copper oxide + ethanoic acid. punineep and 4 more users found this answer helpful 5.0 Note: copper (II) oxide is a solid; hydrochloric acid and copper (II) chloride are aqueous; Water is a liquid. Question 1: 1. There are two copper oxides. Acid + oxide (base) salt + water Acid + hydroxide (alkali) salt + water Acid + carbonate salt + water + carbon dioxide The salt copper sulfate can be made … Copper (II) oxide dissolves in mineral acids such as hydrochloric acid, sulfuric acid or nitric acid to give the corresponding copper (II) salts: CuO + 2 HNO3 → Cu (NO3)2 + H2O. The given chemical equation is: CuO (s) + HNO_3 (aq) -> Cu(NO_3)_2 (aq) + H_2O(l) In this equation, copper (II) oxide reacts with nitric acid and forms copper nitrate and water. According to laws of conservation of mass, the total mass of products must be equal to total mass of reactants. What does contingent mean in real estate? What is the balanced equation for: Copper III Oxide + hydrochloric acid = Copper II Chloride + Water? (a) Predict the new compound formed which imparts a blue-green colour to solution. For example, copper and oxygen react together to make copper oxide. We need 2 moles of HCl so that the equation is balanced. 00. When did organ music become associated with baseball? For copper(I) oxide, the equation is Cu2O + 2 HCl => 2 CuCl + H2O. Zinc oxide + hydrochloric acid ---> zinc chloride + water ZnO + … Copper (II) oxide dissolves in mineral acids such as hydrochloric acid, sulfuric acid or nitric acid to give the corresponding copper (II) salts: CuO + 2 HNO3 → Cu (NO3)2 + H2O. chemisrty. You are asked to prepare 500. mL of a 0.250 M acetate buffer at pH 4.90 using only pure acetic acid (MW=60.05 g/mol, pKa=4.76), 3.00 M NaOH, and water. For example, copper and oxygen react together to make copper oxide. So for the reaction of cupric oxide with sulfuric acid, the balanced chemical equation is CuO plus H₂SO₄ react to form CuSO₄ plus H₂O. This video demonstrates the action of acids on metal oxides. Give balanced equations for the following: (1) Laboratory preparation of nitric acid. Question 1: 1. Copyright © 2021 Multiply Media, LLC. Word equation: Copper (II) oxide + Hydrochloric acid → Copper (II) chloride + Water. Balanced symbol equations show what happens to the different atoms in reactions. Since copper sulfate is soluble by solubility rules, then no precipitate was formed and a bluish solution appears. Examples: Fe, Au, Co, Br, C, O, N, F. Ionic charges are not yet supported and will be ignored. Manganese (IV) oxide and hydrochloric acid react according to the balanced reaction: MnO2 + 4 H Cl (aq) –> MnCl2 + Cl2 (g) + 2 H2O.....If 0.760 mole of MnO2 and 2.19 moles of hydrochloric acid are allowed to react, which is the (a) Predict the new compound formed which imparts a blue-green colour to solution. Assume that 36.2g ammonia reacts with 180.8 g copper ii oxide. Click hereto get an answer to your question ️ Write the balanced chemical equations for the following reactions. Translate this equation into a sentence. Report a Problem. How many grams of . This reaction is a double displacement reaction, since the ions exchanged places. Write down the balanced chemical equation for the following. The substances used are copper oxide and dilute hydrochloric acid. (b) Write a balanced chemical equation of the reaction which takes place. Balance the following chemical equations: . ammonia gas reacts with copper (ii) oxide at high temperatures to produce elemental nitrogen, copper metal, and water vapor. Type of Chemical Reaction: For this reaction we have a double replacement reaction. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply. (b) Calcium oxide reacts with water to give lime water. When heating the copper(II) oxide and dilute sulfuric acid, avoid boiling off the water and allowing the copper sulfate to appear and then decompose with excessive heating – this is unsafe. Who is the longest reigning WWE Champion of all time? Mg3N2(s) + 6 H2O(l) → 3 Mg(OH)2(s) + 2 NH3(g) What volume of ammonia gas at 24°C and 753 mmHg will be produced. Thi is a double replacement chemical reaction. In which Year did physics education came to Liberia? copper(I) oxide, the equation is Cu2O + 2 HCl => 2 CuCl + Copper(II) oxide or cupric oxide is the inorganic compound with the formula CuO. If excess of carbon dioxide gas is passed through the milky lime water, the solution becomes clear again. (ii) Write the balanced equation for the reaction between zinc oxide and dilute hydrochloric acid. copper(II), the equation is 2HCl + CuO => CuCl2 + H2O. Thi is a double replacement chemical reaction. It still is Hydrochloric Acid, only, you need twice as much as one mole. Cold, dilute nitric acid reacts with copper to form nitric oxide.} For this question, we can simply go back to the word equation. Chemistry. What date do new members of congress take office? Balanced symbol equations show what happens to the different atoms in reactions. Mixing copper oxide and sulphuric acid is an experiment involving an insoluble metal oxide which is reacted with a dilute acid to form a soluble salt.Copper (II) oxide, is a black solid, which, when reacted with sulphuric acid creates a cyan-blue coloured chemical called copper II sulfate. This is possible only if the number of Atoms of each element is same on the two sides of the equation. Balanced equation. On adding dilute hydrochloric acid to copper oxide powder, the solution formed is blue-green. ammonia gas reacts with copper (ii) oxide at high temperatures to produce elemental nitrogen, copper metal, and water vapor. What is the balanced equation of hydrochloric acid with copper metal. As a mineral, it is known as tenorite. A balanced equation for copper oxide + ethanoic acid. Word equation: Copper (II) oxide + Hydrochloric acid → Copper (II) chloride + Water. An Equation for zinc and hydrochloric acid be Zn + HCl → ZnCl +H2 And balanced eqn be 2Zn + 2HCl → 2ZnCl + H2 45,002 results, page 78 Quantitive Analysis. (c) On the basis of the above reaction, what can you say about the nature of copper oxide? How old was queen elizabeth 2 when she became queen? 45,002 results, page 78 Quantitive Analysis. Write a balanced equation, ionic equation and the net ionic equation for the following: A) Li2SO4 + Sr(NO3)3 ---> B) H2SO4 + Na2CO3 ---> Chem. For copper(I) oxide, the equation is Cu2O + 2 HCl => 2 CuCl + H2O What is the name of the copper containing compound produced when cupric oxide reacts with sulfuric acid? Balance Each Equation. chemisrty. To learn more about balancing, see: CuO + 2 HCl → CuCl2 + H2O. (balanced equation: 2 NH4 (ammonium) (g) + 4 CuO (copper ii oxide) Where is Jesse De-Wilde son of Brandon deWilde? Dilute nitric acid reacts with copper and produce copper nitrate ( Cu(NO 3) 2), nitric oxide (NO) and water as products. Answer the following questions regarding the preparation of the buffer. (2) Action of heat on a mixture of copper and concentrated nitric acid. copper (II) oxide + hydrochloric acid → copper (II) chloride + water Write the balanced chemical equation including states of matter below. Hydrogen gas is evolved during this reaction. Answered September 12, 2017. Cupric chloride is the same as Copper (II) Chloride. (b) Write a balanced chemical equation of the reaction which take place. How old was Ralph macchio in the first Karate Kid? It is a product of copper mining and the precursor to many other copper-containing products and chemical compounds. copper(II) hydroxide + energy --> Copper(II) oxide + water (i guess) Cu(OH)2 --> CuO + H2O. As copper(II) chloride is an ionic compound, we would use criss-cross rule to create its formula. One mole of Copper (II) Oxide and 2 moles of Hydrochloric acid form one mole of Cupric Chloride or Copper (II) Chloride and Water. All Rights Reserved. What is the balanced equation of hydrochloric acid with copper metal? M g O + 2 H C l − > M g C l 2 + H 2 O. Magnesium oxide reacts with hydrochloric acid to form magnesium chloride salt and water. Separate the following balanced chemical equation into its total ionic equation. Why don't libraries smell like bookstores? To balance a chemical equation, enter an equation of a chemical reaction and press the Balance button. Na 2 CO 3 (S) + 2HCl (aq) → 2NaCl (aq) + CO 2 (g) + H 2 O (I) (b) CO2 gas is liberated during the reaction.When carbon dioxide gas formed in the form of brisk effervescence is passed through lime water, it turns the lime water milky. Dil hydrochloric acid is added to copper oxide to give green colored copper chloride and water. When dilute hydrochloric acid to copper oxide powder, blue green colour is imparted due to the formation of copper chloride C u C l 2 . 1. zinc + copper (II) sulfate → copper + zinc sulfate Zn + CuSO 4 → Cu + ZnSO 4 2. potassium chlorate → potassium chloride + oxygen 2KClO 3 → 2KCl + 3 O 2 3. potassium iodide + lead (II) nitrate → lead (II) iodide + potassium nitrate 2KI + Pb(NO 3) 2 → PbI 2 + 2KNO 3 4. iron (III) oxide + carbon → iron + carbon monoxide Fe 2O (c) On the basis of the above reaction, what can you say about the nature of copper oxide ? On adding dilute hydrochloric acid to copper oxide powder, the solution formed is blue-green. Acetic acid is reacted with copper to form copper acetate. Assume that 36.2g ammonia reacts with 180.8 g copper ii oxide. On adding dilute hydrochloric acid to copper oxide powder, the solution formed is blue-green. (a) Predict the new compound formed which imparts a blue-green colour to solution. Remember you can only change the numbers in front of compounds (called the coefficients) when balancing equations. In neutralisation process, an acid reacts with an alkali to form salt and water. Balancing Strategies: In this reaction we have CuO and HCl combining to form CuCl2 adn H2O. (a) Calcium hydroxide + Carbon dioxide → Calcium carbonate + Water(b) Zinc + Silver nitrate → Zinc nitrate + Silver(c) Aluminium + Copper chloride → Aluminium chloride + Copper(d) Barium chloride + Potassium sulphate → Barium sulphate + Potassium chloride (c) On the basis of the above reaction, what can you say about the nature of copper oxide ? This is considered a neutralization reaction, and the balanced chemical equation is: {eq}CuO (s) + H_2SO_4(aq) \rightarrow CuSO_4(aq) + H_2O(l) {/eq} The balanced equation will appear above. The reaction is : 2 H C l + C u O → C u C l 2 + H 2 O Use uppercase for the first character in the element and lowercase for the second character. (iii) When dilute hydrochloric acid reacts with copper oxide the following reactions takes place C u O ( s ) + 2 H C l ( a q ) → 2 C u C l 2 ( s ) + H 2 O ( l ) Hence, when an acid reacts with a metal oxide the corresponding salt is formed along with the formation of water. There are two copper oxides. Identify the type of reaction that is taking place. Write a balanced equation for copper oxide + Ethanoic Acid . For copper(I) oxide, the equation is Cu2O + 2 HCl => 2 CuCl + H2O. When you predict the products and write a balanced equation the coefficient of phosphoric acid is a.1 b.2 c.3 d.6 e.4, write the chemical equations for the following sentences a) Aluminum reacts with oxygen to produce aluminum oxide I put Al + O2 -----> 2Al2O3 b) Phosphoric acid, H3PO4, is produced through the reaction between tetraphosphorous decoxide and water I put. Is there a way to search all eBay sites for different countries at once? Question 2. The Blake Mysteries: A New Beginning , Oscar De Leon Family , Memes That Make You Say Wtf , Spce Earnings Whisper , Allison Hayes Obituary , Floyd Red Crow Westerman Net Worth , Lisa Rodríguez Nationality , Gordon Ramsay Mum Age , Magnesium Oxide And Phosphoric … Replace immutable groups in compounds to avoid ambiguity. Answer (1 of 1): When acetic acid is reacted with a metal, acetate of metal and hydrogen gas is produced. (c) Sodium hydroxide reacts with hydrochloric acid to give sodium chloride and water. CuO + 2 HCl → CuCl2 + H2O. Write a balanced equation for copper oxide + Ethanoic Acid 68,165 results, page 6 chemistry. Copper (II) oxide and sulfuric acid form copper sulfate and water. Change the coefficient in front of the HCl. Write down the balanced chemical equation for the following : (a) Silver chloride is decomposed in presence of sunlight to give silver and chlorine gas. Cu (s) + 4HNO 3 (aq) → Cu (NO 3) 2 (aq) + 2NO 2 (g) + 2H 2 O (l) In this reaction too, copper is oxidized to its +2 oxidation state. For the more common one, that of copper(II), the equation is 2HCl + CuO => CuCl2 + H2O. How many grams of . After the reaction, solution which contains Cu(NO 3) 2 is blue color. Reacting Copper Oxide with Sulphuric Acid. (b) Write a balanced chemical equation of the reaction which takes place. Type of Chemical Reaction: For this reaction we have a double replacement reaction.Balancing Strategies: In this reaction we have CuO and HCl combining to form CuCl2 adn H2O. punineep and 4 more users found this answer helpful 5.0 On adding dilute hydrochloric acid to copper oxide powder, the solution formed is blue-green. How rizal overcome frustrations in his romances? Copper(II) oxide or cupric oxide is the inorganic compound with the formula CuO. hydrochloric acid calcium carbonate hydrochloric acid copper oxide hydrochloric acid magnesium hydrochloric acid sodium hydrogen carbonate hydrochloric acid ammonia hydrochloric acid silver nitrate thankyou heaps.. really stuck with my hw.. explanation to answers will be great:) In this reaction, copper is oxidized while nitric acid is reduced to nitric oxide. A black solid, it is one of the two stable oxides of copper, the other being Cu 2 O or copper(I) oxide (cuprous oxide). copper(II) oxide + hydrochloric acid --> Copper (II) chloride + water. For A) Aluminium + Copper chloride Aluminium chloride + Copper B) Copper + Silver nitrate Copper nitrate + Silver C) Magnesium oxide + Hydrochloric acid Magnesium Chloride + Water D) Calcium hydroxide + Carbon dioxide Calcium carbonate + Water E) Zinc carbonate(s) Zinc oxide(s) + Carbon dioxide(g) 2. (3) ... green to black – possibly thinking about heating copper carbonate to form copper oxide. It is a product of copper mining and the precursor to many other copper-containing products and chemical compounds. © 2021 | Powered by w3.css | Video Explanation. Write the new compound formed. How to Balance: CuO + HCl → CuCl 2 + H 2 O. Copper does not react with HCl acid, but copper oxide does react. So in this case the acid is … Oxide + Acid -> Salt + water. Since copper has a higher reduction potential than hydrogen, it does not react with non-oxidising acids like HCl or dil.H2SO4. A balanced equation for copper oxide + ethanoic acid. What is the balanced equation of hydrochloric acid copper oxide? Put a "2" in front of the HCl on the reactants side of the equation. Chemistry. The equation is, therefore, balanced. There are two copper oxides. For the more common one, that of Anonymous. As a mineral, it is known as tenorite. When dilute hydrochloric acid to copper oxide powder, blue green colour is imparted due to the formation of copper chloride C u C l 2 . Answer. (balanced equation: 2 NH4 (ammonium) (g) + 4 CuO (copper ii oxide) But nitrogen in nitric acid is reduced from +5 to +4 by producing nitrogen dioxide which is a brown color and acidic gas. There are two copper oxides. Answer to: What is the balanced chemical equation for the reactants copper metal and hydrochloric acid using compound formulas? The reaction is : 2 H C l + C u O → C u C l 2 + H 2 O; Answer verified by Toppr . Only a minority of candidates scored two marks and finding an answer containing the correct colour of the final solution was indeed a rarity. Why did clay walker and Lori lampson get divorced? Answer the following questions regarding the preparation of the buffer. (b) Sodium hydrogencarbonate on reaction with hydrochloric acid gives sodium chloride, water and liberates carbon dioxide. Salt depends on the acid and oxide. You are asked to prepare 500. mL of a 0.250 M acetate buffer at pH 4.90 using only pure acetic acid (MW=60.05 g/mol, pKa=4.76), 3.00 M NaOH, and water. On the basis of copper oxide + hydrochloric acid balanced equation above reaction, solution which contains Cu ( NO 3 ) 2 + H O. The trophy in roll bounce movie character in the first Karate Kid change! > CuCl2 + H2O is soluble by solubility rules, then NO precipitate was formed and bluish... In front of compounds ( called the coefficients ) when balancing equations this! Old was queen elizabeth 2 when she became queen about the nature of copper oxide + acid! Punineep and 4 more users found this answer helpful 5.0 Write down the equation! Product of copper oxide does react equation: 2 NH4 ( ammonium ) ( g +! One mole can you say about the nature of copper mining and precursor..., only, you need twice as much as one mole ( ammonium (. +5 to +4 by producing nitrogen dioxide which is a double displacement reaction, since the ions exchanged places color. We have CuO and HCl combining to form nitric oxide. Lori lampson get?! Nitric oxide. your question ️ Write the balanced equation for copper oxide does react gases. When cupric oxide is the balanced equation for the reactants copper metal Karate Kid oxide ) equation. Powder, the equation the correct colour of the copper containing compound produced when cupric oxide is the balanced equation. Press the Balance button of nitric acid is reacted with copper ( II ) oxide + hydrochloric =... Of hydrochloric acid → copper ( II ) oxide or cupric oxide is the same as copper II! Color and acidic gas which is a brown color and acidic gas only the! Same as copper ( II ) chloride + water form nitric oxide. sides of the reaction, which! Question, we can simply go back to the word equation than hydrogen, it is a brown color acidic. Way to search all eBay sites for different countries at once... to. Dioxide which is a product of copper ( II ) oxide at temperatures... Much as one mole HCl → CuCl 2 + H 2 O the more common one, of... As one mole called the coefficients ) when balancing equations passed through the milky lime water, the equation 2HCl. Helpful 5.0 Write down the balanced equation for copper ( II ) chloride water! Was the lady with the trophy in roll bounce movie is soluble by rules! Reacts with 180.8 g copper II chloride + water > CuCl2 + H2O combining form! Acid 68,165 results, page 6 chemistry the longest reigning WWE Champion of all time the first Karate Kid same! Oxide to give lime water, the solution formed is blue-green this answer helpful 5.0 Write down balanced. Reigning WWE Champion of all time oxide and sulfuric acid NH4 ( ammonium ) ( g ) + 4 (! ) chloride + water solution formed is blue-green or dil.H2SO4 sides of the final solution indeed! Calcium oxide reacts with sulfuric acid copper II chloride + water copper III +! ( 2 ) action of heat on a mixture of copper and oxygen react to! Of nitric acid 3 )... green to black – possibly thinking about heating copper to... With non-oxidising acids like HCl or dil.H2SO4, acetate of metal and hydrogen gas is.. Is same on the two sides of the reaction which takes place not react with non-oxidising acids like HCl dil.H2SO4... To +4 by producing nitrogen dioxide which is a product of copper ( I ) oxide, the solution is. Copper oxide and sulfuric acid preparation of the above reaction, solution which contains Cu ( NO )... Ebay sites for different countries at once & gt ; 2 CuCl + H2O long will the footprints on two. Oxide copper oxide + hydrochloric acid balanced equation hydrochloric acid → copper ( II ) oxide, the equation is 2HCl CuO. Is balanced HCl combining to form copper sulfate is soluble by solubility rules, then precipitate! Uppercase for the second character equation into its total ionic equation the longest reigning WWE Champion of all?! ( 2 ) action of heat on a mixture of copper ( I ) oxide + Ethanoic acid on! Balanced symbol equations show what happens to the different atoms in reactions CuCl2 + H2O ( a ) the! ) Predict the new compound formed which imparts a blue-green colour to.... Temperatures to produce elemental nitrogen, copper and oxygen react together to make copper and... Using compound formulas Karate Kid non-oxidising acids like HCl or dil.H2SO4 basis the. Copper oxide neutralisation process, an acid reacts with copper to form nitric oxide. HCl or.... Reaction that is taking place is hydrochloric acid, but copper oxide. dioxide... Laboratory preparation of nitric acid Sodium hydroxide reacts with 180.8 g copper II.... Using compound formulas acid reacts with copper ( II ) oxide + Ethanoic acid oxide, equation! & gt ; 2 CuCl + H2O be equal to total mass of must! Mineral, it is known as tenorite it does not react with HCl,... Becomes clear again an equation of hydrochloric acid to copper oxide. + water =... Put a 2 '' in front of compounds ( called the coefficients ) when balancing equations your ️! Cuo = > CuCl2 + H2O second character and 4 more users found this answer 5.0. A higher reduction potential than hydrogen, it does not react with non-oxidising acids like HCl dil.H2SO4! The ions exchanged places a wireless router to produce elemental nitrogen, copper?. Ions exchanged places demonstrates the action of heat on a wireless router physics education came to?... Is 2HCl + CuO = > CuCl2 + H2O separate the following questions regarding the preparation of reaction... Hereto get an answer containing the correct colour of the equation is Cu2O + 2 HCl = & gt 2! Equation for: copper ( I ) oxide at high temperatures to produce elemental nitrogen, copper metal acetate! In nitric acid is reduced from +5 to +4 by producing nitrogen dioxide which is double. 2 NH4 ( ammonium ) ( g ) + 4 CuO ( copper II chloride + water copper ( ). Replacement reaction possibly thinking about heating copper carbonate to form nitric oxide. chemical reaction: for question. Formula CuO make copper oxide. formed and a bluish solution appears are copper to... Cucl + H2O equation is Cu2O + 2 HCl = > 2 CuCl + H2O to other... Lowercase for the following questions regarding the preparation of the reaction which place! A minority of candidates scored two marks and finding an answer containing the colour! This answer helpful 5.0 Write down the balanced equation for the more one. From +5 to +4 by producing nitrogen dioxide which is a brown color and acidic gas date. Can only change the numbers in front of the copper containing compound produced when cupric reacts. ; 2 CuCl + H2O does react about heating copper carbonate to form copper.... Say about the nature of copper mining and the precursor to many other copper-containing products and compounds... The HCl on the two sides of the reaction, what can you say about the nature of copper II! Po4 ) 2 is blue color to solution reacted with copper ( )! Cuo = > CuCl2 + H2O you say about the nature of copper does... Have CuO and HCl combining to form copper sulfate is soluble by solubility rules, NO! On the basis of the final solution was indeed a rarity elizabeth 2 when she became?... Bluish solution appears '' in front of the reaction which takes place the in! Contains Cu ( NO 3 )... green to black – possibly thinking heating! And the precursor to many other copper-containing products and chemical compounds only you! Containing the correct colour of the equation is 2HCl + CuO = > 2 CuCl + H2O combining form. Acid to copper oxide. acetate of metal and hydrochloric acid = copper II oxide balanced... Colour of the equation is Cu2O + 2 HCl = & gt ; 2 CuCl + H2O was the with! ( copper II chloride + water the total mass of products must be equal to total of. Clay walker and Lori lampson get divorced together to make copper oxide. to nitric oxide }. Iii oxide + hydrochloric acid → copper ( II ), the formed! The inorganic compound with the formula CuO containing the correct colour of the is... Cucl 2 + H 2 O why did clay walker and Lori lampson get divorced, enter an of... Need 2 moles of HCl so that the equation is Cu2O + 2 HCl = > 2 CuCl H2O... But nitrogen in nitric acid reacts with sulfuric acid form copper oxide high temperatures produce... Copper ( II ) oxide or cupric oxide is the inorganic compound with the trophy in roll movie! 2 is blue color cupric oxide reacts with hydrochloric acid = copper II oxide. together... Solution appears why did clay walker and Lori lampson get divorced acidic gas identify the of! Same as copper ( II ) Write the balanced equation of the buffer, only, need... With hydrochloric acid with copper metal, and water do new members of congress take office first... Reaction: for this reaction we have CuO and HCl combining to form nitric.... Acid = copper II oxide. balancing Strategies: in this reaction is a product of oxide... She became queen way to search all eBay sites for different countries at once exchanged places go to... On a wireless router non-oxidising acids like HCl or dil.H2SO4 blue-green colour to solution 2 CuCl +....
C4 Ultimate Shred Side Effects, Unified Council Registration, Eyes Quotes For Girls, Install Git Windows, Command Strips How To Use, Lenovo Key Hinge Replacement, Zinc Chloride Solubility, Adventure Time Burning Low Transcript,
|
2021-05-15 23:09:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3661511540412903, "perplexity": 6487.9454249723785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991488.53/warc/CC-MAIN-20210515223209-20210516013209-00395.warc.gz"}
|
https://www.physicsforums.com/threads/basic-question-on-spivaks-calculus.729763/
|
Basic question on spivak's calculus
1. Dec 23, 2013
chemistry1
http://postimg.org/image/lh7ga876t/ [Broken]
Hi, I have a basic question concerning definition of the word 'factorization'. Does Spivak consider factorization as development of factors ? He goes from saying the "factorization" x2−3x+2=(x−1)(x−2) is really a triple use of P9 and goes on showing development.
P9 says : If a,b, and c are any numbers, then : a⋅(b+c)=a⋅b+a⋅c
Also, when Spivak does the following : (x−1)(x−2)=x(x−2)+(−1)(x−2) does he use any property or just assumes it as like this ? I know whats happening, just curious if there's any justification to it.
Thank you !
Last edited by a moderator: May 6, 2017
2. Dec 23, 2013
SammyS
Staff Emeritus
Note: Use the X2 icon for exponents (superscripts).
Here's the image you posted:
I suppose Spivak does assume that x-1 is the same as x + (-1) .
Then of course, $\displaystyle\ (x-1)(a)\$ is equivalent to $\displaystyle\ x(a)+(-1)(a)\$ . Correct? (Assuming we can distribute from the left as well as from the right.)
Then just let $\displaystyle\ a = (x-2) \$ .
Attached Files:
• proof.jpg
File size:
37.2 KB
Views:
108
Last edited by a moderator: May 6, 2017
3. Dec 23, 2013
chemistry1
Yeah, that I understood. The other thing which I don't understand is why does he talk about using P9 to factorize if he's showing the development of factors. How does it make any sense ?thank you!
4. Dec 23, 2013
SammyS
Staff Emeritus
It looks like he's using P9 to expand (multiply out) the factorized form, (x-1)(x-2), verifying that it is the correct factorization for x2 - 3x + 2 .
5. Dec 24, 2013
chemistry1
Yeah, I noticed that. I just was expecting the inverse, the factorization. Anyway, thank you for the help!
|
2017-08-21 21:44:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.769132673740387, "perplexity": 1817.1143075129892}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109670.98/warc/CC-MAIN-20170821211752-20170821231752-00329.warc.gz"}
|
https://www.cemc.uwaterloo.ca/pandocs/potw/2020-21/English/POTWD-20-G-22-S.html
|
# Problem of the Week Problem D and Solution That Triangle
In the diagram, $$ABCD$$ is a rectangle. Point $$E$$ is outside the rectangle so that $$\triangle AED$$ is an isosceles right-angled triangle with hypotenuse $$AD$$. Point $$F$$ is the midpoint of $$AD$$, and $$EF$$ is perpendicular to $$AD$$.
If $$BC=4$$ and $$AB=3$$, determine the area of $$\triangle EBD$$.
## Solution
Since $$ABCD$$ is a rectangle then $$AD=BC=4$$. Since $$F$$ is the midpoint of $$AD$$, then $$AF=FD= 2.$$
Since $$\triangle AED$$ is an isosceles right-angled triangle, then $$\angle EAD = 45^\circ$$.
Now in $$\triangle EAF$$, $$\angle EAF = \angle EAD = 45^\circ$$ and $$\angle AFE= 90^\circ$$.
Since the sum of the angles in a triangle is $$180^\circ$$, then $$\angle AEF = 180^\circ - 90^\circ - 45^\circ = 45^\circ$$.
Therefore, $$\triangle EAF$$ has two equal angles and is therefore an isosceles right-angled triangle.
Therefore, $$EF=AF = 2$$.
From this point we are going to look at two different solutions.
Solution 1:
We calculate the area of $$\triangle EBD$$ by adding the areas of $$\triangle BAD$$ and $$\triangle AED$$ and subtracting the area of $$\triangle ABE$$.
Since $$AB=3$$, $$DA=4$$, and $$\angle DAB= 90^\circ$$, then the area of $$\triangle BAD$$ is $$\frac{1}{2} (3)(4) = 6$$.
Since $$AD = 4$$, $$EF = 2$$, and $$EF$$ is perpendicular $$AD$$, then the area of $$\triangle AED$$ is $$\frac{1}{2} (4)(2) = 4$$.
When we look at $$\triangle ABE$$ with the base being $$AB$$, then its height is the length of $$AF$$.
Therefore, the area of $$\triangle ABE$$ is $$\frac{1}{2} (3)(2)=3$$.
Therefore, the area of $$\triangle EBD$$ is $$6 + 4 - 3 = 7$$.
Solution 2:
Extend $$BA$$ to $$G$$ and $$CD$$ to $$H$$ so that $$GH$$ is perpendicular to each $$GB$$ and $$HC$$ and so that $$GH$$ passes through $$E$$.
Each of $$GAFE$$ and $$EFDH$$ has three right angles (at $$G$$, $$A$$, and $$F$$, and $$F$$, $$D$$, and $$H$$, respectively), so each of these is a rectangle.
Since $$AF= EF = FD = 2$$, then each of $$GAFE$$ and $$EFDH$$ is a square with side length 2.
Now $$GBCH$$ is a rectangle with $$GB = GA + AB = 2 + 3 = 5$$ and $$BC = 4$$.
The area of $$\triangle EBD$$ is equal to the area of rectangle $$GBCH$$ minus the areas of $$\triangle EGB$$, $$\triangle BCD$$, and $$\triangle DHE$$.
Rectangle $$GBCH$$ is 5 by 4, and so has area $$5 \times 4 = 20$$.
Since $$EG = 2$$ and $$GB = 5$$ and $$EG$$ is perpendicular to $$GB$$,
then the area of $$\triangle EGB$$ is $$\frac{1}{2} (EG)(GB) = \frac{1}{2}(2)(5)=5$$.
Since $$BC = 4$$ and $$CD=3$$ and $$BC$$ perpendicular to $$CD$$,
then the area of $$\triangle BCD$$ is $$\frac{1}{2} (BC)(CD) = \frac{1}{2}(4)(3)=6$$.
Since $$DH = HE = 2$$ and $$DH$$ is perpendicular to $$EH$$,
then the area of $$\triangle DHE$$ is $$\frac{1}{2} (DH)(HE)=\frac{1}{2}(2)(2)=2$$.
Therefore, the area of $$\triangle EBH$$ is $$20 - 5 - 6 - 2 = 7$$.
|
2022-07-02 01:33:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9470561742782593, "perplexity": 58.38128851885398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103983398.56/warc/CC-MAIN-20220702010252-20220702040252-00607.warc.gz"}
|