https://slashdot.org/story/04/08/27/1616256/facts-and-fallacies-of-software-engineering
## Facts and Fallacies of Software Engineering

Sarusa writes "The title of the book, Facts and Fallacies of Software Engineering, is nice and controversial, and so is the content. Robert Glass is a long-time software engineer and researcher into what software practices work, which don't, and why. You'll find his name all over the literature along with names like Yourdon and Brooks, and he's got a long list of professional credits. In other words, he's an experienced, cranky, opinionated old coot who pulls no punches and writes a very readable and useful book. And he's on your side, having deliberately passed up a more lucrative career in management for a technical track." Read on for the rest of Sarusa's review.

- Title: Facts and Fallacies of Software Engineering
- Author: Robert L. Glass
- Pages: 190
- Publisher: Addison-Wesley
- Rating: 8 out of 10
- Reviewer: Sarusa
- ISBN: 0321117425
- Summary: 40 years of software engineering research in a nutshell.

### The Layout

Facts and Fallacies is not a technically demanding book; it's a very easy and compelling read. There are 55 Facts (and 5+5 Fallacies) grouped into logical sections such as Management, Life Cycle, and Quality. First, each Fact is stated succinctly. (For instance, Fact 1: The most important factor in software work is not the tools or techniques used by the programmers, but rather the quality of the programmers themselves.) Then the point is fleshed out more fully -- in this case, that even with all the periodic hype for some hot new methodology that promises orders of magnitude greater productivity, the quality of your programmers matters far more than anything else (and even the best new methods only offer 5-35% increases). Next, the level of controversy about this Fact is discussed. For Fact 1, it's that even though everyone pays lip service to the idea of people being more important than processes, we all still act like it's not true. Maybe this new hot methodology can turn all your lousy programmers into great ones! Perhaps it's because people are a harder problem to address than tools, techniques, and process. And, of course, hot new methodologies sell a lot of books. Finally comes a list of sources and references, which can lead you to more in-depth great reading like Peopleware and Software Runaways. This all works out to about one to two pages per item.

### The Facts and Fallacies

The Facts and Fallacies fall into several groups. Some are not well known (or are just met with stunned disbelief), such as Fact 31: Error removal is the most time-consuming phase of the life cycle. Some are pretty well accepted but mostly ignored, like Fact 1 above. Some are accepted, but nobody can agree on what to do about them (if anything), like Fact 9 (paraphrased): Project estimates are done at the beginning of the project, when you have insufficient understanding of the requirements and scope, which makes it a very bad time to do an estimate for the entire project. Some Facts Glass acknowledges many people will flat out disagree with (a few of them very loudly), like Fact 30: COBOL is a very bad language, but all the others (for business data processing) are so much worse. These are the Facts where he really has an axe to grind, and they make for amusing reading. In this case what he's really saying is that there is a use for domain-specific languages intended to do one specific thing and do it well, rather than languages like C and Java which attempt to be "good enough" for any use under the sun.
But everyone hates COBOL, including me, so it's controversial.

### What's Good?

Again, this is a good (and fast) read. Even if you don't agree with everything, Glass is a skilled writer with strong opinions and a sense of humor. And you might end up agreeing more than you expected. I was pretty skeptical when I started reading. After all, I'm a long-time software engineer with strong opinions too, and how often do you get opinionated geeks to agree on even what soda or text editor to use? But most of the Facts resonated with my experience, and Glass backs most of them with substantial research references. The best Facts are those that you knew but might never have expressed explicitly, like Fact 41: Maintenance typically consumes 40 to 80 percent (average, 60 percent) of software costs. Therefore, it is probably the most important life cycle phase of software. Or consider Fact 18: There are two 'rules of three' in reuse: (a) it is three times as difficult to build reusable components as single-use components, and (b) a reusable component should be tried out in three different applications before it will be sufficiently general to accept into a reuse library. I knew this generally, and you probably did too, but I didn't know the specific reference for "Biggerstaff's Rules of Three," which give you a ballpark figure. The book was written in 2002, when eXtreme Programming was hot. It's very interesting that the predictions Glass made in this book about the strengths and weaknesses of XP were, in retrospect, pretty much on target; this sort of predictive success helps confirm, more viscerally, that he knows his subject. There are a few Facts in here that Glass included just because he feels strongly about them (or even about specific people), and he doesn't really back them up very strongly except with "well golly, this is so obvious." Like Fallacy 5: Programming can and should be egoless. Note that this is a Fallacy, so he opposes it. I happen to agree with him, but his arguments are mostly personal ox-goring, even if they're based on his extensive experience. Still, it's an interesting read. A few of the Fallacies he feels are so obvious that he doesn't even really bother providing sources or references for them, and this somewhat diminishes the overall feel of rigor. Really, the worst thing about this book is that it doesn't come with a poster of just a bullet-pointed list of Facts and Fallacies that you can nail to your office wall (or your boss's).

### A Few More Facts

Fact 21: For every 25% increase in problem complexity, there is a 100% increase in solution complexity.

Fact 37: Rigorous inspections [code reviews] can remove up to 90% of errors before the first test case is run. [But are so mentally and emotionally exhausting that we rarely do them.]

Fallacy 10: You teach people how to program by showing them how to write programs. Why don't we teach them to read programs first? Good question (and he has a few possible answers).

### In Conclusion

I wouldn't say Facts and Fallacies of Software Engineering is quite as powerful as The Mythical Man-Month, Peopleware, or Death March on their own, but if you program (or manage programmers) and want to be more than just a code pig, this will give you the condensed version of 40 years of research in a very readable package. Even if you don't agree with everything he says, it's well worth considering. You can purchase Facts and Fallacies of Software Engineering from bn.com. Slashdot welcomes readers' book reviews.
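As an aside, Fact 21 compounds fast. If you read the 25%-to-100% pairing as a scaling law (my extrapolation, not a formula Glass states), each factor of 1.25 in problem complexity doubles solution complexity, so

$$\text{solution complexity} \propto (\text{problem complexity})^{\log 2 / \log 1.25} \approx (\text{problem complexity})^{3.1},$$

meaning a problem twice as complex needs a solution roughly $2^{3.1} \approx 8.6$ times as complex.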
• #### As long as he is not management, he's fine by me.. (Score:2, Insightful) Since he has not gone the way of other money-seeking, glory-hunting management "people", he is a "true" Veteran. Dear Veteran: I salute thee for resisting the pressures of becoming another me-too manager and instead staying in the trenches to fight with the poor soldiers. • #### Re:As long as he is not management, he's fine by m (Score:5, Insightful) on Monday August 30, 2004 @05:09PM (#10112248) If he were in management, wouldn't he have more influence which he could use to change things for the better? Just because you have a management position doesn't automatically mean that you believe in the PHB management style. • #### Re:As long as he is not management, he's fine by m (Score:4, Insightful) on Monday August 30, 2004 @05:15PM (#10112294) Hear hear! This is the big problem with management. Pre-boom managers were PHBs, and thus promote PHBs. New skilled IT people look at PHBs and think, I don't want to be like that, so I won't become a manager. So it's a self-feeding cycle. WE NEED MORE GEEKS IN MANAGEMENT. Right now, I should be writing my MBA assignment, worth 25% of the subject, due in at midnight tomorrow, but instead I'm procrastinating on SlashDot. If that doesn't qualify me to be a geek manager, I don't know what does. • #### Re:As long as he is not management, he's fine by m (Score:3, Insightful) The hard part about getting geeks into management is that they need to be around one place long enough. PHBs get where they are by riding out the clock and becoming the one with the most knowledge. This happens through attrition in many organizations. We geeks aren't often patient enough to ride the pendulum long enough for it to swing the other way. Our management here has been propagated through golf buddies and drinking buddies. Those with the experience to make good decisions for the organization a • #### Re:As long as he is not management, he's fine by m (Score:4, Insightful) on Monday August 30, 2004 @05:45PM (#10112553) How do you propose a geek go about staging a coup de corp? Attain some social skills, go out and play some golf and buy the boss a beer. No one wants to work with arrogant anti-social types, management included. • #### Re:As long as he is not management, he's fine by m (Score:4, Funny) on Monday August 30, 2004 @07:36PM (#10113245) Yeah, but then you cease being a geek, and you start living a double life, where tassels and the wool content of your suit pants become important factors. One day you'll be driving by Fry's and realize you've not been there in over three months, and you will feel very small indeed. Geeks are good at what they do when they embrace their geekness. When they try to suppress it, they become miserable, depressed creatures. And I would not want that in management. I say: stay behind the keyboard, and long live sandals! • #### Re:As long as he is not management, he's fine by m (Score:3, Insightful) You can have my berkies when you pull them off my cold lifeless feet! In a hundred years we will have software that does what a manager does today. But man will always need geeks. Yes, what is complex today is simple tomorrow. But tomorrow there will be new complex problems to solve!
• #### Re:As long as he is not management, he's fine by m (Score:5, Insightful) on Monday August 30, 2004 @05:40PM (#10112505) Except most programmers I know that are worth their weight in salt would completely and totally suck to have as managers. Some are poorly organized in everything but their code (*ahem*.) A few grew up believing that an employee / employer relationship should be antagonistic; that a manager must rule their team with an iron fist. That may come from looking around at a bunch of us slacker programmers thinking "hey, why aren't they working as hard as I? If I were their manager, I'd be busting their asses 24 by 7." Many are extremely introverted and have trouble speaking up among their peers; they simply would not be capable of dressing down an employee who desperately needs it. In most of these cases it seems that the programmers have spent their time learning machine management skills. Those skills are completely unhelpful when it comes to working with people. The lessons you learn (for example, "the machine only does exactly what I tell it") don't work with human employees, no matter how hard you try to apply them. Yes, management is a skill that can be learned, but I don't know any geeks that would want to spend the time, let alone actually manage. Not even for the money. Almost all the people I know who have become successful managers have never been real programmers. They were business analysts or came from completely outside the IT field. • #### Re:As long as he is not management, he's fine by m (Score:3, Interesting) by Anonymous Coward Except most programmers I know that are worth their weight in salt would completely and totally suck to have as managers. Yep. My experience of fellow computer programmers is the same. They would not make good managers. And, more importantly, they would not ENJOY being managers. They are perfectly content to be managees. However, the few programmers who are both capable and motivated to be in management really should aspire to do so, IMAO. They are exactly the sort of management that the industry nee • #### Re:As long as he is not management, he's fine by m (Score:4, Insightful) on Monday August 30, 2004 @07:46PM (#10113302) "Almost all the people I know who have become successful managers have never been real programmers"... well, meet one more. I have been a coder for many years in embedded systems work, and also in the web area. And I have managed (and still do manage) teams of programmers and analysts. The reason most geeks don't want to manage is simple... it is HARDER than coding. No debuggers, no error messages, no recompiles; it has to be right the first time. And Senior Management expects it!! Plus the skills are mostly people skills, something IMNSHO a lot of "geeks" have trouble with. People solutions are generally not right/wrong; they are somewhere in the middle, they are kinda "fuzzy," which bothers the logical programmer's mind. But those "soft" management skills CAN be learned if you try. In my 22 yrs in IT I've been up the management chain to mid-level and back down and over to Sr. Technical Staff. I prefer the technical work, but it is getting HARD to find, so I have my PM skills to fall back on. Versatility in roles, as well as in programming skills, is valuable! Oh, and don't get me started on my soapbox about how Leadership is MUCH more valuable than management, but it is in ever scarcer supply in the tech world.
Set reasonable expectations but hold them to it, give people room to work, help them with problems, keep the customer informed and off the programmers' backs, and you'll do OK in management. • #### Re:As long as he is not management, he's fine by m (Score:5, Insightful) on Monday August 30, 2004 @09:04PM (#10113736) "The reason most geeks don't want to manage is simple... it is HARDER than coding." Management is not harder than coding per se. It is just harder for geeks whose talents and interests are more suited for coding. Most managers don't want to code, because for them it is HARDER than managing. • #### If all managers are PHBs... (Score:3, Insightful) ...then all developers are dilberts. And you wouldn't want THAT. It would spoil the cool "old soldier" metaphor... • #### Re:As long as he is not management, he's fine by m (Score:5, Funny) on Monday August 30, 2004 @05:47PM (#10112575) This message paid for by Swift Byte Programmers for Truth. • #### Ahhhh... (Score:2, Funny) by Anonymous Coward "Fact 21: For every 25% increase in problem complexity, there is a 100% increase in solution complexity." How is that a fact? Some data would be nice, and I'll wager 60 Quatloos he doesn't have any. • #### I resent that (Score:3, Funny) on Monday August 30, 2004 @05:04PM (#10112195) and want to be more than just a code pig I resent that. As we all know, the correct term is r as in coder. • #### "experienced, cranky, opinionated old coot" (Score:5, Funny) on Monday August 30, 2004 @05:05PM (#10112208) You could have shortened that to "experienced"; the rest follows naturally from that. • #### Another review... (Score:5, Insightful) on Monday August 30, 2004 @05:06PM (#10112220) ...this one [mountaingoatsoftware.com] by Mike Cohn of Mountain Goat Software. Mike's review is from the "agile software" point of view, so he comments favorably on (among others) Fact 22 - "Eighty percent of software work is intellectual. A fair amount of it is creative. Little of it is clerical". • #### Re:Another review... (Score:2) 80% + a fair amount + a little = 100%. • #### COBOL (Score:2, Interesting) I saw the word COBOL and cringed and immediately thought of long hot days in the arid lab. Also, spending enormous amounts of time programming/debugging on what I thought was the worst language ever. Master file update... almost makes me cry what they put us through... and then there was Assembler. I'll be interested to check this book out for the COBOL section alone. • #### COBOL && Lisp? (Score:4, Interesting) on Monday August 30, 2004 @05:25PM (#10112389) I'll be interested to check this book out for the COBOL section alone. A question. Let's assume that the book's thesis is true that COBOL is best for administrative programming since it's a specialised language. Does the book address e.g. Lisp, where programmers have a standard "pattern" to create sub-languages to attack problems? It sounds like an argument that Lisp should be used instead of COBOL, since Lisp is arguably at least as good as any/most for non-low-level programming. Now I'll probably be flamed by Lisp people... :-) • #### Re:COBOL && Lisp? (Score:3, Funny) And the point of using Lisp to create the sub-language COBOL would be what? • #### Re:COBOL (Score:3, Informative) I saw the word COBOL and cringed and immediately thought of long hot days in the arid lab. Also, spending enormous amounts of time programming/debugging on what I thought was the worst language ever.
Master file update... almost makes me cry what they put us through... Modern COBOLs are a far cry from the original language. Some even have OO features. While the thought of using traditional COBOL file management makes me cringe too, nowadays you can use SQL or call out to an ODBC driver. Probably the best fe • on Monday August 30, 2004 @05:09PM (#10112253) The most important factor in software work is not the tools or techniques used by the programmers, but rather the quality of the programmers themselves. Another important factor is the amount of time that they are given to accomplish the task. Certain programmers, including many who are otherwise excellent, procrastinate and cannot meet deadlines. And, as we're aware, even good programmers often take shortcuts once fatigue begins to set in. • #### the deadline issue (Score:4, Insightful) on Monday August 30, 2004 @05:55PM (#10112624) IMHO the real management issue concerning deadlines is the way they are defined. If the manager imposes an impossible deadline on the programmer, he's just a bad boss, PHB style. Of course, there are always real-world time constraints to be met, but in this case the manager should define a possible goal along with the programmer, alternative solutions, scope agreements, etc. On the other hand, if the programmer is incapable of defining a deadline himself for a well-defined amount of work, then you just can't blame the manager. Being a programmer myself, let me shed some light on why and when it may look like someone is working at low speed, or really is procrastinating. 1. The obvious: they are not a great programmer after all. 2. You mention fatigue. You're dead on. Being tired can sap someone's efficiency (including via increasing the number of bugs) a lot. Whipping a team into working 12-hour shifts, 7 days a week, may work for a week, maybe even two, but then you have tired _and_ demoralized people. 3. Morale problems. Being • #### Fact 37 - code reviews catch errors (Score:5, Informative) on Monday August 30, 2004 @05:11PM (#10112266) > Fact 37: Rigorous inspections [code reviews] > can remove up to 90% of errors before the > first test case is run. [But are so mentally > and emotionally exhausting that we > rarely do them.] I think some of these terms mean different things to different people. When he says "test case", he means (I think) a tester clicking around a UI and adding a new employee or whatever. But "test case" can also mean a unit test, i.e.: Employee e = new Employee("Fred"); assertEquals(e.getName(), "Fred", "Name not set correctly on instantiation"); The latter meaning of unit test provides a way to do "rigorous inspections" over and over - because a computer is doing the work. Good times. • #### Re:Fact 37 - code reviews catch errors (Score:5, Insightful) on Monday August 30, 2004 @05:25PM (#10112387) Sure, but code reviews (especially of the latest patches, instead of whole tracts of fresh code) can easily catch errors for which no test exists, or for which no simple test is possible. Just as an example off the top of my head, it's common to write if (condition = immediate) ... which in some languages could be caught by the compiler, but not always. The author of that line can read it over and over but something in his brain will replace the mistaken '=' with '=='. The code reviewer has no such preconceptions and will (might) see it immediately.
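[Editor's aside: the unit-test comment above shows a Java-style fragment; as a runnable version of the same idea, here is a minimal sketch in Python. The Employee class is a hypothetical stand-in invented for illustration, not from the book or the thread.]

```python
# Runnable version of the "test case as repeatable inspection" idea:
# the computer re-checks this invariant on every run.
import unittest

class Employee:
    """Minimal stand-in matching the comment's Java-like snippet."""
    def __init__(self, name: str):
        self._name = name

    def get_name(self) -> str:
        return self._name

class EmployeeTest(unittest.TestCase):
    def test_name_set_on_instantiation(self):
        e = Employee("Fred")
        self.assertEqual(e.get_name(), "Fred",
                         "Name not set correctly on instantiation")

if __name__ == "__main__":
    unittest.main()  # rerun the "inspection" as often as you like
```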
One good example of open review is the Mozilla project, where all commits must be reviewed by at least two people, at least one of whom must be the owner of the relevant subtree of the project. (Sorry if this is not quite right; I'm going from memory here.) As a result the quality of the code making it into the Mozilla tree is pretty high, with minimum "paper bag" errors. • #### Re:Fact 37 - code reviews catch errors (Score:5, Insightful) on Monday August 30, 2004 @05:33PM (#10112461) No, unit testing and code reviews are orthogonal. Unit tests verify correctness for certain types of input, but often fail to catch subtle bugs or identify poor solutions (bad algorithms or whatever), and of course they are only as good as the person who wrote them - most often, the person who wrote the code being tested in the first place. So the input to the unit test is often just the sort of thing the code was written to manage, not edge cases and so forth. Nothing compares to a code review done by a super-anal type who nitpicks over everything. It is amazing what such a person can catch in terms of weird edge cases, inefficiencies, and so forth, simply by making you sit there and justify what you've done. Like the reviewer said, they are emotionally draining, but are truly worth it. • #### Re:Fact 37 - code reviews catch errors (Score:5, Insightful) on Monday August 30, 2004 @05:39PM (#10112497) > a super-anal type who nitpicks over everything Hm. Maybe that super-anal person could fill in the missing test cases for all those edge conditions. Then his analness will be preserved for posterity, because everyone can run those test cases to catch possible bugs in future code changes. • #### Re:Fact 37 - code reviews catch errors (Score:3, Insightful) Sure, when he finds the bugs in your code through a peer review, you can add the test cases to first expose the problem and then prove that you've corrected it. The other major advantage of code reviews is that you know someone else is going to look at your code soon. You're less likely to try to slip something trashy that works through. • #### COBOL (Score:3, Insightful) on Monday August 30, 2004 @05:11PM (#10112269) Fact 30: COBOL is a very bad language, but all the others (for business data processing) are so much worse COBOL is an old language, not necessarily a bad language. Like anything else, you get out of it what you put into it. If you like programming in COBOL then you'll probably be good at it. If you like programming in Java, then you'll probably be able to code any business data processing functionality you need in it too. I think it's best to use the tool you're most comfortable with. • #### Re:COBOL (Score:4, Funny) on Monday August 30, 2004 @05:41PM (#10112512) COBOL was designed not to actually get work done, but rather to destroy the ego of any young, up-and-coming prima donnas. After the first year of debugging and maintaining COBOL programs with millions of lines of spaghetti code, obfuscated global variables, etc., the young programmer has no room left for an ego. He has come to the realization that he can't understand everything about the system completely; he is humbled. Then, when given a Java assignment, he feels a sense of gratitude and loyalty to his boss, who has just lifted him from an endless quagmire of PERFORMs and GO BACKs and SOC4 ABENDs... THAT is why COBOL came about.
IBM never expected that business would build systems with a language designed to break in the new hires... But, as they say, the rest is history... • #### Re:COBOL (Score:3, Insightful) Counterexamples: Brainfuck, INTERCAL. The lesson is that at least at some level, programming language does really make a difference. • #### Re:COBOL (Score:3, Interesting) COBOL is a bad language and it always has been! There is just so much that is bad about COBOL that it is unbelievable that people would use it. Flow control like PERFORM, no good way to set up storage classes, awkward syntax, and horrible verbosity are just a few things. Setting up linkage sections is a pain in the ass. It is no wonder COBOL programmers hated calling subroutines. What is worse is the spaghetti code they routinely wrote and then painfully debugged. There is a language that is quite good as a repl • #### So COBOL is like Capitalism... (Score:4, Funny) by Anonymous Coward on Monday August 30, 2004 @05:12PM (#10112280) ...the worst system except for all the others. • #### "Fact", but still irrelevant (Score:4, Insightful) on Monday August 30, 2004 @05:16PM (#10112306) In regard to Fact 37 ("Rigorous inspections [code reviews] can remove up to 90% of errors before the first test case is run, but are so mentally and emotionally exhausting that we rarely do them."): so what? If a code review, which takes several hours of my time and the time of my fellow developers, can catch 90% of the errors before the first test case is run, or I can catch 90% of the errors (not necessarily the same ones) using the test cases, it's a better use of resources to let the computer point the errors out to me. A code review on "90% debugged" software that finds an error strikes me as more useful than a code review that finds several errors in 0% debugged software. As for Fact 1, good process or tools may not be able to make all programmers gods, but bad process or tools can make a mortal out of anyone. • #### Re:"Fact", but still irrelevant (Score:3, Insightful) It is the presumption on your part that you will or can catch the 90% that is the "so what." It is empirical that people tend to overlook errors in their own work. Hence the reviewing by others. I don't think he's talking about compilation errors, so the computer can't always find the (business logic) errors. • #### Re:"Fact", but still irrelevant (Score:2) Yeah, but it's quite horrifying how many people/shops look at code reviews as unnecessary or too time-consuming. When I'm interviewing at companies, I tend to ask careful questions about process. If they don't include code reviews, I'm out of there. • on Monday August 30, 2004 @05:45PM (#10112551) Yes, but you can review your own code. After writing out a couple hundred lines of code, print it out. Then come in the next day and read it. I mean, truly read it, line by line. Some may argue that this is not as good as other programmers reading the code. Undoubtedly true, but you will still catch many errors. The fact that you've waited a day means you are, in a sense, a different programmer than the one that wrote the code. And the fact that it's printed rather than on the screen gives you a different perspective. I suggest that running tests is not sufficient to ensure a reasonable level of quality. There are certain errors that are unlikely to be caught by testing, and yet are quite obvious in a read-through. In other words, testing is not a replacement for read-throughs. In finding problems, a multi-faceted approach is needed.
• #### Change font type and size... (Score:3, Informative) If printing it out is not an option, displaying it in a different font, in a different size, ideally with different line breaks (hard for code, possible for other things) is almost as good. The aim of printing or reformatting is to change the text, and force you to actually read the letters that are on the page. This is done by destroying the patterns in the positioning of the text that the writer is used to, thereby hindering recall. This is one of the reasons LaTeX is useful (for me, at any rate), because • #### And use a grinder (Score:3, Informative) Which is what we used to call pretty-printers when they did more than just wrap everything in font tags. My favorite was lgrind, which produced TeX/LaTeX versions of your source. It could be taught about variable naming patterns, so if your code does something like "delta_vn = blah", it would emit "\delta_{vn} = blah". When printed, this becomes an actual Greek delta character, with the "vn" as a subscript. (Just one example.) Checking the formula in the code against the formula in the math reference • You are right. Don't just print it out and read it. Print it out and take it somewhere else to read it. Don't just print it out and read it on the desk one foot away from your monitor. Take it outdoors and get some sun! • #### Re:"Fact", but still irrelevant (Score:4, Insightful) on Monday August 30, 2004 @06:13PM (#10112767) The thing that everyone seems to forget about the code inspection school of thought is that it was developed at a time when running tests and debugging actually did cost real money, back in the 1970s when Fagan came up with his inspection process. Your department was charged every time you compiled and ran your program on the mainframe computer, because the mainframe was expensive to buy/rent, power, and maintain. Now it doesn't cost real money, but has an implied cost: bugs found later in the development process cost more money to fix than if you found them in the coding phase at a code review. Never mind the fact that the recommended rates of code inspection, in lines of code per hour, are near glacial, and it costs more money now to have 4 highly paid people sit in a room and read code out loud. One project I worked on was all brand-new code and would have taken three full months of code reviews to review every single piece of code at the speed the QA people were insisting was required for a proper code inspection. The process also insisted that we code inspect before we began any testing. So instead of running a suite of tests that could test 90% of the code in a matter of minutes, the QA insisted that we go through a code inspection before test, just because the QA people's definitive texts on software quality still use the same data that Fagan used from his research back in the 1970s. They can quote the facts but they don't understand what assumptions were in the original research. Code inspections do have their place. I would say those places are to enforce coding standards and knowledge transfer, which both help with maintainability in the long term. In reality however, most of the code I inspect today has been pounded on for a month or so before we review it. I can't remember the last time I actually found an error through inspection that would have resulted in a bug report. Most of the stuff we find are missing documentation and typos in that documentation.
*yawn* • #### Re:"Fact", but still irrelevant (Score:3, Insightful) "In regard to Fact 37 ("Rigorous inspections [code reviews] can remove up to 90% of errors before the first test case is run, but are so mentally and emotionally exhausting that we rarely do them."): so what?" Two reasons (there are more, but these are the best ones that come to mind immediately): (1) The next time you park yourself on a commercial airliner you can be thankful that the software controlling the engines, the autopilot and the cabin pressure controls, to name just a few subsystems, was revie • #### Re:"Fact", but still irrelevant (Score:3, Insightful) I will give you the shortest possible summary of the difference between code reviews and test cases: code reviews are done by humans, test cases by computers. Who's smarter? Test cases are a great way to ensure that your code continues to do what it's intended to do. Code reviews can catch design errors [though the ego factor is problematic here], can lead to new ideas, can dramatically simplify algorithms, etc. IT'S GOOD WHEN THE PROGRAMMERS TALK ABOUT THE CODE EVERY ONCE IN A WHILE! A free side benefit of r • #### Not at all. (Score:3, Insightful) A code review on "90% debugged" software that finds an error strikes me as more useful than a code review that finds several errors in 0% debugged software. That may be what your intuition tells you, but you're wrong. That is the most expensive way to debug software. When you find a defect in code inspection, you have your finger on it. You know exactly which line of code is faulty, and you know how it is faulty. Fixing it is trivial. When you find a defect in unit test, you know which subsystem is at • #### Fact 21 Addendum (Score:5, Funny) on Monday August 30, 2004 @05:17PM (#10112314) Fact 21: For every 25% increase in problem complexity, there is a 100% increase in solution complexity. Addendum: Unless you are talking about a Microsoft product, where for every 1% increase in problem complexity, there is a 7-year delay in solution delivery. • #### Book Club (Score:5, Interesting) on Monday August 30, 2004 @05:19PM (#10112331) We had a Book Club at my job, where we reviewed one to four Facts or Fallacies at each meeting, once a week. We collected comments and suggestions about how to change how we worked. It was really interesting, and it was good to engage more people in the discussion, because while I might really care about this stuff and have strong opinions, other people in our organization did not have strong opinions and actually started to think about this stuff as a result of our meetings. Unfortunately, our suggestions didn't really get anywhere as a Massive Reorganization (TM) of the department took place. *grumble* We're thinking of doing another Book Club, talking about "Dynamics of Software Development" by Jim McCarthy. • #### Re:Book Club (Score:4, Funny) on Tuesday August 31, 2004 @01:07AM (#10115267) We had a Book Club at my job We had a Fight Club at my job, but I'm not allowed to talk about it. • #### Fact 41 - maintenance (Score:4, Insightful) on Monday August 30, 2004 @05:19PM (#10112333) > Fact 41: Maintenance typically consumes > 40 to 80 percent (average, 60 percent) > of software costs. Therefore, it is probably > the most important life cycle phase of software. Hm. This is a tricky one. Does maintenance take that big a chunk because of the way we write v1.0?
Maybe we can improve our initial code to make subsequent changes easier. And build in a safety net of unit tests to make those changes less painful. A lot of maintenance may be a good sign - it may mean that the program is being evolved and improved and is actually useful to someone. Dead programs and cancelled projects don't get maintained, but that's not a point in their favor. • #### Egoless Programming (Score:5, Insightful) by Anonymous Coward on Monday August 30, 2004 @05:20PM (#10112343) Fallacy 5: Programming can and should be egoless I have worked with somebody who turned himself into a great programmer by being egoless. He could solve any problem by the simple expedient of not trying to do it all himself and being very good at accepting ideas from other people. In most circumstances programming is done within a team, and ego just gets in the way. Who wants to work with somebody who rejects an idea just because they didn't think of it! • #### Re:Egoless Programming (Score:2, Insightful) by Anonymous Coward Not only are egoless programmers better at accepting ideas, they are better at giving ideas. I've seen time and time again more junior programmers act proprietary with their ideas because they are in competition with the world. The economics of programming guarantee there will be journeymen programmers, and one mature team mentor is worth a whole room full of ambitious young monkeys with typewriters. • #### Re:Egoless Programming (Score:5, Insightful) by Anonymous Coward on Monday August 30, 2004 @05:42PM (#10112520) Different sense of "ego", I think. There is ego, "Everybody else sucks", and then there is ego, "Yes my code is good, and I'm confident enough in myself to know that I can deal with the problem." • #### Re:Egoless Programming (Score:2) I do. What's his name and email address? If his ideas are all good, I'm hiring! • #### Re:Egoless Programming (Score:5, Interesting) on Monday August 30, 2004 @05:57PM (#10112633) The main reason programmers appear to have an ego about their work is that as you get older and more experienced you're expected to know more about software engineering than the people younger or less experienced than you. Never mind that you've never worked with Package X; you are a senior guy and you can handle it. They also have their junior people lean on you, and then their success depends on yours. If you appear egoless and unashamed to draw from others' advice, you appear to be ignorant and unmotivated once you get to be a certain age or get a certain amount of experience. • #### Re:Egoless Programming (Score:5, Insightful) on Monday August 30, 2004 @06:05PM (#10112705) "If you appear egoless and unashamed to draw from others' advice, you appear to be ignorant and unmotivated once you get to be a certain age or get a certain amount of experience." Only to a young, pompous jackass do you. • #### Management vs. Geeks (Score:2) If this type of thinking could be eliminated (through examples and actions), lots of people would have happier, healthier work environments and professional relationships: "And he's on your side, having deliberately passed up a more lucrative career in management for a technical track." Why the "they vs. us"? Why can't we all just get along? :) • #### Software Engineering (Score:5, Insightful) on Monday August 30, 2004 @05:35PM (#10112478) I still remember (and cringe when doing so) my software engineering class in college. SE is over-analyzed to a fault.
You have so many "improvements" like UML and programs like Rational Rose that only help to overwork and confuse those working on programs. And don't even get me started on the bazillion buzzwords created for software engineering just to make obvious facts sound scientific. I think a book like this is wholly necessary. I am not saying this book does a good job of it (I haven't read it). There just needs to be a book that tells people how much of the software engineering information is false and unnecessary, so we don't have to either sift through all of it or, even worse, waste countless hours trying to follow a faulty discipline. Yeah, I have an agenda, because writing software is hard enough in itself. It is 10 times worse when cluttered with overhead. I remember my very first programming class in high school (it was at a community college) where I was told for a FACT that I should flowchart every function and include a separate box for every line of code. It is ridiculous, and they are feeding this stuff into students' heads as fact. • #### Re:Software Engineering (Score:3, Interesting) What's wrong with UML and Rational Rose? I've just come back to a project from 3 years ago that I designed and developed by myself. Nobody else has touched it since. I'm so thankful I used Rational Rose to create UML diagrams for my documentation, as it's made my learning experience with everything I'd forgotten much easier. When used properly, UML doesn't confuse people. Either that or the people have been educated about it. • #### new definition for "fact" (Score:2, Insightful) by Anonymous Coward Since when does a "fact" include a value judgment, like COBOL being a bad language? That's an opinion. • #### fact and fallacies (Score:2) Fact 21: For every 25% increase in problem complexity, there is a 100% increase in solution complexity. Fact 37: Rigorous inspections [code reviews] can remove up to 90% of errors before the first test case is run. Fallacy 10: You teach people how to program by showing them how to write programs. Why don't we teach them to read programs first? Good question (and he has a few possible answers). FACT: a piece of information presented as having objective reality - in fact : in truth fact 21 & 37 are not f • #### Re:fact and fallacies (Score:3, Insightful) What you're reading here is a review, not a full restatement of each thesis in the book. Have you RTFB? If not, then you do not know what data he provides to buttress his statements as Facts and Fallacies. OTOH, what data can you provide to contradict him? Your own personal perceptions? Or can you actually show verifiable numbers? • #### Re:fact and fallacies (Score:3, Interesting) Technically, a fact is not "a true statement". A fact is a statement that is either objectively true OR objectively false, but cannot be both. This is as opposed to an opinion, which is subjective and can thus be simultaneously true for one person and false for another. You are acting as if "fact" is the opposite of "false". It's not. "Fact" is the opposite of "opinion". "The earth's moon is made from green cheese" is a fact. It happens to be a false fact, but it is still a fact instead of an opinion. • #### Bullet points (Score:5, Funny) on Monday August 30, 2004 @05:50PM (#10112591) Really, the worst thing about this book is that it doesn't come with a poster of just a bullet-pointed list of facts and fallacies that you can nail to your office wall (or your boss's). • #### Some other good resources on the topic...
(BSP) (Score:4, Informative) on Monday August 30, 2004 @05:57PM (#10112635) This is Blatant Self Promotion (you have been warned). Here's a good list of software resources [berteig.org], mostly books that I've collected over the last five years or so. Lots of stuff about agile, stuff for managers as well as developers. • #### I hope he mentions the Lions book (Score:4, Interesting) on Monday August 30, 2004 @06:09PM (#10112738) Fallacy 10: You teach people how to program by showing them how to write programs. Why don't we teach them to read programs first? Good question (and he has a few possible answers). This is exactly what John Lions was trying to do with his commentary [peer-to-peer.com]. And he used nothing less than the Unix kernel source code as an example of well-crafted, and very readable, code. Rest in peace, John. Your little project helped more hackers than you could ever have known in this life. • #### A similar read (Score:3, Informative) on Monday August 30, 2004 @06:51PM (#10112998) A book in the same kind of vein is "A Handbook of Software and Systems Engineering -- Empirical Observations, Laws and Theories". In it, "laws" are stated and discussed with emphasis on experiments or studies that back up the law, or that tend to falsify it. As an example, one of the laws mentioned in this discussion is given as "Individual developer performance varies considerably." (Law 31) Then some statistics are given showing the variability. Finally there is a comment on whether we should or should not trust the numbers given. • #### does anyone else find it incredible (Score:3, Interesting) on Monday August 30, 2004 @07:07PM (#10113099) Does anyone else find it incredible that this reviewer complains that the cranky old coot author doesn't bother to provide justifications where he really doesn't have anything compelling to add? Knowing when to shut up is one of the best indicators that someone cares enough about their subject matter that they don't feel the need to "fill air," as if other people can't supply their own experience. I heartily condone the approach: here's what I think, take it or leave it. I'm an old coot myself, and I've learned that it's generally a waste of time to write toward an audience that won't think for itself. If your boss won't think, a poster of convenient sound bites won't solve any problem that matters. • #### Project estimates (Score:5, Insightful) on Monday August 30, 2004 @07:14PM (#10113129) Project estimates are done at the beginning of the project when you have insufficient understanding of the requirements and scope, which makes it a very bad time to do an estimate for the entire project. This is what separates the men from the boys. Estimates of project requirements are not perfect until the project is complete, so you have no choice but to work with educated guesses. Modern project management is an exercise in managing uncertainty. It is easy to say how long it would take you to write a script; anyone can do that in their head: guess based on experience, multiply by 2, and have a reasonable estimate. Now try estimating thousands of scripts (or circuits) done by hundreds of engineers of varying aptitudes that will result in a capital cost of several billion dollars over (hopefully) a few years! All of which is directly reflected in your retirement investments!
That kind of planning is real nuts-and-guts stuff that most of us will never have to wrestle with, and a "fact" like this grossly understates and misrepresents it. Programming is easy. Planning is orders of magnitude harder by comparison. I prefer programming; the latter makes my brainpan throb. • #### Teaching (Score:4, Interesting) on Monday August 30, 2004 @08:02PM (#10113422) Fallacy 10: You teach people how to program by showing them how to write programs. Why don't we teach them to read programs first? For the same reason we don't teach people how to write books by telling them to go read: If they're serious about writing, they're *already* avid readers. Teaching them to read would be redundant. • on Tuesday August 31, 2004 @09:11AM (#10117203) I am curious if he addresses the fallacy of "Software development only consists of programmers". I have been doing QA/Testing for 10 years, and it is pretty sad how all-important people think programmers are. The best ones may be, but they aren't all the best ones. When you foster an atmosphere where "development is always right," you run into major roadblocks in software development. Requirements analysts can't do their job properly, or requirements are ignored. Documentation people are glared at for trying to make the system understandable. (Yet we all love to bitch about bad online documentation.) Test people are seen as people who are just blocking the inevitability of shipping the code. If anyone tries to even analyze why things are F'd up, they are seen as "not being team players" and "finger pointers," even if you are trying to fix the process and not the people. I will say that what he says about inspections is right on. Although I think just focusing only on code reviews is wrong - rigorous reviews of requirements/code/test plans/process docs/user doc/etc. will remove 90% of the defects. And defects in requirements are much more costly to fix later. The trick is balancing which of these are most important for your company to review, depending on the project. You can't just do it willy-nilly; you have to do a risk assessment and make a decision based on something. I actually had a director of engineering say in a meeting, "Since we implemented my new requirements management process, I *guarantee* that the code will work, first time, out of the box." I laughed out loud, and received a very dirty look from him, but agreement from everyone else. Needless to say, that release is the worst one we have had in 5 years, and it is at least 6 months over schedule. People have had to work a lot of OT to try and shine this turd, and they are getting burnt out. Most places do software development and not software engineering. Which is fine, as long as you are clear about it. I just thought of a very good analogy that /.ers can understand. There is probably little doubt that Microsoft has a lot of good programmers. However, their culture and business model have led the direction of their product. That alone should show you that software development is not all about the programmer. On the other hand, OSS is great, but it can only get so far on "good code". Once it is managed, it can be pretty powerful.
http://www.physicsforums.com/showthread.php?t=594457
## Solve |3x-7|-|x-8| > 4 (solve algebraically)

So I broke it into a number line and calculated when:

|3x-7| = (3x-7) for x ≥ 7/3, and -(3x-7) for x < 7/3 (strict).
|x-8| = -(x-8) for x < 8, and (x-8) for x ≥ 8.

So for x < 7/3: -(3x-7) - [-(x-8)] > 4, which simplifies to x < -5/2.

For the interval 7/3 ≤ x < 8: (3x-7) - [-(x-8)] > 4 gives x > 19/4. This is part of the solution because it lies within the interval 7/3 ≤ x < 8.

Now for x ≥ 8: (3x-7) - (x-8) > 4 gives x < 3/2.

Now I'm not sure how to address this, but I want to say this is not part of the solution because it doesn't lie within the interval x ≥ 8. And also, in terms of writing my answer in interval notation, how can I do this with the info attained? Do I just plot my newly found restrictions on the number line?

Recognitions: Homework Help

You got the first 2 intervals correct, but the last one (for x > 8) you didn't quite get right. Once you've got the right answer for the 3rd part, then yes, you can draw it out, or just think about what each of the restrictions means, so then you can say what the final result of all the restrictions is.

Mentor

Quote by Plutonium88: "Now for x ≥ 8: (3x-7) - (x-8) > 4 gives x < 3/2."

This is not correct. You're still solving |3x-7| - |x-8| > 4, is that right? So in the region x ≥ 8, you have (3x-7) - (x-8) > 4. What does that lead to?

Quote by Plutonium88: "Now I'm not sure how to address this, but I want to say this is not part of the solution because it doesn't lie within the interval x ≥ 8. And also, in terms of writing my answer in interval notation, how can I do this with the info attained? Do I just plot my newly found restrictions on the number line?"

Quote by BruceW: "You got the first 2 intervals correct, but the last one (for x > 8) you didn't quite get right. Once you've got the right answer for the 3rd part, then yes, you can draw it out, or just think about what each of the restrictions means, so then you can say what the final result of all the restrictions is."

Ahh, my bad, I believe I wrote the symbol wrong: x > 3/2. But this still isn't part of the interval x ≥ 8? I just don't know how to consider it, because 1.5 does not lie in the interval x ≥ 8 (is this correct?). (I just don't know how to consider these intervals, with the restrictions attained.) Like, for example: does the restriction have to lie within that interval for it to be part of the solution? So if I look at what I have: x < -5/2, x > 19/4, so x ∈ (-∞, -5/2) ∪ (19/4, +∞). I'm just curious: originally, with the intervals I'm taking, like x ≤ 7/3, 7/3 ≤ x ≤ 8, x ≥ 8, why don't I consider these on the number line?

Recognitions: Homework Help

Quote by Plutonium88: "x > 3/2. But this still isn't part of the interval x ≥ 8? I just don't know how to consider it, because 1.5 does not lie in the interval x ≥ 8 (is this correct?). Like, for example: does the restriction have to lie within that interval for it to be part of the solution?"

Think about what it means: you tell it that x > 8, then it tells you x > 3/2. There is no problem here. Just think, does x > 3/2 change the restriction of x > 8?
Quote by Plutonium88: "So if I look at what I have: x < -5/2, x > 19/4, so XE (-∞, -5/2) U (19/4, +∞). I'm just curious: originally, with the intervals I'm taking, like x ≤ 7/3, 7/3 ≤ x ≤ 8, x ≥ 8, why don't I consider these on the number line?"

I think you've got the right answer. What does XE mean? And I'm not sure what you mean about considering the original intervals on the number line. You could look at each of the intervals and write down the restrictions from each, but then your final answer takes them all into account, so it's nicer to look at.

Quote by BruceW: "Think about what it means: you tell it that x > 8, then it tells you x > 3/2. There is no problem here. Just think, does x > 3/2 change the restriction of x > 8? ... I think you've got the right answer. What does XE mean? And I'm not sure what you mean about considering the original intervals on the number line."

http://s15.postimage.org/g98nz5i2j/line_bmp.png

XE means "x E", or "x belongs to the interval"; I just don't have the special E (∈) character, couldn't find it in my charmap. So from my number line, the intervals that I have are the same as the answer I stated above. And what you're saying is that, in terms of the restrictions I've solved, I only have to take those 3 into account (19/4, -5/2, 3/2) in order to find the answer, and if that's the right answer, it would seem that this is the case. And also, thank you very much for your help. I really appreciate what you do; learning would be much more difficult without the help of this forum. d:)

Recognitions: Homework Help

Oh I get it, like: $$X \in ( - \infty , - \frac{5}{2} ) \cup ( \frac{19}{4} , \infty )$$ The magic of LaTeX! And yeah, that looks like the right answer to me, since you used the information from each of the 3 important regions to get this answer. I'm glad I've been some help; you had done most of the question in your first post!
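[Editor's aside: the interval answer above can be spot-checked numerically. A minimal sketch (Python, not part of the original thread) that brute-force samples points and compares the inequality against the claimed solution set:]

```python
# Numerically spot-check the solution of |3x - 7| - |x - 8| > 4.
# Claimed solution set: x < -5/2 or x > 19/4.

def satisfies(x: float) -> bool:
    """True when x satisfies the original inequality."""
    return abs(3 * x - 7) - abs(x - 8) > 4

def in_claimed_solution(x: float) -> bool:
    """True when x lies in (-inf, -5/2) U (19/4, inf)."""
    return x < -5/2 or x > 19/4

# Sample a dense grid across the interesting region, covering the case
# boundaries 7/3 and 8 and the claimed endpoints -5/2 and 19/4.
xs = [i / 100 for i in range(-1000, 2001)]
mismatches = [x for x in xs if satisfies(x) != in_claimed_solution(x)]
print("mismatches:", mismatches)  # expect an empty list
```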
https://stats.stackexchange.com/questions/403303/how-to-interpret-metric-lasso-regression-coefficients
# How to interpret / metric Lasso regression coefficients

Edited question, since it was a duplicate.

I used Matlab to make a lasso model for my data, which has 41 predictors and 1 response variable. Perhaps I used more variables than I need, or maybe some variables are not meaningful, since some regression coefficients are 0. For the non-zero coefficients, I got some that are 0.2, 0.8, 2.7, etc. I understood that I can interpret this as: the higher the regression coefficient, the higher the importance of the respective variable. Is there a metric or a rule of thumb that says that if a regression coefficient is 10/50/100 times lower than the highest regression coefficient, we can "reject" or "not consider" that variable when implementing the model online? Or, since it gave a non-zero value, should I stick with that variable no matter what?

• This issue of feature importance is tricky and is discussed extensively on this site. This page has extensive discussion and links to further discussion. As does this page. Please look over those pages and their links and edit your question to specify any specific statistical issues that are still unclear to you. Also, please edit your question to spell out the meanings of "RR" and "EN", as not all readers will know what you mean by them. – EdM Apr 16 at 14:59
• I know of no such rule. – Peter Flom - Reinstate Monica Apr 17 at 11:31
• @PeterFlom So if it gave a coefficient > 0, I should stick with that variable, no matter the value? – Tiago Dias Apr 17 at 13:37
• It's not good, in statistics, to make universal rules for model building. You have to think about what you are doing. But you shouldn't reject a variable just because it is non-significant or has a low value. Model building requires thought. – Peter Flom - Reinstate Monica Apr 18 at 11:53

If you want to assess the importance of features in the lasso framework, you can use stability selection by Meinshausen/Bühlmann. This basically means that you repeat your lasso $B$ times on a random subset of your data, and in every run you check which features are among the top $L$ chosen features. In the end you give every feature a score for how often it was selected in the top $L$ over the $B$ runs. The cited paper shows that stability selection is much more stable than the simple lasso.

• Yes, I ran lasso for 30 runs, and for each run I ordered the absolute values of the regression coefficients in a ranking of 1 to 41 (41 predictors) to map it, but I was not certain whether this is a good methodology. I am also not sure whether I can calculate p-values for the regression coefficients of lasso. – Tiago Dias Apr 17 at 13:53
• $B=30$ is probably too low; $B=100$ or even $B=500$ should be better. The size of the subsample should be around half of the observations; if you don't have many observations, you can choose to subsample some more. $L=5$ is the most common choice; this makes lasso run quite fast. – Edgar Apr 17 at 14:50
• You don't have $p$-values this way, but the stability score should give you a good measure of how important the features are (Meinshausen and Bühlmann say that values above 0.6 are good indicators). Feel free to mark the answer of your choice as a solution to your problem! – Edgar Apr 17 at 14:52
• BTW, you should definitely not order by the size of the regression coefficients (as @EdM pointed out, too). You should order by the entrance of features into the model with lasso (lasso adds features one by one; with stability selection, you stop after $L=5$ and go on to the next run).
– Edgar Apr 17 at 15:00
• Maybe you should deepen your understanding of lasso first before you study more complicated methods that improve on lasso. – Edgar Apr 18 at 11:01

You have to think carefully about the "importance" of selected predictors and what "p-values" really mean in LASSO.

Predictor importance

Demonstrations of LASSO can be based on a simulated data set with a small number of predictors associated with outcome and a large number that are not. In that context it works well to find the truly important predictors. But in real-world applications, with multiple predictors that are correlated with each other, the choice of "important" predictors will vary from sample to sample from the same population. The variability in "importance" among predictors that you saw among re-samples of your data, and that forms the basis of the stability selection method recommended in the answer by @Edgar, should lead to some questions about what "importance" of individual predictors means when there are multiple correlated predictors related to outcome.

Even when LASSO returns a value of 0 for a predictor's coefficient (as it is designed to do), that doesn't mean the predictor is "not meaningful"; it just means that it didn't add enough to the model to matter for your particular sample and sample size. The predictors that were selected might be important within your particular data sample, but that doesn't mean they are the most important in any fundamental sense in the overall population, and they certainly can't be interpreted to have causal effects on outcome.

Your particular approach based on ranking of coefficient values is potentially dangerous, depending on how it is done. Predictors are typically standardized before LASSO so that differences in measurement scales don't differentially affect the penalization of the coefficients. But some software then re-scales the coefficients to the original measurement scales. So at the least you have to be careful about whether you are ranking coefficients for standardized or for re-scaled predictors. You don't want the importance of a predictor representing a length to differ depending on whether you measured it in millimeters or miles.

LASSO p-values

In many applications, the most important issue with LASSO is how well the model works for prediction. A strength of LASSO is that, even with its potentially unstable selection among correlated predictors, models can work quite well in practice for prediction. In that context, p-values for individual coefficients are of little interest. It's when you are interested in inference that p-values matter. This is a very difficult problem in LASSO, or in any modeling approach that uses outcomes to select predictors. The usual assumptions for estimating p-values in standard regression models no longer hold when you have used the outcomes to select predictors. There has been some work on this in recent years, introduced for example in Chapters 16 and 20 of Computer Age Statistical Inference. Under some assumptions it is possible to estimate p-values, but I think it's safe to say this is still an area of active research interest. Unless you are willing to get into these issues in depth, it might be best to stay away from p-values for individual coefficients in LASSO.

• About the correlation of predictors: Meinshausen/Bühlmann claim that the subsetting of the sample helps with letting features "shine" even if they're correlated with more prominent features in the whole sample.
Also, they add a randomized scaling to the features in every run because of possibly correlated features, which I didn't explain here. – Edgar Apr 17 at 14:55
• @Edgar I agree that, insofar as it makes sense to evaluate importance among predictors in LASSO, the stability selection method is a promising approach. My reason for providing this answer is my fear that the OP, or others who come upon this page, might not have thought through what feature importance means in practice with multiple correlated predictors. – EdM Apr 17 at 15:02
• Can I ask what that parameter is which you said should be at least 0.6? Is it the lasso tuning parameter, or something I should calculate after the model is built? – Tiago Dias Apr 18 at 8:18
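To make the stability-selection recipe from the accepted answer concrete, here is a minimal sketch assuming scikit-learn and NumPy; the toy data, the choice of informative features and the defaults $B=100$, $L=5$ are illustrative choices, not taken from the thread:

```python
# Minimal stability-selection sketch: repeat lasso on random half-samples and
# score each feature by how often it is among the first L to enter the path.
# Assumes scikit-learn/NumPy; all names and constants are illustrative.
import numpy as np
from sklearn.linear_model import lars_path

def stability_scores(X, y, B=100, L=5, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(B):
        idx = rng.choice(n, size=n // 2, replace=False)  # random half-sample
        # rank by order of entry along the lasso/LARS path, not by coefficient size
        _, active, _ = lars_path(X[idx], y[idx], method="lasso")
        counts[np.asarray(active[:L], dtype=int)] += 1   # first L entrants
    return counts / B

# Toy example: 41 predictors, of which 3 actually matter (illustrative).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 41))
y = X[:, 0] + 0.8 * X[:, 5] + 0.5 * X[:, 9] + rng.normal(scale=0.5, size=200)
print(np.round(stability_scores(X, y), 2))
```

Read this way, the 0.6 threshold mentioned in the comments refers to this selection frequency across subsample runs, not to the lasso tuning parameter.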
2019-12-07 04:33:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5517038106918335, "perplexity": 592.1722455963277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540495263.57/warc/CC-MAIN-20191207032404-20191207060404-00369.warc.gz"}
http://www.koreascience.or.kr/article/JAKO201229664764872.page
# A FURTHER GENERALIZATION OF APOSTOL-BERNOULLI POLYNOMIALS AND RELATED POLYNOMIALS

• Tremblay, R. (Department of Mathematics and Computer Science, University of Quebec at Chicoutimi) ;
• Gaboury, S. (Department of Mathematics and Computer Science, University of Quebec at Chicoutimi) ;
• Fugere, J. (Department of Mathematics and Computer Science, Royal Military College)
• Accepted : 2012.06.11
• Published : 2012.09.25

#### Abstract

The purpose of this paper is to introduce and investigate two new classes of generalized Bernoulli and Apostol-Bernoulli polynomials based on the definition given recently by the authors [29]. In particular, we obtain a new addition formula for the new class of the generalized Bernoulli polynomials. We also give an extension and some analogues of the Srivastava-Pintér addition theorem [28] for both classes. Finally, by making use of the new addition formula, we exhibit several interesting relationships between generalized Bernoulli polynomials and other polynomials or special functions.

#### References

1. M. Abramowitz and I. A. Stegun, Handbook of mathematical functions with formulas, graphs and mathematical tables, National Bureau of Standards, Washington, DC, 1964.
2. T. M. Apostol, On the Lerch zeta function, Pacific J. Math. 1 (1951), 161-167. https://doi.org/10.2140/pjm.1951.1.161
3. K. N. Boyadzhiev, Apostol-Bernoulli functions, derivative polynomials and Eulerian polynomials, Advances and Applications in Discrete Mathematics 1 (2008), no. 2, 109-122.
4. J. Choi, P. J. Anderson, and H. M. Srivastava, Some q-extensions of the Apostol-Bernoulli and the Apostol-Euler polynomials of order n, and the multiple Hurwitz zeta function, Appl. Math. Comput. 199 (2008), 723-737. https://doi.org/10.1016/j.amc.2007.10.033
5. L. Comtet, Advanced combinatorics: The art of finite and infinite expansions (translated from French by J. W. Nienhuys), Reidel, Dordrecht, 1974.
6. A. Erdelyi, W. Magnus, F. Oberhettinger, and F. Tricomi, Higher transcendental functions, vols. 1-3, McGraw-Hill, New York, 1953.
7. M. Garg, K. Jain, and H. M. Srivastava, Some relationships between the generalized Apostol-Bernoulli polynomials and Hurwitz-Lerch zeta functions, Integral Transform Spec. Funct. 17 (2006), no. 11, 803-815. https://doi.org/10.1080/10652460600926907
8. E. R. Hansen, A table of series and products, Prentice-Hall, Englewood Cliffs, NJ, 1975.
9. B. Kurt, A further generalization of the Bernoulli polynomials and on the 2D-Bernoulli polynomials $B_n^2(x, y)$, Appl. Math. Sci. 4 (47) (2010), 2315-2322.
10. Y. Luke, The special functions and their approximations, vols. 1-2, Academic Press, New York, 1969.
11. Q.-M. Luo, Apostol-Euler polynomials of higher order and Gaussian hypergeometric functions, Taiwanese J. Math. 10 (4) (2006), 917-925. https://doi.org/10.11650/twjm/1500403883
12. Q.-M. Luo, Fourier expansions and integral representations for the Apostol-Bernoulli and Apostol-Euler polynomials, Math. Comp. 78 (2009), 2193-2208. https://doi.org/10.1090/S0025-5718-09-02230-3
13. Q.-M. Luo, The multiplication formulas for the Apostol-Bernoulli and Apostol-Euler polynomials of higher order, Integral Transform Spec. Funct. 20 (2009), 377-391. https://doi.org/10.1080/10652460802564324
14. Q.-M. Luo, Some formulas for Apostol-Euler polynomials associated with the Hurwitz zeta function at rational arguments, Applicable Analysis and Discrete Mathematics 3 (2009), 336-346. https://doi.org/10.2298/AADM0902336L
15. Q.-M.
Luo, An explicit relationship between the generalized Apostol-Bernoulli and Apostol-Euler polynomials associated with $\lambda$-Stirling numbers of the second kind, Houston J. Math. 36 (2010), 1159-1171.
16. Q.-M. Luo, Extensions of the Genocchi polynomials and their Fourier expansions and integral representations, Osaka J. Math. 48 (2011), 291-310.
17. Q.-M. Luo, B.-N. Guo, F. Qi, and L. Debnath, Generalizations of Bernoulli numbers and polynomials, Int. J. Math. Math. Sci. 59 (2003), 3769-3776.
18. Q.-M. Luo and H. M. Srivastava, Some generalizations of the Apostol-Bernoulli and Apostol-Euler polynomials, J. Math. Anal. Appl. 308 (1) (2005), 290-302. https://doi.org/10.1016/j.jmaa.2005.01.020
19. Q.-M. Luo and H. M. Srivastava, Some generalizations of the Apostol-Genocchi polynomials and the Stirling numbers of the second kind, Appl. Math. Comput. 217 (2011), 5702-5728. https://doi.org/10.1016/j.amc.2010.12.048
20. Q.-M. Luo and H. M. Srivastava, Some relationships between the Apostol-Bernoulli and Apostol-Euler polynomials, Comput. Math. Appl. 51 (2006), 631-642. https://doi.org/10.1016/j.camwa.2005.04.018
21. W. Magnus, F. Oberhettinger and R. P. Soni, Formulas and theorems for the special functions of mathematical physics, third enlarged edition, Springer-Verlag, New York, 1966.
22. P. Natalini and A. Bernardini, A generalization of the Bernoulli polynomials, J. Appl. Math. 3 (2003), 155-163.
23. M. Prevost, Padé approximation and Apostol-Bernoulli and Apostol-Euler polynomials, J. Comput. Appl. Math. 233 (2010), 3005-3017. https://doi.org/10.1016/j.cam.2009.11.050
24. E. D. Rainville, Special functions, Macmillan Company, New York, 1960.
25. H. M. Srivastava and J. Choi, Series associated with the zeta and related functions, Kluwer Academic Publishers, Dordrecht, Boston and London, 2001.
26. H. M. Srivastava, M. Garg, and S. Choudhary, A new generalization of the Bernoulli and related polynomials, Russian J. Math. Phys. 17 (2010), 251-261. https://doi.org/10.1134/S1061920810020093
27. H. M. Srivastava, J.-L. Lavoie, and R. Tremblay, A class of addition theorems, Canad. Math. Bull. 26 (1983), 438-445. https://doi.org/10.4153/CMB-1983-072-1
28. H. M. Srivastava and A. Pintér, Remarks on some relationships between the Bernoulli and Euler polynomials, Appl. Math. Lett. 17 (4) (2004), 375-380. https://doi.org/10.1016/S0893-9659(04)90077-8
29. R. Tremblay, S. Gaboury, and B. J. Fugere, A new class of generalized Apostol-Bernoulli polynomials and some analogues of the Srivastava-Pintér addition theorem, Appl. Math. Lett. 24 (2011), 1888-1893. https://doi.org/10.1016/j.aml.2011.05.012
30. W. Wang, C. Jia, and T. Wang, Some results on the Apostol-Bernoulli and Apostol-Euler polynomials, Comput. Math. Appl. 55 (2008), 1322-1332. https://doi.org/10.1016/j.camwa.2007.06.021

#### Cited by

1. -Extensions for the Apostol Type Polynomials, 2018, https://doi.org/10.1155/2018/2937950
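For reference, the Apostol-Bernoulli polynomials $\mathcal{B}_n(x;\lambda)$ that the paper generalizes are commonly defined, following Apostol [2], by the generating function below; this standard definition is added here for context and is not quoted from the paper itself:

$$\frac{t\,e^{xt}}{\lambda e^{t}-1}=\sum_{n=0}^{\infty}\mathcal{B}_{n}(x;\lambda)\,\frac{t^{n}}{n!},\qquad |t|<2\pi \ \text{if}\ \lambda=1;\quad |t|<|\log\lambda| \ \text{if}\ \lambda\neq 1,$$

with $\mathcal{B}_{n}(x;1)=B_{n}(x)$, the classical Bernoulli polynomials.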
2020-08-14 22:46:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8287566304206848, "perplexity": 2284.774502523397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740343.48/warc/CC-MAIN-20200814215931-20200815005931-00246.warc.gz"}
https://academic.oup.com/heapol/article/doi/10.1093/heapol/czw100/2555417/Has-India-s-national-rural-health-mission-reduced
## Abstract

Background: In 2005, India launched the National Rural Health Mission (NRHM) to strengthen the primary healthcare system. NRHM also aims to encourage pregnant women, particularly those of low socioeconomic backgrounds, to use institutional maternal healthcare. We evaluated the impacts of NRHM on socioeconomic inequities in the uptake of institutional delivery and antenatal care (ANC) across the high-focus (deprived) Indian states.

Methods: Data from District Level Household and Facility Surveys (DLHS) Rounds 1 (1995–99) and 2 (2000–04) from the pre-NRHM period, and Round 3 (2007–08), Round 4 and the Annual Health Survey (2011–12) from the post-NRHM period, were used. Wealth-related and education-related relative indexes of inequality, and pre-post difference-in-differences models for wealth and education tertiles, adjusted for maternal age, rural-urban location, caste, parity and state-level fixed effects, were estimated.

Results: Inequities in institutional delivery declined between pre-NRHM Period 1 (1995–99) and pre-NRHM Period 2 (2000–04), but demonstrated a steeper decline in the post-NRHM periods. Uptake of institutional delivery increased among all socioeconomic groups, with (1) greater effects among the lowest and middle wealth and education tertiles than the highest tertile, and (2) larger equity impacts in the late post-NRHM period 2011–12 than in the early post-NRHM period 2007–08. No positive impact on the uptake of ANC was found in the early post-NRHM period 2007–08; however, there was a considerable increase in the uptake of ANC, and a decline in its inequity, in most states in the late post-NRHM period 2011–12.

Conclusion: In the high-focus states, NRHM resulted in increased uptake of maternal healthcare and a decline in its socioeconomic inequity. Our study suggests that public health programs in developing-country settings will have larger equity impacts after their near-full implementation and widest outreach. Targeting deprived populations and designing public health programs that link maternal and child healthcare components are critical for universal access to healthcare.

Key Messages

• India's National Rural Health Mission (NRHM), one of the largest public health programs in the world, launched in 2005, increased the uptake of institutional delivery and antenatal services, particularly among the poorest socioeconomic groups, in the less-developed high-focus Indian states.
• Although the uptake of antenatal services was not improved in the early post-NRHM period 2007–08, there was a considerable increase in uptake, and a decline in its inequity, in the late post-NRHM period 2011–12.
• Larger equity impacts on the uptake of institutional delivery and antenatal services were found in the late post-NRHM period 2011–12 than in the early post-NRHM period 2007–08, indicating that public health programs in developing-country settings will have larger equity impacts after their near-full implementation and widest outreach.
• The impact of NRHM was greatest in those states with a higher proportion of beneficiaries enrolled under the conditional cash-transfer program Janani Suraksha Yojana.
• Targeting deprived populations and designing public health programs that link maternal and child healthcare components are critical for universal access to healthcare.
## Introduction

India has the highest number of maternal and infant deaths worldwide: it accounts for one-fifth of all global maternal deaths, and 21% of the children under five who die every day in the world are Indian (International Institute for Population Sciences and Macro International 2007; The World Bank 2014). There are large inequalities in maternal and infant mortality rates across Indian states, as well as significant gaps between wealthy and deprived groups within these states (International Institute for Population Sciences and Macro International 2007; International Institute for Population Sciences and Macro International 2010). Children from the poorest communities are more likely to die before they reach the age of 5 (Save the Children 2010), and their stillbirth and neonatal mortality rates are higher than those of higher-income groups (Joshi 2009). Promoting maternal and child health services such as ante-natal care (ANC), institutional delivery and child immunisation reduces maternal and infant mortality rates (Martines et al. 2005; World Health Organisation 2005; Langlois 2013). Even though most primary healthcare in public health facilities is available free of charge, the use of maternal and child health services is still relatively low, with considerable socioeconomic inequity within and across the Indian states (Pallikadavath et al. 2004; Vora et al. 2009; Pathak et al. 2010; Sanneving et al. 2013; Joe 2014).

To address these longstanding inequalities, the Indian government launched the National Rural Health Mission (NRHM) in 2005. Its aim was to strengthen the primary healthcare system. One of its key thrusts was to encourage pregnant women, particularly those of low socioeconomic backgrounds, to use institutional maternal and child healthcare. The NRHM had a set of core strategies, including increasing public health funding, decentralising village- and district-level health planning and management, strengthening the public health service delivery infrastructure, particularly at the village, primary and secondary levels, and promoting the non-profit sector to increase social participation and community empowerment (Planning Commission of India 2001; Government of India 2014b; Sharma and Joe 2014). To ensure wide outreach, the NRHM employed 'Accredited Social Health Activists' (ASHA) at the grass-roots (village) level to support the use of services (Government of India 2014). One of the important components of the NRHM was the 'Janani Suraksha Yojana' (JSY), a cash-transfer programme which provided financial support to enable women from lower socioeconomic groups to give birth in a health facility (Lim et al. 2010; Government of India 2014a).

NRHM implementation varied across the 18 high-focus (deprived) and the 10 low-focus (developed) states, as determined by maternal and child health indicators. On one hand, the high-focus states, where the program was first rolled out, were entitled to more funds from the central government and to additional technical and managerial support. Furthermore, in the high-focus states, all pregnant women were eligible for JSY financial support of Indian rupees (INR) 1400 (∼US$25) per birth, and benefits were paid irrespective of birth order, age and socioeconomic position. On the other hand, the JSY financial assistance of INR 800 (∼US$14) in the low-focus states was limited to women who were below the poverty line, married and aged 19 or more.
Research has found that NRHM's JSY payments were associated with increases in health facility births and a decline in neonatal mortality (Lim et al. 2010; Gupta et al. 2012; Panja et al. 2012; Randive et al. 2013), improvements in immunization rates and breastfeeding practices (Carvalho 2014), a decline in economic inequality in institutional delivery in districts with higher JSY coverage, and a greater decline in maternal mortality in the richest districts than in the poorest (Randive 2014). Most of the available studies examined effects specific to the JSY payments, by defining those who had received the JSY financial payment as the treatment group and those who did not as the control group. Understanding the population-level impacts of NRHM is critical for India's health policy and planning, particularly as the country is striving to attain universal healthcare coverage by exploring several alternative strategies, including supply-side strengthening and demand-side financing. To our knowledge, studies have yet to assess the population-level impact of NRHM.

Furthermore, no studies have examined the impacts of the program design of NRHM. JSY is one of the major components of NRHM, focusing on promoting the uptake of institutional delivery through conditional cash-transfers to mothers and ASHAs. This cash-transfer to mothers was not linked to the uptake of antenatal care (ANC). Any impact of NRHM might therefore be driven by the JSY's conditional cash-transfer for the uptake of institutional delivery, and so might neglect other components of primary healthcare such as ANC, at least in the early post-NRHM period. However, from 2009 to 2010 onwards, several state governments revised the JSY guidelines to also promote the provision of ANC. For instance, in 2009, the Chhattisgarh government made ensuring the provision of ANC one of the eligibility conditions for payment of incentives to ASHAs (Government of Chhattisgarh 2009). Furthermore, the NRHM framework of implementation suggests that full implementation of the program, with wider outreach, would be attained over time (Government of India 2016). For example, the NRHM aimed to achieve 50% coverage of villages with a fully trained ASHA by 2007, and 100% by 2008. Similarly, the target for setting up a Village Health and Sanitation Committee in each village was 30% by 2007 and 100% by 2008, and the target for setting up (and strengthening) Sub Health Centres with two auxiliary nurse midwives (ANM) employed was 60% by 2009 and 100% by 2010. The aim was for most targets to be met by the end of 2010, and the remainder by the end of 2012. However, most of the available studies that assessed the effects of JSY were based on data from the early post-NRHM period of 2006–08, thus excluding the program effects after almost full implementation and widest outreach.

Here, using a difference-in-differences study design, we evaluated the impact of NRHM on socioeconomic inequity in the uptake of institutional delivery and ANC. Since the JSY component of NRHM gives cash-transfers to mothers and ASHAs conditional on the use of institutional delivery, whereas until 2010 no cash-transfer was supplied to ASHAs for the use of ANC, we hypothesise that the impact of NRHM is more likely to be seen in increased uptake of, and reduced socioeconomic inequity in, institutional delivery, but not in ANC use, in the early post-NRHM period. Based on data availability, we classified the post-NRHM period into the 'early post-NRHM period 2007–08' and the 'late post-NRHM period 2011–12'.
We further hypothesise that greater equity impacts of NRHM on the uptake of institutional delivery and ANC are likely to be seen in the late post-NRHM period, after the widest outreach of the program, than in the early post-NRHM period.

## Methods

### Study design

Using a quasi-natural experiment study design, we analysed four national-level cross-sectional survey datasets from the pre- and post-NRHM periods.

### Study samples

Of the 18 high-focus states, we considered the eight empowered action group (EAG) states and seven north-eastern (NE) states in this analysis. The EAG states, where 46% of the Indian population live, were lagging behind in containing population growth and, compared with the rest of the Indian states, had poorer socio-economic, demographic and health indicators. Thus, the committee named 'Empowered Action Group', set up by the government of India in 2001, recommended paying particular attention to these eight states in terms of area-specific programs and action plans for efficient service delivery, in collaboration with various ministries of the union and state governments. The NE states were also socioeconomically less developed and geographically isolated from the rest of India, and represent about 4% of the total Indian population. We excluded the state of Nagaland (an NE state) from our analysis because DLHS-3 was not implemented there. We also excluded the high-focus states of Jammu & Kashmir and Himachal Pradesh. These two states are socioeconomically developed, with better health outcomes in terms of lower maternal and infant mortality rates, along with higher levels of uptake of institutional health services than the EAG states. Indeed, their development is at a similar level to several low-focus NRHM states. For instance, in the pre-NRHM period (2000–04), about 68% and 80% of pregnant women had a minimum of three ANC visits, and 45% and 71% had an institutional delivery, in Himachal Pradesh and Jammu & Kashmir, respectively.

We used data from the repeated cross-sectional surveys of married women from the District Level Household and Facility Surveys (DLHS) Round 1 in 1995–99, Round 2 in 2000–04, Round 3 in 2007–08 and Round 4 in 2011–12, and the Annual Health Survey (AHS) in 2011–12. The DLHS Round 4 was not implemented in the EAG states and Assam (one of the NE states). Instead, the AHS was conducted in these states, with a relatively larger sample size than the DLHS. The AHS followed a survey methodology and survey instruments consistent with the DLHS. The AHS was conducted by the census office, the Government of India, and the DLHS was conducted by the International Institute for Population Sciences (IIPS Mumbai), on behalf of the Ministry of Health and Family Welfare, Government of India. The IIPS is an international research institute and is responsible for the design, development of survey tools and software, training of regional agencies entrusted to undertake the fieldwork in different states, quality assurance, and the overall supervision and management of the DLHS. The DLHS datasets are available for secondary data analysis for research purposes at nominal cost from IIPS, Mumbai, and the AHS dataset from the office of the census of India. The details of the survey methods and survey instruments are available in the national-level overview reports [http://www.rchiips.org/ and http://censusindia.gov.in/]. Briefly, the surveys followed a systematic, multi-stage stratified sampling design.
Women were included in the study sample if they reported a live birth, stillbirth or spontaneous or induced abortion within a specified period before the interview date. This period averaged the past 4 years, starting 1 January 1995 for DLHS-1, 2000 for DLHS-2, 2004 for DLHS-3 and 2008 for DLHS-4. We included a reduced time window of 2007–08 for DLHS-3 to allow for delays in the implementation of NRHM across the country (this resulted in a relatively smaller analytical sample size for DLHS-3 compared with DLHS-1 and DLHS-2). In the DLHS rounds, data on the use of maternal healthcare were collected for the most recent birth. In the AHS, however, the collection of data on the uptake of maternal healthcare was limited to the period 1 January 2011 to 31 December 2011. To be consistent with the AHS data period, we included the reduced time window of 2011–12 for DLHS-4 (rather than 2008–12). We retained the period 2012 in DLHS-4 to avoid loss of sample size. The respondents of DLHS-1 and 2 were currently married women, whereas the respondents of DLHS-3 and DLHS-4/AHS were ever-married women. Thus, DLHS-3 and DLHS-4/AHS were more inclusive of mothers who were divorced, single or widowed. The age profile of the respondents varied across the four rounds. DLHS-1 and DLHS-2 included women aged 15–44, but DLHS-3 and DLHS-4/AHS included women aged 15–49. We therefore excluded women aged 45–49 to ensure consistency across the survey rounds. Missing observations relating to institutional delivery and ANC were excluded from the analysis. The final analytical sample size of married women aged 15–44 years, who had reported a live birth, stillbirth, or spontaneous or induced abortion within a specified period before the interview date, was 131 531 in DLHS-1, 135 035 in DLHS-2, 65 090 in DLHS-3 and 400 702 in DLHS-4/AHS. To adjust for sample selection and post-stratification factors in the analysis, state-level sampling weights were used. The data analysis was performed with Stata 13.0 (StataCorp 2014).

### Definition of variables

The outcome variables were the uptake of institutional delivery and of ANC, both dichotomous (Yes = 1; No = 0). Institutional delivery is defined as delivery in a healthcare facility of any type. For ANC, we adhered to the JSY-recommended number of at least three ANC visits, either at a healthcare facility or through a healthcare worker visiting pregnant women to give ANC. The location of residence (rural/urban), age of the woman at interview, years of highest education (either of the respondent or her husband, whichever was higher) and an asset index were included in the analysis. The asset index (a proxy for wealth and income) and years of highest education were used as two distinct measures of socioeconomic position. The asset index used for DLHS-3 was the one originally provided in the dataset, which was estimated using principal component analysis of variables such as ownership of the house and its features, including toilet, electricity connection, cooking gas, fridge, fan, television, radio, sewing machine and vehicles. Asset scores for DLHS-1, DLHS-2 and DLHS-4/AHS were generated by the research team using the same approach (Filmer and Pritchett 2001).
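As a rough sketch of that asset-index construction (assuming scikit-learn and pandas; the indicator names and the `asset_index` helper are illustrative, not the authors' code):

```python
# Minimal sketch of a Filmer-Pritchett style asset index: the first principal
# component of standardised 0/1 ownership indicators (toilet, electricity,
# fridge, ...). Assumes scikit-learn/pandas; all names are illustrative.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def asset_index(assets: pd.DataFrame) -> pd.Series:
    z = StandardScaler().fit_transform(assets)    # standardise each indicator
    score = PCA(n_components=1).fit_transform(z)  # first principal component
    return pd.Series(score.ravel(), index=assets.index, name="asset_index")

# Households could then be ranked on this score and split into tertiles, e.g.:
# tertile = pd.qcut(asset_index(assets), 3, labels=["lowest", "middle", "highest"])
```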
### Analytical steps and statistical tools

The analysis followed two major steps. First, using the asset-related and education-related relative index of inequality (RII), we measured the population-level socioeconomic inequities in the uptake of institutional delivery and ANC in the pre- and post-NRHM periods. The RII is a regression-based measure of inequality that takes the whole socioeconomic distribution of the population into account. When the RII > 1, relatively more women of higher than of lower socioeconomic position utilise maternal healthcare, and vice versa when the RII < 1. Although other measures of inequality are available, such as the concentration index and the absolute slope index of inequality, the RII is recommended for comparisons over time or across populations (Kunst and Mackenbach 1990; Ernstsen et al. 2012). We measured socioeconomic position using asset indices and educational attainment. To estimate the RII, we transformed these into a summary measure, namely a ridit score (separately for asset and education groups), scaled from zero to one: the groups were arranged in order from lowest to highest socioeconomic position, each group was assigned the cumulative proportion of the population up to it, and the score was weighted to reflect the share of the sample at each asset score and educational attainment (Harper and Lynch 2006). We used generalised linear models (log-binomial regression) with a logarithmic link function to calculate RIIs (rate ratios) (Barros and Hirakata 2003; Spiegelman and Hertzmark 2005; Ernstsen et al. 2012). Trends in the RII over time were assessed by the inclusion of an interaction term of ridit score and time, and the corresponding P-value is reported in the study (Harper and Lynch 2006).
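A minimal sketch of that first step, assuming statsmodels and pandas (the `ridit` helper and variable names are illustrative rather than the authors' Stata code); the DiD specification for the second step follows below:

```python
# Sketch of the RII: a log-binomial GLM of uptake on the ridit score of
# socioeconomic position. `outcome` is a 0/1 uptake indicator and `group`
# an ordered (low -> high) SEP category; names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def ridit(group: pd.Series) -> pd.Series:
    """Cumulative population share up to the midpoint of each ordered group."""
    shares = group.value_counts(normalize=True).sort_index()
    midpoints = shares.cumsum() - shares / 2
    return group.map(midpoints)

def rii(outcome: pd.Series, group: pd.Series) -> float:
    X = sm.add_constant(ridit(group).rename("ridit"))
    family = sm.families.Binomial(link=sm.families.links.Log())
    fit = sm.GLM(outcome, X, family=family).fit()
    # exp(slope) compares the top (ridit = 1) with the bottom (ridit = 0)
    return float(np.exp(fit.params["ridit"]))
```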
Second, using pre-post difference-in-differences (DiD) models adjusted for maternal age, parity, rural-urban location, caste and state-level fixed effects, we estimated the effects of NRHM for each wealth and education tertile. In addition to the reference-period data of the pre-NRHM period 2000–04 (DLHS-2), we also included the pre-NRHM period data of 1995–99 (DLHS-1) to capture the pre-NRHM trends in institutional delivery and ANC. Various socioeconomic groups had differential uptake of maternal health services in the pre-NRHM period, and several confounders (other than NRHM), such as various poverty alleviation programs and economic growth, would differentially affect the socioeconomic tertiles over time. Moreover, the JSY was launched by modifying the existing National Maternity Benefit Scheme (NMBS). The NMBS provided financial assistance of INR 500 per birth (up to two live births) to pregnant women aged 19 or more belonging to below-the-poverty-line (BPL) households. Thus, the use of two pre-NRHM period datasets allows us to control for the effect of the NMBS on the uptake of maternal healthcare. The DLHS data meet the two key assumptions of the DiD method, namely the parallel-trends assumption and the stable unit treatment value assumption. Per the parallel-trends assumption, the treatment and the control group would follow the same time trend in the absence of the treatment, and any change in data collection should not influence the trends. To our knowledge, there were no specific changes in data collection over time that would influence the trends, as the DLHS used the same sampling framework and variables of interest across the three rounds, without any systematic undercounts or overcounts for one group. Similarly, the DLHS data satisfy the stable unit treatment value assumption, as there were no observable spill-over effects between treated units. We treated the lowest and middle wealth (and education) tertiles as the target group, and the highest tertile as the non-target group.

The basic form of our DiD model is the following:

$$Y_i = \beta_0 + \beta_1\,\mathrm{NRHMYear}_i + \beta_2\,\mathrm{BaseYear}_i + \beta_3\,\mathrm{SEP}_{it} + \beta_4\,\mathrm{NRHMYear}_i \times \mathrm{BaseYear}_i \times \mathrm{SEP}_{it} + \beta_5\,\mathrm{parity}_{it} + \beta_6\,\mathrm{age}_{it} + \beta_7\,\mathrm{location}_{it} + \beta_8\,\mathrm{caste}_{it} + \beta_9\,\mathrm{state}_{it} + e_{it}$$

where $Y_i$ is the outcome indicator (institutional delivery or ANC) for respondent $i$; NRHMYear = 1 if post-NRHM period (either 2007–08 or 2011–12; we estimated two models, one for each post-NRHM period) and 0 if DLHS-2 (2000–04); BaseYear = 1 if the pre-NRHM period of DLHS-2 (2000–04) and 0 if DLHS-1 (1995–99); SEP denotes the three socioeconomic (wealth or education) tertiles; parity = 1 if the total number of reported live births, stillbirths, or spontaneous or induced abortions is > 1, and 0 otherwise; age denotes the age of the respondent; location = 1 if the respondent lives in an urban location and 0 if rural; caste is defined as three categories, namely scheduled caste, scheduled tribe and other (forward) caste; and state denotes each individual state, to capture state-level fixed effects. The coefficients of the triple interaction terms measure the effects of NRHM for each socioeconomic tertile.
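To illustrate the specification (a sketch with synthetic data as a linear probability model, not the authors' Stata code; the real analysis also applied survey weights, and the `*` expansion below adds the pairwise interactions alongside the triple term):

```python
# Sketch of the triple-interaction DiD above as a linear probability model.
# Synthetic toy data stand in for the pooled DLHS rounds; all values and
# column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "uptake":   rng.integers(0, 2, n),   # institutional delivery (0/1)
    "NRHMYear": rng.integers(0, 2, n),   # 1 = post-NRHM round
    "BaseYear": rng.integers(0, 2, n),   # 1 = DLHS-2, 0 = DLHS-1
    "SEP":      rng.choice(["lowest", "middle", "highest"], n),
    "parity":   rng.integers(0, 2, n),
    "age":      rng.integers(15, 45, n),
    "location": rng.integers(0, 2, n),   # 1 = urban
    "caste":    rng.choice(["SC", "ST", "other"], n),
    "state":    rng.choice(["Bihar", "Odisha", "Assam"], n),
})

fit = smf.ols(
    "uptake ~ NRHMYear * BaseYear * C(SEP) + parity + age + location"
    " + C(caste) + C(state)",
    data=df,
).fit(cov_type="HC1")

# The NRHMYear:BaseYear:C(SEP)[...] terms are the per-tertile DiD effects,
# read as percentage-point changes in uptake.
print(fit.params.filter(like="NRHMYear:BaseYear"))
```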
## Results

Table 1 describes the four study samples. The mean age of the women was 27.3 years in DLHS-1, 26.4 years in DLHS-2, 25.4 years in DLHS-3 and 26.1 years in DLHS-4/AHS.

Table 1. Description of the study samples

| | DLHS-1 (1995–99) | DLHS-2 (2000–04) | DLHS-3 (2007–08) | DLHS-4/AHS (2011–12) |
|---|---|---|---|---|
| N | 131,531 | 135,035 | 65,090 | 400,702 |
| Age, mean [SD] | 27.3 [5.7] | 26.4 [5.7] | 25.4 [5.3] | 26.1 [4.9] |
| Years of education, mean [SD] | 2.6 [4.1] | 3.5 [4.7] | 3.7 [4.5] | 4.1 [3.8] |
| Years of highest education in the family, mean [SD] | 5.9 [5.2] | 6.3 [5.2] | 6.8 [4.9] | 6.9 [4.9] |
| Rural (%) | 85.0 | 80.1 | 74.5 | 73.3 |
| Caste: SC (%) | 19.6 | 21.0 | 21.4 | 20.3 |
| Caste: ST (%) | 10.4 | 10.7 | 17.7 | 10.5 |
| Caste: other (%) | 69.9 | 68.1 | 60.9 | 69.2 |
| Minimum three ante-natal care visits (%) | 25.9 | 32.0 | 34.1 | 58.4 |
| Institutional delivery (%) | 20.6 | 26.6 | 39.1 | 65.4 |
| Skilled birth attendance at home (%) | 12.2 | 13.7 | 5.1 | 7.7 |
| JSY finance benefit recipients (%) | – | – | 18.0 | 49.1 |

Source: Authors' estimates from the DLHS and the AHS data.

### Changes in uptake of institutional delivery and ANC

Most EAG and NE states experienced a considerably larger increase in the uptake of institutional delivery from pre-NRHM Period 2 to the post-NRHM periods than from pre-NRHM Period 1 to pre-NRHM Period 2 (Figure 1). For instance, in the EAG states as a whole, there was an increase of 13 and 40 percentage points in the uptake of institutional delivery in the early post-NRHM period 2007–08 (38.3%) and the late post-NRHM period 2011–12 (65.5%), respectively, from pre-NRHM Period 2 of 2000–04 (24.8%), as against an increase of 7 percentage points in pre-NRHM Period 2 from pre-NRHM Period 1 of 1995–99 (18.5%). Similarly, in the NE states, there was an increase of 8 and 33 percentage points in the early post-NRHM period 2007–08 (42.6%) and the late post-NRHM period 2011–12 (68.0%), as against an increase of 6 percentage points in pre-NRHM Period 2 (34.9%) from pre-NRHM Period 1 (28.6%). On the other hand, there was no significant improvement in the uptake of ANC in the EAG states (23.2% in 1995–99, 29.2% in 2000–04 and 29.9% in 2007–08), and only a moderate increase in the NE states (36.2% in 1995–99, 45.0% in 2000–04 and 51.8% in 2007–08), in the early post-NRHM period 2007–08. However, there was considerable improvement in the uptake of ANC in the late post-NRHM period 2011–12 (57.5% in the EAG and 72.9% in the NE states).

Figure 1. Percentage of eligible women, age 15–44, using institutional delivery and ante-natal care in pre-NRHM periods (1995–99 and 2000–04) and post-NRHM periods (2007–08 and 2011–12), in high-focus empowered action group (EAG) and north eastern (NE) Indian states. Notes: (i) Authors' estimates from the DLHS and the AHS data; (ii) error bars show 95% confidence intervals.

### Trends in inequity of the uptake of institutional delivery and ANC

Figure 2 shows the estimates of the RII. Large socio-economic inequities in the uptake of institutional delivery and ANC, favouring higher socioeconomic groups, were found in pre-NRHM Periods 1 and 2. A similar pattern was observed in the post-NRHM periods, but the magnitude of inequity in institutional delivery dropped considerably. For example, in the EAG states the wealth-related RII for institutional delivery fell from 14.5 [95% CI: 13.2; 15.9] in 1995–99 to 11.7 [95% CI: 11.2; 12.2] in 2000–04 (P for trend < 0.001), to 3.6 [95% CI: 3.5; 3.8] in 2007–08 (P for trend < 0.001), to 1.3 [95% CI: 1.3; 1.3] in 2011–12 (P for trend < 0.001). Although there was only a moderate decline in inequity in the uptake of ANC between the pre-NRHM periods and the early post-NRHM period 2007–08, there was a considerable decline in the late post-NRHM period 2011–12. In the EAG states, the wealth-related RII fell from 9.3 [95% CI: 8.2; 10.6] in 1995–99 to 5.9 [95% CI: 5.6; 6.1] in 2000–04 (P for trend < 0.001), to 4.5 [95% CI: 4.3; 4.8] in 2007–08 (P for trend < 0.001), to 1.5 [95% CI: 1.5; 1.5] in 2011–12 (P for trend < 0.001). A similar pattern was found in the NE states.

Figure 2. Wealth-related relative index of inequality (RII) in the uptake of institutional delivery and ante-natal care in pre-NRHM periods (1995–99 and 2000–04) and post-NRHM periods (2007–08 and 2011–12), in high-focus empowered action group (EAG) and north eastern (NE) Indian states.
Notes: (i) Authors' estimates from the DLHS and the AHS data; (ii) the RII value reported in the figure is (RII-1); (iii) positive values of the reported RII denote inequity in favour of the rich; (iv) error bars denote 95% confidence intervals.

### Effects of NRHM

The above-stated changes in inequity in the uptake of institutional delivery were evident for each socioeconomic tertile too, as shown by the observed and predicted probabilities of uptake for each wealth and education tertile. The observed probability of uptake of institutional delivery was greater than the predicted probability in both the EAG and NE states for each socioeconomic tertile, which indicates a positive impact of NRHM. The estimated effects of NRHM showed that each socioeconomic tertile in most of the EAG and NE states had a positive program effect on the uptake of institutional delivery in the early post-NRHM period 2007–08 (Table 2). The lowest and middle tertiles had a greater impact on uptake than the highest wealth and education tertiles, particularly in the EAG states. In the EAG states, the net increase in the uptake of institutional delivery in the early post-NRHM period 2007–08 was 13.3% (β = 0.133; P < 0.001), 15.7% (β = 0.157; P < 0.001) and 5.3% (β = 0.053; P < 0.001) for the lowest, middle and highest wealth tertiles, respectively. That is, in the EAG states, there was an 8.05% (95% CI: 7.98%, 8.12%) and a 10.36% (95% CI: 10.28%, 10.44%) higher differential increase in the uptake of institutional delivery for the lowest and middle tertiles, respectively, over the highest wealth tertile. In the NE states, the uptake of institutional delivery also increased, but with a lesser magnitude of 4.4% (β = 0.044; P < 0.01), 5.9% (β = 0.059; P < 0.001) and 3.7% (β = 0.037; P < 0.05) for the lowest, middle and highest wealth tertiles, respectively. That is, there was a moderate differential increase in uptake of 0.7% (95% CI: 0.7%, 0.8%) for the lowest and 2.2% (95% CI: 2.2%, 2.3%) for the middle tertiles over the highest wealth tertile.
Table 2. Effects of NRHM on the uptake of institutional delivery and ante-natal care among wealth and education tertiles in the early post-NRHM period 2007–08, in high-focus empowered action group (EAG) and north eastern (NE) Indian states. Columns (1)–(6) refer to institutional delivery; columns (7)–(12) to ante-natal care.

| State | (1) Wealth: lowest | (2) Wealth: middle | (3) Wealth: highest | (4) Educ.: lowest | (5) Educ.: middle | (6) Educ.: highest | (7) Wealth: lowest | (8) Wealth: middle | (9) Wealth: highest | (10) Educ.: lowest | (11) Educ.: middle | (12) Educ.: highest |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| *EAG states* | | | | | | | | | | | | |
| Jharkhand | −0.02 ±0.02 | 0.03 ±0.02 | 0.01 ±0.03 | 0.02 ±0.01 | −0.02 ±0.02 | −0.02 ±0.03 | 0.05 ±0.02** | 0.14 ±0.03* | 0.11 ±0.03** | 0.07 ±0.02** | 0.07 ±0.02** | 0.03 ±0.03 |
| Uttar Pradesh | 0.07 ±0.01* | 0.06 ±0.01* | −0.02 ±0.01 | 0.08 ±0.01* | 0.09 ±0.01* | 0.07 ±0.01* | −0.06 ±0.01* | −0.09 ±0.01* | −0.15 ±0.01* | 0.01 ±0.01 | 0.02 ±0.01 | −0.02 ±0.01 |
| Bihar | 0.13 ±0.01* | 0.21 ±0.01* | 0.09 ±0.02* | 0.16 ±0.01* | 0.12 ±0.01* | −0.04 ±0.02 | 0.07 ±0.01* | 0.13 ±0.01* | −0.05 ±0.02** | 0.15 ±0.01* | 0.06 ±0.01* | −0.17 ±0.02* |
| Chhattisgarh | 0.04 ±0.02*** | 0.07 ±0.03** | −0.07 ±0.04*** | 0.04 ±0.02*** | 0.09 ±0.03* | −0.05 ±0.03 | −0.22 ±0.03* | −0.29 ±0.04* | −0.38 ±0.04* | −0.10 ±0.03** | −0.15 ±0.04* | −0.26 ±0.04* |
| Uttarakhand | 0.09 ±0.03** | 0.15 ±0.03* | 0.08 ±0.05 | 0.09 ±0.03** | 0.06 ±0.03 | 0.01 ±0.05 | −0.01 ±0.03 | 0.01 ±0.04 | 0.02 ±0.05 | 0.03 ±0.03 | 0.04 ±0.04 | 0.06 ±0.05 |
| Rajasthan | 0.21 ±0.02* | 0.20 ±0.02* | 0.12 ±0.02* | 0.19 ±0.02* | 0.22 ±0.02* | 0.18 ±0.02* | −0.13 ±0.02* | −0.20 ±0.02* | −0.22 ±0.02* | −0.08 ±0.02* | −0.07 ±0.02* | −0.15 ±0.02* |
| Orissa | 0.11 ±0.02* | 0.16 ±0.03* | 0.11 ±0.03* | 0.14 ±0.02* | 0.16 ±0.02* | 0.09 ±0.03** | −0.07 ±0.02** | −0.15 ±0.03* | −0.13 ±0.03* | 0.06 ±0.02*** | 0.01 ±0.02 | −0.09 ±0.03** |
| Madhya Pradesh | 0.31 ±0.01* | 0.39 ±0.02* | 0.22 ±0.02* | 0.35 ±0.01* | 0.41 ±0.02* | 0.25 ±0.02* | −0.13 ±0.01* | −0.15 ±0.02* | −0.23 ±0.02* | 0.01 ±0.01 | 0.02 ±0.02 | −0.17 ±0.02* |
| All EAG states | 0.13 ±0.01* | 0.16 ±0.01* | 0.05 ±0.01* | 0.14 ±0.00* | 0.15 ±0.01* | 0.07 ±0.01* | −0.05 ±0.01* | −0.08 ±0.01* | −0.15 ±0.01* | 0.03 ±0.01* | 0.01 ±0.01 | −0.10 ±0.00* |
| *NE states* | | | | | | | | | | | | |
| Arunachal Pradesh | 0.08 ±0.03*** | −0.003 ±0.04 | −0.01 ±0.04 | 0.08 ±0.03** | 0.12 ±0.04** | 0.11 ±0.04** | 0.00 ±0.03 | −0.14 ±0.04** | −0.21 ±0.04* | 0.08 ±0.03*** | 0.03 ±0.04 | 0.04 ±0.04 |
| Assam | 0.20 ±0.02* | 0.17 ±0.03* | 0.08 ±0.03** | 0.18 ±0.02* | 0.22 ±0.02* | 0.27 ±0.03* | 0.03 ±0.02 | −0.09 ±0.03** | −0.31 ±0.03* | 0.08 ±0.02** | 0.02 ±0.03 | −0.14 ±0.03* |
| Meghalaya | −0.03 ±0.04 | 0.17 ±0.04* | 0.29 ±0.04* | 0.04 ±0.03 | 0.24 ±0.03* | 0.19 ±0.05* | −0.19 ±0.05* | 0.03 ±0.04 | 0.11 ±0.05*** | −0.03 ±0.04 | 0.08 ±0.05 | −0.05 ±0.05 |
| Sikkim | −0.15 ±0.07*** | −0.21 ±0.07** | −0.22 ±0.07** | −0.06 ±0.06 | −0.23 ±0.07** | −0.19 ±0.06** | 0.03 ±0.07 | −0.08 ±0.07 | −0.29 ±0.07* | 0.04 ±0.06 | 0.05 ±0.07 | −0.08 ±0.06 |
| Manipur | −0.09 ±0.04*** | −0.08 ±0.05 | −0.11 ±0.06 | −0.05 ±0.03 | −0.15 ±0.05** | −0.16 ±0.06** | 0.21 ±0.05* | 0.09 ±0.05 | 0.11 ±0.05*** | 0.11 ±0.04** | 0.15 ±0.05** | −0.14 ±0.05** |
| Tripura | −0.25 ±0.07** | −0.36 ±0.08* | −0.15 ±0.07*** | −0.32 ±0.07* | −0.20 ±0.07** | −0.04 ±0.08 | −0.54 ±0.07* | −0.44 ±0.08* | −0.22 ±0.07** | −0.40 ±0.07* | −0.38 ±0.07* | −0.13 ±0.08 |
| Mizoram | 0.15 ±0.05** | 0.24 ±0.05* | 0.10 ±0.04*** | 0.05 ±0.04 | −0.03 ±0.04 | 0.05 ±0.05 | 0.46 ±0.05* | 0.40 ±0.05* | 0.17 ±0.04* | 0.24 ±0.05* | 0.17 ±0.04* | 0.13 ±0.05*** |
| All NE states | 0.04 ±0.01** | 0.06 ±0.02* | 0.04 ±0.02*** | 0.06 ±0.01* | 0.07 ±0.02* | 0.10 ±0.02* | 0.02 ±0.02 | −0.03 ±0.02 | −0.13 ±0.02* | 0.06 ±0.01* | 0.04 ±0.02** | −0.06 ±0.02* |

Notes: (i) Coefficients are average treatment effects ± standard errors; (ii) the models were estimated separately for wealth and education tertiles, and repeated for each state and state group, after controlling for age, rural-urban location, parity and caste; (iii) *P < 0.001, **P < 0.01, ***P < 0.05; (iv) the estimates are from data of DLHS-1 (1995–99), DLHS-2 (2000–04) and DLHS-3 (2007–08).

In the late post-NRHM period 2011–12, we found greater equity in the uptake of institutional delivery. In the EAG states as a whole, the net increase in the uptake of institutional delivery was 49%, 42% and 18% for the lowest, middle and highest wealth tertiles, respectively (Table 3).
Similarly, in the NE states, the net increase in the uptake of institutional delivery was 31%, 22% and 7% for the lowest, middle and highest wealth tertiles, respectively. Similar trends were found for the education tertiles.

Table 3. Effects of NRHM on the uptake of institutional delivery and ante-natal care among wealth and education tertiles in the late post-NRHM period 2011–12, in high-focus empowered action group (EAG) and north eastern (NE) Indian states. Columns (1)–(6) refer to institutional delivery; columns (7)–(12) to ante-natal care.

| State | (1) Wealth: lowest | (2) Wealth: middle | (3) Wealth: highest | (4) Educ.: lowest | (5) Educ.: middle | (6) Educ.: highest | (7) Wealth: lowest | (8) Wealth: middle | (9) Wealth: highest | (10) Educ.: lowest | (11) Educ.: middle | (12) Educ.: highest |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| *EAG states* | | | | | | | | | | | | |
| Jharkhand | 0.24 ±0.02* | 0.27 ±0.02* | 0.20 ±0.03* | 0.22 ±0.01* | 0.23 ±0.02* | 0.18 ±0.02* | 0.38 ±0.02* | 0.47 ±0.03* | 0.30 ±0.03* | 0.39 ±0.02* | 0.33 ±0.02* | 0.19 ±0.03* |
| Uttar Pradesh | 0.45 ±0.01* | 0.39 ±0.01* | 0.18 ±0.01* | 0.41 ±0.01* | 0.39 ±0.01* | 0.28 ±0.01* | 0.22 ±0.01* | 0.18 ±0.01* | 0.12 ±0.01* | 0.26 ±0.01* | 0.28 ±0.01* | 0.28 ±0.01* |
| Bihar | 0.46 ±0.01* | 0.45 ±0.01* | 0.19 ±0.02* | 0.46 ±0.01* | 0.31 ±0.01* | 0.06 ±0.02* | 0.31 ±0.01* | 0.34 ±0.01* | 0.08 ±0.02* | 0.39 ±0.01* | 0.27 ±0.01* | −0.07 ±0.02* |
| Chhattisgarh | 0.26 ±0.02* | 0.29 ±0.02* | −0.06 ±0.03*** | 0.29 ±0.02* | 0.27 ±0.02* | −0.08 ±0.03** | 0.10 ±0.03* | −0.12 ±0.03* | −0.28 ±0.03* | 0.14 ±0.03* | 0.04 ±0.03 | −0.17 ±0.03* |
| Uttarakhand | 0.38 ±0.03* | 0.42 ±0.03* | 0.31 ±0.04* | 0.30 ±0.02* | 0.35 ±0.03* | 0.23 ±0.05* | 0.32 ±0.03* | 0.25 ±0.03* | 0.20 ±0.04* | 0.31 ±0.03* | 0.32 ±0.03* | 0.22 ±0.05* |
| Rajasthan | 0.53 ±0.01* | 0.48 ±0.02* | 0.29 ±0.02* | 0.53 ±0.01* | 0.46 ±0.02* | 0.34 ±0.02* | 0.26 ±0.01* | 0.17 ±0.02* | 0.05 ±0.02** | 0.29 ±0.01* | 0.29 ±0.02* | 0.13 ±0.02* |
| Orissa | 0.51 ±0.02* | 0.35 ±0.02* | 0.09 ±0.02* | 0.49 ±0.01* | 0.38 ±0.02* | 0.04 ±0.02 | 0.34 ±0.02* | 0.11 ±0.03* | −0.05 ±0.02 | 0.44 ±0.02* | 0.28 ±0.02* | −0.02 ±0.02 |
| Madhya Pradesh | 0.65 ±0.01* | 0.57 ±0.01* | 0.26 ±0.02* | 0.65 ±0.01* | 0.57 ±0.01* | 0.27 ±0.02* | 0.42 ±0.01* | 0.29 ±0.02* | 0.00 ±0.02 | 0.49 ±0.01* | 0.43 ±0.02* | 0.06 ±0.02* |
| All EAG states | 0.49 ±0.00* | 0.42 ±0.01* | 0.18 ±0.01* | 0.44 ±0.00* | 0.40 ±0.01* | 0.19 ±0.01* | 0.31 ±0.01* | 0.22 ±0.01* | 0.07 ±0.01* | 0.34 ±0.00* | 0.33 ±0.01* | 0.09 ±0.01* |
| *NE states* | | | | | | | | | | | | |
| Arunachal Pradesh | 0.06 ±0.03** | 0.10 ±0.04* | 0.00 ±0.04 | 0.09 ±0.03* | 0.15 ±0.03* | 0.13 ±0.04* | −0.05 ±0.03 | −0.09 ±0.04** | −0.14 ±0.04* | 0.05 ±0.03 | 0.06 ±0.03 | 0.08 ±0.04*** |
| Assam | 0.55 ±0.01* | 0.43 ±0.02* | 0.19 ±0.03* | 0.49 ±0.01* | 0.47 ±0.02* | 0.42 ±0.03* | 0.40 ±0.02* | 0.13 ±0.03* | −0.19 ±0.03* | 0.43 ±0.02* | 0.43 ±0.02* | 0.01 ±0.03 |
| Meghalaya | 0.15 ±0.04* | 0.42 ±0.04* | 0.45 ±0.05* | 0.21 ±0.03* | 0.41 ±0.04* | 0.33 ±0.05* | 0.04 ±0.05 | 0.23 ±0.05* | 0.25 ±0.05* | 0.17 ±0.04* | 0.19 ±0.05* | 0.13 ±0.05* |
| Sikkim | 0.24 ±0.06* | 0.13 ±0.07 | −0.16 ±0.07** | 0.24 ±0.06* | 0.06 ±0.07 | −0.09 ±0.06 | 0.15 ±0.06*** | −0.03 ±0.07 | −0.23 ±0.06* | 0.12 ±0.06*** | 0.08 ±0.07 | −0.09 ±0.06 |
| Manipur | 0.05 ±0.05 | 0.12 ±0.05** | −0.09 ±0.05 | 0.08 ±0.03** | −0.07 ±0.05 | −0.14 ±0.06** | 0.22 ±0.05* | 0.12 ±0.05*** | −0.01 ±0.05 | 0.08 ±0.04*** | 0.14 ±0.05** | −0.26 ±0.05* |
| Tripura | 0.09 ±0.07 | −0.10 ±0.08 | 0.03 ±0.07 | −0.09 ±0.07 | −0.01 ±0.07 | 0.06 ±0.07 | −0.20 ±0.07** | −0.22 ±0.08** | −0.15 ±0.07*** | −0.22 ±0.07* | −0.17 ±0.07** | −0.14 ±0.07 |
| Mizoram | 0.28 ±0.05* | 0.41 ±0.05* | 0.17 ±0.04* | 0.21 ±0.04* | 0.09 ±0.04** | 0.09 ±0.04*** | 0.38 ±0.05* | 0.40 ±0.05* | 0.19 ±0.04* | 0.24 ±0.04* | 0.08 ±0.04 | 0.10 ±0.05*** |
| All NE states | 0.42 ±0.01* | 0.31 ±0.02* | 0.15 ±0.02* | 0.36 ±0.01* | 0.34 ±0.01 | 0.29 ±0.02* | 0.33 ±0.01* | 0.12 ±0.02* | −0.07 ±0.02* | 0.34 ±0.01* | 0.20 ±0.02* | 0.04 ±0.02** |

Notes: (i) Coefficients are average treatment effects ± standard errors; (ii) the models were estimated separately for wealth and education tertiles, and repeated for each state and state group, after controlling for age, rural-urban location, parity and caste; (iii) *P < 0.001, **P < 0.01, ***P < 0.05; (iv) the estimates are from data of DLHS-1 (1995–99), DLHS-2 (2000–04) and DLHS-4/AHS (2011–12).
In contrast, the observed probability of the uptake of ANC was lower than the predicted probability in both EAG and NE states for each socioeconomic tertile in the early post-NRHM period 2007–08. Correspondingly, negative effects on the uptake of ANC were found for each socioeconomic tertile in the early post-NRHM period 2007–08 (Table 2). In the EAG states as a whole, the uptake of ANC for the lowest, middle and highest wealth tertiles decreased by 5.3% (β = −0.053; P < 0.001), 8.0% (β = −0.080; P < 0.001) and 15.1% (β = −0.151; P < 0.001), respectively. In the NE states, there were no significant effects on the uptake of ANC for the lowest and middle wealth tertiles, but a negative effect for the highest wealth tertile (β = −0.131; P < 0.001). However, in the late post-NRHM period 2011–12, there was considerable improvement in the uptake of ANC, particularly for the lowest socioeconomic tertiles (Table 3).

### Interstate variations in the impacts of NRHM

Considerable inter-state variations were found in the population-level effects of NRHM on the uptake of both institutional delivery and ANC. Of the eight EAG states, no impact of NRHM on the uptake of institutional delivery was found in Jharkhand in the early post-NRHM period 2007–08, but positive impacts were found in the rest of the EAG states (Table 2). Of the seven NE states, negative effects of NRHM on the uptake of institutional delivery were found among the low wealth tertiles in Sikkim, Manipur and Tripura, and no effect was found in Meghalaya. However, in the late post-NRHM period 2011–12, the lowest and middle wealth and education tertiles of most states, except Manipur and Tripura, showed positive impacts on the uptake of institutional delivery.

For the uptake of ANC in the early post-NRHM period 2007–08, positive effects of NRHM were found particularly among the low wealth tertile in the EAG states of Jharkhand and Bihar, and negative effects in the remaining six EAG states. In the NE states, among the low wealth tertile, no impact on the uptake of ANC was found in Arunanchal Pradesh, Assam and Sikkim, while a positive impact was found in Manipur and Mizoram, and a negative impact in Meghalaya and Tripura. In the late post-NRHM period 2011–12, positive impacts on the uptake of ANC were found among the low and middle socioeconomic tertiles in most of the states, except Arunachal Pradesh, Meghalaya and Tripura.

We also found that the effects of JSY coverage and of NRHM on the uptake of institutional delivery and ANC showed a similar pattern. For instance, in the early post-NRHM period 2007–08 in Jharkhand, where only 3.3% of the eligible women had received the JSY cash-transfer, there was no population-level impact of NRHM on the uptake of institutional delivery, but there were positive impacts on the uptake of ANC. By the late post-NRHM period 2011–12, 28% of the eligible women had received the JSY cash-transfer, and this was associated with positive impacts on the uptake of institutional delivery as well. Consistent with this, the three EAG states with higher levels of JSY coverage in 2007–08, namely Rajasthan (32.5%), Orissa (38.9%) and Madhya Pradesh (42.8%), had positive population-level effects on the uptake of institutional delivery.

### Variation in impacts between wealth and education groups

We found broadly similar patterns in the RII and in the estimated effects between the wealth and education measures of socioeconomic position.
For example, in the EAG states in the early post-NRHM period 2007–08, the effects of NRHM on the uptake of institutional delivery for the lowest (β = 0.133; P < 0.001), middle (β = 0.157; P < 0.001) and highest wealth tertiles (β = 0.053; P < 0.001) are similar to those for the lowest (β = 0.14; P < 0.001), middle (β = 0.15; P < 0.001) and highest education tertiles (β = 0.07; P < 0.001). However, in some states there were a few instances where the pattern of results was inconsistent between wealth and education tertiles.

### JSY coverage and inequity over time

Only 17.9% of the eligible women in the EAG states (ranging from 3% in Jharkhand to 43% in Madhya Pradesh) and 18.5% of the eligible women in the NE states (ranging from 3% in Meghalaya to 30% in Mizoram) received financial incentives from the JSY in the early post-NRHM period 2007–08. The percentage of eligible women receiving financial incentives increased in the late post-NRHM period 2011–12, to 48.6% in the EAG states (ranging from 28% in Jharkhand to 76.6% in Madhya Pradesh) and 56.3% in the NE states (ranging from 12.7% in Meghalaya to 56.7% in Assam). Furthermore, there was inequity in the receipt of JSY: people from higher socioeconomic groups benefited more than people from lower socioeconomic groups in the early post-NRHM period 2007–08, and this pattern had reversed or attenuated in the late post-NRHM period 2011–12. The wealth-related RII in the receipt of JSY incentives declined from 1.3 (95% CI: 1.2; 1.4) in 2007–08 to 0.95 (95% CI: 0.94; 0.97) in 2011–12 in the EAG states, and from 3.6 (95% CI: 2.8; 4.7) to 1.14 (95% CI: 1.10; 1.19) in the NE states.

### Robustness

Besides using the RII to measure inequity, we also used the slope index of inequality and the concentration index, two other widely used measures of inequity that take the whole socioeconomic distribution of the population into account, and we found a similar pattern of inequity in uptake in the pre- and post-NRHM periods. The DiD estimates were re-estimated for wealth and education quintiles and quartiles, and the effects were similar to those for the tertiles.

We also estimated changes in the uptake of skilled birth-attendance at home in the pre- and post-NRHM periods. The percentage of eligible women who used skilled birth-attendance at home increased between the pre-NRHM Period 1 (1995–99) and the pre-NRHM Period 2 (2000–04), but decreased in the post-NRHM period (2007–08). In the EAG states as a whole, the percentage of women who used skilled birth-attendance at home increased from 13% to 14% in the pre-NRHM periods, decreased to 5% in the early post-NRHM period 2007–08, and then rose again to 8% in the late post-NRHM period 2011–12. The corresponding estimates for the NE states were 10–12% in the pre-NRHM periods and 5% in both the early post-NRHM period 2007–08 and the late post-NRHM period 2011–12. We further estimated total skilled birth-attendance, consisting of both institutional delivery and skilled birth-attendance at home, and found only a moderate increase in the early post-NRHM period 2007–08 but a considerable increase in the late post-NRHM period 2011–12. That is, over the periods 1995–99, 2000–04, 2007–08 and 2011–12, skilled birth-attendance increased from 31% to 39% to 43% to 73% in the EAG states and from 39% to 47% to 48% to 72% in the NE states.
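The tertile-specific average treatment effects reported in Tables 2 and 3 come from a difference-in-differences (DiD) design: the estimate is the coefficient on the interaction of a post-period indicator with a target-group indicator. A minimal two-period sketch in Python follows (illustrative only, not the estimation code used for this study; the variable names and the two-period simplification are assumptions):

```python
# Minimal two-period difference-in-differences sketch (illustrative only,
# not the estimation code used in this study). Assumes a woman-level
# DataFrame with:
#   uptake : 1 if the woman had an institutional delivery, else 0
#   post   : 1 for a post-NRHM survey round, 0 for a pre-NRHM round
#   target : 1 for the lowest wealth tertile, 0 otherwise
# plus the controls listed in the table notes.
import pandas as pd
import statsmodels.formula.api as smf

def did_effect(df: pd.DataFrame) -> float:
    """Return the DiD estimate: the coefficient on post x target."""
    model = smf.ols(
        "uptake ~ post * target + age + rural + parity + C(caste)",
        data=df,
    ).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
    return model.params["post:target"]
```

With two pre-NRHM rounds, as used here, the pre-period trend can additionally be projected forward so that the counterfactual allows for the secular increase in uptake rather than assuming a flat baseline.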
We also tested the quality of ANC by examining whether there were changes in the uptake of Iron Folic Acid (IFA) supplementation and Tetanus Toxoid (TT) injections. Over 1995–99, 2000–04, 2007–08 and 2011–12, the use of IFA in the EAG states improved from 33% to 49% to 79% to 88%. The uptake of TT injection in the EAG states, by contrast, dipped in 2007–08 (from 65% to 74% and back to 65%) before increasing to 97% in 2011–12. In the NE states, the uptake of TT injection increased over time (from 61% to 64% to 71% to 96%).

## Discussion

In this study, we assessed the population-level impact of NRHM on socioeconomic inequities in the uptake of two major components of maternal health services, namely institutional delivery and ANC. We found evidence supporting our hypothesis that the effect of NRHM on the uptake of institutional delivery among pregnant women from lower socioeconomic backgrounds was greater than the effect on women from higher socioeconomic backgrounds. The inequities in institutional delivery and ANC were already declining between the pre-NRHM Period 1 (1995–99) and the pre-NRHM Period 2 (2000–04), but they declined at steeper rates in the post-NRHM periods. The effects were stronger for institutional delivery.

Our findings also support our hypothesis that the effects of NRHM on the increase in uptake and the decline in socioeconomic inequities were greater for institutional delivery than for ANC in the early post-NRHM period 2007–08. This pattern was more evident in the states with a higher proportion of eligible women enrolled under the JSY cash-transfer. One reason may be that the conditional cash-transfer linked to institutional delivery attracted disproportionate attention to the promotion of institutional delivery, leading to the neglect of other components of maternal and child healthcare such as ANC and child immunisation. This finding highlights the need within NRHM to link antenatal services with institutional delivery to achieve a continuity of maternal health services (Lahariya 2009). Equity and uptake of institutional delivery and ANC improved in most states in the late post-NRHM period 2011–12, the period by which most components of the NRHM had reached their targeted outreach.

Our study also found considerable inter-state variations in the impacts of the NRHM and in the proportion of eligible women who received cash transfers. To address these inter-state variations, the central government, in collaboration with the low-performing state governments, needs to implement appropriate policy measures, including ensuring the availability of functional and trained ASHAs in each village, increasing the cash-transfer amount for pregnant women, providing higher incentives for the ASHAs, and ensuring good-quality health services at public health facilities. Furthermore, conducting state-wise in-depth evaluation studies, using both quantitative and qualitative methods, to identify the underlying reasons for the inter-state variations would be useful for formulating effective policy guidelines.

Over the past few years, several low- and middle-income countries, including Mexico, Colombia, Nicaragua, Honduras and Brazil, have introduced public health and social security programs with cash-transfers to increase the use of health services by poor people (Attanasio et al. 2005).
Studies have found increases in the use of health services and improvements in health status (Gertler 2000; Morris 2004; Rivera 2004; Maluccio and Flores 2005; Lagarde et al. 2009). For example, Colombia's 'Familias en Acción' improved the nutritional status and reduced the morbidity of young children (Attanasio et al. 2005). Brazil's 'Bolsa Família' also improved health care use by children (Shei 2014). A recent study found that the combined effects of Brazil's massive expansion of primary health care via the 'Family Health Program' and the 'Bolsa Família' cash-transfer successfully reduced post-neonatal infant mortality (Guanais 2013).

Several studies have assessed the impact of India's JSY (Lim et al. 2010; Modugu 2013; Carvalho 2014; Randive 2014), and we have added to this evidence base by assessing the overall impact of NRHM at the population level. Our impact assessment covered the larger components of the program: the JSY payments, the large-scale public healthcare investment that followed after 2005, the deployment of grassroots-level health workers (i.e. ASHAs), and the several healthcare committees at various levels, including those with community involvement, for enhancing the uptake of primary healthcare. Further, the nationally representative household-level data covering the major Indian states, which were collected independently of the NRHM, enabled us to perform a comparative assessment of the impacts of NRHM between the states. In addition, to ensure a more robust impact assessment, we used data from two pre-NRHM periods, as suggested by the impact-assessment literature in other contexts (Wagstaff 2009, 2010). The use of two pre-NRHM and two post-NRHM data points helped us apply a quasi-natural-experiment framework to control for potential confounders and secular trends.

The NRHM was changed to the National Health Mission (NHM) in 2013, which is now made up of the NRHM and the National Urban Health Mission. The latter element was launched to meet the primary health care needs of the urban population, with a focus on the urban poor. The use of the latest available data sets, DLHS-4 and AHS, allowed us to assess the impact of a more comprehensive form of this public health program after its almost full implementation and widest outreach.

Our study has several limitations. First, the NRHM is a targeted public health program intended to enhance the uptake of primary healthcare especially among lower socioeconomic groups, but it is characterised as a population-level intervention from which all segments of the population can benefit. This makes it difficult to define the treatment and control groups in a strict sense as per standard impact-evaluation methodology. Instead, we opted to define the lower socioeconomic groups as target groups and the higher socioeconomic groups as non-target groups. Second, we estimated the effects on the assumption of a linear increase in the uptake of maternal health services in the post-NRHM periods, based on the trends of the pre-NRHM Periods 1 and 2. This assumption may not hold true in a real-world setting, and increases in uptake over time are likely to occur at diminishing rates as it becomes increasingly difficult to reach the most vulnerable population groups. However, since we used both institutional delivery and ANC as maternal health service outcomes for comparative assessment, we assume that these possible external factors would in general affect both outcomes in a similar way.
The study findings have important implications for health policy in India and other developing countries, particularly in the present context of the growing debate on strategies for developing the health system and achieving universal coverage. This debate is centred on two paths: strengthening the supply side of the public healthcare system along with targeted interventions, or strengthening demand-side financing by promoting health insurance programmes with larger roles for private healthcare providers (World Health Organization 2010). The report of the High Level Expert Group (HLEG) established by the Indian Planning Commission recommended strengthening the public healthcare system instead of relying on the health insurance option (Planning Commission 2011). Recently, however, concerns have been raised about India's healthcare policy shifts, including the proposal to reduce government funds for NRHM and to rely more on the private health sector in primary healthcare (Staff Reporter 2014; Reeves et al. 2015). Our findings suggest that strengthening public healthcare infrastructure, using public health intervention programs focused on the weaker sections of society, and increasing resource allocation will enhance the uptake of maternal healthcare, improve health outcomes and contribute to the achievement of the health-related Millennium Development Goals. Giving due consideration to the effective design and implementation of public health programs, by linking the various components of maternal and child healthcare, will improve universal access to comprehensive healthcare.

Previous studies have argued that socially disadvantaged individuals with less education and living in places with poor health facilities fail to perceive the need for accessing healthcare (Gulliford et al. 2002; Sen 2002; Vellakkal et al. 2013). Our results indicate that overcoming financial and other structural barriers through programs focusing on lower socioeconomic groups, rather than targeting the psychological perceptions of poor people, is likely to promote uptake and reduce inequities in the uptake of maternal and child healthcare.

## Supplementary data

Supplementary data are available at HEAPOL online.

## Acknowledgements

This work was supported by a Wellcome Trust Capacity Strengthening Strategic Award to the Public Health Foundation of India (PHFI) and a consortium of UK universities (Grant Number WT084754). In addition, DS is supported by the Wellcome Trust (Grant number: 1007/09/Z/12/Z), and SE and SV by a Wellcome Trust strategic award (Grant number: 084674/Z/08/Z).

Conflict of interest statement. None declared.

## References

Attanasio O, Gómez LC, Heredia P, Vera-Hernández M. 2005. The short-term impact of a conditional cash subsidy on child health and nutrition in Colombia. Report Summary: Familias 3.

Barros AJ, Hirakata VN. 2003. Alternatives for logistic regression in cross-sectional studies: an empirical comparison of models that directly estimate the prevalence ratio. BMC Medical Research Methodology 3: 21.

Carvalho N, Thacker N, Gupta SS, Salomon JA. 2014. More evidence on the impact of India's conditional cash transfer program, Janani Suraksha Yojana: quasi-experimental evaluation of the effects on childhood immunization and other reproductive and child health outcomes. PLoS One 9: e109311.

Ernstsen L, Strand BH, Nilsen SM, et al. 2012.
Trends in absolute and relative educational inequalities in four modifiable ischaemic heart disease risk factors: repeated cross-sectional surveys from the Nord-Trøndelag Health Study (HUNT) 1984–2008. BMC Public Health 12: 266.

Filmer D, Pritchett LH. 2001. Estimating wealth effects without expenditure data—or tears: an application to educational enrollments in states of India. Demography 38: 115–32.

Gertler P. 2000. Final Report: The Impact of Progresa on Health. Washington, DC: International Food Policy Research Institute.

Gopalan SS, Varatharajan D. 2012. Addressing maternal healthcare through demand side financial incentives: experience of Janani Suraksha Yojana program in India. BMC Health Services Research 12: 319.

Government of Chhattisgarh. 2009. Notification of JSY Guidelines, dated 14/07/2009.

Government of India. 2014a. Janani Suraksha Yojana, National Health Mission, Ministry of Health and Family Welfare, Government of India. http://nrhm.gov.in/nrhm-components/rmnch-a/maternal-health/janani-suraksha-yojana/background.html.

Government of India. 2014b. National Health Mission, Ministry of Health & Family Welfare, New Delhi. http://nrhm.gov.in/nhm/about-nhm.html.

Government of India. 2016. National Rural Health Mission: Meeting People's Health Needs in Rural Areas, Framework for Implementation 2005–2012. Ministry of Health and Family Welfare, New Delhi. Accessed 14 January 2016. http://nrhm.gov.in/images/pdf/about-nrhm/nrhm-framework-implementation/nrhm-framework-latest.pdf.

Guanais FC. 2013. The combined effects of the expansion of primary health care and conditional cash transfers on infant mortality in Brazil, 1998–2010. American Journal of Public Health 103: 2000–6.

Guin G, Sahu B, Khare S, et al. 2012. Trends in maternal mortality and impact of Janani Suraksha Yojana (JSY) on maternal mortality ratio in a tertiary referral hospital. The Journal of Obstetrics and Gynecology of India 62: 307–11.

Gulliford M, Figueroa-Munoz J, Morgan M, et al. 2002. What does 'access to health care' mean? Journal of Health Services Research & Policy 7: 186–8.

Gupta SK, Pal DK, Tiwari R, et al. 2012. Impact of Janani Suraksha Yojana on institutional delivery rate and maternal morbidity and mortality: an observational study in India. Journal of Health, Population, and Nutrition 30: 464.

Harper S, Lynch J. 2006. Measuring health inequalities. In: Oakes JM, Kaufman JS (eds). Methods in Social Epidemiology. San Francisco: Jossey-Bass.

International Institute for Population Sciences (IIPS) and Macro International. 2007. National Family Health Survey (NFHS-3), 2005–06: India, Vol. I. Mumbai: IIPS.

International Institute for Population Sciences (IIPS). 2010. Report on the District Level Household and Facility Survey Round-3, 2007–08: India. Mumbai: IIPS.

Joe W. 2014. Intersectional inequalities in immunization in India, 1992–93 to 2005–06: a progress assessment. Health Policy and Planning 30: 407–22.

Joshi R. 2009. Perinatal and Neonatal Mortality in Rural Punjab: A Community Based Case-Control Study. Working Papers/eSocialSciences.

Khang YH, Yun SC, Lynch JW. 2008. Monitoring trends in socioeconomic health inequalities: it matters how you measure. BMC Public Health 8: 66.

Kunst AE, Mackenbach JP. 1990. Measuring Socioeconomic Inequalities in Health. Geneva: World Health Organization.

Lagarde M, Haines A, Palmer N. 2009.
The impact of conditional cash transfers on health outcomes and use of health services in low and middle income countries. The Cochrane Database of Systematic Reviews 4: CD008137.

Lahariya C. 2009. Cash incentives for institutional delivery: linking with antenatal and post natal care may ensure 'continuum of care' in India. Indian Journal of Community Medicine 34: 15.

Langlois ÉV, Miszkurka M, Ziegler D, Karp I, Zunzunegui MV. 2013. Protocol for a systematic review on inequalities in postnatal care services utilization in low- and middle-income countries. Systematic Reviews 2: 1–8.

Lim SS, Dandona L, Hoisington JA, et al. 2010. India's Janani Suraksha Yojana, a conditional cash transfer programme to increase births in health facilities: an impact evaluation. The Lancet 375: 2009–23.

Maluccio J, Flores R. 2005. Impact Evaluation of a Conditional Cash Transfer Program: The Nicaraguan Red de Protección Social. Nicaragua: International Food Policy Research Institute.

Martines J, Paul VK, Bhutta ZA, et al. 2005. Neonatal survival: a call for action. The Lancet 365: 1189–97.

Mazumdar S, Mills A, Powell-Jackson T. 2012. Financial incentives in health: new evidence from India's Janani Suraksha Yojana.

Modugu HR, Kumar M, Kumar A, Millett C. 2013. State and socio-demographic group variation in out-of-pocket expenditure, borrowings and Janani Suraksha Yojana (JSY) programme use for birth deliveries in India. BMC Public Health 12: 1048.

Morris SS, Olinto P, Flores R, Nilson EA, Figueiró AC. 2004. Conditional cash transfers are associated with a small reduction in the rate of weight gain of preschool children in northeast Brazil. The Journal of Nutrition 134: 2336–41.

Pallikadavath S, Foss M, Stones RW. 2004. Antenatal care: provision and inequality in rural north India. Social Science & Medicine 59: 1147–58.

Panja TK, DK, Sinha N, et al. 2012. Are institutional deliveries promoted by Janani Suraksha Yojana in a district of West Bengal, India? Indian Journal of Public Health 56: 69–72.

Pathak PK, Singh A, Subramanian S. 2010. Economic inequalities in maternal health care: prenatal care and skilled birth attendance in India, 1992–2006. PLoS One 5: e13593.

Planning Commission of India. 2001. Evaluation Study of National Rural Health Mission (NRHM) in 7 States. Programme Evaluation Organisation, Government of India, New Delhi.

Planning Commission. 2011. High Level Expert Group Report on Universal Health Coverage for India. New Delhi.

Randive B, Diwan V, De Costa A. 2013. India's conditional cash transfer programme (the JSY) to promote institutional birth: is there an association between institutional birth proportion and maternal mortality? PLoS One 8: e67452.

Randive B, San Sebastian M, De Costa A, Lindholm L. 2014. Inequalities in institutional delivery uptake and maternal mortality reduction in the context of cash incentive program, Janani Suraksha Yojana: results from nine states in India. Social Science & Medicine: 1–6.

Reeves A, Gourtsoyannis Y, Basu S, et al. 2015. Financing universal health coverage: effects of alternative tax structures on public health systems: cross-national modelling in 89 low-income and middle-income countries. Lancet 386: 274–80.

Rivera JA, Sotres-Alvarez D, Habicht JP, Shamah T, Villalpando S. 2004. Impact of the Mexican program for education, health, and nutrition (Progresa) on rates of growth and anemia in infants and young children: a randomized effectiveness study.
JAMA 291: 2563–70.

Sanneving L, Trygg N, Saxena D, et al. 2013. Inequity in India: the case of maternal and reproductive health. Global Health Action 6: 19145.

Save the Children. 2010. A Fair Chance at Life: Why Equity Matters for Child Mortality. http://www.savethechildren.org.uk/sites/default/files/docs/A_Fair_Chance_at_Life_1.pdf.

Sen A. 2002. Health: perception versus observation: self reported morbidity has severe limitations and can be extremely misleading. British Medical Journal 324: 860–1.

Sharma S, Joe W. 2014. National Rural Health Mission: An Unfinished Agenda (edited book). Delhi: Bookwell Publishers.

Shei A, Costa F, Reis MG, Ko AI. 2014. The impact of Brazil's Bolsa Família conditional cash transfer program on children's health care utilization and health outcomes. BMC International Health and Human Rights 14: 10.

Spiegelman D, Hertzmark E. 2005. Easy SAS calculations for risk or prevalence ratios and differences. American Journal of Epidemiology 162: 199–200.

Staff Reporter. 2014. Health Mission left in the lurch, says official. The Hindu. http://www.thehindu.com/todays-paper/tp-national/tp-andhrapradesh/health-mission-left-in-the-lurch-says-official/article6433746.ece.

StataCorp. 2014. Intercooled Stata 13.0 for Windows. College Station, TX: StataCorp LP.

The World Bank. 2014. The World Bank Data: Health. http://data.worldbank.org/indicator, accessed 21 March 2015.

Vellakkal S, Subramanian SV, Millett C, et al. 2013. Socioeconomic inequalities in non-communicable diseases prevalence in India: disparities between self-reported diagnoses and standardized measures. PLoS One 8: e68219.

Vora KS, Mavalankar DV, Ramani KV, et al. 2009. Maternal health situation in India: a case study. Journal of Health, Population, and Nutrition 27: 184–201.

Wagstaff A, Lindelow M, Jun G, Ling X, Juncheng Q. 2009. Extending health insurance to the rural population: an impact evaluation of China's new cooperative medical scheme. Journal of Health Economics 28: 1–19.

Wagstaff A. 2010. Estimating health insurance impacts under unobserved heterogeneity: the case of Vietnam's health care fund for the poor. Health Economics 19: 189–208.

World Health Organization. 2005. World Health Report 2005: Make Every Mother and Child Count. Geneva: World Health Organization.

World Health Organization. 2010. The World Health Report: Health Systems Financing: The Path to Universal Coverage. Geneva: World Health Organization.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
2017-02-27 21:40:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3528841435909271, "perplexity": 1820.3357794615463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173761.96/warc/CC-MAIN-20170219104613-00101-ip-10-171-10-108.ec2.internal.warc.gz"}
https://forum.dynare.org/t/technology-process-taylor-rule-and-output-gap/16156
# Technology process, Taylor rule and Output gap

Hello everyone, I'm trying to replicate a model and have a few questions since I'm not experienced at it yet.

1. Technology in this paper follows an AR(1) process: ln(A_t) = rho_A*ln(A_(t-1)) + s_A*e_(A,t). My question is how to log-linearize it. Should I just subtract the steady state, which is ln(A) = rho_A*ln(A)? Another question, just for better understanding: why do we multiply the technology shock e_(A,t) by its standard deviation s_A? And why do we write this process in logs rather than simply as A_t = rho_A*A_(t-1) + e_(A,t)?

2. The policy rate is set according to a Taylor rule: ln(R_t) = (1-rho_r)*ln(R) + (1-rho_r)*[phi_pi*(ln(Pi_t)-ln(Pi)) + phi_x*(ln(Y_t)-ln(Y*_t))] + s_r*e_(r,t) (1), where Y*_t is the steady state of Y_t. I think there's a term +rho_r*ln(R_(t-1)) missing in this interest rate rule, since the log-linearized version of the rule is: r_t = rho_r*r_(t-1) + (1-rho_r)*[phi_pi*pi_t + phi_x*x_t] + s_r*e_(r,t). Would you say the term +rho_r*ln(R_(t-1)) is missing from equation (1)? Another question is also how to log-linearize it. Do I just subtract the steady state from (1), which is ln(R) = (1-rho_r)*ln(R) + (1-rho_r)*[phi_pi*(ln(Pi)-ln(Pi)) + phi_x*(ln(Y)-ln(Y*))] (probably + rho_r*ln(R) for that missing term)?

3. My last question is about the steady state value of output Y_t. Apparently, they define it as Y*_t. But I don't understand why a time subscript is used in a variable that describes the steady state value (Y*_t) of output (Y_t). As I understand it, a steady state is a constant value of a variable that doesn't depend on time. Could you help me understand this?

Thank you all.

1. That equation is already linear. Simply define ln(A) as the variable.
2. You can either specify a standard deviation for e_A in the shocks-block or use a unit standard deviation there and premultiply the standard normal shock by its standard deviation. Both ways are equivalent.
3. Sometimes the process is written this way to show how it is related to the level A. The other way defines \tilde A_t = ln(A_t).
4. Yes, it seems the lagged interest rate is missing.
5. Again, the equation is already linear in logs.
6. Without knowing the paper, it is impossible to tell what Y_t^* is. But often it is not steady state output, but rather natural output, which is time-varying.
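For intuition on points 2 and 4, here is a small simulation sketch (my own illustration, not from the thread or from Dynare; all parameter values and the stand-in gap processes are arbitrary) of the technology process in logs and the Taylor rule with the lagged interest-rate term restored:

```python
# Simulate ln(A_t) = rho_A*ln(A_(t-1)) + s_A*e_(A,t) and the log-linearized
# Taylor rule with the interest-smoothing term rho_r*r_(t-1) restored.
# Illustration only: parameter values and the stand-in gaps are made up.
import numpy as np

rng = np.random.default_rng(0)
T = 200
rho_a, s_a = 0.9, 0.01                  # AR(1) persistence, shock s.d.
rho_r, phi_pi, phi_x, s_r = 0.8, 1.5, 0.125, 0.0025

ln_a = np.zeros(T)                      # log technology; steady state is 0
r = np.zeros(T)                         # policy rate, deviation from ln(R)

for t in range(1, T):
    # Writing the process in logs keeps the level A_t = exp(ln_a) positive;
    # premultiplying a standard normal shock by s_a is equivalent to giving
    # e_(A,t) a standard deviation of s_a directly.
    ln_a[t] = rho_a * ln_a[t - 1] + s_a * rng.standard_normal()
    pi_gap = 0.25 * ln_a[t]             # stand-in inflation gap (made up)
    x_gap = 0.5 * ln_a[t]               # stand-in output gap (made up)
    r[t] = (rho_r * r[t - 1]
            + (1 - rho_r) * (phi_pi * pi_gap + phi_x * x_gap)
            + s_r * rng.standard_normal())
```

Dropping the rho_r * r[t - 1] term, as in the paper's equation (1), makes the simulated policy rate visibly less persistent, which is one quick way to see that the smoothing term really belongs in the rule.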
2020-07-13 09:18:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8404104113578796, "perplexity": 1361.3379543645792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657143354.77/warc/CC-MAIN-20200713064946-20200713094946-00216.warc.gz"}
https://www.gamedev.net/resources/_/creative/visual-arts/easy-atmospheric-perspective-in-photoshop-r2521
# Easy Atmospheric Perspective in Photoshop

By Jacob A Stevens | Published May 30 2008 11:34 AM in Visual Arts

# What is Atmospheric Perspective?

Go outside and look at the air around you. Can you see it? Most of us grow up thinking that air is invisible, but it isn't. In fact, air is one of the most important cues we use to judge the depth in an image we're looking at.

Look at this photo I took from a hotel room window in Los Angeles. The colors of the buildings and cars that are closer to the camera are vivid and high-contrast. However, the buildings, trees, and hills in the background have a smoky appearance, losing their color and definition.

These images of Sabino Canyon in Tucson, AZ, show a similar effect. Objects close to the camera show a full range of colors and light and dark values, while the mountains in the background are hazy, with less difference between shadows and highlights. Also notice that the way you can tell which mountains are closer to you and which are further is by noticing how hazy they are.

Atmospheric perspective is caused by the fact that the space surrounding the solid objects that we see isn't empty. Light scatters around between air, water, smoke, dust, and pollution molecules, partially obscuring the objects that they are in front of. The further away an object is from the viewer, the more obscure it becomes. Eventually, especially on foggy days, entire buildings and mountains can become completely invisible!

Visually, there are three primary effects caused by atmospheric perspective:

1. The further away an object is, the closer its color will match the color of the sky.
2. As an object moves further from the viewer, the contrast between its highlights and shadows will decrease.
3. The colors of objects that are closer to the viewer will be more saturated (less gray) than the colors of objects that are far away.

Luckily for us, in most cases, effects 2 and 3 occur naturally as a result of effect number 1. This makes it easy to achieve simple atmospheric perspective in Adobe Photoshop.

# Creating Perspective with Layers

Let's use a scene with Buck from Cash Cow as an example. Right now this document has three layers: a solid sky color, an image of Buck, and some tall grass so that we don't have to worry about the ground.

![](http://images.gamedev.net/features/art/AtmosPerspective/layers1.jpg)

Now let's make a few duplicates of Buck, resizing each one so we have a whole field of Bucks. Be sure to keep each copy of Buck on its own layer.

![](http://images.gamedev.net/features/art/AtmosPerspective/layers2.jpg)

Now comes a slightly tedious part: I'm going to make several copies of the sky layer, one for each copy of Buck. Then I'll arrange each copy of the sky in front of its corresponding Buck layer.

![](http://images.gamedev.net/features/art/AtmosPerspective/layers3.jpg)

Next, we'll arrange the layer clipping masks so that each duplicate sky layer affects only its corresponding Buck image. You can do this by clicking on each sky layer, and then choosing Layer->Create Clipping Mask.
This will cause the sky layer to be clipped by the image of Buck that it's in front of. You still won't be able to see anything, because each image of Buck is obscured by its clipped sky layer.

Now comes the fun part. For each copy of the sky, adjust the layer transparency so that the copies corresponding to closer images of Buck are more transparent than the copies that are further away. And there you have it: totally adjustable atmospheric perspective. Just by changing the sky color, transparency levels, and overall coloration of the scene, you can create limitless atmospheric effects.

Here's the same grassy field in thick fog. And the scene at sunset: note that on this one, I had to adjust the color balance of the scene to account for the warmer lighting.

# Applying Perspective to the Ground Plane

The last example deliberately left out an important aspect of most images: the ground plane. Since the ground plane recedes into the distance, we cannot simply blend it with a solid color to achieve natural atmospheric perspective.

Let's take a look at another example. This time we have a sand-colored checkerboard desert floor, some sand dunes, and some cacti. First, using the same steps as before, let's use layers of sky color to blend the dunes and cacti:

![](http://images.gamedev.net/features/art/AtmosPerspective/dune_layers2.jpg)

This is already looking much nicer. The dunes are pushed toward the horizon, and the closer cactus is brought forward. However, we still need to bring the ground itself into perspective. We can do this with a simple variation of the same technique. First, make a new layer in front of the ground layer. Fill this layer with a gradient that starts with a fully opaque sky color on top and ends with a fully transparent sky color on bottom.

Now use Layer->Create Clipping Mask to limit the effects of this layer to the ground plane. Then adjust the transparency of the gradient so that it is consistent with the thickness of the atmosphere in the rest of the image.

There we go! We've made a nice, dusty, desert scene.

# Atmospheric Perspective in Games

The use of atmospheric perspective is absolutely vital in games. Since our eyes are naturally drawn to high-contrast objects with bright colors, game developers can use the desaturating effects of atmospheric perspective to draw the player's attention to vital onscreen elements.

Atmospheric perspective is especially important in 2D games. 2D games can't make use of geometric perspective, and so they rely heavily on the use of color choice to help players interpret the contents of the screen. This screenshot of Super Mario World 2: Yoshi's Island is a perfect example. Notice how objects in the foreground have bright colors and black outlines. The hills and clouds in the background are rendered in subdued pastel tones. The falling Chomp in the foreground is made of pure black and pure white, while the Chomps waiting in the background are made of grayed, cool tones. All these elements help the player quickly assess what's happening in the game.

# Conclusion

Atmospheric perspective is an important but often overlooked method of creating depth in images. Virtually any time that objects are placed more than a few meters apart, atmospheric perspective comes into play. Luckily, unlike geometric perspective, atmospheric perspective is relatively easy to implement, especially with digital tools like Photoshop. So next time you're making a picture, don't forget the air!
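Since the whole technique reduces to one blend toward the sky color per object, it maps directly to code. Here is a tiny sketch (my own illustration, not from the article; the exponential haze model and the example colors are assumptions) of how a game might apply the same blend per object based on distance:

```python
# Atmospheric perspective as a per-object color blend toward the sky.
# Illustrative sketch only; the exponential haze model and the colors
# below are assumptions, not taken from the article.
import math

def apply_haze(color, sky, distance, density=0.02):
    """Blend an RGB color toward the sky color with distance.

    haze runs from 0 (right next to the camera) to 1 (fully swallowed
    by the sky); pulling every channel toward the sky color also
    reduces contrast and saturation, just like effects 2 and 3 above.
    """
    haze = 1.0 - math.exp(-density * distance)
    return tuple(round(c + (s - c) * haze) for c, s in zip(color, sky))

buck_brown = (139, 90, 43)       # hypothetical foreground color
sky_blue = (170, 200, 230)       # hypothetical sky color
print(apply_haze(buck_brown, sky_blue, 10))    # near Buck: barely changed
print(apply_haze(buck_brown, sky_blue, 150))   # far Buck: close to the sky
```

The same blend with the haze factor driven by a vertical gradient instead of a per-object constant reproduces the ground-plane trick from the section above.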
2017-01-18 14:23:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2939439117908478, "perplexity": 1851.2096516725162}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00059-ip-10-171-10-70.ec2.internal.warc.gz"}
https://hal-cea.archives-ouvertes.fr/cea-02474070
Effect of Sb and Na Incorporation in Cu2ZnSnS4 Solar Cells - Archive ouverte HAL Journal Articles physica status solidi (a) Year : 2019

## Effect of Sb and Na Incorporation in Cu2ZnSnS4 Solar Cells

Louis Grenet (correspondent author)

#### Abstract

Cu$_2$ZnSnS$_4$-based solar cells suffer from limited power conversion efficiency (PCE) and relatively small grain size compared with selenium-containing absorbers. Na is introduced into Cu$_2$ZnSnS$_4$ absorbers either during or after synthesis, both to improve device performance and to determine whether its effect rests on improved structural properties (grain-size enhancement, better crystallization) or on improved opto-electronic properties (defect passivation). In both cases, the presence of Na in the absorber notably improves the current and voltage of the solar cells, but the effect is more pronounced when Na is present during synthesis. Quantum-efficiency analysis shows that these improvements can be related to a longer minority-carrier diffusion length and to reduced absorber/buffer interface recombination. Introducing Na into the process mostly leads to a preferential (112) orientation of the crystal, which is clearly correlated with better device performance. By contrast, the performance limitation attributed to small grain size is ruled out by the joint use of Sb and Na, which has a significant impact on grain size but does not affect solar-cell efficiency.

### Dates and versions

cea-02474070, version 1 (01-04-2020)

### Identifiers

- HAL Id: cea-02474070, version 1
- DOI: 10.1002/pssa.201900070

### Cite

Abdul Aziz Suzon, Louis Grenet, Fabrice Emieux, Eric de Vito, Frédéric Roux, et al.. Effect of Sb and Na Incorporation in Cu2ZnSnS4 Solar Cells. physica status solidi (a), 2019, 216 (11), pp.1900070. ⟨10.1002/pssa.201900070⟩. ⟨cea-02474070⟩
2023-01-29 19:12:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22960832715034485, "perplexity": 11600.639933656921}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499758.83/warc/CC-MAIN-20230129180008-20230129210008-00677.warc.gz"}
http://www.aliquote.org/post/christmas-admin-on-ubuntu/
# aliquote ## < a quantity that can be divided into another a whole number of time /> I replaced my everything-in-Emacs setup with separate apps, and I don’t feel lost at all, on the contrary. The only thing I’m really missing is my Org setup, but it’s by far the simplest configuration I can setup under vanilla Emacs without too much effort. That being said, I really come to appreciate the power of Vim, or Neovim for what matters, for text editing, and single app for other tasks. I found that this allows me to stay more focused on the current task at hand. If I need to check my emails, then I know I need to use a different app (Neomutt); likewise, checking Git status means I need to get back to Zsh and use Git CLI commands or Tig; and so on for readings news, chatting on IRC… Something I find annoying in Ubuntu are the default permission settings — 755, which means that others, or the “world”, have the same privileges than the groups on your \$HOME directory, i.e., everybody can read your own directory. Other distro don’t use 755 as the defaults, and I believe the right permission flags should 751. Recall that r=4, w=2, x=1, for user, group and other, which means that 751 amounts to -rwx-r-x--x, i.e., you grant read access to group only. There’s something strange hapenning with Zsh and colored output for ls (or LS_COLORS sometimes), unless you’re on a BSD-like system where export CLICOLOR=1 takes care of this setting for you. This means that a basic usage of ls returns a listing of the current directory without any color at all in Ubuntu. Of course, you can set up a quick alias ls=ls --color, but with time I came to appreciate this. Rather than overriding the default ls with an alias, I now use the following shortcuts: alias la="ls -A" alias ll="exa --long --git" alias lk="ls -lhSp" As can be seen, if I need colored output I use exa, otherwise I stand by the default settings for Zsh. I finally managed to install all the programs and libraries I need (including clang10 and Haskell), and to keep away from Node. Why? When you ask to install the very basic stuff (Node and npm), you ended up with something like 300 packages, including Python 2.7 which noboby uses anymore. Why is it that complicated? Can’t we really have binary packages for csslint or eslint? Les NOUVEAUX paquets suivants seront installés : gyp libc-ares2 libjs-inherits libjs-is-typedarray libjs-psl libjs-typedarray-to-buffer libnode-dev libnode64 libpython2-stdlib libpython2.7-minimal libpython2.7-stdlib libuv1-dev node-abbrev node-ajv node-ansi node-ansi-align node-ansi-regex node-ansi-styles node-ansistyles node-aproba node-archy node-are-we-there-yet node-asap node-asn1 node-assert-plus node-asynckit node-aws-sign2 node-aws4 node-balanced-match node-bcrypt-pbkdf node-bl node-bluebird node-boxen node-brace-expansion node-builtin-modules node-builtins node-cacache node-call-limit node-camelcase node-caseless node-chalk node-chownr node-ci-info node-cli-boxes node-cliui node-clone node-co node-color-convert -%<------ node-yargs node-yargs-parser nodejs nodejs-doc npm python-pkg-resources python2 python2-minimal python2.7 python2.7-minimal 0 mis à jour, 296 nouvellement installés, 0 à enlever et 16 non mis à jour. Il est nécessaire de prendre 14,9 Mo dans les archives. Après cette opération, 75,9 Mo d'espace disque supplémentaires seront utilisés. Anyway, I get a binary-like package for csslint by using rhino as recommended on Github. I am not sure I will miss prettier for HTML and CSS stuff. 
# aliquote

## < a quantity that can be divided into another a whole number of time />

I replaced my everything-in-Emacs setup with separate apps, and I don't feel lost at all, on the contrary. The only thing I'm really missing is my Org setup, but it's by far the simplest configuration I can set up under vanilla Emacs without too much effort. That being said, I have really come to appreciate the power of Vim, or Neovim for what matters, for text editing, and a single app for other tasks. I found that this allows me to stay more focused on the current task at hand. If I need to check my emails, then I know I need to use a different app (Neomutt); likewise, checking Git status means I need to get back to Zsh and use Git CLI commands or Tig; and so on for reading news, chatting on IRC…

Something I find annoying in Ubuntu is the default permission settings: 755, which means that others, or the "world", have the same privileges as the group on your $HOME directory, i.e., everybody can read your own directory. Other distros don't use 755 as the default, and I believe the right permission flags should be 751. Recall that r=4, w=2, x=1, for user, group and other, which means that 751 amounts to -rwxr-x--x, i.e., you grant read access to the group only. (A quick way to sanity-check these bits is sketched at the end of this post.)

There's something strange happening with Zsh and colored output for ls (or LS_COLORS sometimes), unless you're on a BSD-like system where export CLICOLOR=1 takes care of this setting for you. This means that a basic usage of ls returns a listing of the current directory without any color at all in Ubuntu. Of course, you can set up a quick alias ls=ls --color, but with time I came to appreciate this. Rather than overriding the default ls with an alias, I now use the following shortcuts:

    alias la="ls -A"
    alias ll="exa --long --git"
    alias lk="ls -lhSp"

As can be seen, if I need colored output I use exa; otherwise I stand by the default settings for Zsh.

I finally managed to install all the programs and libraries I need (including clang10 and Haskell), and to keep away from Node. Why? When you ask to install the very basic stuff (Node and npm), you end up with something like 300 packages, including Python 2.7, which nobody uses anymore. Why is it that complicated? Can't we really have binary packages for csslint or eslint?

    The following NEW packages will be installed:
      gyp libc-ares2 libjs-inherits libjs-is-typedarray libjs-psl
      libjs-typedarray-to-buffer libnode-dev libnode64 libpython2-stdlib
      libpython2.7-minimal libpython2.7-stdlib libuv1-dev node-abbrev node-ajv
      node-ansi node-ansi-align node-ansi-regex node-ansi-styles node-ansistyles
      node-aproba node-archy node-are-we-there-yet node-asap node-asn1
      node-assert-plus node-asynckit node-aws-sign2 node-aws4
      node-balanced-match node-bcrypt-pbkdf node-bl node-bluebird node-boxen
      node-brace-expansion node-builtin-modules node-builtins node-cacache
      node-call-limit node-camelcase node-caseless node-chalk node-chownr
      node-ci-info node-cli-boxes node-cliui node-clone node-co
      node-color-convert
    -%<------
      node-yargs node-yargs-parser nodejs nodejs-doc npm python-pkg-resources
      python2 python2-minimal python2.7 python2.7-minimal
    0 upgraded, 296 newly installed, 0 to remove and 16 not upgraded.
    Need to get 14.9 MB of archives.
    After this operation, 75.9 MB of additional disk space will be used.

Anyway, I got a binary-like package for csslint by using rhino, as recommended on Github. I am not sure I will miss prettier for HTML and CSS stuff. In my opinion, linters are more valuable than fixers, and Vim's ALE doesn't care anyway: if there's a fixer available it will be used; otherwise it's not a big deal.
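And here is the quick permission-bit check I promised above. Python's standard library can render the symbolic string exactly as ls would (stat.filemode has been there since Python 3.3):

    # Octal permission bits -> symbolic string, as ls would print them.
    import stat

    for mode in (0o755, 0o751):
        # S_IFDIR makes filemode() render a directory-style string.
        print(oct(mode), stat.filemode(stat.S_IFDIR | mode))

    # 0o755 -> drwxr-xr-x : group AND world can list the directory
    # 0o751 -> drwxr-x--x : only the group can list it; the world can
    #                       merely traverse (x) it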
2022-05-28 05:25:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28121381998062134, "perplexity": 8454.029647130892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663012542.85/warc/CC-MAIN-20220528031224-20220528061224-00543.warc.gz"}
http://budgetmodel.wharton.upenn.edu/a5-mortality
Appendix 5: Implementation

Mortality Rate Dependence

Mortality rates used in the PWBMsim are benchmarked to the Social Security Administration's age- and gender-specific projections of mortality rates through the year 2090. (The SSA rates are consistent with the Social Security Trustees' annual report for 2016.) This Appendix describes how those rates are further distinguished by categories of ethnicity, $r$, education, $e$, and marital status, $m$, and re-benchmarked to SSA mortality rates by age and gender so that PWBMsim's overall age-gender mortality rates remain equal to those SSA rates.

PWBMsim's year-specific mortality rates are derived by decomposing SSA mortality rates by gender and age, $m^\ast [g,a]$, by ethnicity, $r$. The decomposed rates by ethnicity, gender, and age are denoted below as $m^\ast [r \mid g, a]$. The decomposition is implemented as shown in equations A4.1 and A4.2:

\begin{align} m^\ast [r \mid g, a] &= m^{\text{SSA}} [g, a] \times \rho[r \mid g, a] \times s[g, a, r] \text{(A4.1)} \\ s[g, a, r] &= \frac{1}{\Sigma_r \rho (r \mid g, a) \times d(r \mid g, a)} \text{(A4.2)}. \end{align}

Here, the term $\rho(r\mid g, a)$ refers to the mortality rate differential of those with ethnicity $r$ relative to those with $r=\text{white}$ for given age and gender, and $d(r\mid g,a)$ represents the population share of such individuals. According to the first equation above, mortality differentials by ethnicity, $\rho[r\mid g,a]$, are applied to $m^{SSA} [g,a]$ along with a shift parameter, $s[g,a,r]$. (‘a’ denotes age groups (0:0-24, 1:25-44, 2:45-64, 3:65-84, 4:85 and over) and ‘e’ denotes education attainment groups based on completed years of education (0:0-11, 1:12, 2:13-15, 4:16, 5:17 and over). $\rho[e\mid g,r,a]=1$ for $e=0$.) The latter term adjusts mortality rates for all ethnicities, so that average mortality rates for each subgroup are maintained at SSA's benchmark mortality rates by age and gender. This shift term is computed in each year as shown in equation A4.2 and the result is used in equation A4.1 when PWBMsim is executed.

The decomposition by education is implemented analogously, as shown via equations A4.3 and A4.4:

\begin{align} m^\ast [e\mid g,a,r] &= m^\ast [r\mid g,a]\times\rho[e\mid g,a,r]\times s[g,a,r,e]\text{(A4.3)} \\ s[g,a,r,e] &= \frac{1}{\Sigma_e\rho(e\mid g,a,r)\times d(e\mid g,a,r)}\text{(A4.4)} \end{align}

Here, the term $\rho(e\mid g,a,r)$ refers to the mortality rate differential of those with education level $e$ relative to those with $e = \text{high-school diploma}$ for a given age and gender, and $d(e\mid g,a,r)$ represents the population share of such individuals. According to the first equation above, mortality differentials by education attainment, $\rho[e\mid g,a,r]$, are applied to $m^\ast [r\mid g,a]$ along with a shift parameter $s[g,a,r,e]$. The latter term adjusts mortality rates for all education levels, so that average mortality rates for each subgroup are maintained at SSA's benchmark mortality rates by age and gender. This shift term is computed in each year as shown in equation A4.4 and the result is used in equation A4.3 when PWBMsim is executed.

The marital status differentials are applied in an analogous way, leading to the final computation of mortality rates, $m^\ast [m\mid g,a,r,e]$, conditional on marital status along with the other demographic attributes: gender, age, ethnicity, and education. (‘m’ denotes marital status (0:single, 1:married, 2:divorced/separated, 3:widowed).
The analogous remark as in the earlier footnote applies.)

\begin{align} m^\ast [m \mid g,a,r,e] &= m^\ast [e \mid g,a,r]\times\rho[m \mid g,a,r,e]\times s[g,a,r,e,m] \text{(A4.5)} \\ s[g,a,r,e,m] &= \frac{1}{\Sigma_m\rho(m\mid g,a,r,e)\times d(m\mid g,a,r,e)} \text{(A4.6)} \end{align}

A similar interpretation of the terms in equations A4.5 and A4.6 applies as described earlier for equations A4.1–A4.4. If information for a particular ethnicity is absent because of small populations in micro-survey data (married educated females aged 45 of "other" ethnicity, for example), those ethnicities are merged with others: in particular, the Asian ethnicity category is merged with the white ethnicity category, and the "other" and Hispanic ethnicities are merged with the black ethnicity category. When education attainment indicators are inconsistent between Census micro-data and other information sources on relative mortality differentials, education categories are collapsed (for example, education below the 8th grade is not distinguished from the secondary high-school education level). Finally, if estimated relative education differentials are not significant or are negative, the differentials are set equal to the normalized value for the reference education group, namely 1.0. For data sources on relative differentials, see notes to Tables 1 and 2 in the main text.
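The re-benchmarking in equations A4.1 and A4.2 is easy to verify numerically. The following is a minimal sketch (an illustration only, not PWBMsim code; the differentials and population shares are made-up numbers):

    # Sketch of equations A4.1-A4.2: decompose one SSA mortality rate by
    # ethnicity while preserving the age-gender benchmark. Not PWBMsim code;
    # rho and d below are made-up numbers.
    m_ssa = 0.010                                            # m_SSA[g, a]
    rho = {"white": 1.00, "black": 1.30, "hispanic": 0.90}   # rho(r | g, a)
    d = {"white": 0.60, "black": 0.20, "hispanic": 0.20}     # d(r | g, a)

    s = 1.0 / sum(rho[r] * d[r] for r in rho)                # shift, eq. A4.2
    m_star = {r: m_ssa * rho[r] * s for r in rho}            # rates, eq. A4.1

    # The share-weighted average of the decomposed rates equals the SSA
    # benchmark again, which is exactly what the shift term enforces:
    assert abs(sum(m_star[r] * d[r] for r in rho) - m_ssa) < 1e-12

The education and marital-status layers (equations A4.3–A4.6) repeat the same pattern one level deeper, with the shift term recomputed within each (g, a, r) and (g, a, r, e) cell.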
2019-01-21 03:48:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9925515651702881, "perplexity": 4306.575753516387}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583755653.69/warc/CC-MAIN-20190121025613-20190121051613-00635.warc.gz"}
https://gmatclub.com/forum/two-oil-cans-x-and-y-are-right-circular-cylinders-and-the-height-an-255153.html
# Two oil cans, X and Y, are right circular cylinders, and the height an

Math Expert (Bunuel), 10 Dec 2017, 07:05:

Two oil cans, X and Y, are right circular cylinders, and the height and the radius of Y are each twice those of X. If the oil in can X, which is filled to capacity, sells for $2, then at the same rate, how much does the oil in can Y sell for if Y is filled to only half its capacity?

(A) $1
(B) $2
(C) $3
(D) $4
(E) $8

ManishKM1, 10 Dec 2017, 08:08:

Let's suppose cylinder X has radius r and height h, so cylinder Y has radius 2r and height 2h. X contains $\pi r^2 h$ litres of oil, which sells for $2. Y is half filled, so the volume of oil in Y is $\frac{1}{2}\pi(2r)^2(2h) = 4\pi r^2 h$. So the oil in Y will sell for 4 times the price of the oil in X: 4 x $2 = $8. Hence E.

Intern, 10 Dec 2017, 08:26:

IMO, the answer should be E: $8. The volume of a cylinder is $\pi R^2 H$. If the height and radius of Y are double those of X, then V(Y) = 8 V(X), so the price for the full capacity of Y is 8 x $2 = $16, and the price for half of Y's capacity is $16/2 = $8.
Senior SC Moderator, 10 Dec 2017, 14:38:

Choose smart numbers. Calculate the amount of oil in X and Y with the formula for the volume of a right circular cylinder, $\pi r^2 h$; then Y's oil amount times the sell rate of X's oil gives the selling price of Y's oil.

For oil can X, let $r = 1$ and $h = 1$. For oil can Y, let $r = 2$ and $h = 2$.

X is full, so the amount of oil in X equals X's volume: $\pi r^2 h = \pi(1)(1) = 1\pi$.

Y is half full, so Y's oil amount is half of Y's volume. Y's volume: $\pi r^2 h = \pi(4)(2) = 8\pi$, so Y's oil amount is $\frac{8\pi}{2} = 4\pi$.

X's oil sells for $2, and Y has 4 times as much oil as X, so Y's oil sells for $2 x 4 = $8. Or: $\frac{2}{1\pi} = \frac{y}{4\pi}$, so $y = 8$.

Algebraically: the height and radius of Y are each twice those of X. The amount of oil in X is the volume of X, $\pi r^2 h$. Y's oil amount is half of Y's volume: $\frac{1}{2}\pi(2r)^2(2h) = 4\pi r^2 h$. X's oil sells for $2; if Y's oil sells for $y$, then $\frac{\pi r^2 h}{4\pi r^2 h} = \frac{2}{y}$, so $y = 8$.
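A quick numeric check of the answers above (any positive r and h give the same factor of 4; the values here are arbitrary):

    from math import pi

    r, h = 1.0, 1.0                      # dimensions of can X (arbitrary)
    oil_x = pi * r**2 * h                # X is filled to capacity
    oil_y = 0.5 * pi * (2*r)**2 * (2*h)  # Y is only half full

    print(oil_y / oil_x)                 # -> 4.0, so Y's oil sells for 4 * $2 = $8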
2018-11-13 18:29:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4240923523902893, "perplexity": 8113.895476790673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741340.11/warc/CC-MAIN-20181113173927-20181113195351-00014.warc.gz"}
https://www.taylorfrancis.com/books/9781315830650/chapters/10.4324/9781315830650-22
chapter 18

## Some Second Principles

It is, I believe, both natural and usual that a vigorous religious trait in one generation should promote a philosophical trait in the next. The peculiar colouring of immediacy which belongs to religion, the pervasive sense of an unevident value in existence, cannot be precisely transmitted. But it is sure to be recognized, thought about, sought after. It is also almost as sure to be critically regarded at some time or other, to be analysed and explained away or rejected, as a preliminary to independent building. This was my own very ordinary experience.
2020-02-24 08:48:40
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9110549092292786, "perplexity": 2847.004647143347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145910.53/warc/CC-MAIN-20200224071540-20200224101540-00063.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=1768466
MathSciNet bibliographic data MR1768466 (2001f:22047) 22E46 (11F46 11F70). Miyazaki, Takuya. The generalized Whittaker functions for ${\rm Sp}(2,{\bf R})$ and the gamma factor of the Andrianov $L$-function. J. Math. Sci. Univ. Tokyo 7 (2000), no. 2, 241–295.
2014-07-28 15:39:10
{"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9672256708145142, "perplexity": 11676.733100498202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510260734.19/warc/CC-MAIN-20140728011740-00014-ip-10-146-231-18.ec2.internal.warc.gz"}
https://codegolf.stackexchange.com/questions/173942/what-numbers-are-these
# What numbers are these?

While I was writing numbers I noticed after a while that my keyboard had the Shift key pressed and blocked, and all I wrote was $%&-like characters. And even worse, I had been switching between the English and Spanish keyboard layouts, so I don't know which one I used for each number.

### Challenge

Given a string containing symbol characters, try to guess which number I wrote. My keyboard produces the following characters for the numbers when Shift is pressed:

    1234567890
    ----------
    !"·$%&/()=   Spanish layout
    !@#$%^&*()   English layout

• The input will be a non-null, non-empty string composed of the symbols above.
• The output will be a single number if the keyboard layout can be inferred from the string (i.e. if the string contains a @ an English layout was used, and if the string contains a " a Spanish layout was used) or if the number is the same for both layouts (i.e. the input is !$ which translates as 14 for both layouts); otherwise the output will be the two possible numbers for both layouts, if the layout cannot be inferred and the resulting numbers are different.
• The input string will always be written in a single layout, so you don't need to expect "@ as input.

### Examples

    Input   Output
    ------------------
    /()     789        (Spanish layout detected by the use of /)
    $%&     456,457    (Layout cannot be inferred)
    !@#     123        (English layout detected by the use of @ and #)
    ()&!    8961,9071  (Layout cannot be inferred)
    ((·))   88399      (Spanish layout detected by the use of ·)
    !$      14         (Layout cannot be inferred but the result is the same for both)
    !!$$%%  114455     (Layout cannot be inferred but the result is the same for both)
    ==$"    0042/42    (Spanish layout; if a number starts with 0 you can choose to omit the leading zeros in the result or not)

Single character translations:

    ! 1    " 2    · 3    $ 4    % 5    & 6,7    / 7
    ( 8,9  ) 9,0  = 0    @ 2    # 3    ^ 6      * 8

This is code-golf, so may the shortest code for each language win!

• Dang it, that · is challenging... – Erik the Outgolfer Oct 12 '18 at 20:39
• @EriktheOutgolfer in fact the · is useless for Spanish, it is only used in the Catalan language. – Charlie Oct 12 '18 at 20:41
• Is output like {(8, 9, 6, 1), (9, 0, 7, 1)} (for the 4th test case) acceptable? – Lynn Oct 12 '18 at 21:01
• @Lynn yes, it is. – Charlie Oct 12 '18 at 21:03
• When outputting 2 numbers, does the order matter? – Shaggy Oct 12 '18 at 22:30

## 14 Answers

# Jelly, 32 31 bytes

    O“=!"Ṣ$%&/()“)!@#$%^&*(‘iⱮ€PƇ’Q

Try it online!

• -1 byte thanks to Erik the Outgolfer

    O“!"Ṣ$%&/()=“!@#$%^&*()‘iⱮ€PƇ%⁵Q
    O                       ord of each character in the input
    “!"Ṣ$%&/()=“!@#$%^&*()‘ Constant that yields the list:
                            [[33, 34, 183, 36, 37, 38, 47, 40, 41, 61], [33, 64, 35, 36, 37, 94, 38, 42, 40, 41]]
    €                       For each list of numbers:
     Ɱ                       For each ord of the characters in the input:
      i                       Find the index of the ord of the character in the list of numbers.
                              If the number is not found, i returns zero, which means it's a
                              character from only one keyboard. There are now two lists of numbers 1-10.
    Ƈ                       Keep the list(s) that:
     P                       have nonzero product.
    %⁵                      Modulo 10. This maps 10->0.
    Q                       Unique elements. This removes duplicates if the two numbers are the same.

# Python 3, 76 bytes

    lambda s:{(*map(k.find,s),)for k in['=!"·$%&/()',')!@#$%^&*(']if{*s}<={*k}}

Try it online!

# Perl 6, 62 bytes

    {set grep {!/\D/},TR'=!"·$%&/()'0..9',TR')!@\x23$%^&*('0..9'}

Try it online!

Returns a Set. (Could be made two or three bytes shorter if there wasn't a bug in Rakudo's handling of # in search lists.)
# Java (JDK), 173 bytes

Golfed:

    c->{String s="",e=s;var m="21#3457#908###6##12#456389###0#7".split("");for(int l:c){e+=m[l=l%16];s+=m[l+16];}return s.equals(e)|s.contains("#")?e:e.contains("#")?s:s+","+e;}

Try it online!

Ungolfed:

    c->{                                                    // Lambda taking a char array as input
        String s="",e=s;                                    // Initialise Spanish and English strings
        var m="21#3457#908###6##12#456389###0#7".split(""); // Create magic hashing lookup array (see below)
        for(int l:c){                                       // Loop through all chars in the input
            e+=m[l=l%16];                                   // Get the English digit from the array and append
            s+=m[l+16];                                     // Get the Spanish digit from the array and append
        }
        return s.equals(e)|s.contains("#")?e:               // If equal, or Spanish is invalid, return English
               e.contains("#")?s:                           // If English is invalid, return Spanish
               s+","+e;                                     // If both are valid but not equal, return both
    }

The magic hashing lookup array

After some experimenting with values I realised that each of the character codes of !"·$%&/()=@#^* modulo 16 gives a unique index. The 'magic hashing lookup array' stores the English digit associated with each character at this unique index, and the Spanish digit at the same index offset by 16, making fetching the required digit from the array trivial for each language. A hash is stored at indices that are invalid for either language.

• I don't suppose you could use toCharArray() and the int values to make this shorter? (Just an idea, I haven't tried it yet.) – Quintec Oct 13 '18 at 0:46
• @Quintec I tried it, but the extra bytes from toCharArray() and calculating the exponent to be applied to the int value made it far longer than both the .contains() statements. – Luke Stevens Oct 13 '18 at 16:35
• s.equals(e)|s.contains("#") can be s.matches(e+"|.*#.*"). – Kevin Cruijssen Oct 15 '18 at 6:41
• Suggest l%=16 instead of l=l%16 – ceilingcat Jan 3 at 1:26

# Japt, 38 bytes

Outputs an array of strings with the Spanish layout first.

    "=!\"·$%&/())!@#$%^&*("òA £ËXbD kø'- â

Try it

# Jelly, 38 bytes

    183Ọ“=!"“$%&/()”j,“)!@#$%^&*(”iⱮ€⁸ẠƇ’Q

Try it online!

• Nice! Just one question, I have tried your code with () or (()) as input, but your code then returns nothing. I suppose that's a limitation with what Jelly receives as input? – Charlie Oct 12 '18 at 21:27
• @Charlie Try with '()' and '(())' respectively. Yes, if you don't quote the argument, it's only inputted as a string if it can't be evaluated to a Python 3 value. – Erik the Outgolfer Oct 12 '18 at 21:28

# Retina 0.8.2, 60 bytes

    .+
    $&¶$&
    T`=!"·$%&/()`d`^.+
    T`)!@#$%^&*(`d`.+$
    D`
    Gm`^\d+

Try it online! Link includes test cases.

Explanation:

    .+
    $&¶$&
    Duplicate the input.

    T`=!"·$%&/()`d`^.+
    T`)!@#$%^&*(`d`.+$
    Try to translate each line according to a different keyboard layout.

    D`
    Deduplicate the result.

    Gm`^\d+
    Only keep lines that only contain digits.

• Do you need the m in your last stage? – ovs Oct 13 '18 at 10:38
• @ovs Yes, the matches run first, and then the lines are split and lines containing matches are kept. – Neil Oct 13 '18 at 12:12

# JavaScript (ES6), 99 bytes

    s=>(g=l=>a=s.replace(/./g,c=>l.indexOf(c)))('=!"·$%&/()',b=g(')!@#$%^&*('))>=0?a-b&&b>=0?[a,b]:a:b

Try it online!

### How?

The helper function g attempts to convert the input string using a given layout. Invalid characters are replaced with -1, which results in either a valid but negative-looking numeric string (if only the first character was missing), or an invalid numeric string. Either way, the test x >= 0 is falsy.

# 05AB1E, 42 41 bytes

    •Hhç₁d©u÷^Σ(“ðΣèõĆ
    -•184в2äεIÇk}ʒ®å_}>T%Ù

Port of @dylnan's Jelly answer.
Explanation:

    •Hhç₁d©u÷^Σ(“ðΣèõĆ
    -•184в     # Compressed list [33,34,183,36,37,38,47,40,41,61,33,64,35,36,37,94,38,42,40,41]
    2ä         # Split into two parts: [[33,34,183,36,37,38,47,40,41,61],[33,64,35,36,37,94,38,42,40,41]]
    ε    }     # Map each inner list to:
     IÇ        #  Get the input, and convert each character to its unicode value
       k       #  Then get the index of each unicode value in the current map-list
               #  (this results in -1 if the item doesn't exist)
    ʒ  }       # Filter the resulting list of indices by:
     ®å_       #  If the inner list does not contain any -1
    >          # Increase each index by 1 to make it 1-indexed instead of 0-indexed
    T%         # Take modulo 10 to convert 10 to 0
    Ù          # Uniquify the result-lists (and output implicitly)

See this 05AB1E tip of mine (section How to compress integer lists?) to understand why •Hhç₁d©u÷^Σ(“ðΣèõĆ\n-•184в is [33,34,183,36,37,38,47,40,41,61,33,64,35,36,37,94,38,42,40,41]. This (together with the 2ä) is 1 byte shorter than pushing the two layout strings and taking their unicode values directly.

• The !$ and !!$$%% cases should output only one number, as the result is the same for both layouts and there is no ambiguity. – Charlie Oct 15 '18 at 7:17
• @Charlie Oops, fixed – Kevin Cruijssen Oct 15 '18 at 7:21

# Ruby, 68 bytes

    ->s{%w[=!"·$%&/() )!@#$%^&*(].map{|a|s.tr a,'0-9'}.grep_v(/\D/)|[]}

Try it online!

# Clean, 116 bytes

    import StdEnv,Text
    $s=removeDup[foldl(<+)""d\\r<-["=!\"·$%&/()",")!@#$%^&*("],d<-[[indexOf{c}r\\c<-s]]|all((<) -1)d]

Try it online!

Takes input, and is encoded in CP437. TIO only supports UTF-8, so an escape is used in the demo code to get the literal byte value 250 corresponding to the centre dot (counted as one byte).

• The !$% input should output only one number, not two, as the result is the same for both layouts. – Charlie Oct 15 '18 at 7:15
• @Charlie Fixed it. – Οurous Oct 15 '18 at 8:30

# APL (Dyalog), 40 bytes

Anonymous tacit prefix function. Though unused, · is in the Dyalog single-byte character set. Assumes 0-based indexing (⎕IO←0), which is the default on many systems.

    {∪⍵/⍨~10∊¨⍵}'=!"·$%&/()' ')!@#$%^&*('⍳¨⊂

Try it online!

    ⊂                            the entire argument
    '=!"·$%&/()' ')!@#$%^&*('⍳¨  indices of the characters in each of these strings
    {∪⍵/⍨~10∊¨⍵}                 apply the following lambda (⍵ is the argument):
      10∊¨⍵                      for each list of digits, whether 10 (indicating "not found") is a member thereof
      ~                          negation (i.e. only those where all digits are found)
      ⍵/⍨                        filter the argument by that
      ∪                          find the unique elements of that

# Dart, 125 bytes

    f(s)=>['=!"·\$%&/()',')!@#\$%^&*('].map((b)=>s.split('').map((e)=>b.indexOf(e)).join()).where((e)=>!e.contains('-')).toSet();

Ungolfed:

    f(s){
      return ['=!"·\$%&/()',')!@#\$%^&*(']
        .map(
          (b)=>s.split('').map((e)=>b.indexOf(e)).join()
        ).where(
          (e)=>!e.contains('-')
        ).toSet();
    }

• Creates an array with the two specified key values, from 0 to 9
• For each of them, converts the input string to the corresponding number by using the characters' indexes
• Joins the resulting array to create a number
• Removes any number having a '-' (Dart returns -1 when indexOf can't find a char)
• Returns as a Set to remove duplicates

# T-SQL

    SELECT DISTINCT*FROM(SELECT TRY_CAST(TRANSLATE(v,m,'1234567890')as INT)a
    FROM i,(VALUES('!"·$%&/()='),('!@#$%^&*()'))t(m))b

Joins the input table with the two different character strings, then uses the new SQL 2017 function TRANSLATE to swap out individual characters, and TRY_CAST to see if we end up with a number. If not, TRY_CAST returns NULL. The final outer SELECT DISTINCT combines identical results and filters out the NULLs.
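For readers who want an ungolfed reference, here is a sketch of the same logic in Python; the function name and the test calls are mine, not taken from any answer above.

    LAYOUTS = ['=!"·$%&/()', ')!@#$%^&*(']  # index i = symbol of digit i (Spanish, English)

    def decode(symbols):
        """Return the set of possible numbers for a shifted-symbol string."""
        results = set()
        for layout in LAYOUTS:
            if all(c in layout for c in symbols):  # this layout is consistent with the input
                results.add(''.join(str(layout.index(c)) for c in symbols))
        return results

    print(decode('/()'))   # {'789'}        Spanish only, because of /
    print(decode('$%&'))   # {'456', '457'} layout cannot be inferred
    print(decode('!$'))    # {'14'}         ambiguous, but identical either way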
2020-02-19 02:43:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2949831783771515, "perplexity": 3004.352016645373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143963.79/warc/CC-MAIN-20200219000604-20200219030604-00001.warc.gz"}
https://en.wikipedia.org/wiki/Goppa_code
Goppa code

In mathematics, an algebraic geometric code (AG-code), otherwise known as a Goppa code, is a general type of linear code constructed by using an algebraic curve $X$ over a finite field $\mathbb{F}_q$. Such codes were introduced by Valerii Denisovich Goppa. In particular cases, they can have interesting extremal properties. They should not be confused with the binary Goppa codes that are used, for instance, in the McEliece cryptosystem.

Construction

Traditionally, an AG-code is constructed from a non-singular projective curve X over a finite field $\mathbb{F}_q$ by using a number of fixed distinct $\mathbb{F}_q$-rational points on $X$:

$$\mathcal{P} := \{P_1,\ldots,P_n\} \subset X(\mathbb{F}_q).$$

Let $G$ be a divisor on X, with a support that consists of only rational points and that is disjoint from the $P_i$. Thus

$$\mathcal{P} \cap \operatorname{supp}(G) = \varnothing.$$

By the Riemann–Roch theorem, there is a unique finite-dimensional vector space, $L(G)$, with respect to the divisor $G$. The vector space is a subspace of the function field of X.

There are two main types of AG-codes that can be constructed using the above information.

Function code

The function code (or dual code) with respect to a curve X, a divisor $G$ and the set $\mathcal{P}$ is constructed as follows. Let $D = P_1 + \cdots + P_n$ be a divisor, with the $P_i$ defined as above. We usually denote a Goppa code by C(D,G). We now know all we need to define the Goppa code:

$$C(D,G) = \left\{ \left(f(P_1),\ldots,f(P_n)\right) : f \in L(G) \right\} \subset \mathbb{F}_q^n.$$

For a fixed basis $f_1,\ldots,f_k$ of L(G) over $\mathbb{F}_q$, the corresponding Goppa code in $\mathbb{F}_q^n$ is spanned over $\mathbb{F}_q$ by the vectors

$$\left(f_i(P_1),\ldots,f_i(P_n)\right).$$

Therefore,

$$\begin{bmatrix} f_1(P_1) & \cdots & f_1(P_n) \\ \vdots & & \vdots \\ f_k(P_1) & \cdots & f_k(P_n) \end{bmatrix}$$

is a generator matrix for $C(D,G)$. Equivalently, the code is defined as the image of the evaluation map

$$\alpha : L(G) \to \mathbb{F}_q^n, \qquad f \mapsto \left(f(P_1),\ldots,f(P_n)\right).$$

The following shows how the parameters of the code relate to classical parameters of linear systems of divisors D on C (cf. the Riemann–Roch theorem for more). The notation $\ell(D)$ means the dimension of $L(D)$.

Proposition A. The dimension of the Goppa code $C(D,G)$ is $k = \ell(G) - \ell(G-D)$.

Proof. Since $C(D,G) \cong L(G)/\ker(\alpha)$, we must show that $\ker(\alpha) = L(G-D)$. Let $f \in \ker(\alpha)$; then $f(P_1) = \cdots = f(P_n) = 0$, so $f$ vanishes to order at least one at each $P_i$. Since the support of $G$ is disjoint from the $P_i$, this gives $\operatorname{div}(f) + G - D \geq 0$, and thus $f \in L(G-D)$. Conversely, suppose $f \in L(G-D)$; then $\operatorname{div}(f) + G - D \geq 0$, and since $P_i \notin \operatorname{supp}(G)$, $f$ must vanish at each $P_i$ ($G$ doesn't "fix" the problems with the $-D$, so $f$ must do that instead). It follows that $f(P_1) = \cdots = f(P_n) = 0$.

Proposition B. The minimum distance between two code words is $d \geq n - \deg(G)$.

Proof.
Suppose the Hamming weight of $\alpha(f)$ is $d$. That means that for $n-d$ indices $i_1,\ldots,i_{n-d}$ we have $f(P_{i_k}) = 0$ for $k \in \{1,\ldots,n-d\}$. Then $f \in L(G - P_{i_1} - \cdots - P_{i_{n-d}})$, and

$$\operatorname{div}(f) + G - P_{i_1} - \cdots - P_{i_{n-d}} \geq 0.$$

Taking degrees on both sides and noting that $\deg(\operatorname{div}(f)) = 0$, we get $\deg(G) - (n-d) \geq 0$, so $d \geq n - \deg(G)$.

Residue code

The residue code can be defined as the dual of the function code, or as the residue of some functions at the $P_i$'s.
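As a concrete sanity check on the definitions above: the function code on the projective line with $G=(k-1)P_\infty$, where $L(G)$ has basis $1,x,\ldots,x^{k-1}$, is the classical Reed–Solomon code. Below is a minimal sketch using plain integer arithmetic modulo a prime; the parameter choices are illustrative only.

    p, k = 7, 3                 # field F_7, code dimension k = 3
    points = list(range(1, p))  # P_1..P_6: distinct rational points (the nonzero elements)

    # Generator matrix [f_i(P_j)] for the basis f_i = x^i, i = 0..k-1, of L((k-1)P_inf).
    G = [[pow(x, i, p) for x in points] for i in range(k)]

    # A codeword is the evaluation vector of a message polynomial.
    msg = [2, 5, 1]             # coefficients of 2 + 5x + x^2
    codeword = [sum(m * row[j] for m, row in zip(msg, G)) % p
                for j in range(len(points))]
    print(codeword)

    # Proposition B gives d >= n - deg(G) = 6 - 2 = 4, which matches the
    # Reed-Solomon minimum distance n - k + 1.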
2020-04-03 18:30:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 49, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9715428352355957, "perplexity": 474.6495971709866}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370515113.54/warc/CC-MAIN-20200403154746-20200403184746-00475.warc.gz"}
http://mathhelpforum.com/calculus/145648-vector-equation-thingy-print.html
# A vector equation thingy

• May 19th 2010, 10:04 PM
Dr Zoidburg

A vector equation thingy

Calculate $\int_{C}f(X)\cdot dX$ where $f(x,y)=(x^2+xy,\; y-x^2y)$ and C is parametrized by $x=t,\ y=\frac{1}{t}\ (1\leq t \leq 3)$.

$x=t \Rightarrow \frac{dx}{dt} = 1$

$y=\frac{1}{t} \Rightarrow \frac{dy}{dt} = -\frac{1}{t^2}$

$\int_{C}f(X)\cdot dX = \int_{C}(x^2+xy)\,dx + (y-x^2y)\,dy = \int_{1}^{3}\left(t^2+1-\frac{1}{t^3}+\frac{1}{t}\right)dt$

Integrating that makes for an unpleasant mess: $\frac{t^3}{3}+t+\frac{1}{2t^2}+\ln(t)$, which is why I'm wondering if I'm doing this right. The defining formula I'm using is $\int_{C}f(X(t))\cdot X'(t)\,dt$.
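As a cross-check of the working in this thread, here is a short SymPy sketch (the variable names are mine; it assumes the parametrization given in the post):

    import sympy as sp

    t = sp.symbols('t', positive=True)
    x, y = t, 1/t
    f1, f2 = x**2 + x*y, y - x**2*y              # components of f

    integrand = sp.simplify(f1*sp.diff(x, t) + f2*sp.diff(y, t))
    print(integrand)                             # t**2 + 1 + 1/t - 1/t**3
    print(sp.integrate(integrand, (t, 1, 3)))    # 92/9 + log(3), so the work above checks out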
2016-08-30 05:07:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9847302436828613, "perplexity": 1585.129870037538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982969890.75/warc/CC-MAIN-20160823200929-00044-ip-10-153-172-175.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/158120-tricky-trig-sub-integration-print.html
# Tricky trig sub integration

• Oct 1st 2010, 01:52 PM
highc1157

Tricky trig sub integration

Hi, this is the integral I need to take: $\int \frac{x^2}{\sqrt{9-x^2}}\,dx$. I get partway through a substitution and then don't know what to do.

• Oct 1st 2010, 02:03 PM
pickslides

What substitution did you make? Did you try $x = 3\sin\theta$? Also remember $\sin^2\theta = \frac{1-\cos 2\theta}{2}$.

• Oct 1st 2010, 02:07 PM
Soroban

Hello, highc1157! You didn't complete the substitution . . .

Quote: $\int \frac{x^2}{\sqrt{9-x^2}}\,dx$

$\text{Let: }\ x = 3\sin\theta \quad\Rightarrow\quad dx = 3\cos\theta\,d\theta$

$\text{Substitute: }\ \int \frac{\overbrace{9\sin^2\theta}^{x^2}}{\underbrace{3\cos\theta}_{\sqrt{9-x^2}}}\,\underbrace{(3\cos\theta\,d\theta)}_{dx} \;=\; 9\int \sin^2\theta\,d\theta$

Got it?

• Oct 1st 2010, 03:30 PM
highc1157

Yeah, I get that :) thanks a lot Soroban, but I'm still having trouble. I used the identity for the integral of sine squared and my answer looks nothing like the answer in my book :(

• Oct 2nd 2010, 04:57 PM
mr fantastic

Once you have an answer in terms of $\theta$ you will have to back-substitute $x = 3\sin\theta$ to get the answer in terms of $x$. If you need more help, please show all the details of your working (not just one or two steps). I'm sure your class notes and textbook will have examples to follow ....
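As a cross-check of Soroban's setup, here is a short SymPy sketch (the exact output form may vary between SymPy versions):

    import sympy as sp

    x = sp.symbols('x')
    F = sp.integrate(x**2 / sp.sqrt(9 - x**2), x)
    print(F)  # expected: 9*asin(x/3)/2 - x*sqrt(9 - x**2)/2 (possibly rearranged)

    # Verify independently of the printed form: the derivative minus the
    # original integrand should simplify to zero.
    print(sp.simplify(sp.diff(F, x) - x**2/sp.sqrt(9 - x**2)))  # -> 0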
2017-01-21 11:27:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9201581478118896, "perplexity": 940.3466794736478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00284-ip-10-171-10-70.ec2.internal.warc.gz"}
http://gate-exam.in/EC/Syllabus/General-Aptitude/Analytical-Skills
Question & Answer of Analytical Skills

If the number $715\blacksquare423$ is divisible by 3 ($\blacksquare$ denotes the missing digit in the thousands place), then the smallest whole number in the place of $\blacksquare$ is _________.

What is the value of $1+\frac14+\frac1{16}+\frac1{64}+\frac1{256}+\cdots$ ?

A 1.5 m tall person is standing at a distance of 3 m from a lamp post. The light from the lamp at the top of the post casts her shadow. The length of the shadow is twice her height. What is the height of the lamp post in meters?

Leila aspires to buy a car worth Rs. 10,00,000 after 5 years. What is the minimum amount in Rupees that she should deposit now in a bank which offers 10% annual rate of interest, if the interest is compounded annually?

Two alloys A and B contain gold and copper in the ratios of 2:3 and 3:7 by mass, respectively. Equal masses of alloys A and B are melted to make an alloy C. The ratio of gold to copper in alloy C is ______.

The Cricket Board has long recognized John's potential as a leader of the team. However, his on-field temper has always been a matter of concern for them since his junior days. While this aggression has filled stadia with die-hard fans, it has taken a toll on his own batting. Until recently, it appeared that he found it difficult to convert his aggression into big scores. Over the past three seasons though, that picture of John has been replaced by a cerebral, calculative and successful batsman-captain. After many years, it appears that the team has finally found a complete captain. Which of the following statements can be logically inferred from the above paragraph? (i) Even as a junior cricketer, John was considered a good captain. (ii) Finding a complete captain is a challenge. (iii) Fans and the Cricket Board have differing views on what they want in a captain. (iv) Over the past three seasons John has accumulated big scores.

A cab was involved in a hit and run accident at night. You are given the following data about the cabs in the city and the accident. (i) 85% of cabs in the city are green and the remaining cabs are blue. (ii) A witness identified the cab involved in the accident as blue. (iii) It is known that a witness can correctly identify the cab colour only 80% of the time. Which of the following options is closest to the probability that the accident was caused by a blue cab?

A coastal region with unparalleled beauty is home to many species of animals. It is dotted with coral reefs and unspoilt white sandy beaches. It has remained inaccessible to tourists due to poor connectivity and lack of accommodation. A company has spotted the opportunity and is planning to develop a luxury resort with helicopter service to the nearest major city airport. Environmentalists are upset that this would lead to the region becoming crowded and polluted like any other major beach resort. Which one of the following statements can be logically inferred from the information given in the above paragraph?

In the summer, water consumption is known to decrease overall by 25%. A Water Board official states that in the summer household consumption decreases by 20%, while other consumption increases by 70%. Which of the following statements is correct?

40% of deaths on city roads may be attributed to drunken driving. The number of degrees needed to represent this as a slice of a pie chart is

Some tables are shelves. Some shelves are chairs. All chairs are benches. Which of the following conclusions can be deduced from the preceding sentences?
i. At least one bench is a table
ii. At least one shelf is a bench
iii. At least one chair is a table
iv. All benches are chairs

S, T, U, V, W, X, Y, and Z are seated around a circular table. T's neighbours are Y and V. Z is seated third to the left of T and second to the right of S. U's neighbours are S and Y; and T and W are not seated opposite each other. Who is third to the left of V?

Trucks (10 m long) and cars (5 m long) go on a single lane bridge. There must be a gap of at least 20 m after each truck and a gap of at least 15 m after each car. Trucks and cars travel at a speed of 36 km/h. If cars and trucks go alternately, what is the maximum number of vehicles that can use the bridge in one hour?

There are 3 Indians and 3 Chinese in a group of 6 people. How many subgroups of this group can we choose so that every subgroup has at least one Indian?

A contour line joins locations having the same height above the mean sea level. The following is a contour plot of a geographical region. Contour lines are shown at 25 m intervals in this plot. The path from P to Q is best described by

A rule states that in order to drink beer, one must be over 18 years old. In a bar, there are 4 people. P is 16 years old, Q is 25 years old, R is drinking milkshake and S is drinking a beer. What must be checked to ensure that the rule is being followed?

Fatima starts from P, goes North for 3 km, and then East for 4 km to reach point Q. She then turns to face point P and goes 15 km in that direction. She then goes North for 6 km. How far is she from point P, and in which direction should she go to reach point P?

500 students are taking one or more courses out of Chemistry, Physics, and Mathematics. Registration records indicate course enrolment as follows: Chemistry (329), Physics (186), Mathematics (295), Chemistry and Physics (83), Chemistry and Mathematics (217), and Physics and Mathematics (63). How many students are taking all 3 subjects?

Each of P, Q, R, S, W, X, Y and Z has been married at most once. X and Y are married and have two children P and Q. Z is the grandfather of the daughter S of P. Further, Z and W are married and are parents of R. Which one of the following must necessarily be FALSE?

1200 men and 500 women can build a bridge in 2 weeks; 900 men and 250 women will take 3 weeks to build the same bridge. How many men will be needed to build the bridge in one week?

The number of 3-digit numbers such that the digit 1 is never to the immediate right of 2 is

A contour line joins locations having the same height above the mean sea level. The following is a contour plot of a geographical region. Contour lines are shown at 25 m intervals in this plot. Which of the following is the steepest path leaving from P?

In a huge pile of apples and oranges, both ripe and unripe mixed together, 15% are unripe fruits. Of the unripe fruits, 45% are apples. Of the ripe ones, 66% are oranges. If the pile contains a total of 5692000 fruits, how many of them are apples?

Michael lives 10 km away from where I live. Ahmed lives 5 km away and Susan lives 7 km away from where I live. Arun is farther away than Ahmed but closer than Susan from where I live. From the information provided here, what is one possible distance (in km) at which I live from Arun's place?

A person moving through a tuberculosis prone zone has a 50% probability of becoming infected. However, only 30% of infected people develop the disease.
What percentage of people moving through a tuberculosis prone zone remains infected but does not show symptoms of the disease?

If ${q}^{-a}=\frac{1}{r}$ and ${r}^{-b}=\frac{1}{s}$ and ${s}^{-c}=\frac{1}{q}$, the value of $abc$ is ___________.

P, Q, R and S are working on a project. Q can finish the task in 25 days, working alone for 12 hours a day. R can finish the task in 50 days, working alone for 12 hours per day. Q worked 12 hours a day but took sick leave in the beginning for two days. R worked 18 hours a day on all days. What is the ratio of work done by Q and R after 7 days from the start of the project?

Given $(9\ \mathrm{inches})^{1/2}=(0.25\ \mathrm{yards})^{1/2}$, which one of the following statements is TRUE?

S, M, E and F are working in shifts in a team to finish a project. M works with twice the efficiency of the others but for half as many days as E worked. S and M have 6 hour shifts in a day, whereas E and F have 12 hour shifts. What is the ratio of the contribution of M to the contribution of E in the project?

The Venn diagram shows the preference of the student population for leisure activities. From the data given, the number of students who like to read books or play sports is _______.

Two and a quarter hours back, when seen in a mirror, the reflection of a wall clock without number markings seemed to show 1:30. What is the actual current time shown by the clock?

M and N start from the same location. M travels 10 km East and then 10 km North-East. N travels 5 km South and then 4 km South-East. What is the shortest distance (in km) between M and N at the end of their travel?

A wire of length 340 mm is to be cut into two parts. One of the parts is to be made into a square and the other into a rectangle whose sides are in the ratio of 1:2. What is the length of the side of the square (in mm) such that the combined area of the square and the rectangle is a MINIMUM?

An apple costs Rs. 10. An onion costs Rs. 8. Select the most suitable sentence with respect to grammar and usage.

The number that least fits this set: (324, 441, 97 and 64) is ________.

It takes 10 s and 15 s, respectively, for two trains travelling at different constant speeds to completely pass a telegraph post. The length of the first train is 120 m and that of the second train is 150 m. The magnitude of the difference in the speeds of the two trains (in m/s) is ____________.

The velocity V of a vehicle along a straight line is measured in m/s and plotted as shown with respect to time in seconds. At the end of 7 seconds, how much will the odometer reading increase by (in m)?

Find the area bounded by the lines 3x+2y=14 and 2x-3y=5 in the first quadrant.

A straight line is fit to a data set (ln x, y). This line intercepts the abscissa at ln x = 0.1 and has a slope of −0.02. What is the value of y at x = 5 from the fit?

Operators $\square$, $\Diamond$ and $\rightarrow$ are defined by $a\square b=\frac{a-b}{a+b}$; $a\Diamond b=\frac{a+b}{a-b}$; $a\rightarrow b=ab$. Find the value of $(66\square 6)\rightarrow(66\Diamond 6)$.

If $\log_x(5/7)=-\frac{1}{3}$, then the value of x is

Find the missing value:

A cube of side 3 units is formed using a set of smaller cubes of side 1 unit. Find the proportion of the number of faces of the smaller cubes visible to those which are NOT visible.
An electric bus has onboard instruments that report the total electricity consumed since the start of the trip as well as the total distance covered. During a single day of operation, the bus travels on stretches M, N, O and P, in that order. The cumulative distances travelled and the corresponding electricity consumption are shown in the table below:

    Stretch   Cumulative distance (km)   Electricity used (kWh)
    M         20                         12
    N         45                         25
    O         75                         45
    P         100                        57

The stretch where the electricity consumption per km is minimum is

Ram and Ramesh appeared in an interview for two vacancies in the same department. The probability of Ram's selection is 1/6 and that of Ramesh is 1/8. What is the probability that only one of them will be selected?

A tiger is 50 leaps of its own behind a deer. The tiger takes 5 leaps per minute to the deer's 4. If the tiger and the deer cover 8 metres and 5 metres per leap respectively, what distance in metres will the tiger have to run before it catches the deer?

If $a^2+b^2+c^2=1$, then $ab+bc+ca$ lies in the interval

Find the missing sequence in the letter series below: A, CD, GHI, ?, UVWXY

If $x > y > 1$, which of the following must be true? i. $\ln x > \ln y$   ii. $e^x > e^y$   iii. $y^x > x^y$   iv. $\cos x > \cos y$

From a circular sheet of paper of radius 30 cm, a sector of 10% area is removed. If the remaining part is used to make a conical surface, then the ratio of the radius and height of the cone is ______.

log tan 1° + log tan 2° + …… + log tan 89° is

Ms. X will be in Bagdogra from 01/05/2014 to 20/05/2014 and from 22/05/2014 to 31/05/2014. On the morning of 21/05/2014, she will reach Kochi via Mumbai. Which one of the statements below is logically valid and can be inferred from the above sentences?

The statistics of runs scored in a series by four batsmen are provided in the following table. Who is the most consistent batsman of these four?

    Batsman   Average   Standard deviation
    K         31.2      5.21
    L         46.0      6.35
    M         54.4      6.22
    N         17.9      5.90

What is the next number in the series? 12 35 81 173 357 ____

Find the odd one from the following group: W,E,K,O   I,Q,W,A   F,N,T,X   N,V,B,D

For submitting tax returns, all resident males with annual income below Rs 10 lakh should fill up Form P and all resident females with income below Rs 8 lakh should fill up Form Q. All people with incomes above Rs 10 lakh should fill up Form R, except non-residents with income above Rs 15 lakh, who should fill up Form S. All others should fill Form T. An example of a person who should fill Form T is

A train that is 280 metres long, travelling at a uniform speed, crosses a platform in 60 seconds and passes a man standing on the platform in 20 seconds. What is the length of the platform in metres?

The exports and imports (in crores of Rs.) of a country from 2000 to 2007 are given in the following bar chart. If the trade deficit is defined as the excess of imports over exports, in which year is the trade deficit 1/5th of the exports?

You are given three coins: one has heads on both faces, the second has tails on both faces, and the third has a head on one face and a tail on the other. You choose a coin at random and toss it, and it comes up heads. The probability that the other face is tails is

A regular die has six sides with numbers 1 to 6 marked on its sides. A very large number of throws show the following frequencies of occurrence: 1 → 0.167; 2 → 0.167; 3 → 0.152; 4 → 0.166; 5 → 0.168; 6 → 0.180.
We call this die

Fill in the missing number in the series 2 3 6 15 ____ 157.5 630

Find the odd one in the following group: Q,W,Z,B   B,H,K,M   W,C,G,J   M,S,V,X

The sum of eight consecutive odd numbers is 656. The average of four consecutive even numbers is 87. What is the sum of the smallest odd number and the second largest even number?

The total exports and revenues from the exports of a country are given in the two charts shown below. The pie chart for exports shows the quantity of each item exported as a percentage of the total quantity of exports. The pie chart for the revenues shows the percentage of the total revenue generated through export of each item. The total quantity of exports of all the items is 500 thousand tonnes and the total revenues are 250 crore rupees. Which item among the following has generated the maximum revenue per kg?

It takes 30 minutes to empty a half-full tank by draining it at a constant rate. It is decided to simultaneously pump water into the half-full tank while draining it. What is the rate at which water has to be pumped in so that it gets fully filled in 10 minutes?

The next term in the series 81, 54, 36, 24, … is ________

In which of the following options will the expression P < M be definitely true?

Find the next term in the sequence: 7G, 11K, 13M, ___

The multi-level hierarchical pie chart shows the population of animals in a reserve forest. The correct conclusions from this information are: (i) Butterflies are birds (ii) There are more tigers in this forest than red ants (iii) All reptiles in this forest are either snakes or crocodiles (iv) Elephants are the largest mammals in this forest

A man can row at 8 km per hour in still water. If it takes him thrice as long to row upstream as to row downstream, then find the stream velocity in km per hour.

A firm producing air purifiers sold 200 units in 2012. The following pie chart presents the share of raw material, labour, energy, plant & machinery, and transportation costs in the total manufacturing cost of the firm in 2012. The expenditure on labour in 2012 is Rs. 4,50,000. In 2013, the raw material expenses increased by 30% and all other expenses increased by 20%. If the company registered a profit of Rs. 10 lakhs in 2012, at what price (in Rs.) was each air purifier sold?

A batch of one hundred bulbs is inspected by testing four randomly chosen bulbs. The batch is rejected if even one of the bulbs is defective. A batch typically has five defective bulbs. The probability that the current batch is accepted is _____

Let $f(x,y)={x}^{n}{y}^{m}=P$. If x is doubled and y is halved, the new value of f is

In a sequence of 12 consecutive odd numbers, the sum of the first 5 numbers is 425. What is the sum of the last 5 numbers in the sequence?

Find the next term in the sequence: 13M, 17Q, 19S, ___

If 'KCLFTSB' stands for 'best of luck' and 'SHSWDG' stands for 'good wishes', which of the following indicates 'ace the exam'?

Industrial consumption of power doubled from 2000-2001 to 2010-2011. Find the annual rate of increase in percent, assuming it to be uniform over the years.

A firm producing air purifiers sold 200 units in 2012. The following pie chart presents the share of raw material, labour, energy, plant & machinery, and transportation costs in the total manufacturing cost of the firm in 2012. The expenditure on labour in 2012 is Rs. 4,50,000. In 2013, the raw material expenses increased by 30% and all other expenses increased by 20%.
What is the percentage increase in total cost for the company in 2013?

A five digit number is formed using the digits 1, 3, 5, 7 and 9 without repeating any of them. What is the sum of all such possible five digit numbers?

In the summer of 2012, in New Delhi, the mean temperature of Monday to Wednesday was 41°C and of Tuesday to Thursday was 43°C. If the temperature on Thursday was 15% higher than that of Monday, then the temperature in °C on Thursday was

A car travels 8 km in the first quarter of an hour, 6 km in the second quarter and 16 km in the third quarter. The average speed of the car in km per hour over the entire journey is

Find the sum to n terms of the series 10 + 84 + 734 + .....

The set of values of p for which the roots of the equation $3x^2+2x+p(p-1)=0$ are of opposite sign is

What is the chance that a leap year, selected at random, will contain 53 Saturdays?

If $(1.001)^{1259} = 3.52$ and $(1.001)^{2062} = 7.85$, then $(1.001)^{3321} =$

Raju has 14 currency notes in his pocket consisting of only Rs. 20 notes and Rs. 10 notes. The total money value of the notes is Rs. 230. The number of Rs. 10 notes that Raju has is

There are eight bags of rice looking alike, seven of which have equal weight and one is slightly heavier. The weighing balance is of unlimited capacity. Using this balance, the minimum number of weighings required to identify the heavier bag is

The data given in the following table summarizes the monthly budget of an average household.

    Category          Amount (Rs.)
    Food              4000
    Clothing          1200
    Rent              2000
    Savings           1500
    Other expenses    1800

The approximate percentage of the monthly budget NOT spent on savings is
2019-06-17 10:55:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 15, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5439624190330505, "perplexity": 893.0695745499753}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998473.44/warc/CC-MAIN-20190617103006-20190617125006-00312.warc.gz"}
https://ora.ox.ac.uk/objects/uuid:45bdd638-e460-4708-a67f-a7fe283b6528
Journal article ### Precision measurement of the X(3872) mass in J/ψπ+π- decays Abstract: We present an analysis of the mass of the X(3872) reconstructed via its decay to J/ψπ+π- using 2.4fb-1 of integrated luminosity from pp̄ collisions at s=1.96TeV, collected with the CDF II detector at the Fermilab Tevatron. The possible existence of two nearby mass states is investigated. Within the limits of our experimental resolution the data are consistent with a single state, and having no evidence for two states we set upper limits on the mass difference between two hypothetical states f... ### Authors Journal: Physical Review Letters Volume: 103 Issue: 15 Publication date: 2009-10-05 DOI: EISSN: 1079-7114 ISSN: 0031-9007 URN: uuid:45bdd638-e460-4708-a67f-a7fe283b6528 Source identifiers: 106093 Local pid: pubs:106093 Language: English
2021-07-29 13:32:33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8276289105415344, "perplexity": 2834.1517211848263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153857.70/warc/CC-MAIN-20210729105515-20210729135515-00537.warc.gz"}
https://valutafbqj.web.app/9109/99428.html
The study of the uptake of Mn(II) by Mohr's salt was made from 4° to 50° to find out whether there was any prominent change in the pattern of uptake between these temperatures. In order to determine constancy in the distribution factor, the percentage of solid carrier was varied in all the experiments, including the case of mixed crystal formation.

Salt in sports drinks (which we used as our sample) is used to combat the sodium loss in perspiration. In most physical activities that could last longer than two hours, or that result in heavy sweating, 450 mg of sodium or more in the sports drink is recommended to maintain plasma volume and plasma sodium levels. Samples taken during the olive fermentation period were analyzed for salt content by the Mohr titration method, which is carried out in neutral solution.

Mohr's salt is ferrous ammonium sulfate, FeSO4·(NH4)2SO4·6H2O. The hexahydrate has a molar mass of about 392.1 g/mol and appears as blue-green crystals. It is a primary standard. In redox titrations (for example against potassium permanganate), Fe is oxidised from Fe2+ to Fe3+ as per the reaction Fe2+ → Fe3+ + e-. The n-factor is just the number of electrons supplied or accepted per mole of the substance, so the n-factor of Mohr's salt is 1 and its equivalent mass is 392/1 = 392. (For acids, the n-factor is defined as the number of H+ ions replaced by 1 mole of the acid in a reaction.)

Preparation of ferrous ammonium sulfate (Mohr's salt): the weight of Mohr's salt required to prepare 1000 ml of a 1 M solution is 392 g, so to prepare 250 cm³ of an M/20 solution the mass required is (392/20) × (250/1000) = 4.9 g. Accurately weigh 4.9 g of Mohr's salt using a chemical balance and a watch glass. Procedure: first weigh a clean dry bottle using the chemical balance. Result: the strength of the supplied unknown-strength Mohr's salt solution was …… g/L.
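A small sketch of the preparation arithmetic above (the function name is mine; the molar mass is rounded):

    # Mass of Mohr's salt needed for a given molarity and volume.
    MOLAR_MASS = 392.14  # g/mol for FeSO4.(NH4)2SO4.6H2O

    def mass_needed(molarity, volume_ml):
        """Return the mass (g) of Mohr's salt for the requested solution."""
        return molarity * (volume_ml / 1000.0) * MOLAR_MASS

    print(mass_needed(1 / 20, 250))  # ~4.90 g, matching the worked example above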
2022-09-27 04:08:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3335295617580414, "perplexity": 14173.045136284076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00509.warc.gz"}
http://www.r-bloggers.com/r-index-between-two-products-is-somewhat-dependent-on-other-products/
# R index between two products is somewhat dependent on other products

March 12, 2012

(This article was first published on Wiekvoet, and kindly contributed to R-bloggers)

I explained earlier how the R-index is used in sensory science to examine ranking data. The legitimization for using the R-index is its link with d' and with the Mann-Whitney statistic. In this post I show that the R index between two products depends somewhat on the number of other products and on their positions. It is a small effect. However, if data is analyzed by looking only and rigidly at the p value, the result might change from just under significance to just over it. Using simulations, I will show that the presence of other samples influences the R-index. I think this effect occurs because the R index is, mathematically, calculated from an aggregated matrix of counts of products against ranks. It is my feeling that when there are more products, there are fewer chances to get equal rankings than with few products, and hence slightly different scores.

R index calculation

Below is the calculation when comparing 2 products out of a total of 4. The rank counts form a matrix:

            rank 1   rank 2   rank 3   rank 4
product 1     a        b        c        d
product 2     e        f        g        h

Note that a to h are the counts in the respective cells. The R index is composed of three parts:

1. The number of wins of product 1 over product 2: a*(f+g+h) + b*(g+h) + c*h
2. The number of equal rankings divided by two: (a*e + b*f + c*g + d*h)/2
3. A normalization term: (a+b+c+d)*(e+f+g+h)

R index = 100 * (wins + equal) / normalization

Effect of number of products

Figure 1 shows the simulated R index dependence on the number of products, using a ranking with 25 panelists. With a low number of products, the distribution of the R index is a bit wider than with more products. Most of the difference in distribution is in the region of 3 to 6 products, which is also the number of products often used in sensory work. (Critical values of the R-index are given by the red and blue lines; Bi and O'Mahony 1995 and 2007, Journal of Sensory Studies.)

Effect of neighborhood of other products

Figure 2 shows the dependence on the location of the other products. I have chosen 5 products, two of which have the same location. The other 3 move away from this location. Again 25 panelists. The figure shows that the R-index between the two equal products has a smaller distribution under H0 (no product differences) when all products are similar. This is about the same as the 5-product case in the first plot. When the other products are far away, the distribution becomes wider, getting closer to the 3-product distribution in figure 1. Note that with one product, rather than three, moving away from the centre location, the effect is smaller. The effect of the number of panelists is for a next post.
Code for figure 1:

library(ggplot2)

# generate a ranking experiment with no product differences
makeRanksNoDiff <- function(nprod, nrep) {
  inList <- lapply(1:nrep, function(x) sample(1:nprod, nprod))
  data.frame(person = factor(rep(1:nrep, each = nprod)),
             prod   = factor(rep(1:nprod, times = nrep)),
             rank   = unlist(inList))
}

# R-index between two products from their rank-count tables
tab2Rindex <- function(t1, t2) {
  Rindex <- crossprod(rev(t1)[-1], cumsum(rev(t2[-1]))) + 0.5 * crossprod(t1, t2)
  100 * Rindex / (sum(t1) * sum(t2))
}

# all pairwise R-indices in one experiment
FastAllRindex <- function(rankExperiment) {
  crst <- xtabs(~ prod + rank, data = rankExperiment)
  nprod <- nlevels(rankExperiment$prod)
  Rindices <- unlist(
    lapply(1:(nprod - 1), function(p1) {
      lapply((p1 + 1):nprod, function(p2) tab2Rindex(crst[p1, ], crst[p2, ]))
    })
  )
  Rindices
}

nprod <- seq(3, 25, by = 1)
last <- lapply(nprod, function(xo) {
  nsamples <- ceiling(10000 / xo)
  li <- lapply(1:nsamples, function(xi) {
    re <- makeRanksNoDiff(nprod = xo, nrep = 25)
    FastAllRindex(re)
  })
  li2 <- as.data.frame(do.call(rbind, li))
  li2$nprod <- xo
  li2
})
last2 <- lapply(last, function(x) {
  qq <- quantile(as.matrix(x[, grep('nprod', names(x), invert = TRUE)]),
                 c(0.025, .5, .975))
  qq <- as.data.frame(t(qq))
  qq$nprod <- x$nprod[1]
  qq
})
summy <- do.call(rbind, last2)
g1 <- ggplot(summy, aes(nprod, `50%`))
g1 <- g1 + geom_errorbar(aes(ymax = `97.5%`, ymin = `2.5%`))
g1 <- g1 + scale_y_continuous(name = 'R-index')
g1 <- g1 + scale_x_continuous(name = 'Number of products to compare')
g1 <- g1 + geom_hline(yintercept = 50 + 18.57 * c(-1, 1), colour = 'red')
g1 <- g1 + geom_hline(yintercept = 50 + 15.21 * c(-1, 1), colour = 'blue')
g1

Code for figure 2:

# generate a ranking experiment where products differ in location
makeRanksDiff <- function(prods, nrep) {
  nprod <- length(prods)
  inList <- lapply(1:nrep, function(x) rank(rnorm(n = nprod, mean = prods)))
  data.frame(person = factor(rep(1:nrep, each = nprod)),
             prod   = factor(rep(1:nprod, times = nrep)),
             rank   = unlist(inList))
}

location <- seq(0, 3, by = .25)
last <- lapply(location, function(xo) {
  li <- sapply(1:10000, function(xi) {
    re <- makeRanksDiff(prods = c(0, 0, xo, xo, xo), nrep = 25)
    crst <- xtabs(~ prod + rank, data = re)
    tab2Rindex(crst[1, ], crst[2, ])
  })
  li2 <- data.frame(location = xo, Rindex = li)
  li2
})
last2 <- lapply(last, function(x) {
  qq <- quantile(x$Rindex, c(0.025, .5, .975))
  qq <- as.data.frame(t(qq))
  qq$location <- x$location[1]
  qq
})
summy <- do.call(rbind, last2)
g1 <- ggplot(summy, aes(location, `50%`))
g1 <- g1 + geom_errorbar(aes(ymax = `97.5%`, ymin = `2.5%`))
g1 <- g1 + scale_y_continuous(name = 'R-index between equal products')
g1 <- g1 + scale_x_continuous(name = 'Location of odd products')
g1 <- g1 + geom_hline(yintercept = 50 + 18.57 * c(-1, 1), colour = 'red')
g1 <- g1 + geom_hline(yintercept = 50 + 15.21 * c(-1, 1), colour = 'blue')
g1
2013-12-12 00:26:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3716125786304474, "perplexity": 7368.954569165137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164120234/warc/CC-MAIN-20131204133520-00061-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.transtutors.com/questions/1-we-showed-in-the-text-that-the-value-of-a-call-option-increases-with-the-volatilit-1308967.htm
# 1. We showed in the text that the value of a call option increases with the volatility of the... 1.   We showed in the text that the value of a call option increases with the volatility of the stock. Is this also true of put option values? Use the put-call parity relationship as well as a numerical example to prove your answer.
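A sketch of the standard argument (added here for completeness; the numbers below are illustrative, not from the textbook): by put-call parity for European options on a non-dividend-paying stock,

$$P = C - S_0 + Ke^{-rT}.$$

Since $S_0$, $K$, $r$, and $T$ do not depend on volatility, $\frac{\partial P}{\partial \sigma} = \frac{\partial C}{\partial \sigma} > 0$: whatever value volatility adds to the call, it adds to the put. Numerically, take $S_0 = K = 100$, $r = 0$, $T = 1$, so that $P = C$; if higher volatility lifts the call value from 8 to 12, the put value rises from 8 to 12 as well.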
2018-07-16 01:05:42
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9518612623214722, "perplexity": 202.64464467231838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589029.26/warc/CC-MAIN-20180716002413-20180716022413-00250.warc.gz"}
https://tex.stackexchange.com/questions/592352/error-even-after-enabling-shell-escape-in-texworks-and-installing-gnuplots-5-4
# Error even after enabling --shell-escape in TeXworks and installing gnuplot 5.4, when using gnuplot {4*x**2 - 5}

While troubleshooting the code taken from page 42/571 of the pgfplots package manual:

\documentclass{standalone}
\usepackage{tikz}
\usepackage{pgfplots}
\pgfplotsset{width=7cm,compat=1.17}
\begin{document}
\begin{tikzpicture}
\begin{axis}
\addplot+ [id=parable,domain=-5:5,] gnuplot {4*x**2 - 5} node [pin=180:{$4x^2-5$}]{};
\end{axis}
\end{tikzpicture}
\end{document}

I followed the steps below:

1. Successfully installed gnuplot version 5.4 patchlevel 1.
2. In my TeXworks window, went to Edit -> Preferences, shown below:
3. The TeXworks preferences window pops up. I hit the Typesetting tab. In the Processing tools section I hit pdfLaTeX, as below:
4. Then hit the Edit... box to get the tool configuration window, as below:
5. I hit the + box and typed --shell-escape, as below:
6. Using the up-arrow, I moved --shell-escape to the top, above $fullname, and hit OK:
7. Then ran the code using pdfLaTeX, getting the following error:

! Package pgfplots Error: Sorry, the gnuplot-result file 'f2_addplot_pgfplots_pp42.parable.table' could not be found. Maybe you need to enable the shell-escape feature? For pdflatex, this is '>> pdflatex -shell-escape'. You can also invoke '>> gnuplot .gnuplot' manually on the respective gnuplot file.. See the pgfplots package documentation for explanation. Type H for immediate help. ... l.8 ...t {4*x**2 - 5} node [pin=180:{$4x^2-5$}]{};

Do you know how to fix it?

• Have you installed gnuplot at your PATH? Apr 11 at 6:34
• Yes, installed successfully gnuplot version 5.4 patchlevel 1 – Aria Apr 11 at 6:34
• I mean like in the picture (fnu.uni-hamburg.de/16917560/…)? Apr 11 at 6:36
• I did not check the box you highlighted. I'll try and let you know. – Aria Apr 11 at 6:38
• Nice to hear. Now you can get started Apr 11 at 6:59

If you want to combine gnuplot and LaTeX, you have to

1. install gnuplot on your PATH, and
2. run pdflatex with shell-escape, i.e. pdflatex -shell-escape ... or pdflatex --enable-write18 ...

The first point can be checked by typing gnuplot --version and PATH in your cmd, which should return gnuplot ... patchlevel ... and a list of the elements in your PATH (something like C:\Program Files\gnuplot\bin). The easiest way is therefore to enable the corresponding option during the installation of gnuplot.
2021-09-25 07:25:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7940264344215393, "perplexity": 8929.265988717438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057598.98/warc/CC-MAIN-20210925052020-20210925082020-00299.warc.gz"}
https://socratic.org/questions/44-64-is-19-9-of-x-what-is-the-value-of-x#542238
# 44.64 is 19.9% of x. What is the value of x?

## I don't understand how to get x, and there is no information about what the 19.9% is taken from.

Jan 27, 2018

$\implies x = 224.321$ (approximately)

#### Explanation:

Okay, cool. So the question says $44.64$ is 19.9% of $x$; that is, 19.9% of $x = 44.64$:

$\implies \frac{19.9}{100} \cdot x = 44.64$

$\implies x = 44.64 \cdot \frac{100}{19.9}$

$\implies x = 224.321$ (approximately)

Done! :) -Sahar

Jan 27, 2018

$x = 224.32$

#### Explanation:

The question would be easier to understand if it was written something like this: if $44.64$ is 19.9% of a number, what is the number? Remember that the number itself represents 100%.

An answer by another contributor shows you the equation method of finding $x$. Regard it as a proportion: $44.64$ is to 19.9% as $x$ is to 100%:

$$\frac{44.64}{19.9} = \frac{x}{100} \qquad \frac{\leftarrow\ \text{numbers}}{\leftarrow\ \text{percents}}$$

Multiplying by $100$ gives:

$$\frac{44.64 \times 100}{19.9} = x$$

$$x = 224.32$$

You can do ANY percentage calculation using direct proportion as long as you match the correct values with the correct percentages.

Jan 27, 2018

$x = 224.32$ to 2 decimal places. Always specify the degree (amount) of rounding.

#### Explanation:

Assumption: you mean $44.64$ is 19.9% of $x$.

The teaching bit:

First let's consider what a percentage is. Basically it is just a fraction, but one where the denominator (bottom number) is fixed at 100. There are two ways of writing a percentage: the whole fraction itself, or just the fraction's numerator (top number) followed by the symbol %.

How can we relate the % symbol? By example: suppose we had $\frac{20}{100}$. This is the same as $20 \times \frac{1}{100}$. Written the other way we have 20%. So you may, if you wish, consider % as representing $\times \frac{1}{100}$.

Answering the question - full explanation given:

Note that in mathematics the word 'of' can normally be translated to multiply. Example: $2$ of $6 \to 2 \times 6$.

Breaking down the question into its component parts:

$44.64$ is $\to\ 44.64 = \,?$
19.9% $\to\ 44.64 = \frac{19.9}{100}\,?$
of $\to\ 44.64 = \frac{19.9}{100} \times\, ?$
$x \to\ 44.64 = \frac{19.9}{100} \times x$

Written as normally seen in mathematics we have:

$$44.64 = \frac{19.9}{100}\, x$$

We need to determine the value of $x$, so the objective is to get $x$ on its own on one side of the = and everything else on the other side. Thus we need to 'get rid' of the $\frac{19.9}{100}$ on the right. This is done by turning it into 1, since anything multiplied by 1 does not change its value.

Multiply both sides by $\frac{100}{19.9}$:

$$44.64 \times \frac{100}{19.9} = \frac{19.9}{100}\, x \times \frac{100}{19.9}$$

$$\frac{4464}{19.9} = x \times \frac{19.9}{19.9} \times \frac{100}{100} = x \times 1 \times 1$$

If you use decimals you will not have a precise-value answer in this case, so it is better to stick to fractions. However, the question format is decimal to 2 places, and it is customary to give the answer in the same format as the question unless instructed otherwise.

$$\frac{4464}{19.9} \times \frac{10}{10} = \frac{44640}{199} \quad \text{as an exact answer}$$

$x = 224.32$ to 2 decimal places (not precise).
2021-10-21 14:06:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 46, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8592199087142944, "perplexity": 1082.5609151390975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585424.97/warc/CC-MAIN-20211021133500-20211021163500-00339.warc.gz"}
https://firmfunda.com/maths/calculus-limits/basics-limit-of-function/continuity-limits
# Continuity of a Function at an Input Value

what you'll learn...

Value of a function:
»  value of $f(x)$ evaluated at an input: $f(x)|_{x=a}$, written $f(a)$
»  left-hand limit: $\lim_{x \to a-} f(x)$
»  right-hand limit: $\lim_{x \to a+} f(x)$

A function $f(x)$ at $x = a$ is:
»  continuous: if $f(a)$ = LHL = RHL
»  defined by value: if $f(a)$ is a real number
»  defined by limit: if $f(a) = \frac{0}{0}$ and LHL = RHL
»  not defined: if LHL $\ne$ RHL and $f(a) \notin \mathbb{R}$

Limit of a function:
»  if the left-hand limit and the right-hand limit are equal, the common value is referred to as the "limit of the function"

another motivation

So far, the motivation to examine limits of a function was to evaluate the function at an input value of the argument variable where the function evaluates to an 'indeterminate value'. In this topic, another motivation to examine limits is explained.

Consider $f(x) = \frac{1}{x-1}$ at $x = 1$. Directly substituting $x = 1$:

$$f(1) = \frac{1}{1-1} = +\infty$$

$$\lim_{x \to 1+} f(x) = \frac{1}{1+\delta-1} = \frac{1}{\delta} = \frac{1}{0} = +\infty$$

$$\lim_{x \to 1-} f(x) = \frac{1}{1-\delta-1} = \frac{1}{-\delta} = -\frac{1}{0} = -\infty$$

For the function $f(x) = \frac{1}{x-1}$:
•  $f(1) = \infty$
•  $\lim_{x \to 1+} f(x) = +\infty$
•  $\lim_{x \to 1-} f(x) = -\infty$

The plot of the function is given in the figure. For a value just less than $x = 1$, the function is $-\infty$, and at $x = 1$ the function becomes $\infty$. The function is not continuous.

continuous

A function $f(x)$ at a given input value $x = a$ is continuous if all three of the following are equal:

$$f(x)|_{x=a} = \lim_{x \to a-} f(x) = \lim_{x \to a+} f(x)$$

The word 'continuous' means: unbroken, continuing from one side to another without a pause in between. A function is continuous at an input value if the following three are equal:
•  the function evaluated at the input,
•  the left-hand limit of the function at that input value, and
•  the right-hand limit of the function at that input value.

example

Given the function $f(x) = 2x^2$, is it continuous at $x = 0$? The answer is 'Yes, continuous': evaluate the three values of the function and they are equal.

summary

Continuity of a function: a function $f(x)$ is continuous at $x = a$ if all the following three have a defined value and are equal:
•  the value at the input, $f(x)|_{x=a}$
•  the left-hand limit, $\lim_{x \to a-} f(x)$
•  the right-hand limit, $\lim_{x \to a+} f(x)$

limit of a function

Given that a function $f(x)$ evaluates to an indeterminate value at $x = a$: to evaluate the expected value of $f(x)|_{x=a}$, we examine
•  the left-hand limit, $\lim_{x \to a-} f(x)$, and
•  the right-hand limit, $\lim_{x \to a+} f(x)$.

If these two limits are equal, the common value is referred to as the "limit of the function at the input value", $\lim_{x \to a} f(x)$. The significance of this is that most functions have equal right-hand and left-hand limits.

summary

Limit of a function: given a function $f(x)$ with $f(x)|_{x=a} = \frac{0}{0}$, if $\lim_{x \to a+} f(x) = \lim_{x \to a-} f(x)$, then the common value is referred to as the limit of the function, $\lim_{x \to a} f(x)$.

discontinuous

If a function $f(x)$ is discontinuous at $x = a$, then what is $\lim_{x \to a} f(x)$? The answer is 'it cannot be computed'. It is given that the function is discontinuous at $x = a$, which implies that the left-hand and right-hand limits are not equal. In that case, the limit of the function cannot be computed without specifying left or right.
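Writing out the three-way check for the example above (my worked addition):

$$f(0) = 2 \cdot 0^2 = 0, \qquad \lim_{x \to 0-} 2x^2 = \lim_{\delta \to 0} 2(0-\delta)^2 = 0, \qquad \lim_{x \to 0+} 2x^2 = \lim_{\delta \to 0} 2(0+\delta)^2 = 0.$$

All three values are defined and equal, so $f(x) = 2x^2$ is continuous at $x = 0$.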
2021-10-25 04:32:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 66, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9378651976585388, "perplexity": 1049.8642140954087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587623.1/warc/CC-MAIN-20211025030510-20211025060510-00512.warc.gz"}
https://physics.stackexchange.com/questions/144178/how-to-understand-derive-eq-5-5-in-geiners-quantum-mechanics-an-introduction
How to understand/derive Eq 5.5 in Greiner's Quantum Mechanics - An Introduction?

Greiner's Quantum Mechanics - An Introduction has an unnumbered equation above Eq. 6 in section 2.4 discussing the density -- not sure if it is energy density -- of radiation:

... $$dE/dV = E/V = a T^4$$ where $$a = \frac{\pi^2 k_B^4}{15 \hbar^3 c^3} = 7.56 \times 10^{-15}\ \mathrm{erg\,cm^{-3}\,K^{-4}}$$ This gives rise to a homogeneous, isotropic radiation of density K, where K is given by $$K = \frac{c}{4 \pi} \frac{dE}{dV}\ \mathrm{erg\,cm^{-2}}$$ ...

I'm particularly puzzled by the factor of $\frac{c}{4\pi}$. Where does it come from? Hopefully this is enough info for the question to be answerable.

• It's probably due to the stupid units. Erg cm etc – Your Majesty Nov 1 '14 at 6:48
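One standard radiometry identity may be what is going on here (my reading, not from the book or the thread): if $K$ is the intensity of an isotropic radiation field (energy per unit area, per unit time, per unit solid angle), the energy density is obtained by integrating over all directions and dividing by the propagation speed,

$$\frac{dE}{dV} = \frac{1}{c}\int_{4\pi} K \, d\Omega = \frac{4\pi K}{c} \quad\Longrightarrow\quad K = \frac{c}{4\pi}\,\frac{dE}{dV},$$

which reproduces the quoted factor of $\frac{c}{4\pi}$.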
2019-10-22 13:51:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.879889726638794, "perplexity": 914.7098712730934}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822098.86/warc/CC-MAIN-20191022132135-20191022155635-00433.warc.gz"}
https://www.semanticscholar.org/paper/Interpretation-of-experimental-results-on-Kondo-Romero-Aligia/b22c5a3934bb8e4b8099ab285d1aaac1986e2d04
# Interpretation of experimental results on Kondo systems with crystal field

```
@article{Romero2013InterpretationOE,
  title={Interpretation of experimental results on Kondo systems with crystal field},
  author={M. A. Romero and A. A. Aligia and Julian Sereni and G. Nieva},
  journal={Journal of Physics: Condensed Matter},
  year={2013},
  volume={26}
}
```

• Published 14 August 2013 • Physics • Journal of Physics: Condensed Matter

We present a simple approach to calculate the thermodynamic properties of single Kondo impurities including orbital degeneracy and crystal field effects (CFE) by extending a previous proposal by Schotte and Schotte (1975 Phys. Lett. 55A 38). Comparison with exact solutions for the specific heat of a quartet ground state split into two doublets shows deviations below 10% in the absence of CFE and a quantitative agreement for moderate or large CFE. As an application, we fit the measured specific…

Citing papers (selected):

• Physics, Nature Communications, 2016. An angle-resolved photoemission spectroscopy study of the Kondo lattice antiferromagnet CeRh2Si2, where the surface and bulk Ce-4f spectral responses were clearly resolved and a fine structure near the Fermi edge reflecting the crystal electric field splitting of the bulk magnetic $4f^1_{5/2}$ state is revealed.

• Physics, 2014. In single crystals of the YbCo2Zn20 intermetallic compound, two coexisting types of electron spin resonance signals related to the localized magnetic moments of cobalt and to itinerant electrons have…

• Computer Science, SciPost Physics, 2021. A deep machine learning algorithm is presented that employs a two-dimensional convolutional neural network trained on magnetization, magnetic susceptibility and specific heat data calculated theoretically within the single-ion approximation and further processed using a standard wavelet transformation.

• T. Kong, Materials Science, Physics, 2016 (thesis). Chapter 1: Basics of the magnetic properties of rare earth elements.

• Physics, 2016. We study the evolution of the Kondo effect in the heavy fermion compounds Yb(Fe$_{1-x}$Co$_x$)$_2$Zn$_{20}$ ($0 \leqslant x \leqslant 1$) by means of temperature-dependent electric resistivity and…

## References (showing 1-10 of 91 references)

• Chemistry, Physics, 2005. We consider a system of Kondo impurities with an RKKY interaction J between them. Using the exact solution of the impurity and approximating the effect of J, we calculate the critical temperature Tc…

• Chemistry, 2005. Magnetic susceptibility, heat capacity, and electrical resistivity measurements have been carried out on single crystals of the intermediate valence compounds Yb2Rh3Ga9 and Yb2Ir3Ga9. These…

• We take as granted Anderson's statement that in the low-temperature limit the usual Kondo s-d model evolves toward a fixed point in which the effective exchange coupling of the impurity with the…

• Physics, 2012. We calculate the thermopower of a quantum dot described by two doublets hybridized with two degenerate bands of two conducting leads, conserving orbital (band) and spin quantum numbers, as a function…

• Zhou, Physics, Physical Review B, Condensed Matter, 1991. Applying the Zwicknagl, Zevin, and Fulde (ZZF) approximation for the spectral densities of the occupied and empty f states resulting from a degenerate-Anderson-impurity model, which…

• Physics, 2011. We calculate the finite temperature and non-equilibrium electric current through systems described generically at low energy by a singlet and two spin doublets for N and N ± 1 electrons respectively…

• Moore, Wen, Physics, Physical Review Letters, 2000. The splitting of the Kondo resonance in the density of states of an Anderson impurity in a finite magnetic field is calculated from the exact Bethe-ansatz solution. The result gives an estimate of…

• Physics, Physical Review Letters, 2012. This work claims that the SU(2) symmetric case corresponds to a pure valley Kondo effect of fully polarized electrons and demonstrates experimentally a symmetry crossover from an SU(4) ground state to a pure orbital SU(2) ground state as a function of magnetic field.
2023-01-31 16:36:47
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.856488823890686, "perplexity": 1937.2452485470644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499888.62/warc/CC-MAIN-20230131154832-20230131184832-00608.warc.gz"}
https://answers.opencv.org/users/16970/trustmeimanengineer/?sort=recent
2015-03-10 17:02:53 -0500 commented answer Is there a way to easily plot pixels?
Thanks berak. This might be a dumb question, but why is it (y, x) instead of (x, y)? Is that just the convention, or is it because you are using image.at? EDIT: Never mind, I looked at the documentation.

2015-03-10 16:54:39 -0500 commented answer Is there a way to easily plot pixels?
Thanks for the help. Since I want multiple circles, do I have to apply the circle function multiple times? Also, doesn't this exceed the size of one pixel? What I mean is, let's say the circle radius is one: this will technically circle around more than one pixel, because the pixel value is just the center, correct? Another question: let's say I use image.at<uchar>(Point(x, y)) = 0; does uchar work for the CV_8U datatype? Sorry, I know that question is really basic. And finally, let's say I want to make the color turquoise instead of black. Do I say image.at(Point(x, y)) = Scalar(R,G,B); where R,G,B are the proper values for whatever color I want?

2015-03-10 16:19:30 -0500 asked a question Is there a way to easily plot pixels?
Let's say I have a Mat M which contains a grayscale image. Now I apply a colormap to make a new Mat M_color and show the image. The colormap is fine, but at specific points (where I already know the indices), I want to make the pixels a specific color (let's say black). Example: I have a 25x25 grayscale image shown with a rainbow colormap. At pixels (2,7), (4,12), (16,9), and (14,23) I want them to be black. Is there an easy way to plot the black pixels on top of the rainbow image? Or, if it's easier, to just change the color at that specific instance to black.

2015-02-23 12:02:11 -0500 asked a question applyColormap for CV_64F matrix?
I've been trying to use the applyColormap function. I'm working with a matrix with a datatype of CV_64F. If I apply a color map, the image is still B&W (using imshow). But if I convert the matrix to CV_8U, then apply the color map, it works. This is fine for right now, but later on I will need to apply a color map to a matrix with datatype CV_64F, and I will not be able to convert it to CV_8U because it will have negative values and decimal points. Any suggestions would be greatly appreciated.

2015-02-20 11:35:31 -0500 commented answer Sine or Cosine of every element in Mat (c++)
^^Sorry, I didn't know -.- I will make sure to keep it open!

2015-02-19 19:02:01 -0500 commented question How to detect labels with numbers in a black rectangle?
That sounds like a cool project! I think you will definitely need closer images than that, and possibly take them when you are parallel to the surface so there isn't much perspective. I would guess 2 or 3 feet away would be fine. Once you have the code working on something like that, you can start to optimize it for images that aren't so perfect. Like thdrksdfthmn said, you should start by setting a threshold so it is high-contrast B&W. However, I think that before you do the thresholding to high-contrast black & white, you should invert the image colors. This might make it easier on the cascade classifiers. Let me know if that helps, and if it doesn't I can think a little bit harder :p. Good luck!!

2015-02-19 16:50:59 -0500 commented question How to detect labels with numbers in a black rectangle?
I'm new to OpenCV as well, but I might be able to help. When you take the images from the camera, are they going to be focused on the labels? Or will they be similar to the image you have posted, where the image is distant and you are far away from the labels?

2015-02-19 09:58:50 -0500 commented answer Sine or Cosine of every element in Mat (c++)
That's exactly what I ended up doing! Thanks (: +1

2015-02-18 09:55:57 -0500 asked a question Sine or Cosine of every element in Mat (c++)
I was wondering if there is any way to take the sine or cosine of every element in a matrix? Thanks!
EDIT: For future reference, I did the Taylor expansion of sine. For pretty good accuracy, I only had to calculate up to the fourth term. However, note that if you are trying to estimate sin(pi) or sin(-pi) you have to calculate additional terms (probably 8 total terms). This link will be useful for those who want to do this: http://en.wikipedia.org/wiki/Taylor_s...
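Pulling the pieces of the pixel-plotting thread together, here is an illustrative C++ sketch (my own, not posted in the thread) that applies a colormap and then marks chosen pixels, using only standard OpenCV calls (cv::applyColorMap, Mat::at<cv::Vec3b>, cv::circle):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

int main() {
    // A grayscale image (CV_8U) to visualize.
    cv::Mat gray(25, 25, CV_8UC1);
    cv::randu(gray, 0, 255);

    // Apply a rainbow-like colormap; the output is 3-channel BGR.
    cv::Mat colored;
    cv::applyColorMap(gray, colored, cv::COLORMAP_RAINBOW);

    // Mark specific pixels black. Note Mat::at takes (row, col),
    // i.e. (y, x), while cv::Point is (x, y) -- the convention
    // discussed in the thread.
    const cv::Point pts[] = {{2, 7}, {4, 12}, {16, 9}, {14, 23}};
    for (const cv::Point& p : pts)
        colored.at<cv::Vec3b>(p) = cv::Vec3b(0, 0, 0);  // BGR black

    // Alternatively, circle each point so the mark covers more than
    // one pixel (radius 1, filled), one circle call per point.
    for (const cv::Point& p : pts)
        cv::circle(colored, p, 1, cv::Scalar(0, 0, 0), cv::FILLED);

    cv::imshow("marked", colored);
    cv::waitKey(0);
    return 0;
}
```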
2020-07-07 00:34:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24548368155956268, "perplexity": 832.8462574247131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890566.2/warc/CC-MAIN-20200706222442-20200707012442-00286.warc.gz"}
https://www.sarthaks.com/2718581/coil-with-resistance-and-inductance-connected-battery-what-will-the-energy-stored-the-coil
# A coil with 10 Ω resistance and 2 H inductance is connected to a 50 V battery. What will be the energy stored in the coil?

A coil with 10 Ω resistance and 2 H inductance is connected to a 50 V battery. What will be the energy stored in the coil?

1. 25 J
2. 50 J
3. 75 J
4. 100 J

Correct Answer - Option 1 : 25 J

Concept:

The power delivered to an inductor is given by

⇒ P = Vi      -----(1)

where V = voltage induced across the inductor, and i = current.

The voltage induced across the inductor is

$⇒ V = L\frac{di}{dt}$

Substituting this into equation (1):

$⇒ P = L i \frac{di}{dt}$

The energy stored in the inductor is therefore

$⇒ E = \int P \, dt = \int L i \frac{di}{dt} \, dt = \int L i \, di = \frac{1}{2} L I^{2}$

Calculation:

R = 10 Ω, L = 2 H. In the steady state the inductor behaves as a short circuit, so the current is limited by the resistance alone:

I = 50 / 10 = 5 A

E = (2 × 5²) / 2 = 25 J
2022-10-06 15:02:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8052700757980347, "perplexity": 1307.7191236245974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00649.warc.gz"}
https://math.stackexchange.com/questions/889349/when-is-this-matrix-positive-semi-definite
# When is this matrix positive semi-definite?

I have a symmetric $n \times n$ matrix as follows. I want to find the eigenvalues of this Hessian matrix to state that it is not positive semi-definite (i.e. some eigenvalues are negative while the others are non-negative). If I couldn't find all the eigenvalues, it would be enough for my purpose to find just one negative eigenvalue.

NOTE: I could find the eigenvalues in the $2\times2$ and $3\times3$ cases; I am looking for the general $n\times n$ case. $a_1$ and $a_2$ should be equal in order to have positive eigenvalues in the $2\times 2$ case. Similarly, $a_1, a_2$ and $a_3$ should be equal in order to have positive eigenvalues in the $3\times 3$ case.

$$A=\begin{pmatrix} 2a_1&a_1+a_2&\dots &a_1+a_{n-1}&a_1+a_n\\ a_1+a_2&2a_2&\dots &a_2+a_{n-1}&a_2+a_n\\ \vdots&\vdots&&\vdots&\vdots\\ a_1+a_{n-1}&a_2+a_{n-1}&\dots&2a_{n-1}&a_{n-1}+a_n\\ a_1+a_n&a_2+a_n&\dots&a_n+a_{n-1}&2a_n \end{pmatrix}$$

So $A_{ij} = a_i+a_j$.

• What is the question? – copper.hat Aug 6 '14 at 19:33
• @copper.hat He wants either all the eigenvalues for that matrix (seems unlikely) or, barring that, when are all the eigenvalues non-negative? For $2\times 2$ and $3\times 3$ he says all eigenvalues are non-negative if and only if the $a_i$ are all equal. Presumably the $a_i$ are all positive? – Thomas Andrews Aug 6 '14 at 19:34
• What optimization problem did this Hessian come from - could you post a little more detail about it? I know there are already answers, I'm just curious. – Nick Alger Aug 7 '14 at 8:41

The matrix $A$ can be decomposed as $A=au^T+ua^T$, where $a\in\mathbb{R}^n$ is the vector containing $a_1,\ldots,a_n$ and $u\in\mathbb{R}^n$ is the vector of all ones. If $v\in\mathbb{R}^n$ is a vector perpendicular to both $a$ and $u$, then $Av=0$. Hence we always have $n-2$ eigenvalues equal to zero. For the last two eigenvalues, we have two cases.

In the first case, $a$ and $u$ are linearly dependent, i.e., $a=\alpha u$ for some $\alpha\in\mathbb{R}$: in this case $A=2\alpha uu^T$ and the last two eigenvalues are $0$ and $2\alpha n$. The associated eigenvectors are $\frac{u}{||u||}$ (for the non-zero eigenvalue) and any set of $n-1$ vectors orthogonal to $u$. All the eigenvalues are non-negative and the matrix is PSD.

In the second case, $a$ and $u$ are linearly independent. For our analysis, we can limit our attention, without loss of generality, to vectors $v\in\mathbb{R}^n$ in the span of $\{a,u\}$ (if this is not true, the component of $v$ that is orthogonal to both $a$ and $u$ cancels out, as discussed above). This means that there exist $\alpha,\beta\in\mathbb{R}$ such that $v=\alpha a+\beta u$. The eigenvalue equation $Av=\lambda v$ reduces to, on the LHS, $$Av=(au^T+ua^T)(\alpha a+\beta u)=\left((u^Ta) \alpha+(u^Tu)\beta\right)a +\left((a^Ta)\alpha+(a^Tu)\beta\right) u,$$ and on the RHS $$\lambda v= \lambda\alpha a +\lambda\beta u.$$ Since $a$ and $u$ are linearly independent, the equation has a solution iff the coefficients of $a$ and $u$ on the two sides match. We therefore get a reduced $2 \times 2$ system of equations of the form $$\begin{bmatrix} u^Ta & ||u||^2\\||a||^2 & u^Ta \end{bmatrix}\begin{bmatrix}\alpha\\\beta\end{bmatrix}=\lambda \begin{bmatrix}\alpha\\\beta\end{bmatrix}.$$ Note that $||u||^2=n$. Intuitively, this makes sense, because we reduced our $n\times n$ eigenvalue problem to the $2 \times 2$ eigenvalue problem obtained by restricting the vectors to the 2-dimensional space spanned by $a$ and $u$.
Symbolic computation software informs me that the eigenvalues for the reduced $2 \times 2$ case are $\lambda_{1,2}=u^Ta\pm\sqrt{n||a||^2}.$ (Note that this formula gives the right result also for the case $a=\alpha u$, although, technically, its derivation would not be correct.) To conclude, the matrix $A$ can be PSD or indefinite depending on $a$. E.g., if $a=u$, then it is PSD, but if $a$ is orthogonal to $u$, then it is indefinite.

Note that $A$ can be represented as $$A=a^T\otimes e+e^T\otimes a,$$ where $a=(a_1,\ldots,a_n)$ and $e=(1,\ldots,1)$. Hence $A$ has rank $\leq 2$ and can have at most two non-zero eigenvalues. The corresponding column eigenvectors are linear combinations of $a^T$ and $e^T$. Writing $w=\alpha a+\beta e$, we get $$Aw^T=\bigl(\alpha (e,a)+\beta(e,e)\bigr) a^T+\bigl(\alpha (a,a)+\beta(a,e)\bigr) e^T,$$ where $(x,y)=x\cdot y^T=\sum_{k=1}^n x_k y_k$ denotes the usual scalar product. Therefore, the equation for the two nontrivial eigenvalues becomes $$\operatorname{det}\left(\begin{array}{cc} (e,a)-\lambda & (e,e) \\ (a,a) & (a,e)-\lambda \end{array}\right)=0,$$ or, equivalently, $$\lambda^2-2(e,a)\lambda+(e,a)^2-(e,e)(a,a)=0.$$ In yet another form: $$\lambda^2-2n\langle a\rangle\lambda+n^2\left(\langle a\rangle^2-\langle a^2\rangle\right)=0\quad \Longrightarrow \quad \lambda_{\pm}=n\left[\langle a\rangle\pm\sqrt{\langle a^2\rangle}\right],$$ where $\langle x\rangle=\frac{1}{n}\sum_{k=1}^n x_k$. Therefore $A$ is positive semi-definite iff $a_1=a_2=\ldots=a_n\geq 0$.
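A quick numerical sanity check of this formula (my addition): take $n = 2$ and $a = (1, -1)$, so that $a \perp u$. Then $A_{ij} = a_i + a_j$ gives

$$A = \begin{pmatrix} 2 & 0 \\ 0 & -2 \end{pmatrix}, \qquad \lambda_{\pm} = n\left[\langle a\rangle \pm \sqrt{\langle a^2\rangle}\right] = 2\,(0 \pm 1) = \pm 2,$$

matching the eigenvalues of $A$ directly and confirming that $A$ is indefinite when $a$ is orthogonal to $u$.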
2019-11-13 20:18:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.938947856426239, "perplexity": 104.63225089890184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667333.2/warc/CC-MAIN-20191113191653-20191113215653-00127.warc.gz"}
https://www.nature.com/articles/s41598-023-28742-6?error=cookies_not_supported&code=b4a0126c-13e0-4835-9c9c-1c5eeb9b5674
## Introduction

Sister chromatid cohesion (SCC) is mediated by the cohesin complex, which forms a ring-like structure and holds sister chromatids together until the onset of mitosis. Cohesin consists of four proteins: SMC3, SMC1, RAD21, and SA1/SA2 [1,2]. Because the location and structure of cohesin change dynamically as the cell cycle progresses, the function of cohesin is controlled by many regulatory proteins. In particular, the cohesin associators, including PDS5A/B, Sororin, and WAPL, and the cohesin loader, consisting of NIPBL and MAU2, play significant roles in cohesion establishment, maintenance, and dissolution. Although biochemical reconstitution of cohesin activity has been achieved in vitro with these essential SCC regulators [3,4], dozens of proteins have been reported to regulate SCC directly or indirectly [2,5].

CTF18, an ATPase that constitutes an RFC-like proliferating cell nuclear antigen (PCNA) loader together with CTF8, DCC1, and RFC2-5, is an evolutionarily conserved cohesin regulator [6,7,8]. The function of CTF18 in SCC is likely mediated through PCNA loading on the leading strand, although the detailed molecular mechanism is not well known [9].

SCC defects are usually examined through microscopic observation of metaphase chromosome spreads [10]. Although classification criteria differ among laboratories, SCC-defective cells tend to have more open sister chromatids. If SCC is completely compromised, sister chromatids are separated from each other. The problem with examining SCC defects using this method is that the evaluation and classification of chromosome shapes are performed manually by researchers, and the lines between the categories are often blurred. Moreover, because at least 50–100 metaphase chromosomes must be classified in each strain/sample for a quantitative analysis, this analysis requires considerable effort and time. These problems allow for human error and the incorporation of researchers' subjective bias. For these reasons, a machine-learning model that automatically classifies chromosome shapes is required.

Currently, image processing methods based on convolutional neural networks (CNNs) are widely used in various fields, including pathological examination, face recognition, and so on [11,12]. In the field of chromosome analysis, computational techniques have been developed for karyotype analysis, which are employed for prenatal diagnosis [13,14]. However, the application of automatic chromosome analysis beyond karyotyping remains limited.

In this study, we applied CNN-based image recognition models to classify the shape of chromosomes into three types, depending on the positional relationship of the sister chromatids. The trained model achieved a maximum concordance rate of 73.1% with the example answers (EA) given by a researcher and successfully detected the SCC defects of CTF18-/- cells. Based on these results, we propose neural network-based image recognition models as useful tools for automatically classifying the shape of chromosomes and examining SCC defects in mutant cell lines.

## Results

### Dataset preparation

In this study, we used the TK6 human B lymphoblastoid cell line, which is widely used for in vitro genotoxicity tests [15,16]. TK6 cells exhibit normal and stable karyotypes, except for their chromosome 13 trisomy [15]. Metaphase chromosome spreads were prepared from wild-type (WT) TK6 cells using a conventional method (see "Materials and Methods").
Then, 2,144 single-chromosome images that did not overlap with other chromosomes were cut out from 150 cells. The chromosome images were classified into three types (Fig. 1), and these labels were used as the EA. Well-cohered, tight chromosomes were classified as type A; chromosomes in which the arms were separated were classified as type B; and chromosomes in which the sister chromatids were also separated at the centromere were classified as type C [17,18]. All chromosome images from WT cells were divided into three groups: training data, validation data for model selection, and testing data for model evaluation. To divide the dataset, we first randomly selected 654 chromosome images from 43 cells as the test dataset. The distribution of types A, B, and C in this test dataset should be close to the actual distribution of each chromosome type in WT TK6 cells.

### Individual differences in classification of chromosome images

As stated above, the boundaries between each type of chromosome are not absolute, causing the classification criteria to vary between observers. To verify this experimentally before applying the CNNs, the test dataset containing 654 chromosome images was manually classified by nine cooperators (referred to as cooperators A, B, C, D, E, F, G, H, and I). Before classification, each cooperator was trained using several labeled chromosome images, as shown in Fig. 1A. The concordance rates between EA and each cooperator are shown in Fig. 1B. The average, maximum, and minimum concordance rates were 65.7, 76.8, and 51.4%, respectively. There was a marked difference in the rate of each chromosome type (Fig. 1C). For instance, while the rate of type C chromosomes was 4.8% in EA, it varied from 4.6 to 28.8% when the same test dataset was classified by each cooperator. This result shows that the classification criteria certainly differ among observers, and demonstrates the need for a computational analysis with a fixed standard.

### Estimation model using SqueezeNet or ResNet-18

As pre-trained models for transfer learning, we used two different CNNs: a SqueezeNet-based model and a ResNet-18-based model (hereafter simply referred to as SqueezeNet and ResNet-18). Both models were trained on the ImageNet dataset to classify an image into 1,000 object categories (keyboard, mouse, pencil, many animals, etc.), and were used for efficient image classification with a limited number of images [12,19]. ResNet was the winning model of the ILSVRC in 2015. Deepening a network is expected to improve its representation capability and classification accuracy; however, simply adding more layers is not sufficient to train it efficiently. ResNet successfully increases the depth of layers by introducing residual blocks, which combine a convolutional layer and a shortcut connection that adds the input of the previous layer directly to the layer behind it (Fig. 2A). In contrast, SqueezeNet is a lightweight model built by stacking fire modules, with parameters and computational complexity reduced by utilizing 1 × 1 convolutions (Fig. 2B). The dimensionality of the input feature map is reduced by the 1 × 1 convolution of the squeeze layer, and it is restored when feature extraction is performed by the 3 × 3 convolution of the expand layer; the number of parameters is reduced by replacing some of these with 1 × 1 convolutions.
The fully connected layer (fc) prior to the output of the network was trained using the chromosome dataset, and the other layers of SqueezeNet or ResNet-18 were used as fixed feature extractors. Because the rate of type C chromosome images was much lower than that of type A or type B images in WT cells, we added 221 type C images to the dataset for well-balanced training. From these 1,711 chromosome images, we randomly selected 400 chromosome images as validation data and 1,275 as training data with a fixed seed (Supplementary Fig. 1). All images were processed before being fed into the fully connected layer (see "Materials and Methods"). For each analysis, we repeated the validation and training data selection three times to reduce data selection bias. Moreover, with each data set, we repeated the training, validation, and testing cycle 10 times, meaning that the score obtained for each analysis was the average of 30 trials.

### Three-way classification of chromosome images by trained models

To estimate the number of chromosome images required for training, the CNN models were first trained with datasets of different sizes, randomly selected from the training data and containing equal numbers of type A, type B, and type C images. After training, the models were fed the 654 chromosome images of the test data. The concordance rates between EA and the predicted answers (PA) from the models improved as the number of training images increased (Fig. 3A). At most points, SqueezeNet achieved higher concordance rates than ResNet-18. Because the concordance rate seemed to saturate when training with 693 chromosome images (containing 231 of each type of chromosome), further analysis was conducted with this number of training images. Under these conditions, SqueezeNet achieved a 73.1% concordance with EA, whereas ResNet-18 achieved 68.2%. Considering that the concordance rates ranged from 51.4 to 76.8% when the classification was conducted by the cooperators (Fig. 1B), the concordance rates achieved by the models seem sufficient for practical use. Regarding the distribution of each type of chromosome, the rates of type A, type B, and type C were 63.8%, 31.4%, and 4.9% in EA; 63.7%, 30.5%, and 5.8% in SqueezeNet; and 63.0%, 30.9%, and 6.1% in ResNet-18, respectively, when the models were trained with 693 chromosome images (Fig. 3B and C). These distribution patterns were quite similar, indicating that the classifications done by a researcher are fully reproducible by the trained models.

### Visualization of characteristic sites of chromosomes

To understand the regions of the image where SqueezeNet and ResNet-18 look, we used Grad-CAM, which is commonly used to visualize the characteristic sites of images [20]. Several examples of this analysis are presented in Fig. 4. As a general trend, SqueezeNet focuses on each chromatid end, whereas ResNet-18 focuses on the centromere, where the sister chromatids are joined together. Because the classification should be judged by the positional relationship between the sister chromatids, this may explain the higher concordance rates of SqueezeNet over ResNet-18.

### Establishment of a CTF18 KO cell line as an SCC-defective model

To confirm whether the trained model accurately detects the SCC defects of mutant cells, we chose a CTF18 KO cell line as an SCC-defective model. This is because CTF18 is a well-established SCC regulator, and the knockout of CTF18 is expected to cause relatively mild SCC defects based on previous observations [8,21,22].
To establish CTF18-/- cells, we constructed gene-targeting constructs designed to delete an exon encoding the Walker A motif of CTF18, which is essential for the function of ATPase-family proteins (Fig. 5A). The absence of CTF18 protein expression in the established CTF18-/- cells was confirmed by western blot analysis (Fig. 5B, Supplementary Fig. 2). CTF18-/- cells were viable, but their proliferative capacity was slightly decreased (Fig. 5C).

### Detection of SCC defects in CTF18-/- cells with the trained model

To compare the rates of chromosome types in CTF18-/- cells with those in WT cells, chromosome images were prepared from CTF18-/- cells in the same manner as for WT cells. The rates of each type of chromosome obtained from manual analysis or from the trained models are shown in Fig. 5D–F. The concordance rates between EA and PA were a bit lower when the models classified the chromosomes of CTF18-/- cells, probably because the models were trained with WT TK6 chromosomes (Supplementary Fig. 3). In the manual analysis, CTF18 depletion increased the rate of type B chromosomes from 30.5% to 52.2% and that of type C chromosomes from 5.8% to 21.3% (Fig. 5D), indicating that CTF18 depletion causes SCC defects in TK6 cells, as shown earlier in different cell lines and species [6,7]. In both SqueezeNet and ResNet-18, the results showed the same trend as the manual analysis (Fig. 5E and F). From these results, we concluded that CNN-based models are sufficiently competent to analyze the chromosomes of SCC-defective mutant cells.

## Discussion

Cohesin associators and regulators have been extensively explored for over four decades using biochemical and genetic approaches [23,24,25]. Although all the essential players in SCC might have already been identified, there could be other unknown regulators whose loss partially impairs the function of cohesin, and some proteins might play roles in SCC only in specific cell types. To characterize the role of cohesin regulators by morphological approaches, classification of metaphase chromosomes has been used as an easy and reliable method [10]. In many studies, several chromosome images are shown as representative images of each category, and the rate of each category obtained by analyzing several hundred cells is shown as quantitative data [17,25,26,27]. However, in actual samples, many chromosomes are difficult to classify because they lie on the border between two categories. Owing to these populations, the classification criteria depend on the researcher, as shown in Fig. 2. Such populations do not cause a big problem when analyzing mutant cells deficient in cohesin subunits, cohesin loaders, or critical cohesin regulators, whose losses induce obvious and drastic chromosomal changes. However, in the case of gene mutations causing only mild SCC defects, a careful analysis is necessary to prevent subjectivity.

In this study, we examined whether CNN-based image recognition models are sufficiently sensitive to study non-essential cohesin regulators. CTF18 depletion caused the chromosome shape to become more open, but it did not induce severe SCC defects or premature sister chromatid separation in TK6 cells. Nevertheless, the model fully detected the difference in chromosome shape between WT TK6 cells and CTF18-/- cells, as a researcher did. Thus, we propose this model and similar neural network-based approaches as powerful tools for examining SCC defects caused by non-essential cohesin regulators.
The current model achieved a maximum concordance rate of 73.1% with EA for the WT TK6 sample. This concordance rate is comparable to or slightly higher than those of the two researchers; however, in some cases, SqueezeNet yielded apparently wrong answers (Supplementary Fig. 4). To increase the accuracy, the models should be improved with several modifications in future studies. The first is to increase the labeling accuracy for both the training data and the test data. Some chromosome images are difficult to classify into a single category, as mentioned above, and incorrect labeling may have occurred. Refining the training data by double-checking the labeling with multiple researchers, or by removing data that are difficult to classify, might be effective in improving the measured accuracy. Another improvement might be the removal of photobombed chromosome fragments. Even if their area is very small, the CNN models sometimes seem to focus on them, whereas researchers automatically exclude them from the analysis. In particular, this processing seems important for analyzing type C chromosomes, which are identified by the positional relationship between two separated objects. Currently, the model requires cropped single chromosomes that do not overlap with other chromosomes, so the analysis cannot fully avoid arbitrary choices by researchers. In the future, these chromosome analyses should be conducted using automatically captured and cropped chromosome images. Automatic capture can be performed using the latest microscopes. To crop single chromosomes, CNN-based models that recognize each chromosome in a metaphase cell and automatically cut out single chromosomes could be developed. Region extraction using image-processing techniques such as OpenCV functions might be another option (a brief sketch of this idea appears after the "Image acquisition and cropping" section below). These improvements would enable chromosome analysis by computer, without human error or subjectivity.

Here we started chromosome analysis using CNN-based image recognition models with the three limited patterns found in WT and CTF18-/- TK6 cells. However, these do not cover all chromosome patterns. For example, chromosomes from WAPL-depleted cells have tight and twisted shapes, whereas cohesin-depleted cells have completely separated sister chromatids1,8. Moreover, chromosomes lacking primary constrictions are frequently found in Roberts syndrome patients, who carry mutations in the ESCO2 gene26. Whether the models can distinguish these chromosomes is currently unknown. It is also important to analyze whether the models can classify the chromosomes of other species. Future studies can address these tasks so that CNN-based models can replace manual analysis conducted by researchers.

## Materials and methods

### Chromosome preparation

Chromosome preparation was performed as previously described, with a small modification27. Metaphase cells were enriched by treatment with colcemid, a microtubule polymerization inhibitor, for 2 h. Cells were then swollen with 0.075 M KCl and fixed with methanol:acetic acid (3:1). After being dropped onto a glass slide and stained with Giemsa solution, the chromosomes were visualized under a microscope.

### Image acquisition and cropping

Chromosome images were collected using a Visualix STD1 camera (Visualix) mounted on an inverted microscope (ECLIPSE Ni; Nikon) with a 100×/NA 1.49 objective lens. Subsequent image cropping was performed using the ImageJ software.
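As a concrete illustration of the OpenCV-based region extraction suggested in the Discussion, the sketch below thresholds a Giemsa-stained metaphase image and crops a bounding box around each detected object. It is a hypothetical example (the function and parameter names are mine, not from the paper); real images would need parameter tuning and handling of touching chromosomes.

```python
import cv2

def crop_chromosomes(image_path: str, min_area: int = 50):
    """Return a list of cropped single-object images from a metaphase photo."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Giemsa-stained chromosomes are darker than the background,
    # so invert the binary threshold (Otsu picks the cutoff automatically).
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for cnt in contours:
        if cv2.contourArea(cnt) < min_area:
            continue  # skip debris / photobombed fragments
        x, y, w, h = cv2.boundingRect(cnt)
        pad = 5  # small margin around the chromosome
        crops.append(img[max(y - pad, 0):y + h + pad,
                         max(x - pad, 0):x + w + pad])
    return crops
```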
### Preprocessing of input data

The original cropped chromosome images were resized to 224 × 224 pixels to match the input size of each pretrained model. The JPEG images were converted into tensors. In addition, if there were other objects in an image, such as other chromosomes thought to negatively affect model learning, they were replaced with white (255, 255, 255). To highlight the shape of the chromosomes, the brightness, contrast, and gamma were adjusted. The difference in staining shade between images was reduced by fixing the average and standard deviation of each separate RGB channel (the three colors red, green, and blue). These processes were applied to all chromosome images. The values of the average and standard deviation used in normalization were 64 and 16, respectively. In the equations below, $$src\left( {x,y} \right)$$ is the pixel value at position $$\left( {x,y} \right)$$ in the input image and $$dst\left( {x,y} \right)$$ is the corresponding output value used to highlight the shape of the chromosomes. Equations (1) and (2) apply the contrast $$\alpha$$, brightness $$\beta$$, and gamma correction $$\gamma$$. In this study, we set $$\alpha = 3.0$$, $$\beta = 80.0$$, and $$\gamma = 3.0$$.

$$dst\left( {x,y} \right) = \alpha\,src\left( {x,y} \right) + \beta$$ (1)

$$y = \left( \frac{x}{255} \right)^{\gamma } \times 255$$ (2)

### Plasmid construction and transfection

CTF18 KO-Puro and CTF18 KO-Bsr were generated from genomic PCR products combined with puromycin or blasticidin S selection marker cassettes. Genomic DNA sequences were amplified using the primers 5’- GCGAATTGGGTACCGGGCCCactgcctctgggtggatgagtttg -3’ and 5’- CTGGGCTCGAGGGGGGGCCgtgccacctgcagcccaggtagatg -3’ (for the left arm of the KO construct); and 5’- TGGGAAGCTTGTCGACTTAAgtgagtgatgtgaggtccgtctctg -3’ and 5’- CACTAGTAGGCGCGCCTTAAccggctgtacaggaactagacatagg -3’ (for the right arm of the KO construct). The amplified PCR products were purified by gel extraction and cloned into DT-Ap/Puro or DT-Ap/Bsr vectors digested with ApaI and AflII using the GeneArt Seamless Cloning and Assembly kit (Thermo Fisher Scientific). A gRNA to introduce a DSB into the CTF18 locus was designed using CRISPRdirect (https://crispr.dbcls.jp/). Two phosphorylated oligo DNAs, 5’- CACCGgcgtcacgcggggtactctg-3’ and 5’- AAACcagagtaccccgcgtgacgcC-3’, were annealed and ligated with px330 cut by BbsI. The CTF18 knockout vectors were then co-transfected with the gRNA expression vector into TK6 cells using the Neon Transfection System (Thermo Fisher Scientific).

### Western blotting analysis

Western blotting was performed as previously described27, using antibodies against CTF18 (Santa Cruz Biotechnology) and β-actin (Proteintech), followed by incubation with a horseradish peroxidase-conjugated anti-mouse IgG secondary antibody (Cell Signaling Technology). Proteins were visualized using ImmunoStar LD according to the manufacturer’s protocol.

### Growth curve

WT TK6 and CTF18-/- TK6 cells were cultured at 37 °C in RPMI medium (Wako) supplemented with 5% horse serum (Gibco), a penicillin/streptomycin mix (Nacalai Tesque), 2 mM L-glutamine (Nacalai Tesque), and 100 μM sodium pyruvate. To plot growth curves, each cell line was cultured in three different wells of 24-well plates and passaged every 24 h. Cell numbers were determined using flow cytometry: 15 μl of cell suspension was analyzed, and viable cells, identified by forward and side scatter, were counted.
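Returning to the preprocessing above, Eqs. (1) and (2) plus the per-channel normalization can be written compactly with OpenCV and NumPy. This is a sketch of one reading of the text (α = 3.0, β = 80.0, γ = 3.0, target mean 64 and standard deviation 16 per RGB channel), not the authors' code.

```python
import cv2
import numpy as np

ALPHA, BETA, GAMMA = 3.0, 80.0, 3.0
TARGET_MEAN, TARGET_STD = 64.0, 16.0

def preprocess(img: np.ndarray) -> np.ndarray:
    """Apply contrast/brightness (Eq. 1), gamma (Eq. 2), and normalization."""
    img = cv2.resize(img, (224, 224))
    # Eq. (1): dst(x,y) = alpha * src(x,y) + beta, saturated to 8 bits.
    img = cv2.convertScaleAbs(img, alpha=ALPHA, beta=BETA)
    # Eq. (2): gamma correction via a 256-entry lookup table.
    lut = ((np.arange(256) / 255.0) ** GAMMA * 255).astype(np.uint8)
    img = cv2.LUT(img, lut)
    # Fix each RGB channel's mean and standard deviation.
    img = img.astype(np.float32)
    for ch in range(3):
        c = img[:, :, ch]
        std = c.std() if c.std() > 0 else 1.0
        img[:, :, ch] = (c - c.mean()) / std * TARGET_STD + TARGET_MEAN
    return np.clip(img, 0, 255).astype(np.uint8)
```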
### Visualizing the basis for classification decisions using Grad-CAM

Grad-CAM, proposed by Selvaraju et al.20, uses the gradient of the classification score with respect to the convolutional features determined by the network to provide a visual indication of the parts of the image that are most important for classification.
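For completeness, here is a bare-bones Grad-CAM in PyTorch that hooks a convolutional layer to capture its activations and the gradient of the class score flowing into it. This is a generic sketch, not the authors' implementation; the target layer is chosen by the caller (e.g., `model.layer4[-1]` for ResNet-18), and the input is a CHW tensor.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx=None):
    """Return a normalized heatmap of the regions driving the predicted class."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    try:
        # Requiring grad on the input keeps gradients flowing
        # even if the backbone parameters are frozen.
        image = image.clone().requires_grad_(True)
        scores = model(image.unsqueeze(0))           # 1 x num_classes
        if class_idx is None:
            class_idx = scores.argmax(dim=1).item()
        model.zero_grad()
        scores[0, class_idx].backward()
        # Global-average-pool the gradients to weight each feature map.
        weights = grads["g"].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                            align_corners=False)
        return (cam / (cam.max() + 1e-8)).squeeze().detach()
    finally:
        h1.remove()
        h2.remove()
```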
2023-03-24 13:52:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43856602907180786, "perplexity": 3489.5995943940948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945282.33/warc/CC-MAIN-20230324113500-20230324143500-00377.warc.gz"}
https://docs.mosek.com/10.0/pythonapi/tutorial-cqo-shared.html
# 6.3 Conic Quadratic Optimization

The structure of a typical conic optimization problem is

$\begin{split}\begin{array}{lccccl} \mbox{minimize} & & & c^T x+c^f & & \\ \mbox{subject to} & l^c & \leq & A x & \leq & u^c, \\ & l^x & \leq & x & \leq & u^x, \\ & & & Fx+g & \in & \D, \end{array}\end{split}$

(see Sec. 12 (Problem Formulation and Solutions) for detailed formulations). We recommend Sec. 6.2 (From Linear to Conic Optimization) for a tutorial on how problems of that form are represented in MOSEK and what data structures are relevant. Here we discuss how to set up problems with the (rotated) quadratic cones.

MOSEK supports two types of quadratic cones, namely:

• Quadratic cone:

$\Q^n = \left\lbrace x \in \real^n: x_0 \geq \sqrt{\sum_{j=1}^{n-1} x_j^2} \right\rbrace.$

• Rotated quadratic cone:

$\Qr^n = \left\lbrace x \in \real^n: 2 x_0 x_1 \geq \sum_{j=2}^{n-1} x_j^2,\quad x_0\geq 0,\quad x_1 \geq 0 \right\rbrace.$

For example, consider the following constraint:

$(x_4, x_0, x_2) \in \Q^3$

which describes a convex cone in $$\real^3$$ given by the inequality:

$x_4 \geq \sqrt{x_0^2 + x_2^2}.$

For other types of cones supported by MOSEK, see Sec. 15.11 (Supported domains) and the other tutorials in this chapter. Different cone types can appear together in one optimization problem.

## 6.3.1 Example CQO1

Consider the following conic quadratic problem which involves some linear constraints, a quadratic cone and a rotated quadratic cone.

(6.10)$\begin{split}\begin{array} {lccc} \mbox{minimize} & x_4 + x_5 + x_6 & & \\ \mbox{subject to} & x_1+x_2+ 2 x_3 & = & 1, \\ & x_1,x_2,x_3 & \geq & 0, \\ & x_4 \geq \sqrt{x_1^2 + x_2^2}, & & \\ & 2 x_5 x_6 \geq x_3^2 & & \end{array}\end{split}$

The two conic constraints can be expressed in the ACC form as shown in (6.11):

(6.11)$\begin{split}\left[\begin{array}{cccccc}0&0&0&1&0&0\\1&0&0&0&0&0\\0&1&0&0&0&0\\0&0&0&0&1&0\\0&0&0&0&0&1\\0&0&1&0&0&0\end{array}\right] \left[\begin{array}{c}x_1\\x_2\\x_3\\x_4\\x_5\\x_6\end{array}\right] + \left[\begin{array}{c}0\\0\\0\\0\\0\\0\end{array}\right] \in \Q^3 \times \Q_r^3.\end{split}$

Setting up the linear part

The linear parts (constraints, variables, objective) are set up using exactly the same methods as for linear problems, and we refer to Sec. 6.1 (Linear Optimization) for all the details. The same applies to technical aspects such as defining an optimization task, retrieving the solution and so on.

Setting up the conic constraints

In order to append the conic constraints we first input the matrix $$\afef$$ and vector $$\afeg$$ appearing in (6.11). The matrix $$\afef$$ is sparse and we input only its nonzeros using Task.putafefentrylist. Since $$\afeg$$ is zero, nothing needs to be done about this vector. Each of the conic constraints is appended using the function Task.appendacc. In the first case we append the quadratic cone determined by the first three rows of $$\afef$$ and then the rotated quadratic cone depending on the remaining three rows of $$\afef$$.

```python
# Create a matrix F such that F * x = [x(3),x(0),x(1),x(4),x(5),x(2)]
task.appendafes(6)
task.putafefentrylist(range(6),            # Rows
                      [3, 0, 1, 4, 5, 2],  # Columns
                      [1.0] * 6)

# Quadratic cone (x(3),x(0),x(1)) \in QUAD_3
quadcone = task.appendquadraticconedomain(3)
task.appendacc(quadcone,     # Domain
               [0, 1, 2],    # Rows from F
               None)         # Unused

# Rotated quadratic cone (x(4),x(5),x(2)) \in RQUAD_3
rquadcone = task.appendrquadraticconedomain(3)
task.appendacc(rquadcone,    # Domain
               [3, 4, 5],    # Rows from F
               None)         # Unused
```

The first argument selects the domain, which must be appended before being used, and must have the dimension matching the number of affine expressions appearing in the constraint. Variants of this method are available to append multiple ACCs at a time.
It is also possible to define the matrix $$\afef$$ using a variety of methods (row after row, column by column, individual entries, etc.) similarly as for the linear constraint matrix $$A$$. For a more thorough exposition of the affine expression storage (AFE) matrix $$\afef$$ and vector $$\afeg$$ see Sec. 6.2 (From Linear to Conic Optimization).

Source code

Listing 6.4 Source code solving problem (6.10).

```python
import sys
import mosek

# Since the actual value of Infinity is ignored, we define it solely
# for symbolic purposes:
inf = 0.0

# Define a stream printer to grab output from MOSEK
def streamprinter(text):
    sys.stdout.write(text)
    sys.stdout.flush()

def main():
    # Create a task
    with mosek.Task() as task:
        # Attach a printer to the task
        task.set_Stream(mosek.streamtype.log, streamprinter)

        bkc = [mosek.boundkey.fx]
        blc = [1.0]
        buc = [1.0]

        c = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
        bkx = [mosek.boundkey.lo, mosek.boundkey.lo, mosek.boundkey.lo,
               mosek.boundkey.fr, mosek.boundkey.fr, mosek.boundkey.fr]
        blx = [0.0, 0.0, 0.0, -inf, -inf, -inf]
        bux = [inf, inf, inf, inf, inf, inf]

        asub = [[0], [0], [0]]
        aval = [[1.0], [1.0], [2.0]]

        numvar = len(bkx)
        numcon = len(bkc)

        # Append 'numcon' empty constraints.
        # The constraints will initially have no bounds.
        task.appendcons(numcon)

        # Append 'numvar' variables.
        # The variables will initially be fixed at zero (x=0).
        task.appendvars(numvar)

        for j in range(numvar):
            # Set the linear term c_j in the objective.
            task.putcj(j, c[j])
            # Set the bounds on variable j
            # blx[j] <= x_j <= bux[j]
            task.putvarbound(j, bkx[j], blx[j], bux[j])

        for j in range(len(aval)):
            # Input column j of A
            task.putacol(j,        # Variable (column) index.
                         asub[j],  # Row index of non-zeros in column j.
                         aval[j])  # Non-zero values of column j.

        for i in range(numcon):
            task.putconbound(i, bkc[i], blc[i], buc[i])

        # Input the affine conic constraints
        # Create a matrix F such that F * x = [x(3),x(0),x(1),x(4),x(5),x(2)]
        task.appendafes(6)
        task.putafefentrylist(range(6),            # Rows
                              [3, 0, 1, 4, 5, 2],  # Columns
                              [1.0] * 6)

        # Quadratic cone (x(3),x(0),x(1)) \in QUAD_3
        quadcone = task.appendquadraticconedomain(3)
        task.appendacc(quadcone,     # Domain
                       [0, 1, 2],    # Rows from F
                       None)         # Unused

        # Rotated quadratic cone (x(4),x(5),x(2)) \in RQUAD_3
        rquadcone = task.appendrquadraticconedomain(3)
        task.appendacc(rquadcone,    # Domain
                       [3, 4, 5],    # Rows from F
                       None)         # Unused

        # Input the objective sense (minimize/maximize)
        task.putobjsense(mosek.objsense.minimize)

        # Optimize the task
        task.optimize()

        # Print a summary containing information
        # about the solution for debugging purposes
        task.solutionsummary(mosek.streamtype.msg)

        solsta = task.getsolsta(mosek.soltype.itr)

        # Output a solution
        if solsta == mosek.solsta.optimal:
            xx = task.getxx(mosek.soltype.itr)
            print("Optimal solution: %s" % xx)
        elif solsta == mosek.solsta.dual_infeas_cer:
            print("Primal or dual infeasibility.\n")
        elif solsta == mosek.solsta.prim_infeas_cer:
            print("Primal or dual infeasibility.\n")
        elif solsta == mosek.solsta.unknown:
            print("Unknown solution status")
        else:
            print("Other solution status")

# call the main function
try:
    main()
except mosek.MosekException as e:
    print("ERROR: %s" % str(e.errno))
    print("\t%s" % e.msg)
    sys.exit(1)
except:
    import traceback
    traceback.print_exc()
    sys.exit(1)
```
2022-11-26 08:32:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6687899827957153, "perplexity": 4020.2196165324813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706285.92/warc/CC-MAIN-20221126080725-20221126110725-00832.warc.gz"}
https://tex.stackexchange.com/questions/473717/how-to-have-a-space-between-algorithm-lines
# How to have a space between algorithm lines

I have an algorithm and I would like to insert space between its steps. Here is my try:

```
\documentclass[12pt,a4paper,openright]{report}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{verbatim}
\usepackage{longtable}
\usepackage{graphicx}
\usepackage[square]{natbib}
\usepackage[utf8]{inputenc}
\usepackage{algorithm}% http://ctan.org/pkg/algorithms
\usepackage{algpseudocode}% http://ctan.org/pkg/algorithmicx
\algrenewcommand\algorithmicrequire{\textbf{Input:}}
\algrenewcommand\algorithmicensure{\textbf{Output:}}
\newcommand{\sfunction}[1]{\textsf{\textsc{#1}}}
\algrenewcommand\algorithmicforall{\textbf{foreach}}
\algrenewcommand\algorithmicindent{.8em}
\begin{document}
\begin{algorithm}
\caption{my algorithm}
\label{alg:ALG1}
\begin{algorithmic}[1]
\State $\text{For each line of my algorithm I would like to insert a space between the steps but I cannot do that. The text goes out the box.}$
\State $i \gets \textit{patlen}$
\If {$i > \textit{stringlen}$} \Return false
\EndIf
\State $j \gets \textit{patlen}$
\If {$\textit{string}(i) = \textit{path}(j)$}
\State $j \gets j-1$.
\State $i \gets i-1$.
\State \textbf{goto} \emph{loop}.
\State \textbf{close};
\EndIf
\State $i \gets i+\max(\textit{delta}_1(\textit{string}(i)),\textit{delta}_2(j))$.
\State \textbf{goto} \emph{top}.
\end{algorithmic}
\end{algorithm}
\end{document}
```

• Can you provide preamble part, i.e., from \documentclass to \begin{document} – MadyYuvi Feb 7 at 7:10
• @MadyYuvi Ok. I will in a minute. – Maryam Feb 7 at 7:11
• @MadyYuvi I have added them. – Maryam Feb 7 at 7:13
• @Maryam: Instead of using \State $\text{your text}$ for a \State of code, just use \State your text, without the $\text{...}$. – Werner Feb 7 at 7:20
• Glad to hear that Werner's suggestion works for you... – MadyYuvi Feb 7 at 8:36

In order to have line wrapping "enabled" you need to set your text as regular text, not in math mode wrapped inside \text:

```
\documentclass{article}
\usepackage{algorithm,algpseudocode}
\begin{document}
\begin{algorithm}
\caption{An algorithm}
\begin{algorithmic}[1]
\State For each line of my algorithm I would like to insert a space between the steps but I cannot do that. The text stays inside the box.
\State Something else altogether.
\end{algorithmic}
\end{algorithm}
\end{document}
```
2019-11-22 03:33:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8717301487922668, "perplexity": 4817.581568292301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671106.83/warc/CC-MAIN-20191122014756-20191122042756-00385.warc.gz"}
http://www.popflock.com/learn?s=Kramers%E2%80%93Wannier_duality
Kramers-Wannier Duality

The Kramers-Wannier duality is a symmetry in statistical physics. It relates the free energy of a two-dimensional square-lattice Ising model at a low temperature to that of another Ising model at a high temperature. It was discovered by Hendrik Kramers and Gregory Wannier in 1941. With the aid of this duality Kramers and Wannier found the exact location of the critical point for the Ising model on the square lattice. Similar dualities establish relations between free energies of other statistical models. For instance, in 3 dimensions the Ising model is dual to an Ising gauge model.

## Intuitive idea

The 2-dimensional Ising model exists on a lattice, which is a collection of squares in a chessboard pattern. With the finite lattice, the edges can be connected to form a torus. In theories of this kind, one constructs an involutive transform. For instance, Lars Onsager suggested that the Star-Triangle transformation could be used for the triangular lattice.[1] Now the dual of the discrete torus is itself. Moreover, the dual of a highly disordered system (high temperature) is a well-ordered system (low temperature). This is because the Fourier transform takes a high bandwidth signal (more standard deviation) to a low one (less standard deviation). So one has essentially the same theory with an inverse temperature.

When one raises the temperature in one theory, one lowers the temperature in the other. If there is only one phase transition, it will be at the point at which they cross, at which the temperature is equal. Because the 2D Ising model goes from a disordered state to an ordered state, there is a near one-to-one mapping between the disordered and ordered phases.

The theory has been generalized, and is now blended with many other ideas. For instance, the square lattice is replaced by a circle,[2] random lattice,[3] nonhomogeneous torus,[4] triangular lattice,[5] labyrinth,[6] lattices with twisted boundaries,[7] chiral Potts model,[8] and many others.

## Derivation

Consider an anisotropic square-lattice Ising model with N sites, horizontal and vertical couplings K and L (in units of kT), and dual couplings (K*, L*); in the expansions below, the sum runs over polygon configurations P on the dual lattice Λ_D, with r and s counting the edges of the two orientations in P. The low temperature expansion for (K*,L*) is

${\displaystyle Z_{N}(K^{*},L^{*})=2e^{N(K^{*}+L^{*})}\sum _{P\subset \Lambda _{D}}(e^{-2L^{*}})^{r}(e^{-2K^{*}})^{s}}$

which by using the transformation

${\displaystyle \tanh K=e^{-2L^{*}},\ \tanh L=e^{-2K^{*}}}$

gives

${\displaystyle Z_{N}(K^{*},L^{*})=2(\tanh K\;\tanh L)^{-N/2}\sum _{P}v^{r}w^{s}}$

${\displaystyle =2(\sinh 2K\;\sinh 2L)^{-N/2}Z_{N}(K,L)}$

where v = tanh K and w = tanh L. This yields a relation with the high-temperature expansion. The relations can be written more symmetrically as

${\displaystyle \,\sinh 2K^{*}\sinh 2L=1}$

${\displaystyle \,\sinh 2L^{*}\sinh 2K=1}$

With the free energy per site in the thermodynamic limit

${\displaystyle f(K,L)=\lim _{N\rightarrow \infty }f_{N}(K,L)=-kT\lim _{N\rightarrow \infty }{\frac {1}{N}}\log Z_{N}(K,L)}$

the Kramers-Wannier duality gives

${\displaystyle f(K^{*},L^{*})=f(K,L)+{\frac {1}{2}}kT\log(\sinh 2K\sinh 2L)}$

In the isotropic case where K = L, if there is a critical point at K = Kc then there is another at K = K*c. Hence, in the case of there being a unique critical point, it would be located at K = K* = K*c, implying sinh 2Kc = 1, yielding kTc = 2.2692J.
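To make the last numerical step explicit (writing K = J/kT, consistent with the final sentence):

${\displaystyle \sinh 2K_{c}=1\;\Rightarrow \;2K_{c}=\ln(1+{\sqrt {2}}),\qquad kT_{c}={\frac {2J}{\ln(1+{\sqrt {2}})}}\approx 2.2692\,J.}$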
2020-12-02 05:57:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 8, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7831507325172424, "perplexity": 768.9696307294333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141692985.63/warc/CC-MAIN-20201202052413-20201202082413-00293.warc.gz"}
http://mathematica.stackexchange.com/questions?page=498&sort=newest
# All Questions 88 views ### Setting Up Lightweight Grid On Multiple Clients Programmatically I have a number of computers which all might need to use the same (client) configuration for the Wolfram Lightweight Grid at some point. I know that there are ways to programmatically set certain ... 571 views ### Importing DICOM images When I try to Import individual DICOM (.dcm) files from http://www.osirix-viewer.com/datasets/ datasets I can see the Elements listed in example files, ie: ... 657 views ### Is there any fast way to solve a quadratic matrix equation in Mathematica approximately? Let the square nonsingular matrix $M$ is a given convergent matrix. What are the best scalar values for $\alpha$ and $\beta$ (in the real numbers domain), at which the following quadratic matrix ... 56 views ### Best way to power series expand in multiple variables? [duplicate] A power series expansion of a multivariate function can be performed with the Series command, which performs each expansion consecutively. The results can be a bit ... 227 views ### How to define a variable as a function of another variable? I want to define a variable q which is a function of t. And I want to define another variable ... 599 views ### How to simplify Sqrt[1/x] Sqrt[x]? In my expression, there appear terms of the form A^(B Sqrt[1/C] Sqrt[C]). Mathematica doesn't realize that this is just simply ... 372 views ### Linear Integer Programming with absolute values in objective function I'm trying to solve a problem of the form $$\left|a_{1,0}+a_{1,1} x_1 + a_{1,2}x_2\right| + \left|a_{2,0}+a_{2,1} x_1 + a_{2,2}x_2\right|\to\min,$$ where $a_{i,j}$ are real-valued coefficients and ... 239 views ### Consecutive Print to target same output Cell Is it possible to conveniently direct the output of consecutive Print statements to the same output without actually accumulating the output in a temporary string/variable. This would be useful in ... 553 views ### Making a multiple choice button I've written the following code to create a multiple choice button that takes an arbitrary number of answer options and will assign a variable true/false depending on whether or not the correct choice ... 256 views ### Symbolic Integration of Special Functions Sorry in advance if this formatting comes out strange, this is my first question! I can't find a way to integrate, e.g., a function of the Hermite polynomials for general (still integer) order. For ... 567 views ### BSplineFunction derivatives wrong if using weights? Bug introduced in 7.0 and persisting through 10.4 In my work, I make heavy use of non-uniform rational B-spline (NURBS) functions, defined using the function ... 699 views ### Why Fourier doesn't show me the peaks? I'm trying to identify the frequencies in my time history samples, and I can see a frequency in the time history, but can't see it in its Fourier transform. Here it is : the sample data: ... 356 views ### ParametricNDSolve and Constraint satisfaction problem I am trying to find the values of two parameters that allow for a specific result of my differential equations, under different initial conditions. Given a system of equations: ... 233 views ### Writing CSV or alternative I've tried to export a list to a CSV file. I shut it down after about 20 minutes of processing. The list has 261,000 rows and 30 columns. I finally exported it to a txt file and its size was 111 ... 
290 views ### Importing a ragged array while ignoring rows that start with “#” [duplicate] I am new to Mathematica, and I find that there may be a lot of ways of doing this. I was wondering which one might be the most general. I am importing data from ... 1k views ### Can graph interpolating function, but not use it. Any thoughts? When solving a system of differential equations numerically, I get two interpolating functions. I can graph both of these, but when I try to evaluate either of the functions at a point, it just spits ... 207 views ### How to batch run Mathematica code from Python? I have this notebook: ... 240 views ### Symbolic integral with distributions Why is Integrate[ HeavisideTheta[1 - x^2] DiracDelta[1 - x^2] , {x, 0, 1} ] (*0*) It should be HeavisideTheta[0]? 167 views ### How to produce multiple plots from a multiple data sets I have a number of lists and want to plot all of them vs each other. E.g.: l={ {a,2,5,4,6}, {b,4,6,6,2}, {c,2,8,3,5}, {d,1,5,2,5} } Now I want multiple plots ... 1k views ### Measure length and diameter of objects in an image I am new to Mathematica, and I am trying measure length and diameter of rectangular objects in an image. I wanted to fit rectangles in the objects, and then get the values. Can I do that with ... 707 views ### How to do computations simultaneously in multiple notebooks? [duplicate] I have a Mathematica script which takes a few hours to run. I want to be able to open another notebook while it's running and do some other computations. I have enough many cores on my computer for ... 467 views ### Why doesn't Mathematica divide out the Kelvins? I thought I understood Mathematica V9 units. However, I can't get these numbers to plot, or even to come out to a nice clean unit of mass (say grams). The Kelvin's ... 91 views ### Does one need to be careful about loading multiple (many) contexts or packages in the same session? I have a number of large pieces of precomputed data which I am considering putting into individual packages in order to load them (via DeclarePackage) and unload ... 189 views ### Remove Removed From Output How might I remove Removed[..] symbols from the output of an expression? For example Clear[b]; b := Sin[a + a]; Remove[a]; b ... 429 views ### How Do I Upgrade a Show to a Manipulate? I'm trying to ilustrate the difficulties in assigning bins to a data set and then running a comparison of the bins against a binomial distribution for all values- n, p. The code I have so far is ... 2k views Context I'm trying to setup a connection between Mathematica and Google Docs. Specifically, I would like to be able to export data to a Google Spreadsheet (Note: if the spreadsheet is public it's ... 122 views ### Increase Size of InputForm in Manipulate I would like to set a variable equal to a copy n' pasted list, all within the Manipulate[] environment: Manipulate[list,{list,0}] I would like to view and edit ... 240 views ### Interpolating a ParametricFunction Original domain of my independent variable t is 0 to 10 nanometers, but in Mathematica (as in most numerical software), it's good idea to solve differential ... 288 views ### make specific cluster I have set of coordinates. I want to make clusters in which every point is within 1.5 distance unit of it's neighbor. ex of point coordinates: ... 192 views ### Understanding output of multivariable integration I'm new to Mathematica and I'm trying to integrate this function: ... 
240 views ### After modifying expression, new result doesn't replace old one After I modify and re-evaluate an expression, the new result doesn't replace the old one. Rather, what happens is that the old result gets pushed down and Mathematica displays the current result at ... 218 views ### Using two color functions in a MatrixPlot I'm trying to use two color functions within one MatrixPlot in Mathematica. Is it possible to do this? For example, using a very simple matrix: ... 493 views ### Sierpinski carpet with GraphData Is this graph in the list among the so-called "standard" structures used in GraphData? However, I have not found, yet, anything like "Carpet" or "Sponge" in the ... 205 views ### Constraint syntax compaction Is there a more compact way to represent these constraints: NMaximize[{a+b+c,... 931 views ### Mathematica not plotting whole logarithmic function I am trying to use the following input Plot[Log[E,x+1],{x,-50,50},PlotRange->50] however the output is: How can I adjust it so the line goes down to ... 168 views ### Accessing temporary value of a Nest evaluation The function Nest[] must store the temporary values in the memory somewhere (those values that would form the entries of the list which is returned by the ... 418 views ### Creating overlapped block matrices I'm doing an FEM assignment using Mathematica. (EK1 = {{a11, a12}, {a21, a22}}) // MatrixForm (EK2 = {{b22, b23}, {b32, b33}}) // MatrixForm I don't know how ... 1k views ### Why this numerical integration takes so long? Let me explain the problem. I am trying to integrate a one dimensional integral: $$\int {g\left( {{k_x}},parameter1,parameter2,...\right)d{\mkern 1mu} {k_x}}$$ for the sake of clarity, I will give ... 2k views ### How can I manipulate TemporalData? Version 9 introduced TemporalData but, truth be told, the documentation doesn't suggest that one can do much with it. Many of us have probably assumed we'd be ... 247 views ### Solving Within InverseCDF I have the following input data: Remove["Global`*"] x = 5/30 // N; y = 0.8; μ = 0.2; T = 5; ρ = 0.8; σ (*to be determined*) And this is the given equation: ... 361 views ### FindFit: why do I get negative value as result? I am trying fo fit a model to data using FindFit. ... 808 views ### Metric Parameters such as Pratt Figure of Merit (Pratt-FOM) for Edge Detection I have developed a new algorithm in Mathematica v9.0 for the Edge Detection and want to compare it with the existing Roberts, Sobel, Prewitt etc, operators in the presence of Noise. I have evaluated ... 3k views ### Simulate MATLAB's meshgrid function Here is MATLAB's meshgrid. I came up with these implementations in Mathematica: ... 593 views ### How to compute the Multifractal Spectrum of a Financial Series with WTMM I would like to know if someone knows how to compute the Multifractal Spectrum of a Financial Time Series (Currency) througth the Wavelet Transformation Modulus Maxima (WTMM). I would highly ... 298 views ### Automate mouse clicks with Mathematica I need to automate mouse clicks with Mathematica. However, a similiar question was closed on this forum, without any answer. Consider the following steps: ... 556 views ### NDSolve for a large system of coupled (complex valued) ODEs The system under consideration here is a generalization of the one I discussed in this problem. Qualitatively, I am trying to solve a system of coupled nonlinear (up to order 4 in the dep variables) ... 359 views ### How do I create matrices with arbitrary list restrictions? 
I want to randomly generate a square matrix of dimension n with entries in the list StartingEntries, and also satisfying that for any such matrix M, the matrix Inverse[IdentityMatrix[n] - M] has ... 286 views ### Programmatic generation of enhanced (Enterprise) CDF How does one PROGRAMATICALLY generate enhanced CDFs under an Enterprise license? Undocumented options to CDFDeploy[] perhaps?. Wizard works fine for me, but attempts to do this programatically (from ... I'm working on a complex question that asks that I determine a function that maps the complement of the region $D=\{z:|z+1|\le 1\}\cup\{z: |z-1|\le 1\}$ onto the upper half plane. That is, $f$ must ...
2016-05-30 22:16:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7675822973251343, "perplexity": 1611.7122432756526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051114647.95/warc/CC-MAIN-20160524005154-00167-ip-10-185-217-139.ec2.internal.warc.gz"}
https://datascience.stackexchange.com/questions/89013/found-input-variables-with-inconsistent-numbers-of-samples-30-24
# Found input variables with inconsistent numbers of samples: [30, 24]

I'm using a neural network for machine learning and would like to see the confusion matrix for my model. However, I get an error and don't know how to solve it.

```python
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(testY, testPredict)
print(cm)
```

This gives the error:

Found input variables with inconsistent numbers of samples: [30, 24]

I've checked the shapes of the test labels and the predicted values, and they are indeed different. How can I make the shapes of the test labels and predictions the same?

I had the same problem; here is how I solved it: testX is a tf.data.Dataset, I guess. In that case try the following (here `model` stands for your trained Keras model, and the dataset is assumed to yield (x, y) batches):

```python
import numpy as np
import tensorflow as tf
from sklearn import metrics

# Make the predictions from your test set:
testPredictRaw = model.predict(testX)
testPredict = np.argmax(testPredictRaw, axis=1)

# Then take all the y values from the prefetched (x, y) dataset,
# so the labels end up with the same shape as the predictions:
trueClasses = tf.concat([y for x, y in testX], axis=0)

# Calculate the confusion matrix using sklearn.metrics
cm = metrics.confusion_matrix(trueClasses, testPredict)
```

I used these answered questions to make it work for my case:
2023-03-23 23:11:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6614898443222046, "perplexity": 1848.853202635524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945218.30/warc/CC-MAIN-20230323225049-20230324015049-00035.warc.gz"}
https://math.berkeley.edu/~kmill/blog/blog_2017_09_28_hopf_repr.html
# Penrose notation and finite group representations I have been using Penrose notation quite a lot recently, for instance trying to make sense of Penrose’s Applications of negative dimensional tensors. While thinking about group algebras, I wondered how hard it would be to translate the basic main results for representations of finite groups into graphical notation (following the first couple sections of Fulton and Harris). These handwritten notes have one interpretation. It covers idempotents, characters, characters of irreducible representations being an orthogonal basis, and the sum of squares of the dimensions. It seems like it is not a coincidence that the features of the diagrams are like filters. It is like vectors resonate in the loops, and destructive interference causes only particular “frequencies” to emerge. I would like to see if there were a way to calculate the modes of coupled oscillators. Tensor products seem to represent a maximally uncoupled system, and the integration operator (convolution) seems to be maximally coupled. The interesting modes of a system of coupled oscillators have frequencies involving the square root of a sum, however, so it would have to be some other kind of operation.
2022-09-29 04:13:09
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8126225471496582, "perplexity": 353.27945954022346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00356.warc.gz"}
https://zbmath.org/?q=an:0635.20038
# zbMATH — the first resource for mathematics

Orthogonality of transitive, distributive quasi-groups corresponding to finite cyclic groups. (Russian) Zbl 0635.20038

The paper deals with transitive, distributive quasigroups (TDQ) corresponding to finite abelian groups and proves that if the order of a cyclic group is a) an even number, then a TDQ does not exist; b) an odd number, then there exists at least one TDQ. Moreover, if the order of a finite abelian group G is $$n=p_ 1^{k_ 1}p_ 2^{k_ 2}...p_ t^{k_ t}$$ or $$n=2^ kp_ 1^{k_ 1}...p_ s^{k_ s}$$, where $$k_ i>0$$, $$k\geq 2$$, $$p_ i\neq 2$$ are prime numbers and $$G=G_ 1\dot +G_ 2\dot +...\dot +G_ k\dot +G_{k+1}\dot +...\dot +G_ s$$, $$| G_ j| =2$$, $$1\leq j\leq k$$, $$| G_ h| =p_ h^{k_ h}$$, $$k+1\leq h\leq s$$, then the corresponding TDQ exists. In the case that several TDQs correspond to a given finite abelian group, a necessary and sufficient condition for two TDQs to be orthogonal is determined.

Reviewer: E.Broziková

##### MSC:

20N05 Loops, quasigroups
20K01 Finite abelian groups
2021-10-23 17:15:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8596997857093811, "perplexity": 568.2150466908149}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585737.45/warc/CC-MAIN-20211023162040-20211023192040-00476.warc.gz"}
https://learn.careers360.com/ncert/question-consider-the-reaction-of-water-with-f2-and-suggest-in-terms-of-oxidation-and-reduction-which-species-are-oxidised-reduced/
# 9.19  Consider the reaction of water with F2 and suggest, in terms of oxidation and reduction, which species are oxidized/reduced.

The reaction of fluorine with water:

$2F_{2}+2H_{2}O\rightarrow 4HF+O_{2}$

$F_{2}(0)\rightarrow H^{+1}F^{-1}$ : reduction (the O.N. of F changes from 0 to -1)

$H^{+1}_{2}O^{-2}\rightarrow O_{2}^{0}$ : oxidation (the O.N. of O changes from -2 to 0)

Fluorine is reduced by gaining electrons, and the oxygen of water is oxidized by losing electrons. The electron count balances: the four F atoms each gain one electron (4 e- in total), matching the two O atoms that each lose two electrons.
2020-05-30 09:57:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2955467998981476, "perplexity": 8213.98160502198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347407667.28/warc/CC-MAIN-20200530071741-20200530101741-00530.warc.gz"}
http://crypto.stackexchange.com/tags/key-reuse/new
# Tag Info

## New answers tagged key-reuse

0

With this cipher, it's pretty easy to retrieve at least one key that is consistent with two (plaintext, ciphertext) pairs. (Other ciphers are better or worse at making it nearly impossible to recover even one key consistent with a given plaintext and ciphertext.) With this cipher, it is not possible to fully retrieve the key from only two known pairs of ...

0

There is a straightforward brute-force method. For example, take the lowest $8$ bits of everything and check for valid values of $K_1$ and $K_2$, mod $2^8$. You will need about $2^{16}$ checks to get the lower $8$ bits of $K_1$ and $K_2$. Proceed then to values mod $2^{16}$: as you know the lower $8$ bits of $K_1$ and $K_2$, only bits $8\dots15$ of these ...
2013-12-13 06:27:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48434752225875854, "perplexity": 504.4240288826126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164896464/warc/CC-MAIN-20131204134816-00030-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-11-section-11-1-sequences-11-1-exercises-page-704/14
Calculus: Early Transcendentals 8th Edition

$$a_n = -\frac{(-1)^n}{4^{n-2}}$$

Our first term is $a_1 = 4$. Because the sequence alternates in sign, we include a factor of $(-1)^{n}$; the leading minus sign makes the odd-indexed terms (including the first) positive. Each term's magnitude is $\frac{1}{4}$ of the previous one, so we introduce a factor of $(\frac{1}{4})^n$. Putting together the pieces with the fact that $a_1 = 4$, which fixes the exponent in the denominator as $n-2$, we have: $$a_n = -\frac{(-1)^n}{4^{n-2}}$$
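As a quick check, the formula reproduces the first few terms implied by the work above (a first term of 4, alternating signs, magnitudes shrinking by a factor of $\frac{1}{4}$): $a_1 = -\frac{(-1)^1}{4^{-1}} = 4$, $a_2 = -\frac{(-1)^2}{4^{0}} = -1$, $a_3 = -\frac{(-1)^3}{4^{1}} = \frac{1}{4}$, and $a_4 = -\frac{(-1)^4}{4^{2}} = -\frac{1}{16}$.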
2018-04-25 07:40:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9445972442626953, "perplexity": 100.33136258125873}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947705.94/warc/CC-MAIN-20180425061347-20180425081347-00305.warc.gz"}
https://www.cnblogs.com/devymex/archive/2010/08/18/1801966.html
# 程序控 IPPP (Institute of Penniless Peasent-Programmer) Fellow

## Problem

The well-known physicist Alfred E Neuman is working on problems that involve multiplying polynomials of x and y. For example, he may need to calculate

(-x^8y + 9x^3 - 1 + y) · (x^5y + 1 + x^3) = -x^13y^2 - x^11y + 8x^8y + 9x^6 - x^5y + x^5y^2 + 8x^3 + x^3y - 1 + y

Unfortunately, such problems are so trivial that the great man's mind keeps drifting off the job, and he gets the wrong answers. As a consequence, several nuclear warheads that he has designed have detonated prematurely, wiping out five major cities and a couple of rain forests. You are to write a program to perform such multiplications and save the world.

## Input

The file of input data will contain pairs of lines, with each line containing no more than 80 characters. The final line of the input file contains a # as its first character. Each input line contains a polynomial written without spaces and without any explicit exponentiation operator. Exponents are positive non-zero unsigned integers. Coefficients are also integers, but may be negative. Both exponents and coefficients are less than or equal to 100 in magnitude. Each term contains at most one factor in x and one in y.

## Output

Your program must multiply each pair of polynomials in the input, and print each product on a pair of lines, the first line containing all the exponents, suitably positioned with respect to the rest of the information, which is in the line below. The following rules control the output format:

1. Terms in the output line must be sorted in decreasing order of powers of x and, for a given power of x, in increasing order of powers of y.
2. Like terms must be combined into a single term. For example, 40x^2y^3 - 38x^2y^3 is replaced by 2x^2y^3.
3. Terms with a zero coefficient must not be displayed.
4. Coefficients of 1 are omitted, except for the case of a constant term of 1.
5. Exponents of 1 are omitted.
6. Factors of x^0 and y^0 are omitted.
7. Binary pluses and minuses (that is, the pluses and minuses connecting terms in the output) have a single blank column both before and after.
8. If the coefficient of the first term is negative, it is preceded by a unary minus in the first column, with no intervening blank column. Otherwise, the coefficient itself begins in the first output column.
9. The output can be assumed to fit into a single line of at most 80 characters in length.
10. There should be no blank lines printed between each pair of output lines.
11. The pair of lines that contain a product should be the same length: trailing blanks should appear after the last non-blank character of the shorter line to achieve this.
## Sample Input

```
-yx8+9x3-1+y
x5y+1+x3
1
1
#
```

## Sample Output

```
  13 2    11      8      6    5     5 2     3    3
-x  y  - x  y + 8x y + 9x  - x y + x y  + 8x  + x y - 1 + y
 
1
```

## Solution

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>
#include <stdio.h>
#include <string.h> // strcpy
#include <ctype.h>  // isdigit, isalpha
using namespace std;
// Structure representing one term of a polynomial
struct TERM {
    // Members: coefficient, exponent of x, exponent of y
    int cof;
    int xe;
    int ye;
    // Constructor initializing the members from the arguments
    TERM (int c, int x, int y) : cof(c), xe(x), ye(y) {}
};
// Compare the exponents of two terms; used for sorting and combining like terms
bool GreaterTerm(const TERM &t1, const TERM &t2) {
    return (t1.xe > t2.xe || (t1.xe == t2.xe && t1.ye < t2.ye));
}
// Parse an input polynomial string
void ParsePolynomial(char *pStr, vector<TERM> &Terms) {
    // Process each term in a loop
    for (int nNum; *pStr != 0;) {
        // Determine the sign of the term and initialize the term structure
        TERM Term(*pStr == '-' ? -1 : 1, 0, 0);
        // If there is a leading sign, advance the pointer past it
        pStr += (*pStr == '-' || *pStr == '+') ? 1 : 0;
        // If the coefficient is 0, skip the whole term
        if (*pStr == '0') {
            for(++pStr; *pStr != '\0' && *pStr != '+' && *pStr != '-'; ++pStr);
            continue;
        }
        // Read the coefficient that follows the sign
        for (nNum = 0; isdigit(*pStr); nNum = nNum * 10 + *pStr++ - '0');
        // If a coefficient was present, multiply it into the term (keeping the sign)
        for (Term.cof *= (nNum == 0) ? 1 : nNum; isalpha(*pStr);) {
            // Read the exponents of the (up to two) variables;
            // first decide whether this is the exponent of x or of y
            int *pe = (*pStr == 'x') ? &Term.xe : &Term.ye;
            // Read the exponent that follows
            for (; isdigit(*++pStr); *pe = *pe * 10 + *pStr - '0');
            // A missing exponent means an exponent of 1
            *pe = (*pe == 0) ? 1 : *pe;
        }
        // Append the new term structure to the array
        Terms.push_back(Term);
    }
}
// Main function
int main(void) {
    // Read all input data in a loop, stopping at '#'
    for (string str1, str2; cin >> str1 && str1 != "#"; ) {
        cin >> str2;
        if (str1.empty() || str2.empty()) continue;
        const int nMaxLen = 100;
        char szBuf1[nMaxLen], szBuf2[nMaxLen];
        vector<TERM> Poly1, Poly2, Result;
        // Copy the strings to writable buffers for parsing
        strcpy(szBuf1, str1.c_str());
        strcpy(szBuf2, str2.c_str());
        // Parse the two polynomial strings
        ParsePolynomial(szBuf1, Poly1);
        ParsePolynomial(szBuf2, Poly2);
        vector<TERM>::iterator i, j;
        // Perform the polynomial multiplication
        for (i = Poly1.begin(); i != Poly1.end(); ++i) {
            for (j = Poly2.begin(); j != Poly2.end(); ++j) {
                TERM Term(i->cof * j->cof, i->xe + j->xe, i->ye + j->ye);
                Result.push_back(Term);
            }
        }
        // Sort the terms by exponents
        sort(Result.begin(), Result.end(), GreaterTerm);
        fill(&szBuf1[0], &szBuf1[nMaxLen], ' ');
        fill(&szBuf2[0], &szBuf2[nMaxLen], ' ');
        int nPos = 0;
        // Scan for like terms
        for (i = Result.begin(); i != Result.end(); ++i) {
            // Combine any following like terms (if they exist)
            for (j = i + 1; j < Result.end() && i->xe == j->xe && i->ye == j->ye;) {
                i->cof += j->cof;
                j = Result.erase(j);
            }
            // If the coefficient of this term is nonzero, output it
            if (i->cof != 0) {
                if (nPos > 0) {
                    // Not the first term: output the connecting operator
                    ++nPos; // the blank before the operator
                    szBuf2[nPos++] = i->cof > 0 ? '+' : '-';
                    szBuf2[nPos++] = ' ';
                }
                else {
                    // First term: output a leading unary minus (if negative)
                    szBuf2[0] = '-';
                    nPos += (i->cof < 0);
                }
                // Output the coefficient unless its absolute value is 1
                // and at least one of the x/y exponents is nonzero
                i->cof = abs(i->cof);
                if (i->cof != 1 || (i->xe == 0 && i->ye == 0)) {
                    nPos += sprintf(&szBuf2[nPos], "%d", i->cof);
                    // Undo the '\0' written by sprintf
                    szBuf2[nPos] = ' ';
                }
                // If the exponent of x is nonzero, output x
                if (i->xe > 0) {
                    szBuf2[nPos++] = 'x';
                    if (i->xe > 1) {
                        nPos += sprintf(&szBuf1[nPos], "%d", i->xe);
                        szBuf1[nPos] = ' ';
                    }
                }
                // Same for y
                if (i->ye > 0) {
                    szBuf2[nPos++] = 'y';
                    if (i->ye > 1) {
                        nPos += sprintf(&szBuf1[nPos], "%d", i->ye);
                        szBuf1[nPos] = ' ';
                    }
                }
            }
        }
        // If no term was output, the product is 0
        if (nPos == 0) {
            szBuf2[nPos++] = '0';
        }
        // Terminate the two result strings and print them
        szBuf1[nPos] = szBuf2[nPos] = '\0';
        cout << szBuf1 << '\n' << szBuf2 << endl;
    }
    return 0;
}
```
2020-01-19 02:40:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24546319246292114, "perplexity": 11300.614138388522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594101.10/warc/CC-MAIN-20200119010920-20200119034920-00046.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-5-linear-functions-5-3-slope-intercept-form-practice-and-problem-solving-exercises-page-312/23
Chapter 5 - Linear Functions - 5-3 Slope-Intercept Form - Practice and Problem-Solving Exercises - Page 312: 23

y=2x-3

Work Step by Step

The slope-intercept form is y=mx+b. The formula for slope is m=$\frac{y_2-y_1}{x_2-x_1}$. Using the points (2,1) and (0,-3) from the graph, the slope is m=$\frac{1-(-3)}{2-0}$=$\frac{4}{2}$=2. Substituting the point (2,1) into the slope-intercept form: 1=2(2)+b, so 1=4+b and b=-3. As a check, the other point also fits: -3=2(0)-3. Plugging these values into the slope-intercept form, the equation is y=2x-3.
2021-04-23 05:52:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6087766289710999, "perplexity": 2682.051643045578}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039601956.95/warc/CC-MAIN-20210423041014-20210423071014-00093.warc.gz"}
https://www.biostars.org/p/326145/
Module-Trait relationship in WGCNA

4.4 years ago · Prakash ★ 2.2k

Hello all,

I am working with 20 samples of RNA-seq data (5 control and 5 KD, with five different stimulations) and have identified several modules using WGCNA. Next I would like to perform a module-condition relationship analysis; basically I would like to identify condition-specific modules. I have designed the conditions in binary format (see the table below). Can someone tell me whether the design is correct or not, and if not, what would be the best way to design the trait in this scenario?

sample     EU EI EC EP EA NU NI NC NP NA
E_Uns_R1    1  0  0  0  0  0  0  0  0  0
E_Uns_R2    1  0  0  0  0  0  0  0  0  0
E_IFNg_R1   0  1  0  0  0  0  0  0  0  0
E_IFNg_R2   0  1  0  0  0  0  0  0  0  0
E_CpG_R1    0  0  1  0  0  0  0  0  0  0
E_CpG_R2    0  0  1  0  0  0  0  0  0  0
E_pIC_R1    0  0  0  1  0  0  0  0  0  0
E_pIC_R2    0  0  0  1  0  0  0  0  0  0
E_all3_R1   0  0  0  0  1  0  0  0  0  0
E_all3_R2   0  0  0  0  1  0  0  0  0  0
N_Uns_R1    0  0  0  0  0  1  0  0  0  0
N_Uns_R2    0  0  0  0  0  1  0  0  0  0
N_IFNg_R1   0  0  0  0  0  0  1  0  0  0
N_IFNg_R2   0  0  0  0  0  0  1  0  0  0
N_CpG_R1    0  0  0  0  0  0  0  1  0  0
N_CpG_R2    0  0  0  0  0  0  0  1  0  0
N_pIC_R1    0  0  0  0  0  0  0  0  1  0
N_pIC_R2    0  0  0  0  0  0  0  0  1  0
N_all3_R1   0  0  0  0  0  0  0  0  0  1
N_all3_R2   0  0  0  0  0  0  0  0  0  1

Thanks

RNA-Seq Forum

Answer · 4.4 years ago · Lluís R. ★ 1.1k

Your conditions seem like blocking factors. In WGCNA it is recommended to correlate the eigenvalue of the module with the phenotype variables. I would recommend, however, correlating the eigenvalue of the module with the eigenvalue of your condition. However, if these really are 5 different conditions it might be worthless, as the co-expressed genes might be few or not highly co-expressed...

Comment: But in this paper they took three conditions in the last part of their analysis and did WGCNA. Can you explain how they performed it?

Comment: Well, I don't know anything about this paper beyond what is described, but looking at Figure 4a it seems they correlated the condition with the eigengene. Whether it is accurate or not is another thing... You can do as they did, but note that they have more than 4 samples for each condition, and up to 9. Upon closer look at your matrix, it seems you have several factors happening at the same time. If I understood correctly, you have 2 conditions (N and E) and then (Uns, IFNg, pIC, CpG and all3) with two replicates for the second factor, which results in smaller groups for each condition, too small to do something with confidence.

Comment: Yes, the sample size is a concern...
https://scottbanwart.com/blog/2020/07/til-jenkins-iterators/
# Jenkins Iterators

## Introduction

This past week I got bit by a couple of issues with using collections in a Jenkins pipeline that are, in my opinion, violations of the Principle of Least Surprise. One of them is related to the need for all objects in a Jenkins pipeline to be serializable, and the other is related to how Java/Groovy treats references to objects. Both of them cost me a significant amount of time, so hopefully this blog post will save someone else the same pain and suffering.

## The Setup

In my situation, I had a Groovy map with some configuration values that I needed for my pipeline. I was using these values to construct a set of closures that I was going to use with the parallel keyword. My pipeline code looks something like the first sketch below.

## Issue 1: Iterators Are Not Serializable

The first error I encountered was that iterators aren't serializable. Jenkins expects everything to be serializable so that it can save state and restart a pipeline if execution is interrupted, and so it can ship the various stages off to secondary processing nodes. The workaround here is to create an external function to convert the map iterator into a list, and mark it with the @NonCPS attribute. This attribute tells Jenkins that it must execute this function all together. For this purpose, I created a small utility function; see the second sketch below.

## Issue 2: Iterators Use Copy-By-Reference

The second issue did not cause an error, but did result in unexpected behavior. By default, objects in Java are copied by reference. In the case of the iterator, when you use the contents of the CONFIG collection, you are getting a reference to the iterator object instead of a copy of the map values. In the case of deferred execution, it means the iterator is pointed at the last item in the collection. Instead of executing the three individual tasks as expected, it instead executes the last task three times. To avoid this unexpected behavior, you need to force it to copy the values instead of the reference. In my case I wrapped the configuration values in a String object. A String in Java is immutable, so it will perform a deep copy when creating the object.

## Final Version

With the changes described above, the final version of my code looks like the third sketch below. This version of the code executes the three tasks in parallel as expected.

## Conclusion

I was pretty frustrated that it took me the better part of a day to figure out the iterator issue. I will admit that I do not have much experience with Java, and after this I cannot say I have any real interest in learning more about it. Hopefully this will save somebody else the time and frustration if they encounter these same issues.
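A minimal sketch of the setup described above. The CONFIG keys and values and the echoed commands are illustrative assumptions, not the original code; the point is that map entries drive closures handed to parallel:

```groovy
// Hypothetical configuration map; keys and values are made up.
def CONFIG = [
    build : 'make all',
    test  : 'make test',
    deploy: 'make deploy'
]

def tasks = [:]
// Iterating the map's entry set directly is what triggers both issues:
// the entry iterator is not serializable, and each closure captures a
// reference to the loop variable rather than a copy of the value.
for (entry in CONFIG.entrySet()) {
    tasks[entry.key] = {
        echo "Running ${entry.value}"   // runs later, under 'parallel'
    }
}
parallel tasks
```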
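A sketch of the @NonCPS utility described in Issue 1; the name mapToList is an assumption:

```groovy
// Convert the map's entries into a plain, serializable list of
// [key, value] pairs. @NonCPS makes Jenkins run the whole function in
// one uninterrupted step, so the non-serializable iterator never has
// to be checkpointed.
@NonCPS
def mapToList(Map m) {
    def result = []
    for (entry in m.entrySet()) {
        result << [entry.key, entry.value]
    }
    return result
}
```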
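And a sketch of the final version: iterate the serializable list instead of the map, and wrap each value in a new String so the closure captures an immutable copy rather than a reference, as described in Issue 2:

```groovy
def tasks = [:]
for (item in mapToList(CONFIG)) {
    // new String(...) forces a copy, detaching the captured value from
    // the loop variable that gets reused on every iteration.
    def name    = new String(item[0])
    def command = new String(item[1])
    tasks[name] = {
        echo "Running ${command}"
    }
}
parallel tasks
```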
http://en.wikipedia.org/wiki/Horn%E2%80%93Schunck_method
# Horn–Schunck method

The Horn–Schunck method of estimating optical flow is a global method which introduces a global constraint of smoothness to solve the aperture problem (see Optical Flow for further description).

## Mathematical details

The Horn–Schunck algorithm assumes smoothness in the flow over the whole image. Thus, it tries to minimize distortions in flow and prefers solutions which show more smoothness. The flow is formulated as a global energy functional which is then sought to be minimized. This functional is given for two-dimensional image streams as:

$E=\iint \left[(I_xu + I_yv + I_t)^2 + \alpha^2(\lVert\nabla u\rVert^2+\lVert\nabla v\rVert^2)\right]{{\rm d}x{\rm d}y}$

where $I_x$, $I_y$ and $I_t$ are the derivatives of the image intensity values along the x, y and time dimensions respectively, $\vec{V} = [u(x,y),v(x,y)]^\top$ is the optical flow vector, and the parameter $\alpha$ is a regularization constant. Larger values of $\alpha$ lead to a smoother flow. This functional can be minimized by solving the associated multi-dimensional Euler–Lagrange equations. These are

$\frac{\partial L}{\partial u} - \frac{\partial}{\partial x}\frac{\partial L}{\partial u_x} - \frac{\partial}{\partial y}\frac{\partial L}{\partial u_y} = 0$

$\frac{\partial L}{\partial v} - \frac{\partial}{\partial x}\frac{\partial L}{\partial v_x} - \frac{\partial}{\partial y}\frac{\partial L}{\partial v_y} = 0$

where $L$ is the integrand of the energy expression, giving

$I_x(I_xu+I_yv+I_t) - \alpha^2 \Delta u = 0$

$I_y(I_xu+I_yv+I_t) - \alpha^2 \Delta v = 0$

where subscripts again denote partial differentiation and $\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$ denotes the Laplace operator. In practice the Laplacian is approximated numerically using finite differences, and may be written

$\Delta u(x,y) = \overline{u}(x,y) - u(x,y)$

where $\overline{u}(x,y)$ is a weighted average of $u$ calculated in a neighborhood around the pixel at location (x,y). Using this notation the above equation system may be written

$(I_x^2 + \alpha^2)u + I_xI_yv = \alpha^2\overline{u}-I_xI_t$

$I_xI_yu + (I_y^2 + \alpha^2)v = \alpha^2\overline{v}-I_yI_t$

which is linear in $u$ and $v$ and may be solved for each pixel in the image; the determinant of this 2×2 system is $\alpha^2(\alpha^2+I_x^2+I_y^2)$, so it is always invertible for $\alpha \neq 0$. However, since the solution depends on the neighboring values of the flow field, it must be repeated once the neighbors have been updated. The following iterative scheme is derived:

$u^{k+1}=\overline{u}^k - \frac{I_x(I_x\overline{u}^k+I_y\overline{v}^k+I_t)}{\alpha^2+I_x^2+I_y^2}$

$v^{k+1}=\overline{v}^k - \frac{I_y(I_x\overline{u}^k+I_y\overline{v}^k+I_t)}{\alpha^2+I_x^2+I_y^2}$

where the superscript k+1 denotes the next iteration, which is to be calculated, and k is the last calculated result. This is in essence the Jacobi method applied to the large, sparse system arising when solving for all pixels simultaneously.

## Properties

Advantages of the Horn–Schunck algorithm include that it yields a high density of flow vectors, i.e. the flow information missing in inner parts of homogeneous objects is filled in from the motion boundaries. On the negative side, it is more sensitive to noise than local methods.
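To make the update rule concrete, here is a minimal, illustrative Groovy sketch of a single Horn–Schunck iteration, under stated assumptions: the derivative images $I_x$, $I_y$, $I_t$ have already been computed by finite differences, and the weighted average $\overline{u}$ is replaced by a plain 4-neighbour mean, one common simple choice. It is a sketch, not an optimized implementation.

```groovy
// Plain 4-neighbour mean, standing in for the weighted average u-bar.
double[][] neighbourMean(double[][] f) {
    int h = f.length, w = f[0].length
    double[][] avg = new double[h][w]
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            double s = 0.0d
            int n = 0
            if (y > 0)     { s += f[y - 1][x]; n++ }
            if (y < h - 1) { s += f[y + 1][x]; n++ }
            if (x > 0)     { s += f[y][x - 1]; n++ }
            if (x < w - 1) { s += f[y][x + 1]; n++ }
            avg[y][x] = s / n
        }
    }
    return avg
}

// One Jacobi-style Horn–Schunck update; returns the new [u, v] pair.
List hornSchunckStep(double[][] u, double[][] v,
                     double[][] ix, double[][] iy, double[][] idt,
                     double alpha) {
    int h = u.length, w = u[0].length
    double[][] ubar = neighbourMean(u)
    double[][] vbar = neighbourMean(v)
    double[][] un = new double[h][w]
    double[][] vn = new double[h][w]
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            // num = I_x*ubar + I_y*vbar + I_t,  den = alpha^2 + I_x^2 + I_y^2
            double num = ix[y][x] * ubar[y][x] + iy[y][x] * vbar[y][x] + idt[y][x]
            double den = alpha * alpha + ix[y][x] * ix[y][x] + iy[y][x] * iy[y][x]
            un[y][x] = ubar[y][x] - ix[y][x] * num / den
            vn[y][x] = vbar[y][x] - iy[y][x] * num / den
        }
    }
    return [un, vn]
}
```

In use, one would call hornSchunckStep repeatedly, starting from zero flow, until the change between iterations falls below a tolerance, exactly as in the iterative scheme above.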
https://plainmath.net/20178/customers-randomly-selected-grocery-included-coupons-confidence-interval
# Customers randomly selected at a grocery store included 172 women and 45 men. 74 of the women and 12 of the men used coupons. Find the 95% confidence interval.

UkusakazaL 2021-08-09

Step 1

Given:

Number of women: $$n_1 = 172$$
Number of men: $$n_2 = 45$$
Number of women who used coupons: $$x_1 = 74$$
Number of men who used coupons: $$x_2 = 12$$
Confidence level $$= 0.95$$, so $$\alpha = 1 - 0.95 = 0.05$$

Step 2

Sample proportions:

$$\hat{p}_1 = \frac{x_1}{n_1} = \frac{74}{172}$$

$$\hat{p}_2 = \frac{x_2}{n_2} = \frac{12}{45}$$

From the z tables, $$P(z > 1.96) = 0.025$$, so $$z_{\alpha/2} = 1.96$$.

Step 3

The confidence interval for the difference of the proportions is

$$(\hat{p}_1 - \hat{p}_2) \pm z_{\alpha/2}\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}$$

$$= \left(\frac{74}{172} - \frac{12}{45}\right) \pm 1.96\sqrt{\frac{\frac{74}{172}\left(1-\frac{74}{172}\right)}{172} + \frac{\frac{12}{45}\left(1-\frac{12}{45}\right)}{45}}$$

$$= (0.0147,\ 0.3125)$$

The 95% confidence interval for the difference between the proportions of women and men who used coupons is $$(0.0147, 0.3125)$$.
http://zbmath.org/?q=an:1227.42021
# zbMATH — the first resource for mathematics

Real-variable characterizations of Hardy spaces associated with Bessel operators. (English) Zbl 1227.42021

The authors prove characterizations of the atomic Hardy spaces $H^p((0,\infty), dm_\lambda)$ associated with the Bessel operator $\Delta_\lambda = -\frac{d^2}{dx^2} - \frac{2\lambda}{x}\frac{d}{dx}$, where $dm_\lambda(x) = x^{2\lambda}\,dx$, $p \in \left(\frac{2\lambda+1}{2\lambda+2}, 1\right]$ and $\lambda \in (0,\infty)$. The characterizations are given in terms of the radial maximal function, the nontangential maximal function, the grand maximal function, the Littlewood-Paley $g$-function and the Lusin area function. Arguments in the proofs are partially based on results and notions introduced by Y.-S. Han, D. Müller and D.-C. Yang in [Math. Nachr. 279, No. 13-14, 1505-1537 (2006; Zbl 1179.42016)] and [Abstr. Appl. Anal. 2008, Article ID 893409 (2008; Zbl 1193.46018)].

##### MSC:

42B30 $H^p$-spaces (Fourier analysis)
42B25 Maximal functions, Littlewood-Paley theory
42B35 Function spaces arising in harmonic analysis
https://emacs.stackexchange.com/questions/21239/change-table-of-contents-title-in-org-mode-according-to-document-language/21241
My document is not in English so I want to translate the table of contents' title.

As JeanPierre said, it's about export settings. You can set LANGUAGE at the top of your org document this way:

#+LANGUAGE: fr

and French will be used as the default language for all strings that org produces during export. The constant responsible for the translation mappings is org-export-dictionary in ox.el, and if your language is not supported you can drop it in there and then eval-defun to let the change take place. In my case:

(defconst org-export-dictionary
  ...)

I've written a naive function which can be useful in init.el:

(defun org-export-translate-to-lang (term-translations &optional lang)
  "Adds desired translations to `org-export-dictionary'.
TERM-TRANSLATIONS is an alist consisting of the term you want to
translate and its corresponding translations, first as :default,
then as :html and :utf-8.  LANG is the language you want to
translate to."
  (dolist (term-translation term-translations)
    (let* ((term (car term-translation))
           (translation-default (nth 1 term-translation))
           (translation-html (nth 2 term-translation))
           (translation-utf-8 (nth 3 term-translation))
           (term-list (assoc term org-export-dictionary))
           (term-langs (cdr term-list)))
      (setcdr term-list
              (append term-langs
                      (list (list lang
                                  :default translation-default
                                  :html translation-html
                                  :utf-8 translation-utf-8)))))))

Called, for example, like this:

(org-export-translate-to-lang '(("Another term" "coilogji"))
                              "sr")

## Disclaimer

It doesn't work if you want to export via LaTeX (LaTeX is used when Org exports to PDF). Look at Tyler's answer and the comments.

• What format are you exporting to? PDF, HTML, or? – Tyler Mar 26 '16 at 22:26
• @Tyler I am exporting mostly to ODT and HTML. – foki Mar 27 '16 at 7:44

The Org manual says this about export settings (you should be able to browse it in emacs info: C-h i m Org m exporting):

'LANGUAGE' The language used for translating some strings ('org-export-default-language'). E.g., '#+LANGUAGE: fr' will tell Org to translate File (English) into Fichier (French) in the clocktable.

I haven't tried it but I expect it should do what you want.

• Not sufficient for all exporters; for LaTeX exports, you may want to look at @rené's answer. – Jocelyn delalande Dec 31 '16 at 12:50

As JeanPierre's answer pointed out, you need to specify the language export setting. For French the next line does the work:

#+LANGUAGE: fr

Not all languages are supported and, as you said, it is possible to see which ones are by viewing the org-latex-export-dictionary variable (you can use the emacs command C-h v then write the variable name). Some languages might be only partially supported or not supported at all, like Serbian. If you want it to work with an unsupported language, add the translated strings to the variable and preferably send them to the devs so they end up in org-mode.

# LaTeX and PDF

If you are exporting to LaTeX and want to let Babel change the text, use:

#+LANGUAGE: fr
#+LATEX_HEADER: \usepackage[AUTO]{babel}

This will work in both HTML and LaTeX as the AUTO keyword will be substituted by the corresponding Babel language name. To view which languages are supported, view the org-latex-babel-language-alist variable. Not all languages available in Babel are there, but Serbian is, and it works (tested it and "Contents" appears as "Sadržaj").

If your language is not in org-latex-babel-language-alist but is available in babel, like Breton, use:

#+LANGUAGE: br
#+LATEX_HEADER: \usepackage[breton]{babel}

As Breton is not in org-latex-export-dictionary, the LANGUAGE variable won't do anything for HTML export (it will be in English), but it is necessary. That's because what will end up in the LaTeX file will be \usepackage[breton, <default-lang>]{babel}, where default-lang will be English if LANGUAGE is not present, and the last language is considered the default by Babel. As br is not in org-latex-babel-language-alist, we end up with \usepackage[breton, ]{babel}, where Breton is the default. If Breton is added to org-latex-babel-language-alist it will work anyway (\usepackage[breton, breton]{babel}). If Breton is included in org-latex-export-dictionary it will then work in HTML too. If Breton weren't supported by babel it would still work but be in English, so this configuration is the one that gives you as much in the specified language as possible, with English as a fallback. I'd rather use AUTO if available, as there is only one place to put the language.

If you don't like what Babel puts as "Contents" but still want to use it, you can do something like:

#+LANGUAGE: en
#+LATEX_HEADER: \addto\captionsenglish{\renewcommand{\contentsname}{My Table of Contents Header}}

This is like Tyler's answer, but for Babel.

• Thanks! It works for standard pdf export but doesn't work with Beamer slides export. – foki Apr 11 '16 at 20:22

If you are exporting to PDF, org-mode will be calling LaTeX to do the conversion. In that case, you should be able to insert the LaTeX command to change the TOC heading with the following line:

#+LATEX_HEADER: \renewcommand*{\contentsname}{My Table of Contents Header}

Put that at the top of your file and try the export.

• I currently don't have a LaTeX environment set up so I can't try it with PDF. Now I want to export to ODT and HTML, so a LaTeX command doesn't help here (tell me if I'm wrong). – foki Mar 27 '16 at 8:27
• When exporting to LaTeX, you'd better use LaTeX's own handling of languages: \usepackage[mylanguage]{babel}. – JeanPierre Mar 27 '16 at 13:45
• @JeanPierre I've just noticed a weird behavior. Tyler's approach doesn't work for me, but the behavior manifested using JeanPierre's is even more interesting - I have declared both #+LANGUAGE: fr and #+LATEX_HEADER: \usepackage[english]{babel}, and in this case the LaTeX export respects the first setting and translates strings into their French counterparts. If I declare de in the first and french or frenchb in the second - de is used. I've also noticed that in the described cases the exporter does not use org-export-dictionary; more likely it uses LaTeX languages. Any idea? – foki Mar 30 '16 at 12:13
• @foki sorry, I was missing a colon :. I've corrected my answer. – Tyler Mar 30 '16 at 15:19
• Strange, if I use #+LANGUAGE: fr on its own, it is ignored - the output LaTeX is in English. If I use it and #+LATEX_HEADER: \usepackage[english]{babel}, the resulting LaTeX includes the line \usepackage[english, frenchb]{babel}. And if I only use #+LATEX_HEADER: \usepackage[french]{babel}, without setting LANGUAGE:, what actually gets inserted is \usepackage[frenchb, english]{babel}. None of them change the PDF, it's always English. – Tyler Mar 30 '16 at 15:32

Short answer: for pdf exports, the package texlive-lang-french is required.

org 9.1.9: Setting #+LANGUAGE: fr alone has no effect. Make it happen with

#+LATEX_HEADER: \usepackage[frenchb]{babel}

or (with texlive-lang-french installed)

#+LANGUAGE: fr
https://www.physicsforums.com/threads/help-on-quick-question.76004/
# Help on quick question

1. May 17, 2005

### roger

Hi

I was given this formula for magnetic field strength: H = v × D, the vector product of the charge's velocity and the flux density vector.

I wanted to know why it's dependent on the velocity of the charge?

And what is permittivity? It's calibrated to vacuum I'm assuming, but what is there to resist charge?

thanks in advance

Roger

2. May 17, 2005

### OlderDan

This will be an oversimplified view of what is going on, but hopefully it will give you the basic ideas. Basically, magnetic field strength depends on the velocity of the charge because stationary charges do not produce magnetic fields. You have probably learned that currents in wires produce magnetic fields that are proportional to the current. Current can be increased in two ways: either you can increase the number of charges moving at a certain rate, or you can increase the rate of motion of a number of charges, including just one charge.

The whole subject of electric and magnetic fields is fairly complex. Whether a charge is stationary or moving depends on the frame of reference from which you are looking at the charge. In the rest frame of the charge, there is no magnetic field, but in any reference frame that sees the charge as moving there will be a magnetic field. Observers who are moving relative to one another see different electric and magnetic fields from the same charge. It is possible to have a magnetic field without having any local charges that you can observe in motion. All that is needed is an electric field that is varying in time. This is what happens with light waves. A light wave is the combination of a time varying electric field that is produced by some distant moving charges and a corresponding time varying magnetic field. The electric and magnetic fields propagate through space as electromagnetic waves at the speed we call the speed of light.

The electric displacement field D is a convenient way to account for the complex situation that arises when an electric field is present in an overall neutral assembly of charges such as exists in all materials. The microscopic electric field inside any material is the result of any applied field plus the organization of all the protons and electrons that make up the material. The displacement field is a smoothed out field that represents an average over some small (on the scale of the size of the material) region within the material. Some materials become polarized when an electric field is applied, which results in a tendency of the internal charges to align so as to cancel the applied field. Such effects are better understood in terms of the D field rather than the microscopic E field. There is a similar consideration for magnetic fields inside of magnetic materials. The microscopic B field is very complex, and the smoothed H field is a more convenient way to look at things. In the simplest cases, D is proportional to E and H is proportional to B. Permittivity is the constant of proportionality between D and E, and permeability is the constant of proportionality between B and H.
You will find the equations

$$D = \epsilon E$$

$$B = \mu H$$

In any material the product of the two proportionality constants is the reciprocal of the square of the speed of light in that material

$$\epsilon \mu = \frac{1}{v^2}$$

In free space, the speed of light is c, and permittivity and permeability are given the subscript zero

$$\epsilon_0 \mu_0 = \frac{1}{c^2}$$

The numerical values given to these parameters are different in different units, but the connection to the speed of light is the same.

Last edited: May 17, 2005

3. May 18, 2005

### roger

thanks for the reply.

So why is $$\epsilon_0 \mu_0$$ actually added to equations? Is it to produce the correct numerical values?

And why is $${c^2} = \frac{1}{\epsilon_0}$$?

Roger

4. May 18, 2005

### OlderDan

$${c^2} \ne \frac{1}{\epsilon_0}$$

$${c^2} = \frac{1}{\epsilon_0 \mu_0}$$

$$\epsilon_0 \mu_0$$ is not "added to" equations. These constants are part of the fundamental equations relating electric and magnetic fields. They are not arbitrarily added or taken away any more than m is added or taken away from F = ma, or G is added or taken away from

$$F = \frac{GMm}{r^2}$$

They are constants of proportionality. Their product is a proportionality constant in one of the four fundamental equations developed by Maxwell that serve as the foundation of electromagnetic theory. In free space

$$\nabla \times B = \epsilon_0 \mu_0 \frac{\partial E}{\partial t}$$

Historically, the origins of electric and magnetic fields were recognized as coming from charge distributions and currents in wires. The letters used to represent those quantities and the units for the physical quantities involved developed independently. Bringing them together, as Maxwell did, requires a proportionality constant. It happens that light travels at a particular speed in free space. When expressed in terms of the units that were developed for representing length and time, that speed is some large number of meters per second. It is possible (and some theoretical physicists are fond of doing it) to define the speed of light to be 1 and develop a whole system of units around that. I expect that is a pretty big leap from units that are familiar to you. In those units

$$1 = \frac{1}{4 \pi \epsilon_0}$$

$$1 = \frac{\mu_0}{ 4 \pi}$$

so the product of the proportionality constants becomes

$$\epsilon_0 \mu_0 = 1$$

This does not really get rid of these quantities. It makes it unnecessary to write them, but at a price. The price is you need to redefine things like charge and current in units that you would not recognize.
https://www.controlbooth.com/threads/visualizer-software.6868/
# Visualizer Software

#### thelightguy87

##### Active Member

So I was asked today to look into some visualizer software that we are gonna use with our Hog IPC and our ETC Express 72/144. I know about WYSIWYG, and love it, it's just too expensive, but I was introduced today to ESP Vision. We don't have anyone knowledgeable enough with Vectorworks to really draft our theater out well, nor does the college have a license for Vectorworks, so we'd have to get both for ESP Vision. And then there's Capture, which I'm not very familiar with, although I've seen it before. We have capital money to buy this software, so WYSIWYG is still an option, but how do these other programs compare to WYSIWYG? And is WYSIWYG really worth the price?

Thanks

Mike

#### derekleffew

##### Resident Curmudgeon Senior Team

There is a one-year FREE student license for Vectorworks, so that might help. I was under the impression that ESP-Vision was more expensive than WYSIWYG.

Now the rest is coming from me, who has never used any visualization software other than my brain, which can be at times quite soft, so take it for what you will. I have and do work with professionals who use this software all the time, for high-profile corporate and awards shows, not theatre. WYSIWYG is the oldest and most mature; it's on, what, R21? Those I work with who can afford anything seem to prefer ESP-Vision. PreLite Studios, which does nothing but pre-vis, uses Vision, as does DigitalStageChicago. I would also look into Martin Show Designer as a lower-cost but still workable alternative. Don't know anything about Capture, other than it's an up-and-comer. Hope this helped a little.

#### tgates

##### Member

> I would also look into Martin Show Designer, as a lower cost, but still workable alternative.

Maybe it's because I'm used to the precision tools in more full-featured drafting software, but I can't stand MSD. I have to use it to make visualizer files for our Maxxyz, and I find it extremely cumbersome. Some people like it for sure, but I'm afraid I don't see the appeal. Anyone out there had a better experience?

#### Jby007

##### Member

I use ESP Vision and wyg. I love ESP; I think it's much better than wyg for pre-viz if you want to program lighting. ESP is around $750, wyg is around $1500. Hope it helps.
https://docs.ceph.com/en/latest/rados/configuration/ceph-conf/
Notice: This document is for a development version of Ceph.

Configuring Ceph

When Ceph services start, the initialization process activates a series of daemons that run in the background. A Ceph Storage Cluster runs at a minimum three types of daemons:

• Ceph Monitor (ceph-mon)
• Ceph Manager (ceph-mgr)
• Ceph OSD Daemon (ceph-osd)

Ceph Storage Clusters that support the Ceph File System also run at least one Ceph Metadata Server (ceph-mds). Clusters that support Ceph Object Storage run Ceph RADOS Gateway daemons (radosgw) as well.

Each daemon has a number of configuration options, each of which has a default value. You may adjust the behavior of the system by changing these configuration options. Be careful to understand the consequences before overriding default values, as it is possible to significantly degrade the performance and stability of your cluster. Also note that default values sometimes change between releases, so it is best to review the version of this documentation that aligns with your Ceph release.

Option names

All Ceph configuration options have a unique name consisting of words formed with lower-case characters and connected with underscore (_) characters. When option names are specified on the command line, either underscore (_) or dash (-) characters can be used interchangeably (e.g., --mon-host is equivalent to --mon_host). When option names appear in configuration files, spaces can also be used in place of underscore or dash. We suggest, though, that for clarity and convenience you consistently use underscores, as we do throughout this documentation.

Config sources

Each Ceph daemon, process, and library will pull its configuration from several sources, listed below. Sources later in the list will override those earlier in the list when both are present.

• the compiled-in default value
• the monitor cluster's centralized configuration database
• a configuration file stored on the local host
• environment variables
• command line arguments
• runtime overrides set by an administrator

One of the first things a Ceph process does on startup is parse the configuration options provided via the command line, environment, and local configuration file. The process will then contact the monitor cluster to retrieve configuration stored centrally for the entire cluster. Once a complete view of the configuration is available, the daemon or process startup will proceed.

Bootstrap options

Because some configuration options affect the process's ability to contact the monitors, authenticate, and retrieve the cluster-stored configuration, they may need to be stored locally on the node and set in a local configuration file. These options include:

• mon_host, the list of monitors for the cluster
• mon_host_override, the list of monitors for the cluster to initially contact when beginning a new instance of communication with the Ceph cluster. This overrides the known monitor list derived from MonMap updates sent to older Ceph instances (like librados cluster handles). It is expected this option is primarily useful for debugging.
• mon_dns_srv_name (default: ceph-mon), the name of the DNS SRV record to check to identify the cluster monitors via DNS
• mon_data, osd_data, mds_data, mgr_data, and similar options that define which local directory the daemon stores its data in.
• keyring, keyfile, and/or key, which can be used to specify the authentication credential to use to authenticate with the monitor. Note that in most cases the default keyring location is in the data directory specified above.
In the vast majority of cases the default values of these are appropriate, with the exception of the mon_host option that identifies the addresses of the cluster's monitors. When DNS is used to identify monitors, a local ceph configuration file can be avoided entirely.

Skipping monitor config

Any process may be passed the option --no-mon-config to skip the step that retrieves configuration from the cluster monitors. This is useful in cases where configuration is managed entirely via configuration files or where the monitor cluster is currently down but some maintenance activity needs to be done.

Configuration sections

Any given process or daemon has a single value for each configuration option. However, values for an option may vary across different daemon types, and even across daemons of the same type. Ceph options that are stored in the monitor configuration database or in local configuration files are grouped into sections to indicate which daemons or clients they apply to. These sections include:

global
  Description: Settings under global affect all daemons and clients in a Ceph Storage Cluster.
  Example: log_file = /var/log/ceph/$cluster-$type.$id.log

mon
  Description: Settings under mon affect all ceph-mon daemons in the Ceph Storage Cluster, and override the same setting in global.
  Example: mon_cluster_log_to_syslog = true

mgr
  Description: Settings in the mgr section affect all ceph-mgr daemons in the Ceph Storage Cluster, and override the same setting in global.
  Example: mgr_stats_period = 10

osd
  Description: Settings under osd affect all ceph-osd daemons in the Ceph Storage Cluster, and override the same setting in global.
  Example: osd_op_queue = wpq

mds
  Description: Settings in the mds section affect all ceph-mds daemons in the Ceph Storage Cluster, and override the same setting in global.
  Example: mds_cache_memory_limit = 10G

client
  Description: Settings under client affect all Ceph Clients (e.g., mounted Ceph File Systems, mounted Ceph Block Devices, etc.) as well as Rados Gateway (RGW) daemons.
  Example: objecter_inflight_ops = 512

Sections may also specify an individual daemon or client name. For example, mon.foo, osd.123, and client.smith are all valid section names.

Any given daemon will draw its settings from the global section, the daemon or client type section, and the section sharing its name. Settings in the most-specific section take precedence, so for example if the same option is specified in both global, mon, and mon.foo in the same source (i.e., in the same configuration file), the mon.foo value will be used.

If multiple values of the same configuration option are specified in the same section, the last value wins.

Note that values from the local configuration file always take precedence over values from the monitor configuration database, regardless of which section they appear in.

Metavariables

Metavariables simplify Ceph Storage Cluster configuration dramatically. When a metavariable is set in a configuration value, Ceph expands the metavariable into a concrete value at the time the configuration value is used. Ceph metavariables are similar to variable expansion in the Bash shell. Ceph supports the following metavariables:

$cluster
  Description: Expands to the Ceph Storage Cluster name. Useful when running multiple Ceph Storage Clusters on the same hardware.
  Example: /etc/ceph/$cluster.keyring
  Default: ceph

$type
  Description: Expands to a daemon or process type (e.g., mds, osd, or mon).
  Example: /var/lib/ceph/$type

$id
  Description: Expands to the daemon or client identifier.
  For osd.0, this would be 0; for mds.a, it would be a.
  Example: /var/lib/ceph/$type/$cluster-$id

$host
  Description: Expands to the host name where the process is running.

$name
  Description: Expands to $type.$id.
  Example: /var/run/ceph/$cluster-$name.asok

$pid
  Description: Expands to the daemon pid.
  Example: /var/run/ceph/$cluster-$name-$pid.asok

The Configuration File

On startup, Ceph processes search for a configuration file in the following locations:

1. $CEPH_CONF (i.e., the path following the $CEPH_CONF environment variable)
2. -c path/path (i.e., the -c command line argument)
3. /etc/ceph/$cluster.conf
4. ~/.ceph/$cluster.conf
5. ./$cluster.conf (i.e., in the current working directory)
6. On FreeBSD systems only, /usr/local/etc/ceph/$cluster.conf

where $cluster is the cluster's name (default ceph).

The Ceph configuration file uses an ini style syntax. You can add comment text after a pound sign (#) or a semi-colon (;). For example:

# <-- A number (#) sign precedes a comment.
; A comment may be anything.
# Comments always follow a semi-colon (;) or a pound (#) on each line.
# The end of the line terminates a comment.

Config file section names

The configuration file is divided into sections. Each section must begin with a valid configuration section name (see Configuration sections, above) surrounded by square brackets. For example:

[global]
debug_ms = 0

[osd]
debug_ms = 1

[osd.1]
debug_ms = 10

[osd.2]
debug_ms = 10

Config file option values

The value of a configuration option is a string. If it is too long to fit in a single line, you can put a backslash (\) at the end of the line as a line continuation marker, so the value of the option will be the string after = in the current line combined with the string in the next line:

[global]
foo = long long ago\
long ago

In the example above, the value of "foo" would be "long long ago long ago".

Normally, the option value ends with a new line or a comment, like

[global]
obscure_one = difficult to explain # I will try harder in next release
simpler_one = nothing to explain

In the example above, the value of "obscure_one" would be "difficult to explain"; and the value of "simpler_one" would be "nothing to explain".

If an option value contains spaces, and we want to make it explicit, we could quote the value using single or double quotes, like

[global]
line = "to be, or not to be"

Certain characters are not allowed to be present in the option values directly. They are =, #, ; and [. If we have to use them, we need to escape them, like

[global]
secret = "i love \# and \["

Every configuration option is typed with one of the types below:

int
  Description: 64-bit signed integer. Some SI prefixes are supported, like "K", "M", "G", "T", "P", and "E", meaning, respectively, 10^3, 10^6, 10^9, etc. "B" is the only supported unit, so "1K", "1M", "128B" and "-1" are all valid option values. Sometimes a negative value implies "unlimited" when it comes to an option for a threshold or limit.
  Example: 42, -1

uint
  Description: Almost identical to int, but a negative value will be rejected.
  Example: 256, 0

str
  Description: Free-style strings encoded in UTF-8, but some characters are not allowed. Please reference the above notes for the details.
  Example: "hello world", "i love \#", yet-another-name

boolean
  Description: One of the two values true or false. An integer is also accepted, where "0" implies false, and any non-zero value implies true.
  Example: true, false, 1, 0

addr
  Description: A single address, optionally prefixed with v1, v2 or any for the messenger protocol.
  If the prefix is not specified, the v2 protocol is used. Please see Address formats for more details.
  Example: v1:1.2.3.4:567, v2:1.2.3.4:567, 1.2.3.4:567, 2409:8a1e:8fb6:aa20:1260:4bff:fe92:18f5::567, [::1]:6789

addrvec
  Description: A set of addresses separated by ",". The addresses can be optionally quoted with [ and ].
  Example: [v1:1.2.3.4:567,v2:1.2.3.4:568], v1:1.2.3.4:567,v1:1.2.3.14:567, [2409:8a1e:8fb6:aa20:1260:4bff:fe92:18f5::567], [2409:8a1e:8fb6:aa20:1260:4bff:fe92:18f5::568]

uuid
  Description: The string format of a uuid defined by RFC4122. Some variants are also supported; for more details, see the Boost document.
  Example: f81d4fae-7dec-11d0-a765-00a0c91e6bf6

size
  Description: Denotes a 64-bit unsigned integer. Both SI prefixes and IEC prefixes are supported. "B" is the only supported unit. A negative value will be rejected.
  Example: 1Ki, 1K, 1KiB and 1B.

secs
  Description: Denotes a duration of time. By default the unit is second if not specified. The following units of time are supported:
    • second: "s", "sec", "second", "seconds"
    • minute: "m", "min", "minute", "minutes"
    • hour: "hs", "hr", "hour", "hours"
    • day: "d", "day", "days"
    • week: "w", "wk", "week", "weeks"
    • month: "mo", "month", "months"
    • year: "y", "yr", "year", "years"
  Example: 1 m, 1m and 1 week

Monitor configuration database

The monitor cluster manages a database of configuration options that can be consumed by the entire cluster, enabling streamlined central configuration management for the entire system. The vast majority of configuration options can and should be stored here for ease of administration and transparency. A handful of settings may still need to be stored in local configuration files because they affect the ability to connect to the monitors, authenticate, and fetch configuration information. In most cases this is limited to the mon_host option, although this can also be avoided through the use of DNS SRV records.

Configuration options stored by the monitor can live in a global section, a daemon type section, or a specific daemon section, just like options in a configuration file can.

In addition, options may also have a mask associated with them to further restrict which daemons or clients the option applies to. Masks take two forms:

1. type:location where type is a CRUSH property like rack or host, and location is a value for that property. For example, host:foo would limit the option only to daemons or clients running on a particular host.

2. class:device-class where device-class is the name of a CRUSH device class (e.g., hdd or ssd). For example, class:ssd would limit the option only to OSDs backed by SSDs. (This mask has no effect for non-OSD daemons or clients.)

When setting a configuration option, the who may be a section name, a mask, or a combination of both separated by a slash (/) character. For example, osd/rack:foo would mean all OSD daemons in the foo rack.

When viewing configuration options, the section name and mask are generally separated out into separate fields or columns to ease readability.

Commands

The following CLI commands are used to configure the cluster:

• ceph config dump will dump the entire configuration database for the cluster.

• ceph config get <who> will dump the configuration for a specific daemon or client (e.g., mds.a), as stored in the monitors' configuration database.

• ceph config set <who> <option> <value> will set a configuration option in the monitors' configuration database.

• ceph config show <who> will show the reported running configuration for a running daemon.
  These settings may differ from those stored by the monitors if there are also local configuration files in use or options have been overridden on the command line or at run time. The source of the option values is reported as part of the output.

• ceph config assimilate-conf -i <input file> -o <output file> will ingest a configuration file from input file and move any valid options into the monitors' configuration database. Any settings that are unrecognized, invalid, or cannot be controlled by the monitor will be returned in an abbreviated config file stored in output file. This command is useful for transitioning from legacy configuration files to centralized monitor-based configuration.

Help

You can get help for a particular option with:

ceph config help <option>

Note that this will use the configuration schema that is compiled into the running monitors. If you have a mixed-version cluster (e.g., during an upgrade), you might also want to query the option schema from a specific running daemon:

ceph daemon <name> config help [option]

For example:

$ ceph config help log_file
log_file - path to log file
  (std::string, basic)
  Default (non-daemon):
  Default (daemon): /var/log/ceph/$cluster-$name.log
  Can update at runtime: false
  See also: [log_to_stderr,err_to_stderr,log_to_syslog,err_to_syslog]

or:

$ ceph config help log_file -f json-pretty
{
    "name": "log_file",
    "type": "std::string",
    "level": "basic",
    "desc": "path to log file",
    "long_desc": "",
    "default": "",
    "daemon_default": "/var/log/ceph/$cluster-$name.log",
    "tags": [],
    "services": [],
    "see_also": [
        "log_to_stderr",
        "err_to_stderr",
        "log_to_syslog",
        "err_to_syslog"
    ],
    "enum_values": [],
    "min": "",
    "max": "",
    "can_update_at_runtime": false
}

The level property can be any of basic, advanced, or dev. The dev options are intended for use by developers, generally for testing purposes, and are not recommended for use by operators.

Runtime Changes

In most cases, Ceph allows you to make changes to the configuration of a daemon at runtime. This capability is quite useful for increasing/decreasing logging output, enabling/disabling debug settings, and even for runtime optimization.

Generally speaking, configuration options can be updated in the usual way via the ceph config set command. For example, to enable the debug log level on a specific OSD:

ceph config set osd.123 debug_ms 20

Note that if the same option is also customized in a local configuration file, the monitor setting will be ignored (it has a lower priority than the local config file).

Override values

You can also temporarily set an option using the tell or daemon interfaces on the Ceph CLI. These override values are ephemeral in that they only affect the running process and are discarded/lost if the daemon or process restarts.

Override values can be set in two ways:

1. From any host, we can send a message to a daemon over the network with:

ceph tell <name> config set <option> <value>

For example:

ceph tell osd.123 config set debug_osd 20

The tell command can also accept a wildcard for the daemon identifier. For example, to adjust the debug level on all OSD daemons:

ceph tell osd.* config set debug_osd 20

2. From the host the process is running on, we can connect directly to the process via a socket in /var/run/ceph with:

ceph daemon <name> config set <option> <value>

For example:

ceph daemon osd.4 config set debug_osd 20

Note that in the ceph config show command output these temporary values will be shown with a source of override.
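An illustrative sequence tying these together (the daemon id and value here are just examples): set an ephemeral override with tell, then check how it is reported:

ceph tell osd.123 config set debug_ms 5
ceph config show osd.123

The second command will list debug_ms with a source of override; once the daemon restarts, the override is gone.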
Viewing runtime settings

You can see the current options set for a running daemon with the ceph config show command. For example:

ceph config show osd.0

will show you the (non-default) options for that daemon. You can also look at a specific option with:

ceph config show osd.0 debug_osd

or view all options (even those with default values) with:

ceph config show-with-defaults osd.0

You can also observe settings for a running daemon by connecting to it from the local host via the admin socket. For example:

ceph daemon osd.0 config show

will dump all current settings,

ceph daemon osd.0 config diff

will show only non-default settings (as well as where the value came from: a config file, the monitor, an override, etc.), and:

ceph daemon osd.0 config get debug_osd

will report the value of a single option.

Changes since Nautilus

With the Octopus release we changed the way the configuration file is parsed. These changes are as follows:

• Repeated configuration options are allowed, and no warnings will be printed. The value of the last one is used, which means that the setting last in the file is the one that takes effect. Before this change, we would print warning messages when lines with duplicated options were encountered, like:

warning line 42: 'foo' in section 'bar' redefined

• Invalid UTF-8 options were ignored with warning messages. But since Octopus, they are treated as fatal errors.

• Backslash \ is used as the line continuation marker to combine the next line with the current one. Before Octopus, it was required to follow a backslash with a non-empty line. But in Octopus, an empty line following a backslash is now allowed.

• In the configuration file, each line specifies an individual configuration option. The option's name and its value are separated with =, and the value may be quoted using single or double quotes. If an invalid configuration is specified, we will treat it as an invalid configuration file:

bad option ==== bad value

• Before Octopus, if no section name was specified in the configuration file, all options would be set as though they were within the global section. This is now discouraged. Since Octopus, only a single option is allowed for configuration files without a section name.
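As a worked illustration of the parsing rules above, consider this fragment (the option choices are arbitrary):

[global]
debug_ms = 1
debug_ms = 10
line = "to be, or not to be"

Since Octopus, the repeated debug_ms produces no warning and the last value, 10, takes effect, while the quoted value keeps its spaces and punctuation intact.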
2021-03-08 18:07:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2802290916442871, "perplexity": 5851.088899843532}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385389.83/warc/CC-MAIN-20210308174330-20210308204330-00008.warc.gz"}
https://math.stackexchange.com/questions/3748779/solve-banded-linear-system-with-large-bandwidth-but-sparse-interior-band-structu
# Solve banded linear system with large bandwidth but sparse interior band structure

Assume the linear system $$Ax = b$$, where $$A$$ is an $$N \times N$$ banded matrix with lower and upper bandwidth $$l$$, and $$N \gg l \gg 1$$. $$A$$ has the following structure: all entries of $$A$$ are zero, except for the following: the diagonal of $$A$$, the first super-diagonal band and the first sub-diagonal band are non-zero; the $$(l-2)$$nd, $$(l-1)$$st and the $$l$$-th super-diagonal and sub-diagonal bands are also nonzero. Therefore, $$A$$ contains 9 non-zero bands. The system can be solved using a standard band-matrix $$LU$$ decomposition; the complexity of the solve then scales as $$N\times l^2$$ for $$l \ll N$$. However, this method does not exploit the fact that all bands between the first off-diagonals and the $$(l-2)$$nd off-diagonals are identically zero. Such a system arises in the discretization of second-order, two-dimensional differential equations with mixed derivatives, using a three-point compact stencil at interior nodes and a two-point stencil at boundary nodes. Besides bandwidth-reducing algorithms, does a faster method exist to solve such a system that makes use of the sparse band structure described here?

Note that standard banded $$LU$$ factorization for a matrix of bandwidth $$\ell$$ has time complexity $$O(N \ell^2)$$.$${}^*$$ In your question, you specifically address discretized differential equations. There has been considerable work on iterative methods for such discretizations. It is a truly vast field, but very popular is a Krylov subspace method preconditioned by, say, algebraic multigrid. For many PDEs, these can provably produce an approximate solution to a specified tolerance in $$O(N)$$ time.

Let me now address the question you asked. Given an arbitrary banded matrix with bandwidth $$\ell$$ and only $$9$$ nonzero diagonals as specified in your question, does there exist a method faster than banded $$LU$$ factorization? If the answer to this question is no, then it would be very hard to show this, as you must demonstrate that every method which solves the problem must have a certain time complexity. There is some very sophisticated and interesting work in theoretical computer science addressing this problem (which I must admit I don't fully understand) that might be able to give very precise "lower bound" statements on how fast problems of this type might be solved.

Here is a specific result which you may or may not be aware of which might temper your expectations about what time complexity you can hope to achieve. Informally, the result might be stated as follows:

For any matrix $$A$$ fitting your description with $$\ell \approx \sqrt{N}$$, there is no reordering of the rows and columns of the matrix $$A$$ such that an $$LU$$ factorization of the reordered $$A$$ can be computed in time faster than $$O(N^{3/2})$$. Moreover, this time complexity is obtained by reordering $$A$$ according to the nested dissection ordering.

This result is more or less a consequence of the results of this paper. The case $$\ell \approx \sqrt{N}$$ is important because this corresponds to the discretization of a 2D PDE on a square mesh. The result does not rule out some other algorithm solving the problem in faster than $$O(N^{3/2})$$ time, but it does rule out a class of algorithms which are "$$LU$$ factorization-like".
My hunch is that $$N^{3/2}$$ is as good as you can get for this problem without some additional assumption (like $$A$$ being diagonally dominant, which is often the case for discretized PDEs). I would look into numerical methods for linear systems arising specifically from discretized PDEs, for which there is additional structure (like perhaps diagonal dominance) which can open up certain algorithmic approaches that won't work on a fully general matrix $$A$$ with the sparsity pattern you describe. $${}^*$$ To see why this must be the case, note that if $$A$$ is dense and thus $$\ell = N-1$$, then a complexity of $$O(N\ell)$$ would mean we could solve every system of linear equations in $$O(N^2)$$ operations, which has not been shown to be possible. $$O(N\ell^2)$$ recovers the standard $$O(N^3)$$ for Gaussian elimination.
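For readers who want to picture the sparsity pattern in question, here is a small instance constructed purely for illustration (not taken from the question), with $$N = 8$$ and $$\ell = 6$$, so the nonzero diagonals sit at offsets $$0, \pm 1, \pm(\ell-2), \pm(\ell-1), \pm\ell$$ and $$\times$$ marks a nonzero entry:

$$\begin{pmatrix} \times & \times & \cdot & \cdot & \times & \times & \times & \cdot \\ \times & \times & \times & \cdot & \cdot & \times & \times & \times \\ \cdot & \times & \times & \times & \cdot & \cdot & \times & \times \\ \cdot & \cdot & \times & \times & \times & \cdot & \cdot & \times \\ \times & \cdot & \cdot & \times & \times & \times & \cdot & \cdot \\ \times & \times & \cdot & \cdot & \times & \times & \times & \cdot \\ \times & \times & \times & \cdot & \cdot & \times & \times & \times \\ \cdot & \times & \times & \times & \cdot & \cdot & \times & \times \end{pmatrix}$$

Note how thin the interior bands are compared to the full bandwidth: only $$O(N)$$ of the $$O(N\ell)$$ entries inside the band are actually nonzero, which is exactly the structure banded $$LU$$ fails to exploit (fill-in during elimination populates the zero bands).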
2021-04-17 00:21:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 44, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7797196507453918, "perplexity": 239.73424794289343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038092961.47/warc/CC-MAIN-20210416221552-20210417011552-00223.warc.gz"}
https://rip94550.wordpress.com/2010/03/22/logic-truth-tables/
## Logic: truth tables

edit: 29 Mar, the compact proof truth table for modus ponens needed “2”, not “1” under column 7.

I think this will be the first of three posts on logic. In this one, I will look at truth tables and at using them to prove tautologies (valid logical propositions). (If I had known how much typographic trouble this post would cause…. Well, after a little practice with the new symbols, it wasn’t so bad.) I expect that the second post will deal with “quantifiers”, namely “there exists” and “for all”, and their classical or linguistic counterparts, “some” and “all”. And the third post should deal with Aristotle’s syllogisms. They started it all — for me in particular, and for the world in general. All I wanted to do when I started was review the syllogisms given what little I knew of modern logic. It turns out there’s a major difference between Aristotle and modern logic, largely motivated I think by the explicit idea of the empty set. We’ll see the difference in principle in the second post, in practice in the third. As usual, I’m not trying to write an introductory text here, just picking out a few things that interest me.

## Introduction

Let me also say up front — and I’ll try to remember to say it again — that we have more power in truth tables than I understand. That is to say, when people talk about “well formed formulas” and formal proofs, they seem to require “modus ponens” as a starting point. That’s a fancy name for something we take for granted: that from two premises P and (P implies Q), we may conclude Q. (Yes, that seems incredibly obvious. In fact, we almost never say it explicitly: we usually just write QED as soon as we have P and (P implies Q). It has a fancy Latin name — actually, its fancy name is modus ponendo ponens, which means something like “method that affirms by affirming”.) But I can prove “modus ponens” — to use its usual abbreviation — using truth tables. So why do I need to assume it? I don’t know. I plan to keep my eyes open.

One of the things that happened to me is that after looking at formal logic systems and named rules to be cited as line-by-line justifications (I’ll show you an example of that in the next post) — I lost sight of the fact that we can prove things using truth tables. The point of a truth table is to show the truth or falsity of a compound statement as a function of the truth or falsity of its components. It is, in fact, a proof by exhaustive enumeration of all possible cases.

Let me just jump into truth tables now. We may define “negation” by exhaustion, using the following table:

$\left(\begin{array}{cc} P & \text{Not P} \\ \text{True} & \text{False} \\ \text{False} & \text{True}\end{array}\right)$

That says two things: that if P is true, then not-P is false; and if P is false, then not-P is true. If I can (yes!), I will use $\neg P$ for not-P.

Similarly, we may define “and” using the third column of the following table:

$\left(\begin{array}{ccc} P & Q & P \land Q \\ \text{True} & \text{True} & \text{True} \\ \text{True} & \text{False} & \text{False} \\ \text{False} & \text{True} & \text{False} \\ \text{False} & \text{False} & \text{False}\end{array}\right)$

That is, “P and Q”, written $P \land Q$, is true exactly when P is true and Q is true.

We will find it convenient to define the “inclusive or” — that is, “P or Q or both”. (Why is it convenient?
Because it gives us “De Morgan’s Laws”, for which I’ll show an example below.) The “exclusive or” would be “P or Q but not both” — so called because it excludes the possibility that both are true. We take the following table as the definition of the inclusive or:

$\left(\begin{array}{ccc} P & Q & P \lor Q \\ \text{True} & \text{True} & \text{True} \\ \text{True} & \text{False} & \text{True} \\ \text{False} & \text{True} & \text{True} \\ \text{False} & \text{False} & \text{False}\end{array}\right)$

We see that the inclusive or is false exactly when both P and Q are false.

Finally, we will take as our definition of “implies” the following — perhaps peculiar — table. (This choice has a name, material implication.)

$\left(\begin{array}{ccc} P & Q & P \Rightarrow Q \\ \text{True} & \text{True} & \text{True} \\ \text{True} & \text{False} & \text{False} \\ \text{False} & \text{True} & \text{True} \\ \text{False} & \text{False} & \text{True}\end{array}\right)$

What may seem peculiar is that the implication is false in only one of the four cases: the implication is false if P is true and Q is false. In particular, we say the implication is true in the two cases when P is false. Another way to describe that is to say that the implication “P implies Q” is vacuously true whenever P is false — in both possible cases, Q false and Q true. Both of the introductory logic books I have picked up (both titled “Introduction to Logic”, one by Gensler and the other by Copi & Cohen — both soon to be in the bibliography!) have more than a few pages about this. Interestingly, neither Exner nor Hummel says much about it. (We learn to take it for granted in mathematics.) I’m not going to say much about it either. As I said, this definition of implication is called “material implication” — and I invite you to go search the internet or your books. What I will say — and this is the bare minimum — is that we have effectively defined implication as being equivalent to this: we never have “P and not Q”. That is, with this definition — this truth table for implication — we have an equivalence: $(P\Rightarrow Q)\equiv \neg (P\land \neg Q)$.

Oh, notation. I tend to use the angular symbols $\land$ and $\lor$ for “and” and “or”. (And I’m very glad that LaTeX will do them.) Given a chance, Mathematica would use && and ||. It also understands functions “And” and “Or”. Worse, where I would prefer to use either ~ or $\neg$ for negation, Mathematica uses !, or “Not”. In fact, I will use $\neg$ whenever I can. Worst, none of the symbols on the “basic typesetting” palette seems to work for “implication” — so I have to use its named function, “Implies”, in Mathematica.

Oh, let me show you an example of De Morgan’s Laws. We effectively used this equivalence $(P\Rightarrow Q) \equiv \neg (P\land \neg Q)$ to justify the definition of material implication. In this form, it seems very natural to me. De Morgan’s Theorem (Law, Laws) says that I can eliminate the ( ) on the RHS by applying the negation operator to both pieces, and replacing “and” by “or”, getting: $(P\Rightarrow Q)\equiv \neg P\lor Q$. (And that’s the model, in my head anyway, that seems a little odd. Valid, but odd at first glance.) Now is a good time to remark: we will find either of these equivalents handy. After all, they let us eliminate the implication symbol. Having defined implication by a truth table, we are in a position to verify either of those equivalences. Let’s do the original one.
## Proving an equivalence, long form

First, let me construct the truth table for the right-hand side. I will present this in detail. Then I will show you how to compress the proof.

$\left(\begin{array}{ccccc} P & Q & \neg Q & P \land \neg Q & \neg (P \land \neg Q) \\ \text{True} & \text{True} & \text{False} & \text{False} & \text{True} \\ \text{True} & \text{False} & \text{True} & \text{True} & \text{False} \\ \text{False} & \text{True} & \text{False} & \text{False} & \text{True} \\ \text{False} & \text{False} & \text{True} & \text{False} & \text{True}\end{array}\right)$

All we do is move from one column to the next, using our definitions to determine True or False, based on the previous (leftward) columns. Set P, set Q, set $\neg Q$ from them, then do the next column…. It seems about as straight-forward as possible. That gave us the RHS. For the LHS, we recall the definition of implication…

$\left(\begin{array}{ccc} P & Q & P \Rightarrow Q \\ \text{True} & \text{True} & \text{True} \\ \text{True} & \text{False} & \text{False} \\ \text{False} & \text{True} & \text{True} \\ \text{False} & \text{False} & \text{True}\end{array}\right)$

and we see that the last columns agree. (It’s a very good thing that P and Q were lined up correctly. How convenient. We’ll see that sometimes it takes work to arrange that in Mathematica. By hand, of course, it should be simple enough.) It might be nice if we could lay out the equivalence in one table, rather than having to compare the two last columns in two tables. (Let me say something. I didn’t work out the following two examples of compact proofs because they looked nice — although they do! I did it because I needed to work out exactly what the heck Hummel was doing. Yes, he was doing just what I thought he should — but I needed to do it for myself. And since Mathematica was pretty capable….)

## Proving an equivalence, compact form

Here’s a way to do it in one table. Basically, we will use each symbol (counting not-Q as one symbol) of the tautology to be proved as a column heading in the truth table. We want to show $(P\Rightarrow Q)\equiv \neg (P\land \neg Q)$, so we take those symbols as our column headings. Alas, it’s easier to describe the finished product than to get Mathematica to lay it out progressively.

$\left(\begin{array}{cccccccc} P & \Rightarrow & Q & \equiv & \neg & (P & \land & \neg Q) \\ \text{True} & \text{True} & \text{True} & \text{True} & \text{True} & \text{True} & \text{False} & \text{False} \\ \text{True} & \text{False} & \text{False} & \text{True} & \text{False} & \text{True} & \text{True} & \text{True} \\ \text{False} & \text{True} & \text{True} & \text{True} & \text{True} & \text{False} & \text{False} & \text{False} \\ \text{False} & \text{True} & \text{False} & \text{True} & \text{True} & \text{False} & \text{False} & \text{True} \\ 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ 1 & 4 & 2 & 7 & 6 & 1 & 5 & 3\end{array}\right)$

(There is a minor typo — a missing “)” — in the Mathematica headers, and I fixed it in the LaTeX.) I have numbered the columns in the second-to-last row for descriptive purposes — there is no need to do that for yourself. We wouldn’t usually show the numbers on the bottom row, either. What’s it all mean? First (as indicated by “1” at the bottom of columns 1 and 6), we lay out the possible values of P. Since there are two possible values for Q, we know we need 4 entries for P — two true, two false.
Then (“2” in column 3) we assign the 4 values of Q in such a way as to get all four possible sets of truth values for the set {P,Q}. Third (“3” in column 8) we fill in $\neg Q$ from Q in column 3. Fourth (column 2), under the $\Rightarrow$ symbol, we fill in the values for $P \Rightarrow Q$, using the values of P, Q in columns 1 and 3. Let’s do that explicitly. Recall the definition of implication, once again:

$\left(\begin{array}{ccc} P & Q & P \Rightarrow Q \\ \text{True} & \text{True} & \text{True} \\ \text{True} & \text{False} & \text{False} \\ \text{False} & \text{True} & \text{True} \\ \text{False} & \text{False} & \text{True}\end{array}\right)$

… and let me display our complete table right close by:

$\left(\begin{array}{cccccccc} P & \Rightarrow & Q & \equiv & \neg & (P & \land & \neg Q) \\ \text{True} & \text{True} & \text{True} & \text{True} & \text{True} & \text{True} & \text{False} & \text{False} \\ \text{True} & \text{False} & \text{False} & \text{True} & \text{False} & \text{True} & \text{True} & \text{True} \\ \text{False} & \text{True} & \text{True} & \text{True} & \text{True} & \text{False} & \text{False} & \text{False} \\ \text{False} & \text{True} & \text{False} & \text{True} & \text{True} & \text{False} & \text{False} & \text{True} \\ 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ 1 & 4 & 2 & 7 & 6 & 1 & 5 & 3\end{array}\right)$

For column 2 — and the trick is getting used to looking at column 2 instead of column 3! Oh, when I say row 1, for example, I am not counting the header…

1. For row 1, we have P,Q = T,T so we enter T in column 2, according to the truth table for implication.
2. For row 2, we have P,Q = T,F so we enter F in column 2.
3. For row 3, we have P,Q = F,T so we enter T in column 2.
4. For row 4, we have P,Q = F,F so we enter T in column 2.

(No, the order in which we do columns isn’t actually set in stone, although I think it makes sense to define Q and then $\neg Q$ instead of the other way around. But we could have done column 4 third instead of fourth, as soon as we had columns 1 and 3 which determine it.)

Similarly, fifth, we fill in the column labeled $\land$ (column 7) with the values for $P \land \neg Q$. This time we are using the truth table for “and” applied to the row-by-row entries for P and $\neg Q$ in columns 6 and 8. Sixth, use the column labeled $\neg$ (column 5) to hold the values of $\neg(P \land \neg Q)$, found by negating the values in column 7. Seventh and finally, in column 4 we explicitly record the equivalence, row by row, of columns 2 and 5, which contain the LHS and RHS respectively. We conclude that the equivalence is true because every entry in column 4 is True. OK?

## Proving an implication, compact form

Let’s try that for an implication rather than for an equivalence. Let’s prove “modus ponens”:

$\left(\begin{array}{ccccccc} (P & \Rightarrow & Q) & \land & P & \Rightarrow & Q \\ \text{True} & \text{True} & \text{True} & \text{True} & \text{True} & \text{True} & \text{True} \\ \text{True} & \text{False} & \text{False} & \text{False} & \text{True} & \text{True} & \text{False} \\ \text{False} & \text{True} & \text{True} & \text{False} & \text{False} & \text{True} & \text{True} \\ \text{False} & \text{True} & \text{False} & \text{False} & \text{False} & \text{True} & \text{False} \\ 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 1 & 3 & 2 & 4 & 1 & 5 & 2\end{array}\right)$

(Once again I have edited the LaTeX to add ( ) which are not actually in the Mathematica headings. And edit: column 7 should have “2” on the bottom.)
Ok, if we did that manually — of course Mathematica did it all for me —

1. we fill in columns 1 and 5 with values for P;
2. we fill in columns 3 and 7 with appropriate values for Q (we need the four possible ordered pairs of values for (P,Q));
3. we fill in column 2 with the values of $P \Rightarrow Q$ using the values in columns 1 and 3;
4. we fill in column 4 using the values of $(P \Rightarrow Q) \land P$ using columns 2 and 5;
5. finally, we fill in column 6 using the values in columns 4 and 7.

We see that in every possible case, the implication in column 6 has the value True. The two examples differ in how the final column (which is not the rightmost!) was computed. In the previous example, we entered T in column 4 when columns 2 and 5 were the same (since we were testing equivalence); that turned out to be every time, i.e. every row. In this second example, we entered T in column 6 whenever it was true that column 4 implied column 7. To be explicit, columns 4 and 7 do not have the same values, row by row — they differ only in the third row. But for the third row, for example, it is true that F implies T, so we enter T in row 3 of column 6. Look at it another way: if columns 4 and 7 always agreed in this case, we would have proven equivalence instead of implication. Let me add that I worked this out in more detail, below. Let me emphasize that it is not necessary to use the compact form that I’ve laid out. Having worked the first example both ways, let me now work the second example in the not-so-compact way.

## Proving an implication, long form

So we want to prove “modus ponens” again:

$\text{LHS table} = \left(\begin{array}{cccc} P & Q & P \Rightarrow Q & (P \Rightarrow Q) \land P \\ \text{True} & \text{True} & \text{True} & \text{True} \\ \text{True} & \text{False} & \text{False} & \text{False} \\ \text{False} & \text{True} & \text{True} & \text{False} \\ \text{False} & \text{False} & \text{True} & \text{False}\end{array}\right)$

The RHS is simply Q — but I need to line up the occurrences of Q. Hmm. And I need four values. That’s what I need. Now, let’s construct a table for $LHS \Rightarrow RHS$. Our first column is the final column of the LHS table…

t5=Take[t3,All,-1];

$t5 = \left(\begin{array}{c} \text{True} \\ \text{False} \\ \text{False} \\ \text{False}\end{array}\right)$

Our second column is the RHS… (Sewing these together was a little annoying; this illustrates why I find it easier to use the compact tables for proofs: corresponding Ps and Qs are lined up.)

$\text{LHS - RHS table} = \left(\begin{array}{cc} \text{LHS} & \text{RHS} \\ \text{True} & \text{True} \\ \text{False} & \text{False} \\ \text{False} & \text{True} \\ \text{False} & \text{False}\end{array}\right)$

And now we recall the truth table for implication, since that’s what we’re trying to prove.
$\left(\begin{array}{ccc} P & Q & P \Rightarrow Q \\ \text{True} & \text{True} & \text{True} \\ \text{True} & \text{False} & \text{False} \\ \text{False} & \text{True} & \text{True} \\ \text{False} & \text{False} & \text{True}\end{array}\right)$

We see that row 2 of implication does not occur in our LHS-RHS table (row 4 occurs twice), and the missing row 2 is the only one that is false — so all of the rows of our final column are True:

$\left(\begin{array}{ccc} \text{LHS} & \text{RHS} & LHS \Rightarrow RHS \\ \text{True} & \text{True} & \text{True} \\ \text{False} & \text{False} & \text{True} \\ \text{False} & \text{True} & \text{True} \\ \text{False} & \text{False} & \text{True}\end{array}\right)$

That is, $LHS \Rightarrow RHS$ is true in every possible case. (And that is the additional detail that I promised when I did the compact proof of modus ponens.) I should point out that we certainly don’t want to have to use truth tables for everything: they are literally a proof by complete enumeration of all possible cases. This is one reason for coming up with a handy list of tautologies (valid rules of inference) and using rules instead of truth tables. (But it still doesn’t answer the question troubling me: if I can prove modus ponens, why must they take it as an axiom when they set up rules of inference? When I understand this, I’ll talk about it. For all I know, the very existence of truth tables may implicitly require modus ponens!) The next logic post should deal with the classical quantifiers, “all” and “some” and “none”, and their modern counterparts, “there exists” and “for all”.
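(An aside not from the original post: the “proof by complete enumeration” idea is easy to reproduce outside Mathematica. Here is a minimal, self-contained Java sketch that checks both tautologies proved above by running through all four truth assignments; run with assertions enabled, i.e. java -ea TruthTables.)

public class TruthTables {
    // material implication, exactly as defined by the truth table above
    static boolean implies(boolean p, boolean q) { return !p || q; }

    public static void main(String[] args) {
        boolean[] vals = {true, false};
        for (boolean p : vals) {
            for (boolean q : vals) {
                // (P => Q) == not(P and not Q), the equivalence proved above
                boolean equivalence = implies(p, q) == !(p && !q);
                // ((P => Q) and P) => Q, modus ponens
                boolean modusPonens = implies(implies(p, q) && p, q);
                System.out.printf("P=%-5b Q=%-5b equivalence=%b modusPonens=%b%n",
                                  p, q, equivalence, modusPonens);
                assert equivalence && modusPonens; // every row is True
            }
        }
    }
}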
2018-04-23 17:36:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 39, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7882512211799622, "perplexity": 548.1329546319769}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946120.31/warc/CC-MAIN-20180423164921-20180423184921-00393.warc.gz"}
http://mathhelpforum.com/advanced-algebra/81132-linear-algebra.html
# Math Help - Linear Algebra

1. ## Linear Algebra

Need a little help getting started. Thanks! If given F(R,R), the collection of all functions f:R->R:

1. How can I show F(R,R) is a vector space over R, the field of real numbers?
2. If L(f)=f(sqrt(2)), how can I show that L:F(R,R)->R is a linear transformation, and what is the rank of L?

2. Would the rank be 2 since dim(F(R,R)) is 2? And does L(f)=f(sqrt(2)) imply f(sqrt(2))=2? So, L(af)=aL(f)=a(f(sqrt(2)))=a(sqrt(2)). Also, L(f+g)=L(f)+L(g)=f(sqrt(2))+g(sqrt(2))=sqrt(2)+sqrt(2)=2sqrt(2)? Therefore it's linear.

3. Originally Posted by GreenandGold
How can I show F(R,R) is a vector space over R, the field of real numbers?
Define $(f+g)(x) = f(x)+g(x)$ and $k(f)(x) = k(f(x))$ for all $f,g\in F(\mathbb{R},\mathbb{R}),k\in \mathbb{R}$.

Originally Posted by GreenandGold
If L(f)=f(sqrt(2)), how can I show that L:F(R,R)->R is a linear transformation, and what is the rank of L?
Show that $L(f_1+f_2) = L(f_1)+L(f_2)$ and $L(kf_1) = kL(f_1)$. The image of $L$ is $\mathbb{R}$, so the rank is 1, since the dimension of $\mathbb{R}$ over $\mathbb{R}$ is 1.

4. Since it is defined now, do I prove all the axioms hold? I then do the same to show that it is linear?

5. Originally Posted by GreenandGold
Since it is defined now, do I prove all the axioms hold?
Show that F(R,R), as I defined it, satisfies the definition (you call them axioms but I do not like to call them that) of a vector space.

Originally Posted by GreenandGold
I then do the same to show that it is linear?
I already answered this. Just show that it satisfies the definition of a linear transformation (look at my first response).

6. So, just for the future: I first need to define the transformation/function, then attack the problem? Because that lost me until you defined it. Also I have a few more questions stemming from the previous:

1. Would I use a similar approach to finding L(g), if g = f - f(sqrt(2)), as in the previous problem of proving a linear transformation? So does L(g) = 0?
2. How can I show that for any f:R->R I can write f = g + lambda, where g(sqrt(2)) = 0 and lambda is in R? Is the decomposition unique?

7. Here are some new ideas: If g = f - f[sqrt(2)] then L(g) = (f - f[sqrt(2)])(sqrt(2)) = f[sqrt(2)] - f[sqrt(2)]sqrt[2] = f[sqrt(2)] - sqrt(2)f[sqrt(2)] = L(f) - sqrt(2)L(f). Can that be simplified any more? Can you possibly explain this decomposition thing to me? Thanks!!!

8. Originally Posted by GreenandGold
How can I show F(R,R) is a vector space over R, the field of real numbers?
You do that by showing it satisfies the definition of "vector space": that there exist a sum and scalar product satisfying all the "rules" for a vector space. Here the obvious sum is that if f and g are functions, f + g is defined by (f+g)(x) = f(x) + g(x). And, for any real number a, af is the function such that (af)(x) = af(x).

Originally Posted by GreenandGold
If L(f)=f(sqrt(2)), how can I show that L:F(R,R)->R is a linear transformation, and what is the rank of L?
Again, by showing that it satisfies the definition of "linear transformation". If f and g are two functions, and a, b are numbers, what is L(af + bg)? The rank of L is the dimension of its "image": that is, the set of all values of L(f). Do you see that for any function f, L(f) is a number?

9. Well, my post #6 is stemming from the original conditions in post #1... However, after solving the problem I didn't get a number; I got L(f+g) = (f + g)(sqrt(2)) = f(sqrt(2)) + g(sqrt(2)) = L(f) + L(g), therefore additive... Did I not do that right?
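(A side note, not from the thread: the decomposition asked about in post 6 has a short answer under the natural reading that $\lambda$ stands for the constant function with value $\lambda$. Take $\lambda = f(\sqrt{2})$ and $g = f - \lambda$. Then $g(\sqrt{2}) = f(\sqrt{2}) - f(\sqrt{2}) = 0$ and $f = g + \lambda$. For uniqueness: if $f = g_1 + \lambda_1 = g_2 + \lambda_2$ with $g_1(\sqrt{2}) = g_2(\sqrt{2}) = 0$, then evaluating at $\sqrt{2}$ gives $\lambda_1 = f(\sqrt{2}) = \lambda_2$, and hence $g_1 = g_2$. In particular $L(g) = g(\sqrt{2}) = 0$, confirming the guess in post 6 that $L(g) = 0$.)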
2016-06-25 23:52:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.896373987197876, "perplexity": 927.666701182921}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393997.50/warc/CC-MAIN-20160624154953-00128-ip-10-164-35-72.ec2.internal.warc.gz"}
http://mathcenter.oxford.emory.edu/site/cs171/redBlackTreeSearchingAndInserting/
## Searching and Inserting with Red-Black Trees

Remember, given that red-black trees are binary trees in symmetric order, the process used to search a red-black tree for a given key is exactly the same as that used in a binary search tree. We can thus transfer the get(Key key) method from our binary search tree class to our red-black tree class, essentially verbatim:

public Value get(Key key) {
    Node n = root;
    while (n != null) {
        int cmp = key.compareTo(n.key);
        if (cmp < 0)        // key < n.key
            n = n.left;
        else if (cmp > 0)   // key > n.key
            n = n.right;
        else                // key == n.key (found it!)
            return n.val;
    }
    return null;            // key not present in the tree
}

Where things get more interesting is when it comes to insertion. We start the insertion process by navigating down the tree to the node where the key must first be inserted. That much, at least, is straight-forward and can be modeled after the put() method used in our binary search tree class. However, unlike in a binary search tree, we can't always just create a new node and append it where we would like it to go. Granted, sometimes this is possible -- consider for example adding $G$ to the below tree. In the case above, we only need to worry about the breaking of the perfect black balance of the tree when we add node $G$ under node $H$ -- a concern easily addressed by making the corresponding link red. Note, in terms of the underlying 2-3 tree, we just turned the 2-node $H$ into a 3-node $GH$. Indeed, with the exception of adding the root node to a null tree, we can never add a node using a black edge -- as this would break the perfect black balance of the tree. As such, we plan to make every new edge red. However, this can cause problems itself... Consider the four cases below, where we insert a new key in the same place where a binary search tree would place it, but using a red edge -- and the problems these actions create. Images of the tree before and after the node was added are also shown for clarity.

Key to insert: $Q$. Location using binary search method: right child of $P$. Problem created: adding this as a right child creates a right-leaning edge in a left-leaning red-black tree.

Key to insert: $D$. Location using binary search method: right child of $C$. Problem created: not only did we add a right-leaning red edge (which is its own problem), but also -- upon seeing a group of 3 keys connected by two adjacent red edges -- we see that a temporary 4-node has been created. To convince yourself this represents a 4-node, count the black links at the bottom of the $ACD$ group. This 4-node will need to be eliminated.

Key to insert: $S$. Location using binary search method: left child of $T$. Problem created: in this case, we have no right-leaning red edges, but we still create a group of 3 keys connected by two adjacent red edges. This again creates a 4-node, which will need to be eliminated.

Key to insert: $U$. Location using binary search method: right child of $T$. Problem created: here, we again have the troublesome right-leaning red edge. We also again have a 4-node created by a group of 3 keys connected by two adjacent red edges, that will need to be eliminated.

Indeed, if the binary-search-tree insertion of a new node using a red edge into a red-black tree is going to cause a problem, it will be similar to one of the 4 different cases shown above. As such, we need to figure out how to fix the following four "problem pieces". Importantly, the last three of the above problem pieces are really just different realizations of 4-nodes. Recall the process by which 4-nodes are eliminated in a 2-3 tree.
We split the 4-node in two and pass the middle key to the parent of the former 4-node. This, of course, may cause another 4-node higher up the tree, which in turn must be fixed. Indeed, such problems can track all the way to the root node in some cases. It should be no surprise then, the strategy we will employ to fix the above problem pieces will often result in just moving the problem up a level, so that we can address it later. As fixing the problem may be "delayed" in this way multiple times, there is a chance that rather than seeing the null links shown with the pieces above, there may be large subtrees in their place. As such, any "fix" we use needs to work in these cases as well. With this in mind, let us consider how to fix each of these 4 problem pieces (with subtrees) in turn. To make our examples concrete, let us assume (arbitrarily) the keys involved are $A$, $E$, and $S$.

#### Fixing "Lonely" Right-Leaning Red Edges

Three of the four problem pieces above involve right-leaning red edges, but only one of these involves a right-leaning red edge not adjacent to some other red edge. This "lonely" right-leaning red edge can be converted to a left-leaning red edge through the action of left rotation, described by the code and images below. To see why this is called a rotation, imagine you disconnected $E$ from its parent and then rotated things rigidly in a counter-clockwise direction (as suggested by the curved blue arrow) so that $S$ could now become the child of $E$'s former parent. Note, as $E$ then becomes $S$'s left child, we must let $E$ "adopt" $S$'s prior left child as its new right child.

Before Left Rotation / After Left Rotation

private Node rotateLeft(Node n) {
    assert isRed(n.right);  // only use rotateLeft when n's right link is red!
    Node t = n.right;
    n.right = t.left;       // n adopts t's former left child as its new right child
    t.left = n;
    t.color = n.color;
    n.color = RED;
    return t;
}
...
// then supposing the parent reference to $E$ is x,
// invoke with the following to update x
x = rotateLeft(x);

#### Fixing Adjacent Red Edges on the Same Level

Consider the case when we have two adjacent red edges on the same level in the tree, with a black edge connecting the node between them with its parent, as shown on the right. As has been mentioned before, this is actually a 4-node in the associated 2-3 tree. Following the strategy for insertion in a 2-3 tree, we split the 4-node and pass the middle node up to the parent of the former 4-node. This is easily accomplished by changing the colors of all the edges connected to the middle node from red to black or black to red, as appropriate. Consequently, this operation in a red-black tree is known as flip colors.

Before Flip Colors / After Flip Colors

private void flipColors(Node n) {
    assert !isRed(n);       // only use this method when middle node n
    assert isRed(n.left);   // has a black parent connecting edge and both
    assert isRed(n.right);  // edges connecting n to its children are red
    n.color = RED;
    n.left.color = BLACK;
    n.right.color = BLACK;
}

#### Fixing "Left-Left Red" Edges

Consider the case where there are two adjacent red edges that span two levels, where the bottom and middle nodes are both left children of their parents, as shown at right. This too is a 4-node that must be split, and have its middle key passed to the parent level.
Note that if we could perform an operation similar to left rotation, but in the other direction (appropriately called a right rotation) on the top node in this group, then we would have reduced things to a previous problem, where both of the adjacent red edges are on the same level. Following the right rotation with the previously described flip colors operation will then complete the split and pass the middle key to the parent level. We first show the operation of right rotation below, and then we will show how this can be applied to a "left-left red" situation to deal with the corresponding 4-node.

Before Right Rotation / After Right Rotation

private Node rotateRight(Node n) {
    assert isRed(n.left);   // Use this method only if n's left child's edge is red.
                            // Note we don't check to see if n.left.left is red,
                            // although in the case described that is also true.
                            // The reason why is that this method will not only
                            // help us address the "left-left red" problem, but
                            // will also help us address a future problem as
                            // well -- one where n.left.left is not red.
    Node t = n.left;
    n.left = t.right;       // n adopts t's former right child as its new left child
    t.right = n;
    t.color = n.color;
    n.color = RED;
    return t;
}
...
// Similar to left rotation, suppose x is the parent reference to $S$.
// To update x, we invoke the following
x = rotateRight(x);

Now, let us see how right rotation and flipping colors can fix the "left-left red" problem, by first balancing and then splitting the underlying 4-node:

#### Fixing "Left-Right Red" Edges

As a final case to consider, suppose we again have two adjacent red edges that span two levels with the middle node a left child of its parent, but this time with the bottom node a right child of the middle node. Just like the previous two cases, this again represents a 4-node in the associated 2-3 tree. So, our strategy is again to split the 4-node into two 2-nodes and pass the middle key to the former parent of the 4-node. Fortunately, in this case, we don't need to develop a new operation to accomplish this. We can use the three actions described above (i.e., "left rotation", "right rotation", and "flip colors", in that order) to do this instead. As can be seen below, the left rotation addresses the initially right-leaning red edge. Then, like the left-left red edge fix above, the right rotation balances the underlying 4-node, and the flip colors operation finishes the job -- splitting the 4-node and passing its middle key to its former parent.

The only tricky thing about the application of these operations in this context is that it first requires a reference to the middle node (i.e., $n_1$), and then a reference to the top node (i.e., $n_2$). However, if one remembers how the recursive binary search tree method put() works, one will recall that we visit all of the nodes -- from the point of insertion all the way back to the root -- as references are "reset" as part of the recursive process. Consequently, if we carefully inject our three operations during that upward-moving sequence of resets, we should be able to get access to those two nodes in the order we need them.

#### Putting Everything Together (and keeping the root black)

Often the operations described in the above section will eliminate a problem piece in a red-black tree in a single application. However, occasionally the red links that are created higher up in the tree as a result create problems of their own -- which must then be resolved as well.
Of course, this is completely consistent with how passing the middle key up to the parent of a temporary 4-node during the key insertion process in a 2-3 tree can create additional 4-nodes higher in the tree that then also need to be eliminated. In both cases, the strategy to resolve all of the problems is straight-forward. We simply continue resolving any problems seen, working upwards through the tree from the point of insertion towards the root. If we should eliminate all of the problems before reaching the root, great! However, if problems persist all the way to the root, we'll need to consider one more thing.

Note that for three of the four problems that can occur (i.e., all but the lonely right-leaning red links), the solutions left us with a red edge above the top problem node. Recalling again that a red edge directly above a node means the key in that node should be considered "glued to" the key in its parent node, combined with the fact that the root has no parent -- clearly there can never be a red edge above the root. Yet, if problems persist all the way to the root node, and if the last problem addressed is not a "lonely" right-leaning red link, that is exactly what will be produced. The fix is simple in the extreme, though. As a last step in the insertion process, we just make sure the root is black. If we must change the color stored in the root node from red to black to ensure this, we can rest assured that no additional problems will be created -- as the last "flip colors" step taken will have made the two edges below the root node black as well.

With this in mind, we are now ready to present the following implementation of a put(Key key, Value val) method that can be used with red-black trees to insert new key-value pairs into the tree. Of course, the code below assumes the instance methods rotateLeft(), rotateRight(), and flipColors() discussed above are all defined. The code is cannibalized from the binary search tree version of put(); the new red-black fix-up logic is the group of three if-statements near the end. As you examine this code, note in particular how tightly one is able to consolidate the application of operations used to resolve problem pieces (apart from the root). Amazingly, this can be implemented with only three if-statements! (Take a moment and apply the code to each of the previously discussed problems to convince yourself that these three statements accomplish everything done above.)

public void put(Key key, Value val) {
    root = put(root, key, val);
    root.color = BLACK;  // fix the root color, if other fixes made it red
}

private Node put(Node n, Key key, Value val) {
    if (n == null) {
        Node newNode = new Node(key, val, RED);
        newNode.count = 1;
        return newNode;
    }

    int cmp = key.compareTo(n.key);
    if (cmp < 0)
        n.left = put(n.left, key, val);
    else if (cmp > 0)
        n.right = put(n.right, key, val);
    else
        n.val = val;

    if (isRed(n.right) && !isRed(n.left))     // fixes an initial right-leaning red edge
        n = rotateLeft(n);
    if (isRed(n.left) && isRed(n.left.left))  // balances a 4-node, as needed
        n = rotateRight(n);
    if (isRed(n.left) && isRed(n.right))      // splits the 4-node, passing its middle
        flipColors(n);                        // key to its parent

    n.count = 1 + size(n.left) + size(n.right);
    return n;
}
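(A usage sketch, not part of the original article: it assumes the methods above live in a class RedBlackBST<Key extends Comparable<Key>, Value> together with the Node type, RED/BLACK constants, isRed(), and size() helpers the article relies on -- the class name here is assumed, not given in the article.)

RedBlackBST<String, Integer> st = new RedBlackBST<>();
for (String key : new String[] {"S", "E", "A", "R", "C", "H"})
    st.put(key, key.length());      // each put() rebalances on the way back up
System.out.println(st.get("E"));    // prints 1
System.out.println(st.get("X"));    // prints null -- the key is absent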
2022-01-26 06:04:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49943915009498596, "perplexity": 1247.2696819771636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304915.53/warc/CC-MAIN-20220126041016-20220126071016-00296.warc.gz"}
http://125-problems.univ-mlv.fr/problem3.php
# Problem 3: Magic Squares and the Thue-Morse Word

$$\def\sa#1{\tt{#1}} \def\dd{\dot{}\dot{}}$$

The goal of the problem is to build magic squares with the help of the infinite Thue-Morse word $\mathbf{t}$ on the binary alphabet $\{\sa{0},\sa{1}\}$ (instead of $\{\sa{a},\sa{b}\}$). The word $\mathbf{t}$ is $\mu^\infty(0)$ obtained by iterating the morphism $\mu$ defined by $\mu(0)=\sa{01}$ and $\mu(1)=\sa{10}$: $$\mathbf{t}=\sa{01101001100101101001}\cdots.$$

The $n\times n$ array $S_n$, where $n=2^m$ for a positive natural number $m$, is defined, for $0\leq i,j < n$, by $$S_n[i,j] = \mathbf{t}[k](k+1) + (1-\mathbf{t}[k])(n^2-k),$$ where $k=i\cdot n+j$. The generated array $S_4$ is

$\mathbf{16}$ $2$ $3$ $\mathbf{13}$
$5$ $\mathbf{11}$ $\mathbf{10}$ $8$
$9$ $\mathbf{7}$ $\mathbf{6}$ $12$
$\mathbf{4}$ $14$ $15$ $\mathbf{1}$

(entries in bold are those with $\mathbf{t}[k]=0$, i.e. of the form $n^2-k$). The array is a magic square because it contains all the integers from $1$ to $16$ and the sum of elements on each row is $34$, as well as the sums on each column and on each diagonal.

Show the $n\times n$ array $S_n$ is a magic square for any natural number $n$ power of $2$.

## References

• Magic squares on Wikipedia
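For experimentation, the construction above is easy to code up using the standard fact that $\mathbf{t}[k]$ equals the parity of the number of $1$ bits in the binary expansion of $k$. A short sketch (the class and variable names are of course arbitrary):

public class ThueMorseSquare {
    public static void main(String[] args) {
        int n = 4;  // must be a power of 2
        for (int i = 0; i < n; i++) {
            StringBuilder row = new StringBuilder();
            for (int j = 0; j < n; j++) {
                int k = i * n + j;
                int t = Integer.bitCount(k) & 1;           // t[k], via bit parity
                int entry = (t == 1) ? k + 1 : n * n - k;  // the defining formula
                row.append(String.format("%4d", entry));
            }
            System.out.println(row);
        }
        // For n = 4 this prints the square S_4 shown above; every row,
        // column, and diagonal sums to 34.
    }
}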
2022-11-30 14:16:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8979735374450684, "perplexity": 89.19689145587485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710764.12/warc/CC-MAIN-20221130124353-20221130154353-00099.warc.gz"}
https://brianbridges.org/film-sequence-essay/project-vector-onto-vector-essay.php
# Vectors and Matrices

Some familiarity with vectors and matrices is essential to understand quantum computing. We provide a brief introduction below, and interested readers are encouraged to read a standard reference on linear algebra such as Strang, G. (1993). Introduction to linear algebra (Vol. 3). Wellesley, MA: Wellesley-Cambridge Press, or an online reference such as Linear Algebra.

A column vector (or simply vector) $v$ of dimension (or size) $n$ is a collection of $n$ complex numbers $(v_1,v_2,\ldots,v_n)$ arranged as a column: $$v =\begin{bmatrix} v_1\\ v_2\\ \vdots\\ v_n \end{bmatrix}$$

The norm of a vector $v$ is defined as $\sqrt{\sum_i |v_i|^2}$. A vector is said to be of unit norm (or alternatively it is called a unit vector) if its norm is $1$. The adjoint of a vector $v$ is denoted $v^\dagger$ and is defined to be the following row vector, where $*$ denotes the complex conjugate: $$\begin{bmatrix}v_1 \\ \vdots \\ v_n \end{bmatrix}^\dagger = \begin{bmatrix}v_1^* & \cdots & v_n^* \end{bmatrix}$$

The most common way to multiply two vectors together is through the inner product, also known as a dot product. The inner product gives the projection of one vector onto another and is invaluable in describing how to express one vector as a sum of other simpler vectors. The inner product between $u$ and $v$, denoted $\left\langle u, v\right\rangle$, is defined as $$\langle u, v\rangle = u^\dagger v = u_1^{*} v_1 + \cdots + u_n^{*} v_n.$$

This notation also allows the norm of a vector $v$ to be written as $\sqrt{\langle v, v\rangle}$. We can multiply a vector by a number $c$ to form a new vector whose entries are multiplied by $c$. We can also add two vectors $u$ and $v$ to form a new vector whose entries are the sum of the entries of $u$ and $v$.
These operations are represented below: $$\mathrm{If}~u =\begin{bmatrix} u_1\\ u_2\\ \vdots\\ u_n \end{bmatrix}~\mathrm{and}~ v =\begin{bmatrix} v_1\\ v_2\\ \vdots\\ v_n \end{bmatrix},~\mathrm{then}~ au+bv =\begin{bmatrix} au_1+bv_1\\ au_2+bv_2\\ \vdots\\ au_n+bv_n \end{bmatrix}.$$

A matrix of dimension $m \times n$ is a collection of $mn$ complex numbers arranged in $m$ rows and $n$ columns as shown below: $$M = \begin{bmatrix} M_{11} ~~ M_{12} ~~ \cdots ~~ M_{1n}\\ M_{21} ~~ M_{22} ~~ \cdots ~~ M_{2n}\\ \ddots\\ M_{m1} ~~ M_{m2} ~~ \cdots ~~ M_{mn}\\ \end{bmatrix}.$$

Note that a vector of dimension $n$ is simply a matrix of dimension $n \times 1$. As with vectors, we can multiply a matrix by a number $c$ to obtain a new matrix where every entry is multiplied by $c$, and we can add two matrices of the same dimensions to produce a new matrix whose entries are the sum of the respective entries of the two matrices.

## Matrix Multiplication and Tensor Products

We can also multiply two matrices $M$ of dimension $m\times n$ and $N$ of dimension $n \times p$ to obtain a new matrix $P$ of dimension $m \times p$ as follows: \begin{align} &\begin{bmatrix} M_{11} ~~ M_{12} ~~ \cdots ~~ M_{1n}\\ M_{21} ~~ M_{22} ~~ \cdots ~~ M_{2n}\\ \ddots\\ M_{m1} ~~ M_{m2} ~~ \cdots ~~ M_{mn} \end{bmatrix} \begin{bmatrix} N_{11} ~~ N_{12} ~~ \cdots ~~ N_{1p}\\ N_{21} ~~ N_{22} ~~ \cdots ~~ N_{2p}\\ \ddots\\ N_{n1} ~~ N_{n2} ~~ \cdots ~~ N_{np} \end{bmatrix}=\begin{bmatrix} P_{11} ~~ P_{12} ~~ \cdots ~~ P_{1p}\\ P_{21} ~~ P_{22} ~~ \cdots ~~ P_{2p}\\ \ddots\\ P_{m1} ~~ P_{m2} ~~ \cdots ~~ P_{mp} \end{bmatrix} \end{align} where the entries of $P$ are $P_{ik} = \sum_j M_{ij}N_{jk}$. For example, the entry $P_{11}$ is the inner product of the first row of $M$ with the first column of $N$. Note that since a vector is simply a special case of a matrix, this definition extends to matrix-vector multiplication. All the matrices we consider will either be square matrices, where the number of rows and columns are equal, or vectors, which correspond to only $1$ column.

One special square matrix is the identity matrix, denoted $\boldone$, which has all its diagonal elements equal to $1$ and the remaining elements equal to $0$: $$\boldone=\begin{bmatrix} 1 ~~ 0 ~~ \cdots ~~ 0\\ 0 ~~ 1 ~~ \cdots ~~ 0\\ ~~ \ddots\\ 0 ~~ 0 ~~ \cdots ~~ 1 \end{bmatrix}.$$

For a square matrix $A$, we say a matrix $B$ is its inverse if $AB = BA = \boldone$.
The inverse of a matrix need not exist, but when it does exist it is unique and we denote it $A^{-1}$. For any matrix $M$, the adjoint or conjugate transpose of $M$ is a matrix $N$ such that $N_{ij} = M_{ji}^*$. The adjoint of $M$ is usually denoted $M^\dagger$. We say a matrix $U$ is unitary if $UU^\dagger = U^\dagger U = \boldone$ or equivalently, $U^{-1} = U^\dagger$. Perhaps the most important property of unitary matrices is that they preserve the norm of a vector. This happens because $$\langle v,v \rangle=v^\dagger v = v^\dagger U^{-1} U v = v^\dagger U^\dagger U v = \langle U v, U v\rangle.$$

A matrix $M$ is said to be Hermitian if $M=M^\dagger$. Finally, the tensor product (or Kronecker product) of two matrices $M$ of dimension $m\times n$ and $N$ of dimension $p \times q$ is a larger matrix $P=M\otimes N$ of dimension $mp \times nq$, and is obtained from $M$ and $N$ as follows: \begin{align} M \otimes N &= \begin{bmatrix} M_{11} ~~ \cdots ~~ M_{1n} \\ \ddots\\ M_{m1} ~~ \cdots ~~ M_{mn} \end{bmatrix} \otimes \begin{bmatrix} N_{11} ~~ \cdots ~~ N_{1q}\\ \ddots\\ N_{p1} ~~ \cdots ~~ N_{pq} \end{bmatrix}\\ &= \begin{bmatrix} M_{11} \begin{bmatrix} N_{11} ~~ \cdots ~~ N_{1q}\\ \ddots\\ N_{p1} ~~ \cdots ~~ N_{pq} \end{bmatrix}~~ \cdots ~~ M_{1n} \begin{bmatrix} N_{11} ~~ \cdots ~~ N_{1q}\\ \ddots\\ N_{p1} ~~ \cdots ~~ N_{pq} \end{bmatrix}\\ \ddots\\ M_{m1} \begin{bmatrix} N_{11} ~~ \cdots ~~ N_{1q}\\ \ddots\\ N_{p1} ~~ \cdots ~~ N_{pq} \end{bmatrix}~~ \cdots ~~ M_{mn} \begin{bmatrix} N_{11} ~~ \cdots ~~ N_{1q}\\ \ddots\\ N_{p1} ~~ \cdots ~~ N_{pq} \end{bmatrix} \end{bmatrix}. \end{align}

This is better demonstrated with some examples: $$\begin{bmatrix} a \\ b \end{bmatrix} \otimes \begin{bmatrix} c \\ d \\ e \end{bmatrix} = \begin{bmatrix} a \begin{bmatrix} c \\ d \\ e \end{bmatrix} \\[1.5em] b \begin{bmatrix} c \\ d \\ e \end{bmatrix} \end{bmatrix} = \begin{bmatrix} a c \\ a d \\ a e \\ b c \\ b d \\ b e \end{bmatrix}$$ and $$\begin{bmatrix} a\ b \\ c\ d \end{bmatrix} \otimes \begin{bmatrix} e\ f\\ g\ h \end{bmatrix} = \begin{bmatrix} a\begin{bmatrix} e\ f\\ g\ h \end{bmatrix} b\begin{bmatrix} e\ f\\ g\ h \end{bmatrix} \\[1em] c\begin{bmatrix} e\ f\\ g\ h \end{bmatrix} d\begin{bmatrix} e\ f\\ g\ h \end{bmatrix} \end{bmatrix} = \begin{bmatrix} ae\ af\ be\ bf \\ ag\ ah\ bg\ bh \\ ce\ cf\ de\ df \\ cg\ ch\ dg\ dh \end{bmatrix}.$$

A final useful notational convention surrounding tensor products is that, for any vector $v$ or matrix $M$, $v^{\otimes n}$ or $M^{\otimes n}$ is short hand for an $n$-fold repeated tensor product.
A final useful notational convention surrounding tensor products is that, for any vector $v$ or matrix $M$, $v^{\otimes n}$ or $M^{\otimes n}$ is shorthand for an $n$-fold repeated tensor product. For example: \begin{align} &\begin{bmatrix} 1 \\ 0 \end{bmatrix}^{\otimes 1} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad\begin{bmatrix} 1 \\ 0 \end{bmatrix}^{\otimes 2} = \begin{bmatrix} 1 \\ 0 \\0 \\0 \end{bmatrix}, \qquad\begin{bmatrix} 1 \\ -1 \end{bmatrix}^{\otimes 2} = \begin{bmatrix} 1 \\ -1 \\-1 \\1 \end{bmatrix}, \\ &\begin{bmatrix} 0 & 1 \\ 1& 0 \end{bmatrix}^{\otimes 1}= \begin{bmatrix} 0& 1 \\ 1& 0 \end{bmatrix}, \qquad\begin{bmatrix} 0 & 1 \\ 1& 0 \end{bmatrix}^{\otimes 2}= \begin{bmatrix} 0 &0&0&1 \\ 0 &0&1&0 \\ 0 &1&0&0\\ 1 &0&0&0\end{bmatrix}. \end{align}
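Since $v^{\otimes n}$ is just an $n$-fold Kronecker product, it can be computed by folding np.kron over a list. This sketch (again our own illustration) reproduces two of the examples above.

```python
from functools import reduce
import numpy as np

def tensor_power(a, n):
    """n-fold Kronecker product a (x) a (x) ... (x) a."""
    return reduce(np.kron, [a] * n)

v = np.array([1, -1])
assert np.array_equal(tensor_power(v, 2), np.array([1, -1, -1, 1]))

X = np.array([[0, 1],
              [1, 0]])
expected = np.array([[0, 0, 0, 1],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0],
                     [1, 0, 0, 0]])
assert np.array_equal(tensor_power(X, 2), expected)
```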
2020-11-24 18:15:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8685741424560547, "perplexity": 3252.6446059230866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141176922.14/warc/CC-MAIN-20201124170142-20201124200142-00404.warc.gz"}
https://book.cds101.com/repositories.html
## 3.3 Repositories

When you access a repository on GitHub, the page will look similar to the screenshot below. The page contains a lot of information about the repository along with multiple tabs and buttons, which can be a bit overwhelming at first. As it turns out, we will not need to use many of these tabs or buttons during the class, so instead let’s focus on the most important ones, which have been highlighted in the red and blue rectangles.

• Red
  • Code tab: Clicking this brings you back to the main repository page, which is what is displayed in the above screenshot
  • Commit tab: Takes you to the commit history for the repository, which is the series of “snapshots” that you create using the git tool in RStudio
• Blue
  • Dropdown branch menu: Use this to inspect a branch that is different from the default master branch
  • Clone or download button: Provides a link to use when obtaining a copy of a repository. For the class, you will do this by creating a new project in RStudio using the version control option.
  • Pull requests tab: Generally used for code reviews and quality control when a user wants to contribute code to a repository. For the class, pull requests will be used to submit your work so that the instructor is able to leave line-by-line comments about your code.

Below the tabs and button is a list of files stored in the repository. Each repository will have different files. Clicking a file’s name will bring you to another page that shows a preview of the file contents. The descriptions in the middle of the file list show the most recent commit message for each file and the timespan on the right shows how recently the file was last updated. The above file list also shows you what you’ll see in a folder after you first obtain a copy of the repository. In that way, each repository can be thought of as a folder containing files. The advantage of this approach is that each repository you create is isolated and separate, which helps to reduce certain kinds of coding errors.

Below the repository file list is a rendered version of the README.md file. The README.md file describes the contents of the repository and can be used as a form of documentation. It is a good idea to look at the README.md file of any repository you visit on GitHub to see if it gives examples or quick instructions on how to set up and use the files.
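As an aside, the link provided by the Clone or download button can also be used outside RStudio. The snippet below is a hypothetical illustration (the repository URL is a placeholder, not a real course repository) that calls the git command-line tool from Python; it requires git to be installed locally.

```python
import subprocess

# Hypothetical repository URL copied from the "Clone or download" button.
repo_url = "https://github.com/example-user/example-repo.git"

# Equivalent to running `git clone <url>` in a terminal.
subprocess.run(["git", "clone", repo_url], check=True)
```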
2022-10-02 21:35:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34109416604042053, "perplexity": 911.6989503127675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00746.warc.gz"}
https://hal.inria.fr/inria-00124825v2
# Normal Cone Approximation and Offset Shape Isotopy

1 GEOMETRICA - Geometric computing, INRIA Futurs, CRISAM - Inria Sophia Antipolis - Méditerranée

Abstract: This work addresses the problem of the approximation of the normals of the offsets of general compact sets in Euclidean spaces. It is proven that for general sampling conditions, it is possible to approximate the gradient vector field of the distance to general compact sets. These conditions involve the $\mu$-reach of the compact set, a recently introduced notion of feature size. As a consequence, we provide a sampling condition that is sufficient to ensure the correctness up to isotopy of a reconstruction given by an offset of the sampling. We also provide a notion of normal cone to general compact sets which is stable under perturbation.

Document type: Reports. Cited literature: [23 references]. https://hal.inria.fr/inria-00124825 Contributor: Frédéric Chazal. Submitted on: Saturday, January 20, 2007 - 10:18:04 AM. Last modification on: Friday, February 4, 2022 - 3:09:35 AM. Long-term archiving on: Monday, June 27, 2011 - 3:35:58 PM.

### Files

Files produced by the author(s)

### Identifiers

• HAL Id: inria-00124825, version 2

### Citation

Frédéric Chazal, David Cohen-Steiner, André Lieutier. Normal Cone Approximation and Offset Shape Isotopy. [Research Report] RR-6100, INRIA. 2007, pp.21. ⟨inria-00124825v2⟩
2022-05-20 02:18:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2884552776813507, "perplexity": 3042.7920999020166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662530553.34/warc/CC-MAIN-20220519235259-20220520025259-00667.warc.gz"}
https://en.universaldenker.org/lessons/275
# Plate Capacitor: Voltage, Capacitance and Eletric Force Level 2 (without higher mathematics) Level 2 requires school mathematics. Suitable for pupils. Updated by Alexander Fufaev on ## Basic setup A plate capacitor usually consists of two round or rectangular conductive plates (also called Electrodes). These have an area $$A$$ and are located at a distance $$d$$ from each other. Both the area and the distance between the plates are two important parameters that geometrically characterize a plate capacitor. So far there are only two plates. Only when you put positive and negative electric charges on the two plates, the whole setup becomes a plate capacitor. Charge one plate with positive charge and the opposite plate with the same amount of negative charge. So the total charge on one plate is $$+Q$$ and on the other plate $$-Q$$ . The amount $$Q$$ is the same on both plates. The positive and negative electric charges on the separated plates now attract each other. If they were free, they would simply move towards each other. But since the plates are spaced at a fixed distance $$d$$ from each other, they cannot do that. From this you probably already recognize the first possible application of a plate capacitor. If you connect the two charged plates with a conducting wire and a small lamp, an electric current flows from one plate to the other and causes the lamp to light up until there is no more difference in charge on the plates. So with a capacitor you can store electric energy. But you can do much more with it, for example create a frequency filter, which is built into the charging cable of your smartphone and is there to protect the microelectronics from external electromagnetic interference. But that is by far not all. Dielectric medium: A plate capacitor can be filled with a non-conductive material (called dielectric). For example, the dielectric could be the air, vacuum, water, wood, ceramic, or other non-conductor. This dielectric is characterized by the relative permittivity $$\varepsilon_{\text r}$$. The dielectric should not be conductive at all, because otherwise the charges would pass through the dielectric to the other plate, thus equalizing the charge difference (this is not the purpose of a capacitor). The dielectric is useful for manipulating the physical properties of the capacitor, such as its capacitance. Table : Examples of relative permittivity of some materials DielectricRelative Permittivity $$\varepsilon_{\text r}$$ Vacuum1 Air1.0006 Water80 Glas6 bis 8 ## Voltage between the plates When a small charge $$q$$, for example a free-moving positive charge (let's call it test charge) is placed next to the positive plate, the positive plate will repel the positive test charge and the negative plate will attract it. The test charge experiences an electric force $$F$$ inside the plate capacitor, which accelerates the small charge straight toward the negative plate. The charge continues to accelerate until it arrives at the opposite negative plate. Before the test charge hits the negative plate, it has gained velocity $$v$$ due to acceleration and thus has also gained kinetic energy $$W$$. This kinetic energy gained by the test charge moving from one charged plate to the other is characterized by the voltage $$U$$ between the plates. Voltage $$U$$ between two plates is the energy $$W$$ gained by a small test charge as it moves from one plate to the other, divided by the charge $$q$$. Voltage $$U$$ is therefore energy per charge. 
The voltage $$U$$ between the plates and thus also the gained energy $$W$$ of the test charge can be manipulated by charging the plates even more. This is done by increasing the charge $$Q$$ on both plates. This increases the electric force $$F$$ on the test charge. The test charge would then accelerate even more and thus reach a greater velocity $$v$$ at the end, so gain a larger kinetic energy $$W$$. If you double the electric charge $$Q$$, then the voltage $$U$$ also doubles. Thus, a test charge $$q$$ would then gain twice as much energy $$W$$ after traversing the voltage $$U$$. ## Electric potential in plate capacitor Electric potential $$\varphi(x)$$ is the the current potential energy of a charge at position $$x$$, per charge. You get the potential $$\varphi$$ between the electrodes by solving the one-dimensional Laplace equation. The result is a potential $$\varphi$$ that depends linearly on the spatial coordinate $$x$$: Potential between the electrodes Formula anchor The potential difference corresponds here to the voltage between the electrodes: Formula: Voltage as potential difference Formula anchor If you then plot the electric potential $$\varphi$$ behind and between the electrodes in a diagram ($$\varphi$$, $$x$$), you get a constant potential $$\varphi_1$$ in the range $$x \le 0$$, that is up to the first electrode. Also behind the second electrode, that is for $$x \ge d$$, the potential is constant $$\varphi_2$$. Between the electrodes, that is in the region between $$x=0$$ and $$x=d$$, the potential increases linearly from one electrode to the other. ## Electric field and force inside a plate capacitor No matter where you place the test charge $$q$$ inside the plate capacitor, it will always move straight ahead to the other plate everywhere and experience the same force $$F$$. A force field, that is the entirety of all force vectors in space, is homogeneous here. Homogeneous means that it does not matter where you place the test charge. The test charge experiences the same electric force everywhere in the plate capacitor. You can calculate the force on a test charge. The force $$F$$ is - without deriving the formula here: Formula anchor If you divide the force 3 by the test charge $$q$$, then you obtain electric field $$E := \frac{F}{q}$$: Formula anchor The typical unit of electric field $$E$$ is $$\frac{\mathrm V}{\mathrm m}$$ (volts per meter) or alternatively $$\frac{\mathrm N}{\mathrm C}$$ (newtons per coulomb). The electric field is therefore nothing else than force per charge. The electric field in a plate capacitor depends only on the voltage $$U$$ and on the plate distance $$d$$. The larger the voltage and the smaller the distance, the larger the electric field. Since the force field is homogeneous, the electric field in the plate capacitor is also homogeneous. Instead of drawing the vector arrows, the electric field is often illustrated with field lines. In the case of a plate capacitor, the field lines are straight parallel lines running from one plate to the other. Such straight lines characterize a homogeneous E-field. A test charge then moves on such a straight line. By definition, the electric field lines run away from the positive plate and towards the negative plate. Consequently, the field lines exit the positively charged plate from both sides. And the field lines enter the negatively charged plate on both sides. The field lines of the negative and positive plates point in opposite directions behind the plates and thus cancel each other out. 
Instead of drawing the vector arrows, the electric field is often illustrated with field lines. In the case of a plate capacitor, the field lines are straight parallel lines running from one plate to the other. Such straight lines characterize a homogeneous E-field. A test charge then moves along such a straight line. By definition, the electric field lines run away from the positive plate and towards the negative plate. Consequently, the field lines exit the positively charged plate from both sides, and the field lines enter the negatively charged plate on both sides. The field lines of the negative and positive plates point in opposite directions behind the plates and thus cancel each other out. The field lines between the electrodes, on the other hand, point in the same direction, which is why the electric field between the electrodes is amplified. If you plot the electric field behind and between the electrodes in a coordinate system ($E$, $x$), then the E-field up to the first plate at $x=0$ is zero. The E-field behind the second plate, which is at $x=d$, is also zero. Between the electrodes, that is in the region between $x=0$ and $x=d$, the E-field has a constant value $U/d$.

## Capacitance of a plate capacitor

Charge $Q$ and voltage $U$ are proportional to each other, where the constant of proportionality $C$ is the so-called capacitance: $$Q = C \, U.$$ The unit of the capacitance $C$ is $\frac{\mathrm{As}}{\mathrm V}$ (ampere-second per volt) or abbreviated $\mathrm{F}$ (farad). Capacitance is an important characteristic quantity of a capacitor, which depends mainly on its geometry, that is on the distance $d$ between the plates and on the plate area $A$. The capacitance also depends on the material, called dielectric, with which the space between the plates is filled. Here we assume that between the plates there is a vacuum or at least only air. You can calculate the capacitance of a plate capacitor as follows: $$C = \varepsilon_0 \, \frac{A}{d}.$$ Here, $\varepsilon_0$ is the electric field constant, which provides the correct unit of capacitance. This is a natural constant with the value: $\varepsilon_0 = 8.854 \cdot 10^{-12} \, \frac{\text{As}}{\text{Vm}}$. If another dielectric is used between the electrodes instead of vacuum ($\varepsilon_{\text r} = 1$), such as ceramic, then the resulting change in capacitance can be taken into account by the relative permittivity $\varepsilon_{\text r}$. The capacitance changes by the factor $\varepsilon_{\text r}$: $$C = \varepsilon_0 \, \varepsilon_{\text r} \, \frac{A}{d}.$$

## Electrical energy in the field of the plate capacitor

If you integrate the voltage over the charge, you get the energy $W_{\text{e}}$ which is necessary to bring the charge $Q$ to the electrodes: $$W_{\text{e}} = \int_0^Q \frac{q}{C} \, \mathrm{d}q = \frac{Q^2}{2C} = \frac{1}{2} \, C \, U^2.$$ This energy can also be expressed using the volume $V = A \, d$ enclosed by the E-field between the electrodes: $$W_{\text{e}} = \frac{1}{2} \, \varepsilon_0 \, E^2 \, V.$$ This can be interpreted in such a way that the electrical energy of the plate capacitor is not somehow in the electrodes, but stored in the electric field between the electrodes (because for the energy only the volume enclosed by the field is relevant).
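A minimal sketch putting the capacitance and energy formulas together (the plate geometry and voltage values are invented for illustration; the relative permittivity is taken from the table above):

```python
# Capacitance and stored energy of a plate capacitor.
eps_0 = 8.854e-12   # electric field constant in As/(Vm)
eps_r = 80.0        # relative permittivity (water, from the table above)
A = 0.02            # plate area in m^2
d = 0.001           # plate distance in m
U = 12.0            # applied voltage in V

C = eps_0 * eps_r * A / d   # capacitance in farads
W = 0.5 * C * U**2          # stored energy in joules

# Same energy via the field picture: W = (1/2) * eps_0 * eps_r * E^2 * V.
E = U / d
V = A * d
W_field = 0.5 * eps_0 * eps_r * E**2 * V
assert abs(W - W_field) < 1e-12

print(f"C = {C:.3e} F, W = {W:.3e} J")
```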
2023-01-30 04:01:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 9, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8120193481445312, "perplexity": 284.2935615901198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499801.40/warc/CC-MAIN-20230130034805-20230130064805-00724.warc.gz"}
http://discrete.openmathbooks.org/dmoi3/sec_intro-functions.html
## Section0.4Functions

A function is a rule that assigns each input exactly one output. We call the output the image of the input. The set of all inputs for a function is called the domain. The set of all allowable outputs is called the codomain. We would write $f:X \to Y$ to describe a function with name $f\text{,}$ domain $X$ and codomain $Y\text{.}$ This does not tell us which function $f$ is though. To define the function, we must describe the rule. This is often done by giving a formula to compute the output for any input (although this is certainly not the only way to describe the rule).

For example, consider the function $f:\N \to \N$ defined by $f(x) = x^2 + 3\text{.}$ Here the domain and codomain are the same set (the natural numbers). The rule is: take your input, multiply it by itself and add 3. This works because we can apply this rule to every natural number (every element of the domain) and the result is always a natural number (an element of the codomain). Notice though that not every natural number is actually an output (there is no way to get 0, 1, 2, 5, etc.). The set of natural numbers that are outputs is called the range of the function (in this case, the range is $\{3, 4, 7, 12, 19, 28, \ldots\}\text{,}$ all the natural numbers that are 3 more than a perfect square).

The key thing that makes a rule a function is that there is exactly one output for each input. That is, it is important that the rule be a good rule. What output do we assign to the input 7? There can only be one answer for any particular function.

###### Example0.4.1.

The following are all examples of functions:

1. $f:\Z \to \Z$ defined by $f(n) = 3n\text{.}$ The domain and codomain are both the set of integers. However, the range is only the set of integer multiples of 3.

2. $g: \{1,2,3\} \to \{a,b,c\}$ defined by $g(1) = c\text{,}$ $g(2) = a$ and $g(3) = a\text{.}$ The domain is the set $\{1,2,3\}\text{,}$ the codomain is the set $\{a,b,c\}$ and the range is the set $\{a,c\}\text{.}$ Note that $g(2)$ and $g(3)$ are the same element of the codomain. This is okay since each element in the domain still has only one output.

3. $h:\{1,2,3,4\} \to \N$ defined by the table:

   | $x$ | 1 | 2 | 3 | 4 |
   | --- | --- | --- | --- | --- |
   | $h(x)$ | 3 | 6 | 9 | 12 |

   Here the domain is the finite set $\{1,2,3,4\}$ and the codomain is the set of natural numbers, $\N\text{.}$ At first you might think this function is the same as $f$ defined above. It is absolutely not. Even though the rule is the same, the domain and codomain are different, so these are two different functions.

###### Example0.4.2.

Just because you can describe a rule in the same way you would write a function, does not mean that the rule is a function. The following are NOT functions.

1. $f:\N \to \N$ defined by $f(n) = \frac{n}{2}\text{.}$ The reason this is not a function is because not every input has an output. Where does $f$ send 3? The rule says that $f(3) = \frac{3}{2}\text{,}$ but $\frac{3}{2}$ is not an element of the codomain.

2. Consider the rule that matches each person to their phone number. If you think of the set of people as the domain and the set of phone numbers as the codomain, then this is not a function, since some people have two phone numbers. Switching the domain and codomain sets doesn't help either, since some phone numbers belong to multiple people (assuming some households still have landlines when you are reading this).

### SubsectionDescribing Functions

It is worth making a distinction between a function and its description.
The function is the abstract mathematical object that in some way exists whether or not anyone ever talks about it. But when we do want to talk about the function, we need a way to describe it. A particular function can be described in multiple ways. Some calculus textbooks talk about the Rule of Four, that every function can be described in four ways: algebraically (a formula), numerically (a table), graphically, or in words. In discrete math, we can still use any of these to describe functions, but we can also be more specific since we are primarily concerned with functions that have $\N$ or a finite subset of $\N$ as their domain.

Describing a function graphically usually means drawing the graph of the function: plotting the points on the plane. We can do this, and might get a graph like the following for a function $f:\{1,2,3\} \to \{1,2,3\}\text{.}$ It would be absolutely WRONG to connect the dots or try to fit them to some curve. There are only three elements in the domain. A curve would mean that the domain contains an entire interval of real numbers.

Here is another way to represent that same function: This shows that the function $f$ sends 1 to 2, 2 to 1 and 3 to 3: just follow the arrows. The arrow diagram used to define the function above can be very helpful in visualizing functions. We will often be working with functions with finite domains, so this kind of picture is often more useful than a traditional graph of a function.

Note that for finite domains, finding an algebraic formula that gives the output for any input is often impossible. Of course we could use a piecewise defined function, like \begin{equation*} f(x) = \begin{cases} x+1 \amp \text{ if } x = 1 \\ x-1 \amp \text{ if } x = 2 \\ x \amp \text{ if } x = 3\end{cases}\text{.} \end{equation*} This describes exactly the same function as above, but we can all agree is a ridiculous way of doing so.

Since we will so often use functions with small domains and codomains, let's adopt some notation to describe them. All we need is some clear way of denoting the image of each element in the domain. In fact, writing a table of values would work perfectly:

| $x$ | 0 | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- | --- |
| $f(x)$ | 3 | 3 | 2 | 4 | 1 |

We simplify this further by writing this as a “matrix” with each input directly over its output: \begin{equation*} f = \twoline{0 \amp 1 \amp 2\amp 3 \amp 4}{3 \amp 3 \amp 2 \amp 4 \amp 1}\text{.} \end{equation*} Note this is just notation and not the same sort of matrix you would find in a linear algebra class (it does not make sense to do operations with these matrices, or row reduce them, for example). One advantage of the two-line notation over the arrow diagrams is that it is harder to accidentally define a rule that is not a function using two-line notation.

###### Example0.4.3.

Which of the following diagrams represent a function? Let $X = \{1,2,3,4\}$ and $Y = \{a,b,c,d\}\text{.}$

Solution

$f$ is a function. So is $g\text{.}$ There is no problem with an element of the codomain not being the image of any input, and there is no problem with $a$ from the codomain being the image of both 2 and 3 from the domain. We could use our two-line notation to write these as \begin{equation*} f= \begin{pmatrix} 1 \amp 2 \amp 3 \amp 4 \\ d \amp a \amp c \amp b \end{pmatrix} \qquad g = \begin{pmatrix} 1 \amp 2 \amp 3 \amp 4 \\ d \amp a \amp a \amp b \end{pmatrix}\text{.} \end{equation*} However, $h$ is NOT a function. In fact, it fails for two reasons. First, the element 1 from the domain has not been mapped to any element from the codomain.
Second, the element 2 from the domain has been mapped to more than one element from the codomain ($a$ and $c$). Note that either one of these problems is enough to make a rule not a function. In general, neither of the following mappings are functions:

It might also be helpful to think about how you would write the two-line notation for $h\text{.}$ We would have something like: \begin{equation*} h=\twoline{1 \amp 2 \amp 3 \amp 4}{\amp a,c? \amp d \amp b}\text{.} \end{equation*} There is nothing under 1 (bad) and we needed to put more than one thing under 2 (very bad). With a rule that is actually a function, the two-line notation will always “work”.

We will also be interested in functions with domain $\N\text{.}$ Here two-line notation is no good, but describing the function algebraically is often possible. Even tables are a little awkward, since they do not describe the function completely. For example, consider the function $f:\N \to \N$ given by the table below.

| $x$ | 0 | 1 | 2 | 3 | 4 | 5 | $\ldots$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $f(x)$ | 0 | 1 | 4 | 9 | 16 | 25 | $\ldots$ |

Have I given you enough entries for you to be able to determine $f(6)\text{?}$ You might guess that $f(6) = 36\text{,}$ but there is no way for you to know this for sure. Maybe I am being a jerk and intended $f(6) = 42\text{.}$ In fact, for every natural number $n\text{,}$ there is a function that agrees with the table above, but for which $f(6) = n\text{.}$

Okay, suppose I really did mean for $f(6) = 36\text{,}$ and in fact, for the rule that you think is governing the function to actually be the rule. Then I should say what that rule is: $f(n) = n^2\text{.}$ Now there is no confusion possible. Giving an explicit formula that calculates the image of any element in the domain is a great way to describe a function. We will say that these explicit rules are closed formulas for the function.

There is another very useful way to describe functions whose domain is $\N\text{,}$ one that relies specifically on the structure of the natural numbers. We can define a function recursively!

###### Example0.4.4.

Consider the function $f:\N \to \N$ given by $f(0) = 0$ and $f(n+1) = f(n) + 2n+1\text{.}$ Find $f(6)\text{.}$

Solution

The rule says that $f(6) = f(5) + 11$ (we are using $6 = n+1$ so $n = 5$). We don't know what $f(5)$ is though. Well, we know that $f(5) = f(4) + 9\text{.}$ So we need to compute $f(4)\text{,}$ which will require knowing $f(3)\text{,}$ which will require $f(2)\text{,}$… will it ever end? Yes! In fact, this process will always end because we have $\N$ as our domain, so there is a least element. And we gave the value of $f(0)$ explicitly, so we are good. In fact, we might decide to work up to $f(6)$ instead of working down from $f(6)\text{:}$ \begin{align*} f(1) = \amp f(0) + 1 = \amp 0 + 1 = 1\\ f(2) = \amp f(1) + 3 = \amp 1 + 3 = 4\\ f(3) = \amp f(2) + 5 = \amp 4 + 5 = 9\\ f(4) = \amp f(3) + 7 = \amp 9 + 7 = 16\\ f(5) = \amp f(4) + 9 = \amp 16 + 9 = 25\\ f(6) = \amp f(5) + 11 = \amp 25 + 11 = 36 \end{align*} It looks like this recursively defined function is the same as the explicitly defined function $f(n) = n^2\text{.}$ Is it? Later we will prove that it is.

Recursively defined functions are often easier to create from a “real world” problem, because they describe how the values of the functions are changing. However, this comes with a price. It is harder to calculate the image of a single input, since you need to know the images of other (previous) elements in the domain.
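A recursive definition translates directly into code. The following sketch (our illustration, not part of the text) evaluates the recurrence from Example 0.4.4 and checks it against the closed formula $n^2$.

```python
def f(n):
    """f(0) = 0; f(n+1) = f(n) + 2n + 1."""
    if n == 0:
        return 0                          # initial condition
    return f(n - 1) + 2 * (n - 1) + 1     # recurrence relation

# The recursive values agree with the closed formula n^2.
for n in range(10):
    assert f(n) == n ** 2

print(f(6))  # 36
```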
###### Recursively Defined Functions.

For a function $f:\N \to \N\text{,}$ a recursive definition consists of an initial condition together with a recurrence relation. The initial condition is the explicitly given value of $f(0)\text{.}$ The recurrence relation is a formula for $f(n+1)$ in terms of $f(n)$ (and possibly $n$ itself).

###### Example0.4.5.

Give recursive definitions for the functions described below.

1. $f:\N \to \N$ gives the number of snails in your terrarium $n$ years after you built it, assuming you started with 3 snails and the number of snails doubles each year.

2. $g:\N \to \N$ gives the number of push-ups you do $n$ days after you started your push-ups challenge, assuming you could do 7 push-ups on day 0 and you can do 2 more push-ups each day.

3. $h:\N \to \N$ defined by $h(n) = n!\text{.}$ Recall that $n! = 1 \cdot 2 \cdot 3 \cdot \cdots \cdot (n-1)\cdot n$ is the product of all numbers from $1$ through $n\text{.}$ We also define $0! = 1\text{.}$

Solution

1. The initial condition is $f(0) = 3\text{.}$ To get $f(n+1)$ we would double the number of snails in the terrarium the previous year, which is given by $f(n)\text{.}$ Thus $f(n+1) = 2f(n)\text{.}$ The full recursive definition contains both of these, and would be written, \begin{equation*} f(0) = 3;~ f(n+1) = 2f(n)\text{.} \end{equation*}

2. We are told that on day 0 you can do 7 push-ups, so $g(0) = 7\text{.}$ The number of push-ups you can do on day $n+1$ is 2 more than the number you can do on day $n\text{,}$ which is given by $g(n)\text{.}$ Thus \begin{equation*} g(0) = 7;~ g(n+1) = g(n) + 2\text{.} \end{equation*}

3. Here $h(0) = 1\text{.}$ To get the recurrence relation, think about how you can get $h(n+1) = (n+1)!$ from $h(n) = n!\text{.}$ If you write out both of these as products, you see that $(n+1)!$ is just like $n!$ except you have one more term in the product, an extra $n+1\text{.}$ So we have, \begin{equation*} h(0) = 1;~ h(n+1) = (n+1)\cdot h(n)\text{.} \end{equation*}

### SubsectionSurjections, Injections, and Bijections

We now turn to investigating special properties functions might or might not possess. In the examples above, you may have noticed that sometimes there are elements of the codomain which are not in the range. When this sort of thing does not happen (that is, when everything in the codomain is in the range), we say the function is onto or that the function maps the domain onto the codomain. This terminology should make sense: the function puts the domain (entirely) on top of the codomain. The fancy math term for an onto function is a surjection, and we say that an onto function is a surjective function. In pictures:

###### Example0.4.6.

Which functions are surjective (i.e., onto)?

1. $f:\Z \to \Z$ defined by $f(n) = 3n\text{.}$

2. $g: \{1,2,3\} \to \{a,b,c\}$ defined by $g = \begin{pmatrix}1 \amp 2 \amp 3 \\ c \amp a \amp a \end{pmatrix}\text{.}$

3. $h:\{1,2,3\} \to \{1,2,3\}$ defined as follows:

Solution

1. $f$ is not surjective. There are elements in the codomain which are not in the range. For example, no $n \in \Z$ gets mapped to the number 1 (the rule would say that $\frac{1}{3}$ would be sent to 1, but $\frac{1}{3}$ is not in the domain). In fact, the range of the function is $3\Z$ (the integer multiples of 3), which is not equal to $\Z\text{.}$

2. $g$ is not surjective. There is no $x \in \{1,2,3\}$ (the domain) for which $g(x) = b\text{,}$ so $b\text{,}$ which is in the codomain, is not in the range. Notice that there is an element from the codomain “missing” from the bottom row of the matrix.

3.
$h$ is surjective. Every element of the codomain is also in the range. Nothing in the codomain is missed.

To be a function, a rule cannot assign a single element of the domain to two or more different elements of the codomain. However, we have seen that the reverse is permissible: a function might assign the same element of the codomain to two or more different elements of the domain. When this does not occur (that is, when each element of the codomain is the image of at most one element of the domain) then we say the function is one-to-one. Again, this terminology makes sense: we are sending at most one element from the domain to one element from the codomain. One input to one output. The fancy math term for a one-to-one function is an injection. We call one-to-one functions injective functions. In pictures:

###### Example0.4.7.

Which functions are injective (i.e., one-to-one)?

1. $f:\Z \to \Z$ defined by $f(n) = 3n\text{.}$

2. $g: \{1,2,3\} \to \{a,b,c\}$ defined by $g = \begin{pmatrix}1 \amp 2 \amp 3 \\ c \amp a \amp a \end{pmatrix}\text{.}$

3. $h:\{1,2,3\} \to \{1,2,3\}$ defined as follows:

Solution

1. $f$ is injective. Each element in the codomain is assigned to at most one element from the domain. If $x$ is a multiple of three, then only $x/3$ is mapped to $x\text{.}$ If $x$ is not a multiple of 3, then there is no input corresponding to the output $x\text{.}$

2. $g$ is not injective. Both inputs $2$ and $3$ are assigned the output $a\text{.}$ Notice that there is an element from the codomain that appears more than once on the bottom row of the matrix.

3. $h$ is injective. Each output is only an output once.

Be careful: “surjective” and “injective” are NOT opposites. You can see in the two examples above that there are functions which are surjective but not injective, injective but not surjective, both, or neither. In the case when a function is both one-to-one and onto (an injection and surjection), we say the function is a bijection, or that the function is a bijective function.

To illustrate the contrast between these two properties, consider a more formal definition of each, side by side.

###### Injective vs Surjective.

A function is injective provided every element of the codomain is the image of at most one element from the domain.

A function is surjective provided every element of the codomain is the image of at least one element from the domain.

Notice both properties are determined by what happens to elements of the codomain: they could be repeated as images or they could be “missed” (not be images). Injective functions do not have repeats but might or might not miss elements. Surjective functions do not miss elements, but might or might not have repeats. The bijective functions are those that do not have repeats and do not miss elements.

### SubsectionImage and Inverse Image

When discussing functions, we have notation for talking about an element of the domain (say $x$) and its corresponding element in the codomain (we write $f(x)\text{,}$ which is the image of $x$). Sometimes we will want to talk about all the elements that are images of some subset of the domain. It would also be nice to start with some element of the codomain (say $y$) and talk about which element or elements (if any) from the domain it is the image of. We could write “those $x$ in the domain such that $f(x) = y\text{,}$” but this is a lot of writing. Here is some notation to make our lives easier.
To address the first situation, what we are after is a way to describe the set of images of elements in some subset of the domain. Suppose $f:X \to Y$ is a function and that $A \subseteq X$ is some subset of the domain (possibly all of it). We will use the notation $f(A)$ to denote the image of $A$ under $f$, namely the set of elements in $Y$ that are the image of elements from $A\text{.}$ That is, $f(A) = \{f(a) \in Y \st a \in A\}\text{.}$

We can do this in the other direction as well. We might ask which elements of the domain get mapped to a particular set in the codomain. Let $f:X \to Y$ be a function and suppose $B \subseteq Y$ is a subset of the codomain. Then we will write $f\inv(B)$ for the inverse image of $B$ under $f$, namely the set of elements in $X$ whose image are elements in $B\text{.}$ In other words, $f\inv(B) = \{x \in X \st f(x) \in B\}\text{.}$

Often we are interested in the element(s) whose image is a particular element $y$ of the codomain. The notation above works: $f\inv(\{y\})$ is the set of all elements in the domain that $f$ sends to $y\text{.}$ It makes sense to think of this as a set: there might not be anything sent to $y$ (if $y$ is not in the range), in which case $f\inv(\{y\}) = \emptyset\text{.}$ Or $f$ might send multiple elements to $y$ (if $f$ is not injective). As a notational convenience, we usually drop the set braces around the $y$ and write $f\inv(y)$ instead for this set.

WARNING: $f\inv(y)$ is not an inverse function! Inverse functions only exist for bijections, but $f\inv(y)$ is defined for any function $f\text{.}$ The point: $f\inv(y)$ is a set, not an element of the domain. This is just sloppy notation for $f\inv(\{y\})\text{.}$ To help make this distinction, we would call $f\inv(y)$ the complete inverse image of $y$ under $f$. It is not the image of $y$ under $f\inv$ (since the function $f\inv$ might not exist).

###### Example0.4.8.

Consider the function $f:\{1,2,3,4,5,6\} \to \{a,b,c,d\}$ given by \begin{equation*} f = \begin{pmatrix}1 \amp 2 \amp 3 \amp 4 \amp 5 \amp 6 \\ a \amp a \amp b \amp b \amp b \amp c\end{pmatrix}\text{.} \end{equation*} Find $f(\{1,2,3\})\text{,}$ $f\inv(\{a,b\})\text{,}$ and $f\inv(d)\text{.}$

Solution

$f(\{1,2,3\}) = \{a,b\}$ since $a$ and $b$ are the elements in the codomain to which $f$ sends $1\text{,}$ $2\text{,}$ and $3\text{.}$ $f\inv(\{a,b\}) = \{1,2,3,4,5\}$ since these are exactly the elements that $f$ sends to $a$ and $b\text{.}$ $f\inv(d) = \emptyset$ since $d$ is not in the range of $f\text{.}$

###### Example0.4.9.

Consider the function $g:\Z \to \Z$ defined by $g(n) = n^2 + 1\text{.}$ Find $g(1)$ and $g(\{1\})\text{.}$ Then find $g\inv(1)\text{,}$ $g\inv(2)\text{,}$ and $g\inv(3)\text{.}$

Solution

Note that $g(1) \ne g(\{1\})\text{.}$ The first is an element: $g(1) = 2\text{.}$ The second is a set: $g(\{1\}) = \{2\}\text{.}$ To find $g\inv(1)\text{,}$ we need to find all integers $n$ such that $n^2 + 1 = 1\text{.}$ Clearly only 0 works, so $g\inv(1) = \{0\}$ (note that even though there is only one element, we still write it as a set with one element in it). To find $g\inv(2)\text{,}$ we need to find all $n$ such that $n^2 + 1 = 2\text{.}$ We see $g\inv(2) = \{-1,1\}\text{.}$ Finally, if $n^2 + 1 = 3\text{,}$ then we are looking for an $n$ such that $n^2 = 2\text{.}$ There are no such integers so $g\inv(3) = \emptyset\text{.}$

Since $f\inv(y)$ is a set, it makes sense to ask for $\card{f\inv(y)}\text{,}$ the number of elements in the domain which map to $y\text{.}$
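These set-valued operations are easy to experiment with for finite functions. The sketch below (an illustration, not from the text) encodes Example 0.4.8 as a Python dictionary and recomputes $f(\{1,2,3\})\text{,}$ $f\inv(\{a,b\})\text{,}$ and $f\inv(d)\text{.}$

```python
# Example 0.4.8 as a dictionary: f = (1 2 3 4 5 6 / a a b b b c).
f = {1: "a", 2: "a", 3: "b", 4: "b", 5: "b", 6: "c"}

def image(f, A):
    """f(A) = {f(a) : a in A}."""
    return {f[a] for a in A}

def inverse_image(f, B):
    """f^{-1}(B) = {x : f(x) in B}."""
    return {x for x in f if f[x] in B}

print(image(f, {1, 2, 3}))            # {'a', 'b'}
print(inverse_image(f, {"a", "b"}))   # {1, 2, 3, 4, 5}
print(inverse_image(f, {"d"}))        # set(), i.e. the empty set
```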
###### Example0.4.10.

Find a function $f:\{1,2,3,4,5\} \to \N$ such that $\card{f\inv(7)} = 5\text{.}$

Solution

There is only one such function. We need five elements of the domain to map to the number $7 \in \N\text{.}$ Since there are only five elements in the domain, all of them must map to 7. So \begin{equation*} f = \begin{pmatrix}1 \amp 2 \amp 3 \amp 4 \amp 5 \\ 7 \amp 7 \amp 7 \amp 7 \amp 7\end{pmatrix}\text{.} \end{equation*}

##### Function Definitions.

Here is a summary of all the main concepts and definitions we use when working with functions.

• A function is a rule that assigns each element of a set, called the domain, to exactly one element of a second set, called the codomain.
• Notation: $f:X \to Y$ is our way of saying that the function is called $f\text{,}$ the domain is the set $X\text{,}$ and the codomain is the set $Y\text{.}$
• To specify the rule for a function with small domain, use two-line notation by writing a matrix with each output directly below its corresponding input, as in: \begin{equation*} f = \begin{pmatrix}1 \amp 2 \amp 3 \amp 4 \\ 2 \amp 1 \amp 3 \amp 1 \end{pmatrix}\text{.} \end{equation*}
• $f(x) = y$ means the element $x$ of the domain (input) is assigned to the element $y$ of the codomain. We say $y$ is an output. Alternatively, we call $y$ the image of $x$ under $f$.
• The range is a subset of the codomain. It is the set of all elements which are assigned to at least one element of the domain by the function. That is, the range is the set of all outputs.
• A function is injective (an injection or one-to-one) if every element of the codomain is the image of at most one element from the domain.
• A function is surjective (a surjection or onto) if every element of the codomain is the image of at least one element from the domain.
• A bijection is a function which is both an injection and surjection. In other words, if every element of the codomain is the image of exactly one element from the domain.
• The image of an element $x$ in the domain is the element $y$ in the codomain that $x$ is mapped to. That is, the image of $x$ under $f$ is $f(x)\text{.}$
• The complete inverse image of an element $y$ in the codomain, written $f\inv(y)\text{,}$ is the set of all elements in the domain which are assigned to $y$ by the function.
• The image of a subset $A$ of the domain is the set $f(A) = \{f(a) \in Y \st a \in A\}\text{.}$
• The inverse image of a subset $B$ of the codomain is the set $f\inv(B) = \{x \in X \st f(x) \in B\}\text{.}$

### Exercises

###### 7.

Consider the function $f:\{1,2,3,4,5\} \to \{1,2,3,4\}$ given by the table below:

| $x$ | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| $f(x)$ | 3 | 2 | 4 | 1 | 2 |

1. Is $f$ injective? Explain.
2. Is $f$ surjective? Explain.
3. Write the function using two-line notation.

Solution

1. $f$ is not injective, since $f(2) = f(5)\text{;}$ two different inputs have the same output.
2. $f$ is surjective, since every element of the codomain is an element of the range.
3. $f=\begin{pmatrix}1 \amp 2 \amp 3 \amp 4 \amp 5 \\ 3 \amp 2 \amp 4 \amp 1 \amp 2\end{pmatrix}\text{.}$

###### 8.

Consider the function $f:\{1,2,3,4\} \to \{1,2,3,4\}$ given by the graph below.

1. Is $f$ injective? Explain.
2. Is $f$ surjective? Explain.
3. Write the function using two-line notation.

###### 10.

Suppose $f:\N \to \N$ satisfies the recurrence $f(n+1) = f(n) + 3\text{.}$ Note that this is not enough information to define the function, since we don’t have an initial condition. For each of the initial conditions below, find the value of $f(5)\text{.}$

1. $\displaystyle f(0) = 0\text{.}$
2. $\displaystyle f(0) = 1\text{.}$
3. $\displaystyle f(0) = 2\text{.}$
4. $\displaystyle f(0) = 100\text{.}$

Solution

For each case, you must use the recurrence to find $f(1)\text{,}$ $f(2)$ ... $f(5)\text{.}$ But notice each time you just add three to the previous. We do this 5 times.

1. $\displaystyle f(5) = 15\text{.}$
2. $\displaystyle f(5) = 16\text{.}$
3. $\displaystyle f(5) = 17\text{.}$
4. $\displaystyle f(5) = 115\text{.}$

###### 11.

Suppose $f:\N \to \N$ satisfies the recurrence relation \begin{equation*} f(n+1) = \begin{cases} \frac{f(n)}{2} \amp \text{ if } f(n) \text{ is even} \\ 3f(n) + 1 \amp \text{ if } f(n) \text{ is odd}\end{cases}\text{.} \end{equation*} Note that with the initial condition $f(0) = 1\text{,}$ the values of the function are: $f(1) = 4\text{,}$ $f(2) = 2\text{,}$ $f(3) = 1\text{,}$ $f(4) = 4\text{,}$ and so on, the images cycling through those three numbers. Thus $f$ is NOT injective (and also certainly not surjective). Might it be under other initial conditions?

1. If $f$ satisfies the initial condition $f(0) = 5\text{,}$ is $f$ injective? Explain why or give a specific example of two elements from the domain with the same image.
2. If $f$ satisfies the initial condition $f(0) = 3\text{,}$ is $f$ injective? Explain why or give a specific example of two elements from the domain with the same image.
3. If $f$ satisfies the initial condition $f(0) = 27\text{,}$ then it turns out that $f(105) = 10$ and no two numbers less than 105 have the same image. Could $f$ be injective? Explain.
4. Prove that no matter what initial condition you choose, the function cannot be surjective.

It turns out this is a really hard question to answer in general. The Collatz conjecture is that no matter what the initial condition is, the function will eventually produce 1 as an output. This is an open problem in mathematics: nobody knows the answer.

###### 12.

For each function given below, determine whether or not the function is injective and whether or not the function is surjective.

1. $f:\N \to \N$ given by $f(n) = n+4\text{.}$
2. $f:\Z \to \Z$ given by $f(n) = n+4\text{.}$
3. $f:\Z \to \Z$ given by $f(n) = 5n - 8\text{.}$
4. $f:\Z \to \Z$ given by $f(n) = \begin{cases}n/2 \amp \text{ if } n \text{ is even} \\ (n+1)/2 \amp \text{ if } n \text{ is odd} . \end{cases}$

Solution

1. $f$ is injective, but not surjective (since 0, for example, is never an output).
2. $f$ is injective and surjective. Unlike in the previous question, every integer is an output (of the integer 4 less than it).
3. $f$ is injective, but not surjective (10 is not 8 less than a multiple of 5, for example).
4. $f$ is not injective, but is surjective. Every integer is an output (of twice itself, for example) but some integers are outputs of more than one input: $f(5) = 3 = f(6)\text{.}$

###### 13.

Let $A = \{1,2,3,\ldots,10\}\text{.}$ Consider the function $f:\pow(A) \to \N$ given by $f(B) = |B|\text{.}$ That is, $f$ takes a subset of $A$ as an input and outputs the cardinality of that set.

1. Is $f$ injective? Prove your answer.
2. Is $f$ surjective? Prove your answer.
3. Find $f\inv(1)\text{.}$
4. Find $f\inv(0)\text{.}$
5. Find $f\inv(12)\text{.}$

Solution

1. $f$ is not injective. To prove this, we must simply find two different elements of the domain which map to the same element of the codomain. Since $f(\{1\}) = 1$ and $f(\{2\}) = 1\text{,}$ we see that $f$ is not injective.
2. $f$ is not surjective.
The largest subset of $A$ is $A$ itself, and $|A| = 10\text{.}$ So no natural number greater than 10 will ever be an output.

3. $f\inv(1) = \{\{1\}, \{2\}, \{3\}, \ldots \{10\}\}$ (the set of all the singleton subsets of $A$).
4. $f\inv(0) = \{\emptyset\}\text{.}$ Note, it would be wrong to write $f\inv(0) = \emptyset$ - that would claim that there is no input which has 0 as an output.
5. $f\inv(12) = \emptyset\text{,}$ since there are no subsets of $A$ with cardinality 12.

###### 15.

Consider the set $\N^2 = \N \times \N\text{,}$ the set of all ordered pairs $(a,b)$ where $a$ and $b$ are natural numbers. Consider a function $f: \N^2 \to \N$ given by $f((a,b)) =a+b\text{.}$

1. Let $A = \{(a,b) \in \N^2 \st a, b \le 10\}\text{.}$ Find $f(A)\text{.}$
2. Find $f\inv(3)$ and $f\inv(\{0,1,2,3\})\text{.}$
3. Give geometric descriptions of $f\inv(n)$ and $f\inv(\{0, 1, \ldots, n\})$ for any $n \ge 1\text{.}$
4. Find $\card{f\inv(8)}$ and $\card{f\inv(\{0,1, \ldots, 8\})}\text{.}$

###### 16.

Let $f:X \to Y$ be some function. Suppose $3 \in Y\text{.}$ What can you say about $f\inv(3)$ if you know,

1. $f$ is injective? Explain.
2. $f$ is surjective? Explain.
3. $f$ is bijective? Explain.

Solution

1. $|f\inv(3)| \le 1\text{.}$ In other words, either $f\inv(3)$ is the empty set or is a set containing exactly one element. Injective functions cannot have two elements from the domain both map to 3.
2. $|f\inv(3)| \ge 1\text{.}$ In other words, $f\inv(3)$ is a set containing at least one element, possibly more. Surjective functions must have something map to 3.
3. $|f\inv(3)| = 1\text{.}$ There is exactly one element from $X$ which gets mapped to 3, so $f\inv(3)$ is the set containing that one element.

###### 17.

Find a set $X$ and a function $f:X \to \N$ so that $f\inv(0) \cup f\inv(1) = X\text{.}$

Solution

$X$ can really be any set, as long as $f(x) = 0$ or $f(x) = 1$ for every $x \in X\text{.}$ For example, $X = \N$ and $f(n) = 0$ works.

###### 18.

What can you deduce about the sets $X$ and $Y$ if you know,

1. there is an injective function $f:X \to Y\text{?}$ Explain.
2. there is a surjective function $f:X \to Y\text{?}$ Explain.
3. there is a bijective function $f:X \to Y\text{?}$ Explain.

###### 19.

Suppose $f:X \to Y$ is a function. Which of the following are possible? Explain.

1. $f$ is injective but not surjective.
2. $f$ is surjective but not injective.
3. $|X| = |Y|$ and $f$ is injective but not surjective.
4. $|X| = |Y|$ and $f$ is surjective but not injective.
5. $|X| = |Y|\text{,}$ $X$ and $Y$ are finite, and $f$ is injective but not surjective.
6. $|X| = |Y|\text{,}$ $X$ and $Y$ are finite, and $f$ is surjective but not injective.

###### 20.

Let $f:X \to Y$ and $g:Y \to Z$ be functions. We can define the composition of $f$ and $g$ to be the function $g\circ f:X \to Z$ for which the image of each $x \in X$ is $g(f(x))\text{.}$ That is, plug $x$ into $f\text{,}$ then plug the result into $g$ (just like composition in algebra and calculus).

1. If $f$ and $g$ are both injective, must $g\circ f$ be injective? Explain.
2. If $f$ and $g$ are both surjective, must $g\circ f$ be surjective? Explain.
3. Suppose $g\circ f$ is injective. What, if anything, can you say about $f$ and $g\text{?}$ Explain.
4. Suppose $g\circ f$ is surjective. What, if anything, can you say about $f$ and $g\text{?}$ Explain.

Hint

Work with some examples. What if $f = \twoline{1\amp 2 \amp 3}{a \amp a \amp b}$ and $g = \twoline{a\amp b \amp c}{5 \amp 6 \amp 7}\text{?}$

###### 21.
###### 21.

Consider the function $f:\Z \to \Z$ given by $f(n) = \begin{cases}n+1 \amp \text{ if }n\text{ is even} \\ n-3 \amp \text{ if }n\text{ is odd} . \end{cases}$ 1. Is $f$ injective? Prove your answer. 2. Is $f$ surjective? Prove your answer. Solution 1. $f$ is injective.

###### Proof.

Let $x$ and $y$ be elements of the domain $\Z\text{.}$ Assume $f(x) = f(y)\text{.}$ If $x$ and $y$ are both even, then $f(x) = x+1$ and $f(y) = y+1\text{.}$ Since $f(x) = f(y)\text{,}$ we have $x + 1 = y + 1$ which implies that $x = y\text{.}$ Similarly, if $x$ and $y$ are both odd, then $x - 3 = y-3$ so again $x = y\text{.}$ The only other possibility is that $x$ is even and $y$ is odd (or vice versa). But then $x + 1$ would be odd and $y - 3$ would be even, so it cannot be that $f(x) = f(y)\text{.}$ Therefore, if $f(x) = f(y)\text{,}$ then $x = y\text{,}$ which proves that $f$ is injective. 2. $f$ is surjective.

###### Proof.

Let $y$ be an element of the codomain $\Z\text{.}$ We will show there is an element $n$ of the domain ($\Z$) such that $f(n) = y\text{.}$ There are two cases: First, if $y$ is even, then let $n = y+3\text{.}$ Since $y$ is even, $n$ is odd, so $f(n) = n-3 = y+3-3 = y$ as desired. Second, if $y$ is odd, then let $n = y-1\text{.}$ Since $y$ is odd, $n$ is even, so $f(n) = n+1 = y-1+1 = y$ as needed. Therefore $f$ is surjective.

###### 22.

At the end of the semester a teacher assigns letter grades to each of her students. Is this a function? If so, what sets make up the domain and codomain, and is the function injective, surjective, bijective, or neither? Solution Yes, this is a function, if you choose the domain and codomain correctly. The domain will be the set of students, and the codomain will be the set of possible grades. The function is almost certainly not injective, because it is likely that two students will get the same grade. The function might be surjective; it will be if there is at least one student who gets each grade.

###### 23.

In the game of Hearts, four players are each dealt 13 cards from a deck of 52. Is this a function? If so, what sets make up the domain and codomain, and is the function injective, surjective, bijective, or neither?

###### 24.

Seven players are playing 5-card stud. Each player initially receives 5 cards from a deck of 52. Is this a function? If so, what sets make up the domain and codomain, and is the function injective, surjective, bijective, or neither? Solution This is not a function: if we take the domain to be the set of cards and the codomain the set of players (as in the previous question), only $7 \cdot 5 = 35$ of the 52 cards are dealt, so some cards have no image.

###### 25.

Consider the function $f:\N \to \N$ that gives the number of handshakes that take place in a room of $n$ people assuming everyone shakes hands with everyone else. Give a recursive definition for this function. Hint To find the recurrence relation, consider how many new handshakes occur when person $n+1$ enters the room. Solution The recurrence relation is $f(n+1) = f(n) + n\text{,}$ together with the initial condition $f(1) = 0\text{.}$

###### 26.

Let $f:X \to Y$ be a function and $A \subseteq X$ be a finite subset of the domain. What can you say about the relationship between $\card{A}$ and $\card{f(A)}\text{?}$ Consider both the general case and what happens when you know $f$ is injective, surjective, or bijective. Solution In general, $\card{A} \ge \card{f(A)}\text{,}$ since you cannot get more outputs than you have inputs (each input goes to exactly one output), but you could have fewer outputs if the function is not injective. If the function is injective, then $\card{A} = \card{f(A)}\text{,}$ although you can have equality even if $f$ is not injective (it need only be injective when restricted to $A$).
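As a concrete check of the inequality in Exercise 26, here is a tiny Python sketch (our addition, not from the text) where a non-injective $f$ collapses inputs and the image comes out strictly smaller:

```python
def image(f, A):
    """Compute the image f(A) = { f(a) : a in A } of a finite set A."""
    return {f(a) for a in A}

A = {-2, -1, 0, 1, 2}
f = lambda n: n * n                 # not injective on A: f(-1) == f(1)

print(image(f, A))                  # {0, 1, 4}
print(len(A), len(image(f, A)))     # 5 3 -- so |A| >= |f(A)|, strictly here
```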
###### 27.

Let $f:X \to Y$ be a function and $B \subseteq Y$ be a finite subset of the codomain. What can you say about the relationship between $\card{B}$ and $\card{f\inv(B)}\text{?}$ Consider both the general case and what happens when you know $f$ is injective, surjective, or bijective. Solution In general, there is no relationship between $\card{B}$ and $\card{f\inv(B)}\text{.}$ This is because $B$ might contain elements that are not in the range of $f\text{,}$ so we might even have $f\inv(B) = \emptyset\text{.}$ On the other hand, there might be lots of elements from the domain that all get sent to a few elements in $B\text{,}$ making $f\inv(B)$ larger than $B\text{.}$ More specifically, if $f$ is injective, then $\card{B} \ge \card{f\inv(B)}$ (since every element in $B$ must come from at most one element of the domain). If $f$ is surjective, then $\card{B} \le \card{f\inv(B)}$ (since every element in $B$ must come from at least one element of the domain). Thus if $f$ is bijective then $\card{B} = \card{f\inv(B)}\text{.}$

###### 28.

Let $f:X \to Y$ be a function, $A \subseteq X$ and $B \subseteq Y\text{.}$ 1. Is $f\inv\left(f(A)\right) = A\text{?}$ Always, sometimes, never? Explain. 2. Is $f\left(f\inv(B)\right) = B\text{?}$ Always, sometimes, never? Explain. 3. If one or both of the above do not always hold, is there something else you can say? Will equality always hold for particular types of functions? Is there some other relationship other than equality that would always hold? Explore.

###### 29.

Let $f:X \to Y$ be a function and $A, B \subseteq X$ be subsets of the domain. 1. Is $f(A \cup B) = f(A) \cup f(B)\text{?}$ Always, sometimes, or never? Explain. 2. Is $f(A \cap B) = f(A) \cap f(B)\text{?}$ Always, sometimes, or never? Explain. Hint One of these is not always true. Try some examples!

###### 30.

Let $f:X \to Y$ be a function and $A, B \subseteq Y$ be subsets of the codomain. 1. Is $f\inv(A \cup B) = f\inv(A) \cup f\inv(B)\text{?}$ Always, sometimes, or never? Explain. 2. Is $f\inv(A \cap B) = f\inv(A) \cap f\inv(B)\text{?}$ Always, sometimes, or never? Explain.
http://www.ams.org/mathscinet-getitem?mr=0226486
MathSciNet bibliographic data MR226486 50.70 Dembowski, Peter; Ostrom, T. G. Planes of order $n$ with collineation groups of order $n^2$. Math. Z. 103 (1968), 239–258.
http://www.ams.org/mathscinet-getitem?mr=1660366
MathSciNet bibliographic data MR1660366 35Q30 (73D30 76D05) Clopeau, Thierry; Mikelić, Andro; Robert, Raoul On the vanishing viscosity limit for the $2{\rm D}$ incompressible Navier-Stokes equations with the friction type boundary conditions. Nonlinearity 11 (1998), no. 6, 1625–1636.
http://www.lastfm.se/user/roxiehart/library/music/Hannah+Montana/_/The+Best+of+Both+Worlds?setlang=sv
# Library Music » Hannah Montana »

## The Best of Both Worlds

16 played tracks

| Track | Album | Length | Date |
|---|---|---|---|
| The Best of Both Worlds | | 2:54 | 1 Feb 2008, 16:17 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 20:23 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 18:27 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 18:24 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 16:43 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 16:40 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 16:37 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 16:34 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 16:31 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 16:10 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 16:10 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 16:07 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 16:04 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 15:58 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 15:50 |
| The Best of Both Worlds | | 2:54 | 31 Jan 2008, 15:47 |
https://api-project-1022638073839.appspot.com/questions/how-do-you-integrate-int-1-x-2-2x-1-using-partial-fractions
# How do you integrate int 1/(x^2(2x-1)) using partial fractions?

Apr 17, 2018

$2 \ln | 2 x - 1 | - 2 \ln | x | + \frac{1}{x} + C$

#### Explanation:

We need to find $A , B , C$ such that $\frac{1}{{x}^{2} \left(2 x - 1\right)} = \frac{A}{x} + \frac{B}{{x}^{2}} + \frac{C}{2 x - 1}$ for all $x$. Multiply both sides by ${x}^{2} \left(2 x - 1\right)$ to get

$1 = A x \left(2 x - 1\right) + B \left(2 x - 1\right) + C {x}^{2}$

$1 = 2 A {x}^{2} - A x + 2 B x - B + C {x}^{2}$

$1 = \left(2 A + C\right) {x}^{2} + \left(2 B - A\right) x - B$

Equating coefficients gives us

$\left\{\begin{matrix}2 A + C = 0 \\ 2 B - A = 0 \\ - B = 1\end{matrix}\right.$

And thus we have $A = - 2 , B = - 1 , C = 4$. Substituting this in the initial equation, we get

$\frac{1}{{x}^{2} \left(2 x - 1\right)} = \frac{4}{2 x - 1} - \frac{2}{x} - \frac{1}{{x}^{2}}$

Now, integrate it term by term

$\int \frac{4}{2 x - 1} \, \mathrm{dx} - \int \frac{2}{x} \, \mathrm{dx} - \int \frac{1}{{x}^{2}} \, \mathrm{dx}$

to get

$2 \ln | 2 x - 1 | - 2 \ln | x | + \frac{1}{x} + C$

Apr 17, 2018

The answer is $= \frac{1}{x} - 2 \ln \left(| x |\right) + 2 \ln \left(| 2 x - 1 |\right) + C$

#### Explanation:

Perform the decomposition into partial fractions

$\frac{1}{{x}^{2} \left(2 x - 1\right)} = \frac{A}{{x}^{2}} + \frac{B}{x} + \frac{C}{2 x - 1} = \frac{A \left(2 x - 1\right) + B x \left(2 x - 1\right) + C \left({x}^{2}\right)}{{x}^{2} \left(2 x - 1\right)}$

The denominators are the same; compare the numerators

$1 = A \left(2 x - 1\right) + B x \left(2 x - 1\right) + C \left({x}^{2}\right)$

Let $x = 0$, $\implies$, $1 = - A$, $\implies$, $A = - 1$

Let $x = \frac{1}{2}$, $\implies$, $1 = \frac{C}{4}$, $\implies$, $C = 4$

Coefficients of ${x}^{2}$: $0 = 2 B + C$, so $B = - \frac{C}{2} = - \frac{4}{2} = - 2$

Therefore,

$\frac{1}{{x}^{2} \left(2 x - 1\right)} = - \frac{1}{{x}^{2}} - \frac{2}{x} + \frac{4}{2 x - 1}$

So,

$\int \frac{\mathrm{dx}}{{x}^{2} \left(2 x - 1\right)} = - \int \frac{\mathrm{dx}}{{x}^{2}} - \int \frac{2 \mathrm{dx}}{x} + \int \frac{4 \mathrm{dx}}{2 x - 1} = \frac{1}{x} - 2 \ln \left(| x |\right) + 2 \ln \left(| 2 x - 1 |\right) + C$
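Both answers can be machine-checked. Here is a small SymPy sketch (our addition, not part of either answer) that reproduces the decomposition and verifies the antiderivative by differentiating it back:

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (x**2 * (2*x - 1))

# Partial-fraction decomposition; SymPy returns 4/(2*x - 1) - 2/x - 1/x**2,
# matching the hand computation (term order may differ).
print(sp.apart(f, x))

# Antiderivative; SymPy may write the logs differently (e.g. log(x - 1/2)),
# which differs from 2*log|2x - 1| - 2*log|x| + 1/x only by a constant.
F = sp.integrate(f, x)
print(F)

# Differentiating back and simplifying gives 0, confirming F' = f.
print(sp.simplify(sp.diff(F, x) - f))
```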
https://www.electro-tech-online.com/threads/time-display-strange-issue.130141/
# Time Display strange issue

#### pigman ##### New Member

Hey, I'm messing with this function I found a while back to display milliseconds in a countdown timer, and it does something strange: when I feed it 6 hours, it displays 00:12:14 instead of 06:00:00. Anyone have any ideas? I didn't write it; I found it online. Cheers. Here is the code.

Code:
void print_time(unsigned long t_milli)
{
    char buffer[20];
    int days, hours, mins, secs;
    int fractime;
    unsigned long inttime;

    inttime = t_milli / 1000;
    fractime = t_milli % 1000;
    // inttime is the total number of seconds
    // fractime is the number of thousandths of a second

    // number of days is total number of seconds divided by 24 divided by 3600
    days = inttime / (24*3600);
    inttime = inttime % (24*3600);

    // Now, inttime is the remainder after subtracting the number of seconds
    // in the number of days
    hours = inttime / 3600;
    inttime = inttime % 3600;

    // Now, inttime is the remainder after subtracting the number of seconds
    // in the number of days and hours
    mins = inttime / 60;
    inttime = inttime % 60;

    // Now inttime is the number of seconds left after subtracting the number
    // in the number of days, hours and minutes. In other words, it is the
    // number of seconds.
    secs = inttime;

    // Don't bother to print days
    //sprintf(buffer, "%02d:%02d:%02d.%03d", hours, mins, secs, fractime);
    sprintf(buffer, "%02d:%02d:%02d", hours, mins, secs);
    lcd.print(buffer);
}

#### dougy83 ##### Well-Known Member

print_time(1000 * 6 * 3600); displays 06:00:00 as expected...

EDIT: but as you're probably using Arduino, the standard 'int' is only 16 bits. Modify the lines for the days to be

Code:
// number of days is total number of seconds divided by 24 divided by 3600
days = inttime / (24*3600ul);
inttime = inttime % (24*3600ul);

It should be fine then. The problem was that 24*3600 is more than 16 bits.

#### pigman ##### New Member

AWESOME! Thanks heaps, mate, that did the trick!
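The arithmetic behind the fix can be reproduced off-target. Below is a small Python sketch (ours, not from the thread) that simulates the 16-bit wrap of the constant 24*3600 and reruns the original division chain; the result lands near the garbled time the poster saw (the exact seconds depend on the milliseconds fed in, and signed overflow is technically undefined in C, so the wrap shown is just the one this build produced):

```python
# 24*3600 = 86400 does not fit in a 16-bit int; on the poster's Arduino it
# wrapped modulo 2**16 to 86400 - 65536 = 20864.
wrapped_day = (24 * 3600) % 2**16       # 20864 instead of 86400

inttime = 6 * 3600                      # 21600 s for a six-hour countdown
days = inttime // wrapped_day           # 1 "day" -- already wrong
inttime %= wrapped_day                  # 736 s left over

hours, rem = divmod(inttime, 3600)      # 0 h
mins, secs = divmod(rem, 60)            # 12 min 16 s
print(f"{hours:02d}:{mins:02d}:{secs:02d}")   # 00:12:16, close to the reported 00:12:14
```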
https://asmedc.silverchair.com/fluidsengineering/article-abstract/128/4/726/466646/Experimental-Investigation-on-the-Onset-of-Gas?redirectedFrom=fulltext
An experimental investigation has been carried out to simulate the onset-of-gas-entrainment phenomenon from a stratified region through branches located on a semicircular wall configuration, in close dimensional resemblance with a Canada Deuterium Uranium (CANDU) header-feeder system. New experimental data for the onset of gas entrainment were developed during single and multiple discharge from an air/water stratified region over a wide range of Froude numbers (0 to 100), in order to thoroughly understand the onset-of-gas-entrainment phenomenon. It was found that the critical height at the onset of gas entrainment (single or simultaneous) was a function of the corresponding Froude number of each branch, the vertical distance between the centerlines of the branches (for multiple discharge), the hydraulic resistance of the discharging lines, as well as the orientation of the branches and their diameter with respect to the main header. Concerning multiple discharge comparisons, at intermediate Fr values $(1 < \mathrm{Fr} < 10)$ the data deviate; however, at higher Fr values $(>10)$ there is convergence. The present data are necessary in validating future analytical and numerical models of the onset of gas entrainment for a curved geometry, particularly at low Froude numbers.
http://forum.cubcrafters.com/showthread.php/3510-CubCrafters-Ventral-fin-install?s=51cd96c98bd4454fdd7a087a9b60b44a&p=23607&mode=threaded
## Re: CubCrafters Ventral fin install

Originally Posted by Pete D: Double-check the drawing package, but I believe yes, the Claymar installation would require a ventral. There are different-size/slightly different-shape ventrals for the CC11/Bauman, CC18/Wipline, CC19/Wipline, and Claymar installations on the EX/FX, but the bracket installation is very similar and shares some parts between all of them.

Pete, you don't mention Aerocet's above - not applicable? I'm installing a set now on a new EX-2, and haven't seen mention of a ventral fin. Thanks, pb
https://www.embeddedrelated.com/showarticle/56.php
# Bringing up Baby - product development thoughts

Things have just started to get exciting. After months of defining, specifying and designing my latest product, I finally have semi-functional prototypes. After a few side steps during the building and bring-up process, power is applied and most of the low-level functions have been verified. Soon, software will meet hardware and debugging can begin in earnest. Before jumping in and really enjoying the fun (besides, I'm now waiting for some new parts to arrive), I thought it would be nice to look back at the steps that brought me to this point. As with almost every project that I have been involved with, this one has taught me a few new lessons. If we don't learn from the challenges we run into, we will continue to face them (over and over...). What follows is a list of some of the many steps necessary in the creation of a new product, along with some of the checks that should be in place to make sure that problems are spotted as soon as possible. I have also offered some ideas and opinions that might be helpful to anyone willing to learn from others' mistakes and challenges.

Decide on a product

Any good product starts as a good idea. It is important to research existing products and competitors prior to putting large amounts of effort into an idea that might already exist. Keep focused; fight the impulse to add too many features.

Write a specification

If you don't specify what you are going to produce, how will you know if you got it right when you are done? How will you know if you are done? For a single-person project a simple specification is sufficient. The more people involved, or the more complex the project, the more detailed the specification will need to be. Again, keep focused and keep the specification limited to the features that are needed.

Block out a design

Begin with a simple block diagram of the product. Get a rough idea of how the features will be implemented (both in hardware and software). Start sizing memory (FLASH, RAM, EEPROM) and speed requirements of your microprocessor (I assume that you are using one, or you might want to look at other websites). Also begin to determine needed items like peripherals, I/O pins and other features. By firming up the design at this point, some of the following steps will get much easier.

Select components

With your rough design in hand, start looking for the parts that will be needed to fulfill the design requirements. This is where I start looking at things like cost and availability. No point in starting off a new product with parts that will no longer be available, or are just very hard to obtain (i.e. near end of life). Also, by looking into cost at this point, you can avoid adding a $10.00 part to a potential $5.00 product. One of the pivotal decisions in most of my designs is picking the microprocessor. By beginning with the information from the sizing exercise of the previous step, we develop a good starting point for this decision. Add to this starting point issues of development tools (compilers, programmers, emulators, etc.) and experience working with various families of microprocessors to help narrow down the choices. Here too, we need to look at potential risks to the project. Do I want to write low-level USB code in the microprocessor, or use a simple USB to serial converter chip? Now is the time to trade cost for complexity and risk.
As an example, by spending an additional $0.50 per board, you could potentially eliminate months of software development by leveraging pre-built functionality over risk-intensive development. I also like to take a little bit of time here to examine growth potential before making a final decision on all the components. Things like: are there pin-compatible family members of the microprocessor with more speed and/or memory? With the other components, are there available pin-compatible parts with greater speed and/or precision (i.e. ADC, DAC, references, regulators, etc.)? When building up your list of components, it is sometimes a good idea to look for parts that are available from a number of vendors (either IC vendors or distributors). It is always better to have choices, should the part you need not be available at the time you will be ordering it. In the past few years this has been a recurring problem, as both IC vendors and distributors have been phasing out leaded components. Several times I would look one day, finding ample supply of the parts I needed, only to find that none were available when it was time to place an order. As always, having multiple choices of components/vendors is better than not having any choices at all. If, while looking for parts, you find a part that you really like that is not currently stocked, first try to modify your design to eliminate this part. If you cannot design around a 'no stock' part, order the part as soon as possible, overlapping the procurement delay with the time used to complete your design. There is some risk involved in ordering early, but there is also risk in waiting around for a long-lead item that you just got around to ordering. Some vendors/distributors will sample a small quantity of their parts, so that you might be able to get a few hard-to-get parts to handle your prototype needs.

Generate a schematic

With a good idea of what we want to achieve and a rough idea of the parts we intend to use, we can further firm up the design. For me, the process of generating a schematic is somewhat iterative. As I place components and run wires, any gaps in the design start to stand out. Design checking tools also help to isolate deficiencies in the design (like inputs with no outputs, or multiple outputs on a node). (Note: this will only be helpful if your component models are correct and complete.) Depending on your tool set, this may also be the time that you define the package types of the devices that you are using. This is the point where you need to look ahead a little bit. Your choice of package types should take into account several factors, like space available (i.e. board real estate), assembly cost or limitations (i.e. pin pitch and spacing) and PCB costs (i.e. trace width, spacing requirements and hole sizes can affect PCB cost). Looking ahead now could save you headaches and additional cost later.

Generate a bill of materials

With your schematic complete and all of your components identified, now is a good time to document your choices. By identifying part counts and manufacturers' part numbers for all components, you can begin to determine where you are going to get the parts and how much this will cost you. For potential products (as opposed to one-time builds), I like to look at near-term needs and future volumes when researching pricing for components.
Some manufacturers offer deeper price cuts for higher volumes, so thinking about future volumes can be important to the selection of components at this early point in a potential product life cycle.

Lay out the PCBs

With a complete schematic and a bill of materials, you can now begin laying out your PCB. I usually start with a rough placement of all components, focusing on minimizing all high-speed and/or high-current connections. With a rough placement, I determine the overall dimensions of the board, placement of connectors and location of mounting holes, if any. At this point, I adjust my component positioning, attempting to avoid crowding and, wherever possible, align components on grids, with similar orientations (pin 1 pointing in the same direction). With the components and connectors in place, I start to work through the power and ground connections first. There are many rules and guides that should be followed in running power and ground traces. I will not attempt to define all of these here, but I will give you a few points to consider.

• Keep bypass capacitors as close as possible to the appropriate power pins on devices.
• Use proper trace widths and minimize the length of the traces.
• Avoid vias or feed-throughs between the bypass capacitors and the power pins.
• Manage all power and ground paths with 'star patterns', emanating from single points (i.e. main ground connection, or regulator outputs), where possible.
• Avoid current loops (closed paths, or multiple paths for current flows).
• Avoid running high-speed signals in parallel to power and ground traces.

In addition to the above points, on boards with a mix of high-speed signals and low-level analog signals, I try to minimize the use of power and ground planes. On these types of boards, the high-speed signals can (and quite often do) capacitively couple through to the power/ground planes, inducing noise into the sensitive analog components. In these cases I run thick traces, radiating outward from a single point, towards the components. Power components of like function (analog or digital) separately and use inductors/ferrite beads to decouple high-frequency signals from the power lines. Most of the boards that I have built have been 4 or more layers. I choose to bury the majority of power and ground traces in the inner layers (ground on one, power on another), leaving the two outer layers for signal routing. With power and ground traces laid, I usually focus next on the high-speed and low-level signals. With these signals I attempt to keep the trace lengths as short as possible. In the case of the high-speed signals, I also endeavor to route these traces away from power and ground traces, favoring right-angle crosses of inner-layer traces, as opposed to running in parallel with them. Finally, I focus on routing the remaining signals, moving from the fastest signals towards the slower, less noise-sensitive signals (like pull-up resistors on static pins). With all the traces in place, I go back over the analog sections of the board, looking for areas that could benefit from ground flooding (addition of copper ground planes to protect sensitive areas from noise). I usually attempt to ground flood areas that are using high-impedance inputs and very low-level signals. When using ground flooding, I try to avoid including areas that have high-speed or high-energy signals within them, or running under them, as these may again capacitively couple these signals into the ground-flooded copper planes.
Board layout is a very complex process. Much has been written as to the correct steps and procedures; my suggestions are intended as just that: suggestions. Board layout is more of a practiced art; only years of experience will lead you to the skills necessary to develop good boards that are small, clean, reliable and free from performance-diminishing noise.

Recheck layout/test fit parts

Once your board is laid out, take some time to go over your work in detail. I usually print out my board plots on paper, checking all component pads for proper fit. Using spare components, I attempt to verify that the part outlines are correct, leaving proper clearance on the pads for good solder joints. I also double-check all my connector placements to be sure that the pins and guide pins (if any) all align properly and that the bodies of the connectors are not interfering with other components. This is also a good time to go over the silk screen layout. Check that the component call-outs are visible, clear and unbroken by vias or feed-throughs. Also check on pin 1 markings (or other markings for proper orientation) for all components. Don't forget polarity marks on polarized parts, i.e. electrolytic capacitors, diodes, etc. The silk screen is a guide to how to assemble and troubleshoot your work; now is the time to make sure it will be helpful.

Order boards

Now you are ready to start spending some serious money. This is where I take a couple of deep breaths and then pull the trigger (or, on most websites, click on the submit button). Some PCB vendors have on-line design rule checkers to ensure that your layout does not violate any of their trace width and spacing requirements. If available, use these tools. I have found small differences between vendor tools and my own tools when it comes to spacing checks. I trust the vendor tools, and make any changes that are called out. Shopping for PCB vendors can be tricky. I have seen some inexpensive boards that were good and some that were bad. Vendors that had, in the past, consistently produced high-quality boards later turned out a few boards with broken traces. Wherever possible, get recommendations from others when working with a new vendor.

Order parts

Your PCB may take some time to be manufactured and shipped. Often the price that you pay determines your job's priority in the manufacturing process. I use this time to start ordering my parts. With the bill of materials generated in a prior step, you should now know the quantity and value/type of the components you will need. Using this as your guide, place your order for parts. Keep an eye on quantities when ordering your parts. If, for example, you need 20 of a given part, check the next higher quantity on the distributor's site to see if it is more cost-effective to order 25 parts (at a deeper discount). The spare parts can come in handy later (like during the assembly and/or debug stages). Keep in mind that you should have some spare parts on hand to handle any waste in the assembly process (some board assembly houses require that you have 5-10% waste included in your build kits). In the process of building my boards, I usually send a few very tiny parts flying off to who knows where. Sometimes you may find that a distributor is out of stock of a part you have identified for your design. This is where good planning can help. By having identified other components or vendors during the part selection step, you should be able to switch over to another part that will work for your design.
Check ordered parts against the Bill of Materials

Most likely, your parts will arrive before your PCBs (unless you spent the extra money to expedite their manufacture and/or delivery). As a matter of course, I always re-check the parts against the invoice, looking at the part values, part numbers and quantities. On most integrated components I also check that the package type (SOIC, TSSOP, etc.) is correct. On my most recent project I learned a new lesson. I should have carefully checked the somewhat cryptic part numbering on the components themselves. The distributor had made an error and sent me the wrong part. The first thing I should have noticed was the incorrect package. Even though the bag listed the part as an MSOP-8, it was in fact a TSSOP-8. These parts have the same pin pitch, but have different body widths. I was, at the time, willing to accept that I had somehow incorrectly built the component pad layout for this part, rather than suspect that the distributor had sent the wrong thing. Later, during the assembly step, I somehow managed to solder the larger component onto the pads of the smaller footprint. It was only later, during the debugging of the board, that I found I was unable to get the part to function as expected. After many hours of debugging, I finally began to question the correctness of the part and began to look at the part marking on the chip. Morals of this lesson:

• Have some kind of incoming inspection to verify that what you have received is what you expected. Incoming inspection is cheaper than correction of an error.
• When something does not look right, spend the extra time to determine why it seems wrong. Question everything at the first sign of a problem, and correct the issue at the earliest opportunity.

Check boards for workmanship and shorts

With new boards in hand, I first check the boards (under a microscope) to assure that they appear to be manufactured correctly. In doing this, look for any traces that end abruptly without a via or a component pad (breaks), thinned-out traces (over-etched) or shorted traces (under-etched). Also, make sure that drilled holes are centered in the pads (hole registration), as this can point to broken or shorted traces in the inner layers of the board. If your boards are pre-tinned, make sure that there are no high spots in the solder that will make it difficult to place components flatly on the board. In addition to visual inspection, I usually check electrically for shorts and opens, especially on all power and ground traces, where the effects can be the most damaging (i.e. gross failure and/or the generation of heat and/or smoke).

Assemble boards

If able, I like to hand-assemble the first round of prototype boards. This allows me to see any potential areas of the assembly process that might be difficult or unclear for others less familiar with my design. Be sure to keep notes of any findings, as these can become instructions for the next person who needs to build future boards (or input into the next design or re-design process). I try to keep one complete set of documentation (schematic, layout plots, silk screen drawings and bill of materials) that is marked as 'Master', used to track any changes or corrections. In the assembly process, I like to place smaller (lower-profile) components first. I do this to allow myself as much access as possible for using clamping tweezers (self-closing or reverse tweezers) to hold components in place while soldering.
I move to the highest pin-count or smallest pin-pitch components next (assuming that they are also very low profile). I finish by adding all the higher-profile components, as I am still able to hold them in place with the tweezers (easier to clamp over lower-profile parts). Also, within these groups of components, I try to work outwards from the center of the board to the edges, again making it easier to clamp components in place. If I am building a couple of prototype boards at a time, I usually choose to add a given component type (or value) to all the boards at once. This limits the handling of components and makes it easier for me to avoid some assembly errors. Repeating the same step a couple of times makes it easier to see errors, like putting a part in the wrong spot, as we tend to see differences more clearly.

Check out the assembled boards

This is a many-step process. The order of steps is often affected by the complexity of the design, the access to good test points and many other factors. What follows are some of the simpler steps that I try to use in all my check-out processes. Start with visual inspection of the boards and the solder joints. Even if I did the soldering (and sometimes just because I did the soldering), I like to pass the boards under a microscope one more time to look for errors or mistakes. I look for cold or broken solder joints, excess solder (short circuits or bridges) and correct component placement/orientation. I also look for free solder balls and excess flux build-ups left over from the assembly process and clean the boards as necessary. When the boards 'look' good, it's time again to check electrically for opens and shorts in power and ground paths. Using an ohmmeter, measure the impedance between power and ground paths (in both directions). I also like to check power and ground continuities to all the major components, if possible. Satisfied that things should not blow up, I bring up the power and quickly measure voltage levels on all the main power circuits (regulator outputs and inputs, and other voltage source points). Next, I like to check the current draw of the board to assure that it is in the correct range. This sometimes requires a little rework or fixturing to accomplish, but it is usually worth the effort. I also like to take a quick temperature reading on the power components (just a quick touch is usually sufficient, as long as it is safe to do so, i.e. no high voltage present in the area). My next step is usually to connect a device programmer and see if the microprocessor is functioning. Most programmers and devices have a simple method to test the device ID of the chip. If the ID looks good, it's safe to move to the next step (deep breath). Most microprocessors have 'fuses' (programmable settings) that control things like basic modes of operation, oscillator types, etc. My next step is to program these fuses so that the microprocessor begins to function. Here is where things really get fun. I prefer to take a low-level approach to bringing up a board. To verify that the microprocessor is operating at the speed I expect, I have it write a counting pattern to all of the available output pins. I look for a unique square-wave pattern on each of the outputs using an oscilloscope (another deep breath). Looking at the highest-frequency output, I compare the value to the target microprocessor frequency. The ratio of these two values should be proportional to the number of instructions taken to increment a variable, output it to a port, and then loop back for the next iteration. Looking at the compiled program listing, you can count the number of instruction cycles needed for the opcodes to complete the loop. The instruction count, times two, should match the ratio between the two frequencies (as the loop needs to be executed twice to toggle an output high then low).
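To make that ratio concrete, here is a tiny Python sketch (our addition; the clock and cycle counts are hypothetical examples, not from the article) computing the square-wave frequency you would expect on the lowest counter bit:

```python
# Hypothetical example: a 16 MHz part whose increment/output/loop sequence
# takes 10 instruction cycles per pass. The low bit toggles once per pass,
# and a full square wave needs two passes (one high, one low).
cpu_hz = 16_000_000
cycles_per_loop = 10

lsb_hz = cpu_hz / (2 * cycles_per_loop)
print(lsb_hz)                        # 800000.0 -> an 800 kHz square wave

# Working backwards from a scope reading checks the clock the same way:
measured_lsb_hz = 800_000
print(cpu_hz / measured_lsb_hz)      # 20.0 = 2 * cycles_per_loop, as expected
```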
My next steps vary widely based on the design complexity and the available test points, but my goal is to test each function of the circuit with as little interaction from surrounding components as possible. I like to take many small steps, with each building on the other to test more and more of the design. As an example, consider a board with a serial interface. My first step is to have the microprocessor write a single character, over and over. Once verified, I move to the ever-popular 'echo' test, where the microprocessor reads a character and then writes back the same character. Next, I assign different characters to actions (like set outputs high or set outputs low). This allows me to build in test sequences used to more completely test my design. I continue these sorts of tests, building more and more complexity, until I am satisfied that my circuit is basically functional. Then, using the test code that I have created as a model, I begin to write the real software for my product. As my software grows to complete the full functionality of my design, I sometimes find it useful to drop back to my 'test software' to prove or test some function that I had not previously tested (or completely tested) during the bring-up process.

So here I wait. The replacements for the incorrectly shipped parts are still in transit (FedEx from Thailand), as my distributor of choice is now 'out of stock' on the correct part (their inventory system is most likely still recovering from the mis-binning of parts). Meanwhile, I am building up a couple more boards, so that they will be ready to accept the new parts as they arrive. In parting, may all of your challenges be not too challenging!

Gene

Comment by July 13, 2009: Good article, comprehensive! The part I would like to add my two cents on is the DVT of prototypes. It's very important that the prototype PCBs make it back to the designer so that he can verify the PCB itself, and of course the product functions that need to be verified, before the design is handed off to test engineering and production. Far, far too often managers think it's a prudent move to have someone else verify the product's basic functions before the designer ever knows the prototypes have been received in-house. It is always best to have the designer choose who will be the owner of DVT for prototypes.

Comment by April 3, 2011: This is a downright practical and exhaustive description of all those processes involved in product development.

Comment by May 24, 2011: That's a really good, concise list of the development process. Well stated!

Comment by June 21, 2012: Good article... Very good learning of the product design cycle for hardware design freshers!
https://brilliant.org/problems/havent-got-space-for-conservation/
# Haven't got space for conservation

Consider a system with $$N$$ particles whose coordinates are $$\mathbf{r}_i = \{r_i^x,r_i^y,r_i^z\}$$, and whose velocities are $$\dot{\mathbf{r}}_i$$. The energy of the system is described by the kinetic energy $$\frac{1}{2}m\sum_i \dot{\mathbf{r}}_i^2$$ and an effective potential energy term $$V\left(\mathbf{r}_1, \ldots, \mathbf{r}_N\right).$$ All we know about $$V$$ is that it depends on the positions only through their differences $$\mathbf{r}_1 - \mathbf{r}_2$$, i.e. $V(\{\mathbf{r}_i\}, t) = V\left(\mathbf{r}_1-\mathbf{r}_2, \mathbf{r}_1-\mathbf{r}_3, \ldots, t\right).$ As this system evolves in time, which of the following bulk quantities must be conserved?

Assumptions • All interactions of the system with the outside world are described by $$V.$$
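A sketch of the standard argument (our addition, not the problem's posted solution): shifting every particle by the same vector leaves all differences $$\mathbf{r}_i - \mathbf{r}_j$$, and hence $$V$$, unchanged, so for every direction $$\mathbf{e}$$ we have $$0 = \left.\frac{d}{ds} V(\{\mathbf{r}_i + s\mathbf{e}\}, t)\right|_{s=0} = \sum_i \nabla_{\mathbf{r}_i} V \cdot \mathbf{e}$$, i.e. $$\sum_i \nabla_{\mathbf{r}_i} V = \mathbf{0}$$. Newton's second law then gives $\frac{d}{dt}\sum_i m\,\dot{\mathbf{r}}_i = \sum_i m\,\ddot{\mathbf{r}}_i = -\sum_i \nabla_{\mathbf{r}_i} V = \mathbf{0},$ so the total momentum is conserved, while the explicit $$t$$-dependence of $$V$$ means the total energy need not be.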
https://newbedev.com/are-intelligent-systems-able-to-bypass-the-uncertainty-principle
# Are "intelligent" systems able to bypass the uncertainty principle? Now that's a nice question. I've only browsed the paper you submitted, so correct me if I misinterpreted it. But the idea there is to use machine learning methods to identify those features of a collection of quantum data that are best suited to be used as characteristic features to the processes at play there. So this is about analyzing data. Therefore, simply put, as long as the data doesn't violate the uncertainty principle, neither will the AI the paper talks about. But I think your question is a bit more ambitious. Any AI that is trained on a data set can in principle be asked to make predictions about data it has not yet seen, and your question is what it is that hinders the AI to make arbitrarily accurate predictions, thus giving both the momentum and the position of a particle to arbitrary degrees of accuracy. Now, this is much more about physics than it is about AI. I think the key aspect here is to ask what it means to know a particle's position and momentum. We don't have to go to AI there, we can for example look at something as simple as the ground state of a particle in a 1D box. We consider this problem in classical Schrödinger QM (which, as commenters correctly pointed out, is only a fraction of all of QM). This state can be described by the wave function $$\psi_1(x,t) = e^{-i\omega_1 t} \cos\left(\frac{\pi}{L}x\right),$$ for $$x \in (-L/2,L/2)$$, where $$L$$ is the size of the box and $$\omega_1 = \pi^2\hbar/2mL^2$$. This is, for all we know by the Schrödinger picture of quantum mechanics, the exact state the particle is in. I repeat that: This is everything we can know about the state of a particle in a box, when someon gives us this wave function, we solved the problem of finding the particle's ground state. A naive way to look at this wave function is to go to use Born's rule to find the probability distribution (which happens to be stationary because we chose an eigenstate of the Hamiltonian $$H$$) $$\rho(x) = |\psi(x,t)|^2 = \cos^2\left(\frac{\pi}{L}x\right)$$ and argue that the particle it describes just wiggles around between $$-L/2$$ and $$L/2$$ with this given probability, and once we measure position, we pick up the particle's position at an instant, losing momentum information. But this is just one way of looking at it, and it is problematic, albeit there are ways to make it mathematically sound. This picture leads to a confusion, namely that you think that there is something like the particle's position that is independent of measurement. And at the same time, there is something like the particle's velocity that is independent of measurement, it's just measurement that necessarily discards some of that information, but one could try to get a smart AI to track them both. But this is not really the data you have. The wave function - encapsulating the full knowledge of the state of the particle - contains no information about the particle's exact position or momentum at an instant. The history of QM showed that it is hopeless to try to maintain our intuition about what position and momentum is. Yes, you can get well-defined relations between them for each path in a path integral formalism, but then you suddenly find the particle tracing out multiple paths. Or you add global hidden variables (like e.g. Bohmian mechanics) to recover a well-defined concept of position and momentum, but then those are not measurable so they come back to haunt you whenever you perform a measurement. 
There really isn't a way around this: a clear concept of position and momentum cannot be maintained at the quantum level. The AI cannot be "smarter" or "more observant" than the maximum information available, which is encoded in the wave function. The information you want your AI to trace simply does not exist in the way you would need it to.

If you are interested, there is a nice 3Blue1Brown video about the mathematical origins of uncertainty in Fourier analysis, which also covers another aspect of this question, even beyond the scope of quantum physics. I can recommend that.

If it could be done, it would be a violation of the uncertainty principle. So one of two things must be true:

• The AI cannot violate the uncertainty principle, or...
• The uncertainty principle is wrong

So if we start from the assumption that the current model of QM is perfect in every way, then the AI could not beat the odds, because it would not have the physical tools needed to go about beating the odds. However, where AI tools like neural nets are powerful is in their ability to detect patterns that we did not see before. It is plausible that an AI could come across some more fundamental law of nature which yields more correct results than the uncertainty principle does. This would invite us to develop an entirely new formulation of microscopic physics! As a very trivial example, let me give you a series of numbers.

293732 114329 934700 172753 489332 85129 759100 61953 644932 335929 623500 671153 760532 866729 527900 353 836132 677529 472300 49553 871732 768329 456700 818753 867332 139129 481100 307953 822932 789929 545500 517153 738532 720729 649900 446353 614132 931529 794300 95553 449732 422329 978700 464753 245332 193129 203100 553953 932 243929 467500 363153 716532 574729 771900 892353 392132 185529 116300 141553 27732 76329 500700 110753 623332 247129 925100 799953 178932 697929 389500 209153 694532 428729 893900 338353 170132 439529 438300 187553 605732 730329 22700 756753 1332 301129 647100 45953 356932 151929 311500 55153 672532 282729 15900 784353 948132 693529 760300 233553 183732 384329 544700 402753 379332 355129 369100 291953 534932 605929 233500 901153 650532 136729 137900 230353 726132 947529 82300 279553 761732 38329 66700 48753 757332 409129 91100 537953 712932 59929 155500 747153 628532 990729 259900 676353 504132 201529 404300 325553 339732 692329 588700 694753 135332 463129 813100 783953 890932 513929 77500 593153 606532 844729 381900 122353 282132 455529 726300 371553 917732 346329 110700 340753 513332 517129 535100 29953 68932 967929 999500 439153 584532 698729 503900 568353 60132 709529 48300 417553 495732 329 632700 986753 891332 571129 257100 275953 246932 421929 921500 285153 562532 552729 625900 14353 838132 963529 370300 463553

These numbers appear highly random. Upon seeing them in a physical setting, one might assume these numbers actually are random, and invoke statistical laws like those at the heart of the uncertainty principle. But if you were to throw an AI at this, you'd notice that it could predict the results with frustratingly high regularity. Once a neural network, like that described in the journal article, has shown that there is indeed a pattern, we can try to tease it apart. And, lo and behold, you would find that the sequence was $$\{X_1, X_2, X_3, \dots\}$$ where $$X_i = 2175143\,X_{i-1} + 10653 \pmod{1000000},$$ starting with $$X_{0}=3553$$. I used a linear congruential PRNG to generate those.
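For reference, a minimal sketch (mine, not code from the answer) of the generator just described; the parameters are exactly the ones stated above, and the first outputs reproduce the published sequence:

def lcg(x0=3553, a=2175143, c=10653, m=1_000_000):
    # linear congruential generator: x -> (a*x + c) mod m
    x = x0
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg()
print([next(gen) for _ in range(5)])
# [293732, 114329, 934700, 172753, 489332]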
If the universe actually used that sequence as its "source" for drawing the random values predicted in QM, then an AI could pick up on it, and start using this more fundamental law of nature to do things that the uncertainty principle says are impossible. On the other hand, if the universe actually has randomness in it, the AI cannot do any better than the best statistical results it can come up with.

In the middle is a fascinating case. Permit me to give you another series of numbers, this one in binary (because the tool I used outputs in binary):

1111101101100110111010101101010001000101111100101011111110000110100010010001110010010011101010000010101001111001100011100110001010011110100100010001000111110000010100101101111101011111000001011101011110110100000000000101010110100001101101001100111111000110000101000110000000110001100101001011000110101111011011101011011101110010111101111001111110010110011000000101110010010010111111001110101101111100110100111010010001011101101111110001111111011010111000101000001011001011010010011111000000110011100000001110000011000101110111100001100010111010111101010101000011010111010011011010101000111110110011100111000011101101110011111100011100101111101110100111001101011000000000110000111001010000001011100100100010111100101101101111011110000011110100010100011000011110010000001100011001110111011010001100010000011101011011011001011001100110100101001011001000101101000110010010010000110100110010111010001111001000111000100100100100111011001101011111001110011100100001001010001011110101001010000010100010111010

I will not tell you whether this series is random or pseudorandom. I will not tell you whether it was generated using the Blum Blum Shub algorithm. And I certainly won't tell you the key I used if I used the Blum Blum Shub algorithm. It is currently believed that, to tell the difference between a random stream and the output of Blum Blum Shub, one must solve a problem we do not currently believe is solvable in any practical amount of time. So, hypothetically, if the universe actually used the stream of numbers I just provided as part of the underlying physics that appears to be random per quantum mechanics, we would not be able to tell the difference. But an AI might be able to detect a pattern that we didn't even know we could detect. It could latch onto the pattern, and start predicting things that are "impossible" to predict.

Or could it? Nobody is saying that that string of binary numbers is actually the result of an algorithm. It might truly be random... Neural networks like the one described in the paper can find patterns that we did not observe with our own two eyes and our own squishyware inside our skulls. However, they cannot find a pattern if one does not exist (or worse: they can find a false pattern that leads one astray).

tl;dr Yes, a physicist might come up with a model that supersedes those built on an uncertainty principle, and such a physicist might be an AI.

This is a pretty simple topic overall. In short:

1. A better model of reality could find the uncertainty principle to be emergent, dispensing with it in favor of describing what it emerges from.
2. The better model could be found by a sufficient intelligence, be it human, AI, or otherwise.
3. So, yes, an AI could potentially dispense with the uncertainty principle.

That said, the linked paper discusses simulated systems: Machine learning models are a powerful theoretical tool for analyzing data from quantum simulators, in which results of experiments are sets of snapshots of many-body states.
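As an aside, here is a minimal sketch of the Blum Blum Shub construction mentioned above. The tiny textbook parameters below (p = 11, q = 19, seed = 3) are placeholders for illustration only, and certainly not the answerer's key; a cryptographic instance needs large secret primes p, q congruent to 3 mod 4 and a seed coprime to n = p*q:

def blum_blum_shub(seed=3, p=11, q=19, nbits=32):
    # x -> x^2 mod n; emit the least significant bit at each step.
    n = p * q
    x = seed
    bits = []
    for _ in range(nbits):
        x = (x * x) % n
        bits.append(x & 1)
    return "".join(map(str, bits))

print(blum_blum_shub())

With realistic-sized primes, distinguishing this bit stream from true randomness is believed to be as hard as factoring n, which is the point being made above.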
A god-like AI with infinite computation could, in principle, find the ultimate truth of anything it analyzes. So if it analyzes real experimental data, then it could find deeper models of physics. But if it is looking at simulated data, then the ultimate truth it would find is an exact description of the simulation, not necessarily of whatever the simulation was attempting to emulate. And if the simulation respects something like an uncertainty principle, then a perfect analysis of it would reflect that. Likewise, a god-like AI analyzing a simulation of Newtonian physics wouldn't discover quantum mechanics or relativity. But the AI could discover quantum mechanics and relativity from looking at real data, even if the folks who made the AI thought the world was perfectly Newtonian.
2023-03-25 01:40:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5809327363967896, "perplexity": 197.8275552605667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945292.83/warc/CC-MAIN-20230325002113-20230325032113-00045.warc.gz"}
https://itprospt.com/num/6912461/the-student-x27-s-t-equation-can-be-rearranged-to-estimate
# The Student's t equation can be rearranged to estimate the number of samples that must be analyzed by a specific technique to determine the mean with a desired confidence

## Question

The Student's t equation can be rearranged to estimate the number of samples that must be analyzed by a specific technique to determine the mean with a desired confidence: $$n = \frac{t^2 s_s^2}{e^2}$$ where n is the number of samples needed, s_s is the sampling standard deviation, t is the Student's t value at the desired confidence level, and e is the sought-for uncertainty. This equation assumes that the analytical uncertainty is negligible compared to the sampling uncertainty.

Soil samples from a local dog park are being analyzed for nitrate content. The sampling standard deviation is known to be ±8.0%. Estimate the number of samples that need to be analyzed to give 98% confidence that the mean is known to within ±5.0%. Assume the analytical uncertainty is much smaller than the sampling uncertainty. Use the Student's t values in this table.
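A minimal sketch of how one could evaluate this (my own illustration; scipy is assumed available, and the iteration is needed because t depends on n through the degrees of freedom):

from scipy import stats

# n = t^2 * s^2 / e^2 with s = 8.0, e = 5.0 at 98% confidence (two-sided).
# t depends on n via df = n - 1, so iterate until n stabilizes.
s, e, conf = 8.0, 5.0, 0.98
n = 25                                   # initial guess
while True:
    t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
    n_new = round((t * s / e) ** 2)
    if n_new == n:
        break
    n = n_new
print(n)   # settles at about 17 samples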
2022-10-05 18:50:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7199728488922119, "perplexity": 4170.576696819827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337663.75/warc/CC-MAIN-20221005172112-20221005202112-00409.warc.gz"}
https://www.physicsforums.com/threads/period-of-revolution-of-two-double-stars.721980/
# Period of revolution of two double stars

1. Nov 10, 2013

### leftnes

1. The problem statement, all variables and given/known data

Two double stars of the same mass as the sun rotate about their common center of mass. Their separation is 4 light years. What is their period of revolution?

2. Relevant equations

Lagrangian = T - U = $\mu\dot{r}^{2}/2$ + $\vec{L}^{2}/2\mu r^{2}$ - $Gm_{1}m_{2}/r$

F = ma = m$\omega^{2}$r = $Gm_{1}m_{2}/r$

3. The attempt at a solution

Tried to solve this using the orbital equation, but I'm off by a power of 10. I've also tried using F = m$\omega^{2}$r = $Gm_{1}m_{2}/r$ and solving for the period using $\omega$ = $2\pi r/T$ but I'm not sure where I'm going wrong. Since the question asks for the period of two double stars, does this mean that the reduced mass is $\mu$ = $(2m_{1})(2m_{2})/(2m_{1} + 2m_{2})$ = $4m^{2}/4m$ = m since all the masses are the same? I'm assuming that two double stars means 4 separate stars acting in pairs. I'm not really sure where to go with this problem.

2. Nov 10, 2013

### haruspex

Aren't there a couple of things wrong with the RHS? It's dimensionally wrong for a force, no? And is r standing for the same distance each side?

3. Nov 10, 2013

### leftnes

Oops, yeah. F = m$\omega^{2}$r = $Gm_{1}m_{2}/r^{2}$ I believe? Since $\omega^{2}$ = $a/r$, I substituted for acceleration and set the only acting force on the stars as their gravitational attraction towards each other. Am I missing something else?

4. Nov 10, 2013

### leftnes

And assuming a circular orbit, r = .5d, where d is the separation between the stars.

5. Nov 10, 2013

### haruspex

If the separation is d, what force does each experience?
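For reference, where the thread's hints lead: with separation d, each star of mass M feels F = GM^2/d^2 and moves on a circle of radius d/2, so M w^2 (d/2) = GM^2/d^2 and T = 2 pi sqrt(d^3 / (2GM)). A minimal numerical sketch of that balance (my own check with standard SI constants, not a posted solution):

import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30                  # one solar mass, kg
d = 4 * 9.4607e15             # separation: 4 light years in metres

# M * w^2 * (d/2) = G*M^2/d^2  =>  w^2 = 2*G*M/d^3
T = 2 * math.pi * math.sqrt(d**3 / (2 * G * M))
print(T / 3.156e7, "years")   # roughly 9e7 years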
2017-08-19 14:26:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6928214430809021, "perplexity": 510.41268492063017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105451.99/warc/CC-MAIN-20170819124333-20170819144333-00280.warc.gz"}
https://tex.stackexchange.com/questions/213535/content-shows-up-outside-of-sections-when-there-are-more-than-three/213539
# Content shows up outside of sections when there are more than three I have the following LATEX document: \documentclass[a4paper,12pt]{article} \begin{document} \section{Enemies} \begin{table}[ht] \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Grey&Beard&1234567890&the ocean&Enemies\\ \hline %inserts single line \end{tabular} \end{table} \section{Friends} \begin{table}[ht] \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line John&Doe&1234567890&1424 Brooklyn Ave.&Friends\\ \hline %inserts single line \end{tabular} \end{table} \section{Relatives} \begin{table}[ht] \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line My&Self&1234567890&My current location&Relatives\\ \hline %inserts single line \end{tabular} \end{table} \section{Mythical Beasts} \begin{table}[ht] \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Puff&Dragon&1234567890&cave by the ocean&Mythical Beasts\\ \hline %inserts single line \end{tabular} \end{table} \section{Public Figures} \begin{table}[ht] \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Santa&Claus&10234987&North Pole&Public Figures\\ \hline %inserts single line \end{tabular} \end{table} \end{document} This is what it looks like after being compiled: As you can see, the first few sections render properly, but after the first three the other sections' content appears outside of the sections. I tried reordering the sections and the same thing happens. What is the problem? EDIT: New example: \documentclass[a4paper,12pt]{article} \title{Contact List} \date{\today} \begin{document} \maketitle \section{Enemies} \begin{table}[!h] % note the ! \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Grey&Beard&1234567890&the ocean&Enemies\\ Grey&Beard&1234567890&the ocean&Enemies\\ Grey&Beard&1234567890&the ocean&Enemies\\ Grey&Beard&1234567890&the ocean&Enemies\\ Grey&Beard&1234567890&the ocean&Enemies\\ \hline %inserts single line \end{tabular} \end{table} \section{Friends} \begin{table}[!h] % note the ! \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line John&Doe&1234567890&1424 Brooklyn Ave.&Friends\\ \hline %inserts single line \end{tabular} \end{table} \section{Relatives} \begin{table}[!h] % note the ! \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line My&Self&1234567890&My current location&Relatives\\ \hline %inserts single line \end{tabular} \end{table} \section{Mythical Beasts} \begin{table}[!h] % note the ! 
\begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Puff&Dragon&1234567890&cave by the ocean&Mythical Beasts\\ \hline %inserts single line \end{tabular} \end{table} \section{Public Figures} \begin{table}[!h] % note the ! \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Santa&Claus&10234987&North Pole&Public Figures\\ \hline %inserts single line \end{tabular} \end{table} \end{document} The table environment is a float environment. LaTeX puts the content of floats where it deems best, which might not correspond directly to where you put the float in your code. See Floating with table. Unless you really need float functionality for your tables (but given your question it sounds like you don't), you can just remove the \begin{table} and \end{table} lines from each of your tables. The tabular environment does not need to be wrapped in a table environment. \section{Enemies} \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Grey&Beard&1234567890&the ocean&Enemies\\ \hline %inserts single line \end{tabular} \section{Friends} \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line John&Doe&1234567890&1424 Brooklyn Ave.&Friends\\ \hline %inserts single line \end{tabular} \section{Relatives} \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line My&Self&1234567890&My current location&Relatives\\ \hline %inserts single line \end{tabular} \section{Mythical Beasts} \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Puff&Dragon&1234567890&cave by the ocean&Mythical Beasts\\ \hline %inserts single line \end{tabular} \section{Public Figures} \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Santa&Claus&10234987&North Pole&Public Figures\\ \hline %inserts single line \end{tabular} \end{document} If you do decide you need table (for a caption, for instance), you can add ! to the options declaration, which forces the float to appear where you specify: \documentclass[a4paper,12pt]{article} \begin{document} \section{Enemies} \begin{table}[!h] % note the ! \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Grey&Beard&1234567890&the ocean&Enemies\\ \hline %inserts single line \end{tabular} \end{table} \section{Friends} \begin{table}[!h] % note the ! \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line John&Doe&1234567890&1424 Brooklyn Ave.&Friends\\ \hline %inserts single line \end{tabular} \end{table} \section{Relatives} \begin{table}[!h] % note the ! 
\begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line My&Self&1234567890&My current location&Relatives\\ \hline %inserts single line \end{tabular} \end{table} \section{Mythical Beasts} \begin{table}[!h] % note the ! \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Puff&Dragon&1234567890&cave by the ocean&Mythical Beasts\\ \hline %inserts single line \end{tabular} \end{table} \section{Public Figures} \begin{table}[!h] % note the ! \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Santa&Claus&10234987&North Pole&Public Figures\\ \hline %inserts single line \end{tabular} \end{table} \end{document} ## EDIT For an even stronger effect, use the float package and the [H] option on the tables. \documentclass[a4paper,12pt]{article} \usepackage{float} \title{Contact List} \date{\today} \begin{document} \maketitle \section{Enemies} \begin{table}[H] % note the H \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Grey&Beard&1234567890&the ocean&Enemies\\ Grey&Beard&1234567890&the ocean&Enemies\\ Grey&Beard&1234567890&the ocean&Enemies\\ Grey&Beard&1234567890&the ocean&Enemies\\ Grey&Beard&1234567890&the ocean&Enemies\\ \hline %inserts single line \end{tabular} \end{table} \section{Friends} \begin{table}[H] % note the H \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line John&Doe&1234567890&1424 Brooklyn Ave.&Friends\\ \hline %inserts single line \end{tabular} \end{table} \section{Relatives} \begin{table}[H] % note the H \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line My&Self&1234567890&My current location&Relatives\\ \hline %inserts single line \end{tabular} \end{table} \section{Mythical Beasts} \begin{table}[H] % note the H \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Puff&Dragon&1234567890&cave by the ocean&Mythical Beasts\\ \hline %inserts single line \end{tabular} \end{table} \section{Public Figures} \begin{table}[H] % note the H \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Santa&Claus&10234987&North Pole&Public Figures\\ \hline %inserts single line \end{tabular} \end{table} \end{document} • I might need to have captions later, so I would prefer to use the second method, but I found that it doesn't work after adding a title to the document and a few more rows to a table. I updated my question with a modified example. Any idea why it still doesn't work? – Nate Nov 23 '14 at 17:03 • There are a number of different parameters that the algorithm uses to determine float placement, and !h doesn't override all of them. 
For a more detailed explanation look at How to influence the position of float environments. Nov 23 '14 at 18:11 • I edited my answer to show how you can use the float package and the [H] declaration for your new MWE. Nov 23 '14 at 18:30 • If you need your tables to break across pages, though, it's best to use one of the packages mentioned in the answers to this question. Nov 23 '14 at 18:31 Float placement in LaTeX is not always easy, as things may not end up where you want them to go. This is discussed in detail in the two FAQs How to influence the position of float environments like figure and table in LaTeX? and Keeping tables/figures close to where they are mentioned. From what it seems like you're not interested in content floating around in your document. Instead, you just want a tabular representation of some data that should be contained within the section. For that you do not need a table environment. Indeed, a tabular can survive on its own without being placed inside a table float. So, just don't use it. Secondly, the reason for your current limit of "three tables" stems from LaTeX's float setup. In fact, the total number of floats that you can have on any page is set to a default of 3 via a counter totalnumber. Increasing this number might be of help, but it's not necessarily the best idea, since other counters keep track of how much space is occupied by floats at the top/bottom of the page. This, in turn, might cause movement of floats beyond your control. Taking the above into consideration, I would suggest using a tabular-only implementation. If you want captions, you can add them in the text like you would anything else. If you want numbered captions (like a regular table), you can include the capt-of package. And, finally, for tables that need to break across the page boundary, you can consider using longtable. Here is a minimal example showing a possible setup: \documentclass{article} \usepackage{capt-of} \title{Contact List} \date{\today} \begin{document} \maketitle \section{Enemies} \begin{center} \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Grey&Beard&1234567890&the ocean&Enemies\\ Grey&Beard&1234567890&the ocean&Enemies\\ Grey&Beard&1234567890&the ocean&Enemies\\ Grey&Beard&1234567890&the ocean&Enemies\\ Grey&Beard&1234567890&the ocean&Enemies\\ \hline %inserts single line \end{tabular} \end{center} \section{Friends} \begin{center} \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line John&Doe&1234567890&1424 Brooklyn Ave.&Friends\\ \hline %inserts single line \end{tabular} \vspace{\abovecaptionskip}% This is a caption for the table. 
\end{center} \section{Relatives} \begin{center} \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line My&Self&1234567890&My current location&Relatives\\ \hline %inserts single line \end{tabular} \captionof{table}{This is a caption for the table.}% Placed inside a (center) group \end{center} \section{Mythical Beasts} \noindent \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table \hline % inserts single horizontal line Puff&Dragon&1234567890&cave by the ocean&Mythical Beasts\\ \hline %inserts single line \end{tabular} \section{Public Figures} \noindent \begin{tabular}{c c c c c} % centered columns (4 columns) First Name & Last Name & Phone & Address & Category \\ [0.5ex] % inserts table
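As a complement to the totalnumber discussion above, here is a small sketch of preamble settings that raise those per-page float limits instead; these are the standard LaTeX float counters and fractions, with illustrative values:

\setcounter{totalnumber}{6}        % floats allowed per page (default 3)
\setcounter{topnumber}{4}          % floats at the top of a page (default 2)
\setcounter{bottomnumber}{4}       % floats at the bottom of a page (default 1)
\renewcommand{\topfraction}{0.9}   % max page fraction for top floats (default 0.7)
\renewcommand{\textfraction}{0.1}  % min page fraction reserved for text (default 0.2)

Loosening these can keep floats near their source position, but as noted above it may also move floats in ways that are harder to control than simply dropping the table environment.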
2021-10-28 05:46:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.995945930480957, "perplexity": 8596.729481458135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588257.34/warc/CC-MAIN-20211028034828-20211028064828-00363.warc.gz"}
https://www.physicsforums.com/threads/restricted-circular-3-body-problem-rc3bp.81254/
# Restricted circular 3 body problem RC3BP

1. Jul 5, 2005

### kristian jerpetjøn

Hello everyone, I am new to these forums. As a part of my master thesis I am trying to construct transfer orbits based on patches of the restricted 3 body problem. However I keep meeting several walls. One is how to calculate the optimal orbit from the earth orbit to the sun earth L1 point. The next is propagating this until it falls back towards the earth, then patching it to the earth moon restricted problem and falling through earth moon L2 into a low lunar orbit. All help is welcome. I have the book from Belbruno covering much of the math involved but I am having problems constructing a program from it.

2. Jul 5, 2005

### pervect Staff Emeritus

http://www.cds.caltech.edu/~shane/papers/gomez-et-al-2004.pdf [Broken] http://etd.caltech.edu/etd/available/etd-05182004-154045/unrestricted/rossthesis_5_11.pdf [Broken] The way I found this was to start at the Wikipedia link http://en.wikipedia.org/wiki/Interplanetary_Superhighway and take a look at the very first external link on this page under "papers", which led me here: http://www.cds.caltech.edu/~shane/papers/ [Broken] At this point I looked for some theory-oriented titles.

At a very basic level you would want to start with the Hamiltonian of the third body $$H(x,y,p_x,p_y) = \frac{p_x^2 + p_y^2}{2m} + \omega\,(y\,p_x - x\,p_y) + V(x,y)$$ This gives the equations of motion of the third body directly by Hamilton's equations http://mathworld.wolfram.com/HamiltonsEquations.html Here $\omega$ is the angular frequency of rotation of the two massive bodies M1 and M2 around their common center of mass, and the potential function V(x,y) is $-GmM_1/r_1 - GmM_2/r_2$, where $r_1$ and $r_2$ are the distances from the third body m to the massive bodies M1 and M2 respectively. The above equation is for the restricted circular planar 3-body problem.

Also useful is the Hamiltonian's little brother, the energy function h: $$h(x,y,\dot{x},\dot{y}) = \tfrac{1}{2}m\dot{x}^2 + \tfrac{1}{2}m\dot{y}^2 - \tfrac{1}{2}m\omega^2 x^2 - \tfrac{1}{2}m\omega^2 y^2 + V(x,y)$$ The quantity h is conserved; it is proportional to the "Jacobi integral function". It is the Hamiltonian re-expressed in different variables. h is conserved because the Hamiltonian in the rotating coordinate system is not a function of time.

http://www.geocities.com/syzygy303/ has some plots of the inequalities that setting h <= some number gives on the (x,y) plane. These plots are inequalities because you can't plot $\dot{x}$ or $\dot{y}$ on the same graph, so you get the minimum value of h when $\dot{x}=\dot{y}=0$. These plots illustrate why low energy orbits must pass through the L1 and L2 points. You'll probably see plots like these a lot (with prettier graphics) if you read through some of the literature. This is all very basic, I'm afraid, but perhaps the background info will prove useful.

Last edited by a moderator: May 2, 2017
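To complement pervect's outline, here is a minimal sketch (my own; nondimensional units with G = m = omega = 1 and an illustrative Earth-Moon-like mass ratio, primaries pinned at x = -mu and x = 1 - mu) that integrates Hamilton's equations for the quoted Hamiltonian:

import numpy as np
from scipy.integrate import solve_ivp

mu = 0.0121505856        # m2/(m1+m2); Earth-Moon value, illustrative choice
m, omega = 1.0, 1.0

def rhs(t, s):
    x, y, px, py = s
    r1 = np.hypot(x + mu, y)        # distance to primary M1
    r2 = np.hypot(x - 1 + mu, y)    # distance to primary M2
    dVdx = (1 - mu) * (x + mu) / r1**3 + mu * (x - 1 + mu) / r2**3
    dVdy = (1 - mu) * y / r1**3 + mu * y / r2**3
    return [px / m + omega * y,      # dx/dt = dH/dpx
            py / m - omega * x,      # dy/dt = dH/dpy
            omega * py - dVdx,       # dpx/dt = -dH/dx
            -omega * px - dVdy]      # dpy/dt = -dH/dy

# start near the L1 region with zero rotating-frame velocity;
# canonical momenta: px = m*(vx - omega*y), py = m*(vy + omega*x)
x0, y0 = 0.83, 0.0
s0 = [x0, y0, -m * omega * y0, m * omega * x0]
sol = solve_ivp(rhs, (0, 10), s0, rtol=1e-9, atol=1e-9)
print(sol.y[:2, -1])   # rotating-frame position at t = 10

Conservation of the energy function h along the numerical trajectory is a handy correctness check when building a patched-orbit program like the one described in the first post.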
2018-04-21 13:50:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6930360794067383, "perplexity": 953.6375403239728}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945222.55/warc/CC-MAIN-20180421125711-20180421145711-00136.warc.gz"}
https://worldbuilding.stackexchange.com/questions/159874/battlelines-with-both-gun-users-and-magic-users
# Battlelines with both gun users and magic users?

In this fantasy world, magic users, called rankers, are graded into three grades:

1. Soldier grade (anyone with a magic weapon can become one).
2. Champion grade (requires years of training; only one in a thousand can reach this level).
3. General grade (very, very rare. A nation is a superpower if it has more than a hundred general grade rankers).

Anyone above this power level will not be participating directly in battles (like a nuclear deterrent). This group is not relevant to the question; I mention it just for completeness.

Gun technology is fairly undeveloped due to the availability of magic: no precise sniping, etc. A soldier grade ranker is similar to an ancient warrior on earth with slightly more powerful cold weapons. Mostly they will be killed if they meet a gun wielding enemy, as rankers don't use guns (guns become obsolete to them at later stages). A champion grade ranker can use powerful magics and can kill a gun wielding enemy before the bullet is fired. But even they will be in trouble if they meet a gun wielding enemy battalion; in front of hundreds of bullets, they cannot do much individually. At most they will kill tens of enemies before dying. General grade rankers have domains of their own and can easily stop large numbers of projectiles as long as they don't contain a large amount of energy (like a cannon shell). Of course, they have limits and will be defeated if you keep sending human waves against them. However, it would take thousands of soldiers to kill a single general grade ranker.

Now my question is: what will be the composition and deployment patterns of troops in such a scenario?

Assume that:

• The cost of arming a soldier with a gun is equal to that of a magic weapon.
• General grade experts are equally matched on both sides.

Rankers need catalysts to use magic. Catalysts can only be procured from dungeons, so most of the time rankers have to learn magic compatible with the available catalysts rather than vice versa. Depending on the method of usage of the catalyst, rankers are generally of two types: warriors, who embed the catalyst in some kind of melee weapon, and mages, who use the catalyst directly. Warriors use magic as an enhancement to their strength, so they use magic spells with little or no activation time and spend a lot of money on making a perfect weapon for their catalyst. Mages, on the other hand, design complicated magic spells to extract the maximum output from the catalyst. These are only rough styles, and every ranker is a mix of the two: warriors also have flashy and powerful abilities, and mages can cast instant spells.

Clarifications about stamina and other requirements for magic: in this world there are three kinds of energy, good, bad, and neutral (called rajas). Gods and demons use the good and bad kinds of energy. Humans manipulate rajas using catalysts. This is called magic, with different magic systems corresponding to different methods of manipulating the rajas. Every mortal ranker (i.e., not a singularity) has some kind of limit on the usage of rajas. If a general grade ranker used too much magic for too long, he or she would need some rest before being able to use magic again; forcefully using magic in that time will cripple or even kill the user. Catalysts also have limits on their maximum output and will be destroyed if overloaded. People need training before using a catalyst or they would go mad.
That is one of the reasons soldier grade rankers use magic weapons (containing traces of catalyst): to get accustomed before using the real thing. Magic weapons are made from materials mined from dungeons. Even normal resources contain some traces of rajas; for example, iron ore found in a dungeon can be extra hard or super light. As said earlier, magic weapons are just slightly stronger than normal cold weapons. They generally don't rust or break down easily, and may have resistance to temperature changes, etc. Only nations with huge manpower can take control of a dungeon and mine resources from it.

A general level ranker is not invincible (the next grade of rankers, called singularities, would take that honor). A group of twenty champion grade rankers can battle a general by attrition, especially if the champions have abilities that counter the general's domain. As I mentioned, generals are very rare and are treated as strategic assets. Every loss of a general grade expert is a blow to a nation's overall strength. In fact, most of the noble clans are headed by a general grade expert.

Guns are very crude, needing a manual reload after every shot. Mobile cannons were recently invented but are not well received, as most champions can do the same thing much faster. It should be noted that the backwardness of technology is due not to lack of knowledge but to lack of interest in the field. So there are no large scale assembly lines and no standardized equipment. Magic can do what science can do, and even what it cannot. Especially since dungeons are a constant source of fear to society, the powers gained from dungeons are a source of admiration. The medical field is developed, but is a hybrid between science and magic. In fact, dungeons are ingrained so deeply in the culture that development of technology mostly means finding new methods to use catalysts.

The singularities (above general grade) are legendary existences in this world. They can reportedly even bend space and time. Legends about them inspire every youngster to become a ranker. They are bound by a restriction not to fight in this world; if they break the restriction, a heavenly tribulation will descend from the sky. It is said that singularities are resistant to the passage of time, and the possibility of immortality is another reason many people take the path of magic.

A related question which can make this question clearer: Warfare-in-the-presence-of-magic-users

• Comments are not for extended discussion; this conversation has been moved to chat. Nov 2, 2019 at 16:00

Due to the proliferation of firearms (even if they're primitive) and their effectiveness against sub-champion level opponents, I don't think you'd see any "traditional" battle tactics with battle lines, cavalry charges, shield-walls, etc. Instead, I'd expect such a world to have military strategy more akin to a modern military conflict, where small squads of soldiers move from cover to cover and it's essentially a deadly hide-and-seek game.

After the invention of modern firearms (notably rifling), the "stand in a line facing your enemy in a field and shoot" style of combat died out very quickly. For example, during the American Revolution against the British, the world saw how one of the most powerful military forces in the world was defeated by farmers with muskets and guerrilla tactics. While there were still some "stand in a line and shoot" conflicts after that, the practice quickly died out once weapon accuracy improved above drunken dart-throwing accuracy.
Modern combat is often about small squads of soldiers (groups of around 10) who move from cover to cover and attempt to seize control of valuable areas such as tactically important points, key infrastructure, or civilian locations. I imagine your world could be quite similar, except:

• Treat "champion grade" fighters as you would armored vehicles, tanks, or air support
• Treat "general grade" fighters as you would destroyers, tank regiments, or aircraft carriers

In this situation, I'd imagine that a "squad" could consist of a dozen soldiers led by a champion. This would allow rapid movement and high combat effectiveness, and still prevent the group of soldiers from being instantly slaughtered by an opposing champion. Other champions could be called in in support roles. For example, in a modern battle soldiers could call in air support to deal with enemies in entrenched positions; in your world, groups of soldiers could call in "champion support" who rove around the engagement area and solve problems that regular grunts can't.

General rank individuals would most likely be in a heavily support-oriented role, serving as the centers of bases, much like FOBs or airfields. Their role would be to provide a safe place to retreat to and to stop other general rank individuals from striking at "home base". In rare situations they'd take to the field themselves, but as they're quite valuable, direct combat would probably be kept to a minimum.

• I would imagine that the Battle Maniacs (warning TVTropes) who believe in fighting to gain experience and power might want to leave the cushy home base job. It would make an excellent point of motivational conflict between defending a homeland and trying to reach singularity level. Sep 28, 2020 at 19:38

This is a resource war above all else: when you control dungeons, you control a region, which gives you new soldiers, revenue, and labor (vital for mining the dungeons). In this world, manpower is everything. The entire conflict is a quest for enough military might to overwhelm champions or generals, forcing them into using rajas when they are exhausted. Armies have developed ambush and deception strategies specifically to begin a battle with expendable cold weapons, saving their rajas for the final blow. In the generic composition description below, replace the word "National" with the name of your protectorate or empire.

1. Military organization:

Emperor
├── Imperial Ministry (conquest)
│   └── Legions (e.g. 201, 301, 401, 501)
│       └── Battalions
│           └── Conquest Brigades
└── Homeland Ministry (defense)
    └── Legions (e.g. 201, 301, 401, 501)
        └── Battalions
            └── Defense Brigades

A nation has an Imperial and a Homeland ministry, for conquest and national defense strategies respectively. Each ministry is led either by a Singularity who holds the position of Supreme Minister (if you're lucky enough to have one), or by a Ministry of National Supremacy (a panel of at least 3 elite generals).
This means each Nation has at least two Ministries that report directly to the Emperor, and only the Emperor can direct them. The National Army is divided into Legions of 8,000 to 12,000 troops. The number of Legions a Nation has depends on its size and population; a small nation of 10 million may have up to 50 Legions to secure its borders and conduct conquest campaigns. Legions are specialized to their region, with knowledge of the unique mountains, forests, coasts, or deserts in their area. Normally a General Grade (lower) commands a Legion, with one Champion serving as Chief Security to protect the General and arrange his safe passage into battlefield arenas; attacks on any General will have to get through this Champion first. Each Legion commands several Battalions of 1,000 to 4,000 troops. Battalions are led by Champions who hold secret conferences with their Legion Generals (using magical telepathy, like the Force). This ability allows Champions to remain on the battlefield, get instructions, and give situation reports without bringing the General into the fight; as long as the Champion is not directly involved in combat he can use his raja this way. Battalions command brigades of any size, depending on the special mission. Brigades are specialized first into either Conquest operations or Homeland operations, and within each of these, each Battalion has a special warfare skill, commanding troops trained in their areas.

# Infantry Battalions

• By far the largest force in your army is the Infantry Battalion. 80% of your forces are in the infantry, and of these more than 75% use cold weapons. The advantage of cold weapons is that they are cheap to produce, they don't need catalyst, and they can overwhelm an enemy running low on raja. The cold weapons are always the first to attack, forcing the enemy to burn all their raja on defending themselves. Forcing a Champion into overloading on raja is a key strategy; once that is accomplished you can come in with your own Soldier Grade rankers wielding magical weapons. Infantry is divided into specialized divisions based on terrain: a Mountain Division, Desert Division, Woodlands Division, and Coastal Division. They each have weapons, skills, and tactics unique to fighting in these environments. A Legion may not have all of these divisions.

# Airborne Battalions

• These troops include falconers, seers, and (rarely) wyvern riders. Their primary responsibility is to provide reconnaissance of enemy movements and to scout for safe passage. Falconers send falcons over a territory, and their soldiers' basic raja skill is communing; the falcon becomes their magic weapon when the bird is adorned with an amulet made from catalyst. Seers are Champions who can commune with nature and use wildlife to scout for them; birds are generally better suited for this. Although a Seer cannot control the birds the way a falconer can, there are usually very many birds they can use, especially in woodland regions. Seers are not normally used in desert campaigns due to the lack of wildlife. Wyvern riders are Champions who have communed with a wyvern and bonded with it. The wyvern is a large dragon-like animal which can fly, but has no magical powers and has average animal intelligence, like a horse. Some Airborne Battalions use wyvern riders for field command, allowing the Champion rider to see the entire battlefield when making decisions. The wyvern can fly above the lethal range of most guns.
If they do get hit, the bullet has slowed down so much it cannot break the scaly armor. Wyverns are also valuable for carrying messages from the battlefield to Legion commanders quickly and securely without using raja.

# Marine Battalions

• These troops are equipped with magical weapons with specific strengths in water: they do not corrode, they can fire underwater, and some can even allow the soldier to breathe underwater for short periods. Marine Champions also develop special aquatic skills. They have normal sight underwater, as if they were on land. They are extremely efficient swimmers and can assault a ship from the water. Their primary mission is overtaking coastal regions by surprise from the sea.

# Police Battalions

• Among the smallest Battalions, but home to the most magical troops. They are either Sentry class or Guard class, depending on what they are protecting. A Sentry protects friendly assets, such as catalyst ore, hideouts, fortress gates, or treasure rooms. Their mission is simply to control access to whatever they are protecting so no enemy can take or even see it; their weapons and skills include things which can hide or lock away the protected asset, and fight off anyone trying to pilfer it. Guards protect dangerous assets such as war prisoners, captured generals, or anything else which could be hostile to the Nation. They have the same mission as a Sentry but must also be skilled in controlling the hostile prisoner. They have some very advanced weapons which will subdue their prisoner and prevent escape without killing the asset. Films often show guards and sentries as the weakest fighters, but in real armies they are the most difficult to defeat; several failed real attempts to break into Fort Knox demonstrate how tough these soldiers are. Your Police Battalions will use Champions to move high level prisoners around, and sometimes a Champion may dress as a regular soldier and stand routine guard duty, surprising enemies who try to rescue their prisoners or steal your goods.

# Intelligence Battalions

• These elite troops include spies, interrogators, and tactical specialists. Legion commanders rely on the Intelligence Battalion to get the information needed to plan attacks on enemy fortresses or camps. These soldiers can look and speak like an enemy, use special language skills to collect information, and interrogate prisoners. Champions in this Battalion combine all their intelligence and form plans to maximize the effectiveness of the ground battle.

# Legions

A Legion is in command of a certain region, numbered according to its location and territory. It can contain as many Battalions as it needs to cover its area, and may not need every type of Battalion; for example, a Legion in a mountain region will not need desert infantry or a Marine Battalion. Legion commanders report directly to their Ministry. Conquest Legions have missions involving expanding the empire and capturing new dungeons. Homeland Legions have a mission to prevent losing any territory or dungeons to enemy fighters.
2022-05-22 11:40:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32826173305511475, "perplexity": 5150.204083990114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545326.51/warc/CC-MAIN-20220522094818-20220522124818-00273.warc.gz"}
https://gmatclub.com/forum/a-salesman-makes-a-commission-of-x-percent-on-the-first-2-000-worth-260276.html
A salesman makes a commission of x percent on the first $2,000 worth of sales in any given month and y percent on all further sales during that month. If he makes $700 from $4,000 of sales in October and he makes $900 from $5,000 of sales in November, what is the value of x?

(A) 2%
(B) 5%
(C) 10%
(D) 15%
(E) 20%

Solution 1 (Senior PS Moderator, 22 Feb 2018):

If the salesman makes a sale of $z, the commission on this sale is $$\frac{x}{100}\cdot 2000 + \frac{y}{100}\cdot(z-2000)$$

Since the salesman made $700 from z = $4,000 in October, $$\frac{x}{100}\cdot 2000 + \frac{y}{100}\cdot 2000 = 700$$

Similarly, since the salesman made $900 from z = $5,000 in November, $$\frac{x}{100}\cdot 2000 + \frac{y}{100}\cdot 3000 = 900$$

The equations formed are as follows:

20x + 20y = 700 -> (1); multiplied by 3 -> 60x + 60y = 2100 -> (3)
20x + 30y = 900 -> (2); multiplied by 2 -> 40x + 60y = 1800 -> (4)

Subtracting (4) from (3) and solving for x, we get 20x = 300 -> x = 15% (Option D)

Solution 2 (Retired Moderator, 23 Feb 2018):

The commission on sales of $s is $\frac{x}{100}\cdot 2000 + \frac{y}{100}\cdot(s-2000)$, so

$\frac{x}{100}\cdot 2000 + \frac{y}{100}\cdot 2000 = 700$
$\frac{x}{100}\cdot 2000 + \frac{y}{100}\cdot 3000 = 900$

Subtracting the first equation from the second gives 10y = 200, i.e. y = 20%. Substituting y = 20 back in: 20x + 400 = 700, so x = 300/20 = 15. (D)
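A quick sanity check of the algebra (my own sketch, not part of the thread), solving the same two equations in Python:

```python
# October:  (x/100)*2000 + (y/100)*2000 = 700  ->  20x + 20y = 700
# November: (x/100)*2000 + (y/100)*3000 = 900  ->  20x + 30y = 900
from fractions import Fraction

y = Fraction(900 - 700, 30 - 20)   # subtracting the equations: 10y = 200
x = (Fraction(700) - 20 * y) / 20  # back-substitute into 20x + 20y = 700
print(x, y)                        # 15 and 20, i.e. x = 15% -> answer (D)
```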
2019-07-19 03:58:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4165878891944885, "perplexity": 6196.3023861874735}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525974.74/warc/CC-MAIN-20190719032721-20190719054721-00451.warc.gz"}
https://codeforces.com/problemset/problem/1470/D
D. Strange Housing

time limit per test: 1 second
memory limit per test: 256 megabytes
input: standard input
output: standard output

Students of Winter Informatics School are going to live in a set of houses connected by underground passages. Teachers are also going to live in some of these houses, but they cannot be accommodated randomly. For safety reasons, the following must hold:

• All passages between two houses will be closed, if there are no teachers in both of them. All other passages will stay open.
• It should be possible to travel between any two houses using the underground passages that are open.
• Teachers should not live in houses directly connected by a passage.

Please help the organizers to choose the houses where teachers will live to satisfy the safety requirements or determine that it is impossible.

Input

The first input line contains a single integer $t$ — the number of test cases ($1 \le t \le 10^5$). Each test case starts with two integers $n$ and $m$ ($2 \le n \le 3 \cdot 10^5$, $0 \le m \le 3 \cdot 10^5$) — the number of houses and the number of passages. Then $m$ lines follow, each of them contains two integers $u$ and $v$ ($1 \le u, v \le n$, $u \neq v$), describing a passage between the houses $u$ and $v$. It is guaranteed that there are no two passages connecting the same pair of houses. The sum of values $n$ over all test cases does not exceed $3 \cdot 10^5$, and the sum of values $m$ over all test cases does not exceed $3 \cdot 10^5$.

Output

For each test case, if there is no way to choose the desired set of houses, output "NO". Otherwise, output "YES", then the total number of houses chosen, and then the indices of the chosen houses in arbitrary order.

Examples

Input
2
3 2
3 2
2 1
4 2
1 4
2 3

Output
YES
2
1 3
NO

Input
1
17 27
1 8
2 9
3 10
4 11
5 12
6 13
7 14
8 9
8 14
8 15
9 10
9 15
10 11
10 15
10 17
11 12
11 17
12 13
12 16
12 17
13 14
13 16
14 16
14 15
15 16
15 17
16 17

Output
YES
8
1 3 4 5 6 9 14 17

Note

The picture below shows the second example test.
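One standard approach (a sketch of the common greedy idea, not the official editorial code): an answer exists iff the graph is connected; traverse it with BFS, and when a house is first processed, choose it for a teacher iff none of its neighbours has been chosen yet.

```python
from collections import deque

def strange_housing(n, edges):
    """Greedy BFS colouring; returns the chosen houses, or None for 'NO'."""
    adj = [[] for _ in range(n + 1)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    visited = [False] * (n + 1)
    chosen = [False] * (n + 1)
    order, queue = [], deque([1])
    visited[1] = True
    while queue:
        u = queue.popleft()
        order.append(u)
        for w in adj[u]:
            if not visited[w]:
                visited[w] = True
                queue.append(w)

    if len(order) < n:
        return None  # graph disconnected -> impossible

    # Decide in BFS order: choose u unless a neighbour is already chosen.
    # An unchosen house then has a chosen neighbour processed before it,
    # so its open passage attaches it to the already-connected core.
    for u in order:
        if not any(chosen[w] for w in adj[u]):
            chosen[u] = True
    return [u for u in range(1, n + 1) if chosen[u]]

# On the first example: strange_housing(3, [(3, 2), (2, 1)]) -> [1, 3],
# and strange_housing(4, [(1, 4), (2, 3)]) -> None, matching the output.
```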
2021-04-10 18:30:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34097641706466675, "perplexity": 651.2270227461183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057476.6/warc/CC-MAIN-20210410181215-20210410211215-00630.warc.gz"}
http://qurope.eu/aggregator
## Feed aggregator

### Entanglement Wedge Reconstruction of Infinite-dimensional von Neumann Algebras using Tensor Networks. (arXiv:1910.06328v1 [hep-th])

arXiv.org: Quantum Physics - 3 hours 13 min ago

Quantum error correcting codes with finite-dimensional Hilbert spaces have yielded new insights on bulk reconstruction in AdS/CFT. In this paper, we give an explicit construction of a quantum error correcting code where the code and physical Hilbert spaces are infinite-dimensional. We define a von Neumann algebra of type II$_1$ acting on the code Hilbert space and show how it is mapped to a von Neumann algebra of type II$_1$ acting on the physical Hilbert space. This toy model demonstrates the equivalence of entanglement wedge reconstruction and the exact equality of bulk and boundary relative entropies in infinite-dimensional Hilbert spaces.

Categories: Journals, Physics

### Statistical localization: from strong fragmentation to strong edge modes. (arXiv:1910.06341v1 [cond-mat.str-el])

arXiv.org: Quantum Physics - 3 hours 13 min ago

Certain disorder-free Hamiltonians can be non-ergodic due to a *strong fragmentation* of the Hilbert space into disconnected sectors. Here, we characterize such systems by introducing the notion of 'statistically localized integrals of motion' (SLIOM), whose eigenvalues label the connected components of the Hilbert space. SLIOMs are not spatially localized in the operator sense, but appear localized to sub-extensive regions when their expectation value is taken in typical states with a finite density of particles. We illustrate this general concept on several Hamiltonians, both with and without dipole conservation. Furthermore, we demonstrate that there exist perturbations which destroy these integrals of motion in the bulk of the system, while keeping them on the boundary. This results in statistically localized *strong zero modes*, leading to infinitely long-lived edge magnetizations along with a thermalizing bulk, constituting the first example of such strong edge modes in a non-integrable model. We also show that in a particular example, these edge modes lead to the appearance of topological string order in a certain subset of highly excited eigenstates. Some of our suggested models can be realized in Rydberg quantum simulators.

Categories: Journals, Physics

### Quantum certification and benchmarking. (arXiv:1910.06343v1 [quant-ph])

arXiv.org: Quantum Physics - 3 hours 13 min ago

Concomitant with the rapid development of quantum technologies, challenging demands arise concerning the certification and characterization of devices. The promises of the field can only be achieved if stringent levels of precision of components can be reached and their functioning guaranteed. This Expert Recommendation provides a brief overview of the known characterization methods of certification, benchmarking, and tomographic recovery of quantum states and processes, as well as their applications in quantum computing, simulation, and communication.

Categories: Journals, Physics

### Proposal for a new quantum theory of gravity V: Karolyhazy uncertainty relation, Planck scale foam, and holography. (arXiv:1910.06350v1 [gr-qc])

arXiv.org: Quantum Physics - 3 hours 13 min ago

The Karolyhazy uncertainty relation is the statement that if a device is used to measure a length $l$, there will be a minimum uncertainty $\delta l$ in the measurement, given by $(\delta l)^3 \sim L_P^2\; l$. This is a consequence of combining the principles of quantum mechanics and general relativity. In this note we show how this relation arises in our approach to quantum gravity, in a bottom-up fashion, from the matrix dynamics of atoms of space-time-matter. We use this relation to define a space-time-matter foam at the Planck scale, and to argue that our theory is holographic. By coarse graining over time scales larger than Planck time, one obtains the laws of quantum gravity. Quantum gravity is not a Planck scale phenomenon; rather it comes into play whenever no classical space-time background is available to describe a quantum system. Space-time and classical general relativity arise from spontaneous localisation in a highly entangled quantum gravitational system. The Karolyhazy relation continues to hold in the emergent theory. An experimental confirmation of this relation will constitute a definitive test of the quantum nature of gravity.

Categories: Journals, Physics

### Topological Spin Liquids: Robustness under perturbations. (arXiv:1910.06355v1 [cond-mat.str-el])

arXiv.org: Quantum Physics - 3 hours 13 min ago

We study the robustness of the paradigmatic Resonating Valence Bond (RVB) spin liquid and its orthogonal version, the quantum dimer model, on the kagome lattice. The non-orthogonality of singlets in the RVB model and the induced finite length scale not only make it difficult to analyze, but can also significantly affect its physics, such as its resilience to perturbations. Surprisingly, we find that this is not the case: The robustness of the RVB spin liquid is not affected by the finite correlation length, which demonstrates that the dimer model forms a viable model for studying RVB physics under perturbations. A microscopic analysis, based on tensor networks, allows us to trace this robustness back to two universal mechanisms: First, the dominant correlations in the RVB are spinon correlations, making the state robust against doping with visons. Second, reflection symmetry stabilizes the spin liquid against doping with spinons, by forbidding mixing of the initially dominant correlations with the correlations which lead to the breakdown of topological order.

Categories: Journals, Physics

### Topological states in non-Hermitian two-dimensional Su-Schrieffer-Heeger model. (arXiv:1910.06362v1 [cond-mat.mes-hall])

arXiv.org: Quantum Physics - 3 hours 13 min ago

A non-Hermitian topological insulator with real spectrum is interesting in the theory of non-Hermitian extension of topological systems. We find an experimentally realizable example of a two-dimensional non-Hermitian topological insulator with real spectrum. We consider the two-dimensional Su-Schrieffer-Heeger (SSH) model with gain and loss. We introduce a non-Hermitian polarization vector to explore the topological phase and show that topological edge states in the band gap exist in the system.

Categories: Journals, Physics

### Seeing topological edge and bulk currents in time-of-flight images. (arXiv:1910.06446v1 [cond-mat.mes-hall])

arXiv.org: Quantum Physics - 3 hours 13 min ago

Here we provide a general methodology to directly measure the topological currents emerging in the optical lattice implementation of the Haldane model. Alongside the edge currents supported by gapless edge states, transverse currents can emerge in the bulk of the system whenever the local potential is varied in space, even if it does not cause a phase transition. In optical lattice implementations the overall harmonic potential that traps the atoms provides the boundaries of the topological phase that supports the edge currents, as well as providing the potential gradient across the topological phase that gives rise to the bulk current. Both the edge and bulk currents are resilient to several experimental parameters such as trapping potential, temperature and disorder. We propose to investigate the properties of these currents directly from time-of-flight images with both short-time and long-time expansions.

Categories: Journals, Physics

### An Unpublished Debate Brought to Light: Karl Popper's Enterprise against the Logic of Quantum Mechanics. (arXiv:1910.06450v1 [physics.hist-ph])

arXiv.org: Quantum Physics - 3 hours 13 min ago

Karl Popper published, in 1968, a paper that allegedly found a flaw in a very influential article of Birkhoff and von Neumann, which pioneered the field of "quantum logic". Nevertheless, nobody rebutted Popper's criticism in print for several years. This has been called in the historiographical literature an "unsolved historical issue". Although Popper's proposal turned out to be merely based on misinterpretations, and was eventually abandoned by the author himself, this paper aims at providing a resolution to such historical open issues. I show that (i) Popper's paper was just the tip of an iceberg of a much vaster campaign conducted by Popper against quantum logic (which encompassed several more unpublished papers that I retrieved); and (ii) that Popper's paper stimulated a heated debate that remained however confined within private correspondence.

Categories: Journals, Physics

### Active versus Passive Coherent Equalization of Passive Linear Quantum Systems. (arXiv:1910.06462v1 [eess.SY])

arXiv.org: Quantum Physics - 3 hours 13 min ago

The paper considers the problem of equalization of passive linear quantum systems. While our previous work was concerned with the analysis and synthesis of passive equalizers, in this paper we analyze coherent quantum equalizers whose annihilation (respectively, creation) operator dynamics in the Heisenberg picture are driven by both quadratures of the channel output field. We show that the characteristics of the input field must be taken into consideration when choosing the type of the equalizing filter. In particular, we show that for thermal fields allowing the filter to process both quadratures of the channel output may not improve mean square accuracy of the input field estimate, in comparison with passive filters. This situation changes when the input field is 'squeezed'.

Categories: Journals, Physics

### On the Status of Conservation Laws in Physics: Implications for Semiclassical Gravity. (arXiv:1910.06473v1 [gr-qc])

arXiv.org: Quantum Physics - 3 hours 13 min ago

We start by surveying the history of the idea of a fundamental conservation law and briefly examine the role conservation laws play in different classical contexts. In such contexts we find conservation laws to be useful, but often not essential. Next we consider the quantum setting, where the conceptual problems of the standard formalism obstruct a rigorous analysis of the issue. We then analyze the fate of energy conservation within the various viable paths to address such conceptual problems; in all cases we find no satisfactory way to define a (useful) notion of energy that is generically conserved. Finally, we focus on the implications of this for the semiclassical gravity program and conclude that Einstein's equations cannot be said to always hold.

Categories: Journals, Physics

### Nonlocal quantum correlations under amplitude damping decoherence. (arXiv:1910.06483v1 [quant-ph])

arXiv.org: Quantum Physics - 3 hours 13 min ago

Different nonlocal quantum correlations of entanglement, steering and Bell nonlocality are defined with the help of local hidden state (LHS) and local hidden variable (LHV) models. Considering their unique roles in quantum information processing, it is of importance to understand the individual nonlocal quantum correlations as well as their relationship. Here, we investigate the effects of amplitude damping decoherence on different nonlocal quantum correlations. In particular, we have theoretically and experimentally shown that the entanglement sudden death phenomenon is distinct from those of steering and Bell nonlocality. In our scenario, we found that all the initial states present sudden death of steering and Bell nonlocality, while only some of the states show entanglement sudden death. These results suggest that the environmental effect can be different for different nonlocal quantum correlations, and thus, it provides distinct operational interpretations of different quantum correlations.

Categories: Journals, Physics

### Two-color Bell states heralded via entanglement swapping. (arXiv:1910.06506v1 [quant-ph])

arXiv.org: Quantum Physics - 3 hours 13 min ago

We report on an experiment demonstrating entanglement swapping of time-frequency entangled photons. We perform a frequency-resolved Bell-state measurement on the idler photons from two independent entangled photon pairs, which projects the signal photons onto a two-color Bell state. We verify entanglement in this heralded state using two-photon interference and observing quantum beating without the use of filters, indicating the presence of two-color entanglement. Our method could lend itself to use as a highly-tunable source of frequency-bin entangled single photons.

Categories: Journals, Physics

### Quantum sensing with a single-qubit pseudo-Hermitian system. (arXiv:1910.06553v1 [quant-ph])

arXiv.org: Quantum Physics - 3 hours 13 min ago

Quantum sensing exploits fundamental features of quantum systems to achieve highly efficient measurement of physical quantities. Here, we propose a strategy to realize a single-qubit pseudo-Hermitian sensor from a dilated two-qubit Hermitian system. The pseudo-Hermitian sensor exhibits divergent susceptibility in dynamical evolution that does not necessarily involve an exceptional point. We demonstrate its potential advantages to overcome noises that cannot be averaged out by repetitive measurements. The proposal is feasible with the state-of-the-art experimental capability in a variety of qubit systems, and represents a step towards the application of non-Hermitian physics in quantum sensing.

Categories: Journals, Physics

### Simultaneous measurement of DC and AC magnetic fields at the Heisenberg limit. (arXiv:1910.06577v1 [physics.atom-ph])

arXiv.org: Quantum Physics - 3 hours 13 min ago

High-precision magnetic field measurement is a ubiquitous issue in physics and a critical task in metrology. Generally, a magnetic field has DC and AC components, and it is hard to extract both DC and AC components simultaneously. The conventional Ramsey interferometry can easily measure DC magnetic fields, while it becomes invalid for AC magnetic fields since the accumulated phases may average to zero. Here, we propose a scheme for simultaneous measurement of DC and AC magnetic fields by combining Ramsey interferometry and rapid periodic pulses. In our scheme, the interrogation stage is divided into two signal accumulation processes linked by a unitary operation. In the first process, only the DC component contributes to the accumulated phase. In the second process, by applying multiple rapid periodic $\pi$ pulses, only the AC component gives rise to the accumulated phase. By selecting a suitable input state and the unitary operations in the interrogation and readout stages, the DC and AC components can be extracted by population measurements. In particular, if the input state is a GHZ state and two interaction-based operations are applied during the interferometry, the measurement precisions of DC and AC magnetic fields can approach the Heisenberg limit simultaneously. Our scheme provides a feasible way to achieve Heisenberg-limited simultaneous measurement of DC and AC fields.

Categories: Journals, Physics

### Quantum speed limit and shortcuts to adiabaticity in coherent many-particle systems. (arXiv:1910.06581v1 [quant-ph])

arXiv.org: Quantum Physics - 3 hours 13 min ago

We discuss the effects of many-body coherence on the quantum speed limit in ultracold atomic gases. Our approach is focused on two related systems, spinless fermions and the bosonic Tonks-Girardeau gas, which possess equivalent density dynamics but very different coherence properties. To illustrate the effect of the coherence on the dynamics we consider squeezing an anharmonic potential which confines the particles and find that the quantum speed limit exhibits subtle, but fundamental, differences between the atomic species. Furthermore, we explore the difference in the driven dynamics by implementing a shortcut to adiabaticity designed to reduce spurious excitations. We show that collisions between the strongly interacting bosons can lead to changes in the coherence which results in larger speed limits.

Categories: Journals, Physics

### Quantum East model: localization, non-thermal eigenstates and slow dynamics. (arXiv:1910.06616v1 [cond-mat.stat-mech])

arXiv.org: Quantum Physics - 3 hours 13 min ago

We study in detail the properties of the quantum East model, an interacting quantum spin chain inspired by simple kinetically constrained models of classical glasses. Through a combination of analytics, exact diagonalization and tensor network methods we show the existence of a fast-to-slow transition throughout the spectrum that follows from a localization transition in the ground state. On the slow side, we explicitly construct a large (exponential in size) number of non-thermal states which become exact finite-energy-density eigenstates in the large size limit, and -- through a "super-spin" generalization -- a further large class of area-law states guaranteed to display very slow relaxation. Under slow conditions many eigenstates have large overlap with product states and can be approximated well by matrix product states at arbitrary energy densities. We discuss implications of our results for slow thermalization and non-ergodicity more generally in systems with constraints.

Categories: Journals, Physics

### The minimum parameterization of the wave function for the many-body electronic Schrödinger equation. I. Theory and ansatz. (arXiv:1910.06633v1 [physics.chem-ph])

arXiv.org: Quantum Physics - 3 hours 13 min ago

The minimum parameterization of the wave function is derived for the time-independent many-body problem of identical fermions. It is shown that the exponential scaling with the number of particles plaguing all other correlation methods stems from the expansion of the wave function in one-particle basis sets. It is demonstrated that using a geminal basis, which fulfills a Lie algebra, the parametrization of the exact wave function becomes independent of the number of particles and only scales quadratically with the number of basis functions in the optimized basis. The resulting antisymmetrized geminal power wave function is shown to fulfill the necessary and sufficient conditions for the exact wavefunction, treat all electrons and electron pairs equally, be invariant to all orbital rotations and virtual-virtual and occupied-occupied geminal rotations, be the most compact representation of the exact wave function possible and contain exactly the same amount of information as the two-particle reduced density matrix. These findings may have severe consequences for quantum computing using identical fermions since the amount of information stored in a state is very little. A discussion of how the most compact wave function can be derived in general is also presented. Due to the breaking of the scaling wall for the exact wave function it is expected that even systems of biological relevance can be treated exactly in the near future.

Categories: Journals, Physics

### Bose-Einstein condensate comagnetometer. (arXiv:1910.06642v1 [physics.atom-ph])

arXiv.org: Quantum Physics - 3 hours 13 min ago

We describe a comagnetometer employing the $f=1$ and $f=2$ ground state hyperfine manifolds of a $^{87}$Rb spinor Bose-Einstein condensate as co-located magnetometers. The hyperfine manifolds feature nearly opposite gyromagnetic ratios and thus the sum of their precession angles is only weakly coupled to external magnetic fields, while being highly sensitive to any effect that rotates both manifolds in the same way. The $f=1$ and $f=2$ transverse magnetizations and azimuth angles are independently measured by non-destructive Faraday rotation probing, and we demonstrate a $44.0(8)\text{dB}$ common-mode rejection in good agreement with theory. We show how spin-dependent interactions can be used to inhibit $2\rightarrow 1$ hyperfine relaxing collisions, extending to $\sim 1\text{s}$ the transverse spin lifetime of the $f=1,2$ mixtures. The technique could be used in high sensitivity searches for new physics on sub-millimeter length scales, precision studies of ultra-cold collision physics, and angle-resolved studies of quantum spin dynamics.

Categories: Journals, Physics

### Filter-free single-photon quantum dot resonance fluorescence in an integrated cavity-waveguide device. (arXiv:1910.06806v1 [quant-ph])

arXiv.org: Quantum Physics - 3 hours 13 min ago

Semiconductor quantum dots embedded in micro-pillar cavities are excellent emitters of single photons when pumped resonantly. Often, the same spatial mode is used to both resonantly excite a quantum dot and to collect the emitted single photons, requiring cross-polarization to reduce the uncoupled scattered laser light. This inherently reduces the source brightness to 50%. Critically, for some quantum applications the total efficiency from generation to detection must be over 50%. Here, we demonstrate a resonant-excitation approach to creating single photons that is free of any cross-polarization, and in fact any filtering whatsoever. It potentially increases single-photon rates and collection efficiencies, and simplifies operation. This integrated device allows us to resonantly excite single quantum-dot states in several cavities in the plane of the device using connected waveguides, while the cavity-enhanced single-photon fluorescence is directed vertically (off-chip) in a Gaussian mode. We expect this design to be a prototype for larger chip-scale quantum photonics.

Categories: Journals, Physics

### Quantum speed limits and the maximal rate of entropy production. (arXiv:1910.06811v1 [quant-ph])

arXiv.org: Quantum Physics - 3 hours 13 min ago

The Bremermann-Bekenstein bound sets a fundamental upper limit on the rate with which information can be processed. However, the original treatment heavily relies on cosmological properties and plausibility arguments. In the present analysis, we derive equivalent statements by relying on only two fundamental results in quantum information theory and quantum dynamics -- Fannes inequality and the quantum speed limit. As main results, we obtain Bremermann-Bekenstein-type bounds for the rate of change of the von Neumann entropy in quantum systems undergoing open system dynamics, and for the rate of change of the Shannon information over some logical basis in unitary quantum evolution.

Categories: Journals, Physics
2019-10-17 02:59:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5493916869163513, "perplexity": 1088.178301954735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672548.33/warc/CC-MAIN-20191017022259-20191017045759-00193.warc.gz"}
http://soft-matter.seas.harvard.edu/index.php/On-chip_background_noise_reduction_for_cell-based_assays_in_droplets
# On-chip background noise reduction for cell-based assays in droplets

Written by Kevin Tian, AP 225, Fall 2011 --Ktian 17:05, 9 November 2011 (UTC)

Title: On-chip background noise reduction for cell-based assays in droplets

Authors: Pascaline Mary, Angela Chen, Irwin Chen, Adam R. Abate and David A. Weitz

Journal: Lab on a Chip (2011), Vol 11, Pages 2066-2070

## Paper Summary

One area that has frequently become a problem in droplet-based microfluidics is that the technology is limited to homogeneous assays. The primary reason for this limitation is that it is difficult to wash out reagents from the reaction vessels. Multi-step processes or simultaneous reaction and detection steps are made extremely difficult by this inability to effectively remove excess reagents.

Previous washing methods utilized magnetic beads to isolate the reagents of interest. The method is applicable to cells by coating the surfaces of these cells with magnetic particles, and miniaturization of magnetic actuators is possible due to high field gradients. However, the major drawback is that appropriate control systems are required to manipulate these magnetic fields.

This paper proposes a high-throughput method capable of 14-times dilution. The framework involves using dielectrophoresis (DEP) to inject specific quantities of reagents into droplets. The system splits the larger droplets into eight smaller droplets via three consecutive T-junctions. This process can be repeated for a greater dilution factor. One great advantage of the technique is that it avoids the dangers of off-chip handling procedures, which can result in reagent loss, longer response times and cross-contamination of samples.

In order to demonstrate the effectiveness of the method, the authors apply it to detect an enzyme-mediated, site-specific protein labeling reaction on the surface of yeast. Repeating the dilution process twice allows for a reduction of background noise by up to a factor of 100 within the droplet.

## Materials and Methods

Microfluidic Device Fabrication

Fabrication is done entirely with polydimethylsiloxane (PDMS) using standard soft-lithography techniques. The droplet-dilution module requires electrodes for triggering the dilution process. The electrodes are made by fabricating microchannels of the desired shape, gently heating the PDMS device and injecting a low-melting-point solder into the channels (when cooled, we get solid solder electrodes). Electrical connections are made using eight-pin terminal blocks (from Digikey) that are glued to the device surface for strain relief; the glue used was UV-cured Loctite. On the electrical side, the applied voltages come from a function generator producing 20 kHz pulses that are amplified 1000x by a Trek high-voltage amplifier.

Microfluidic Device Operation

Figure 1.

Three separate devices were fabricated for this paper. These are all shown in Figure 1 (a, d and g, which are the drop maker, dilution/splitting and detection devices respectively). All fluid flows are controlled by syringe pumps (PHD 22/2000, Harvard Apparatus). The essential process is as follows:

• 1) Droplet Formation Module (Figure 1a)
• The module uses flow-focusing junctions with a 25x25 $\mu m$ nozzle (Figure 1b). Droplets are made in a fluorinated oil (HFE 7500, 3M, St Paul, Minnesota) containing 1.8% (wt/wt) of EA surfactant (Raindance Technologies).
• The resulting emulsion is collected in a 500 $\mu L$ glass syringe (Hamilton gastight) which is used as input to the 2nd module.
• Figure 1c shows $40 \mu m$ droplets flowing in the channels.
• 2) Dilution/Splitting Module (Figure 1d)
• Droplets are spaced out by an oil flow and enter a flow-focusing channel (50 $\mu m$ x 40 $\mu m$, height x width) that causes droplets to flow single-file.
• [Dilution] Near the electrodes, a T-junction connects the droplet channel to a secondary injection channel (which flows larger droplets with no 'reagent'). Via DEP the two droplets are fused together, thus 'diluting' the original droplet (see Figure 1e).
• [Splitting] After dilution, three successive T-junctions symmetrically break drops into smaller drops (see Figure 1f).
• 3) Detection Module (Figure 1g and 1h)
• The smaller drops resulting from the second module are re-injected into this module.
• The drops are spaced out by an additional oil flow and the intensity of their fluorescence is measured.

Droplet Detection

The detection module does not in and of itself detect the fluorescence signal, as the module is placed on an inverted microscope and the fluorescence is detected with a photomultiplier tube (PMT) attached to the epifluorescent port. A 20 mW cyan laser (Picarro) is used for excitation, which is aligned with the microscope's optical axis and focused onto the sample by a 40x objective lens. Droplets flow through a 15 $\mu m$ wide x 20 $\mu m$ high channel, where the fluorescence is detected. Data is collected via a National Instruments DAQ card, controlled using Labview. Static images are captured by an EM-CCD camera (Qimaging Rolera MGI).

Reagents in droplets

This section describes the experiment used to demonstrate the functionality of the above apparatus. The primary droplet solution contains fluorescein at 1 mM, buffered with 1$\times$ tris-buffered saline (TBS). Drops are diluted with pure 1$\times$ TBS. The droplet enzymatic assay is performed by using a droplet maker with two inlet channels, one for a yeast suspension and the other for substrate and enzymes.

Yeast cells:
• Grown in Yeast-extract Peptone Dextrose (YPD) medium.
• Cell density is measured, followed by twice centrifuging and resuspending at appropriate concentrations in TBS containing 1 $mg~mL^{-1}$ of bovine serum albumin (BSA).
• In order to prevent sedimentation and to match the surrounding medium's density with the yeast density, 35% (v/v) Optiprep (Axis-Shield) is added to the suspension.

Reagent solution is composed of:
• 40 mM 4'-phosphopantetheinyl transferase (SFP synthase)
• 10 mM CoA-488 (New England Biolabs)
• 10 mM $MgCl_2$
• 780 $\mu g~mL^{-1}$ BSA

## Results

Dilution Evaluation

Figure 2.

The droplet maker produces droplets containing fluorescein at high volume fraction, which are then injected into the dilution/splitting module via tubing connecting the two devices. Flow rates are adjusted accordingly to achieve synchronization between reagent drop reinjection and dilution buffer injection. Some other minor details regarding the adjustments necessary to maintain continuous flow and generation of droplets are discussed.

Image analysis is performed to determine drop radii, which can be used to compute the dilution ratio (simply a ratio of volumes). If we define quantities $Q_{drop},~Q_{inject}$, representing the flow rates of reagent drops and injected buffer respectively, then we can claim that the dilution ratio equals the flow-rate ratio:

$flowrate~ratio = {(Q_{inject}+Q_{drop}) \over Q_{drop}}$

This assumes a continuous droplet formation rate (which can be adjusted for optimal rates).
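To make the arithmetic concrete, here is a small sketch (my own illustration; the flow-rate numbers are hypothetical, not taken from the paper):

```python
q_drop = 100.0    # reagent-drop reinjection flow rate, uL/h (assumed)
q_inject = 900.0  # dilution-buffer flow rate, uL/h (assumed)

dilution_ratio = (q_inject + q_drop) / q_drop  # drop volume after fusion / before
c_ratio = q_drop / (q_drop + q_inject)         # C_final / C_initial

c_initial_mM = 1.0  # fluorescein concentration used in the paper's test
print(f"dilution ratio: {dilution_ratio:.0f}x")     # 10x for these numbers
print(f"C_final: {c_initial_mM * c_ratio:.2f} mM")  # 0.10 mM

# The three T-junctions cut each drop's volume to 1/8 but leave the
# concentration unchanged; a second pass through the module multiplies
# the dilution factors, e.g. 10x * 10x = 100x background reduction.
print(f"after two passes: {dilution_ratio ** 2:.0f}x")
```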
This relation is verified in Figure 2a, where the ratios of volume before/after injection and the dilution ratio are plotted. As one can see, it is essentially an x=y plot, to within error. Thus, to control the dilution ratio one needs only to adjust the flow-rate ratio. The concentration of fluorescein in the droplets before and after dilution ($C_{initial}, C_{final}$ respectively) is therefore given by:

${C_{final} \over C_{initial}} = {Q_{drop} \over {Q_{drop}+Q_{inject}}}$

Each T-junction reduces the droplet volume by 1/2, leading to a final reduction to 1/8 of the droplet volume at the end of 3 consecutive T-junctions. However, at high reagent injection rates secondary breakup events occur at the T-junction, limiting the drop injection flow rate to $100~\mu L~h^{-1}$.

Fluorescence measurement results are depicted in Figure 2b, where distributions are given of the normalized intensities emitted by drops. This was given for an initial drop fluorescein concentration of 1 $\mu M$, and for drops after 6, 8, and 10 times dilution. As one can see, there is a definite decrease in the standard deviation after successive dilutions. It is claimed that the standard deviations of diluted solutions are ~5 times larger than that of drops containing the same fluorescein concentration formed without dilution [Paraphrased from article. I believe this is a mistyped statement and it should be the other way round, since lower intensity peaks indicate higher dilution, yet very sharp normalized distributions. Plus the method would be pointless if the statement was true.].

Enzymatic Assays: SFP Labeling Reaction

Figure 3.

The purpose of using fluorescein above was due to the prevalence of fluorometric assays to quantify enzymatic reactions. Fluorescence Resonance Energy Transfer (FRET) avoids the washing steps before measurement, but has only a limited dynamic range of detection (relative to simple fluorescence measurements). Additionally, FRET requires pre-labeling cell surfaces with fluorophores (not always possible). Thus the authors present a fluorescent labeling reaction on the cell surface. The expression of the S6 peptide sequence on the cell surface is targeted in this reaction. In this reaction, the Alexa Fluor 488-substituted phosphopantetheine group of the CoA-488 substrate is covalently transferred to the serine side chain within the S6 sequence by the enzyme SFP synthase (Figure 3).

Without washing, one immediately notes the problem: there is a bulk fluorescence and a surface fluorescence that must be distinguished from one another. This is only possible by adjusting the fluorescent substrate to be smaller than the number of ligated molecules. However, this restricts one to non-ideal concentrations for the reaction. Dilution instead allows one to have a high fluorescent substrate concentration initially, only to remove significant fractions of it from the bulk, leaving mostly the molecules that had bound to the cell surface in the desired reaction.

Figure 4.

The enzymatic droplet assay was performed by coflowing a suspension containing yeast cells engineered to display the S6 peptide with a second stream of SFP synthase and the CoA-488 substrate. The enzymatic reaction begins after droplet formation (when the streams are mixed). The drops are incubated overnight before dilution in the dilution/splitting module. The drops are twice diluted, which should yield a 100-times reduction of the unreacted fluorophore concentration. An illustration of how effective this technique is appears in Figure 4a-f.
• Figures 4a-c depict bright-field images of the cell before dilution, after 10 times and after 100 times dilution.
• Figures 4d-f depict fluorescence images of the cell before dilution, after 10 times and after 100 times dilution.

As one can clearly see, the difference is quite profound, even by eye. An observation of Figure 4g yields the quantitative picture after dilution. The first peak, centered on I=0.142, is the fluorescent background. The brightest peak, centered on I=0.256, corresponds to cell-surface fluorescence. It is quite easy to separate the two intensities as being from one versus the other. [Note: What would have been really nice would have been a comparison of this same graph for an undiluted set of cells. The graph looks nice and all, and does demonstrate what it needs to. However I can't tell how big of an improvement it is data-wise without a direct comparison to what it was like before, after 10x and 100x dilution. Quite the shortcoming for the final conclusion of the paper.]

## Discussion & Conclusions

The authors have presented a method that provides the means to significantly reduce background noise in the performance of cell-based droplet assays. The washing step is performed through electrocoalescence followed by a breakup to reduce drop size post-dilution. It is also possible to screen cells at the same high throughput without external washing. External washing would break the emulsion and risk losing a significant fraction of cells. The improvement in the signal-to-noise ratio demonstrated in the paper certainly has many practical applications.

On a more personal note, the paper highlights how microfluidics takes advantage of the various properties of emulsions in the encapsulation of reagents in droplets suspended in an oil medium, thus allowing for separation of each droplet while still allowing high throughput by simple fluid flow. Although some details describing the process are lacking, it is not unexpected for a paper to omit such details.
2019-11-12 09:45:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5677669048309326, "perplexity": 3953.9370964898103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664808.68/warc/CC-MAIN-20191112074214-20191112102214-00003.warc.gz"}
http://stats.stackexchange.com/questions/31723/impact-of-regression-normality-assumption-on-model-comparison-prediction
# Impact of regression normality-assumption on model comparison & prediction?

This question is a continuation of the discussion here: How to test the statistical significance for categorical variable in linear regression? Following Macro's suggestion, I started a new thread. The new question is not limited to studying the inclusion/exclusion of a categorical variable; it's about general model comparison and prediction.

I found that my data is highly non-normal. The QQ plot is as follows: the curve is entirely below the straight 45 degree line, tangent to it, and shaped like the curve of f(x)=-x^2. It's not the entire set of points that's under the 45 degree line; rather, the curve, with its f(x)=-x^2 shape, is "tangent" to the 45 degree line. By "tangent" I mean that the points around the "tangent" point are actually above the 45 degree line, though very slightly. Therefore, visually speaking, most of the data (~98%) are below the 45 degree line... This is the residual QQ plot coming out of "plot(lmModel)"... Using qqnorm(lmModel$res); qqline(lmModel$res) I got exactly the same curves and lines.

My questions are:

1. If my end-goal is to use yhat to do prediction onto a wide data-set, does the non-normality of data matter?
2. For Model Comparison, the approaches pointed out by Macro probably won't work; are there alternative approaches that don't assume a Gaussian distribution?
3. What shall I do to fix the non-normality problem? (data-size: 10 variables, 1700 observations). Thank you!

Of general interest are two threads: stats.stackexchange.com/questions/2492/… and stats.stackexchange.com/questions/29731/… which may be useful - these are mostly related to (1). There are various threads on transformation so you may want to do a search to find something specific, which would mostly address (3). – Macro Jul 5 '12 at 20:00

Thank you Macro. However the two threads you posted above don't address the concern about prediction. Am I missing anything there? Thank you again! – Luna Jul 5 '12 at 22:04

1. Because you are using least squares, the nonnormality affects the regression coefficients and can hurt prediction. You may want to try a robust regression method.

2. Criteria like AIC and BIC look at closeness of the fit penalized by the number of parameters used. I do not think that normality is important in choosing between models using this type of criteria. But keep in mind that if all these models have nonnormal residuals, the fact that they all use least squares may mean that they all could be improved using a more robust fitting technique.

3. If you apply robust regression you do not have to "fix the nonnormality problem". Finding suitable transformations for the covariates in the model might be a way to "fix the nonnormality problem." But appropriate transformations may not be apparent.

Thank you! Why does the non-normality hurt the prediction? Does it ever "help" the prediction? How does a "robust regression" method help in this case? Thank you again! – Luna Jul 5 '12 at 22:06

@Luna The regression coefficients based on least squares are sensitive to outliers, so the slope of the regression is forced to fit outlying observations. This is because least squares minimizes the sum of squared errors. Robust regression uses different fitting criteria that don't penalize as much for large individual errors. For example, one robust method uses the sum of the absolute values of the errors, which does not blow up large errors: a term with an absolute error of 2 would have a squared error of 4. – Michael Chernick Jul 5 '12 at 22:18

I cannot conceive of a situation where a least squares fit to nonnormal data will improve prediction. – Michael Chernick Jul 5 '12 at 22:20

Thank you Mike. But I don't have outliers. It's just that I have highly non-normal data, and I was wondering what's an optimal solution under non-normal data and how does it impact my yhat? Thanks again! – Luna Jul 5 '12 at 22:22

Nonnormality can occur because of heavy skewness or heavy kurtosis. If you have heavy kurtosis there are probably some outliers in the data. They don't necessarily show up as large residuals. – Michael Chernick Jul 5 '12 at 22:33

The residual plot that you describe sounds like a right skewed distribution. One possibility is to fit a regression model that assumes a right skewed distribution rather than a normal distribution. The glm function can be used to fit a gamma distribution (which is right skewed). Another approach is to transform the data: a log transform on the y-variable or other Box-Cox transforms can help with skewness. The biggest problem with skewed data and regression is that the usual tests are based on normality, so you can fit the regression model using regular least squares or robust methods, then instead of the normal-based tests use permutation or bootstrap tests that do not depend on normality (but make sure you understand what assumptions you are making). For any of these, make sure that they make sense with the science and the questions that you are asking.
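To illustrate the robust-regression suggestion, a minimal sketch in Python with statsmodels (my own example; the synthetic gamma-distributed noise is just one way to mimic right-skewed residuals, not the poster's data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(1700, 10))  # 10 predictors, 1700 rows, as in the question
beta = np.arange(1, 11, dtype=float)
y = X @ beta + rng.gamma(shape=1.0, scale=2.0, size=1700)  # right-skewed errors

Xc = sm.add_constant(X)
ols = sm.OLS(y, Xc).fit()
rlm = sm.RLM(y, Xc, M=sm.robust.norms.HuberT()).fit()  # robust (Huber) fit

print(ols.params[1:4])  # OLS slopes, pulled around by the skewed tail
print(rlm.params[1:4])  # Huber slopes, less sensitive to large residuals
```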
2013-12-08 14:51:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7179003953933716, "perplexity": 1042.9047808013597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163065934/warc/CC-MAIN-20131204131745-00096-ip-10-33-133-15.ec2.internal.warc.gz"}
http://skozlov.net/algorithms/sort/insertion
## Idea

To add the next element to a sorted subarray, move the greater elements right and insert that element into the appropriate place. For example, suppose we have the subarray from 1 to 5 sorted and 3 to insert. First, we move 4 and 5 right; finally, we insert 3 into the gap.

## Time complexity

### Worst case

When inserting each element, the whole subarray is moved (this takes place for arrays sorted in reverse order), which leads to the arithmetic progression $1 + 2 + … + (n-1) = O(n^2)$.

### Best case

When inserting each element, nothing is moved, i.e. the input is already sorted. In this case, we just traverse the array once, which requires $O(n)$ operations.

### Average case

It depends on what is considered "average". For a length of $n$, let's consider all permutations of the sequence 1 to n and calculate the average number of moves per sequence. Note that the number of moves done by the insertion sort is exactly the number of inversions in the array. The average number of inversions in a permutation of $n$ elements is $\frac{n(n-1)}{4} = O(n^2)$.

Proof

Initially I tried to calculate the number of permutations of length $n$ with $k$ inversions. These numbers are known as Mahonian numbers (see M. Bóna, Combinatorics of Permutations, 2004, p. 43ff). Fortunately, we don't need to calculate them. Instead, let's see how the desired average numbers of inversions change when increasing $n$. Then we get a recurrence relation and solve it.

Creation of a permutation of length $n$ can be considered as inserting $n$ into a permutation of $\{1,...,n-1\}$. As a result, each of the $(n-1)!$ source permutations becomes $n$ permutations of length $n$. Inversions of a final permutation come from two sources:

1. Inversions of the source permutation.
2. Inversions added by inserting.

If we denote the total number of inversions in all permutations of length $n$ by $a_n$, the first source gives $na_{n-1}$ inversions. Regarding the second source, each of the $(n-1)!$ source permutations gives $0 + 1 + ... + (n-1)$ inversions. So $a_n = na_{n-1} + (n-1)! \sum_{k=1}^{n-1} k$.

Now it's easy to get a recurrence relation for the average number of inversions: $b_n = \frac{a_n}{n!} = \frac{a_{n-1}}{(n-1)!} + \frac{n-1}{2} = b_{n-1} + \frac{n-1}{2}$.

The generating function: $G(z) = \sum_{n=1}^\infty b_n z^n = \sum_{n=2}^\infty (b_{n-1} + \frac{n-1}{2}) z^n = \sum_{n=2}^\infty b_{n-1} z^n + \sum_{n=2}^\infty \frac{n-1}{2} z^n = zG(z) + \sum_{n=2}^\infty \frac{n-1}{2} z^n$, hence $G(z) = \frac{\sum_{n=2}^\infty \frac{n-1}{2} z^n}{1-z}$.

Transform $\frac{1}{1-z} = \sum_{k=0}^\infty z^k$ and group coefficients in the product of the two sums. Then $G(z) = \sum_{n=2}^\infty \left(\sum_{k=2}^n \frac{k-1}{2}\right) z^n$, so $b_n = \sum_{k=2}^n \frac{k-1}{2} = \frac{n(n-1)}{4}$.

Note that the average number of inversions is half the maximum one. In fact, the corresponding distribution is symmetric. (Histograms of the inversion counts for permutations of length 3 and of length 4, shown in the original page, illustrate this symmetry.)

## Stability

An element can be moved only past greater elements, so the order of equal elements never changes, and the sort is stable.

## Usage

Although this sort is not asymptotically optimal, its simplicity makes it super fast for short arrays.
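A minimal sketch of the algorithm described above (my own Python rendering; the original page gives no code):

```python
def insertion_sort(a):
    """Sort list a in place, ascending. O(n^2) worst case, O(n) best case."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Move strictly greater elements one slot right; the strict comparison
        # never moves an element past an equal one, which is why the sort is
        # stable. Each shift here corresponds to exactly one inversion.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        # Drop the saved element into the gap.
        a[j + 1] = key
    return a

print(insertion_sort([1, 2, 4, 5, 3]))  # the article's example: 3 slides past 5 and 4
```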
2020-08-15 04:13:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9321072697639465, "perplexity": 588.5808617594291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740679.96/warc/CC-MAIN-20200815035250-20200815065250-00032.warc.gz"}
https://www.physicsforums.com/threads/evaluate-this-index-integral-containing-trig.632108/
# Evaluate this index Integral containing Trig

1. Aug 29, 2012

### bugatti79

1. The problem statement, all variables and given/known data

Folks, evaluate $B_{11}$ given $\displaystyle B_{ij}=\int_0^1 (1+x) \frac{d \phi_i}{dx} \frac{d\phi_j}{dx} dx$ where $\phi_i= \sin i \pi x$ and $\phi_j=\sin j \pi x$

2. Relevant equations

3. The attempt at a solution

I calculate $\displaystyle B_{ij}=\int_0^1 (1+x)\,[i \pi \cos(i \pi x)]\,[j \pi \cos(j \pi x)]\,dx = ij \pi^2 \int_0^1 (1+x) \cdot \frac{1}{2}\left[\cos((i+j)\pi x)+\cos((i-j)\pi x)\right] dx$

$\displaystyle = \frac{ij \pi^{2}}{2} \int_0^1\left[\cos((i+j)\pi x)+\cos((i-j)\pi x)+x \cos((i+j)\pi x)+x \cos((i-j)\pi x)\right] dx$

Now for the second and last terms in the integrand, if we substitute $i=j=1$ after integrating we will get a 0 in the denominator... but the book calculates $B_{11}=\frac{3 \pi^2}{4}$.

What have I done wrong?

2. Aug 29, 2012

### SammyS (Staff Emeritus)

Did you mean to say "numerator"? cos(0) = 1

3. Aug 29, 2012

### bugatti79

Well no... looking at the second term, $\displaystyle \int_0^1 \cos((i-j)\pi x)\, dx= \left[\frac{\sin((i-j)\pi x)}{\pi(i-j)} \right]_0^1$, which is indeterminate when $i=j$... I don't know how that answer $\frac{3 \pi^2}{4}$ was evaluated...

4. Aug 29, 2012

### vela (Staff Emeritus)

You can't integrate it like that when i=j. Assume i=j, simplify the integrand, and then integrate.

5. Aug 29, 2012

### SammyS (Staff Emeritus)

Well, if i = j then you are integrating cos((0)πx), but cos((0)πx) = 1

6. Aug 29, 2012

### bugatti79

Thank you, I thought one has to insert the index values AFTER integration...? Where is the mathematical rule that asserts this? Cheers
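For completeness, a worked evaluation for $i=j=1$ confirming the book's answer (my own computation, following vela's advice to set $i=j$ before integrating). With $i=j=1$, $\cos((i-j)\pi x)=\cos 0 = 1$, so

$\displaystyle B_{11} = \pi^2 \int_0^1 (1+x)\cos^2(\pi x)\,dx = \pi^2\left[\int_0^1 \cos^2(\pi x)\,dx + \int_0^1 x\cos^2(\pi x)\,dx\right]$

Using $\cos^2(\pi x) = \tfrac{1}{2}(1+\cos(2\pi x))$, the first integral is $\tfrac{1}{2}$, and the second is $\tfrac{1}{4} + \tfrac{1}{2}\int_0^1 x\cos(2\pi x)\,dx = \tfrac{1}{4}$, since $\int_0^1 x\cos(2\pi x)\,dx = 0$ by integration by parts. Hence $B_{11} = \pi^2\left(\tfrac{1}{2}+\tfrac{1}{4}\right) = \tfrac{3\pi^2}{4}$.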
2017-08-21 01:33:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8073078989982605, "perplexity": 3838.869448184288}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107065.72/warc/CC-MAIN-20170821003037-20170821023037-00621.warc.gz"}
http://forum.arduino.cc/index.php?topic=187290.0
Topic: More 1.5.3 fun...  (Read 977 times)

« on: September 10, 2013, 12:01:35 pm »

I have renamed all the library.properties files to get my libraries added, but now I have a different issue...

Arduino: 1.5.3 (Windows 7), Board: "Arduino Due (Programming Port)"

In file included from C:\Program Files (x86)\Arduino\libraries\SD/utility/Sd2Card.h:26,
                 from C:\Program Files (x86)\Arduino\libraries\SD/utility/SdFat.h:27,
                 from C:\Program Files (x86)\Arduino\libraries\SD/SD.h:20,
                 from Revision_2_3BETA.ino:2:
C:\Program Files (x86)\Arduino\libraries\SD/utility/Sd2PinMap.h:23: fatal error: avr/io.h: No such file or directory
compilation terminated.

Line 23 in Sd2PinMap.h says: "#include <avr/io.h>"

The file is located in: C:\Program Files (x86)\Arduino\hardware\tools\avr\avr\include\avr

Why can't it find this file? Do I need to copy it into a different directory?

« Reply #1 on: September 10, 2013, 03:14:33 pm »

This sort of question seems to come up a lot. You cannot use AVR system headers in your ARM Cortex code. The Due is a totally different processor architecture; it has nothing to do with AVR at all. If you want to use libraries that make direct reference to AVR libraries and/or headers, then you will have to rewrite the library to work around this.

« Reply #2 on: September 10, 2013, 07:48:37 pm »

Are you sure you are in the 1.5.3 libraries? My copy of Sd2PinMap.h has this include on line 39, and it is only used when compiling for an AVR-based Arduino.

Code:
#elif defined(__AVR__)  // Other AVR based Boards follows

// Warning this file was generated by a program.
#ifndef Sd2PinMap_h
#define Sd2PinMap_h

#include <avr/io.h>

« Reply #3 on: September 11, 2013, 09:23:39 am »

I know the Due is a SAM; I had it selected but still got that message, which confused me. I did manage to get 1.5.3 and 1.5.4 working, but my sketch didn't run on either of them, or even on the nightly build. Went back to 1.5.2 and it's running flawlessly. I think I'll just wait out a few versions...

« Reply #4 on: September 11, 2013, 09:41:03 am »

Stu1987, can you send me your sketch for testing?
2014-09-21 20:21:29
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.850531280040741, "perplexity": 14730.389550067546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657135930.79/warc/CC-MAIN-20140914011215-00299-ip-10-234-18-248.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2287427/how-to-define-a-function-in-first-order-logic-that-return-different-values-each
How to define a function in first order logic that returns different values each time?

How do I represent f() in first-order logic so that the following scenario holds?

M = f()
N = f()

where $M \neq N$.

• You don't. The axioms of equality demand that $M=N$ if $M=f()$ and $N=f()$. – Hagen von Eitzen May 19 '17 at 6:25
• How can I represent a function f so that $M \neq N$? – Tom May 19 '17 at 6:34
• You cannot: a function in mathematics is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. – Mauro ALLEGRANZA May 19 '17 at 6:42
• But the symbol M=f( ) is meaningless: what is the input value? When you say "a function that returns different values each time", what do you mean by "each time"? Different input values? Are there instants of time? – Mauro ALLEGRANZA May 19 '17 at 6:43
• Yeah, by definition a function takes exactly one value on each argument. And there is no such notion as "each time" in logic. What you can do, however, is for instance to consider a function with one more argument, $t$, express that $t$ ranges over a linearly ordered set, and think of $f(x,t)$ as being the value "at time $t$" of a "function that depends on time". – Régis May 19 '17 at 7:07
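A small Python sketch of Régis's suggestion (the particular values are arbitrary illustrations, not part of the original thread): a zero-argument f() that "returns different values each time" is not a mathematical function, but f(t) with an explicit time argument is.

```python
# f depends on an explicit "time" argument t, so it is an ordinary function:
# different outputs come only from different inputs.
def f(t):
    return 2 * t + 1   # hypothetical time-dependent value

M, N = f(0), f(1)
assert M != N          # different values, because the arguments differ
assert f(0) == f(0)    # same argument always gives the same value
```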
2021-06-21 06:54:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.710269033908844, "perplexity": 535.7592444747492}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488268274.66/warc/CC-MAIN-20210621055537-20210621085537-00284.warc.gz"}
https://www.physicsforums.com/threads/can-someone-tell-me-the-essence-of-derivative.13405/
# Can someone tell me the essence of the derivative?

1. Jan 28, 2004

### franz32

How can I "easily" solve or understand applications of the derivative involving rates of change?

2. Jan 28, 2004

### matt grime

This is a bit tricky to answer, because the notion of 'rate of change' is, to me, the easy explanation of what the derivative is.

Differentiation from first principles. Suppose we want to know the slope of the graph of some function at a point. We can try drawing the tangent line by hand, hoping we get a good enough fit, and then finding its gradient (it's just a straight line after all). But the problem there is the standard 'real life does not correspond to the perfect world we imagine mathematically'. Instead, let's think about drawing a little chord from the point on the curve to a point a bit further along, say e in the x direction. The smaller we let e get, the better that chord's slope approximates the tangent's. In numbers, if f is the function, we want to know what happens to $\{f(x+e)-f(x)\}/e$ as e gets small. To see where that expression comes from, try drawing the graph of something, picking some x and some e, and 'joining up the dots...' (I'm sorry, does anyone know if the tex mode here allows us to use xypic?)

In words, that quantity is looking at the instantaneous rate of change, if you like. We'll try to explain why later. Let's do an example: x^n. Work out the binomial expansion of (x+e)^n, subtract x^n, divide everything by e, and see what happens as e gets small. As is often the case, here you can just set e to be zero and nothing goes wrong. You should get nx^(n-1). Do the same for sin, knowing that for small e, sin(e) is practically e.

Let's try to get back to rates of change. Kinetics. Suppose we are using the standard equations of motion with a fixed acceleration. Suppose our initial velocity is u and we accelerate for e seconds; what's our final velocity? u + ae, where a is the acceleration. Now, a is the rate of change of speed with respect to time, as we all know and understand. What this tells us, writing it more formally, is: let v(t) be the velocity at time t; then v(t+e) = v(t) + e·a, where a can be thought of as dv/dt. Generally, dv/dt isn't constant, and the above equation should be approximate. So another way to interpret (actually just the same, but rewritten) derivatives is: f'(x) is the function that makes f(x+e) approximately equal to f(x) + e·f'(x); it's the linearization of f, if you like. The "approximately" needn't worry us too much at the moment.

How much of that do you know? What do you mean by 'solve' an application? Perhaps if you posted an example of what you were trying to solve?

Last edited: Jan 28, 2004

3. Jan 28, 2004

### franz32

Me again... Well, I was wondering how I can find the derivative of an area (let's say of a circle) with respect to its radius... those sorts of problems. Is there a general explanation for these kinds of problems?

4. Jan 28, 2004

### Tom Mattson

Staff Emeritus

Re: Me again...

Yes, and that's what Matt was driving at. He means that you take the limit of that expression as e→0. In your case, you have $A=f(r)=\pi r^2$. So, take $[f(r+e)-f(r)]/e$ and take the limit as e→0. Like so:

$f'(r)=\lim_{e\to 0}\frac{\pi(r+e)^2-\pi r^2}{e}$

Try to take it from there.

5. Jan 29, 2004

### lastlaugh

Another hint: see what you can factor out front of $\frac{\pi(r+e)^2-\pi r^2}{e}$, and you will learn one of the fundamental problem-solving techniques of derivatives.

6. Feb 1, 2004

### modmans2ndcoming

a derivative is....
All a derivative is, is the slope of a line that is tangent to a graph at a particular point P. That might sound a bit complicated, but think of a circle. If you draw a line that touches the circle at exactly one point, that line has an equation, and its slope (the change in y over the change in x) is the derivative at that particular point. If you are at the very top or bottom of the circle, the derivative will be zero, because the slope of a horizontal line is zero. If you are at the exact left or right of the circle, the derivative will be undefined, because the slope of a vertical line is undefined.

As for the slope with respect to the radius, you would need to manipulate the equation of the circle so that you are solving for x or y, so that you can take the derivative with respect to r. Just a bit of algebra before you start is all.

7. Feb 1, 2004

### modmans2ndcoming

re: re: me again

I think "e" is not a very good variable to use when defining the derivative, since "e" is a number and he/she will most certainly be using it in the next few weeks... no need to make it confusing for him/her. I use "h", but that is just me.

8. Feb 1, 2004

### modmans2ndcoming

Oh, also, as for easily solving: what methods of differentiation have you learned so far? If you are just learning the formal definition, don't worry, it will get easier when you learn other ways to take a derivative.... If you are beyond the formal definition, then it does not get much easier ;-)

9. Feb 2, 2004

### franz32

What methods have I learned so far? Well, I am already on the topic of optimization... and the intermediate value theorem.. =)

10. Feb 12, 2004

### franz32

Thank you very much! Hello guys, I really thank all of you who helped me with derivatives. I've learned it now. =)

11. Jul 23, 2004

### mathwonk

To take the derivative of the area of a circle with respect to the radius, compare the change in area to the change in radius. Note that the change in area for a small change in radius is the area of a small collar or ring, which looks like a rectangle rolled up. The height of the rectangle is the change in radius, and the length of the rolled-up rectangle is the circumference; hence dividing by the change in radius gives the circumference. Therefore, the derivative of area with respect to radius is the circumference. To check this, note that $\frac{d}{dr}\pi r^2 = 2\pi r$, which is the circumference.
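mathwonk's geometric argument can also be checked numerically. A small Python sketch (the radius r = 3 and the step e = 1e-6 are arbitrary choices, not from the thread): the difference quotient of the circle's area approaches the circumference 2πr as e gets small.

```python
# Compare the difference quotient [f(r+e) - f(r)] / e of the circle's area
# with the circumference 2*pi*r.
from math import pi

def area(r):
    return pi * r ** 2

r, e = 3.0, 1e-6
slope = (area(r + e) - area(r)) / e  # exactly pi*(2r + e)
print(slope)        # ~18.8495591
print(2 * pi * r)   # 18.8495559... (circumference)
```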
2018-11-14 01:12:14
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8298181891441345, "perplexity": 744.225772040525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741569.29/warc/CC-MAIN-20181114000002-20181114022002-00243.warc.gz"}
https://testbook.com/question-answer/the-incomes-of-a-and-b-are-in-the-ratio-3-2-and--60773542f7142e56d76da788
The incomes of A and B are in the ratio 3 : 2 and their expenditures are in the ratio 5 : 3. If each saves Rs. 1,000, then B's income is

This question was previously asked in WBCS Prelims 2016 Official Paper

1. Rs. 3,000
2. Rs. 4,000
3. Rs. 6,000
4. Rs. 800

Answer: Option 2 : Rs. 4,000

Detailed Solution

Given:
The incomes of A and B are in the ratio 3 : 2.
The expenditures of A and B are in the ratio 5 : 3.
The saving of each is Rs. 1,000.

Concept Used:
Income = Expenditure + Saving

Calculation:
Let the incomes of A and B be 3x and 2x.
The expenditure of A = 3x - 1000
The expenditure of B = 2x - 1000
According to the question:
⇒ (3x - 1000)/(2x - 1000) = 5/3
⇒ 9x - 3000 = 10x - 5000
⇒ x = 2000
The income of B = 2 × 2000 = Rs. 4,000
∴ The income of B is Rs. 4,000.
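As a cross-check, the same equation can be solved symbolically (a sketch, assuming SymPy is available; not part of the original solution):

```python
# Solve (3x - 1000)/(2x - 1000) = 5/3 and read off B's income 2x.
import sympy as sp

x = sp.symbols('x', positive=True)
sol = sp.solve(sp.Eq((3 * x - 1000) / (2 * x - 1000), sp.Rational(5, 3)), x)
print(sol)         # [2000]
print(2 * sol[0])  # B's income: 4000
```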
2021-09-20 07:38:35
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.857607364654541, "perplexity": 3500.6111755755196}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057033.33/warc/CC-MAIN-20210920070754-20210920100754-00631.warc.gz"}
https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2007/bb214309%28v%3Doffice.12%29
# References Collection

Access Developer Reference

The References collection contains Reference objects representing each reference that's currently set.

## Remarks

The Reference objects in the References collection correspond to the list of references in the References dialog box, available by clicking References on the Tools menu. Each Reference object represents one selected reference in the list. References that appear in the References dialog box but haven't been selected aren't in the References collection.

You can enumerate through the References collection by using the For Each...Next statement.

The References collection belongs to the Microsoft Access Application object. Individual Reference objects in the References collection are indexed beginning with 1.
2019-10-19 18:44:12
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.871737003326416, "perplexity": 3222.7779183859875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986697439.41/warc/CC-MAIN-20191019164943-20191019192443-00200.warc.gz"}
https://www.lhscientificpublishing.com/Journals/articles/DOI-10.5890-DNC.2021.12.009.aspx
ISSN: 2164-6376 (print)
ISSN: 2164-6414 (online)

Discontinuity, Nonlinearity, and Complexity

Editors: Dimitry Volchenkov (Mathematics & Statistics, Texas Tech University, 1108 Memorial Circle, Lubbock, TX 79409, USA; Email: dr.volchenkov@gmail.com) and Dumitru Baleanu (Cankaya University, Ankara, Turkey; Institute of Space Sciences, Magurele-Bucharest, Romania)

Well-posedness and Stability for a Moore-Gibson-Thompson Equation with Internal Distributed Delay

Discontinuity, Nonlinearity, and Complexity 10(4) (2021) 693--703 | DOI:10.5890/DNC.2021.12.009

Abdelkader Braik$^1$, Abderrahmane Beniani$^2$, Khaled Zennir$^3$

$^1$ Department of Sciences and Technology, University of Hassiba Ben Bouali, Chlef, Algeria
$^2$ Department of Mathematics, BP 284, University Centre BELHADJ Bouchaib, Ain Temouchent 46000, Algeria
$^3$ Department of Mathematics, College of Sciences and Arts, Qassim University, Ar-Rass, Saudi Arabia

Abstract

In this work, we consider the Moore-Gibson-Thompson equation with distributed delay. We prove, under appropriate assumptions and smallness conditions on the parameters $\alpha$, $\beta$, $\gamma$ and $\mu$, that the problem is well-posed; then, by introducing suitable energy and Lyapunov functionals, we show that its solution decays to zero as $t$ tends to infinity.
2021-10-25 06:40:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.874201774597168, "perplexity": 9421.61808600216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587655.10/warc/CC-MAIN-20211025061300-20211025091300-00015.warc.gz"}
https://cs.stackexchange.com/questions/139650/existence-of-a-cfl-l-such-that-sqrtl-is-not-cfl/139653
# Existence of a CFL $L$ such that $\sqrt{L}$ is not CFL

Does there exist a CFL $L$ such that the language defined as $L' = \sqrt{L} = \{w \mid ww \in L\}$ is not CFL?

I feel that there is no such $L$, but I am unable to prove it. I am sorry, but I have not made any mentionable progress with my attempts on this problem. I would appreciate any hint towards the proof, or a language $L$ that satisfies this.

There is an example, and $L = \{a^nb^na^{2m}b^ka^k \mid n,m,k \in \mathbb{N}\}$ does the trick. We get that $\sqrt{L} = \{a^nb^na^n \mid n \in \mathbb{N}\}$, which is a standard example of a non-context-free language.

• Amazing! Thanks so much. Is it possible to create an example parametrized with one variable, say $n$? I had been trying to solve this problem using only such languages, which is probably why I couldn't find one. – bigbang Apr 28 at 18:42
• @bigbang Close: $\{ a^nb^n \mid n\in \mathbb N \} \cdot \{ a^na^n \mid n\in \mathbb N \} \cdot \{ b^na^n \mid n\in \mathbb N \}$. – Hendrik Jan Apr 28 at 19:43
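Not a proof, but a quick brute-force sanity check of the accepted answer in Python (reading $\mathbb{N}$ as starting at 1, which the claimed $\sqrt{L}$ requires): enumerate all words $w$ over $\{a,b\}$ up to length 9 and confirm that $ww \in L$ exactly for $w = a^nb^na^n$.

```python
# Enumerate words w and test ww against L = { a^n b^n a^(2m) b^k a^k : n,m,k >= 1 }.
from itertools import product

def in_L(s):
    T = len(s)
    if T % 2:
        return False
    for n in range(1, T // 2 + 1):
        for m in range(1, (T - 2 * n) // 2 + 1):
            k = T // 2 - n - m
            if k >= 1 and s == 'a' * n + 'b' * n + 'a' * (2 * m) + 'b' * k + 'a' * k:
                return True
    return False

found = {w for length in range(10)
           for w in map(''.join, product('ab', repeat=length))
           if in_L(w + w)}
expected = {'a' * n + 'b' * n + 'a' * n for n in (1, 2, 3)}  # |w| <= 9
assert found == expected
print(sorted(found, key=len))  # ['aba', 'aabbaa', 'aaabbbaaa']
```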
2021-06-25 01:29:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6511668562889099, "perplexity": 197.10672741369277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488560777.97/warc/CC-MAIN-20210624233218-20210625023218-00567.warc.gz"}
https://astro.paperswithcode.com/paper/mond-like-behavior-in-the-dirac-milne
MOND-like behavior in the Dirac-Milne universe -- Flat rotation curves and mass/velocity relations in galaxies and clusters

17 Feb 2021  ·  Gabriel Chardin, Yohan Dubois, Giovanni Manfredi, Bruce Miller, Clément Stahl ·

We show that in the Dirac-Milne universe (a matter-antimatter symmetric universe where the two components repel each other), rotation curves are generically flat beyond the characteristic distance of about 3 virial radii, and that a Tully-Fisher relation with exponent $\approx 3$ is satisfied. Using 3D simulations with a modified version of the RAMSES code, we show that the Dirac-Milne cosmology presents a Faber-Jackson relation with a very small scatter and an exponent of $\approx 3$ between the mass and the velocity dispersion. We also show that the mass derived from the rotation curves assuming Newtonian gravity is systematically overestimated compared to the mass really present. In addition, the Dirac-Milne universe, featuring a polarization between its matter and antimatter components, presents a behavior similar to that of MOND (Modified Newtonian Dynamics), characterized by an additional surface gravity compared to the Newtonian case. At the present epoch, the intensity of the additional gravitational field $g_{am}$ due to the presence of clouds of antimatter is of the order of a few $10^{-11}$ m/s$^2$, similar to the characteristic acceleration of MOND. We study the evolution of this additional acceleration $g_{am}$ and show that it depends on the redshift and is therefore not a fundamental constant. Combined with its known concordance properties on SNIa luminosity distance, age, nucleosynthesis, and structure formation, the Dirac-Milne cosmology may then represent an interesting alternative to $\Lambda$CDM, MOND, and other scenarios for explaining the Dark Matter and Dark Energy conundrum.
2022-12-09 23:46:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5451796650886536, "perplexity": 1116.0350399669219}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711552.8/warc/CC-MAIN-20221209213503-20221210003503-00217.warc.gz"}
https://en.m.wikipedia.org/wiki/Enigma_(machine)
# Enigma machine

Military Enigma machine, model "Enigma 1", used during the late 1930s and during the war; displayed at Museo scienza e tecnologia Milano, Italy

Military Enigma machine (in wooden box)

The Enigma machines were a series of electro-mechanical rotor cipher machines developed and used in the early- to mid-20th century to protect commercial, diplomatic and military communication. Enigma was invented by the German engineer Arthur Scherbius at the end of World War I.[1] Early models were used commercially from the early 1920s, and adopted by military and government services of several countries, most notably Nazi Germany before and during World War II.[2] Several different Enigma models were produced, but the German military models, having a plugboard, were the most complex. Japanese and Italian models were also in use.

Around December 1932, Marian Rejewski of the Polish Cipher Bureau used the theory of permutations and flaws in the German military message procedures to break the message keys of the plugboard Enigma machine. Rejewski achieved this result without knowledge of the wiring of the machine, so it did not allow the Poles to decrypt actual messages. The French had a spy with access to German cipher materials that included the daily keys used in September and October 1932; those keys included the plugboard settings. The French gave the material to the Poles, and Rejewski used some of it, together with the message traffic of September and October, to solve for the unknown rotor wiring. Consequently, the Poles were able to build their own Enigma machines, which were called Enigma doubles. Rejewski was aided by cryptanalysts Jerzy Różycki and Henryk Zygalski, both of whom had been recruited with Rejewski from Poznań University. The Polish Cipher Bureau developed techniques to defeat the plugboard and find all components of the daily key, which enabled the Cipher Bureau to read German Enigma messages. Over time, the German cryptographic procedures improved, and the Cipher Bureau developed techniques and designed mechanical devices to continue breaking the Enigma traffic. As part of that effort, the Poles exploited quirks of the rotors, compiled catalogs, built a cyclometer to help make a catalog with 100,000 entries, made Zygalski sheets, and built the electro-mechanical cryptologic bomb to search for rotor settings. In 1938, the Germans added complexity to the Enigma machines that finally became too expensive for the Poles to counter: the Poles had six bomby, but when the Germans added two more rotors, ten times as many bomby were needed, and the Poles did not have the resources.[3]

On 26 and 27 July 1939,[4] in Pyry near Warsaw, the Poles initiated French and British military intelligence representatives into their Enigma-decryption techniques and equipment, including Zygalski sheets and the cryptologic bomb, and promised each delegation a Polish-reconstructed Enigma. The demonstration represented a vital basis for the later British continuation and effort.[5] During the war, British cryptologists decrypted a vast number of messages enciphered on Enigma.
The intelligence gleaned from this source, codenamed "Ultra" by the British, was a substantial aid to the Allied war effort.[6] Though Enigma had some cryptographic weaknesses, in practice it was German procedural flaws, operator mistakes, failure to systematically introduce changes in encipherment procedures, and Allied capture of key tables and hardware that, during the war, enabled Allied cryptologists to succeed and "turned the tide" in the Allies' favor.[7][8]

## Etymology

The word Enigma is derived from the Latin word aenigma ("riddle"), which in turn derives from the Ancient Greek verbal noun τὸ αἴνιγμα.

## Design

Enigma in use, 1943

Like other rotor machines, the Enigma machine is a combination of mechanical and electrical subsystems. The mechanical subsystem consists of a keyboard; a set of rotating disks called rotors arranged adjacently along a spindle; one of various stepping components to turn at least one rotor with each key press; and a series of lamps, one for each letter.

### Electrical pathway

Enigma wiring diagram with arrows and the numbers 1 to 9 showing how current flows from key depression to a lamp being lit. The A key is encoded to the D lamp. D yields A, but A never yields A; this property was due to a patented feature unique to the Enigmas, and could be exploited by cryptanalysts in some situations.

The mechanical parts act in such a way as to form a varying electrical circuit. When a key is pressed, one or more rotors rotate on the spindle. On the outside of the rotors are a series of electrical contacts that, after rotation, line up with contacts on the other rotors or fixed wiring on either end of the spindle. When the rotors are properly aligned, each key on the keyboard is connected to a unique electrical pathway through the series of contacts and internal wiring. Current, typically from a battery, flows through the pressed key, into the newly configured set of circuits and back out again, ultimately lighting one display lamp, which shows the output letter. For example, when encrypting a message starting ANX..., the operator would first press the A key, and the Z lamp might light, so Z would be the first letter of the ciphertext. The operator would next press N, and then X in the same fashion, and so on.

The scrambling action of Enigma's rotors is shown for two consecutive letters with the right-hand rotor moving one position between them.

Current flowed from the battery (1) through a depressed bi-directional keyboard switch (2) to the plugboard (3). Next, it passed through the (unused in this instance, so shown closed) plug "A" (3) via the entry wheel (4), through the wiring of the three (Wehrmacht Enigma) or four (Kriegsmarine M4 and Abwehr variants) installed rotors (5), and entered the reflector (6). The reflector returned the current, via an entirely different path, back through the rotors (5) and entry wheel (4), proceeding through plug "S" (7) connected with a cable (8) to plug "D", and another bi-directional switch (9) to light the appropriate lamp.[9]

The repeated changes of electrical path through an Enigma scrambler implemented a polyalphabetic substitution cipher that provided Enigma's security. The diagram on the right shows how the electrical pathway changed with each key depression, which caused rotation of at least the right-hand rotor. Current passed into the set of rotors, into and back out of the reflector, and out through the rotors again.
The greyed-out lines are other possible paths within each rotor; these are hard-wired from one side of each rotor to the other. The letter A encrypts differently with consecutive key presses, first to G, and then to C. This is because the right-hand rotor has stepped, sending the signal on a completely different route. Eventually other rotors step with a key press.

### Rotors

Enigma rotor assembly. In the Wehrmacht Enigma, the three installed movable rotors are sandwiched between two fixed wheels: the entry wheel, on the right, and the reflector on the left.

The rotors (alternatively wheels or drums, Walzen in German) formed the heart of an Enigma machine. Each rotor was a disc approximately 10 cm (3.9 in) in diameter made from hard rubber or bakelite, with 26 brass, spring-loaded, electrical contact pins arranged in a circle on one face, and the other side housing the corresponding number of circular plate electrical contacts. The pins and contacts represent the alphabet, typically the 26 letters A–Z (this will be assumed for the rest of this description). When the rotors were mounted side-by-side on the spindle, the pins of one rotor rested against the plate contacts of the neighbouring rotor, forming an electrical connection. Inside the body of the rotor, 26 wires connected each pin on one side to a contact on the other in a complex pattern. Most of the rotors were identified by Roman numerals, and each issued copy of rotor I was wired identically to all others. The same was true for the special thin beta and gamma rotors used in the M4 naval variant.

Three Enigma rotors and the shaft, on which they are placed when in use.

By itself, a rotor performs only a very simple type of encryption: a simple substitution cipher. For example, the pin corresponding to the letter E might be wired to the contact for letter T on the opposite face, and so on. Enigma's security came from using several rotors in series (usually three or four) and the regular stepping movement of the rotors, thus implementing a polyalphabetic substitution cipher.

When placed in an Enigma, each rotor can be set to one of 26 possible positions. When inserted, it can be turned by hand using the grooved finger-wheel, which protrudes from the internal Enigma cover when closed. So that the operator can know the rotor's position, each had an alphabet tyre (or letter ring) attached to the outside of the rotor disk, with 26 characters (typically letters); one of these could be seen through the window, thus indicating the rotational position of the rotor. In early models, the alphabet ring was fixed to the rotor disk. A later improvement was the ability to adjust the alphabet ring relative to the rotor disk. The position of the ring was known as the Ringstellung ("ring setting"), and was a part of the initial setting prior to an operating session. In modern terms it was a part of the initialization vector.

Two Enigma rotors showing electrical contacts, stepping ratchet (on the left) and notch (on the right-hand rotor opposite D).

Each rotor contained a notch (or more than one) that controlled rotor stepping. In the military variants, the notches are located on the alphabet ring.

The Army and Air Force Enigmas were used with several rotors, initially three. On 15 December 1938, this changed to five, from which three were chosen for a given session. Rotors were marked with Roman numerals to distinguish them: I, II, III, IV and V, all with single notches located at different points on the alphabet ring.
This variation was probably intended as a security measure, but ultimately allowed the Polish Clock Method and British Banburismus attacks.

The Naval version of the Wehrmacht Enigma had always been issued with more rotors than the other services: at first six, then seven, and finally eight. The additional rotors were marked VI, VII and VIII, all with different wiring, and had two notches, resulting in more frequent turnover. The four-rotor Naval Enigma (M4) machine accommodated an extra rotor in the same space as the three-rotor version. This was accomplished by replacing the original reflector with a thinner one and by adding a thin fourth rotor. That fourth rotor was one of two types, Beta or Gamma, and never stepped, but could be manually set to any of 26 positions. One of the 26 made the machine perform identically to the three-rotor machine.

### Stepping

To avoid merely implementing a simple (and easily breakable) substitution cipher, every key press caused one or more rotors to step by one twenty-sixth of a full rotation, before the electrical connections were made. This changed the substitution alphabet used for encryption, ensuring that the cryptographic substitution was different at each new rotor position, producing a more formidable polyalphabetic substitution cipher. The stepping mechanism varied slightly from model to model. The right-hand rotor stepped once with each keystroke, and other rotors stepped less frequently.

### Turnover

The Enigma stepping motion seen from the side away from the operator. All three ratchet pawls (green) push in unison as a key is depressed. For the first rotor (1), which to the operator is the right-hand rotor, the ratchet (red) is always engaged, and steps with each keypress. Here, the middle rotor (2) is engaged because the notch in the first rotor is aligned with the pawl; it will step (turn over) with the first rotor. The third rotor (3) is not engaged, because the notch in the second rotor is not aligned to the pawl, so it will not engage with the ratchet.

The advancement of a rotor other than the left-hand one was called a turnover by the British. This was achieved by a ratchet and pawl mechanism. Each rotor had a ratchet with 26 teeth, and every time a key was pressed, the set of spring-loaded pawls moved forward in unison, trying to engage with a ratchet. The alphabet ring of the rotor to the right normally prevented this. As this ring rotated with its rotor, a notch machined into it would eventually align itself with the pawl, allowing it to engage with the ratchet and advance the rotor on its left. The right-hand pawl, having no rotor and ring to its right, stepped its rotor with every key depression.[10] For a single-notch rotor in the right-hand position, the middle rotor stepped once for every 26 steps of the right-hand rotor. Similarly for rotors two and three. For a two-notch rotor, the rotor to its left would turn over twice for each rotation.

The first five rotors to be introduced (I–V) contained one notch each, while the additional naval rotors VI, VII and VIII each had two notches. The position of the notch on each rotor was determined by the letter ring, which could be adjusted in relation to the core containing the interconnections.
The points on the rings at which they caused the next wheel to move were as follows.[11]

Position of turnover notches:

| Rotor | Turnover position(s) | BP mnemonic |
|-------|----------------------|-------------|
| I | R | Royal |
| II | F | Flags |
| III | W | Wave |
| IV | K | Kings |
| V | A | Above |
| VI, VII and VIII | A and N | |

The design also included a feature known as double-stepping. This occurred when each pawl aligned with both the ratchet of its rotor and the rotating notched ring of the neighbouring rotor. If a pawl engaged with a ratchet through alignment with a notch, as it moved forward it pushed against both the ratchet and the notch, advancing both rotors. In a three-rotor machine, double-stepping affected rotor two only. If in moving forward the ratchet of rotor three was engaged, rotor two would move again on the subsequent keystroke, resulting in two consecutive steps. Rotor two also pushes rotor one forward after 26 steps, but since rotor one moves forward with every keystroke anyway, there is no double-stepping.[10] This double-stepping caused the rotors to deviate from odometer-style regular motion.

With three wheels and only single notches in the first and second wheels, the machine had a period of 26 × 25 × 26 = 16,900 (not 26 × 26 × 26, because of double-stepping).[10] Historically, messages were limited to a few hundred letters, and so there was no chance of repeating any combined rotor position during a single session, denying cryptanalysts valuable clues.

To make room for the Naval fourth rotors, the reflector was made much thinner. The fourth rotor fitted into the space made available. No other changes were made, which eased the changeover. Since there were only three pawls, the fourth rotor never stepped, but could be manually set into one of 26 possible positions.

A device that was designed, but not implemented before the war's end, was the Lückenfüllerwalze (gap-fill wheel), which implemented irregular stepping. It allowed field configuration of notches in all 26 positions. If the number of notches was relatively prime to 26 and the number of notches was different for each wheel, the stepping would be more unpredictable. Like the Umkehrwalze-D, it also allowed the internal wiring to be reconfigured.[12]

### Entry wheel

The current entry wheel (Eintrittswalze in German), or entry stator, connects the plugboard to the rotor assembly. If the plugboard is not present, the entry wheel instead connects the keyboard and lampboard to the rotor assembly. While the exact wiring used is of comparatively little importance to security, it proved an obstacle to Rejewski's progress during his study of the rotor wirings. The commercial Enigma connects the keys in the order of their sequence on a QWERTZ keyboard: Q → A, W → B, E → C, and so on. The military Enigma connects them in straight alphabetical order: A → A, B → B, C → C, and so on. It took inspired guesswork for Rejewski to penetrate the modification.

### Reflector

Internal mechanism of an Enigma machine showing the type B reflector and rotor stack.

With the exception of models A and B, the last rotor came before a 'reflector' (German: Umkehrwalze, meaning 'reversal rotor'), a patented feature unique to Enigma among the period's various rotor machines. The reflector connected outputs of the last rotor in pairs, redirecting current back through the rotors by a different route.
The reflector ensured that Enigma is self-reciprocal: conveniently, encryption was the same as decryption. The reflector also gave Enigma the property that no letter ever encrypted to itself. This was a severe conceptual flaw and a cryptological mistake subsequently exploited by codebreakers.

In Model 'C', the reflector could be inserted in one of two different positions. In Model 'D', the reflector could be set in 26 possible positions, although it did not move during encryption. In the Abwehr Enigma, the reflector stepped during encryption in a manner similar to the other wheels.

In the German Army and Air Force Enigma, the reflector was fixed and did not rotate; there were four versions. The original version was marked 'A', and was replaced by Umkehrwalze B on 1 November 1937. A third version, Umkehrwalze C, was used briefly in 1940, possibly by mistake, and was solved by Hut 6.[13] The fourth version, first observed on 2 January 1944, had a rewireable reflector, called Umkehrwalze D, allowing the Enigma operator to alter the connections as part of the key settings.

### Plugboard

The plugboard (Steckerbrett) was positioned at the front of the machine, below the keys. When in use during World War II, there were ten connections. In this photograph, just two pairs of letters have been swapped (A↔J and S↔O).

The plugboard (Steckerbrett in German) permitted variable wiring that could be reconfigured by the operator (visible on the front panel of Figure 1; some of the patch cords can be seen in the lid). It was introduced on German Army versions in 1930, and was soon adopted by the Reichsmarine (German Navy). The plugboard contributed more cryptographic strength than an extra rotor. Enigma without a plugboard (known as unsteckered Enigma) can be solved relatively straightforwardly using hand methods; these techniques are generally defeated by the plugboard, driving Allied cryptanalysts to develop special machines to solve it.

A cable placed onto the plugboard connected letters in pairs; for example, E and Q might be a steckered pair. The effect was to swap those letters before and after the main rotor scrambling unit. For example, when an operator pressed E, the signal was diverted to Q before entering the rotors. Up to 13 steckered pairs might be used at one time, although only 10 were normally used.

Current flowed from the keyboard through the plugboard, and proceeded to the entry-rotor or Eintrittswalze. Each letter on the plugboard had two jacks. Inserting a plug disconnected the upper jack (from the keyboard) and the lower jack (to the entry-rotor) of that letter. The plug at the other end of the crosswired cable was inserted into another letter's jacks, thus switching the connections of the two letters.

### Accessories

The Schreibmax was a printing unit which could be attached to the Enigma, removing the need for laboriously writing down the letters indicated on the light panel.

Other features made various Enigma machines more secure or more convenient.[14]

#### Schreibmax

Some M4 Enigmas used the Schreibmax, a small printer that could print the 26 letters on a narrow paper ribbon. This eliminated the need for a second operator to read the lamps and transcribe the letters. The Schreibmax was placed on top of the Enigma machine and was connected to the lamp panel. To install the printer, the lamp cover and light bulbs had to be removed.
It improved both convenience and operational security; the printer could be installed remotely such that the signal officer operating the machine no longer had to see the decrypted plaintext.

#### Fernlesegerät

Another accessory was the remote lamp panel Fernlesegerät. For machines equipped with the extra panel, the wooden case of the Enigma was wider and could store the extra panel. A lamp panel version could be connected afterwards, but that required, as with the Schreibmax, that the lamp panel and lightbulbs be removed.[9] The remote panel made it possible for a person to read the decrypted plaintext without the operator seeing it.

#### Uhr

The Enigma Uhr attachment

In 1944, the Luftwaffe introduced a plugboard switch, called the Uhr (clock), a small box containing a switch with 40 positions. It replaced the standard plugs. After connecting the plugs, as determined in the daily key sheet, the operator turned the switch into one of the 40 positions, each producing a different combination of plug wiring. Most of these plug connections were, unlike the default plugs, not pair-wise.[9] In one switch position, the Uhr did not swap letters, but simply emulated the 13 stecker wires with plugs.

### Mathematical analysis

The Enigma transformation for each letter can be specified mathematically as a product of permutations.[15] Assuming a three-rotor German Army/Air Force Enigma, let $P$ denote the plugboard transformation, $U$ denote that of the reflector, and $L, M, R$ denote those of the left, middle and right rotors respectively. Then the encryption $E$ can be expressed as

$E = PRMLUL^{-1}M^{-1}R^{-1}P^{-1}.$

After each key press, the rotors turn, changing the transformation. For example, if the right-hand rotor $R$ is rotated $i$ positions, the transformation becomes $\rho^i R \rho^{-i}$, where $\rho$ is the cyclic permutation mapping A to B, B to C, and so forth. Similarly, the middle and left-hand rotors can be represented as $j$ and $k$ rotations of $M$ and $L$. The encryption transformation can then be described as

$E = P(\rho^i R \rho^{-i})(\rho^j M \rho^{-j})(\rho^k L \rho^{-k}) U (\rho^k L^{-1} \rho^{-k})(\rho^j M^{-1} \rho^{-j})(\rho^i R^{-1} \rho^{-i}) P^{-1}.$

Combining three rotors from a set of five, the rotor settings with 26 positions, and the plugboard with ten pairs of letters connected, the military Enigma has 158,962,555,217,826,360,000 (nearly 159 quintillion) different settings.[16] (A short code sketch exercising this composition and this count appears below, after the description of basic operation.)

## Operation

### Basic operation

A German Enigma operator would be given a plaintext message to encrypt. For each letter typed in, a lamp indicated a different letter according to a pseudo-random substitution, based upon the wiring of the machine. The letter indicated by the lamp would be recorded as the enciphered substitution. The action of pressing a key also moved the rotor, so that the next key press used a different electrical pathway, and thus a different substitution would occur. For each key press there was rotation of at least the right-hand rotor, giving a different substitution alphabet. This continued for each letter in the message until the message was completed and a series of substitutions, each different from the others, had occurred to create a cyphertext from the plaintext.
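To make the permutation algebra concrete, here is a minimal Python sketch. The rotor, reflector and plugboard wirings below are randomly generated stand-ins, not the historical tables; the sketch only exercises the conjugate-and-reflect structure of $E$ above and checks the two structural properties noted earlier: encryption is self-reciprocal, and no letter ever encrypts to itself. It also reproduces the quoted setting count.

```python
# Model rotors, reflector and plugboard as permutations of 0..25 and compose
# them as in E = P (rho^i R rho^-i)(rho^j M rho^-j)(rho^k L rho^-k) U (...) P^-1.
import random
from math import comb, factorial

N = 26

def inverse(p):
    inv = [0] * N
    for a, b in enumerate(p):
        inv[b] = a
    return inv

def rotated(p, i):
    # Wiring of a rotor advanced i steps: the conjugation rho^i p rho^-i.
    return [(p[(x + i) % N] - i) % N for x in range(N)]

def random_rotor(rng):
    p = list(range(N))
    rng.shuffle(p)
    return p

def random_reflector(rng):
    # A fixed-point-free involution: 13 disjoint letter pairs.
    letters = list(range(N))
    rng.shuffle(letters)
    u = [0] * N
    for a, b in zip(letters[0::2], letters[1::2]):
        u[a], u[b] = b, a
    return u

def plugboard(pairs):
    p = list(range(N))
    for a, b in pairs:
        p[a], p[b] = p[b], p[a]
    return p

def encrypt_letter(x, P, rotors, U, offsets):
    x = P[x]
    for p, i in zip(rotors, offsets):                # into the rotor stack
        x = rotated(p, i)[x]
    x = U[x]                                         # off the reflector
    for p, i in reversed(list(zip(rotors, offsets))):
        x = inverse(rotated(p, i))[x]                # back out again
    return P[x]                                      # plugboard is an involution

rng = random.Random(1)
R, M, L = (random_rotor(rng) for _ in range(3))
U = random_reflector(rng)
P = plugboard([(0, 9), (18, 14)])                    # e.g. A<->J, S<->O

for offsets in [(0, 0, 0), (5, 17, 2)]:
    enc = lambda x: encrypt_letter(x, P, [R, M, L], U, offsets)
    assert all(enc(enc(x)) == x for x in range(N))   # self-reciprocal
    assert all(enc(x) != x for x in range(N))        # no letter maps to itself

# The quoted setting count: 60 wheel orders x 26^3 positions x plugboards
# with ten pairs, i.e. 26! / (6! * 10! * 2^10) plugboard choices.
plugboards = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)
print(comb(5, 3) * factorial(3) * 26 ** 3 * plugboards)  # 158962555217826360000
```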
The cyphertext would then be transmitted as normal to an operator of another Enigma machine. This operator would key in the cyphertext and, as long as all the settings of the deciphering machine were identical to those of the enciphering machine, for every key press the reverse substitution would occur and the plaintext message would emerge.

### Details

German Kenngruppenheft (a U-boat codebook with grouped key codes)

Monthly key list Number 649 for the German Air Force Enigma, including settings for the reconfigurable reflector.

In use, the Enigma required a list of daily key settings and auxiliary documents. In German military practice, communications were divided into separate networks, each using different settings. These communication nets were termed keys at Bletchley Park, and were assigned code names, such as Red, Chaffinch, and Shark. Each unit operating in a network was given the same settings list for its Enigma, valid for a period of time. The procedures for German Naval Enigma were more elaborate and more secure than those in other services and employed auxiliary codebooks. Navy codebooks were printed in red, water-soluble ink on pink paper so that they could easily be destroyed if they were endangered.

An Enigma machine's setting (its cryptographic key in modern terms; Schlüssel in German) specified each operator-adjustable aspect of the machine:

• Wheel order (Walzenlage) – the choice of rotors and the order in which they are fitted.
• Ring settings (Ringstellung) – the position of each alphabet ring relative to its rotor wiring.
• Plug connections (Steckerverbindungen) – the pairs of letters in the plugboard that are connected together.
• In very late versions, the wiring of the reconfigurable reflector.
• Starting position of the rotors (Grundstellung) – chosen by the operator, should be different for each message.

For a message to be correctly encrypted and decrypted, both sender and receiver had to configure their Enigma in the same way: rotor selection and order, ring positions, plugboard connections and starting rotor positions must be identical. Except for the starting positions, these settings were established beforehand, distributed in key lists and changed daily. For example, the settings for the 18th day of the month in the German Luftwaffe Enigma key list number 649 (see image) were as follows:

• Wheel order: IV, II, V
• Ring settings: 15, 23, 26
• Plugboard connections: EJ OY IV AQ KW FX MT PS LU BD
• Reconfigurable reflector wiring: IU AS DV GL FT OX EZ CH MR KN BQ PW
• Indicator groups: lsa zbw vcj rxn

Enigma was designed to be secure even if the rotor wiring was known to an opponent, although in practice considerable effort protected the wiring configuration. If the wiring is secret, the total number of possible configurations has been calculated to be around $3 \times 10^{114}$ (approximately 380 bits); with known wiring and other operational constraints, this is reduced to around $10^{23}$ (76 bits).[17] Users of Enigma were confident of its security because of the large number of possibilities; it was not then feasible for an adversary to even begin to try a brute-force attack.

### Indicator

Most of the key was kept constant for a set time period, typically a day. A different initial rotor position was used for each message, a concept similar to an initialisation vector in modern cryptography.
The reason is that encrypting many messages with identical or near-identical settings (termed in cryptanalysis as being in depth) would enable an attack using a statistical procedure such as Friedman's Index of coincidence.[18] The starting position for the rotors was transmitted just before the ciphertext, usually after having been enciphered. The exact method used was termed the indicator procedure. Design weaknesses and operator sloppiness in these indicator procedures were two of the main flaws that made cracking Enigma possible.

Figure 2. With the inner lid down, the Enigma was ready for use. The finger wheels of the rotors protruded through the lid, allowing the operator to set the rotors, and their current position, here RDKP, was visible to the operator through a set of windows.

One of the earliest indicator procedures was used by Polish cryptanalysts to make the initial breaks into the Enigma. The procedure was for the operator to set up his machine in accordance with his settings list, which included a global initial position for the rotors (the Grundstellung, meaning ground setting), say, AOH. The operator turned his rotors until AOH was visible through the rotor windows. At that point, the operator chose his own arbitrary starting position for that particular message. An operator might select EIN, and these became the message settings for that encryption session. The operator then typed EIN into the machine twice, to allow for detection of transmission errors. The results were an encrypted indicator—the EIN typed twice might turn into XHTLOA, which would be transmitted along with the message. Finally, the operator then spun the rotors to his message settings, EIN in this example, and typed the plaintext of the message.

At the receiving end, the operation was reversed. The operator set the machine to the initial settings and typed in the first six letters of the message (XHTLOA). In this example, EINEIN emerged on the lamps. After moving his rotors to EIN, the receiving operator then typed in the rest of the ciphertext, deciphering the message.

The weakness in this indicator scheme came from two factors. First, the use of a global ground setting—this was later changed so that the operator selected his own initial position to encrypt the indicator, and sent that initial position in the clear. The second problem was the repetition of the indicator, which was a serious security flaw. The message setting was encoded twice, resulting in a relation between the first and fourth, second and fifth, and third and sixth characters. This security problem enabled the Polish Cipher Bureau to break into the pre-war Enigma system as early as 1932. In 1940 the Germans changed the procedure.

During World War II, codebooks were only used each day to set up the rotors, their ring settings and the plugboard. For each message, the operator selected a random start position, let's say WZA, and a random message key, perhaps SXT. He moved the rotors to the WZA start position and encoded the message key SXT. Assume the result was UHL. He then set up the message key, SXT, as the start position and encrypted the message. Next, he transmitted the start position, WZA, the encoded message key, UHL, and then the ciphertext. The receiver set up the start position according to the first trigram, WZA, and decoded the second trigram, UHL, to obtain the SXT message setting. Next, he used this SXT message setting as the start position to decrypt the message.
This way, each ground setting was different and the new procedure avoided the security flaw of doubly encoded message settings.[19]

This procedure was used by the Wehrmacht and Luftwaffe only. The Kriegsmarine procedures on sending messages with the Enigma were far more complex and elaborate. Prior to encryption the message was encoded using the Kurzsignalheft code book. The Kurzsignalheft contained tables to convert sentences into four-letter groups. A great many choices were included, for example, logistic matters such as refueling and rendezvous with supply ships, positions and grid lists, harbour names, countries, weapons, weather conditions, enemy positions and ships, date and time tables. Another codebook contained the Kenngruppen and Spruchschlüssel: the key identification and message key.[20]

The Army Enigma machine used only the 26 alphabet characters. Punctuation was replaced with rare character combinations. A space was omitted or replaced with an X. The X was generally used as a period or full stop. Some punctuation marks were different in other parts of the armed forces. The Wehrmacht replaced a comma with ZZ and the question mark with FRAGE or FRAQ. The Kriegsmarine replaced the comma with Y and the question mark with UD. The combination CH, as in "Acht" (eight) or "Richtung" (direction), was replaced with Q (AQT, RIQTUNG). Two, three and four zeros were replaced with CENTA, MILLE and MYRIA.

The Wehrmacht and the Luftwaffe transmitted messages in groups of five characters. The Kriegsmarine, using the four-rotor Enigma, had four-character groups. Frequently used names or words were varied as much as possible. Words like Minensuchboot (minesweeper) could be written as MINENSUCHBOOT, MINBOOT, MMMBOOT or MMM354. To make cryptanalysis harder, messages were limited to 250 characters. Longer messages were divided into several parts, each using a different message key.[21][22]

## History

The Enigma family included multiple designs. The earliest were commercial models dating from the early 1920s. Starting in the mid-1920s, the German military began to use Enigma, making a number of security-related changes. Various nations either adopted or adapted the design for their own cipher machines.

A selection of seven Enigma machines and paraphernalia exhibited at the US National Cryptologic Museum. From left to right, the models are: 1) Commercial Enigma; 2) Enigma T; 3) Enigma G; 4) Unidentified; 5) Luftwaffe (Air Force) Enigma; 6) Heer (Army) Enigma; 7) Kriegsmarine (Naval) Enigma—M4.

An estimated 100,000 Enigma machines were constructed. After the end of World War II, the Allies sold captured Enigma machines, still widely considered secure, to developing countries.[23]

### Commercial Enigma

Scherbius's Enigma patent—U.S. Patent 1,657,411, granted in 1928.

On 23 February 1918, Arthur Scherbius with E. Richard Ritter applied for a patent for a ciphering machine that used rotors and founded the firm of Scherbius & Ritter. They approached the German Navy and Foreign Office with their design, but neither agency was interested. Scherbius & Ritter then assigned the patent rights to Gewerkschaft Securitas, who founded the Chiffriermaschinen Aktien-Gesellschaft (Cipher Machines Stock Corporation) on 9 July 1923; Scherbius and Ritter were on the board of directors.

#### Enigma A (1923)

Chiffriermaschinen AG began advertising a rotor machine—Enigma model A—which was exhibited at the Congress of the International Postal Union in 1924.
The machine was heavy and bulky, incorporating a typewriter. It measured 65×45×38 cm and weighed about 50 kilograms (110 lb).

#### Enigma B (1924)

In 1924 Enigma model B was introduced, and was of a similar construction.[24] While bearing the Enigma name, both models A and B were quite unlike later versions: they differed not only in physical size and shape, but also cryptographically, in that they lacked the reflector.

#### Enigma C (1926)

The reflector—suggested by Scherbius's colleague Willi Korn—was introduced in Enigma C (1926). Model C was smaller and more portable than its predecessors. It lacked a typewriter, relying instead on the operator reading the lamps; hence the informal name of "glowlamp Enigma" to distinguish it from models A and B.

#### Enigma D (1927)

The Enigma C quickly gave way to Enigma D (1927). This version was widely used, with shipments to Sweden, the Netherlands, United Kingdom, Japan, Italy, Spain, United States and Poland. In 1927 Hugh Foss at the British Government Code and Cypher School was able to show that commercial Enigma machines could be broken provided that suitable cribs were available.[25]

##### "Navy Cipher D" – Italian Navy

Other countries used Enigma machines. The Italian Navy adopted the commercial Enigma as "Navy Cipher D". The Spanish also used commercial Enigma during their Civil War. British codebreakers succeeded in breaking these machines, which lacked a plugboard.[26] Enigma machines were also used by diplomatic services.

#### Enigma H (1929)

A rare 8-rotor printing Enigma model H (1929).

There was also a large, eight-rotor printing model, the Enigma H, called Enigma II by the Reichswehr. In 1933 the Polish Cipher Bureau detected that it was in use for high-level military communications, but it was soon withdrawn, as it was unreliable and jammed frequently.[27]

#### Enigma K

The Swiss used a version of Enigma called model K or Swiss K for military and diplomatic use, which was very similar to the commercial Enigma D. The machine was cracked by Poland, France, the United Kingdom and the United States (the latter codenamed it INDIGO). An Enigma T model (codenamed Tirpitz) was used by Japan.

#### Typex

Once the British had broken Enigma, they fixed its known weaknesses in a rotor machine of their own, the Typex, which the Germans believed to be unbreakable.[28]

### Military Enigma

#### Funkschlüssel C

The Reichsmarine was the first military branch to adopt Enigma. This version, named Funkschlüssel C ("Radio cipher C"), had been put into production by 1925 and was introduced into service in 1926.[29] The keyboard and lampboard contained 29 letters—A-Z, Ä, Ö and Ü—which were arranged alphabetically, as opposed to the QWERTZUI ordering.[30] The rotors had 28 contacts, with the letter X wired to bypass the rotors unencrypted.[8] Three rotors were chosen from a set of five[31] and the reflector could be inserted in one of four different positions, denoted α, β, γ and δ.[32] The machine was revised slightly in July 1933.[33]

#### Enigma G (1928–1930)

By 15 July 1928,[34] the German Army (Reichswehr) had introduced their own exclusive version of the Enigma machine: the Enigma G. The Abwehr used the Enigma G (the Abwehr Enigma). This Enigma variant was a four-wheel unsteckered machine with multiple notches on the rotors. This model was equipped with a counter that incremented upon each key press, and so is also known as the "counter machine" or the Zählwerk Enigma.
#### Wehrmacht Enigma I (1930–1938)

The Enigma G was modified to the Enigma I by June 1930.[35] Enigma I is also known as the Wehrmacht, or "Services", Enigma, and was used extensively by German military services and other government organisations (such as the railways[36]) before and during World War II.

Heinz Guderian in the Battle of France, with an Enigma machine

The major difference between Enigma I (the German Army version from 1930) and commercial Enigma models was the addition of a plugboard to swap pairs of letters, greatly increasing cryptographic strength. Other differences included the use of a fixed reflector and the relocation of the stepping notches from the rotor body to the movable letter rings. The machine measured 28 cm × 34 cm × 15 cm (11.0 in × 13.4 in × 5.9 in) and weighed around 12 kg (26 lb).[37] In August 1935, the Air Force introduced the Wehrmacht Enigma for its communications.[35]

#### M3 (1934)

By 1930, the Reichswehr had suggested that the Navy adopt their machine, citing the benefits of increased security (with the plugboard) and easier interservice communications.[38] The Reichsmarine eventually agreed and in 1934[39] brought into service the Navy version of the Army Enigma, designated Funkschlüssel M or M3. While the Army used only three rotors at that time, the Navy specified a choice of three from a possible five.[40]

Enigma in use on the Russian front

#### Two extra rotors (1938)

In December 1938, the Army issued two extra rotors so that the three rotors were chosen from a set of five.[35] In 1938, the Navy added two more rotors, and then another in 1939 to allow a choice of three rotors from a set of eight.[40]

#### M4 (1942)

A four-rotor Enigma was introduced by the Navy for U-boat traffic on 1 February 1942, called M4 (the network was known as Triton, or Shark to the Allies). The extra rotor was fitted in the same space by splitting the reflector into a combination of a thin reflector and a thin fourth rotor.

## Surviving machines

US Enigma replica on display at the National Cryptologic Museum in Fort Meade, Maryland, USA.

The effort to break the Enigma was not disclosed until the 1970s. Since then, interest in the Enigma machine has grown. Enigmas are on public display in museums around the world, and several are in the hands of private collectors and computer history enthusiasts.[41]

The Deutsches Museum in Munich has both the three- and four-rotor German military variants, as well as several civilian versions. Enigma machines are exhibited at the National Codes Centre in Bletchley Park, the Government Communications Headquarters, the Science Museum in London, the Polish Army Museum in Warsaw, the Swedish Army Museum (Armémuseum) in Stockholm, the Military Museum of A Coruña in Spain, the Nordland Red Cross War Memorial Museum in Narvik,[42] Norway, the Artillery, Engineers and Signals Museum in Hämeenlinna, Finland,[43] the Technical University of Denmark in Lyngby, Denmark, and at the Australian War Memorial and in the foyer of the Defence Signals Directorate, both in Canberra, Australia. The Jozef Pilsudski Institute in London exhibits a rare Polish Enigma double assembled in France in 1940.[44][45]

A four-rotor Kriegsmarine (German Navy, 1 February 1942 to 1945) Enigma machine on display at the US National Cryptologic Museum.
In the United States, Enigma machines can be seen at the Computer History Museum in Mountain View, California, and at the National Security Agency's National Cryptologic Museum in Fort Meade, Maryland, where visitors can try their hand at enciphering and deciphering messages. Two machines that were acquired after the capture of U-505 during World War II are on display at the Museum of Science and Industry in Chicago, Illinois. A four-rotor device is on display in the ANZUS Corridor of the Pentagon on the second floor, A ring, between corridors 9 and 10. This machine is on loan from Australia. The United States Air Force Academy in Colorado Springs has a machine on display in the Computer Science Department. A machine is also on display at the National World War II Museum in New Orleans.

The Museum of World War II in Boston has seven Enigma machines on display, including a U-boat four-rotor model, one of three surviving examples of an Enigma machine with a printer, one of fewer than ten surviving ten-rotor code machines, an example blown up by a retreating German Army unit, and two three-rotor Enigmas that visitors can operate to encode and decode messages themselves.

In Canada, a Swiss Army issue Enigma-K is in Calgary, Alberta, on permanent display at the Naval Museum of Alberta inside the Military Museums of Calgary. A 4-rotor Enigma machine is on display at the Military Communications and Electronics Museum at Canadian Forces Base (CFB) Kingston in Kingston, Ontario.

Occasionally, Enigma machines are sold at auction; prices in recent years have ranged from US$40,000[46][47] to US$203,000[48] (the latter in 2011). Replicas are available in various forms, including an exact reconstructed copy of the Naval M4 model, an Enigma implemented in electronics (Enigma-E), various simulators and paper-and-scissors analogues.

A rare Abwehr Enigma machine, designated G312, was stolen from the Bletchley Park museum on 1 April 2000. In September, a man identifying himself as "The Master" sent a note demanding £25,000 and threatening to destroy the machine if the ransom was not paid. In early October 2000, Bletchley Park officials announced that they would pay the ransom, but the stated deadline passed with no word from the blackmailer. Shortly afterward, the machine was sent anonymously to BBC journalist Jeremy Paxman, missing three rotors. In November 2000, an antiques dealer named Dennis Yates was arrested after telephoning The Sunday Times to arrange the return of the missing parts. The Enigma machine was returned to Bletchley Park after the incident. In October 2001, Yates was sentenced to 10 months in prison and served three months.[49]

In October 2008, the Spanish daily newspaper El País reported that 28 Enigma machines had been discovered by chance in an attic of Army headquarters in Madrid. These 4-rotor commercial machines had helped Franco's Nationalists win the Spanish Civil War because, although the British cryptologist Alfred Dillwyn Knox broke the cipher generated by Franco's Enigma machines in 1937, this was not disclosed to the Republicans, who failed to break the cipher. The Nationalist government continued using its 50 Enigmas into the 1950s. Some machines have gone on display in Spanish military museums,[50][51] including one at the National Museum of Science and Technology (MUNCYT) in La Coruña.
Two have been given to Britain's GCHQ.[52]

The Bulgarian military used Enigma machines with a Cyrillic keyboard; one is on display in the National Museum of Military History in Sofia.[53]

## Derivatives

The Enigma was influential in the field of cipher machine design, spinning off other rotor machines. The British Typex was originally derived from the Enigma patents; Typex even includes features from the patent descriptions that were omitted from the actual Enigma machine. The British paid no royalties for the use of the patents, to protect secrecy. The Typex implementation is not the same as that found in German or other Axis versions.

A Japanese Enigma clone was codenamed GREEN by American cryptographers. Little used, it contained four rotors mounted vertically. In the U.S., cryptologist William Friedman designed the M-325, a machine logically similar, although not in construction.

A unique rotor machine was constructed in 2002 by Netherlands-based Tatjana van Vark. This device makes use of 40-point rotors, allowing letters, numbers and some punctuation to be used; each rotor contains 509 parts.[54]

Machines like the SIGABA, NEMA, Typex and so forth are deliberately not considered to be Enigma derivatives, as their internal ciphering functions are not mathematically identical to the Enigma transform.

Several software implementations exist, but not all exactly match Enigma behaviour. The most commonly used software derivative (that is not compliant with any hardware implementation of the Enigma) is at EnigmaCo.de. Many Java applet Enigmas only accept single-letter entry, complicating use even if the applet is Enigma compliant. Technically, Enigma@home is the largest-scale deployment of a software Enigma, but its decoding software does not implement encipherment, making it only a partial derivative (all original machines could both cipher and decipher).

A user-friendly 3-rotor simulator, where users can select rotors, use the plugboard and define new settings for the rotors and reflectors, is available.[55] The output appears in separate windows which can be independently made "invisible" to hide decryption.[56] Another includes an "autotyping" function which takes plaintext from a clipboard and converts it to cyphertext (or vice versa) at one of four speeds. The "very fast" option produces 26 characters in less than one second.[57]

## Simulators

| Name | Platform | Machine types | Uhr | UKW-D |
|------|----------|---------------|-----|-------|
| Franklin Heath Enigma Simulator[58] | Android | K Railway, Kriegsmarine M3, M4 | No | No |
| EnigmAndroid[59] | Android | Wehrmacht I, Kriegsmarine M3, M4, Abwehr G31, G312, G260, D, K, Swiss-K, KD, R, T | No | No |
| Andy Carlson Enigma Applet (Standalone Version)[60] | Java | Kriegsmarine M3, M4 | No | No |
| Minarke (Minarke Is Not A Real Kriegsmarine Enigma)[61] | C/Posix/CLI (MacOS, Linux, UNIX, etc.) | Wehrmacht, Kriegsmarine, M3, M4 | No | No |
| Russell Schwager Enigma Simulator[62] | Java | Kriegsmarine M3 | No | No |
| PA3DBJ G-312 Enigma Simulator[63] | Javascript | G312 Abwehr | No | No |
| Daniel Palloks Universal Enigma[64] | Javascript | Wehrmacht, Kriegsmarine M3, M4, D (commercial), K (Swiss), Railway, Tirpitz (Japan), A-865 Zählwerk, G-111 Hungary/Munich, G-260 Abwehr/Argentina, G-312 Abwehr/Bletchley | Yes | No |
| Universal Enigma Machine Simulator[65] | Javascript | D, I, Norway, M3, M4, Zählwerk, G, G-111, G-260, G-312, K, Swiss-K, KD, Railway, T | Yes | Yes |
| Terry Long Enigma Simulator[66] | MacOS | Kriegsmarine M3 | No | No |
| Paul Reuvers Enigma Simulator for RISC OS[67] | RISC OS | Kriegsmarine M3, M4, G-312 Abwehr | No | No |
| Dirk Rijmenants Enigma Simulator v7.0[68] | Windows | Wehrmacht, Kriegsmarine M3, M4 | No | No |
| Frode Weierud Enigma Simulators[69] | Windows | Abwehr, Kriegsmarine M3, M4, Railway | No | No |

## In popular culture

Films

• Sekret Enigmy (1979; translation: The Enigma Secret) is a Polish film dealing with Polish aspects of the subject.[70]
• The plot of the film U-571 (released in 2000) revolves around an attempt by American, rather than British, forces to seize an Enigma machine from a German U-boat.
• Robert Harris's novel Enigma was adapted, with substantial changes in plot, as the film Enigma (2001), directed by Michael Apted and starring Kate Winslet and Dougray Scott. The film was criticised for historical inaccuracies, including neglect of the role of Poland's Biuro Szyfrów. The film—like the book—makes a Pole the villain, who seeks to betray the secret of Enigma decryption.[71]
• The film The Imitation Game (2014) tells the story of Alan Turing and his attempts to crack the Enigma machine code during World War II.[41]

Television

• In the British television series The Bletchley Circle, the Typex was used by the protagonists during the war, and in Season 2, Episode 4, they visit Bletchley Park to seek one out, in order to crack the code of the black market procurer and smuggler Marta, who used the Typex to encode her ledger. The Circle, forced to settle for using an Enigma instead, successfully cracks the code.
• In season 5, episode 23 ("Scrambled") of the American television series Elementary, a drug smuggling gang uses a four-rotor Enigma machine as part of their effort to encrypt their communications.

## References

### Notes

1. Singh, Simon (26 January 2011). The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography. Knopf Doubleday Publishing Group. ISBN 978-0-307-78784-2.
2. Lord, Bob (1998–2010). "1937 Enigma Manual by: Jasper Rosal – English Translation". Retrieved 31 May 2011.
3. Kozaczuk 1984, p. 63.
4. Ralph Erskine: The Poles Reveal their Secrets – Alastair Denniston's Account of the July 1939 Meeting at Pyry. Cryptologia 30 (2006), no. 4, p. 294. Taylor & Francis, Philadelphia PA.
5. Gordon Welchman, who became head of Hut 6 at Bletchley Park, has written: "Hut 6 Ultra would never have gotten off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military version of the commercial Enigma machine, and of the operating procedures that were in use." Gordon Welchman, The Hut Six Story, 1982, p. 289.
6. Much of the German cipher traffic was encrypted on the Enigma machine, and the term "Ultra" has often been used almost synonymously with "Enigma decrypts". Ultra also encompassed decrypts of the German Lorenz SZ 40 and 42 machines that were used by the German High Command, and decrypts of Hagelin ciphers and other Italian ciphers and codes, as well as of Japanese ciphers and codes such as Purple and JN-25.
7.
8.
9. Rijmenants, Dirk; Technical details of the Enigma machine. Cipher Machines & Cryptology.
10. Hamer, David (January 1997). "Enigma: Actions Involved in the 'Double-Stepping' of the Middle Rotor". Cryptologia. 21 (1): 47–50. doi:10.1080/0161-119791885779. Archived from the original (zip) on 19 July 2011.
11. Sale, Tony. "Technical specifications of the Enigma rotors". Technical Specification of the Enigma. Retrieved 15 November 2009.
12. "Lückenfüllerwalze". Cryptomuseum.com. Retrieved 17 July 2012.
13. Philip Marks, "Umkehrwalze D: Enigma's Rewirable Reflector — Part I", Cryptologia 25(2), April 2001, pp. 101–141.
14. Reuvers, Paul (2008). "Enigma accessories". Retrieved 22 July 2010.
15.
16.
17. Miller, A. Ray (2001). "The Cryptographic Mathematics of Enigma" (PDF). National Security Agency.
18. Friedman, W.F. (1922). The Index of Coincidence and Its Applications in Cryptology. Department of Ciphers. Publ 22. Geneva, Illinois, USA: Riverbank Laboratories. OCLC 55786052.
19. Rijmenants, Dirk; Enigma message procedures. Cipher Machines & Cryptology.
20. Rijmenants, Dirk; Kurzsignalen on German U-boats. Cipher Machines & Cryptology.
21. "The translated 1940 Enigma General Procedure". codesandciphers.org.uk. Retrieved 16 October 2006.
22. "The translated 1940 Enigma Officer and Staff Procedure". codesandciphers.org.uk. Retrieved 16 October 2006.
23. Bauer 2000, p. 112.
24.
25. Bletchley Park Trust Museum display.
26. Smith 2006, p. 23.
27. Kozaczuk 1984, p. 28.
28. Numberphile (14 January 2013), Flaw in the Enigma Code – Numberphile. Retrieved 14 February 2017.
29. Kahn 1991, pp. 39–41, 299.
30. Ulbricht 2005, p. 4.
31. Kahn 1991, pp. 40, 299.
32. Bauer 2000, p. 108.
33. Stripp 1993, plate 3.
34. Kahn 1991, pp. 41, 299.
35. Kruh & Deavours 2002, p. 97.
36. Smith 2000, p. 73.
37. Stripp 1993.
38. Kahn 1991, p. 43.
39. Kahn 1991, p. 43 says August 1934. Kruh & Deavours 2002, p. 15 say October 1934.
40. Kruh & Deavours 2002, p. 98.
41. Ng, David. "Enigma machine from World War II finds unlikely home in Beverly Hills". Los Angeles Times. 22 January 2015.
42.
43.
44. "Enigma exhibition in London pays tribute to Poles". Polskie Radio dla Zagranicy. Retrieved 5 April 2016.
45. "13 March 2016, 'Enigma Relay' – how Poles passed the baton to Brits in the run for WWII victory". pilsudski.org.uk. Retrieved 5 April 2016.
46. Hamer, David; Enigma machines – known locations. Archived 4 November 2011 at the Wayback Machine.
47. Hamer, David; Selling prices of Enigma and NEMA – all prices converted to US$. Archived 27 September 2011 at the Wayback Machine.
48. Christie's; 3-rotor Enigma auction.
49. "Man jailed over Enigma machine". BBC News. 19 October 2001. Retrieved 2 May 2010.
50. Graham Keeley. Nazi Enigma machines helped General Franco in Spanish Civil War, The Times, 24 October 2008, p. 47.
51. "Taller de Criptografía – Enigmas españolas". Cripto.es. Retrieved 8 September 2013.
52. "Schneier on Security: Rare Spanish Enigma Machine". Schneier.com. 26 March 2012. Retrieved 8 September 2013.
53. "Communication equipment". znam.bg. 29 November 2003.
54. van Vark, Tatjana. The coding machine.
56. Enigma at Multimania.
58. Franklin Heath Ltd. "Enigma Simulator – Android Apps on Google Play". google.com.
59. "F-Droid". f-droid.org.
60. Andy Carlson, Enigma Applet (Standalone Version).
61. John Gilbert, Minarke – A Terminal Friendly Enigma Emulator.
62. Russell Schwager, Enigma Simulator.
63. PA3DBJ, G-312 Enigma Simulator.
64. Daniel Palloks, Universal Enigma.
65. Summerside Makerspace, Universal Enigma Machine Simulator.
66. Terry Long, Enigma Simulator.
67. Paul Reuvers, Enigma Simulator for RISC OS.
68. Dirk Rijmenants, Enigma Simulator v7.0.
69. Frode Weierud, Enigma Simulators.
70.
71. Laurence Peter (20 July 2009). "How Poles cracked Nazi Enigma secret". BBC News.
2017-08-22 20:34:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 20, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46485137939453125, "perplexity": 6291.720162176607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886112682.87/warc/CC-MAIN-20170822201124-20170822221124-00461.warc.gz"}
https://questioncove.com/updates/4dfac2930b8b370c28be8117
Mathematics

OpenStudy (anonymous): soybean meal is 16% protein; cornmeal is 8% protein. how many pounds of each should be mixed together in order to get a 320 lb mixture that is 10% protein?

OpenStudy (anonymous): i think i have done this exact problem maybe 8 times. write $.16x+.08(230-x)=32$ and solve for x

OpenStudy (anonymous): actually it is $.16x+.08(320-x)=32$

OpenStudy (anonymous): Use x + y = 320 and .16x + .08y = .10 Solve the system

OpenStudy (anonymous): i get 80. of course you can use two variables if you like

OpenStudy (anonymous): but if you do it should be $.16x+.08y=32$

OpenStudy (anonymous): $.16x+.08(320-x)=32$ because if you use x soybeans you use 320-x cornmeal, and 10% of 320 is 32

OpenStudy (anonymous): so how many pounds of cornmeal should be in the mix and how many pounds of soybean should be in the mix

OpenStudy (anonymous): $.16x+25.6-.08x=32$ $.08x+25.6=32$ $.08x=32-25.6=6.4$ $x=\frac{6.4}{.08}=80$

OpenStudy (anonymous): 80 cornmeal the rest soybeans

OpenStudy (anonymous): I am subtracting 32 from 80 to get the soybean

OpenStudy (anonymous): or is the soybean 32

OpenStudy (anonymous): It's a lot easier to me if you use the substitution method

OpenStudy (anonymous): tervin03 can you show me how to arrive at the answer by using that method

OpenStudy (anonymous): Use x + y = 320 and .16x + .08y = 32 and use the substitution method. So in the first equation x + y = 320 you can subtract x from both sides to get y = 320 - x and substitute it in the second equation, making it .16x + .08(320 - x) = 32

OpenStudy (anonymous): Solve it for x and then back-substitute the answer in the first original equation x + y = 320 to get y

OpenStudy (anonymous): Tervin03 so how many pounds of cornmeal should be in the mix and how many pounds of soybean should be in the mix

OpenStudy (anonymous): Soybean is 80 and cornmeal is 240
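A quick script confirms the thread's final answer (note that $x$ in the working above is the soybean amount, so the mid-thread "80 cornmeal" message had the two quantities swapped):

```python
# Check of the mixture problem: x lb of soybean meal (16% protein) plus
# y lb of cornmeal (8% protein) giving 320 lb at 10% protein.
x = (0.10 * 320 - 0.08 * 320) / (0.16 - 0.08)   # from .16x + .08(320 - x) = 32
y = 320 - x
print(x, y)                 # 80.0 240.0 -> 80 lb soybean meal, 240 lb cornmeal
print(0.16 * x + 0.08 * y)  # 32.0 lb of protein, i.e. 10% of 320
```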
2021-06-18 09:44:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7967894077301025, "perplexity": 1632.8491055874094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487635920.39/warc/CC-MAIN-20210618073932-20210618103932-00575.warc.gz"}
https://blogs.ams.org/featurecolumn/2021/11/01/what-is-a-prime-and-who-decides/?replytocom=325
# What is a prime, and who decides?

Some people view mathematics as a purely platonic realm of ideas independent of the humans who dream about those ideas. If that's true, why can't we agree on the definition of something as universal as a prime number?

Courtney R. Gibbons

Hamilton College

### Introduction

Scene: It's a dark and stormy night at SETI. You're sitting alone, listening to static on the headphones, when all of a sudden you hear something: two distinct pulses in the static. Now three. Now five. Then seven, eleven, thirteen — it's the sequence of prime numbers! A sequence unlikely to be generated by any astrophysical phenomenon (at least, so says Carl Sagan in Contact, the novel from which I've lifted this scene) — in short, proof of alien intelligence via the most fundamental mathematical objects in the universe…

Hi! I'm Courtney, and I'm new to this column. I've been enjoying reading my counterparts' posts, including Joe Malkevitch's column Decomposition and David Austin's column Meet Me Up in Space. I'd like to riff on those columns a bit, both to get to some fun algebra (atoms and ideals!) and to poke at the idea that math is independent of our humanity.

### Introduction, Take 2

Scene: It's a dark and stormy afternoon in Clinton, NY. I'm sitting alone at my desk with two undergraduate abstract algebra books in front of me, both propped open to their definitions of a prime number…

• Book A says that an integer $p$ (of absolute value at least 2) is prime provided it has exactly two positive integer factors. Otherwise, Book A says $p$ is composite.
• Book B says that an integer $p$ (of absolute value at least 2) is prime provided whenever it divides a product of integers, it divides one of the factors (in any possible factorization). Otherwise, Book B says $p$ is composite.

Note: Book A is Bob Redfield's Abstract Algebra: A Concrete Introduction (Bob is my predecessor at Hamilton College). Book B is Abstract Algebra: Rings, Groups, and Fields by Marlow Anderson (Colorado College; Marlow was my undergraduate algebra professor) and Todd Feil (Denison University).

I reached for the nearest algebra textbook to use as a tie-breaker, which happened to be Dummit and Foote's Abstract Algebra, only to find that the authors hedge their bets by providing Book A's definition and then saying, well, actually, Book B's definition can be used to define prime. Yes, it's a nice exercise to show these definitions are equivalent. I can't help but wonder, though: which one captures what it really means to be prime, and which is merely a consequence of that definition?

### Who Decides?

Some folks take the view that math is a true and beautiful thing and we humans merely discover it. This seems to me to be a way of saying that math is independent of our humanity. Who we are, what communities we belong to — these don't have any effect on Mathematics, Platonic Realm of Pure Ideas. To quantify this position as one might for an intro to proofs class: For each mathematical idea $x$, $x$ has a truth independent of humanity.

And yet, two textbooks fundamental to the undergraduate math curriculum are sitting here on my desk with the audacity to disagree about the very definition of arguably the most pure, most platonic, most absolutely mathematical phenomenon you could hope to encounter: prime numbers! This isn't a perfect counterexample to the universally quantified statement above (maybe one of these books is wrong?).
But in my informal survey of undergraduate algebra textbooks (the librarians at Hamilton really love me and the havoc I wreak in the stacks!), there's not exactly a consensus on the definition of a prime! As far as I can tell, the only consensus is that we shouldn't consider $-1$, $0$, or $1$ to be prime numbers. But, uh, why not?!

In the case of $0$, it breaks both definitions. You can't divide by zero (footnote: well, you shouldn't divide by zero if you want numbers to be meaningful, which is, of course, a decision that someone made and that we continue to make when we assert "you can't divide by zero"), and zero has infinitely many positive integer factors. But when $\pm 1$ divides a product, it divides one (all!) of the factors. And what's so special about exactly two positive divisors anyway? Why not "at most two" positive divisors?

Well, if you're reading this, you probably have had a course in algebra, and so you know (or can be easily persuaded, I hope!) that the integers have a natural (what's natural is a matter of opinion, of course) algebraic analog in a ring of polynomials in a single variable with coefficients from a field $F$. The resemblance is so strong, algebraically, that we call $F[x]$ an integral domain ("a place where things are like integers" is my personal translation). The idea of prime, or "un-break-down-able", comes back in the realm of polynomials, and Book A and Book B provide definitions as follows:

• Book B says that a nonconstant polynomial $p(x)$ is irreducible provided the only way it factors is into a product in which one of the factors must have degree 0 (and the other necessarily has the same degree as $p(x)$). Otherwise, Book B says $p(x)$ is reducible.
• Book A says that a nonconstant polynomial $p(x)$ is irreducible provided whenever $p(x)$ divides a product of polynomials in $F[x]$, it divides one of the factors. Otherwise, Book A says $p(x)$ is reducible.

Both books agree, however, that a polynomial is reducible if and only if it has a factorization that includes more than one irreducible factor (and thus a polynomial cannot be both reducible and irreducible). Notice here that we have a similar restriction: the zero polynomial is excluded from the reducible/irreducible conversation, just as the integer 0 was excluded from the prime/composite conversation. But what about the other constant polynomials? They satisfy both definitions aside from the seemingly artificial caveat that they're not allowed to be irreducible!

Well, folks, it turns out that in the integers and in $F[x]$, if you're hoping to have meaningful theorems (like the Fundamental Theorem of Arithmetic or an analog for polynomials, both of which say that factorization into primes/irreducibles is unique up to a mild condition), you don't want to allow things with multiplicative inverses to be among your un-break-down-ables! We call elements with multiplicative inverses units, and in the integers, $(-1)\cdot(-1) = 1$ and $1\cdot 1 = 1$, so both $-1$ and $1$ are units (they're the only units in the integers).

In the integers, we want $6$ to factor uniquely into $2\cdot 3$, or, perhaps (if we're being generous and allowing negative numbers to be prime, too) into $(-2)\cdot(-3)$. This generosity is pretty mild: $2$ and $-2$ are associates, meaning that they are the same up to multiplication by a unit.
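Incidentally, the equivalence of the two textbook definitions for integers is easy to spot-check in code. Book B's divisibility condition is universally quantified, so a program can only sample it on a finite grid of products (the search bound below is an arbitrary choice of mine), but a quick experiment is reassuring:

```python
# Spot-checking that the two definitions pick out the same integers.
def prime_A(p):
    # Book A: |p| >= 2 with exactly two positive integer factors
    return abs(p) >= 2 and sum(1 for d in range(1, abs(p) + 1) if abs(p) % d == 0) == 2

def prime_B(p, bound=50):
    # Book B: p | a*b implies p | a or p | b -- universally quantified,
    # so here it is only sampled on products with factors below `bound`
    if abs(p) < 2:
        return False
    return all(a % p == 0 or b % p == 0
               for a in range(1, bound) for b in range(1, bound)
               if (a * b) % p == 0)

print(all(prime_A(n) == prime_B(n) for n in range(2, 50)))   # True
```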
One statement of the Fundamental Theorem of Arithmetic is that every integer (of absolute value at least two) is prime or factors uniquely into a product of primes up to the order of the factors and up to associates. That means that the list of prime factors (up to associates) that appear in the factorization of $6$ is an invariant of $6$, and the number of prime factors (allowing for repetition) in any factorization of $6$ is another invariant (and it's well-defined). Let's call it the length of $6$.

But if we were to let $1$ or $-1$ be prime? Goodbye, fundamental theorem! We could write $6 = 2\cdot 3$, or $6 = 1\cdot 1\cdot 1 \cdots 1 \cdot 2 \cdot 3$, or $6 = (-1)\cdot (-2) \cdot 3$. We have cursed ourselves with the bounty of infinitely many distinct possible factorizations of $6$ into a product of primes (even accounting for the order of the factors or associates), and we can't even agree on the length of $6$. Or $2$. Or $1$.

The skeptical, critical-thinking reader has already been working on workarounds. Take the minimum number of factors as the length. Write down the list of prime factors without their powers. Keep the associates in the list (or throw them out, but at that point, just agree that $1$ and $-1$ shouldn't be prime!). But in the polynomial ring $F[x]$, dear reader, every nonzero constant polynomial is a unit: given $p(x) = a$ for some nonzero $a \in F$, the polynomial $d(x) = a^{-1}$ is also in $F[x]$ since $a^{-1}$ is in the field $F$, and $p(x)d(x) = 1$, the multiplicative identity in $F[x]$. So, if you allow units to be irreducible in $F[x]$, now even an innocent (and formerly irreducible) polynomial like $x$ has infinitely many factorizations into things like $(a)(1/a)(b)(1/b)\cdots x$. So much for those workarounds!

So, since we like our Fundamental Theorems to be neat, tidy, and useful, we agree to exclude units from our definitions of prime and composite (or irreducible and reducible, or indecomposable and decomposable, or…).

### More Consequences (or Precursors)

Lately I've been working on problems related to semigroups, by which I mean nonempty sets equipped with an associative binary operation — and I also insist that my semigroups be commutative and have a unit element. In the study of factorization in semigroups, the Fundamental Theorem of Arithmetic leads to the idea of counting the distinct factors an element can have in any factorization into atoms (the semigroup equivalent of irreducible/prime elements; these are elements $p$ that factor only into products involving units and associates of $p$).

One of my favorite (multiplicative) semigroups is $\mathbb{Z}[\sqrt{-5}] = \{a + b \sqrt{-5} \, : \, a,b \in \mathbb{Z}\}$, favored because the element $6$ factors distinctly into two different products of irreducibles! In this semigroup, $6 = 2\cdot 3$ and $6 = (1+\sqrt{-5})(1-\sqrt{-5})$. It's a nice exercise to show that $1\pm \sqrt{-5}$ are not associates of $2$ or $3$, yielding two distinct factorizations into atoms! While we aren't lucky enough to have unique factorization, at least we have that the number of irreducible factors in any factorization of $6$ is always two. That is, excluding units from our list of atoms leads to an invariant of $6$ in the semigroup $\mathbb{Z}[\sqrt{-5}]$.

Anyway, without the context of more general situations like this semigroup (and I don't know, is $\mathbb{Z}[\sqrt{-5}]$ one of those platonically true things, or were Gauss et al. just really imaginative weirdos?), would we feel so strongly that $1$ is not a prime integer?
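The two factorizations of $6$ can also be replayed mechanically. A sketch, representing $a + b\sqrt{-5}$ as the integer pair $(a, b)$ and using the multiplicative norm $N(a + b\sqrt{-5}) = a^2 + 5b^2$:

```python
# Arithmetic in Z[sqrt(-5)], with a + b*sqrt(-5) stored as the pair (a, b).
def mul(p, q):
    (a, b), (c, d) = p, q
    return (a*c - 5*b*d, a*d + b*c)       # (a + b sqrt(-5)) (c + d sqrt(-5))

def norm(p):
    a, b = p
    return a*a + 5*b*b                    # multiplicative: N(pq) = N(p) N(q)

print(mul((2, 0), (3, 0)))                # (6, 0): 6 = 2 * 3
print(mul((1, 1), (1, -1)))               # (6, 0): 6 = (1 + sqrt(-5))(1 - sqrt(-5))

# Norms of the four factors are 4, 9, 6, 6. A proper factorization of any of
# them would require a factor of norm 2 or 3, but a^2 + 5b^2 never equals
# 2 or 3, so all four are atoms (units are exactly the norm-1 elements).
print([norm(p) for p in [(2, 0), (3, 0), (1, 1), (1, -1)]])
```

Since the norm is multiplicative, this norm argument is precisely the "nice exercise" mentioned above: no element of norm 2 or 3 exists, so $2$, $3$ and $1 \pm \sqrt{-5}$ cannot break down further, and the two factorizations are genuinely distinct.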
### Still More Consequences (or Precursors)

Reminding ourselves yet again that the integers form a ring under addition and multiplication, we might be interested in the ideals generated by prime numbers. (What's an ideal? It's a nonempty subset of the ring closed under addition, additive inverses, and scalar multiplication from the ring.) We might even call those ideals prime ideals, and then generalize to other rings! The thing is, if we do that, we end up with this definition:

(Books A and B agree here:) An ideal $P$ is prime provided $xy \in P$ implies $x$ or $y$ belongs to $P$.

But in the case of the integers — a principal ideal domain! — that means that a product $ab$ belongs to the principal ideal generated by the prime $p$ precisely when $p$ divides one of the factors.

From the perspective of rings, every (nonzero) ring has two trivial ideals: the ring $R$ itself (and if $R$ has unity, then that ideal is generated by $1$, or any other unit in $R$) and the zero ideal (generated by $0$). If we want the study of prime ideals to be the study of interesting ideals, then we want to exclude units from our list of potential primes. And once we do, we recover nice results like: an ideal $P$ is prime in a commutative ring with unity if and only if $R/P$ is an integral domain.

### Conclusions

I still have two books propped open on my desk, and after thinking about semigroups and ideals, I'm no closer to answering the question "But what is a prime, really?" than I was at the start of this column! All I have is some pretty good evidence that we, as mathematicians, might find it useful to exclude units from the prime-or-composite dichotomy (I haven't consulted with the mathematicians on other planets, though).

To me, that evidence is a reminder that we are constantly updating our mathematics framework in reference to what we learn as we do more math. We look back at these ideas that seemed so solid when we started — something fundamentally indivisible in some way — and realize that we're making it up as we go along. (And ignoring a lot of what other humans consider math, too, as we insist on our axioms and law of the excluded middle and the rest of the apparatus of "modern mathematics" while we're making it up…)

And the math that gets done, the math that allows us to update our framework… Well, that depends on what is trendy/fundable/publishable, who is trendy/fundable/publishable, and who is making all of those decisions. Perhaps, on planet Blarglesnort, math looks very different.

### References

Anderson, Marlow; Feil, Todd. A First Course in Abstract Algebra: Rings, Groups, and Fields. Third edition. ISBN: 9781482245523.

Dummit, David S.; Foote, Richard M. Abstract Algebra. Third edition. ISBN: 0471433349.

Redfield, Robert. Abstract Algebra: A Concrete Introduction. First edition. ISBN: 9780201437218.

Geroldinger, Alfred; Halter-Koch, Franz. Non-Unique Factorizations: Algebraic, Combinatorial and Analytic Theory. ISBN: 9781584885764.

1. If you don't know of it already, you might be interested in the book "Greek Mathematical Thought and the Origins of Algebra" by Jacob Klein. It discusses questions related to "what is number" and outlines how the notion of number changed from the Greeks through to the modern period.

2. Great column, Courtney. Who was it who said "God made the integers.
All the rest is the work of man." (Kronecker, maybe.) Anyway, I'm a Platonist at heart, but realize we usually have some editing to do with the ideas that exist "out there." The idea of discovering something is so much more romantic than the thought of inventing it. My observation is that we usually choose the definition of something to suit our present purpose and prove theorems that give equivalent definitions. My current example was choosing an appropriate definition for a module being finitely co-generated. I look forward to reading your future insights. Mentioning what mathematics is "trendy" or "fundable" was so on point.

3. A somewhat related issue with definitions that don't agree, one that has long caused me trouble when I wanted to explain my work to others: We have many equivalent (in the same sense as in your essay) definitions of a group. I work in "loops", which not too many have ever thought about, and a natural way to define them is to say "a group without the requirement of associativity". But showing those group definitions equivalent requires associativity! So a loop is "which kind of group" without associativity? (My choice: Work upwards, not downwards. Define a quasigroup as an algebraic object whose operation table is a Latin Square. Now a loop is a quasigroup with an identity.) Bob W

4. In asking "what is a prime?", perhaps it should also be asked whether primes are the right, the "scientific" thing to make the center of the study of numbers. Perhaps primes are only understandable in terms of something more fundamental or central. These remarks by Harold M. Edwards in the introduction to his Divisor Theory may be relevant: "… one of Kronecker's most valuable ideas, namely, the idea that the objective of the theory [of divisors] is to define greatest common divisors, not to achieve factorization into primes. The reason Kronecker gave greatest common divisors the primary role is simple: they are independent of the ambient field while factorization into primes is not. The very notion of primality depends on the field under consideration – a prime in one field may factor in a larger field – so if the theory is founded on factorization into primes, extension of the field entails a completely new theory. Greatest common divisors, on the other hand, can be defined in a manner that does not change at all when the field is extended …"
2022-01-16 22:15:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7454206943511963, "perplexity": 569.988312377112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300244.42/warc/CC-MAIN-20220116210734-20220117000734-00696.warc.gz"}
https://nips.cc/Conferences/2016/ScheduleMultitrack?event=7260
Poster

Preference Completion from Partial Rankings

Suriya Gunasekar · Sanmi Koyejo · Joydeep Ghosh

Wed Dec 07 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #32

We propose a novel and efficient algorithm for the collaborative preference completion problem, which involves jointly estimating individualized rankings for a set of entities over a shared set of items, based on a limited number of observed affinity values. Our approach exploits the observation that while preferences are often recorded as numerical scores, the predictive quantity of interest is the underlying rankings. Thus, attempts to closely match the recorded scores may lead to overfitting and impair generalization performance. Instead, we propose an estimator that directly fits the underlying preference order, combined with nuclear norm constraints to encourage low-rank parameters. Besides (approximate) correctness of the ranking order, the proposed estimator makes no generative assumption on the numerical scores of the observations. One consequence is that the proposed estimator can fit any consistent partial ranking over a subset of the items represented as a directed acyclic graph (DAG), generalizing standard techniques that can only fit preference scores. Despite this generality, for supervision representing total or blockwise total orders, the computational complexity of our algorithm is within a $\log$ factor of the standard algorithms for nuclear norm regularization based estimates for matrix completion. We further show promising empirical results for a novel and challenging application of collaborative ranking of the associations between brain regions and cognitive neuroscience terms.

#### Author Information

##### Sanmi Koyejo (UIUC)

Sanmi Koyejo is an Assistant Professor in the Department of Computer Science at Stanford University. Koyejo also spends time at Google as a part of the Brain team. Koyejo's research interests are in developing the principles and practice of trustworthy machine learning. Additionally, Koyejo focuses on applications to neuroscience and healthcare. Koyejo has been the recipient of several awards, including a best paper award from the conference on uncertainty in artificial intelligence (UAI), a Skip Ellis Early Career Award, and a Sloan Fellowship. Koyejo serves as the president of the Black in AI organization.
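For context, the "standard techniques that can only fit preference scores" mentioned in the abstract are nuclear-norm matrix completion methods. The sketch below is explicitly not the paper's estimator — it is a generic soft-impute-style baseline (iterative singular-value soft-thresholding), with matrix sizes, rank, and the threshold `lam` chosen purely for illustration:

```python
import numpy as np

def soft_impute(M, mask, lam=1.0, iters=200):
    """Nuclear-norm-regularized completion by singular-value soft-thresholding."""
    X = np.zeros_like(M)
    for _ in range(iters):
        Z = mask * M + (1 - mask) * X            # keep observed entries, fill the rest
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        X = (U * np.maximum(s - lam, 0.0)) @ Vt  # shrink singular values
    return X

rng = np.random.default_rng(0)
scores = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))  # rank-2 "affinities"
mask = (rng.random(scores.shape) < 0.5).astype(float)                 # ~50% observed
est = soft_impute(scores, mask, lam=0.5)

# What matters downstream is the induced ranking per user, not the scores:
print(np.argsort(-est[0]))   # item ranking for user 0
```

The paper's point is that a baseline like this fits the recorded scores themselves, whereas their estimator fits only the order information, so the example above is the thing being generalized rather than the proposed method.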
2023-02-07 18:39:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36978331208229065, "perplexity": 1467.7861312819368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500628.77/warc/CC-MAIN-20230207170138-20230207200138-00105.warc.gz"}
https://notebook.community/ddcampayo/ddcampayo.github.io/CFD1
# Numerical methods in fluid mechanics

## Theory I

The 1-D Linear Convection equation is the simplest, most basic model that can be used to learn something about CFD.

$$\frac{\partial \phi}{\partial t} + u \frac{\partial \phi}{\partial x} = 0$$

From the definition of a derivative (and simply removing the limit), we know that:

$$\frac{\partial \phi}{\partial x}\approx \frac{ \phi(x+\Delta x)- \phi(x)}{\Delta x}$$

Our discrete equation, then, is:

$$\frac{ \phi_i^{n+1}- \phi_i^n}{\Delta t} + u \frac{ \phi_i^n - \phi_{i-1}^n}{\Delta x} = 0$$

where $n$ and $n+1$ are two consecutive steps in time, while $i-1$ and $i$ are two neighboring points of the discretized $x$ coordinate. Given initial conditions, the only unknown in this discretization is $\phi_i^{n+1}$. We can solve for our unknown to get an equation that allows us to advance in time, as follows:

$$\phi_i^{n+1} = \phi_i^n - u \frac{\Delta t}{\Delta x}( \phi_i^n- \phi_{i-1}^n)$$

Notice the only combination that appears is

$$\mathrm{Co} := u \frac{\Delta t}{\Delta x}$$
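A minimal runnable version of this update (first-order upwind in space, forward Euler in time). The grid size, wave speed, Courant number and hat-shaped initial condition are illustrative choices, not taken from the original notebook:

```python
import numpy as np

nx, nt = 81, 100
dx = 2.0 / (nx - 1)
u = 1.0
Co = 0.5                     # Courant number u*dt/dx, kept <= 1 for stability
dt = Co * dx / u

phi = np.ones(nx)
phi[int(0.5 / dx):int(1.0 / dx) + 1] = 2.0   # hat-shaped initial condition

for _ in range(nt):
    # phi_i^{n+1} = phi_i^n - Co * (phi_i^n - phi_{i-1}^n)
    phi[1:] = phi[1:] - Co * (phi[1:] - phi[:-1])

print(phi.max(), phi.argmax() * dx)   # the hat has advected right (and diffused)
```

Note that only $\mathrm{Co}$ enters the update, exactly as the last equation above suggests; choosing $\mathrm{Co} > 1$ makes this explicit scheme unstable.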
2021-10-19 04:45:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8903618454933167, "perplexity": 361.3018369746299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585242.44/warc/CC-MAIN-20211019043325-20211019073325-00600.warc.gz"}
https://docs.deistercloud.com/content/Axional%20development%20tools.15/Axional%20Studio.4/Development%20Guide.10/Languages%20reference.16/XSQL%20Script.10/Packages.40/http/http.request.JRepAdapter.xml?embedded=true
Below is a package of functions related to the adapter: functions that allow assigning information to, and collecting information from, the form cursor. Note that if the cursor is modified or reloaded, the adapter's information will be lost.

#### Deprecated

Deprecated since September 2017. Use cursor instead.

<http.request.JRepAdapter />
2021-10-25 13:49:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5196591019630432, "perplexity": 1811.3125074275044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587711.69/warc/CC-MAIN-20211025123123-20211025153123-00233.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/surface-area-right-circular-cylinder-the-inner-diameter-circular_6975
# Solution - Surface Area of a Right Circular Cylinder

Concept: Surface Area of a Right Circular Cylinder

#### Question

The inner diameter of a circular well is 3.5 m. It is 10 m deep. Find (i) its inner curved surface area, (ii) the cost of plastering this curved surface at the rate of Rs 40 per m². (Assume pi = 22/7)

#### Solution

The full worked solution requires an account to view.
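Although the site's solution is behind a login, the computation follows directly from the given data; a quick check:

```python
# Inner curved surface area of the well and the plastering cost.
d, h = 3.5, 10            # inner diameter (m) and depth (m)
pi = 22 / 7               # as instructed by the question
csa = pi * d * h          # CSA of a cylinder = 2*pi*r*h = pi*d*h
cost = 40 * csa           # Rs 40 per square metre
print(csa)                # 110.0 -> inner curved surface area is 110 m^2
print(cost)               # 4400.0 -> plastering cost is Rs 4400
```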
2017-12-18 18:48:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20256741344928741, "perplexity": 2088.716228429787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948619804.88/warc/CC-MAIN-20171218180731-20171218202731-00131.warc.gz"}
http://basilisk.fr/src/compressible.h
# Compressible gas dynamics

The Euler system of conservation laws for a compressible gas can be written

$$\partial_t \begin{pmatrix} \rho \\ E \\ w_x \\ w_y \end{pmatrix} + \nabla_x \cdot \begin{pmatrix} w_x \\ \dfrac{w_x}{\rho}(E+p) \\ \dfrac{w_x^2}{\rho}+p \\ \dfrac{w_y w_x}{\rho} \end{pmatrix} + \nabla_y \cdot \begin{pmatrix} w_y \\ \dfrac{w_y}{\rho}(E+p) \\ \dfrac{w_y w_x}{\rho} \\ \dfrac{w_y^2}{\rho}+p \end{pmatrix} = 0$$

with $\rho$ the gas density, $E$ the total energy, $\mathbf{w}$ the gas momentum and $p$ the pressure given by the equation of state

$$p = (\gamma - 1)\left(E - \rho\,\mathbf{u}^2/2\right)$$

with $\gamma$ the polytropic exponent. This system can be solved using the generic solver for systems of conservation laws.

```
#include "conservation.h"
```

The conserved scalars are the gas density $\rho$ and the total energy $E$. The only conserved vector is the momentum $\mathbf{w}$. The constant $\gamma$ is represented by `gammao` here, with a default value of 1.4.

```
scalar ρ[], E[];
vector w[];

scalar * scalars = {ρ, E};
vector * vectors = {w};
double gammao = 1.4;
```

The system is entirely defined by the `flux()` function called by the generic solver for conservation laws. The parameter passed to the function is the array `s` which contains the state variables for each conserved field, in the order of their definition above (i.e. scalars then vectors).

```
void flux (const double * s, double * f, double e[2])
{
```

We first recover each value ($\rho$, $E$, $w_x$ and $w_y$) and then compute the corresponding fluxes (`f[0]`, `f[1]`, `f[2]` and `f[3]`).

```
  double ρ = s[0], E = s[1], wn = s[2], w2 = 0.;
  for (int i = 2; i < 2 + dimension; i++)
    w2 += sq(s[i]);
  double un = wn/ρ, p = (gammao - 1.)*(E - 0.5*w2/ρ);

  f[0] = wn;
  f[1] = un*(E + p);
  f[2] = un*wn + p;
  for (int i = 3; i < 2 + dimension; i++)
    f[i] = un*s[i];
```

The minimum and maximum eigenvalues for the Euler system are the characteristic speeds $u \pm \sqrt{\gamma p/\rho}$.

```
  double c = sqrt(gammao*p/ρ);
  e[0] = un - c; // min
  e[1] = un + c; // max
}
```
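For context, a minimal, untested driver sketch following Basilisk's usual event/`run()` conventions is shown below. It is not part of the original page: the grid header, resolution, output time and Sod shock-tube initial values are all assumptions.

```
/* Hypothetical Sod shock-tube driver for the module above (not from the
   original page). Grid choice, end time and initial states are assumptions. */
#include "grid/cartesian1D.h"
#include "compressible.h"

int main() {
  init_grid (256);   /* 256 cells on the default [0,1] domain */
  run();
}

event init (t = 0) {
  foreach() {
    ρ[] = x < 0.5 ? 1. : 0.125;                 /* density jump */
    w.x[] = 0.;                                 /* gas initially at rest */
    E[] = (x < 0.5 ? 1. : 0.1)/(gammao - 1.);   /* E = p/(γ-1) when u = 0 */
  }
}

event print (t = 0.2) {  /* dump the density profile at t = 0.2 */
  foreach()
    printf ("%g %g\n", x, ρ[]);
}
```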
2018-09-19 12:27:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 16, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7791741490364075, "perplexity": 1816.0794859633656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156224.9/warc/CC-MAIN-20180919122227-20180919142227-00409.warc.gz"}