Columns: url (string, 14–2.42k chars) · text (string, 100–1.02M chars) · date (string, 19 chars) · metadata (string, 1.06k–1.1k chars)
https://telescoper.wordpress.com/2010/11/23/bayes-and-hi-theorem/
Bayes and his Theorem

My earlier post on Bayesian probability seems to have generated quite a lot of readers, so this lunchtime I thought I'd add a little bit of background. The previous discussion started from the result $P(B|AC) = K^{-1}P(B|C)P(A|BC) = K^{-1} P(AB|C)$ where $K=P(A|C).$ Although this is called Bayes' theorem, the general form of it as stated here was actually first written down, not by Bayes, but by Laplace. What Bayes did was derive the special case of this formula for "inverting" the binomial distribution. This distribution gives the probability of x successes in n independent "trials", each having the same probability of success, p; each "trial" has only two possible outcomes ("success" or "failure"). Trials like this are usually called Bernoulli trials, after Jacob Bernoulli. If we ask the question "what is the probability of exactly x successes from the possible n?", the answer is given by the binomial distribution: $P_n(x|n,p)= C(n,x) p^x (1-p)^{n-x}$ where $C(n,x)= n!/x!(n-x)!$ is the number of distinct combinations of x objects that can be drawn from a pool of n.

You can probably see immediately how this arises. The probability of x consecutive successes is p multiplied by itself x times, or $p^x$. The probability of (n-x) successive failures is similarly $(1-p)^{n-x}$. These two terms together therefore give the probability of one particular sequence containing exactly x successes (and hence n-x failures). The combinatorial factor in front takes account of the fact that the ordering of successes and failures doesn't matter. The binomial distribution applies, for example, to repeated tosses of a coin, in which case p is taken to be 0.5 for a fair coin. A biased coin might have a different value of p, but as long as the tosses are independent the formula still applies. The binomial distribution also applies to problems involving drawing balls from urns: it works exactly if the balls are replaced in the urn after each draw, but it also applies approximately without replacement, as long as the number of draws is much smaller than the number of balls in the urn. I leave it as an exercise to calculate the expectation value of the binomial distribution, but the result is not surprising: E(X)=np. If you toss a fair coin ten times the expectation value for the number of heads is 10 times 0.5, which is five. No surprise there. After another bit of maths, the variance of the distribution can also be found. It is np(1-p).

So this gives us the probability of x given a fixed value of p. Bayes was interested in the inverse of this result, the probability of p given x. In other words, Bayes was interested in the answer to the question "If I perform n independent trials and get x successes, what is the probability distribution of p?". This is a classic example of inverse reasoning. He got the correct answer, eventually, but by very convoluted reasoning. In my opinion it is quite difficult to justify the name Bayes' theorem based on what he actually did, although Laplace did specifically acknowledge this contribution when he derived the general result later, which is no doubt why the theorem is always named in Bayes' honour. This is not the only example in science where the wrong person's name is attached to a result or discovery. In fact, it is almost a law of Nature that any theorem that has a name has the wrong name. I propose that this observation should henceforth be known as Coles' Law.

So who was the mysterious mathematician behind this result?
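Before moving on to Bayes himself: the expectation value and variance quoted above are easy to check numerically. The following short Python sketch is an illustrative addition (not part of the original post); the values n = 10 and p = 0.5 are just the fair-coin example, and the sum runs directly over the binomial probabilities:

```python
# Numerical check of E(X) = n*p and Var(X) = n*p*(1-p) for the binomial distribution.
from math import comb

n, p = 10, 0.5  # e.g. ten tosses of a fair coin

# Binomial pmf: C(n, x) * p^x * (1-p)^(n-x) for x = 0..n
pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

mean = sum(x * P for x, P in enumerate(pmf))
var = sum((x - mean)**2 * P for x, P in enumerate(pmf))

print(mean, var)  # expect 5.0 and 2.5, i.e. n*p and n*p*(1-p)
```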
Thomas Bayes was born in 1702, son of Joshua Bayes, who was a Fellow of the Royal Society (FRS) and one of the very first nonconformist ministers to be ordained in England. Thomas was himself ordained and for a while worked with his father in the Presbyterian Meeting House in Leather Lane, near Holborn in London. In 1720 he was a minister in Tunbridge Wells, in Kent. He retired from the church in 1752 and died in 1761. Thomas Bayes didn't publish a single paper on mathematics in his own name during his lifetime but despite this was elected a Fellow of the Royal Society (FRS) in 1742. Presumably he had Friends of the Right Sort. He did however write a paper on fluxions in 1736, which was published anonymously. This was probably the grounds on which he was elected an FRS. The paper containing the theorem that now bears his name was published posthumously in the Philosophical Transactions of the Royal Society of London in 1764.

P.S. I understand that the authenticity of the picture is open to question. Whoever it actually is, he looks to me a bit like Laurence Olivier…

11 Responses to "Bayes and his Theorem"

1. Bryn Jones Says: The Royal Society is providing free access to electronic versions of its journals until the end of this month. Readers of this blog might like to look at Thomas Bayes's two posthumous publications in the Philosophical Transactions. The first is a short paper about series. The other is the paper about statistics communicated by Richard Price. (The statistics paper may be accessible on a long-term basis because it is one of the Royal Society's Trailblazing papers the society provides access to as part of its 350th anniversary celebrations.) Incidentally, both Thomas Bayes and Richard Price were buried in the Bunhill Fields Cemetery in London and their tombs can be seen there today.

2. Steve Warren Says: You may be remembered in history as the discoverer of coleslaw, but you weren't the first. • Anton Garrett Says: For years I thought it was "cold slaw" because it was served cold. A good job I never asked for warm slaw.

3. telescoper Says: My surname, in Spanish, means "Cabbages". So it was probably one of my ancestors who invented the chopped variety.

4. Anton Garrett Says: Thomas Bayes is now known to have gone to Edinburgh University, where his name appears in the records. He was barred from English universities because his nonconformist family did not have him baptised in the Church of England. (Charles Darwin's nonconformist family covered their bets by having baby Charles baptised in the CoE, although perhaps they believed it didn't count as a baptism since Charles had no say in it. This is why he was able to go to Christ's College, Cambridge.)

5. "Cole" is an old English word for cabbage, which survives in "cole slaw". The German word is "Kohl". (Somehow, I don't see PM or President Cabbage being a realistic possibility. 🙂 ) Note that Old King Cole is unrelated (etymologically). Of course, this discussion could cause Peter to post a clip of Nat "King" Cole (guess what his real surname is). To remind people to pay attention to spelling when they hear words, we'll close with the Quote of the Day: It's important to pay close attention in school. For years I thought that bears masturbated all winter. —Damon R. Milhem

6. Of course, this discussion could cause Peter to post a clip of Nat King Cole (giess what his real surname is).

7. Of course, this discussion could cause Peter to post a clip of Nat King Cole (giess what his real surname is).
The first typo was my fault; the extra linebreaks in the second attempt (tested again here) appear to be a new "feature".

8. telescoper Says: The noun "cole" can be found in English dictionaries as a generic name for plants of the cabbage family. It is related to the German Kohl and Scottish kail or kale. These are all derived from the Latin word colis (or caulis), meaning a stem, which is also the root of the word cauliflower. The surname "Cole" and the variant "Coles" are fairly common in England and Wales, but are not related to the Latin word for cabbage. Both are diminutives of the name "Nicholas".

9. […] I posted a little piece about Bayesian probability. That one and the others that followed it (here and here) proved to be surprisingly popular so I've been planning to add a few more posts […]

10. It already has a popular name: Stigler's law of eponymy.
2016-09-28 22:12:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6193313002586365, "perplexity": 1089.0032368849284}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661768.10/warc/CC-MAIN-20160924173741-00020-ip-10-143-35-109.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/physical-quantity-analogous-to-inductance.691308/
Physical Quantity Analogous to Inductance 1. May 12, 2013 tapan_ydv Hi, I understand that some physical quantities in electromagnetism are analogous to physical quantities in heat transfer. For instance, electric field is analogous to temperature gradient. I want to know which physical quantity in heat transfer is analogous to Inductance ("L") ? Regards, 2. May 12, 2013 tiny-tim welcome to pf! hi tapan_ydv! welcome to pf! i don't know about a heat transfer analogy, but a hydraulics analogy is a paddle-wheel A heavy paddle wheel placed in the current. The mass of the wheel and the size of the blades restrict the water's ability to rapidly change its rate of flow (current) through the wheel due to the effects of inertia, but, given time, a constant flowing stream will pass mostly unimpeded through the wheel, as it turns at the same speed as the water flow …​ (from http://en.wikipedia.org/wiki/Hydraulic_analogy#Component_equivalents ) 3. May 12, 2013 technician In mechanics.....inertia 4. May 12, 2013 tiny-tim how? 5. May 12, 2013 technician Reluctance to change...as in a paddle wheel. Last edited: May 12, 2013
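For readers wanting the "inertia" answer made explicit, the defining equations can be placed side by side. This identification is a standard one, added here for illustration rather than quoted from the thread:

$$v(t) = L\,\frac{\mathrm{d}i}{\mathrm{d}t} \qquad\longleftrightarrow\qquad F(t) = m\,\frac{\mathrm{d}v}{\mathrm{d}t}$$

Voltage plays the role of force, current the role of velocity, and inductance the role of mass. Ordinary (Fourier) heat conduction has no corresponding rate-of-change-of-flow term, which is one reason a clean heat-transfer analogue of L is hard to pin down.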
2018-03-25 03:47:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8782012462615967, "perplexity": 2018.39232950813}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651780.99/warc/CC-MAIN-20180325025050-20180325045050-00464.warc.gz"}
https://docs.microej.com/en/latest/ApplicationDeveloperGuide/testsuiteEngine.html
# MicroEJ Test Suite Engine

## Introduction

The MicroEJ Test Suite Engine is a generic tool made for validating any development project using automatic testing. This section details advanced configuration for users who wish to integrate custom test suites in their build flow. The MicroEJ Test Suite Engine allows the user to test any kind of project within the configuration of a generic Ant file. The MicroEJ Test Suite Engine is already pre-configured for running test suites on a MicroEJ Platform (either on Simulator or on Device).

## Using the MicroEJ Test Suite Ant Tasks

Multiple Ant tasks are available in the testsuite-engine.jar provided in the Build Kit:

• testsuite allows the user to run a given test suite and to retrieve an XML report document in a JUnit format.
• javaTestsuite is a subtask of the testsuite task, used to run a specialized test suite for Java (it will only run Java classes).
• htmlReport is a task which generates an HTML report from a list of JUnit report files.

### The testsuite Task

The following attributes are mandatory:

| Attribute Name | Description |
| --- | --- |
| outputDir | The output folder of the test suite. The final report will be generated at [outputDir]/[label]/[reportName].xml, see the testsuiteReportFileProperty and testsuiteReportDirProperty attributes. |
| harnessScript | The harness script must be an Ant script and it is the script which will be called for each test by the test suite engine. It is called with a basedir located at the output location of the current test. |

The test suite engine provides the following properties to the harness script, giving it all the information needed to start the test:

| Property Name | Description |
| --- | --- |
| testsuite.test.name | The output name of the current test in the report. Default value is the relative path of the test. It can be manually set by the user. More details on the output name are available in the section Specific Custom Properties. |
| testsuite.test.path | The current test absolute path in the filesystem. |
| testsuite.test.properties | The absolute path to the custom properties of the current test (see the property customPropertiesExtension). |
| testsuite.common.properties | The absolute path to the common properties of all the tests (see the property commonProperties). |
| testsuite.report.dir | The absolute path to the directory of the final report. |

The following attributes are optional:

| Attribute Name | Description | Default value |
| --- | --- | --- |
| timeOut | The time in seconds before any test is considered as unknown. Set it to 0 to disable the time-out. | 60 |
| verboseLevel | The required level to output messages from the test suite. Can be one of these values: error, warning, info, verbose, debug. | info |
| reportName | The final report name (without extension). | testsuite-report |
| customPropertiesExtension | The extension of the custom properties for each test. For instance, if it is set to .options, a test named xxx/Test1.class will be associated with xxx/Test1.options. If a file exists for a test, the property testsuite.test.properties is set with its absolute path and given to the harnessScript. If the test path references a directory, then the custom properties path is the concatenation of the test path and the customPropertiesExtension value. | .properties |
| commonProperties | The properties to apply to every test of the test suite. These options might be overridden by the custom properties of each test. If this option is set and the file exists, the property testsuite.common.properties is set to the absolute path of this file. | no common properties |
| label | The build label. | timestamp of when the test suite was invoked |
| productName | The name of the current tested product. | TestSuite |
| jvm | The location of the Java VM used to start the test suite (the harnessScript is called as is: [jvm] [...] -buildfile [harnessScript]). | java.home location if the property is set, java otherwise |
| jvmargs | The arguments to pass to the Java VM started for each test. | None |
| testsuiteReportFileProperty | The name of the Ant property in which the path of the final report is stored. Path is [outputDir]/[label]/[reportName].xml. | testsuite.report.file |
| testsuiteReportDirProperty | The name of the Ant property in which the path of the directory of the final report is stored. Path is [outputDir]/[label]. | testsuite.report.dir |
| testsuiteResultProperty | The name of the Ant property in which to store the result of the test suite (true or false), depending on whether every test successfully passed the test suite or not. Ignored tests do not affect this result. | None |

Finally, you have to give, as nested elements, the path containing the tests:

| Element Name | Description |
| --- | --- |
| testPath | Contains all the files of the tests which will be launched by the test suite. |
| testIgnoredPath (optional) | Any test in the intersection between testIgnoredPath and testPath will be executed by the test suite, but will not appear in the JUnit final report. It will still generate a JUnit report for each test, which allows the HTML report to mark them as "ignored" if it is generated. Mostly used for known bugs which are not considered as failures but are still relevant enough to appear in the HTML report. |

Example of test suite task invocation:

<!-- Launch the testsuite engine -->
<testsuite:testsuite
    timeOut="${microej.kf.testsuite.timeout}"
    outputDir="${target.test.xml}/testkf"
    harnessScript="${com.is2t.easyant.plugins#microej-kf-testsuite.microej-kf-testsuite-harness-jpf-emb.xml.file}"
    commonProperties="${microej.kf.launch.propertyfile}"
    testsuiteResultProperty="testkf.result"
    testsuiteReportDirProperty="testkf.testsuite.report.dir"
    productName="${module.name} testkf"
    jvmArgs="${microej.kf.testsuite.jvmArgs}"
    lockPort="${microej.kf.testsuite.lockPort}"
    verboseLevel="${testkf.verbose.level}"
>
    <testPath refid="target.testkf.path"/>
</testsuite:testsuite>

### The javaTestsuite Task

This task extends the testsuite task, specializing the test suite to only start real Java classes. This task retrieves the classname of the tests from the classfile and provides new properties to the harness script:

| Property Name | Description |
| --- | --- |
| testsuite.test.class | The classname of the current test. The value of the property testsuite.test.name is also set to the classname of the current test. |
| testsuite.test.classpath | The classpath of the current test. |
<!-- Launch test suite -->
<testsuite:javaTestsuite
    verboseLevel="${microej.testsuite.verboseLevel}"
    timeOut="${microej.testsuite.timeout}"
    outputDir="${target.test.xml}/@{prefix}"
    harnessScript="${harness.file}"
    commonProperties="${microej.launch.propertyfile}"
    testsuiteResultProperty="@{prefix}.result"
    testsuiteReportDirProperty="@{prefix}.testsuite.report.dir"
    productName="${module.name} @{prefix}"
    jvmArgs="${microej.testsuite.jvmArgs}"
    lockPort="${microej.testsuite.lockPort}"
    retryCount="${microej.testsuite.retry.count}"
    retryIf="${microej.testsuite.retry.if}"
    retryUnless="${microej.testsuite.retry.unless}"
>
    <testPath refid="target.@{prefix}.path"/>
    <testIgnoredPath refid="tests.@{prefix}.ignored.path" />
</testsuite:javaTestsuite>

### The htmlReport Task

This task allows the user to transform a given path containing a set of JUnit reports into a detailed HTML report. Here are the attributes to fill in:

• A nested fileset element containing all the JUnit reports of each test. Take care to exclude the final JUnit report generated by the test suite.
• A nested element report:
  • format: the format of the generated HTML report. Must be noframes or frames. When the noframes format is chosen, a standalone HTML file is generated.
  • todir: the output folder of the HTML report.
  • The report tag accepts the nested tag param with name and expression attributes. These tags can pass XSL parameters to the stylesheet. The built-in stylesheets support the following parameters:
    • PRODUCT: the product name that is displayed in the title of the HTML report.
    • TITLE: the comment that is displayed in the title of the HTML report.

Note: it is advised to set the format to noframes if your test suite is not a Java test suite. If the format is set to frames with a non-Java MicroEJ Test Suite, the names of the links will not be relevant because packages do not exist.

Example of htmlReport task invocation:

<!-- Generate HTML report -->
<testsuite:htmlReport>
    <fileset dir="${@{prefix}.testsuite.report.dir}">
        <include name="**/*.xml"/> <!-- include unary reports -->
        <exclude name="**/bin/**/*.xml"/> <!-- exclude test bin files -->
        <exclude name="*.xml"/> <!-- exclude global report -->
    </fileset>
    <report format="noframes" todir="${target.test.html}/@{prefix}"/>
</testsuite:htmlReport>

## Using the Trace Analyzer

This section briefly explains how to use the Trace Analyzer. The MicroEJ Test Suite comes with an archive containing the Trace Analyzer, which can be used to analyze the output trace of an application. It comes in different forms:

• The FileTraceAnalyzer analyzes a file, searching for the given tags and failing if the success tag is not found.
• The SerialTraceAnalyzer analyzes the data from a serial connection.

Here are the options common to all TraceAnalyzer tasks:

• successTag: the regular expression which is synonymous with success when found (by default .*PASSED.*).
• failureTag: the regular expression which is synonymous with failure when found (by default .*FAILED.*).
• verboseLevel: int value between 0 and 9 defining the verbosity level.
• waitingTimeAfterSuccess: waiting time (in s) after success before closing the stream (by default 5).
• noActivityTimeout: timeout (in s) with no activity on the stream before closing the stream. Set it to 0 to disable the timeout (default value is 0).
• stopEOFReached: boolean value. Set to true to stop analyzing when the input stream EOF is reached. If false, continue until the timeout is reached (by default false).
• onlyPrintableCharacters: boolean value. Set to true to only dump ASCII printable characters (by default false).

Here are the options specific to the FileTraceAnalyzer task:

• traceFile: path to the file to analyze.

Here are the options specific to the SerialTraceAnalyzer task:

• port: the comm port to open.
• baudrate: serial baud rate (by default 9600).
• databits: data bits (5|6|7|8) (by default 8).
• stopBits: stop bits (0|1|3 for (1_5)) (by default 1).
• parity: none | odd | even (by default none).

## Appendix

The goal of this section is to explain some tips and tricks that might be useful in your usage of the test suite engine.

### Specific Custom Properties

Some custom properties are specific and are retrieved by the test suite engine from the custom properties file of a test.

• The testsuite.test.name property is the output name of the current test. Here are the steps to compute the output name of a test:
  • If the custom properties are enabled and a property named testsuite.test.name is found in the corresponding file, then the output name of the current test is set to it.
  • Otherwise, if the running MicroEJ Test Suite is a Java test suite, the output name is set to the class name of the test.
  • Otherwise, a common prefix is retrieved from the path containing all the tests. The output name is set to the relative path of the current test from this common prefix. If the common prefix equals the name of the test, then the output name is set to the name of the test.
  • Finally, if multiple tests have the same output name, then the current name is followed by _XXX, an underscore and an integer.
• The testsuite.test.timeout property allows the user to redefine the time-out for each test. If it is negative or not an integer, then the global timeout defined for the MicroEJ Test Suite is used.
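As an illustration of the two properties above, a per-test custom properties file (using the default .properties extension) could look like the sketch below. This example is not taken from the MicroEJ documentation and the values are hypothetical; only the property names and the extension convention come from the text above.

```properties
# xxx/Test1.properties — associated with the test xxx/Test1.class
# (file name follows the default customPropertiesExtension ".properties")

# Override the output name used for this test in the report (hypothetical value):
testsuite.test.name=MyProduct.Test1

# Override the global time-out for this test only, in seconds (hypothetical value):
testsuite.test.timeout=120
```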
2021-10-16 06:03:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26926159858703613, "perplexity": 3266.692339823195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583423.96/warc/CC-MAIN-20211016043926-20211016073926-00486.warc.gz"}
https://www.biostars.org/p/57152/
T-Test In R On Microarray Data

Question (Diana, 10.4 years ago): Hello everyone, I'm trying to do a simple t-test on my microarray sample in R. My sample looks like this:

gene_id gene sample_1 value_1 sample_2 value_2
XLOC_000001 LOC425783 Renal 20.8152 Heart 14.0945
XLOC_000002 GOLGB1 Renal 10.488 Heart 8.89434

So the t-test is between sample 1 and sample 2 and my code looks like this:

ttestfun = function(x) t.test(x[4], x[6])$p.value
p.value = apply(expression_data, 1, ttestfun)

It gives me the following error:

Error in t.test.default(x[6], x[8]) : not enough 'x' observations
In addition: Warning message:
In mean.default(x) : argument is not numeric or logical: returning NA

What am I doing wrong? Please help. Many thanks. (Tags: r, microarray)

Comment: Nag your supervisor to provide some more arrays and allow you to run the experiment again. The arguments to convince him or her are possibly that:
• a nonreplicated experiment does not meet the standards of research in the field (does it in any field?)
• the data will therefore not be publishable
• the money and time invested in the first screen will therefore be wasted

Reply: +1 because I can't give +2 or more.

Answer (10.4 years ago): I think there are some misconceptions operating here from the original questioner. First and foremost, a t-test is not just a way of calculating p-values, it is a statistical test to determine whether two populations have varying means. The p-value that results from the test is a useful indicator for whether or not to support your null hypothesis (that the two populations have the same mean), but is not the purpose of the test. In order to carry out a t-test between two populations, you need to know two things about those populations: 1) the mean of the observations and 2) the variance about that mean. The single value you have for each population could be a proxy for the mean (although it is a particularly bad one – see below), but there is no way that you can know the variance from only one observation. This is why replicates are required for microarray analysis, not a nice optional extra. The reason a single observation on a single microarray is a bad proxy for the population mean is because you have no way of knowing whether the individual tested is typical for the population concerned. Assuming the expression of a given gene is normally distributed among your population (and this is an assumption that you have to make in order for the t-test to be a valid test anyway), your single individual could come from anywhere on the bell curve. Yes, it is most likely that the observation is somewhere near the mean (by definition, ~68% within 1 standard deviation, see the graph), but there is a significant chance that it could have come from either extreme. Finally, I've read what you suggest about the hypergeometric test in relation to RNA-Seq data recently, but again the use of this test is based on a flawed assumption (that the variance of a gene between the 2 populations is equivalent to the population variance). Picking a random statistical test out of the bag, just because it is able to give you a p-value in your particular circumstance, is almost universally bad practise. You need to be able to justify it in light of the assumptions you are making in order to apply the test.
BTW, your data does not look like it is in log2 scale (if it is, there's an ~32-fold difference between the renal and heart observations for the first gene above) – how have you got the data into R & normalised it?

Comment: +1 excellent explanation for beginners

Answer (10.4 years ago): It looks like you are trying to do a t-test with one value per group. That is a statistical impossibility (hence, the "not enough 'x' observations" error). Your only real option is to calculate a fold-change between the two samples by calculating a ratio.

expression_data$ratio = expression_data[,3]-expression_data[,5] # assumes log scaled data

You can choose 2-fold changed genes by:

expression_data_filtered = expression_data[abs(expression_data$ratio)>2,]

After you obtain replicates, you will want to use limma for gene expression analysis. Unmoderated t-tests are probably not the best way to go.

Comment: Thank you so much Ben and Sean. Actually I'm trying to answer which of the genes are differentially expressed between these two samples and these are the only values I have. I don't have replicate experiments. Basically I want to associate some kind of significance to the differential expression and I thought calculating p-values would do that and hence the t-test. So there's no way I can calculate p-value for each gene with this data?

Comment: Hi, Diana. Unfortunately there is no way a statistical test can be performed without replication. The only option you have to compute p-values is to repeat the experiment.

Comment: Your interpretation is correct – no p-values with the data that you have in hand.

Comment: I don't know if this is a stupid question again, but someone who's working on such data suggested to me that a hypergeometric test can be done with only these values in hand. I wanted to confirm before I embarked on a useless journey. What do you all think?

Comment: How would you apply that test?

Comment: The hypergeometric distribution is used for the analysis of overlaps of gene sets, e.g. given 2 gene sets selected by some arbitrary choice, what is the probability that 100 or more out of the 1000 genes in each set are common to both. That doesn't fit because you cannot make sensible gene sets yet.

Comment: Another point. The way you are approaching your problem is detrimental to the solution. Instead of responding by picking some random methods which you seemingly don't understand, you should:
- respond to our proposal to replicate the experiment (what did your boss say about replication?)
- try to understand how tests work

Comment: Thanks. No replicates for now. Maybe in the near future.

Answer (Ben, 10.4 years ago): You are applying the t-test to the 4th and 6th value in each row; firstly R doesn't use zero-indexing so you don't seem to have a 6th column, and secondly you are comparing two single values each time. For an (unpaired) t-test comparing expression_data$value_1 and expression_data$value_2 try:

t.test(expression_data[,3], expression_data[,5])$p.value

edit: of course it's probably more useful to keep the whole returned list than just the p-value

Comment: Thanks a lot. I want to put all pairwise p-values in one object. When I try to use a loop, it gives me the same error again.
for(i in 1:38620) {
  u = t.test(expression_data[i,3], expression_data[i,5])
}

Error in t.test.default(RNA[i, 3], RNA[i, 5]) : not enough 'x' observations

What's wrong with my loop?

Comment: Again, you're trying to perform a t-test on two values... I think you need to look at what a t-test is and think about what you're trying to find from this data. You likely just want to add paired=T to the code I gave you above. See ?t.test in R too.

Comment: I need to do a t-test for each gene and I will be using two values for comparison. My question is: how can I do the pairwise t-test for each of the two values quickly... I was thinking a loop but it's giving me an error. I don't want to do a t-test for each gene individually because I have a lot of genes.

Comment: As Ben and I point out, you cannot perform a t-test between groups with only 1 member in them. As an aside, using a for-loop like this in R is usually not the best way to go. See the "apply" function for a better approach (it can be orders of magnitude faster than a for loop).
2023-04-02 02:31:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3696608543395996, "perplexity": 2290.7514236553943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00763.warc.gz"}
https://www.scootersoftware.com/vbulletin/showthread.php?11869-how-to-compare-multiple-files&p=39716
# Thread: how to compare multiple files?

1. Visitor: hi, I dont have much exp with BC 3 and scripting, i would like to build a script to compare four files, for example:

scenario 1:
file 1: \\server1\folder1\text.txt
file 2: \\server2\folder1\text.txt

scenario 2:
file 3: \\server3\folder1\text.txt
file 4: \\server4\folder1\text.txt

ofc, I would like to have these two comparisons done at the same time.

2. Team Scooter: Hello, Would you like to generate a report comparing file1 to file2, then generate a 2nd report comparing file3 to file4? This can be done in scripting using the command line:

bcompare.exe "@c:\bcscript.txt"

Then the script file example could be:

Code:
text-report layout:side-by-side output-to:"c:\bcreport1.html" output-options:html-color "\\server1\folder1\text.txt" "\\server2\folder1\text.txt"
text-report layout:side-by-side output-to:"c:\bcreport2.html" output-options:html-color "\\server3\folder1\text.txt" "\\server4\folder1\text.txt"

Scripting actions follow the general actions you can perform in the graphical interface. Could you provide more details on the steps you are following in the interface and the reports you are generating from there? We can then help with the script to follow similar steps.

3. Visitor: would it be possible to have output in one file instead of multiple files? for example: bcreport.html. also, where exactly will the output file bcreport.html be saved?

4. Visitor: also, would it be possible to note only file differences (if any)?

5. Team Scooter: It is not possible to have a single HTML report file for multiple text comparisons unless you open a folder compare, select the multiple files you want to compare, then generate the report. If you pass in pairs of files on the command line, we do not support appending reports together.

Code:
log verbose "c:\bclog.txt"
criteria rules-based
expand all
select diff.files
text-report layout:side-by-side options:display-mismatches output-to:"c:\bcreport.html" output-options:html-color

For a plain text report, you could append them together using a batch file:

Code:
bcompare.exe "@c:\script.txt" "c:\file1" "c:\file2"
type tempReport.txt >> mainreport.txt
bcompare.exe "@c:\script.txt" "c:\file3" "c:\file4"
type tempReport.txt >> mainreport.txt

Where script.txt is:

Code:
text-report layout:side-by-side options:display-mismatches output-to:"c:\tempReport.txt" "%1" "%2"

6. Team Scooter: To show only differences, add the "options:display-mismatches" parameter to the text-report command. Detailed documentation can be found in the Help file -> Scripting Reference, or in the Help file -> Using Beyond Compare -> Automating with Script chapter.

7. Visitor: thank you, this was very useful!
2018-02-21 03:33:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21669554710388184, "perplexity": 13077.311894654757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813322.19/warc/CC-MAIN-20180221024420-20180221044420-00151.warc.gz"}
http://www.self.gutenberg.org/articles/eng/Sampling_distribution
# Sampling distribution

Article Id: WHEBN0000520670 · Title: Sampling distribution · Author: World Heritage Encyclopedia · Language: English · Collection: Statistical Theory · Publisher: World Heritage Encyclopedia

In statistics, a sampling distribution or finite-sample distribution is the probability distribution of a given statistic based on a random sample. Sampling distributions are important in statistics because they provide a major simplification en route to statistical inference. More specifically, they allow analytical considerations to be based on the sampling distribution of a statistic, rather than on the joint probability distribution of all the individual sample values.

## Contents

• Introduction
• Standard error
• Examples
• Statistical inference
• References

## Introduction

The sampling distribution of a statistic is the distribution of that statistic, considered as a random variable, when derived from a random sample of size n. It may be considered as the distribution of the statistic for all possible samples from the same population of a given size. The sampling distribution depends on the underlying distribution of the population, the statistic being considered, the sampling procedure employed, and the sample size used. There is often considerable interest in whether the sampling distribution can be approximated by an asymptotic distribution, which corresponds to the limiting case either as the number of random samples of finite size, taken from an infinite population and used to produce the distribution, tends to infinity, or when just one equally-infinite-size "sample" is taken of that same population.

For example, consider a normal population with mean μ and variance σ². Assume we repeatedly take samples of a given size from this population and calculate the arithmetic mean $\bar{x}$ for each sample – this statistic is called the sample mean. Each sample has its own average value, and the distribution of these averages is called the "sampling distribution of the sample mean". This distribution is normal, $\mathcal{N}(\mu,\, \sigma^2/n)$ (n is the sample size), since the underlying population is normal, although sampling distributions may also often be close to normal even when the population distribution is not (see central limit theorem). An alternative to the sample mean is the sample median. When calculated from the same population, it has a different sampling distribution to that of the mean and is generally not normal (but it may be close for large sample sizes).

The mean of a sample from a population having a normal distribution is an example of a simple statistic taken from one of the simplest statistical populations. For other statistics and other populations the formulas are more complicated, and often they don't exist in closed form. In such cases the sampling distributions may be approximated through Monte-Carlo simulations [1][p. 2], bootstrap methods, or asymptotic distribution theory.

## Standard error

The standard deviation of the sampling distribution of a statistic is referred to as the standard error of that quantity.
For the case where the statistic is the sample mean, and samples are uncorrelated, the standard error is:

$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$

where $\sigma$ is the standard deviation of the population distribution of that quantity and n is the sample size (number of items in the sample). An important implication of this formula is that the sample size must be quadrupled (multiplied by 4) to achieve half (1/2) the measurement error. When designing statistical studies where cost is a factor, this may have a role in understanding cost–benefit tradeoffs.

## Examples

| Population | Statistic | Sampling distribution |
| --- | --- | --- |
| Normal: $\mathcal{N}(\mu, \sigma^2)$ | Sample mean $\bar{X}$ from samples of size n | $\bar{X} \sim \mathcal{N}\Big(\mu,\, \frac{\sigma^2}{n} \Big)$ |
| Bernoulli: $\operatorname{Bernoulli}(p)$ | Sample proportion of "successful trials" $\bar{X}$ | $n \bar{X} \sim \operatorname{Binomial}(n, p)$ |
| Two independent normal populations: $\mathcal{N}(\mu_1, \sigma_1^2)$ and $\mathcal{N}(\mu_2, \sigma_2^2)$ | Difference between sample means, $\bar{X}_1 - \bar{X}_2$ | $\bar{X}_1 - \bar{X}_2 \sim \mathcal{N}\!\left(\mu_1 - \mu_2,\, \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2} \right)$ |
| Any absolutely continuous distribution F with density ƒ | Median $X_{(k)}$ from a sample of size n = 2k − 1, where the sample is ordered $X_{(1)}$ to $X_{(n)}$ | $f_{X_{(k)}}(x) = \frac{(2k-1)!}{(k-1)!^2} f(x)\Big(F(x)(1-F(x))\Big)^{k-1}$ |
| Any distribution with distribution function F | Maximum $M=\max X_k$ from a random sample of size n | $F_M(x) = P(M\le x) = \prod P(X_k\le x) = \left(F(x)\right)^n$ |

## Statistical inference

In the theory of statistical inference, the idea of a sufficient statistic provides the basis of choosing a statistic (as a function of the sample data points) in such a way that no information is lost by replacing the full probabilistic description of the sample with the sampling distribution of the selected statistic. In frequentist inference, for example in the development of a statistical hypothesis test or a confidence interval, the availability of the sampling distribution of a statistic (or an approximation to this in the form of an asymptotic distribution) can allow the ready formulation of such procedures, whereas the development of procedures starting from the joint distribution of the sample would be less straightforward. In Bayesian inference, when the sampling distribution of a statistic is available, one can consider replacing the final outcome of such procedures, specifically the conditional distributions of any unknown quantities given the sample data, by the conditional distributions of any unknown quantities given selected sample statistics. Such a procedure would involve the sampling distribution of the statistics. The results would be identical provided the statistics chosen are jointly sufficient statistics.

## References

1. Merberg, A. and S. J. Miller (2008). "The Sample Distribution of the Median". Course Notes for Math 162: Mathematical Statistics, http://web.williams.edu/Mathematics/sjmiller/public_html/BrownClasses/162/Handouts/MedianThm04.pdf, pp. 1–9.
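The standard-error formula in the section above is easy to illustrate by simulation. The following short Python sketch is an illustrative addition (not part of the article); the population parameters μ = 50, σ = 10 and the sample size n = 25 are arbitrary choices:

```python
# Monte-Carlo illustration of the standard error sigma/sqrt(n) for the sample mean.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 50.0, 10.0, 25, 100_000

# Draw many samples of size n from a normal population and record each sample mean
sample_means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

print(sample_means.std(ddof=1))   # empirical SD of the sampling distribution (~2)
print(sigma / np.sqrt(n))         # theoretical standard error: 10 / sqrt(25) = 2
```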
2020-08-10 05:32:56
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.90926194190979, "perplexity": 696.0688358784275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738609.73/warc/CC-MAIN-20200810042140-20200810072140-00013.warc.gz"}
https://kimsereylam.com/azure/2017/07/22/conemu-a-better-command-prompt-for-windows.html
# ConEmu: A Better Command Prompt For Windows

Jul 22nd, 2017 – written by Kimserey Lam.

When developing multiple Web APIs under multiple Visual Studio solutions, it can become very tedious to maintain, run and debug them. Opening multiple instances of Visual Studio is very costly in terms of memory, and running them all at once also clutters the screen, which rapidly becomes irritating. With the advent of the dotnet CLI tools, it has been clear that the next step would be to move away from the common "right click/build, F5" of Visual Studio and toward "dotnet run" on a command prompt. Last month I was looking for a Windows alternative to the bash terminal found on Mac and I found ConEmu. ConEmu provides access to all typical shells via an enhanced UI. Today we will see how we can use ConEmu to ease our development process by leveraging only 2 of its features: tasks and environment setup.

1. dotnet CLI
2. Setup environment
4. Apply to multiple services

## 1. dotnet CLI

We can start by getting ConEmu from the repository releases https://github.com/Maximus5/ConEmu/releases. From then on we can use ConEmu straight away as a command prompt. Multiple tabs are supported by default; the win + w hotkey opens a new tab. Next, we can navigate to our Web API project and run dotnet run. This will run the Web API service in the command prompt, here in ConEmu. It is also possible to restore packages with dotnet restore and build a project without running it with dotnet build. When the project is run, it is run in production mode. This is the default behaviour, since usually the production setup is the most restrictive one. In order to have the environment set to development we can set it in the current command prompt context:

set ASPNETCORE_ENVIRONMENT=Development

We would need to run this on every new command prompt window. If we want to persist it, we can set it as a global Windows variable, but this will affect the whole operating system. Lucky us, ConEmu provides a way to run repeated commands on start of a prompt, which we will see now.

## 2. Setup environment

At each prompt start, ConEmu allows us to run a set of commands. Those can be used to set environment variables or to set aliases which will exist only in the ConEmu context. In order to access the environment setup, go to settings > startup > environment and the following window will show: From here we can see that we can set variables; here I've set ASPNETCORE_ENVIRONMENT and also the base path of all my projects. And I also set an alias ns which helps me quickly serve an Angular app with the Angular CLI ng serve. ConEmuBaseDir is the base directory containing ConEmu files. As we can see, %ConEmuBaseDir%\Scripts is also set to the path. This \Scripts folder is provided by ConEmu and is already on the path, so we can place scripts in it and access them easily from our tasks. Now that we know how to set up environment variables, we will no longer need to manually set the ASPNETCORE_ENVIRONMENT variable as it will be done automatically. What we still need to do is navigate to our service and dotnet run the project manually. Lucky us, again, ConEmu has a way to automate that by creating a script and setting it to a hotkey with ConEmu tasks, which we will see next. Let's say we have a Web API located in C:\Projects\MyApi\MyApi.Web.
In order to run it, we could do the following:

title My Api
cd C:\Projects\MyApi\MyApi.Web
dotnet run

This would set the title of the prompt to My Api, then navigate to the service folder and run the project under the development environment (since it was set in 2.). What we can do now is put those 3 lines in a MyApi.cmd file which we will place under the ConEmu \Scripts folder:

\ConEmu\ConEmu\Scripts\MyApi.cmd

Since the \Scripts folder is added to PATH in each prompt, we should be able to launch it straight from anywhere:

> MyApi.cmd

This is already pretty neat as it cuts down a lot of time for quick launching, but we can go a step further by defining a task. We start by opening the task settings, settings > startup > tasks. From there we can set a task which will start a new prompt and run the MyApi.cmd script. We do that by clicking on +, naming the task Services::My Api and adding the command cmd.exe /k MyApi.cmd. The naming convention [Group]::[Task] allows grouping of tasks for easy access through the UI, which is accessible from + on the main UI page. A Hotkey can also be set with a combination of keys for even quicker access.

## 4. Apply to multiple services

All we have left to do is create a script and a task per service that we have. We can then create a global task which we can call Services::Multi containing all services:

cmd.exe /k MyApi.cmd
cmd.exe /k MyApi2.cmd
cmd.exe /k MyApi3.cmd

This task, when run, will open 3 tabs and launch one script per tab, which will result in a start of all services in one click.

# Conclusion

Today we saw how to configure ConEmu environment and tasks to allow us to start multiple services running ASP NET Core Web API in a single click. The ease of use and the support of multiple tabs make ConEmu a major contributor to reducing the amount of time wasted in the development cycle. Hope you enjoyed reading this post as much as I enjoyed writing it. If you have any questions leave them here or hit me on Twitter @Kimserey_Lam. See you next time!
2019-07-22 11:33:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18113961815834045, "perplexity": 2410.309029130337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528013.81/warc/CC-MAIN-20190722113215-20190722135215-00025.warc.gz"}
https://community.wolfram.com/groups/-/m/t/2373558
# Why does DSolve not solve the PDE giving the 'Arbitrary functions'?

Posted 1 month ago. 343 Views | 6 Replies | 0 Total Likes

Hello, I have two PDEs (strainDisp11 & strainDisp22) in 2 variables x1 and x2. strainDisp11 is a PDE with the partial differential term in x1, whereas strainDisp22 is a PDE with the partial differential term in x2. I am trying to solve these two PDEs separately using DSolve (last two command lines in the attached file); however, the solution is not generated along with the required arbitrary functions C1[1], which should be f1[x2] and f2[x1] in the respective solutions of the PDEs. Attached is the Notebook for your reference. Appreciate your help.

6 Replies

Posted 1 month ago: A tip: don't use Subscript, because it causes problems.

Posted 1 month ago: Thanks! Very much appreciated.

Posted 11 days ago: Hello, I have two PDEs in 2 variables 'r' and 'theta'. I am trying to solve these two PDEs separately using DSolve (the last two command lines in the attached file). The solution is generated as expected for the 1st PDE (integration with respect to variable 'r'); however, the solution is not generated for the 2nd PDE (integration with respect to 'theta'). I cannot understand why Mathematica does not solve all the terms and has replaced 'theta' by K[1] in the unsolved integral with limits. Attached is the Notebook for your reference. Appreciate your help.

Posted 11 days ago: Maybe:

solDispRR = DSolve[strainDispRR == 0, uR, {r, \[Theta]}] // Flatten;
solDisp\[Theta]\[Theta] = DSolve[strainDisp\[Theta]\[Theta] == 0, u\[Theta], {r, \[Theta]}] // Flatten;
uRFunctionTemp = uR[r, \[Theta]] /. solDispRR[[1]]
u\[Theta]FunctionTemp = (u\[Theta][r, \[Theta]] /. solDisp\[Theta]\[Theta][[1]] /. solDispRR[[1]]) // Activate // ExpandAll

Looks like MMA can't integrate; a workaround:

u\[Theta]FunctionTemp = (Integrate[#, {K[1], 1, \[Theta]}] & /@ (u\[Theta]FunctionTemp[[1, 1]])) + u\[Theta]FunctionTemp[[2]]

(*Integrate[-C[1][K[1]], {K[1], 1, \[Theta]}] + (2*P*\[Nu]^2*Log[r]*(Sin[1] - Sin[\[Theta]]))/(Pi*\[DoubleStruckCapitalE]) + (2*P*\[Nu]*(-Sin[1] + Sin[\[Theta]]))/(Pi*\[DoubleStruckCapitalE]) + (2*P*\[Nu]^2*(-Sin[1] + Sin[\[Theta]]))/(Pi*\[DoubleStruckCapitalE]) + (2*P*Log[r]*(-Sin[1] + Sin[\[Theta]]))/(Pi*\[DoubleStruckCapitalE]) + C[1][r]*)

In this line:

Integrate[-C[1][K[1]], {K[1], 1, \[Theta]}]

what answer do you expect?
2021-10-28 08:17:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26450031995773315, "perplexity": 7160.811352501623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588282.80/warc/CC-MAIN-20211028065732-20211028095732-00550.warc.gz"}
http://mathhelpforum.com/calculus/209791-partial-derivative-notation-question.html
# Math Help - partial derivative notation question

1. ## partial derivative notation question

what does the notation at the bottom mean? the second derivative of z over the partial of y times the partial of x. Is that right? and what does that mean procedurally?

2. ## Re: partial derivative notation question

It means to first take the partial derivative of z with respect to y, then take the partial derivative of this result with respect to x. For a function like this, whose mixed partial derivatives exist and are continuous, the order of differentiation does not matter, i.e.:

$\frac{\partial^2 z}{\partial x\,\partial y}=\frac{\partial^2 z}{\partial y\,\partial x}$
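A quick worked example, added here for illustration (not part of the original thread): take $z = x^2 y^3$ and differentiate in either order.

$$\frac{\partial z}{\partial y} = 3x^2y^2 \;\Rightarrow\; \frac{\partial^2 z}{\partial x\,\partial y} = 6xy^2, \qquad \frac{\partial z}{\partial x} = 2xy^3 \;\Rightarrow\; \frac{\partial^2 z}{\partial y\,\partial x} = 6xy^2.$$

Both orders give the same mixed partial, as the equality above says they must.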
2014-10-22 04:12:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9681448936462402, "perplexity": 317.01724842933305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507445299.20/warc/CC-MAIN-20141017005725-00146-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.dsprelated.com/showarticle/1228.php
# Compute the Frequency Response of a Multistage Decimator Figure 1a shows the block diagram of a decimation-by-8 filter, consisting of a low-pass finite impulse response (FIR) filter followed by downsampling by 8 [1].  A more efficient version is shown in Figure 1b, which uses three cascaded decimate-by-two filters.  This implementation has the advantages that only FIR 1 is sampled at the highest sample rate, and the total number of filter taps is lower. The frequency response of the single-stage decimator before downsampling is just the response of the FIR filter from f = 0 to fs/2.  After downsampling, remaining signal components above fs/16 create aliases at frequencies below fs/16.  It’s not quite so clear how to find the frequency response of the multistage filter:  after all, the output of FIR 3 has unique spectrum extending only to fs/8, and we need to find the response from 0 to fs/2.  Let’s look at an example to see how to calculate the frequency response.  Although the example uses decimation-by-2 stages, our approach applies to any integer decimation factor. Figure 1.  Decimation by 8.  (a)  Single-stage decimator.  (b)  Three-stage decimator. For this example, let the input sample rate of the decimator in Figure 1b equal 1600 Hz.  The three FIR filters then have sample rates of 1600, 800, and 400 Hz.  Each is a half-band filter [2 - 4] with passband of at least 0 to 75 Hz.  Here is Matlab code that defines the three sets of filter coefficients (See Appendix): b1= [-1 0 9 16 9 0 -1]/32; % fs = 1600 Hz b2= [23 0 -124 0 613 1023 613 0 -124 0 23]/2048; % fs/2 = 800 Hz b3= [-11 0 34 0 -81 0 173 0 -376 0 1285 2050 1285 0 -376 0 173 0 ... -81 0 34 0 -11]/4096; % fs/4 = 400 Hz The frequency responses of these filters are plotted in Figure 2.  Each response is plotted over f = 0 to half its sampling rate: FIR 1:  0 to 800 Hz FIR 2:  0 to 400 Hz FIR 3:  0 to 200 Hz Figure 2.  Frequency Responses of halfband decimation filters. Now, to find the overall response at fs = 1600 Hz, we need to know the time or frequency response of FIR 2 and FIR 3 at this sample rate.  Converting the time response is just a matter of sampling at fs instead of at  fs /2 or fs /4 – i.e., upsampling.  For example, the following Matlab code upsamples the FIR 2 coefficients by 2, from fs/2 to fs: b2_up= zeros(1,21); b2_up(1:2:21)= b2; Figure 3 shows the coefficients b2 and b2_up.  The code has inserted samples of value zero halfway between each of the original samples of b2 to create b2_up.  b2_up now has a sample rate of fs.  But although we have a new representation of the coefficients, upsampling has no effect on the math:  b2_up and b2 have the same coefficient values and the same time interval between the coefficients. For FIR 3, we need to upsample by 4 as follows: b3_up= zeros(1,89); b3_up(1:4:89)= b3; Figure 4 shows the coefficients b3 and b3_up.  Again, the upsampled version is mathematically identical to the original version.  Now we have three sets of coefficients, all sampled at fs = 1600 Hz.  A block diagram of the cascade of these coefficients is shown in Figure 5. Figure 3.  Top:  Halfband filter coefficients b2.    Bottom:  Coefficients upsampled by 2. Figure 4.  Top:  Halfband filter coefficients b3.    Bottom:  Coefficients upsampled by 4. Figure 5.  Conceptual diagram showing cascade of FIR 1 and upsampled versions of FIR 2 and FIR 3,  used for computing frequency response of decimator of Figure 1b. 
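The article's code is Matlab; as an illustrative aside (not the author's code), the same zero-stuffing and cascading can be sketched in Python with NumPy, assuming the coefficient values quoted above:

```python
import numpy as np

# Halfband coefficients quoted in the article (designed at 1600, 800, 400 Hz)
b1 = np.array([-1, 0, 9, 16, 9, 0, -1]) / 32
b2 = np.array([23, 0, -124, 0, 613, 1023, 613, 0, -124, 0, 23]) / 2048
b3 = np.array([-11, 0, 34, 0, -81, 0, 173, 0, -376, 0, 1285, 2050, 1285, 0,
               -376, 0, 173, 0, -81, 0, 34, 0, -11]) / 4096

def upsample(b, L):
    """Zero-stuff coefficients by factor L (insert L-1 zeros between taps)."""
    out = np.zeros(L * (len(b) - 1) + 1)
    out[::L] = b
    return out

b2_up = upsample(b2, 2)   # 21 taps, now referenced to fs = 1600 Hz
b3_up = upsample(b3, 4)   # 89 taps, now referenced to fs = 1600 Hz

# Overall impulse response of the cascade at fs = 1600 Hz
b123 = np.convolve(b1, np.convolve(b2_up, b3_up))
print(len(b123))          # 115 taps, matching the article
```

The overall frequency response would then follow from an FFT of `b123`, or from something like `scipy.signal.freqz(b123, 1, 256, fs=1600)`.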
Using the DFT, we can compute and plot the frequency response of each filter stage, as shown in Figure 6.  Upsampling b2 and b3 has allowed us to compute the DFT at the input sampling frequency fs for those sections.  The sampling theorem [5] tells us that the frequency response of b2, which has a sample rate of 800 Hz, has an image between 400 and 800 Hz.  Since b2_up has a sample rate of 1600 Hz, this image appears in its DFT (middle plot).  Similarly, the DFT of b3_up has images from 200 to 400; 400 to 600; and 600 to 800 Hz (bottom plot). Each decimation filter response in Figure 6 has stopband centered at one-half of its original sample frequency, shown as a red horizontal line (see Appendix).  This attenuates spectrum in that band prior to downsampling by 2. Figure 6.   Frequency responses of decimator stages, fs = 1600 Hz. Top:  FIR 1 (b1 )    Middle:  FIR 2 (b2_up)    Bottom:  FIR 3 (b3_up) Now let’s find the overall frequency response.  To do this, we could a) find the product of the three frequency responses in Figure 6, or b) compute the impulse response of the cascade of b1, b2_up, and b3_up, then use it to find H(z).  Taking the latter approach, the overall impulse response is: b123 = b1 ⊛ (b2up ⊛ b3up) where ⊛ indicates convolution.  The Matlab code is: b23= conv(b2_up,b3_up); b123= conv(b23,b1); % overall impulse response at fs= 1600 Hz The impulse response is plotted in Figure 7.  It is worth comparing the length of this response to that of the decimator stages.  The impulse response has 115 samples; that is, it would take a 115-tap FIR filter to implement the decimator as a single stage FIR sampled at 1600 Hz.  Of the 115 taps, 16 are zero.  By contrast, the length of the three decimator stages are 7, 11, and 23 taps, of which a total of 16 taps are zero.  So the multistage approach saves taps, and furthermore, only the first stage operates at 1600 Hz.  Thus, the multistage decimator uses significantly fewer resources than a single stage decimator. Calculating the frequency response from b_123: fs= 1600; % Hz decimator input sample rate [h,f]= freqz(b123,1,256,fs); H= 20*log10(abs(h)); % overall freq response magnitude The frequency response magnitude is plotted in Figure 8, with the stopband specified in the Appendix shown in red. Here is a summary of the steps to compute the decimator frequency response: 1. Upsample the coefficients of all of the decimator stages (except the first stage) so that their sample rate equals the input sample rate. 2. Convolve all the coefficients from step 1 to obtain the overall impulse response at the input sample rate. 3. Take the DFT of the overall impulse response to obtain the frequency response. Our discussion of upsampling may bring to mind the use of that process in interpolators.  As in our example, upsampling in an interpolator creates images of the signal spectrum at multiples of the original sample frequency.  The interpolation filter then attenuates those images [6]. We don’t want to forget aliasing, so we’ll take a look at that next. Figure 7.  Overall impulse response of three-stage decimator at fs = 1600 Hz (length = 115). Figure 8.  Overall frequency response of Decimator at fs= 1600 Hz. ## Taking Aliasing into Account The output sample rate of the decimator in Figure 1b is fs out  = 1600/8 = 200 Hz.  If we apply sinusoids to its input, they will be filtered by the response of Figure 8, but then any components above fs out /2 (100 Hz) will produce aliases in the band of 0 to fs out /2.  
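Not from the article, but a small helper (plain Python, no libraries) captures the folding rule used in the example that follows:

```python
def alias_freq(f, fs_out):
    """Fold an input frequency f (Hz) into the band 0..fs_out/2 after decimation to fs_out."""
    f = f % fs_out
    return min(f, fs_out - f)

fs_out = 200.0                    # 1600/8 Hz
print(alias_freq(290, fs_out))    # 90.0 Hz
print(alias_freq(708, fs_out))    # 92.0 Hz
```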
Let’s apply equal level sinusoids at 75, 290, and 708 Hz, as shown in Figure 9.  The response in the bottom of Figure 9 shows the expected attenuation at 290 Hz is about 52 dB and at 708 Hz is about 53 dB (red dots).  For reference, the component at 75 Hz has 0 dB attenuation.  After decimation, the components at 290 and 708 Hz alias as follows:

f1 = 290 – fs out = 290 – 200 = 90 Hz

f2 = 4*fs out – 708 = 800 – 708 = 92 Hz

So, after decimation, we expect a component at 90 Hz that is about 52 dB below the component at 75 Hz, and a component at 92 Hz that is about 53 dB down.  This is in fact what we get when we go through the filtering and downsampling operations:  see Figure 10.

Note that the sines at 290 and 708 Hz are not within the stopbands as defined in the Appendix for FIR 1 and FIR 2.  For that reason, the aliased components are greater than the specified stopband of -57 dB.  This is not necessarily a problem, however, because they fall outside the passband of 75 Hz.  They can be further attenuated by a subsequent channel filter.

Figure 9.  Top:  Multiple sinusoidal input to decimator at 75, 290, and 708 Hz.  Bottom:  Decimator overall frequency response.  Note fs out = fs/8.

Figure 10.  Decimator output spectrum for input of Figure 9.  fs out = fs/8 = 200 Hz.

## Appendix:  Decimation Filter Synthesis

The halfband decimators were designed by the window method [3] using the Matlab function fir1.  We obtain halfband coefficients by setting the cutoff frequency to one-quarter of the sample rate.  The order of each filter was chosen to meet the passband and stopband requirements shown in the table.  Frequency responses are plotted in Figure 2 of the main text.  We could have made the stopband attenuation of FIR 3 equal to that of the other filters, at the expense of more taps.

Common parameters:  Passband: > -0.1 dB at 75 Hz.  Window function: Chebyshev, -47 dB.

Section   Sample rate      Stopband edge          Stopband atten   Order
FIR 1     fs = 1600 Hz     fs/2 – 75 = 725 Hz     57 dB            6
FIR 2     fs/2 = 800 Hz    fs/4 – 75 = 325 Hz     57 dB            10
FIR 3     fs/4 = 400 Hz    fs/8 – 75 = 125 Hz     43 dB            22

Note that the filters as synthesized by fir1 have zero-valued coefficients on each end, so the actual filter order is two less than that in the function call.  Using N = 6 and 10 in fir1 (instead of 8 and 12) would eliminate these superfluous zero coefficients, but would result in somewhat different responses.

% dec_fil1.m  1/31/19  Neil Robertson
% synthesize halfband decimators using window method
% fc = (fs/4)/fnyq = (fs/4)/(fs/2) = 1/2
% resulting coeffs have zeros on each end, so actual filter order is N-2.
%
fc= 1/2;                    % -6 dB freq divided by nyquist freq
%
% b1: halfband decimator from fs= 1600 Hz to 800 Hz
N= 8;
win= chebwin(N+1,47);       % chebyshev window function, -47 dB
b= fir1(N,fc,win);          % filter synthesis by window method
b1= round(b*32)/32;         % fixed-point coefficients
%
% b2: halfband decimator from fs= 800 Hz to 400 Hz
N= 12;
win= chebwin(N+1,47);
b= fir1(N,fc,win);
b2= round(b*2048)/2048;
%
% b3: halfband decimator from fs= 400 Hz to 200 Hz
N= 24;
win= chebwin(N+1,47);
b= fir1(N,fc,win);
b3= round(b*4096)/4096;

## References

1. Lyons, Richard G., Understanding Digital Signal Processing, 2nd Ed., Prentice Hall, 2004, section 10.1.
2. Mitra, Sanjit K., Digital Signal Processing, 2nd Ed., McGraw-Hill, 2001, p. 701-702.
3. Robertson, Neil, "Simplest Calculation of Halfband Filter Coefficients", DSP Related website, Nov 2017, https://www.dsprelated.com/showarticle/1113.php
4. Lyons, Rick, "Optimizing the Half-band Filters in Multistage Decimation and Interpolation", DSP Related website, Jan 2016, https://www.dsprelated.com/showarticle/903.php
5. Oppenheim, Alan V. and Schafer, Ronald W., Discrete-Time Signal Processing, Prentice Hall, 1989, Section 3.2.
6. Lyons, Richard G., Understanding Digital Signal Processing, 2nd Ed., Prentice Hall, 2004, section 10.2.

Neil Robertson       February 2019

Comment, February 11, 2019: Hi Neil. This is a great blog. Your Figure 10 shows a very important principle that we sometimes forget. That principle is: After decimation by 8, *ALL* of the spectral energy that exists in the freq range of 0 -to- 800 Hz in the filter's output in Figure 8 is folded down and shows up in the decimated-by-8 signal's spectrum that you show in your Figure 10. Good job!

Comment, February 12, 2019: Thanks Rick, I appreciate the encouragement!
2021-04-23 10:43:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7315039038658142, "perplexity": 3455.8979256515695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039617701.99/warc/CC-MAIN-20210423101141-20210423131141-00498.warc.gz"}
https://economics.stackexchange.com/questions/14047/why-are-vacancy-rate-and-unemployment-rate-negatively-correlated
# Why are vacancy rate and unemployment rate negatively correlated?

Why is this the case? The vacancy rate is defined as follows; let $A,Q,U$ denote the number of vacancies in the economy, the labor force, and the number of unemployed, respectively. $$\frac{A}{A+Q-U}$$ Here we can see that if the number of unemployed increases, the vacancy rate would go up. Why is there a negative correlation then? Take the Beveridge curve as an example: https://en.wikipedia.org/wiki/Beveridge_curve

• I have no idea what you are asking here. Maybe rephrase the question. – Jamzy Nov 1 '16 at 22:05
• Are you asking 'why is unemployment lower when job vacancies are higher?'. Unemployed people are people who are looking for work. When you increase the thing that they are looking for (work), there will be less of them. – Jamzy Nov 1 '16 at 22:08

Adopting your notation, the vacancy rate at any given time is defined as $A/Q$. There is no mechanical relationship between the unemployment rate $U/Q$ and the vacancy rate $A/Q$.
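A quick numeric check of the two definitions (added here for illustration; the numbers are made up):

```python
# Hypothetical economy: 100 vacancies, labor force of 1000, 50 unemployed
A, Q, U = 100, 1000, 50

rate_question = A / (A + Q - U)   # asker's formula: vacancies / (vacancies + employed)
rate_answer   = A / Q             # vacancy rate as defined in the answer

print(round(rate_question, 4))    # 0.0952 -- rises mechanically if U rises with A, Q fixed
print(round(rate_answer, 4))      # 0.1    -- independent of U by construction
```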
2019-10-20 09:38:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7795020341873169, "perplexity": 2063.607101534271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986705411.60/warc/CC-MAIN-20191020081806-20191020105306-00218.warc.gz"}
https://www.computer.org/csdl/trans/td/2008/08/ttd2008081099-abs.html
Issue No. 08 - August (2008 vol. 19) ISSN: 1045-9219 pp: 1099-1110 ABSTRACT Peer-to-peer (P2P) networks often demand scalability, low communication latency among nodes, and low system-wide overhead. For scalability, a node maintains partial states of a P2P network and connects to a few nodes. For fast communication, a P2P network intends to reduce the communication latency between any two nodes as much as possible. With regard to a low system-wide overhead, a P2P network minimizes its traffic in maintaining its performance efficiency and functional correctness. In this paper, we present a novel tree-based P2P network with low communication delay and low system-wide overhead. The merits of our tree-based network include: $(i)$ a tree-shaped P2P network which guarantees that the degree of a node is constant in probability regardless of the system size. The network diameter in our tree-based network increases logarithmically with an increase of the system size. Specially, given a physical network with a power-law latency expansion property, we show that the diameter of our tree network is constant. $(ii)$ Our proposal has the provable performance guarantees. We evaluate our proposal by rigorous performance analysis, and validate by extensive simulations. INDEX TERMS Distributed networks, Distributed Systems, Multicast CITATION H. Hsiao and C. He, "A Tree-Based Peer-to-Peer Network with Quality Guarantees," in IEEE Transactions on Parallel & Distributed Systems, vol. 19, no. , pp. 1099-1110, 2007. doi:10.1109/TPDS.2007.70798
2018-05-28 10:18:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7336714267730713, "perplexity": 1543.3951099112298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794872766.96/warc/CC-MAIN-20180528091637-20180528111637-00313.warc.gz"}
https://mathhothouse.me/category/pre-rmo-2/
## Category Archives: Pre-RMO ### Rules for Inequalities If a, b and c are real numbers, then 1. $a < b \Longrightarrow a + c< b + c$ 2. $a < b \Longrightarrow a - c < b - c$ 3. $a < b \hspace{0.1in} and \hspace{0.1in}c > 0 \Longrightarrow ac < bc$ 4. $a < b \hspace{0.1in} and \hspace{0.1in}c < 0 \Longrightarrow bc < ac$ special case: $a < b \Longrightarrow -b < -a$ 5. $a > 0 \Longrightarrow \frac{1}{a} > 0$ 6. If a and b are both positive or both negative, then $a < b \Longrightarrow \frac{1}{b} < \frac{1}{a}$. Remarks: Notice the rules for multiplying an inequality by a number: Multiplying by a positive number preserves the inequality; multiplying by a negative number reverses the inequality. Also, reciprocation reverses the inequality for numbers of the same sign. Regards, Nalin Pithwa. ### Set Theory, Relations, Functions Preliminaries: II Relations: Concept of Order: Let us say that we create a “table” of two columns in which the first column is the name of the father, and the second column is name of the child. So, it can have entries like (Yogesh, Meera), (Yogesh, Gopal), (Kishor, Nalin), (Kishor, Yogesh), (Kishor, Darshna) etc. It is quite obvious that “first” is the “father”, then “second” is the child. We see that there is a “natural concept of order” in human “relations”. There is one more, slightly crazy, example of “importance of order” in real-life. It is presented below (and some times also appears in basic computer science text as rise and shine algorithm) —- Rise and Shine algorithm: When we get up from sleep in the morning, we brush our teeth, finish our morning ablutions; next, we remove our pyjamas and shirt and then (secondly) enter the shower; there is a natural order here; first we cannot enter the shower, and secondly we do not remove the pyjamas and shirt after entering the shower. 🙂 Ordered Pair: Definition and explanation: A pair $(a,b)$ of numbers, such that the order, in which the numbers appear is important, is called an ordered pair. In general, ordered pairs (a,b) and (b,a) are different. In ordered pair (a,b), ‘a’ is called first component and ‘b’ is called second component. Two ordered pairs (a,b) and (c,d) are equal, if and only if $a=c$ and $b=d$. Also, $(a,b)=(b,a)$ if and only if $a=b$. Example 1: Find x and y when $(x+3,2)=(4,y-3)$. Solution 1: Equating the first components and then equating the second components, we have: $x+3=4$ and $2=y-3$ $x=1$ and $y=5$ Cartesian products of two sets: Let A and B be two non-empty sets then the cartesian product of A and B is denoted by A x B (read it as “A cross B”),and is defined as the set of all ordered pairs (a,b) such that $a \in A$, $b \in B$. Thus, $A \times B = \{ (a,b): a \in A, b \in B\}$ e.g., if $A = \{ 1,2\}$ and $B = \{ a,b,c\}$, tnen $A \times B = \{ (1,a),(1,b),(1,c),(2,a),(2,b),(2,c)\}$. If $A = \phi$ or $B=\phi$, we define $A \times B = \phi$. Number of elements of a cartesian product: By the following basic counting principle: If a task A can be done in m ways, and a task B can be done in n ways, then the tasks A (first) and task B (later) can be done in mn ways. So, the cardinality of A x B is given by: $n(A \times B)= n(A) \times n(B)$. 
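For instance, a quick Python check of this counting rule (added here for illustration; Python's standard itertools module is assumed), using the sets $A=\{1,2\}$ and $B=\{a,b,c\}$ from the example above:

```python
from itertools import product

A = {1, 2}
B = {'a', 'b', 'c'}

AxB = list(product(A, B))            # all ordered pairs (a, b) with a in A, b in B
print(AxB)
print(len(AxB) == len(A) * len(B))   # True: n(A x B) = n(A) * n(B)
```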
So, in general if a cartesian product of p finite sets, viz, $A_{1}, A_{2}, A_{3}, \ldots, A_{p}$ is given by $n(A_{1} \times A_{2} \times A_{3} \ldots A_{p}) = n(A_{1}) \times n(A_{2}) \times \ldots \times n(A_{p})$ Definitions of relations, arrow diagrams (or pictorial representation), domain, co-domain, and range of a relation: Consider the following statements: i) Sunil is a friend of Anil. ii) 8 is greater than 4. iii) 5 is a square root of 25. Here, we can say that Sunil is related to Anil by the relation ‘is a friend of’; 8 and 4 are related by the relation ‘is greater than’; similarly, in the third statement, the relation is ‘is a square root of’. The word relation implies an association of two objects according to some property which they possess. Now, let us some mathematical aspects of relation; Definition: A and B are two non-empty sets then any subset of $A \times B$ is called relation from A to B, and is denoted by capital letters P, Q and R. If R is a relation and $(x,y) \in R$ then it is denoted by $xRy$. y is called image of x under R and x is called pre-image of y under R. Let $A=\{ 1,2,3,4,5\}$ and $B=\{ 1,4,5\}$. Let R be a relation such that $(x,y) \in R$ implies $x < y$. We list the elements of R. Solution: Here $A = \{ 1,2,3,4,5\}$ and $B=\{ 1,4,5\}$ so that $R = \{ (1,4),(1,5),(2,4),(2,5),(3,4),(3,5),(4,5)\}$ Note this is the relation R from A to B, that is, it is a subset of A x B. Check: Is a relation $R^{'}$ from B to A defined by x<y, with $x \in B$ and $y \in A$ — is this relation $R^{'}$ *same* as R from A to B? Ans: Let us list all the elements of R^{‘} explicitly: $R^{'} = \{ (1,2),(1,3),(1,4),(1,5),(4,5)\}$. Well, we can surely compare the two sets R and $R^{'}$ — the elements “look” different certainly. Even if they “look” same in terms of numbers, the two sets $R$ and $R^{'}$ are fundamentally different because they have different domains and co-domains. Definition : Domain of a relation R: The set of all the first components of the ordered pairs in a relation R is called the domain of relation R. That is, if $R \subseteq A \times B$, then domain (R) is $\{ a: (a,b) \in R\}$. Definition: Range: The set of all second components of all ordered pairs in a relation R is called the range of the relation. That is, if $R \subseteq A \times B$, then range (R) = $\{ b: (a,b) \in R\}$. Definition: Codomain: If R is a relation from A to B, then set B is called co-domain of the relation R. Note: Range is a subset of co-domain. Type of Relations: One-one relation: A relation R from a set A to B is said to be one-one if every element of A has at most one image in B and distinct elements in A have distinct images in B. For example, let $A = \{ 1,2,3,4\}$, and let $B=\{ 2,3,4,5,6,7\}$ and let $R_{1}= \{ (1,3),(2,4),(3,5)\}$ Then $R_{1}$ is a one-one relation. Here, domain of $R_{1}= \{ 1,2,3\}$ and range of $R_{1}$ is $\{ 3,4,5\}$. Many-one relation: A relation R from A to B is called a many-one relation if two or more than two elements in the domain A are associated with a single (unique) element in co-domain B. For example, let $R_{2}=\{ (1,4),(3,7),(4,4)\}$. Then, $R_{2}$ is many-one relation from A to B. (please draw arrow diagram). Note also that domain of $R_{1}=\{ 1,3,4\}$ and range of $R_{1}=\{ 4,7\}$. Into Relation: A relation R from A to B is said to be into relation if there exists at least one element in B, which has no pre-image in A. Let $A=\{ -2,-1,0,1,2,3\}$ and $B=\{ 0,1,2,3,4\}$. Consider the relation $R_{1}=\{ (-2,4),(-1,1),(0,0),(1,1),(2,4) \}$. 
So, clearly range is $\{ 0,1,4\}$ and $range \subseteq B$. Thus, $R_{3}$ is a relation from A INTO B. Onto Relation: A relation R from A to B is said to be ONTO relation if every element of B is the image of some element of A. For example: let set $A= \{ -3,-2,-1,1,3,4\}$ and set $B= \{ 1,4,9\}$. Let $R_{4}=\{ (-3,9),(-2,4), (-1,1), (1,1),(3,9)\}$. So, clearly range of $R_{4}= \{ 1,4,9\}$. Range of $R_{4}$ is co-domain of B. Thus, $R_{4}$ is a relation from A ONTO B. Binary Relation on a set A: Let A be a non-empty set then every subset of $A \times A$ is a binary relation on set A. Illustrative Examples: E.g.1: Let $A = \{ 1,2,3\}$ and let $A \times A = \{ (1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3)\}$. Now, if we have a set $R = \{ (1,2),(2,2),(3,1),(3,2)\}$ then we observe that $R \subseteq A \times A$, and hence, R is a binary relation on A. E.g.2: Let N be the set of natural numbers and $R = \{ (a,b) : a, b \in N and 2a+b=10\}$. Since $R \subseteq N \times N$, R is a binary relation on N. Clearly, $R = \{ (1,8),(2,6),(3,4),(4,2)\}$. Also, for the sake of completeness, we state here the following: Domain of R is $\{ 1,2,3,4\}$ and Range of R is $\{ 2,4,6,8\}$, codomain of R is N. Note: (i) Since the null set is considered to be a subset of any set X, so also here, $\phi \subset A \times A$, and hence, $\phi$ is a relation on any set A, and is called the empty or void relation on A. (ii) Since $A \times A \subset A \times A$, we say that $A \subset A$ is a relation on A called the universal relation on A. Note: Let the cardinality of a (finite) set A be $n(A)=p$ and that of another set B be $n(B)=q$, then the cardinality of the cartesian product $n(A \times B)=pq$. So, the number of possible subsets of $A \times B$ is $2^{pq}$ which includes the empty set. Types of relations: Let A be a non-empty set. Then, a relation R on A is said to be: (i) Reflexive: if $(a,a) \in R$ for all $a \in A$, that is, aRa for all $a \in A$. (ii) Symmetric: If $(a,b) \in R \Longrightarrow (b,a) \in R$ for all $a,b \in R$ (iii) Transitive: If $(a,b) \in R$, and $(b,c) \in R$, then so also $(a,c) \in R$. Equivalence Relation: A (binary) relation on a set A is said to be an equivalence relation if it is reflexive, symmetric and transitive. An equivalence appears in many many areas of math. An equivalence measures “equality up to a property”. For example, in number theory, a congruence modulo is an equivalence relation; in Euclidean geometry, congruence and similarity are equivalence relations. Also, we mention (without proof) that an equivalence relation on a set partitions the set in to mutually disjoint exhaustive subsets. Illustrative examples continued: E.g. Let R be an equivalence relation on $\mathbb{Q}$ defined by $R = \{ (a,b): a, b \in \mathbb{Q}, (a-b) \in \mathbb{Z}\}$. Prove that R is an equivalence relation. Proof: Given that $R = \{ (a,b) : a, b \in \mathbb{Q}, (a-b) \in \mathbb{Z}\}$. (i) Let $a \in \mathbb{Q}$ then $a-a=0 \in \mathbb{Z}$, hence, $(a,a) \in R$, so relation R is reflexive. (ii) Now, note that $(a,b) \in R \Longrightarrow (a-b) \in \mathbb{Z}$, that is, $(a-b)$ is an integer $\Longrightarrow -(b-a) \in \mathbb{Z} \Longrightarrow (b-a) \in \mathbb{Z} \Longrightarrow (b,a) \in R$. That is, we have proved $(a,b) \in R \Longrightarrow (b,a) \in R$ and so relation R is symmetric also. 
(iii) Now, let $(a,b) \in R$, and $(b,c) \in R$, which in turn implies that $(a-b) \in \mathbb{Z}$ and $(b-c) \in \mathbb{Z}$ so it $\Longrightarrow (a-b)+(b-c)=a-c \in \mathbb{Z}$ (as integers are closed under addition) which in turn $\Longrightarrow (a,c) \in R$. Thus, $(a,b) \in R$ and $(b,c) \in R$ implies $(a,c) \in R$ also, Hence, given relation R is transitive also. Hence, R is also an equivalence relation on $\mathbb{Q}$. Illustrative examples continued: E.g.: If $(x+1,y-2) = (3,4)$, find the values of x and y. Solution: By definition of an ordered pair, corresponding components are equal. Hence, we get the following two equations: $x+1=3$ and $y-2=4$ so the solution is $x=2,y=6$. E.g.: If $A = (1,2)$, list the set $A \times A$. Solution: $A \times A = \{ (1,1),(1,2),(2,1),(2,2)\}$ E.g.: If $A = \{1,3,5 \}$ and $B=\{ 2,3\}$, find $A \times B$, and $B \times A$, check if cartesian product is a commutative operation, that is, check if $A \times B = B \times A$. Solution: $A \times B = \{ (1,2),(1,3),(3,2),(3,3),(5,2),(5,3)\}$ whereas $B \times A = \{ (2,1),(2,3),(2,5),(3,1),(3,3),(3,5)\}$ so since $A \times B \neq B \times A$ so cartesian product is not a commutative set operation. E.g.: If two sets A and B are such that their cartesian product is $A \times B = \{ (3,2),(3,4),(5,2),(5,4)\}$, find the sets A and B. Solution: Using the definition of cartesian product of two sets, we know that set A contains as elements all the first components and set B contains as elements all the second components. So, we get $A = \{ 3,5\}$ and $B = \{ 2,4\}$. E.g.: A and B are two sets given in such a way that $A \times B$ contains 6 elements. If three elements of $A \times B$ are $(1,3),(2,5),(3,3)$, find its remaining elements. Solution: We can first observe that $6 = 3 \times 2 = 2 \times 3$ so that A can contain 2 or 3 elements; B can contain 3 or 2 elements. Using definition of cartesian product of two sets, we get that $A= \{ 1,2,3\}$ and $\{ 3,5\}$ and so we have found the sets A and B completely. E.g.: Express the set $\{ (x,y) : x^{2}+y^{2}=25, x, y \in \mathbb{W}\}$ as a set of ordered pairs. Solution: We have $x^{2}+y^{2}=25$ and so $x=0, y=5 \Longrightarrow x^{2}+y^{2}=0+25=25$ $x=3, y=4 \Longrightarrow x^{2}+y^{2}=9+16=25$ $x=4, y=3 \Longrightarrow x^{2}+y^{2}=16+9=25$ $x=5, y=0 \Longrightarrow x^{2}+y^{2}=25+0=25$ Hence, the given set is $\{ (0,5),(3,4),(4,3),(5,0)\}$ E.g.: Let $A = \{ 1,2,3\}$ and $B = \{ 2,4,6\}$. Show that $R = \{ (1,2),(1,4),(3,2),(3,4)\}$ is a relation from A to B. Find the domain, co-domain and range. Solution: Here, $A \times B = \{ (1,2),(1,4),(1,6),(2,2),(2,4),(2,6),(3,2),(3,4),(3,6)\}$. Clearly, $R \subseteq A \times B$. So R is a relation from A to B. The domain of R is the set of first components of R (which belong to set A, by definition of cartesian product and ordered pair)  and the codomain is set B. So, Domain (R) = $\{ 1,3\}$ and co-domain of R is set B itself; and Range of R is $\{ 2,4\}$. E.g.: Let $A = \{ 1,2,3,4,5\}$ and $B = \{ 1,4,5\}$. Let R be a relation from A to B such that $(x,y) \in R$ if $x. List all the elements of R. Find the domain, codomain and range of R. (as homework quiz, draw its arrow diagram); Solution: Let $A = \{ 1,2,3,4,5\}$ and $B = \{ 1,4,5\}$. So, we get R as $(1,4),(1,5),(2,4),(2,5),(3,4),(3,5),(4,5)$. $domain(R) = \{ 1,2,3,4\}$, $codomain(R) = B$, and $range(R) = \{ 4,5\}$. E.g. Let $A = \{ 1,2,3,4,5,6\}$. Define a binary relation on A such that $R = \{ (x,y) : y=x+1\}$. Find the domain, codomain and range of R. 
Solution: By definition, $R \subseteq A \times A$. Here, we get $R = \{ (1,2),(2,3),(3,4),(4,5),(5,6)\}$. So we get $domain (R) = \{ 1,2,3,4,5\}$, $codomain(R) =A$, $range(R) = \{ 2,3,4,5,6\}$ Tutorial problems: 1. If $(x-1,y+4)=(1,2)$, find the values of x and y. 2. If $(x + \frac{1}{3}, \frac{y}{2}-1)=(\frac{1}{2} , \frac{3}{2} )$ 3. If $A=\{ a,b,c\}$ and $B = \{ x,y\}$. Find out the following: $A \times A$, $B \times B$, $A \times B$ and $B \times A$. 4. If $P = \{ 1,2,3\}$ and $Q = \{ 4\}$, find the sets $P \times P$, $Q \times Q$, $P \times Q$, and $Q \times P$. 5. Let $A=\{ 1,2,3,4\}$ and $\{ 4,5,6\}$ and $C = \{ 5,6\}$. Find $A \times (B \bigcap C)$, $A \times (B \bigcup C)$, $(A \times B) \bigcap (A \times C)$, $A \times (B \bigcup C)$, and $(A \times B) \bigcup (A \times C)$. 6. Express $\{ (x,y) : x^{2}+y^{2}=100 , x, y \in \mathbf{W}\}$ as a set of ordered pairs. 7. Write the domain and range of the following relations: (i) $\{ (a,b): a \in \mathbf{N}, a < 6, b=4\}$ (ii) $\{ (a,b): a,b \in \mathbf{N}, a+b=12\}$ (iii) $\{ (2,4),(2,5),(2,6),(2,7)\}$ 8. Let $A=\{ 6,8\}$ and $B=\{ 1,3,5\}$. Let $R = \{ (a,b): a \in A, b \in B, a+b \hspace{0.1in} is \hspace{0.1in} an \hspace{0.1in} even \hspace{0.1in} number\}$. Show that R is an empty relation from A to B. 9. Write the following relations in the Roster form and hence, find the domain and range: (i) $R_{1}= \{ (a,a^{2}) : a \hspace{0.1in} is \hspace{0.1in} prime \hspace{0.1in} less \hspace{0.1in} than \hspace{0.1in} 15\}$ (ii) $R_{2} = \{ (a, \frac{1}{a}) : 0 < a \leq 5, a \in N\}$ 10. Write the following relations as sets of ordered pairs: (i) $\{ (x,y) : y=3x, x \in \{1,2,3 \}, y \in \{ 3,6,9,12\}\}$ (ii) $\{ (x,y) : y>x+1, x=1,2, y=2,4,6\}$ (iii) $\{ (x,y) : x+y =3, x, y \in \{ 0,1,2,3\}\}$ More later, Nalin Pithwa ### Set Theory, Relations, Functions Preliminaries: I In these days of conflict between ancient and modern studies there must surely be something to be said of a study which did not begin with Pythagoras and will not end with Einstein. — G H Hardy (On Set Theory) In every day life, we generally talk about group or collection of objects. Surely, you must have used the words such as team, bouquet, bunch, flock, family for collection of different objects. It is very important to determine whether a given object belongs to a given collection or not. Consider the following conditions: i) Successful persons in your city. ii) Happy people in your town. iii) Clever students in your class. iv) Days in a week. v) First five natural numbers. Perhaps, you have already studied in earlier grade(s) —- can you state which of the above mentioned collections are sets? Why? Check whether your answers are as follows: First three collections are not examples of sets but last two collections represent sets. This is because in first three collections, we are not sure of the objects. The terms ‘successful persons’, ‘happy people’, ‘clever students’ are all relative terms. Here, the objects are not well-defined. In the last two collections, we can determine the objects clearly (meaning, uniquely, or without ambiguity). Thus, we can say that the objects are well-defined. So what can be the definition of a set ? Here it goes: A collection of well-defined objects is called a set. (If we continue to “think deep” about this definition, we are led to the famous paradox, which Bertrand Russell had discovered: Let C be a collection of all sets such which are not elements of themselves. 
If C is allowed to be a set, a contradiction arises when one inquires whether or not C is an element of itself. Now plainly, there is something suspicious about the idea of a set being an element of itself, and we shall take this as evidence that the qualification “well-defined” needs to be taken seriously. Bertrand Russell re-stated this famous paradox in a very interesting way: In the town of Seville lives a barber who shaves everyone who does not shave himself. Does the barber shave himself?…)

The objects in a set are called elements or members of that set. We denote sets by capital letters: A, B, C, etc. The elements of a set are represented by small letters: a, b, c, d, e, f, etc. If x is an element of a set A, we write $x \in A$, and we read it as “x belongs to A.” If x is not an element of a set A, we write $x \not\in A$, and read it as ‘x does not belong to A.’ e.g., 0 is a “whole” number but not a “natural” number. Hence, $0 \in W$, where W is the set of whole numbers, and $0 \not\in N$, where N is the set of natural numbers.

There are two methods of representing a set: (a) Roster or Tabular Method or List Method (b) Set-Builder or Rule Method.

a) Roster or Tabular or List Method: Let A be the set of all prime numbers less than 20. Can you enumerate all the elements of the set A? Are they as follows? $A=\{ 2,3,5,7,11,13,17,19\}$ Can you describe the roster method? We can describe it as follows: In the Roster method, we list all the elements of the set within braces $\{, \}$ and separate the elements by commas.

In the following examples, state the sets using the Roster method: i) B is the set of all days in a week ii) C is the set of all consonants in the English alphabet. iii) D is the set of the first ten natural numbers.

2) Set-Builder Method: Let P be the set of the first five multiples of 10. Using the Roster Method, you must have written the set as follows: $P = \{ 10, 20, 30, 40, 50\}$

Question: What is the common property possessed by all the elements of the set P? Answer: All the elements are multiples of 10.

Question: How many such elements are in the set? Answer: There are 5 elements in the set.

Thus, the set P can be described using this common property. In such a case, we say that the set-builder method is used to describe the set. So, to summarize: In the set-builder method, we describe the elements of the set by specifying the property which determines the elements of the set uniquely. Thus, we can write: $P = \{ x: x =10n, n \in N, n \leq 5\}$

In the following examples, state the sets using the set-builder method: i) Y is the set of all months of a year ii) M is the set of all natural numbers iii) B is the set of perfect squares of natural numbers.

Also, if elements of a set are repeated, they are written once only; while listing the elements of a set, the order in which the elements are listed is immaterial. (But this situation changes when we consider sets from the view-point of permutations and combinations. Just be alert in set-theoretic questions.)

Subset: A set A is said to be a subset of a set B if each element of set A is an element of set B. Symbolically, $A \subseteq B$.

Superset: If $A \subseteq B$, then B is called the superset of set A. Symbolically: $B \supseteq A$.

Proper Subset: A non-empty set A is said to be a proper subset of the set B if and only if all elements of set A are in set B, and at least one element of B is not in A. That is, if $A \subseteq B$, but $A \neq B$, then A is called a proper subset of B and we write $A \subset B$.
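A small Python illustration of membership and subset relations (added here for illustration; the finite sets below are stand-ins for the infinite sets W and N):

```python
W = {0, 1, 2, 3, 4, 5}        # a few whole numbers (stand-in for W)
N = {1, 2, 3, 4, 5}           # the corresponding natural numbers

print(0 in W, 0 in N)         # True False: 0 belongs to W but not to N
print(N <= W)                 # True: N is a subset of W
print(N < W)                  # True: N is a proper subset of W (W contains 0, which N lacks)
```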
Note: the notations of subset and proper subset differ from author to author, text to text, or mathematician to mathematician. These notations are not universal conventions in math.

Intervals:

1. Open Interval: given $a < b$, $a, b \in R$, the set $\{ x: a < x < b\}$, written $(a,b)$, is an open interval in $\Re^{1}$.
2. Closed Interval: given $a \leq b$, $a, b \in R$, the set $\{ x: a \leq x \leq b\}$ is written $[a,b]$.
3. Half-open, half-closed: $\{ x: a < x \leq b\}$, written $(a,b]$, or $\{ x: a \leq x < b\}$, written $[a,b)$.
4. The set of all real numbers greater than or equal to a: $x \geq a$, written $[a, \infty)$.
5. The set of all real numbers less than or equal to a: $x \leq a$, written $(-\infty, a]$.

Types of Sets:

1. Empty Set: A set containing no element is called the empty set or the null set (or void set) and is denoted by the symbol $\phi$ or $\{ \}$. e.g., $A= \{ x: x \in N, 1 < x < 2\}$.

2. Singleton Set: A set containing only one element is called a singleton set. Example: (i) Let A be the set of all integers which are neither positive nor negative. Then, $A = \{ 0\}$. (ii) Let B be the set of capitals of India. Then $B= \{ Delhi\}$. We will define the following sets later (after we give a working definition of a function): finite set, countable set, infinite set, uncountable set.

3. Equal sets: Two sets are said to be equal if they contain the same elements, that is, if $A \subseteq B$ and $B \subseteq A$. For example: Let X be the set of letters in the word ‘ABBA’ and Y be the set of letters in the word ‘BABA’. Then, $X= \{ A,B\}$ and $Y= \{ B,A\}$. Thus, the sets X and Y are equal sets and we denote this by $X=Y$.

How to prove that two sets are equal? Let us say we are given the task to prove that $A=B$, where A and B are non-empty sets. The following are the steps of the proof: (i) TPT: $A \subseteq B$, that is, choose any arbitrary element $x \in A$ and show that $x \in B$ also holds true. (ii) TPT: $B \subseteq A$, that is, choose any arbitrary element $y \in B$, and show that $y \in A$ also holds true. (Note: after we learn types of functions, we will see that a fundamental way to prove two (finite) sets are equal is to show/find a bijection between the two sets.)

PS: Note that two sets are equal if and only if they contain the same number of elements, and the same elements (irrespective of the order of elements; once again, the order condition changes for permutation questions; just be alert what type of set-theoretic question you are dealing with and whether order is important in that set. At least, for our introduction here, the order of elements of a set is not important).

PS: Digress: How to prove that, in general, $x=y$? The standard way is similar to the above approach: (i) TPT: $x \leq y$ (ii) TPT: $y \leq x$. Both (i) and (ii) together imply that $x=y$.

4. Equivalent sets: Two finite sets A and B are said to be equivalent if $n(A)=n(B)$. Equal sets are always equivalent but equivalent sets need not be equal. For example, let $A= \{ 1,2,3 \}$ and $B = \{ 4,5,6\}$. Then, $n(A) = n(B)$, so A and B are equivalent. Clearly, $A \neq B$. Thus, A and B are equivalent but not equal.

5. Universal Set: If in a particular discussion all sets under consideration are subsets of a set, say U, then U is called the universal set for that discussion. You know that the set of natural numbers and the set of integers are subsets of the set of real numbers R. Thus, for this discussion R is a universal set. In general, a universal set is denoted by U or X.

6. Venn Diagram: The pictorial representation of a set is called a Venn diagram.
Generally, a closed geometrical figures are used to represent the set, like a circle, triangle or a rectangle which are known as Venn diagrams and are named after the English logician John Venn. In Venn diagram the elements of the sets are shown in their respective figures. Now, we have these “abstract toys or abstract building-blocks”, how can we get new such “abstract buildings” using these “abstract building blocks”. What I mean is that we know that if we are a set of numbers like 1,2,3, …, we know how to get “new numbers” out of these by “adding”, subtracting”, “multiplying” or “dividing” the given “building blocks like 1, 2…”. So, also what we want to do now is “operations on sets” so that we create new, more interesting or perhaps, more “useful” sets out of given sets. We define the following operations on sets: 1. Complement of a set: If A is a subset of the universal set U then the set of all elements in U which are not in A is called the complement of the set A and is denoted by $A^{'}$ or $A^{c}$ or $\overline{A}$ Some properties of complements: (i) ${A^{'}}^{'}=A$ (ii) $\phi^{'}=U$, where U is universal set (iii) $U^{'}= \phi$ 2. Union of Sets: If A and B are two sets then union of set A and set B is the set of all elements which are in set A or set B or both set A and set B. (this is the INCLUSIVE OR in digital logic) and the symbol is : \$latex A \bigcup B 3. Intersection of sets: If A and B are two sets, then the intersection of set A and set B is the set of all elements which are both in A and B. The symbol is $A \bigcap B$. 4. Disjoint Sets: Let there be two sets A and B such that $A \bigcap B=\phi$. We say that the sets A and B are disjoint, meaning that they do not have any elements in common. It is possible that there are more than two sets $A_{1}, A_{2}, \ldots A_{n}$ such that when we take any two distinct sets $A_{i}$ and $A_{j}$ (so that $i \neq j$, then $A_{i}\bigcap A_{j}= \phi$. We call such sets pairwise mutually disjoint. Also, in case if such a collection of sets also has the property that $\bigcup_{i=1}^{i=n}A_{i}=U$, where U is the Universal Set in the given context, We then say that this collection of sets forms a partition of the Universal Set. 5. Difference of Sets: Let us say that given a universal set U and two other sets A and B, $B-A$ denotes the set of elements in B which are not in A; if you notice, this is almost same as $A^{'}=U-A$. 6. Symmetric Difference of Sets: Suppose again that we are two given sets A and B, and a Universal Set U, by symmetric difference of A and B, we mean $(A-B)\bigcup (B-A)$. The symbol is $A \triangle B.$ Try to visualize this (and describe it) using a Venn Diagram. You will like it very much. Remark : The designation “symmetric difference” for the set $A \triangle B$ is not too apt, since $A \triangle B$ has much in common with the sum $A \bigcup B$. In fact, in $A \bigcup B$ the statements “x belongs to A” and “x belongs to B” are joined by the conjunction “or” used in the “either …or …or both…” sense, while in $A \triangle B$ the same two statements are joined by “or” used in the ordinary “either…or….” sense (as in “to be or not to be”). In other words, x belongs to $A \bigcup B$ if and only if x belongs to either A or B or both, while x belongs to $A \triangle B$ if and only if x belongs to either A or B but not both. 
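A short Python illustration of the operations just defined (added here for illustration; the example sets and the universal set are chosen arbitrarily):

```python
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
U = set(range(1, 11))     # a universal set for this illustration

print(A | B)              # union: {1, 2, 3, 4, 5, 6}
print(A & B)              # intersection: {3, 4}
print(A - B, B - A)       # differences: {1, 2} and {5, 6}
print(A ^ B)              # symmetric difference: {1, 2, 5, 6} = (A - B) union (B - A)
print(U - A)              # complement of A relative to U
```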
The set $A \triangle B$ can be regarded as a kind of a “modulo-two-sum” of the sets A and B, that is, a sum of the sets A and B in which elements are dropped if they are counted twice (once in A and once in B).

Let us now present some (easily provable/verifiable) properties of sets:

1. $A \bigcup B = B \bigcup A$ (union of sets is commutative)
2. $(A \bigcup B) \bigcup C = A \bigcup (B \bigcup C)$ (union of sets is associative)
3. $A \bigcup \phi=A$
4. $A \bigcup A = A$
5. $A \bigcup A^{'}=U$, where U is the universal set
6. If $A \subseteq B$, then $A \bigcup B=B$
7. $U \bigcup A=U$
8. $A \subseteq (A \bigcup B)$ and also $B \subseteq (A \bigcup B)$

Similarly, some easily verifiable properties of set intersection are:

1. $A \bigcap B = B \bigcap A$ (set intersection is commutative)
2. $(A \bigcap B) \bigcap C = A \bigcap (B \bigcap C)$ (set intersection is associative)
3. $A \bigcap \phi = \phi \bigcap A= \phi$ (this matches intuition: there is nothing in common between a non-empty set and an empty set :-))
4. $A \bigcap A =A$ (Idempotent law): this definition carries over to square matrices: if a square matrix is such that $A^{2}=A$, then A is called an idempotent matrix.
5. $A \bigcap A^{'}=\phi$ (this matches intuition: there is nothing in common between a set and another set which does not contain any element of it (the former set))
6. If $A \subseteq B$, then $A \bigcap B =A$
7. $U \bigcap A=A$, where U is the universal set
8. $(A \bigcap B) \subseteq A$ and $(A \bigcap B) \subseteq B$
9. (i) $A \bigcap (B \bigcup C) = (A \bigcap B)\bigcup (A \bigcap C)$ (intersection distributes over union); (ii) $A \bigcup (B \bigcap C)=(A \bigcup B) \bigcap (A \bigcup C)$ (union distributes over intersection). These are the two famous distributive laws.

The famous De Morgan’s Laws for two sets are as follows (they can be easily verified by Venn diagram): For any two sets A and B, the following hold:

i) $(A \bigcup B)^{'}=A^{'}\bigcap B^{'}$. In words, it can be captured beautifully: the complement of a union is the intersection of the complements.
ii) $(A \bigcap B)^{'}=A^{'} \bigcup B^{'}$. In words, it can be captured beautifully: the complement of an intersection is the union of the complements.

Cardinality of a set (Finite Set): (Again, we will define the term ‘finite set’ rigorously later.) The cardinality of a finite set A is the number of distinct elements contained in it, and we will denote it as $n(A)$.

Inclusion-Exclusion Principle: For two sets A and B, given a universal set U: $n(A \bigcup B) = n(A) + n(B) - n(A \bigcap B)$. For three sets A, B and C, given a universal set U: $n(A \bigcup B \bigcup C)=n(A) + n(B) + n(C) -n(A \bigcap B) -n(B \bigcap C) -n(C \bigcap A) + n(A \bigcap B \bigcap C)$. Homework Quiz: Verify the above using Venn diagrams.

Power Set of a Set: Let us consider a set A (given a universal set U). Then, the power set of A is the set consisting of all possible subsets of set A. (Note that the empty set is also a subset of A and that set A is a subset of A itself.) It can be easily seen (using the basic definition of combinations) that if $n(A)=p$, then $n(P(A)) = 2^{p}$. Symbol: $P(A)$.

Homework Tutorial I: 1. Describe the following sets in Roster form: (i) $\{ x: x \hspace{0.1in} is \hspace{0.1in} a \hspace{0.1in} letter \hspace{0.1in} of \hspace{0.1in} the \hspace{0.1in} word \hspace{0.1in} PULCHRITUDE\}$ (ii) $\{ x: x \hspace{0.1in} is \hspace{0.1in} an \hspace{0.1in} integer \hspace{0.1in} with \hspace{0.1in} \frac{-1}{2} < x < \frac{1}{2} \}$ (iii) $\{x: x=2n, n \in N\}$ 2.
Describe the following sets in Set Builder form: (i) $\{ 0\}$ (ii) $\{ 0, \pm 1, \pm 2, \pm 3\}$ (iii) $\{ \}$ 3. If $A= \{ x: 6x^{2}+x-15=0\}$ and $B= \{ x: 2x^{2}-5x-3=0\}$, and $x: 2x^{2}-x-3=0$, then find (i) $A \bigcup B \bigcup C$ (ii) $A \bigcap B \bigcap C$ 4. If A, B, C are the sets of the letters in the words, ‘college’, ‘marriage’, and ‘luggage’ respectively, then verify that $\{ A-(B \bigcup C)\}= \{ (A-B) \bigcap (A-C)\}$ 5. If $A= \{ 1,2,3,4\}$, $B= \{ 3,4,5, 6\}$, $C= \{ 4,5,6,7,8\}$ and universal set $X= \{ 1,2,3,4,5,6,7,8,9,10\}$, then verify the following: 5i) $A\bigcup (B \bigcap C) = (A\bigcup B) \bigcap (A \bigcup C)$ 5ii) $A \bigcap (B \bigcup C)= (A \bigcap B) \bigcup (A \bigcap C)$ 5iii) $A= (A \bigcap B)\bigcup (A \bigcap B^{'})$ 5iv) $B=(A \bigcap B)\bigcup (A^{'} \bigcap B)$ 5v) $n(A \bigcup B)= n(A)+n(B)-n(A \bigcap B)$ 6. If A and B are subsets of the universal set is X, $n(X)=50$, $n(A)=35$, $n(B)=20$, $n(A^{'} \bigcap B^{'})=5$, find (i) $n(A \bigcup B)$ (ii) $n(A \bigcap B)$ (iii) $n(A^{'} \bigcap B)$ (iv) $n(A \bigcap B^{'})$ 7. In a class of 200 students who appeared certain examinations, 35 students failed in MHTCET, 40 in AIEEE, and 40 in IITJEE entrance, 20 failed in MHTCET and AIEEE, 17 in AIEEE and IITJEE entrance, 15 in MHTCET and IITJEE entrance exam and 5 failed in all three examinations. Find how many students (a) did not flunk in any examination (b) failed in AIEEE or IITJEE entrance. 8. From amongst 2000 literate and illiterate individuals of a town, 70 percent read Marathi newspaper, 50 percent read English newspapers, and 32.5 percent read both Marathi and English newspapers. Find the number of individuals who read 8i) at least one of the newspapers 8ii) neither Marathi and English newspaper 8iii) only one of the newspapers 9) In a hostel, 25 students take tea, 20 students take coffee, 15 students take milk, 10 students take both tea and coffee, 8 students take both milk and coffee. None of them take the tea and milk both and everyone takes at least one beverage, find the number of students in the hostel. 10) There are 260 persons with a skin disorder. If 150 had been exposed to chemical A, 74 to chemical B, and 36 to both chemicals A and B, find the number of persons exposed to  (a) Chemical A but not Chemical B (b) Chemical B but not Chemical A (c) Chemical A or Chemical B. 11) If $A = \{ 1,2,3\}$ write down the power set of A. 12) Write the following intervals in Set Builder Form: (a) $(-3,0)$ (b) $[6,12]$ (c) $(6,12]$ (d) $[-23,5)$ 13) Using Venn Diagrams, represent (a) $(A \bigcup B)^{'}$ (b) $A^{'} \bigcup B^{'}$ (c) $A^{'} \bigcap B$ (d) $A \bigcap B^{'}$ Regards, Nalin Pithwa. ### References for IITJEE Foundation Mathematics and Pre-RMO (Homi Bhabha Foundation/TIFR) 1. Algebra for Beginners (with Numerous Examples): Isaac Todhunter (classic text): Amazon India link: https://www.amazon.in/Algebra-Beginners-Isaac-Todhunter/dp/1357345259/ref=sr_1_2?s=books&ie=UTF8&qid=1547448200&sr=1-2&keywords=algebra+for+beginners+todhunter 2. Algebra for Beginners (including easy graphs): Metric Edition: Hall and Knight Amazon India link: https://www.amazon.in/s/ref=nb_sb_noss?url=search-alias%3Dstripbooks&field-keywords=algebra+for+beginners+hall+and+knight 3. Elementary Algebra for School: Metric Edition: https://www.amazon.in/Elementary-Algebra-School-H-Hall/dp/8185386854/ref=sr_1_5?s=books&ie=UTF8&qid=1547448497&sr=1-5&keywords=elementary+algebra+for+schools 4. 
Higher Algebra: Hall and Knight: Amazon India link: https://www.amazon.in/Higher-Algebra-Knight-ORIGINAL-MASPTERPIECE/dp/9385966677/ref=sr_1_6?s=books&ie=UTF8&qid=1547448392&sr=1-6&keywords=algebra+for+beginners+hall+and+knight 5. Plane Trigonometry: Part I: S L Loney: https://www.amazon.in/Plane-Trigonometry-Part-1-S-L-Loney/dp/938592348X/ref=sr_1_16?s=books&ie=UTF8&qid=1547448802&sr=1-16&keywords=plane+trigonometry+part+1+by+s.l.+loney The above references are a must. Best time to start is from standard VII or standard VIII. -Nalin Pithwa. ### Pre RMO Practice question: 2018: How long does it take for a news to go viral in a city? And, a cyclist vs horseman Problem 1: Some one arrives in a city with very interesting news and within 10 minutes tells it to two others. Each of these tells the news within 10 minutes to two others(who have not heard it yet), and so on. How long will it take before everyone in the city has heard the news if the city has three million inhabitants? Problem 2: A cyclist and a horseman have a race in a stadium. The course is five laps long. They spend the same time on the first lap. The cyclist travels each succeeding lap 1.1 times more slowly than he does the preceding one. On each lap the horseman spends d minutes more than he spent on the preceding lap. They each arrive at the finish line at the same time. Which of them spends the greater amount of time on the fifth lap and how much greater is this amount of time? I hope you enjoy “mathematizing” every where you see… Good luck for the Pre RMO in Aug 2018! Nalin Pithwa. ### How to solve equations: Dr. Vicky Neale: useful for Pre-RMO or even RMO training Dr. Neale simply beautifully nudges, gently encourages mathematics olympiad students to learn to think further on their own… ### A nice dose of practice problems for IITJEE Foundation math and PreRMO It is said that “practice makes man perfect”. Problem 1: Six boxes are numbered 1 through 6. How many ways are there to put 20 identical balls into  these boxes so that none of them is empty? Problem 2: How many ways are there to distribute n identical balls in m numbered boxes so that none of the boxes is empty? Problem 3: Six boxes are numbered 1 through 6. How many ways are there to distribute 20 identical balls between the boxes (this time some of the boxes can be empty)? Finish this triad of problems now! Nalin Pithwa. ### IITJEE Foundation Math and PRMO (preRMO) practice: another random collection of questions Problem 1: Find the value of $\frac{x+2a}{2b--x} + \frac{x-2a}{2a+x} + \frac{4ab}{x^{2}-4b^{2}}$ when $x=\frac{ab}{a+b}$ Problem 2: Reduce the following fraction to its lowest terms: $(\frac{1}{x} + \frac{1}{y} + \frac{1}{z}) \div (\frac{x+y+z}{x^{2}+y^{2}+z^{2}-xy-yz-zx} - \frac{1}{x+y+z})+1$ Problem 3: Simplify: $\sqrt[4]{97-56\sqrt{3}}$ Problem 4: If $a+b+c+d=2s$, prove that $4(ab+cd)^{2}-(a^{2}+b^{2}-c^{2}-d^{2})^{2}=16(s-a)(s-b)(s-c)(s-d)$ Problem 5: If a, b, c are in HP, show that $(\frac{3}{a} + \frac{3}{b} - \frac{2}{c})(\frac{3}{c} + \frac{3}{b} - \frac{2}{a})+\frac{9}{b^{2}}=\frac{25}{ac}$. May u discover the joy of Math! 🙂 🙂 🙂 Nalin Pithwa. ### Pre-RMO (PRMO) Practice Problems Pre-RMO days are back again. Here is a list of some of my random thoughts: Problem 1: There are five different teacups, three saucers, and four teaspoons in the “Tea Party” store. How many ways are there to buy two items with different names? Problem 2: We call a natural number “odd-looking” if all of its digits are odd. How many four-digit odd-looking numbers are there? 
Problem 3: We toss a coin three times. How many different sequences of heads and tails can we obtain? Problem 4: Each box in a 2 x 2 table can be coloured black or white. How many different colourings of the table are there? Problem 5: How many ways are there to fill in a Special Sport Lotto card? In this lotto, you must predict the results of 13 hockey games, indicating either a victory for one of two teams, or a draw. Problem 6: The Hermetian alphabet consists of only three letters: A, B and C. A word in this language is an arbitrary sequence of no more than four letters. How many words does the Hermetian language contain? Problem 7: A captain and a deputy captain must be elected in a soccer team with 11 players. How many ways are there to do this? Problem 8: How many ways are there to sew one three-coloured flag with three horizontal strips of equal height if we have pieces of fabric of six colours? We can distinguish the top of the flag from the bottom. Problem 9: How many ways are there to put one white and one black rook on a chessboard so that they do not attack each other? Problem 10: How many ways are there to put one white and one black king on a chessboard so that they do not attack each other? I will post the answers in a couple of days. Nalin Pithwa. ### Three in a row !!! If my first were a 4, And, my second were a 3, What I am would be double, The number you’d see. For I’m only three digits, Just three in a row, So what must I be? Don’t say you don’t know! Cheers, Nalin Pithwa.
http://math.stackexchange.com/questions/222974/probability-of-getting-2-aces-2-kings-and-1-queen-in-a-five-card-poker-hand-pa
# Probability of getting 2 Aces, 2 Kings and 1 Queen in a five card poker hand (Part II)

So I reworked my formula in Method 1 after getting help with my original question - Probability of getting 2 Aces, 2 Kings and 1 Queen in a five card poker hand. I am still getting results that differ... although they are much closer than before, so I must still be making a mistake somewhere in Method 1. Anyone know what it is?

Method 1

$P(2A \cap 2K \cap 1Q) = P(Q|2A \cap 2K)P(2A|2K)P(2K)$

$$= \frac{1}{12}\frac{{4 \choose 2}{46 \choose 1}}{50 \choose 3}\frac{{4 \choose 2}{48 \choose 3}}{52 \choose 5}$$

$$= \frac{(6)(17296)(6)(46)}{(2598960)(19600)(12)}$$

$$= 4.685642 \times 10^{-5}$$

Method 2

$$\frac{{4 \choose 2} {4 \choose 2}{4 \choose 1}}{52 \choose 5} = \frac{3}{54145}$$

$$= 5.540678 \times 10^{-5}$$

Comments:

- Please make an effort to make the question self-contained and provide a link to your earlier question. – Sasha Oct 28 '12 at 19:56
- I think we would rather have you edit your initial question by adding your new progress. This avoids losing answers and keeps track of progress. – Jean-Sébastien Oct 28 '12 at 19:56
- But there are already answers to my original question, so those answers would not make sense now that I am using a new formula for Method 1. – sonicboom Oct 28 '12 at 20:03
- Conditional probability arguments can be delicate. Given that there are exactly two Kings, what's the $46$ doing? That allows the possibility of more Kings. – André Nicolas Oct 28 '12 at 20:26
- The $46$ is because we have already taken two kings from the pack, leaving us with 50. And now we have chosen 2 aces and we have to pick the other 1 card from the 50 remaining cards less the 4 aces? – sonicboom Oct 28 '12 at 20:42

Answer:

$$\frac{1}{11}\frac{{4 \choose 2}{44 \choose 1}}{48 \choose 3}\frac{{4 \choose 2}{48 \choose 3}}{52 \choose 5}$$

If you wrote this as

$$\frac{{4 \choose 2}{48 \choose 3}}{52 \choose 5}\frac{{4 \choose 2}{44 \choose 1}}{48 \choose 3}\frac{{4 \choose 1}{40 \choose 0}}{44 \choose 1}$$

it might be more obvious why they are the same.
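For what it is worth, a quick numerical check (an R sketch, not part of the original thread) confirms that the answer's corrected chain of conditional probabilities reproduces the direct count of Method 2:

```r
# Method 2: direct count of hands with exactly 2 aces, 2 kings and 1 queen
method2 <- choose(4, 2) * choose(4, 2) * choose(4, 1) / choose(52, 5)

# Corrected Method 1 (as in the answer): P(exactly 2 kings) *
# P(exactly 2 aces among the other 3 cards) * P(remaining card is a queen)
method1 <- (choose(4, 2) * choose(48, 3) / choose(52, 5)) *
           (choose(4, 2) * choose(44, 1) / choose(48, 3)) *
           (choose(4, 1) / choose(44, 1))

c(method1 = method1, method2 = method2, fraction = 3 / 54145)
# all three agree at about 5.540678e-05
```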
https://www.stat.math.ethz.ch/pipermail/r-help/2008-June/166104.html
# [R] Sweave: controlling pointsize (pdf)

Lauri Nikkinen lauri.nikkinen at iki.fi
Fri Jun 27 13:37:49 CEST 2008

Yes, I think so too. I already tried with

options(SweaveHooks=list(fig=function() pdf(pointsize=10)))

but as you said it tries to open a pdf device and Sweaving fails...

Best
Lauri

2008/6/27, Duncan Murdoch <murdoch at stats.uwo.ca>:
> On 27/06/2008 7:12 AM, Lauri Nikkinen wrote:
> > pdf.options() seems to be a new function (from 2.7.0), so I guess I'll
> > have to upgrade or write my own hook function for Sweave. Thanks.
>
> I'd recommend upgrading. I think it would be difficult to do this with a
> hook function: you'd basically need to close the pdf file that Sweave
> opened, and reopen it with new args --- but I don't think it's easy for you
> to determine the filename that Sweave would have used. You probably need to
> look at sys.frame(-2)$chunkprefix or something equally ugly.
>
> Duncan Murdoch
>
> > Best
> > Lauri
> >
> > 2008/6/27, Duncan Murdoch <murdoch at stats.uwo.ca>:
> > > On 27/06/2008 6:23 AM, Lauri Nikkinen wrote:
> > > > I'm working with Windows XP and R 2.6.0
> > > >
> > > > > R.Version()
> > > > $platform
> > > > [1] "i386-pc-mingw32"
> > > >
> > > > -Lauri
> > > >
> > > > 2008/6/27, Lauri Nikkinen <lauri.nikkinen at iki.fi>:
> > > > > Hello,
> > > > >
> > > > > Is there a way to control pointsize of pdf:s produced by Sweave? I
> > > > > would like to have the same pointsize from (not a working example)
> > >
> > > You could use a pdf.options() call in an early chunk in the file, and it
> > > will apply to subsequent chunks.
> > >
> > > For some other cases you might want code to be executed before every
> > > figure; that could be put in a hook function (as described in ?Sweave,
> > > and in the Sweave manual).
> > >
> > > Duncan Murdoch
> > >
> > > > > pdf(file="C:/temp/example.pdf", width=7, height=7, bg="white",
> > > > >     pointsize=10)
> > > > > plot(1:10)
> > > > > etc..
> > > > > dev.off()
> > > > >
> > > > > as
> > > > >
> > > > > \documentclass[a4paper]{article}
> > > > > \usepackage[latin1]{inputenc}
> > > > > \usepackage[finnish]{babel}
> > > > > \usepackage[T1]{fontenc}
> > > > > \usepackage{C:/progra\string~1/R/R-26\string~1.0/share/texmf/Sweave}
> > > > >
> > > > > <<fig=TRUE, width=7, height=7>>=
> > > > > plot(1:10)
> > > > > etc..
> > > > > @
> > > > >
> > > > > \end{document}
> > > > >
> > > > > Regards
> > > > > Lauri
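Putting Duncan Murdoch's suggestion together, a minimal Sweave file along the following lines should work with R >= 2.7.0 (an illustrative sketch; the chunk names are arbitrary, and add the \usepackage{Sweave} line as in the original example if your setup does not insert it automatically): pdf.options() is called once in an early chunk and then applies to the pdf figures produced by later chunks.

```
\documentclass[a4paper]{article}
\begin{document}

<<setup, echo=FALSE>>=
# applies to the pdf figures of all subsequent chunks (requires R >= 2.7.0)
pdf.options(pointsize = 10)
@

<<myplot, fig=TRUE, width=7, height=7>>=
plot(1:10)
@

\end{document}
```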
https://www.nature.com/articles/s41612-021-00161-2?error=cookies_not_supported&code=6d05393e-b28f-4c88-8f77-a97a1f381950
# Amplified risk of spatially compounding droughts during co-occurrences of modes of natural ocean variability ## Abstract Spatially compounding droughts over multiple regions pose amplifying pressures on the global food system, the reinsurance industry, and the global economy. Using observations and climate model simulations, we analyze the influence of various natural Ocean variability modes on the likelihood, extent, and severity of compound droughts across ten regions that have similar precipitation seasonality and cover important breadbaskets and vulnerable populations. Although a majority of compound droughts are associated with El Niños, a positive Indian Ocean Dipole, and cold phases of the Atlantic Niño and Tropical North Atlantic (TNA) can substantially modulate their characteristics. Cold TNA conditions have the largest amplifying effect on El Niño-related compound droughts. While the probability of compound droughts is ~3 times higher during El Niño conditions relative to neutral conditions, it is ~7 times higher when cold TNA and El Niño conditions co-occur. The probability of widespread and severe compound droughts is also amplified by a factor of ~3 and ~2.5 during these co-occurring modes relative to El Niño conditions alone. Our analysis demonstrates that co-occurrences of these modes result in widespread precipitation deficits across the tropics by inducing anomalous subsidence, and reducing lower-level moisture convergence over the study regions. Our results emphasize the need for considering interactions within the larger climate system in characterizing compound drought risks rather than focusing on teleconnections from individual modes. Understanding the physical drivers and characteristics of compound droughts has important implications for predicting their occurrence and characterizing their impacts on interconnected societal systems. ## Introduction Weather and climate extremes pose substantial risks to people, property, infrastructure, natural resources and ecosystems1,2,3. Although a majority of risk assessment studies have focused on single stressor hazards occurring in specific regions, the Intergovernmental Panel on Climate Change (IPCC) Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (SREX) highlights the importance of considering compound extremes resulting from the simultaneous or sequential occurrence of multiple climate hazards in the same region, for improved modeling and risk estimation of their impacts4. Since then, several studies have analyzed the risks and mechanisms of such compound events5,6,7,8,9. Another emerging category of compound events that involve the simultaneous occurrence of extremes across multiple regions, referred to as spatially compounding extremes, is gaining prominence due to the potential for their cascading impacts on the global food system, disaster management resources, international aid, reinsurance industries, and the global economy10,11,12. Recent work has started to build an understanding of the physical mechanisms that connect the occurrence of extremes across different regions. Kornhuber et al.17 found that the co-occurring summer 2018 heatwaves across North America, Western Europe, and the Caspian Sea region were driven by a recurrent wave-7 circulation pattern in the Northern-hemisphere mid-latitude jet stream. 
More generally, the occurrence of Rossby wave numbers 5 and 7 are found to substantially increase the probability of spatially compounding heat extremes over multiple mid-latitude regions including Central North America, Eastern Europe, and Eastern Asia, reducing global average crop production by nearly 4%18. The occurrence of Rossby waves can also link extreme events in the mid-latitudes and subtropics. For instance, Lau and Kim19 identified the role of Rossby wave trains in linking two record-setting extreme events during summer 2010—the persistent Russian heat wave and catastrophic flooding in Pakistan—with land-atmosphere feedbacks amplifying the Russian heat wave and moisture transport from the Bay of Bengal sustaining and amplifying the rains over Pakistan. Such compound extremes simultaneously affected millions of people and triggered a global food price spike associated with an approximately 30% loss in grain production in Russia20, which is a leading contributor to the global wheat trade21. While recent studies of compound extremes have focused on the Northern Hemisphere mid-latitudes, the processes influencing compound extremes across the lower latitudes have received relatively little attention. Singh et al.22 investigated the underlying mechanisms of one such event—compound severe droughts across South Asia, East Asia, Brazil, and North and South Africa during 1876–1878, which were linked to the famines that contributed to the Late Victorian Holocausts23. The severity, duration, and extent of this compound drought event was shaped by the co-occurrence of a record-breaking El Niño (1877–1878), a record strong Indian Ocean Dipole (IOD) (1877), and record warm conditions in the North Atlantic Ocean (1878)22. El Niño Southern Oscillation (ENSO) is one of the main modes of variability that can cause simultaneous droughts and consequently affect food production in multiple remote regions. For instance, the reduction in global maize production in 1983 resulting from simultaneous crop failures across multiple regions13 is linked to the strong 1982–1983 El Niño event24. ENSO teleconnections lead to correlated climate risks between agricultural regions in North and South America and across the Pacific in Northern China and Australia25. For example, maize and soybean growing conditions in the US and southeast South America are favorable during the El-Niño phase, while the conditions are unfavorable in northern China, Brazil, and Southern Mexico25. In addition to ENSO, modes of variability in the Indian and Atlantic Ocean such as the Indian Ocean Dipole (IOD), tropical Atlantic variability, and the North Atlantic Oscillation are found to substantially affect the production of globally-aggregated maize, soybean, and wheat24. The influence of the interaction between these modes of natural variability on spatially compounding droughts across various regions has not yet been investigated. Here, we examine the influence of four modes of natural climate variability on compound droughts across ten regions (Fig. 1a) defined in the SREX2 - Amazon (AMZ), Central America (CAM), Central North America (CNA), East Africa (EAF), East Asia (EAS), East North America (ENA), South Asia (SAS), Southeast Asia (SEA), Tibetan Plateau (TIB), and West Africa (WAF). 
We select these regions for three main reasons: (1) these regions include areas that receive a majority of their annual precipitation during the summer season (June–September) and experience high monthly precipitation variability, (2) several of these regions are physically connected by the global summer monsoon system26,27, and (3) climate variability across these regions are affected by similar modes of sea surface temperature (SST) variability. Our analysis only focuses on areas within these regions that meet the criteria of predominantly summer season precipitation and high monthly variability, which are identified based on the Shannon Entropy Index. These regions include major population centers with high levels of poverty and food insecurity and a number of major grains producing regions of the world, making them important in the context of global food security. The predominant influence of tropical Pacific SSTs (El-Niño or La-Nina condition) on precipitation variability over these regions is well-known28,29. In addition, previous studies have highlighted the significant influence of other modes of variability such as the IOD, the Atlantic Niño and the Tropical North Atlantic (TNA) alongside El-Niño on individual regions such as SAS30,31, WAF/EAF32,33, EAS34,35, SEA36, and AMZ37,38. We aim to understand how the co-occurrence of these modes of variability influence the characteristics of spatially compound droughts across the ten SREX regions. By advancing the knowledge of the physical drivers of compound droughts, the findings from this study have relevance for quantifying the cascading risk to critical, globally connected socio-economic sectors such as agriculture and thereby to regional and global food security and disaster risk management. By identifying SST conditions that have prediction skill on seasonal timescales39,40,41, our findings also highlight the potential for predictability of such events that can aid in predicting and managing their impacts42. ## Results and discussion ### Compound drought characteristics and their physical drivers To identify summer season (June–September) compound droughts across the ten SREX regions (Fig. 1a), we utilize the Standardized Precipitation Index (SPI), which is a commonly-used measure of meteorological drought. Our analysis is limited to grid cells within each region that have high entropy values (Fig. 1a), signifying substantial summer season precipitation and high monthly precipitation variability. We define drought at a grid cell when SPI is below −1 standard deviation (< −1σ) and consider a region under drought when total number of grids with SPI < −1σ exceeds 80th percentile of the historical drought area for that region (see “Methods” section; Fig. 1c). Based on these definitions, we find 11 years since 1981 that have at least three regions simultaneously experiencing droughts (Fig. 1b), which we hereafter refer to as compound droughts. El Niño exhibits the strongest influence on the occurrences of compound droughts in the observations. 8 of the 11 observed compound droughts in CHIRPS are associated with anomalously warm SSTs in the Niño3.4 region, with seven of them classified as El Niño events (≥0.5σ; Fig. 2). A majority of compound droughts occur during the developing phase of moderate to strong El Niño (SST anomaly >1σ) (Fig. S1) and only two compound droughts are associated with anomalously cold SST over Niño3.4 region. 
For instance, the strong El Niños of 1982, 1997, and 2015 resulted in widespread and severe compound droughts that simultaneously affected over five of the study regions. In each case, the total drought affected area across all ten regions exceeded the historical 90th percentile (referred to as widespread droughts) and average SPI across all regions remained in the lowest historical 10th percentile (referred to as severe droughts; Fig. 2). However, not all strong El Niño years led to compound droughts (e.g., 1987) and substantial SST anomalies across the Atlantic and Indian Ocean basins were also present during the 11 compound droughts (Fig. S2), indicating the possibility of a more complex interplay of multiple modes of ocean variability. Therefore, we seek to investigate the influence of individual and co-occurring natural modes of ocean variability on the characteristics of compound droughts. Specifically, we consider El Niño co-occurrences with IOD, Atlantic Niño, and TNA, since their influences on the interannual precipitation variability in our study regions are well established30,43. We note that 7 of 12 positive IOD (IOD+; DMI > 0.5σ), 5 of 11 negative Atlantic Niño (AtlNiño; SST anomaly < −0.5σ), and 7 of 14 negative TNA (TNA; SST anomaly < −0.5σ) co-occurred with compound droughts (Fig. 2). Overall, more than 60% (7 out of 11) of the observed compound droughts occurred during the years when two or more of these modes of ocean variability were active (Fig. 2). The apparent dominance of El Niño as a major player during the episodes of compound droughts is not sensitive to the choice of threshold used to define drought. For instance, if classification of a region under drought is based on 90th percentile of the historical drought area for that region instead of the 80th percentile, the total number of compound droughts in the last four decades expectedly reduces (5 instead of 11) (Fig. S1), however, 80% of them are still during strong El Niño events. These findings are also insensitive to the choice of the observational dataset. For instance, use of precipitation from Climate Research Unit (CRU) and SSTs from Extended Reconstructed Sea Surface Temperature (ERSST) NOAA V544 over 1901–2018 yields nearly 70% (12 of the 17) of compound and widespread droughts during strong El Niño events (Fig. S3). Similar to CHIRPS, more than half (~60%) of the compound droughts are associated with the co-occurrence of two or more modes of ocean variability (Fig. S3). While we do find 8 of the 39 compound droughts in the 118-year record associated with opposite phases of two or more of these variability modes (Figs. 2 and S3), those conditions are comparatively rare45. ### Identifying relevant phases of natural variability modes To establish the relationship between these modes of ocean variability and SPI in the study regions, we perform a multiple linear regression analysis (Fig. S4). Our analyses reveal a widespread and consistent negative influence of the Niño3.4 SST anomalies (Fig. S4a) and positive influence of the Atlantic Niño SST anomalies (Fig. S4b) on SPI in most regions, suggesting that Niño3.4+ and AtlNiño conditions are conducive to droughts in these regions. In contrast, we find that the TNA SST anomalies (Fig. S4c) and the IOD (Fig. S4d) have a varied influence across these regions. For instance, the IOD has a positive influence on SPI over parts of WAF, EAF and SAS but a negative influence over parts of CNA and SEA. This indicates that IOD+ conditions promote droughts over CNA and SEA. 
Similarly, the TNA SST anomalies exhibit a negative influence over parts of AMZ, but positive influence over parts of CAM, SEA, and WAF, which suggest that TNA conditions favor droughts over the latter regions. We also calculate the fraction of the total drought events in each region during different phases of these modes of ocean variability (Fig. S5). Positive Niño3.4 SST anomalies (>0.5σ; Niño3.4+) are linked to a substantial fraction of historical drought events over several regions. Niño3.4+ (nine events historically) conditions are associated with ≥75% of droughts over CAM, SAS, and SEA, and ≥50% of droughts over EAF, WAF, and TIB. Similarly, TNA conditions are coincident with ≥75% of droughts over CAM and SAS, and ≥50% over SEA, TIB, and WAF. AtlNiño- events coincide with ≥50% of droughts over AMZ, EAS, and WAF while IOD+ is present during ≥75% of droughts over SEA and ≥50% of droughts over AMZ, CAM, CAN, SAS, TIB, and WAF (Fig. S5). In contrast, the opposite phases of IOD, TNA, and AtlNiño are associated with a small fraction of droughts over only one region. Collectively, these results suggest the predominant influence of Niño3.4+, IOD+, TNA, and AtlNiño on individual regions and compound droughts. These conditions are also more likely to co-occur. For instance, El Niño conditions are more likely to co-occur with IOD+ conditions as they tend to drive warmer SSTs over the western Indian ocean through the atmospheric bridge and cooler SSTs over the eastern Indian ocean via oceanic Indonesian throughflow45. Similarly, cold SSTs over the tropical north Atlantic Ocean can induce warm conditions over the Pacific Ocean by influencing the Walker circulations45, making cold TNA conditions and El Niños more likely. Therefore, we further explore how IOD+, TNA, and AtlNiño modes interact with El Niño to influence drought characteristics over individual regions and consequently, compound droughts. ### Amplifying effect of co-occurring modes with El Niño The interplay of Niño3.4+ with other modes of ocean variability requires several instances of their co-occurrences for robustly distinguishing their individual and combined influence. Given the limited length of the observed record, we primarily study their interactions in a multicentury (1800 years) preindustrial climate simulation from the National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM)46. CESM skillfully represents precipitation over the study regions and SST variability representing various oceanic modes relevant to this study47,48. We have included comparisons of CESM with observations, where feasible (Fig. 3). The 1800-year preindustrial simulation provides a substantially larger number of events to examine the relative and combined influence of natural modes of variability without any changes resulting from external climate forcing (Fig. 3). We compare regional drought characteristics during three types of conditions (see “Methods” section)—(1) El Niño co-occurring with other modes (either IOD+ or/and TNA or/and AtlNiño; referred to as co-occurring conditions), (2) El Niño occurring alone (referred to as Niño3.4+ conditions), and (3) neutral conditions, when none of them are active (Fig. 3). It should be noted that there are no neutral conditions in the 38-year observational record, and limited instances of the other two conditions does not allow their robust comparisons (e.g., there are 2 Niño3.4+ and 7 co-occurring conditions). 
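In code terms, this stratification amounts to labelling each summer by the state of the four standardized JJAS indices and then tabulating compound-drought frequency within each class. A minimal R sketch follows, assuming yearly index values and a per-year count of drought-affected regions are already available (all names below are illustrative placeholders, not the study's actual code):

```r
# Classify each summer by the state of four standardized JJAS indices.
# nino34, iod, tna, atl: vectors of standardized index values, one per year.
classify_summer <- function(nino34, iod, tna, atl) {
  elnino  <- nino34 > 0.5
  other   <- (iod > 0.5) | (tna < -0.5) | (atl < -0.5)   # IOD+, TNA-, or AtlNino- active
  neutral <- abs(nino34) < 0.5 & abs(iod) < 0.5 & abs(tna) < 0.5 & abs(atl) < 0.5
  ifelse(elnino & other, "co-occurring",
         ifelse(elnino & !other, "Nino3.4+ only",
                ifelse(neutral, "neutral", "other")))
}

# Usage, once the yearly vectors are in hand:
# labels <- classify_summer(nino34, iod, tna, atl)
# tapply(n_drought_regions >= 3, labels, mean)   # P(compound drought | class)
```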
During Niño3.4+, a large fraction of all tropical regions—AMZ, CAM, EAF, WAF, SAS, and SEA—experience abnormally dry anomalies (Fig. S6), consistent with well-known observed ENSO teleconnections49,50,51. The co-occurrence of Niño3.4+ with other modes intensifies dry conditions over EAS, SEA, CAM, and AMZ, while the opposite impact is experienced over EAF (Fig. S6). The simulated composites show consistency with both observed datasets over most regions, with the exception of biases in the extent and intensity of precipitation deficits over parts of SAS, WAF and EAF between model and observations (Fig. S6). We also quantify the aggregate drought area and intensity across the individual regions (Fig. 3). In the CESM preindustrial simulation, two regions—CAM and SEA—experience significantly larger drought areas during both Niño3.4+ and co-occurring conditions relative to neutral conditions (indicated by gray arrows in Fig. 3a), while two regions—AMZ and SAS only show significantly larger droughts during co-occurring conditions but not during Niño3.4+ relative to neutral conditions (box plots, Fig. 3a). In addition, co-occurring conditions expand the drought area over AMZ and SEA and significantly reduce drought area over EAF relative to Niño3.4+ (indicated by green arrows in Fig. 3a), consistent with observations (solid circles, Fig. 3a), highlighting their role in shaping drought characteristics. Moreover, Niño3.4+ significantly increases drought intensity over EAF, SEA, and WAF relative to neutral conditions. Further, co-occurring conditions are associated with significantly higher drought intensity over AMZ, CAM, WAF, EAF, SAS, and SEA relative to Niño3.4+ (Fig. 3b), consistent with observations (solid circles, Fig. 3b). Overall, these findings highlight the complex interplay of Niño3.4+ and other modes of ocean variability that control the spatial footprint and severity of over studies regions (Figs. 3 and S6). While Niño3.4+ exhibits the strongest influence on regional precipitation characteristics, (Fig. 3 and S6), the frequency, severity and spatial extent of compound droughts is substantially enhanced when Niño3.4+ co-occurs with other natural modes of ocean variability (Fig. S7). For instance, the probability of compound droughts in CESM increases from 0.09 during neutral conditions to ~0.27 during Niño3.4+ conditions and ~0.43 during co-occurring conditions (Fig. S7d). Likewise, the probability of widespread and severe droughts is nearly 70% higher during co-occurring conditions relative to Niño3.4+ conditions alone (Fig. S7e, f). These model-based findings are mostly consistent with observations (Fig. S7a–c), except that the simulated number of drought-affected regions during co-occurring conditions is not significantly higher even though the probability of simulated compound droughts is ~20% higher relative to Niño3.4+ conditions in observations (Fig. S7a). ### Influence of co-occurring modes on regional droughts Next, we isolate the influence of each individual mode of variability and their co-occurrence with Niño3.4+ on precipitation characteristics (Fig. 4). AtlNiño is associated with anomalously dry conditions (relative to neutral) over WAF, central AMZ, northern TIB and EAS (Fig. 4a, b). Its co-occurrence with Niño3.4+ significantly influences precipitation anomalies in the Atlantic Rim regions, including stronger precipitation deficits over WAF and the AMZ and reversal of Niño3.4+ forced anomalies (wet to dry) over CNA (Fig. 4e, f). 
More intense and widespread drying over WAF and AMZ during Niño3.4+/AtlNiño occurs without substantial increase in SST anomalies over the Niño3.4 region, which indicates an additive influence of these modes on regional drought characteristics. Likewise, co-occurring TNA-/Niño3.4+ conditions also appear to have an additive influence though the composites do indicate significantly higher SST anomalies over the part of Niño3.4 region indicative of slightly stronger Niño3.4+ conditions (Fig. 4g). Individually, TNA are associated with dry conditions over WAF, EAF, CAM, southern SAS, and northern TIB relative to neutral conditions (Fig. 4a, c, e). Co-occurring TNA/Niño3.4+ conditions amplify the Niño3.4+-related drying over CAM, AMZ, EAF, northern TIB, central EAS, and SEA. In addition, there are more widespread precipitation deficits across WAF, EAF and SAS over areas that would experience wet anomalies during Niño3.4+ (Fig. 4, e, g). In contrast to the relatively consistent drying influence of these modes across multiple regions, IOD+ exhibits a dipolar influence across the regions surrounding the Indian Ocean. IOD+ is associated with anomalous drying over western SEA, northern SAS, TIB, northeast EAS and parts of CNA and anomalous wet conditions over EAF52 and WAF49 (Fig. 4a, d). Therefore, Niño3.4+/IOD+ co-occurrence dampens the drying impacts of Niño3.4+across the latter regions, while it expands and intensifies precipitation deficits over SAS and SEA. These findings are consistent with Preethi et al.49 suggesting the co-occurrence of IOD+ conditions can dampen the influence of tropical drivers over Africa. One confounding factor in determining the modulating influence of the IOD+ on Niño3.4+-related drought effects is that intensity of Niño3.4+ is substantially higher during IOD+ (Fig. 4e, h), as studies suggest that strong Niño3.4+ events force IOD+ conditions53,54,55, which is perhaps partly responsible for the intensification of drought severity over SEA and parts of SAS during Niño3.4+/IOD+ co-occurrence. Given the substantial effect of all four natural variability modes on regional precipitation (Figs. 3 and 4), we assess the individual and combined influence of each of the combinations on aggregate drought area and intensity across a subset of six SREX regions that are substantially affected by these variability modes (Fig. 5). Amongst the four modes, Niño3.4+ significantly increases drought area over the largest number of these regions—CAM, EAF, and SEA-relative to neutral conditions, followed by TNA- that increases drought area over CAM and EAF (indicated by gray arrows in Fig. 5a). The individual influence of other modes is limited to fewer regions—AtlNiño significantly increases drought area over WAF, whereas IOD+ significantly decreases drought area over EAF and WAF and increases it over SAS. However, their co-occurrence with Niño3.4+ has significant effects over multiple regions. Co-occurring Niño3.4+/TNA are associated with significantly higher drought area over all regions but AMZ relative to neutral conditions (gray arrows in Fig. 5a). In addition, the co-occurring Niño3.4+/TNA significantly (at 5% significance level) increase drought area over SEA and CAM while Niño3.4+/IOD+ co-occurrence significantly decreases drought area over EAF and increases drought area over SEA, relative to Niño3.4+ alone (indicated by green arrows in Fig. 5a). 
AMZ, which experiences no significant change in drought area under Niño3.4+ relative to neutral conditions, has a significantly higher drought area when Niño3.4+/AtlNiño or Niño3.4+/IOD+ co-occur. Similarly, WAF only shows significantly higher drought area during co-occurring Niño3.4+/TNA and Niño3.4+/AtlNiño but not during Niño3.4+ alone. Unlike the influence on drought area, we find a more limited influence of the individual occurrences of these modes on drought intensity over most regions, except an increase in drought intensity over WAF during TNA and AtlNiño and over EAF, SEA, and WAF during Niño3.4+ relative to neutral conditions (indicated by gray arrows in Fig. 5b). However, co-occurring modes significantly increase drought intensity over all six regions relative to neutral conditions. For instance, despite no substantial difference in drought intensity over CAM and SAS between Niño3.4+ and neutral conditions, co-occurring Niño3.4+/TNA lead to significantly higher drought intensity over these regions and over SEA and WAF (indicated by gray arrows in Fig. 5a). In addition, co-occurring Niño3.4+/IOD+ are associated with significantly higher drought intensity over SEA and SAS and Niño3.4+/TNA are associated with significantly higher drought intensity over CAM and SAS, relative to Niño3.4+ alone (indicated by green arrows in Fig. 5b). ### Influence of co-occurring modes on compound droughts The individual and co-occurring influences of these modes on regional drought characteristic also leads to the episodes of compound droughts across ten SREX regions when at least three regions simultaneously experience drought during the same season (Fig. 6). The probability of experiencing compound droughts increases approximately threefold during AtlNiño- (probability = 0.25), TNA (probability = 0.24) and Niño3.4+ (probability = 0.27) relative to neutral conditions (probability = 0.09) (gray arrows in Fig. 6a), which is further amplified during their co-occurrences. For instance, co-occurring Niño3.4+/IOD+ or Niño3.4+/AtlNiño increase the probability of compound droughts by a factor of ~5 while co-occurring TNA-/ Niño3.4+ increase it by a factor of ~7 relative to neutral conditions. Overall, the co-occurring Niño3.4+/TNA conditions are associated with the largest amplification of compound drought risk (~2.5 or ~150%) over their probability during Niño3.4+ conditions. Similarly, the total compound drought area measured across all ten SREX regions shows a significant increase during TNA- and Niño3.4+ relative to neutral conditions (Fig. 6b). Niño3.4+ increases the probability of widespread droughts, events with drought area in the top 90th percentile (~21%), to 0.19 compared to ~0 during neutral conditions. Co-occurrence of other natural variability modes with Niño3.4+ also substantially increase compound drought area compared to neutral conditions. Most notably, co-occurring TNA/Niño3.4+ raises the probability of widespread droughts by a factor of ~3 relative to Niño3.4+. Likewise, co-occurrence of various ocean variability modes amplifies the probability of severe compound droughts events with the area-weighted average drought intensity across all regions in the lowest 10th percentile (~−1.52) (Fig. 6c). Co-occurring Niño3.4+/TNA are associated with a 2.5 times higher probability of severe droughts relative to Niño3.4+. The co-occurring Niño3.4+/IOD+ and Niño3.4+/AtlNiño- also increase the probability of severe droughts by a factor of ~2 and ~1.5, respectively relative to Niño3.4+. 
Overall, these analyses suggest that Niño3.4+ leads to the largest increase in the probability, extent and intensity of compound droughts relative to the neutral conditions, and the co-occurrence of IOD+, and/or TNA, and/or AtlNiño- with Niño3.4+ can significantly amplify these characteristics through their influence on drought intensity and extent over one or multiple SREX regions. ### Physical mechanisms associated with compound droughts We investigate the underlying physical mechanisms that connect simultaneous precipitation anomalies over several terrestrial regions with SST anomalies in various oceanic basins by analyzing upper level (200 hPa) velocity potential (VP) and low-level (at 850 hPa) moisture flux convergence (MFC) anomalies corresponding to the individual and co-occurring modes (Fig. 7). The VP describes large-scale horizontal convergence and divergence centers of the atmospheric circulation and is particularly useful in identifying anomalies in the tropical circulations. It is well known that El Niño modulates tropical/sub-tropical precipitation via forcing anomalies in the Walker circulation56,57. Climatologically, the strongest upper-level divergence centers (also known as the ascending branches of the Walker circulation) during the boreal summer are located in the western Pacific and eastern Indian Oceans and their subsiding branches are located in the eastern Pacific, southwestern Indian, and Atlantic Oceans (Fig. S8). These upper-level divergence centers coincide with the strong monsoon-driven convection across Asia. During Niño3.4+, the ascending (subsiding) branches of the Walker circulation in the western Pacific and eastern Indian (eastern Pacific and south Atlantic) weaken, leading to anomalous upper level convergence (divergence) anomalies that are reflected in the positive (negative) VP anomalies (Fig. 7a). Such changes in the tropical circulations weaken boreal summer monsoons, reduce low-level moisture convergence and consequently, support drier conditions over those regions (Fig. 7a, e). The associated anomalies in the South Atlantic high also induce changes in the trade winds over the equatorial Atlantic which influence moisture supply over AMZ, CAM, and WAF (Fig. 7e). The co-occurrence of AtlNiño with Niño3.4+ noticeably amplifies the positive VP anomalies over WAF during Niño3.4+ and reduces the anomalous ascent of the Walker circulation over CAM and AMZ, (Figs. 5a and 7f). These circulation changes along with cooler than normal SSTs in the region lead to reduced moisture convergence, expanding the precipitation deficits over these regions relative to during Niño3.4+ (Figs. 4e, f and 7e, f). Co-occurring Niño3.4+/TNA exhibit the strongest and most widespread positive VP anomalies over the studied regions that influence the large-scale monsoon circulations (Fig. 7a, c) and low-level moisture availability (Fig. 7e, g), which further intensify the strength of Niño3.4+-induced drying as reflected in Figs. 4g and 5b. Earlier studies also note that TNA influences precipitation over the African regions by altering the northward extent of the West African Monsoon49 and moisture transport from the Atlantic Ocean and Gulf of Guinea58, and over CAM through the modulation of low-level moisture convergence over the Caribbean region and the strength of the Atlantic northeasterly trades59. The main influence of IOD+ is seen over the African and Asian regions. 
IOD+ reduces (strengthens) the influence of Niño3.4+ on the upper-level circulation over Africa (EAS and SEA), which reduces (intensifies) the extent and intensity of dry anomalies (Fig. 5a, b). Our findings are consistent with previous studies that have found that IOD+ weakens the African Easterly Jet and strengthens the Tropical Easterly Jet, while Niño3.4+ generally drives the opposite response49. Similarly, the anomalously cool SSTs surrounding SEA during IOD+ contribute to reducing the low-level moisture convergence (Fig. 7e, h), and thereby amplify the regional drying associated with substantial weakening of Walker circulation60. Overall, we note that the simultaneous occurrence of other modes of ocean variability oftentimes intensifies and/or expands the large-scale circulation anomalies associated with Niño3.4+, resulting in more intense or widespread moisture deficits over several regions. ## Summary and conclusions Spatially compound extremes impose amplifying pressures on the disaster risk management resources and the global food system. As the impacts of such extremes are increasingly being recognized, recent studies have started to investigate their probability of occurrence and associated mechanisms7,12,14,18,24. While previous studies have focused on the mechanisms of compound temperature extremes across the mid-latitudes18, we examine the drivers of compound droughts across ten SREX regions that predominantly experience summer precipitation with high variability, identified based on the Shannon Entropy index. We use the 38-year observational record and an 1800-year CESM preindustrial climate simulation to examine the characteristics of compound droughts and the influence of natural ocean variability modes. We identify 11 historical compound droughts in the observational records, of which seven are associated with strong El Niño conditions. In addition to the central role of El Niño in driving these events, our analysis based on observational and the preindustrial simulation demonstrates substantial influence of three other modes of ocean variability—IOD, TNA, and AtlNiño conditions—that amplify various characteristics of regional droughts and global occurrences of compound droughts. El Niño leads to a significant increase in the drought area and intensity over the largest number of regions relative to the other modes of natural variability (Figs. 3 and 5), and in turn, increase the probability of compound droughts by a factor of ~3, compared to their probability during neutral conditions (Fig. 6). Additionally, El Niño heightens the probability of widespread and severe droughts to 0.19 and 0.17, respectively, relative to 0 during neutral conditions. Other modes of natural variability show a varying influence on drought extent and intensity over specific regions and therefore, by themselves have an overall smaller impact on the probability of compound droughts compared to the impact of El Niño. The TNA mode has the largest influence among the three other modes, with TNA significantly amplifies drought area across CAM and SEA, and drought intensity over CAM and SAS during its co-occurrence with El Niño, contributing to a 2.5-fold, 3-fold, and 2.5-fold increase in probability of compound, widespread and severe droughts, respectively (Fig. 6). 
In contrast, because IOD+ dampens the influence of El Niño on drought area in EAF but amplifies it in SEA, its co-occurrence with Niño3.4+ leads to a relatively moderate 1.6-fold increase in the probability of compound, widespread, and severe droughts (Fig. 6). Overall, our analyses reveal the importance of considering other modes of ocean variability in addition to El Niño for assessing the risk, extent, and severity of compound droughts.

We highlight a few caveats and limitations of this study. First, because of the relatively small sample size of the precipitation record in several of the study regions, our analysis of the individual and combined influence of natural variability modes largely depends on the long preindustrial climate simulation. Second, although the CESM simulation largely captures the relationship between various modes of variability considered in this study, it demonstrates stronger than observed correlations between TNA and ENSO, and IOD and ENSO at different lead times. Third, while we utilize the CESM model, which is one of the most skillful climate models in representing El Niño conditions48, we do not investigate intermodel differences in the identified relationships that may arise due to varying representations of precipitation processes, natural variability characteristics and teleconnections. Fourth, we do not consider the potential lead-lag relationships between some of these modes of variability and their regional impacts on precipitation45,61,62,63. Efforts to comprehensively assess these relationships and interactions between modes on various timescales can support predictability efforts. In addition, our future work will also focus on investigating the physical processes underlying the interactions between these modes and the regional and global impacts of their co-occurrence.

Compound droughts have the potential to induce synchronous crop failures and simultaneously cause other impacts across various societal sectors in multiple regions, leading to cascading global consequences. Against the backdrop of the global interconnectivity of our socio-economic and physical systems, our study highlights the importance of considering the occurrence of and interactions between multiple modes of natural variability that represent the large-scale state of climate in characterizing compound drought risks and their impacts on global food security, rather than solely focusing on individual modes that drive region-specific droughts. Our study presents the first step towards understanding the factors that influence compound droughts and their characteristics, which can help us understand how they might change in response to the projected increases in extreme El Niño conditions47 and positive IOD conditions64. Understanding the factors that shape the characteristics of compound droughts has important implications for enhancing society's resilience to the multitude of impacts of droughts, including food insecurity and water scarcity. A better understanding of compound drought risks is relevant for helping agricultural insurance companies design more optimal crop insurance schemes, which are presently based on the historical probabilities of extreme events in individual regions without considering their spatial relationships. By identifying how interactions among different modes of natural variability can influence compound droughts, our study highlights the potential for seasonal prediction of such events to aid in the management of their impacts.
Several modes of SST variability have skillful predictions at varying lead times including up to 9-months for El Niño39, up to 6 months for the IOD40 and 4 months for tropical Atlantic Ocean SSTs41. Timely predictions of droughts and drought-induced shocks in agricultural production will help manage potential food insecurity in several vulnerable regions42. Additionally, predictions of such events have implications for international trade, where the agribusiness industry and grain producers can get enough time to minimize the economic losses due to anticipated disasters. ## Methods ### Data We primarily use precipitation from the widely-used high-resolution (0.25° × 0.25°) Climate Hazards group Infrared Precipitation with Stations (CHIRPS version 2) dataset (1981 to present). The CHIRPS daily precipitation dataset has been used for the assessment of daily, monthly, seasonal, and annual precipitation characteristics in several regions of the world65,66,67,68. CHIRPS blends satellite-based precipitation estimates with in situ observations, and models of terrain-based precipitation modification to provide high resolution, spatially-complete, and continuous long-term data from 1981 to present, providing distinct advantages over rain-gauge-based products that include variations in station density or remotely sensed data that have a limited temporal extent69,70. In order to establish the robustness of our findings, we also compare our analyses with data from the Climate Prediction Center (CPC; 0.5° × 0.5°) and Climatic Research Unit (CRU; 0.5° × 0.5°), by comparing the Standardized Precipitation Index (SPI) across all ten SREX regions from all three datasets (Fig. S9). The SPI from CHIRPS and CRU are strongly correlated ($$\rho$$ > 0.72) over all regions but CPC-based estimates exhibit comparatively lower correlations over some regions including EAF, WAF, SAS, and EAS. We find that CPC-based SPI does not capture documented droughts over AMZ71, SAS72 during the record breaking El Niño year 2015, and over SAS73, EAF74, and EAS75 in another well-known El Niño year 2009 (Fig. S9). Therefore, of these three datasets, we use CHIRPS for the remainder of our analysis. Further, while the Global Historical Climate Network has station-data availability over a longer period of time over some regions, we do not include it in this analysis due to the non-uniform density of stations across the study area, and temporal discontinuities in data availability. We obtain sea surface temperatures (SST) from the National Oceanic and Atmospheric Administration (NOAA) High Resolution (0.25° spatial resolution) Optimum Interpolation (OI) SST dataset version 2 (V2), which has temporal coverage from 1981 to present76. Although our observational analysis is based on precipitation from CHIRPS and SST from OI NOAA V2 due to their finer spatial resolution, we perform complementary analyses with the long-term observed precipitation from CRU77 (0.5° spatial resolution) and SSTs (2° spatial resolution) from Extended Reconstructed Sea Surface Temperature (ERSST) NOAA V544 during 1901–2018. Given the limited length of the observed record, we further characterize the influence of various SST variability modes on precipitation variability in the ten SREX regions using an 1800-year preindustrial simulation from the CESM46. Since the simulation has constant preindustrial climate forcing, it isolates the influence of unforced natural climate variability from the confounding influence of changing external climate forcings46. 
We select the CESM model simulation because it is one of the most skillful modern climate models in reproducing El Niño behavior and its teleconnections47,48. ### Selection of regions We examine compound droughts across ten SREX regions4,9,78, which are selected based on the similarity in their precipitation characteristics. Specifically, we consider regions that show high variability in summer precipitation and receive a majority of their precipitation during the summer season. To identify the subregions that meet these criteria, we compute the Shannon Entropy Index for summer season precipitation, which is a concept drawn from information theory to measure the variability of a random variable79. The Shannon Entropy index is defined as measure of variability and has been used in hydroclimatic studies to assess the spatial and temporal variability of precipitation time series80. The Shannon entropy H can be computed as80, $$H = - {\sum} {p_{\rm{i}}\log _2p_{\rm{i}}},$$ (1) where p is the probability of each ith observation of the variable time series. We restrict our analyses to regions that have high entropy values over more than 30% of the total domain. Only ten tropical and mid-latitude SREX regions meet this criterion. Within these regions, we only consider areas with entropy values exceeding 4.86, which is the median entropy value across the regions considered. ### Drought definitions We define drought at each grid cell based on SPI calculated with accumulated summer season (June–September; JJAS) precipitation. Following the method developed by McKee et al.81, the probability of accumulated JJAS precipitation from all season is transformed to a standard normal distribution. The estimated JJAS SPI is similar to the JJAS precipitation anomaly, but the standardization makes it comparable across space and time. The SPI time series is linearly de-trended to eliminate long-term trends and capture interannual precipitation variability. We define a grid cell under drought if its SPI is less than –1 standard deviation (σ) of the long-term (1981–2018) mean SPI. We define a region under drought if the fractional area experiencing drought (SPI < −1σ) within that region exceeds the 80th percentile of the seasonal drought area distribution. We choose the 80th percentile threshold to define a region under drought because it captures several documented droughts across various regions and, compared to higher percentile thresholds, it is relatively less sensitive to the length of observational records. Additionally, higher percentiles (>80th percentile) also substantially limit the drought events sample size, limiting the statistical robustness of our findings. The drought extent is defined as the fraction of the area within a region with SPI < −1σ and the drought intensity is defined as the area weighted-average SPI value over all the grid cells experiencing drought. We define compound droughts as at least three of ten SREX regions simultaneously experiencing droughts. We define widespread drought as events in which the fraction of total area across all ten regions simultaneously affected by drought exceeds the 90th percentile of the long-term average drought area. We define severe drought as events in which average SPI across all drought affected areas falls below the 10th percentile of the long-term average SPI. ### Multiple linear regression (MLR) We perform a MLR analysis to understand the individual influence of Niño3.4, TNA, IOD, and Atlantic Niño indices on SPI across all SREX regions. 
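The entropy screening and drought definitions above can be condensed into a short R sketch before turning to the regression details. This is illustrative only: the exact probability estimation behind Eq. (1) and the SPI fitting procedure are not spelled out here, so the simple binning and rank-based choices below are assumptions.

```r
# 1) Shannon entropy of a precipitation series, using empirical bin frequencies
shannon_entropy <- function(x, n_bins = 20) {
  p <- table(cut(x, breaks = n_bins)) / length(x)
  p <- p[p > 0]
  -sum(p * log2(p))
}

# 2) A simple empirical SPI for yearly JJAS totals: rank-based probabilities mapped
#    to a standard normal, then linearly detrended
simple_spi <- function(jjas_precip) {
  prob <- (rank(jjas_precip) - 0.5) / length(jjas_precip)
  spi  <- qnorm(prob)
  residuals(lm(spi ~ seq_along(spi)))        # remove the linear trend
}

# 3) A grid cell is in drought when SPI < -1; a region is in drought when its
#    drought-area fraction exceeds the 80th percentile of its own history
region_in_drought <- function(area_fraction) {
  area_fraction > quantile(area_fraction, 0.80)
}
```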
Using MLR, we compute the regression coefficients (slope) between SPI (dependent variable) and these SST-based indices (independent variable). To examine the multicollinearity in this multiple regression model, we estimate the variation inflation factor (VIF) corresponding to each independent variable82. We found relatively low VIFs for all four indices (TNA—1.05; Atlantic Niño—1.17; Niño3.4—1.46; IOD—1.27), which suggests a minimal concern of multicollinearity in our regression model. ### Natural variability modes The Niño3.4 index is used to define ENSO as the average SST anomalies over 5°S–5°N, 170°–120°W83. The TNA index is estimated as the average SST anomalies over 5.5°–23.5°N, 15°–57.5°W84. The Atlantic Niño (AtlNiño) index is calculated from average SST anomalies over 5°S–5°N and 20°W–0°85, and IOD is identified by using the Dipole Mode Index (DMI), which is calculated as the SST difference between the western (50°–70°E, 10°S–10°N) and eastern (90°–110°E, 10°S– Equator) equatorial Indian Ocean22,86. The spatial extent of all regions used to calculate these indices are highlighted in Fig. 1. All indices are calculated for the summer. Niño3.4+ refers to El Niño conditions when JJAS positive SST anomaly over the Niño3.4 region is >0.5σ. TNA and AtlNiño refer to cold phases of these indices that are identified based on negative JJAS SST anomalies (< −0.5σ) over their corresponding regions. IOD+ refers to positive IOD when JJAS DMI is >0.5σ. Since, we aim to investigate the relationship between modes of ocean variability and compound droughts on interannual timescales, we remove the climate change signal by detrending the observed timeseries of all modes, SSTs and SPI, which makes the identified relationships more comparable between observations and preindustrial simulations. To understand the influence of El-Niño and its interactions with other modes of natural variability on drought characteristics, we first categorize all available seasons in the observed record into Niño3.4+-only and co-occurring conditions. Niño3.4+-only conditions are defined as years when Niño3.4+ is active while all other modes are in their neutral phase (<±0.5). Co-occurring conditions are defined as years when Niño3.4+ co-occurs with AtlNiño, TNA, or IOD+ conditions. There are two Niño3.4+ and seven co-occurring conditions during the 38-year observed period. To get a larger distribution of compound droughts under various anomalous SST conditions, we examine these interactions in a 1800-year CESM preindustrial climate simulation. In addition, we categorize years based on the individual occurrences of each variability mode, and their combined occurrences with Niño3.4+ to understand their individual and combined influence on drought characteristics relative to neutral conditions. Neutral conditions are defined as years without any substantial phase of either of the four modes of ocean variability. Niño3.4+/AtlNiño, Niño3.4+/IOD+, and Niño3.4+/TNA refer to years when Niño3.4+ co-occur with AtlNiño, IOD+, and TNA, respectively, while the other modes are in their neutral conditions. We evaluate the lead correlations between each mode and JJAS SPI over study regions during 1901–2018 to assess the validity of using contemporaneous (JJAS) SSTs in each basin. Given that El Niño events typically peak in winter87, we examine correlations between the 4-month moving average of the Niño3.4 index starting from November of the previous year to September of the current year (Fig. S10). 
Although some regions show significant correlations at lag times of several months, they constitute a relatively small fraction of all the regions considered (~12%) (Fig. S10a). The area with significant correlations between JJAS(0) ("0" refers to the months of the current year) SPI and ENSO increases substantially with reduced lead time of the ENSO index. Specifically, ~40% of the studied area shows its strongest correlation with contemporaneous summer ENSO conditions49,50,88,89,90 (Fig. S10a). In addition, JJAS(0) SPI shows the strongest correlation with contemporaneous ENSO (Fig. S10b). Similarly, we assess the correlations of JJAS(0) SPI with the other modes of variability and find that the strongest and most widespread correlations across all regions are with the contemporaneous IOD and Atlantic Niño. The TNA index has its strongest correlations at a short lead time, though these correlations are not substantially different from those during the JJAS season (Fig. S10a, b). We also note that there are some contemporaneous and lagged correlations between ENSO and other modes of variability61,62,63 (Fig. S11). Consistent with previous studies, we find an insignificant contemporaneous correlation between co-occurring AtlNiño and ENSO62,63 but weak lead correlations up to 6 months in advance61. Further, we find insignificant correlations between TNA and ENSO on most timescales in observations. Correlations between the IOD and ENSO are strongest in JJAS (Fig. S11a). The simulations generally capture these relationships but indicate stronger-than-observed correlations between TNA and ENSO, and between the IOD and ENSO, at nearly all lead times (Fig. S11b). These lagged correlations between modes61,62,63 highlight the potential predictability of the modes and of their associated regional precipitation anomalies91 and warrant further investigation. However, our analyses are constrained to the influence of the contemporaneous states of all modes on regional precipitation, given their overall strongest and most widespread influence on precipitation in these regions. Our choice of using contemporaneous SSTs follows numerous studies that have identified the importance of contemporaneous Pacific, Atlantic, and Indian Ocean SST conditions for monsoons, which govern precipitation over a majority of these regions49,50,92,93.

### Statistical significance

We use a permutation test to assess the statistical significance of the differences in the means of drought characteristics during the occurrence of various combinations of natural ocean variability modes94. Permutation tests are increasingly used to estimate the significance of statistical analyses95. The non-parametric permutation test does not make any assumptions about the sample size or distribution of the data, and is therefore suitable for a variety of situations, including comparing distributions of different sizes, as is the case here. We use the difference in the means of the two distributions as the test statistic. We first compute the test statistic from the two original distributions, then randomly permute the samples between the two distributions and re-estimate the test statistic from the resampled distributions. We repeat this procedure 10,000 times to obtain an empirical distribution of the test statistic, which represents the outcomes expected if the two sets of samples were drawn from the same distribution.
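The permutation procedure lends itself to a compact sketch (not the authors' code; the two sample arrays are hypothetical stand-ins for drought characteristics under different SST states). The comparison against the 5th/95th percentiles of the permuted statistics, described next, is included as the final step.

```python
import numpy as np

def permutation_test(sample_a, sample_b, n_perm=10_000, seed=0):
    """Permutation test for a difference in means between two samples.

    Returns the observed difference and the 5th/95th percentiles of the
    empirical null distribution built by randomly reassigning the pooled data.
    """
    rng = np.random.default_rng(seed)
    observed = sample_a.mean() - sample_b.mean()
    pooled = np.concatenate([sample_a, sample_b])
    n_a = len(sample_a)

    null = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(pooled)
        null[i] = shuffled[:n_a].mean() - shuffled[n_a:].mean()

    p5, p95 = np.percentile(null, [5, 95])
    return observed, p5, p95

# e.g. drought extent in co-occurring years vs. Niño3.4+-only years (hypothetical data)
rng = np.random.default_rng(42)
extent_cooccurring = rng.normal(0.35, 0.10, size=60)
extent_nino_only = rng.normal(0.25, 0.10, size=120)

obs, p5, p95 = permutation_test(extent_cooccurring, extent_nino_only)
significant = (obs > p95) or (obs < p5)
print(f"observed difference = {obs:.3f}, permuted 5th-95th = [{p5:.3f}, {p95:.3f}], "
      f"significantly different: {significant}")
```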
If the original test statistic is higher (lower) than the 95th (5th) percentile of the statistic from these permuted samples, we consider the mean of the distributions to be significantly different at the 5% level. ## Data availability All datasets used in the manuscript are publicly available and their sources are provided in the “Methods” section. ## Code availability The scripts developed to analyze these datasets can be made available on request from the corresponding author. ## References 1. 1. Hoegh-Guldberg, O. et al. in: Global Warming of 1.5 °C. An IPCC special report on the impacts of global warming of 1.5 °C above preindustrial levels and related global greenhouse gas emission pathways […]. (ed. Masson-Delmotte, V.) 175–311 (World Meteorological Organization, 2018). 2. 2. IPCC. Managing the risks of extreme events and disasters to advance climate change adaptation A Special Report of Working Groups I and II of the Intergovernmental Panel on Climate Change (eds. Field, C. B. et al.) (Cambridge University Press, 2012). 3. 3. National Academies of Science, Engineering, Medicine. Attribution of Extreme Weather Events in the Context of Climate Change (2016) https://doi.org/10.17226/21852 (2016). 4. 4. Seneviratne, S. et al. in Managing the Risk of Extreme Events and Disasters to Advance Climate Change Adaptation (eds. Field, C. B. et al.) 109–230 https://doi.org/10.2134/jeq2008.0015br (2012). 5. 5. Mazdiyasni, O. & AghaKouchak, A. Substantial increase in concurrent droughts and heatwaves in the United States. Proc. Natl Acad. Sci. USA 112, 11484–11489 (2015). 6. 6. Zscheischler, J. & Seneviratne, S. I. Dependence of drivers affects risks associated with compound events. Sci. Adv. 3, 1–11 (2017). 7. 7. Sarhadi, A., Ausín, M. C., Wiper, M. P., Touma, D. & Diffenbaugh, N. S. Multidimensional risk in a nonstationary climate: joint probability of increasingly severe warm and dry conditions. Sci. Adv. 4, eaau3487 (2018). 8. 8. Zhou, P. & Liu, Z. Likelihood of concurrent climate extremes and variations over China. Environ. Res. Lett. 13, 094023 (2018). 9. 9. Pfleiderer, P., Schleussner, C. F., Kornhuber, K. & Coumou, D. Summer weather becomes more persistent in a 2 °C world. Nat. Clim. Change 9, 666–671 (2019). 10. 10. Mills, E. Insurance in a climate of change. Science 309, 1040–1044 (2005). 11. 11. Leonard, M. et al. A compound event framework for understanding extreme impacts. Wiley Interdiscip. Rev. Clim. Change 5, 113–128 (2014). 12. 12. Tigchelaar, M., Battisti, D. S., Naylor, R. L. & Ray, D. K. Future warming increases probability of globally synchronized maize production shocks. Proc. Natl Acad. Sci. USA 115, 6644–6649 (2018). 13. 13. Mehrabi, Z. & Ramankutty, N. Synchronized failure of global crop production. Nat. Ecol. Evol. 3, 780–786 (2019). 14. 14. Gaupp, F., Hall, J., Hochrainer-stigler, S. & Dadson, S. Changing risks of simultaneous global breadbasket failure. Nat. Clim. Change https://doi.org/10.1038/s41558-019-0600-z (2019). 15. 15. von Braun, J. & Tadesse, G. Global food price volatility and spikes: an overview of costs, causes, and solutions. ZEF Discuss. Pap. Dev. Policy 161 (2012). 16. 16. Porter, J. R. et al. Food security and food production systems. Climate Change 2014 Impacts, Adaptation Vulnerability Part A Glob. Sect. Asp. 485–534 https://doi.org/10.1017/CBO9781107415379.012 (2015). 17. 17. Kornhuber, K. et al. Extreme weather events in early summer 2018 connected by a recurrent hemispheric wave-7 pattern. Environ. Res. Lett. 14, 054002 (2019). 18. 18. 
Kornhuber, K. et al. Amplified Rossby waves enhance risk of concurrent heatwaves in major breadbasket regions. Nat. Clim. Change https://doi.org/10.1038/s41558-019-0637-z (2019). 19. 19. Lau, W. K. M. & Kim, K. M. The 2010 Pakistan flood and Russian heat wave: teleconnection of hydrometeorological extremes. J. Hydrometeorol. 13, 392–403 (2012). 20. 20. Wegren, S. Food security and Russia’s 2010 drought. Eurasia. Geogr. Econ. 52, 140–156 (2011). 21. 21. Svanidze, M. & Götz, L. Determinants of spatial market efficiency of grain markets in Russia. Food Policy 89, 101769 (2019). 22. 22. Singh, D. et al. Climate and the Global Famine of 1876–78. J. Clim. 31, 9445–9467 (2018). 23. 23. Davis, M. Late Victorian holocausts: El Niño famines and the Making of the Third World (Verso Books, 2002). 24. 24. Anderson, W. B., Seager, R., Baethgen, W., Cane, M. & You, L. Synchronous crop failures and climate-forced production variability. Sci. Adv. 5, 1–10 (2019). 25. 25. Anderson, W., Seager, R., Baethgen, W. & Cane, M. Trans-Pacific ENSO teleconnections pose a correlated risk to agriculture. Agric. Meteorol. 262, 298–309 (2018). 26. 26. Kitoh, A. et al. Monsoons in a changing world: a regional perspective in a global context. J. Geophys. Res. Atmos. 118, 3053–3065 (2013). 27. 27. Wang, B., Liu, J., Kim, H. J., Webster, P. J. & Yim, S. Y. Recent change of the global monsoon precipitation (1979–2008). Clim. Dyn. 39, 1123–1135 (2012). 28. 28. Lyon, B. & Barnston, A. G. ENSO and the spatial extent of interannual precipitation extremes in tropical land areas. J. Clim. 18, 5095–5109 (2005). 29. 29. Mason, S. J. & Goddard, L. Probabilistic precipitation anomalies associated with ENSO. Bull. Am. Meteorol. Soc. 82, 619–638 (2001). 30. 30. Ashok, K., Guan, Z., Saji, N. H. & Yamagata, T. Individual and combined influences of ENSO and the Indian Ocean Dipole on the Indian summer monsoon. J. Clim. 17, 3141–3155 (2004). 31. 31. Cherchi, A. & Navarra, A. Influence of ENSO and of the Indian Ocean Dipole on the Indian summer monsoon variability. Clim. Dyn. 41, 81–103 (2013). 32. 32. Nicholson, S. E. & Kim, J. The relationship of the el MNO-southern oscillation to African rainfall. Int. J. Climatol. 17, 117–135 (1997). 33. 33. Parhi, P., Giannini, A., Gentine, P. & Lall, U. Resolving contrasting regional rainfall responses to EL Niño over tropical Africa. J. Clim. 29, 1461–1476 (2016). 34. 34. Wang, H. The Instability of the East Asian Summer Monsoon-ENSO Relations. Adv. Atmos. Sci. 19, 1–11 (2002). 35. 35. Zheng, J., Li, J. & Feng, J. A dipole pattern in the Indian and Pacific oceans and its relationship with the East Asian summer monsoon. Environ. Res. Lett. 9, 074006 (2014). 36. 36. Kripalani, R. H. & Kulkarni, A. Rainfall variability over South-East Asia—connections with Indian monsoon and Enso extremes: new perspectives. Int. J. Climatol. 17, 1155–1168 (1997). 37. 37. Zeng, N. et al. Causes and impacts of the 2005 Amazon drought. Environ. Res. Lett. 3, 014002 (2008). 38. 38. Yoon, J. H. & Zeng, N. An Atlantic influence on Amazon rainfall. Clim. Dyn. 34, 249–264 (2010). 39. 39. Barnston, A. G., Tippett, M. K., L’Heureux, M. L., Li, S. & Dewitt, D. G. Skill of real-time seasonal ENSO model predictions during 2002-11: Is our capability increasing? Bull. Am. Meteorol. Soc. 93, 631–651 (2012). 40. 40. Shi, L. et al. How predictable is the indian ocean dipole? Mon. Weather Rev. 140, 3867–3884 (2012). 41. 41. Repelli, C. A. & Nobre, P. Statistical prediction of sea-surface temperature over the tropical Atlantic. Int. J. 
Climatol. 24, 45–55 (2004). 42. 42. Goddard, L. & Dilley, M. El Niño: Catastrophe or opportunity. J. Clim. 18, 651–665 (2005). 43. 43. Okumura, Y. & Shang-Ping, X. Interaction of the Atlantic equatorial cold tongue and the African MonsoonJ. Clim. 17, 3589–3602 (2004). 44. 44. Huang, B. et al. Extended reconstructed Sea surface temperature, Version 5 (ERSSTv5): upgrades, validations, and intercomparisons. J. Clim. 30, 8179–8205 (2017). 45. 45. Wang, C. Three-ocean interactions and climate variability: a review and perspective. Clim. Dyn. 53, 5119–5136 (2019). 46. 46. Kay, J. E. et al. The community earth system model (CESM) large ensemble project: a community resource for studying climate change in the presence of internal climate variability. Bull. Am. Meteorol. Soc. 96, 1333–1349 (2015). 47. 47. Cai, W. et al. Increasing frequency of extreme El Niño events due to greenhouse warming. Nat. Clim. Change 4, 111–116 (2014). 48. 48. Fasullo, J. T., Otto-Bliesner, B. L. & Stevenson, S. ENSO’s changing influence on temperature, precipitation, and wildfire in a warming Climate. Geophys. Res. Lett. 45, 9216–9225 (2018). 49. 49. Preethi, B., Sabin, T. P., Adedoyin, J. A. & Ashok, K. Impacts of the ENSO Modoki and other tropical indo-pacific climate-drivers on African rainfall. Sci. Rep. 5, 1–15 (2015). 50. 50. Kumar, K. K., Rajagopalan, B., Hoerling, M., Bates, G. & Cane, M. Unraveling the mystery of Indian monsoon failure during El Niño. Science 314, 115–119 (2006). 51. 51. Lin, J. & Qian, T. A new picture of the global impacts of El Nino-Southern oscillation. Sci. Rep. 9, 1–7 (2019). 52. 52. Saji, N., Goswami, B., Vinayachandran, P. & Yamagata, T. A dipole mode in the Tropical Ocean. Nature 401, 360–363 (1999). 53. 53. Zhang, W., Wang, Y., Jin, F., Stuecker, M. F. & Turner, A. G. Impact of different El Niño types on the El Niño/IOD relationship. Geophys. Res. Lett. https://doi.org/10.1002/2015GL065703.Received (2015). 54. 54. Lee Drbohlav, H. K., Gualdi, S. & Navarra, A. A diagnostic study of the Indian Ocean dipole mode in El Niño and non-El Niño years. J. Clim. 20, 2961–2977 (2007). 55. 55. Roxy, M., Gualdi, S., Drbohlav, H. K. L. & Navarra, A. Seasonality in the relationship between El Nino and Indian Ocean dipole. Clim. Dyn. 37, 221–236 (2011). 56. 56. Glantz, M. Impacts of El Nino and La Nina on Climate and Society 2nd edn (Cambridge Press, 2001). 57. 57. Sohn, B. J., Yeh, S. W., Lee, A. & Lau, W. K. M. Regulation of atmospheric circulation controlling the tropical Pacific precipitation change in response to CO2 increases. Nat. Commun. 10, 1–8 (2019). 58. 58. Broman, D., Rajagopalan, B., Hopson, T. & Gebremichael, M. Spatial and temporal variability of East African Kiremt season precipitation and large-scale teleconnections. Int. J. Climatol. 40, 1241–1254 (2020). 59. 59. Giannini, A., Kushnir, Y. & Cane, M. A. Interannual variability of Caribbean rainfall, ENSO, and the Atlantic Ocean. J. Clim. 13, 297–311 (2000). 60. 60. Nur’utami, M. N. & Hidayat, R. Influences of IOD and ENSO to Indonesian rainfall variability: role of atmosphere-ocean interaction in the Indo-pacific sector. Proc. Environ. Sci. 33, 196–203 (2016). 61. 61. Jia, F. et al. Weakening Atlantic Niño–Pacific connection under greenhouse warming. Sci. Adv. 5, 1–10 (2019). 62. 62. Zebiak, S. E. Air-sea interaction in the equatorial Atlantic region. J. Clim. 6, 1567–1586 (1993). 63. 63. Ruiz-Barradas, A., Carton, J. A. & Nigam, S. Structure of Interannual-to-Decadal climate variability in the tropical Atlantic sector. J. Clim. 
13, 3285–3297 (2000). 64. 64. Cai, W. et al. Increased frequency of extreme Indian ocean dipole events due to greenhouse warming. Nature 510, 254–258 (2014). 65. 65. Dunning, C. M., Black, E. & Allan, R. P. Later wet seasons with more intense rainfall over Africa under future climate change. J. Clim. 31, 9719–9738 (2018). 66. 66. Urrea, V., Ochoa, A. & Mesa, O. Seasonality of rainfall in Colombia. Water Resour. Res. 55, 4149–4162 (2019). 67. 67. Vigaud, N. & Giannini, A. West African convection regimes and their predictability from submonthly forecasts. Clim. Dyn. 52, 7029–7048 (2019). 68. 68. Wainwright, C. M. et al. Eastern African Paradox’ rainfall decline due to shorter not less intense long rains. npj Clim. Atmos. Sci. 2, 1–9 (2019). 69. 69. Bai, L., Shi, C., Li, L., Yang, Y. & Wu, J. Accuracy of CHIRPS satellite-rainfall products over mainland China. Remote Sens. 10, 362 (2018). 70. 70. Funk, C. et al. The climate hazards infrared precipitation with stations—a new environmental record for monitoring extremes. Sci. Data 2, 1–21 (2015). 71. 71. Jiménez-Muñoz, J. C. et al. Record-breaking warming and extreme drought in the Amazon rainforest during the course of El Niño 2015–2016. Sci. Rep. 6, 1–7 (2016). 72. 72. Aadhar, S. & Mishra, V. Data descriptor: high-resolution near real-time drought monitoring in South Asia. Sci. Data 4, 1–14 (2017). 73. 73. Neena, J. M., Suhas, E. & Goswami, B. N. Leading role of internal dynamics in the 2009 Indian summer monsoon drought. J. Geophys. Res. Atmos. 116, 1–14 (2011). 74. 74. Mwangi, E., Wetterhall, F., Dutra, E., Di Giuseppe, F. & Pappenberger, F. Forecasting droughts in East Africa. Hydrol. Earth Syst. Sci. 18, 611–620 (2014). 75. 75. Barriopedro, D., Gouveia, C. M., Trigo, R. M. & Wang, L. The 2009/10 drought in China: possible causes and impacts on vegetation. J. Hydrometeorol. 13, 1251–1267 (2012). 76. 76. Reynolds, R. W. et al. Daily high-resolution-blended analyses for sea surface temperature. J. Clim. 20, 5473–5496 (2007). 77. 77. Harris, I., Jones, P. D., Osborn, T. J. & Lister, D. H. Updated high-resolution grids of monthly climatic observations—the CRU TS3.10 Dataset. Int. J. Climatol. 34, 623–642 (2014). 78. 78. Ciavarella, A., Stott, P. & Lowe, J. Early benefits of mitigation in risk of regional climate extremes. Nat. Clim. Change 7, 326–330 (2017). 79. 79. Shannon, C. A mathematical theory of communication. Bell. Syst. Tech. J. 27, 623–656 (1948). 80. 80. Mishra, A. K., Özger, M. & Singh, V. P. An entropy-based investigation into the variability of precipitation. J. Hydrol. 370, 139–154 (2009). 81. 81. McKee, T. B., Doesken, N. J. & Kleist, J. The relationship of drought frequency and duration to time scales. In Proceedings on Eighth Conference on Applied Climatology 179–184 (American Meteorological Society, 1993). 82. 82. Fox, J. & Monette, G. Generalized collinearity diagnostics. J. Am. Stat. Assoc. 87, 178–183 (1992). 83. 83. Rayner, N. A. et al. Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century. J. Geophys. Res. D 108, D14 (2003). 84. 84. Enfield, D. B. & Alfaro, E. J. The dependence of Caribbean rainfall on the interaction of the tropical Atlantic and Pacific Oceans. J. Clim. 12, 2093–2103 (1999). 85. 85. Sahastrabuddhe, R., Ghosh, S., Saha, A. & Murtugudde, R. A minimalistic seasonal prediction model for Indian monsoon based on spatial patterns of rainfall anomalies. Clim. Dyn. 52, 3661–3681 (2019). 86. 86. Saji, N. H. & Yamagata, T. 
Possible impacts of Indian Ocean Dipole mode events on global climate. Clim. Res. 25, 151–169 (2003). 87. 87. Timmermann, A. et al. El Niño–Southern oscillation complexity. Nature 559, 535–545 (2018). 88. 88. Supari et al. ENSO modulation of seasonal rainfall and extremes in Indonesia. Clim. Dyn. 51, 2559–2580 (2018). 89. 89. Srivastava, G., Chakraborty, A. & Nanjundiah, R. S. Multidecadal see-saw of the impact of ENSO on Indian and West African summer monsoon rainfall. Clim. Dyn. 52, 6633–6649 (2019). 90. 90. Wang, B. et al. Northern Hemisphere summer monsoon intensified by mega-El Niño/southern oscillation and Atlantic multidecadal oscillation. Proc. Natl Acad. Sci. USA 110, 5347–5352 (2013). 91. 91. Jong, B. T., Ting, M., Seager, R. & Anderson, W. B. ENSO teleconnections and impacts on U.S. summertime temperature during a multiyear la Niña life cycle. J. Clim. 33, 6009–6024 (2020). 92. 92. Wang, B., Xiang, B. & Lee, J. Y. Subtropical High predictability establishes a promising way for monsoon and tropical storm predictions. Proc. Natl Acad. Sci. USA 110, 2718–2722 (2013). 93. 93. Wang, B., Li, J. & He, Q. Variable and robust East Asian monsoon rainfall response to El Niño over the past 60 years (1957–2016). Adv. Atmos. Sci. 34, 1235–1248 (2017). 94. 94. Good, P. I. Permutation Tests: A Practical Guide to Resampling Methods for Testing Hypotheses (Springer, 1994). 95. 95. DelSole, T., Trenary, L., Tippett, M. K. & Pegion, K. Predictability of week-3-4 average temperature and precipitation over the contiguous United States. J. Clim. 30, 3499–3512 (2017). ## Acknowledgements We would like to thank the National Oceanic and Atmospheric Administration (NOAA), National Center for Atmospheric Research (NCAR), Climatic Research Unit (CRU) University of East Anglia, and Climate Hazards Center UC Santa Barbara for archiving and enabling public access to their data. We thank Washington State University for the startup funding that has supported J.S. and D.S. W.B.A. acknowledges funding from Earth Institute Postdoctoral Fellowship. M.A. was supported by the National Climate‐Computing Research Center, which is located within the National Center for Computational Sciences at the ORNL and supported under a Strategic Partnership Project, 2316‐T849‐08, between DOE and NOAA. This manuscript has been co-authored by employees of Oak Ridge National Laboratory, managed by UT Battelle, LLC, under contract DE-AC05-00OR22725 with the U.S. Department of Energy (DOE). The publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). ## Author information Authors ### Contributions All authors contributed to the design of the study. J.S. collected the data and performed the analyses. All authors were involved in discussions of the results. J.S. and D.S. wrote the manuscript with feedback from all authors. ### Corresponding author Correspondence to Jitendra Singh. ## Ethics declarations ### Competing interests The authors declare no competing interests. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. 
## Rights and permissions Reprints and Permissions Singh, J., Ashfaq, M., Skinner, C.B. et al. Amplified risk of spatially compounding droughts during co-occurrences of modes of natural ocean variability. npj Clim Atmos Sci 4, 7 (2021). https://doi.org/10.1038/s41612-021-00161-2
2021-02-26 20:40:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5471985340118408, "perplexity": 8569.394599589055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357935.29/warc/CC-MAIN-20210226175238-20210226205238-00518.warc.gz"}
http://math.stackexchange.com/questions/35899/relation-between-independent-increments-and-markov-property
# Relation between independent increments and Markov property

Independent increments and the Markov property do not imply each other. I was wondering • if being one makes a process closer to being the other? • if there are cases where one implies the other? Thanks and regards!

-

A process with independent increments is always a Markov process. To see this, assume that $(X_n)_{n\ge0}$ has independent increments, that is, $X_0=0$ and $X_n=Y_1+\cdots+Y_n$ for every $n\ge1$, where $(Y_n)_{n\ge1}$ is a sequence of independent random variables. The filtration of $(X_n)_{n\ge0}$ is $(\mathcal{F}^X_n)_{n\ge0}$ with $\mathcal{F}^X_n=\sigma(X_k;0\le k\le n)$. Note that $$\mathcal{F}^X_n=\sigma(Y_k;1\le k\le n),$$ hence $X_{n+1}=X_n+Y_{n+1}$ where $X_n$ is $\mathcal{F}^X_n$ measurable and $Y_{n+1}$ is independent of $\mathcal{F}^X_n$. This shows that the conditional distribution of $X_{n+1}$ conditionally on $\mathcal{F}^X_n$ is $$\mathbb{P}(X_{n+1}\in\mathrm{d}y|\mathcal{F}^X_n)=Q_n(X_n,\mathrm{d}y), \quad \mbox{where}\quad Q_n(x,\mathrm{d}y)=\mathbb{P}(x+Y_{n+1}\in\mathrm{d}y).$$ Hence $(X_n)_{n\ge0}$ is a Markov chain with transition kernels $(Q_n)_{n\ge0}$.

@Didier: Thanks! But I think it doesn't because of the following. First $P(X(t_3) | X(t_2), X(t_1)) = P(X(t_3)-X(t_2)|X(t_2), X(t_2)-X(t_1))$. Next $P(X(t_3)-X(t_2)|X(t_2), X(t_2)-X(t_1)) = P(X(t_3)-X(t_2)|X(t_2))$, if and only if $X(t_3)-X(t_2)$ and $X(t_2)-X(t_1)$ are conditionally independent given $X(t_2)$, which cannot be deduced merely from $X(t_3)-X(t_2)$ and $X(t_2)-X(t_1)$ being independent. Any mistake? – Tim Apr 29 '11 at 20:54 What is $P(W|U,V)$ for three random variables $U$, $V$, $W$? – Did Apr 29 '11 at 22:43 Why should "independent increments" require that $Y_j$ are independent of $X_0$? $X_0$ is not an increment. – Robert Israel Apr 29 '11 at 23:08 @Didier: Thanks! 1) I still have no clue how to explain and correct (2) in my last comment. Would you point me to where, in what texts/materials? 2) Generally, when speaking of increments of a stochastic process, is $X_0$ an increment? Does the definition of an independent-increment process require $X_0=0$? – Tim May 3 '11 at 12:29 Invoking "smartness" here is a way to avoid following the explicit suggestions I made, which would lead you to understand the problem. It is also a cheap shot at my advice, considering the time and work I spent on your questions. // Since once again you are stopped by matters of definitions I suggest coming back to the definitions: consider random variables $\xi$ and $\eta$ and a sigma-algebra $G$ such that $\xi$ is independent of $H=\sigma(\eta)\vee G$. Why is $E(u(\xi+\eta)\mid H)=E(u(\xi+\eta)\mid\eta)$ for every bounded $u$? Why is this related to your question? .../... – Did Nov 6 '11 at 8:48
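For a concrete instance of the kernels above (an illustration, not taken from the thread): suppose the $Y_k$ are i.i.d. standard normal variables, so that $(X_n)_{n\ge0}$ is a Gaussian random walk. Then for every $n$, $$Q_n(x,\mathrm{d}y)=\frac{1}{\sqrt{2\pi}}e^{-(y-x)^2/2}\,\mathrm{d}y,$$ i.e. the chain is time-homogeneous with Gaussian one-step transitions. The independence of the increments is exactly what makes the conditional law of $X_{n+1}$ depend on the past only through the current value $X_n$.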
2016-06-27 02:45:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9062880873680115, "perplexity": 229.43696224200997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00053-ip-10-164-35-72.ec2.internal.warc.gz"}
http://www.sciencemadness.org/talk/viewthread.php?tid=71282&page=2
Not logged in [Login - Register] Sciencemadness Discussion Board » Special topics » Technochemistry » Ostwald style nitric production Select A Forum Fundamentals   » Chemistry in General   » Organic Chemistry   » Reagents and Apparatus Acquisition   » Beginnings   » Miscellaneous   » The Wiki Special topics   » Technochemistry   » Energetic Materials   » Biochemistry   » Radiochemistry   » Computational Models and Techniques   » Prepublication   » References Non-chemistry   » Forum Matters   » Legal and Societal Issues   » Whimsy   » Detritus   » The Moderators' Lounge Pages:  1  2    4 Author: Subject: Ostwald style nitric production Magpie lab constructor Posts: 5223 Registered: 1-11-2003 Location: USA Member Is Offline Mood: pumped Here's some recollections from my lab experience, FYI: 1. A porous catalyst support can be made from landscaping lava rock. I made some 4-8 mesh as a support for H3PO4 catalyst. It's extremely hard, however, and difficult to reduce to the desired particle size. Pumice is available as an abrasive at pool supply stores. This might make a good catalyst support. 2. I made anhydrous NH3 by boiling it out of a water solution then passing it through a column loaded with KOH flakes. The single most important condition for a successful synthesis is good mixing - Nicodem Chemetix Hazard to Others Posts: 111 Registered: 23-9-2016 Location: Oztrayleeyah Member Is Offline Mood: Wavering between lucidity and madness What I have learnt from this build is the ammonia oxidation reaction is a robust one. It doesn't need a finely tuned set of operating parameters. And there are probably dozens of suitable catalysts out there, I had a few lined up to test and it turned out cobalt was active enough and so I stuck with it. Nickle oxide was my next bet. The support bed can be just about anything- dirt! The catalyst doesn't seem to care what it's on so long as it holds up at the temperatures and gives you enough support, but volcanic rock or pumice (also a volcanic rock) would be perfect. I was looking to smash up and grind a piece of kiln furniture, and I did try breaking a fire brick but got more dust than screenings. The tube - how many other options are there? I'd say the reaction can be lowered to 500C, which is borosilicate range of working temps, just pack more catalyst into a longer tube to allow for the slower rate of reaction. I have eyed off a piece of tubing used for thermocouples, a pyro-ceramic of some sorts. A suitable alternative would be something like a copper tube with some glass tape wound around it( automotive exhaust shop), make a paint with sodium silicate and some silica flour(inhalation hazard) from a ceramics supply. Then some nichrome wire around that and more insulation over the top. I'd say if you get fairly anhydrous NH3 without the CO2 at an optimum air mix, the reaction might just self sustain the heating. Nickle oxide I strongly suspect to be more active than the cobalt, but that's a hunch at this stage. [Edited on 25-12-2016 by Chemetix] Chemetix Hazard to Others Posts: 111 Registered: 23-9-2016 Location: Oztrayleeyah Member Is Offline Mood: Wavering between lucidity and madness Quote: Originally posted by WGTR Let me know the inner diameter of the catalyst tube. Sorry I forgot to answer that; 8mm ID. 
Chemetix Hazard to Others Posts: 111 Registered: 23-9-2016 Location: Oztrayleeyah Member Is Offline Mood: Wavering between lucidity and madness

[Edited on 26-12-2016 by Chemetix]

Magpie lab constructor Posts: 5223 Registered: 1-11-2003 Location: USA Member Is Offline Mood: pumped

Quote: Originally posted by Chemetix The reactor tube runs into the converted 2L sep. funnel which admits air via the custom condenser fitting.

I'm a little confused here: are you admitting air into the 2L sep funnel absorber as well as ahead of the reactor?

Quote: Originally posted by Chemetix This shot shows where the ammonia air mixture is fed into the reaction zone, the nice red glow is transmitted up the quartz.

Clearly you are admitting air here ahead of the reactor. I assume this is the slightly pressurized source from the compressor?

The single most important condition for a successful synthesis is good mixing - Nicodem

Chemetix Hazard to Others Posts: 111 Registered: 23-9-2016 Location: Oztrayleeyah Member Is Offline Mood: Wavering between lucidity and madness

Yes I can add air ahead of the reactor, in fact you need to: 4NH3 + 5O2 => 4NO + 6H2O

In the sep funnel the reaction: 2NO + O2 => 2NO2 means you should need to add air as well. The ammonia/air mixture was quite oxygen rich and so the unreacted O2 provided the O2 needed in the sep funnel. This meant I was only diluting the reaction with more air and slowing the next reaction down: 2NO2 => N2O4. The rate of dimerization is proportional to concentration. So I turned off the secondary air inlet and noticed the colour became darker.

Edit - sorry, that makes it sound like the N2O4 is the darker product... it's just that there was more NO2 by volume and hence darker.

[Edited on 26-12-2016 by Chemetix]

phlogiston International Hazard Posts: 1009 Registered: 26-4-2008 Location: Neon Thorium Erbium Lanthanum Neodymium Sulphur Member Is Offline Mood: pyrophoric

Do you have any idea if some of the ammonia is able to pass the reactor unreacted? That would result in acid containing dissolved ammonium nitrate.

----- "If a rocket goes up, who cares where it comes down, that's not my concern said Wernher von Braun" - Tom Lehrer

Chemetix Hazard to Others Posts: 111 Registered: 23-9-2016 Location: Oztrayleeyah Member Is Offline Mood: Wavering between lucidity and madness

New Catalyst

Quote: Originally posted by phlogiston Do you have any idea if some of the ammonia is able to pass the reactor unreacted? That would result in acid containing dissolved ammonium nitrate.

In the preliminary trials I did with the catalyst on glass fiber, the support melted and there was little surface area to react with, and ammonia started coming over past the reaction zone. White fumes appeared as the acid and base began to react. This happened again today as I had made some modifications to the setup. I dried the ammonia/air mixture with a reflux condenser this time and noticed there was more glow coming from the reactor, and then things started to change. The oxidation chamber lost colour and there was no condensation forming. I lifted the condenser coil out and whiffed the emanating fumes lightly - ammonia. The catalyst had died. I shut everything down and cleaned out the reactor; the support had a slightly glazed look and a greyish colour. There was oxide residue on the walls and I used some conc. HCl to remove it. What was telling and confusing was the smell of sulfide. Sulfur had killed the catalyst but where did it come from?
It gave me the chance to try another variation I had in mind. Broken bath tile support and nickle oxide. Not only did the NiO work, it worked well. The reaction zone glowed much hotter and pulsed hotter with the higher flow rates from the air/ ammonia generator. I'd bet that this would self sustain once it has got to this temperature. Will try next run. This is the condenser to dry the ammonia/air stream. The pulsing glow happens due to the concentrated ammonia solution in the condenser falling back into the flask as drops, the cold concentrated solution emits gas as it hits the hot solution of urea. The dried air/ammonia "burns" hotter than with the water rich vapour I was using. 1- it meant the high temperatures could have caused contaminants in the expanded clay balls to react with the catalyst or fuse with the catalyst, killing it. 2- it makes more concentrated acid without the introduction of water into the stream. Concentrated nitric fumes like crazy in moist air. The oxidation chamber was filled with acid mist this time. Concentrated acid fumes can be seen leaving the absorption tower. I now understand the need for multiple towers used in industry. I ran out of time to titrate the product, but it took more Bi-Carb to neutralise a similar quantity of the last batch, and the tower solution gave a more pronounced reaction with bicarb despite far less operating time. [Edited on 27-12-2016 by Chemetix] Jstuyfzand Hazard to Others Posts: 133 Registered: 16-1-2016 Location: Netherlands Member Is Offline Mood: Learning, Sorta. POTENTIAL! Looking great Chemetix, great work! Have you tried MnO2 as the catalyst? Fulmen International Hazard Posts: 773 Registered: 24-9-2005 Member Is Offline Mood: Bored Outstanding work, truly inspiring. Cobalt seems to be the ideal catalyst for this, from what I can tell it's in commercial use today. And a heck of a lot easier to get hold off than platinum/rhodium. If nickel was anywhere near this good, wouldn't we've heard about it by now? Anyway, the biggest challenge as I see it is the ammonia-generation. I like your approach, but I still can't help thinking there's a better one out there. A kipp-style generator would be perfect, but that's not as easy as it sounds. I'm not big on glassware, but I wonder if it isn't possible to construct a compact design from metal, at least the ammonia and reaction zones. We're not banging rocks together here. We know how to put a man back together. j_sum1 International Hazard Posts: 2596 Registered: 4-10-2014 Location: Oz Member Is Offline Mood: inert even with vigorous stirring. My standard ammonia generator is NaOH drain cleaner and ammonium sulfate fertiliser. Neither is too expensive. I don't see why a different ammonia feed would be problematic. Chemetix Hazard to Others Posts: 111 Registered: 23-9-2016 Location: Oztrayleeyah Member Is Offline Mood: Wavering between lucidity and madness Quote: Originally posted by Fulmen If nickel was anywhere near this good, wouldn't we've heard about it by now? .... but I wonder if it isn't possible to construct a compact design from metal, at least the ammonia and reaction zones. It's funny, I've sort of made a career out of doing things that are assumed to be obvious to everyone else as either 'they've tried and it doesn't work' or 'if it worked like that they'd already be doing it that way'. Nickel seemed an obvious choice really; when you can't have Pt or Pd the next on the list is Ni. But because of the German patent I tried cobalt first. 
Actually, I tried red iron oxide first and it didn't seem to do anything. The problem with these alternative catalysts is after shut down there is concentrated nitric acid around to basically chew off your catalyst in the reactor. Platinum puts up more of a fight in this regard. And on to the other point, the glass porn. The short answer to 'is there a way to do it in metal' and I'd say yes. 316 Stainless would have to be the next best thing, I have this in my workshop and I'm handy with a TIG. But that said I might give a more grass roots approach a go too. Bits of copper tubing- glass bottles with holes ground into them- lots of teflon tape. Maybe I'll leave that to some inventive backyarder to complete, because it's entirely doable. The glassware gives a very educational approach to the process. You really can see what is happening and when and get a feel for how much. And I hope the pics are encouraging and informative for the forum. I think the ammonia generator could have a few improvements made, I did a literature search for anything that can catalyze the urea decomposition reaction. Nothing useful so far, the research is focused around engine emission control measures. I found using glycerol or sugar with water I can raise the temperature and the rate of evolution...and the energy cost of generating it. I like the idea of just pour in the urea and water and no by products if you keep the ratios right. I'll draw up a schematic using my M.S.paint- 'Fu'; I should get a better sketch app one day... ps "Have you tried MnO2 as the catalyst?" Het spijt me, ik weet niet dat die MnO2 werken. I should think it would work, I'm starting to suspect my suspicions about there being many available catalysts is correct. Someone can give it a go. [Edited on 28-12-2016 by Chemetix] [Edited on 28-12-2016 by Chemetix] Herr Haber Hazard to Others Posts: 145 Registered: 29-1-2016 Member Is Offline I absolutely love the "let me prove everyone wrong" mindset. Especially in this case ! How many pages in this forum alone saying this or that process for making HNO3 is not doable ? The only sad thing I see here is the timing. A few more days and you would have definitely gotten my vote for Mad Scientist of the year Fulmen International Hazard Posts: 773 Registered: 24-9-2005 Member Is Offline Mood: Bored Quote: Originally posted by Chemetix The problem with these alternative catalysts is after shut down there is concentrated nitric acid around to basically chew off your catalyst in the reactor. Good point. Shouldn't be hard to avoid as long as one is aware of the problem though. As for metals I agree that 316 is the obvious choice, it should work for the absorption tower as well as long as the concentration and temperature isn't too high. Copper sounds like a poor choice, it could perhaps work for the reaction chamber assuming you can produce dry ammonia gas? We're not banging rocks together here. We know how to put a man back together. Jstuyfzand Hazard to Others Posts: 133 Registered: 16-1-2016 Location: Netherlands Member Is Offline Mood: Learning, Sorta. Mixing in some Dutch, love it Chemetix! I look forward to seeing more, especially the Titration results. Fulmen International Hazard Posts: 773 Registered: 24-9-2005 Member Is Offline Mood: Bored I missed your post where you tested nickel, seems like we have several catalysts at our disposal. This simplifies thing even more as I already have nickel salts. 
As for the ammonia-generator it's hard to beat urea as a source, although a kipp-style generator would be nice. This might be useful: http://eel.ecsdl.org/content/4/10/E5.full (Electrochemically Induced Conversion of Urea to Ammonia) It might be possible to design a pressure regulated generator this way, using a gravity fed reservoir and back pressure to regulate the electrode area. [Edited on 28-12-16 by Fulmen] Attachment: ECS Electrochem. Lett.-2015-Lu-E5-7.pdf (287kB) We're not banging rocks together here. We know how to put a man back together. Magpie lab constructor Posts: 5223 Registered: 1-11-2003 Location: USA Member Is Offline Mood: pumped Quote: Originally posted by Herr Haber The only sad thing I see here is the timing. A few more days and you would have definitely gotten my vote for Mad Scientist of the year Yes, this is the most exciting project since Pok's making of Potassium. On an importance scale this has to rank very high. Urea seems a very good source of ammonia: compact, dry solid, just add water and heat - what could be easier. Regulation would be nice - I guess that's what a Kipp would give you. Does the CO2 cause any problems other than dilution? Also, urea is dirt cheap. I bought a 50 lb bag for $10. The single most important condition for a successful synthesis is good mixing - Nicodem Jstuyfzand Hazard to Others Posts: 133 Registered: 16-1-2016 Location: Netherlands Member Is Offline Mood: Learning, Sorta. Quote: Originally posted by Magpie Quote: Originally posted by Herr Haber Also, urea is dirt cheap. I bought a 50 lb bag for$10. Where did you find such deals? Magpie lab constructor Posts: 5223 Registered: 1-11-2003 Location: USA Member Is Offline Mood: pumped "Weed & feed" places. That is, agriculture and garden suppliers. The single most important condition for a successful synthesis is good mixing - Nicodem ecos National Hazard Posts: 442 Registered: 6-3-2014 Member Is Offline Mood: Learning ! did you try to use copper wire as catalyst? I found some videos showing that it works fine plz check attachment. Attachment: Media.mpg (3.2MB) [Edited on 28-12-2016 by ecos] WGTR International Hazard Posts: 620 Registered: 29-9-2013 Location: Online Member Is Offline So what's the longest period of time that you've used a particular catalyst? Or how much product can you currently obtain before needing to change out the catalyst? I'm the type of person that likes to do chemical reactions in stages, so naturally I'd suggest making some dry ammonia gas ahead of time and storing it in a bag, a "gas bag", if you will. I'm not describing the mother-in-law after a chili cook-off, but rather a plastic bag with a weight on top, to regulate the flow of gas. Even if it isn't used for bulk storage, some kind of bag like this can work as a regulator, to absorb pressure fluctuations from your ammonia generator. Air can be supplied the way you already do. Perhaps I could demonstrate it if I have extra time. But then again, if I had spare time, I might spend it taking rides on the pet unicorn that I'll never have either. But I can try. A cubic foot of gas would be around a mole of ammonia, and that would make quite a bit of nitric acid, if system-wide efficiencies are good. Chemetix Hazard to Others Posts: 111 Registered: 23-9-2016 Location: Oztrayleeyah Member Is Offline Mood: Wavering between lucidity and madness Quote: Originally posted by ecos did you try to use copper wire as catalyst? 
I found some videos showing that it works fine Quote: I am really surprised that copper works as a catalyst . I thought we need only Pt to oxidize ammonia.i found a link that shows copper as a catalyst . it also has videos.Link :http://www.digipac.ca/chemical/mtom/contents/chapter3/fritzh...http://www.digipac.ca/chemical/mtom/contents/chapter3/fritzh...[Edited on 9-11-2016 by ecos] And it works too well and over-oxidises the ammonia to N2 and H2O if you recall my original assessment. copper glows alright- just no discernable production of NO or NO2. Jstuyfzand Hazard to Others Posts: 133 Registered: 16-1-2016 Location: Netherlands Member Is Offline Mood: Learning, Sorta. The copper is too active, does this mean that the Ammonia gets converted to N2 and H2O immediately or does it go to NO first and then N2 and H2O? If its the second, higher flow rates could take care of this problem, I presume. Fulmen International Hazard Posts: 773 Registered: 24-9-2005 Member Is Offline Mood: Bored Electrochemical urea to ammonia (eU2A) sounds like a promising alternative. The paper used nickel electrodes which had a catalytic function, luckily nickel strip for battery assembly is easy to get hold of. The method used constant voltage; 1.65V applied to 30% urea + KOH. We're not banging rocks together here. We know how to put a man back together. Chemetix Hazard to Others Posts: 111 Registered: 23-9-2016 Location: Oztrayleeyah Member Is Offline Mood: Wavering between lucidity and madness As to the question of whether the ammonia goes to NO then N2, it is largely academic. I can't say by observation alone. I'll leave that to analytical chemists to solve one day. But there just hasn't been enough time in the day for me to give the unit a good run down the highway so to speak, open her up and see what she can do. Holidays have consumed me with family duties and finishing off commercial work as well. Why won't someone pay me to be a hobby chemist? Dammit! So as yet I haven't been able to get an estimation of catalyst life expectancy. All I can say is the cobalt carbonate survived several shutdown and restarts until a change in the system design ultimately caused it to die. Maybe using those clay balls was fine so long as there was moisture in the stream to prevent the catalyst being poisoned somehow. Maybe the ceramic tile fragments are cleaner and will run at higher temperatures and drier conditions and would have allowed the cobalt to survive longer. But I'm sure you can all appreciate that the goal is to find a set of parameters that allow the production of the highest volume with the highest concentrations in the shortest amount of time. I'll get a few days soon to let it have a good run and get an estimation of efficiency and final concentrations. eU2A does sound like a faster way to get ammonia, but then, so is a bigger pot. The latter is the simpler approach, I'll leave electrochemistry for those braver than me. The gas bag idea sounds a great way to do analytical studies on the reaction. I have my chemical engineering hat on at the moment, it's all about increase of production for minimum costs. 
[Edited on 29-12-2016 by Chemetix]
2017-01-20 03:49:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37076860666275024, "perplexity": 4923.032340522449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00155-ip-10-171-10-70.ec2.internal.warc.gz"}
https://socratic.org/questions/what-is-the-derivative-of-f-x-e-2x-ln-x
What is the derivative of f(x)=(e^(2x))(ln(x))? Mar 3, 2017

$f'(x) = e^{2x}\left(2 \ln x + \frac{1}{x}\right)$

Explanation: The derivative of $\ln x$ is $\frac{1}{x}$. The derivative of $e^{g(x)}$ is $e^{g(x)} \cdot g'(x)$. The derivative of $h(x) \cdot l(x)$ is $h'(x) \cdot l(x) + h(x) \cdot l'(x)$. Then $f'(x) = e^{2x} \cdot 2 \cdot \ln x + e^{2x} \cdot \frac{1}{x} = e^{2x}\left(2 \ln x + \frac{1}{x}\right)$
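A quick symbolic check of this result (a sketch using SymPy, which the original answer does not use):

```python
from sympy import symbols, exp, log, diff, simplify

x = symbols("x", positive=True)
f = exp(2*x) * log(x)

# Equivalent to e^(2x) * (2 ln x + 1/x) up to algebraic rearrangement.
print(simplify(diff(f, x)))
```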
2019-10-15 07:23:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8896889090538025, "perplexity": 561.6444014124368}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986657586.16/warc/CC-MAIN-20191015055525-20191015083025-00023.warc.gz"}
https://mathspp.com/blog/page:2
Mathspp Blog A blog dedicated to mathematics and programming! This blog has a really interesting assortment of articles on mathematics and programming. You can use the tags to your right to find topics that interest you, or you may want to have a look at You can also subscribe to the blog newsletter. 1187 str and repr | Pydon't Python's str and repr built-in methods are similar, but not the same. Use str to print nice-looking strings for end users and use repr for debugging purposes. Similarly, in your classes you should implement the __str__ and __repr__ dunder methods with these two use cases in mind. 415 What all promotions actually mean Nowadays stores come up with all sorts of funky promotions to catch your eye... But how much money do you actually save with each type of promotion? 2318 Assignment expressions and the walrus operator := | Pydon't The walrus operator := can be really helpful, but if you use it in convoluted ways it will make your code worse instead of better. Use := to flatten a sequence of nested ifs or to reuse partial computations. 712 Problem #028 - hidden key 🗝️ There is a key hidden in one of three boxes and each box has a coin on top of it. Can you use the coins to let your friend know where the key is hiding? 992 EAFP and LBYL coding styles | Pydon't In Python, if you are doing something that may throw an error, there are many cases in which it is better to "apologise than to ask for permission". This means you should prefer using a try block to catch the error, instead of an if statement to prevent the error. 1145 Unpacking with starred assignments | Pydon't How should you unpack a list or a tuple into the first element and then the rest? Or into the last element and everything else? Pydon't unpack with slices, prefer starred assignment instead. 400 Problem #027 - pile of coconuts 🥥 Five sailors and their monkey were washed ashore on a desert island. They decide to go get coconuts that they pile up. During the night, each of the sailors, suspicious the others wouldn't behave fairly, went to the pile of coconuts take their fair share. How many coconuts were there in the beginning..? 1769 Pydon't disrespect the Zen of Python The "Zen of Python" is the set of guidelines that show up in your screen if you import this. If you have never read them before, read them now and again from time to time. If you are looking to write Pythonic code, write code that abides by the Zen of Python. 860 Problem #026 - counting squares I bet you have seen one of those Facebook publications where you have a grid and you have to count the number of squares the grid contains, and then you jump to the comment section and virtually no one agrees on what the correct answer should be... Let's settle this once and for all! 542 Problem #025 - knight's tour Alice and Bob sit down, face to face, with a chessboard in front of them. They are going to play a little game, but this game only has a single knight... Who will win? 3605 Pydon't Manifesto "Pydon'ts" are short, to-the-point, meaningful Python programming tips. A Pydon't is something you should not do when programming in Python. In general, following a Pydon't will make you write more Pythonic code. 752 Problem #024 - hats in a line Some people are standing quiet in a line, each person with a hat that has one of two colours. How many people can guess their colour correctly? 430 Filling your Pokédex - a probabilistic outlook Join me in this blog post for Pokéfans and mathematicians alike. 
Together we'll find out how long it would take to fill your complete Pokédex by only performing random trades. 457 Implementing an interpreter in 14 lines of Python. In this blog post I'll show you how you can write a full interpreter for the brainf*ck programming language in just 14 lines of Python. Be prepared, however, to see some unconventional Python code! 498 Problem #023 - guess the polynomial In this problem you have to devise a strategy to beat the computer in a "guess the polynomial" game. 310 Twitter proof: consecutive integers are coprime Let's prove that if $$k$$ is an integer, then $$\gcd(k, k+1) = 1$$. That is, any two consecutive integers are coprime. 295 Twitter proof: maximising the product with a fixed sum Let's prove that if you want to maximise $$ab$$ with $$a + b$$ equal to a constant value $$k$$, then you want $$a = b = \frac{k}{2}$$. 442 Problem #022 - coprimes in the crowd This simple problem is an example of a very interesting phenomenon: if you have a large enough "universe" to consider, even randomly picked parts exhibit structured properties. 725 Problem #021 - predicting coin tosses Alice and Bob are going to be locked away separately and their faith depends on their guessing random coin tosses! 409 Let's build a simple interpreter for APL - part 3 - the array model In this blog post we will go over some significant changes, from implementing APL's array model to introducing dyadic operators!
2021-04-22 22:41:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2345961183309555, "perplexity": 1445.706594397109}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039563095.86/warc/CC-MAIN-20210422221531-20210423011531-00371.warc.gz"}
https://mathematica.stackexchange.com/questions/194590/integers-to-soundnote-for-midi-generation
Integers to soundnote for midi generation

To generate a MIDI file given two lists of integers of equal length, one for the note pitch and one for the corresponding note duration (tempo), I'd like to use the built-in MIDI file generator, but am not sure how to map the integers to the SoundNote names "C#" etc. I would like to use an 88-note mapping like a piano, and perhaps 5 discrete note duration values. Thanks. I saw this, but it takes a sound note and gives a number, whereas I'd like to generate 88 SoundNotes scaled linearly from my list of integers: Getting MIDI SoundNote Pitches as Numeric Values. This is what I have so far, with a 0.25-second fixed note duration and a list of values which I am not sure about regarding the range of SoundNotes they generate:

Sound[SoundNote[#, 0.25, "Piano"] & /@ {0, -7, -50, 7, 12, 50, 0, -10, 50, -50, 0, 0, 10, 60, 65, 67}] Export["sequence.mid", %]

Thanks. cheers, Jamie

If you want to use 2 integer lists, try this

pitch = {0, 2, 4, 5, 7, 9, 11, 12}; tempo = {.5, 1, .5, 1, .3, .2, .1, .1}; Sound[SoundNote[#, #2, "Piano"] & @@@ Transpose@{pitch, tempo}]

As for the mapping, the 88 keys are Range[-39, 48]: -39 is A-1, -38 is A#-1, -37 is B-1, -36 is C0, -35 is C#0, etc. If Mod[tone, 12] == 0 then you have a C, so -36 is C0, -24 is C1, -12 is C2, 0 is C3, 12 is C4 ... 48 is C7. Using Mod[#, 12] you can easily find the tones: 0 is C, 1 is C#, 2 is D, 3 is D#, 4 is E, 5 is F, 6 is F#, 7 is G, 8 is G#, 9 is A, 10 is A# and 11 is B. Mod[#, 12] is the remainder of the division #/12, so it can take values from 0 to 11, which are the 12 notes. But if you don't want to use integers you can use the built-in notation:

pitch = {"C3", "D3", "E3", "F3", "G3", "A3", "B3", "C4"}; tempo = {.5, 1, .5, 1, .3, .2, .1, .1}; Sound[SoundNote[#, #2, "Piano"] & @@@ Transpose@{pitch, tempo}]
2020-02-24 12:40:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5054821372032166, "perplexity": 2888.0448661706596}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145941.55/warc/CC-MAIN-20200224102135-20200224132135-00184.warc.gz"}
https://www.transtutors.com/questions/set-up-a-definite-integral-that-represents-the-length-of-the-curve-y-x-cos-x--5375990.htm
# Set up a definite integral that represents the length of the curve y = x + cos x...

Set up a definite integral that represents the length of the curve y = x + cos x for 0 ≤ x ≤ π. Then use your calculator to find the length rounded off to four decimal places. Note: x is given in radians.
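A sketch of how the requested integral could be set up and evaluated numerically (my own, not part of the problem page), assuming the interval really is [0, π]: with dy/dx = 1 - sin x, the arc length is the integral of sqrt(1 + (1 - sin x)^2) from 0 to π.

```python
# Numeric evaluation of the arc length of y = x + cos(x) on [0, pi] (a sketch).
import numpy as np
from scipy.integrate import quad

integrand = lambda x: np.sqrt(1.0 + (1.0 - np.sin(x)) ** 2)
length, abs_err = quad(integrand, 0.0, np.pi)
print(round(length, 4))   # arc length rounded to four decimal places
```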
2020-05-25 13:54:58
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8035172820091248, "perplexity": 433.085334932267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388758.12/warc/CC-MAIN-20200525130036-20200525160036-00470.warc.gz"}
http://www.drumtom.com/q/what-is-the-domain-range-of-y-10-x-compare-to-the-equation-y-1-x
# What is the domain and range of y = -10/x, compared to the equation y = 1/x?

## Answers

• Discusses the domain and range of a function ... I'll just list the x-values for the domain and the y-values for the range: domain: {–3, –2, –1 ...
• Compute domain and range for functions of several variables ... domain of f(x,y) = log(1-(x^2+y^2)) ...
• For the function y = 1/x - 2, give the y values for x = -1, 0, 1, 2, 3, 4 ... Start with the given equation, plug in, and calculate the y value by following the order of ...
• everything maths & science ... Functions of the form y = 1/x ... Domain and range: for y = a/x + q, the function is undefined for x = 0.
• Discovering the characteristics. For functions of the general form $$f(x) = y = a(x + p)^2 + q$$: Domain and range. The domain is $$\{x : x \in \mathbb{R}\}$$ ...
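The aggregated snippets never directly answer the question posed, so here is a short worked statement of my own:

$$y = \frac{-10}{x}: \quad \text{domain } \{x : x \neq 0\}, \qquad \text{range } \{y : y \neq 0\},$$

which is exactly the same domain and range as $$y = \frac{1}{x}$$; multiplying by -10 only stretches the graph vertically and reflects it across the x-axis, neither of which changes the domain or the range.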
2017-01-24 03:28:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31406155228614807, "perplexity": 2512.0758548780245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00537-ip-10-171-10-70.ec2.internal.warc.gz"}
https://eurekamathanswers.com/use-of-integers/
An integer is a number that includes 0, positive numbers, and negative numbers. It can never be a fraction, decimal, or percent. Integers are mainly used in our day-to-day lives in mathematical terms. Get to know the definition, operations, and use of integers in the sections below. Also, get the example questions with solutions for the convenience of grade 6 math students. Also, Check: Use of Integers as Directed Numbers

## What is an Integer?

Integers are the set of counting numbers (positive and negative) along with zero. Some examples of integers are -4, -3, -2, -1, 0, 1, 2, 3. The integers are represented by Z. The types of integers are positive integers, negative integers, and zero.

### Integer Rules

The rules depend on the operation performed on the integers, as given below:
• Addition Rule: If the sign of both integers is the same, then the sum will have the same sign. If one integer is positive and the other is negative, then the sign of the result is the sign of the number with the larger absolute value.
• Subtraction Rule: Convert the operation to addition by changing the sign of the subtrahend.
• Multiplication Rule: Multiply the signs of the integers to get the sign of the result.
• Division Rule: Divide the signs of the two operands to get the sign of the result.

### Real Life Examples of Integers

The examples of using integers are along these lines:
• If profit is represented by positive integers, losses are represented by negative integers.
• A rise in the price of a product is represented by positive integers and a fall in price by negative integers.
• If heights above sea level are represented by positive integers, then depths below sea level are represented by negative integers, and so on.

### Integers as Directed Numbers

If a number represents a direction, then the number is called a directed number. The examples below explain it in detail. Example: If +4 represents 4 m towards the East, then -5 represents 5 m towards the opposite direction, i.e. towards the West. If a positive integer shows a particular direction, then the negative integer shows the opposite direction.

### Example Questions on Use of Integers

Question 1: Write an integer to describe a situation: (i) Losing Rs 100 (ii) Owing Rs 1500 (iii) Depositing $500 in a bank
Solution: (i) An integer representing a loss of Rs 100 is -100. (ii) An integer representing owing Rs 1500 is -1500. (iii) An integer representing depositing $500 in a bank is +500.

Question 2: Write an appropriate integer for each of the following: (i) earned Rs 800 interest (ii) a decrease of 5 members (iii) an increase of 3 inches
Solution: (i) Earning Rs 800 interest is represented by +800. (ii) A decrease of 5 members is represented by -5. (iii) An increase of 3 inches is represented by +3.

### Frequently Asked Questions on Using Integers

1. What are the applications of integers? Integers are used to signify two contradicting situations. The positive and negative integers have different applications. Integers can compare and measure changes in temperature, and credits and debits calculated by the bank.

2. What are the integer rules? The integer rules are: the sum of two integers is an integer; the difference of two integers is an integer; the product of two or more integers is an integer; the division of integers may or may not be an integer.

3. What are the integer properties? The main properties of integers are the closure property, commutative property, associative property, identity property, and distributive property.

4. What are the 5 integer operations?
The operations with integers are addition, subtraction, multiplication, and division.
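To make the sign rules above concrete, here is a small illustration of my own (not from the article); the numbers are arbitrary.

```python
# Small illustration (mine, not from the article) of the sign rules listed above.
profit, loss = +800, -100          # earned interest vs. money lost

# Addition: opposite signs -> result takes the sign of the larger absolute value
print(profit + loss)               # 700

# Subtraction: change the sign of the subtrahend and add
print(profit - loss)               # 800 - (-100) = 900

# Multiplication and division: multiply (divide) the signs
print((-3) * (-4))                 # 12  (negative times negative is positive)
print(12 // (-3))                  # -4  (positive divided by negative is negative)
```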
2022-01-22 09:07:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48777544498443604, "perplexity": 961.7523615772751}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303779.65/warc/CC-MAIN-20220122073422-20220122103422-00002.warc.gz"}
http://www.maths.usyd.edu.au/u/UG/JM/MATH1111/Quizzes/quiz32.html
## MATH1111 Quizzes

Local Linearity and the Differential Quiz
Web resources available Questions

This quiz tests the work covered in the lecture on local linearity and the differential and corresponds to Section 14.3 of the textbook Calculus: Single and Multivariable (Hughes-Hallett, Gleason, McCallum et al.). There is a useful applet at http://www.slu.edu/classes/maymk/banchoff/TangentPlane.html - take some time to read the instructions and add your own functions. There are more web quizzes at Wiley, select Section 3. This quiz has 10 questions.

Suppose $f(3,2)=4$, $f_x(3,2)=-2$ and $f_y(3,2)=3$ for some surface $z=f(x,y)$. Which of the following is the tangent plane to the surface at $(3,2,4)$? (Exactly one option must be correct)
a) $4z=-2(x-3)+3(y-2)$
b) $z=4-2(x-3)+3(y-2)$
c) $z+4=2(x+3)+3(y+2)$
d) $4z=3(x+3)-2(y+2)$
Choice (a) is incorrect: Try again, check the formula for the tangent plane.
Choice (b) is correct! The tangent plane at the point $(a,b)$ on the surface is $z=f(a,b)+f_x(a,b)(x-a)+f_y(a,b)(y-b)$, so the above equation is correct.
Choice (c) is incorrect: Try again, check the formula for the tangent plane.
Choice (d) is incorrect: Try again, check the formula for the tangent plane.

Is the plane $z=12+8(x-1)+7(y-2)$ the tangent plane to the surface $f(x,y)=x^2+3xy+y^2-1$ at $(1,2)$? (Exactly one option must be correct)
a) Yes.
b) No
Choice (a) is incorrect: $f(1,2)=10$ and $z=12$ at $(1,2)$, so the plane does not touch the surface.
Choice (b) is correct! $f_x(x,y)=2x+3y$ so $f_x(1,2)=2+6=8$; $f_y(x,y)=3x+2y$ so $f_y(1,2)=3+4=7$; $f(1,2)=10$, so the tangent plane is $z=10+8(x-1)+7(y-2)$.

Which of the following is the tangent plane to the surface $f(x,y)=x^2-2xy-3y^2$ at the point $(-2,1,5)$? (Exactly one option must be correct)
a) $z+6x+2y+15=0$
b) $z-6x-2y+5=0$
c) $z+6x+2y+5=0$
d) None of the above, since $(-2,1,5)$ is not on the surface.
Choice (a) is incorrect: Try again, look carefully at the signs of the constant terms.
Choice (b) is incorrect: Try again, carefully rearrange your equation.
Choice (c) is correct! $f_x(x,y)=2x-2y$ so $f_x(-2,1)=-4-2=-6$; $f_y(x,y)=-2x-6y$ so $f_y(-2,1)=4-6=-2$; $f(-2,1)=5$, so the tangent plane is $z=5-6(x+2)-2(y-1) \Rightarrow z+6x+2y+5=0$ as required.
Choice (d) is incorrect: Try again, the point is on the surface.

Which of the following is the differential of $f(x,y)=\sin xy\, e^{xy}$? (Exactly one option must be correct)
a) $df=\cos xy\, e^{xy}(y^2\,dx+x^2\,dy)$
b) $df=e^{xy}(\cos xy+\sin xy)(x\,dx+y\,dy)$
c) $df=e^{xy}(-\cos xy+\sin xy)(y\,dx+x\,dy)$
d) $df=e^{xy}(\cos xy+\sin xy)(y\,dx+x\,dy)$
Choice (a) is incorrect: Try again, you must use the product rule to differentiate $f(x,y)$.
Choice (b) is incorrect: Try again, you have not differentiated $f(x,y)$ correctly.
Choice (c) is incorrect: Try again, you have not differentiated $\sin xy$ correctly.
Choice (d) is correct! $f_x(x,y)=y\cos xy\, e^{xy}+\sin xy\,(y e^{xy})=y e^{xy}(\cos xy+\sin xy)$ using the product rule, and $f_y(x,y)=x\cos xy\, e^{xy}+\sin xy\,(x e^{xy})=x e^{xy}(\cos xy+\sin xy)$ using the product rule, so $df=y e^{xy}(\cos xy+\sin xy)\,dx+x e^{xy}(\cos xy+\sin xy)\,dy=e^{xy}(\cos xy+\sin xy)(y\,dx+x\,dy)$.
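A quick symbolic check of the third question's tangent plane (my own, not part of the quiz), using sympy:

```python
# Check the tangent plane of f(x, y) = x^2 - 2xy - 3y^2 at (-2, 1) with sympy.
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 - 2*x*y - 3*y**2
a, b = -2, 1

fx = sp.diff(f, x).subs({x: a, y: b})    # -6
fy = sp.diff(f, y).subs({x: a, y: b})    # -2
z0 = f.subs({x: a, y: b})                #  5

tangent = z0 + fx*(x - a) + fy*(y - b)
print(sp.expand(tangent))                # -6*x - 2*y - 5, i.e. z + 6x + 2y + 5 = 0
```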
2018-03-20 11:58:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 45, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.794314444065094, "perplexity": 679.7104532559423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647406.46/warc/CC-MAIN-20180320111412-20180320131412-00660.warc.gz"}
https://mathematicsgre.com/viewtopic.php?f=1&t=382
## FR0568 #41

Forum for the GRE subject test in mathematics.

thmsrhn Posts: 17 Joined: Fri Mar 26, 2010 7:18 am

### FR0568 #41

Hey, can anyone solve this for me? I know how to solve the line integral, but what are the upper and lower limits?

origin415 Posts: 61 Joined: Fri Oct 23, 2009 11:42 pm

### Re: FR0568 #41

When you make new threads like this, please post the problem to make it easier for everyone. The question is: Let C be the circle $$x^2 + y^2 = 1$$ oriented counterclockwise in the xy-plane. What is the value of the line integral $$\oint_C (2x-y) dx + (x+3y)dy$$ A) 0 B) 1 C) pi/2 D) pi E) 2pi

The limits you need for the integral will depend on the parametrization of the circle you use. You could use the parametrization $$y = \sqrt{1-x^2}$$ for the top half of the circle, and then your x would go from 1 to -1. You'll also need to compute the integral on the bottom half. However, I think actually attempting to compute that line integral would be excessively difficult and miss the point of the question; use the other techniques at your disposal.

thmsrhn Posts: 17 Joined: Fri Mar 26, 2010 7:18 am

### Re: FR0568 #41

God it's hard integrating this line integral, have you got another method in mind, origin? Coz I could sure use it right now.

origin415 Posts: 61 Joined: Fri Oct 23, 2009 11:42 pm

### Re: FR0568 #41

And spoil all the fun of it? Alright, Green's Theorem. Basically any time you have a surface integral, you should be checking if it's easier to integrate the boundary, and any time you have a closed line integral, you should be checking if it's easier to integrate the surface. The GRE guys are tricky like that.

mathQ Posts: 41 Joined: Thu Mar 25, 2010 12:14 am

### Re: FR0568 #41

The line integral was pretty straightforward here, and the answer I calculated is 2pi.

thmsrhn Posts: 17 Joined: Fri Mar 26, 2010 7:18 am

### Re: FR0568 #41

Hey, Green's theorem did the trick! Hardly took any time!! Thanks!!!
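For reference, the Green's theorem computation the thread is hinting at (my own write-up, not posted in the thread). With $$P = 2x - y$$ and $$Q = x + 3y$$,

$$\oint_C P\,dx + Q\,dy = \iint_D \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA = \iint_D \big(1 - (-1)\big)\, dA = 2 \cdot \pi (1)^2 = 2\pi,$$

which is answer (E), matching the value mathQ reports.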
2023-02-05 11:10:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8142852187156677, "perplexity": 2269.68555150114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500251.38/warc/CC-MAIN-20230205094841-20230205124841-00373.warc.gz"}
https://www.transtutors.com/questions/consider-the-following-multilayer-perceptron-network-the-transfer-function-of-the-hi-2011189.htm
Consider the following multilayer perceptron network. (The transfer function of the hidden layer... Consider the following multilayer perceptron network. (The transfer function of the hidden layer is The initial weights and biases are: Perform one iteration of the standard steepest descent backpropagation (use matrix operations) with learning rate a = 0.5 for the following input/target pair:
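The problem's figure, transfer function, and initial weight/bias values did not survive extraction, so no definitive solution can be given here. Below is only a sketch of what one steepest-descent backpropagation iteration looks like for a small 1-2-1 network; the log-sigmoid hidden layer, linear output layer, initial values, and input/target pair are all assumptions of mine, not the textbook's numbers.

```python
# Sketch only: one steepest-descent backprop step for an assumed 1-2-1 network.
import numpy as np

def logsig(n):
    return 1.0 / (1.0 + np.exp(-n))

# Assumed initial parameters (NOT the values from the original problem)
W1 = np.array([[ 0.1], [-0.2]])   # 2x1 hidden-layer weights
b1 = np.array([[ 0.1], [ 0.1]])   # 2x1 hidden-layer biases
W2 = np.array([[ 0.2, -0.1]])     # 1x2 output-layer weights
b2 = np.array([[ 0.3]])           # 1x1 output-layer bias

p = np.array([[1.0]])             # assumed input
t = np.array([[0.5]])             # assumed target
alpha = 0.5                       # learning rate from the problem statement

# Forward pass
n1 = W1 @ p + b1
a1 = logsig(n1)
a2 = W2 @ a1 + b2                 # linear output layer
e = t - a2

# Backward pass (sensitivities) for the squared-error performance index
s2 = -2 * e                                            # linear layer derivative is 1
s1 = np.diag((a1 * (1 - a1)).ravel()) @ W2.T @ s2      # logsig derivative a1*(1-a1)

# Steepest-descent updates
W2 -= alpha * s2 @ a1.T
b2 -= alpha * s2
W1 -= alpha * s1 @ p.T
b1 -= alpha * s1

print(W1, b1, W2, b2, sep="\n")
```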
2021-04-21 06:04:06
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8393312692642212, "perplexity": 3066.8290442374864}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039508673.81/warc/CC-MAIN-20210421035139-20210421065139-00105.warc.gz"}
https://www.tensorflow.org/graphics/api_docs/python/tfg/geometry/transformation/axis_angle/from_euler_with_small_angles_approximation
# tfg.geometry.transformation.axis_angle.from_euler_with_small_angles_approximation

Converts small Euler angles to an axis-angle representation. Under the small angle assumption, $$\sin(x)$$ and $$\cos(x)$$ can be approximated by their second order Taylor expansions, where $$\sin(x) \approx x$$ and $$\cos(x) \approx 1 - \frac{x^2}{2}$$. In the current implementation, the smallness of the angles is not verified.

#### Note: The conversion is performed by first converting to a quaternion representation, and then by converting the quaternion to an axis-angle.

#### Note: In the following, A1 to An are optional batch dimensions.

Arguments:
- angles: A tensor of shape [A1, ..., An, 3], where the last dimension represents the three small Euler angles. [A1, ..., An, 0] is the angle about x in radians, [A1, ..., An, 1] is the angle about y in radians, and [A1, ..., An, 2] is the angle about z in radians.
- name: A name for this op that defaults to "axis_angle_from_euler_with_small_angles_approximation".

Returns: A tuple of two tensors, respectively of shape [A1, ..., An, 3] and [A1, ..., An, 1], where the first tensor represents the axis, and the second represents the angle. The resulting axis is a normalized vector.
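A minimal usage sketch, assuming the tensorflow_graphics package layout implied by the page's module path; the example angles are mine and chosen to be small.

```python
# Sketch: convert small Euler angles to an axis-angle pair (assumes
# tensorflow and tensorflow_graphics are installed; the angles are made up).
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import axis_angle

small_euler = tf.constant([[0.01, -0.02, 0.005]])   # shape [1, 3], radians
axis, angle = axis_angle.from_euler_with_small_angles_approximation(small_euler)
print(axis.shape, angle.shape)   # (1, 3) and (1, 1), per the docs above
```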
2020-05-26 18:06:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9281443357467651, "perplexity": 1994.3965365572162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391277.13/warc/CC-MAIN-20200526160400-20200526190400-00328.warc.gz"}
https://www.effortlessmath.com/math-puzzles/algebra-puzzle-challenge-50/
# Algebra Puzzle – Challenge 50

This is another math puzzle and brain teaser that is interactive, challenging, and entertaining for those who love Math challenges!

## Challenge:

If 11 workers can build 11 cars in 11 days, then how many days would it take 7 workers to build 7 cars?

A- 7 B- 9 C- 11 D- 14 E- 18

If 11 workers can build one car per day, then one worker can make a car in 11 days. (Each worker can build $$\frac{1}{11}$$ of a car per day. So, it takes 11 days for a worker to make a car.) Likewise, 7 workers building 7 cars means one car per worker, so it still takes 11 days: answer C.
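The same conclusion via a unit-rate calculation (my own write-up of the reasoning above):

$$\text{rate}=\frac{11\ \text{cars}}{11\ \text{workers}\times 11\ \text{days}}=\frac{1}{11}\ \frac{\text{car}}{\text{worker}\cdot\text{day}},\qquad \text{days}=\frac{7\ \text{cars}}{7\ \text{workers}\times\frac{1}{11}\ \frac{\text{car}}{\text{worker}\cdot\text{day}}}=11.$$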
2022-01-20 08:06:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1920827031135559, "perplexity": 1741.780503342366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301730.31/warc/CC-MAIN-20220120065949-20220120095949-00023.warc.gz"}
http://clay6.com/qa/48042/a-charge-of-8-mc-is-located-at-the-origin-calculate-the-work-done-in-taking
# A charge of $8 \;mC$ is located at the origin. Calculate the work done in taking a small charge of $-2 \times 10^{-9}\ C$ from a point $P (0, 0, 3\ cm)$ to a point $Q (0, 4\ cm, 0)$, via a point $R (0, 6\ cm, 9\ cm)$.

## 1 Answer

$(B)\ 1.27\ J$

Hence B is the correct answer.
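A note of my own (not from the page) on the key step: the electrostatic field is conservative, so the work depends only on the endpoints P and Q, and the intermediate point R is irrelevant:

$$W = q_2\,[V(Q) - V(P)] = \frac{q_1 q_2}{4\pi\varepsilon_0}\left(\frac{1}{r_Q} - \frac{1}{r_P}\right), \qquad r_P = 3\ \text{cm},\ r_Q = 4\ \text{cm}.$$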
2016-12-07 14:24:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7803388237953186, "perplexity": 644.9861641090696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542213.61/warc/CC-MAIN-20161202170902-00280-ip-10-31-129-80.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1644647/how-can-we-prove-a-statement-is-provable?noredirect=1
# How can we prove a statement is provable? Given a concrete mathematical statement, such as BSD conjecture(https://en.wikipedia.org/wiki/Birch_and_Swinnerton-Dyer_conjecture), do we know if it is provable? • I suspect the answer, in the vast majority of specific cases, is going to be, quite simply, "We don't." I've never heard of a non-independence result that doesn't itself discern whether the statement is true or false. I would be interested in finding out if such a thing exists - by the consistency theorem, you could start with two models, one of the statement and one of its negation, and try and derive a contradiction. – Dustan Levenstein Feb 7 '16 at 16:21 • What do you mean by provable? If you mean does a proof exist - then it is just as hard as to prove conjecture. Only provably correct conjecture provably exist a proof. If you mean if it is possible to have a proof, however, then it is easy. The only thing that you cannot write a proof are "non-statements". For example, one cannot write a proof to "Good Morning", or "How are you" – Andrew Au Feb 7 '16 at 16:27 • @AndrewAu I was thinking people are trying to prove BSD conjecture, but is it possible that the conjecture is not provable by Godel's incompleteness theorem? – Qixiao Feb 7 '16 at 17:46 • A statement is not "provable" in and of itself. It is only provable relative to a particular axiom system. The most common way to show an axiom system doesn't prove a statement is to build a model of the system that doesn't satisfy the statement. For BSD there seems to be no specific reason to suspect it is unprovable from ZFC set theory. – Carl Mummert Feb 13 '16 at 13:34 • In general, however, there is no algorithm that can decide whether arbitrary statements are provable from ZFC. They have to be considered on a case by case basis. – Carl Mummert Feb 13 '16 at 13:35 You're using the wrong term. You mean to ask whether we can tell if a conjecture is decidable, meaning that it is either provable or disprovable. But no we cannot tell whether a statement is decidable if the quantifier complexity is too high. Furthermore, it may be possible that even the decidability of a statement is itself undecidable! (See below for an example.) First read https://math.stackexchange.com/a/1643073/21820, to ensure that you fully understand the import of Godel's incompleteness theorem. After that, consider the following. $\def\imp{\rightarrow}$ [We work in a meta-system and assume that $PA$ is omega-consistent.] Let $φ = \square_{PA} Con(PA) \lor \square_{PA} \neg Con(PA)$. [So $φ$ expresses "Con(PA) is decidable over $PA$".] If $PA \vdash φ$: Within $PA$: $\square Con(PA) \lor \square \neg Con(PA)$. If $\square Con(PA)$: $\neg Con(PA)$. [by the internal incompleteness theorem] $\square \bot$. $\square \neg Con(PA)$. [by (D1),(D2)] $\square \neg Con(PA)$. [by basic logic] $\neg Con(PA)$. [because $PA$ is omega-consistent] Contradiction. [with the external incompleteness theorem] Therefore $PA \nvdash φ$. If $PA \vdash \neg φ$: Within $PA$: $\neg \square Con(PA)$. [by basic logic] If $\square \bot$: $\square Con(PA)$. [by (D1),(D2)] $\neg \square \bot$. $Con(PA)$. Therefore $PA \nvdash \neg φ$. Thus $φ$ is independent of $PA$.
2019-05-21 09:37:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7555591464042664, "perplexity": 467.9351876700904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256314.25/warc/CC-MAIN-20190521082340-20190521104340-00470.warc.gz"}
https://socratic.org/questions/how-do-you-use-the-product-rule-to-differentiate-g-x-x-2-1-x-2-2x
# How do you use the product rule to differentiate g(x)=(x^2+1)(x^2-2x)?

Jan 14, 2017

$g'(x) = 4x^3 - 6x^2 + 2x - 2$

#### Explanation:

Given $g(x)=f(x)\cdot h(x)$, then

$g'(x)=f(x)h'(x)+h(x)f'(x) \leftarrow$ product rule

Here $f(x)=x^2+1 \Rightarrow f'(x)=2x$

and $h(x)=x^2-2x \Rightarrow h'(x)=2x-2$

$\Rightarrow g'(x)=(x^2+1)(2x-2)+(x^2-2x)\cdot 2x$

$=2x^3-2x^2+2x-2+2x^3-4x^2$

$=4x^3-6x^2+2x-2$
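A quick symbolic check of the result (mine, not part of the original answer):

```python
# Verify the product-rule derivative with sympy.
import sympy as sp

x = sp.symbols("x")
g = (x**2 + 1) * (x**2 - 2*x)
print(sp.expand(sp.diff(g, x)))   # 4*x**3 - 6*x**2 + 2*x - 2
```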
2019-02-17 10:30:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9969086647033691, "perplexity": 2172.637939934715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481832.13/warc/CC-MAIN-20190217091542-20190217113542-00265.warc.gz"}
https://math.stackexchange.com/questions/1403926/expected-value-of-die-rolls-roll-n-keep-1
# Expected value of die rolls - roll $n$, keep $1$ I know how to calculate expected value for a single roll, and I read several other answers about expected value with rerolls, but how does the calculation change if you can make your reroll before choosing which die to keep? For instance, what is the expected value of rolling $2$ fair $6$-sided dice and keeping the higher value? And can you please generalize to $n$ $x$-sided dice? • If you wish to find distribution of $\max$ of $n$ i.i.d. random variables, then $P(\max\{X_1,..,X_n\}<x) = P(X_1<x,...,X_n<x)=$/*they are independent*/$=\prod_{i=1}^n P(X_i<x)=$/*probabilities are equal*/$=(P(X_1<x))^n$ – Slowpoke Aug 20 '15 at 14:29 • The expectation of the sum without rerolling is (for $k$ $n-sided$ dices) : $\frac{k(n+1)}{2}$ – Peter Aug 20 '15 at 14:34 • @hcl14, thank you for the response. Could you please show an example? I don't understand some of the notation you are using. – Catherine Aug 20 '15 at 14:39 • @Catherine The notation means, that if you have maximum of a few variables be less or equal than some value, then every variable is less or equal than that value. That allows you to easily write the distribution function $F_{max}(x)=P(\max\{...\}\leq x)$ of the maximum as a product of distribution functions of the variables. Then expectation can be easily computed: as long as for one m-sided dice $P(X_1\leq x) = x/m$, then $F_{max}(x)=(x/m)^n$ and $P(\max \{..\}=x) = F_{max}(x)-F_{max}(x-1)$. $E[\max]=\sum_{x=1}^m x*P(\max \{..\}=x)$ which will lead to the result in Jason Carr's answer. – Slowpoke Aug 20 '15 at 15:00 So to calculate this one in particular isn't all that difficult, but it's a special case of order statistics, that is, generating statistics about multiple events when they're ordered. You'd need to use that for the middle. In this case where we take the one highest die, consider that to be less than or equal any given value, we must have both dice no greater than that value. So, it's the intersection of the probabilities that each individual die is no greater than than the value. If we have a cumulative distribution function for a die, then it describes the probability that the die will roll at most some value. In the case of s-sided dice, we have that $P(X \le a) = \{\frac{a}{s}, a \in [1,s]\}$. To find out what the intersection of multiple dice is, we take the intersection of their probabilities, so noting that that intersection of a number of distinct events is $\prod(P)$ we can get that our new $P(X \le a)$ is $\frac{a^n}{s^n}$ or similar equivalent expressions for the intersection of n s-sided dice Now in order to get the expected value, we need to get the probability distribution function, that is $P(X = a)$. To do this we'll take the discrete difference. We can't really simplify this, so we'll just take $P(X = a) = \frac{(a)^n}{s^n} - \frac{(a - 1)^n}{s^n}$ Then we can take the summation of each of these for all $a \in [1,s]$ Then the expected value is the sum $\sum_{a=1}^s{a({\frac{(a)^n}{s^n} - \frac{(a - 1)^n}{s^n})}}$ To start with two dices. You can make a table and insert the maximum values of the two dices. $\begin{array}{|c|c|c|c|c|c|} \hline \text{dice 1 / dice 2 } & 1 &2 &3 &4 &5 &6 \\ \hline\hline \hline 1 & &2 & &4 &5 & \\ \hline 2 & 2 &2 & &4 &&6 \\ \hline 3&3 &3 &3 &4 &5&6 \\ \hline 4 & & &&4&5&6 \\ \hline 5 &5 &5&5&5&5& \\ \hline 6 &6&&&6&6&6 \\ \hline \end{array}$ I left out some values to leave some work to do for you. 
The probability for each combination is $p_{ij}=\frac{1}{36}$. The expected value then is $E(x)=\sum_{i=1}^6\sum_{j=1}^6 p_{ij} \cdot \max(x_i,x_j)$.

A start: Let $X$ be the "larger" number when you roll two fair $n$-sided dice. Then $$\Pr(X=k)=\Pr(X\le k)-\Pr(X\le k-1).$$ But the probability that $X$ is $\le a$ is the probability both dice are $\le a$. This is $\frac{a^2}{n^2}$.

Remark: There are easier (and arguably better) ways to show that $\Pr(X=k)=\frac{2k-1}{n^2}$. But the above trick is a useful one. The same idea works for tossing $d$ $n$-sided dice. The probability that the maximum $X$ is $k$ is $\frac{k^d}{n^d}-\frac{(k-1)^d}{n^d}$.
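A small script of my own that evaluates the resulting closed form, $E[\max] = \sum_a a\,\frac{a^n-(a-1)^n}{s^n}$, for $n$ fair $s$-sided dice:

```python
# Exact expected value of the maximum of n fair s-sided dice,
# using P(max = a) = (a^n - (a-1)^n) / s^n from the answers above.
def expected_max(n: int, s: int) -> float:
    return sum(a * (a**n - (a - 1)**n) for a in range(1, s + 1)) / s**n

print(expected_max(2, 6))   # roll 2d6, keep the higher: 161/36 ≈ 4.4722
print(expected_max(1, 6))   # sanity check: a single d6 has mean 3.5
```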
2019-10-20 02:50:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8720001578330994, "perplexity": 223.04828978898834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986702077.71/warc/CC-MAIN-20191020024805-20191020052305-00414.warc.gz"}
https://codereview.stackexchange.com/questions/248359/concatenate-several-csv-files-in-a-single-dataframe
# Concatenate several CSV files in a single dataframe I have currently 600 CSV files (and this number will grow) of 50K lines each i would like to put in one single dataframe. I did this, it works well and it takes 3 minutes : colNames = ['COLUMN_A', 'COLUMN_B',...,'COLUMN_Z'] folder = 'PATH_TO_FOLDER' # Dictionnary of type for each column of the csv which is not string dictTypes = {'COLUMN_B' : bool,'COLUMN_D' :int, ... ,'COLUMN_Y':float} try: # Get all the column names, if it's not in the dict of type, it's a string and we add it to the dict dictTypes.update({col: str for col in colNames if col not in dictTypes}) except: print('Problem with the column names.') # Function allowing to parse the dates from string to date, we put in the read_csv method cache = {} def cached_date_parser(s): if s in cache: return cache[s] dt = pd.to_datetime(s, format='%Y-%m-%d', errors="coerce") cache[s] = dt return dt # Concatenate each df in finalData allFiles = glob.glob(os.path.join(folder, "*.csv")) finalData = pd.DataFrame() finalData = pd.concat([pd.read_csv(file, index_col=False, dtype=dictTypes, parse_dates=[6,14], date_parser=cached_date_parser) for file in allFiles ], ignore_index=True) It takes one minute less without the parsing date thing. So i was wondering if i could improve the speed or it was a standard amount of time regarding the number of files. Thanks ! • I don't expect much of a speed boost from this comment, but it's useful to understand nonetheless. Like any reasonable function of this kind, the pd.concat() function will take not only sequences (eg, list or tuple) but any iterable, so you don't need to create a never-used list. Instead, just give pd.concat() a generator expression -- a lightweight piece of code that pd.concat() will execute on your behalf to populate the data frame. Like this: pd.concat((pd.read_csv(...) for file in allFiles), ...) – FMc Aug 24, 2020 at 18:38 • It's a little bit slower with this but at least i've learned something ! Aug 25, 2020 at 7:52 • Where do you get colNames and folder from? Aug 25, 2020 at 8:26 • Sorry forgot those, one is a list of names, the other one is the path of the folder in a string Aug 25, 2020 at 8:48 • Have u tried replacing date_parser=cached_date_parser with infer_datetime_format=True in the read_csv call? The API document says reading could be faster if the format is correctly inferred. – GZ0 Aug 26, 2020 at 16:15 Here is my untested feedback on your code. Some remarks: • Encapsulate the functionality as a named function. I assumed folder_path as the main "variant" your calling code might want to vary, but your use case might "call" for a different first argument. • Use PEP8 recommandations for variable names. • Comb/separate the different concerns within the function: 1. gather input files 2. handle column types 3. read CSVs and parse dates • Depending on how much each of those concerns grows in size over time, multiple separate functions could organically grow out of these separate paragraphs, ultimately leading to a whole utility package or class (depending on how much "instance" configuration you would need to preserve, moving the column_names and dtypes parameters to object attributes of a class XyzCsvReader's __init__ method.) • Concerning the date parsing: probably the bottleneck is not caused by caching or not, but how often you invoke the heavy machinery behind pd.to_datetime. 
My guess is that only calling it once at the end, but with infer_datetime_format enabled, will be much faster than calling it once per row (even with your manual cache).

import glob
import os
import pandas as pd

def read_csv_folder(folder_path, column_names=None, dtypes=None):  # the def line's name was lost in extraction; any name works
    all_files = glob.glob(os.path.join(folder_path, "*.csv"))
    if column_names is None:
        column_names = [
            'COLUMN_A',
            'COLUMN_B',
            # ...
            'COLUMN_Z']
    if dtypes is None:
        dtypes = {
            'COLUMN_B': bool,
            'COLUMN_D': int,
            'COLUMN_Y': float}
    dtypes.update({col: str for col in column_names if col not in dtypes})
    result = pd.concat(
        # the read call was lost in extraction; reconstructed from the question's read_csv arguments
        (pd.read_csv(file, index_col=False, dtype=dtypes) for file in all_files),
        ignore_index=True)
    # untested pseudo-code, but idea: call to_datetime only once
    result['date'] = pd.to_datetime(
        result[[6, 14]],
        infer_datetime_format=True,
        errors='coerce')
    return result

# use as

Edit: as suggested by user FMc in their comment, switch from a list comprehension to a generator expression within pd.concat to not create an unneeded list.

• Thanks, same idea as GZ0 in the comment with the infer_datetime_format=True, but it's better by a few seconds with your untested idea. Aug 27, 2020 at 8:40
2023-03-25 05:38:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36963438987731934, "perplexity": 4986.300772309475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945315.31/warc/CC-MAIN-20230325033306-20230325063306-00714.warc.gz"}
https://tex.stackexchange.com/questions/361159/how-to-draw-a-pretty-tree-diagram-i-already-did-one-but-it-is-very-ugly
# How to draw a pretty tree-diagram (I already did one, but it is very ugly) \documentclass{standalone} \usepackage{forest} \usepackage{rotating} \begin{document} \centering \begin{forest} for tree={ if level=0{ align=center, l sep=20mm, }{% align={@{}C{1.5em}@{}}, edge path={ \noexpand\path [draw, \forestoption{edge}] (!u.parent anchor) -- +(0,-5mm) -| (.child anchor)\forestoption{edge label}; }, }, draw, font=\sffamily\bfseries, parent anchor=south, child anchor=north, l sep=10mm, edge={thick, rounded corners=1pt}, thick, inner color=gray!5, outer color=gray!20, rounded corners=2pt, fzr/.style={ alias=fzr, align=center, child anchor=west, fill=green!25, edge path={ \noexpand\path[\forestoption{edge}] ([yshift=-1em]!u.parent anchor) -- (.child anchor)\forestoption{edge label}; }, }, } [manager, alias=master, align=center % [expatriate,fzr] [\rotatebox{90}{bureau}] [\rotatebox{90}{production} [\rotatebox{90}{line}] ] [\rotatebox{90}{finance}] [\rotatebox{90}{quality},align=center %[quality supervisor,fzr] [\rotatebox{90}{laboratory}] [\rotatebox{90}{review}] ] [\rotatebox{90}{supply} [\rotatebox{90}{material}] [\rotatebox{90}{\parbox{2.5cm}{Semi-finished\\ products}}] [\rotatebox{90}{\parbox{2.5cm}{Finished\\ product}}] ] ] \node [draw,font=\sffamily\bfseries,thick,fill=green!25,rounded corners=2pt,xshift=25mm,] (fuzeren) [yshift=-1.3em,] {expatriate}; \path [draw,thick] ([yshift=-.5em]!master.south) -- (fuzeren); \end{forest} \end{document} It appears to be something like this: In fact I modified the code from someone and I don't really understand everything inside the code. The diagram I desire is: 1. The 'manager' aligns with 'quality' and the 'supply' aligns with 'semi finished products' (I already let the 'quality' align = center. I don't know why it doesn't align with manager). 2. The turnings of the connecting line should not have fillet, to let the intersection to be straight right. 3. It should spare some place for the 'expatriat' for that it doesn't touch the horizontal line. 4. The frame of each level should be aligned by the upper border. Anyway, would someone give a solution to help achieve my desired tree-diagram? • Who's someone? Please attribute code and provide a link to the original. – cfr Mar 30 '17 at 12:27 • See the Forest manual for better ways to rotate nodes. You don't want to use \rotatebox here, I don't think. – cfr Mar 30 '17 at 13:47 • Don't you get a compilation error? If we had the original source, we might at least be able to compare them. – cfr Mar 30 '17 at 23:34 • You didn't test this before uploading, did you? How is the C column type defined? – cfr Mar 30 '17 at 23:35 • The original code uses ctex, a chinese language package and all contents are in chinese. The original code, if you want, is here : paste.ubuntu.com/24288316 Thanks a lot. – jiexiang wen Mar 31 '17 at 14:53 Without working code, with no clue where the original source with missing stuff might be found, it is easier to just start from scratch. Use the edges library for forked edges. Have Forest do the rotation. Have Forest place the expatriate. Then the code is much simpler, cleaner and more straightforward. If you don't understand something in the code, look it up in the manual. If you don't understand the explanation, ask. If you use code from somebody else, attribute it. People have names. They are not anonymous some-bodies. 
\documentclass[border=10pt]{standalone} \usepackage[edges]{forest} \begin{document} \begin{forest} forked edges, for tree={ edge+={thick}, inner color=gray!5, outer color=gray!20, rounded corners=2pt, draw, thick, tier/.option=level, align=center, font=\sffamily, }, where level<=1{}{ rotate=90, anchor=east, }, [manager [, coordinate, calign with current [bureau] [production [line] ] [finance] [quality, calign with current [laboratory] [review] ] [supply [material] [Semi-finished\\products, calign with current] [Finished\\product] ] ] [expatriate, inner color=green!10, outer color=green!25, child anchor=west, edge path'={(!u.parent anchor) -| (.child anchor)}, before drawing tree={y'+=7.5pt} ] ] \end{forest} \end{document} • Thank you. I have provided the original code in comment. – jiexiang wen Mar 31 '17 at 14:57 • And the source is a friend, he wanted to do this and his code does't run neither, I modified into english and post here for an answer. – jiexiang wen Mar 31 '17 at 15:05
2019-08-20 05:02:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5196300745010376, "perplexity": 4913.866983804146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315222.56/warc/CC-MAIN-20190820045314-20190820071314-00411.warc.gz"}
https://aitopics.org/mlt?cdid=arxivorg%3A0D86D0F0&dimension=pagetext
to ### Quality Evaluation of GANs Using Cross Local Intrinsic Dimensionality Generative Adversarial Networks (GANs) are an elegant mechanism for data generation. However, a key challenge when using GANs is how to best measure their ability to generate realistic data. In this paper, we demonstrate that an intrinsic dimensional characterization of the data space learned by a GAN model leads to an effective evaluation metric for GAN quality. In particular, we propose a new evaluation measure, CrossLID, that assesses the local intrinsic dimensionality (LID) of real-world data with respect to neighborhoods found in GAN-generated samples. Intuitively, CrossLID measures the degree to which manifolds of two data distributions coincide with each other. In experiments on 4 benchmark image datasets, we compare our proposed measure to several state-of-the-art evaluation metrics. Our experiments show that CrossLID is strongly correlated with the progress of GAN training, is sensitive to mode collapse, is robust to small-scale noise and image transformations, and robust to sample size. Furthermore, we show how CrossLID can be used within the GAN training process to improve generation quality. ### Manifold regularization with GANs for semi-supervised learning Generative Adversarial Networks are powerful generative models that are able to model the manifold of natural images. We leverage this property to perform manifold regularization by approximating a variant of the Laplacian norm using a Monte Carlo approximation that is easily computed with the GAN. When incorporated into the semi-supervised feature-matching GAN we achieve state-of-the-art results for GAN-based semi-supervised learning on CIFAR-10 and SVHN benchmarks, with a method that is significantly easier to implement than competing methods. We also find that manifold regularization improves the quality of generated images, and is affected by the quality of the GAN used to approximate the regularizer. ### Variational Approaches for Auto-Encoding Generative Adversarial Networks Auto-encoding generative adversarial networks (GANs) combine the standard GAN algorithm, which discriminates between real and model-generated data, with a reconstruction loss given by an auto-encoder. Such models aim to prevent mode collapse in the learned generative model by ensuring that it is grounded in all the available training data. In this paper, we develop a principle upon which auto-encoders can be combined with generative adversarial networks by exploiting the hierarchical structure of the generative model. The underlying principle shows that variational inference can be used a basic tool for learning, but with the in- tractable likelihood replaced by a synthetic likelihood, and the unknown posterior distribution replaced by an implicit distribution; both synthetic likelihoods and implicit posterior distributions can be learned using discriminators. This allows us to develop a natural fusion of variational auto-encoders and generative adversarial networks, combining the best of both these methods. We describe a unified objective for optimization, discuss the constraints needed to guide learning, connect to the wide range of existing work, and use a battery of tests to systematically and quantitatively assess the performance of our method. ### Perturbative GAN: GAN with Perturbation Layers Perturbative GAN, which replaces convolution layers of existing convolutional GANs (DCGAN, WGAN-GP, BIGGAN, etc.) 
with perturbation layers that add a fixed noise mask, is proposed. Compared with the convolutional GANs, the number of parameters to be trained is smaller, the convergence of training is faster, the inception score of generated images is higher, and the overall training cost is reduced. Algorithmic generation of the noise masks is also proposed, with which the training, as well as the generation, can be boosted with hardware acceleration. Perturbative GAN is evaluated using conventional datasets (CIFAR10, LSUN, ImageNet), both in the cases when a perturbation layer is adopted only for Generators and when it is introduced to both Generator and Discriminator.

### ChainGAN: A sequential approach to GANs

We propose a new architecture and training methodology for generative adversarial networks. Current approaches attempt to learn the transformation from a noise sample to a generated data sample in one shot. Our proposed generator architecture, called $\textit{ChainGAN}$, uses a two-step process. It first attempts to transform a noise vector into a crude sample, similar to a traditional generator. Next, a chain of networks, called $\textit{editors}$, attempt to sequentially enhance this sample. We train each of these units independently, instead of with end-to-end backpropagation on the entire chain. Our model is robust, efficient, and flexible as we can apply it to various network architectures. We provide rationale for our choices and experimentally evaluate our model, achieving competitive results on several datasets.
2022-01-20 05:48:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.537478506565094, "perplexity": 1010.6213934574956}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301720.45/warc/CC-MAIN-20220120035934-20220120065934-00120.warc.gz"}
http://es.mathworks.com/help/robust/ref/psinfo.html?nocookie=true
# psinfo Inquire about polytopic or parameter-dependent systems created with `psys` ## Syntax ```psinfo(ps) [type,k,ns,ni,no] = psinfo(ps) pv = psinfo(ps,'par') sk = psinfo(ps,'sys',k) sys = psinfo(ps,'eval',p) ``` ## Description `psinfo ` is a multi-usage function for queries about a polytopic or parameter-dependent system `ps` created with `psys`. It performs the following operations depending on the calling sequence: • `psinfo(ps)` displays the type of system (affine or polytopic); the number `k` of `SYSTEM` matrices involved in its definition; and the numbers of `ns`, `ni`, `no` of states, inputs, and outputs of the system. This information can be optionally stored in MATLAB® variables by providing output arguments. • `pv = psinfo(ps,'par')` returns the parameter vector description (for parameter-dependent systems only). • `sk = psinfo(ps,'sys',k)` returns the k-th `SYSTEM` matrix involved in the definition of `ps`. The ranking k is relative to the list of systems `syslist` used in `psys`. • `sys = psinfo(ps,'eval',p)` instantiates the system for a given vector p of parameter values or polytopic coordinates. For affine parameter-dependent systems defined by the `SYSTEM` matrices S0, S1, . . ., Sn, the entries of `p` should be real parameter values p1, . . ., pn and the result is the LTI system of `SYSTEM` matrix S(p) = S0 + p1S1 + . . .+ pnSn For polytopic systems with `SYSTEM` matrix ranging in Co{S1, . . ., Sn}, the entries of `p` should be polytopic coordinates p1, . . ., pn satisfying pj ≥ 0 and the result is the interpolated LTI system of `SYSTEM` matrix $S=\frac{{p}_{1}{S}_{1}+\cdots +{p}_{n}{S}_{n}}{{p}_{1}+\cdots +{p}_{n}}$
2015-05-06 17:44:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8108884692192078, "perplexity": 2428.035381164915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430458969644.56/warc/CC-MAIN-20150501054249-00022-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-10-exponents-and-radicals-review-exercises-chapter-10-page-693/1
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)

By the Product Rule of radicals, which is given by $\sqrt[m]{x}\cdot\sqrt[m]{y}=\sqrt[m]{xy},$ the given statement is $\text{TRUE}.$
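As a quick worked instance of the rule (added here for illustration, taking $m=3$): $\sqrt[3]{2}\cdot\sqrt[3]{4}=\sqrt[3]{2\cdot 4}=\sqrt[3]{8}=2.$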
2020-04-08 16:43:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8470041751861572, "perplexity": 537.4119410260097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371818008.97/warc/CC-MAIN-20200408135412-20200408165912-00367.warc.gz"}
http://www.charmpeach.com/stochastic-processes/solutions-to-stochastic-processes-ch-1/18/
# Solutions to Stochastic Processes Ch.1 Solutions to Stochastic Processes Sheldon M. Ross Second Edition(pdf) Since there is no official solution manual for this book, I handcrafted the solutions by myself. Some solutions were referred from web, most copyright of which are implicit, can’t be listed clearly. Many thanks to those authors! Hope these solutions be helpful, but No Correctness or Accuracy Guaranteed. Comments are welcomed. Excerpts and links may be used, provided that full and clear credit is given. 1.1 Let $$N$$ denote a nonnegative integer-valued random variable. Show that $$E[N]= \sum_{k=1}^{\infty} P\{N\geq k\} = \sum_{k=0}^{\infty} P\{N > k\}.$$ In general show that if $$X$$ is nonnegative with distribution $$F$$, then $$E[X] = \int_{0}^{\infty}\overline{F}(x)dx$$ and $$E[X^n] = \int_{0}^{\infty}nx^{n-1}\overline{F}(x)dx.$$ $$Proof:$$ \begin{align} E[N] &= \sum_{k=0}^{\infty}kP\{N=k\} \\ &= \sum_{k=0}^{\infty}k[P\{N \geq k\} – P\{N \geq k+1\} ] \\ &= P\{N \geq 1\} – P\{N \geq 2\} + 2\cdot P\{N \geq 2\} – 2\cdot P\{N \geq 3\} + \dots \\ &= \sum_{k=1}^{\infty} P\{N\geq k\} \\ &= \sum_{k=0}^{\infty} P\{N > k\}. \\ E[X^n] &= \int_{0}^{\infty}x^ndF(x) \\ &= \int_{0}^{\infty}\int_{0}^{x}nt^{n-1}dtdF(x) \\ &= \int_{0}^{\infty}\int_{t}^{\infty} nt^{n-1} dF(x)dt \\ &= \int_{0}^{\infty}nt^{n-1}\cdot [F(\infty) – F(t)]dt \\ &= \int_{0}^{\infty}nt^{n-1}\overline{F}(t)dt \end{align} Let $$n=1$$, we obtain $$E[X] = \int_{0}^{\infty}\overline{F}(x)dx$$. 1.2 If $$X$$ is a continuous random variable having distribution $$F$$ show that. (a) $$F(X)$$ is uniformly distributed over(0, 1), (b) if $$U$$ is a uniform (0, 1) random variable, then $$F^{-1}(U)$$ has distribution $$F$$, where $$F^{-1}(x)$$ is that value of $$y$$ such that $$F(y)=x$$ (a) Let $$Z = F(X)$$, \begin{align} F_Z(x) &= P\{Z \leq x\} = P\{F_X(X) \leq x\} \\ &= P \{X \leq F_X^{-1}(x)\} \quad (F(x) \text{ is invertible and non-decreasing})\\ &= F_X(F_X^{-1}(x))\\ &= x \end{align} (b) Let $$Z = F^{-1}(U)$$, \begin{align} F_Z(x) &= P\{Z \leq x\} = P\{F^{-1}(U) \leq x\} \\ &= P \{U \leq F(x)\} \quad (F(x) \text{ is invertible and non-decreasing})\\ &= F_U(F(x))\\ &= F(x) \end{align} 1.3 Let $$X_n$$ denote a binomial random variable with parameters $$(n, p_n), n \geq 1$$ If $$np_n \rightarrow \lambda$$ as $$n \rightarrow \infty$$, show that $$P\{X^n = i\} \rightarrow e^{-\lambda}\lambda^i/i! 
\quad as\enspace n \rightarrow \infty.$$ $$Proof:$$ \begin{align} \lim_{n \to \infty}P\{X_n = i\} &= \lim_{n \to \infty} {n \choose i}p_n^i(1-p_n)^{n-i} \\ &=\lim_{n \to \infty} \frac{n(n-1)\dots (n-i+1)}{i!}\frac{(np_n)^i}{n^i}(1-\frac{np_n}{n})^n( 1-\frac{np_n}{n})^{-i}\\ &= \lim_{n \to \infty}\frac{(np_n)^i}{i!}[1 \cdot (1 – \frac{1}{n}) \dots (1 – \frac{i – 1}{n})](1-\frac{np_n}{n})^n( 1-\frac{np_n}{n})^{-i} \\ &=\frac{\lambda ^i}{i!} \cdot 1 \cdot e^{-\lambda} \cdot 1 \\ &= \frac{e^{-\lambda}\lambda ^i}{i!} \end{align} 1.4 Compute the mean and variance of a binomial random variable with parameters $$n$$ and $$p$$ \begin{align} E[N] &= \sum_{k=0}^{n} k{n \choose k}p^k(1-p)^{n-k} \\ &= np\sum_{k=1}^{n} {{n-1} \choose {k-1}}p^{k-1}(1-p)^{n-k} \\ &= np\sum_{k=0}^{n-1} {{n-1} \choose k}p^{k}(1-p)^{n-1-k} \\ & = np(p + 1 – p)^{n-1} = np\\ E[N^2] &= \sum_{k=0}^{n} k{n \choose k}p^k(1-p)^{n-k} \\ &= np\sum_{k=1}^{n}k {{n-1} \choose {k-1}}p^{k-1}(1-p)^{n-k} \\ &= np\sum_{k=0}^{n-1}(k+1) {{n-1} \choose k}p^{k}(1-p)^{n-1-k} \\ &= np[(n-1)p + (p + 1 – p)^{n-2}]\\ &= np(1-p) + n^2p^2\\ Var(N) &= E[N^2] – E^2[N] = np(1-p) \end{align} $$\text{Or, let } X_i\sim B(1, p), X_i \text{ are independent from each other, }Y = \sum_{i=1}^{n}X_i, \\ \text{thus } Y\sim B(n, p)$$ \begin{align} E[Y] &= \sum_{i=1}^{n}E[X_i] = np \\ Var(Y) &= \sum_{i=1}^{n}Var(X_i) = np(1-p) \\ \end{align} 1.6 (a) Hint: max(X_1, \dots, X_{n-1}) = F^{n-1}(X)\\ \text{Let }\\ I_i = \left\{ \begin{array}{ll} 1 \quad X_n \text{ is a record} \\ 0 \quad X_n \text{ is not a record} \\ \end{array} \right. \\ \begin{align} P\{I_i = 1\} &= \int_{-\infty}^{+\infty}P\{I_i = 1 | X_i=t\}dF(t) \\ &= \int_{-\infty}^{+\infty} F^{i-1}(t)dF(t) \\ &=\int_0^1 x^{i-1}dx \\ &= \frac{1}{i} \end{align} \\ \text{thus, } I_i \sim B(1, \frac{1}{i}) 1.7 Let $$X$$ denote the number of white balls selected when $$k$$ balls are chosen at random from an urn containing $$n$$ white and $$m$$ black balls. Compute $$E[X]$$ and $$Var(X)$$ . Obviously, $$X \sim H(m+n, n, k)$$ Thus, \begin{align} E(X) &= \frac{kn}{m+n}\\ Var(X) &= \frac{kmn}{(m+n)^2} (\frac{n+m-k}{n-k}) \end{align} More about Hypergeometric Distribution from wikipedia.org, I’ve also written down the derivation in this post. 1.8 Let $$X_1$$ and $$X_2$$ be independent Poisson random variables with means $$\lambda_1$$ and $$\lambda_2$$. (a) Find the distribution of $$X_1 + X_2$$ (b) Compute the conditional distribution of $$X_1$$ given that $$X_1 + X_2 = n$$ (a) Let $$Z=X_1 + X_2$$, \begin{align} P\{z=i\} &= \sum_{k=0}^i \frac{\lambda_1^k \lambda_2^{i-k}}{k!(i-k)!}e^{-(\lambda_1+\lambda_2)} \\ &= \frac{ e^{-(\lambda_1+\lambda_2)} }{i!} \sum_{k=0}^i \frac{i!}{k!(i-k)!} \lambda_1^k \lambda_2^{i-k} \\ &= \frac{(\lambda_1 + \lambda_2)^i e^{-(\lambda_1+\lambda_2)} }{i!}, \quad i=0, 1, 2, \dots , \\ \end{align} Thus $$X_1 + X_2 \sim \pi(\lambda_1 + \lambda_2)$$ (b) \begin{align} P\{X_1 = k | X_1 + X_2 = n\} &= \frac{P\{X_1 = k\} P\{X_2 = n-k\} }{P\{X_1 + X_2 = n\}} \\ &= { n \choose k} \lambda_1^k \lambda_2^{n-k} \end{align} 1.9 A round-robin tournament of $$n$$ contestants is one in which each of the $${n \choose 2}$$ pairs of contestants plays each other exactly once, with the outcome of any play being that one of the contestants wins and the other loses. Suppose the players are initially numbered $$1, 2, \dots, n$$. The permutation $$i_1, \dots, i_n$$ is called a Hamiltonian permutation if $$i_1$$ beats $$i_2$$, $$i_2$$ beats $$i_3, \dots$$ and $$i_{n-1}$$ beats $$i_n$$. 
Show that there is an outcome of the round-robin for which the number of Hamiltonian is at least $$n!/2^{n-1}$$. (Hint. Use the probabilistic method.) $$Proof:$$ Suppose $$X$$ be the permutation number of a n contestants Hamiltonian permutation which start at particular contestant, the expectation is $$E_n$$, the total number’s expectation will be $$nE_n$$. Also we suppose each game equally likely to be won by either contestant, independently. Thus, \begin{align} E_n &= \sum {n-1 \choose k}(\frac{1}{2})^k (\frac{1}{2})^{n-1-k} k E_{n-1} \\ &= \frac{(n-1)E_{n-1}}{2} \sum {n-2 \choose k-1} (\frac{1}{2})^{k-1} (\frac{1}{2})^{n-1-k} \\ &= \frac{(n-1)E_{n-1}}{2} \\ &= \frac{(n-1)!}{2^n}E_1 \\ \end{align} Obviously, $$E_1 = 1, nE_n = n!/2^{n-1}$$. Since at least one of the possible values of a random variable must be at least as large as its mean, proven. 1.11 If $$X$$ is a nonnegative integer-valued random variable then the function $$P(z)$$, defined for $$|z| \leq 1$$ by $$P(z) = E[z^X] = \sum_{j=0}^{\infty} z^j P\{X=j\}$$ is called the probability generating function of $$X$$ (a) Show that $$\frac{d^k}{dz^k}P(z)_{|z=0} = k!P\{X=k\}.$$ (b) With 0 being considered even, show that $$P\{X\ is\ even\} = \frac{P(-1) + P(1)}{2}$$ (c) If $$X$$ is binomial with parameters $$n$$ and $$p$$, show that $$P\{X\ is\ even\} = \frac{1 + (1-2p)^n}{2}$$ (d) If $$X$$ is Poisson with mean $$\lambda$$, show that $$P\{X\ is\ even\} = \frac{1 + e^{-2\lambda}}{2}$$ (e) If $$X$$ is geometric with parameter p, show that, $$P\{X\ is\ even\} = \frac{1-p}{2-p}$$ (f) If $$X$$ is a negative binomial random variable with parameters $$r$$ and $$p$$, show that $$P\{X\ is\ even\} = \frac{1}{2} [1 + (-1)^r (\frac{p}{2-p})^r]$$ (a) $$\frac{d^k}{dz^k}P(z)_{|z=0} = k!P\{X=k\} + \sum_{j=k+1}^{\infty}z^{j-k}P\{X=j\} = k!P\{X=k\}$$ (b) $$\frac{P(-1) + P(1)}{2} = \frac{1}{2}\sum_{j=0, 2, 4, \dots}^{\infty}2P\{X=j\} = P\{X\ is\ even\}$$ (c) \begin{align} P(1) &= \sum_{j=0}^{n} 1^j {n \choose j} p^j (1-p)^{n-j} = 1 \\ P(-1) &= \sum_{j=0}^{n} {n \choose j} (-p)^j (1-p)^{n-j} = (1-2p)^n \\ P\{X\ is\ even\} &= \frac{P(-1) + P(1)}{2} = \frac{1 + (1-2p)^n}{2} \\ \end{align} (d) \begin{align} P(1) &= \sum_{j=0}^{\infty} 1^j \frac{\lambda ^j e^{-\lambda}}{j!} = 1 \\ P(-1) &= e^{-2\lambda}\sum_{j=0}^{\infty} \frac{(-\lambda) ^j e^{\lambda}}{j!} = e^{-2\lambda} \\ P\{X\ is\ even\} & = \frac{P(-1) + P(1)}{2} = \frac{1 + e^{-2\lambda}}{2} \end{align} (e) \begin{align} P(1) &= 1 \\ P(-1) &= \sum_{j=1}^{\infty} (-1)^j (1-p)^(j-1) p \\ &= -\frac{p}{2-p} \sum_{j=1}^{\infty}(p-1)^(j-1) (2-p) = -\frac{p}{2-p} \\ P\{X\ is\ even\} & = \frac{P(-1) + P(1)}{2} = \frac{1-p}{2-p} \end{align} (f) \begin{align} P(1) &= 1 \\ P(-1) &= \sum_{j=r}^{\infty} (-1)^j {j-1 \choose r-1} p^r (1-p)^{j-r}\\ &= (-1)^r (\frac{p}{2-p})^r \sum_{j=r}^{\infty} (2-p)^r (p-1)^{j-r} = (-1)^r (\frac{p}{2-p})^r \\ P\{X\ is\ even\} & = \frac{P(-1) + P(1)}{2} = \frac{1}{2} [1 + (-1)^r (\frac{p}{2-p})^r] \end{align} 1.12 If $$P\{0 \leq X \leq a\} = 1$$, show that $$Var(X) \leq a^2 / 4.$$ $$Proof:$$ \begin{align} Var(X) &= E[X^2] – E^2[X] = \int_0^a x^2 dF(x) – [\int_0^a x dF(x)]^2 \\ &= x^2 F(x)|_0^a – 2\int_0^a xF(x) dx – [xF(x)|_0^a – \int_0^a F(x) dx ]^2 \\ &= -[\int_0^a F(x) dx ]^2 – 2\int_0^a (x-a)F(x) dx \\ &\leq -[\int_0^a F(x) dx ]^2 – \frac{2}{a}\int_0^a (x-a)dx\int_0^a F(x) dx \quad \text{(Chebyshev’s sum inequality)}\\ &= -t^2 + at \quad (t = \int_0^a F(x) dx) \\ \end{align} When $$t = a/2$$ we get the max value, which is $$a^2 / 4$$, proven. 
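A quick check, added here and not part of the original solution, that the bound in 1.12 is tight: the two-point distribution putting mass $$1/2$$ at each endpoint attains it, since if $$P\{X=0\}=P\{X=a\}=1/2$$ then $$E[X]=a/2$$, $$E[X^2]=a^2/2$$, and so $$Var(X)=a^2/2 - a^2/4 = a^2/4$$.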
1.13 Consider the following method of shuffling a deck of $$n$$ playing cards, numbered 1 through $$n$$. Take the top card from the deck and then replace it so that it is equally likely to be put under exactly $$k$$ cards, for $$k = 0, 1, \dots , n-1$$. Continue doing this operation until the card that was initially on the bottom of the deck is now on top. Then do it one more time and stop. (a) Suppose that at some point there are $$k$$ cards beneath the one that was originally on the bottom of the deck. Given this set of $$k$$ cards explain why each of the possible $$k!$$ orderings is equally likely to be the ordering of last $$k$$ cards. (b) Conclude that the final ordering of the deck is equally likely to be any of the $$N!$$ possible orderings. (c) Find the expected number of times the shuffling operation is performed. (a) Consider there are k numbered positions that can be inserted at the beginning, every card has the same probability to be inserted into any of the k position. That’s to say every ordering has the same probability. (b) Let k = n. (c) Let $$X_i$$ denote the number of operations performed to add the ith card to the $$i-1$$ cards beneath the “bottom card”. Obviously, $$X_i \sim G(i/n)$$, and total number is $$X$$, $$E[X] = E[\sum_{i=1}^{n-1} X_i] + 1 = \sum_{i=1}^{n-1} E[X_i] + 1 = \sum_{i=1}^{n} \frac{n}{i}$$ (c)Wrong attempt: Let $$X_i$$ denote number of cards beneath the “bottom card” in the $$ith$$ perform, then we have \begin{align} E[X_i] &= E[E[X_i| X_{i-1} = k]] \\ &= E[(k+1)\frac{k+1}{n} + k(1-\frac{k+1}{n})] \\ &= \frac{n+1}{n}E_{i-1} + \frac{1}{n}\\ &= (\frac{n+1}{n})^i – 1 \quad (E[X_1] = 1/n) \\ \end{align} Let $$E[X_i] = n – 1$$, solved $$i + 1 = \ln {(n+1)}/(\ln{(n+1)} – \ln{n})$$, which is the expected times. 1.15 Let $$F$$ be a continuous distribution function and let $$U$$ be a uniform (0, 1) random variable. (a) If $$X= F^{-1}(U)$$, show that $$X$$ has distribution function $$F$$. (b) Show that $$-\ln{U}$$ is an exponential random variable with mean 1. (a) See Problem 1.2(b). (b) Since $$F^{-1}(U) = -\ln{U}, F(x) = e^{-x}$$, thus, $$(-\ln{U}) \sim Exponential(1)$$ $$E[-\ln{U}] = 1$$ 1.16 Let $$f(x)$$ and $$g(x)$$ be probability density functions, and suppose that for some constant $$c$$, $$f(x) \leq cg(x)$$ for all x. Suppose we can generate random variables having density function $$g$$, and consider the following algorithm. Step 1: Generate $$Y$$, a random variable having density function $$g$$. Step 2: Generate $$U$$, a uniform (0, 1) random variable. Step 3: If $$U \leq \frac{f(Y)}{cg(Y)}$$ set $$X = Y$$. Otherwise, go back to Step 1. Assuming that successively generated random variables are independent, show that: (a) $$X$$ has density function $$f$$ (b) the number of iterations of the algorithm needed to generate $$X$$ is a geometric random variable with mean $$c$$ (b) Suppose the $$p$$ is probability to generate $$X$$, then, \begin{align} p &= P\{U \leq \frac{f(Y)}{cg(Y)}\} \\ &= \int_{-\infty}^{\infty} \frac{f(y)}{cg(y)} g(y)dy \\ &= \frac{1}{c} \int_{-\infty}{\infty} f(y)dy \\ &= \frac{1}{c} \end{align} Obviously, the number of iterations needed is $$G(\frac{1}{c})$$, whose mean is $$c$$. (a) \begin{align} f_x(y) &= P\{Y = y | U \leq \frac{f(Y)}{cg(Y)}\} \\ &= cP\{Y = y, U \leq \frac{f(y)}{cg(y)}\} \\ &= cg(y)P\{U \leq \frac{f(y)}{cg(y)}\} \\ &= cg(y) \frac{f(y)}{cg(y)} \\ &= f(y) \end{align} This is called Acceptance-Rejection Method, refer this paper for detail. 
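For readers who want to see the acceptance-rejection algorithm of 1.16 in running form, here is a small Python sketch (added here for illustration; it is not from the book). It samples from the target density $$f(x)=2x$$ on $$(0,1)$$ using the uniform proposal $$g(x)=1$$ with $$c=2$$, so the mean number of iterations should be close to $$c$$, as shown in part (b).

```python
import random

def acceptance_rejection(f, sample_g, g, c, rng=random):
    """Sample from density f, given a proposal density g with f(x) <= c*g(x) for all x."""
    iterations = 0
    while True:
        iterations += 1
        y = sample_g()              # Step 1: generate Y with density g
        u = rng.random()            # Step 2: generate U ~ Uniform(0,1)
        if u <= f(y) / (c * g(y)):  # Step 3: accept Y with probability f(Y)/(c*g(Y))
            return y, iterations

if __name__ == "__main__":
    f = lambda x: 2.0 * x           # target density on (0, 1)
    g = lambda x: 1.0               # uniform proposal density on (0, 1)
    draws = [acceptance_rejection(f, random.random, g, c=2.0) for _ in range(10_000)]
    xs, iters = zip(*draws)
    print("sample mean (should be near 2/3):", sum(xs) / len(xs))
    print("mean iterations (should be near c = 2):", sum(iters) / len(iters))
```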
1.17 Hint: $$P\{X_n \text{is the ith smallest}\} = \int {n-1 \choose i-1}F(x)^{i-1}\overline F(x)^{n-i}dF(x)$$, do partial integration repeatedly, we get the probability is $$1/n$$. This is called Order Statistic, more detail at Wikipedia. 1.18 A coin, which lands on heads with probability $$p$$, is continually flipped. Compute the expected number of flips that are made until a string of $$r$$ heads in a row is obtained. Let $$X$$ denote the number of flips that are made until a string of $$r$$ heads in a row is obtained. Let $$Y$$ denote the number of flips until the first occurrence of tails. Then $$P\{Y=i\}=p^{i-1}(1-p)$$. When $$i \leq r$$, we start over again, $$E(X|Y=i) = i + E(X)$$, and if $$i > r, E[X|Y=i] = r$$. Thus, \begin{align} E[X] &= \sum_{i=1}^{\infty} E[X|Y=i]P\{Y=i\} \\ &= (1-p) \sum_{i=1}^{r}p^{i-1}(i + E[X]) + (1-p)\sum_{i=r+1}^{\infty} p^{i-1}r \\ &= (1-p) \sum_{i=1}^{r}ip^{i-1} + E[X](1-p^r) + rp^r \\ \end{align} Let $$S = \sum_{i=1}^{r}ip^{i-1}$$, then $$(1-p)S = \sum_{i=1}^r p^{i-1} – rp^r = \frac{1-p^r}{1-p} – rp^r$$ . Hence, \begin{align} E[X] &= \frac{1-p^r}{1-p} – rp^r + E[X](1-p^r) + rp^r \\ &= \frac{1-p^r}{1-p} + E[X](1-p^r) \\ &= \frac{1-p^r}{p^r(1-p)} \end{align} 1.19 An urn contains $$a$$ white and $$b$$ black balls. After a ball is drawn, it is returned to the urn if it is white; but if it is black, it is replaced by a white ball from another urn. Let $$M_n$$ denote the expected number of white balls in the urn after the foregoing operation has been repeated $$n$$ times. (a) Derive the recursive equation $$M_{n+1} = (1 – \frac{1}{a+b})M_n + 1.$$ (b) Use part (a) to prove that $$M_n = a + b – b(1 – \frac{1}{a+b})^n$$ (c) What is the probability that the (n+1)st ball drawn is white? (a) Let $$X_n$$ denote the number of white balls after $$n$$ operations, then, \begin{align} M_{n+1} &= E[E[X_{n+1}|X_n=k]] \\ &= E[k\frac{k}{a+b} + (k+1)(1 – \frac{k}{a+b})] \\ &= (1 – \frac{1}{a+b})E[k] + 1 \\ &= (1 – \frac{1}{a+b})M_n + 1 \end{align} (b) \begin{align} M_n – (a+b) &= (1 – \frac{1}{a+b})(M_{n-1} – (a+b)) \\ &= (M_0 – a – b)(1 – \frac{1}{a+b})^n \\ M_n &= a + b – b (1 – \frac{1}{a+b})^n \\ \end{align} (c) Let $$I_n = 1$$ denote the nth ball drawn is white, $$I_n = 0$$ when black. then, \begin{align} P\{(n+1)st \text{ ball is white}\} &= E[I_{n+1}] = E[E[I_{n+1} | X_n = k]] \\ &= \frac{M_n}{a+b} \end{align} 1.20 A Continuous Random Packing Problem Consider the interval $$(0, x)$$ and suppose that we pack in this interval random unit intervals–whose left-hand points are all uniformly distributed over $$(0, x-1)$$ — as follows. Let the first such random interval be $$I_1$$. If $$I_1, \dots , I_k$$ have already been packed in the interval, then the next random unit interval will be packed if it dose not intersect any of the intervals $$I_1, \dots , I_k$$, and the interval will be denoted by $$I_{k+1}$$. If it dose intersect any of the intervals $$I_1, \dots , I_k$$, we disregard it and look at the next random interval. The procedure is continued until there is no more room for additional unit intervals (that is, all the gaps between packed intervals are smaller than 1). Let $$N(x)$$ denote the number of unit intervals packed in $$[0, x]$$ by this method. 
For instance, if $$x=5$$ and the successive random intervals are $$(0.5, 1.5),\ (3.1, 4.1),\ (4, 5),\ (1.7, 2.7)$$, then $$N(5) = 3$$ with packing as follows Let $$M(x) = E[N(x)]$$, Show that $$M$$ satisfies \begin{align} M(x) &= 0 \quad \quad x < 1,\\ M(x) &= \frac{2}{x-1}\int_0^{x-1} M(y)dy + 1, \quad x>1 \end{align} Let $$Y$$ denote the left-hand point of the first interval, $$Y \sim U(0, x-1)$$, and the first interval divide the whole into two parts with length y and x – y -1, hence \begin{align} M(x) &= E[N(x)] = E[E[N(x) | Y]] \\ &= E[N(y) + N(x-y-1) + 1] \\ &= \int_0^{x-1} (\frac{1}{x-1})[M(y) + M(x-y-1) + 1] dy \\ &= \frac{2}{x-1}\int_0^{x-1} M(y)dy + 1, \quad x>1 \end{align} 1.23 Consider a particle that moves along the set of integers in the following manner. If it is presently at $$i$$ then it next moves to $$i + 1$$ with probability $$p$$ and to $$i-1$$ with probability $$1-p$$. Starting at 0, let $$\alpha$$ denote the probability that it ever reaches 1. (a) Argue that $$\alpha = p + (1-p) \alpha^2$$ (b) Show that $$\alpha = \left\{ \begin{array}{ll} 1 \quad p \geq 1/2 \\ p/(1-p) \quad p < 1/2 \\ \end{array} \right.$$ (c) Find the probability that the particle ever reaches $$n, n > 0$$ (d) Suppose that $$p<1/2$$ and also that the particle eventually reaches $$n, n > 0$$. If the particle is presently at $$i, i<n$$, and $$n$$ has not yet been reached, show that the particle will next move to $$i+1$$ with probability $$1-p$$ and to $$i-1$$ with probability $$p$$. That is, show that $$P\{\text{next at } i + 1 | \text{at } i \text{ and will reach n}\} = 1-p$$ (Note that the roles of $$p$$ and $$1-p$$ are interchanged when it is given that $$n$$ is eventually reached) (a) Obviously (b) Solve the equation in (a), get $$\alpha=1$$ or $$\alpha= p/(1-p)$$. Since $$\alpha \leq 1$$, $$\alpha = \left\{ \begin{array}{ll} 1 \quad p \geq 1/2 \\ p/(1-p) \quad p < 1/2 \\ \end{array} \right.$$ (c) $$\alpha^n = \frac{p^n}{(1-p)^n}$$ (d) \begin{align} &P\{\text{next at } i + 1 | \text{at } i \text{ and will reach n}\} \\ &= \frac{P\{\text{at } i \text{ and will reach n} | \text{next at } i + 1 \} \cdot p}{P\{\text{at } i \text{ and will reach n}\} } \\ &= \frac{p\cdot p^{n-i-1}}{(1-p)^{n-i-1}} / \frac{p^{n-i}}{(1-p)^{n-i}} \\ &= 1- p \end{align} 1.24 In Problem 1.23, let $$E[T]$$ denote the expected time until the particle reaches 1. (a) Show that $$E[T] = \left\{ \begin{array}{ll} 1/(2p – 1) \quad p > 1/2 \\ \infty \quad p \leq 1/2 \\ \end{array} \right.$$ (b) Show that, for $$p > 1/2$$, $$Var(T) = \frac{4p(1-p)}{(2p-1)^3}$$ (c) Find the expected time until the particle reaches $$n, n > 0$$. (d) Find the variance of the time at which the particle reaches $$n, n > 0$$. Let $$X_i$$ denote the time a particle at $$i$$ need to take to eventually reaches $$i+1$$. Then all X_i, i \in Z), are independent identically distributed. (a) \begin{align} E[T] &= E[E[T|X_{-1}, X_{0}]] = E[p + (1-p)(1 + X_{-1} + X_{0})] \\ &= 1 + 2(1-p)E[T] = 1/(2p – 1) \end{align} Since \(E[T] \geq 1, when $$p \leq 1/2, E[T]$$ doesn’t exist. 
(b) \begin{align} E[T^2] &= E[E[T^2|X_{-1}, X_{0}]] = E[p + (1-p)(1 + X_{-1} + X_{0})^2] \\ &= 1 + 2(1-p)E[T^2] + 4(1-p)E[T] + 2(1-p)E^2[T] \\ &= \frac{-4p^2 + 6p – 1}{(2p – 1)^3}\\ Var(T) &= E[T^2] – E^2[T] \\ &= \frac{4p(1-p)}{(2p-1)^3} \end{align} (c) $$E = E[\sum_{i=0}^{n-1}X_i] = nE[T] = \frac{n}{2p – 1} \quad (p > 1/2)$$ (d) $$Var = Var(\sum_{i=0}^{n-1}X_i) = nVar(T) = \frac{4np(1-p)}{(2p-1)^3} \quad (p > 1/2)$$ 1.25 Consider a gambler who on each gamble is equally likely to either win or lose 1 unit. Starting with $$i$$ show that the expected time util the gambler’s fortune is either 0 or $$k$$ is $$i(k-i), i = 0, \dots , k$$. (Hint: Let $$M_i$$ denote this expected time and condition on the result of the first gamble) Let $$M_i$$ denote this expected time, then $$M_i = \frac{1}{2}(1 + M_{i-1}) + \frac{1}{2}(1 + M_{i+1}) \\ M_{i+1} – M_{i} = M_{i} – M_{i-1} – 2\\$$ Obviously, $$M_0 = M_k = 0, M_1 = M_{k-1}$$, $$M_{k} – M_{k-1} = M_{1} – M_{0} – 2(k-1)\\$$ Solved, $$M_{1} = k – 1$$, easily we can get $$M_i = i(k-i)$$. 1.26 In the ballot problem compute the probability that $$A$$ is never behind in the count of the votes. We see that $$P_{1,0} = 1, P\{2,1\} = 1/3, P\{3, 1\}= 3/4$$, assume $$P_{n,m} = n/(n+m)$$, it hold when $$n+m=1, (n=1, m=1)$$. If it holds true for $$n+m=k$$, then when $$n + n = k+1$$, \begin{align} P_{n, m} &= \frac{n}{n + m}\frac{n-1}{n+m-1} + \frac{m}{n+m}\frac{n}{n+m-1} \\ &= \frac{n}{m+n} \end{align} Hence, the probability is $$n / (m+n)$$. 1.27 Consider a gambler who wins or loses 1 unit on each play with respective possibilities $$p$$ and $$1-p$$. What is the probability that, starting with $$n$$ units, the gambler will play exactly $$n+2i$$ games before going broke? (Hint: Make use of ballot theorem.) The probability of playing exactly $$n+2i$$ games, $$n+i$$ of which loses, is $${n+2i \choose i}p^{i}(1-p)^{n+i}$$. And given the $$n+2i$$ games, the number of lose must be never behind the number of win from the reverse order. Hence we have the result is, $${n+2i \choose i}p^{i}(1-p)^{n+i} \frac{n+i}{n+2i}$$ 1.28 Verify the formulas given for the mean and variance of an exponential random variable. \begin{align} E[x] &= \int_0^{\infty} x\lambda e^{-\lambda x}dx \\ &= -(x + 1/\lambda)e^{-\lambda x} |_0^{\infty} \\ &= 1/\lambda \\ Var(x) &= E[X^2] – E^2[X]\\ &= \int_0^{\infty} x^2 \lambda e^{-\lambda x}dx – \frac{1}{\lambda^2} \\ &= -(x^2 + 2x/\lambda + 2/\lambda^2)|_0^{\infty} – \frac{1}{\lambda^2} \\ &= 1/\lambda^2 \end{align} 1.29 If $$X_1, X_2, \dots , X_n$$ are independent and identically distributed exponential random variables with parameter $$\lambda$$, show that $$\sum_1^n X_i$$ has a gamma distribution with parameters $$(n, \lambda)$$. That is, show that the density function of $$\sum_1^n X_i$$ is given by $$f(t) = \lambda e^{-\lambda t}(\lambda t)^{n-1} / (n-1)!, \quad t\geq 0$$ The density function holds for $$n=1$$, assume it holds for $$n=k$$, when $$n = k + 1$$, \begin{align} f_{k+1}(t) &= \int_0^{t} f_{k}(x)f_1(t-x)dx \\ &= \int_0^{t} \lambda e^{-\lambda x}(\lambda x)^{k-1} \lambda e^{-\lambda(t-x)} / (k-1)! dx \\ &= \lambda e^{-\lambda t}(\lambda t)^{n} / (n)! \end{align} Proven. 1.30 In Example 1.6(A) if server $$i$$ serves at an exponential rate $$\lambda_i, i= 1, 2$$, compute the probability that Mr. A is the last one out. 
\begin{align} P\{\text{server 1 finish before server 2}\} &= \int_0^{\infty}\lambda e^{-\lambda_2 x} \int_0^{x} \lambda e^{-\lambda_1 t} dtdx \\ &= \frac{\lambda_1}{\lambda_1 + \lambda_2} \\ P(1-P) + (1-P)P &= \frac{2\lambda_1 \lambda_2}{(\lambda_1 + \lambda_2)^2} \end{align} 1.31 If $$X$$ and $$Y$$ are independent exponential random variables with respective means $$1/\lambda_1$$ and $$1/\lambda_2$$, compute the distribution of $$Z=min(X, Y)$$. What is the conditional distribution of $$Z$$ given that $$Z = X$$? \begin{align} F_{min}(z) &= P\{Z \leq z\} = 1 – P\{Z > z\} \\ &= 1 – P\{X >z, Y>z\} \\ &= 1 – [1 – F_X(z)][1 – F_Y(z)] \\ &= 1 – e^{-(\lambda_1 + \lambda_2)z} \\ f_Z &= \left\{ \begin{array}{ll} (\lambda_1 + \lambda_2) e^{-(\lambda_1 + \lambda_2)z} \quad z > 0 \\ 0 \quad z \leq 0 \\ \end{array} \right. \\ f_{Z|Z=X}(x) &= P\{X = x|X < Y\} = \frac{P\{X = x, x < Y\}}{P\{X < Y\}} \\ &= \frac{\lambda_1 + \lambda_2}{\lambda_1} f_X(x)\bar{F}_Y(x) \\ &= (\lambda_1 + \lambda_2)e^{-(\lambda_1+\lambda_2) x} \end{align} 1.32 Show that the only continuous solution of the functional equation $$g(s + t) = g(s) + g(t)$$ is $$g(s) = cs$$. \begin{align} g(0) &= g(0 + 0) – g(0) = 0\\ g(-s) &= g(0) – g(s) = -g(s) \\ f_{-}^{\prime}(s) &= \lim_{h \to 0^{-}}\frac{g(s+h) – g(s)}{h} \\ &= \lim_{h \to 0^{-}} \frac{g(h)}{h} \\ &= \lim_{h \to 0^{-}} \frac{g(-h)}{-h} \\ &= \lim_{h \to 0^{+}} \frac{g(h)}{h} \\ &= \lim_{h \to 0^{+}}\frac{g(s+h) – g(s)}{h} \\ &= f_{+}^{\prime}(s) \end{align} Hence, $$g(s)$$ is differentiable, and the derivative is a constant. The general solution is, $$g(s) = cs + b$$ Since $$g(0) = 0, b = 0$$. 1.33 Derive the distribution of the ith record value for an arbitrary continuous distribution $$F$$. (See Example 1.6(B)) Let $$F(x)$$ denote $$X_i$$’s distribution function, then the distribution function of ith value is $$F^i (x)$$. (I’m not much confident of it.) 1.35 Let $$X$$ be a random variable with probability density function $$f(x)$$, and let $$M(t) = E[e^{tx}]$$ be its moment generating function. The tilted density function $$f_t$$ is denfined by$$f_t(x) = \frac{e^{tx}f(x)}{M(t)}$$ Let $$X_t$$ have density function $$f_t$$. (a) Show that for any function $$h(x)$$ $$E[h(X)] = M(t)E[exp\{-tX_t\}h(X_t)]$$ (b) Show that, for $$t > 0$$, $$P\{X > a\} \leq M(t)e^{-ta}P\{X_t > a\}$$ (c) Show that if $$E[X_{t*}] = a$$, then $$\underset{t}{min} M(t)e^{-ta} = M(t*)e^{-t*a}$$ (a) \begin{align} M(t)E[exp\{-tX_t\}h(X_t)] &= M(t)\int_{-\infty}^{\infty} e^{-tx}h(x)f_t(x)dx \\ &= \int_{-\infty}^{\infty} h(x)f(x)dx \\ &= E[h(X)] \end{align} (b) \begin{align} M(t)e^{-ta}P\{X_t > a\} &= M(t)e^{-ta} \int_{a}^{\infty} \frac{e^{tx}f(x)}{M(t)} dx \\ &= \int_{a}^{\infty} e^{t(x-a)}f(x)dx \\ &\geq \int_{a}^{\infty} f(x)dx \\ &= P\{X > a\} \end{align} (c)\begin{align} f(x, t) &= M(t)e^{-ta} = e^{-ta}\int_{-\infty}^{\infty} e^{tx}f(x)dx \\ f^{\prime}_t(x, t) &= e^{-2ta} (\int_{-\infty}^{\infty} e^{ta} xe^{tx}f(x)dx – a\int_{-\infty}^{\infty} e^{ta} e^{tx}f(x)dx) \\ &= e^{-ta} (\int_{-\infty}^{\infty} xe^{tx}f(x)dx – aM(t))\\ \end{align} Let the derivative equal to 0, we get $$E[X_{t*}] = a$$ . 1.36 Use Jensen’s inequality to prove that the arithmetic mean is at least as large as the geometric mean. 
That is, for nonnegative $$x_i$$, show that $$\sum_{i=1}^{n} x_i/n \geq (\prod_{i=1}^{n} x_i)^{1/n}.$$ Let $$X$$ be random variable, and $$P\{X = x_i\} = 1/n, i=1,2,\dots$$, define a concave function $$f(t) = -\ln{t}$$, then \begin{align} E[f(X)] &= \frac{\sum_{i=1}^{n}-\ln{x_i}}{n} \\ &= -\ln{(\prod_{i=1}^{n}x_i)^{1/n}} \\ f(E[X]) &= -\ln{\frac{\sum_{i=1}^n x_i}{n}} \end{align} According to Jensen’s Inequality, $$E[f(Z)] \geq f(E[Z])$$, then $$\sum_{i=1}^{n} x_i/n \geq (\prod_{i=1}^{n} x_i)^{1/n}$$ 1.38 In Example 1.9(A), determine the expected number of steps until all the states $$1, 2, \dots, m$$ are visited. (Hint: Let $$X_i$$ denote the number of additional steps after $$i$$ of these states have been visited util a total of $$i+1$$ of them have been visited, $$i=0, 1, \dots, m-1$$, and make use of Problem 1.25.) Let $$X_i$$ denote the number of additional steps after $$i$$ of these states have been visited util a total of $$i+1$$ of them have been visited, $$i=0, 1, \dots, m-1$$, then $$E[X_i] = 1 \cdot (m – 1) = m – 1 \\ E[\sum_{i = 0}^{m-1} X_i] = \sum_{i = 0}^{m-1} E[X_i] = m(m-1)$$ 1.40 Suppose that $$r=3$$ in Example 1.9(C) and find the probability that the leaf on the ray of size $$n_1$$ is the last leaf to be visited. $$\frac{1/n_2}{1/n_1 + 1/n_2 + 1/n_3}\frac{1/n_3}{1/n_1 + 1/n_3} + \frac{1/n_3}{1/n_1 + 1/n_2 + 1/n_3}\frac{1/n_2}{1/n_1 + 1/n_2}$$ 1.41 Consider a star graph consisting of a central vertex and $$r$$ rays, with one ray consisting of $$m$$ vertices and the other $$r-1$$ all consisting of $$n$$ vertices. Let $$P_r$$ denote the probability that the leaf on the ray of $$m$$ vertices is the last leaf visited by a particle that starts at 0 and at each step is equally likely to move to any of its neighbors. (a) Find $$P_2$$. (b) Express $$P_r$$ in terms of $$P_{r-1}$$. (a)$$P_2 = \frac{1/n}{1/m + 1/n}$$ (b)$$P_r = \frac{(r-1)/n}{1/m + (r-1)/n}P_{r-1}$$ 1.42 Let $$Y_1, Y_2, \dots$$ be independent and identically distributed with \begin{align} P\{Y_n = 0\} &= \alpha \\ P\{Y_n > y\} &= (1 – \alpha)e^{-y}, \quad y>0 \end{align} Define the random variables $$X_n, n \geq 0$$ by \begin{align} X_0 &= 0\\ X_{n+1} &= \alpha X_n + Y_{n+1} \\ \end{align} Prove that \begin{align} P\{X_n = 0\} &= \alpha^n \\ P\{X_n > x\} &= (1 – \alpha^n)e^{-x}, \quad x>0 \end{align} Obviously, $$Y_n \geq 0, X_n \geq 0, 0 \leq \alpha \leq 1$$. For $$n = 0$$, $$P\{X_0 = 0\} = 1 = \alpha^0 \\ P\{X_0 > x\} = 0 = (1 – \alpha^0)e^{-x} \quad x > 0.$$ The probability density function of $$X_n$$ when $$x>0$$ is $$(1 – P\{X_n > x\})^{\prime} = (1 – \alpha^n)e^{-x}$$ Assume, it holds true for $$n = k$$, then when $$n = k +1$$, \begin{align} P\{X_{k+1} = 0\} &= P\{X_k = 0, Y_{k+1} = 0\} \\ &= P\{X_k = 0\}P\{Y_{k+1} = 0\} \\ &= \alpha^{k+1}\\ P\{X_{k+1} > x\} &= P\{X_{k} = 0\}P\{Y_{k+1} > x\} + \int_0^{\infty} P\{X_{k} = t\}P\{Y_{k+1} > x – t\alpha \} dt \\ &= \alpha^k(1 – \alpha)e^{-x} + (1 – \alpha^k)e^{-x}\\ &= (1 – \alpha^{k+1})e^{-x} \end{align} 1.43 For a nonnegative random variable $$X$$, show that for $$a > 0$$, $$P\{X \geq a\} \leq E[X^t]/a^t$$ Then use this result to show that $$n! \geq (n/e)^n$$ $$Proof:$$ When $$t=0$$, the inequality always hold: $$P\{X \geq a\} \leq E[X^0]/a^0 = 1$$ When $$t>0$$, then $$P\{X \geq a\} = P\{X^t \geq a^t\} \leq E[X^t]/a^t \quad \text{(Markov Inequality)}$$ There seems to be a mistake here, the condition $$t \geq 0$$ missing. 
We can easily construct a variable that conflicts with the inequality (it took me a whole day to realize something might be wrong 🙁): $$P\{X=1\} = P\{X=2\} = 1/2$$ and let $$a = 1, t = -1$$ $$P\{X \geq 1\} = 1 > (2^{-1} \cdot \frac{1}{2} + 1^{-1} \cdot \frac{1}{2})/1 = \frac{3}{4}$$ \begin{align} \sum_{k=0,1,2, \dots} \frac{E[X^k]}{k!} &= E[\sum_{k=0,1,2, \dots} \frac{X^k}{k!} ] \\ &= E[e^X] \geq \frac{E[X^n]}{n!} \\ \end{align} From the above we get $$E[X^n] \geq a^n P\{X \geq a\}$$ and thus $$\frac{a^n P\{X \geq a\}}{n!} \leq E[e^X]$$ Let $$a = n, X \equiv n, \text{so } P\{X \geq a\}=1, E[e^X] = e^n$$, proven.

## 15 Replies to “Solutions to Stochastic Processes Ch.1”

1. Loretta says: I had just started learning stochastic processes when I found your solutions!! It feels like discovering a treasure! Thank you!!!
2. The blog posts are very detailed and the author is impressive; I hope we can discuss more!
3. yan says: Thanks for sharing!! Hoping for more discussion. On Problem 1.26, a small thought of mine, for reference only: solutions of the form (n-a*m)/(n+m) and (n-a*m+1)/(n+1) both satisfy the recurrence. A boundary condition may be needed to decide between them, for example that the probability is 0 when m = n+1. The solution satisfying this condition is (n-m+1)/(n+1).
    1. Jin says: Comment from “eigenvalue” on Zhihu: In 1.26, taking P{2,2} gives 1/3, which does not match the formula given in the text. This page (https://wenku.baidu.com/view/13b72f22aaea998fcc220e6e.html) solves the problem with a classical-probability argument; I also have a method using conditional expectation together with the result that “A stays ahead of B throughout” has probability (n-m)/(n+m). Both give the answer (n-m+1)/(n+1).
4. Damian says: 1.8(b) is wrong.
    1. Jin says: It has been a long time and I have forgotten much of this; please attach a detailed solution so everyone can discuss, thanks!
    1. Damian says: The denominator $$P(X_{1}+X_{2}=n)$$ should ultimately leave a factor of $$(\lambda_{1}+\lambda_{2})^{n}$$.
    2. CY says: It should be $$C_n^k \frac{\lambda_1^k \lambda_2^{n-k}}{(\lambda_1 + \lambda_2)^n}$$
5. Damian says: 1.27 is wrong.
    1. Damian says: By the ballot theorem ($$\frac{p-q}{p+q}$$), the probability of staying ahead should be $$\frac{n+i-i}{n+i+i} = \frac{n}{n+2i}$$
    2. Jin says: Comment from “eigenvalue” on Zhihu: If “going broke” is defined as the gambler’s fortune reaching -1 or below, then the solution in the text is correct; but if it is defined as reaching 0 or below, the answer should be n/(n+2i), the same as Damian’s.
6. Jin says: Comment from “Beneldor” on Zhihu: For Problem 1.12, most people would not think of using Chebyshev’s sum inequality; a simpler approach to the related problem can be found in Mao Shisong’s 《概率论与数理统计教材习题解答》 (the solutions manual to his probability and statistics textbook).
7. Jin says: Comment from “Beneldor” on Zhihu: For Problem 1.26, the guessed general formula is wrong. The correct one is P_{n,m}=(n-m+1)/(n+1).
8. CY says: For the first problem I can use indicator functions: \begin{align*} \mathbb{E}X &= \mathbb{E}[\int_0^X dx] = \mathbb{E}[\int_0^{\infty} \mathbb{I}_{(0, X)}(t) dt ] \\ &= \int_0^{\infty} \mathbb{E}[ \mathbb{I}_{(0, X)}(t)]dt \\ &= \int_0^{\infty} \mathbb{E}[ \mathbb{I}_{(t, \infty)}(X)]dt \\ &= \int_0^{\infty} \mathbb{P}(X \geq t) dt = \int_0^{\infty} \bar{F}(t) dt \end{align*} For the n-th moment one only needs to carry the extra coefficient; otherwise the argument is the same.
2022-11-29 19:03:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 1324.4486095553482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710710.91/warc/CC-MAIN-20221129164449-20221129194449-00835.warc.gz"}
http://www.sciencesoftware.com.cn/NewSoftware_detail.aspx?sid=129
Scientific WorkPlace V6 — LaTeX typesetting software for scientific papers

Scientific WorkPlace supports Windows and Mac systems. With its Mozilla-based architecture, Scientific WorkPlace is more flexible: depending on your publishing and portability needs, you can save and export documents in many formats. Scientific WorkPlace continues to serve as a front end to the LaTeX typesetting program for typesetting complex technical documents, which means you can typeset without having to learn LaTeX syntax. Because of its superior precision and quality, LaTeX is the gold standard for publishers and authors of scientific papers and books. Scientific WorkPlace uses natural mathematical notation to enter and display results, sparing you the syntax of complicated commands. With Scientific WorkPlace you can enter mathematical symbols easily with the mouse, and once you are proficient, keyboard shortcuts handle it just as easily. Scientific WorkPlace includes every symbol in the TeX fonts, which means you can enter any mathematical symbol you need. You do not need to know the TeX names to enter symbols, but if you do know the TeX names of mathematical objects and symbols, you can use them as well. Scientific WorkPlace ships with predefined document shells, each with a different typeset style; most are designed to meet the formatting requirements of specific journals and academic institutions. You can choose the shell that best fits your journal or publisher. If you do not yet know where your work will be published, we recommend starting from a standard LaTeX document shell, which can easily be adapted for your paper. Scientific WorkPlace has built-in basic CSS settings that cannot be changed, but you can replace the referenced files on the official Scientific WorkPlace website; this slightly reduces the file size, but it also means your file can only be read when connected to the network.

System requirements: Windows XP or later; OS X 10.5 or later (with an Intel processor); 800 MB - 1 GB of hard disk space (depending on the type of hard drive and the installation options selected).

The Integration of LaTeX Typesetting and Computer Algebra

With Scientific WorkPlace Version 5, you can create, edit, and typeset mathematical and scientific text more easily than ever before. The software is based on an easy-to-use word processor that completely integrates writing mathematics and text in the same environment. With the built-in computer algebra system, you can perform computations right on the screen.

The Gold Standard for Mathematical, Scientific, and Technical Publishing

In Scientific WorkPlace, you can typeset complex technical documents with LaTeX, the industry standard for mathematics typesetting. Because of its superior precision and quality, publishers and writers of scientific material use LaTeX extensively. When you typeset, LaTeX automatically generates footnotes, indexes, bibliographies, tables of contents, and cross-references. You don’t have to learn LaTeX to produce typeset documents. Many of the more than 150 document shells have been designed to meet the typesetting requirements of specific professional journals and institutions. Scientific WorkPlace automatically saves your documents as LaTeX files. You can concentrate on writing a correct paper; Scientific WorkPlace makes it a beautiful one.

Sharing Your Work Just Got Easier

Scientific WorkPlace now exports documents to RTF format for importing into Microsoft Word. The mathematics in your document are converted to Microsoft Equation Editor or MathType 5 format.

The Power of an Easy-to-Use Computer Algebra System

Scientific WorkPlace combines the ease of entering and editing mathematics in natural mathematical notation with the ability to compute with the built-in computer algebra engine, MuPAD® 2.5. In this integrated working environment, you can enter mathematics and perform computations without having to think or work in a programming language. The computer algebra system uses natural mathematical notation, so you don’t have to master complex syntax to be able to evaluate, simplify, solve, or plot mathematical expressions. Full computer algebra capabilities are available. You can compute symbolically or numerically, integrate, differentiate, and solve algebraic and differential equations. With menu commands, you can create 2-D and 3-D plots in many styles and coordinate systems; import data from graphing calculators; and compute with over 150 units of physical measure. In addition, you can use the Exam Builder provided with Scientific WorkPlace to construct exams algorithmically and to generate, grade, and record quizzes on a web server.

Increased Productivity

This software thinks like you do.
Whether you prefer to use the mouse or the keyboard, entering mathematics is so straightforward there is practically no learning curve. Formatting is fast, simple, and consistent. In Scientific WorkPlace, you use tags to define the document structure and format it consistently. Users have reported significant productivity increases when support staff use Scientific WorkPlace instead of raw LaTeX to typeset documents. Both technical and non-technical users can quickly learn to enter and number equations, create tables and matrices, and import and create graphics, all with pleasing on-screen mathematics and italics created with TrueType outline fonts. Scientific WorkPlace has the tools that simplify writing and editing books and other large documents. It is perfect for writers in academic, industrial, and government institutions and in all scientific and technical fields: mathematics, physics, engineering, economics, chemistry, computer science, statistics, medical research, and logic. The software comes with an extensive online help system and a series of reference manuals. If you need additional help, MacKichan Software provides reliable, prompt, free technical support. International, Interoperable, Indispensable Scientific WorkPlace simplifies working with colleagues in other locations. You can import text (.txt) and Rich Text Format (.rtf) files, and you can copy content to the clipboard for export as text or graphics to other applications. You can create .dvi, .htm, .pdf, or .rtf files from your documents, or generate portable LaTeX output for seamless transfer to different LaTeX installations. The Document Manager simplifies file transfer by email or on diskette. Spelling, font, and hyphenation support for languages other than English is available. You can switch languages in the same document using Babel, the multilingual LaTeX system. The software supports input using any left-to-right language supported by a version of Windows, including Chinese, Japanese, and Russian. It uses the in-place IME (Input Method Editor) for these languages. (The ability to typeset a language may depend on the availability of TeX for that language. Non-Latin character sets are typeset with Lambda, which is included.) Fully localized Japanese and German versions of Scientific WorkPlace are available now through our local distributors. Scientific WorkPlace has a built-in link to the World Wide Web. If you have Internet access, you can open the file at any URL address from inside the program. Also, you can deliver content via the Web. The software supports hypertext links, so you can facilitate navigation for your readers through a series of related documents. Readers can view and print documents using Scientific Viewer, which we distribute at no cost. New Features in Version 5 Compatibility You can interact with colleagues more easily and distribute your documents in different formats when you take advantage of new and enhanced export filters in Version 5. Export your documents as RTF files. You can now export your SWP, SW, and SNB documents as Rich Text Format (RTF) files, so that interactions with colleagues in non-TeX environments are simplified. The RTF export preserves the formatting you see in the document window. Any mathematics in your document can be represented with MathType 3 (Equation Editor) or MathType 5 objects. The resulting RTF file can be viewed in Microsoft Word even if an Equation Editor is not part of the Word installation. 
If the Microsoft Word installation includes the appropriate Equation Editor, any MathType 3 or MathType 5 mathematical objects in the RTF file can be edited. The file can also be displayed in outline mode. Read MathType mathematics in RTF files. In Version 5, you can open and read the MathType equations in RTF files when you import the RTF files in SWP, SW, or SNB. The equations are converted to LaTeX. Create more accurate HTML files. When you export your SWP, SW, or SNB documents to HTML, the program now places any graphics generated during the process in a subdirectory. Version 5 successfully exports fixed-width tables to HTML and saves the screen format to a Cascading Style Sheet (.css file). With HTML exports, you can make your mathematics available on various platforms over the Internet and in applications that can read HTML files. Export mathematics as MathML. When you export HTML files, you can output your mathematics as MathML or graphics. Note that not all HTML browsers support MathML. Typesetting Version 5 provides new typesetting capabilities and many new document shells, some intended for international use. Create typeset PDF files. Now you can share your work across platforms in PDF format by typesetting your SWP and SW documents with pdfLaTeX. No extra software is necessary to generate PDF files. The program automatically embeds fonts and graphics in the PDF file. Use pdfTeX to process files that contain graphics. Until now, using pdfTeX with most graphics file formats has been tedious or impossible. Before typesetting your document with pdfLaTeX, Version 5 of SWP and SW converts any graphics in the document to formats that can be processed by pdfLaTeX. Preserve LaTeX cross-references in PDF files. If you add the hyperref package to your document, any cross-references in your SWP or SW document are converted to hypertext links when you typeset with pdfLaTeX. The package extends hypertext capabilities with hypertext targets and references. Additionally, pdfLaTeX fully links the table of contents in the resulting PDF file and includes in the file hierarchical markers and thumbnail pictures of all the pages in the document. Use LaTeX PostScript packages. If you create PDF files from your SWP and SW documents, you can take advantage of LaTeX packages, such as the rotating package. Use expanded typesetting documentation. A new edition of Typesetting Documents in Scientific WorkPlace and Scientific Word provides more typesetting tips and information about more LaTeX packages. Learn how to tailor typesetting specifications from inside the program to achieve the typeset document appearance you need. Examine an expanded gallery of shells. View images of sample documents for each shell provided with the program in A Gallery of Document Shells for Scientific WorkPlace and Scientific Word, provided on the program CD as a PDF file. Use the documentation to choose document shells appropriately. Choose shells tailored for international documents. Version 5 includes new shells for documents created in non-English languages, including German, Japanese, Chinese, and Russian. SWP and SW, in combination with TrueTeX, support international typesetting with the Lambda system. Computation Complex computational capability makes SWP and SNB indispensable tools. Compute with MuPAD. In SWP and SNB, compute right in your document with the MuPAD 2.5 computer algebra engine. Use enhanced MuPAD capabilities. The new MuPAD 2.5 kernel is an upgrade from the MuPAD 2.0 kernel included in Version 4.0. 
New features include improved 2D and 3D plotting, expanded ODE capabilities, an expanded Rewrite submenu, and an improved Simplify operation. Compute with MathType mathematics in RTF files. If you open an RTF file containing MathType equations, the program converts the equations to LaTeX. In SWP and SNB, you can compute with the mathematics just like any other mathematics in SWP and SW documents. Use an improved Exam Builder. The Version 5 Exam Builder is fully functional with MuPAD. Printed quizzes can be reloaded without losing their math definitions, just like other documents. Exam Builder materials generated with earlier versions using either Maple or MuPAD work successfully in Version 5.

Natural Mathematical Notation

Until now, traditional typesetting and symbolic computation systems forced you to use an array of commands and a complex syntax to represent your input. Many of these systems have over 2,000 separate operators, such as int and diff, that you must learn in order to create input. For example, if you want to integrate the expression using a traditional computation system, you must enter it in linear fashion, int(x^2/sqrt(x^2-9),dx). To typeset it with LaTeX, you must write $\int\frac{x^{2}}{\sqrt{x^{2}-9}}dx$. A simple typing mistake would cause an error message. Scientific WorkPlace, Scientific Word, and Scientific Notebook eliminate the need to learn complex syntax by using natural notation for input and to show results. With these products, you can enter mathematics easily with the mouse, or, as you gain confidence and familiarity, with keyboard shortcuts. Here is how you enter the above integral using the mouse in Scientific WorkPlace, Scientific Word, and Scientific Notebook: In Scientific WorkPlace, Scientific Word, and Scientific Notebook, the space key always moves the insertion point out of the object it is in, and the Tab key always moves the insertion point to the next input box in the current template, if there is one. Thus, in step 9, the first space moves the insertion point out of the radical, but leaves it in the denominator of the fraction. The second space moves it out of the fraction. Pressing the Ctrl key together with the up or down arrow key moves the insertion point up or down to a superscript or a subscript position. The space key returns the insertion point to the main line. Ctrl+up arrow followed by Ctrl+down arrow moves the insertion point to the subscript of a superscript position, not to the main line. All the symbols in the main TeX fonts are available in Scientific WorkPlace, Scientific Word, and Scientific Notebook, which means you have everything you need to type mathematics. Also, if you know the TeX names for mathematical objects and symbols, you can use them (for example, holding down Ctrl while you type int enters an integral). You do not need to know TeX names to enter mathematics.

Product Philosophy

Scientific WorkPlace, Scientific Word, and Scientific Notebook are designed to increase productivity for anyone who writes technical documents, especially those containing mathematics. They are perfect for writers in all technical fields: mathematics, physics, engineering, chemistry, computer science, economics, finance, statistics, medical research, operations research, logic, and more.

Logical Design Separates Content and Appearance

Our approach, known as logical design, separates the creative process of writing from the mechanical process of formatting.
You apply tags to text to say what the text is; the software handles the job of formatting it. Logical design leads to a more consistent and attractive document appearance because choices of fonts, spacing, emphasis, and other aspects of format are applied automatically. Separating the processes of creating and formatting a document combines the best of the online and print worlds. You concentrate on writing a correct paper; our software makes it a beautiful paper. Scientific WorkPlace and Scientific Word come with over 150 predefined document shells. Over 20 shells are available with Scientific Notebook. Logical Design Is a New Way of Working When you use a WYSIWYG system, you constantly give commands that affect the appearance of the content. You select text and then choose a font, a font size, or a typeface. You apply alignment commands such as center, left justify, and right justify. To center an equation, for example, you select it and choose the center alignment. In a logical system, formatting commands are replaced by commands that define the logical structure of the content instead of its appearance. Rather than center text, you create a title, a section head, or a displayed equation by applying tags to information in the document. The format of the title, the alignment of section heads, and the alignment of displayed equations are all determined separately by the properties of the tags you use. In Scientific WorkPlace and Scientific Word, tag properties are determined by the document’s typesetting specifications (a collection of commands that define the way the document appears when you produce it with LaTeX typesetting) and by the style (a collection of commands that define the way the document appears onscreen and when you produce it without LaTeX typesetting.) In Scientific Notebook, the tag properties are determined by the style only, since it does not include LaTeX typesetting. Also, WYSIWYG systems divide documents into pages according to their anticipated appearance in print. To see an entire line, you often have to scroll horizontally because the screen dimensions and page dimensions do not match. In a logical system, working with pages is unnecessary, because the division of a document into pages has no connection to the document’s logical structure. Thus, on the screen Scientific WorkPlace, Scientific Word, and Scientific Notebook break lines to fit the window. If you resize the window, the text is reshaped to fit it. Logical Design Ensures a Beautiful Document Appearance Our emphasis on logical structure does not ignore the fact that documents must still be printed in a readable, organized, and visually pleasing format, nor does it ignore the fact that you may not always need publication-quality output. With version 4 of Scientific WorkPlace and Scientific Word, you can preview and print your documents in two ways. You can compile, preview, and print your documents with LaTeX to obtain a high-quality, typeset appearance, or you can preview and direct print without typesetting for a near-WYSIWYG appearance. With Scientific Notebook, only direct printing is available. Typesetting Features In Scientific WorkPlace and Scientific Word, you can typeset your documents using LaTeX, the undisputed industry standard for typesetting mathematical text. LaTeX provides automatic document formatting, including margins, hyphenation, kerning, ligatures, and many other elements of fine typesetting. 
LaTeX also automatically generates document elements including the title pages, table of contents, footnotes, margin notes, headers, footers, indexes, and bibliographies. Because Scientific WorkPlace and Scientific Word communicate with LaTeX for you, you can concentrate on what you do best—creating the content of your document—without worrying about LaTeX syntax. You don’t need to understand LaTeX to produce beautifully typeset material, but if you do know TeX or LaTeX commands, you can use them in your Scientific WorkPlace or Scientific Word documents to make the typesetting even more precise. Take advantage of these typesetting features of Scientific WorkPlace and Scientific Word: Formatting variety with predefined document shells. Scientific WorkPlace and Scientific Word come with over 150 predefined document shells, each with a different typeset appearance and many designed to meet the formatting requirements of specific journals and academic institutions. You can choose the shell that’s most appropriate for your journal or publisher. If you don’t know yet where your work will be published, we recommend that you start with one of the standard LaTeX shells, which can be easily adapted after your paper has been written. Typesetting control. Each document shell has a LaTeX document class and may also have LaTeX packages. Both the class and the packages have options and settings that create a more finely typeset appearance for your document. The available options and packages depend on the shell, but typically govern the ability to modify the formatting for typesetting details such as different paper sizes, portrait or landscape orientation, double-sided printing, double-column output, different font sizes, and draft or final output. You can change the options and packages with the Options and Packages item on the Typeset menu. Additional LaTeX packages. The supplied LaTeX packages provide even more control. By adding packages to your document, you can achieve a variety of typesetting effects. For example, you can add packages that switch between single and multiple columns of text on a single page; create endnotes from footnotes; or govern the appearance of footnotes, including their numbering or symbol scheme. Automatic numbering of theorems, lemmas, and other theorem environments. You can number theorems, lemmas, propositions, and conjectures in a variety of styles. You control whether they are each numbered in the same or separate sequences, so that your theorem environments might be numbered as Theorem 1, Lemma 2, Theorem 3, Conjecture 4, Lemma 5..., or as Theorem 1, Lemma1, Theorem 2, Conjecture 1, Lemma 2.... As an option, you can reset the numbering at the beginning of each chapter or section, and you can include the chapter and section numbers in the number. Automatic cross-referencing. You can create automatically generated cross-references to equations, tables, figures, pages, and other numbered objects elsewhere in your document. You don’t have to know the object or page number in advance. When you typeset, LaTeX inserts the number of the referenced object in the text. Automatic bibliography generation. Scientific WorkPlace and Scientific Word include BibTeX for automatic bibliographies. You select references from a BibTeX database of references, and BibTeX formats them according to the bibliography style you select. Scientific WorkPlace and Scientific Word also include tools for the maintenance of the BibTeX database. 
LaTeX Packages such as EndNotes can save references in BibTeX format. Computer Algebra Systems -------------------------------------------------------------------------------- Important Notice After Version 4.1 Build 2347, Scientific WorkPlace (SWP) and Scientific Notebook (SNB) will contain a kernel for the new MuPAD 2.5 computer algebra system, an upgrade from the MuPAD 2.0 kernel included in earlier builds. The products will no longer contain the Maple V 5.1 kernel. If you purchased Version 3.5, 4.0, or 4.1 of Scientific WorkPlace or Scientific Notebook before December 21, 2002, and your program contains Maple, you have a permanent license for the Maple kernel. When you upgrade to Version 4.1, you can successfully use the Maple kernel provided with the earlier version of your software. If you have Version 4.0 of Scientific WorkPlace or Scientific Notebook, you can upgrade to our latest build at no charge. The build includes the new MuPAD 2.5 kernel and also works with the Maple kernel provided with Version 4.0. This change does not affect Scientific Word. -------------------------------------------------------------------------------- A computer algebra system, or CAS, is a mathematics engine that performs the symbolic computations fundamental to algebra, trigonometry, and calculus. After Version 4.1 Build 2347, Scientific WorkPlace and Scientific Notebook include the kernel to the computer algebra system MuPAD? 2.5. With MuPAD, you can evaluate, factor, combine, expand, and simplify terms and expressions that contain integers, fractions, and real and complex numbers, as required in simple arithmetic and algebra. You can also evaluate integrals and derivatives, perform matrix and vector operations, find standard deviations, and perform many other more complex computations involved in calculus, linear algebra, differential equations, and statistics. Additionally, you can create 2D and 3D plots of polynomials, trigonometric functions, and exponentials. MuPAD Version. After Version 4.1 Build 2347, Scientific WorkPlace and Scientific Notebook use the MuPAD 2.5 kernel, which is the same as in the full version of MuPAD 2.5. Earlier versions use the MuPAD 2.0 kernel. We have created an interface to the kernel to make MuPAD easy to use with Scientific WorkPlace and Scientific Notebook. In addition, the system accepts input and creates output using natural mathematical notation, the basis for our scientific word processors. Performing computations in Scientific WorkPlace and Scientific Notebook is easy. Computational Functions. Scientific WorkPlace and Scientific Notebook provide a wide range of the graphic, numeric, and symbolic computational functions available with MuPAD. The programs provide ample functionality for both simple and sophisticated mathematical computations involving calculus, PDE, ODE, matrix manipulations, statistics, linear algebra, and 2D and 3D plots. Also, you can access additional functions available to MuPAD---even if they don’t appear as items on the Compute menu---with the Define MuPAD Name menu item. User-defined Functions. With MuPAD, you can create user-defined functions (.mu files) with an ASCII editor, even if you don’t have access to a full MuPAD installation. The files are easy to manipulate and are powerful tools for users interested in programming. Working in a Scientific WorkPlace or Scientific Notebook document, you call the function with the Define MuPAD Name command. Available Functions. 
Available Functions. While Scientific WorkPlace and Scientific Notebook provide many functions available with MuPAD, not all capabilities are included. Programming packages, certain plot types and options (especially animated plots), and manipulation of the position of highlights and shadows in 3D plots aren't available. Scientific Notebook doesn't have 3D implicit plotting with either CAS. Additionally, some limitations exist regarding the placement of text on plots and the use of different types of plots on the same graph. Iteration and condition commands (such as if, elif, else, fi, for, while, do, and od) aren't available.

Publishing on the Web

The factors to consider in publishing mathematics-intensive documents on the Web are the same as for publishing any other content online: Who is your intended audience? What browser do they use? What is their connection speed? What other software is available to them? The answers to these questions will influence your choice of Web publishing tools and viewing options. With Scientific WorkPlace, Scientific Word, and Scientific Notebook, you can create mathematics-intensive information for the Web in several ways:

Create .tex files. You can create your document as a .tex file, just as you would create any other Scientific WorkPlace, Scientific Word, or Scientific Notebook document. No special action is required. You can then place the file directly on the Web. When the file is saved to a reader's Scientific WorkPlace, Scientific Word, or Scientific Notebook installation, any mathematics in the file is live. If you have Scientific WorkPlace, Scientific Word, or Scientific Notebook, view a .tex file on our website.

Create HTML files. With Version 4 or above of our software, you can export your .tex file as HTML. All mathematics and plots are ordinarily exported as graphics, although you can choose to export mathematics as MathML. The mathematics in an HTML file is not live. View the same .tex file exported to HTML.

Create PDF files. View a PDF file created using pdfLaTeX, and view the .tex file used to create it. With Scientific Notebook, if you have Adobe Acrobat Writer installed, you can create PDF files of your documents: from the File menu, choose Print and then select either Acrobat Writer or Distiller as your printer. There will be no hyperlinking in the file. View the PDF file created from the .tex file produced with typesetting, and the PDF file created from the .tex file produced without typesetting.

Your readers can access .tex, PDF, or HTML files created with Scientific WorkPlace, Scientific Word, or Scientific Notebook in a variety of ways. Each has advantages and disadvantages.

Using Scientific Viewer

If your readers don't have Scientific WorkPlace, Scientific Word, or Scientific Notebook, we recommend that they use our free Scientific Viewer to access the Scientific WorkPlace, Scientific Word, or Scientific Notebook documents you place on the Web. The software is free for the reader. The mathematics that can be displayed and accessed is unlimited. Links to HTML files and TeX files can be intermixed. Scientific Viewer is currently available only for Microsoft Windows platforms. Of course, your readers can also use any of our software products (Scientific WorkPlace, Scientific Word, or Scientific Notebook) for maximum flexibility.

Using HTML

Any document created with Version 4 or above of Scientific WorkPlace, Scientific Word, or Scientific Notebook can be exported in several HTML formats.
You can export any mathematics and plots in these graphics formats: .bmp, .dib, .emf, .gif, .jpg, .png, or .wmf. The HTML output filter creates an accurate HTML version of your document. You can further manipulate the HTML files with other Web authoring tools. The HTML filter interprets any HTML commands in your document. The mathematics is not live.

Using MathML

When you export to HTML, any mathematics in the document can be exported as MathML. MacKichan Software is a corporate member of the MathML standard committee, and is committed to supporting XML and MathML. Our products can produce documents using MathML designed for viewing by Netscape Navigator 7 and above, MathPlayer (a free MathML rendering plug-in from Design Science), and IBM techexplorer. The HTML output filter creates an accurate HTML version of your document. You can further manipulate the HTML files with other Web authoring tools. The HTML filter interprets any HTML commands in your document. The mathematics is not live. MathML is not supported by all HTML browsers, and it is interpreted differently by different browsers, so not all readers may see the same thing.

Using PDF

Readers who have the free Adobe Acrobat reader can read PDF files created from Scientific WorkPlace Version 5 and Scientific Word Version 5 documents using pdfLaTeX. Authors with Scientific Notebook can create PDF files by using Adobe Distiller. The software is free for the reader. The mathematics that can be displayed and accessed is unlimited. Links to HTML files and PDF files can be intermixed.

Using LaTeX2HTML

LaTeX2HTML is a freeware program that converts LaTeX files to HTML. Because HTML can't display mathematics correctly, LaTeX2HTML converts mathematics to graphics (.gif) files. Any browser will display the resulting file, although some problems may arise. The conversion of mathematics to graphics causes several problems: file sizes expand quickly; graphics files of mathematics are compressed bitmaps that look acceptable on the screen but grainy in print; when readers magnify text, the graphics of the mathematics may come out too large or too small; graphics files can't be magnified for visually impaired readers; and the baselines for text and mathematics may not always line up.

Preparing PDF files with Scientific WorkPlace and Scientific Word

The PDF format is a good format for presenting mathematical and technical content on the Internet because the Adobe Acrobat viewer is nearly universally available, and the format allows software to include in the PDF document all the fonts that are necessary to render mathematics well. Further, the format supports hyperlinking and bookmarks.

Version 5 of Scientific WorkPlace and Scientific Word now supports pdfTeX. In Scientific WorkPlace and Scientific Word you can now typeset your file with pdfLaTeX to produce a PDF file. You can, of course, still typeset with LaTeX to produce a DVI file. The Typeset menu has three additional items: Preview PDF, Print PDF, and Compile PDF. When you use pdfLaTeX, you can also use several LaTeX packages that previously have not been supported by Scientific WorkPlace and Scientific Word because they require PostScript printers. These packages, including rotating and the PSNFSS font packages, can now be used when you compile with pdfTeX. When the hyperref package is included in your document, the PDF file produced is fully hyperlinked, with links in the table of contents and hierarchical bookmarks that reflect the structure of your LaTeX document.
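As a rough sketch of the hyperref behaviour described above in plain, hand-written LaTeX (not Scientific WorkPlace output), a preamble like the following, compiled with pdflatex, yields a PDF whose table-of-contents entries and cross-references are clickable and which carries hierarchical bookmarks:

```latex
\documentclass{article}
\usepackage[colorlinks=true,linkcolor=blue]{hyperref} % live links and PDF bookmarks

\begin{document}
\tableofcontents   % entries become clickable links in the typeset PDF

\section{Introduction}\label{sec:intro}
Section~\ref{sec:intro} is a live link when the file is compiled with pdflatex.
\end{document}
```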
One problem with pdfLaTeX has been that it allows only a very few graphics file formats, so to get the benefits of producing a PDF file, you had to forego using most graphics file formats. Scientific WorkPlace and Scientific Word solve this problem by converting all the graphics embedded in your document to PDF format before calling pdfLaTeX. In the past it was possible to produce PDF files from Scientific WorkPlace and Scientific Word by printing the DVI file using the Acrobat Distiller printer driver. This method, however, does not preserve the hyperlinks in your LaTeX document. This is still the only method of producing PDF files with Scientific Notebook.

Creating and Grading On-Line Tests with Exam Builder

Exam Builder takes advantage of the capabilities built into Scientific WorkPlace and Scientific Notebook, yielding some of the most powerful features available in algorithmic exam generation. Use random number functions, tables, and graphics in a document. Create a wide variety of course materials for use in your courses: exams, quizzes, tests, tutorials, problem sets, drills, or homework assignments.

Save Time by Eliminating Manual Grading. After course materials are created algorithmically, Exam Builder can be used for on-line and automatic grading. How much time you save depends on your teaching and exam style.

How Exam Builder Works. Exam Builder generates course materials from source files you create with Scientific WorkPlace or Scientific Notebook. You specify an exam problem with formulas and conditions, which may contain random numbers and conditions to be satisfied by the quantities computed from the random numbers. When a quiz is read by Scientific WorkPlace or Scientific Notebook, actual numbers are generated until all the conditions are satisfied. Each time a student opens your quiz, the details of each question will be different. Regardless of the level at which you teach—arithmetic, trigonometry, algebra, calculus, linear algebra, differential equations, probability, or statistics—you'll find Exam Builder invaluable.

Spell Check in Any One of 19 Languages

All versions of Scientific WorkPlace, Scientific Word, and Scientific Notebook are provided with an American English spell checker. Spelling dictionaries for these other languages are available for $20.00 USD per language: British English, Catalan, Danish, Dutch, Finnish, German, German (Swiss), Italian, Norwegian (Bokmål), Norwegian (Nynorsk), Polish, Portuguese (Continental), Portuguese (Brazilian), Russian, Spanish, and Swedish. You can quickly and easily switch between languages, add words to the dictionary, and adjust parameters for spell checking. Your product CD contains locked versions of each spell checker. When you purchase a spell checker for an additional language, we deliver an unlock code to you electronically. MacKichan Software spell checker technology utilizes Proximity Linguistic Technology. Proximity is a subsidiary of Franklin Electronic Publishers.

Version 3.0 users, please note: the spell checkers provided with Version 3.5, 4.0, and higher are incompatible with Version 3.0 installations.
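Returning to Exam Builder's approach of regenerating random parameters until the author's conditions are satisfied: a rough Python sketch of that general idea (purely illustrative, and unrelated to Exam Builder's actual source-file format) looks like this:

```python
# Illustrative sketch of algorithmic question generation and automatic grading.
# Random parameters are redrawn until the author's side conditions hold.
import random

def make_question(rng):
    while True:
        a, b = rng.randint(10, 144), rng.randint(2, 12)
        # Conditions on the computed quantities: the quotient must be a whole
        # number between 2 and 12, so every student gets a "clean" problem.
        if a % b == 0 and 2 <= a // b <= 12:
            return f"Compute {a} divided by {b}.", a // b

def grade(submitted, key):
    return submitted == key

rng = random.Random()            # a fresh draw each time a student opens the quiz
question, key = make_question(rng)
print(question)
print(grade(key, key))           # True: the answer key grades itself correctly
```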
https://matholympiad.org.bd/forum/viewtopic.php?f=13&t=3730&p=17063&hilit=weird+angle
## BdMO National 2016 Secondary 3: Weird angle condition

Thanic Nur Samin
Posts: 176
Joined: Sun Dec 01, 2013 11:02 am

### BdMO National 2016 Secondary 3: Weird angle condition

In $\triangle ABC$, $AB=AC$. $P$ is a point inside the triangle such that $\angle BCP=30^{\circ}$, $\angle APB=150^{\circ}$, and $\angle CAP=39^{\circ}$. Find $\angle BAP$.

Hammer with tact. Because destroying everything mindlessly isn't cool enough.

Thanic Nur Samin
Posts: 176
Joined: Sun Dec 01, 2013 11:02 am

### Re: BdMO National 2016 Secondary 3: Weird angle condition

My solution is quite bash-y, so I am omitting the details. You can work them out by yourselves.

Let $\angle BAP=2x$ and $\angle CAP=2y$. Now, use the isosceles condition and the other given information, and apply trig Ceva to arrive at the conclusion

$\sin 2x \sin (60^{\circ}-x-y) \sin (60^{\circ}+x-y)=\sin 2y \sin 30^{\circ} \sin(30^{\circ}-2x)$

From there, with enough manipulation with product-to-sum formulas, we can show that

$2x=\dfrac{2y}{3}$

Since $2y=39^{\circ}$, we can conclude that $\angle BAP=13^{\circ}$.

Hammer with tact. Because destroying everything mindlessly isn't cool enough.

Thanic Nur Samin
Posts: 176
Joined: Sun Dec 01, 2013 11:02 am

### Re: BdMO National 2016 Secondary 3: Weird angle condition

On second thought, I am showing my calculation. Not that it is too long.

Hammer with tact. Because destroying everything mindlessly isn't cool enough.

joydip
Posts: 48
Joined: Tue May 17, 2016 11:52 am

### Re: BdMO National 2016 Secondary 3: Weird angle condition

A synthetic solution:

The first principle is that you must not fool yourself and you are the easiest person to fool.
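As an editorial aside (not part of the original thread), the trig Ceva relation above can be checked numerically. A short Python sketch, fixing $2y = 39^{\circ}$ and searching for the value of $2x$ that balances the two sides:

```python
# Numerical check of the trig Ceva relation from the solution above,
# with 2y = 39 degrees fixed; bisection finds the angle 2x = angle BAP.
import math

def f(x_deg, y_deg=19.5):
    s = lambda d: math.sin(math.radians(d))
    x, y = x_deg, y_deg
    return s(2*x) * s(60 - x - y) * s(60 + x - y) - s(2*y) * s(30) * s(30 - 2*x)

lo, hi = 0.01, 14.9            # f changes sign on this interval
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(round(2 * (lo + hi) / 2, 6))   # prints 13.0, i.e. angle BAP = 13 degrees
```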
http://www.logic.univie.ac.at/2014/Talk_10-09_a.html
# 2014 seminar talk: Maximal pseudocompactness and maximal countable compactness in the class of Tychonoff spaces Talk held by Vladimir V. Tkachuk (Universidad Autónoma Metropolitana de México, Mexico City, Mexico) at the KGRC seminar on 2014-10-09. ### Abstract This is a presentation of results obtained in 2013-2014 jointly with O.T. Alas and R.G. Wilson. Given a property $\mathcal P$, say that a space $X$ is maximal $\mathcal P$ in the class of Tychonoff spaces if $X$ has $\mathcal P$ but any stronger Tychonoff topology on $X$ does not have $\mathcal P$. It turns out that maximal pseudocompactness and maximal countable compactness in the class of Tychonoff spaces have more interesting properties than maximal pseudocompactness and maximal countable compactness in the class of all spaces so we will call the respective spaces maximal pseudocompact and maximal countably compact. Our presentation will include the following results (all spaces are assumed to be Tychonoff): 1) Any dyadic maximal pseudocompact space is metrizable. 2) Any Fréchet-Urysohn compact space is a retract of a Fréchet-Urysohn maximal pseudocompact space; since there are Fréchet-Urysohn compact spaces which are not maximal pseudocompact, maximal pseudocompactness is not preserved by continuous images even in compact spaces. 3) If $\kappa$ is strictly smaller than the first weakly inaccessible cardinal, then the Tychonoff cube $I^\kappa$ is maximal countably compact. 4) If $\lambda$ is the first measurable cardinal, then the Tychonoff cube $I^\lambda$ does not even embed in a maximal countably compact space. 5) If a space $X$ is maximal countably compact, then every $\omega$-continuous real-valued function on $X$ is continuous. 6) If $X$ is a countably compact space with the Mazur property, i.e., every sequentially continuous real-valued function on $X$ is continuous, then $X$ is maximal countably compact. 7) If $X$ is an $\omega$-monolithic compact space, then $C_p(X)$ has the Mazur property if and only if $C_p(X)$ is Fréchet-Urysohn.
http://www.digplanet.com/wiki/Fracture
A fracture is the separation of an object or material into two or more pieces under the action of stress. The fracture of a solid almost always occurs due to the development of certain displacement discontinuity surfaces within the solid. If a displacement develops perpendicular to the surface of displacement, it is called a normal tensile crack or simply a crack; if a displacement develops tangentially to the surface of displacement, it is called a shear crack, slip band, or dislocation.[1]

The word fracture is often applied to bones of living creatures (that is, a bone fracture), or to crystals or crystalline materials, such as gemstones or metal. Sometimes, in crystalline materials, individual crystals fracture without the body actually separating into two or more pieces. Depending on the substance which is fractured, a fracture reduces strength (most substances) or inhibits transmission of light (optical crystals). A detailed understanding of how fracture occurs in materials may be assisted by the study of fracture mechanics. A fracture is also the term used for a particular mask data preparation procedure within the realm of integrated circuit design that involves transposing complex polygons into simpler shapes such as trapezoids and rectangles.

## Fracture strength

[Figure: stress vs. strain curve typical of aluminum, marking (1) ultimate tensile strength, (2) yield strength, (3) proportional limit stress, (4) fracture, and (5) offset strain (typically 0.2%).]

Fracture strength, also known as breaking strength, is the stress at which a specimen fails via fracture.[2] This is usually determined for a given specimen by a tensile test, which charts the stress-strain curve (see figure). The final recorded point is the fracture strength. Ductile materials have a fracture strength lower than the ultimate tensile strength (UTS), whereas in brittle materials the fracture strength is equivalent to the UTS.[2] If a ductile material reaches its ultimate tensile strength in a load-controlled situation,[Note 1] it will continue to deform, with no additional load application, until it ruptures. However, if the loading is displacement-controlled,[Note 2] the deformation of the material may relieve the load, preventing rupture.

If the stress-strain curve is plotted in terms of true stress and true strain, the curve will always slope upwards and never reverse, as true stress is corrected for the decrease in cross-sectional area. The true stress on the material at the time of rupture is known as the breaking strength. This is the maximum stress on the true stress-strain curve, given by point 1 on curve B.

## Types

### Brittle fracture

[Images: brittle fracture in glass; fracture of an aluminum crank arm, with a bright brittle-fracture region and a dark fatigue-fracture region.]

In brittle fracture, no apparent plastic deformation takes place before fracture. In brittle crystalline materials, fracture can occur by cleavage as the result of tensile stress acting normal to crystallographic planes with low bonding (cleavage planes). In amorphous solids, by contrast, the lack of a crystalline structure results in a conchoidal fracture, with cracks proceeding normal to the applied tension.
The theoretical strength of a crystalline material is (roughly)

$\sigma_\mathrm{theoretical} = \sqrt{ \frac{E \gamma}{r_o} }$

where $E$ is the Young's modulus of the material, $\gamma$ is the surface energy, and $r_o$ is the equilibrium distance between atomic centers.

On the other hand, a crack introduces a stress concentration modeled by

$\sigma_\mathrm{elliptical\ crack} = \sigma_\mathrm{applied}\left(1 + 2 \sqrt{ \frac{a}{\rho}}\right) = 2 \sigma_\mathrm{applied} \sqrt{\frac{a}{\rho}}$ (for sharp cracks)

where $\sigma_\mathrm{applied}$ is the loading stress, $a$ is half the length of the crack, and $\rho$ is the radius of curvature at the crack tip.

Putting these two equations together, we get

$\sigma_\mathrm{fracture} = \sqrt{ \frac{E \gamma \rho}{4 a r_o}}.$

Looking closely, we can see that sharp cracks (small $\rho$) and large defects (large $a$) both lower the fracture strength of the material.

Recently, scientists have discovered supersonic fracture, the phenomenon of crack motion faster than the speed of sound in a material.[3] This phenomenon was recently also verified by experiments on fracture in rubber-like materials.

### Ductile fracture

[Images: ductile failure of a specimen strained axially; schematic representation of the steps in ductile fracture (in pure tension).]

In ductile fracture, extensive plastic deformation (necking) takes place before fracture. The terms rupture or ductile rupture describe the ultimate failure of tough ductile materials loaded in tension. Rather than cracking, the material "pulls apart," generally leaving a rough surface. In this case there is slow propagation and an absorption of a large amount of energy before fracture.[citation needed]

Many ductile metals, especially materials with high purity, can sustain very large deformation of 50–100% or more strain before fracture under favorable loading and environmental conditions. The strain at which the fracture happens is controlled by the purity of the materials. At room temperature, pure iron can undergo deformation up to 100% strain before breaking, while cast iron or high-carbon steels can barely sustain 3% strain.[citation needed]

Because ductile rupture involves a high degree of plastic deformation, the fracture behavior of a propagating crack as modeled above changes fundamentally. Some of the energy from stress concentrations at the crack tips is dissipated by plastic deformation before the crack actually propagates. The basic steps are: void formation, void coalescence (also known as crack formation), crack propagation, and failure, often resulting in a cup-and-cone shaped failure surface.

## Crack separation modes

[Figure: the three fracture modes.]

There are three ways of applying a force to enable a crack to propagate:

• Mode I crack – Opening mode (a tensile stress normal to the plane of the crack)
• Mode II crack – Sliding mode (a shear stress acting parallel to the plane of the crack and perpendicular to the crack front)
• Mode III crack – Tearing mode (a shear stress acting parallel to the plane of the crack and parallel to the crack front)

Crack initiation and propagation accompany fracture. The manner in which the crack propagates through the material gives great insight into the mode of fracture. In ductile materials (ductile fracture), the crack moves slowly and is accompanied by a large amount of plastic deformation. The crack will usually not extend unless an increased stress is applied.
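To make the strength expressions above concrete, here is a rough numerical sketch in Python. The material values are illustrative assumptions only (glass-like orders of magnitude), not data taken from any source:

```python
# Rough numeric sketch of the theoretical strength and the Griffith-type
# fracture strength derived above. All input values are assumed, for illustration.
import math

E     = 70e9     # Young's modulus, Pa (assumed, roughly glass-like)
gamma = 1.0      # surface energy, J/m^2 (assumed)
r0    = 2e-10    # equilibrium atomic spacing, m (assumed)
a     = 1e-6     # half-length of a crack, m (assumed)
rho   = 1e-9     # crack-tip radius of curvature, m (assumed)

sigma_theoretical = math.sqrt(E * gamma / r0)
sigma_fracture    = math.sqrt(E * gamma * rho / (4 * a * r0))

print(f"theoretical strength      ~ {sigma_theoretical / 1e9:.1f} GPa")
print(f"strength with sharp flaw  ~ {sigma_fracture / 1e6:.0f} MPa")
```

The point, as in the text, is that a sharp flaw (small ρ) or a long flaw (large a) drags the practical strength far below the theoretical value.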
In brittle fracture, by contrast, cracks spread very rapidly with little or no plastic deformation. The cracks that propagate in a brittle material will continue to grow and increase in magnitude once they are initiated. Another important aspect of crack propagation is the way in which the advancing crack travels through the material. A crack that passes through the grains within the material is undergoing transgranular fracture. However, a crack that propagates along the grain boundaries is termed an intergranular fracture.

## Notes

1. ^ A simple load-controlled tensile situation would be to support a specimen from above and hang a weight from the bottom end. The load on the specimen is then independent of its deformation.
2. ^ A simple displacement-controlled tensile situation would be to attach a very stiff jack to the ends of a specimen. As the jack extends, it controls the displacement of the specimen; the load on the specimen is dependent on the deformation.

## References

1. ^ Cherepanov, G. P., Mechanics of Brittle Fracture.
2. ^ a b Degarmo, E. Paul; Black, J T.; Kohser, Ronald A. (2003), Materials and Processes in Manufacturing (9th ed.), Wiley, p. 32, ISBN 0-471-65653-4.
3. ^ Chen, C. H.; Zhang, H. P.; Niemczura, J.; Ravi-Chandar, K.; Marder, M. (November 2011). "Scaling of crack propagation in rubber sheets". Europhysics Letters 96 (3): 36009. Bibcode:2011EL.....9636009C. doi:10.1209/0295-5075/96/36009.

• Dieter, G. E. (1988). Mechanical Metallurgy. ISBN 0-07-100406-8.
• Garcimartin, A.; Guarino, A.; Bellon, L.; Cilberto, S. (1997). "Statistical Properties of Fracture Precursors". Physical Review Letters 79, 3202.
• Callister, Jr., William D. (2002). Materials Science and Engineering: An Introduction. ISBN 0-471-13576-3.
• Lewis, Peter Rhys; Gagg, Colin; Reynolds, Ken (2004). Forensic Materials Engineering: Case Studies. CRC Press.
https://meangreenmath.com/tag/slope/page/2/
# Engaging students: Slope-intercept form of a line

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students' permission, of course). This student submission comes from my former student Jessica Williams. Her topic, from Algebra I: the slope-intercept form of a line.

A.2 How could you as a teacher create an activity or project that involves your topic?

In order to teach a lesson regarding slope-intercept form of a line, I believe it is crucial to use visual learning to really open the students' minds to the concept. Prior to this lesson, students should know how to find the slope of a line. I would provide each student with a piece of graph paper and a small square of deli sheet paper. I would have them fold their deli sheet paper in half, corner to corner, to form a triangle. I would ask each student to put the triangle anywhere on the graph so that it passes through the x- and the y-axis. Then I will ask the students to trace the side of the triangle and to find two points that are on that line. For the next step, each student will find the slope of the line they created. Once the students have discovered their slope, I will ask each of them to continue their line further using the slope they found. I will ask a few students to show theirs as an example (picking one who went through the origin and one who did not). I will scaffold the students into asking what the difference would look like in a formula if you go through the origin or if you go through (0,4) or (0,-3) and so on. Eventually the students will come to the conclusion that the place where their line crosses the y-axis is their y-intercept. Lastly, each student will be able to write the equation of the line they specifically created. I will then introduce the y=mx+b formula to them and show how the discovery they found is that exact formula. This is a great way to allow the students to work hands-on with the material and have their own individual accountability for the concept. They will have the pride of knowing that they learned the slope-intercept formula of a line on their own.

E.1 How can technology (YouTube, Khan Academy [khanacademy.org], Vi Hart, Geometers Sketchpad, graphing calculators, etc.) be used to effectively engage students with this topic?

Graphing calculators are a very important aspect of teaching slope-intercept form of a line. They allow the students to visually see where the y-intercept is and what the slope is. Another good program to use is Desmos. It allows the students to see the graph on the big screen, and you can put multiple graphs on the screen at one time to see the effects that different slopes and y-intercepts have on the graph. This leads students into learning about transformations of linear functions. Also, the teacher can provide the students with a graph, with no points labeled, and ask them to find the equation of the line on the screen. This could lead into a fun group activity/relay race of who can write the formula of the graph in the quickest time.
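For teachers comfortable with a little scripting, a small Python sketch (an editorial addition, not part of the original submission) shows the computation the activity builds toward: recovering the slope and y-intercept of the line through two points.

```python
# Given two points, compute the slope m and y-intercept b of the line
# through them, i.e. the y = mx + b form students derive in the activity.
def slope_intercept(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        raise ValueError("vertical line: slope is undefined")
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1          # slide back to x = 0 to read off the intercept
    return m, b

m, b = slope_intercept((1, 5), (3, 9))
print(f"y = {m}x + {b}")     # y = 2.0x + 3.0
```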
Also, Khan Academy has a graphing program where the students are asked to create the graph for a specific equation. This allows the students to practice their graphing abilities and truly master the concept at home. To engage the students, you could also use Kahoot to practice vocabulary. For Kahoot quizzes, you can set the timer for any amount up to 2 minutes, so you could throw a few formula questions in there as well. It is an engaging way to have each student actively involved and practicing his or her vocabulary.

B1. How can this topic be used in your students' future courses in mathematics or science?

Learning slope-intercept form is very important for the success of their future courses and real-world problems. Linear equations are found all over the world in different jobs, art, and more. By mastering this concept, it is easier for students to visualize what the graph of a specific equation will look like without actually having to graph it. The students will understand that the b in y=mx+b is the y-intercept, and they will know how steep the graph will be depending on the value of m. Mastering this concept will better prepare them for quadratic equations and eventually cubic ones. Slope-intercept form is the beginning of what is to come in the graphing world. Once you grasp the concept of how to identify what the graph will look like, it is easier to introduce the students to a graph with a higher degree. It will be easier to explain how y=mx+b describes linear graphs because they increase or decrease at a constant rate. You could start by asking:

1. What happens if we raise the degree of the graph to x^2?
2. What will happen to the graph?
3. Why do you think this will happen? Can you explain?
4. What does squaring the x value mean?

It really just prepares the students for real-world applications as well. When they are presented with a problem in real life (for example, a student is throwing a birthday party and has $100 to go to the skating rink; if they have to spend $20 on pizza and each friend costs $10 to take, how many friends can they take?), they can see that linear equations are used every day, and mastering them truly helps each one of the students.

# Engaging students: Using the point-slope equation of a line

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students' permission, of course). This student submission again comes from my former student Rachel Delflache. Her topic, from Algebra: using the point-slope equation of a line.

A2: How could you as a teacher create an activity that involves the topic?

An adaptation of the stained-glass window project could be used to practice the point-slope formula (pictured beside). Start by giving the students a piece of graph paper that is shaped like a traditional stained-glass window, and then let the students create a window of their choosing using straight lines only. Once they are done creating their window, ask them to solve for and label the equations of the lines used in their design.
While this project involves the point-slope formula in a rather obvious way, giving the students the freedom to create a stained-glass window that they like helps to engage the students more than a normal worksheet. Also, by having them solve for the equations of the lines they created, it is very probable that the numbers they must use for the equation will not be "pretty numbers," which adds an additional level of difficulty to the assignment.

B2: How does this topic extend what your students should have learned in previous courses?

The point-slope formula extends from the students' knowledge of the slope formula: starting from m = (y2 − y1)/(x2 − x1), multiplying both sides by (x2 − x1) gives (x2 − x1)m = y2 − y1, which is the point-slope form y − y1 = m(x − x1). This means that the students could derive the point-slope formula themselves, given the proper information and prompts. By allowing students to derive the point-slope formula from their previous knowledge of the formula for slope, it gives the students a deeper understanding of how and why the point-slope formula works the way it does. Allowing the students to derive the point-slope formula also increases the retention rate among the students.

C1&3: How has this topic appeared in pop culture and the news?

Graphs are everywhere in the news, like the first graph below. While they are often line charts, each section of the line has its own equation that could be solved for given the information found on the graph. One of the simplest ways to solve for each section of the line graph would be to use the point-slope formula. The benefit of using the point-slope formula to solve for the equations of these graphs is that very minimal information is needed—assuming that two coordinates can be located on the graph, the linear equation can be solved for. Another place where graphs appear is in pop culture. It is becoming more common to find graphs like the second one below. These graphs are often linear equations whose formulas could be found using the point-slope formula. These kinds of graphs could be used to create an activity where the students use the point-slope formula to solve for the equations shown in either a real-world or a comical graph.

References:
Stained glass window - Stained Glass Window Graphing Project

# Engaging students: Finding the slope of a line

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students' permission, of course). This student submission again comes from my former student Deanna Cravens. Her topic, from Algebra: finding the slope of a line.

C3. How has this topic appeared in high culture (art/sports)?

While one might not think of ski jumping as an art but more of a sport, there is definitely an artistic way of doing the jumping. The Winter Olympics is one of the most popular sporting events that the world watches, besides the Summer Olympics. This is a perfect engage for the beginning of class: not only is it extremely humorous, it is extremely engaging. It will instantly get a class interested in the topic of the day. I would first ask the students what the hill the skiers go down is called.
Of course the answer that I would be looking for is the "ski slope." This draws on prior knowledge to help students make a meaningful connection to the mathematical term of slope. Then I would ask students to interpret the meaning of slope in the context of the skiers. This allows for an easy transition into the topic of finding the slope of a line.

C1. How has this topic appeared in pop culture (movies, TV, current music, video games, etc.)?

Look at this scene from Transformers: it shows a perfect example of a straight line on the edge of the pyramid that the Decepticon is destroying. This video easily catches the attention of students because it is from the very popular Transformers movie. I would play the short twenty-second clip and then have some student discussion at the beginning of class. This could be done as an introduction to the topic, where students could be asked, "How can we find the steepness of that edge of the pyramid?" Then the students can discuss with a partner and a group discussion can ensue. It could also be done as a quick review, where students are asked to recall how to find the slope of a line and what it determines. The students would be asked to draw on their knowledge of slope and produce a formula that would calculate it.

How can this topic be used in your students' future courses in mathematics or science?

Finding the slope of a line is an essential part of mathematics. It is used in statistics, algebra, calculus, and much more. One could say it is an integral part of calculus (pun intended). Not only is it used in mathematics classes, but it is also very relevant to science. One specific example is chemistry. Solutions have specific reaction rates, and these rates are expressed in terms of the change in concentration divided by the change in time. This is exactly the formula that is used in math classes to find the slope; however, it is usually expressed in terms of change in y divided by change in x. Slope is also used in physics when working with the velocity and acceleration of objects. While one could think of slope in the standard way of "rise over run," in these advanced classes, whether math or science, it is usually better thought of as ∆y/∆x.

References:

# Engaging students: Graphs of linear equations

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students' permission, of course). This student submission again comes from my former student Anna Park. Her topic, from Algebra: graphs of linear equations.

How could you as a teacher create an activity or project that involves your topic?

• Have the students enter the room with all of the desks and chairs pushed to the wall, to create a clear floor. On the floor, put two long pieces of duct tape that represent the x- and y-axes. Have the students get into groups of 3 or 4, and on the board put up a linear equation. One of the students will stand on the y-axis and will represent the point of the y-intercept. The rest of the students have to represent the slope of the line.
The students will be able to see if they are graphing the equation right based on how they form the line. This way the students will be able to participate with each other and get immediate feedback. Have the remaining groups of students, those not participating in the current equation, graph the line that the other group is representing on a piece of paper. By the end of the engage, students will have a full paper of linear equation examples. The teacher can make it harder by telling the students to make adjustments, like changing the y-intercept but keeping the slope the same. Or have two groups race at once to see who can physically graph the equation the fastest. Because there is only one "graph" on the floor, have each group go separately and time each group.

• Have the students put their desks into rows of even numbers. Each group should have between 4 and 5 students. On the wall or whiteboard the teacher has an empty, laminated graph. The teacher will have one group go at a time. The teacher will give the group a linear equation, and the students have to finish graphing the equation as fast as possible. Each group is given one marker; once the equation is given, the first student runs up to the graph and graphs ONLY ONE point. The first student runs back to the second student and hands the marker off to them. That student runs up to the board and marks another point for that graph. The graph is completed once all points are on the graph, the x- and y-intercepts being the most important. If there are two laminated graphs on the board, two groups can go at one time to compete against each other. Similar to the first engage, students will have multiple empty graphs on a sheet of paper that they need to fill out during the whole engage. This activity also gives the students immediate feedback.

What interesting things can you say about the people who contributed to the discovery and/or the development of this topic?

Sir William Rowan Hamilton was an Irish physicist, mathematician, and astronomer who lived to be 60 years old. At age 13 he could already speak 13 languages, and at the age of 22 he was a professor at the University of Dublin. In 1843 Hamilton invented quaternions, which extend the complex numbers: a quaternion has the form w + xi + yj + zk, where w, x, y, z are real numbers and i, j, k are imaginary units that satisfy certain conditions. Hamilton also wrote papers on fluctuating functions and on solving equations of the fifth degree. He is celebrated in Ireland for being their leading scientist, and through the years he has been celebrated even more because of Ireland's appreciation of their scientific heritage.

Culture: How has this topic appeared in pop culture?

An online video game called "Rescue the Zogs" is a fun game for anyone to play. In order for the player to rescue the zogs, they have to identify the linear equation that the zogs are on. This video game is found on mathplayground.com.
References:
https://www.teachingchannel.org/videos/graphing-linear-equations-lesson
https://www.reference.com/math/invented-linear-equations-ad360b1f0e2b43b8#
https://en.wikipedia.org/wiki/William_Rowan_Hamilton
http://www.mathplayground.com/SaveTheZogs/SaveTheZogs.html

# Engaging students: Solving systems of linear inequalities

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students' permission, of course). This student submission again comes from my former student Heidee Nicoll. Her topic, from Algebra: solving linear systems of inequalities.

How could you as a teacher create an activity or project that involves your topic?

I found a fun activity on a high school math teacher's blog that makes solving systems of linear inequalities rather exciting. The students are given a map of the U.S. with a grid and axes over the top, and their goal is to find where the treasure is hidden. At the bottom of the page there are six possible places the treasure has been buried, marked by points on the map. The students identify the six coordinate points, and then use the given system of inequalities to find the buried treasure. This teacher's worksheet has six equations, and once the students have graphed all of them, the solution contains only one of the six possible burial points. I think this activity would be very engaging and interesting for the students. Using the map of the U.S. is a good idea, since it gives them a bit of geography as well, but you could also create a map of a fictional island or continent and use that as well. To make it even more interesting, you could have each student create their own map and system of equations, and then trade with a partner to solve.

How does this topic extend what your students should have learned in previous courses?

If students have a firm understanding of inequalities as well as linear systems of equations, then they have all the pieces they need to understand linear systems of inequalities quite easily and effectively. They know how to write an inequality, how to graph it on the coordinate plane, and how to shade in the correct region. They also know the different processes whereby they can solve linear systems of equations, whether by graphing or by algebra. The main difference they would need to see is that when solving a linear system of equations, their solution is a point, whereas with a linear system of inequalities, it is a region with many, possibly infinitely many, points that fit the parameters of the system. It would be very easy to remind them of what they have learned before, possibly do a little review if need be, and then make the connection to systems of inequalities and show them that it is not something completely different, but is simply an extension of what they have learned before.

How can technology be used effectively to engage students with this topic?

Graphing calculators are sufficiently effective when working with linear systems of equations, but when working with inequalities, they are rather limited in what they can help students visualize.
They can only do ≥, not just >, and have the same problem with <. It is also difficult to see the regions if you have multiple inequalities, because the screen has no color. This link is an online graphing calculator that has several options for inequalities: https://www.desmos.com/calculator. You can choose any inequality, <, >, ≤, or ≥, type in several equations or inequalities, and the regions show up on the graph in different colors, making it easier to find the solution region. Another feature of the graphing calculator is that the equations or inequalities do not have to be in the form of y=. You can type in something like 3x+2y<7 or solve for y and then type it in. I would use this graphing calculator to help students visualize the systems of inequalities and see the solution. When working with more than two inequalities, I would add just one region at a time to the graph, which you can do in this graphing calculator by clicking the equation on or off, so the students could keep track of what was going on.

References:
Live.Love.Laugh.Teach. Blog by Mrs. Graves. https://livelovelaughteach.wordpress.com/category/linear-inequalities/
Graphing calculator: https://www.desmos.com/calculator

# Engaging students: Finding the slope of a line

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students' permission, of course). This student submission again comes from my former student Brianna Horwedel. Her topic, from Algebra: finding the slope of a line.

How can technology (YouTube, Khan Academy [khanacademy.org], Vi Hart, Geometers Sketchpad, graphing calculators, etc.) be used to effectively engage students with this topic?

Algebra vs. the Cockroaches is a great way to get students engaged in learning about slopes. The object of the game is to kill the cockroaches by figuring out the equation of the line that they are walking on. It progresses from simple lines such as y=5 to more complicated equations such as y=(-2/3)x+7. It allows the students to quickly recognize y-intercepts and slopes. Once finished, you can print out a "report" that tells you how many the student got correct and how many tries it took them to complete a level. This game could even be used as a formative assessment for the teacher.

http://hotmath.com/hotmath_help/games/kp/kp_hotmath_sound.swf

How could you as a teacher create an activity or project that involves your topic?

Last year, I was placed in an eighth-grade classroom that was learning about slope. One of the things that really stuck out to me was that the teacher gave a ski illustration to get the students talking about slope. The illustration starts off with the teacher going skiing. She talks about how, when she is going up the ski lift, she is really excited and having a "positive" experience, which correlates to the slope being positive. Once she gets off of the ski lift, she isn't going up or down, but moving in a straight line. She talks about how she doesn't really feel either excited or nervous because she is on flat ground. This corresponds to lines that have a slope of 0.
She then proceeds to talk about how, when she starts actually going down the ski slope, she hates it! This relates to the negative slope of a line. She also mentions how she went over the side of a cliff and fell straight down. She was so scared she couldn't even think or "define" her thoughts. This is tied to slopes that are undefined. I thought that this illustration was a great way of explaining the concept of slope from a real-world example. After sharing the illustration, the students could work on problems involving calculating the slope of ski hills.

How can this topic be used in your students' future courses in mathematics or science?

Understanding how to find the slope of a line is crucial for mathematics courses beyond Algebra I and Algebra II. In particular, knowing how to find the slope of a line is essential for finding tangent lines of curves. This comes in handy for Calculus, when you have to use limits to determine the slope. If a student does not have a strong grasp of what slope means and what its relationship is with the graph and the equation in Algebra I, then they will have a difficult time understanding slopes of graphs that are not straight lines.

# Engaging students: Finding the slope of a line

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students' permission, of course). This student submission again comes from my former student Jason Trejo. His topic, from Algebra: finding the slope of a line.

A2) How could you as a teacher create an activity or project that involves your topic?

I have to start off by giving some credit to my 5th grade math teacher for giving me the idea on how I could create an activity involving this topic. You see, back in my 5th grade math class, we were to plot points given to us on a Cartesian plane and then connect the dots to create a picture (which turned out to be a caveman). Once we created the picture, we were to add more to it, and the best drawing would win a prize. My idea is to split the class up into groups and give them an assortment of lines on separate pieces of transparent graphing sheets. They would then find the slopes and trace over each line in a predetermined color (e.g. all lines with m=2 will be blue, lines with m=1/3 red, etc.). Next they stack the lines with matching slopes above one another to create pictures like this:

Of course, what I have them create would be more intricate and colorful, but this is the idea for now. It is also possible to have the students find the slope of lines at certain points to create a picture like I did back in 5th grade and then have them color their drawing. They would end up with pictures such as:

C1) How has this topic appeared in pop culture (movies, TV, current music, videogames, etc.)?

Sure, there aren't many places where finding the slope of a line will be the topic that everyone goes on and on about on TV, or on the hottest blog, or all over Vine (whatever that is), but take a look around and you will be able to see a slope, maybe on a building or from the top of Tom Hanks's head to the end of his shadow.
Think about it, with enough effort, anyone could imagine a coordinate plane “behind” anything and try to find the slop from one point to another. The example I came up with goes along with this picture I edited: *Picture not accurately to scale This is the infamous, first double backflip ever landed in a major competition. The athlete: Travis Pastrana; the competition: the 2006 X-Games. I would first show the video (found here: https://www.youtube.com/watch?v=rLKERGvwBQ8), then show them the picture above to have them solve for each of the different slopes seen. In reality this is a parabola, but we can break up his motion to certain points in the trick (like when Travis is on the ground or when Travis is upside down for the first backflip). When the students go over parabolas at a later time, we could then come back to this picture. B2) How does this topic extend what your students should have learned in previous courses? It has been many years since I was first introduced to finding the slope of the line so I’m not sure exactly when I learned it, but I do know that I at least saw what a line was in 5th grade based on the drawing project I stated earlier. At that point, all I knew was to plot points on a graph and “connect the dots”, so this builds on that by actually being able to give a formula for those lines that connected the dots. Other than that, finding slopes on a Cartesian plane can give more insight on what negative numbers are and how they relate to positive numbers. Finally, students should have already learned about speed and time, so by creating a representation how those two relate, a line can be drawn. The students would see the rate of change based on speed and time. References: Minimalistic Landscape: http://imgur.com/a/44DNn Minimalistic Flowers: http://imgur.com/Kwk0tW0 Double Backflip Image: http://cdn.motocross.transworld.net/files/2010/03/tp_doubleback_final.jpg Double Backflip Video: : https://www.youtube.com/watch?v=rLKERGvwBQ8 # Engaging students: Graphs of linear equations In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students’ permission, of course). This student submission again comes from my former student Nada Al-Ghussain. Her topic, from Algebra: graphs of linear equations. How could you as a teacher create an activity or project that involves your topic? Positive slope, negative slope, no slope, and undefined, are four lines that cross over the coordinate plane. Boring. So how can I engage my students during the topic of graphs of linear equations, when all they can think of is the four images of slope? Simple, I assign a project that brings out the Individuality and creativity of each student. Something to wake up their minds! An individualized image-graphing project. I would give each student a large coordinate plane, where they will graph their picture using straight lines only. I would ask them to use only points at intersections, but this can change to half points if needed. Then each student will receive an Equation sheet where they will find and write 2 equations for each different type of slope. 
So a student will have equations for two horizontal lines, vertical lines, positive slope, and negative slope. The best part is the project can be tailored to each class weakness or strength. I can also ask them to write the slop-intercept form, point slope form, or to even compare slopes that are parallel or perpendicular. When they are done, students would have practiced graphing and writing linear equations many times using their drawn images. Some students would be able to recognize slopes easier when they recall this project and their specific work on it. Example of a project template: Examples of student work: How has this topic appeared in the news? Millions of people tune in to watch the news daily. Information is poured into our ears and images through our eyes. We cannot absorb it all, so the news makes it easy for us to understand and uses graphs of linear equations. Plus, the Whoa! Factor of the slopping lines is really the attention grabber. News comes in many forms either through, TV, Internet, or newspaper. Students can learn to quickly understand the meaning of graphs with the different slopes the few seconds they are exposed to them. On television, FOX news shows a positive slope of increasing number of job losses through a few years. (Beware for misrepresented data!) A journal article contains the cost of college increase between public and private colleges showing the negative slope of private costs decreasing. Most importantly line graphs can help muggles, half bloods, witches, and wizards to better understand the rise and decline of attractive characters through the Harry Potter series. How can this topic be used in your students’ future courses in mathematics or science? Students are introduced to simple graphs of linear equations where they should be able to name and find the equation of the slope. In a student’s future course with computers or tablets, I would use the Desmos graphing calculator online. This tool gives the students the ability to work backwards. I would ask a class to make certain lines, and they will have to come up with the equation with only their knowledge from previous class. It would really help the students understand the reason behind a negative slope and positive slope plus the difference between zero slope and undefined. After checking their previous knowledge, students can make visual representations of graphing linear inequalities and apply them to real-world problems. References: http://www.hoppeninjamath.com/teacherblog/?p=217 http://walkinginmathland.weebly.com/teaching-math-blog/animal-project-graphing-linear-lines-and-stating-equations http://mediamatters.org/research/2012/10/01/a-history-of-dishonest-fox-charts/190225 http://money.cnn.com/2010/10/28/pf/college/college_tuition/ http://dailyfig.figment.com/2011/07/13/harry-potter-in-charts/ https://www.desmos.com/calculator # Engaging students: Slope-intercept form of a line In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place. I plan to share some of the best of these ideas on this blog (after asking my students’ permission, of course). This student submission again comes from my former student Kelley Nguyen. 
Her topic, from Algebra: slope-intercept form of a line. How has this topic appeared in high culture (art, classical music, theatre, etc.)? The slope-intercept form of a line is a linear function. Linear functions are dealt with in many ways in everyday life, some of which you probably don’t even notice. One example where the slope-intercept form of a line appears in high culture is through music and arts. Suppose a band wants to book an auditorium for their upcoming concert. As most bands do, they meet with the manager of the location, book a date, and determine a payment. Let’s say it costs$1,500 to rent the building for 2 hours. In addition to this fee, the band earns 20% of each $30 ticket sold. Write an equation that determines whether the band made profit or lost money due to the number of tickets sold – the equation would be y = 0.2(30)x – 1500, where y is the amount gained or lost and x is the number of tickets sold that night. This can also help the band determine their goal on how many tickets to sell. If they want to make a profit of$2,000, they would have to sell x-many tickets to accomplish that. In reality, most arts performances make a profit from their shows or concerts. Not only do mathematicians and scientists use slope-intercept of a line, but with this example, it shows up in many types of arts and real-world situations. Not only does the form work for calculating cost or profit, it can relate to the number of seats in a theatre, such as x rows of 30 seats and a VIP section of 20 seats. The equation to find how many seats are available in the theatre is y = 30x + 20, where x is the number of rows. How can technology be used to effectively engage students with this topic? A great way to engage students when learning about slope-intercept form of a line is to use Geometer’s Sketchpad. After opening a graph with an x- and y-axis, use the tools to create a line. From there, you can drag the line up or down and notice that the slope increases as you move upward and decreases as you move downward. Students can also find the equation of the line by selecting the line, clicking “Measure” in the menu bar, and selecting “Equation” in the drop-down list. This gives the students an accurate equation of the line they selected in slope-intercept form. Geometer’s Sketchpad allows students to experiment and explore directions of lines, determine whether or not it has an increasing slope, and help create a visual image for positive and negative slopes. Also, with this program, students can play a matching game with slope-intercept equations and lines. You will instruct the student to create five random lines that move in any direction. Next, they will select all of the lines, go to “Measure” in the menu bar, and click “Equation.” From there, it’ll give them the equation of each line. Then, the student will go back and select the lines once again, go to “Edit” on the menu bar, hover over “Action Buttons,” and select “Hide/Show.” Once a box comes up, they will click the “Label” tab and type Scramble Lines in the text line. Next, the lines will scramble and stop when clicked on. Once the lines are done scrambling, the student could then match the equations with their lines. This activity gives the students the chance to look at equations and determine whether the slope is increasing and decreasing and where the line hits the y-axis. How could you as a teacher create an activity or project that involves your topic? 
With this topic, I could definitely do a project that consists of slope-intercept equations, their graphs, and word problems that involve computations. For example, growing up, some students had to earn money by doing chores around the house. Parents give allowance on daily duties that their children did. The project will give the daily amount of allowance that each student earned. With that, say the student needed to reach a certain amount of money before purchasing the iPad Air. In part one of the project, the student will create an equation that reflects their daily allowing of $5 and the amount of money they have at the moment. In part two, the student will construct a graph that shows the rate of their earnings, supposing that they don’t skip a day of chores. In part three, the students will answer a series of questions, such as, • What will you earn after a week? • What is your total amount of money after that week? • When will you have enough money to buy that iPad Air at$540 after tax? This would be a short project, but it’s definitely something that the students can do outside of class as a fun activity. It can also help them reach their goals of owning something they want and making a financial plan on how to accomplish that. References # Finding the equation of a line between two points Here’s a standard problem that could be found in any Algebra I textbook. Find the equation of the line between $(-1,-2)$ and $(4,2)$. The first step is clear: the slope of the line is $m = \displaystyle \frac{2-(-2)}{4-(-1)} = \frac{4}{5}$ At this point, there are two reasonable approaches for finding the equation of the line. Method #1. This is the method that was hammered into my head when I took Algebra I. We use the point-slope form of the line: $y - y_1 = m (x - x_1)$ $y - 2 = \displaystyle \frac{4}{5} (x-4)$ $y - 2 = \displaystyle \frac{4}{5}x - \frac{16}{5}$ $y = \displaystyle \frac{4}{5}x - \frac{6}{5}$ For what it’s worth, the point-slope form of the line relies on the fact that the slope between $(x,y)$ and $(x_1,y_1)$ is also equal to $m$. Method #2. I can honestly say that I never saw this second method until I became a college professor and I saw it on my students’ homework. In fact, I was so taken aback that I almost marked the solution incorrect until I took a minute to think through the logic of my students’ solution. Let’s set up the slope-intercept form of a line: $y= \displaystyle \frac{4}{5}x + b$ Then we plug in one of the points for $x$ and $y$ to solve for $b$. $2 = \displaystyle \frac{4}{5}(4) + b$ $\displaystyle -\frac{6}{5} = b$ Therefore, the line is $y = \displaystyle \frac{4}{5}x - \frac{6}{5}$. My experience is that most college students prefer Method #2, and I can’t say that I blame them. The slope-intercept form of a line is far easier to use than the point-slope form, and it’s one less formula to memorize. Still, I’d like to point out that there are instances in courses above Algebra I that the point-slope form is really helpful, and so the point-slope form should continue to be taught in Algebra I so that students are prepared for these applications later in life. Topic #1. In calculus, if $f$ is differentiable, then the tangent line to the curve $y=f(x)$ at the point $(a,f(a))$ has slope $f'(a)$. Therefore, the equation of the tangent line (or the linearization) has the form $y = f(a) + f'(a) \cdot (x-a)$ This linearization is immediately obtained from the point-slope form of a line. 
It also can be obtained using Method #2 above, so it takes a little bit of extra work. This linearization is used to derive Newton’s method for approximating the roots of functions, and it is a precursor to Taylor series. Topic #2. In statistics, a common topic is finding the least-squares fit to a set of points $(x_1,y_1), (x_2,y_2), \dots, (x_n,y_n)$. The solution is called the regression line, which has the form $y - \overline{y} = r \displaystyle \frac{s_y}{s_x} (x - \overline{x})$ In this equation, • $\overline{x}$ and $\overline{y}$ are the means of the $x-$ and $y-$values, respectively. • $s_x$ and $s_y$ are the sample standard deviations of the $x-$ and $y-$values, respectively. • $r$ is the correlation coefficient between the $x-$ and $y-$values. The formula of the regression line is decidedly easier to write in point-slope form than in slope-intercept form. Also, the point-slope form makes the interpretation of the regression line clear: it must pass through the point of averages $(\overline{x}, \overline{y})$.
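A quick numerical check of that last remark may help. The following is a small sketch (my addition, using numpy with made-up data, not part of the original post) verifying that the least-squares slope equals $r \, s_y/s_x$ and that the fitted line does pass through the point of averages $(\overline{x}, \overline{y})$.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=50)  # toy data

x_bar, y_bar = x.mean(), y.mean()
s_x, s_y = x.std(ddof=1), y.std(ddof=1)
r = np.corrcoef(x, y)[0, 1]

slope = r * s_y / s_x                          # slope used in the point-slope formula
slope_ls, intercept_ls = np.polyfit(x, y, 1)   # ordinary least-squares fit

print(np.isclose(slope, slope_ls))                         # True: same slope
print(np.isclose(y_bar, slope_ls * x_bar + intercept_ls))  # True: line passes through the means
```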
http://www.solutioninn.com/an-experiment-consists-of-rolling-a-pair-of-six-sided
# Question An experiment consists of rolling a pair of (six-sided) dice and observing the sum. This experiment is repeated until a sum of 7 is observed, at which point the experiment stops. Let N be the random variable which represents the number of times the experiment is repeated. That is, if the first occurrence of {sum = 7} happens on the 5th roll of the dice, then N = 5. (a) Find the probability mass function for the random variable N. That is, find P_N(k) = Pr(N = k) for all k. (b) What is the probability that the experiment proceeds for at least 4 rolls? That is, find Pr(N ≥ 4).
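The question is posted without a solution, so here is a short sketch of the standard argument, plus a simulation check (my addition, not part of the original page). Since 6 of the 36 equally likely outcomes of a pair of dice sum to 7, each repetition is a Bernoulli trial with success probability 1/6, so N is geometric: P_N(k) = (5/6)^(k-1) · (1/6) for k = 1, 2, 3, ..., and Pr(N ≥ 4) = (5/6)^3 ≈ 0.579 (the probability of three failures in a row).

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 50_000

# Simulate up to 100 rolls per experiment; the chance of needing more is ~(5/6)^100, negligible.
sums = rng.integers(1, 7, size=(n_sims, 100, 2)).sum(axis=2)
N = (sums == 7).argmax(axis=1) + 1  # 1-based index of the first sum of 7

print("simulated   Pr(N >= 4):", (N >= 4).mean())
print("closed form (5/6)**3  :", (5 / 6) ** 3)
```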
https://aramram.com/forum/8739c3-moment-generating-function-formula
A moment-generating function (MGF), as its name implies, is a function used to find the moments of a given random variable. One way to calculate the mean and variance of a probability distribution is to find the expected values of the random variables X and X² directly, but for some distributions these calculations are awkward. To get around this difficulty, we use some more advanced mathematical theory and calculus; the end result is something that makes our calculations easier.

The moment-generating function of a random variable $X$ is defined as $M_X(t) = E\left[e^{tX}\right]$, wherever this expectation exists. $M_X(0)$ always exists and is equal to 1. For a discrete random variable with probability mass function $f(x)$ this is $M_X(t) = \sum_x e^{tx} f(x)$, the sum running over the sample space; for a continuous random variable with density $f(x)$, the expectation expands (by the law of the unconscious statistician) to an integral, which is the two-sided Laplace transform of the density with the sign of the argument reversed.

The moment-generating function is so named because, if it exists on an open interval around $t = 0$, it is the exponential generating function of the moments of the distribution: $M_X(t) = \sum_{k=0}^{\infty} \frac{m_k t^k}{k!}$, where $m_k = E[X^k]$ is the $k$-th moment of $X$. That is, with $n$ a nonnegative integer, the $n$-th moment about 0 is the $n$-th derivative of the moment-generating function evaluated at $t = 0$. In particular, the mean is $M'(0)$ and the variance is $M''(0) - [M'(0)]^2$. Expected values of integer powers of $X - \mu$ are called moments about the mean; for example, $E[(X-\mu)^3]$ is the third moment about the mean.

Some useful properties:
• Moment-generating functions are positive and log-convex, with $M(0) = 1$.
• For a linear transformation, $M_{\alpha X + \beta}(t) = e^{\beta t} M_X(\alpha t)$.
• There are particularly simple results for the moment-generating functions of distributions defined by weighted sums of independent random variables: if $S_n = \sum_{i=1}^{n} a_i X_i$ with the $X_i$ independent, the MGF of $S_n$ is the product of the individual MGFs evaluated at $a_i t$.
• Uniqueness: if the moment-generating functions of two random variables match on an open interval around zero, then the two variables have the same distribution. (This is not equivalent to the statement "if two distributions have the same moments, then they are identical at all points.")
• Upper bounding the moment-generating function can be used in conjunction with Markov's inequality to bound the upper tail of a real random variable $X$; this is called the Chernoff bound. For example, a standard normal random variable has $M_X(t) = e^{t^2/2}$, and optimising over $t > 0$ gives $P(X \geq a) \leq e^{-a^2/2}$ for $a > 0$. Various lemmas, such as Hoeffding's lemma or Bennett's inequality, provide bounds on the moment-generating function in the case of a zero-mean, bounded random variable.

However, a key problem with moment-generating functions is that moments and the moment-generating function may not exist, as the integrals need not converge absolutely; one needs a positive real number $r$ such that $E[e^{tX}]$ is finite for all $t$ in $[-r, r]$. The lognormal distribution is an example of when this occurs. By contrast, the characteristic function (the Fourier transform of the density, a Wick rotation of the MGF when the latter exists) always exists, because it is the integral of a bounded function on a space of finite measure, and for some purposes may be used instead. In addition to real-valued (univariate) distributions, moment-generating functions can be defined for vector- or matrix-valued random variables, with $tX$ replaced by the dot product $\langle \mathbf{t}, \mathbf{X} \rangle$. Thus the MGF provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions.

References: Kenney, J. F. and Keeping, E. S., "Moment-Generating and Characteristic Functions," "Some Examples of Moment-Generating Functions," and "Uniqueness Theorem for Characteristic Functions," in Mathematics of Statistics, Pt. 2, 2nd ed., 1951. Weisstein, Eric W., "Moment-Generating Function," MathWorld.
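As a small illustration of the "differentiate at zero" recipe, here is a symbolic check added for illustration (a sketch using sympy, not taken from the references above): the MGF of a binomial random variable with parameters $n$ and $p$ is $(1 - p + pe^t)^n$, and differentiating it at $t = 0$ recovers the familiar mean $np$ and variance $np(1-p)$.

```python
import sympy as sp

t, p = sp.symbols("t p", positive=True)
n = sp.Integer(10)

# MGF of a Binomial(n, p) random variable
M = (1 - p + p * sp.exp(t)) ** n

mean = sp.simplify(sp.diff(M, t).subs(t, 0))        # M'(0)  = n*p
second_moment = sp.diff(M, t, 2).subs(t, 0)         # M''(0) = E[X^2]
variance = sp.simplify(second_moment - mean**2)     # n*p*(1 - p)

print(mean, variance)
```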
http://wiki.freepascal.org/Using_resourcestrings
# Using resourcestrings

The .rst file is created to provide a mechanism to localize your application. Currently, only one localization mechanism is provided: gettext. The steps are as follows:
1. The compiler creates the .rst file.
2. The rstconv tool converts it to .po (the input for gettext). This file can be translated to many languages. All standard gettext tools can be used.
3. Gettext creates .mo files.
4. The .mo files are read by the gettext unit and all resourcestrings are translated.

The calls needed to translate all resourcestrings are in the objpas unit. They are documented.

Nothing stops people from creating a mechanism that does not depend on gettext. One could implement a mechanism to create resource DLLs (as Delphi does) which contain the translated texts. In fact, output to .rc files, i.e. source texts for the resource compiler, is already available in rstconv - however, portable functions for loading the texts from such DLLs are missing (see point 3 below). The same applies to the third output format supported by rstconv at the moment, IBM OS/2 MSG files.

The reason gettext was chosen is that it's more or less standard on Unix. But gettext is horribly inefficient, so if someone has a better idea, please propose it. Plus, gettext is context insensitive (it operates on the string itself), which is a drawback: sometimes the same word/sentence must be translated differently according to the context, and this is not possible.

To implement another mechanism, 3 things are needed:
1. Update rstconv so it can output another format.
2. Tools to manipulate the other format.
3. Implement a unit that loads the other format at runtime.

This is also the reason we create an intermediate file format: this way the compiler needs no knowledge of the translation tool. It just needs to create the .rst file.

An alternate way of doing it would e.g. be to create an ini file per language, with a section for each unit used, and a key for each string.

english.ini:
[sysutils]
SErrInvalidDateTime="%S" is not a valid date/time indication.

dutch.ini:
[sysutils]
SErrInvalidDateTime="%S" is geen geldige datum/tijd aanduiding.

This would allow reuse of various files.
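As a rough illustration of what the proposed ini-based lookup would have to do at runtime (written in Python purely as a sketch — a real implementation would of course live in a Free Pascal unit; the file, section, and key names are just the examples above):

```python
import configparser

def load_translations(path):
    """Load one per-language ini file: a section per unit, a key per resourcestring."""
    # Disable interpolation because the strings contain literal '%' placeholders such as %S.
    parser = configparser.ConfigParser(interpolation=None)
    parser.read(path)
    return parser

def translate(parser, unit, key, default=""):
    """Look up the translated string for unit/key, falling back to a default."""
    if parser.has_section(unit) and parser.has_option(unit, key):
        return parser.get(unit, key).strip('"')
    return default

lang = load_translations("dutch.ini")
print(translate(lang, "sysutils", "SErrInvalidDateTime", default="(untranslated)"))
```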
https://www.vedantu.com/question-answer/8g-of-naoh-is-dissolved-in-18g-of-h2o-mole-class-12-chemistry-cbse-5fd73ace147a833c29eb4f1c
# 8g of NaOH is dissolved in 18g of ${{H}_{2}}O$. The mole fraction of NaOH in the solution and the molality (in mol/kg) of the solution, respectively, are: (a) 0.167, 11.11 (b) 0.2, 22.20 (c) 0.2, 11.11 (d) 0.167, 22.20
Hint: First we have to find the number of moles of NaOH and ${{H}_{2}}O$, and then we can find the mole fraction of NaOH by using the formula Mole fraction of NaOH $=\dfrac{{{n}_{NaOH}}}{{{n}_{NaOH}}+{{n}_{{{H}_{2}}O}}}$. Since we know the mass of the water, we can easily find the molality of the solution by using the formula Molality $=\dfrac{\text{no of moles of the solute}}{\text{total mass of the solvent in kg}}$. Now solve it.
Complete step by step answer:
First of all, let's discuss mole fraction and molality. By the term mole fraction we mean the ratio of the number of moles of a particular component to the total number of moles in the solution (solute plus solvent). If the substance A is dissolved in solvent B and ${{n}_{A}}$ and ${{n}_{B}}$ are the numbers of moles of the solute A and the solvent B, then;
Mole fraction of solute A $=\dfrac{{{n}_{A}}}{{{n}_{A}}+{{n}_{B}}}$
Mole fraction of solvent B $=\dfrac{{{n}_{B}}}{{{n}_{A}}+{{n}_{B}}}$
And by the term molality we mean the ratio of the number of moles of the solute to the total mass of the solvent in kilograms, i.e.
Molality $=\dfrac{\text{no of moles of the solute}}{\text{total mass of the solvent in kg}}$ -----------(A)
Now, considering the numerical;
We can calculate the moles of NaOH by using the formula;
Moles of NaOH $=\dfrac{given\text{ }mass}{molecular\text{ }mass}$ ---------(1)
Given mass of NaOH = 8 g
Molecular mass of NaOH = 23+16+1 = 40
Put these values in equation (1), we get;
Moles of NaOH $=\dfrac{8}{40}$ = 0.2
Similarly,
Given mass of ${{H}_{2}}O$ = 18 g
Molecular mass of ${{H}_{2}}O$ = 2+16 = 18
Put these values in equation (1), we get;
Moles of ${{H}_{2}}O$ $=\dfrac{18}{18}$ = 1
Mole fraction of NaOH $=\dfrac{{{n}_{NaOH}}}{{{n}_{NaOH}}+{{n}_{{{H}_{2}}O}}}$
Put the values of the number of moles of NaOH and ${{H}_{2}}O$ in it, we get;
Mole fraction of NaOH $=\dfrac{0.2}{0.2+1}=\dfrac{0.2}{1.2}$ = 0.167
Now, calculating the molality of the solution using equation (A);
Molality $=\dfrac{\text{no of moles of NaOH}}{\text{total mass of the }{{\text{H}}_{2}}\text{O in kg}}$ ---------(2)
Number of moles of NaOH = 0.2
Mass of water = 18 g $=\dfrac{18}{1000}$ kg (1 kg = 1000 g)
Put these values in equation (2), we get:
Molality $=\dfrac{0.2\times 1000}{18}$ = 11.11 mol/kg
Hence, option (a) is correct.
Note: The sum of the mole fractions of the solute and the solvent is always exactly equal to one, and the molality of a solution is independent of temperature because it depends only on the mass of the solvent.
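As a quick sanity check of the arithmetic above, here is a small Python snippet added for illustration (not part of the original answer):

```python
# Mole fraction and molality of 8 g NaOH dissolved in 18 g of water.
m_naoh, M_naoh = 8.0, 40.0    # mass (g) and molar mass (g/mol) of NaOH
m_h2o,  M_h2o  = 18.0, 18.0   # mass (g) and molar mass (g/mol) of water

n_naoh = m_naoh / M_naoh      # 0.2 mol
n_h2o  = m_h2o / M_h2o        # 1.0 mol

x_naoh   = n_naoh / (n_naoh + n_h2o)   # mole fraction of NaOH
molality = n_naoh / (m_h2o / 1000.0)   # mol of solute per kg of solvent

print(round(x_naoh, 3), round(molality, 2))   # 0.167 11.11
```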
https://glum.readthedocs.io/en/latest/tutorials/glm_french_motor_tutorial/glm_french_motor.html
# GLM Tutorial: Poisson, Gamma, and Tweedie with French Motor Third-Party Liability Claims Intro This tutorial shows why and how to use Poisson, Gamma, and Tweedie GLMs on an insurance claims dataset using glum. It was inspired by, and closely mirrors, two other GLM tutorials that used this dataset: 1. An sklearn-learn tutorial, Tweedie regression on insurance claims, which was created for this (partially merged) sklearn PR that we based glum on 2. An R tutorial, Case Study: French Motor Third-Party Liability Claims with R code. Background Insurance claims are requests made by a policy holder to an insurance company for compensation in the event of a covered loss. When modeling these claims, the goal is often to estimate, per policy, the total claim amount per exposure unit. (i.e. number of claims $$\times$$ average amount per claim per year). This amount is also referred to as the pure premium. Two approaches for modeling this value are: 1. Modeling the total claim amount per exposure directly 2. Modeling number of claims and claim amount separately with a frequency and a severity model In this tutorial, we demonstrate both approaches. We start with the second option as it shows how to use two different families/distributions (Poisson and Gamma) within a GLM on a single dataset. We then show the first approach using a single poison-gamma Tweedie regressor (i.e. a Tweedie with power $$p \in (1,2)$$) [1]: import matplotlib.pyplot as plt import numpy as np import pandas as pd import scipy.optimize as optimize import scipy.stats from dask_ml.preprocessing import Categorizer from sklearn.metrics import mean_absolute_error from sklearn.model_selection import ShuffleSplit from glum import GeneralizedLinearRegressor from glum import TweedieDistribution ## 1. Load and prepare datasets from Openml First, we load in our dataset from openML and apply several transformations. In the interest of simplicity, we do not include the data loading and preparation code in this notebook. Below is a list of further resources if you wish to explore further: 1. If you want to run the same code yourself, please see the helper functions here. 2. For a detailed description of the data, see here. 3. For an excellent exploratory data analysis, see the case study paper linked above. Some important notes about the dataset post-transformation: • Total claim amounts are aggregated per policy • For ClaimAmountCut, the claim amounts (pre-aggregation) were cut at 100,000 per single claim. We choose to use this amount rather than the raw ClaimAmount. (100,000 is the 0.9984 quantile but claims > 100,000 account for 25% of the overall claim amount) • We aggregate the total claim amounts per policy • ClaimNb is the total number of claims per policy with claim amount greater zero • VehPower, VehAge, and DrivAge are clipped and/or digitized into bins so that they can be used as categoricals later on [2]: df = load_transform() with pd.option_context('display.max_rows', 10): display(df) ClaimNb Exposure Area VehPower VehAge DrivAge BonusMalus VehBrand VehGas Density Region ClaimAmount ClaimAmountCut IDpol 1 0 0.10000 D 5 0 5 50 B12 Regular 1217 R82 0.0 0.0 3 0 0.77000 D 5 0 5 50 B12 Regular 1217 R82 0.0 0.0 5 0 0.75000 B 6 1 5 50 B12 Diesel 54 R22 0.0 0.0 10 0 0.09000 B 7 0 4 50 B12 Diesel 76 R72 0.0 0.0 11 0 0.84000 B 7 0 4 50 B12 Diesel 76 R72 0.0 0.0 ... ... ... ... ... ... ... ... ... ... ... ... ... ... 
6114326 0 0.00274 E 4 0 5 50 B12 Regular 3317 R93 0.0 0.0 6114327 0 0.00274 E 4 0 4 95 B12 Regular 9850 R11 0.0 0.0 6114328 0 0.00274 D 6 1 4 50 B12 Diesel 1323 R82 0.0 0.0 6114329 0 0.00274 B 4 0 5 50 B12 Regular 95 R26 0.0 0.0 6114330 0 0.00274 B 7 1 2 54 B12 Diesel 65 R72 0.0 0.0 678013 rows × 13 columns ## 2. Frequency GLM - Poisson distribution We start with the first part of our two part GLM - modeling the frequency of claims using a Poisson regression. Below, we give some background on why the Poisson family makes the most sense in this context. ### 2.1 Why Poisson distributions? Poisson distributions are typically used to model the number of events occuring in a fixed period of time when the events occur independently at a constant rate. In our case, we can think of motor insurance claims as the events, and a unit of exposure (i.e. a year) as the fixed period of time. To get more technical: We define: • $$z$$: number of claims • $$w$$: exposure (time in years under risk) • $$y = \frac{z}{w}$$: claim frequency per year • $$X$$: feature matrix The number of claims $$z$$ is an integer, $$z \in [0, 1, 2, 3, \ldots]$$. Theoretically, a policy could have an arbitrarily large number of claims—very unlikely but possible. The simplest distribution for this range is a Poisson distribution $$z \sim Poisson$$. However, instead of $$z$$, we will model the frequency $$y$$. Nonetheless, this is still (scaled) Poisson distributed with variance inverse proportional to $$w$$, cf. wikipedia:Reproductive_EDM. To verify our assumptions, we start by plotting the observed frequencies and a fitted Poisson distribution (Poisson regression with intercept only). [3]: # plt.subplots(figsize=(10, 7)) df_plot = ( df.loc[:, ['ClaimNb', 'Exposure']].groupby('ClaimNb').sum() .assign(Frequency_Observed = lambda x: x.Exposure / df['Exposure'].sum()) ) mean = df['ClaimNb'].sum() / df['Exposure'].sum() x = range(5) plt.scatter(x, df_plot['Frequency_Observed'].values, color="blue", alpha=0.85, s=60, label='observed') plt.scatter(x, scipy.stats.poisson.pmf(x, mean), color="orange", alpha=0.55, s=60, label="poisson fit") plt.xticks(x) plt.legend() plt.title("Frequency"); This is a strong confirmation for the use of a Poisson when fitting! ### 2.2 Train and test frequency GLM Now, we start fitting our model. We use claims frequency = claim number/exposure as our outcome variable. We then divide the dataset into training set and test set with a 9:1 random split. Also, notice that we do not one hot encode our columns. Rather, we take advantage of glum’s integration with tabmat, which allows us to pass in categorical columns directly! tabmat will handle the encoding for us and even includes a handful of helpful matrix operation optimizations. We use the Categorizer from dask_ml to set our categorical columns as categorical dtypes and to ensure that the categories align in fitting and predicting. 
[4]: z = df['ClaimNb'].values weight = df['Exposure'].values y = z / weight # claims frequency ss = ShuffleSplit(n_splits=1, test_size=0.1, random_state=42) train, test = next(ss.split(y)) categoricals = ["VehBrand", "VehGas", "Region", "Area", "DrivAge", "VehAge", "VehPower"] predictors = categoricals + ["BonusMalus", "Density"] glm_categorizer = Categorizer(columns=categoricals) X_train_p = glm_categorizer.fit_transform(df[predictors].iloc[train]) X_test_p = glm_categorizer.transform(df[predictors].iloc[test]) y_train_p, y_test_p = y[train], y[test] w_train_p, w_test_p = weight[train], weight[test] z_train_p, z_test_p = z[train], z[test] Now, we define our GLM using the GeneralizedLinearRegressor class from glum. • family='poisson': creates a Poisson regressor • alpha_search=True: tells the GLM to search along the regularization path for the best alpha • l1_ratio = 1 tells the GLM to only use l1 penalty (not l2). l1_ratio is the elastic net mixing parameter. For l1_ratio = 0, the penalty is an L2 penalty. For l1_ratio = 1, it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. See the GeneralizedLinearRegressor class API documentation for more details. Note: glum also supported a cross validation model GeneralizedLinearRegressorCV. However, because cross validation requires fitting many models, it is much slower and we don’t demonstrate it in this tutorial. [5]: f_glm1 = GeneralizedLinearRegressor(family='poisson', alpha_search=True, l1_ratio=1, fit_intercept=True) f_glm1.fit( X_train_p, y_train_p, sample_weight=w_train_p ); pd.DataFrame({'coefficient': np.concatenate(([f_glm1.intercept_], f_glm1.coef_))}, index=['intercept'] + f_glm1.feature_names_).T [5]: intercept VehBrand__B1 VehBrand__B10 VehBrand__B11 VehBrand__B12 VehBrand__B13 VehBrand__B14 VehBrand__B2 VehBrand__B3 VehBrand__B4 ... VehAge__1 VehAge__2 VehPower__4 VehPower__5 VehPower__6 VehPower__7 VehPower__8 VehPower__9 BonusMalus Density coefficient -4.269268 -0.003721 -0.010846 0.138466 -0.259298 0.0 -0.110712 -0.003604 0.044075 0.0 ... 0.045494 -0.139428 -0.070054 -0.028142 0.0 0.0 0.016531 0.164711 0.026764 0.000004 1 rows × 60 columns To measure our model’s test and train performance, we use the deviance function for the Poisson family. We can get the total deviance function directly from glum’s distribution classes and divide it by the sum of our sample weight. Note: a Poisson distribution is equivlane to a Tweedie distribution with power = 1. [6]: PoissonDist = TweedieDistribution(1) print('training loss f_glm1: {}'.format( PoissonDist.deviance(y_train_p, f_glm1.predict(X_train_p), sample_weight=w_train_p)/np.sum(w_train_p) )) print('test loss f_glm1: {}'.format( PoissonDist.deviance(y_test_p, f_glm1.predict(X_test_p), sample_weight=w_test_p)/np.sum(w_test_p))) training loss f_glm1: 0.45704947333555146 test loss f_glm1: 0.45793061314157685 A GLM with canonical link function (Normal - identity, Poisson - log, Gamma - 1/x, Binomial - logit) with an intercept term has the so called balance property. Neglecting small deviations due to an imperfect fit, on the training sample the results satisfy the equality: $\sum_{i \in training} w_i y_i = \sum_{i \in training} w_i \hat{\mu}_i$ As expected, this property holds in our real data: [7]: # balance property of GLM with canonical link, like log-link for Poisson: z_train_p.sum(), (f_glm1.predict(X_train_p) * w_train_p).sum() [7]: (23785, 23785.198509368805) ## 3. 
Severity GLM - Gamma distribution Now, we fit a GLM for the severity with the same features as the frequency model. The severity $$y$$ is the average claim size. We define: • $$z$$: total claim amount, single claims cut at 100,000 • $$w$$: number of claims (with positive claim amount!) • $$y = \frac{z}{w}$$: severity ### 3.1 Why Gamma distributions The severity $$y$$ is a positive, real number, $$y \in (0, \infty)$$. Theoretically, especially for liability claims, one could have arbitrary large numbers—very unlikely but possible. A very simple distribution for this range is an Exponential distribution, or its generalization, a Gamma distribution $$y \sim Gamma$$. In the insurance industry, it is well known that the severity might be skewed by a few very large losses. It’s common to model these tail losses separately so here we cut out claims larger than 100,000 to focus on modeling small and moderate claims. [8]: df_plot = ( df.loc[:, ['ClaimAmountCut', 'ClaimNb']] .query('ClaimNb > 0') .assign(Severity_Observed = lambda x: x['ClaimAmountCut'] / df['ClaimNb']) ) df_plot['Severity_Observed'].plot.hist(bins=400, density=True, label='Observed', ) x = np.linspace(0, 1e5, num=400) plt.plot(x, scipy.stats.gamma.pdf(x, *scipy.stats.gamma.fit(df_plot['Severity_Observed'], floc=0)), 'r-', label='fitted Gamma') plt.legend() plt.title("Severity"); plt.xlim(left=0, right = 1e4); #plt.xticks(x); [9]: # Check mean-variance relationship for Gamma: Var[Y] = E[Y]^2 / Exposure # Estimate Var[Y] and E[Y] # Plot estimates Var[Y] vs E[Y]^s/Exposure # Note: We group by VehPower and BonusMalus in order to have different E[Y]. def my_agg(x): """See https://stackoverflow.com/q/44635626""" x_sev = x['Sev'] x_cnb = x['ClaimNb'] n = x_sev.shape[0] names = { 'Sev_mean': np.average(x_sev, weights=x_cnb), 'Sev_var': 1/(n-1) * np.sum((x_cnb/np.sum(x_cnb)) * (x_sev-np.average(x_sev, weights=x_cnb))**2), 'ClaimNb_sum': x_cnb.sum() } return pd.Series(names, index=['Sev_mean', 'Sev_var', 'ClaimNb_sum']) for col in ['VehPower', 'BonusMalus']: claims = df.groupby(col)['ClaimNb'].sum() df_plot = (df.loc[df[col].isin(claims[claims >= 4].index), :] .query('ClaimNb > 0') .assign(Sev = lambda x: x['ClaimAmountCut']/x['ClaimNb']) .groupby(col) .apply(my_agg) ) plt.plot(df_plot['Sev_mean'], df_plot['Sev_var'] * df_plot['ClaimNb_sum'], '.', markersize=12, label='observed') # fit: mean**p/claims p = optimize.curve_fit(lambda x, p: np.power(x, p), df_plot['Sev_mean'].values, df_plot['Sev_var'] * df_plot['ClaimNb_sum'], p0 = [2])[0][0] df_fit = pd.DataFrame({'x': df_plot['Sev_mean'], 'y': np.power(df_plot['Sev_mean'], p)}) df_fit = df_fit.sort_values('x') plt.plot(df_fit.x, df_fit.y, 'k--', label='fit: Mean**{}'.format(p)) plt.xlabel('Mean of Severity ') plt.ylabel('Variance of Severity * ClaimNb') plt.legend() plt.title('Man-Variance of Claim Severity by {}'.format(col)) plt.show() Great! A Gamma distribution seems to be an empirically reasonable assumption for this dataset. Hint: If Y were normal distributed, one should see a horizontal line, because $$Var[Y] = constant/Exposure$$ and the fit should give $$p \approx 0$$. ### 3.2 Severity GLM with train and test data We fit a GLM for the severity with the same features as the frequency model. We use the same categorizer as before. Note: • We filter out ClaimAmount == 0. The severity problem is to model claim amounts conditional on a claim having already been submitted. It seems reasonable to treat a claim of zero as equivalent to no claim at all. 
Additionally, zero is not included in the open interval $$(0, \infty)$$ support of the Gamma distribution. • We use ClaimNb as sample weights. • We use the same split in train and test data such that we can predict the final claim amount on the test set as the product of our Poisson claim number and Gamma claim severity GLMs. [10]: idx = df['ClaimAmountCut'].values > 0 z = df['ClaimAmountCut'].values weight = df['ClaimNb'].values # y = claims severity y = np.zeros_like(z) # zeros will never be used y[idx] = z[idx] / weight[idx] # we also need to represent train and test as boolean indices itrain = np.zeros(y.shape, dtype='bool') itest = np.zeros(y.shape, dtype='bool') itrain[train] = True itest[test] = True # simplify life itrain = idx & itrain itest = idx & itest X_train_g = glm_categorizer.fit_transform(df[predictors].iloc[itrain]) X_test_g = glm_categorizer.transform(df[predictors].iloc[itest]) y_train_g, y_test_g = y[itrain], y[itest] w_train_g, w_test_g = weight[itrain], weight[itest] z_train_g, z_test_g = z[itrain], z[itest] We fit our model with the same parameters as before, but of course, this time we use family='gamma'. [11]: s_glm1 = GeneralizedLinearRegressor(family='gamma', alpha_search=True, l1_ratio=1, fit_intercept=True) s_glm1.fit(X_train_g, y_train_g, sample_weight=weight[itrain]) pd.DataFrame({'coefficient': np.concatenate(([s_glm1.intercept_], s_glm1.coef_))}, index=['intercept'] + s_glm1.feature_names_).T [11]: intercept VehBrand__B1 VehBrand__B10 VehBrand__B11 VehBrand__B12 VehBrand__B13 VehBrand__B14 VehBrand__B2 VehBrand__B3 VehBrand__B4 ... VehAge__1 VehAge__2 VehPower__4 VehPower__5 VehPower__6 VehPower__7 VehPower__8 VehPower__9 BonusMalus Density coefficient 7.3389 -0.034591 0.040528 0.13116 0.035838 0.100753 -0.073995 -0.033196 0.0 0.049078 ... 0.0 -0.024827 -0.009537 -0.089972 0.071376 0.009361 -0.042491 0.051636 0.002365 -0.000001 1 rows × 60 columns Again, we measure performance with the deviance of the distribution. We also compare against the simple arithmetic mean and include the mean absolute error to help understand the actual scale of our results. Note: a Gamma distribution is equivalent to a Tweedie distribution with power = 2. [26]: GammaDist = TweedieDistribution(2) print('training loss (deviance) s_glm1: {}'.format( GammaDist.deviance(y_train_g, s_glm1.predict(X_train_g), sample_weight=w_train_g)/np.sum(w_train_g) )) print('training mean absolute error s_glm1: {}'.format( mean_absolute_error(y_train_g, s_glm1.predict(X_train_g)) )) print('\ntesting loss s_glm1 (deviance): {}'.format( GammaDist.deviance(y_test_g, s_glm1.predict(X_test_g), sample_weight=w_test_g)/np.sum(w_test_g) )) print('testing mean absolute error s_glm1: {}'.format( mean_absolute_error(y_test_g, s_glm1.predict(X_test_g)) )) print('\ntesting loss Mean (deviance): {}'.format( GammaDist.deviance(y_test_g, np.average(z_train_g, weights=w_train_g)*np.ones_like(z_test_g), sample_weight=w_test_g)/np.sum(w_test_g) )) print('testing mean absolute error Mean: {}'.format( mean_absolute_error(y_test_g, np.average(z_train_g, weights=w_train_g)*np.ones_like(z_test_g)) )) training loss (deviance) s_glm1: 1.29010461534461 training mean absolute error s_glm1: 1566.1785138646032 testing loss s_glm1 (deviance): 1.2975718597070154 testing mean absolute error s_glm1: 1504.4458958597086 testing loss Mean (deviance): 1.3115309309577132 testing mean absolute error Mean: 1689.205530922944
(In the insurance world, this will make a significant difference when aggregated over all claims). ### 3.3 Combined frequency and severity results We put together the prediction of frequency and severity to get the predictions of the total claim amount per policy. [13]: #Put together freq * sev together print("Total claim amount on train set, observed = {}, predicted = {}". format(df['ClaimAmountCut'].values[train].sum(), np.sum(df['Exposure'].values[train] * f_glm1.predict(X_train_p) * s_glm1.predict(X_train_p))) ) print("Total claim amount on test set, observed = {}, predicted = {}". format(df['ClaimAmountCut'].values[test].sum(), np.sum(df['Exposure'].values[test] * f_glm1.predict(X_test_p) * s_glm1.predict(X_test_p))) ) Total claim amount on train set, observed = 44594644.68, predicted = 44549152.42247057 Total claim amount on test set, observed = 4707551.37, predicted = 4946960.354743531 ## 4. Combined GLM - Tweedie distribution Finally, to demonstrate an alternate approach to the combined frequency severity model, we show how we can model pure premium directly using a Tweedie regressor. Any Tweedie distribution with power $$p\in(1,2)$$ is known as compound Poisson Gamma distribution [14]: weight = df['Exposure'].values df["PurePremium"] = df["ClaimAmountCut"] / df["Exposure"] X_train_t = glm_categorizer.fit_transform(df[predictors].iloc[train]) X_test_t = glm_categorizer.transform(df[predictors].iloc[test]) y_train_t, y_test_t = y.iloc[train], y.iloc[test] w_train_t, w_test_t = weight[train], weight[test] For now, we just arbitrarily select 1.5 as the power parameter for our Tweedie model. However for a better fit we could include the power parameter in the optimization/fitting process, possibly via a simple grid search. Note: notice how we pass a TweedieDistribution object in directly for the family parameter. While glum supports strings for common families, it is also possible to pass in a glum distribution directly. [15]: TweedieDist = TweedieDistribution(1.5) t_glm1 = GeneralizedLinearRegressor(family=TweedieDist, alpha_search=True, l1_ratio=1, fit_intercept=True) t_glm1.fit(X_train_t, y_train_t, sample_weight=w_train_t) pd.DataFrame({'coefficient': np.concatenate(([t_glm1.intercept_], t_glm1.coef_))}, index=['intercept'] + t_glm1.feature_names_).T [15]: intercept VehBrand__B1 VehBrand__B10 VehBrand__B11 VehBrand__B12 VehBrand__B13 VehBrand__B14 VehBrand__B2 VehBrand__B3 VehBrand__B4 ... VehAge__1 VehAge__2 VehPower__4 VehPower__5 VehPower__6 VehPower__7 VehPower__8 VehPower__9 BonusMalus Density coefficient 2.88667 -0.064157 0.0 0.231868 -0.211061 0.054979 -0.270346 -0.071453 0.00291 0.059324 ... 0.008117 -0.229906 -0.111796 -0.123388 0.060757 0.005179 -0.021832 0.208158 0.032508 0.000002 1 rows × 60 columns Again, we use the distribution’s deviance to measure model performance [16]: print('training loss s_glm1: {}'.format( TweedieDist.deviance(y_train_t, t_glm1.predict(X_train_t), sample_weight=w_train_t)/np.sum(w_train_t))) print('testing loss s_glm1: {}'.format( TweedieDist.deviance(y_test_t, t_glm1.predict(X_test_t), sample_weight=w_test_t)/np.sum(w_test_t))) training loss s_glm1: 73.91371104577475 testing loss s_glm1: 72.35318912371723 Finally, we again show the total predicted vs. true claim amount on the training and test set [17]: #Put together freq * sev together print("Total claim amount on train set, observed = {}, predicted = {}". 
format(df['ClaimAmountCut'].values[train].sum(), np.sum(df['Exposure'].values[train] * t_glm1.predict(X_train_p))) ) print("Total claim amount on test set, observed = {}, predicted = {}". format(df['ClaimAmountCut'].values[test].sum(), np.sum(df['Exposure'].values[test] * t_glm1.predict(X_test_p))) ) Total claim amount on train set, observed = 44594644.68, predicted = 45027861.66007367 Total claim amount on test set, observed = 4707551.37, predicted = 4999381.03386664 In terms of the combined proximity to the true total claim amounts, the frequency-severity model performed a bit better than the Tweedie model. However, both approaches ultimately prove to be effective.
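For a rough head-to-head comparison of the two approaches on individual policies, one can also score both pure-premium predictions on the held-out data with the exposure-weighted mean absolute error. This is a sketch added here (not part of the original notebook); it assumes the objects fitted above (f_glm1, s_glm1, t_glm1, the test split, and X_test_p / X_test_t) are still in memory.

```python
# Observed pure premium (claim amount per exposure unit) and exposures on the test set
pure_premium_obs = df["PurePremium"].values[test]
exposure_test = df["Exposure"].values[test]

# Approach 1: frequency model * severity model
pred_freq_sev = f_glm1.predict(X_test_p) * s_glm1.predict(X_test_p)

# Approach 2: direct Tweedie model
pred_tweedie = t_glm1.predict(X_test_t)

print("MAE, frequency x severity:",
      mean_absolute_error(pure_premium_obs, pred_freq_sev, sample_weight=exposure_test))
print("MAE, Tweedie:             ",
      mean_absolute_error(pure_premium_obs, pred_tweedie, sample_weight=exposure_test))
```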
https://delong.typepad.com/sdj/2007/05/avoiding_weimar.html
## Avoiding Weimar Russia

Matthew Yglesias writes: Matthew Yglesias: Beyond Economics: Over at Brad DeLong's site you can see a fascinating discussion of America's Russia policy in the 1990s between DeLong, Martin Wolf, and Lawrence Summers. One remark I would make is that to an extraordinary extent, all three participants are willing to accept the premise that the only goal of US policy toward Russia in the 1990s was a good-faith effort to induce Russian prosperity, with such efforts being hampered by political constraints, the objective difficulty of the task, and pure policy errors...

Well, yes. Russia was once a superpower and may be one again. One would have thought that the history of 1914-1945 would teach ample lessons about the national security undesirability of trying to keep great powers--like Weimar Germany--poor and weak. One would have thought that the history of 1945-1990 would teach ample lessons about the national security desirability of trying to help great powers--like Japan and West Germany--become prosperous, democratic, and well-integrated into the world economy.

On top of the national-security strategic argument there is the economic argument: the fact that richer trading partners are better trading partners: they make more and more interesting stuff for us to buy.

Plus there is the moral argument. "Russia" is not a government. "Russia" is people, families of people--people dead, living, and unborn. Those of us alive today in western Europe, North America, and elsewhere are weighted down by a heavy burden. We owe an enormous debt to many Russians who are now dead: the soldiers of the Red Army, the peasants who grew the food that fed them, and the workers of Magnitogorsk and elsewhere who built the T-34 tanks they drove; together they saved us from the Nazis. We are all under the enormous obligation created by this debt to repay it forward, and Russia's living and unborn would be appropriate recipients for this repayment.

Last, there is the credibility argument. The people of the United States, the nation of the United States, and the government of the United States will have a much easier and happier time if they are and are perceived to be a people, nation, and government that plays positive-sum games of mutual aid and prosperity and resorts to negative-sum games of encirclement, sabotage, and war only when the necessity is dire. And the necessity now is not dire.

Compared to these four mighty, weighty, and heavy reasons to make the only appropriate goal of U.S. policy a good-faith effort to induce prosperity in Russia, the prospect of a minor advantage in some penny-ante Bismarckian-Metternichian-Talleyrandish-Kissingerian game of diplomatic realpolitik is lighter than a small chickenhawk feather. But Matthew Yglesias does not see it that way: In the real world... policymakers and presidents -- though perhaps not Treasury Department economists like Summers -- concern themselves with questions of power politics. A prosperous Russia was seen as.... not nearly so good as a Russia... willing to concede to the United States an equal (or even greater than equal) share of influence in Russia's "near abroad." This is a big part of the story of the relatively uncritical backing the Clinton administration provided to Boris Yeltsin...

Not inside the Treasury it isn't. Inside the Treasury the belief is that a Russia that is properly assertive would be much better in the long run than if reformers were to be seen as beholden to foreigners who want a weak Russia.
That was, after all, the card that Hitler and company played against Rathenau and Stresemann in the 1920s. Sigh. If only Matthew Yglesias had been an economics rather than a philosophy major. But at least he wasn't an international relations major.

UPDATE: Matthew Yglesias responds: Matthew Yglesias: I think Brad DeLong and I are talking at cross purposes with regard to Russia policy in the 1990s. I agree with him as to what the goal of America's policy should have been. In his earlier post, though, Brad was writing about why our policy didn't achieve those results, and all I'm trying to say is that we should consider the possibility that we didn't achieve what Brad (and I) think we should have achieved because these weren't the actual policy goals the Clinton administration was pursuing. They may well have been the Treasury Department's goals (it seems to me that economists generally have sound foreign policy views) but the Treasury Department doesn't ultimately set policy toward major countries like Russia.

I saw Clinton in action: Clinton felt that he might have turned into Yeltsin had he been born in the Soviet Union, empathized with Yeltsin, and was willing to cut him enormous slack. I didn't see Talbott in action doing anything other than agreeing with Clinton, but I presumed that Talbott had talked to Clinton privately beforehand, and it was extremely rare for anybody to do anything other than agree with the president in any meeting large enough for me to be a part of it. I saw Congress in action, and they were unsympathetic to the argument that $10 billion in aid now might well save us $500 billion in military spending in a decade. And I saw the Treasury.
2020-10-30 04:07:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17301617562770844, "perplexity": 3754.2746024554817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107907213.64/warc/CC-MAIN-20201030033658-20201030063658-00494.warc.gz"}
http://mahalonottrash.blogspot.com/2013/02/lake-front-property-expansive-view-of.html
### Lake-front property, expansive view of faint, red sun that never sets

From right to left: Courtney Dressing, Dave Charbonneau and yours truly at the live CfA press conference. Photo by Kris Snibbe/Harvard Staff Photographer

Yesterday, I had the honor of participating in a live press conference at the Harvard Center for Astrophysics (CfA). The event was to announce new findings by third-year graduate student Courtney Dressing and her advisor Dave Charbonneau, who studied the occurrence of planets around M dwarfs in the Kepler field.

Dear Sara Seager, check it out! A woman with not only a big exoplanet press announcement, but a HUGE exoplanet press announcement! (But, yes, we need more). Sound familiar? If so, it's because Jon Swift and I made a related announcement last month at the AAS meeting. But while we focused on the bulk occurrence rate, finding 1.0 +/- 0.1 planets per M dwarf, Courtney focused on Earth-like planets. By Earth-like she means, "planets the size of the Earth that receive a similar amount of sun light as our planet." (As an aside, Jon and I were very much relieved and excited that Courtney's statistical analysis matched our result on the bulk occurrence rate.)

Her big results are:
• 6% of M dwarfs (=red dwarfs) have Earth-like planets.
• This means that there are at least 6 billion Earth-like planets in the Galaxy since M dwarfs comprise 7 out of 10 of the Milky Way's stars
• The nearest Earth-like planet is around an M dwarf within 13 light years of the Earth. Which one? We don't know...yet. We need to start searching, like, yesterday IMO.
• At 95% confidence, there is a transiting Earth-like planet around an M dwarf within 100 light years.

Here's the CfA press release. Here's a preprint of Courtney's paper, which will very soon be accepted by ApJ (referee report was positive and has been responded to).

Slide from Courtney Dressing's press announcement showing the amount of "sun light" received by the planets around Kepler's red dwarfs. The locations of Mars, Earth, Venus and Mercury are shown along the top. Three of the planets around Kepler's red dwarfs are squarely in the "goldilocks zone".

Having done several of these types of press conferences over the past couple of years, I've started recognizing a pattern in the Q&A with the press. It goes a little something like this:

Astronomers: 6% of M dwarfs have Earth-sized planets in the HZ! (out of breath from all the hard work)

Reporters: But come on, can life really emerge on planets around M dwarfs?! What about flares and tidal locking and bears, oh my? (Ed. note: Okay, I added that third problem)

Astronomers: Ummmm...did we mention all the Earth-sized planets we found with temperate equilibrium temperatures?

First, I'll admit that it's the fault of astronomers for playing it fast and loose with the term "habitable" in reference to the locations of certain planets around other stars. The habitable zone is an extremely idealized concept referring to the region around stars where the incident sun light results in planetary temperatures that would be like the Earth's. But this is under the assumption that the planet has an Earth-like orbit (low eccentricity), an Earth-like atmosphere (albedo), and a nice solid surface where liquid water could pool into lakes and oceans and the like. So reporters are correct to be skeptical. Thus, when an astronomer says "habitable zone," there's no reason to conclude that the planet is inhabited, or that it even could be inhabited (despite what some astronomers believe).
Instead, when you hear the term you should think "possible location around the star where, if a myriad set of conditions are just right, a planet could have liquid water on the surface." Habitable zone is just much easier to say. Also, the habitable zone is something that is easy to calculate based on the parameters of planets discovered by various techniques. We bag 'em, the astrobiologists tag 'em...er...to help us understand whether they could truly be habitable. So my first point is for the astronomers. We need to be more nuanced when tossing around notions of habitability. My second point is to the reporters. The question "Are these planets truly habitable" is pretty much impossible to answer right now. Why? Because we don't even know the conditions for habitability on our own planet! Here's a long, yet incomplete list of factors/questions that may or may not be important for the emergence of life on Earth: • Our Moon maintains the Earth's moderate obliquity (axial tilt). Mars undergoes large obliquity swings because it has no moon, which wreaks havoc with its weather • We have plate tectonics to maintain a carbon-silicate cycle, which keeps CO2 in a stable equilibrium. Maybe. We think. • If plate tectonics are necessary, is the high water content of Earth's mantle necessary for plate tectonics? • If the Earth formed "dry" then how was water delivered? • Do we need a large ocean to maintain thermal inertia? • Is it important that we have just the right amount of water so as to not cover all landforms? • Is dry land necessary? • Is Jupiter a friend or foe? Does it hoover up comets or toss asteroids in? • Why do we have a hydrogen-poor atmosphere? • Is water the only suitable solvent for life? • Is it important that we lack a close stellar binary companion despite ~50% binarity of stars Galaxy-wide? • Do we need an especially "calm" sun? • Do we need a low eccentricity? • Earth is not too large as to have ended up as a mini-Neptune • Earth is not too small to end up like Mars with high atmospheric escape • What about Milankovitch cycles? • Do we need our nickel-iron core for magnetic field generation? This is just a partial list that I was able to come up with while Google chatting with Prof. Jason Wright. What did we forget? Andrew Howard said… Great post and congratulations especially to Courtney! A word of caution about over-restricting the habitable zone with "rare-Earth" reasoning (requiring too many specific characteristics of the Earth). On this point I particularly like excerpt below from Chyba & Hand, 2005, Annual Reviews of Astronomy & Astrophysics, 43, 31 "A second example of “rare-Earth” reasoning concerns conclusions drawn from the important discovery of the obliquity-stabilizing effect of Earth’s Moon (Laskar & Robutel 1993; Laskar, Joutel & Robutel 1993). The inference is made (Ward & Brownlee 2000, Gonzalez & Richards 2004) that complex life must therefore be rare, on the grounds of the assertion that Earths with Moon-size satellites must be rare, and that in the Moon’s absence wild obliquity fluctuations would occur that would render the environment too inconstant for the evolution of complex or intelligent life. [There is now observational evidence that large planetesimal collisions in other solar systems are common at the end of planetary accretion (Rieke et al. 2005), but, of course, there are currently no statistical data about the frequency or nature of planet–moon combinations that may result.] 
But again one must ask what Earth may have been like had the Moon never formed—not what the Earth would look like if today one somehow plucked away the Moon. Laskar & Robutel (1993) show that Moonless Earths rotating with periods <12 hr may be stable against chaotic obliquity fluctuations for a large range of obliquity angles. Of course the current Earth’s period is 24 hr, so if we pluck away the Moon today chaos sets in. But if the Moon had never formed, what would Earth’s rotational velocity have been? A simple angular momentum conservation calculation shows that if one tidally evolves the lunar orbit back in time from its current position at 60 R⊕ to an early orbit at 10 R⊕, Earth’s day would have been about 7 hr long, giving an Earth likely stable against chaotic obliquity fluctuations. Touma’s (2000) simulations of the Earth–Moon system take Earth’s initial rotation period to be 5.0 hr, with the Moon at 3.5 R⊕. Of course, this does not demonstrate that Earth’s rotational period would have been this short had the Moon never formed; it is difficult to estimate Earth’s primordial rotation in the absence of the putative Moon-forming impact [see Lissauer, Dones & Ohtsuki (2000) for a discussion of the issues]. But it shows the arbitrary nature of reaching conclusions about Earth’s rarity by plucking away the Moon today, rather than, say, shortly after lunar formation." This comment has been removed by a blog administrator. Sarah Rugheimer said… It is important to distinguish between what is habitable for complex versus microbial life. For Earth-like life at least, those two conditions are very different and it’s difficult to say how evolution would adapt to different conditions. Most of the factors in this list wouldn't be relevant for microbial life even on Earth if those things were changed today. The moon - may not be a deal breaker as Andrew points out. Jupiter as you mentioned is probably neutral since it both protects us and throws stuff in. Size of the planet matters in that we assume currently you need a solid surface. Plate tectonics – probably useful to have a cycle for long term climate stability, but life could arise for the some time without it since we have evidence for life very quickly after Earth cooled. Norm Sleep has many papers on this and here is a great conversation he has about these things, including land fraction coverage and habitability in general (http://astrobiology.arc.nasa.gov/palebluedot/discussions/session2/sleep/default.html & https://pangea.stanford.edu/departments/geophysics/nspapers.html). Ray Pierrehumbert estimates as long as there is 10% surface fraction of water you will have similar climate and climate cycling as Earth. A recent paper by Abbot et al. (2012) also claims that the surface fraction of water doesn't have a large effect on habitability. Activity of star - if the life is under water or ground this doesn't matter at all, and it's unclear whether it would be harmful since there are examples even of animal life on Earth which have high radiation tolerances. Is water the only solvent - Steve Benner would say no (Benner et al., 2004), though water is very abundant compared to some of the other proposed solvents! Low eccentricity - depends on how much time it spends in the HZ (Dressing et al. 2010) and probably extremophiles would do better than complex life. Magnetic field - probably helpful, but less important for life sheltered under water or a layer of soil. 
Hydrogen in the atmosphere I've not really heard of as being relevant for life other than extending the habitable zone outwards. Binaries - I think the main problem is stability of orbits but if the binary is wide enough this isn't an issue (Eggl 2012, Kaltenegger & Haghighipour 2013). In the end I think you hit on a very important point. Just because a planet is habitable doesn't mean that the planet is 100% likely to have life. Like you said, we just don't know until we have more information about the planetary context. The only way we'll begin to answer these questions is by detecting biomarkers in the atmospheres of a variety (or lack thereof) of planets and exploring other habitable environments up close in our own solar system like on Titan, Mars, Europa and Enceladus. It's also useful I think to note that this notion of a habitable zone around other stars is only relevant for remote detectability of features in the atmosphere. Europa in our own solar system is a prime example of a habitable environment that we would never detect in another star system since there is no interaction between the life and that atmosphere. Furthermore, life built on a different biochemistry would have different signatures that currently are hard to predict and unambiguously distinguish as coming from life. So the HZ concept doesn't mean that's the only place life could be, just that since we know life on Earth uses liquid water, it's the best first place to start. Even Earth-type life could thrive in protected environments far outside the traditional HZ such as in Europa. One thing that you didn't mention on your list but could be important and is observable is the C/O ratio. If there is more carbon than oxygen the O would be taken up by CO and CO2 and then there would be none left to form silicates. SiC would take the role of silicates and they are very durable and unlikely to weather, making a climate cycle unlikely (Kuchner & Seager 2005). Those are just some of my thoughts! Great post! :)
2018-06-25 13:27:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3790951073169708, "perplexity": 2019.1395041099652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867885.75/warc/CC-MAIN-20180625131117-20180625151117-00060.warc.gz"}
https://www.physicsforums.com/threads/proof-of-volume-of-a-ball.383397/
Proof of Volume of a ball

dhlee528

Homework Statement
http://staff.washington.edu/dhlee528/003.JPG [Broken]

Homework Equations
$$x = r \sin(\phi)\cos(\theta)$$ $$y = r \sin(\phi)\sin(\theta)$$ $$z = r \cos(\phi)$$

The Attempt at a Solution
$$vol=8 \int_0^\frac{\pi}{2}\int_0^\frac{\pi}{2}\int_0^r \rho^2 \sin(\phi)d\rho d\theta d\phi$$ $$8 \int_0^\frac{\pi}{2}\int_0^\frac{\pi}{2} \sin(\phi)(\frac{\rho^3}{3}){|}_0^r d\theta d\phi$$ $$\frac{4r^3 \pi}{3}\int_0^\frac{\pi}{2}\sin(\phi)d\phi$$ $$-\frac{4r^3\pi}{3}[0-1]=\frac{4\pi r^3}{3}$$

I think I got the spherical-coordinate version right, but I don't know how to do it in rectangular or cylindrical coordinates.

Last edited by a moderator:
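The thread has no posted answer for the other coordinate systems; purely for reference (not taken from the thread), here is the standard setup of the same volume in cylindrical and rectangular coordinates, writing $r$ for the radius of the ball as above and $s$ for the cylindrical radial variable:

$$vol=\int_0^{2\pi}\int_0^{r}\int_{-\sqrt{r^2-s^2}}^{\sqrt{r^2-s^2}} s\,dz\,ds\,d\theta=\int_0^{2\pi}\int_0^{r}2s\sqrt{r^2-s^2}\,ds\,d\theta=2\pi\cdot\frac{2}{3}r^3=\frac{4\pi r^3}{3}$$

$$vol=\int_{-r}^{r}\int_{-\sqrt{r^2-x^2}}^{\sqrt{r^2-x^2}}\int_{-\sqrt{r^2-x^2-y^2}}^{\sqrt{r^2-x^2-y^2}} dz\,dy\,dx,$$

where the rectangular version is usually finished by switching the remaining double integral to polar coordinates.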
2022-09-26 15:51:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6568507552146912, "perplexity": 5867.380718853859}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00105.warc.gz"}
https://captaincalculator.com/math/exponent/exponent-4-calculator/
Exponent 4 Calculator How to Calculate Exponent 4 The exponent 4 of a number is found by multiplying that number by itself 4 times. $\text{number}^{4}=\text{number} \times \text{number} \times \text{number} \times \text{number}$ Example $5^{4} = 5 \times 5 \times 5 \times 5 = 625$
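Not from the original page: the same calculation checked in Python, shown purely as an illustration.

print(5 ** 4)          # exponentiation operator: 625
print(5 * 5 * 5 * 5)   # the same product written out: 625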
2020-01-25 20:23:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7819381356239319, "perplexity": 1540.1663273442157}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251681412.74/warc/CC-MAIN-20200125191854-20200125221854-00389.warc.gz"}
https://soriavazquez.github.io/publication/hss-17/
# Low cost constant round MPC combining BMR and oblivious transfer

### Abstract

In this work, we present two new actively secure, constant round multi-party computation (MPC) protocols with security against all-but-one corruptions. Our protocols both start with an actively secure MPC protocol, which may have linear round complexity in the depth of the circuit, and compile it into a constant round protocol based on garbled circuits, with very low overhead.

1. Our first protocol takes a generic approach using any secret-sharing-based MPC protocol for binary circuits, and a correlated oblivious transfer functionality.

2. Our second protocol builds on secret-sharing-based MPC with information-theoretic MACs. This approach is less flexible, being based on a specific form of MPC, but requires no additional oblivious transfers to compute the garbled circuit.

In both approaches, the underlying secret-sharing-based protocol is only used for one actively secure $F_2$ multiplication per AND gate. An interesting consequence of this is that, with current techniques, constant round MPC for binary circuits is not much more expensive than practical, non-constant round protocols.

We demonstrate the practicality of our second protocol with an implementation, and perform experiments with up to 9 parties securely computing the AES and SHA-256 circuits. Our running times improve upon the best possible performance with previous protocols in this setting by 60 times.

This paper was accepted to the Journal of Cryptology.

Type Publication ASIACRYPT 2017
2021-02-27 07:14:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30279138684272766, "perplexity": 2609.663671227325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358203.43/warc/CC-MAIN-20210227054852-20210227084852-00607.warc.gz"}
http://math.stackexchange.com/questions/100011/finding-the-limit-of-frac1t-sqrt1t-frac1t-as-t-tends-to-0
# Finding the limit of $\frac{1}{t\sqrt{1+t}} - \frac{1}{t}$ as $t$ tends to $0$

$$\lim_{t\rightarrow 0}\left(\frac{1}{t\sqrt{1+t}} - \frac{1}{t}\right)$$

I attempted to combine the two fractions and multiply by the conjugate, and I ended up with: $$\frac{t^2-t^2\sqrt{1+t}}{t^3+t\sqrt{1+t}\,(t\sqrt{1+t})}$$ I couldn't really work out in my head what to do with the last term $t\sqrt{1+t}({t\sqrt{1+t}})$ so I left it like that because I think it works anyways. Everything is mathematically correct up to this point but does not give the answer the book wants yet. What did I do wrong?

-

As $x$ approaches $0$ ?? $x=t$, eh? –  GEdgar Jan 18 '12 at 1:16

Something has gone wrong with your algebra. Can you list out the steps you took in more detail? –  Joe Johnson 126 Jan 18 '12 at 1:22

Perhaps you were trying something like $\dfrac{1}{t\sqrt{1+t}} - \dfrac{1}{t} = \dfrac{1-\sqrt{1+t}}{t\sqrt{1+t}} = \dfrac{1-(1+t)}{t\sqrt{1+t}(1+\sqrt{1+t})} = \dfrac{-1}{\sqrt{1+t}(1+\sqrt{1+t})}$ which has a limit of $\dfrac{-1}{1 \times (1+1)} = -\dfrac{1}{2}$ as $t$ tends to $0$.

Added: If you are unhappy with the first step, try instead $\dfrac{1}{t\sqrt{1+t}} - \dfrac{1}{t} = \dfrac{t-t\sqrt{1+t}}{t^2\sqrt{1+t}} = \dfrac{t^2-t^2(1+t)}{t^3\sqrt{1+t}(1+\sqrt{1+t})} = \dfrac{-t^3}{t^3\sqrt{1+t}(1+\sqrt{1+t})}$ $= \dfrac{-1}{\sqrt{1+t}(1+\sqrt{1+t})}$ to get the same result

-

I think you did that wrong, for the fractions to be combined you have to multiply them by each other's denominators. –  user138246 Jan 18 '12 at 1:16

@Jordan: The common denominator is $t\sqrt{1+t}$. You can do it, as you say, to get $t^2\sqrt{1+t}$. You'll just have an extra factor of $t$ in the numerator. –  Joe Johnson 126 Jan 18 '12 at 1:20

@Jordan Henry used a least common denominator:$${1\over t\sqrt{1+t}}-{1\over t}={1\over t\sqrt{1+t}}-{\sqrt{1+t}\over t\sqrt{1+t} } = { 1-\sqrt{1+t}\over t\sqrt{1+t}}$$ –  David Mitra Jan 18 '12 at 1:21

I am not really following what is happening or how that is a valid operation. The rule I have always heard is that you have to multiply by both the denominators or an lcd, which is logical to me. If I have 1/2 + 1/4 I can make it 2/4 + 1/4 which works out. –  user138246 Jan 18 '12 at 1:24

@Jordan you can multiply by what is necessary to get both denominators the same. e.g., $${1\over 2}+{1\over4}={2\cdot1\over2\cdot 2}+{1\over4 }$$ or $${3\over 6}+ {1\over 15}= {5\cdot 3\over5\cdot6}+{2\cdot1\over 2\cdot15}$$ –  David Mitra Jan 18 '12 at 1:35

Asymptotics: \begin{align} \frac{1}{\sqrt{1+t}} &= (1+t)^{-1/2} = 1 - \frac{1}{2}\;t + o(t) \\ \frac{1}{t\sqrt{1+t}} &= \frac{1}{t} - \frac{1}{2} + o(1) \\ \frac{1}{t\sqrt{1+t}} - \frac{1}{t} &= - \frac{1}{2} + o(1) . \end{align}

-

I don't know what that word means or what happened at all here. –  user138246 Jan 18 '12 at 1:24

The Binomial Theorem says that $(1+t)^{-1/2}=1-\frac12t+o(t)$ where $o$ is little-o. The rest is division and subtraction. –  robjohn Jan 18 '12 at 1:46

+1, Been waiting for limit problems to be squashed just like this for a long time, finally the wait is over! –  Arjang Jan 18 '12 at 2:06

@Jordan: en.wikipedia.org/wiki/Asymptotic_analysis. If you don't know, then ask! –  JavaMan Jan 18 '12 at 4:11

The signs $\sim$ should be $=$. –  Did Jan 18 '12 at 6:42

I'd use a substitution to get rid of the surd.
$$\mathop {\lim }\limits_{t \to 0} -\frac{1}{t}\left( {1 - \frac{1}{{\sqrt {t + 1} }}} \right) =$$ $$\sqrt {t + 1} = u$$ $$\mathop {\lim }\limits_{u \to 1} -\frac{1}{{{u^2} - 1}}\left( {1 - \frac{1}{u}} \right) =$$ $$\mathop {\lim }\limits_{u \to 1} -\frac{1}{{{u^2} - 1}}\left( {\frac{{u - 1}}{u}} \right) =$$ $$\mathop {\lim }\limits_{u \to 1} -\frac{1}{{u + 1}}\left( {\frac{1}{u}} \right) = -\frac{1}{2}$$ - You could also use L'Hopitals rule: First note that $\frac{1}{t\sqrt{1+t}} - \frac{1}{t} = \frac{1-\sqrt{1+t}}{t\sqrt{1+t}}$ L'Hopitals rule is that if: $f(x)=0$ and $g(x)=0$ then $\lim_{t\to x} \frac{f(x)}{g(x)} = \frac{f'(x)}{g'(x)}$ with some provisos that I'll ignore here... In our case • $f(t) = 1 - \sqrt{1+t}$ So $f'(t) = (-1/2)(1+t)^{-1/2}$ and $f'(0)=-1/2$. • $g(t) = t\sqrt{1+t}$ So $g'(t) = \sqrt{1+t} + (t/2)(1+t)^{-1/2}$ and $g'(0)=1$ So finally we get $f'(0)/g'(0) = -1/2$ as the limit we need. - If the OP knew derivatives, then one could simply interpret the original limit as $f'(0)$, where $f$ is the function $f(t) = \frac{1}{\sqrt{1+t}}-1$. –  JavaMan Jan 18 '12 at 5:27 Let $f:]0,\infty[\to\mathbb{R}$ given by $$f(x)=\frac{1}{\sqrt{x}}.$$ Then $$\frac{1}{t\sqrt{1+t}} - \frac{1}{t}=\frac{f(1+t)-f(1)}{t},$$ so $$\lim_{t\to 0} \frac{1}{t\sqrt{1+t}} - \frac{1}{t}=\lim_{t\to 0} \frac{f(1+t)-f(1)}{t}=f'(1).$$ Since $$f'(x)=-\dfrac{1}{2}\cdot x^{-\frac{3}{2}}$$ in $]0,\infty[,$ we get $$\lim_{t\to 0} \frac{1}{t\sqrt{1+t}} - \frac{1}{t}=\left. -\dfrac{1}{2}\cdot t^{-\frac{3}{2}}\right|_1=-\frac{1}{2}.$$ -
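Not part of the original thread: a quick numerical sanity check (in Python, shown only as an illustration) that the expression really does approach $-\frac{1}{2}$, as all of the answers above conclude.

import math

def f(t):
    # The expression from the question: 1/(t*sqrt(1+t)) - 1/t
    return 1.0 / (t * math.sqrt(1.0 + t)) - 1.0 / t

for t in (1e-1, 1e-3, 1e-5, 1e-7):
    print(t, f(t))   # the printed values approach -0.5 as t -> 0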
2015-04-26 19:31:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9908167719841003, "perplexity": 529.355798743483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246655962.81/warc/CC-MAIN-20150417045735-00155-ip-10-235-10-82.ec2.internal.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/suppose-planet-reported-orbiting-sun-like-stariota-horologii-period-345days-find-radius-pl-q473794
Suppose that a planet was reported to be orbiting the sun-like star Iota Horologii with a period of 345 days. Find the radius of the planet's orbit, assuming that Iota Horologii has twice the mass of the Sun. (This planet is presumably similar to Jupiter, but it may have large, rocky moons that enjoy a pleasant climate.) (Use 2.00 × 10^30 kg for the mass of the Sun.)

I just can't get it right. I changed the days to seconds and multiplied the Sun's mass by two for the mass of Iota Horologii. Then I used the equation I thought was right? Can someone help!?

Practice with similar questions

Q: Suppose that a planet was reported to be orbiting the sun-like star Iota Horologii with a period of 300.0 days. Find the radius of the planet's orbit, assuming that Iota Horologii has the same mass as the Sun. (This planet is presumably similar to Jupiter, but it may have large, rocky moons that enjoy a pleasant climate. Use 2.00 × 10^30 kg for the mass of the Sun.)

A: See answer
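The extract does not include a worked answer. Purely as an illustration (not from the source), here is how one might apply Newton's form of Kepler's third law, a^3 = G M T^2 / (4 pi^2), to the 345-day, twice-solar-mass version of the question in Python; the value of G and the resulting number are my own calculation, not Chegg's.

import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_sun = 2.00e30          # kg, value given in the problem
M_star = 2.0 * M_sun     # Iota Horologii, taken as twice the Sun's mass
T = 345 * 86400.0        # orbital period converted to seconds

# Kepler's third law (Newtonian form): T^2 = 4 pi^2 a^3 / (G M)  =>  a = (G M T^2 / (4 pi^2))^(1/3)
a = (G * M_star * T**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
print(f"orbital radius ~ {a:.2e} m (about {a / 1.496e11:.2f} AU)")   # roughly 1.8e11 m, about 1.2 AU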
2016-07-27 20:44:19
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.857894241809845, "perplexity": 2561.187281406086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827077.13/warc/CC-MAIN-20160723071027-00123-ip-10-185-27-174.ec2.internal.warc.gz"}
https://schmidtynotes.com/web_dev_notes/js/d3/webdev/2019/12/22/d3js-river-flow-api.html
I’m working on a website for a rafting non-profit. I thought it would be cool if they could display the flow data for local rivers. I also thought this would be good time for me to learn more about D3js and the USGS instantaneous flow data API. 900cfs ## Dolores River At Dolores, CO For the design, I need to accomplish a few customizations of a standard line chart. 1. I wanted to use fetch to get the data using the USGS instantaneous flow data API and plot the received data on the fly. 2. The chart needs to be responsive. 3. I wanted to plot an area chart instead of a line chart. 4. I wanted to plot the tick marks inside of the chart instead of in the margins for a nice looking design. I’ll break everythin down below. If you are just here for the JS scripts, here they are. <script src="https://d3js.org/d3.v5.min.js"></script> <script> flowChart(); async function flowChart(){ let waterUrl = "https://nwis.waterservices.usgs.gov/nwis/iv/?format=json&sites=09166500&startDT=2020-04-27&endDT=2020-05-03&parameterCd=00060&siteStatus=all" let timeFormat = d3.timeFormat("%m-%d-%Y %H"); //Call the api const response = await fetch(waterUrl); const jsonData = await response.json(); console.log(jsonData) //Parse the data returned from api let sites = jsonData.value.timeSeries[0]; let riverName = sites.sourceInfo.siteName.toLowerCase().split(/ at | near /); let flowData = sites.values[0].value.map(({dateTime, value})=>({date:new Date(dateTime), value:parseFloat(value)})); //build chart // set the dimensions and margins of the graph let margin = {top: 10, right: 30, bottom: 30, left: 50}, width = 600 - margin.right-margin.left, height = 400 - margin.top - margin.bottom; // append the svg object to the body of the page let svg = d3.select("#my_dataviz") .append("svg") .attr("preserveAspectRatio", "xMinYMin meet") .attr("viewBox", "0 0 " +(width) + " " + (height)) .append("g") .attr("transform", "translate(0 ,0)"); let x = d3.scaleTime() .domain(d3.extent(flowData, function(d){return d.date})) .range([0,width]); svg.append("g") .attr("transform", "translate("+margin.right+","+(height-margin.bottom)+")") .attr("stroke-width", "0") .call(d3.axisBottom(x) .ticks(d3.timeDay.every(1))); let y = d3.scaleLinear() .domain([0, (d3.max(flowData, function(d) { return +d.value; })*1.2)]) .range([height, 0]); svg.append("g") .attr("transform", "translate(40, 0)") .attr("stroke-width", "0") .attr("class", "x-axis") .call(d3.axisLeft(y) .ticks(5)) svg.append("path") .datum(flowData) .attr("fill", "#FF5722") .attr("stroke", 'none') .attr("opacity", "0.45") .attr('d', d3.area() .x(function(d){return x(d.date)}) .y0(y(0)) .y1(function(d){return y(d.value)}) ) } </script> ## Fetching the data and making it usable. The first steps in plotting any chart is getting data. In this case we will be pulling river flow data for 7 days for my home town river, the Dolores River. I used the USGS API generator to generate a URL to pull data for seven days over the summer. https://nwis.waterservices.usgs.gov/nwis/iv/?format=json&sites=09166500&startDT=2019-07-09&endDT=2019-07-16&parameterCd=00060&siteStatus=all There are two ways to use fetch: I prefer calling fetch inside of an asynchronous function. I don’t know why, but this method seems to make more sense to me. 
<script> flowChart(); async function flowChart(){ let waterUrl = "https://nwis.waterservices.usgs.gov/nwis/iv/?format=json&sites=09166500&startDT=2020-04-27&endDT=2020-05-03&parameterCd=00060&siteStatus=all" //Call the api const response = await fetch(waterUrl); const jsonData = await response.json(); console.log(jsonData) } </script> Let’s break this down: 1. flowChart(); calls the async function. 2. async function flowChart(){} sets us up to write a async function called flowChart() which has already been called. 3. let waterUrl assigns the API url to a variable to be used in the next step. 4. const response = await fetch(waterUrl); fetches the data from the API. await is used here to wait until the data has been returned to assign the data to the variable. 5. Similarly const jsonData = await response.json(); waits for the response to be to be converted to json with .json() and then assigned to the variable. The result should be json data that includes the stream flow data that we want to plot in a timeseries along with a bunch of other information that the API provides. Consoled out — console.log(jsonData) — the beginning of the data should look like this: { "name": "ns1:timeSeriesResponseType", "declaredType": "org.cuahsi.waterml.TimeSeriesResponseType", "scope": "javax.xml.bind.JAXBElement\$GlobalScope", "value": { "queryInfo": { "queryURL": "http://nwis.waterservices.usgs.gov/nwis/iv/format=json&sites=09166500&startDT=2019-07-09&endDT=2019-07-16&parameterCd=00060&siteStatus=all", "criteria": { "locationParam": "[ALL:09166500]", "variableParam": "[00060]", "timeParam": { "beginDateTime": "2019-07-09T00:00:00.000", "endDateTime": "2019-07-16T23:59:59.000" }, "parameter": [] }, "note": [ { "value": "[ALL:09166500]", "title": "filter:sites" }, { "value": "[mode=RANGE, modifiedSince=null] interval={INTERVAL[2019-07-09T00:00:00.000-04:00/2019-07-16T23:59:59.000Z]}", "title": "filter:timeRange" }, //....way more json below } } } Next we will parse the incoming data. <script> flowChart(); async function flowChart(){ let waterUrl = "https://nwis.waterservices.usgs.gov/nwis/iv/?format=json&sites=09166500&startDT=2019-07-09&endDT=2019-07-16&parameterCd=00060&siteStatus=all" //Call the api const response = await fetch(waterUrl); const jsonData = await response.json(); console.log(jsonData) //Parse the data returned from api let sites = jsonData.value.timeSeries[0]; let flowData = sites.values[0].value.map(({dateTime, value})=>({date:new Date(dateTime), value:parseFloat(value)})); } </script> 1. The let sites = jsonData.value.timeSeries[0]; first we create a variable site that will be the base for the rest of the parsing. Within the jsonData variable, we go to value, then timeseries[0]. I did this because I may call more than one river at a time for my application. You can skip this step if you want by pasting jsonData.value.timeSeries[0] in place of sites in the next step. 2. The next step we’ll break down. First we parse down to the time series value data sites.values[0].value. Then we use the .map() function to convert the dateTime variable, and the value variable to an array with a date formatted date column and a numerical value column. We assign the result to a flowData function. The result should look like so: let flowData = sites.values[0].value.map(({dateTime, value})=>({date:new Date(dateTime), value:parseFloat(value)})); Now we have our usable data we need to use D3 to chart the data. 
## Making a Responsive Chart let svg = d3.select("#my_dataviz") .append("svg") .attr("preserveAspectRatio", "xMinYMin meet") .attr("viewBox", "0 0 " +(width) + " " + (height)) .append("g") .attr("transform", "translate(0 ,0)"); The key here is many examples give the chart a height and a width. Examples also usually use some fancy javascript to check the height and the width of the window and then reset the size of the chart to make it responsive. A simple way to convert a plain chart to a responsive chart is to set the viewBox attribute — instead of a hard coded height and width — .attr("viewBox", "0 0 " +(width) + " " + (height)) and preserve the aspect ratio .attr("preserveAspectRatio", "xMinYMin meet"). ## Area chart instead of a line chart svg.append("path") .datum(flowData) //some other .attr .attr('d', d3.area() .x(function(d){return x(d.date)}) .y0(y(0)) .y1(function(d){return y(d.value)}) ) To plot an area chart you replace .attr('d', d3.line()) with .attr('d', d3.area()) and provide two y values, one for the upper bound of the area chart and one for the bottom (usually 0), instead of one. The x value stays the same as it would for any line chart. ## Plotting the tick marks inside the chart. This one was tricky for me. For whatever reason I couldn’t figure out how to make the axis have less of a width than the chart. But really that is all you need to do is make the length or width of the axis smaller than the chart. You have to be a little careful though because you want the ticks to line up appropriately with the data. To understand this let’s first look at the base chart. // set the dimensions and margins of the graph let margin = {top: 10, right: 30, bottom: 30, left: 50}, width = 600, height = 400; // append the svg object to the body of the page let svg = d3.select("#my_dataviz") .append("svg") .attr("preserveAspectRatio", "xMinYMin meet") .attr("viewBox", "0 0 " +width + " " + height); As we looked at above we have a svg that is appended to a <div> with a id of #my_dataviz that we set a viewBox attribute on of "0 0" + width + " "+ height + ". Typically, we would set the width and the height to some value minus margins. The margins allow for axis marks outside of the chart. But in this case we want the axis marks to be inside of the chart. So the widths do not subtract the margins. Next we create the x-axis and append that to the svg. // set the dimensions and margins of the graph let margin = {top: 10, right: 30, bottom: 30, left: 50}, width = 600, height = 400; // append the svg object to the body of the page let svg = d3.select("#my_dataviz") .append("svg") .attr("preserveAspectRatio", "xMinYMin meet") .attr("viewBox", "0 0 " +width + " " + height); let x = d3.scaleTime() .domain(d3.extent(flowData, function(d){return d.date})) .range([0,width]); svg.append("g") .attr("transform", "translate(0,"+(height-margin.bottom)+")") .attr("stroke-width", "0") .call(d3.axisBottom(x) .ticks(d3.timeDay.every(1))); We give the x-axis a domain of the flowData, date and a range of the entire width of the chart. We append the axis an <g> element within the svg. We then want to transform the with a .attr to put the axis in place. The difference here from your standard chart is that we need to translate along the y-axis by the height-margin.bottom instead of just the height like you would in a standard plot with the axis below the chart. Subtracting the margin pulls the axis from below the chart (not visible because it is outside of the svg) to within the chart. 
The last step is to plot the y-axis. //build chart // set the dimensions and margins of the graph let margin = {top: 10, right: 40, bottom: 30, left: 50}, width = 600, height = 400; // append the svg object to the body of the page let svg = d3.select("#my_dataviz") .append("svg") .attr("preserveAspectRatio", "xMinYMin meet") .attr("viewBox", "0 0 " +width + " " + height) let x = d3.scaleTime() .domain(d3.extent(flowData, function(d){return d.date})) .range([0,width]); svg.append("g") .attr("transform", "translate(0,"+(height-margin.bottom)+")") .attr("stroke-width", "0") .attr("class", "x-axis") .call(d3.axisBottom(x) .ticks(d3.timeDay.every(1))); let y = d3.scaleLinear() .domain([0, (d3.max(flowData, function(d) { return +d.value; })*1.2)]) .range([height, 0]); svg.append("g") .attr("transform", "translate(" + margin.right + ", 0)") .attr("stroke-width", "0") .attr("class", "y-axis") .call(d3.axisLeft(y) .ticks(5)) This time we will use d3.scaleLinear because the actual flow volumes are continuous. Domain and Range are similar to above, but I multiply the max of the values by 1.2 because I want some space within the plot for a title. After we append the element we translate by margin.right, to move the axis within the chart. ## The HTML and CSS: The rest of the chart is completed by css and some html. Some imortant things happen here. We hide some of the axis marks because having them inside the chart creates overlap between the x and y-axis. We also style the associated info. Ideally the html for the info would be automatically generated by the chart, but that is a bit much form one tutorial. The non JS stuff looks like so: <script src="https://d3js.org/d3.v5.min.js"></script> <style> .tick line{ visibility:hidden; } .x-axis g:first-of-type{ visibility:hidden; } .y-axis g:first-of-type{ visibility:hidden; } .container{ background:#efefef; position:relative; margin-bottom: 25px; } .container svg{ font-size:12px; font-weight:300; color:#666666; } .chart-text{ position: absolute; width: 100%; margin-top:40px; } .chart-text p, .chart-text h2{ position:relative; width: 100%; text-align:center; } .chart-text p:first-of-type{ font-size:50px; color:rgba(255, 87, 34, 0.6); margin-bottom:0; } .chart-text p:first-of-type span{ color:#777777; font-size:18px; } .chart-text h2{ margin-top:0; line-height:0.8; margin-bottom:10px; } .chart-text p:last-of-type{ color:#777777; font-size:20px; } </style> <div class="container"> <div class="chart-text"> <p>900<span>cfs</span></p> <h2>Dolores River</h2> <p>At Dolores, CO</p> </div> <div id="my_dataviz" class="vis"></div> </div> <script> flowChart(); async function flowChart(){ let waterUrl = "https://nwis.waterservices.usgs.gov/nwis/iv/?format=json&sites=09166500&startDT=2019-07-09&endDT=2019-07-16&parameterCd=00060&siteStatus=all" let timeFormat = d3.timeFormat("%m-%d-%Y %H"); //Call the api const response = await fetch(waterUrl); const jsonData = await response.json(); console.log(jsonData) //Parse the data returned from api let sites = jsonData.value.timeSeries[0]; let riverName = sites.sourceInfo.siteName.toLowerCase().split(/ at | near /); let flowData = sites.values[0].value.map(({dateTime, value})=>({date:new Date(dateTime), value:parseFloat(value)})); //build chart // set the dimensions and margins of the graph let margin = {top: 10, right: 30, bottom: 30, left: 50}, width = 600, height = 400; // append the svg object to the body of the page let svg = d3.select("#my_dataviz") .append("svg") .attr("preserveAspectRatio", "xMinYMin meet") 
.attr("viewBox", "0 0 " +width + " " + height) let x = d3.scaleTime() .domain(d3.extent(flowData, function(d){return d.date})) .range([0,width]); svg.append("g") .attr("transform", "translate(0,"+(height-margin.bottom)+")") .attr("stroke-width", "0") .attr("class", "x-axis") .call(d3.axisBottom(x) .ticks(d3.timeDay.every(1))); let y = d3.scaleLinear() .domain([0, (d3.max(flowData, function(d) { return +d.value; })*1.2)]) .range([height, 0]); svg.append("g") .attr("transform", "translate(40, 0)") .attr("stroke-width", "0") .attr("class", "y-axis") .call(d3.axisLeft(y) .ticks(5)) svg.append("path") .datum(flowData) .attr("fill", "#FF5722") .attr("stroke", 'none') .attr("opacity", "0.45") .attr('d', d3.area() .x(function(d){return x(d.date)}) .y0(y(0)) .y1(function(d){return y(d.value)}) ) } </script>
2021-10-19 16:00:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35903307795524597, "perplexity": 5262.080928265132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585270.40/warc/CC-MAIN-20211019140046-20211019170046-00049.warc.gz"}
https://quantiki.org/journal-article/tunneling-transform-arxiv14112586v2-physicsgen-ph-updated
# The Tunneling Transform. (arXiv:1411.2586v2 [physics.gen-ph] UPDATED)

We supplement the Lorentz transform $L(v)$ with a new "Tunneling" transform $T(v)$. Application of this new transform to elementary quantum mechanics offers a novel, intuitive insight into the nature of quantum tunneling; in particular, the so-called "Klein Paradox" is discussed.
2019-05-22 22:50:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7238902449607849, "perplexity": 4295.650084933255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256980.46/warc/CC-MAIN-20190522223411-20190523005411-00169.warc.gz"}
http://physics.stackexchange.com/tags/schrodinger-equation/new
# Tag Info

2 Yes, this looks correct, except that the energy of the state $|2\rangle$ in $|\psi(t)\rangle$ should be $E_2$. I also don't think you want hats on the energies $E_2$ and $E_3$. Such hats are usually used in basic quantum mechanics to indicate operators, but $E_i$ are the eigenvalues of the operator $\hat{H}$, and hence just ordinary numbers. Since $\hat{H}$ ...

0 Here is my attempt at an answer, following the suggestion of @Lagerbaer. We first substitute the Fourier transform for $\psi_{LP}(k)$, $$\psi_{LP}(k)=\int dxe^{-ikx}\psi_{LP}(x),$$ and get \begin{multline} \int dxe^{-ikx}i\frac{d}{dt}\psi_{LP}(x)=\int ...

1 When imposing a periodic boundary condition, the amplitude of the wavefunction at coordinate $x$ must match that at coordinate $x+L$, so we have: $$\Psi(x)=\Psi(x+L)$$ In your previous 'particle in a box' scenario, you mention that the general form of the wavefunction is given by a linear combination of sine and cosine with complex coefficients. It might be ...

0 Some broadly applicable background might be in order, since I remember this aspect of quantum mechanics not being stressed enough in most courses. [What follows is very good to know, and very broadly applicable, but may be considered overkill for this particular problem. Caveat lector.] What the OP lays out is exactly the motivation for finding how an ...

1 For a free particle, the energy/momentum eigenstates are of the form $e^{i k x}$. Going over to that basis is essentially doing a Fourier transform. Once you do that, you'll have the wavefunction in the momentum basis. After that, time-evolving that should be simple. Hint: The Fourier transform of a Gaussian is another Gaussian, but the width inverts, in ...

3 Within the superposition of the ground and the first excited state, the wavefunction oscillates between "hump at left" and "hump at right". Maybe you are asked to find the half-period of these oscillations?

2 Yes, I believe you have to think of it as if it were a semiclassical problem; you evaluate with QM the mean square velocity $\left< v^2 \right>$ of the particle, then calculate its square root; this should give you an estimate of the typical velocity of the particle. Once you have it, you divide the length of the well by it and find the time it takes ...

1 In my view, the important question to answer here is a special case of the more general question Given a space $M$, what are the physically allowable wavefunctions for a particle moving on $M$? Aside from issues of smoothness of wavefunctions (which can be tricky; consider the Dirac delta potential well on the real line for example), as far as I can tell ...

2 Your solution is valid. It has zero kinetic energy. It doesn't necessarily have zero energy. It can have any potential energy you'd like. Just because your particle is "freely moving," that doesn't mean the potential is zero. You could have $V(x)=k$ for any constant $k$. The value of $k$ is not observable and has no physical significance. In general there ...

0 1) In general, $\psi(\vec{r},t) = {\sf U}(t,0) \psi(\vec{r},0)$, where ${\sf U}(t,0)$ is the time-evolution operator (a unitary matrix). 2) Given your superposition state at initial time, after time $t$ the wave function would look like $$\psi(r,\theta,\phi,t) = A \left( 2R_{10}Y_{00} e^{-iE_1 t/\hbar} + 4 R_{21}Y_{1,-1} e^{-iE_2 t/\hbar} \right)$$ where ...

0 I am not sure if you are looking for this, but you can define a Lagrangian in such a way that the L-EOM (equation of motion) is the Schrödinger equation.
$\cal{L}=\Psi^{t}(i\frac{\partial}{\partial t}+\nabla^2/2m)\Psi$ $\frac{\partial\cal{L}}{\partial\Psi^t}=0$ The second term of the Lagrange-equation (derivative with respect to $\partial_{\mu}\Psi^t$) is ... 3 I want to elaborate on John Rennie's answer. The Schrodinger equation for a free particle is ($\hbar=1$): $$i\frac{\partial}{\partial t}\psi=-\frac{1}{2m}\frac{\partial^2}{\partial x^2}\psi.$$ It is a first-order differential equation in variable $t$. To solve it, you should specify initial data, say, $\psi(t=0)$. At this point, you should be aware that ... 4 When you solve the Schrodinger equation for a free particle you get a family of solutions of the form $\Psi(x,t) = A e^{i(kx - \omega t)}$ and all superpositions of these functions. So just solving the Schrodinger equation doesn't give you a solution for a specific particle. For that you need to specify the initial conditions. If you take the solution to be ... 0 The problems you have encountered are related to the fact that you are trying to calculate the probability of an unphysical situation. Quantum mechanics can give you the probability of an outcome of some experiment. This wave function does not contain any information (restrictions) concerning the way you are going to measure it and what you are going to ... 3 First of all you should recall that the Schroedinger equation is an eigenvalue equation. If you are unfamiliar with eigenvalue equations you should consult any math book or course as soon as possible. Answer 1 (my apologies, I will use my own notation, as this is mainly copy-paste from my old notes): First define constants x_0 = ... 1 The time-dependent Schrodinger equation is an elliptic PDE if the Hamiltonian is time-independent. 2 The time-independent Schrodinger equation is mainly useful for describing standing waves. It has serious shortcomings when used to describe traveling waves. If you have an example like a constant potential, then there are only traveling-wave solutions, and the time-independent Schrodinger equation may be the wrong tool for the job. Physically, the ... 0 Picking up on your comment that plane waves are not normalisable: Only infinite plane waves are not normalisable, and an infinite plane wave is not physically realistic simply because we can't make (or indeed even observe) infinite objects. Any plane wave we can observe will be finite and therefore normalisable. An infinite plane wave represents an ... 0 In my opinion, if a potential function is physically realistic, the solution of the time-independent Schrodinger equation will have physical meaning. You describe a one-dimensional universe with constant potential; in fact, this kind of potential does not exist. I think the fact that the plane wave cannot be normalized is a reflection of this.
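Several of the answers above describe the same recipe for a free particle: Fourier-transform the initial wave function to the momentum basis, attach the free-particle phase, and transform back, with a Gaussian packet spreading as it evolves. The sketch below is an added illustration in Python, not taken from any of the quoted answers; the units $\hbar = m = 1$ and the grid, width and mean momentum are assumptions made only for this example.

```python
import numpy as np

# Position grid and matching angular-wavenumber grid (values here are arbitrary choices).
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

# Initial Gaussian packet of width sigma and mean momentum k0 (both assumptions).
sigma, k0 = 1.0, 5.0
psi0 = np.exp(-x**2 / (4.0 * sigma**2) + 1j * k0 * x)
psi0 /= np.sqrt(np.trapz(np.abs(psi0)**2, x))

def evolve(psi, t):
    """Free evolution: each momentum component picks up the phase exp(-i k^2 t / 2)."""
    return np.fft.ifft(np.fft.fft(psi) * np.exp(-0.5j * k**2 * t))

psi_t = evolve(psi0, t=2.0)
prob = np.abs(psi_t)**2
mean = np.trapz(x * prob, x)
width = np.sqrt(np.trapz((x - mean)**2 * prob, x))
print("norm:", np.trapz(prob, x))   # stays ~1 (unitary evolution)
print("centre:", mean)              # moves with the group velocity k0
print("width:", width)              # larger than the initial width sigma
```

Running this shows the norm preserved, the centre drifting with the group velocity, and the width growing, which is the "Gaussian spreads but stays Gaussian" behaviour hinted at above.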
2013-05-22 18:37:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.8737637400627136, "perplexity": 233.395667502472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702298845/warc/CC-MAIN-20130516110458-00063-ip-10-60-113-184.ec2.internal.warc.gz"}
https://compmatsci.wordpress.com/2009/03/01/questioning-gibbs-anisotropy-in-phase-field-models-and-solidification-under-magnetic-fields/
## Questioning Gibbs, anisotropy in phase field models and solidification under magnetic fields ### March 1, 2009 A few papers of interest — to be published in Acta and Scripta: A Perovic et al Our observation of the spinodal modulations in gold-50 at% nickel (Au-50Ni) transformed at high temperatures (above 600K) contradicts non-stochastic Cahn theory with its $\approx$500 degree modulation suppression. These modulations are stochastic because simultaneous increase in amplitude and wavelength by diffusion cannot be synchronized. The present theory is framed as a 2nd order differential uphill/downhill diffusion process and has an increasing time-dependent wave number and amplitude favouring Hillert’s one dimensional (1D) prior formulation within the stochastic association of wavelength and amplitude. R S Qin and H K D H Bhadeshia An expression is proposed for the anisotropy of interfacial energy of cubic metals, based on the symmetry of the crystal structure. The associated coefficients can be determined experimentally or assessed using computational methods. Calculations demonstrate an average relative error of <3% in comparison with the embedded-atom data for face-centred cubic metals. For body-centred-cubic metals, the errors are around 7% due to discrepancies at the {3 3 2} and {4 3 3} planes. The coefficients for the {1 0 0}, {1 1 0}, {1 1 1} and {2 1 0} planes are well behaved and can be used to simulate the consequences of interfacial anisotropy. The results have been applied in three-dimensional phase-field modelling of the evolution of crystal shapes, and the outcomes have been compared favourably with equilibrium shapes expected from Wulff’s theorem. X Li et al Thermoelectric magnetic convection (TEMC) at the scale of both the sample (L = 3 mm) and the cell/dendrite (L = 100 μm) was numerically and experimentally examined during the directional solidification of Al–Cu alloy under an axial magnetic field (B ≤ 1 T). Numerical results show that TEMC on the sample scale increases to a maximum when B is of the order of 0.1 T, and then decreases as B increases further. However, at the cellular/dendritic scale, TEMC continues to increase with increasing magnetic field intensity up to a field of 1 T. Experimental results show that application of the magnetic field caused changes in the macroscopic interface shape and the cellular/dendritic morphology (i.e. formation of a protruding interface, decrease in the cellular spacing, and a cellular–dendritic transition). Changes in the macroscopic interface shape and the cellular/dendritic morphology under the magnetic field are in good agreement with the computed velocities of TEMC at the scales of the macroscopic interface and cell/dendrite, respectively. This means that changes in the interface shape and the cellular morphology under a lower magnetic field should be attributed respectively to TEMC on the sample scale and the cell/dendrite scale. Further, by investigating the effect of TEMC on the cellular morphology, it has been proved experimentally that the convection will reduce the cellular spacing and cause a cellular–dendritic transition.
2017-08-18 08:51:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5768144726753235, "perplexity": 1922.6815414251168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104631.25/warc/CC-MAIN-20170818082911-20170818102911-00131.warc.gz"}
http://math.stackexchange.com/questions/228112/how-to-show-that-closed-subset-of-mathbbr-is-not-compact-if-restricted-to
# How to show that closed subset of $\mathbb{R}$ is not compact if restricted to $\mathbb{Q}$ Basically I need to show that $\mathbb{R}\cap[0,1]\cap\mathbb{Q}$ is not compact. I was looking at some posts on this topic, and all that I found used the finite subcover definition of a compact set. I wonder if it could be done this way: A compact set is closed and bounded. So showing that the set is not closed would be enough to see that it's not compact. To show that this set is not closed I could choose any irrational number in the interval $[0,1]$ and construct a sequence of rationals that converges to it. So it would be a limit point of the set $\mathbb{R}\cap[0,1]\cap\mathbb{Q}$ that is not contained in it. - your approach of showing that the set is not closed is right – La Belle Noiseuse Nov 3 '12 at 12:31 thank you @Flute – Mykolas Nov 3 '12 at 12:33 Your title asks to show a closed set is not compact, but then the set you are trying to prove is not compact is not closed. – Thomas Andrews Nov 3 '12 at 12:49 @Thomas Andrews thank you. Ya that was kind of strange, :) – Mykolas Nov 3 '12 at 13:52 To show that $\mathbb{Q} \cap [0,1]$ is not closed, it is sufficient to construct a sequence of rational numbers converging to an irrational one. See here for example. $F=\mathbb{Q} \cap [0,1]$ is dense in $[0,1]$; so if $F$ is compact, it is closed whence $\mathbb{Q} \cap [0,1]= [0,1]$. However, $[0,1]$ contains irrational numbers.
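As a concrete illustration of the sequence used in the answers (this snippet is an addition, not part of the original question or answers): the decimal truncations of the irrational number $\sqrt{2}/2 \approx 0.7071$ are rational points of $[0,1]$ converging to it.

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 30
alpha = Decimal(2).sqrt() / 2            # irrational limit point in [0, 1]

for n in (1, 2, 4, 8, 16):
    q = Fraction(int(alpha * 10**n), 10**n)   # rational truncation with n decimal digits
    err = abs(alpha - Decimal(q.numerator) / Decimal(q.denominator))
    print(f"q_{n} = {q},  |alpha - q_{n}| = {err}")
```

Each `q_n` lies in $\mathbb{Q}\cap[0,1]$ and the error shrinks by orders of magnitude, so the limit point $\alpha$ is outside the set, exactly as the non-closedness argument requires.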
2016-07-02 09:52:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8464019298553467, "perplexity": 130.97705550101824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396222.11/warc/CC-MAIN-20160624154956-00121-ip-10-164-35-72.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/99341/is-the-power-of-a-regular-language-regular-is-the-root-of-a-regular-language-re
# Is the power of a regular language regular? Is the root of a regular language regular? If $$A$$ is a regular set, then: $$L_1=\{x\mid\exists n \geq0, \exists y \in A: y=x^n\}$$, $$L_2=\{x\mid \exists n \geq0, \exists y\in A: x=y^n\}$$. Which one of them is regular? My reasoning is since in $$L_2$$ we can have uncountable $$x$$ from even one value of $$y\ (y^0, y^1, y^2,...),\ L_2$$ cannot be regular. But that thinking seems wrong. • $y^0, y^1, y^2,...$ is countable (and infinite if $y$ is not the empty word.) – Apass.Jack Oct 30 '18 at 22:07 The language $$L_2$$ is not necessarily regular. Indeed, consider $$A = a^*b$$. If $$L_2$$ were regular, then so would the following language be: $$L_2 \cap a^*ba^*b = \{ a^nba^nb : n \geq 0 \}.$$ However, this language is not regular (exercise). In contrast, the language $$L_1$$ is regular. We can see this by constructing a DFA for it. Let the DFA for $$A$$ have states $$Q$$, initial state $$q_0$$, accepting states $$F$$, and transition function $$\delta$$. The states of the new DFA are labeled by functions $$Q \to Q$$. The idea is that the new DFA is at state $$f\colon Q \to Q$$ if the original DFA moves from state $$q \in Q$$ to state $$f(q)$$ after reading $$w$$ (i.e., if $$\delta(q,w) = f(q)$$ for all $$q \in Q$$). The initial state is the identity function. When at state $$f$$ and reading $$\sigma$$, we move to the state $$g$$ given by $$g(q) = \delta(f(q),\sigma)$$. A state is accepting if $$f^{(n)}(q_0) \in F$$ for some $$n \geq 0$$. • In both these sentences, it is unclear what $q$ is: "The idea is that the DFA is at state f after reading a word w if $\delta(q,w)=f(q)$. When at state $f$ and reading $\sigma$, we move to the state $g$ given by $g(q) =\delta(f(q),\sigma)$." – Eugen Oct 31 '18 at 8:41 • Beautiful proof! – Apass.Jack Oct 31 '18 at 8:45 • @Eugen $q$ is an arbitrary state. It is the argument to $f$ or $g$. – Yuval Filmus Oct 31 '18 at 8:51 $$L_1$$ is regular. Let $$M=(Q,\Sigma,\delta,q_0,F)$$ be a DFA recognizing $$A$$, and we denote by $$M(s)$$ the state $$M$$ finally reaches after reading the string $$s$$. Consider some $$x\in L_1$$, and let $$n$$ be the smallest one such that $$M$$ accepts $$x^n$$. We have $$M(x^n)\in F$$, and $$M(x^0),M(x^1),\ldots,M(x^{n-1})\notin F$$ (otherwise we can choose a smaller $$n$$ instead). Moreover, $$M(x^0),M(x^1),\ldots,M(x^{n-1})$$ must be pairwise different, otherwise $$M$$ will never reach $$M(x^n)$$, hence we have $$n\le |Q|$$. This means we can rewrite $$L_1$$ as $$L_1=\bigcup_{n=0}^{|Q|}\{x\mid x^n\in A\}.$$ We only need to prove $$\{x\mid x^n\in A\}$$ is regular for all $$n$$ because the union of finitely many regular languages is still a regular language. We prove this claim by mathematical induction. For $$n=0,1$$, this is trivial. Suppose $$\{x\mid x^n\in A\}$$ is regular for some $$n\ge 1$$. Denote by $$M_q=(Q,\Sigma,\delta,q,F)$$ the DFA obtained by changing the start state of $$M$$ to $$q$$. We have \begin{align} \{x\mid x^{n+1}\in A\}&=\bigcup_{q\in Q}\{x\mid M(x)=q \wedge M_q(x^{n})\in F\}\\ &=\bigcup_{q\in Q}\left(\{x\mid M(x)=q \}\cap\{x\mid M_q(x^{n})\in F\}\right). \end{align} Since $$\{x\mid M(x)=q \}$$ and $$\{x\mid M_q(x^{n})\in F\}$$ (by inductive assumption) are both regular languages for all $$q\in Q$$, $$\{x\mid x^{n+1}\in A\}$$ is also a regular language. Q.E.D. $$L_2$$ is not regular. Let $$A$$ be the language expressed by the regular expression $$0^*1$$, then $$L_2$$ is not regular by the pumping lemma.
• For $L_1$, if I take $A$ as $0^* 1$ and suppose $y_1=001, y_2=0001$, then is it correct to say $y_1=(001)^1 \rightarrow x=001$, and $y_2=(0001)^1 \rightarrow x=0001$? – Adnan Oct 31 '18 at 14:32 • @Adnan I don't get your point. What are $y_1$ and $y_2$ and what is $x$? – xskxzr Oct 31 '18 at 15:14 • I'm assuming $A$ as $0^* 1$, $y_1$ and $y_2$ are strings of $A$, and $x$ is the string derived via the given mapping for $L_1$. – Adnan Oct 31 '18 at 15:18 • @Adnan If I understand you correctly, then yes, of course. But I don't know why you ask such a question, do you have any doubt about the definition of $L_1$? – xskxzr Oct 31 '18 at 18:45 • No doubts, just trying to present an example to confirm my understanding of the method. – Adnan Nov 4 '18 at 20:32 I used the following reasoning, but it has a flaw, see comment below: $$L_2$$ is regular. Since $$A$$ is regular, there is a regular expression $$e$$ such that $$A=L(e)$$. It is easy to see that $$L_2=L(e^*)$$. • That would mean that $L_2$ is equal to $A^*$, and I do not think that is correct. $L_2$ considers powers of the same word. If $A=\{a,b\}$ then $L_2 = a^*+b^*$ which differs from $\{a,b\}^*$. – Hendrik Jan Oct 31 '18 at 1:10
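To make the accepted construction for $L_1$ concrete, here is a small sketch added here (not from the thread); the hard-coded DFA below is assumed to recognise $A = a^*b$. Read $x$ once to obtain the map $f(q)=\delta(q,x)$, then follow $q_0, f(q_0), f(f(q_0)),\ldots$ and accept if an accepting state is ever reached; at most $|Q|$ steps are needed, matching the bound from the second answer.

```python
# DFA for A = a*b: state 0 is the start state, state 1 is accepting, state 2 is dead.
DELTA = {0: {'a': 0, 'b': 1},
         1: {'a': 2, 'b': 2},
         2: {'a': 2, 'b': 2}}
START, ACCEPT = 0, {1}

def in_L1(x: str) -> bool:
    # f[q] = state reached from q after reading x once
    f = {q: q for q in DELTA}
    for ch in x:
        f = {q: DELTA[f[q]][ch] for q in DELTA}
    # follow q0 -> f(q0) -> f(f(q0)) -> ...; a hit means some power x^n lies in A
    q = START
    for _ in range(len(DELTA) + 1):
        if q in ACCEPT:
            return True
        q = f[q]
    return False

print(in_L1("aab"))   # True: (aab)^1 is in a*b
print(in_L1("a"))     # False: no power of "a" ends in b
print(in_L1(""))      # False here, since the empty word is not in a*b
```

The dictionary `f` plays the role of the function-valued state $Q \to Q$ in the accepted answer; iterating it from the start state checks whether $f^{(n)}(q_0)\in F$ for some $n\geq 0$.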
2019-06-19 16:53:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 70, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9246323108673096, "perplexity": 209.9407680691479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999003.64/warc/CC-MAIN-20190619163847-20190619185847-00155.warc.gz"}
http://mathhelpforum.com/algebra/87752-reduction-relationship-linear-law.html
# Math Help - reduction of a relationship to a linear law

1. ## reduction of a relationship to a linear law

$y = k(x+1)^n$

Find approximate values for k and n given that

x | 4 | 8 | 15
y | 4.45 | 4.60 | 4.8

2. ## Logs

Hello scouser

Originally Posted by scouser

$y = k(x+1)^n$

Find approximate values for k and n given that

x | 4 | 8 | 15
y | 4.45 | 4.60 | 4.8

Take logs of both sides:

$\log y = \log\Big(k(x+1)^n\Big)$

$= \log k +n\log(x+1)$

Plot the graph of $\log y$ against $\log(x+1)$, using the three pairs of values you're given. Draw the best straight line and read off the gradient and intercept.

Gradient = $n \approx 0.065$

Intercept = $\log k$, which gives $k \approx 4$
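A quick numerical check of the same reduction (this snippet is an addition to the thread, using NumPy): fitting $\log y = \log k + n\log(x+1)$ to the three data points by least squares reproduces the gradient and intercept read off the graph.

```python
import numpy as np

x = np.array([4.0, 8.0, 15.0])
y = np.array([4.45, 4.60, 4.8])

# Fit a straight line to log(y) versus log(x+1): slope = n, intercept = log(k).
n, logk = np.polyfit(np.log(x + 1), np.log(y), 1)
print("n ≈", round(n, 3), "  k ≈", round(float(np.exp(logk)), 2))   # n ≈ 0.065, k ≈ 4.0
```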
2016-07-30 08:41:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.50215744972229, "perplexity": 2159.0624057537757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257832942.23/warc/CC-MAIN-20160723071032-00070-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.esn-analysis.net/wiki/thread_creation_ratio.html
The thread creation ratio is a pair of two underlying metrics: (1) the ratio between the number of threads and the total posts, and (2) the ratio between initiated threads and total threads in the network. Smith et al. (2009) call these two metrics Verbosity and Initiation, while Angeletou et al. (2011) write about Thread Initiation Ratios. These metrics were later picked up by Hacker et al. (2015) and Viol et al. (2016). They are of ego-centric scope as they can be calculated for individuals, although the calculation of an average over the whole network is feasible. The calculation of the single thread creation ratio $st$ and the total thread creation ratio $tt$ is straightforward and can be accomplished in one step each: (1): st := select count of threads / count of posts (2): tt := select count of initiated threads / count of all threads Viol et al. (2016) and Hacker et al. (2015) conclude that a high number of threads compared to the number of posts is a sign of information sharing. A user with many threads is informing other users about events or other news. However, Hacker mentions that their analysis result does not support this notion for threads which do not receive any replies. These may be unanswered questions or uninteresting posts. Another notion presented by Viol et al. (2016) is that a high value indicates users who share knowledge and ideas with others, spawning new discussion threads. These discussion threads contribute content and ideas to the network. This fits with the first interpretation that replies are needed in the threads. Hansen et al. (2010) describe such users as discussion starters and Rowe et al. (2013) as expert participants, while Angeletou et al. (2011) speak of popular initiators. Due to their creation of threads, they are usually well known in the network and have high visibility. Users with a low ratio only post occasionally and are unlikely to start their own topics. Smith et al. (2009) claim that more threads are better for the network as they indicate the generation of new ideas and discussions. Social relationships are only formed when other users respond to a thread. Therefore this metric alone is not sufficient to make any claims about Social Capital. However, if a particular thread gathers attention, this indicates a high level of engagement in discussions and the exchange of new ideas. This facilitates Bonding Social Capital as people exchange their thoughts and ideas to form a shared understanding.
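As a sketch of how the two ratios might be computed in practice per user (the toy table and column names below are invented for illustration and assume pandas is available; they are not taken from the cited papers):

```python
import pandas as pd

# One row per post: who wrote it, which thread it belongs to, and whether it opened the thread.
posts = pd.DataFrame({
    "author":          ["ann", "ann", "bob", "bob", "bob", "ann"],
    "thread_id":       [1,      1,     1,     2,     2,     3],
    "is_thread_start": [True,  False, False, True,  False, True],
})

per_user = posts.groupby("author").agg(
    posts=("thread_id", "size"),          # total posts by the user
    threads=("thread_id", "nunique"),     # threads the user posted in
    initiated=("is_thread_start", "sum"), # threads the user opened
)
per_user["st"] = per_user["threads"] / per_user["posts"]       # metric (1)
per_user["tt"] = per_user["initiated"] / per_user["threads"]   # metric (2)
print(per_user)
```

A network-wide average can then be taken over the `st` and `tt` columns, matching the ego-centric-but-averageable scope described above.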
2019-05-22 06:35:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4866809546947479, "perplexity": 1232.8214915402116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256764.75/warc/CC-MAIN-20190522063112-20190522085112-00408.warc.gz"}
https://www.vedantu.com/maths/de-morgans-first-law
# De Morgan's First Law

## What is De Morgan’s First Law?

In algebra, De Morgan's First Law (or first condition) states that the complement of the product of two variables corresponds to the sum of the complements of each variable. In other words, according to De Morgan's first theorem, if ‘A’ and ‘B’ are two variables or Boolean numbers, then $\overline{A \cdot B} = \overline{A} + \overline{B}$. This indicates that the NAND gate function is equivalent to an OR gate function with complemented inputs. Accordingly, the equations are as below:

For the NAND gate, $Y = \overline{A \cdot B}$

For the bubbled OR gate, $Y = \overline{A} + \overline{B}$

### Symbolic representation of De Morgan's First Law Theorem

Since the NAND gate and the bubbled OR gate can be interchanged, i.e., both gates produce identical outputs for the identical set of inputs, the equality can be represented algebraically as $\overline{A \cdot B} = \overline{A} + \overline{B}$. This equation is known as DeMorgan's First Theorem.

### Role of Complementation Bars

Complementation bars act as grouping symbols. Hence, when a bar is broken, the expressions beneath it should remain grouped. Parentheses may be placed around these grouped expressions to avoid changing the precedence.

### Verifying DeMorgan’s First Theorem Using a Truth Table

DeMorgan's First Law states that when two (or more) input variables are ANDed and then negated, the result is equal to the OR of the complements of the separate variables. Hence the NAND function is equivalent to a negative-OR function, verifying that $\overline{A \cdot B} = \overline{A} + \overline{B}$, and we can prove this using the following table.

## DeMorgan’s First Theorem Proof using Truth Table

| A | B | A’ | B’ | A.B | (A.B)’ | A’ + B’ |
|---|---|----|----|-----|--------|---------|
| 0 | 0 | 1 | 1 | 0 | 1 | 1 |
| 0 | 1 | 1 | 0 | 0 | 1 | 1 |
| 1 | 0 | 0 | 1 | 0 | 1 | 1 |
| 1 | 1 | 0 | 0 | 1 | 0 | 0 |

Now that you have understood DeMorgan's First Theorem using the truth table, we will make you familiar with another way to prove the theorem, i.e. by using logic gates. That is, we can also prove that $\overline{A \cdot B} = \overline{A} + \overline{B}$ using logic gates, as follows.

### Verifying and Executing DeMorgan’s First Law using Logic Gates

The uppermost logic gate arrangement, which computes $\overline{A \cdot B}$, can be implemented with a NAND gate with inputs A and B. The lowermost arrangement first inverts the two inputs, yielding $\overline{A}$ and $\overline{B}$, which become the inputs to the OR gate. Thus the output from the OR gate becomes $\overline{A} + \overline{B}$. Therefore, an OR gate with inverters (NOT gates) on each of its inputs is equivalent to a NAND gate function, and a standalone NAND gate can be drawn in this way to show that a NAND gate is a negative-OR.

### Simplifying De Morgan’s First Law with an Example

According to DeMorgan’s First Law, what is an equivalent statement to "The kitchen floor needs mopping and the utensils need washing, but I will not do both."? The two propositions are "I will mop the kitchen floor" and "I will wash the utensils." Simply change the given statement to an "OR" statement and negate each of these propositions: "Either I will not mop the kitchen floor or I will not wash the utensils." Note that this statement leaves open the possibility that one of the tasks is completed, and it is also possible that neither chore is completed.

### Solved Examples

Problem 1: How to reduce the following equation to standard form?
F = MNO + M'N

F’ = (MNO + M’N)’

Solution 1: Using De Morgan's law we get

= (MNO)’ (M’N)’

= (M’ + N’ + O’)(M + N’)

Now, applying the law of distributivity

= N’ + (M’ + O’)M

Again, applying distributivity

= N’ + M’M + O’M

= N’ + MO’ (standard form)

Problem 2: Apply De Morgan's Law to determine the inverse of the equation given below and reduce it to sum-of-products form: F = MO' + MNP' + MOP

Solution 2:

F’ = (MO' + MNP' + MOP)’

= (MO’)’ (MNP’)’ (MOP)’

= (M’+O)(M’+N’+P)(M’+O’+P’)

= M’ + O(N’+P)(O’+P’)

= M’ + (N’+P)OP’

= M’ + ON’P’ + OPP’

Thus, we get F’ = M’ + ON’P’ (since OPP’ = 0).

### Fun Facts

• Do you know the full form of DeMorgan’s Theorems? It's DeMorgan’s theorem.
• No matter whether De Morgan's laws apply to sets, propositions, or logic gates, the anatomy always remains the same.

FAQ (Frequently Asked Questions)

What are DeMorgan's Theorems?

DeMorgan’s Theorems explain the correspondence between gates with inverted inputs and gates with inverted outputs. In simple terms, a NAND gate equals a negative-OR gate, while a NOR gate equals a negative-AND gate. There are 2 DeMorgan’s Theorems, i.e. 1. DeMorgan’s First Law or Theorem 2. DeMorgan’s Second Law or Theorem. When “breaking up” a complementation bar in a Boolean expression or equation, the operation directly beneath the break (addition or multiplication) reverses, and the pieces of the broken bar remain over the respective terms or variables. It often becomes easier to deal with a problem by breaking the longest (uppermost) bar before breaking any bars beneath it. However, you should never try to break two bars in one step!

Why is DeMorgan’s Theorem Useful?

DeMorgan’s Theorem is chiefly used to simplify long Boolean algebra expressions. As mentioned above, the theorem describes the equivalence between gates with inverted inputs and gates with inverted outputs, which makes it a common tool for implementing the fundamental gate operations such as NAND and NOR. Various other uses of DeMorgan’s Theorem include:

• It is widely used in digital programming and for drawing digital circuit diagrams.
• This law is also applicable in computer engineering for the purpose of creating logic gates.
• It essentially explains how mathematical concepts, statements and expressions are linked through their opposites.
• In set theory, the theorem relates the union and intersection of sets through complements.
• In propositional logic, De Morgan's theorems establish a link between conjunctions and disjunctions of propositions by way of negation.
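A brute-force check of the first theorem and of the Problem 1 reduction above (this snippet is an addition to the article; it simply enumerates every input combination in Python):

```python
from itertools import product

# De Morgan's first theorem: (A.B)' = A' + B'
for A, B in product([0, 1], repeat=2):
    assert (not (A and B)) == ((not A) or (not B))

# Problem 1: (MNO + M'N)' = N' + MO'
for M, N, O in product([0, 1], repeat=3):
    lhs = not ((M and N and O) or ((not M) and N))
    rhs = bool((not N) or (M and (not O)))
    assert lhs == rhs

print("First theorem and the Problem 1 reduction hold for all inputs.")
```

The same exhaustive pattern can be used to check the Problem 2 result, since there are only sixteen input combinations over four variables.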
2020-09-26 16:00:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7167574763298035, "perplexity": 1638.7172750893244}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400244231.61/warc/CC-MAIN-20200926134026-20200926164026-00255.warc.gz"}
http://math.stackexchange.com/questions/234683/question-about-liminf-for-a-pointwise-convergent-sequence-of-functions
# Question about liminf for a pointwise convergent sequence of functions. If $f_n \rightarrow f$ pointwise, then does $$\liminf \int f_n=\lim\int f_n?$$ I know that $\liminf f_n=\lim f_n$ since the sequence converges, but I'm not sure if the $(L)$ integral throws us off. I'm trying to prove the Fatou's Reverse Lemma, and I got stuck. EDITED: $f$ is integrable and $f_n\le f$ - If we had some guarantee that the limit on the right hand side exists, then of course this equality would be true, because for a convergent sequence $a_n = \int f_n d \mu$ we would have $\liminf a_n = \lim a_n$. But in your case there doesn't seem to be such a guarantee. For example, consider a sequence of functions $f_n: \mathbb{R} \to \mathbb{R}$ like this: for $n$ even, $f_n=0$. And for $n$ odd, $f_n = 1_{[n, n+1]}$. Then $\int f_n d\mu$ is $0$ for n even and $1$ for n odd, so $\liminf$ on the LHS is equal to $0$, and the $\lim$ on the RHS doesn't exist. - What if $f_n\le f$ and $f$ is integrable? –  cap Nov 11 '12 at 6:32 I'm going to assume your measure space is $\mathbb{R}$ with Lebesgue measure. Consider $f_n=n1_{[0,\frac{1}{n}]}$ where $1_A$ is the characteristic function of the set $A\subset \mathbb{R}$. Then $f_n \to f=0$ pointwise but $$\liminf\int_{\mathbb{R}} f_n(x)dx=1 > 0= \int_{\mathbb{R}} f(x)dx$$ (You can even take $f_n, f\in C^{\infty}(\mathbb{R})$ so it's not a regularity issue). In other measure spaces it might still be false: For example in $\mathbb{N}$ with counting measure take $f_n(m)= 1$ if $m=1,n$ and zero otherwise then $f_n \to f$ pointwise, where $f(1)=1$ and is zero otherwise, and $$\liminf \int_{\mathbb{N}} f_n(m)dm = 2 > 1 = \int_{\mathbb{N}} f(m)dm$$ So no, in general only one inequality is true in Fatou's lemma. With the edit it's still not true: Take $g_n=-f_n$ as above. You could put $|f_n|\leq f$ but then this is just the dominated convergence theorem. - If $f_{n}=n1_{[0,\frac{1}{n}]}$ then $f_{n}\to 0$ almost everywhere, not pointwise. Since $f_{n}(0)=n$ for all $n\in\mathbb{N}$. –  Thomas E. Nov 26 '12 at 9:15 @ThomasE: Use $f_n=n1_{\left(0,\frac1n\right]}$, instead. –  robjohn Nov 26 '12 at 9:23 @robjohn. Yeah, it's just a matter of small modification. –  Thomas E. Nov 26 '12 at 9:25 The question does not ask about $\int_{\mathbb{R}}f(x)\,\mathrm{d}x$; it only asks about the $\liminf$ and $\lim$ of $\int_{\mathbb{R}}f_n(x)\,\mathrm{d}x$. –  robjohn Nov 26 '12 at 9:32 A slight modification of the usual counterexample works here: $$f_n=\left\{\begin{array}{}n1_{(0,1/n]}&\mbox{if n even}\\0&\mbox{if n odd}\end{array}\right.$$ Here, $f_n\to0$ pointwise, and $$\liminf_{n\to\infty}\int_{\mathbb{R}}f_n(x)\,\mathrm{d}x=0$$ yet $$\limsup_{n\to\infty}\int_{\mathbb{R}}f_n(x)\,\mathrm{d}x=1$$ so the limit does not exist. Of course if $\lim\limits_{n\to\infty}\int_{\mathbb{R}} f_n(x)\,\mathrm{d}x$ exists, then $\liminf\limits_{n\to\infty}\int_{\mathbb{R}} f_n(x)\,\mathrm{d}x=\lim\limits_{n\to\infty}\int_{\mathbb{R}} f_n(x)\,\mathrm{d}x$ by definition.
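A quick numerical illustration of the standard counterexample used in the answers (this snippet is an addition to the thread; the grid resolution is an arbitrary choice): each $f_n = n\,1_{(0,1/n]}$ integrates to $1$, while its value at any fixed $x>0$ is eventually $0$, so the integrals do not converge to the integral of the pointwise limit.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2_000_001)            # fine grid on [0, 1]
for n in (1, 10, 100, 1000):
    fn = np.where((x > 0) & (x <= 1.0 / n), float(n), 0.0)
    integral = np.trapz(fn, x)                  # stays close to 1 for every n
    value_at_half = n if 0.5 <= 1.0 / n else 0  # pointwise value at x = 0.5, eventually 0
    print(f"n = {n:4d}   integral ≈ {integral:.4f}   f_n(0.5) = {value_at_half}")
```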
2013-12-09 05:57:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.948855996131897, "perplexity": 211.31543881623793}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163915534/warc/CC-MAIN-20131204133155-00080-ip-10-33-133-15.ec2.internal.warc.gz"}
https://marshalllab.github.io/MGDrivE/docs_v2/reference/equilibrium_lifeycle.html
This function calculates deterministic equilibria for the mosquito lifecycle model.

equilibrium_lifeycle( params, NF, phi = 0.5, log_dd = TRUE, spn_P, pop_ratio_Aq = NULL, pop_ratio_F = NULL, pop_ratio_M = NULL, cube )

Arguments

params: a named list of parameters (see details)
NF: vector of female mosquitoes at equilibrium, for every population in the environment
phi: sex ratio of mosquitoes at emergence
log_dd: Boolean: TRUE implies logistic density dependence, FALSE implies Lotka-Volterra model
spn_P: the set of places (P) (see details)
pop_ratio_Aq: may be empty; if not, a named vector or matrix (see details)
pop_ratio_F: may be empty; if not, a named vector or matrix (see details)
pop_ratio_M: may be empty; if not, a named vector or matrix (see details)
cube: an inheritance cube from the MGDrivE package (e.g. cubeMendelian)

Value

A list with 3 elements:
- init: a matrix of equilibrium values for every life-cycle stage
- params: a list of parameters for the simulation
- M0: a vector of initial conditions

Details

Equilibrium can be calculated using one of two models: classic logistic dynamics or following the Lotka-Volterra competition model. This is determined by the parameter log_dd, and it changes elements of the return list: K is returned for logistic dynamics, or gamma is returned for Lotka-Volterra dynamics. The places (spn_P) object is generated from one of the following: spn_P_lifecycle_node, spn_P_lifecycle_network, spn_P_epiSIS_node, spn_P_epiSIS_network, spn_P_epiSEIR_node, or spn_P_epiSEIR_network. The initial population genotype ratios are set by supplying the pop_ratio_Aq, pop_ratio_F, and pop_ratio_M values. The default value is NULL, and the function will use the wild-type alleles provided in the cube object. However, one can supply several different objects to set the initial genotype ratios. All genotypes provided must exist in the cube (this is checked by the function). If a single, named vector is provided, then all patches will be initialized with the same ratios. If a matrix is provided, with the number of columns (and column names) giving the initial genotypes, and a row for each patch, each patch can be set to a different initial ratio. The three parameters do not need to match each other. The params argument supplies all of the ecological parameters necessary to calculate equilibrium values. This is used to set the initial population distribution and during the simulation to maintain equilibrium. params must include the following named parameters:

• qE: inverse of mean duration of egg stage
• nE: shape parameter of Erlang-distributed egg stage
• qL: inverse of mean duration of larval stage
• nL: shape parameter of Erlang-distributed larval stage
• qP: inverse of mean duration of pupal stage
• nP: shape parameter of Erlang-distributed pupal stage
• muE: egg mortality
• muL: density-independent larvae mortality
• muP: pupae mortality
• muF: adult female mortality
• muM: adult male mortality
• beta: egg-laying rate, daily
• nu: mating rate of unmated females

The return list contains all of the params parameters, along with the density-dependent parameter, either K or gamma. These are the parameters necessary later in the simulations. This was done for compatibility with equilibrium_SEI_SIS, which requires several extra parameters not required further in the simulations. For equilibrium with epidemiological parameters, see equilibrium_SEI_SIS. For equilibrium with latent humans (SEIR dynamics), see equilibrium_SEI_SEIR.
2022-07-06 11:08:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6768032908439636, "perplexity": 4606.3246415404865}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104669950.91/warc/CC-MAIN-20220706090857-20220706120857-00544.warc.gz"}
https://www.sarthaks.com/2726985/the-ratio-of-modulus-of-rigidity-to-youngs-modulus-is-0-40-what-will-be-the-poissons-ratio
# The ratio of modulus of rigidity to Young’s modulus is 0.40. What will be the Poisson’s ratio?

1. 0.55
2. 0.45
3. 0.25
4. 0.35

Correct Answer - Option 3 : 0.25

Concept: The relation between modulus of rigidity (G), Young’s modulus (E) and Poisson’s ratio (μ) is E = 2G(1 + μ).

Calculation:

Given: $\frac{G}{E}$ = 0.4

E = 2G(1 + μ)

$\frac{E}{G}$ = 2(1 + μ)

$\frac{1}{0.4}$ = 2(1 + μ)

2.5 = 2(1 + μ)

1.25 = 1 + μ

μ = 0.25

Other relations between elastic constants: E = $\frac{9KG}{3K+G}$, E = 3K(1 - 2μ)
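A one-line check of the calculation (an added sketch, not part of the original answer):

```python
# Solve E = 2G(1 + mu) for mu, given G/E = 0.40.
G_over_E = 0.40
mu = 1.0 / (2.0 * G_over_E) - 1.0
print(mu)   # 0.25
```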
2022-10-06 14:13:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6829519271850586, "perplexity": 5632.421638804284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00281.warc.gz"}
https://www.mysciencework.com/publication/show/rotation-sets-billiards-n-obstacles-torus-d4f50f39
# Rotation Sets of Billiards with N Obstacles on a Torus Authors Type Preprint Publication Date Mar 11, 2016 Submission Date Mar 11, 2016 Identifiers DOI: 10.1007/s12591-015-0269-3 Source arXiv For billiards with $N$ obstacles on a torus, we study the behavior of a specific kind of trajectory, the so-called *admissible trajectories*. Using the methods developed in \cite{1}, we prove that the *admissible rotation set* is convex, and the periodic trajectories of admissible type are dense in the admissible rotation set. In addition, we show that the admissible rotation set is a proper subset of the general rotation set.
2018-05-22 17:37:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.710479199886322, "perplexity": 1041.9456505800604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864837.40/warc/CC-MAIN-20180522170703-20180522190703-00616.warc.gz"}
https://www.physicsforums.com/threads/functionals-and-calculus-of-variations.773464/
# Functionals and calculus of variations Tags: 1. Sep 29, 2014 ### "Don't panic!" I have been studying calculus of variations and have been somewhat struggling to conceptualise why it is that we have functionals of the form $$I[y]= \int_{a}^{b} F\left(x,y,y' \right) dx$$ in particular, why the integrand $F\left(x,y,y' \right)$ is a function of both $y$ and its derivative $y'$? My thoughts on the matter are that as the functional $I$ is itself dependent on the entire function $y(x)$ over the interval $x\in [a,b]$, then if $I$ is expressed in terms of an integral over this interval then the 'size' of the integral will depend on how $y$ varies over this interval (i.e. its rate of change $y'$ over the interval $x\in [a,b]$) and hence the integrand will depend on $y$ and its derivative $y'$ (and, in general, higher order derivatives of $y$). I'm not sure if this is a correct understanding and I'm hoping that someone can enlighten me on the subject (particularly if I'm wrong). Thanks. 2. Sep 29, 2014 ### rdt2 Think of the motion of a car. The independent variable is time t, but to describe its path you have to give its (initial) position _and_ its (initial) velocity, the derivative of position. 3. Sep 29, 2014 ### "Don't panic!" Can one imply from this then, that as we initially need to specify the position and the velocity in order to describe the configuration of a physical system, then any function $F$ characterising the dynamics of the system over a given interval must be a function of both position and velocity. (In doing so, we can describe the dynamics of the system at any point in the interval that we are considering by specifying the position and velocity at that point and plugging these values into $F$)?! I'm trying to get an understanding for it in the abstract sense as well, without relating to any particular physical problem, as to why the integrand would be a function of some function and its derivatives (first order and possibly higher order)? 4. Oct 1, 2014 ### davidmoore63@y My understanding is that the calculus of variations uses notation that treats the y and y' as independent variables, even though they aren't actually independent (as you point out). However the theory still works. 5. Oct 1, 2014 ### "Don't panic!" Yeah, I guess I'm really trying to understand why the integrand is treated as a function of $y$ and $y'$, why not just $y$? What's the justification/mathematical (and/or) physical reasoning behind it? Is it just that if you wish to be able to describe the configuration of a physical system and how that configuration evolves in time you need to specify the positions of the components of the system and also how those positions change in time (i.e. the derivatives of the positions). Hence, as we wish the Lagrangian of the system to characterise its dynamics, this implies that the Lagrangian should be a function of both position and velocity?! Last edited: Oct 1, 2014 6. Oct 1, 2014 ### davidmoore63@y The function F depends on the problem you are trying to solve. So for example, if you are trying to minimize the energy of a system, the energy consists of kinetic energy (depends on y') and potential energy (depends on y). However if you are trying to minimize the length of a curve, the integrand is ds=sqrt(1+y'^2) which does not depend on y. So the form of F depends on the problem at hand. Does that help? 7. Oct 1, 2014 ### "Don't panic!" Thanks.
I understand in those cases, but would what I said be correct in the more general case of applying the principle of stationary action to a physical system, i.e. one wishes to describe the state of some system at time $t_{0}$ and what state it evolves to at a later (fixed) time $t_{1}$. To do so one must specify the coordinates $q_{i}$ of the components of the system and also how those coordinates change in time, i.e. their derivatives, $\dot{q}_{i}$ in the time interval $t\in [t_{0}, t_{1}]$. Thus, we require a function of the form $$\mathcal{L}= \mathcal{L}\left(q_{i}(t),\dot{q}_{i}(t)\right)$$ to completely specify the state of the system at any time $t\in [t_{0}, t_{1}]$. From this, we define a functional, the action, such that $$S\left[q_{i}(t)\right] = \int_{t_{0}}^{t_{1}} \mathcal{L}\left(q_{i}(t),\dot{q}_{i}(t)\right) dt$$ that associates a number to each path $\vec{q}(t)=\left(q_{i}(t)\right)$ between the (fixed) states $\vec{q}\left(t_{0}\right)$ and $\vec{q}\left(t_{1}\right)$. We then invoke the principle of stationary action to assert that the actual physical path taken between these two points is the one which satisfies $\delta S = 0$. Would this be a correct interpretation? 8. Oct 1, 2014 ### davidmoore63@y That is a good description and matches how I understand calculus of variations in the context of general physical systems. 9. Oct 1, 2014 ### "Don't panic!" Great. Thanks very much for your help! 10. Oct 1, 2014 ### "Don't panic!" As a follow-up. Would it be fair to say then, that as $S\left[\vec{q}(t)\right]$ contains information about all possible paths between the points $\vec{q}\left(t_{0}\right)$ and $\vec{q}\left(t_{1}\right)$, this implies that the integrand will be a function of the values of those paths and their derivatives at each point $t\in [t_{0}, t_{1}]$. Now, as at each point along the path in the interval $t\in [t_{0}, t_{1}]$, once we have specified the position we are free to specify how that position changes (i.e. the velocity) at that point independently, as we are considering all possible paths. However, upon imposing the principle of stationary action, we are choosing a particular path, i.e. the one which extremises the action. This re-introduces the explicit dependence of $\dot{q}_{i}(t)$ on $q_{i}(t)$ via the relation $$\delta\dot{q}_{i}(t) = \frac{d}{dt}\left(\delta q_{i}(t)\right)$$ Apologies to re-iterate, just trying to fully firm up the concept in my mind. 11. Oct 2, 2014 ### "Don't panic!" Sorry, please ignore the post above - I realised the error in what I was writing after posting it and the forum won't let me delete it now! Instead of the above post, is the following a correct summary (pertaining to the Lagrangian and why it is dependent on position and velocity): The state of a mechanical system at a given time, $t_{0}$, is completely specified by the positions of the particles, along with their corresponding velocities, within it. Thus, if we wish to describe the state of this system at some later time $t$ in some fixed time interval, then we need to specify how the system evolves over this interval, i.e. we require a function which depends on the positions of the particles and also the rate at which those positions are changing (i.e. their velocities) at each point within the time interval (a requirement if we wish to consider external forces acting on the particles).
This motivates us to consider a function $\mathcal{L}= \mathcal{L}\left(q_{i}(t), \dot{q}_{i}(t)\right)$ which completely specifies the state of a mechanical system at each point $t \in [t_{0},t_{1}]$. 12. Oct 2, 2014 ### homeomorphic My intuition is that the Lagrangian is sort of a cost function. You might not care about y' in some problems. So, you could imagine a problem in which your cost per unit time to travel from point A to point B in a fixed amount of time is strictly a function of position. You would then try to spend as much of your time in the areas of lower cost to minimize your travel expenses. But let's say you want to discourage speeding as well, so you want to penalize higher velocities. My intuition is that it's easier to apply that speeding penalty if you make the Lagrangian also a function of velocity. For example, you could add the speed squared or cubed or whatever you want. So, it's natural to want to introduce y' as a variable to be able to put that into your cost function. It's a pretty flexible construction, so you can imagine that we can just try to penalize any path that doesn't follow the laws of physics that we want, and hopefully, that will give you a description of physics. When you work out the details, it does turn out to work. 13. Oct 3, 2014 ### Stephen Tashi I haven't heard a mathematical answer to that question yet. Let me reiterate the question emphasizing the mathematical aspect. When we have a function such as $y = 3x + x^2$ we denote it as $y = f(x)$, not as $y = f(x,3x,x^2)$ even though evaluation $f$ involves the intermediate steps of evaluating $3x$ and $x^2$. So why is an expression like $F(x,y,y')$ necessary in discussing the integrand in the calculus of variations? Isn't computing $y'$ from $y$ an intermediate step in the process? If we are given $y$ we can find $y', y'',...$ etc. Why not just write the integrand as $F(x,y)$ or even $F(x)$? After all, the integration $\int_a^b F(...) dx$ is ordinary integration. The integrand must be a function of $x$. My conjecture for an explanation: In the expression $I(y) = \int_a^b F(x,y,y') dx$ we see that it's $I(y)$ instead of $I(y,y')$ so the fact that finding $y'$ is needed as an intermediate step isn't recognized in the left hand side. If we have function like $z = x + 3x + x^2$ we can choose to describe it in a way that exhibits intermediate calculations. For example, let $y = 3x + x^2$ and $F(a,b) = a + b$. Then we can write $z = F(x,y)$. By analogy the notation $F(x,y,y')$ indicates a particular choice of representing the integrand that takes pains to exhibit intermediate calculations. It's not a simple algebraic expression. The computation implied by $F(x,y,y')$ is an algorithm. As far as I can see, there is nothing incorrect about notation like $I(y) = \int_a^b G(x,y) dx$ to describe the same functional. It's just that the processes described by $F$ and $G$ would be technically different. Thinking of $F$ and $G$ as computer routines, the routine $F$ requires that you compute $y'$ and then give it as input to $F$. The routine $G$ does not. So I think the notation $F(x,y,y')$ is not a necessary notation. It is a permissible notation that may be helpful if it reminds us of the steps involved in forming the integrand. 14. Oct 3, 2014 ### "Don't panic!" Thanks for your help on the matter. Would it be fair to say the following: The configuration of a system at a given instant in time is completely determined by specifying the coordinates of each of the particles within the system at that instant. 
However, using just this information one cannot determine the configuration of the system at subsequent instants in time. To do so requires knowledge of the rate of change of these positions at the instant considered. For given values of the coordinates the system can have any velocities (as we are considering the coordinates and velocities of the particles at the same instant in time), and this will affect the configuration of the system after an infinitesimal time interval, $dt$. Thus, by simultaneously specifying the coordinates and velocities of the particles at a given instant in time, we can, in principle, calculate its subsequent time evolution. This means that, if the coordinates and velocities of the particles are specified at a given instant, $t_{0}$, then the accelerations of those particles are uniquely defined at that instant, enabling one to construct equations of motion for the system. Following the principle of stationary action, we are motivated to consider a function which summarises the dynamics of a physical system at each given instant in time (over some finite time interval), along all possible paths that the system could take between two fixed configurations, $\vec{q} (t_{0})$ and $\vec{q} (t_{1})$. As such, taking into account the discussion above, we can infer that for this function to successfully summarise the dynamics of the system at each point, it is sufficient for it to be a function of the coordinates $q_{i}$ and the velocities $\dot{q}_{i} (t)$ of the constituent components of the system, i.e. a function of the form $\mathcal{L} =\mathcal{L} (q_{i} (t), \dot{q}_{i} (t))$ (we need not consider higher order derivatives as it is known that the dynamical state of the system, at a given instant in time, is completely specified by the values of its coordinates and velocities at that instant). Given this, we can then attribute a value to the dynamics of the system, depending on the path, $\vec{q} (t)= (q_{1},\ldots ,q_{n})$, that it takes between the two fixed configurations, $\vec{q} (t_{0})$ and $\vec{q} (t_{1})$. We do so by defining a functional, the action, as follows $$S[\vec{q} (t)] = \int_{t_{0}}^{t_{1}}\mathcal{L} (q_{i} (t), \dot{q}_{i} (t)) dt$$ The principle of stationary action then asserts that the actual path taken by the system between these two fixed configurations is the one for which the action is extremised (i.e. the path which gives an extremal value to this integral). Last edited: Oct 3, 2014 15. Oct 3, 2014 ### Stephen Tashi The question I have about the physics that followed is what does it say about the mathematical notation like $F(x,y,y')$ or $G(x,y)$ when $y$ is a function of x? Thinking of $F$ and $G$ as being implemented by computer algorithms, what does the argument $y$ represent? One possibility is that $y$ represents a function. In many computer languages an argument can be a function instead of a single number. If we give an algorithm the ability to access the function $y(x)$ then it can in principle compute $y', y'', y'''$. This is the convention that applies to the notation $I(y)$. In that notation, $y$ represents a function. Another possibility is that $y$ represents a single numerical value. In that case, notation like $G(x,y)$ does not represent giving $G$ the knowledge of the function $y(x)$. So we cannot assume that the algorithm $G$ can compute $y'(x)$.
Under the convention that arguments are single numerical values, I don't see how the algorithm $F(x,y,y')$ can reconstruct any information about $y''$ (acceleration) from pure mathematics. To do that, it would have to know the behavior of $y'$ in an interval. Are you saying we have a physical situation where the knowledge of position and velocity at one point in time is sufficient to compute the subsequent behavior of the system (and hence compute any derivative of that behavior that is desired)? (There is another recent thread where someone remarks that physicists often use ambiguous notation that makes it difficult to distinguish between a function and a single numerical value that comes from evaluating that function.) 16. Oct 4, 2014 ### "Don't panic!" I was following the Landau-Lifschitz book on classical mechanics to be honest, where they describe it in a similar manner. I think what is perhaps meant is that using this information as initial conditions for an equation of motion one can uniquely determine the acceleration at that initial instant?! My thoughts were that for each possible path between two points, the Lagrangian is a function of the coordinates and velocities of this path, such that, at each instant in time along the time interval the Lagrangian characterises the dynamics of the system if it were to follow that path (i.e. by plugging in the values of the coordinates and velocities at each instant in time along the path into the Lagrangian we can characterise the dynamics of the system along that path). 17. Oct 4, 2014 ### Fredrik Staff Emeritus The notation $F(x,y,y')$ is pretty bad in my opinion. It should be $F(x,y(x),y'(x))$. $y$ is a function. $y(x)$ is an element of the codomain of $y$, so it's typically a number. $F$ doesn't take functions as input. It takes three real numbers. Similarly, I would never write $S[\vec q(t)]$, because $\vec q(t)$ is an element of $\mathbb R^3$, not a function. (It's a "function of t" in the sense that its value is determined by the value of t, but it's still not a function). I would write $$S[\vec q]=\int_a^b L(\vec q(t),\vec q'(t),t)\mathrm dt.$$ (When I do calculations with a pen and paper, I will of course abuse the notation to avoid having to write everything out). $L$ is usually something very simple. In the classical theory of a single particle moving in 1 dimension, as influenced by a potential $V:\mathbb R\to\mathbb R$, it can be defined by $L(r,s,u)=\frac{1}{2}ms^2-V(r)$ for all $r,s,u\in\mathbb R$. Note that this ensures that $L(q(t),q'(t),t)=\frac{1}{2}mq'(t)^2-V(q(t))$ for all t. 18. Oct 4, 2014 ### "Don't panic!" Exactly. That's what I was trying to allude to in my description. The Lagrangian is a function of the values of the coordinates and velocities of the particle at each given instant over the time interval considered. Would what I said in the post (above yours) about why the Lagrangian is a function of coordinates and velocities, in a more general sense, be correct? (I know that for conservative systems it assumes the form $\mathcal{L}=T-V$, but I was trying to justify to myself the reasoning as to why we consider the Lagrangian to be a function of position and velocity in the first place, before considering any particular cases, in which the components, such as $T$ and $V$, are clearly functions of the coordinates and velocities?) Also, is what I said about the action (in previous post), i.e.
as a means of attributing a value to the characteristic dynamics of a system due to it following a particular path, $\vec{q}$, enabling us to distinguish the actual physical path taken by the system (using variational techniques), correct?

19. Oct 4, 2014

### "Don't panic!"

In reference to this part I was following Landau–Lifshitz: "If all the coordinates and velocities are simultaneously specified, it is known from experience that the state of the system is completely determined and its subsequent motion can, in principle, be calculated. Mathematically, this means that, if all the coordinates $q$ and velocities $\dot{q}$ are given at some instant, the accelerations $\ddot{q}$ at that instant are uniquely defined." (Mechanics, L.D. Landau & E.M. Lifshitz) That sounds like the theorem that says (roughly) that if $f$ is a nice enough function, then the differential equation $\vec x''(t)=f(\vec x(t),\vec x'(t),t)$ has a unique solution for each initial condition $\vec x(t_0)=\vec x_0$, $\vec x'(t_0)=\vec v_0$. Lagrangian mechanics is based on a slightly different theorem (I don't recall actually seeing such a theorem, but I'm fairly sure that one exists): a unique solution for each boundary condition $\vec x(t_a)=\vec x_a$, $\vec x(t_b)=\vec x_b$.
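The thread keeps circling the distinction between the Lagrangian as a function of a few real numbers and the action as a functional of an entire path, so a short numerical sketch may help make it concrete. It is not from the thread: the harmonic potential, the endpoints and the discretisation below are assumptions chosen purely for illustration.

```python
import numpy as np

m, k = 1.0, 1.0  # assumed mass and spring constant

def L(r, s, u):
    """Fredrik's convention: L takes three real numbers r = q(t), s = q'(t), u = t."""
    return 0.5 * m * s**2 - 0.5 * k * r**2

def action(q, t0=0.0, t1=1.0, n=2001):
    """S[q]: takes a whole *function* q and integrates L(q(t), q'(t), t) over [t0, t1]."""
    t = np.linspace(t0, t1, n)
    qt = q(t)
    dqdt = np.gradient(qt, t)            # numerical derivative of the sampled path
    dt = t[1] - t[0]
    return np.sum(L(qt, dqdt, t)) * dt   # simple Riemann-sum approximation of the integral

# Two paths sharing the fixed endpoints q(0) = 0 and q(1) = 1:
physical  = lambda t: np.sin(t) / np.sin(1.0)                # solves m*q'' = -k*q for these endpoints
perturbed = lambda t: physical(t) + 0.3 * np.sin(np.pi * t)  # same endpoints, different path

print(action(physical), action(perturbed))
```

Under these assumptions the solution of the equation of motion returns the smaller of the two action values, which is the stationary-action statement of the earlier posts in miniature. Note also that `L` never sees the function `q`, only numbers, while `action` does, which is exactly the notational point being argued about.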
http://ravi-bhide.blogspot.com/2011/08/evaluating-interviewers-part-2.html
Ravi is an armchair futurist and an aspiring mad scientist. His mission is to create simplicity out of complexity and order out of chaos.

## Sunday, August 14, 2011

### Evaluating interviewers - Part 2

In this post, I show a method to mathematically evaluate an interviewer based on the job performance of the candidates that get hired. This is a continuation of (but independent of) Evaluating Interviewers - Part 1, where I showed a method to evaluate an interviewer against other interviewers. I am replicating the definitions here from Part 1.

Definitions

| Symbol | Definition |
| --- | --- |
| $C_i$ | $i^{th}$ candidate |
| $R_j$ | $j^{th}$ interviewer |
| $s_{ij}$ | score for the $i^{th}$ candidate by the $j^{th}$ interviewer (this is the grade, usually between 1 and 5, given by the interviewer to the candidate based on the interview) |
| $m_i$ | number of interviewers in the interview panel for candidate $i$ (the number of interviewers, usually between 4 and 8, that the candidate faces during the course of the interview process) |
| $n_j$ | number of candidates interviewed by interviewer $j$ (can be large, in tens or hundreds, especially for popular interviewers) |
| $\hat{n_j}$ | number of candidates interviewed by interviewer $j$ that joined the company/group |
| $p_i$ | job performance of the $i^{th}$ candidate after joining the company/group (usually between 1 and 5, captured in a company-internal HRM system) |
| $s_i$ | average score given by the interview panel for the $i^{th}$ candidate, $s_i=\sum_{j}s_{ij}/{m_i}$ (usually between 1 and 5) |

What we expect from interview scores

We take the interviewer's score $s_{ij}$ as a prediction about candidate $C_i$'s job performance once hired. The higher the score, the better the predicted job performance. E.g., when an interviewer gives a score of $3.1$ to candidate $C_1$ and $3.2$ to $C_2$, in effect, he is vouching for candidate $C_2$ to out-perform candidate $C_1$, by a margin proportional to $0.1$. Secondly, we expect job performance to be directly and linearly proportional to the score. E.g., if scores of $3.1$ and $3.2$ translate to job performance ratings of $3.1$ and $3.2$ respectively, then a score of $3.3$ should translate to a job performance rating of $3.3$ or thereabouts. In other words, we expect the following from our scores:

1. Ordinality: if $s_{aj}>s_{bj}$, then we hold interviewer $R_j$ to a prediction that candidate $C_a$ would outperform $C_b$ on the job.
2. Linearity: job performance should be directly and linearly proportional to the score.

So we expect a plot of job performance (Y-axis) against interview score (X-axis) to be roughly linear for each interviewer, ideally along the $y=x$ line. We will discuss variations from this line and their implications later in the article. We classify an interviewer as good when there is high correlation between the score given by the interviewer to the candidate and the job performance of the candidate post-hire. The higher the correlation, i.e. the lower the variance, the better the interviewer. This is because a lower variance implies better predictability on the part of the interviewer. Conversely, the higher the variance, the worse the interviewer. Here is a graph of job performance (Y-axis) against interviewer score (X-axis) for a good interviewer: Here is the graph for a bad interviewer. Notice the high variance, implying a low correlation between interview score and job performance:

Easy v/s Hard interviewers

Variation from the $y=x$ line doesn't necessarily indicate a bad interviewer.
For an interviewer to be bad, the correlation between interview score and job performance should be low. Here is an example of a good interviewer with high correlation between interview score and job performance, but whose mean is different from the $y=x$ line. Note that the above graph satisfies both the ordinality and linearity conditions and hence the interviewer is a good interviewer. The above graph is for an "easy" interviewer - one who tends to give higher scores than his peers. Notice that the mean line hangs below the $y=x$ line. Here is another example of an interviewer with high correlation between interview score and job performance, but whose mean is different from the $y=x$ line. This is a "hard" interviewer - one who tends to give lower scores than his peers. Notice that the mean line hangs above the $y=x$ line. As opposed to the good interviewers, here are graphs for bad interviewers. In the above case, the interviewer is an easy interviewer - one who tends to give higher scores than his peers, as seen from the mean line (the thicker one parallel to the $y=x$ line). However, the low correlation suggests that the interviewer's score does not accurately portray job performance. Here is another bad interviewer - this time a hard one - one who tends to give lower scores than his peers. The above graphs show that both easy and hard interviewers can be good interviewers. And on the flip side, both easy and hard interviewers can be bad interviewers. What really distinguishes good from bad is how "tightly" the points hug the mean line in the graph. With this as the background, here is some math that will order interviewers in the descending order of "goodness".

The Math

1. Find the line parallel to $y=x$ that serves as the mean for all points in the graph. There can be different definitions for "mean" here - e.g. one that is a mean of all $x$ and $y$ co-ordinates of the points, one that minimizes the sum of distances to each point, etc. For simplicity, we choose the mean of all $x$ and $y$ coordinates for that interviewer, i.e. $\overline{x}_j$ and $\overline{y}_j$ for interviewer $R_j$ respectively: $\overline{x}_j=\frac{\sum_{k}s_{kj}}{\hat{n_j}}$, $\overline{y}_j=\frac{\sum_{k}p_k}{\hat{n_j}}$. So the dark line in the graph corresponds to $y=f_j(x)=x+(\overline{y}_j-\overline{x}_j)$.
2. We compute the standard deviation of interviewer $R_j$'s score, $\sigma_j$, as follows: $\sigma_j=\sqrt{\frac{\sum_k{(p_{i_k}-f_j(s_{i_kj}))^2}}{\hat{n_j}-1}}$, where the subscript $i_k$ is used to indicate a candidate that the interviewer interviewed and who was eventually hired. So, essentially, we are determining the spread of the points with respect to the line $y=f_j(x)$. The lower the $\sigma_j$, the better the interviewer is at predicting the job performance of the candidate.
3. Alternatively, instead of the above steps, we can compute the correlation coefficient between the interview scores and the job performance scores.
4. Order interviewers $R_j$ in ascending order of $\sigma_j$ (or in descending order of the correlation coefficient). This is the list of interviewers - from the best to the worst - in that order! (A short computational sketch of this procedure is given after the post.)

In Closing

• We outlined one approach to rank interviewers according to their ability to predict future performance of a job candidate.
• There are many ways in which the "goodness" of an interviewer can be defined. Each can alter our algorithm.
• There are many ways in which one can define average performance of the interviewer (the dark solid line in the graph). We choose a simple definition.
• Regardless of the customization applied to our algorithm, the graphs and the rankings can help the organization improve the interview process, thus:

1. if an interviewer is deemed "bad", retrain them
2. if an interviewer is deemed "easy", perhaps discount their score for the candidate by their mean offset from the $y=x$ line, $\overline{x}_j-\overline{y}_j$, to estimate what a regular interviewer's score would have been for that candidate.
3. similarly, for a "hard" interviewer, add their mean offset to normalize their score and bring it up to par with other "regular" interviewers.

#### 1 comment:

1. I rolled my eyes over the equations, but boy, did I understand the graphs and the concept! I am amazed that regular events can be expressed mathematically and you have done it so simply.
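Below is a minimal sketch of the ranking procedure described under "The Math" above. The interviewer names, scores and performance ratings are hypothetical, invented only to exercise the formulas; in practice they would come from the interview and HRM systems mentioned in the definitions.

```python
import numpy as np

# Hypothetical data: for each interviewer j, the scores s_ij given to hired
# candidates and those candidates' later job-performance ratings p_i.
data = {
    "R1": {"s": [3.1, 3.8, 2.9, 4.2], "p": [3.0, 3.9, 3.1, 4.1]},  # tight around y = x
    "R2": {"s": [4.0, 4.5, 3.9, 4.8], "p": [3.1, 3.6, 3.0, 3.9]},  # "easy" but consistent
    "R3": {"s": [3.0, 4.0, 3.5, 2.5], "p": [4.1, 2.4, 3.9, 3.6]},  # poorly correlated
}

def spread_and_offset(s, p):
    """sigma_j about the mean line y = f_j(x) = x + (ybar_j - xbar_j), plus that offset."""
    s, p = np.asarray(s, float), np.asarray(p, float)
    offset = p.mean() - s.mean()                      # ybar_j - xbar_j
    f = s + offset                                    # predicted performance f_j(s_ij)
    sigma = np.sqrt(np.sum((p - f) ** 2) / (len(s) - 1))
    return sigma, offset

# Rank in ascending order of sigma_j: best predictor first.
for j in sorted(data, key=lambda j: spread_and_offset(data[j]["s"], data[j]["p"])[0]):
    sigma, offset = spread_and_offset(data[j]["s"], data[j]["p"])
    kind = "easy" if offset < -0.1 else ("hard" if offset > 0.1 else "regular")
    print(f"{j}: sigma = {sigma:.2f}, offset = {offset:+.2f} ({kind})")
```

Sorting ascending by $\sigma_j$ puts the most predictive interviewer first, and the sign of the offset $\overline{y}_j-\overline{x}_j$ flags whether that interviewer is "easy" (negative) or "hard" (positive).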
https://www.nature.com/articles/s41377-020-0293-0?error=cookies_not_supported&code=5613da32-70ce-4f6b-ae51-921d2a3f1b50
# Chirality-assisted lateral momentum transfer for bidirectional enantioselective separation

## Abstract

Lateral optical forces induced by linearly polarized laser beams have been predicted to deflect dipolar particles with opposite chiralities toward opposite transversal directions. These "chirality-dependent" forces can offer new possibilities for passive all-optical enantioselective sorting of chiral particles, which is essential to the nanoscience and drug industries. However, previous chiral sorting experiments focused on large particles with diameters in the geometrical-optics regime. Here, we demonstrate, for the first time, the robust sorting of Mie (size ~ wavelength) chiral particles with different handedness at an air–water interface using optical lateral forces induced by a single linearly polarized laser beam. The nontrivial physical interactions underlying these chirality-dependent forces distinctly differ from those predicted for dipolar or geometrical-optics particles. The lateral forces emerge from a complex interplay between the light polarization, lateral momentum enhancement, and out-of-plane light refraction at the particle–water interface. The sign of the lateral force could be reversed by changing the particle size, incident angle, and polarization of the obliquely incident light.

## Introduction

Enantiomer sorting has attracted tremendous attention owing to its significant applications in both material science and the drug industry1,2,3,4,5. In 2006, 80% of drugs approved by the FDA (U.S. Food and Drug Administration) were chiral6,7. Among them, 75% were single enantiomers. Recently, optical enantioseparation has attracted much attention owing to the emergence of optical phenomena8,9,10,11,12,13. Unstructured, plane-wave-like light fields can induce optical lateral forces on appropriately shaped objects as an optical analogue to aerodynamic lift14. Circularly polarized (CP) beams can induce spin-dependent lateral forces on achiral spherical particles when they are placed near an interface15,16. The displacements of particles controlled by the spin of the light can be perpendicular to the direction of the light beam17,18,19. Only a few experimental observations of spin-dependent lateral forces have hitherto been reported. These lateral forces, associated with optical spin–orbit interactions, differ from the "chirality-dependent" lateral forces induced by linearly polarized beams, which deflect dipolar chiral particles with opposite handedness towards opposite lateral directions20,21,22,23,24. Most examples of optical lateral forces induced by chirality are only theoretical predictions based on dipole (radius ≤ 50 nm) or geometrical-optics (e.g., radius > 10 µm) particles under the illumination of beams with intensity gradients25,26. Meanwhile, the chiral particles used in reported experiments are tens of micrometers in size, in the geometrical-optics regime, where the mechanism and methodology are quite different from the dipole approximation and Mie theories. Chirality-dependent lateral forces have been theoretically proposed to be powerful tools for all-optical enantiomer sorting.
Most reported methods are only theoretical models based on the analogous photogalvanic effect27, Stern–Gerlach-type deflectors26,28, standing waves23, and plasmonic nanoapertures29,30,31. Experiments on enantioselective optical forces include the use of atomic force microscopy (AFM)30 and helicity-dependent optical forces25,26,32,33,34. The helicity-dependent optical forces require two counterpropagating beams with opposite helicities. These experiments exploring the interactions of light helicity and particle chirality do not belong to the field of optical lateral forces because they are not applicable to linearly polarized beams (see Supplementary Fig. S1). The system with two counterpropagating helical beams also has difficulties in the manipulation of particles smaller than 2 µm. Despite potential applications, there has been no experimental evidence of chirality-dependent lateral forces induced by a single, non-gradient plane wave on a Mie (radius ~ wavelength) chiral particle. ## Results ### Principle of the optical lateral force on Mie chiral particles Cholesteric polymerized microparticles35 floating at an air–water interface provide a suitable model system to experimentally investigate chirality-dependent optical lateral forces (see Fig. 1a). Chiral particles with different handedness κ > 0 and κ < 0, under the illumination of an s-polarized beam with incident angle θ, experience optical lateral forces to the left (Fy < 0) or right (Fy > 0), respectively. The chirality parameter κ from −1 to 1 is used to describe the chirality of the object23. Theoretical analysis shows that both the Poynting vector (P) and spin angular momentum (SAM) contribute to the optical lateral force on a dipole chiral particle20,22, i.e., $$F_{{\mathrm{lateral}}} = F_{{\mathrm{Poynting}}} + F_{{\mathrm{SAM}}} = \frac{{\sigma \left\langle {\mathbf{S}} \right\rangle }}{c} + \omega \gamma _e\left\langle {{\mathbf{L}}_e} \right\rangle$$ (1) where $$< {\mathbf{S}} > = 1/2{\Re} [{\mathbf{E}} \times {\mathbf{H}}^ \ast ]$$ and $$\left\langle {{\mathbf{L}}_e} \right\rangle$$ are the time-averaged Poynting vector and electrical spin density, respectively. σ is the cross-section in vacuum. The lateral force resulting from SAM is usually one order of magnitude smaller than that from the Poynting vector22. Therefore, plotting the Poynting vector surrounding the chiral particle is an intuitive way to elucidate the optical forces. According to Minkowski’s approach36, the optical force increases n times in a dielectric medium (n is the refractive index of the medium) due to the momentum transfer; thus, the medium effect for a liquid with a higher refractive index is more prominent37,38,39,40. A microscopic image of cholesteric polymerized microparticles between crossed polarizers is shown in Fig. 1b, where the light pattern (Maltese cross) on the particles comes from the supramolecular spherulitic arrangement, as sketched in the inset41. After UV exposure of the emulsion, the polymerized particles preserve both the spherical shape and internal supramolecular arrangement of the precursor cholesteric droplets, offering several advantages (compared to liquid crystal droplets) for optical manipulation experiments where stability of the shape and the internal configuration is required. Figure 1c, d shows SEM and TEM images of the particles, respectively. 
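As an aside that is not from the paper, Eq. (1) can be made concrete with a short numerical sketch that evaluates its two terms at a single point in the field. The complex field values, the cross-section $\sigma$ and the chiral response coefficient $\gamma_e$ below are placeholders, and the electric spin density is taken with a common convention, $\langle \mathbf{L}_e \rangle = (\varepsilon_0/4\omega)\,\mathrm{Im}(\mathbf{E}^* \times \mathbf{E})$, which the excerpt above does not spell out; a real calculation would take $\mathbf{E}$ and $\mathbf{H}$ from the full scattering solution.

```python
import numpy as np

eps0, c = 8.854e-12, 2.998e8
wavelength = 532e-9
omega = 2 * np.pi * c / wavelength

def cross(a, b):
    """Explicit complex cross product to keep the bookkeeping transparent."""
    return np.array([a[1]*b[2] - a[2]*b[1],
                     a[2]*b[0] - a[0]*b[2],
                     a[0]*b[1] - a[1]*b[0]])

# Placeholder complex field phasors at the particle position (V/m and A/m).
E = np.array([1.0 + 0.0j, 0.2 + 0.5j, 0.0 + 0.3j]) * 1e5
H = np.array([0.0 + 0.1j, 0.3 + 0.0j, 0.8 + 0.2j]) * 1e2

S_avg = 0.5 * np.real(cross(E, np.conj(H)))                 # <S> = (1/2) Re(E x H*)
L_e = (eps0 / (4 * omega)) * np.imag(cross(np.conj(E), E))  # assumed spin-density convention

sigma_x = 1e-14   # placeholder cross-section (m^2)
gamma_e = 1e-33   # placeholder chiral response coefficient

F_poynting = sigma_x * S_avg / c
F_sam = omega * gamma_e * L_e
print("Poynting term:", F_poynting)
print("SAM term:     ", F_sam)
print("F_lateral:    ", F_poynting + F_sam)
```

The point of the sketch is only the bookkeeping of Eq. (1): a scattering (Poynting) contribution scaled by $\sigma/c$ plus a spin contribution scaled by $\omega\gamma_e$, with the lateral component read off from the transverse ($y$) entries of the two vectors.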
The TEM investigations reveal the self-organization in a radial configuration for our material at R/p ≥ 1.5, where R and p are the radius and pitch of the particles, respectively. Based on the above features, the polymeric microparticles can exhibit chirality at both the molecular (chiral additive molecules) and supramolecular levels, which offers a perfect paradigm for the experimental sorting of chiral particles in the Mie regime. The cholesteric particles immersed half in water and half in air are assumed to be lossless spheres with the real part of the refractive index equal to ~1.5 at 532 nm and a chirality κ of +0.4. Unlike the optical lateral force on dipole chiral particles (R ≤ 100 nm)23, whose sign depends only on the chirality of the particle, our results show that the sign of the optical lateral force on the micro-chiral particles (R ≥ 300 nm) could directly depend on the size (Fig. 1e, f) and chirality (Fig. 1g, h). The force map as a function of particle radius in Fig. 1e shows that the sign of the lateral force can be reversed by changing the particle size and the incident angle when R is on the order of the wavelength (Mie regime) for a fixed chirality κ = +0.4 and an s-polarized beam. The variation in the lateral force with the size and incident angle under the illumination of a p-polarized beam is shown in Fig. 1f. It is noted that the sign of the lateral force could be reversed under different polarizations of light at certain incident angles. For instance, the signs of the lateral forces on different-sized particles are opposite for s- and p-polarized beams when θ = 45°. This effect is also observed in the experiment. The lateral force could also be a function of κ for a fixed radius (R = 500 nm), as shown in Fig. 1g, h. For the s-polarized light, most lateral forces remain negative over a large range of κ (κ < 0.5), while the forces are positive for p-polarized light over the same range.

### Analysis of the optical lateral force

Previous theoretical predictions focused on chiral particles located either above or below the interface. The dipolar approximation, commonly used in the theoretical modeling of optical forces on chiral particles, indicates that the sign of the lateral force depends only on the sign of the chirality κ9,21,22,23. For example, theoretical analysis20,21,22,23 shows that dipolar chiral particles with different chiralities κ > 0 and κ < 0 experience optical lateral forces to the left (Fy < 0) and right (Fy > 0), respectively. The sign is not affected by a change in the incident angle of light. Our simulations show that this is also true even if the dipolar particle (R = 50 nm) is located at the interface (e.g., half in air (z > 0) and half in water (z < 0)), as shown in Fig. 2a, where we plot the simulated lateral force versus the incident angle for isotropic chiral spheres with κ = +0.4 (triangles) and κ = −0.4 (circles). For both s- and p-polarized beams, the lateral forces are always negative for κ > 0 (positive for κ < 0) at any incident angle. However, we found unexpected behavior for chiral particles with radius R = 500 nm, as shown in Fig. 2b. The force reverses sign with increasing incident angle at θ ≈ 18° for both s- and p-polarized beams. The sign reverses again at θ ≈ 66° for the p-polarization. Intuitively, we attribute this angle-induced effect to two reasons. Let us divide the plane wave into two regions separated by the axis k1 as shown in Fig. 2c–e.
Two parallel light beams from different areas are shined on the boundary of the particle in the incident plane with identical distance to axis k1. Consider their scattering fields in two planes (marked as two red circles in Fig. 2c) parallel to k1. When the medium around the sphere is homogenous, the interaction of the two light beams inside the particle will result in a net zero force in the y direction because of symmetry, as shown in Fig. 2c. However, due to the interface, the portions of air and water in the two planes are different, resulting in different diffraction and momentum exchange at the boundary, which eventually generates a lateral force. The other reason is that the reflection from the interface induces additional light rays on the sphere. The different portions of the refraction area cause a change in the light path to the sphere. The reflection and refraction together contribute to the emergence of the lateral force. When the incident angle changes, the portions of air and water in the relevant planes (blue circles) in Fig. 2e will be different from that in Fig. 2d, resulting in different lateral forces. In addition, different incident angles have different ranges of the water region, where the reflection and refraction are different. We can also comprehend the origin of the optical lateral force on chiral particles by considering the linearly polarized beam as two circularly polarized beams with different handedness (see discussion below). Plots of the lateral force on larger chiral particles (600 nm < R ≤ 1000 nm) are shown in Fig. 2f, g. Small incident angles (e.g., 10°) can easily induce a reversal of the lateral force. This is because when the incident angle is small, the energy is focused near the z-axis, where the size effect is more significant. A small change in size can extraordinarily affect the curvature of the particle boundary near the axis. This is very similar to the linear momentum transfer in the incident plane42,43. The optical lateral force can also have opposite signs at medium (θ = 45°) and large (θ = 80°) incident angles for a p-polarized beam, as shown in Fig. 2g. Meanwhile, the oscillations of the curves in Fig. 2f, g result from the size effect of Mie particles44. The lateral forces on multilayer particles are plotted in Supplementary Fig. S2, which shows that the force difference between inhomogeneous and homogenous chiral microparticles is not prominent. The force difference for small inhomogeneous particles (R = 250 nm) is negligible because of the weak momentum transfer when the particle size is less than the wavelength23,37,43. The lateral force and force difference become larger with increasing particle size. The force difference is more prominent at a larger incident angle, which can be explained by the sketch in Supplementary Fig. S3. In practice, the synthetic chiral particles tend to retain good performance in terms of chirality35. Moreover, the inhomogeneous effect can be eliminated by choosing a proper angle (e.g., 45°). ### Lateral momentum transfer on Mie chiral particles To comprehend the optical lateral force, we plot the yz view of the 3D distribution of the time-averaged Poynting vector surrounding a chiral particle with a radius of 500 nm, as shown in Fig. 3a. The particle with chirality κ = +0.4 is placed at an air–water interface (half in air (z > 0) and half in water (z < 0)) and illuminated by an s-polarized plane wave with an incident angle of 45°. 
The helix structure of the chiral particle causes the energy flow to spiral and scatter away from the incident plane (xz) to the lateral plane (yz). The energy flow then passes through the surface of the chiral particle and goes into the air and water regions, causing momentum exchange and generating the optical lateral force. The energy flux has distinct asymmetry and higher density in the water region, especially near the particle boundary. It is worth noting that the lateral force Flateral should be multiplied by the refractive index n, which is the refractive index of water (1.33) or air (~1), based on the Minkowski stress tensor. Therefore, the net force in the y direction is dominantly contributed by the energy scattered from the particle to water. The normalized electric field is denser in the +y direction, as shown in the background of Fig. 3a. At the same time, most Poynting vectors point in the +y direction from the particle to water, resulting in a negative force Fy. Since the light is obliquely incident, the normalized electric field is focused in the water after passing through the particle, as shown in Fig. 3b. For chiral particles, our results indicate that the lateral forces arise from a complex interplay between the “out-of-plane” light scattering from the chiral particle to air and water and the abovementioned “in-plane” momentum exchange. To obtain a comprehensive view of the energy scattering from the chiral particle, we show slices of the scattering field along the direction of δ in Fig. 3c–j. The energy scattering has a bias in the +y direction when δ is from −180 to +120 nm. Only slightly more energy is scattered in the –y direction when δ ranges from +180 to +240 nm. As the scattering field is densest and shows a clear bias toward the +y direction in the plane from δ = +60 and +120 nm, the net energy is scattered in the +y direction, resulting in a negative optical lateral force. The chiral particle with κ = +0.4 experiences a positive lateral force when θ < 18°, which can be explained by the plot of the electric field and Poynting vector in the yz plane at x = 200 nm, as shown in Fig. 3k. It shows a distinct bias of energy scattering toward the –y direction when θ = 10°. The energy scattering direction reverses when θ = 45° (x = 0), as shown in Fig. 3l. Detailed simulations of the lateral momentum transfer when θ = 0° and 45° are shown in Supplementary Figs. S4 and S5, respectively. It is noted that because Fy is much smaller when θ = 10° than when θ = 45°, the momentum transfer has a different bias in different layers. It is safe to deduce the optical force using the overall 3D Poynting vector in Supplementary Fig. S4a or using the numerical results in Fig. 1e. It is unambiguous that the momentum has a distinct bias towards the +y direction in most of the layers for θ = 45°, resulting in Fy < 0. Figure 3k, l is chosen to represent the net momentum transfer under different angles. More simulations of the lateral momentum transfer under different incident angles, polarizations, chiralities and sizes can be found in Supplementary Figs. S6S9. ### Experimental setup and sample characterization To observe the lateral movement of Mie chiral particles, a line-shaped laser spot for creating a line trap was introduced into a microscope stage where an optofluidic chip was placed, as shown in Fig. 4a–c. The dimensions of the laser spot were kept at 80 × 600 μm2, controlled by two cylindrical lenses, as shown in Fig. 4c. 
The 80-µm width is used to generate an optical gradient force to confine microparticles inside the line trap. The 600-µm length mitigates the influence of the optical gradient force on the lateral force. The optical gradient force in the lateral (y-) direction is negligible compared to the optical lateral force (see Supplementary Figs. S10 and S11 for detailed simulations). Chiral particles were synthesized with resonance at 532 nm, as shown in Fig. 4d. The polymeric microparticles exhibit chirality at both the molecular (chiral additive molecules) and supramolecular levels. The chiral supramolecular contribution gives rise to a Bragg-reflection phenomenon for circularly polarized light with the same handedness as the particle chirality and wavelength in a proper range (np < λ < $$n_{II}p$$, where n and $$n_{II}$$ are the refractive indices perpendicular and parallel to the molecular direction, respectively; p is the pitch of the helicoidal supramolecular organization). Omnidirectional reflection occurs based on the supramolecular radial configuration of the helices, while the handedness of the reflected circularly polarized light (CPL) is preserved, acting as a chiral mirror. Depending on the particle chirality, the CPL with opposite handedness propagates with a constant refractive index $$\bar n = \frac{{n_{II} + n_ \bot }}{2} = 1.5$$. The antiparallel reflectance value Rap can be evaluated as the average over the two orthogonal polarization directions with respect to the incidence plane, which can be expressed using the equation $$R_{ap} = \frac{1}{2}\left( {\frac{{{\mathrm{sin}}^2\left( {\theta} \,-\, {\beta } \right)}}{{{\mathrm{sin}}^2\left( {\theta} \,+\, {\beta } \right)}} + \frac{{{\mathrm{tan}}^2\left( {\theta} \,-\, {\beta } \right)}}{{{\mathrm{tan}}^2\left( {\theta} \,+\, {\beta } \right)}}} \right)$$, where θ is the incidence angle at the surface of the sphere and β is the refraction angle. In contrast, the CP light with the same handedness as the helix handedness and wavelength within the selective reflection band can be strongly reflected, and the reflectance Rp can be evaluated from $$R_p = \left| {\tan h\left( {\frac{{\sqrt 2 \pi \left( {n_{II}^2 - n_ \bot ^2} \right)R}}{{3\lambda \sqrt {\left( {n_{II}^2 + n_ \bot ^2} \right)} }}} \right)} \right|^2$$. Finally, the value of particle reflectance Rs depending on the light polarization, the particle size and the light wavelength can be expressed as45,46 $$R_s = R_p\left( {\frac{{1 + \sin 2\phi }}{2}} \right) + R_{ap}$$ (2) where ϕ is the ellipticity angle. Rap, which is related to the refractive index difference at the air–particle interface, has a value of ~0.05. Therefore, Rp is only related to the radius of the particle for the present case, as plotted in Fig. 4e. For particles with R ≥ 6 μm, Rp can reach a value of 1, i.e., the CP parallel component is completely reflected. Since the particles exploited in the experiment have a radius from 0.5–1 μm, Rp ranges from 0.08 to 0.28, as shown in Fig. 4e. The expected Rs at the air–particle interface is in the range of 9‒19%. Because the absorption of the polymer as well as the circular dichroism is very low in the visible range, the transmittance T ≈ 1 − Rs. Based on this assumption, we can introduce and evaluate a “structural dichroism” $$D = \frac{{T_ + - T_ - }}{{T_ + + T_ - }}$$, where T+/− are the transmittances for left/right CP light. D ranges from 0 to (+/−) 1 for (left/right) chiral particles with R ≤ 6 μm and is (+/−) 1 for (left/right) chiral particles with R ≥ 6 μm. 
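The reflectance formulas quoted above lend themselves to a quick numerical check. The sketch below is not from the paper: only the average refractive index (~1.5) and the wavelength are given in the text, so the split into $n_{II}$ and $n_\bot$ and the zero ellipticity angle are assumptions made purely to see whether the formulas reproduce the quoted magnitudes ($R_p \approx 0.08$ to $0.28$ for $R = 0.5$ to $1$ µm, $R_{ap} \approx 0.05$, $R_s \approx 9$ to $19\%$).

```python
import numpy as np

lam = 532e-9                # laser wavelength (m)
n_par, n_perp = 1.58, 1.42  # assumed indices parallel/perpendicular to the director;
                            # only their mean (~1.5) is quoted, the split is a guess

def R_p(R):
    """Bragg reflectance for co-handed CP light, per the tanh formula above."""
    x = (np.sqrt(2) * np.pi * (n_par**2 - n_perp**2) * R
         / (3 * lam * np.sqrt(n_par**2 + n_perp**2)))
    return np.abs(np.tanh(x)) ** 2

def R_ap(theta):
    """Averaged Fresnel-like reflectance for the counter-handed component at incidence theta (rad)."""
    n_bar = 0.5 * (n_par + n_perp)
    beta = np.arcsin(np.sin(theta) / n_bar)   # refraction angle at the air-particle surface
    return 0.5 * (np.sin(theta - beta)**2 / np.sin(theta + beta)**2
                  + np.tan(theta - beta)**2 / np.tan(theta + beta)**2)

for R in (0.25e-6, 0.5e-6, 1.0e-6, 6.0e-6):
    print(f"R = {R*1e6:4.2f} um : R_p = {R_p(R):.2f}")

theta, phi = np.radians(45.0), 0.0            # phi: assumed ellipticity angle (linear polarization)
R_s = R_p(0.5e-6) * (1 + np.sin(2 * phi)) / 2 + R_ap(theta)
print(f"R_s at 45 deg for R = 0.5 um: {R_s:.2f}")
```

With this assumed birefringence the numbers land in the quoted ranges, and pushing $R$ to 6 µm drives $R_p$ towards 1, consistent with the statement that the co-handed component is completely reflected for large particles.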
As discussed above, the handedness of CP beams affects the reflectivity of chiral particles. When this effect is strong (Rp = 1), the radiation pressure dominates, while for Rp < 0.3, the radiation pressure is reduced and the effect of the lateral force (lateral scattering) on microparticles at the interface occurs. We can also expect different scattering efficiencies in the lateral direction for different CP beams. The optical lateral force on the chiral microparticles can be comprehended by dividing the linearly polarized beam into two CP beams with different helicities. ### Experimental demonstration of the bidirectional sorting of Mie chiral particles Bidirectional sorting of polymeric particles performed at room temperature (20 °C) is shown in Fig. 5a–d. The particles were initially passed through a mechanical filter with 2-µm pores to eliminate particles larger than 2 µm. To avoid or mitigate the complex dependence of lateral forces on the size and chirality, we used s- and p-polarized beams with an incident angle θ = 45° in the experiment according to the simulation results in Figs. 1 and 2. Particles were then freely floated at the air–water interface. Due to the preparation process, some particles with small sizes or slightly different pitches presented weak chirality coupling, which served as references for the lateral movement. Because of the particularities of the experiment and the small particle size, the scattered light of chiral microparticles was used to observe the lateral displacements (see Supplementary Fig. S12). When illuminated with the s-polarized laser beam, the particles with weak chirality coupling were stably trapped inside the line trap, as shown in Fig. 5a, b. Three right-handed microparticles (κ > 0, marked with white circles) experienced an optical lateral force in the –y direction, as shown in Fig. 5a. They had different velocities because of the different sizes and chirality couplings. The maximum velocity of the three particles was ‒8.5 μm/s. The reference particle (marked with white squares) with negligible optical lateral force had an only 21-μm lateral displacement in 24 s, resulting in a velocity of −0.9 μm/s. This movement was caused by the heating-induced vibration of the background flow. Since the polymerized chiral particle and water had negligible absorption of 532 nm light, the velocities of the background flow induced by the heating were normally less than 1 μm/s, which were much smaller than the velocities induced by the lateral forces. Meanwhile, this vibration could be easily characterized by observing particles with the same slow velocity (e.g., F1, F2 and F3 in Fig. 5a) and could be easily eliminated by subtracting this velocity from the overall velocities of chiral microparticles (see Supplementary Fig. S13 for more results). The background particle movement could result from the heating-induced thermophoretic force47,48, which can be estimated using $$F_t = - 9\pi R\eta ^2\Delta T/\left( {2 + C_m/C_p} \right)/(\rho T)$$49, where R and Cp are the radius and thermal conductivity of the particle, respectively. η, Cm, ρ, T, and ∆T are the viscosity, thermal conductivity, density, temperature, and temperature gradient of the medium, respectively. Since particles were placed half in air and half in water, the optical forces could be deduced from the velocities of particles and expressed as Fdrag = 0.5 × 6 πηRv, where η is the viscosity of the liquid and R and v are the radius and velocity of the particle, respectively. 
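The next paragraph substitutes a 1 µm/s drift velocity into these two formulas to estimate the equivalent temperature gradient. The sketch below repeats that arithmetic with assumed, textbook-level material constants (they are not taken from the paper), so its output should be read only as an order-of-magnitude check against the ~0.2 °C/mm quoted there.

```python
import numpy as np

# Assumed constants (not from the paper):
eta = 1.0e-3   # viscosity of water (Pa*s)
rho = 1.0e3    # density of water (kg/m^3)
T   = 293.0    # temperature (K)
C_m = 0.6      # thermal conductivity of water (W/m/K)
C_p = 0.2      # thermal conductivity of the polymer particle (W/m/K), a guess
R   = 0.5e-6   # particle radius (m)
v   = 1.0e-6   # observed background drift velocity (m/s)

# Drag on a half-immersed particle, as in the text: F_drag = 0.5 * 6*pi*eta*R*v
F_drag = 0.5 * 6 * np.pi * eta * R * v

# Magnitude of the thermophoretic force from the quoted formula,
# F_t = 9*pi*R*eta^2*gradT / ((2 + C_m/C_p) * rho * T),
# set equal to F_drag and solved for the temperature gradient gradT:
gradT = F_drag * (2 + C_m / C_p) * rho * T / (9 * np.pi * R * eta**2)

print(f"F_drag = {F_drag:.2e} N")
print(f"equivalent gradient ~ {gradT:.0f} K/m = {gradT/1e3:.2f} deg C per mm")
```

With these particular constants the gradient comes out at a few tenths of a degree Celsius per millimetre; the exact value is sensitive to the assumed conductivity ratio $C_m/C_p$, but the order of magnitude matches the estimate in the text.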
Substituting the velocity of 1 µm/s into the equation Ft = Fdrag, we obtained the equivalent temperature gradient of ~0.2 °C/mm, which could be reached when a laser beam is focused on glass or into water with salt or other chemicals47,48. Two left-handed particles (κ < 0, marked with white circles) experienced optical forces in the +y direction, as shown in Fig. 5b. The maximum velocity of the two particles was +3.1 μm/s. The background flow velocity was −0.4 μm/s. The velocities of particles with different handedness under different laser powers are shown in Fig. 5c. When illuminated with an s-polarized beam, particles with κ > 0 and κ < 0 experienced optical forces in the −y and +y directions, respectively. The velocities linearly increased with laser power, showing good feasibility of our method for sorting particles with different chiralities. The averaged velocities of particles with κ > 0 were approximately twice those of particles with κ < 0. The velocities were obtained from the maximum velocities for different sizes in each video. The absolute value of the lateral force increased almost linearly with particle size for both s- and p-polarizations when the radius increased from 250 to 1000 nm, as shown in Fig. 5d. Interestingly, the directions of the lateral forces for the p-polarization were opposite to those for the s-polarization, in accordance with the simulation results in Fig. 2f, g. The absolute values of the lateral forces for small particles (R = 250 nm) under the illumination of the p-polarized beam were much smaller than those under the illumination of the s-polarized beam. However, the lateral forces did not differ greatly for larger particles (R > 250 nm). This effect also coincides with the simulation results. Therefore, the s-polarized beam was a better option for bidirectional sorting of Mie chiral particles than the p-polarized beam. ## Discussion One may have the following question: are there any high-order multipoles in the Mie chiral particles? Recently, broad interest has emerged in the study of intriguing high-order multipoles in dielectric elements, including the multipoles and bound states in the continuum (BIC) in nanocylinders50,51, as well as the multipole resonance enhanced second harmonic generation (SHG) in AlGaAs (aluminium gallium arsenide)52. The existence of electric and magnetic modes enhances the scattering cross sections and optical forces. We could also expect these high-order modes in chiral particles and enhanced optical forces (both radiation and lateral). However, the appearance of these high-order multipoles requires some criteria to be met, e.g., a high refractive index (normally RI > 3), a small size (normally < wavelength/2), and a specific structure (e.g., specific length/radius ratio in cylinders). Since our chiral particles have a low refractive index (~1.5) and a relatively large size (~wavelength), high-order multipoles are unlikely to occur. This can also be concluded from the force maps in Fig. 1e–h, as the distribution of optical force does not have any abrupt change coming from multipoles. In summary, we reveal an unexpected behavior of chirality-dependent lateral forces when chiral microparticles in the Mie regime are located at the interface between air and water. Our numerical simulations show that the sign of the optical lateral force depends not only on the chirality, as expected from the dipole approximation in previous papers, but also strongly on the incident angle, beam polarization, and particle size. 
The sign reversal of the chirality-dependent lateral force can be regarded as a chiral analogue of “negative” forces or “left-handed” torques. In practice, by choosing s- and p-polarized beams with an incident angle of 45°, for the first time, we demonstrate sorting of Mie cholesteric polymeric microparticles using an optical lateral force. Particles with left and right chirality experience optical lateral forces with opposite directions. Particles with the same chirality experience opposite optical lateral forces under s- and p-polarized beams when θ = 45°. Our studies on Mie chiral microparticles complete the understanding of the recent theoretically proposed extraordinary optical lateral force from the aspect of momentum transfer and open up new avenues for probing and sorting of micro-objects with different chiralities. ## Materials and methods ### Sample preparation and characterization Polymerized liquid crystal microparticles were produced via UV irradiation of micron-sized droplet emulsions of photopolymerizable cholesteric liquid crystals in water. A nematic reactive mesogen, RMS03-001C (Merck KGaA, Germany), was used after solvent evaporation. The cholesteric phase was achieved by doping it with a chiral agent. The molar circular dichroism of R/S811 was measured in the blue–green region of the spectrum by exploiting a mixture of the chiral dopants in ethanol at a concentration of 1.4% by weight. The measured value of the molar circular dichroism for both chemical agents is ∆ε ≈ 1 cm−1. To produce left- and right-handed microparticles, two different mixtures were prepared with a left-handed (ZLI-811 Merck KGaA, Germany) or a right-handed (ZLI-3786 Merck KGaA, Germany) chiral agent. The left- and right-handed chiral dopants lead to a left or right rotation of the nematic director, inducing a left-handed or right-handed supramolecular helicoidal structure, respectively. The chiral dopant concentration was fixed at 22.5 wt% for both mixtures to achieve helicoidal structures with a pitch of ~330 nm, which leads to enhanced coupling with the 532 nm laser beam. Among the different techniques used to manufacture cholesteric droplets, including emulsification and microfluidics approaches, the only feasible method here is emulsification due to the high viscosity of the reactive mesogen. The cholesteric microdroplets were obtained in aqueous emulsions by adding 0.5 wt% of the chiral mesogen mixture into ultrapure water (≥18.2 M_@25 °C, Synergy UV, Millipore), which produced a parallel (i.e., planar) molecular orientation at the interface. The blends were shaken at 20 Hz for 30 s at 90 °C in a glass vessel using a laboratory vortex mixer. Subsequently, polymerized chiral particles were obtained by exposing the emulsions to a 2 mW/cm2 UV lamp (λ = 365 nm, LV202-E, Mega Electronics) at room temperature for 6 h under nitrogen flux. The resulting chiral solid microparticles preserve both the spherical shape and internal supramolecular arrangement of the precursor liquid crystal droplets, allowing the experimental investigation of floating microparticles35. The optical microscope observations reveal that almost all the microparticles have a radial configuration of the helix axes of the particles, while a small pitch dispersion is displayed by the reflected color. The average refractive index of the polymeric chiral particles is 1.5 at 532 nm. The suspension was initially passed through a 2-µm mechanical filter to eliminate particles larger than 2 µm. 
Dynamic light-scattering (Zetasizer Nano ZS, Malvern) measurements were performed, and a polydispersity index PDI = 0.35 was measured. The transmission spectra of the left- and right-handed polymers are shown in Fig. 4d. Since the density of the microparticles is higher than that of DI water, we used a saturated potassium chloride (KCl) solution in deionized (DI) water to float them on the surface. The refractive index of the saturated KCl solution at 20 °C is ~1.336. Due to the low absorption of the materials at the used wavelength, the value of the molecular circular dichroism is very small. However, at this wavelength, the circular dichroism stems from diffraction of light46,53,54. A Bragg-reflection phenomenon46,53,54 occurs for circularly polarized light with the same handedness as the material/particle chirality due to the supramolecular shell arrangement. Such "structural dichroism" can be evaluated by the difference between the transmission coefficients of the two circular polarizations, $$D = \frac{T_+ - T_-}{T_+ + T_-}$$, where T+/− are the transmittances for left/right CP light46,53,54. Accordingly, omnidirectional uniform reflectance occurs for particles with a radial configuration of the helical axes, as shown in Fig. 1d. The structural dichroism D strongly depends on the R/p ratio46,53,54. For large particles ((R/p) > 12, see Fig. 4e), D is $$\cong\!\pm\! 1$$ depending on the particle chirality, and the optical force induced by radiation pressure dominates. Conversely, for small particles, the radiation pressure force is reduced, allowing other optomechanical phenomena to be observed46,53,54, as in the present case. Indeed, the value of D ranges from nearly 0 (for R < 500 nm) to 0.06 (for R ≈ 1000 nm). Therefore, based on the above issues, polymeric microparticles with sizes <2 µm exhibit unique features that enable experimental investigation of the lateral force and a reliable fit of the approximation of spherical particles with uniform chirality adopted in the theoretical modeling. More images of the polymeric chiral microparticles can be found in Supplementary Fig. S14.

### SEM and TEM measurements

SEM (Quanta 400 FEG, FEI) analysis was carried out in low vacuum on fully polymerized microparticles after water evaporation. To perform TEM measurements, the polymeric microparticles were first embedded in an epoxy resin (Araldite, Fluka) and successively cut into ultrathin sections of ~100 nm by a diamond knife. The ultrathin sections were collected on copper grids and then examined with a Zeiss EM10 transmission electron microscope at an 80 kV acceleration voltage. The concentric ring structures observed in the TEM images in Fig. 1d correspond to the topography of the thin slices. These corrugations are due to the cutting process and occur due to a certain orientation of the molecular director n with respect to the cutting direction. Moreover, the equidistance between dark and bright concentric rings suggests that the investigated section was within an equatorial region of the particle.

### Chip fabrication and experimental setup

The optofluidic chip was made from polydimethylsiloxane (PDMS)55. A PDMS slice was first cut into a block (2 × 2 cm2). A square well (5 × 5 mm2) was drilled at the center of this block using a scalpel. Then, the PDMS block was bonded to a cover slide (0.17 mm) using plasma treatment56. The whole chip was placed onto the stage of an inverted optical microscope (TS 100 Eclipse, Nikon).
It was then covered by a culture dish to prevent environmental disturbance from air flow. A c.w. laser (532 nm, Laser Quantum, mpc 6000; laser power, 2 W) was obliquely incident into the holes. The beam was focused into a line trap using a combination of two cylindrical lenses with focal lengths of 300 and 100 mm. The area of this line trap was kept at 80 × 600 μm2 to trap microparticles inside and minimize the lateral gradient force. The chiral microparticles at the air–water interface were imaged through a ×10 microscope objective (NA 0.25, Nikon) using a charge-coupled device camera (Photron Fastcam SA3) with a frame rate of 125 frames per second. ### Simulation details and constitutive relations of chiral particles We simulated the Poynting vector and optical lateral force in COMSOL by applying the constitutive relations of a chiral particle, which can be expressed as $${\mathbf{D}} = \varepsilon _r\varepsilon _0{\mathbf{E}} + i\kappa /c{\mathbf{H}}$$ $${\mathbf{B}} = - i\kappa /c{\mathbf{E}} + \mu _r\mu _0{\mathbf{H}}$$ where εr and µr are the relative permittivity and permeability of the chiral particle, respectively. The sign of kappa (κ) is positive, negative, and zero when the chiral particle is right-handed, left-handed, and nonchiral, respectively. The particle was placed at the interface of water (refractive index n = 1.33) and air (n = 1) under the illumination of a plane wave (wavelength λ = 532 nm). The simulation was conducted in a sphere with a diameter of 2 µm and a PML boundary condition. The maximum size of the mesh was set to λ/8/n, with n being the refractive index of the different media. The optical force was calculated using the Minkowski stress tensor written in COMSOL. ## References 1. 1. Zerrouki, D. et al. Chiral colloidal clusters. Nature 455, 380–382 (2008). 2. 2. Chela-Flores, J. The origin of chirality in protein amino acids. Chirality 6, 165–168 (1994). 3. 3. Frank, H., Nicholson, G. J. & Bayer, E. Rapid gas chromatographic separation of amino acid enantiomers with a novel chiral stationary phase. J. Chromatographic Sci. 15, 174–176 (1977). 4. 4. Margolin, A. L. Enzymes in the synthesis of chiral drugs. Enzym. Microb. Technol. 15, 266–280 (1993). 5. 5. Nguyen, L. A., He, H. & Pham-Huy, C. Chiral drugs: an overview. Int. J. Biomed. Sci. 2, 85–100 (2006). 6. 6. Agranat, I., Caner, H. & Caldwell, J. Putting chirality to work: the strategy of chiral switches. Nat. Rev. Drug Discov. 1, 753–768 (2002). 7. 7. Brooks, W. H., Guida, W. C. & Daniel, K. G. The significance of chirality in drug design and development. Curr. Top. Medicinal Chem. 11, 760–770 (2011). 8. 8. Dogariu, A., Sukhov, S. & Sáenz, J. Optically induced ‘negative forces’. Nat. Photonics 7, 24–27 (2013). 9. 9. Gao, D. L. et al. Optical manipulation from the microscale to the nanoscale: fundamentals, advances and prospects. Light.: Sci. Appl. 6, e17039 (2017). 10. 10. Shi, Y. Z. et al. Sculpting nanoparticle dynamics for single-bacteria-level screening and direct binding-efficiency measurement. Nat. Commun. 9, 815 (2018). 11. 11. Shi, Y. Z. et al. Nanometer-precision linear sorting with synchronized optofluidic dual barriers. Sci. Adv. 4, eaao0773 (2018). 12. 12. Ashkin, A. et al. Observation of a single-beam gradient force optical trap for dielectric particles. Opt. Lett. 11, 288–290 (1986). 13. 13. Shi, Y. Z. et al. Nanophotonic array-induced dynamic behavior for label-free shape-selective bacteria sieving. ACS Nano 13, 12070–12080 (2019). 14. 14. Swartzlander, G. A. Jr. et al. Stable optical lift. 
Nat. Photonics 5, 48–51 (2011). 15. 15. Antognozzi, M. et al. Direct measurements of the extraordinary optical momentum and transverse spin-dependent force using a nano-cantilever. Nat. Phys. 12, 731–735 (2016). 16. 16. Svak, V. et al. Transverse spin forces and non-equilibrium particle dynamics in a circularly polarized vacuum optical trap. Nat. Commun. 9, 5453 (2018). 17. 17. Bekshaev, A. Y., Bliokh, K. Y. & Nori, F. Transverse spin and momentum in two-wave interference. Phys. Rev. X 5, 011039 (2015). 18. 18. Bliokh, K. Y., Bekshaev, A. Y. & Nori, F. Extraordinary momentum and spin in evanescent waves. Nat. Commun. 5, 3300 (2014). 19. 19. O’Connor, D. et al. Spin-orbit coupling in surface plasmon scattering by nanostructures. Nat. Commun. 5, 5327 (2014). 20. 20. Wang, S. B. & Chan, C. T. Lateral optical force on chiral particles near a surface. Nat. Commun. 5, 3307 (2014). 21. 21. Hayat, A., Mueller, J. P. B. & Capasso, F. Lateral chirality-sorting optical forces. Proc. Natl Acad. Sci. USA 112, 13190–13194 (2015). 22. 22. Chen, H. J. et al. Chirality sorting using two-wave-interference-induced lateral optical force. Phys. Rev. A 93, 053833 (2016). 23. 23. Zhang, T. H. et al. All-optical chirality-sensitive sorting via reversible lateral forces in interference fields. ACS Nano 11, 4292–4300 (2017). 24. 24. Diniz, K. et al. Negative optical torque on a microsphere in optical tweezers. Opt. Express 27, 5905–5917 (2019). 25. 25. Tkachenko, G. & Brasselet, E. Helicity-dependent three-dimensional optical trapping of chiral microparticles. Nat. Commun. 5, 4491 (2014). 26. 26. Kravets, N., Aleksanyan, A. & Brasselet, E. Chiral optical Stern-Gerlach Newtonian experiment. Phys. Rev. Lett. 122, 024301 (2019). 27. 27. Spivak, B. & Andreev, A. V. Photoinduced separation of chiral isomers in a classical buffer gas. Phys. Rev. Lett. 102, 063004 (2009). 28. 28. Li, Y., Bruder, C. & Sun, C. P. Generalized Stern-Gerlach effect for chiral molecules. Phys. Rev. Lett. 99, 130403 (2007). 29. 29. Cao, T. & Qiu, Y. M. Lateral sorting of chiral nanoparticles using Fano-enhanced chiral force in visible region. Nanoscale 10, 566–574 (2018). 30. 30. Zhao, Y. et al. Nanoscopic control and quantification of enantioselective optical forces. Nat. Nanotechnol. 12, 1055–1059 (2017). 31. 31. Zhao, Y., Saleh, A. A. E. & Dionne, J. A. Enantioselective optical trapping of chiral nanoparticles with plasmonic tweezers. ACS Photonics 3, 304–309 (2016). 32. 32. Tkachenko, G. & Brasselet, E. Optofluidic sorting of material chirality by chiral light. Nat. Commun. 5, 3577 (2014). 33. 33. Tkachenko, G. & Brasselet, E. Spin controlled optical radiation pressure. Phys. Rev. Lett. 111, 033605 (2013). 34. 34. Kravets, N. et al. Optical Enantioseparation of racemic emulsions of chiral microparticles. Phys. Rev. Appl. 11, 044025 (2019). 35. 35. Donato, M. G. et al. Polarization-dependent optomechanics mediated by chiral microresonators. Nat. Commun. 5, 3656 (2014). 36. 36. Milonni, P. W. & Boyd, R. W. Momentum of light in a dielectric medium. Adv. Opt. Photonics 2, 519–553 (2010). 37. 37. Ndukaife, J. C. et al. Long-range and rapid transport of individual nano-objects by a hybrid electrothermoplasmonic nanotweezer. Nat. Nanotechnol. 11, 53–59 (2016). 38. 38. Wang, K. et al. Trapping and rotating nanoparticles using a plasmonic nano-tweezer with an integrated heat sink. Nat. Commun. 2, 469 (2011). 39. 39. Lin, S. Y., Schonbrun, E. & Crozier, K. Optical manipulation with planar silicon microring resonators. Nano Lett. 10, 2408–2411 (2010). 
40. 40. Shi, Y. Z. et al. High-resolution and multi-range particle separation by microscopic vibration in an optofluidic chip. Lab A Chip 17, 2443–2450 (2017). 41. 41. Cipparrone, G. et al. Chiral self-assembled solid microspheres: a novel multifunctional microphotonic device. Adv. Mater. 23, 5773–5778 (2011). 42. 42. Qiu, C. W. et al. Photon momentum transfer in inhomogeneous dielectric mixtures and induced tractor beams. Light.: Sci. Appl. 4, e278 (2015). 43. 43. Kajorndejnukul, V. et al. Linear momentum increase and negative optical forces at dielectric interface. Nat. Photonics 7, 787–790 (2013). 44. 44. Kruk, S. & Kivshar, Y. Functional meta-optics and nanophotonics governed by mie resonances. ACS Photonics 4, 2638–2649 (2017). 45. 45. Hernández, R. J. et al. Chiral resolution of spin angular momentum in linearly polarized and unpolarized light. Sci. Rep. 5, 16926 (2015). 46. 46. Hernández, R. J. et al. Cholesteric solid spherical microparticles: chiral optomechanics and microphotonics. Liq. Cryst. Rev. 4, 59–79 (2016). 47. 47. Duhr, S. & Braun, D. Why molecules move along a temperature gradient. Proc. Natl Acad. Sci. USA 103, 19678–19682 (2006). 48. 48. Schermer, R. T. et al. Laser-induced thermophoresis of individual particles in a viscous liquid. Opt. Express 19, 10571–10586 (2011). 49. 49. Saxton, R. L. & Ranz, W. E. Thermal force on an aerosol particle in a temperature gradient. J. Appl. Phys. 23, 917–923 (1952). 50. 50. Rybin, M. V. et al. High-Q supercavity modes in subwavelength dielectric resonators. Phys. Rev. Lett. 119, 243901 (2017). 51. 51. Staude, I., Pertsch, T. & Kivshar, Y. S. All-dielectric resonant meta-optics lightens up. ACS Photonics 6, 802–814 (2019). 52. 52. Koshelev, K. et al. Subwavelength dielectric resonators for nonlinear nanophotonics. Science 367, 288–292 (2020). 53. 53. Bouligand, Y. & Livolant, F. The organization of cholesteric spherulites. J. de. Phys. 45, 1899–1923 (1984). 54. 54. Seč, D. et al. Geometrical frustration of chiral ordering in cholesteric droplets. Soft Matter 8, 11982–11988 (2012). 55. 55. Chen, X. Y. & Li, T. C. A novel passive micromixer designed by applying an optimization algorithm to the zigzag microchannel. Chem. Eng. J. 313, 1406–1414 (2017). 56. 56. Chen, X. Y. et al. Numerical and experimental investigation on micromixers with serpentine microchannels. Int. J. Heat. Mass Transf. 98, 131–140 (2016). ## Acknowledgements C.-W.Q. acknowledges the financial support from the Ministry of Education, Singapore (Project No. R-263-000-D11-114) and from the National Research Foundation, Prime Minister’s Office, Singapore under its Competitive Research Program (CRP award NRFCRP15-2015-03 and NRFCRP15-2015-04). Y.Z.S. and A.Q.L. acknowledge the Singapore National Research Foundation under the Competitive Research Program (NRF-CRP13-2014-01) and the Incentive for Research & Innovation Scheme (1102-IRIS-05-04) administered by PUB. T.T.Z. acknowledges the Fundamental Research Funds for the Central Universities (DUT19RC(3)046). J.J.S. was supported by the Spanish Ministerio de Economía y Competitividad (MICINN) and European Regional Development Fund (ERDF) Project FIS2015-69295-C3-3-P and the Basque Dep. de Educación Project PI-2016-1-0041. G.C. and A.M. acknowledge Camilla Servidio for DLS measurements. ## Author information Authors ### Contributions Y.Z.S., T.T.Z., and C.-W.Q. jointly conceived the idea. T.T.Z., Y.Z.S., T.H.Z., and C.-W.Q. performed the numerical simulations and theoretical analysis. Y.Z.S, A.M., and G.C. 
performed the experiment and fabrication. Y.Z.S., T.T.Z., T.H.Z., A.M., D.P.T., W.Q.D., A.Q.L., G.C., J.J.S., and C.-W.Q. were involved in the discussion. Y.Z.S., T.T.Z., J.J.S., G.C., and C.-W.Q. prepared the paper. C.-W.Q. supervised and coordinated all the work. All authors commented on the paper. ### Corresponding author Correspondence to Cheng-Wei Qiu. ## Ethics declarations ### Conflict of interest The authors declare that they have no conflict of interest. This paper is in deep memory of Prof. Saenz, our beloved and dearest friend, who passed away on 22 March 2020 in Spain. We sorely miss you, Juanjo! ## Rights and permissions Shi, Y., Zhu, T., Zhang, T. et al. Chirality-assisted lateral momentum transfer for bidirectional enantioselective separation. Light Sci Appl 9, 62 (2020). https://doi.org/10.1038/s41377-020-0293-0
2021-05-08 01:26:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5524957180023193, "perplexity": 2474.256126098688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988831.77/warc/CC-MAIN-20210508001259-20210508031259-00264.warc.gz"}
https://www.physicsforums.com/threads/simple-questions-about-lp-spaces.609645/
# Simple questions about Lp spaces

1. May 28, 2012

### EV33

1. The problem statement, all variables and given/known data
My question is just on the definition of $L^\infty$. Is $L^\infty = L^p$ with $p=\infty$, i.e., is a measurable function in $L^\infty$ if $\int_A |f(x)|^\infty < \infty$?

2. Relevant equations
*$L^\infty$: the space of all bounded measurable functions on [0,1] (bounded except possibly on a set of measure zero).
*A measurable function is said to belong to $L^p$ if $\int_A |f(x)|^p < \infty$.

3. The attempt at a solution
It looks like it would be true based on the definition of $L^p$, but I am really not sure since Royden only gives the one definition of $L^\infty$.

2. May 28, 2012

### EV33

Sorry, this should have been posted in the calculus and beyond section.

3. May 28, 2012

### algebrat

The infinity norm just takes the maximum value of f. For general p-norms, as p increases, more weight is put on the larger values. If you want an intuitive clue to the idea, notice that if you are measuring the hypotenuse based off of two values (the two sides), then if one of the sides is much larger than the other, the hypotenuse comes pretty close to the length of the longer side. The infinity norm goes further, and simply gives you the length of the longer side, completely ignoring the other lengths. So does it make sense why $L^\infty$ is all the bounded functions, etc.?

The problem with your expression — the integral of the magnitude raised to infinity — is that when the exponent is p, we really mean to take the pth root of the integral. So for the infinity norm you would have to take an "infinitieth" root. Instead (they don't do that), one takes the limit in p of the pth root and proves that it is the supremum of |f|, up to a detail about sets of measure zero.

Last edited: May 28, 2012

4. May 29, 2012

### Vargo

So you need to take the limit of the p-norm as p goes to infinity and show that this gives the "essential" supremum (i.e. the $L^\infty$ norm). You prove this using the squeeze lemma. Let's assume the measure of the entire space is finite for simplicity, say the measure of the whole space is M (I think the same arguments could be adapted otherwise). According to Jensen's inequality applied to the convex function $\phi(x)=x^p$, and using the original measure divided by M (giving a probability measure), we can prove that
$\|f\|_p \leq M^{1/p}\|f\|_\infty$
As long as M is not zero, we find that the limsup of the p-norms is less than or equal to the $L^\infty$ norm.

For the other direction, let epsilon be positive. We know that there is a measurable subset E of positive measure on which the value of |f| is at least $\|f\|_\infty - \epsilon$. The $L^p$ norm of f is bounded below by its $L^p$ norm calculated over E:
$\|f\|_p\geq \left( \int_E |f|^p d\mu\right)^{1/p}\geq \left( (\|f\|_\infty-\epsilon)^p\mu(E)\right)^{1/p}=(\|f\|_\infty-\epsilon)(\mu(E))^{1/p}$
As long as the measure of E is positive, $(\mu(E))^{1/p}\to 1$, so the liminf of the $L^p$ norms as p gets large is at least $\|f\|_\infty-\epsilon$; since epsilon was arbitrary, the limit of the p-norms is the $L^\infty$ norm.
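A quick numerical illustration of the limiting behaviour discussed in the thread (my own sketch, not from the original posts): for a simple step function on [0,1], the p-norm approaches the essential supremum as p grows.

```python
import numpy as np

# Step function on [0,1]: f = 3 on [0, 0.2), f = 1 elsewhere.
# Its essential supremum (L-infinity norm) is 3.
x = np.linspace(0.0, 1.0, 100_000, endpoint=False)
f = np.where(x < 0.2, 3.0, 1.0)
dx = 1.0 / len(x)

for p in (1, 2, 10, 50, 200):
    # p-norm: (integral of |f|^p dmu)^(1/p), approximated by a Riemann sum
    norm_p = (np.sum(np.abs(f) ** p) * dx) ** (1.0 / p)
    print(f"p = {p:4d}: ||f||_p ~= {norm_p:.4f}")

print("||f||_inf =", np.max(np.abs(f)))
```

The printed values climb from 1.4 (p = 1) towards 3, matching the squeeze argument given above.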
2017-08-18 14:05:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9356062412261963, "perplexity": 384.49390852592137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104636.62/warc/CC-MAIN-20170818121545-20170818141545-00116.warc.gz"}
https://math.stackexchange.com/questions/3109711/why-is-clifford-group-a-group
# Why is the Clifford group a group?

Let $$C(Q)$$ denote the Clifford algebra of a vector space $$V$$ with respect to a quadratic form $$Q:V \rightarrow \Bbb R$$. Hence we have the relation $$w^2 = Q(w) \cdot 1$$ for $$w \in V$$. Let $$\alpha:C(Q) \rightarrow C(Q)$$ be the canonical automorphism, i.e. $$\alpha^2=\mathrm{id}$$ and $$\alpha(v)=-v$$ for $$v \in V$$. The Clifford group of $$Q$$ is $$\Gamma (Q) = \{ x \in C(Q)^* \mid \alpha(x) \cdot v \cdot x^{-1} \in V \text{ for all } v \in V \}$$ How is this set closed under inverses?

• You can rewrite the condition as $$\alpha(x)Vx^{-1}=V$$. – Lord Shark the Unknown Feb 12 at 5:39
• OK, it isn't clear to me why that equality holds: the idea is to define $$f_x :V \rightarrow V$$ by $$f_x(v)=\alpha(x) \cdot v \cdot x^{-1}$$. This map is injective, and $$V$$ is finite dimensional, so it is bijective. Is there an easier way? – CL. Feb 12 at 5:51
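A worked sketch of the argument hinted at in the comments (my own reconstruction, not part of the original thread): once the defining condition is upgraded to an equality, closure under inverses follows by conjugating back.

If $$x \in \Gamma(Q)$$, the map $$f_x(v) = \alpha(x)\,v\,x^{-1}$$ is linear and injective on $$V$$ (if $$\alpha(x)vx^{-1}=0$$ then $$v=0$$, since $$\alpha(x)$$ and $$x^{-1}$$ are invertible), hence surjective because $$V$$ is finite dimensional, so
$$\alpha(x)\,V\,x^{-1} = V.$$
Multiplying on the left by $$\alpha(x)^{-1} = \alpha(x^{-1})$$ and on the right by $$x$$ gives
$$\alpha(x^{-1})\,V\,(x^{-1})^{-1} = \alpha(x)^{-1}\bigl(\alpha(x)\,V\,x^{-1}\bigr)x = V,$$
so $$x^{-1} \in \Gamma(Q)$$. Closure under products is immediate because $$\alpha$$ is an algebra homomorphism: $$\alpha(xy)\,v\,(xy)^{-1} = \alpha(x)\bigl(\alpha(y)\,v\,y^{-1}\bigr)x^{-1} \in V.$$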
2019-02-17 01:21:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9935715198516846, "perplexity": 250.2401266602406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481428.19/warc/CC-MAIN-20190217010854-20190217032854-00143.warc.gz"}
https://electronics.stackexchange.com/questions/614380/hartley-oscillator-not-self-starting/614393
# Hartley oscillator not self-starting

I built a Hartley oscillator using an LM358 powered from a single rail of 5V. I used two identical inductors with inductance of 1 mH, a 0.01 uF capacitor, and 10k and 47k resistors for amplifier gain. I noticed that the oscillation does not start by itself, but if I give a voltage kick of 5V to the op-amp inverting input, then the oscillation starts and sustains, but the waveform has a lot of distortion. I thought Hartley oscillators are supposed to be self-starting. Any idea why the circuit is not self-starting? And how can the waveform be made more sinusoidal?

PS: the oscilloscope screenshot was taken with a 10 uF capacitor instead. When I was using 0.01 uF, the distortion was even worse and the spike was more prominent.

PS2: Here is a schematic of the circuit I used:

• Please post a complete schematic of what you actually built. Apr 3 at 4:59
• An actual picture of the circuit as it is built may be helpful too. – Ryan Apr 3 at 7:26
• Are you sure that the (-) op-amp input and the mid-point of the inductors are tied to ground? In that configuration, my schematic doesn't work. Apr 4 at 8:33
• Yes - the midpoint between both inductors must be at ground. However, this circuit cannot work because L1 acts only as a load to the op-amp. There must be a resistor Ro between the op-amp output and the top of L1. In this case, we have a 3rd-order highpass ladder topology Ro-L1-C2-L2 which shifts the phase at w=wo by 180deg. This is required for the Hartley principle. Very often, this resistor is forgotten when transferring from the transistor to the op-amp solution. – LvW Apr 4 at 11:09
• Correction: "...top of L1" means: between the op-amp output and the common node of L1 and C2. – LvW Apr 4 at 15:00

There are several problems, many related to the relative wimpiness of the LM358 when used from low supply voltages. Even though the LM358 is a "jellybean" part, it really gets going at relatively high supply voltages, e.g. +/-10V. Using it at 5V is a complex endeavor. Even a TLC272 would be a more forgiving part for such an oscillator.

1. The op-amp's operating point - the positive input - is set at 0V. Since the lower power supply potential is 0V, the op-amp won't be able to swing symmetrically with respect to 0V. For an LM358, whose input range includes ground, the operating point should be somewhere around (VCC − 3V)/2, or in this case 1.0V-1.5V.
2. The op-amp output is DC-shorted through the inductors to ground, at least as shown in your circuit. The LM358 will have trouble starting up with such a load, and may otherwise misbehave. The LM358 output should be AC-coupled to the tank instead.
3. A Hartley oscillator requires a 3rd-order filter network, i.e. the tank must be connected to the active element through a resistance to form an RC lowpass. With transistors, the output resistance takes care of it. With op-amps, the effective resistance is close to zero, so the resistance has to be added. In this case, an AC coupling can do that job, since it has fairly low reactance at the operating frequency, yet still much higher than that of the op-amp output.
4. A DC pull-up of approximately 2k-5kOhm to VCC will improve the output stage linearity - a common trick with op-amps whose output stage is the same as the LM358's.
5. The op-amp's negative input should be biased to the operating point voltage through L2.
6. The LM358 doesn't have all that much linear output swing when running from 5V.
A voltage clamp (D1) in the feedback circuit will crudely control the oscillation amplitude and prevent distortion from exceeding the useful output swing. With a better op-amp, the output voltage swing could be higher, and more diodes could be connected in series with D1. For the LM358, a single diode's worth of amplitude is about all you can get from a single Hartley stage, although physical hardware may be more lenient than the simulation. The output amplitude is about 0.5V. For higher output amplitude, use a 2nd gain stage. The output frequency is about 5.6kHz. Operation at higher frequencies is possible but may be problematic as Q drops and the op-amp runs out of the gain needed to compensate.

If you want to play with old-school 40+ year old parts, an LM13700 would be a much better match. It lends itself naturally to gain control and has more bandwidth, and >10x faster output slew rate vs. the LM358. It would have no trouble producing a reasonably clean 5Vpp sine wave from a single +10V supply. The venerable LM3900 could also act as a variable gain stage for a low-frequency (<10kHz) Hartley oscillator.

simulate this circuit – Schematic created using CircuitLab

The circuit above is not necessarily the best approach with better op-amps: the operating point will be different, the tank can be DC-coupled to the output via a series resistor, etc. The LM358 is a versatile part that requires, let's say, a versatile approach to overcoming its limitations.

When building an oscillator it is helpful to realize how and why it can oscillate. The principle of the Hartley oscillator is as follows: an inverting amplifier is equipped with a feedback loop consisting of a third-order highpass which produces a phase shift of 180° at a certain frequency $$\omega_0$$ (giving zero phase shift of the loop gain function).

1.) Using a BJT as the active device, the 3rd-order highpass is realized as a ladder structure R-L1-C-L2 with R = output resistance of the inverting BJT stage.
2.) When we replace the BJT with an op-amp with a very small output resistance, it is absolutely necessary to use an additional resistor R between the op-amp output and the rest of the feedback network. Otherwise the circuit cannot oscillate at the desired frequency.

• You mentioned an inverting amplifier having a feedback loop consisting of a 3rd-order highpass filter — is that a general property? Apr 3 at 20:30
• No - it depends on the feedback network. In general, we need zero phase shift at w=wo within the complete feedback loop. For example, when a bandpass is used (with zero phase shift at w=wo) a non-inverting amplifier is required. – LvW Apr 4 at 6:51

Here is what I get with Micro-Cap v12 transient analysis, with the inductor coupling set to zero. [Simulation plots: output at start-up; some time later, with the output slowly decaying; and a zoomed view of the final interval.]
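As a rough cross-check of the component values stated in the question (my own back-of-envelope sketch, not taken from the answers, and assuming negligible mutual coupling between the inductors), the ideal Hartley tank resonates at f = 1/(2π√((L1+L2)·C)):

```python
import math

# Component values from the question (mutual coupling assumed to be zero)
L1 = 1e-3      # 1 mH
L2 = 1e-3      # 1 mH
C = 0.01e-6    # 0.01 uF

L_total = L1 + L2  # series total inductance with M = 0
f0 = 1.0 / (2.0 * math.pi * math.sqrt(L_total * C))
print(f"Ideal tank frequency: {f0 / 1e3:.1f} kHz")  # ~35.6 kHz
```

The ~5.6 kHz figure quoted in the first answer presumably reflects the (different) component values in that answer's own simulated schematic, which is not reproduced here.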
2022-07-01 04:11:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5702775716781616, "perplexity": 2198.737700139268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103920118.49/warc/CC-MAIN-20220701034437-20220701064437-00542.warc.gz"}
https://www.nature.com/articles/s41598-018-29233-9?error=cookies_not_supported&code=068ca3f5-9cba-45d7-81ef-03845a39df30
## Introduction Many social interactions comprise a series of repeated exchanges between individuals. Within the constraints of social convention, such dynamic contexts demand a process of mutual reciprocity1; over the course of a repeated dyadic exchange, for example, both interactants modify their own behaviour in response to their partner’s in an attempt to steer the interaction towards a desired outcome. In this light, repeated exchanges unfold as a two-in-one process whereby each individual’s behaviour is simultaneously a consequence of and antecedent to that of their partner’s. Advancing our understanding of the brain processes underlying such reciprocity is therefore central to social neuroscience research2, but this requires measurement of both interactants’ brains whilst they engage in naturalistic social exchanges. The Ultimatum Game (UG3) presents a simple paradigm to investigate dyadic interaction. A Proposer is asked to choose from a range of options how they wish to divide a sum of money (the “pie”) between themselves and a Responder. The Responder then chooses whether to accept or reject the offer; if they accept it then the pie is divided accordingly, but if they reject it then neither player receives any payoff. Contrary to game theoretic predictions of rational behaviour, modal offers are around 40% of the pie and Responders reject proposals of 20% approximately half the time4. Responders’ behaviour appears to reflect an aversion to inequity; they consider it unfair to be offered disproportionately less than their interaction partner (disadvantageous inequity), and reject such proposals as a challenge to subjugation5. Consistent with this notion, neuroimaging studies demonstrate that rejected offers elicit neural responses in brain systems implicated in subjective feeling states such as pain and disgust (anterior insula [AI]6) and social information processing (anterior [ACC] and anterior-mid cingulate cortices [aMCC; see7]; for meta-analytic reviews see8,9). In contrast, Proposers’ offers are believed to reflect strategic behaviour; in an attempt to maximise their own payoff they avoid offers that are likely to be rejected, such as those with which they earn disproportionately more (advantageous inequity). Although fewer neuroimaging studies have investigated Proposers, the available evidence points to neural responses in frontal midline brain regions during offers that reflect such egoistic strategies (e.g.,10,11,12). The UG is performed typically in a one-shot manner, however – a single round played, ending after the Responder accepts or rejects a proposed division. Whilst this simulates one-off social interactions, it fails to capture the bidirectional and reciprocal property of repeated exchanges13; in such contexts, we do not simply react to another’s behaviour but we interact with them in an attempt to bring about a desirable outcome. Furthermore, previous studies have examined UG performance between individuals who are anonymous to one another, thereby removing the social context in which the majority of day-to-day interactions take place and limiting the applicability of resulting behaviours. Investigating the sequential and reciprocal nature of real-world dyadic interactions requires an iterated UG (iUG13,14,15) in which multiple exchanges occur between the same players. 
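Before turning to the iterated version, the payoff rule of a single UG round can be made concrete with a minimal sketch (illustrative only; this is not code from the study):

```python
def ug_round(pie: int, offer_to_responder: int, accepted: bool) -> tuple[int, int]:
    """Resolve one Ultimatum Game round.

    Returns (proposer_payoff, responder_payoff): the proposed split if the
    Responder accepts, and (0, 0) for both players if the offer is rejected.
    """
    if accepted:
        return pie - offer_to_responder, offer_to_responder
    return 0, 0

# Example: a 100-unit pie and a 30-unit offer
print(ug_round(100, 30, accepted=True))   # (70, 30)
print(ug_round(100, 30, accepted=False))  # (0, 0)
```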
In this situation, both players can adapt to the behaviour of their opponent in order to maximise their own payoff over recursive rounds; Responders can encourage equitable offers by rejecting disadvantageous ones, and Proposers can increase acceptance by adapting to Responder behaviour16. Alternatively, either player can adopt an unwavering strategy; by offering or accepting only those proposals that benefit themselves maximally, players can force their partner into a compromise over fairness and ultimate payoff. In other words, both players can express varying degrees of reciprocity over multiple rounds with the same partner. Reciprocity unfolds as an indirect chain of neural events; through neural coupling, one individual’s brain activity results in a behavioural output, which then elicits systematic neural responses in their interaction partner to initiate a behavioural reaction17. As such, only by measuring brain signals in both interactants simultaneously can we begin to elucidate the interpersonal neural processes underlying reciprocity18. By employing this “hyperscanning” method, neuroscientific studies report spatially and temporally synchronised brain signals between interactants that vary with the nature of the social exchange (for reviews see19,20). One particular form of neural coupling is alignment – that is, correlated neural signals between brains21. This is analogous to a wireless communication system, through which a sender and receiver become synchronised to a transmitted signal (e.g., light or sound22). The signals driving neural alignment vary in their level of abstraction, however; while sensory cortices will align to interactants’ physical movements, correlated signals in higher-order brain regions reflect a shared understanding of the intentions behind the actions18,21,23. To investigate whether reciprocity during real-world, repeated dyadic exchanges elicits patterns of neural alignment, the present study performed functional magnetic resonance imaging on pairs of individuals simultaneously (dual-fMRI) while they played a modified iUG designed to encourage reciprocity. Unlike other hyperscanning methods (e.g., fNIRS), dual-fMRI affords direct localisation of inter-brain effects within cortical and subcortical regions. To capture behavioural reciprocity over multiple rounds of economic exchange, we developed a novel adaptation of a model from experimental economics24 that fits each player’s round-by-round behaviour (the proposed division or its acceptance/rejection) to an estimate of expected utility (EU) on a given exchange. Crucially, this estimate of EU considered not only the distribution of payoff between players, thereby incorporating any social preferences (inequity aversion), but also the extent to which their choices reflect a reaction to their partner’s prior behaviour; if player A considers B’s past behaviour to have been fair then they will perceive greater utility in increasing B’s relative payoff, but if A believes B’s past behaviour to have been unfair they will take pleasure in decreasing B’s payoff in favour of their own (positive and negative reciprocity, respectively). By combining functional neuroimaging data with estimates of EU from our reciprocity model, we were then able to investigate whether brain responses map onto utility evaluations influenced by an opponent’s prior behaviour – that is, the neural coupling associated with reciprocal behaviour. 
We also assessed neural alignment specifically by measuring covariance in brain signals between interacting players21,25, and investigated whether this is related to their expression of reciprocity. We hypothesised that greater reciprocity would be associated with stronger co-activation between interacting players’ brains in regions implicated in fairness evaluations and strategic behaviour on the UG – namely, AI and ACC/aMCC. Interestingly, when selecting a division from two alternatives (the choice set), Proposers are more likely to offer the fairer of the two when the other option becomes more selfish26. Furthermore, even if the fairer of the two options rewards the Responder with disproportionately less than their opponent, they are more likely to accept it because they consider it justified27,28. This contextual effect appears to reflect each player’s consideration of their opponent’s motivation: As a means of risk aversion, Proposers avoid the most selfish option because it is more likely to be rejected, decreasing its utility; and Responders accept offers that reward them disproportionately less if the alternative division incurs a greater cost to the Proposer. Since these decisions will be influenced by an evaluation of the other player’s prior behaviour, thereby eliciting more reciprocity, our modification of the iUG permitted comparisons between two different contexts: On Proposer-Responder (PR) rounds, the choice set required Proposers to choose between a division that presented themselves with either advantageous or disadvantageous inequity (e.g., 60:40 vs. 40:60). Since the cost to the Proposer was far greater for the latter (very generous) division, both players were more likely to regard the former (selfish) option as justified28,29. Moreover, very generous offers on PR exchanges indicate that the Proposer perceived greater utility in increasing the Responder’s relative payoff at a cost to themselves, signalling a high degree of co-operative intent. In contrast, on Proposer-Proposer (PP) exchanges the Proposer had to choose between two divisions that differed only in magnitude of advantageous inequity (e.g., 60:40 vs. 70:30). Since the relative increase in cost to the Proposer by offering the least selfish division was reduced, a very selfish offer on PP exchanges indicated low co-operative intent. Thus, choices on PR and PP rounds were intended to elicit strong expressions of positive and negative reciprocity by both players. We predicted greater neural alignment during the former, in which more shared (co-operative) intentionality would be elicited. ## Results We report the results of non-parametric statistical analyses if Kolmogorov–Smirnov tests revealed that normality was violated by at least one of the assessed variables. Values present means (±SD). ### Behaviour All players made their choices (the selection of a division to offer, and the decision to accept or reject the proposed division) within the 4 second limit on all exchanges. Interestingly, however, response times (RTs) differed between players and conditions; a mixed-plot ANOVA, with the within-subject factor Condition (PR vs. PP) and between-subject factor Player (Proposer vs. Responder), revealed that RTs were greater for Proposers compared with Responders (2007.28 [±439.89] vs. 1106.99 [±429.72] ms; F[1,36] = 46.73, p < 0.001, ηp2 = 0.57) and on PR relative to PP exchanges (1641.71 [±637.81] vs. 1472.56 [±608.78] ms; F[1,36] = 13.43, p = 0.001, ηp2 = 0.27). 
There was no Player-by-Condition interaction (F[1,36] = 1.07, p = 0.308, ηp2 = 0.029). This revealed that (a) both players took longer to make decisions when faced with a choice between advantageous and disadvantageous inequity, but (b) Responders had already begun to evaluate the choice set before an offer was made. Next we assessed the pattern of proposals and decisions across choice sets, both within and between the PR and PP conditions. We focused on two aspects of player behaviour; specifically, the proportion of offers that benefited the Proposer maximally (i.e. those with maximal advantageous inequity; MAXOffer), and the proportion of these offers that were accepted by the Responder (MAXAccept). Figure 1A presents the distribution of each behavioural measure. This indicates higher proportions of both MAXOffer and MAXAccept for choice sets comprising the PR relative to the PP condition, which was confirmed with non-parametric comparisons (respectively, Z[19] = 3.82, p < 0.001; and Z[19] = 2.11, p = 0.033). The number of MAXOffer and MAXAccept also appeared to decrease with higher payoff for the Proposer or, in turn, lower payoff for the Responder. To quantify this, for each choice set we computed the payoff for each player presented by the division with maximal advantageous inequity. Spearman correlations confirmed that with increasing payoff for the Proposer, both MAXOffer and MAXAccept decreased (respectively, ρ[18] = −0.53, p = 0.016; and ρ[18] = −0.64, p = 0.003). Since Proposers selected an offer from two alternatives, the degree of payoff in the division with maximal advantageous inequity could also be expressed relative to the other option. To investigate whether MAXOffer and MAXAccept differed according to this relative measure, we compared the payoff to each player between the two divisions of each choice set; higher values represented a relative increase in the Proposer’s payoff or, in turn, a decrease in the Responder’s payoff for the division with maximal advantageous inequity (see Table S1). No significant relationships were observed between this relative measure of payoff and MAXOffer or MAXAccept (respectively, ρ[18] = 0.39, p = 0.089; and ρ[18] = 0.29, p = 0.217). Together, these results imply that neither player’s choices were driven solely by absolute or relative measures of payoff, which was confirmed in subsequent modelling procedures (see below). We then applied our adapted reciprocity model to estimate the degree to which each player’s choices reflected reciprocal reactions to their partner’s prior behaviour, and from this we modelled each player’s round-by-round EU. For Proposers, greater values of EU represent higher utility for the division with least advantageous inequity (the generous offer); for Responders, it represented greater utility in accepting a proposed division. As shown in Fig. 1B, the probability of MAXOffers and MAXAccepts varied according to EU; indeed, estimates of EU correctly predicted these behaviours on 73.46 (±7.58 [AIC = 2345.5; BIC = 2454.4; Log-likelihood = −1153.7]) and 85.57 percent of rounds (±10.14 [AIC = 1272.2; BIC = 1381.1; Log-likelihood = −617.1]), respectively. Reciprocity parameters, α, were lower for Proposers (0.06 [±0.03]) than Responders (0.41 [±0.37]; Z[18] = 5.04, P < 0.001), suggesting that the latter players’ decisions reflected stronger reactions to their partner’s offers. 
Estimated values of α for Proposers were correlated negatively with MAXOffers across the PR (ρ[17] = −0.78, p < 0.001) and PP condition (ρ[17] = −0.88, p < 0.001), however; the more reciprocity they showed, the less likely they were to offer divisions that benefited themselves maximally (see Fig. 1C). No such relationship was observed between Responders' α and MAXAccepts for either the PR (ρ[17] = −0.15, p = 0.551) or PP condition (ρ[17] = −0.07, p = 0.786). Finally, Proposer α estimates correlated positively with the amount of time they took to decide the division they wished to offer on PR (ρ[17] = 0.58, p = 0.009) but not PP rounds (ρ[17] = −0.22, p = 0.377). Across all Proposers the optimal Memory parameter was 73, identifying the range of preceding rounds that maximized the accuracy of Proposers' predictions of their opponents' decisions. There was only a slight benefit beyond a range of 20, however; for round-by-round estimates of EU, estimates of each player's reciprocity parameter, and the accuracy in predicting Proposers' choices, correlations were highly similar for models with a Memory parameter of 20 and upwards (see Table S2). To evaluate our adapted reciprocity model we compared it against a variety of alternatives (see Supplementary Materials for full model specifications). First we tested a nested model by fixing the reciprocity parameter to α = 0 for both players. This self-regarding model evaluated the assumption that both players care only about their own monetary payoff. The likelihood ratio (L) demonstrated that our reciprocity model outperformed this self-regarding model when applied to both Proposers (L[19] = −1037.2 [AIC = 2074.5]; p < 0.001) and Responders (L[19] = −1553.0 [AIC = 3106.0]; p < 0.001). Second, given the relatively low estimates of Proposers' reciprocity, we tested whether the reciprocity utility function should be applied only to Responders. We achieved this by evaluating the change in model fit by fixing only the Proposer's reciprocity parameters to α = 0; the same reciprocity model was applied to Responders. Again, our model fitted Proposers' choices more accurately than this nested self-regarding model (L[19] = −1609.3 [AIC = 3219.5]; p < 0.001). Finally, we assessed whether choices reflect learning processes over multiple rounds rather than reciprocal reactions. To do so, we modelled each player's behavioural data with a three-parameter extension of the reinforcement learning model30. This model contains a forgetting parameter φ, an experimentation parameter ε, and a strength parameter s (the simple one-parameter reinforcement learning model is a special case of this three-parameter extension, with φ = 1 and ε = 0). Each parameter was fitted to maximize the log-likelihood function, separately for Proposers (φ = 0.93, ε = 0.31, s = 3.2) and Responders (φ = 0.27, ε = 0.15, s = 3.2). The fit of this reinforcement learning model against both Proposer and Responder behaviour was substantially poorer than that of our reciprocity model, according to both AIC and BIC criteria (Proposers = 2866.2 and 2883.4; and Responders = 1922.74 and 1939.9; p < 0.001). This confirmed that each player's choices were driven largely by evaluations of utility that incorporated a reaction to their opponent's prior behaviour. As an exploratory analysis, we assessed whether performance of either player on the iUG was related to personality variables measured with the Action Control Scale (ACS-90)31 and the Interpersonal Reactivity Index (IRI)32.
Given the exploratory, post-hoc nature of these analyses, however, we do not present the results in the main body of text; instead the reader can consult them in Table S3 of the Supplementary Material. In brief, neither MAXOffers nor MAXAccepts were associated with scores on either personality instrument. ### Neuroimaging Despite the behavioural differences, the PRMOD > PPMOD and PPMOD > PRMOD contrasts revealed no significant differences in EU-modulated brain responses between the PR and PP conditions for either player. When collapsing across the two conditions, both players exhibited two similar patterns of BOLD signal expressing the UGMOD > CTRL contrast: In the first, brain responses were modulated positively by EU – that is, they were greater when Proposers saw more utility in offering the least advantageously inequitable (more generous) division, and Responders saw greater utility in accepting the proposed division. This pattern encompassed primary striate and ventro-medial prefrontal cortex (vmPFC) in both players, and the superior temporal sulcus (STS) in Proposers (Fig. 2A). The second pattern represents BOLD signals modulated negatively by EU – these brain responses became stronger in Proposers with lower EU for the more generous division, and when Responders saw less utility in accepting the offered division. This second pattern encompassed primary and secondary striate cortices, aMCC and supplementary motor cortex (SMA), lateral prefrontal cortices and the insulae in both players; and, in Proposers, the thalamus (Fig. 2B). Clusters of brain regions exhibiting these two opposing patterns are listed in Table S4. Despite the apparent difference between players in EU modulation within the STS and thalamus, direct comparisons between player roles revealed stronger positive modulation for Proposers only in the left primary motor cortex. This was true even after more lenient thresholding (pFWE < 0.01). In other words, these seemingly unique EU-modulated brain responses appear to be present in both players but to subtly (non-significantly) different degrees. As specified in Table 1, intra-dyad correlations (IDC) in brain responses measured across all rounds of each experimental condition revealed greater inter-brain alignment between interacting players on both PR and PP relative to the CTRL condition. For the PPIDC > CTRLIDC contrast, greater IDC was observed in bilateral occipital and extra-striate cortices, and right inferior parietal cortex. In addition to these posterior sites, the PRIDC > CTRLIDC contrast revealed increased IDC in lateral prefrontal cortices, aMCC, posterior cingulate cortex, and bilateral AI. The PRIDC > PPIDC contrast revealed the extent of this differential IDC between experimental conditions, with stronger alignment over PR compared with PP rounds in right aMCC, AI, and lateral temporal cortex, and bilateral inferior occipital cortices. To investigate whether the strength of IDC was related to the degree of reciprocity, we performed an ROI analysis at the location in which IDC expressed the PRIDC > PPIDC contrast maximally (aMCC; x = 4, y = 34, z = 28). This revealed that greater IDC in the PR condition within right aMCC was correlated positively with estimates of reciprocity in Proposers (ρ[17] = 0.65, p = 0.003) but not in Responders (ρ[17] = 0.40, p = 0.089). 
Interestingly, no relationships were observed between IDC expressing this contrast in the right AI (x = 34, y = 26, z = −4) and reciprocity estimates of either Proposers (ρ[17] = 0.44, p = 0.062) or Responders (ρ[17] = 0.26, p = 0.291). Results from the PRIDC > PPIDC contrast and the relationship with Proposers' α estimates are illustrated in Fig. 3. ## Discussion By scanning the brains of two individuals simultaneously while they are engaged in recursive economic exchanges with one another, we have explored brain processes associated with the bidirectional reciprocity characterising real-world, repeated dyadic interactions. This revealed three important findings: First, by modelling EU in a way that incorporates the degree of reciprocity displayed by each interactant, we show that both players' choices on the iUG were influenced not only by their own payoff or social preferences, but also by their reactions to their opponent's prior behaviour. Second, both players exhibited opposing patterns of neural response modulated positively or negatively by these estimates of EU. Such modulation reveals neural coupling, whereby the brain of one interactant responds to the behaviour of their interaction partner. Third, neural signals within right AI and aMCC are correlated between interacting players, particularly during exchanges that require choices between advantageous and disadvantageous inequity – those in which decisions are more likely to be driven by reciprocal tendencies. Interestingly, this pattern of inter-brain alignment was stronger with more reciprocating Proposers. Cox et al.'s24 reciprocity model estimates EU by considering a range of individual-specific parameters. By weighing each player's payoff against that of their opponent, it incorporates any risk aversion shown by Proposers26 or norm-seeking behaviour33 and avoidance of subjugation by Responders5,34. In our adaptation, however, we incorporated the additional probability that the Responder would accept an offer given their previous decisions. As such, risk aversion shown by the Proposer is modelled as a flexible adaptation to the Responder's behaviour updated on a round-by-round basis. The reciprocity model also considers the extent to which a player's choices are influenced by their emotional reaction to the prior behaviour of their interaction partner: If the Proposer predicts that their opponent is likely to reject the more selfish division, and they consider the Responder to have reacted reasonably to past offers, they see more utility in increasing the Responder's payoff at a cost to their own. Conversely, if the Proposer believes that the Responder has behaved uncooperatively in the past, they will be unwilling to change their egoistic motives despite their predictions (positive and negative reciprocity, respectively). Likewise, the Responder is more likely to accept a division that disadvantages themselves disproportionately more than their opponent if they consider the Proposer to have behaved fairly (generously, or with justified selfishness) in the past; but they see greater utility in rejecting such offers as a means of retaliating against prior unacceptable offers. Our adapted reciprocity model outperformed a self-regarding model without any estimate of reciprocity (a model that considered only inequity aversion for Responders and adaptive risk aversion for Proposers) and a reinforcement-learning model30.
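The nested-model comparisons summarised here follow the standard likelihood-ratio and AIC logic; a minimal sketch of that computation (with placeholder log-likelihoods and parameter counts, not the study's values):

```python
from scipy.stats import chi2

def likelihood_ratio_pvalue(ll_full: float, ll_nested: float, df: int) -> float:
    """p-value for a nested-model comparison (e.g. the reciprocity parameter
    alpha fixed to 0 in the nested model) via the chi-squared LR statistic."""
    return chi2.sf(2.0 * (ll_full - ll_nested), df)

def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike information criterion: lower values indicate a better trade-off
    between fit and model complexity."""
    return 2.0 * n_params - 2.0 * log_likelihood

# Placeholder numbers only, for illustration:
print(likelihood_ratio_pvalue(ll_full=-1150.0, ll_nested=-1600.0, df=1))
print(aic(-1150.0, n_params=3), aic(-1600.0, n_params=2))
```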
We interpret the superior fit of our model to reflect the novel aspects of our experimental design, which allowed a more accurate simulation of real-world social decision making. In our two-choice iUG (a) repeated exchanges were made between the same two individuals, allowing both players to adapt and express their reactions to the opponent’s behaviour; (b) Responders saw the choices from which Proposers selected their offer, and Proposers saw the decisions of the Responder; and (c) two conditions were implemented to encourage stronger reciprocity between players. This differs from other paradigms in which the social context is largely removed; many studies employ a one-shot version of the UG whereby the offers on each round are made by different anonymous Proposers28,33,35, players’ intentionality is masked28, or feedback about their choices is concealed from their opponent26. Fairness evaluations have been shown to adapt over multiple rounds, however; Responders compare offers against normative reference points that change over successive rounds in response to Proposers’ behaviour, and both subjective feelings and affective neural responses are sensitive to violations of these adaptive norms33,36. Proposers also adapt to their opponent, but on a strategic level; a lower frequency of generous offers on the Dictator Game is taken as evidence that such proposals reflect strategic self interest37. Further, Winter and Zamir38 demonstrate that Proposers’ strategies emerge during the course of the game, becoming more fair or unfair in response to, respectively, unforgiving or tolerant Responders. Likewise, Billeke et al.11 report that while some Proposers adapt their offers to Responder’s behaviour, others adopt more unwavering strategies. We extend these findings by accurately modelling choices over multiple exchanges with the same known partner, permitting expressions of reciprocity during reputation building; players with low reciprocity appeared to try and maximise their advantage by establishing a “tough” reputation with unwavering unfair offers, and/or occasionally rejecting lower fair offers39. Our modification of the iUG might also explain the lack of association between self-reported trait empathy and proposals or acceptances/rejections observed in our exploratory analyses. Barraza and Zak40, for example, report more generous proposals after empathy induction in participants high on trait empathy. This relationship between Proposer behaviour and empathy was observed in a one-shot version of the UG between anonymous players, however, a context in which reciprocal tendencies cannot influence adaptive behaviour over repeated exchanges. Similarly, Shamay-Tsoory et al.41 observed that patients with lesions to the vmPFC who reported less empathy (perspective taking) were more likely to reject offers, but these Responders played against a pre-programmed computer. Finally, Lockwood et al.42, report that reward-based learning of self- and other-reward was related to individual’s empathy, but this learning was observed in response to non-social symbolic stimuli. Over the course of our modified iUG, players had the opportunity to maximise their payoff by learning how best to adapt to the behaviour of their opponent. The fact that expressions of reciprocity were unrelated to trait empathy suggests that such adaptation reflects self-oriented strategies or affectivity, rather than other-oriented considerations (e.g., guilt). 
Importantly, Proposers’ offers on each round were restricted to those presented by specific choice sets, which were known to the Responder. As such, on some rounds they were free to propose selfish offers without feeling remorse. Turning now to our neuroimaging results, both players expressed two opposing patterns of brain response modulated by EU estimates: Firstly, in the ventro-medial prefrontal cortex (vmPFC) of both players, stronger neural responses were elicited for divisions with higher EU – that is, more generous divisions for the Proposer and their acceptance by Responders. This converges with and extends existing findings; fair offers have been shown to engage the vmPFC more than unfair offers8, and patients with vmPFC lesions are more inclined to accept unfair offers36. Hutcherson et al.43 propose that vmPFC combines various information to weigh value for one’s self against that for another (player), and uses the information to generate adaptive responses. Our findings demonstrate that this brain region is implicated specifically in positive valuations. In contrast, throughout the aMCC and AI the responses of both players’ brains were stronger for divisions with lower EU – in other words, selfish divisions and offers that were rejected. The pattern of negatively modulated brain responses also advances previous findings; meta-analyses8,9 show consistent engagement of ACC/aMCC and AI of Responder’s brains in response to unfair offers, and neural responses in dorsal ACC and bilateral AI of Responders’ brains were found to be modulated by the degree of payoff inequity in unfair offers35. Given the accuracy of our reciprocity model in predicting players’ choices, these two patterns of EU-modulated brain responses appear to reflect opposing evaluations of utility driven by reactions to the behaviour of our interaction partner(s), which then drive opposing behavioural adaptations – specifically, positive or negative reciprocal responses. There is accumulating evidence that the gyral aspect of the ACC processes the rewards for others, whereas the sulcus seems more sensitive to first-person reward7,44. Together with our findings, this indicates that the aMCC might compute the difference between self- and other-reward, whereas AI performs an emotional evaluation of any reward discrepancy that drives an affective (reciprocal) response to unfair reward discrepancy. This functional dissociation between aMCC and AI might explain the specificity of relationship between Proposer reciprocity estimates and inter-brain coupling in the aMCC during rounds that require decisions between advantageous and disadvantageous inequity; in our procedure, both players saw the choice set from which Proposers had to make decisions, and would have processed the difference in inequity presented by the two constituent divisions. In contrast, any affective reaction to the offered division would have been greater in Responders than Proposers, resulting in less covariance between players. Future studies should attempt to delineate the functions of these two brain regions during economic exchanges by modelling responses of the aMCC and/or AI as mediators of reciprocal choices – specifically, offers of unfair divisions and their rejections. 
Alternatively, our findings could be extended by investigating whether the parameters used in our model to estimate reciprocity – specifically, parameters representing players’ emotional state (θ) and choice stochasticity (ϵ) – serve to modulate intra-subject brain responses differentially. This might dissociate between neural processes associated with emotional reactivity and error processing during economic choices. Brain regions encompassed by the two opposing patterns of EU-modulated responses have been implicated in different forms of learning during social decision making. The reinforcement learning framework suggests that decision making is driven by differences between predicted and actual reward outcomes (prediction errors). The ACC (particularly the sulcal aspect) is involved in reward prediction errors45, especially those concerning the rewards that others will receive46. In contrast, the lateral temporal cortex is engaged during social prediction errors45 and its response profile differentiates between individuals according to their social learning strategy16 – it is engaged more in individuals who predict rewards on the basis of their interaction partners’ behaviour, relative to those who act according to more simple learned associations between their own actions and rewards. The differential sensitivity of these two brain systems that we have observed points to a dissociable contribution of reward-based and social learning processes in utility evaluations of generous (more costly) and selfish offers during complex, repeated interactions. The equivalence of EU-modulated brain responses observed between the PR and PP conditions suggests that players engaged in similar evaluations of utility in both types of exchange. Wang et al.12 also report similar brain responses between players in dorsal ACC. This might also explain why both players expressed highly similar patterns of EU modulation; Responders took less time to accept/reject an offer than Proposers took to make the proposal on both conditions, suggesting that Responders had already begun a similar evaluation of the choice sets before an offer was made. Moreover, both players were slower to make their choices on rounds in which choice sets involved decisions between advantageous and disadvantageous inequity, suggesting that such decisions were more cognitively demanding. Since the proportions of selfish offers and their acceptance were greater on these PR rounds, these decisions appear to involve a mutual appreciation of the intention behind selfishness. Interestingly, estimates of Proposers’ reciprocity were correlated positively with their response times and negatively with the number of selfish offers made on PR exchanges – the more adaptive they were, the less selfish their offers. Finally, greater inter-brain alignment in aMCC on PR compared with PP rounds was associated strongly with the degree of Proposers’ reciprocity. Taken together, we propose that inter-brain alignment reflects a mutual effort of players to adapt to their interaction partner by inferring the intentions behind their actions – a process that involves an evaluation of their prior behaviour. Consistent with this interpretation, the response of the aMCC has been associated with task complexity, uncertainty, predicted value, and social decision making7,44,47. 
Furthermore, hyperscanning research consistently demonstrates neural coupling within the ACC48,49,50, which is suggested to reflect accuracy in individuals’ representations of their interaction partners’ intentions48 and estimates of their behaviour19,49. Such an interpretation fits our pattern of results nicely: On PR rounds, which require decisions between advantageous and disadvantageous inequity, we observed greater inter-brain alignment within aMCC among dyads comprising more reciprocating Proposers – the player who initiates each exchange. Our adapted iUG paradigm affords multiple applications: For example, inter-brain effects could represent effective neuromarkers for the quality of social communication1, providing assessment of social dimensions along which certain psychiatric illnesses might be described51. It is important to acknowledge the limitations of our study that can be addressed in future research, however: Firstly, we have not considered some important factors that might influence the degree of reciprocity shown by either player. Behaviour during the UG appears to be influenced partly by variability in strategic reasoning52 and prosocial predispositions53. We observed large variability in the estimates of reciprocity among our sample of Proposers that appeared to be unrelated to trait empathy, and future studies should examine if and how individual differences in other personality variables influence reciprocal behaviour during iUG. Furthermore, by examining only all-male dyads we have investigated a very specific type of dyadic interaction, and we have not explored potential sex differences in inter-brain effects54. Secondly, our design did not include choice sets that present a fair allocation of payoff, so it was not possible to contrast this behaviour with that following unfair options. Future studies should incorporate this condition to study all possible outcomes of repeated interaction. Finally, while our measure of brain-to-brain alignment across entire rounds permitted us to examine the degree of neural coupling through a bidirectional exchange, this crude measurement offers no insights into directionality. Neural coupling during sequential exchanges will necessarily be circular – just as the Responder’s brain synchronises to the offer of the Proposer, their decision to accept or reject will lead to systematic neural responses in the Proposer that influence future offers. To investigate such circular brain-to-brain coupling, methods for assessing directed between-brain dependencies should be developed for hyperscanning research (e.g., inter-brain psychophysiological interactions). ## Methods ### Participants The initial sample comprised 40 males recruited from various faculties of Masaryk University, Czech Republic, who participated for monetary compensation. These individuals were paired to form 20 age-matched dyads (mean age difference = 1.2 years), the members of which had never met prior to the experiment. Male-male dyads were measured exclusively to avoid potentially confounding factors of mixed-sex interactions. Neuroimaging data from both participants comprising one dyad were omitted due to excessive head motion (see below). The 38 males comprising the remaining 19 dyads were all right-handed, had a mean age of 24.6 years (standard deviation [SD] = 3.7; range = 19.8–38.0), reported normal or corrected-to-normal vision and no history of neurological diseases or psychiatric diagnosis. 
All participants provided informed consent prior to the experimental procedure, which was approved by the Research Ethics Committee of Masaryk University. All methods were carried out in accordance with the declaration of Helsinki. ### Procedure Participants were introduced to one another for the first time on the day of scanning, during which they exchanged names and shook hands before being sent to one of two scanners located in adjacent rooms (see Imaging Protocol). Player roles were assigned randomly at the start but remained fixed throughout the procedure – one participant played the role of Proposer and the other Responder on all rounds. Fixing roles in this way allowed players the opportunity to learn about and adapt to their partner’s behaviour over a relatively short period. Players were told explicitly that throughout the experiment they would play with the same individual to whom they had just been introduced, and confirmed that they believed this to be true throughout the experiment. Each dyad underwent two functional runs performed successively in a single scanning session. In an event-related fashion, the two runs together comprised 120 rounds (events) of the iUG divided equally among two types of exchange (see Stimuli) and 60 rounds of a control condition (CTRL). Each UG round started with the Proposer being given four seconds to choose one of two divisions of the pie (the choice set; see Stimuli) between themselves and the Responder (Choice period). After this fixed period, the Proposer’s offer was highlighted for four secs (Offer period), during which the Responder could either accept or reject the proposal. After this four-sec period the Responder’s decision was then presented for a final four secs (Decision period). The exact same procedure was followed on CTRL rounds, but the choice set comprised two alternative divisions of colour between the players; rather than dividing a pie, Proposers were required to choose the colour they preferred for themselves and the colour that should go to the Responder, and the Responder then accepted or rejected that offer. Both players were instructed that CTRL rounds had no monetary consequence. Each round ended with a jittered inter-trial interval, with a fixation cross presented pseudo-randomly for 2–4 (mean = 3) secs. An example UG and CTRL round is illustrated in Fig. 4. The same fixed sequence of choice sets (one for each run) was used for all pairs, which was defined by a genetic algorithm for design optimisation55 set to maximise contrast detection between conditions (see below). All stimuli were presented to both players simultaneously – Responders saw the initial choice set from which Proposers selected their offer, and Proposers saw the Responder’s accept/reject decision. Players were instructed at the start that they would receive the outcome of six rounds selected at random, and the mean payout was 270 CZK (approx. €10). At no point was any information given to participants on the number of rounds remaining in the task. As pilot data for a future study, this sample also completed two personality instruments: the ACS-90 and the IRI. Since no a priori hypotheses were formulated concerning these data, they are not presented in this paper. Instead, the reader is referred to the Supplementary Material for further information. ### UG Stimuli On each round of the iUG, players were presented with a choice of two possible divisions of 100 CZK (approx. 
€4) that differed in the degree of inequity (the “choice set”), and Proposers were required to select one division to offer the Responder. An example choice set is illustrated in Fig. 4, and Supplementary Table S1 lists all the choice sets used in the experiment. To encourage positive and negative reciprocity, we selected 10 choice sets for which repeated proposals and acceptances of minimally inequitable divisions were lowest in behavioural piloting. The choice sets took two forms: On Proposer-Responder (PR) rounds, one division presented the Proposer with advantageous inequity while the second presented them with disadvantageous inequity – in other words, greater relative payoff was achieved by the Proposer for one division but the Responder in the other (e.g., 70:30|30:70). Conversely, on Proposer-Proposer (PP) rounds both divisions presented a greater relative payoff for the Proposer, differing only in magnitude (e.g., 70:30|60:40). Presenting a choice between advantageous and disadvantageous inequity on PR rounds was intended to encourage greater expressions of positive or negative reciprocity from both players. ### Reciprocity Model Unlike other distributional preference models that take into account only the final relative payoff between players34,56, Cox et al.’s24 reciprocity model attempts to fit the behavioural observation that choices depend not only on the final monetary distribution but also on any available alternatives. Furthermore, this model also considers that player choices are influenced largely by their emotional reactions to their partner’s prior behaviour – specifically, whether their proposal or decision to accept or reject reflects positive or negative reciprocity. Finally, unlike higher beliefs equilibrium models29,57 the reciprocity model is tractable and enables the estimation of behavioural parameters. In our adaptation, for each player the EU of each division of the pie was specified as: $$U(x,100-x)=x+(\theta \,+\,{\epsilon })(100-x)$$ (1) here, x is the player’s portion of the division, θ is a scalar representing their emotional state, and ϵ represents random shock with standard logistic distribution. Random shock represents an unobserved component of the utility function – a random variable that adds stochasticity to each player’s choice behaviour (e.g., unintended responses). The emotional state was formulated as: $$\theta ={\alpha }_{i}(x-{x}_{0})$$ (2) Equation (2) incorporates a player-specific reciprocity parameter, α, which serves to weight a comparison of the player’s share, $$x$$, against a fairness reference point, $${x}_{0}$$, by the extent to which a player’s choices are influenced by their partner’s prior behaviour. The reference point, $${x}_{0},$$ is a parameter estimated with α – it is different for each choice set. Using this utility function, we modelled round-by-round EU for both players. The Responder accepts a proposal if: $$x+(\theta \,+\,{\epsilon })(100-x) > 0$$ (3) The Proposer offers the least advantageously inequitable (more generous) division if: $${P}_{1}({x}_{1}+(\theta +{\epsilon })(100-{x}_{1})) > \,{P}_{2}({x}_{2}+(\theta +{\epsilon })(100-{x}_{2}))$$ (4) In equation (4), $${x}_{1}$$ and $${x}_{2}$$ represent the division with minimal (or disadvantageous) and maximal advantageous inequity, respectively, and $${P}_{i}$$ represents the probability that the Responder will accept a division given their prior behaviour. 
In other words, the Proposer makes an offer that benefits themselves maximally only if they believe the offer is likely to be accepted. The Supplementary Material gives a full description of the procedures with which the various parameters were estimated.

### Imaging Protocol

For each individual, functional and structural MR data were acquired with one of two identical 3T Siemens Prisma scanners and a 64-channel bird-cage head coil. Players were allocated to one of the two scanners in a counterbalanced fashion, ensuring an even number of Proposers and Responders were scanned in each. Blood-oxygen-level dependent (BOLD) images were acquired with a T2*-weighted echo-planar imaging (EPI) sequence with parallel acquisition (i-PAT; GRAPPA acceleration factor = 2; 34 axial slices; TR/TE = 2000/35 msec; flip angle = 60°; matrix = 68 × 68 × 34, 3 × 3 × 4 mm voxels). Axial slices were acquired in interleaved order, each slice oriented parallel to a line connecting the base of the cerebellum to the base of the orbitofrontal cortex, permitting whole-brain coverage. Functional imaging was performed in two runs, both comprising 690 volumes (23 mins). Four dummy volumes were acquired at the beginning of each run to allow the gradients to reach steady state. For localisation and co-registration, a high-resolution T1-weighted structural MR image was acquired prior to the functional runs (MPRAGE, TR/TE = 2300/2.34 msec; flip angle = 8°; matrix = 240 × 224 × 224, 1 mm³ voxels). For a given dyad, volume acquisition was synchronised between scanners (mean asynchrony = 1.13 [SD = 3.83] msec) with use of a programmable signal generator (Siglent SDG1025, www.siglent.com; mean acquisition delay = 10 [SD = 3.49] msec).

### Pre-processing

For every subject, each of the two time-series was pre-processed separately using a variety of tools packaged within FMRIB's software library (FSL58), full details of which are provided in the Supplementary Material. Importantly, both players from one pair exceeded our exclusion criterion of 1 mm of movement in any direction for either run, and were omitted from all subsequent analyses.

### General linear modelling

All fMRI data modelling was performed on the same platform – SPM12 (http://www.fil.ion.ucl.ac.uk). General linear modelling was performed on the pre-processed time-series in a two-step process: At the individual level, within-subject fixed-effects analyses were used for parameter estimation across both runs. Event-related responses were modelled with durations determined by the participants' response time in each period of interest (see below), convolved with the canonical hemodynamic response function provided by SPM12: to capture brain responses that reflect reciprocal reactions in each player to their partner's prior behaviour, for Proposers we modelled the Choice period of each round until an offer was selected, while for Responders we modelled the Offer period until a decision had been made to accept or reject the proposed division. This resulted in three task regressors for each participant, corresponding to the mean effect of the respective period in PR, PP or CTRL rounds. The remaining part(s) of the rounds were modelled as regressors of no interest. For the PR and PP task regressors we added parametric modulators that expressed the round-by-round EU estimated with the reciprocity model (PRMOD and PPMOD); and by collapsing across the PR and PP conditions we also examined the modulatory effect throughout all UG rounds (UGMOD).
To examine brain responses in Proposers and Responders separately, statistical evaluation of the parameter estimates from these first-level analyses was performed with the following group-level whole-brain random-effects contrasts, using one-sample t-tests: PRMOD vs. PPMOD and UGMOD > CTRL. Comparisons between players were then performed with independent-sample t-tests on the same contrasts. Cluster-wise thresholding was applied at p < 0.001, with family-wise error (FWE) correction for multiple comparisons.
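As a footnote to the Reciprocity Model section above, here is a purely illustrative R sketch of the decision rules in equations (1)–(4). The parameter values (alpha, x0, the offers and the acceptance probabilities) are invented for illustration and are not estimates from this study; the Proposer side is evaluated at the mean of the logistic shock (epsilon = 0).

```r
# Toy evaluation of the reciprocity model's decision rules (illustrative values only)
alpha <- 0.02                               # hypothetical reciprocity weight
x0    <- 50                                 # hypothetical fairness reference point
theta <- function(x) alpha * (x - x0)       # emotional state, eq. (2)

# Responder: with a standard-logistic shock, eq. (3) implies
#   P(accept) = P(eps > -x/(100 - x) - theta) = plogis(x/(100 - x) + theta),
# where x is the Responder's share of the pie.
p_accept <- function(x) plogis(x / (100 - x) + theta(x))
p_accept(c(30, 40, 50))                     # acceptance rises with the offered share

# Proposer: eq. (4) at eps = 0, where x1/x2 are the Proposer's shares under the
# more and less generous divisions, and p1/p2 the believed acceptance probabilities.
u <- function(x) x + theta(x) * (100 - x)   # eq. (1) with eps = 0
offers_generous <- function(x1, x2, p1, p2) p1 * u(x1) > p2 * u(x2)
offers_generous(x1 = 60, x2 = 70, p1 = 0.9, p2 = 0.5)   # TRUE for these values
```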
2023-03-29 20:56:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.511538565158844, "perplexity": 3582.4835534678427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949025.18/warc/CC-MAIN-20230329182643-20230329212643-00535.warc.gz"}
http://www.r-bloggers.com/the-unavoidable-instability-of-brand-image/
The Unavoidable Instability of Brand Image June 4, 2014 By (This article was first published on Engaging Market Research, and kindly contributed to R-bloggers) "It may be that most consumers forget the attribute-based reasons why they chose or rejected the many brands they have considered and instead retain just a summary attitude sufficient to guide choice the next time." This is how Dolnicar and Rossiter conclude their paper on the low stability of brand-attribute associations. Evidently, we need to be very careful how we ask the brand image question in order to get test-retest agreement over 50%. "Is the Fiat 500 a practical car?" Across all consumers, those that checked "Yes" at time one will have only a 50-50 chance of checking "Yes" again at time two, even when the time interval is only a few weeks. Perhaps, brand-attribute association is not something worth remembering since consumers do not seem to remember all that well. In the marketplace a brand attitude, such as an overall positive or negative affective response, would be all that a consumer would need in order to know whether to approach or avoid any particular brand when making a purchase decision. If, in addition, a consumer had some way of anticipating how well the brand would perform, then the brand image question could be answered without retrieving any specific factual memories of the brand-attribute association. By returning the consumer to the purchase context, the focus is placed back on the task at hand and what needs to be accomplished. The consumer retrieves from memory what is required to make a purchase. Affect determines orientation, and brand recognition provides performance expectations. Buying does not demand a memory dump. Recall is selective. More importantly, recall is constructive. For instance, unless I have tried to sell or buy a pre-owned car, I might not know whether a particular automobile has a high resale value. In fact, if you asked me for a dollar value, that number would depend on whether I was buying or selling. The buyer is surprised (as in sticker shock) by how expensive used cars can be, and the seller is disappointed by how little they can get for their prized possession. In such circumstances, when asked if I associate "high resale value" with some car, I cannot answer the factual question because I have no personal knowledge. So I answer a different, but easier, question instead. "Do I believe that the car has high resale value?" Respondents look inward and ask themselves, introspectively, "When I say 'The car has high resale value,' do I believe it to be true?" The box is checked if the answer is "Yes" or a rating is given indicating the strength of my conviction (feelings-as-information theory). Thus, perception is reality because factual knowledge is limited and unavailable. How might this look in R? A concrete example might be helpful. The R package plfm includes a data set with 78 respondents who were asked whether or not they associated each of 27 attributes with each of 14 European car models. That is, each respondent filled in the cells of a 14 x 27 table with the rows as cars and the columns as attributes. All the entries are zero or one identifying whether the respondent did (1) or did not (0) believe that the car model could be described with the attribute. By simply summing across the 78 different tables, we produce the aggregate cross-tabulation showing the number of respondents from 0 to 78 associating each attribute with each car model. 
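As a small illustration of that aggregation step, here is an R sketch. The list of individual tables is simulated here purely as a stand-in (the real per-respondent judgments live in the plfm data); the resulting aggregate corresponds to the car$freq1 matrix that the appendix code analyses.

```r
# Each respondent contributes a binary car-by-attribute judgment matrix;
# summing them element-wise gives the aggregate crosstab (counts from 0 to 78).
set.seed(1)
judgment_tables <- replicate(78, matrix(rbinom(14 * 27, 1, 0.3), 14, 27),
                             simplify = FALSE)          # simulated stand-in
aggregate_freq  <- Reduce(`+`, judgment_tables)          # 14 x 27 matrix of counts
range(aggregate_freq)                                    # each cell lies in 0..78

# The equivalent aggregate for the actual data is supplied directly:
library(plfm)
data(car)
str(car$freq1)
```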
A correspondence analysis provides a graphic display of such a matrix (see the appendix for all the R code). Well, this ought to look familiar to anyone working in the automotive industry. Let's work our way around the four quadrants: Quadrant I Sporty, Quadrant II Economical, Quadrant III Family, and Quadrant IV Luxury. Another perspective is to see an economy-luxury dimension running from the upper left to the lower right and a family-sporty dimension moving from the lower left to the upper right (i.e., drawing a large X through the graph). I have named these quadrants based only on the relative positions of the attributes by interpreting only the distances between the attributes. Now, I will examine the locations of the car models and rely only the distances between the cars. It appears that the economy cars, including the partially hidden Fiat 500, fall into Quadrant II where the Economical attributes also appear. The family cars are in Quadrant III, which is where the Family attributes are located. Where would you be if you were the BMW X5? Respondents would be likely to associate with you the same attributes as the Audi A4 and the Mercedes C-class, so you would find yourself in the cluster formed by these three car models. Why am I talking in this way? Why don't I just say that the BMW X5 is seen as Powerful and therefore placed near its descriptor? I have presented the joint plot from correspondence analysis, which means that we interpret the inter-attribute distances and the inter-car distances but not the car-attribute distances. It is a long story with many details concerning how distances are scaled (chi-square distances), how the data matrix is decomposed (singular value decomposition), and how the coordinates are calculated. None of this is the focus of this post, but it is so easy to misinterpret a perceptual map that some warning must be issued. A reference providing more detail might be helpful (see Figure 5c). Using the R code at the end of this post, you will be able to print out the crosstab. Given the space limitation, the attribute profiles for only a few representative car models have been listed below. To make it easier, I have ordered the columns so that the ordering follows the quadrants: the Mazda MX5 is sporty, the Fiat 500 is city focus, the Renault Espace is family oriented, and the BMW X5 is luxurious. When interpreting these frequencies, one needs to remember that it is the relative profile that is being plotted on the correspondence map. That is, two cars with the same pattern of high and low attribute associations would appear near each other even if one received consistently higher mentions. You should check for yourself, but the map seems to capture the relationships between the attributes and the cars in the data table (with the exception of Prius to be discussed next). 
| Attribute | Mazda MX5 | Fiat 500 | Renault Espace | BMW X5 | VW Golf | Toyota Prius |
|---|---|---|---|---|---|---|
| Sporty | 65 | 8 | 1 | 47 | 29 | 8 |
| Nice design | 40 | 35 | 17 | 31 | 20 | 9 |
| Attractive | 39 | 40 | 12 | 36 | 33 | 10 |
| City focus | 9 | 58 | 5 | 1 | 30 | 26 |
| Agile | 22 | 53 | 9 | 15 | 40 | 10 |
| Economical | 3 | 49 | 17 | 1 | 29 | 42 |
| Original | 22 | 37 | 7 | 8 | 5 | 19 |
| Family Oriented | 1 | 3 | 74 | 41 | 12 | 39 |
| Practical | 6 | 39 | 52 | 23 | 44 | 16 |
| Comfortable | 12 | 6 | 47 | 46 | 27 | 23 |
| Versatile | 5 | 5 | 39 | 30 | 25 | 21 |
| Luxurious | 28 | 6 | 10 | 58 | 12 | 11 |
| Powerful | 37 | 1 | 9 | 57 | 20 | 9 |
| Status symbol | 39 | 12 | 6 | 51 | 23 | 16 |
| Outdoor | 13 | 1 | 20 | 46 | 6 | 4 |
| Safe | 4 | 5 | 23 | 40 | 40 | 19 |
| Workmanship | 13 | 3 | 4 | 28 | 14 | 19 |
| Exclusive | 17 | 14 | 3 | 19 | 0 | 8 |
| Reliable | 17 | 11 | 17 | 38 | 58 | 27 |
| Popular | 5 | 24 | 27 | 13 | 55 | 10 |
| Sustainable | 8 | 7 | 18 | 19 | 43 | 29 |
| High trade-in value | 4 | 3 | 0 | 36 | 41 | 4 |
| Good price-quality ratio | 11 | 20 | 15 | 7 | 30 | 21 |
| Value for the money | 9 | 7 | 12 | 8 | 24 | 10 |
| Environmentally friendly | 6 | 32 | 7 | 2 | 20 | 51 |
| Technically advanced | 17 | 2 | 6 | 32 | 10 | 46 |
| Green | 0 | 10 | 2 | 2 | 6 | 36 |

Now, what about Prius? I have included in the appendix the R code to extract a third dimension and generate a plot showing how this third dimension separates the attributes and the cars. If you run this code, you will discover that the third dimension separates Prius from the other cars. In addition, Green and Environmentally Friendly can be found nearby, along with "Technically Advanced." You can visualize this third dimension by seeing Prius as coming out of the two-dimensional map along with the two attributes. This allows us to maintain the two-dimensional map with Prius "tagged" as not as close to VW Golf as shown (e.g., shadowing the Prius label might add the desired 3D effect).

The Perceptual Map Varies with Objects and Features

What would have happened had Prius not been included in the association task? Would the Fiat 500 have been seen as more environmentally friendly? The logical response is to be careful about which cars to include in the competitive set. However, the competitive set is seldom the same for all car buyers. For example, two consumers are considering the same minivan, but one is undecided between the minivan and a family sedan and the other is debating between the minivan and an SUV. Does anyone believe that the comparison vehicle, the family sedan or the SUV, will not impact the minivan perceptions? The brand image that I create in order to complete a survey is not the brand image that I construct in order to make a purchase. The correspondence map is a spatial representation of this one particular data matrix obtained by recruiting and surveying consumers. It is not the brand image. As I have outlined in previous work, brand image is not simply a network of associations evoked by a name, a package, or a logo. Branding is a way of seeing, or as Douglas Holt describes it, "a perceptual frame structuring product experience." I used the term "affordance" in my earlier post to communicate that brand benefits are perceived directly and immediately as an experience. Thus, brand image is not a completed project, stored always in memory, and waiting to be retrieved to fill in our brand-attribute association matrix. Like preference, brand image is constructed anew to complete the task at hand. The perceptual frame provides the scaffolding, but the specific requirements of each task will have unique impacts and instability is unavoidable. Even if we attempt to keep everything the same at two points in time, the brand image construction process will amplify minor fluctuations and make it difficult for an individual to reproduce the same set of responses each time.
However, none of this may impact the correspondence map for we are mapping aggregate data, which can be relatively stable even with considerable random individual variation. Yet, such instability at the individual level must be disturbing for the marketer who believes that brand image is established and lasting rather than a construction adapting to the needs of the purchase context. The initial impulse is to save brand image by adding constraints to the measurement task in order to increase stability. But this misses the point. There is no true brand image to be measured. We would be better served by trying to design measurement tasks that mimic how brand image is constructed under the conditions of the specific purchase task we wish to study. The brand image that is erected when forming a consideration set is not the brand image that is assembled when making the final purchase decision. Neither of these will help us understand the role of image in brand extensions. Adaptive behavior is unstable by design.

Appendix with R code:

library(plfm)
data(car)
str(car)
car$freq1
t(car$freq1[c(14, 11, 7, 5, 1, 4), ])

library(anacor)
ca <- anacor(car$freq1)
plot(ca, conf = NULL)

ca3 <- anacor(car$freq1, ndim = 3)
plot(ca3, plot.dim = c(1, 3), conf = NULL)
2014-07-28 16:33:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28910666704177856, "perplexity": 1045.363793230144}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261249.37/warc/CC-MAIN-20140728011741-00177-ip-10-146-231-18.ec2.internal.warc.gz"}
https://www.wptricks.com/question/ajax-wordpress-json-return-unknown-characters-fo-non-english-characters/
## ajax – WordPress JSON returns unknown characters for non-English characters

Question: For a project I created an endpoint, something like wp-json/HSE/v1/reports, which returns JSON. Everything is fine for English words, but for non-English words I have a real problem: it brings back something like u0645u0627u0647u0627u0646 u0633u06ccu0631u062cu0627u0646, which is very confusing. I also checked wp-json/wp/v2/posts and saw the same problem: English words are fine, but non-English words are not readable. What should I do to fix this? Can anyone help me, please?

0 Answers, 0 views
2021-06-23 14:46:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17664790153503418, "perplexity": 6012.408967222791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488539480.67/warc/CC-MAIN-20210623134306-20210623164306-00248.warc.gz"}
http://mathhelpforum.com/calculus/39559-sequences-limits.html
1. ## Sequences - Limits Okay, I attempted 3 of these and I don't know how to do the other two. So I hope I can get you guys to check if I am on the right track and teach me how to do the other two. Thanks a lot, a lot, a lot! Use the airthmetic of limits, standard limits (clearly stated) or appropriate rules (clearly stated) to compute the limit of each sequence ${a_n}$ if it exists. Otherwise explain why the sequence diverges. (a) $a_n$ = $\frac{log n + 5n^2}{2n^2 + 100}$ I divided the whole thing with $n^2$ and using standard limits I got, $\frac{\lim\infty{\frac{log n}{n^2}} + 5}{2}$ Then I used l'Hopital's rule to solve the limit for the $\frac{log n}{n^2}$ and got 1/2 and just substituted it back in and my answer is 11/4. (b) $a_n = \frac{3^n + n!}{100^n + n^7}$ I don't know how to do this one! (c) $a_n = (2^n + 1)^\frac{1}{n}$ This one is to use sandwich rule right? (d) $a_n = cos(\frac {\pi n}{3n + 5})$ Used continuity rule and got, $cos(\frac {lim \pi n}{lim 3n + 5})$ $cos(\frac { \pi lim n}{3 lim n + lim 5})$ divided the whole thing by n $cos(\frac { \pi lim 1}{3 lim 1 + lim \frac{5}{n}})$ $cos(\frac {\pi}{3})$ = 0.5 (e) $a_n = n tan(\frac{1}{n})$ I don't know how to do this one either. 2. Hello Originally Posted by pearlyc Okay, I attempted 3 of these and I don't know how to do the other two. So I hope I can get you guys to check if I am on the right track and teach me how to do the other two. Thanks a lot, a lot, a lot! Use the airthmetic of limits, standard limits (clearly stated) or appropriate rules (clearly stated) to compute the limit of each sequence ${a_n}$ if it exists. Otherwise explain why the sequence diverges. (a) $a_n$ = $\frac{log n + 5n^2}{2n^2 + 100}$ I divided the whole thing with $n^2$ and using standard limits I got, $\frac{\lim\infty{\frac{log n}{n^2}} + 5}{2}$ Then I used l'Hopital's rule to solve the limit for the $\frac{log n}{n^2}$ and got 1/2 and just substituted it back in and my answer is 11/4. L'Hôpital's rule would yield : $\frac{\frac 1n}{2n}=\frac{1}{2n^2}$, and the limit of this is 0. don't havd time for thinking about the following ones, sorry 3. Originally Posted by pearlyc (d) $a_n = cos(\frac {\pi n}{3n + 5})$ Used continuity rule and got, $cos(\frac {lim \pi n}{lim 3n + 5})$ $cos(\frac { \pi lim n}{3 lim n + lim 5})$ divided the whole thing by n $cos(\frac { \pi lim 1}{3 lim 1 + lim \frac{5}{n}})$ $cos(\frac {\pi}{3})$ = 0.5 This is right I will answer the remaining in a while.. 4. Originally Posted by pearlyc Okay, I attempted 3 of these and I don't know how to do the other two. So I hope I can get you guys to check if I am on the right track and teach me how to do the other two. Thanks a lot, a lot, a lot! Use the airthmetic of limits, standard limits (clearly stated) or appropriate rules (clearly stated) to compute the limit of each sequence ${a_n}$ if it exists. Otherwise explain why the sequence diverges. (a) $a_n$ = $\frac{log n + 5n^2}{2n^2 + 100}$ I divided the whole thing with $n^2$ and using standard limits I got, $\frac{\lim\infty{\frac{log n}{n^2}} + 5}{2}$ Then I used l'Hopital's rule to solve the limit for the $\frac{log n}{n^2}$ and got 1/2 and just substituted it back in and my answer is 11/4. (b) $a_n = \frac{3^n + n!}{100^n + n^7}$ I don't know how to do this one! (c) $a_n = (2^n + 1)^\frac{1}{n}$ This one is to use sandwich rule right? 
(d) $a_n = cos(\frac {\pi n}{3n + 5})$ Used continuity rule and got, $cos(\frac {lim \pi n}{lim 3n + 5})$ $cos(\frac { \pi lim n}{3 lim n + lim 5})$ divided the whole thing by n $cos(\frac { \pi lim 1}{3 lim 1 + lim \frac{5}{n}})$ $cos(\frac {\pi}{3})$ = 0.5 (e) $a_n = n tan(\frac{1}{n})$ I don't know how to do this one either. For b) Remember that the factorial function grows faster than polynomials or exponentials. The series diverges . For c) try this trick $a_n = (2^n + 1)^\frac{1}{n}$ Take the natural log of both sides $\ln(a_n) = \ln \left((2^n + 1)^\frac{1}{n}\right))$ using log properties we get $\ln(a_n)=\frac{1}{n} \cdot \ln(2^n+1) =\frac{\ln(2^n+1)}{n}$ We can now use L'hospitals rule to get $\ln(a_n)=\frac{\frac{(\ln(2))2^n}{2^n+1}}{1}$ Now letting n go to infinity gives $\ln(a_n)=\ln(2) \iff a_n=2$ For d you are correct for e) rewrite as $\frac{\tan(\frac{1}{n})}{\frac{1}{n}}$ and use L.H rule Good luck. 5. Hi (c) $a_n = (2^n + 1)^\frac{1}{n}$ This one is to use sandwich rule right? Yes, you can use the squeeze theorem : $2^{n+1} > 2^n+1 > 2^n$ hence ... 6. Whoa, thanks for the many responses guys (: Took your guidance and attempted the questions! For (c), this is how far I've got .. $\sqrt[n]{2^n} <\sqrt[n]{2^n + 1} < \sqrt[n]{2^{n+1}}$ $lim (2)^\frac{1}{n} < \sqrt[n]{2^n + 1} < lim 2.2^\frac{1}{n}$ $1 < \sqrt [n]{2^n+1} < 2$ Where do I go from here? Hmm. As for (e), I followed TheEmptySet's advice and used L.H. rule, and this is what I've got, After differentiating, $\frac {-1}{x^2} sec^2x$ I don't know where to go from here too 7. Originally Posted by pearlyc $\sqrt[n]{2^n} <\sqrt[n]{2^n + 1} < \sqrt[n]{2^{n+1}}$ $lim (2)^\frac{1}{n} < \sqrt[n]{2^n + 1} < lim 2.2^\frac{1}{n}$ You missed a minor point. Try again... what is $\sqrt[n]{2^n}$ As for (e), I followed TheEmptySet's advice and used L.H. rule, and this is what I've got, After differentiating, $\frac {-1}{x^2} sec^2x$ I don't know where to go from here too LH rule can be a little dangerous here. Instead modify the question a little bit and see if you can recognize the limit... When $n \to \infty, \frac1{n} \to 0$, so lets call $\frac1{n}$ as $\theta$. Now where have I seen $\lim_{\theta \to 0} \frac{\tan \theta}{\theta}$? If you havent seen this limit before, LH is still an option on this... 8. Oh OOPS! Thanks, hahaha. Eh, I still don't really get that tan question! 9. Originally Posted by pearlyc Oh OOPS! Thanks, hahaha. Eh, I still don't really get that tan question! Limit(n to infinity) is equivalent to Try L'Hospitals rule 10. Originally Posted by pearlyc As for (e), I followed TheEmptySet's advice and used L.H. rule, and this is what I've got, After differentiating, $\frac {-1}{x^2} sec^2x$ I don't know where to go from here too Well part of the problem is you shouldn't have ended up here. Note that the orginial sequence was $n\tan\left( \frac{1}{n}\right)$ Rewriting as $\frac{\tan\left( \frac{1}{n}\right)}{\frac{1}{n}}$ as $n \to \infty$ this goes to $\frac{0}{0}$ Now applying L'hospitials rule we get $\frac{\sec^{2}(\frac{1}{n})\cdot (\frac{-1}{n^2})}{(\frac{-1}{n^2})}$ This is where your error occured you forgot to use the chain rule when taking the derivative of the tangent function Now when we reduce we get $\lim_{n \to \infty}{\sec^{2}\left( \frac{1}{n}\right)} \to \sec^2(0)=1^2=1$ I hope this clears it up. Good luck.
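For completeness, here is a short way to make the claim about (b) precise (a standard argument, added here rather than taken from the thread). Dividing numerator and denominator by $n!$:

$a_n = \frac{3^n + n!}{100^n + n^7} = \frac{\frac{3^n}{n!} + 1}{\frac{100^n}{n!} + \frac{n^7}{n!}}$

Each of $\frac{3^n}{n!}$, $\frac{100^n}{n!}$ and $\frac{n^7}{n!}$ tends to $0$ (for example, once $n > 200$ every successive term of $\frac{100^n}{n!}$ is obtained by multiplying by $\frac{100}{n+1} < \frac{1}{2}$), so the numerator tends to $1$ while the denominator tends to $0$ from above. Hence $a_n \to \infty$ and the sequence diverges.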
2013-12-19 08:37:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 67, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9070823788642883, "perplexity": 635.1395330402539}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345762590/warc/CC-MAIN-20131218054922-00064-ip-10-33-133-15.ec2.internal.warc.gz"}
http://m-phi.blogspot.com/2013/07/epistemological-reductionism-and.html
## Wednesday, 3 July 2013

### Epistemological Reductionism and Sceptical Access Problems

Some thoughts on epistemological reductionism. Epistemological reductionism is, broadly speaking, an attempt to answer sceptical worries concerning epistemic "access". For example, how are we to have representational epistemic access to:

• states of affairs (e.g., future ones or long past ones),
• mathematicalia (e.g., infinite sets),
• moral properties (e.g., the property of being morally obliged somehow),
• possibilities (e.g., a possible world in which there are $\aleph_0$ members of the Beatles),
• the structure of space and time (e.g., the fine-grained topology of space below the Planck scale),
• causal connections (e.g., the connection between the magnetic field and force on a nearby electron),
• etc.?

Epistemological reductionism aims to answer these "sceptical access problems" by proposing certain kinds of reduction, such as:

1. If $p$, then it is knowable that $p$.
2. If a term $t$ has a value, then that value can be computed/constructed.
3. If a term $t$ has a value, then that value has been physically tokened.
4. If $P$ is a proof of $\phi$, then someone (or some community) grasps and accepts $P$.

Each of these reductionist proposals attempts to "close the gap" between the world and the mind. For example, if $p$, then rational inquiry would yield an epistemic warrant for $p$. This is the core assumption of Semantic Anti-Realism: that each truth is knowable. (A similar view was advocated by Kant, Peirce and Dummett.) However, Descartes, Hume, Russell and Popper all argued, in their own way, that these epistemic "gaps" cannot be closed. (Descartes went on to try and close the gap by a complicated argument, set out in his Meditations, involving God.) For the possibility that some state of affairs obtains of which we are non-cognizant cannot, at least not with certainty, be ruled out. That said, such a conclusion does not imply that one ought to be a sceptic; it merely says that we can't rule out sceptical scenarios. Human cognition, which I assume is neurophysiologically much like primate cognition (and in some respects like all animal cognition), presumably functions reasonably well in acquiring representational states which count as knowledge. Unfortunately, little is understood about this important topic in cognitive psychology, mainly because it is incredibly unclear what these representational states are.

#### 2 comments:

1. "2. If a term t has a value, then that value can be computed/constructed." Somebody doesn't like non-constructive proofs. What does the word "value" mean in this context? Are we denying the Axiom of Choice here? One thing often ignored by AC-haters is that the consequences of denying AC are worse than the consequences of accepting it. If one is a constructivist, fine ... but your real line is full of holes and the intermediate value theorem is false. I can't live in a mathematical world like that and neither can the vast majority of working mathematicians. If you want to have a continuum, then there must be an uncountable infinity of points that can never be defined, named, characterized, outputted by a Turing machine, approximated by an algorithm, etc. I just don't understand the desire to name everything. Fact is there simply aren't enough names. Deal with it.

2. Thanks, Anon, Yes, I agree :) I'm thinking here of very low-level, computational terms in arithmetic, i.e., numeral terms in arithmetic, such as $(2 \cdot 3) + 5$, or $2^{2^{2^{2}}}$, etc.
Ultra-finitists think that if a term $t$ has a value, then there should be an actual computation verifying it. Yes, one could generalize the point to all sorts of valuations, and to cases where AC becomes relevant. For a case where AC isn't relevant, we could consider e.g., $\| GC \|_{\mathbb{N}}$, i.e., the truth value of Goldbach's Conjecture in the standard model $\mathbb{N}$. No one knows what this is. But normally we assume that each arithmetic statement has a truth value, even though we're not guaranteed to ever find out. Cheers, Jeff
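As a trivial illustration of what "an actual computation verifying it" amounts to for such numeral terms (R is used here purely as a calculator):

```r
(2 * 3) + 5   # 11
2^2^2^2       # ^ is right-associative, so this is 2^(2^(2^2)) = 2^16 = 65536
```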
2019-04-24 08:55:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6147122383117676, "perplexity": 1211.3385485985189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578636101.74/warc/CC-MAIN-20190424074540-20190424100540-00416.warc.gz"}
http://www.homermultitext.org/chsseminar/2018/scholion-markers/
Indexing scholion markers For the Upsilon 1.1 or Venetus B manuscripts, scholion markers within the Iliad text link passages in the Iliad to scholia. We need to record these after editing the scholia. Create an index file 1. In your repository, please create a directory (folder) named scholion-markers 2. In the scholion-markers directory, create a file with a name ending in .cex 3. Add this heading line to the cex file: reading#image#scholion#linked text Each line represents one entry, with four pieces of information. 1. The reading of the marker. Use HMT XML markup as you would in your edition. For example, if the marker is a Greek numeral 1, you should record <num>α</num> 2. A region of interest on an image illustrating the marker and the Iliadic word it is placed over. 3. The CTS URN for the scholion this marker links to. 4. A CTS URN for the Iliad line that is linked, including a subreference (beginning @) identifying the word that is marked. Example Here is a valid entry: <num>Θ</num>#urn:cite2:hmt:vbbifolio.v1:vb_128v_129r@0.5227,0.6307,0.03371,0.03202#urn:cts:greekLit:tlg5026.vb:129r_9#urn:cts:greekLit:tlg0012.tlg001.vb:10.1@παρὰ Breaking out each part: 1. <num>Θ</num> is the reading (numeric 9) 2. urn:cite2:hmt:vbbifolio.v1:vb_128v_129r@0.5227,0.6307,0.03371,0.03202 is the image reference (illustrated below) 3. urn:cts:greekLit:tlg5026.vb:129r_9 is the URN for the scholion linked to this passage 4. urn:cts:greekLit:tlg0012.tlg001.vb:10.1@παρὰ is the Iliad passage website © 2018, the Homer Multitext project
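One way to sanity-check such an index file is to read it back with any tool that understands "#"-delimited text. The sketch below uses R with a hypothetical file name, and simply checks the column count and the presence of word-level subreferences:

```r
# Read a scholion-marker index (fields are separated by "#", first line is the header)
markers <- read.delim("scholion-markers/markers.cex", sep = "#",
                      quote = "", stringsAsFactors = FALSE)

stopifnot(ncol(markers) == 4)                      # reading, image, scholion, linked text
stopifnot(all(grepl("@", markers$linked.text)))    # every Iliad URN carries a subreference
head(markers)
```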
2019-07-16 04:34:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2620265483856201, "perplexity": 12329.63351654443}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524502.23/warc/CC-MAIN-20190716035206-20190716061206-00196.warc.gz"}
https://www.gamedev.net/forums/topic/371075-char-to-lpcwstr/
# char* to LPCWSTR

## Recommended Posts

I created a window wrapper class long ago with the Visual C++ 2005 Express beta. The class works fine on all the projects that I've created with that version of vc++. However, when I try to create a new project and use that class in the exact same way, it creates a few errors:

c:\reality 101\c++ game engine\input engine\02 - using separate functions\window.cpp(94) : error C2440: '=' : cannot convert from 'char *' to 'LPCWSTR' Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast
c:\reality 101\c++ game engine\input engine\02 - using separate functions\window.cpp(111) : error C2664: 'CreateWindowExW' : cannot convert parameter 2 from 'char *' to 'LPCWSTR' Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast

The problem areas are here:

wcex.lpszClassName = m_ClassName; // Where m_ClassName is char*

m_hWnd = CreateWindowEx(WS_EX_CLIENTEDGE, m_ClassName, m_WindowTitle, dwstyles,
                        rWindow->left, rWindow->top,
                        rWindow->right - rWindow->left, rWindow->bottom - rWindow->top,
                        NULL, NULL, *m_phInstance, (void*)this);

I'm sure if the second function had moved beyond the m_ClassName error, m_WindowTitle would have produced the same error. Now this typecasting was never a problem when I was working before, and in fact the old projects that I'm using with the new C++ Express still work fine. Does anyone know what might be wrong here?

##### Share on other sites

VC++ 2k5 defaults to UNICODE, so either change your char* to wchar_t* or change the project to MBCS in the project properties. Cheers, Pat.

##### Share on other sites

wchar_t* sounds like a great start, however my function call no longer works.

g_Window = new Window(hInstance, "class", "DI 2", winTitle, 50, 50, 640, 480);

Where the inputs for "class" and "DI 2" are now of type wchar_t*. Now, however, I'm getting a new error:

c:\reality 101\c++ game engine\input engine\02 - using separate functions\main.cpp(17) : error C2664: 'Window::Window(HINSTANCE &,wchar_t *,wchar_t *,DWORD,int,int,int,int)' : cannot convert parameter 2 from 'const char [6]' to 'wchar_t *' Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast

I've tried typecasting "class" and "DI 2" to (wchar_t*), but then the title of the window shows up as rubbish. Is there another way to do this?

##### Share on other sites

project->properties->Configuration properties->general, and then under project defaults, under the character set option, change the default "Use Unicode Character Set" to "Use Multi-byte Character Set".

OR

Change:
g_Window = new Window(hInstance, "class", "DI 2", winTitle, 50, 50, 640, 480);
to
g_Window = new Window(hInstance, L"class", L"DI 2", winTitle, 50, 50, 640, 480);

Notice the 'L' prefix in front of where you have char strings.

##### Share on other sites

A good practice is to always wrap your texts in the predefined TEXT() macro; depending on whether the UNICODE symbol has been defined or not, the macro will put the appropriate prefix "L" in front of the text if necessary: "AABB" -> TEXT ("AABB"). It sounds a bit overwhelming, but you'll feel fortunate if sometime in the future someone somehow requires your project to be Unicode-friendly. As with common string-related Win32 API functions, there are actually two versions of each one.
For example: MessageBox() is a macro that expands to either MessageBoxA() or MessageBoxW().

##### Share on other sites

Ahh thanks guys. This is what worked: Changed all the char* datatypes into wchar_t*, did the same with function members, and whenever I use the function, I put an L in front of the text that I want to use. Works fine.

##### Share on other sites

I'm kinda inclined to Seleton's recommendation; you can switch from MBCS to Unicode, and vice-versa, with just a single compilation flag. But instead of TEXT I prefer to use the TCHAR and _T macros:

TCHAR szString[] = _T("Content");

##### Share on other sites

Quote: Original post by HaywireGuy: I'm kinda inclined to Seleton's recommendation; you can switch from MBCS to Unicode, and vice-versa, with just a single compilation flag. But instead of TEXT I prefer to use the TCHAR and _T macros: TCHAR szString[] = _T("Content");

Yeah, I agree with HaywireGuy. In tchar.h there are a bunch of things helpful for this kind of problem. It's basically like this...

#if defined(_MBCS)
#define TCHAR char
#elif defined(_UNICODE)
#define TCHAR wchar_t
#endif

/* then there are a bunch of string-related functions that are defined to use the
   proper MultiByte / Unicode characters.
   It's very useful so you never have to worry about which character set you're using. */
2018-06-18 11:49:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24796797335147858, "perplexity": 9558.555002326402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859766.6/warc/CC-MAIN-20180618105733-20180618125733-00068.warc.gz"}
https://stats.stackexchange.com/questions/246488/does-the-proportional-hazards-assumption-still-matter-if-the-covariate-is-time-d
# Does the proportional hazards assumption still matter if the covariate is time-dependent? If I estimate a Cox Proportional Hazards model and my covariate of interest is dependent (continuous or categorical), does the proportional hazards assumption still matter? I recently went to a presentation where the speaker said that when using a time-dependent covariate, the importance of satisfying this assumption didn't matter but didn't really offer any justification for this, nor did he offer a reference. You are still assuming that the effect of the value at each covariates/factor at each timepoint is the same, you simply allow the covariate to vary its value over time (but the change in the log-hazard rate associated with a particular value is still exactly the same across all timepoints). Thus, it does not change the assumption. Or was the presenter perhaps talking about also putting the covariate by time (or log(time)) interaction in the model as a time-dependent covariate? If you do that (for all covariates), then you have a model that might possibly approximate (a linear interaction cannot fully capture the possibly more complex things that may be going on in any one dataset, but may be okay for approximately capturing it) a model that does not make such an assumption. I may be wrong but I believe that Björn's answer is not completely correct. The proportional hazards assumption means that the ratio of the hazard for a particular group of observations (determined by the values of the covariates) to the baseline hazard (when all covariates are zero) is constant over time. If there are time-varying covariates this is not true, and therefore the Cox model no longer assumes proportional hazards. Here is a quote I have recently come across from David Collett's book, Modelling Survival Data in Medical Research (2nd ed., 2003, p. 253), that may be helpful: It is important to note that in the model given in equation $h_i(t) = \exp \left\{ \sum_{j=1}^p \beta_j x_{ji}(t) \right\} h_o(t)$, the values of the variables $x_{ji}(t)$ depend on the time $t$, and so the relative hazard $h_i(t)/h_0(t)$ is also time-dependent. This means that the hazard of death at time $t$ is no longer proportional to the baseline hazard, and the model is no longer a proportional hazards model. The accepted answer to this question on CV may also be relevant. • This might, however, be more of a terminological rather than a practical distinction. Even if the presence of time-dependent covariate values means that the proportional hazards (PH) assumption does not hold, approaches based on partial likelihood for analyzing Cox PH models still can be used reliably with time-dependent covariates, as references linked from the CV question you cite make clear. The underlying assumption with time-dependent covariate values is as Björn stated: "the change in the log-hazard rate associated with a particular value is still exactly the same across all timepoints." – EdM Jul 9 '18 at 15:20 • Thank you for your comment. I think you raise a good point. Perhaps one practical aspect where this question could be important is in whether or not one would need to test the assumption in an applied setting. In a model with only fixed-time covariates I believe it is advisable to test the proportional hazards assumption, for instance by checking that the Schoenfeld residuals for the different variables are approximately constant over time. I think, however, that this would not make sense with time-varying covariates, though I may be wrong. 
– George Costanza Jul 9 '18 at 19:14 • You can test the assumption that "the change in the log-hazard rate associated with a particular value [of a covariate] is still exactly the same across all timepoints," which for Cox models with time-dependent covariates (assuming that the current value of the covariate determines the instantaneous hazard versus baseline) is the analog of the strict PH assumption. For example, this document (linked from the CV page you cite) shows how to do so by testing the significance of adding a type of covariate*time interaction term to the model. – EdM Jul 9 '18 at 22:08
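To make the two checks discussed above concrete, here is a minimal R sketch (not from the original question or answers) using the survival package and its built-in lung data: cox.zph() performs the Schoenfeld-residual test of proportional hazards, and tt() adds the kind of covariate-by-log(time) interaction mentioned in the last comment.

```r
library(survival)

# Ordinary Cox model with fixed-in-time covariates
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)

# Schoenfeld-residual test: small p-values flag departures from proportional hazards
cox.zph(fit)

# Time-dependent *effect* via a covariate-by-log(time) interaction; a significant
# tt(age) term means the log-hazard effect of age is not constant over follow-up
fit_tt <- coxph(Surv(time, status) ~ age + sex + tt(age), data = lung,
                tt = function(x, t, ...) x * log(t))
summary(fit_tt)
```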
2021-02-25 17:12:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7743350863456726, "perplexity": 527.5740441844674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351374.10/warc/CC-MAIN-20210225153633-20210225183633-00632.warc.gz"}
https://datascience.stackexchange.com/tags/r/new
# Tag Info ## New answers tagged r 1 The roc function in the pROC package allows you to extract the sensitivity and specificity values. I will give an example below. Keep in mind that the $y$-axis is sensitivity, but the $x$-axis is $1 - specificity$. library(pROC) set.seed(2021) N <- 1000 x1 <- rnorm(N) x2 <- rnorm(N) x3 <- rnorm(N) z <- x1 + x2 + x3 pr <- 1/(1 + exp(-z)) y &... 1 You can use the general train from caret to train the model The new entry needs to be added in the form of the Train set, only then it will be able to predict I would have done this like this: library(caret) model_knn<-train(Species ~ ., data = db_class[row_train,], method = "knn",tuneLength = 10) #You can select any other tune length too. ... 0 Your output is showing the death for Kedah only, but it is printing Johor in the title. Instead of editing it every time in the ggplot2 code, I prefer to create a separate list and filter it out in the ggplot2 code. And, instead of glue, I used a simple paste0. Solution: selected_state <- 'Kedah' death_state%>% filter(State %in% selected_state)%>%... 0 Here is a solution by using bisect Python standard library from bisect import bisect from random import sample data = sample(range(10_000), 1_000) breakpoints = [1, 5, 25, 50, 150, 250, 1_000, 5_000, 10_000] buckets = {} for i in data: buckets.setdefault(breakpoints[bisect(breakpoints, i)], []).append(i) this will result in a dictionary with ... 0 The question is why was scikit designed this way. Only a few people can factually answer that question. I have my opinion, but that is all that it is. However formulas can be used with scikit or statsmodels or other packages. Patsy gives the ability. This can be used with scikit as the output of Patsy functions a lot like numpy arrays. An example is here. ... 0 To determine whether a time series is additive or multiplicative we can use seasonal_decompose which provides us 3 seperate components trend,seasonility,and residual.We can check the variance of seasonality and residual components for additive and multiplicative decompose. The seasonality and residual components with constant variance represent the time ... Top 50 recent answers are included
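The first answer's code is cut off above; a short, self-contained version of the same idea (simulated data, illustrative names only) is:

```r
library(pROC)
set.seed(2021)
N  <- 1000
x  <- rnorm(N)
pr <- 1 / (1 + exp(-x))              # true event probabilities
y  <- rbinom(N, 1, pr)               # observed 0/1 outcome
fit <- glm(y ~ x, family = binomial)

r <- roc(y, fitted(fit))             # ROC curve object
head(cbind(threshold   = r$thresholds,
           sensitivity = r$sensitivities,
           specificity = r$specificities))
coords(r, "best")                    # threshold maximising Youden's J
plot(r)                              # note: pROC plots specificity on the x-axis, reversed
```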
2021-09-28 01:52:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3861607611179352, "perplexity": 2271.7880901504222}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058589.72/warc/CC-MAIN-20210928002254-20210928032254-00338.warc.gz"}
https://cs.stackexchange.com/questions/83784/geometric-intuition-behind-vc-dimension
# Geometric intuition behind VC-dimension Recently, I learnt about VC-dimension and how its boundedness assures PAC learnability on uncountable range spaces (let's assume that hypothesis class is the same as the family of concepts we want to learn). My question is simple: What is/are the geometric intuition(s) behind the concept of VC dimension? The VC dimension is a complexity measure for a family of boolean functions over some domain $\mathcal{X}$. Families who allow "richer" behavior have a higher VC dimension. Since $\mathcal{X}$ can be arbitrary, there isn't a general geometric interpretation. However, if you think of $\mathcal{X}$ as $\mathbb{R}^d$, then you can think of binary functions as manifolds, whose boundary is what's separating positive and negative labels. Families with more "complex" boundaries have a high VC dimension, whereas simple manifolds do not, e.g. the dimension of linear separators is $O(d)$, while convex polygons (with unbounded number of edges) have infinite VC dimension. The more complex you allow the boundary to be, the more likely it is that you can find a large set for which you can agree with any labeling, by avoiding the negative labels in a "snake like" shape. • Suppose the range space is $(\mathbb{R}^d, \mathcal{M})$, where $\mathcal{M}$ is a family of manifolds. To each $M \in \mathcal{M}$ you are associating the canonical indicator function $\mathbb{1}_M$, right? Nov 12 '17 at 10:01
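A concrete instance of the two closing claims may help (standard textbook facts, added here for illustration). Take any $n$ points on a circle in $\mathbb{R}^2$. For every subset $S$ of them, the convex hull of $S$ is a convex polygon that contains exactly the points of $S$ and no others, since no point on a circle lies in the convex hull of the remaining points; hence convex polygons shatter arbitrarily large sets and their VC dimension is infinite. By contrast, a half-plane (a linear separator in $\mathbb{R}^2$) can never shatter four points: for four points in convex position the labelling that alternates around the hull cannot be realised, and if one point lies inside the triangle of the other three it cannot be labelled positive while they are all negative. So the VC dimension of half-planes is $3 = d + 1$.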
2021-11-29 17:04:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8254061341285706, "perplexity": 447.83611037023746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00063.warc.gz"}
https://www.physicsforums.com/threads/question-on-the-particles-that-formed-the-earth.919113/
# I Question on the particles that formed the Earth. 1. Jul 1, 2017 ### Damian79 Full disclosure, I am a creationist, but i want to know the finer points about the big bang and the creation of the universe. So we know that the formation of new rock from lava doesnt make them "day zero" rocks, ie they still would be considered aged when we do radiometric dating. So we know these changes dont change their "clocks" on how old they are, I think this is accepted among creationists and non creationists alike. So how do we know when the earth was formed by the particles of the big bang that the particles from the big bang havent aged on the way to the creation of the Earth assuming the particles from the big bang are "day zero" particles? Could being in the proximity of antimatter age or reverse age matter? So many questions regarding this but I'll stat here. 2. Jul 1, 2017 ### Orodruin Staff Emeritus This is false. For example, potassium-argon dating is performed by comparing the potassium and argon abundances in the rock. The argon created by potassium decays while the rock is molten, but once it solidifies it traps the argon. The rock is therefore "day zero" due to not having any argon in it when it is formed and you can perform the dating by comparing the amounts of potassium and argon. For basic information on K-Ar dating, see the wikipedia page. 3. Jul 1, 2017 ### Staff: Mentor All dating methods where one element can form a crystal but its decay product cannot form the same crystal start at zero age when the rock solidifies. All dating methods using radiation damage in solids start at zero age. Basically all dating methods for anorganic material rely on one of these two ideas. Not a coincidence, you need a well-known initial state. It was not. The big bang only produced hydrogen, helium and tiny amounts of lithium. Most of Earth is made out of heavier elements that formed in stars later. For things like the overall uranium isotope ratio (238 to 235; 234 is produced from 238 decay so that is special), what we see is indeed not the age of the Earth, it is the age of the uranium, and it is a bit older than Earth. This ratio on its own is not used for dating. No. And there are no relevant amounts of antimatter around anyway. 4. Jul 1, 2017 ### Staff: Mentor Hi Damian79. Welcome to PF! Before we begin this discussion (which appears to have already started while I was typing this), I'd like to make it clear that ALL discussion should take place in the context of known science. This means that if someone tells you that X is true or Y is the way that something works, we are talking about those things as currently understood by the mainstream scientific community. There is no discussion of "absolute truth" here. I say this because I want to avoid many of the issues that often plague these conversations where criticism is given of the scientific view for not "truly" knowing what happened in the past or at large distances. We fully know and admit that we can't know any absolute truth and any statements or facts given here should always be understood as being part of a theory or model that is always being tested and verified to the best of our abilities. And rather than being a weakness of science, it's actually a strength in that it allows us to constantly ensure that our body of knowledge is as accurate as possible For starters, this is not how cosmologists and other scientists model and understand the formation of the Earth or anything within the universe. 
It would be beyond the scope of this post and probably this thread to give you the entire history of the universe as given in the standard model of cosmology (you can find a decent explanation on wikipedia), but we can talk about a few key points. Note that this is a very brief and general overview and is not intended to be an extremely accurate description.

1. The big bang and subsequent evolution of the universe resulted in the formation of mostly hydrogen and helium, with a tiny smattering of lithium and a few other light elements (we're going to mostly ignore dark matter here, as it's not well understood yet and doesn't do much except provide extra gravity to help form galaxies and galaxy clusters).

2. These atoms eventually coalesced under gravity to form the galaxies and then the first stars.

3. The fusion of light elements inside these stars created heavier elements like carbon, oxygen, nitrogen, etc. These first stars were very, very massive and eventually underwent supernova, spreading their heavier elements out into the universe to mix with the hydrogen and helium gas still out there. Galaxy and star formation continued, pumping out larger quantities of heavier elements over time.

4. During subsequent star formation, the heavier elements formed what we call "dust". Now, dust is a very different thing than hydrogen and helium gas and has a profound impact on the events of star formation. With only hydrogen and helium (and perhaps trace quantities of lithium), the collapsing gas cloud tends to just get blown away once the proto-star becomes hot enough to emit lots of radiation and solar wind. There is no formation of rocky planets at this time because there are no heavier elements. However, once you add carbon, oxygen, nitrogen, iron, and the dozens of other heavier elements (including uranium) to the collapsing cloud of dust and gas, things change. Heavy elements are much denser than either hydrogen or helium, and when the collapsing cloud of dust and gas forms a large, rotating disk surrounding the proto-star they tend to "stick together" to form molecules, dust grains, and small rocks that aren't simply blown away when the proto-star heats up. Over time, these rocks collide and merge with other rocks to form larger bodies, which then collide with more material, building up what are called "planetesimals". Further merging of these planetesimals results in the formation of proto-planets which eventually become full-fledged planets as they finally merge with the remaining material.

5. Now, this is where a crucial part of dating the ages of rocks comes into play. At first, the proto-planets and newborn planets are very, very hot. So hot that they are essentially completely molten. Over time they cool down and the different elements are able to form solid rock. The particular composition of this rock is extremely important. We know that certain elements only bond in certain ways with other elements. For example, a particular type of rock is formed by silicon, oxygen, and zirconium and is known as zircon. Zircon has the property that it readily incorporates uranium into itself, but it strongly rejects lead during its formation. So as the Earth cooled, zircon formed wherever there were sufficient quantities of oxygen, silicon, zirconium, and uranium. However, uranium is radioactive and has a half-life of about 4 billion years (experiments have verified this to a very high precision).
Over time, part of the uranium that was taken up into zircon decays into various other elements, which themselves also decay into lighter elements. This chain of decay eventually stops at lead. As I said above, lead is strongly rejected by zircon when zircon is initially forming. So we can say with good confidence that any lead present inside zircon is the result of the decay of uranium. By looking at the ratio of lead to uranium, and knowing the decay rate of uranium and its decay products, we can reliably date the age of a sample of rock. Obviously things are more complicated than I've described them, but that's the general idea behind radiometric dating. Now, the reason I explained all of this was to give a very basic overview of how we date rocks and to show that most of the atoms making up the Earth were not formed directly via the big bang, but inside of massive stars and supernovae. When it comes to dating the age of the universe things get a bit more complicated, and we have to use multiple methods that are very difficult to explain if you know very little about astrophysics. For example, I could tell you that we can date the age of a star cluster by looking at the type of stars remaining in the cluster (the ones that haven't undergone supernova yet), but you'd need to know about the details of how stars work to understand why that particular type of dating method works. And things only get more complicated from there. No. Antimatter is understood pretty well. It does not have any "mystical" properties that normal matter lacks. Antimatter works just like matter in all respects except that the sign of certain properties changes (charge goes from positive to negative or vice versa, as an example).

5. Jul 1, 2017

### Damian79

I am a little confused by what you are saying. Do fresh lava rocks return a result of possibly zero days old when radiometric dating is done on them? Do you have a link that shows this?

6. Jul 1, 2017

### Orodruin Staff Emeritus

In molten rock, the argon escapes. When it solidifies there will therefore be no argon. If you make a measurement right after the rock has solidified, you will get an age of zero. Due to the long half-life of potassium-40, "zero" essentially means that you know that the rock is "less than 100000 years" old, as it takes some time for a measurable amount of argon to accumulate. I also suggest you read @Drakkith 's post regarding uranium-lead dating, which is based on a similar principle.

7. Jul 1, 2017

### Damian79

Thank you for that primer Drakkith. So we get the dates from calculating the amount of material created by the original material? Or am I wrong here?

8. Jul 1, 2017

### Staff: Mentor

A good source on the general methods of radiometric dating is the Isochron Dating article at Talk.Origins: http://www.talkorigins.org/faqs/isochron-dating.html Potassium-Argon is one of the methods to which the general principles given in this article apply.

9. Jul 1, 2017

### Damian79

I don't see any examples of fresh rocks coming up in the links of "potassium argon dating fresh lava rocks" that have low dates listed in the links. Perhaps my google search is borked because of my search history, so I can only see those dates from creationists, which I know are contested.

10. Jul 1, 2017

### Orodruin Staff Emeritus

Yes, but you also need to know how much of the original material is left. Otherwise you cannot know the fraction of the original material that has decayed and, by extension, the age of the sample. Let us take a hands-on example with made up numbers.
Let us say that your friend has a bunch of peaches and you know that every day your friend will eat half of the peaches that are left, leaving only the seed. If you only count the seeds, you have no way of knowing when the peaches were picked. However, if you see that there are 4 peaches and 28 seeds, then you know that
• there were 8 peaches and 24 seeds 1 day ago
• there were 16 peaches and 16 seeds 2 days ago
• there were 32 peaches and 0 seeds 3 days ago
and consequently the peaches were picked 3 days ago. Without the information on how many peaches there were, or without the information on how many seeds there were, you would not have been able to work out when there were no seeds. Because of low accuracy for young rock, it is very impractical to use K-Ar dating on young rock (all it will tell you is that the rock is less than 100000 years old). For young rock, it is much more interesting to use dating methods that employ nuclei that decay faster, since they will give more accurate results. Of course, you can try to do K-Ar dating on fresh rock, but it will just come out with zero argon abundance and this is not a very exciting result.

11. Jul 1, 2017

### Orodruin Staff Emeritus

To put this in a formula. The basic idea is based on having a number of nuclei $N_0$ of the parent nucleus and none of the daughter at time zero. A priori, you do not know $N_0$. The number of parent nuclei after a time $t$ has passed will be given by $N_P = N_0 2^{-t/t_0}$, where $t_0$ is the half-life of the parent. This also means that the number of daughter nuclei that have been produced is $N_D = N_0 (1 - 2^{-t/t_0})$ and consequently the ratio $R = N_D/N_P$ at time $t$, which is what you can measure, is given by $$R = \frac{1-2^{-t/t_0}}{2^{-t/t_0}} = 2^{t/t_0} - 1 = e^{\ln(2) t/t_0} - 1$$ and we can solve for $t$ as $$t = \frac{t_0}{\ln(2)} \ln(R+1).$$ If you only knew $N_D$ or $N_P$, you would not know what $R$ was. Note that there is no need to know the original number $N_0$; you can make do with just things that you can measure today.

12. Jul 1, 2017

### Damian79

I see. That is the issue I am currently having with accepting all this. I want to see a result that comes to 0.1 million years old or less. Have there been any tests done to prove the assumption that all the argon would leak out and give an almost zero day result? Has there been a study of the rate of argon leaving the rock? So at least I can be led to believe that at the start, the age of the rocks would be zero?

13. Jul 1, 2017

### Orodruin Staff Emeritus

This will be difficult to find. Not because it is not possible, but because it is very basic and rather uninteresting to do such a study, although it would in principle be very easy to do it. Just take some freshly formed rock and try to measure its argon content, you will get zero. I am not a geologist so I do not know the early publication history regarding radiogenic dating. It would however have made sense for early scientists to do such tests with known young samples.

14. Jul 1, 2017

### Staff: Mentor

Pierre-Yves Gillot, Yves Cornette: The Cassignol technique for potassium-argon dating, precision and accuracy: Examples from the Late Pleistocene to Recent volcanics from southern Italy. 2000 years is short enough to use well-documented volcanic eruptions. Table IV compares the measured ages with the actual eruption dates. Eolian islands: Eruptions 1400-1500 years ago, K-Ar measurements range from "0 to 4000 years ago" to "1200-2000 years ago" depending on the sample.
Isle of Ischia: Eruption 715 years ago, K-Ar measurements go from "0 to 2000 years ago" to "300 to 1500 years ago". Random example, not the only such study.

15. Jul 1, 2017

### Staff: Mentor

In addition to the above examples, note that it is a very, very well understood fact that gases in a liquid will diffuse from areas of higher concentrations to areas of lower concentrations if possible (perhaps "concentration" is not the right word. Partial pressures perhaps?).

16. Jul 1, 2017

### Orodruin Staff Emeritus

I stand corrected.

17. Jul 1, 2017

### Damian79

That about wraps it up for the questions from me. Thank you for such quick responses. Sorry for the late reply, I had to do something.
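The age formula in post 11 is easy to check numerically. Below is a minimal Python sketch of that same relation, t = t0/ln(2) · ln(R+1); the half-life and the measured daughter-to-parent ratio used here are made-up illustration values, not data from this thread, and the sketch ignores the branching of potassium-40 decay (only part of it produces argon-40), which real K-Ar dating corrects for.

```python
import math

def age_from_ratio(R, half_life):
    """Age of a sample from the measured daughter/parent ratio R = N_D / N_P,
    assuming no daughter nuclei were present when the sample solidified
    (the "day zero" condition discussed in the thread)."""
    return half_life / math.log(2) * math.log(R + 1)

# Illustrative numbers only: K-40 half-life of ~1.25e9 years and an
# assumed argon-40 to potassium-40 ratio of 0.10.
print(f"{age_from_ratio(R=0.10, half_life=1.25e9):.3e} years")  # ~1.7e8 years
```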
2017-08-21 10:59:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4196290075778961, "perplexity": 813.9318604339319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886108264.79/warc/CC-MAIN-20170821095257-20170821115257-00342.warc.gz"}
http://www.nag.com/numeric/fl/nagdoc_fl24/html/G08/g08cbf.html
G08 Chapter Contents G08 Chapter Introduction NAG Library Manual # NAG Library Routine DocumentG08CBF Note:  before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details. ## 1  Purpose G08CBF performs the one sample Kolmogorov–Smirnov test, using one of the standard distributions provided. ## 2  Specification SUBROUTINE G08CBF ( N, X, DIST, PAR, ESTIMA, NTYPE, D, Z, P, SX, IFAIL) INTEGER N, NTYPE, IFAIL REAL (KIND=nag_wp) X(N), PAR(2), D, Z, P, SX(N) CHARACTER(*) DIST CHARACTER(1) ESTIMA ## 3  Description The data consist of a single sample of $n$ observations denoted by ${x}_{1},{x}_{2},\dots ,{x}_{n}$. Let ${S}_{n}\left({x}_{\left(i\right)}\right)$ and ${F}_{0}\left({x}_{\left(i\right)}\right)$ represent the sample cumulative distribution function and the theoretical (null) cumulative distribution function respectively at the point ${x}_{\left(i\right)}$ where ${x}_{\left(i\right)}$ is the $i$th smallest sample observation. The Kolmogorov–Smirnov test provides a test of the null hypothesis ${H}_{0}$: the data are a random sample of observations from a theoretical distribution specified by you against one of the following alternative hypotheses: (i) ${H}_{1}$: the data cannot be considered to be a random sample from the specified null distribution. (ii) ${H}_{2}$: the data arise from a distribution which dominates the specified null distribution. In practical terms, this would be demonstrated if the values of the sample cumulative distribution function ${S}_{n}\left(x\right)$ tended to exceed the corresponding values of the theoretical cumulative distribution function ${F}_{0}\left(x\right)$. (iii) ${H}_{3}$: the data arise from a distribution which is dominated by the specified null distribution. In practical terms, this would be demonstrated if the values of the theoretical cumulative distribution function ${F}_{0}\left(x\right)$ tended to exceed the corresponding values of the sample cumulative distribution function ${S}_{n}\left(x\right)$. One of the following test statistics is computed depending on the particular alternative null hypothesis specified (see the description of the parameter NTYPE in Section 5). For the alternative hypothesis ${H}_{1}$. • ${D}_{n}$ – the largest absolute deviation between the sample cumulative distribution function and the theoretical cumulative distribution function. Formally ${D}_{n}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{D}_{n}^{+},{D}_{n}^{-}\right\}$. For the alternative hypothesis ${H}_{2}$. • ${D}_{n}^{+}$ – the largest positive deviation between the sample cumulative distribution function and the theoretical cumulative distribution function. Formally ${D}_{n}^{+}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{S}_{n}\left({x}_{\left(i\right)}\right)-{F}_{0}\left({x}_{\left(i\right)}\right),0\right\}$ for both discrete and continuous null distributions. For the alternative hypothesis ${H}_{3}$. • ${D}_{n}^{-}$ – the largest positive deviation between the theoretical cumulative distribution function and the sample cumulative distribution function. 
Formally if the null distribution is discrete then ${D}_{n}^{-}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{F}_{0}\left({x}_{\left(i\right)}\right)-{S}_{n}\left({x}_{\left(i\right)}\right),0\right\}$ and if the null distribution is continuous then ${D}_{n}^{-}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{F}_{0}\left({x}_{\left(i\right)}\right)-{S}_{n}\left({x}_{\left(i-1\right)}\right),0\right\}$. The standardized statistic $Z=D×\sqrt{n}$ is also computed where $D$ may be ${D}_{n},{D}_{n}^{+}$ or ${D}_{n}^{-}$ depending on the choice of the alternative hypothesis. This is the standardized value of $D$ with no correction for continuity applied and the distribution of $Z$ converges asymptotically to a limiting distribution, first derived by Kolmogorov (1933), and then tabulated by Smirnov (1948). The asymptotic distributions for the one-sided statistics were obtained by Smirnov (1933). The probability, under the null hypothesis, of obtaining a value of the test statistic as extreme as that observed, is computed. If $n\le 100$ an exact method given by Conover (1980), is used. Note that the method used is only exact for continuous theoretical distributions and does not include Conover's modification for discrete distributions. This method computes the one-sided probabilities. The two-sided probabilities are estimated by doubling the one-sided probability. This is a good estimate for small $p$, that is $p\le 0.10$, but it becomes very poor for larger $p$. If $n>100$ then $p$ is computed using the Kolmogorov–Smirnov limiting distributions, see Feller (1948), Kendall and Stuart (1973), Kolmogorov (1933), Smirnov (1933) and Smirnov (1948). ## 4  References Conover W J (1980) Practical Nonparametric Statistics Wiley Feller W (1948) On the Kolmogorov–Smirnov limit theorems for empirical distributions Ann. Math. Statist. 19 179–181 Kendall M G and Stuart A (1973) The Advanced Theory of Statistics (Volume 2) (3rd Edition) Griffin Kolmogorov A N (1933) Sulla determinazione empirica di una legge di distribuzione Giornale dell' Istituto Italiano degli Attuari 4 83–91 Siegel S (1956) Non-parametric Statistics for the Behavioral Sciences McGraw–Hill Smirnov N (1933) Estimate of deviation between empirical distribution functions in two independent samples Bull. Moscow Univ. 2(2) 3–16 Smirnov N (1948) Table for estimating the goodness of fit of empirical distributions Ann. Math. Statist. 19 279–281 ## 5  Parameters 1:     N – INTEGERInput On entry: $n$, the number of observations in the sample. Constraint: ${\mathbf{N}}\ge 3$. 2:     X(N) – REAL (KIND=nag_wp) arrayInput On entry: the sample observations ${x}_{1},{x}_{2},\dots ,{x}_{n}$. Constraint: the sample observations supplied must be consistent, in the usual manner, with the null distribution chosen, as specified by the parameters DIST and PAR. For further details see Section 8. 3:     DIST – CHARACTER(*)Input On entry: the theoretical (null) distribution from which it is suspected the data may arise. ${\mathbf{DIST}}=\text{'U'}$ The uniform distribution over $\left(a,b\right)-U\left(a,b\right)$. ${\mathbf{DIST}}=\text{'N'}$ The Normal distribution with mean $\mu$ and variance ${\sigma }^{2}-N\left(\mu ,{\sigma }^{2}\right)$. ${\mathbf{DIST}}=\text{'G'}$ The gamma distribution with shape parameter $\alpha$ and scale parameter $\beta$, where the mean $\text{}=\alpha \beta$. ${\mathbf{DIST}}=\text{'BE'}$ The beta distribution with shape parameters $\alpha$ and $\beta$, where the mean $\text{}=\alpha /\left(\alpha +\beta \right)$. 
${\mathbf{DIST}}=\text{'BI'}$ The binomial distribution with the number of trials, $m$, and the probability of a success, $p$. ${\mathbf{DIST}}=\text{'E'}$ The exponential distribution with parameter $\lambda$, where the mean $\text{}=1/\lambda$. ${\mathbf{DIST}}=\text{'P'}$ The Poisson distribution with parameter $\mu$, where the mean $\text{}=\mu$. Any number of characters may be supplied as the actual parameter, however only the characters, maximum 2, required to uniquely identify the distribution are referenced. Constraint: ${\mathbf{DIST}}=\text{'U'}$, $\text{'N'}$, $\text{'G'}$, $\text{'BE'}$, $\text{'BI'}$, $\text{'E'}$ or $\text{'P'}$. 4:     PAR($2$) – REAL (KIND=nag_wp) arrayInput/Output On entry: if ${\mathbf{ESTIMA}}=\text{'S'}$, PAR must contain the known values of the parameter(s) of the null distribution as follows. If a uniform distribution is used, then ${\mathbf{PAR}}\left(1\right)$ and ${\mathbf{PAR}}\left(2\right)$ must contain the boundaries $a$ and $b$ respectively. If a Normal distribution is used, then ${\mathbf{PAR}}\left(1\right)$ and ${\mathbf{PAR}}\left(2\right)$ must contain the mean, $\mu$, and the variance, ${\sigma }^{2}$, respectively. If a gamma distribution is used, then ${\mathbf{PAR}}\left(1\right)$ and ${\mathbf{PAR}}\left(2\right)$ must contain the parameters $\alpha$ and $\beta$ respectively. If a beta distribution is used, then ${\mathbf{PAR}}\left(1\right)$ and ${\mathbf{PAR}}\left(2\right)$ must contain the parameters $\alpha$ and $\beta$ respectively. If a binomial distribution is used, then ${\mathbf{PAR}}\left(1\right)$ and ${\mathbf{PAR}}\left(2\right)$ must contain the parameters $m$ and $p$ respectively. If an exponential distribution is used, then ${\mathbf{PAR}}\left(1\right)$ must contain the parameter $\lambda$. If a Poisson distribution is used, then ${\mathbf{PAR}}\left(1\right)$ must contain the parameter $\mu$. If ${\mathbf{ESTIMA}}=\text{'E'}$, PAR need not be set except when the null distribution requested is the binomial distribution in which case ${\mathbf{PAR}}\left(1\right)$ must contain the parameter $m$. On exit: if ${\mathbf{ESTIMA}}=\text{'S'}$, PAR is unchanged. If ${\mathbf{ESTIMA}}=\text{'E'}$, then ${\mathbf{PAR}}\left(1\right)$ and ${\mathbf{PAR}}\left(2\right)$ are set to values as estimated from the data. Constraints: • if ${\mathbf{DIST}}=\text{'U'}$, ${\mathbf{PAR}}\left(1\right)<{\mathbf{PAR}}\left(2\right)$; • if ${\mathbf{DIST}}=\text{'N'}$, ${\mathbf{PAR}}\left(2\right)>0.0$; • if ${\mathbf{DIST}}=\text{'G'}$, ${\mathbf{PAR}}\left(1\right)>0.0$ and ${\mathbf{PAR}}\left(2\right)>0.0$; • if ${\mathbf{DIST}}=\text{'BE'}$, ${\mathbf{PAR}}\left(1\right)>0.0$ and ${\mathbf{PAR}}\left(2\right)>0.0$ and ${\mathbf{PAR}}\left(1\right)\le {10}^{6}$ and ${\mathbf{PAR}}\left(2\right)\le {10}^{6}$; • if ${\mathbf{DIST}}=\text{'BI'}$, ${\mathbf{PAR}}\left(1\right)\ge 1.0$ and $0.0<{\mathbf{PAR}}\left(2\right)<1.0$ and ${\mathbf{PAR}}\left(1\right)×{\mathbf{PAR}}\left(2\right)×\left(1.0-{\mathbf{PAR}}\left(2\right)\right)\le {10}^{6}$ and ${\mathbf{PAR}}\left(1\right)<1/\mathrm{eps}$, where $\mathrm{eps}$ is the machine precision, see X02AJF; • if ${\mathbf{DIST}}=\text{'E'}$, ${\mathbf{PAR}}\left(1\right)>0.0$; • if ${\mathbf{DIST}}=\text{'P'}$, ${\mathbf{PAR}}\left(1\right)>0.0$ and ${\mathbf{PAR}}\left(1\right)\le {10}^{6}$. 5:     ESTIMA – CHARACTER(1)Input On entry: ESTIMA must specify whether values of the parameters of the null distribution are known or are to be estimated from the data.
${\mathbf{ESTIMA}}=\text{'S'}$ Values of the parameters will be supplied in the array PAR described above. ${\mathbf{ESTIMA}}=\text{'E'}$ Parameters are to be estimated from the data except when the null distribution requested is the binomial distribution in which case the first parameter, $m$, must be supplied in ${\mathbf{PAR}}\left(1\right)$ and only the second parameter, $p$ is estimated from the data. Constraint: ${\mathbf{ESTIMA}}=\text{'S'}$ or $\text{'E'}$. 6:     NTYPE – INTEGERInput On entry: the test statistic to be calculated, i.e., the choice of alternative hypothesis. ${\mathbf{NTYPE}}=1$ Computes ${D}_{n}$, to test ${H}_{0}$ against ${H}_{1}$, ${\mathbf{NTYPE}}=2$ Computes ${D}_{n}^{+}$, to test ${H}_{0}$ against ${H}_{2}$, ${\mathbf{NTYPE}}=3$ Computes ${D}_{n}^{-}$, to test ${H}_{0}$ against ${H}_{3}$. Constraint: ${\mathbf{NTYPE}}=1$, $2$ or $3$. 7:     D – REAL (KIND=nag_wp)Output On exit: the Kolmogorov–Smirnov test statistic (${D}_{n}$, ${D}_{n}^{+}$ or ${D}_{n}^{-}$ according to the value of NTYPE). 8:     Z – REAL (KIND=nag_wp)Output On exit: a standardized value, $Z$, of the test statistic, $D$, without any correction for continuity. 9:     P – REAL (KIND=nag_wp)Output On exit: the probability, $p$, associated with the observed value of $D$ where $D$ may be ${D}_{n},{D}_{n}^{+}$ or ${D}_{n}^{-}$ depending on the value of NTYPE (see Section 3). 10:   SX(N) – REAL (KIND=nag_wp) arrayOutput On exit: the sample observations, ${x}_{1},{x}_{2},\dots ,{x}_{n}$, sorted in ascending order. 11:   IFAIL – INTEGERInput/Output On entry: IFAIL must be set to $0$, $-1\text{​ or ​}1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1\text{​ or ​}1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$. When the value $-\mathbf{1}\text{​ or ​}\mathbf{1}$ is used it is essential to test the value of IFAIL on exit. On exit: ${\mathbf{IFAIL}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6). ## 6  Error Indicators and Warnings If on entry ${\mathbf{IFAIL}}={\mathbf{0}}$ or $-{\mathbf{1}}$, explanatory error messages are output on the current error message unit (as defined by X04AAF). Errors or warnings detected by the routine: ${\mathbf{IFAIL}}=1$ On entry, ${\mathbf{N}}<3$. ${\mathbf{IFAIL}}=2$ On entry, an invalid code for DIST has been specified. ${\mathbf{IFAIL}}=3$ On entry, ${\mathbf{NTYPE}}\ne 1$, $2$ or $3$. ${\mathbf{IFAIL}}=4$ On entry, ${\mathbf{ESTIMA}}\ne \text{'S'}$ or $\text{'E'}$. ${\mathbf{IFAIL}}=5$ On entry, the parameters supplied for the specified null distribution are out of range (see Section 5). Apart from a check on the first parameter for the binomial distribution (${\mathbf{DIST}}=\text{'BI'}$) this error will only occur if ${\mathbf{ESTIMA}}=\text{'S'}$. ${\mathbf{IFAIL}}=6$ The data supplied in X could not arise from the chosen null distribution, as specified by the parameters DIST and PAR. For further details see Section 8. ${\mathbf{IFAIL}}=7$ The whole sample is constant, i.e., the variance is zero. This error may only occur if (${\mathbf{DIST}}=\text{'U'}$, $\text{'N'}$, $\text{'G'}$ or $\text{'BE'}$) and ${\mathbf{ESTIMA}}=\text{'E'}$. 
${\mathbf{IFAIL}}=8$ The variance of the binomial distribution (${\mathbf{DIST}}=\text{'BI'}$) is too large. That is, $\mathit{mp}\left(1-p\right)>1000000$. ${\mathbf{IFAIL}}=9$ When ${\mathbf{DIST}}=\text{'G'}$, in the computation of the incomplete gamma function by S14BAF the convergence of the Taylor series or Legendre continued fraction fails within $600$ iterations. This is an unlikely error exit. ## 7  Accuracy The approximation for $p$, given when $n>100$, has a relative error of at most 2.5% for most cases. The two-sided probability is approximated by doubling the one-sided probability. This is only good for small $p$, i.e., $p<0.10$ but very poor for large $p$. The error is always on the conservative side, that is the tail probability, $p$, is overestimated. ## 8  Further Comments The time taken by G08CBF increases with $n$ until $n>100$ at which point it drops and then increases slowly with $n$. The time may also depend on the choice of null distribution and on whether or not the parameters are to be estimated. The data supplied in the parameter X must be consistent with the chosen null distribution as follows: • when ${\mathbf{DIST}}=\text{'U'}$, then ${\mathbf{PAR}}\left(1\right)\le {x}_{i}\le {\mathbf{PAR}}\left(2\right)$, for $i=1,2,\dots ,n$; • when ${\mathbf{DIST}}=\text{'N'}$, then there are no constraints on the ${x}_{i}$'s; • when ${\mathbf{DIST}}=\text{'G'}$, then ${x}_{i}\ge 0.0$, for $i=1,2,\dots ,n$; • when ${\mathbf{DIST}}=\text{'BE'}$, then $0.0\le {x}_{i}\le 1.0$, for $i=1,2,\dots ,n$; • when ${\mathbf{DIST}}=\text{'BI'}$, then $0.0\le {x}_{i}\le {\mathbf{PAR}}\left(1\right)$, for $i=1,2,\dots ,n$; • when ${\mathbf{DIST}}=\text{'E'}$, then ${x}_{i}\ge 0.0$, for $i=1,2,\dots ,n$; • when ${\mathbf{DIST}}=\text{'P'}$, then ${x}_{i}\ge 0.0$, for $i=1,2,\dots ,n$. ## 9  Example The following example program reads in a set of data consisting of 30 observations. The Kolmogorov–Smirnov test is then applied twice, firstly to test whether the sample is taken from a uniform distribution, $U\left(0,2\right)$, and secondly to test whether the sample is taken from a Normal distribution where the mean and variance are estimated from the data. In both cases we are testing against ${H}_{1}$; that is, we are doing a two tailed test. The values of D, Z and P are printed for each case. ### 9.1  Program Text Program Text (g08cbfe.f90) ### 9.2  Program Data Program Data (g08cbfe.d) ### 9.3  Program Results Program Results (g08cbfe.r)
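The routine itself is part of the NAG Fortran Library, but the statistic it computes is easy to illustrate with an open-source equivalent. The sketch below is an illustration using scipy.stats.kstest, not a wrapper for G08CBF: it mimics the Section 9 example of testing n = 30 observations against U(0,2) with known parameters, i.e., the analogue of DIST='U', ESTIMA='S', NTYPE=1. Exact p-values may differ slightly from the NAG routine because the exact/asymptotic switching rules are not identical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, size=30)           # a sample of n = 30 observations

# Two-sided one-sample Kolmogorov-Smirnov test against U(0, 2)
result = stats.kstest(x, stats.uniform(loc=0.0, scale=2.0).cdf)
d = result.statistic                          # D_n, largest absolute deviation
z = d * np.sqrt(len(x))                       # standardized Z = D * sqrt(n), as in Section 3
print(f"D = {d:.4f}, Z = {z:.4f}, p = {result.pvalue:.4f}")
```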
2014-11-26 18:07:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 223, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9962069988250732, "perplexity": 1058.392008173876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007301.29/warc/CC-MAIN-20141125155647-00020-ip-10-235-23-156.ec2.internal.warc.gz"}
https://study.com/academy/answer/a-circular-coil-has-500-turns-and-a-radius-14-cm-the-coil-is-moved-in-0-35-seconds-from-an-area-where-there-is-no-magnetic-field-into-an-area-with-a-magnetic-field-of-strength-6-7-times-10-2-t.html
# A circular coil has 500 turns and a radius 14 cm. The coil is moved in 0.35 seconds from an area... ## Question: A circular coil has 500 turns and a radius of 14 cm. The coil is moved in 0.35 seconds from an area where there is no magnetic field into an area with a magnetic field of strength {eq}6.7 \times 10^{-2}\ T {/eq}. The coil remains perpendicular to the magnetic field at all times. a) Find the magnitude of the induced EMF in the coil. b) If the coil has a resistance of 2.7 Ω, find the current in the coil. c) After moving into the field, the coil now remains stationary in the field for 3 seconds. Find the current induced in the coil during this interval. Faraday's Law states that the magnitude of the emf induced in a loop is directly proportional to the rate of change of the magnetic flux linked with the loop; mathematically, {eq}\begin{align} \epsilon = \frac{N\Delta \Phi}{\Delta t} \end{align} {/eq} where {eq}\Delta \Phi {/eq} is the change in the magnetic flux, N is the number of turns in the loop, and {eq}\Delta t {/eq} is the time taken. Data Given • Number of turns in the coil {eq}N = 500 {/eq} • Radius of the coil {eq}r = 14 \ \rm cm = 0.14 \ \rm m {/eq} • The final magnetic field linked with the coil {eq}B_f = 6.7 \times 10^{-2} \ \rm T {/eq} • Time elapsed {eq}\Delta t = 0.35 \ \rm s {/eq} • Resistance of the coil {eq}R = 2.7 \ \Omega {/eq} Part A) Let us use Faraday's law to calculate the emf induced in the loop: {eq}\begin{align} \epsilon = \frac{N\Delta \Phi}{\Delta t} \end{align} {/eq} {eq}\begin{align} \epsilon = \frac{NA \Delta B}{\Delta t} \end{align} {/eq} {eq}\begin{align} \epsilon = \frac{500 \times \pi \times (0.14\ \rm m)^2 \times (6.7 \times 10^{-2} \ \rm T-0 \ \rm T)}{0.35 \ \rm s} \end{align} {/eq} {eq}\begin{align} \color{blue}{\boxed{ \ \epsilon = 5.89 \ \rm V \ }} \end{align} {/eq} Part B) Current in the coil, using Ohm's law: {eq}\begin{align} I = \frac{V}{R} \\ I = \frac{ 5.89 \ \rm V}{2.7 \ \rm \Omega} \\ \color{blue}{\boxed{ \ I = 2.2 \ \rm A \ }} \end{align} {/eq} Part C) As the coil is stationary in the field for the 3 s, the flux linked with the coil remains constant, so the induced emf, and hence the induced current, during this interval will be zero. {eq}\begin{align} \color{blue}{\boxed{ \ I' =0 \ \rm A \ }} \end{align} {/eq}
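A quick numerical cross-check of parts (a) and (b), using only the quantities stated in the problem (this is just a sketch of the same arithmetic, not part of the original solution):

```python
import math

N = 500        # number of turns
r = 0.14       # coil radius, m
B = 6.7e-2     # final magnetic field, T (initial field is zero)
dt = 0.35      # time taken, s
R = 2.7        # coil resistance, ohm

emf = N * math.pi * r**2 * B / dt   # |EMF| = N * A * dB/dt with A = pi * r^2
I = emf / R                         # Ohm's law
print(f"EMF = {emf:.2f} V, I = {I:.2f} A")   # ~5.89 V and ~2.18 A (about 2.2 A)
```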
2020-04-10 13:51:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 7, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997798800468445, "perplexity": 1550.3443033067285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371896913.98/warc/CC-MAIN-20200410110538-20200410141038-00199.warc.gz"}
https://www.transtutors.com/questions/the-following-account-appears-in-the-ledger-after-only-part-281493.htm
# The following account appears in the ledger after only part

The following account appears in the ledger after only part of the postings has been completed for January:

Work in Process
Balance, January 1 ........ $15,500
Direct materials .......... $86,200
Direct labor .............. $64,300
Factory overhead .......... $93,700

Jobs finished during January are summarized as follows:
Job 320 ..... $57,600
Job 326 ..... $75,400
Job 327 ..... $26,100
Job 350 ..... $94,800

a. Journalize the entry to record the jobs completed.
b. Determine the cost of the unfinished jobs at January 31.
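For part (b), the arithmetic can be sketched directly from the figures above (a hedged illustration, not the textbook's official solution; part (a) would be the entry debiting Finished Goods and crediting Work in Process for the completed-jobs total):

```python
wip_total  = 15_500 + 86_200 + 64_300 + 93_700   # balance + materials + labor + overhead
finished   = 57_600 + 75_400 + 26_100 + 94_800   # Jobs 320, 326, 327, 350
unfinished = wip_total - finished
print(wip_total, finished, unfinished)           # 259700 253900 5800
```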
2020-01-25 12:24:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.226205512881279, "perplexity": 8282.999327008545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251672440.80/warc/CC-MAIN-20200125101544-20200125130544-00231.warc.gz"}
https://practicepaper.in/gate-cse/data-structure
# Data Structure

Question 1
Consider the following ANSI C program:
#include <stdio.h>
#include <stdlib.h>
struct Node{ int value; struct Node *next; };
int main( ) {
    struct Node *boxE, *head, *boxN;
    int index = 0;
    boxE = head = (struct Node *) malloc(sizeof(struct Node));
    for (index = 1; index <= 3; index++){
        boxN = (struct Node *) malloc(sizeof(struct Node));
        boxE -> next = boxN;
        boxN -> value = index;
        boxE = boxN;
    }
    for (index = 0; index <= 3; index++) {
        printf("Value at index %d is %d\n", index, head -> value);
        printf("Value at index %d is %d\n", index+1, head -> value);
    }
}
Which one of the following statements below is correct about the program?
A Upon execution, the program creates a linked-list of five nodes
B Upon execution, the program goes into an infinite loop
C It has a missing return which will be reported as an error by the compiler
D It dereferences an uninitialized pointer that may result in a run-time error
GATE CSE 2021 SET-2      Link List Question 1 Explanation:

Question 2
Consider a complete binary tree with 7 nodes. Let A denote the set of first 3 elements obtained by performing Breadth-First Search (BFS) starting from the root. Let B denote the set of first 3 elements obtained by performing Depth-First Search (DFS) starting from the root. The value of |A-B| is _____________
A 3 B 4 C 1 D 2
GATE CSE 2021 SET-2      Binary Tree Question 2 Explanation:

Question 3
What is the worst-case number of arithmetic operations performed by recursive binary search on a sorted array of size n?
A $\Theta (\sqrt{n})$ B $\Theta ( \log _2 (n))$ C $\Theta ( n^2)$ D $\Theta ( n)$
GATE CSE 2021 SET-2      Array Question 3 Explanation:

Question 4
Let H be a binary min-heap consisting of n elements implemented as an array. What is the worst case time complexity of an optimal algorithm to find the maximum element in H?
A $\Theta (1)$ B $\Theta (\log n)$ C $\Theta ( n)$ D $\Theta (n \log n)$
GATE CSE 2021 SET-2      Heap Tree Question 4 Explanation:

Question 5
Consider a dynamic hashing approach for 4-bit integer keys:
1. There is a main hash table of size 4.
2. The 2 least significant bits of a key are used to index into the main hash table.
3. Initially, the main hash table entries are empty.
4. Thereafter, when more keys are hashed into it, to resolve collisions, the set of all keys corresponding to a main hash table entry is organized as a binary tree that grows on demand.
5. First, the 3rd least significant bit is used to divide the keys into left and right subtrees.
6. To resolve more collisions, each node of the binary tree is further sub-divided into left and right subtrees based on the 4th least significant bit.
7. A split is done only if it is needed, i.e., only when there is a collision.
Consider the following state of the hash table. Which of the following sequences of key insertions can cause the above state of the hash table (assume the keys are in decimal notation)?
A 5,9,4,13,10,7 B 9,5,10,6,7,1 C 10,9,6,7,5,13 D 9,5,13,6,10,14
GATE CSE 2021 SET-1      Hashing Question 5 Explanation:

Question 6
Consider the following sequence of operations on an empty stack.
push(54); push(52); pop(); push(55); push(62); s=pop();
Consider the following sequence of operations on an empty queue.
enqueue(21); enqueue(24); dequeue(); enqueue(28); enqueue(32); q=dequeue();
The value of s+q is ___________.
A 94 B 83 C 79 D 86
GATE CSE 2021 SET-1      Stack Question 6 Explanation:

Question 7
A binary search tree T contains n distinct elements.
What is the time complexity of picking an element in T that is smaller than the maximum element in T?
A $\Theta(n\log n)$ B $\Theta(n)$ C $\Theta(\log n)$ D $\Theta(1)$
GATE CSE 2021 SET-1      Binary Search Tree Question 7 Explanation:

Question 8
Let P be an array containing n integers. Let t be the lowest upper bound on the number of comparisons of the array elements, required to find the minimum and maximum values in an arbitrary array of n elements. Which one of the following choices is correct?
A $t \gt 2n-2$ B $t \gt 3\lceil \frac{n}{2}\rceil \text{ and } t\leq 2n-2$ C $t \gt n \text{ and } t\leq 3\lceil \frac{n}{2}\rceil$ D $t \gt \lceil \log_2(n)\rceil \text{ and } t\leq n$
GATE CSE 2021 SET-1      Array Question 8 Explanation:

Question 9
A stack is implemented with an array ${ }^{\prime} A[0 \ldots N-1]^{\prime}$ and a variable $\text { 'pos'. }$ The push and pop operations are defined by the following code.
push (x)
  A[pos] <- x
  pos <- pos - 1
end push
pop()
  pos <- pos + 1
  return A[pos]
end pop
Which of the following will initialize an empty stack with capacity N for the above implementation?
A $\text { pos } \leftarrow-1$ B $\text { pos } \leftarrow 0$ C $\text { pos } \leftarrow 1$ D $\text { pos } \leftarrow N-1$
ISRO CSE 2020      Stack Question 9 Explanation:

Question 10
Of the following, which best approximates the ratio of the number of nonterminal nodes to the total number of nodes in a complete K-ary tree of depth N ?
A 1/N B N-1/N C 1/K D K-1/K
ISRO CSE 2020      n-ary Tree Question 10 Explanation:
There are 10 questions to complete.
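As a worked illustration of Question 6, the two operation sequences can be simulated directly; the snippet below just traces the given calls with a Python list standing in for the stack and the queue (an illustration, not an official answer key).

```python
stack, queue = [], []

# Stack: push(54); push(52); pop(); push(55); push(62); s = pop();
stack.append(54); stack.append(52); stack.pop()
stack.append(55); stack.append(62); s = stack.pop()       # s = 62

# Queue: enqueue(21); enqueue(24); dequeue(); enqueue(28); enqueue(32); q = dequeue();
queue.append(21); queue.append(24); queue.pop(0)
queue.append(28); queue.append(32); q = queue.pop(0)      # q = 24

print(s + q)   # 86
```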
2021-12-07 03:39:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 22, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3604821562767029, "perplexity": 1809.8317607487443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363332.1/warc/CC-MAIN-20211207014802-20211207044802-00367.warc.gz"}
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=into&paperid=338&option_lang=eng
Itogi Nauki i Tekhniki. Ser. Sovrem. Mat. Pril. Temat. Obz., 2018, Volume 151, Pages 37–44 (Mi into338)

Analogs of the Lebesgue Measure in Spaces of Sequences and Classes of Functions Integrable with respect to These Measures

Moscow Institute of Physics and Technology (State University)

Abstract: We examine translation-invariant measures on Banach spaces $l_p$, where $p\in[1,\infty]$. We construct analogs of the Lebesgue measure on Borel $\sigma$-algebras generated by the topology of pointwise convergence ($\sigma$-additive, invariant under shifts by arbitrary vectors, regular measures). We show that these measures are not $\sigma$-finite. We also study spaces of functions integrable with respect to measures constructed and prove that these spaces are not separable. We consider various dense subspaces in spaces of functions that are integrable with respect to a translation-invariant measure. We specify spaces of continuous functions, which are dense in the functional spaces considered. We discuss Borel $\sigma$-algebras corresponding to various topologies in the spaces $l_p$, where $p\in[1,\infty]$. For $p\in [1, \infty)$, we prove the coincidence of Borel $\sigma$-algebras corresponding to certain natural topologies in the given spaces of sequences and the Borel $\sigma$-algebra corresponding to the topology of pointwise convergence. We also verify that the space $l_\infty$ does not possess similar properties.

Keywords: translation-invariant measure, topology of pointwise convergence, Borel $\sigma$-algebra, space of integrable functions, approximation of integrable functions by continuous functions

Full text: PDF file (185 kB)

UDC: 517.982, 517.983
MSC: 28C20, 81Q05, 47D08

Citation: D. V. Zavadskii, "Analogs of the Lebesgue Measure in Spaces of Sequences and Classes of Functions Integrable with respect to These Measures", Quantum probability, Itogi Nauki i Tekhniki. Ser. Sovrem. Mat. Pril. Temat. Obz., 151, VINITI, Moscow, 2018, 37–44

Citation in format AMSBIB
\Bibitem{Zav18}
\by D.~V.~Zavadskii
\paper Analogs of the Lebesgue Measure in Spaces of Sequences and Classes of Functions Integrable with respect to These Measures
\inbook Quantum probability
\serial Itogi Nauki i Tekhniki. Ser. Sovrem. Mat. Pril. Temat. Obz.
\yr 2018
\vol 151
\pages 37--44
\publ VINITI
\publaddr Moscow
\mathnet{http://mi.mathnet.ru/into338}
\mathscinet{http://www.ams.org/mathscinet-getitem?mr=3903364}
2019-08-23 19:03:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7669002413749695, "perplexity": 3539.578525993609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318952.90/warc/CC-MAIN-20190823172507-20190823194507-00336.warc.gz"}
https://physics.stackexchange.com/questions/384111/scattering-absorption-emission-and-virtual-photons
# Scattering, absorption, emission and virtual photons

From reading many questions on this site I have the following conclusions:

1. Interaction of a photon and a free electron is an instantaneous process of scattering (transfer of momentum) between said particles.
2. Interaction of a photon and an electron bound in an atom is a very fast but not instantaneous (the electron cloud has to restructure itself via resonant oscillation) process of absorption, in which the photon is annihilated and the atom ends up in an excited state. Later another photon can be emitted from the atom, taking away the excitation. This is not scattering, as scattering happens for free electrons and absorption for bound ones.
3. Virtual photons (and other virtual particles) don't exist and are only a mathematical tool. Any view that they are "real virtual photons" is wrong.

Now to the questions:

1. I was told that spontaneous emission is stimulated emission, stimulated by a vacuum fluctuation photon coming from the $\frac{\hbar\omega}{2}$ term in the photon field Hamiltonian. How do I connect this with the fact that virtual photons don't exist?
2. In a process of second harmonic generation, are photons absorbed or scattered, given that there is no real absorption? Is a "virtual energy level" also only a mathematical tool? If so, why is SHG stronger if there is a real energy level nearby? How would SHG look on a Feynman diagram? How would it look on a Bloch sphere?
2021-02-27 19:32:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5664388537406921, "perplexity": 726.1975933139533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359082.48/warc/CC-MAIN-20210227174711-20210227204711-00497.warc.gz"}
https://socratic.org/questions/57f5e1be11ef6b33b2156196
# Using the rational root theorem, what are the possible rational roots of x^3-34x+12=0 ? Oct 6, 2016

According to the theorem, the possible rational roots are: $\pm 1$, $\pm 2$, $\pm 3$, $\pm 4$, $\pm 6$, $\pm 12$

#### Explanation:

$f(x) = x^3 - 34x + 12$

By the rational root theorem, any rational zeros of $f(x)$ are expressible in the form $\frac{p}{q}$ for integers $p, q$ with $p$ a divisor of the constant term $12$ and $q$ a divisor of the coefficient $1$ of the leading term. That means that the only possible rational zeros are:

$\pm 1$, $\pm 2$, $\pm 3$, $\pm 4$, $\pm 6$, $\pm 12$

Trying each in turn, we eventually find that:

$f(-6) = (-6)^3 - 34(-6) + 12 = -216 + 204 + 12 = 0$

So $x = -6$ is a rational root. The other two roots are Real but irrational.
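A small sketch that mechanically checks every candidate the theorem allows (divisors of the constant term 12 over divisors of the leading coefficient 1); the helper name is made up for illustration.

```python
def rational_root_candidates(constant, leading):
    """All values +/- p/q with p a divisor of `constant` and q a divisor of `leading`."""
    ps = [d for d in range(1, abs(constant) + 1) if constant % d == 0]
    qs = [d for d in range(1, abs(leading) + 1) if leading % d == 0]
    return sorted({s * p / q for p in ps for q in qs for s in (1, -1)})

f = lambda x: x**3 - 34 * x + 12
print([c for c in rational_root_candidates(12, 1) if f(c) == 0])   # [-6.0]
```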
2020-08-14 09:04:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 24, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9336185455322266, "perplexity": 259.31329999835043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739182.35/warc/CC-MAIN-20200814070558-20200814100558-00452.warc.gz"}
https://math.stackexchange.com/questions/3108604/optimization-methods-in-banach-spaces
# Optimization Methods in Banach Spaces

Does anyone know if there is a theory for the following problem: solve the minimization problem \begin{align*} T_\phi(\tilde{u})&=\inf\limits_u T_\phi(u)\\ Au&=b\\ u&\in L^p(\Omega),\,\Omega\subset \mathbb{R}^n \end{align*} for a (nonlinear) map $A:L^p(\Omega)\to Z$, with $Z$ an arbitrary locally convex topological vector space, and $T_\phi:L^p(\Omega)\to \mathbb{R}\cup\{\pm\infty\}$ defined by $\int\limits_{\Omega}\phi(u(x))\,dx$ a weakly lower semicontinuous convex function with weakly compact level sets. The map $\phi$ is also lower semicontinuous and convex. Or is there a theorem stating that such a problem possesses an optimal solution?

• My idea was to use the theory of monotone operators, but the problem is that the image of $A$ is not $(L^p)^*$. – FuncAna09 Feb 11 at 11:45
• Are there any more conditions on $A$? Without any additional conditions, you don't necessarily have minimizers. – MaoWao Feb 11 at 23:25
• The question is what conditions must be placed on a nonlinear $A$ for the problem to have a non-trivial solution. – FuncAna09 Feb 12 at 7:44
• If I replaced $Z$ with $(L^p)^*$ and assumed $A$ is monotone, hemicontinuous and coercive, then there should be a solution to the problem. This would follow from the theorem of Browder and Minty. – FuncAna09 Feb 12 at 7:50
2019-04-25 16:27:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6657045483589172, "perplexity": 696.5110234770797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578727587.83/warc/CC-MAIN-20190425154024-20190425180024-00434.warc.gz"}
https://en.wikipedia.org/wiki/Rayleigh_number
# Rayleigh number

In fluid mechanics, the Rayleigh number (Ra) for a fluid is a dimensionless number associated with buoyancy-driven flow[1][2][3], also known as free or natural convection. It is named after Lord Rayleigh.[4] The Rayleigh number is used to describe fluids (such as water or air) when the mass density of the fluid is not uniform, but is higher in some parts of the fluid than in others. Gravity acts on these differences in mass density to make the denser parts of the fluid fall; this falling of parts of the fluid is a flow driven by gravity acting on mass density gradients, which is called convection. When the Rayleigh number Ra is below a critical value for that fluid, there is no flow, whereas above it, the density difference in the fluid drives flow: convection[2]. Lord Rayleigh studied[1] the case of Rayleigh-Bénard convection[5]. Most commonly, the mass density differences are caused by temperature differences; typically, fluids expand and become less dense as they are heated. Then, below the critical value of Ra, heat transfer is primarily in the form of diffusion of the thermal energy; when it exceeds the critical value, heat transfer is primarily in the form of convection. When the mass density difference is caused by a temperature difference, Ra is, by definition, the ratio of the timescale for thermal transport due to thermal diffusion, to the timescale for thermal transport due to fluid falling at speed ${\displaystyle u}$ under gravity[3]: ${\displaystyle \mathrm {Ra} ={\frac {\mbox{timescale for thermal transport via diffusion}}{{\mbox{timescale for thermal transport via flow at speed}}~u}}}$ This means it is a type[3] of Péclet number. For a volume of fluid a size ${\displaystyle l}$ across (in all three dimensions), with a mass density difference ${\displaystyle \Delta \rho }$, the force of gravity is of order ${\displaystyle \Delta \rho l^{3}g}$, for ${\displaystyle g}$ the acceleration due to gravity. From the Stokes equation, when the volume of fluid is falling at speed ${\displaystyle u}$, the viscous drag is of order ${\displaystyle \eta lu}$, for ${\displaystyle \eta }$ the viscosity of the fluid. Equating these forces we see that the speed ${\displaystyle u\sim \Delta \rho l^{2}g/\eta }$. So the timescale for transport via flow is ${\displaystyle l/u\sim \eta /\Delta \rho lg}$. The timescale for thermal diffusion across a distance ${\displaystyle l}$ is ${\displaystyle l^{2}/\alpha }$, where ${\displaystyle \alpha }$ is the thermal diffusivity. So the Rayleigh number Ra is ${\displaystyle \mathrm {Ra} ={\frac {l^{2}/\alpha }{\eta /\Delta \rho lg}}={\frac {\Delta \rho l^{3}g}{\eta \alpha }}={\frac {\rho \beta \Delta Tl^{3}g}{\eta \alpha }}}$ where we approximated the density difference ${\displaystyle \Delta \rho =\rho \beta \Delta T}$ for a fluid of average mass density ${\displaystyle \rho }$ with a thermal expansion coefficient ${\displaystyle \beta }$ and a temperature difference ${\displaystyle \Delta T}$ across the volume of fluid ${\displaystyle l}$ across. The Rayleigh number can be written as the product of the Grashof number, which describes the relationship between buoyancy and viscosity within a fluid, and the Prandtl number, which describes the relationship between momentum diffusivity and thermal diffusivity, i.e., Ra = Gr·Pr[3][2]. Hence it may also be viewed as the ratio of buoyancy and viscosity forces multiplied by the ratio of momentum and thermal diffusivities.
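As a rough numerical illustration of the last formula, the sketch below evaluates Ra = ρβΔT l³g/(ηα) for a thin layer of water. The property values are order-of-magnitude textbook numbers assumed here for the example; they are not taken from the article.

```python
def rayleigh_number(rho, beta, dT, l, g, eta, alpha):
    """Ra = rho * beta * dT * l**3 * g / (eta * alpha)."""
    return rho * beta * dT * l**3 * g / (eta * alpha)

# Assumed, order-of-magnitude values for water near room temperature:
Ra = rayleigh_number(
    rho=1.0e3,     # density, kg/m^3
    beta=2.1e-4,   # thermal expansion coefficient, 1/K
    dT=5.0,        # temperature difference across the layer, K
    l=0.01,        # layer thickness, m
    g=9.81,        # gravitational acceleration, m/s^2
    eta=1.0e-3,    # dynamic viscosity, Pa*s
    alpha=1.4e-7,  # thermal diffusivity, m^2/s
)
print(f"Ra = {Ra:.2e}")  # ~7e4, well above the usual critical value of ~1700 for Rayleigh-Benard convection
```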
For a uniform wall heating flux, a modified Rayleigh number is defined as: ${\displaystyle \mathrm {Ra} _{x}^{*}={\frac {g\beta q''_{o}}{\nu \alpha k}}x^{4}}$ where:
x is the characteristic length
Ra*x is the modified Rayleigh number for characteristic length x
q''o is the uniform surface heat flux
k is the thermal conductivity.[6]
For most engineering purposes, the Rayleigh number is large, somewhere around 10^6 to 10^8.

## Rayleigh-Darcy number for convection in a porous medium

The Rayleigh number above is for convection in a bulk fluid such as air or water, but convection can also occur when the fluid is inside and fills a porous medium, such as porous rock saturated with water[7]. Then the Rayleigh number, sometimes called the Rayleigh-Darcy number, is different. In a bulk fluid, i.e., not in a porous medium, from the Stokes equation, the falling speed of a domain of size ${\displaystyle l}$ of liquid is ${\displaystyle u\sim \Delta \rho l^{2}g/\eta }$. In a porous medium, this expression is replaced by that from Darcy's law ${\displaystyle u\sim \Delta \rho kg/\eta }$, with ${\displaystyle k}$ the permeability of the porous medium. The Rayleigh or Rayleigh-Darcy number is then ${\displaystyle \mathrm {Ra} ={\frac {\rho \beta \Delta Tklg}{\eta \alpha }}}$ This also applies to A-segregates in the mushy zone of a solidifying alloy[8]. A-segregates are predicted to form when the Rayleigh number exceeds a certain critical value. This critical value is independent of the composition of the alloy, and this is the main advantage of the Rayleigh number criterion over other criteria for prediction of convectional instabilities, such as the Suzuki criterion. Torabi Rad et al. showed that for steel alloys the critical Rayleigh number is 17.[9] Pickering et al. explored Torabi Rad's criterion, and further verified its effectiveness. Critical Rayleigh numbers for lead–tin and nickel-based super-alloys were also developed.[10]

### Geophysical applications

In geophysics, the Rayleigh number is of fundamental importance: it indicates the presence and strength of convection within a fluid body such as the Earth's mantle. The mantle is a solid that behaves as a fluid over geological time scales. The Rayleigh number for the Earth's mantle due to internal heating alone, RaH, is given by: ${\displaystyle \mathrm {Ra} _{H}={\frac {g\rho _{0}^{2}\beta HD^{5}}{\eta \alpha k}}}$ where:
H is the rate of radiogenic heat production per unit mass
η is the dynamic viscosity
k is the thermal conductivity
D is the depth of the mantle.[11]
A Rayleigh number for bottom heating of the mantle from the core, RaT, can also be defined as: ${\displaystyle \mathrm {Ra} _{T}={\frac {\rho _{0}^{2}g\beta \Delta T_{sa}D^{3}C_{P}}{\eta k}}}$ where:
ΔTsa is the superadiabatic temperature difference between the reference mantle temperature and the core–mantle boundary
CP is the specific heat capacity at constant pressure.[11]
High values for the Earth's mantle indicate that convection within the Earth is vigorous and time-varying, and that convection is responsible for almost all the heat transported from the deep interior to the surface.

## Notes

1. ^ a b Baron Rayleigh (1916). "On convection currents in a horizontal layer of fluid, when the higher temperature is on the under side". London Edinburgh Dublin Phil. Mag. J. Sci. 32: 529–546.
2. ^ a b c Çengel, Yunus; Turner, Robert; Cimbala, John (2017). Fundamentals of thermal-fluid sciences (Fifth edition ed.). New York, NY. ISBN 9780078027680. OCLC 929985323.
3.
3. ^ a b c d Squires, Todd M.; Quake, Stephen R. (2005). "Microfluidics: Fluid physics at the nanoliter scale". Reviews of Modern Physics. 77 (3): 977–1026. doi:10.1103/RevModPhys.77.977.
4. ^ Chandrasekhar, S. (1961). Hydrodynamic and Hydromagnetic Stability. London: Oxford University Press. p. 10.
5. ^ Ahlers, Guenter; Grossmann, Siegfried; Lohse, Detlef (2009). "Heat transfer and large scale dynamics in turbulent Rayleigh–Bénard convection". Reviews of Modern Physics. 81 (2): 503–537. arXiv:0811.0471. doi:10.1103/RevModPhys.81.503.
6. ^ Favre-Marinet, M.; Tardu, S. (2009). Convective Heat Transfer. London: ISTE Ltd.
7. ^ Lister, John R.; Neufeld, Jerome A.; Hewitt, Duncan R. (2014). "High Rayleigh number convection in a three-dimensional porous medium". Journal of Fluid Mechanics. 748: 879–895. doi:10.1017/jfm.2014.216. ISSN 1469-7645.
8. ^ Torabi Rad, M.; Kotas, P.; Beckermann, C. (2013). "Rayleigh number criterion for formation of A-segregates in steel castings and ingots". Metall. Mater. Trans. A. 44A: 4266–4281.
9. ^ Torabi Rad, M.; Kotas, P.; Beckermann, C. (2013). "Rayleigh number criterion for formation of A-segregates in steel castings and ingots". Metall. Mater. Trans. A. 44A: 4266–4281.
10. ^ Pickering, E. J.; Al-Bermani, S.; Talamantes-Silva, J. (2014). "Application of criterion for A-segregation in steel ingots". Materials Science and Technology.
11. ^ a b Bunge, Hans-Peter; Richards, Mark A.; Baumgardner, John R. (1997). "A sensitivity study of three-dimensional spherical mantle convection at $10^{8}$ Rayleigh number: Effects of depth-dependent viscosity, heating mode, and endothermic phase change". Journal of Geophysical Research. 102 (B6): 11991–12007. Bibcode:1997JGR...10211991B. doi:10.1029/96JB03806.

## References

• Turcotte, D.; Schubert, G. (2002). Geodynamics (2nd ed.). New York: Cambridge University Press. ISBN 0-521-66186-2.
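As a rough complement to the porous-medium and mantle forms defined above, here is an added order-of-magnitude sketch. Every numerical value in it is an assumed, representative input chosen for illustration; none is taken from the article or its references.

```python
# Illustrative order-of-magnitude evaluation of the two forms defined above.
# Every numerical value below is an assumed, representative input chosen for
# illustration; none of them comes from the article.

g = 9.81  # gravitational acceleration, m/s^2

# Rayleigh-Darcy number, Ra = rho*beta*dT*k*l*g/(eta*alpha),
# for water saturating a permeable rock layer (assumed values).
rho, beta, dT = 1000.0, 2.1e-4, 30.0       # kg/m^3, 1/K, K
k_perm, l = 1.0e-12, 100.0                 # permeability m^2, layer thickness m
eta, alpha = 1.0e-3, 1.4e-7                # Pa*s, m^2/s
Ra_darcy = rho * beta * dT * k_perm * l * g / (eta * alpha)
print(f"Rayleigh-Darcy number ~ {Ra_darcy:.0f}")

# Mantle Rayleigh number for internal heating,
# Ra_H = g*rho0^2*beta*H*D^5/(eta*alpha*k), with assumed mantle-like values.
rho0, beta_m, H = 4000.0, 3.0e-5, 7.0e-12  # kg/m^3, 1/K, W/kg
D, eta_m = 2.9e6, 1.0e21                   # mantle depth m, dynamic viscosity Pa*s
alpha_m, k_cond = 1.0e-6, 4.0              # thermal diffusivity m^2/s, conductivity W/(m*K)
Ra_H = g * rho0**2 * beta_m * H * D**5 / (eta_m * alpha_m * k_cond)
print(f"Mantle Ra_H ~ {Ra_H:.2e}")
```

With these assumed inputs the mantle estimate comes out very large, which is consistent with the article's statement that convection within the Earth is vigorous.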
2018-12-18 23:36:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 28, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9052106142044067, "perplexity": 2084.6202252041444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829997.74/warc/CC-MAIN-20181218225003-20181219011003-00441.warc.gz"}
https://firmfunda.com/maths/integers/integers-multiplication-division/integer-division-first-principles
Integer Division : First Principles

what you'll learn...

overview

This page introduces division of integers: one number, the dividend, is split into the number of parts given by the second number, the divisor. The count or measure of one part is the result, the quotient. The remaining count of the dividend, that which could not be split, is the remainder of the division.

The definition of division in first principles forms the basis for understanding the simplified procedure for division of large numbers.

split

In whole numbers, $6 \div 2$ means: the dividend $6$ is split into $2$ equal parts and one part is put in.

In integers, $2$ and $-2$ are understood as $\text{received } 2 = 2$ and $\text{given } 2 = -2$. They are also called $\text{aligned } 2 = 2$ and $\text{opposed } 2 = -2$.

Integers are "directed" whole numbers. A whole-number division represents splitting the dividend into the divisor number of parts, after which one part is put in. In integers,

•  a positive divisor represents: one part is put-in
•  a negative divisor represents: one part is taken-away

This is explained with an example in the coming pages.

put-in a received part

A girl has a box of candies. The number of candies in the box is not counted, but she maintains a daily account of how many are received or given.

$6$ received is split into $2$ equal parts. In the box, one part of that is put in. (To understand this: $6$ candies received are shared with her brother and only her part is put in the candy-box.)

The numbers in integer form are $\text{received } 6 = 6$ and $\text{received } 2 = 2$. The number of candies received, $\text{received } 6 = 6$, is split into $2$ parts and one part, $\text{received } 3 = 3$, is put in:

$6 \div 2 = 3$

put-in a given part

Considering the box of candies and the daily account of the number of candies received or given: $6$ given is split into $2$ equal parts. In the box, one part of {$2$ equal parts of $6$ given} is put in. (Her brother and she gave $6$ candies and only her part is reflected in her number.)

The numbers in integer form are $\text{given } 6 = -6$ and $\text{received } 2 = 2$. The number of candies given, $\text{given } 6 = -6$, is split into $2$ parts and one part, $\text{given } 3 = -3$, is put in:

$(-6) \div 2 = -3$

Considering the division $(-6) \div 2$: the numbers are given in integer form. In directed-whole-number form they are $\text{given } 6$ and $\text{received } 2$. The division is explained as: $\text{given } 6 = -6$ is the dividend and $\text{received } 2$ is the divisor. Division is the dividend split into the divisor number of parts, with one part put in. $-6$ split into $2$ parts is $-3$ and $-3$; one part of that is $-3$. Thus the quotient of the division is $\text{given } 3$. The same in integer form: $(-6) \div 2 = -3$.

take-away a received part

Considering the box of candies and the daily account of the number of candies received or given: $6$ received is split into $2$ equal parts, of which one part is to be taken away. From the box, one part of {$2$ equal parts of $6$ received} is taken away.
(Her brother and she returned $6$ candies that were received earlier, and only her part is reflected in her number.)

The numbers in integer form are $\text{received } 6 = 6$ and $\text{given } 2 = -2$. The number of candies received, $\text{received } 6 = 6$, is split into $2$ parts and one part, $\text{received } 3 = 3$, is taken away, which is $\text{given } 3 = -3$:

$6 \div (-2) = -3$

Considering the division $6 \div (-2)$: the numbers are given in integer form. To understand the first principles of division, convert them to directed-whole-number form, $\text{received } 6$ and $\text{given } 2$. The division is explained as: $\text{received } 6 = 6$ is the dividend and $\text{given } 2 = -2$ is the divisor. Division is the dividend split into the divisor number of parts, with one part taken away since the divisor is negative. $6$ split into $2$ parts is $3$ and $3$; one part of that is $3$. Since the divisor is negative, one part $3$ is taken away, and $\text{received } 3$ taken away is $\text{given } 3$. Thus the quotient of the division is $\text{given } 3$. The same in integer form: $6 \div (-2) = -3$.

take-away a given part

Considering the box of candies and the daily account of the number of candies received or given: $6$ given is split into $2$ equal parts, of which one part is to be taken away. From the box, one part of {$2$ equal parts of $6$ given} is taken away. (Her brother and she got back $6$ candies which were given earlier, and only her part is reflected in her number.)

The numbers in integer form are $\text{given } 6 = -6$ and $\text{given } 2 = -2$. The number of candies given, $\text{given } 6 = -6$, is split into $2$ parts and one part, $\text{given } 3 = -3$, is taken away, which is $\text{received } 3 = +3$:

$(-6) \div (-2) = +3$

Considering the division $(-6) \div (-2)$: the numbers are given in integer form. To understand the first principles of division, convert them to directed-whole-number form, $\text{given } 6$ and $\text{given } 2$. The division is explained as: $\text{given } 6 = -6$ is the dividend and $\text{given } 2 = -2$ is the divisor. Division is the dividend split into the divisor number of parts, with one part taken away since the divisor is negative. $-6$ split into $2$ parts is $-3$ and $-3$; one part of that is $-3$. Since the divisor is negative, one part $-3$ is taken away, and $\text{given } 3$ taken away is $\text{received } 3$. Thus the quotient of the division is $\text{received } 3$.
The same in integer form: $(-6) \div (-2) = 3$.

The summary of integer division illustrative examples:

•  $6 \div 2 = 3$ : $6$ received split into $2$ parts and one part is put-in = $3$ received
•  $(-6) \div 2 = -3$ : $6$ given split into $2$ parts and one part is put-in = $3$ given
•  $6 \div (-2) = -3$ : $6$ received split into $2$ parts and one part is taken-away = $3$ given
•  $(-6) \div (-2) = 3$ : $6$ given split into $2$ parts and one part is taken-away = $3$ received

The above is a concise form that captures integer division in first principles.

revising

The division $7 \div 3$ is understood as: $\text{received } 7$ is split into $3$ equal parts and one part is put in (positive divisor). The remainder is what is left over in $\text{received } 7$. The result is quotient $2$ and remainder $1$. This is verified with $2 \times 3 + 1 = 7$ (quotient $\times$ divisor + remainder = dividend).

The division $(-7) \div 3$ is understood as: $\text{given } 7$ is split into $3$ equal parts and one part is put in (positive divisor). The remainder is what is left over in $\text{given } 7$. The result is quotient $-2$ and remainder $-1$. This is verified with $(-2) \times 3 + (-1) = -7$.

The division $7 \div (-3)$ is understood as: $\text{received } 7$ is split into $3$ equal parts and one part is taken away (negative divisor). The remainder is what is left over in $\text{received } 7$. The result is quotient $-2$ and remainder $1$. This is verified with $(-2) \times (-3) + 1 = 7$.

The division $(-7) \div (-3)$ is understood as: $\text{given } 7$ is split into $3$ equal parts and one part is taken away (negative divisor). The remainder is what is left over in $\text{given } 7$. The result is quotient $2$ and remainder $-1$. This is verified with $2 \times (-3) + (-1) = -7$.

The summary of integer division illustrative examples:

•  $7 \div 2 = 3$ with remainder $1$ : $7$ received split into $2$ parts and one part is put-in = $3$ received and remainder $1$ received
•  $(-7) \div 2 = -3$ with remainder $-1$ : $7$ given split into $2$ parts and one part is put-in = $3$ given and remainder $1$ given
•  $7 \div (-2) = -3$ with remainder $1$ : $7$ received split into $2$ parts and one part is taken-away = $3$ given and remainder $1$ received
•  $(-7) \div (-2) = 3$ with remainder $-1$ : $7$ given split into $2$ parts and one part is taken-away = $3$ received and remainder $1$ given

The remainder takes the sign of the dividend.

summary

Integer Division -- First Principles: Directed whole numbers division is splitting the dividend into the divisor number of equal parts, with direction taken into account. If the divisor is positive, then one part is put-in. If the divisor is negative, then one part is taken-away. The remainder is that of the dividend, retaining its direction information.
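The sign rules summarized above, quotient rounded toward zero and remainder taking the sign of the dividend, can be checked with a short sketch. The code below is an added illustration, not part of the lesson; note that Python's built-in // and % operators use floored division, where the remainder takes the sign of the divisor, so the rule described here is implemented explicitly.

```python
# Minimal sketch (not from the original lesson): truncated integer division,
# in which the quotient is rounded toward zero and the remainder takes the
# sign of the dividend, as described above. Python's built-in // and % use
# floored division instead, so the rule is implemented explicitly here.

def divide_first_principles(dividend: int, divisor: int) -> tuple[int, int]:
    q = abs(dividend) // abs(divisor)          # size of one part, ignoring direction
    if (dividend < 0) != (divisor < 0):        # opposite directions: quotient is "given"
        q = -q
    r = dividend - q * divisor                 # remainder keeps the dividend's sign
    return q, r

# The four cases from the lesson's summary:
for a, b in [(7, 2), (-7, 2), (7, -2), (-7, -2)]:
    q, r = divide_first_principles(a, b)
    assert q * b + r == a                      # quotient * divisor + remainder = dividend
    print(f"{a} / {b}: quotient {q}, remainder {r}")
```

Running it reproduces the four cases in the summary: quotients 3, -3, -3, 3 with remainders 1, -1, 1, -1 respectively.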
2021-12-03 21:48:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 166, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.701507031917572, "perplexity": 893.6478350805766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362919.65/warc/CC-MAIN-20211203212721-20211204002721-00410.warc.gz"}
https://www.transtutors.com/questions/only-one-firm-produces-and-sells-soccer-balls-in-the-country-of-wiknam-and-as-the-st-3388736.htm
# Only one firm produces and sells soccer balls in the country of Wiknam, and as the story begins,...

Only one firm produces and sells soccer balls in the country of Wiknam, and as the story begins, international trade in soccer balls is prohibited. The following equations describe the monopolist's demand, marginal revenue, total cost, and marginal cost:

Demand: P = 10 – Q
Marginal Revenue: MR = 10 – 2Q
Total Cost: TC = 3 + Q + 0.5Q²
Marginal Cost: MC = 1 + Q

where Q is quantity and P is the price measured in Wiknamian dollars.

a. How many soccer balls does the monopolist produce? At what price are they sold? What is the monopolist's profit?

b. One day, the King of Wiknam decrees that henceforth there will be free trade—either imports or exports—of soccer balls at the world price of $6. The firm is now a price taker in a competitive market. What happens to domestic production of soccer balls? To domestic consumption? Does Wiknam export or import soccer balls?

c. In our analysis of international trade in Chapter 9, a country becomes an exporter when the price without trade is below the world price and an importer when the price without trade is above the world price. Does that conclusion hold in your answers to parts (a) and (b)? Explain.

d. Suppose that the world price was not $6 but, instead, happened to be exactly the same as the domestic price without trade as determined in part (a). Would allowing trade have changed anything in the Wiknamian economy? Explain. How does the result here compare with the analysis in Chapter 9?

Kandregula S

a. Set MR = MC for the monopoly: 10 – 2Q = 1 + Q, so 3Q = 9 and Q = 3. The price is P = 10 – Q = 10 – 3 = 7, so P = 7. Profit = PQ – TC = 7(3) – (3 + 3 + 0.5(9)) = 21 – 3 – 3 – 4.5 = ...
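The answer preview above stops mid-calculation. As an added, hedged illustration (not the tutor's posted solution), the sketch below works parts (a) and (b) directly from the curves given in the problem; the world price of $6 is taken from the problem statement, and the free-trade figures follow from the standard price-taking condition P = MC.

```python
# Illustrative sketch only -- not the posted solution. It evaluates parts (a)
# and (b) directly from the curves given in the problem statement.

def demand_price(q):        # inverse demand: P = 10 - Q
    return 10 - q

def marginal_revenue(q):    # MR = 10 - 2Q
    return 10 - 2 * q

def marginal_cost(q):       # MC = 1 + Q
    return 1 + q

def total_cost(q):          # TC = 3 + Q + 0.5 Q^2
    return 3 + q + 0.5 * q**2

# (a) Monopoly: MR = MC  ->  10 - 2Q = 1 + Q  ->  Q = 3
q_monopoly = 3
p_monopoly = demand_price(q_monopoly)                       # 7
profit = p_monopoly * q_monopoly - total_cost(q_monopoly)   # 21 - 10.5 = 10.5
print(f"(a) Q = {q_monopoly}, P = {p_monopoly}, profit = {profit}")

# (b) Free trade at the world price of 6: the firm is a price taker, so P = MC
world_price = 6
q_supplied = world_price - 1            # from 6 = 1 + Q
q_demanded = 10 - world_price           # from P = 10 - Q
print(f"(b) domestic production = {q_supplied}, consumption = {q_demanded}, "
      f"exports = {q_supplied - q_demanded}")
```

On this reading, part (a) gives Q = 3, P = 7, and profit 10.5, while under free trade at 6 domestic production rises to 5, domestic consumption is 4, and Wiknam exports 1, which is the contrast part (c) asks about.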
2021-05-14 07:29:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24462628364562988, "perplexity": 4179.141779401999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.40/warc/CC-MAIN-20210514060536-20210514090536-00569.warc.gz"}
https://dls.westcollegescotland.ac.uk/mod/glossary/showentry.php?eid=22
Question: Can I use the College library and PCs?

(Last edited: Thursday, 29 April 2021, 12:43 PM)
2022-06-24 22:08:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8628464341163635, "perplexity": 12097.522480961019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103033816.0/warc/CC-MAIN-20220624213908-20220625003908-00571.warc.gz"}
https://learn.saylor.org/course/view.php?id=67&sectionid=636
• ### Unit 5: Set Theory

Computer scientists often find themselves working with sets of homogeneous or heterogeneous values. Scientists have devised set theory in order to respond to these situations. In this unit, we will learn the basics of set theory, taking a look at definitions and notations and using the proof methods and counterexamples we introduced earlier to establish set properties.

Set theory is a fundamental tool of mathematics and is often used as the starting point for its study and to define basic concepts, such as functions, numbers, and limits.

Completing this unit should take you approximately 15 hours.
2019-09-19 17:55:45
{"extraction_info": {"found_math": true, "script_math_tex": 36, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9139756560325623, "perplexity": 682.0173338337926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573561.45/warc/CC-MAIN-20190919163337-20190919185337-00466.warc.gz"}
http://mathoverflow.net/questions/77952/name-this-periodic-tiling?sort=oldest
# Name this periodic tiling

Hello MO, I've been working on a problem in ergodic theory (finding Alpern lemmas for measure-preserving $\mathbb R^d$ actions) and have found some neat tilings that I presume were previously known. They are periodic tilings of $\mathbb R^d$ by a single prototile consisting of any box with a smaller box removed from a corner. The two-dimensional case is illustrated in the figure below:

There exist versions of this in arbitrary dimensions, irrespective of the size of the boxes. Furthermore, there appear to be many essentially different tilings using the same prototile. Does anybody recognize the $d$-dimensional version, or have a name for it? Thanks

- This is essentially hexagonal tiling, isn't it? – Yuri Bakhtin Oct 12 '11 at 20:49
- I guess so in 2 dimensions. My question is really about the higher-dimensional version. – Anthony Quas Oct 12 '11 at 20:58
- If you take the centers of the boxes, do you get the plain old cartesian lattice up to an affine-linear transformation? $\mathbb Z^d \subset \mathbb R^d$? – Ryan Budney Oct 12 '11 at 21:46
- Yes. The prototile is a fundamental domain for $\mathbb R^d/M\mathbb Z^d$ for some matrix $M$. – Anthony Quas Oct 12 '11 at 22:44
- Take any nice region in $\mathbb{R}^d$. Pick a lattice s.t. the translates of the region by lattice vectors cover $\mathbb{R}^d$. Pick any self-consistent scheme to remove overlaps (this smells nontrivial in general). Then you have a tessellation. – Steve Huntsman Oct 13 '11 at 0:06

Kolountzakis worked with some tilings of this sort in his paper "Lattice tilings by cubes: whole, notched and extended", Electr. J. Combinatorics 5 (1998), 1, R14.

- Thanks very much for the reference! Apparently Kolountzakis re-proved using harmonic analysis a result of Sherman Stein that gave a direct proof that the "notched cubes" tile $\mathbb R^d$ ("notched cubes" being exactly what is being asked about in the question) – Anthony Quas Oct 13 '11 at 8:47

Hi Anthony, I maybe should walk down the hall... but this is easier. Dual to the $d$-cubical tiling of $\mathbb R^d$ would be the "cross-polytope" tiling. This is the tiling made up of the duals -- the vertices are the centres of the $d$-cubes, the edges of the cross-polytope are the faces of the $d$-cubes, and so on. To me it looks like you get your tiling from the cross-polytope tiling, by simply scaling up each tile appropriately -- scaling each tile at one of its vertices, and doing the scaling symmetrically with respect to the translation symmetry of the tiling. So as you scale, part of the tile vanishes (from a growing tile eating it up) and part gets created (via scaling). Your picture appears to be consistent with something like that. Or rather than the cross-polytope tiling, it could be the same idea but with the cubical tiling.

edit: Take this procedure for generating a tiling of $\mathbb R^n$. Let $M$ be an $n \times n$ invertible matrix with real entries. Let $\vec v \in \mathbb Z^n$. In the lexicographical order on $\mathbb Z^n$ we can lay down a "tile" being $[0,1]^n + M\vec v$, where whenever we place a tile, it overwrites any old tile that it may be placed on top of. Provided the norm of the matrix is small enough, this procedure writes over the entire space. It produces a tiling a fair bit more general than what you're talking about. I'd call it perhaps a linear overlapping translation of the cubical tiling. But I don't know if there's a standard name for such a thing. Perhaps "lizard scales"?
The tiling in your picture looks like something like this, with the $2 \times 2$ matrix $M = \begin{pmatrix} 2/3 & -1/3 \\ 1/3 & 3/4 \end{pmatrix}$ (entries listed left to right, then top to bottom: $2/3, -1/3, 1/3, 3/4$). Does that make sense?

- Wouldn't this give you something where translates in the original lattice directions didn't have a dense orbit? This, in fact, was the main reason for me to look at this: say the big boxes are unit cubes. If you take a point and repeatedly move by 1 in the `vertical' direction, then the images become dense in the prototile (for some choices of the sub-box). – Anthony Quas Oct 12 '11 at 22:58
- I'm not following your comment. In my edited response, would this condition be satisfied if the matrix $M$ had irrational entries? – Ryan Budney Oct 13 '11 at 0:03
- My condition was that multiples of $e_d$, the $d$th coordinate vector, are dense in $\mathbb R^d/M\mathbb Z^d$. Having irrational entries is the right kind of condition to get this, but is not sufficient: basically you have to ensure that $e_d$ doesn't belong to any proper closed subgroup of $\mathbb R^d/M\mathbb Z^d$. – Anthony Quas Oct 13 '11 at 0:25
- Is it obvious that for any BOX$\setminus$box, it arises as a "linear overlap tiling of a cubic tiling"? – Anthony Quas Oct 13 '11 at 1:06
- If you take vertices at the centres of the d-cubes and edges through the faces of the d-cubes, you just get the cubical tiling back again; it's self-dual. – kundor Oct 1 '13 at 15:51
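As an added illustration (not part of the original thread), here is a minimal numerical sketch of the overwriting procedure described in the answer above: unit squares $[0,1]^2 + M\vec v$ are laid down over a finite window of lattice points in lexicographic order, each overwriting whatever was placed earlier, using the $2\times 2$ matrix suggested in the answer. The raster resolution, window size, and character-map output are arbitrary choices made only for display.

```python
import numpy as np

# Sketch of the "overwriting translates" construction from the answer above:
# lay down unit squares [0,1]^2 + M v for lattice points v in lexicographic
# order, each new square overwriting older ones. The matrix M comes from the
# answer; the raster resolution and window are illustrative choices.

M = np.array([[2/3, -1/3],
              [1/3,  3/4]])

res = 10                                   # raster cells per unit length
lo, hi = -3, 3                             # portion of the plane to rasterize
n = (hi - lo) * res
label = np.full((n, n), -1, dtype=int)     # which tile owns each raster cell

tile_id = 0
for i in range(-8, 9):                     # lexicographic order over a finite window
    for j in range(-8, 9):
        offset = M @ np.array([i, j])      # lower-left corner of this translate
        # raster cells covered (approximately) by the unit square [0,1]^2 + offset
        x0 = int(np.floor((offset[0] - lo) * res))
        y0 = int(np.floor((offset[1] - lo) * res))
        x1, y1 = x0 + res, y0 + res
        x0, y0 = max(x0, 0), max(y0, 0)
        x1, y1 = min(x1, n), min(y1, n)
        if x0 < x1 and y0 < y1:
            label[y0:y1, x0:x1] = tile_id  # overwrite anything laid down earlier
        tile_id += 1

# Print a coarse character picture: cells sharing a letter belong to the same
# translate (labels are reduced mod 26, so distant tiles may reuse a letter).
for row in label[::-1][::2]:
    print("".join(chr(ord('a') + (v % 26)) if v >= 0 else '.' for v in row[::2]))
```

Because labels are reduced mod 26, distinct tiles can share a letter, but adjacent tiles generally differ, so the printed block gives a rough picture of the notched tile shapes that survive the overwriting.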
2014-07-23 14:25:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8624079823493958, "perplexity": 661.4694366953545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997878518.58/warc/CC-MAIN-20140722025758-00147-ip-10-33-131-23.ec2.internal.warc.gz"}