url: string, 14 to 2.42k characters
text: string, 100 to 1.02M characters
date: string, 19 characters
metadata: string, 1.06k to 1.1k characters
https://www.physicsforums.com/threads/i-need-a-good-explanation-of-phase-and-magnitude-spectra.547362/
# I need a good explanation of phase and magnitude spectra

I understand that, having a periodic signal x(t), we can find a signal y(t) which uses harmonically related exponentials to construct the x(t) signal. Each exponential has a frequency and a magnitude. For example, $$3e^{j 2 \omega t}$$ has a period of $$\frac{2 \pi}{2 \omega} = \frac{\pi}{\omega}$$ and a magnitude of 3; similarly, $$2e^{j 3 \omega t}$$ has a period of $$\frac{2 \pi}{3 \omega}$$ and a magnitude of 2.

Now if we plot the amplitude spectrum of y(t), we get discrete values, where the x coordinate is the frequency and the y coordinate is the amplitude at that frequency. So we will have a discrete value at the frequency $$2\omega$$ with an amplitude of 3, and also a value at the frequency $$3\omega$$ with an amplitude of 2. I hope that I'm correct so far.

Now, the thing is that I don't understand the phase: what does the phase represent? For example, suppose we have this graph: http://img191.imageshack.us/img191/8259/unledpsg.png (link broken). What does the phase graph represent? What are these lines referring to? If you can, please explain the x and y coordinates and what they mean.
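A minimal numpy sketch of the spectra being described, assuming a fundamental of 1 Hz (so ω = 2π rad/s; any value works the same way): the DFT of one period recovers each harmonic's magnitude and phase.

```python
import numpy as np

w0 = 2 * np.pi                                # assumed fundamental: 1 Hz
t = np.linspace(0, 1, 1024, endpoint=False)   # one full period, 1024 samples
# the signal from the question: amplitudes 3 and 2 on harmonics 2 and 3
y = 3 * np.exp(1j * 2 * w0 * t) + 2 * np.exp(1j * 3 * w0 * t)

c = np.fft.fft(y) / len(y)    # Fourier-series coefficients c_k
for k in (2, 3):
    # magnitude spectrum: |c_k| (3 and 2 here)
    # phase spectrum: angle(c_k) -- 0 here because both coefficients are
    # real and positive; using 3*np.exp(1j*np.pi/4) as the first amplitude
    # would put pi/4 at harmonic 2 in the phase plot (a time shift)
    print(k, round(abs(c[k]), 3), round(np.angle(c[k]), 3))
```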
2020-04-04 09:28:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8053084015846252, "perplexity": 591.0020097151255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370521574.59/warc/CC-MAIN-20200404073139-20200404103139-00476.warc.gz"}
https://mersenneforum.org/showthread.php?s=d97ff5f646614d459febf623f5580c6d&p=590587
mersenneforum.org: Choose your own K and work on finding a top-5000 prime!

2021-10-14, 14:43 #430, diep (Sep 2006, The Netherlands):
Note that k=733 has been searched to 5M in 2015/2016. Still have the LLR residues of it. In rieselprime I see it listed under NPLB drive 14 (MR3).

2021-10-14, 20:56 #431, gd_barnes (May 2007, Kansas; USA), quoting diep above:
Can you provide links to the postings where this testing was done? Alternatively, posting the residues would be sufficient. That will give us the best history for updating the Wiki. NPLB tests all k's in the k=301-1001 range, so k=733 becomes a double-check.

2021-10-14, 21:03 #432, diep, quoting gd_barnes above:
Yes - as you can see from my 2015 posting, I took k=733 then and tested it until into 2016. Just have been checking on my computer: seems a couple of hundred tests weren't run yet (near 5M). Finishing those on a handful of cores now. All other residues are there, it seems. Where can I upload them to?

2021-10-14, 21:19 #433, gd_barnes, quoting diep above:
I see this posting where you reserved it: https://mersenneforum.org/showpost.p...&postcount=293
I don't see any posts where you gave status reports. Were there any? We will update the Wiki to show the whole history. When you are done with your testing, you can attach a file of residues to a posting here. It's a low-weight k, so I think a zipped version of the file will be within the site's attachment size limit.

2021-10-14, 21:51 #434, diep:
No worries - I'll give a status update when it finishes 5M.

2021-10-17, 22:30 #435, diep:
k=32767 - searched to 7M. Continuing (sieved it to 30M and will continue sieving it further, as the removal rate was still OK, though electricity costs money - will fire up the box in the office in winter, as the office may not drop under 15 Celsius for the equipment, epoxies and other stuff here - very accurate bearings, huh - and the gas price is becoming 1.30 euro per m^3 now for heating, so better to use electricity!)
With LLR 3.8.23 / gwnum 29.8 I've had quite some errors (which then of course trigger an FFT-size increase) while searching it. Not very happy about those errors. I intend to test out LLR2 here a little and will decide whether I give it a shot later on. With the Gerbicz check turned on, of course, if it has it.

2021-10-17, 23:46 #436, diep, quoting gd_barnes:
"Side note on Diep's search of k=32767: per this posting: https://mersenneforum.org/showpost.p...&postcount=385 he began sieving/testing at n=5M. So n=5M-7M is complete and there is a testing gap for n=1M-5M for this k."
Why do you post this nonsense? I said searched 'until' 7M. Of course I started at zero there.

2021-10-17, 23:51 #437, gd_barnes, quoting diep above:
My apologies. The post that I linked to gave the impression that you only sieved n=5M-30M. I will delete it.

2021-10-17, 23:59 #438, diep:
Aha, I see - your 'Ingliz' understanding is not so 'great'. "Right now" in that old posting means the break-even point had shifted. So after initial sieving of [2M - 30M], as I had already tested up until 2M with a small sieve, the break-even point shifted for testing versus sieving. So I tested until 4M and then sieved [4M - 30M]. Then, when the break-even point shifted again, I tested 4M-5M and sieved 5M-30M. Right now I would still continue sieving [7M-30M], because the machine that would do the sieving when it is cold here is not so fast at testing.

2021-10-18, 20:18 #439, paulunderwood (Sep 2002):
k=23: We would like to reserve k=23 from n=5,000,000 under "Underwood et al".

2021-11-03, 02:52 #440, Happy5214 ("Alexander", Nov 2008, The Alamo City):
(Note to mods: still not clear if this is the right thread. Please confirm.)
As of November 1, the (near-)Woodall k's listed in https://www.mersenneforum.org/showpo...&postcount=363 have been completed to n=575k. Four primes were found. I'll just dump them in a code block since I don't feel like creating a symlink with the txt extension.
Code:
1183953 555344
1183953 559022
8508301 567213
667071 574638
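For readers outside the project, "searched to 5M" means every sieved candidate 733·2^n − 1 was LLR-tested for n up to 5,000,000, each composite leaving a residue. A toy Python sketch of the idea; sympy.isprime stands in for the LLR test and no sieving is done, so this is illustrative only:

```python
from sympy import isprime

k = 733
for n in range(1, 200):          # real searches go to n in the millions
    if isprime(k * 2**n - 1):    # LLR would emit a residue if composite
        print(f"{k}*2^{n}-1 is prime")
```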
2021-12-02 07:50:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28641921281814575, "perplexity": 8259.239872504248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361169.72/warc/CC-MAIN-20211202054457-20211202084457-00581.warc.gz"}
https://anyaulinichbooks.com/download-book-linear-algebra-and-its-applications-6th-edition-original-pdf/embed/
linear algebra and its applications 6th edition pdf
Linear algebra is relatively easy for students in the early stages of the course, when the material is presented in a familiar, concrete setting. But when abstract concepts are introduced, students often hit a brick wall. Instructors seem to agree that some concepts are … Continue reading Download Book Linear Algebra and Its Applications, 6th Edition – Original PDF
2022-05-26 04:19:59
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8155840039253235, "perplexity": 355.7526420486859}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662601401.72/warc/CC-MAIN-20220526035036-20220526065036-00349.warc.gz"}
https://www.nature.com/articles/s41598-021-82077-8?error=cookies_not_supported&code=50bc6c3b-e34c-4f83-9d21-46eff87f7303
## Introduction

The patch-clamp technique has contributed to major advances in the characterization of ion channel biophysical properties and pharmacology, thanks to the versatility of its readouts: (i) unitary currents, allowing the study of a single channel's conductance, open probability and kinetics, and (ii) whole-cell currents, allowing characterization of a population of channels, their pharmacology, the macroscopic properties of the gates, and also the gating kinetics1,2. As for any technique, some practical limits have to be taken into account. As schematized in Fig. 1A,B, a major caveat of the whole-cell configuration of the voltage-clamp technique is that the pipette tip in manual patch-clamp, or the glass perforation in planar automated patch-clamp, creates a series resistance (RS) on the order of the MΩ. Consequently, according to Ohm's law, a current on the order of the nA flowing through the pipette leads to a voltage deviation of several mV at the pipette tip or the glass perforation. The actual voltage applied to the cell membrane (Vm) is therefore different from the voltage clamped by the amplifier and applied between the two electrodes (pipette and bath electrodes, Vcmd). This leads, for example, to an erroneous characterization of a channel's voltage-dependent activation process. This caveat was described early on, when the patch-clamp technique was developed3. However, with the development of automated patch-clamp for industry and academia, the technique, formerly used exclusively by specialized electrophysiologists, has been popularized among scientists who are not always aware of its limits. In that respect, we now extensively witness new publications reporting ionic currents in the range of several nA, which undoubtedly have led to incorrect voltage clamp and erroneous conclusions. Early on, this problem was partially solved by the development of amplifiers with the capacity to add a potential equivalent to the lost one (VS), a function called RS compensation4. Examples of Na+ currents generated by NaV1.5 voltage-gated Na+ channels, recorded from a transfected COS-7 cell with and without RS compensation, are shown in Fig. 1C–E. These recordings illustrate the kind of errors that can be induced in evaluating activation in the absence of RS compensation. However, compensation rarely reaches 100%, and some high-throughput systems have limited compensation abilities, to avoid over-compensation and consequent current oscillation that can lead to seal disruption. Here, we used a mathematical model to study in detail the impact of various levels of RS and current amplitude on the steady-state activation and dose–response curves of the cardiac voltage-dependent Na+ current INa, as well as the steady-state activation curve of the cardiac voltage-dependent K+ current Ito. We then predicted the impact of various levels of RS on the Na+ current activation parameters and compared this prediction to whole-cell voltage-clamp recordings obtained in manual patch-clamp analyses of cells transiently expressing Nav1.5 channels. Finally, we looked at the impact of RS in whole-cell voltage-clamp recordings of Nav1.5 currents obtained in automated patch-clamp using the Nanion SyncroPatch 384PE.
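A back-of-the-envelope sketch, in Python, of the series-resistance error described above; the 5-nA current and 2-MΩ RS are illustrative values, not data from the paper's figures:

```python
# Voltage error from Ohm's law: Vs = Rs * I, so Vm = Vcmd - Rs * I.
Rs = 2e6         # series resistance: 2 MOhm
I = -5e-9        # peak inward Na+ current: -5 nA (inward = negative)
Vcmd = -40e-3    # command potential: -40 mV

Vs = Rs * I                    # -10 mV dropped across the pipette tip
Vm = Vcmd - Vs                 # the membrane actually sits at -30 mV
print(f"Vs = {Vs*1e3:.0f} mV, Vm = {Vm*1e3:.0f} mV (Vcmd = -40 mV)")
```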
This study highlights potential incorrect interpretations of the data and proposes simple guidelines for the study of voltage-gated channels in patch-clamp, which will help in the design of experiments and in the rationalization of data analyses to avoid misinterpretations. The aim of this study was thus to use kinetic models of specific ion currents to generate current 'recordings' that take into account the voltage error made using the whole-cell configuration of the patch-clamp technique. We then used and compared these data and experimental observations to propose simple rules for good-quality patch-clamp recordings.

## Results

To calculate the current (I) recorded from a cell at a given voltage, we used Hodgkin–Huxley models of voltage-gated channels5. For this calculation, we need to determine the actual membrane potential (Vm), but we only know the potential applied between the pipette and reference electrodes (Vcmd), as illustrated in Fig. 1A,B. The voltage error between Vm and Vcmd is the voltage generated at the pipette tip (VS), which depends on the series resistance (RS) and the current: $${V}_{m}= {V}_{cmd}- {R}_{s}\times I$$ I results from variations in the membrane resistance, Rm, as channels open or close. For voltage-gated channels, Rm varies with voltage, Vm, and time. Since I is a function of Vm, through the channels' voltage dependence, and Vm depends on I (cf. the equation above), the value of I can only be obtained through an iterative calculation at each time step (see the supplemental information, with a limited number of equations, for further details). We started by modeling the current conducted by cardiac voltage-gated Na+ channels (Nav1.5 for the vast majority) for a combination of series resistance (RS) values and current amplitude ranges (depending on the number of active channels in the membrane, Fig. 2A,B). First, when RS is null (the ideal condition, which can almost be reached experimentally if RS compensation is close to 100%), the voltage error is null and the shapes of the recordings are identical, independent of the current amplitude (in green in Fig. 2B). Consistent with the voltage error being proportional to both RS and current amplitude, we observed that a combined increase in RS and current amplitude leads to alterations in the current traces, due to a deviation of Vm from Vcmd (Fig. 2B). When RS is equal to 2 MΩ (in orange), alterations in the shape of the currents are observed only when the current amplitude reaches several nA (high expression of ion channels, bottom), with, for instance, the time to peak at − 45 mV increasing from 0.9 ms (at RS = 0 MΩ) to 1.15 ms (at RS = 2 MΩ). When RS is increased to 5 MΩ, alterations are minor in the medium range of current, but are emphasized when currents are large (middle and bottom, in red), with the time to peak at − 45 mV reaching 1.6 ms. As illustrated in Fig. 2B, when I and RS are elevated, the voltage applied to the membrane, Vm, can reach − 14 mV, whereas the applied voltage command, Vcmd, is − 40 mV (bottom right, Vm inset). Thus, in these conditions, the voltage deviation represents 26 mV at the peak of the effect, which is not insignificant (Fig. 2C). The impact of RS on current amplitude is highest for large amplitudes (Fig. 3A, 10-nA range), e.g. at potentials between − 40 and − 20 mV. At such potentials, activation and inactivation time courses are clearly altered by high RS values.
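The iterative computation can be sketched as a small Euler loop combining the equation above with Eq. (20) of the "Methods" section; the fixed ohmic conductance below is a deliberately crude stand-in for the Hodgkin–Huxley Nav1.5 model, chosen only to keep the loop self-contained, while RS and Cm match values used in the paper:

```python
# I depends on Vm, and Vm relaxes per Eq. (20):
# dVm/dt = (Vcmd - Vm)/(Rs*Cm) - I/Cm, so both are advanced together.
Rs, Cm = 5e6, 20e-12          # 5 MOhm series resistance, 20 pF
G, Erev = 0.05e-6, 60e-3      # toy Na+ conductance (0.05 uS), Na+ reversal
Vcmd, Vm = -20e-3, -100e-3    # step from -100 mV holding to -20 mV
dt = 1e-7                     # 0.1-us time step

for _ in range(int(2e-3 / dt)):        # 2 ms is plenty to reach steady state
    I = G * (Vm - Erev)                # inward current at the present Vm
    Vm += ((Vcmd - Vm) / (Rs * Cm) - I / Cm) * dt

print(f"Vm settles at {Vm*1e3:.1f} mV instead of the commanded -20.0 mV")
```

With these toy numbers the clamp misses the command by roughly 16 mV, the same kind of deviation the paper reports in Fig. 2B.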
Altogether, this leads to major artefactual modifications of the current–voltage and activation curves (Fig. 3B,C). Indeed, except when RS is null, increasing the current amplitude range from 1 to 10 nA shifts the voltage dependence of activation (V0.5) towards hyperpolarized potentials (Fig. 3C). For the largest currents, series resistances of 2 and 5 MΩ induce − 7 mV and − 11 mV shifts of V0.5, respectively. The slope factor k is also drastically reduced, by a factor of 1.5 and 1.8, respectively. Besides its impact on the characteristics of voltage-dependent activation, RS may also impact channel pharmacological characteristics. In order to model this impact, we established, for various values of RS, the relationship between the theoretical values of the peak Na+ current, Ipeak (with no voltage error), and the measured values of Ipeak. We calculated this relationship at a potential that could be used to establish the dose–response curve, here − 20 mV. First, when RS is null, the voltage error is null and both values (theoretical and computed) are the same. As RS increases, the measured INa curve is inflected accordingly (Fig. 4A, middle and right). We used this relationship to look at the impact of RS on the apparent effects of a channel blocker, tetrodotoxin (TTX), on the Na+ current. We started with published data on TTX6 to generate the theoretical (RS = 0 MΩ) dose–response curve and current traces in the presence of various concentrations of TTX (Fig. 4B,C, green). Then we used the relationship between the theoretical and measured values of Ipeak at RS of 2 and 5 MΩ (established in Fig. 4A) to build the dose–response curves for these RS values (see the "Methods" section for details). In the absence of TTX (Fig. 4B, left), the current amplitude is high and the voltage-clamp is not efficient, so there are major differences between theoretical (green) and measured (orange and red) amplitudes. When the inhibitor concentration increases, the remaining current amplitude decreases and the voltage-clamp improves. Hence, with higher TTX doses the theoretical and measured values become closer, independent of the RS values (Fig. 4B, right). This leads to an artefactual shift of the resulting dose–response curve towards higher concentrations (Fig. 4C). For Ipeak = − 27 nA, RS of 2 and 5 MΩ increases the IC50 by a factor of 1.7 and 1.8, respectively. For a low Ipeak (− 2.7 nA), these modifications are minimal. We applied the same modeling strategy to study the impact of RS on the 'measurement' of the voltage-gated K+ current Ito, using a Hodgkin–Huxley model of this current5 (Fig. 5A). As for the Na+ current, we modeled the Ito current for a combination of values of series resistance and current amplitude. Again, when RS is null, the voltage error is null and the shapes of the recordings are identical, independent of the current amplitude (in green in Fig. 5B). Consistent with the voltage error being proportional to both RS and current amplitude, we observed that a combined increase in RS and current amplitude leads to alterations in the recordings, due to a deviation of Vm from Vcmd (Fig. 5B). Ito current characteristics are nonetheless less sensitive to RS and current amplitude than INa: when RS is equal to 5 MΩ (in orange), alteration in the shape of the recordings is significant only when the current amplitude is tenfold higher than for Na+ currents (Fig. 5B, bottom center). However, a reduction in current amplitude is readily obtained for intermediate RS and current amplitudes.
When RS reaches 15 MΩ and the current amplitude is equal to several tens of nA, conditions routinely observed in automated patch-clamp with stable cell lines7,8, the model predicts a major modification of the activation curve and the appearance of delayed inactivation (Fig. 5B, bottom right). When RS is not null, increasing the peak current amplitude up to 100 nA leads to a major shift in the voltage dependence of activation, as follows: for a peak current of 100 nA, RS of 5 and 15 MΩ induces − 9 mV and − 16 mV shifts of the half-activation potential, respectively. The slope factor is also drastically increased, by a factor of 1.8 and 2.4, respectively (Fig. 5C). Of note, when RS is 15 MΩ and amplitudes are in the order of several tens of nA, a major voltage deviation occurs, decreasing the current amplitude by a factor of ten. This may falsely give the impression that the current is not high and thus that the introduced voltage error is negligible. In order to test whether the model reproduces experimental data, we used a set of heterologously-expressed Nav1.5 currents recorded in COS-7 cells using manual voltage-clamp (qualitatively validated or not, in order to include highly 'artefacted', erroneous data). When using transient transfection systems, recorded currents are very variable from cell to cell, with peak currents measured at − 20 mV ranging from 391 pA to 17.8 nA in the chosen cell set (52 cells). We used this variability to study the effect of current amplitude on the activation curve properties. First, in order for the model to be as close as possible to the experimental data, we modified the previously published Hodgkin–Huxley model to match the properties of the Nav1.5 current obtained in optimal experimental conditions (Fig. 6). We used as the reference group the cells presenting peak current amplitudes (measured at − 20 mV) smaller than 1 nA (7 cells), with RS compensation allowing a residual RS of around 2 MΩ. The initial model (Fig. 3) suggests negligible alteration of V0.5 and k in these conditions. The model was then optimized by adjusting the Hodgkin–Huxley equations (Eqs. 9 and 10 in the "Methods" section) to obtain V0.5, k and inactivation time constants similar to the averaged values of the 7 reference cells (Fig. 6B,C). We then split the 52 cells into six groups according to current amplitude range (the 7 reference cells, then four groups of 10 cells, and a last group of 5 cells with a peak I−20 mV greater than 10 nA), and plotted, for each group, the mean V0.5 (Fig. 7A) and k values (Fig. 7B) as a function of mean current amplitude. We observed a decrease in both V0.5 and k when the current range increases. These relationships were successfully fitted by the computer model when RS was set to 2 MΩ, which is close to the experimental value after compensation (RS = 2.3 ± 0.2 MΩ). In these conditions, if we accept maximal inward peak current amplitudes up to 7 nA, the error in V0.5 is below 10 mV and k remains greater than 5 mV. Experimentally, current amplitudes larger than 7 nA should be avoided or discarded, to prevent larger errors in evaluating V0.5 and k. Nevertheless, the benefit of such a representation (Fig. 7) is obvious, as a correlation can be drawn between current amplitude and V0.5 and k values, with a more reliable evaluation of these values at low current amplitude levels.
We then used a dataset of INa currents obtained from neonatal mouse ventricular cardiomyocytes, with values of current amplitudes that are frequently published, ranging from 1.8 to 10.3 nA, and using 80% series resistance compensation. We drew plots similar to those in Fig. 7 of the experimental activation parameters, V0.5 and k, as a function of current amplitude, in Fig. 8A,B, respectively, and we added the model of heterologously-expressed Nav1.5 currents (in orange) generated above. The model does not fit the data exactly, suggesting that INa properties are slightly different in cardiomyocytes and transfected COS-7 cells. Interestingly, however, the exponential fits of the data follow the same trend, parallel to the COS-7 model, suggesting the same effect of RS on V0.5 and k. Similar to transfected COS-7 cells, cardiomyocytes with currents greater than 7 nA display a mean V0.5 ~ 5 mV more negative, and a mean k ~ 1 mV smaller, than cardiomyocytes with currents smaller than 7 nA (Fig. 8A,B), suggesting that a 7-nA amplitude cut-off is appropriate. This comparison shows that differences in activation parameters may be blurred or exaggerated by inappropriate pooling of data from cells with excessive current amplitude. Finally, we used a set of data from HEK293 cells stably expressing Nav1.5 channels, obtained using the automated patch-clamp set-up SyncroPatch 384PE (Fig. 9). Cells were grouped by intervals of 500 pA: 0–500 pA, 500–1000 pA, etc. The first experimental group has a mean inward current amplitude lower than that of the reference group of transfected COS-7 cells (− 267 ± 67 pA, n = 7 vs − 608 ± 48 pA, n = 7, respectively). It should be noted that in this amplitude range, activation parameters are more difficult to determine. This is reflected by the large s.e.m. values for mean V0.5 and k. We postulate that HEK293 endogenous currents may non-specifically affect the properties of the recorded currents when they are in the 0–500 pA range. For the following groups, with larger INa amplitudes, V0.5 seems to be stable. Hence, a V0.5 value around − 25 mV appears to be reliable. When current amplitudes are lower than 3.5 nA, the V0.5 change is less than 10 mV and k remains greater than 5 mV. Therefore, it is essential to perform experiments in conditions in which the inward current is between 500 pA and 3.5 nA when using such an automated patch-clamp system, and to exclude data with higher peak current amplitudes. These limits are more stringent than for manual patch-clamp as seen above (7 nA), but this is consistent with the limited compensation capabilities of some automated patch-clamp systems, which use a slow response time for RS compensation to avoid over-compensation and the consequent current oscillation that can lead to seal disruption.

## Discussion

Even though the effects of series resistance have been described very early3, a lot of published measured currents are in the range of several nA, which often leads to incorrect voltage clamp. We developed a simple model, using published kinetic models of ion currents, to simulate and describe this caveat. We used both an inward current generated by a voltage-gated Na+ channel and an outward current generated by a voltage-gated K+ channel, both characterized by fast activation kinetics. Using these models and experimental recordings, we observed that a large series resistance may give erroneous activation curves (Figs. 1, 2, 3, 5, 7, 8, 9) and dose–response curves (Fig. 4).
A similar mathematical model, taking the RS impact into account, has been used to study the causes of variability in current recordings obtained from the voltage-gated K+ channel Kv11.1 (ref. 9). Here we used such a model to provide a guideline focusing on parameters that the experimenter can easily act on: current amplitude and RS. We observed that the activation parameters of the cardiac voltage-gated Na+ current INa are much more sensitive than those of the cardiac voltage-gated K+ current Ito: a current amplitude range of 10 nA combined with an RS of 5 MΩ shows almost no alteration of the activation curve of the voltage-gated K+ channel (Fig. 5B, center and Fig. 5C, middle), whereas the same condition with the voltage-gated Na+ channel shows a major alteration of the activation curve (Fig. 2B, bottom right and Fig. 3C, right). The simplest interpretation of this observation is that, for Na+ channels, the increase in Na+ entry induced by depolarization further depolarizes the membrane and creates instability. For K+ channels, the increase in K+ outflow induced by depolarization tends to repolarize the membrane and limits instability. However, in extreme cases, this repolarization prevents the occurrence of inactivation, leading to delayed inactivation (Fig. 5B, bottom right). For the voltage-gated Na+ channel Nav1.5, we concluded that it is essential to avoid recording inward current amplitudes greater than 7 nA when the residual RS is around 2 MΩ, to get a reasonable estimate of the activation gate characteristics with the manual patch-clamp technique (Fig. 7). When using an automated patch system, the limit is lowered to 3.5 nA (Fig. 9). In order to test for activation changes induced, for example, by drugs, mutations or post-translational modifications that are associated with current amplitude changes, it is advisable to generate plots of V0.5 or k as a function of Ipeak in both conditions, to detect early any artefacts due to excessive current amplitude. This a priori caution will allow adapting the experimental conditions to record currents below 7 nA. To summarize, we suggest simple guidelines for the voltage-gated Na+ channel Nav1.5:

1. Always compensate RS as much as possible;
2. RS values around 2 MΩ after compensation allow recordings with a maximal inward current of 7 nA in manual patch-clamp;
3. Using the Nanion SyncroPatch 384PE, recordings with a maximal inward current of 3.5 nA can be used.

These guidelines may be extended to other NaV isoforms, contingent on generating plots as in Figs. 7 and 9. The guidelines are less stringent for recording reliable K+ outward currents. However, one should always compensate RS as much as possible. From Fig. 5B,C, for RS values up to 5 MΩ after compensation, recordings with a maximal current of 10 nA will be highly reliable. With the Nanion SyncroPatch 384PE, RS values up to 15 MΩ after compensation allow recordings with a maximal current of 10 nA, but inhibitors or various transient transfection conditions should be used to make sure that the measured current amplitude is not saturating due to voltage deviation (Fig. 5B, bottom right). For any current generated by voltage-gated channels, it is judicious to draw activation slope vs. amplitude plots and activation V0.5 vs. amplitude plots in a preliminary study to determine adapted conditions, a prerequisite to obtain reliable data and results. For any other ion-channel type (ligand-gated, lipid-gated, regulated by second messengers or otherwise), low membrane resistance, i.e.
high expression of active ion channels, associated with high RS values, will also interfere with adequate voltage command and current measurement.

1. RS values are much lower when pipettes with low resistance ('large' pipettes) are used. When using amplifiers combining RS and Cm compensation, suppression of pipette capacitance currents is of high interest, since uncompensated pipette capacitance has a detrimental effect on the stability of the series resistance correction circuitry. This can be achieved by the use of borosilicate glass pipettes and wax or Sylgard coating10. When using the Nanion SyncroPatch 384PE or other automated patch-clamp systems, low-resistance chips are preferred.
2. When over-expressed channels are studied, transfection has to be adapted to produce a reasonable number of channels, generating the desired current amplitude; when cell lines stably expressing the channel of interest are used, the clones generating the desired current amplitude range are preferably chosen.

Any current, including native currents, can also be reduced when pipette and extracellular concentrations of the carried ion are reduced. In addition, the concentration gradient can be changed to limit the electrochemical gradient. Inhibitors, such as TTX for NaV channels, may also be used at low concentration to reduce the current amplitude, as long as the inhibitor does not modify the biophysics of the WT and/or mutant channels and does not interfere with the action of other pharmacological compounds. Therefore, any patch-clamp experiment needs to be carefully designed to reach appropriate conditions, guaranteeing rigorous analysis of the current. Finally, in native cells (excitable or non-excitable), the current passing through an ion channel type is always recorded in combination with other currents (leak current, at the minimum), and is generally isolated pharmacologically (e.g., TTX) or through other means, all involving subtraction of currents (e.g., P/n). The voltage error caused by RS also depends on these other currents. Thus, it would be interesting to also model this situation to have a more integrated view of RS-induced incorrect voltage clamp.

## Methods

### Computer models

#### Application to whole-cell ion currents

INa and Ito currents were modeled using a Hodgkin–Huxley model of channel gating based on previously published models (O'Hara et al., 2011). For cardiac INa, we did not include the slow component of h, which only represents 1% of h inactivation (O'Hara et al., 2011).
$${m}_{\infty }=\frac{1}{1+\mathrm{exp}\left(-\frac{Vm+39.57}{9.871}\right)}$$ (1) $${\tau }_{m}=\frac{1}{6.765\times \mathrm{exp}\left(\frac{Vm+11.64}{34.77}\right)+8.552\times \mathrm{exp}\left(-\frac{Vm+77.42}{5.955}\right)}$$ (2) $${j}_{\infty }=\frac{1}{1+\mathrm{exp}\left(\frac{Vm+82.9}{6.086}\right)}$$ (3) $${\tau }_{j}=2.038+\frac{1}{0.02136\times \mathrm{exp}\left(-\frac{Vm+100.6}{8.281}\right)+0.3052\times \mathrm{exp}\left(\frac{Vm+0.9941}{38.45}\right)}$$ (4) $${h}_{\infty }=\frac{1}{1+\mathrm{exp}\left(\frac{Vm+82.9}{6.086}\right)}$$ (5) $${\tau }_{h}=\frac{1}{1.432\times {10}^{-5}\times \mathrm{exp}\left(-\frac{Vm+1.196}{6.285}\right)+6.149 \times \mathrm{exp}\left(\frac{Vm+0.5096}{20.27}\right)}$$ (6) The time-dependent gate values (m, h and j) were computed at every time step11 with an "adaptive time-step" method as: $${y}_{t+tstep}={y}_{\infty }- \left({y}_{\infty }-{y}_{t}\right)\times \mathrm{exp}(-tstep/{\tau }_{y})$$ (7a) with y being the time-dependent gate value and tstep an adaptive time step. tstep was initialized to 0.1 µs, doubled when all the relative variations of m, h and j were smaller than 0.5 × 10⁻⁵, and halved when one of the relative variations of m, h and j was greater than 10⁻⁵. When this limit was reached, the computation went one tstep backward and was repeated with the reduced tstep value, to prevent divergence. To validate this method, we also used an "LSODE" method (cf. example in supplemental Fig. 1). m, h and j were solved as: $$\frac{dy}{dt}=\frac{{y}_{\infty } -y}{{\tau }_{y}}$$ (7b) using R software (v3.6.3, https://www.r-project.org) and the LSODE12 method from the deSolve package (v1.28). In the most critical condition, with a large Na+ current (Gmax = 6 µS) and a large series resistance (RS = 5 MΩ), both methods gave identical Na+ currents (cf. supplemental Fig. 1). Therefore, the "adaptive time-step" method was used to compute the m, h and j values. INa was calculated as follows: $${I}_{Na}= {G}_{Na}\times \left(Vm- {E}_{Na}\right)\times {m}^{3}\times j \times h$$ (8) with $${E}_{Na}= \frac{R T}{z F}\mathrm{log}\left(\frac{{\left[Na^{+}\right]}_{out}}{{\left[Na^{+}\right]}_{in}}\right)$$, [Na+]out = 145 mM and [Na+]in = 10 mM. To model overexpressed Nav1.5 currents, we adjusted some parameters, shown in bold, to fit the characteristics of the current when the peak amplitude is less than 1 nA (cf. the "Results" section and Fig. 6). $${m}_{\infty }=\frac{1}{1+\mathrm{exp}\left(-\frac{Vm+\bf42.57}{\bf12}\right)}$$ (9) $${\tau }_{h}=\frac{\bf4}{1.432 \times{10}^{-5}\times \mathrm{exp}\left(-\frac{Vm+1.196}{6.285}\right)+6.149 \times \mathrm{exp}\left(\frac{Vm+0.5096}{20.27}\right)}$$ (10) To model cardiac Ito, we did not include the CaMK-dependent component, since at low Ca2+ pipette concentration (< 100 nM) this component is negligible (2%) (O'Hara et al., 2011).
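As an illustration of Eqs. (1), (2) and (7a), a short Python sketch of the exponential gate update for the m gate alone, with a fixed 0.1-µs step standing in for the paper's adaptive step (h and j follow the same pattern with their own steady states and time constants):

```python
import numpy as np

def m_inf(Vm):   # Eq. (1), Vm in mV
    return 1.0 / (1.0 + np.exp(-(Vm + 39.57) / 9.871))

def tau_m(Vm):   # Eq. (2), result in ms
    return 1.0 / (6.765 * np.exp((Vm + 11.64) / 34.77)
                  + 8.552 * np.exp(-(Vm + 77.42) / 5.955))

dt = 1e-4                      # fixed 0.1-us step, expressed in ms
Vm, m = -20.0, m_inf(-100.0)   # step from -100 mV holding to -20 mV
for _ in range(int(1.0 / dt)): # 1 ms of activation
    # Eq. (7a): exact exponential relaxation toward the steady state
    m = m_inf(Vm) - (m_inf(Vm) - m) * np.exp(-dt / tau_m(Vm))

print(f"m after 1 ms at -20 mV: {m:.3f} (steady state {m_inf(-20.0):.3f})")
```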
$${a}_{\infty }=\frac{1}{1+\mathrm{exp}\left(-\frac{Vm-14.34}{14.82}\right)}$$ (11) $${\tau }_{a}=\frac{1.0515}{\frac{1}{1.2089\times (1+\mathrm{exp}\left(-\frac{Vm-18.41}{29.38}\right))}+\frac{3.5}{1+\mathrm{exp}\left(\frac{Vm+100}{29.38}\right)}}$$ (12) $${i}_{\infty }=\frac{1}{1+\mathrm{exp}\left(\frac{Vm+43.94}{5.711}\right)}$$ (13) $${\tau }_{i,fast}=4.562+\frac{1}{0.3933\times \mathrm{exp}\left(-\frac{Vm+100}{100}\right)+0.08004 \times \mathrm{exp}\left(\frac{Vm+50}{16.59}\right)}$$ (14) $${\tau }_{i,slow}=23.62+\frac{1}{0.001416\times \mathrm{exp}\left(-\frac{Vm+96.52}{59.05}\right)+1.78\times{10}^{-8}\times \mathrm{exp}\left(\frac{Vm+114.1}{8.079}\right)}$$ (15) $${A}_{i,fast}=\frac{1}{1+\mathrm{exp}\left(\frac{Vm-213.6}{151.2}\right)}$$ (16) $${A}_{i,slow}=1-{A}_{i,fast}$$ (17) $$i={A}_{i,fast}\times {i}_{fast}+{A}_{i,slow}\times {i}_{slow}$$ (18) a, $${i}_{fast}$$ and $${i}_{slow}$$ were computed at every time step11 using the "adaptive time-step" method (see above). Ito was calculated as follows: $${I}_{to}= {G}_{to}\times \left(Vm- {E}_{K}\right)\times a\times i$$ (19) with $${E}_{K}= \frac{R T}{z F}\mathrm{log}\left(\frac{{\left[K^{+}\right]}_{out}}{{\left[K^{+}\right]}_{in}}\right)$$, [K+]out = 5 mM and [K+]in = 145 mM. For details on the kinetic models, please see5. The membrane potential was computed as follows at each time step: $$\frac{dVm}{dt}=\frac{Vcmd -Vm}{{R}_{s}\times {C}_{m}}- \frac{i}{{C}_{m}}$$ (20) We hypothesized that the amplifier response time was not limiting. The membrane capacitance used was 20 pF and was considered electronically compensated. Errors due to poor space clamp were considered negligible in small cells like COS-7 cells, but they should be taken into account in bigger cells such as cardiomyocytes and in cells with complex morphologies such as neurons. Notably, in some situations, specific protocols can reduce these artefacts linked to poor space clamp13. Beyond technical issues due to the patch pipette, additional resistances due to the narrow T-tubular lumen are also not negligible in cardiac cell T-tubules and lead to delays in T-tubular membrane depolarization14. #### Application to pharmacological investigations Before investigating the effects of TTX, we computed the impact of the peak current amplitude at − 20 mV on its measured value for various RS values (Fig. 4A). TTX effects were modeled by first constructing the theoretical dose–response curve with RS = 0 MΩ. Knowing the experimental IC50 and Hill coefficient6, we calculated the TTX concentrations necessary to get 0.75 of the current (GNa = 1.5 µS instead of GNa = 2 µS in the absence of TTX), 0.5 (GNa = 1 µS instead of GNa = 2 µS in the absence of TTX), 0.3, … and 10⁻³ of the current. Then, for a given RS (2 or 5 MΩ), we used the relationship between theoretical and measured values of the peak current (Fig. 4A) to deduce the corresponding measured Ipeak value of the residual current after TTX application. For instance, when RS = 5 MΩ, for a theoretical current of − 27 nA, the measured current is about − 13 nA (see the red chart in Fig. 4A). A 50% reduction of the theoretical value of − 27 nA (corresponding to the effect of 2.3 µM TTX, the IC50 value) results in a measured remaining current of − 8.3 nA when RS = 5 MΩ (see the red chart in Fig. 4A). Therefore, the apparent effect of 2.3 µM TTX on a measured current of − 13 nA is modeled by a − 8.3/− 13 ≈ 0.64 factor on GNa in Eq. (8).
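The apparent-block arithmetic in the last paragraph can be checked in a few lines; the measured values below are the ones quoted in the text (read off the simulated relationship in Fig. 4A), not recomputed here:

```python
# Worked numbers from the text, for RS = 5 MOhm.
theoretical_ctrl = -27.0   # nA, peak current with no voltage error
measured_ctrl = -13.0      # nA, what the amplifier actually reports
# 2.3 uM TTX (the IC50) halves the theoretical current...
theoretical_ttx = 0.5 * theoretical_ctrl     # -13.5 nA true residual
measured_ttx = -8.3        # nA, measured residual current (from Fig. 4A)

apparent_block = 1 - measured_ttx / measured_ctrl
print(f"true block: 50% | apparent block: {apparent_block:.0%}")
# ~36% apparent block, i.e. the IC50 appears shifted to higher doses.
```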
Similar computations were conducted for different control current amplitudes, RS values, and TTX "doses", and the corresponding dose–response curves were built. #### Cell culture and transfection The African green monkey kidney-derived cell line COS-7 was obtained from the American Type Culture Collection (CRL-1651) and cultured in Dulbecco's modified Eagle's medium (GIBCO) supplemented with 10% fetal calf serum and antibiotics (100 IU/mL penicillin and 100 µg/mL streptomycin) at 5% CO2 and 95% air, maintained at 37 °C in a humidified incubator. Cells were transfected in 35-mm Petri dishes when the culture reached 50–60% confluence, with DNA (2 µg total) complexed with jetPEI (Polyplus transfection) according to the standard protocol recommended by the manufacturer. COS-7 cells were co-transfected with 200 ng of pCI-SCN5A (NM_000335.4), 200 ng of pRC-SCN1B (NM_001037) (kind gifts of AL George, Northwestern University, Feinberg School of Medicine) and 1.6 µg of pEGFP-N3 plasmid (Clontech). Cells were re-plated onto 35-mm Petri dishes the day after transfection for patch-clamp experiments. HEK293 cells stably expressing hNav1.5 were cultured in Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 10% fetal calf serum, 1 mM pyruvic acid, 2 mM glutamine, 400 µg/mL G418 (Sigma), 100 U/mL penicillin and 100 μg/mL streptomycin (Gibco, Grand Island, NY) at 5% CO2 and 95% air, maintained at 37 °C in a humidified incubator. #### Statement on the use of mice All investigations conformed to directive 2010/63/EU of the European Parliament, to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication No. 85-23, revised 1985) and to local institutional guidelines. #### Neonatal mouse ventricular cardiomyocyte isolation and culture Single cardiomyocytes were isolated from the ventricles of mouse neonates aged from postnatal day 0 to 3 by enzymatic and mechanical dissociation in a semi-automated procedure using the Neonatal Heart Dissociation Kit and the GentleMACS™ Dissociator (Miltenyi Biotec). Briefly, hearts were harvested, and the ventricles were separated from the atria and digested in the GentleMACS™ Dissociator. After termination of the program, the digestion was stopped by adding medium containing Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 10% horse serum, 5% fetal bovine serum, 100 U/mL penicillin and 100 μg/mL streptomycin. The cell suspension was filtered to remove undissociated tissue fragments and centrifuged. The cell pellet was resuspended in culture medium, and the cells were plated in 60-mm-diameter Petri dishes at 37 °C for 1.5 h. The unattached myocytes were then resuspended, plated on laminin-coated dishes at a density of 50 000 cells per plate, and incubated in a 37 °C, 5% CO2/95% air incubator. 24 h after plating, the medium was replaced by DMEM supplemented with 1% fetal bovine serum, 100 U/mL penicillin and 100 μg/mL streptomycin, and electrophysiological experiments were performed 48 h after isolation. #### Manual electrophysiology on transfected COS-7 cells One or two days after splitting, COS-7 cells were mounted on the stage of an inverted microscope and constantly perfused with a Tyrode solution maintained at 22.0 ± 2.0 °C at a rate of 1–3 mL/min; the HEPES-buffered Tyrode solution contained (in mmol/L): NaCl 145, KCl 4, MgCl2 1, CaCl2 1, HEPES 5, glucose 5, pH adjusted to 7.4 with NaOH.
During Na+ current recording, the studied cell was locally superfused15 with an extracellular solution used to prevent endogenous K+ currents, containing (in mmol/L): NaCl, 145; CsCl, 4; CaCl2, 1; MgCl2, 1; HEPES, 5; glucose, 5; pH adjusted to 7.4 with NaOH. Patch pipettes (tip resistance: 0.8 to 1.3 MΩ) were pulled from soda lime glass capillaries (Kimble-Chase) and coated with dental wax to decrease pipette capacitive currents. The pipette was filled with a Na+ intracellular medium containing (in mmol/L): CsCl, 80; gluconic acid, 45; NaCl, 10; MgCl2, 1; CaCl2, 2.5; EGTA, 5; HEPES, 10; pH adjusted to 7.2 with CsOH. Stimulation and data recording were performed with pClamp 10, an A/D converter (Digidata 1440A) and an Axopatch 200B (all Molecular Devices) or an Alembic amplifier (Alembic Instruments). Currents were acquired in the whole-cell configuration, filtered at 10 kHz and recorded at a sampling rate of 33 kHz. Before series resistance compensation, a series of fifty 25-ms steps was applied from − 70 mV to − 80 mV, to subsequently calculate Cm and RS values off-line from the recorded currents. To generate the Nav1.5 activation curve, the membrane was depolarized from a holding potential of − 100 mV to values between − 80 mV and + 50 mV (+ 5-mV increments) for 50 ms, every 2 s. Activation curves were fitted by a Boltzmann equation: G = Gmax/(1 + exp (− (Vm − V0.5)/k)), in which G is the conductance, V0.5 is the membrane potential of half-activation and k is the slope factor. For Fig. 7, cells were grouped by 10, except the first group, which includes the cells with an absolute peak I−20 mV of less than 1000 pA (n = 7), and the last group, which includes the cells with a peak I−20 mV greater than 10 nA (n = 5). #### Electrophysiology on cardiomyocytes Whole-cell Nav currents were recorded at room temperature 48 h after cell isolation with pClamp 10, an A/D converter (Digidata 1440A) and an Axopatch 200B amplifier (all Molecular Devices). Current signals were filtered at 10 kHz prior to digitization at 50 kHz and storage. Patch-clamp pipettes were fabricated from borosilicate glass (OD: 1.5 mm, ID: 0.86 mm, Sutter Instrument, Novato, CA) using a P-97 micropipette puller (Sutter Instrument), coated with wax, and fire-polished to a resistance between 0.8 and 1.5 MΩ when filled with internal solution. The internal solution contained (in mM): NaCl 5, CsF 115, CsCl 20, HEPES 10, EGTA 10 (pH 7.35 with CsOH, ~ 300 mosM). The external solution contained (in mM): NaCl 20, CsCl 103, TEA-Cl (tetraethylammonium chloride) 25, HEPES 10, glucose 5, CaCl2 1, MgCl2 2 (pH 7.4 with HCl, ~ 300 mosM). All chemicals were purchased from Sigma. After establishing the whole-cell configuration, the voltage dependence of activation and inactivation properties was allowed to stabilize for 10 min. Before series resistance compensation, series of 25-ms steps were applied from − 70 mV to − 80 mV and to − 60 mV, to subsequently calculate Cm and RS values off-line from the recorded currents. After compensation of the series resistance (80%), the membrane was held at a HP of − 120 mV, and the voltage-clamp protocol was carried out as follows. To determine peak Na+ current–voltage relationships, currents were elicited by 50-ms depolarizing pulses to potentials ranging from − 80 to + 40 mV (presented at 5-s intervals in 5-mV increments) from a HP of − 120 mV. Peak current amplitudes were defined as the maximal currents evoked at each voltage, and were subsequently leak-corrected.
To analyze the voltage dependence of activation, conductances (G) were calculated, and conductance–voltage relationships were fitted with a Boltzmann equation. Data were compiled and analyzed using ClampFit 10 (Axon Instruments), Microsoft Excel, and Prism (GraphPad Software, San Diego, CA). #### High-throughput electrophysiology Automated patch-clamp recordings were performed using the SyncroPatch 384PE from Nanion (München, Germany). Single-hole, 384-well recording chips with medium resistance (4.77 ± 0.01 MΩ, n = 384) were used for recordings of HEK293 cells stably expressing the human Nav1.5 channel (300 000 cells/mL) in the whole-cell configuration. Pulse generation and data collection were performed with the PatchControl384 v1.5.2 software (Nanion) and the Biomek v1.0 interface (Beckman Coulter). Whole-cell recordings were conducted according to the procedures recommended by Nanion. Cells were stored in a cell hotel reservoir at 10 °C with a shaking speed of 60 RPM. After initiating the experiment, cell catching, sealing, whole-cell formation, buffer exchanges, recording, and data acquisition were all performed sequentially and automatically. The intracellular solution contained (in mM): 10 CsCl, 110 CsF, 10 NaCl, 10 EGTA and 10 HEPES (pH 7.2, osmolarity 280 mOsm), and the extracellular solution contained (in mM): 60 NaCl, 4 KCl, 100 NMDG, 2 CaCl2, 1 MgCl2, 5 glucose and 10 HEPES (pH 7.4, osmolarity 298 mOsm). Whole-cell experiments were performed at a holding potential of − 100 mV at room temperature (18–22 °C). Currents were sampled at 20 kHz. Activation curves were built by 50-ms depolarizations from − 80 mV to + 70 mV (+ 5-mV increments), every 5 s, and were fitted by a Boltzmann equation. Stringent criteria were used to include individual cell recordings in the data analysis (seal resistance > 0.5 GΩ and estimated series resistance < 10 MΩ).
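A sketch of the Boltzmann fit used throughout, G = Gmax/(1 + exp(−(Vm − V0.5)/k)); the conductance values below are synthetic (V0.5 = −25 mV, k = 6 mV plus noise), standing in for leak-corrected experimental data:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(Vm, Gmax, V05, k):
    # activation curve: G as a function of test potential Vm (mV)
    return Gmax / (1.0 + np.exp(-(Vm - V05) / k))

Vm = np.arange(-80.0, 55.0, 5.0)         # test potentials, as in the protocols
rng = np.random.default_rng(0)
G = boltzmann(Vm, 1.0, -25.0, 6.0) + rng.normal(0, 0.02, Vm.size)

popt, _ = curve_fit(boltzmann, Vm, G, p0=[1.0, -20.0, 8.0])
print(f"Gmax = {popt[0]:.2f}, V0.5 = {popt[1]:.1f} mV, k = {popt[2]:.1f} mV")
```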
2023-02-04 23:15:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5326120257377625, "perplexity": 3947.4062916288667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500154.33/warc/CC-MAIN-20230204205328-20230204235328-00727.warc.gz"}
https://techcommunity.microsoft.com/t5/excel/autosum/td-p/2994349
# AutoSum

My total includes the letter 'J'. Help me figure this out?

# Re: AutoSum

Could you post your workbook or screenshots? We need some more info on what is happening. Have you checked the cell formatting?

# Re: AutoSum

I think you may be getting confused between the formula bar up top and the results; the sum looks fine to me! The formula bar displays the formula that is being calculated in each cell. The =SUM(J2:J1226) is telling you that the cells in column J, from J2 all the way through J1226, are being summed. The total, the 2,752B number seen at the bottom, is being calculated correctly. Also, if this is sensitive information, I would remove the example file.

# Re: AutoSum

Thank you. Not sensitive info… state budget.
2021-12-05 21:43:31
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8421909213066101, "perplexity": 2690.9962089702744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363216.90/warc/CC-MAIN-20211205191620-20211205221620-00506.warc.gz"}
https://en.wikipedia.org/wiki/Sparse_array
# Sparse array

In computer science, a sparse array is an array in which most of the elements have the default value (usually 0 or null). The occurrence of zero-value elements in a large array is inefficient for both computation and storage. An array with a large number of zero elements is referred to as sparse. In the case of sparse arrays, one can ask for a value from an "empty" array position; for an array of numbers, a value of zero should then be returned, and for an array of objects, a value of null. A naive implementation of an array may allocate space for the entire array, but in the case where there are few non-default values, this implementation is inefficient. Typically the algorithm used instead of an ordinary array is determined by other known features (or statistical features) of the array, for instance if the sparsity is known in advance or if the elements are arranged according to some function (e.g., the elements occur in blocks). A heap memory allocator in a program might choose to store regions of blank space in a linked list rather than storing all of the allocated regions in, say, a bit array.

## Representation

A sparse array can be represented as `Sparse_Array[{pos1 -> val1, pos2 -> val2,...}]` or `Sparse_Array[{pos1, pos2,...} -> {val1, val2,...}]` which yields a sparse array in which values $val_i$ appear at positions $pos_i$.

## Sparse Array as Linked List

An obvious question is why we need a linked list to represent a sparse array if it can be represented easily using a normal array. The answer lies in the fact that a normal-array representation allocates a lot of space for zero or null elements. For example, consider the following array declaration: `double arr[1000][1000]` When we define this array, enough space for 1,000,000 doubles is allocated. If each double requires 8 bytes of memory, this array will require 8 million bytes of memory. Because this is a sparse array, most of its elements will have a value of zero (or null). Hence, defining this array will soak up all this space and waste memory (compared to an array in which memory has been allocated only for the nonzero elements). An effective way to overcome this problem is to represent the array using a linked list, which requires less memory, as only elements having a non-zero value are stored. This involves a time-space trade-off: though less memory is used, average access and insertion time becomes linear in the number of elements stored, because the previous elements in the list must be traversed to find the desired element. A normal array has constant access and insertion time. A sparse array as a linked list contains nodes linked to each other. In a one-dimensional sparse array, each node includes the non-zero element's "index" (position), the element's "value", and a node pointer "next" (for linking to the next node). Nodes are linked in order as per the index. In the case of a two-dimensional sparse array, each node contains a row index, a column index (which together give its position), a value at that position and a pointer to the next node.
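A minimal Python sketch of the one-dimensional linked-list representation described above: one node per non-zero element holding (index, value, next), kept sorted by index, with the default value returned for "empty" positions.

```python
class Node:
    def __init__(self, index, value, next=None):
        self.index, self.value, self.next = index, value, next

class SparseArray:
    def __init__(self):
        self.head = None

    def get(self, index, default=0):
        # walk the sorted list; stop early once past the requested index
        node = self.head
        while node is not None and node.index <= index:
            if node.index == index:
                return node.value
            node = node.next
        return default                 # "empty" position: default value

    def set(self, index, value):
        # insert in index order: O(n) in stored elements, the
        # time-space trade-off mentioned above
        prev, node = None, self.head
        while node is not None and node.index < index:
            prev, node = node, node.next
        if node is not None and node.index == index:
            node.value = value
        elif prev is None:
            self.head = Node(index, value, node)
        else:
            prev.next = Node(index, value, node)

arr = SparseArray()
arr.set(999_999, 3.14)          # one node allocated, not 10^6 doubles
print(arr.get(999_999), arr.get(42))   # -> 3.14 0
```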
2016-10-27 03:37:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28353357315063477, "perplexity": 582.7662274302578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721067.83/warc/CC-MAIN-20161020183841-00542-ip-10-171-6-4.ec2.internal.warc.gz"}
https://scioly.org/forums/viewtopic.php?f=193&t=6236
## Simple Machines B/Compound Machines C

Eggo wrote:
Welcome to the Marathon for Simple Machines and Compound Machines! Remember to hide your answers! Note: (I thought I might as well start a question marathon for Simple Machines and Compound Machines since no one else did.) Let's start off with: What class of a lever is the human forearm and why?

bernard wrote:
A human forearm is a Class 3 lever because the load and effort are on the same side of the fulcrum, with the load farther from the fulcrum. Here are some diagrams that illustrate this: a diagram of levers (http://leo.koppel.ca/backhoe/levers.png) and a diagram of an arm (http://sciencelearn.org.nz/var/sciencelearn/storage/images/contexts/sporting_edge/sci_media/bent_arm/14731-3-eng-NZ/bent_arm_full_size_landscape.jpg).

Eggo wrote:
Correct! Your turn!

XturtleX wrote:
A pulley has an AMA of 4; it is used to lift up a 100 N block. How much force will someone have to exert to lift it?

Unome wrote:
25N

bernard wrote:
Unome, your answer seems correct to me. Since the asker has not been on for a few days, feel free to go ahead and ask the next question. $AMA = \frac{F_{out}}{F_{in}} \to 4 = \frac{100 \hspace{1} N}{F_{in}} \to F_{in} = 25 \hspace{1} N$

Unome wrote:
A 1st class lever exists with three weights. One side contains a 4.0 kilogram weight at 0.600 meters from the fulcrum, and a weight with a mass of K at 1.3 meters from the fulcrum. The other side has a 7.0 kilogram weight with a volume of 700. centimeters cubed immersed in water, at a distance of 1.0 meters from the fulcrum. Find K (rounded to sig figs).
ageek wrote:
I'm not exactly sure what effect the "immersed in water" or the volume has, except that I can assume that it's balanced (I hope). If the kg·m is the same on either side: 7.0 kg × 1.0 m − 4.0 kg × 0.6 m = 4.6 kg·m, and 4.6 kg·m / 1.3 m ≈ 3.538 kg.

chinesesushi wrote:
K = 3.0 kg

Unome wrote:
Correct! Your turn.
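For readers checking the accepted answer, one way to reach K = 3.0 kg, assuming the intended treatment of the immersed weight is the standard buoyancy correction (water density 1 g/cm³, so the 700 cm³ weight displaces 0.700 kg of water, leaving an effective mass of 7.0 − 0.7 = 6.3 kg), is to balance torques about the fulcrum:

$6.3g \times 1.0 = 4.0g \times 0.600 + Kg \times 1.3 \to K = \frac{6.3 - 2.4}{1.3} = 3.0 \hspace{1} kg$

This also shows why ageek's 3.5 kg differs: it omits the buoyant force on the 7.0 kg weight.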
2022-07-06 07:55:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4913072884082794, "perplexity": 9638.496789463872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104668059.88/warc/CC-MAIN-20220706060502-20220706090502-00668.warc.gz"}
https://stacks.math.columbia.edu/tag/034K
An element of an algebra over a ring is integral over the ring if and only if it is locally integral at every prime ideal of the ring.

Lemma 10.35.12. Let $\varphi : R \to S$ be a ring map. Let $x \in S$. The following are equivalent:

1. $x$ is integral over $R$, and
2. for every prime ideal $\mathfrak p \subset R$ the element $x \in S_{\mathfrak p}$ is integral over $R_{\mathfrak p}$.

Proof. It is clear that (1) implies (2). Assume (2). Consider the $R$-algebra $S' \subset S$ generated by $\varphi(R)$ and $x$. Let $\mathfrak p$ be a prime ideal of $R$. Then we know that $x^d + \sum_{i = 1, \ldots, d} \varphi(a_i) x^{d - i} = 0$ in $S_{\mathfrak p}$ for some $a_i \in R_{\mathfrak p}$. Hence we see, by looking at which denominators occur, that for some $f \in R$, $f \not\in \mathfrak p$ we have $a_i \in R_f$ and $x^d + \sum_{i = 1, \ldots, d} \varphi(a_i) x^{d - i} = 0$ in $S_f$. This implies that $S'_f$ is finite over $R_f$. Since $\mathfrak p$ was arbitrary and $\mathop{\mathrm{Spec}}(R)$ is quasi-compact (Lemma 10.16.10) we can find finitely many elements $f_1, \ldots, f_n \in R$ which generate the unit ideal of $R$ such that $S'_{f_i}$ is finite over $R_{f_i}$. Hence we conclude from Lemma 10.22.2 that $S'$ is finite over $R$. Hence $x$ is integral over $R$ by Lemma 10.35.4. $\square$

Comment #886: In the statement, instead of $\mathfrak p \in R$ it should say $\mathfrak p \subset R$ prime ideal. In the proof, instead of Assume (1) it should say Assume (2). Proposed slogan: An element of an algebra over a ring is integral over the ring if and only if it is locally integral at every prime ideal of the ring.
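A concrete instance of the lemma (a standard example, not part of the Stacks text): take $R = \mathbf{Z}$, $S = \mathbf{Q}$ and $x = 1/2$. For every prime $p \neq 2$ the element $1/2$ lies in $R_{\mathfrak p} = \mathbf{Z}_{(p)}$, hence is integral over it (it satisfies the monic equation $T - 1/2 = 0$); but $\mathbf{Z}_{(2)}$ is integrally closed in $\mathbf{Q}$ and $1/2 \not\in \mathbf{Z}_{(2)}$, so condition (2) fails at $\mathfrak p = (2)$, matching the fact that $1/2$ is not integral over $\mathbf{Z}$.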
2019-03-23 18:36:09
{"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9814113974571228, "perplexity": 313.3902853086091}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202924.93/warc/CC-MAIN-20190323181713-20190323203713-00161.warc.gz"}
https://en-academic.com/dic.nsf/enwiki/47791
# Evapotranspiration

Evapotranspiration (ET) is a term used to describe the sum of evaporation and plant transpiration from the earth's land surface to the atmosphere. Evaporation accounts for the movement of water to the air from sources such as the soil, canopy interception, and waterbodies. Transpiration accounts for the movement of water within a plant and the subsequent loss of water as vapor through stomata in its leaves. Evapotranspiration is an important part of the water cycle. An element (such as a tree) that contributes to evapotranspiration can be called an evapotranspirator. [http://www.oslpr.org/download/en/2000/0031.pdf]

Potential evapotranspiration (PET) is a representation of the environmental demand for evapotranspiration and represents the evapotranspiration rate of a short green crop, completely shading the ground, of uniform height and with adequate water status in the soil profile. It is a reflection of the energy available to evaporate water, and of the wind available to transport the water vapour from the ground up into the lower atmosphere. Evapotranspiration is said to equal potential evapotranspiration when there is ample water.

## Evapotranspiration and the water cycle

Evapotranspiration is a significant water loss from a watershed. Types of vegetation and land use significantly affect evapotranspiration, and therefore the amount of water leaving a watershed. Because water transpired through leaves comes from the roots, plants with deep-reaching roots can transpire water more constantly. Thus herbaceous plants transpire less than woody plants, because herbaceous plants usually lack a deep taproot. Also, woody plants keep their structure over long winters, while herbaceous plants in seasonal climates must grow up from seed in the spring and will contribute almost nothing to evapotranspiration at that time of year. Conifer forests tend to have much higher rates of evapotranspiration than deciduous forests. This is because their needles give them greater surface area, resulting in more pores for transpiration, and allowing more droplets of rain to be suspended in and around the needles and branches, where some of the droplets can then be evaporated. Factors that affect evapotranspiration include the plant's growth stage or level of maturity, percentage of soil cover, solar radiation, humidity, temperature, and wind.

Through evapotranspiration, forests reduce water yield, except in unique ecosystems called cloud forests. Trees in cloud forests condense fog or low clouds into liquid water on their surface, which drips down to the ground. These trees still contribute to evapotranspiration, but often condense more water than they evaporate or transpire.

In areas that are not irrigated, actual evapotranspiration is usually no greater than precipitation, with some buffer in time depending on the soil's ability to hold water. It will usually be less because some water will be lost due to percolation or surface runoff. An exception is areas with high water tables, where capillary action can cause water from the groundwater to rise through the soil matrix to the surface. If potential evapotranspiration is greater than actual precipitation, then soil will dry out, unless irrigation is used. Evapotranspiration can never be greater than PET, but can be lower if there is not enough water to be evaporated or plants are unable to readily transpire.
## Estimating evapotranspiration

Evapotranspiration can be measured or estimated using several methods.

### Indirect methods

Pan evaporation data can be used to estimate lake evaporation, but transpiration and evaporation of intercepted rain on vegetation are unknown. There are three general approaches to estimating evapotranspiration indirectly.

### Catchment water balance

Evapotranspiration may be estimated from the water balance of a catchment (or watershed). The equation balances the change in water stored within the basin (S) against inputs and exports:

$\Delta S = P - ET - Q - D$

The input is precipitation (P), and the exports are evapotranspiration (which is to be estimated), streamflow (Q), and groundwater recharge (D). If the change in storage, precipitation, streamflow, and groundwater recharge are all estimated, the missing flux, ET, can be estimated by rearranging the above equation as follows:

$ET = P - \Delta S - Q - D$

### Hydrometeorological equations

The most general and widely used equation for calculating reference ET is the Penman equation. The Penman-Monteith variation is recommended by the Food and Agriculture Organization (Allen, R.G.; Pereira, L.S.; Raes, D.; Smith, M., Crop Evapotranspiration: Guidelines for Computing Crop Water Requirements, FAO Irrigation and Drainage Paper 56, FAO, Rome, 1998, ISBN 92-5-104219-5, http://www.fao.org/docrep/X0490E/x0490e00.HTM). The simpler Blaney-Criddle equation was popular in the Western United States for many years, but it is not as accurate in regions with higher humidities. Other solutions used include Makkink, which is simple but must be calibrated to a specific location, and Hargreaves. To convert the reference evapotranspiration to actual crop evapotranspiration, a crop coefficient and a stress coefficient must be used.

### Energy balance

A third methodology for estimating actual evapotranspiration is the use of the energy balance:

$\lambda E = R_n + G - H$

where $\lambda E$ is the energy needed to change the phase of water from liquid to gas, $R_n$ is the net radiation, $G$ is the soil heat flux, and $H$ is the sensible heat flux. Using instruments like a scintillometer, soil heat flux plates, or radiation meters, the components of the energy balance can be calculated and the energy available for actual evapotranspiration can be solved for.

### Eddy covariance

The most direct method of measuring evapotranspiration is the eddy covariance technique, in which fast fluctuations of vertical wind speed are correlated with fast fluctuations in atmospheric water vapor density. This directly estimates the transfer of water vapor (evapotranspiration) from the land (or canopy) surface to the atmosphere.

## Potential evapotranspiration

[Figure: potential evapotranspiration at Hilo and Pahala, Hawaii.]

Potential evapotranspiration (PET) is the amount of water that could be evaporated and transpired if there was sufficient water available. This demand incorporates the energy available for evaporation and the ability of the lower atmosphere to transport evaporated moisture away from the land surface. PET is higher in the summer, on less cloudy days, and closer to the equator, because of the higher levels of solar radiation that provide the energy for evaporation. PET is also higher on windy days, because the evaporated moisture can be quickly moved from the ground or plants, allowing more evaporation to fill its place.
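Returning to the catchment water balance above, here is a minimal Python sketch; the function name and the numbers are hypothetical, and all terms must be expressed over the same period and in the same units (e.g., mm/year):

```python
def et_from_water_balance(P, dS, Q, D):
    """ET = P - dS - Q - D: precipitation minus storage change,
    streamflow, and groundwater recharge (all in the same units)."""
    return P - dS - Q - D

# Made-up annual figures for a small catchment, in mm/year:
print(et_from_water_balance(P=900.0, dS=15.0, Q=280.0, D=60.0))  # -> 545.0
```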
PET is expressed in terms of a depth of water, and can be graphed during the year (see figure). There is usually a pronounced peak in summer, which results from higher temperatures. Potential evapotranspiration is usually measured indirectly, from other climatic factors, but also depends on the surface type, such as free water (for lakes and oceans), the soil type for bare soil, and the vegetation. Often a value for the potential evapotranspiration is calculated at a nearby climate station on a reference surface, conventionally short grass. This value is called the reference evapotranspiration, and can be converted to a potential evapotranspiration by multiplying with a surface coefficient. In agriculture, this is called a crop coefficient. The difference between potential evapotranspiration and precipitation is used in irrigation scheduling. Average annual PET is often compared to average annual precipitation, P. The ratio of the two, P/PET, is the aridity index.

## See also

* Hydrologic Evaluation of Landfill Performance (HELP)
* Soil plant atmosphere continuum
* [http://bosque.unm.edu/~cleverly/index.html New Mexico Eddy Covariance Flux Network (Rio-ET)]
* [http://wwwcimis.water.ca.gov/cimis/welcome.jsp California's Irrigation Management Information System (CIMIS)]
* [http://texaset.tamu.edu/ Texas Evapotranspiration Network]
* [http://www.llansadwrn-wx.co.uk/evap/lysim.html Use and Construction of a Lysimeter to Measure Evapotranspiration]
* [http://ga.water.usgs.gov/edu/watercycleevapotranspiration.html Evapotranspiration, from the U.S. Geological Survey's Water Cycle Web site]
* [http://www.washoeet.dri.edu/ Washoe County (NV) Et Project]
* [http://itrc.org Irrigation and Training Research Center at Cal Poly San Luis Obispo]
* [http://www.ramin.com.au/creekcare/transpiration-benefits-for-urban-catchments-report.shtml Transpiration Benefits For Urban Catchment Management]

Wikimedia Foundation. 2010.
2021-04-14 14:17:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5930930376052856, "perplexity": 11111.938486697843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077818.23/warc/CC-MAIN-20210414125133-20210414155133-00373.warc.gz"}
https://diy.stackexchange.com/questions/130432/how-should-i-connect-my-humidifier-to-hum-terminals/130434
# How should I connect my humidifier to HUM terminals?

Had my system replaced last summer. When they hooked up the humidifier, they just went to the transformer output for power. Now anytime humidity is below the set point, the water runs even if the system is off. Doing some research I found out this unit has HUM terminals. How do I connect my humidifier to the s9v2 HUM terminals? I put a multi-meter across the terminals to see if it's 120v or 24v but just get 0v. This was measured during a call for heat and with no call. Am I measuring correctly? Or is it from one terminal to somewhere else? Model number S9V2B060U3PSAAA Thanks, Scot

• The HUM terminals may just provide contact closure, acting like a switch. If you find no voltage across them in AC or DC mode, then I would switch to Ohms and check to see if it's closed only when the blower is running. – Tyson Jan 8 '18 at 1:40

• There are about a bazillion ways to control a humidifier, so it is difficult to tell from what you have written. Usually with a 24 volt humidifier I would run it through the W terminal so it will only have power when heat is called for. With Trane and American Standard at least, their HUM terminals are 120 volts. I almost never used them. – user76730 Jan 8 '18 at 8:08

The HUM terminal on most furnaces is a 120VAC terminal. If you want to use the HUM terminal to control the humidifier, you're going to have to use a step-down transformer to drop that down to 24VAC. To measure the voltage on the HUM terminal: 1. Set your volt meter to measure AC volts. 2. Place one probe on one HUM terminal. 3. Place the other probe on the other HUM terminal. When the furnace is running in heat mode, you should measure ~120 volts about 1 second after the blower motor starts. "HUM relay closes on any heating call (HP/Gas) approximately 1 second after the blower motor starts" - From manufacturer's literature. According to the manufacturer, the HUM terminal is rated for 1 ampere at 120 volts. You may be able to simply connect the wire from the humidifier to the W terminal in the furnace, instead of directly to the transformer. That way, the humidifier will only get power when the thermostat is calling for heat. However, without knowing more details about your system, it's impossible to say for sure if that will work. Please provide the make and model of the humidifier, and a wiring diagram/photos, if you want a more accurate answer.

Your HUM terminal is probably 24v DC. Check your schematics to confirm. You could then test for current when the furnace is providing heat. If your HUM terminal is rated for enough current to power your humidifier (unless you have an unusual setup, it probably is) and if your humidifier uses 24v DC, you can probably skip using the transformer and wire your humidifier directly to the HUM and common terminals. This worked for me on my furnace. Your mileage may vary. Don't make any changes unless you have confirmed all the details and are confident in your changes.

• HVAC controls are typically 24VAC, not DC. Also, most HUM terminals are line voltage, not low voltage. – Tester101 Jan 8 '18 at 12:15
2019-10-16 11:21:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4797389507293701, "perplexity": 1804.1861012016932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666959.47/warc/CC-MAIN-20191016090425-20191016113925-00201.warc.gz"}
https://www.physicsforums.com/threads/friction-between-table-and-pool-ball.56119/
# Friction between table and pool ball

1. Dec 9, 2004 ### Drey0287 Assume the mass of a pool ball is .17 kg. If the ball traveled 1.2 m in 5.3 seconds, what is the force of friction and the coefficient of friction between the pool ball and the table? Now, all I know how to do is calculate the normal force, which is (9.8)(.17) = .1666, but where do I go from here???

2. Dec 9, 2004 ### dextercioby Hints: Use Newton's second law for the rotational motion (torque is the sum of moments of all forces). From there u should find µ. The normal force u calculated should be ten times bigger. $$I=\frac{3mr^{2}}{5}$$ Daniel.

3. Dec 9, 2004 ### soccerjayl I find the acceleration to be 0.04272 m/s^2. Therefore, can you not find the force of friction through F=ma? F_f = (.17 kg)(0.04272 m/s^2), so µ = F_f / F_N = 0.007262/0.1666 = 0.0436. Check my work, i may be wrong.

4. Dec 9, 2004 ### dextercioby My µ is approximately equal to 0.003. Show your reasoning to depict your mistakes. Daniel.
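For the record, one hedged route to an answer, under the assumption (not stated in the problem) that the ball decelerates uniformly from some initial speed to rest over the 1.2 m: then $$d = \frac{1}{2}at^{2} \Rightarrow a = \frac{2d}{t^{2}} = \frac{2(1.2)}{(5.3)^{2}} \approx 0.085 \ m/s^{2}$$ so the friction force is F = ma ≈ 0.17 × 0.085 ≈ 0.015 N, the normal force is N = mg = 0.17 × 9.8 ≈ 1.67 N (ten times the value in the first post, as noted above), and µ = F/N ≈ 0.009. Treating the ball as a rolling body with a moment of inertia, as dextercioby's hint suggests, changes these numbers.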
2017-01-22 01:57:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6914771199226379, "perplexity": 2248.4758780567527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00140-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/close-to-sturm-liouville-form.381704/
# Close to Sturm-Liouville form

1. Feb 25, 2010 ### kalphakomega Close to Sturm-Liouville form.... I got an O.D.E. down to the form $$f''(x) + (\lambda - 16x^{2})f(x) = 0$$ I omitted some constants to make it look simple. What I'm trying to do is find a function f(x) to normalize. Solving by using roots ended up giving me an exponential function I am unable to solve. However I think if I could convert the above to proper Sturm-Liouville form I might find an alternative expression for y(x) so that I could normalize its square. Any thoughts? I'm not completely competent in the aspects of D.E.s as of yet, so I may have missed a simpler route. Input is greatly appreciated.

2. Feb 25, 2010 ### yungman Re: Close to Sturm-Liouville form.... Have you looked into the modified Bessel or parametric Bessel equations? I don't have the answer; it just looks very similar to one of those.

3. Feb 26, 2010 ### gato_ Re: Close to Sturm-Liouville form.... I don't know what you mean by using roots, but your equation has non-constant coefficients. This is actually a rescaled version of the quantum harmonic oscillator, the solutions of which are given in terms of Hermite functions. Look here for a concise reference: http://www.fisica.net/quantica/quantum_harmonic_oscillator_lecture.pdf

4. Feb 26, 2010 ### kosovtsov Re: Close to Sturm-Liouville form.... The general solution to your ODE is as follows $$f(x) = \frac{1}{\sqrt{x}}[C_1 WhittakerM(\frac{\lambda}{16},\frac{1}{4},4x^2)+C_2 WhittakerW(\frac{\lambda}{16},\frac{1}{4},4x^2)]$$ where C1 and C2 are arbitrary constants.
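For what it's worth, a standard route, assuming solutions normalizable on the whole real line are wanted (as for the quantum harmonic oscillator mentioned above): the substitution $$u = 2x$$ turns the equation into $$\frac{d^{2}f}{du^{2}} + \left(\frac{\lambda}{4} - u^{2}\right)f = 0$$ whose square-integrable solutions exist exactly when $$\frac{\lambda}{4} = 2n+1, \qquad n = 0, 1, 2, \ldots$$ giving $$\lambda = 4(2n+1), \qquad f_{n}(x) = H_{n}(2x)\, e^{-2x^{2}}$$ with H_n the Hermite polynomials; these are then straightforward to normalize.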
2018-09-25 19:25:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5535542964935303, "perplexity": 794.0741905154348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267162385.83/warc/CC-MAIN-20180925182856-20180925203256-00303.warc.gz"}
https://stats.stackexchange.com/questions/252222/logistic-regression-for-non-binary-classification-multi-class-in-r
# Logistic Regression for non-binary classification (multi-class) in R

I am trying to use glm(family = binomial(link = 'logit')) for a classification task with 14 classes. I know that logistic regression is used in R for binary classification and as a result it outputs the probabilities for the predicted value being either 0 or 1. But is it possible to also use it for a non-binary classification task? I have 14 classes and 93 features in my dataset. This is how I have written it, and of course it does not work, because this is the approach that I use when I only have two classes: log.model <- glm(fold1$class ~ . - id, data=fold1, family = binomial(link = 'logit')) predict.glm(log.model, newdata=fold1.test.set, type = "response")

• You have tagged your question with multinomial-logit, which is what you are looking for. Perhaps revising some of the questions and answers there might help you? – mdewey Dec 18 '16 at 14:40
• @mdewey: well, it was me doing the re-tagging, so that people following the relevant tag can see the post ... – kjetil b halvorsen Dec 18 '16 at 15:17
• @kjetilbhalvorsen mystery solved, I did wonder why the OP had tagged it. – mdewey Dec 18 '16 at 15:52

As you note, glm won't do it: the family=binomial part implies two-way, not multi-way. To look through packages you already have installed, try ??multinomial and look through the results. Among others, the nnet package has multinom, and there are several Bayesian R packages that support multinomial logistic regression, including brms. (You can also do searches like ??"multinomial logistic" or ??"ordinal logistic".) For packages you don't have installed, search on CRAN.

• Great answer. I just tried multinom and it worked perfectly, but the misclassification error is disastrous: equal to 82% – l.. Dec 18 '16 at 19:25
2019-10-20 17:46:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4357607960700989, "perplexity": 1520.056500133706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986717235.56/warc/CC-MAIN-20191020160500-20191020184000-00299.warc.gz"}
https://stites.io/posts/2016-09-30-tracing-in-haskell.html
Today I learned about traceM and traceShowM. First off: wow! Debugging is so much cleaner now! Secondly, this is important because with trace you are at the mercy of lazy evaluation - so if you are running things monadically, you now have the ability to guarantee more ordering of your statements. Talk about cool.
2018-10-23 10:48:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5164843797683716, "perplexity": 1771.3554149291008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516123.97/warc/CC-MAIN-20181023090235-20181023111735-00077.warc.gz"}
https://www.taylorfrancis.com/chapters/mono/10.1201/b11047-8/overview-fuel-ethanol-production-distillers-grains-keshun-liu-kurt-rosentrater
## ABSTRACT

Modern societies face many challenges, including growing populations, increased demands for food, clothing, housing, consumer goods, and the concomitant raw materials required to produce all of these. Additionally, there is a growing need for energy, which is most easily met by use of fossil fuels (e.g., coal, natural gas, and petroleum). In 2008, the overall U.S. demand for energy was 99.3 × 10^15 Btu (1.05 × 10^14 MJ); 84% of this was supplied by fossil sources (U.S. EIA, 2009). Transportation fuels accounted for 28% of all energy consumed during this time, and nearly 97% of this came from fossil sources. Domestic production of crude oil was 4.96 million barrels per day, whereas imports were 9.76 million barrels per day (nearly two-thirds of the total U.S. demand) (U.S. EIA, 2009). Many argue that this scenario is not sustainable in the long term, for a variety of reasons (such as the need for energy independence and global warming), and other alternatives are needed.
2022-06-27 08:59:35
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.888272225856781, "perplexity": 2748.425061384511}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103329963.19/warc/CC-MAIN-20220627073417-20220627103417-00004.warc.gz"}
https://www.vedantu.com/question-answer/find-the-equation-of-the-locus-of-a-point-which-class-10-maths-cbse-5f5e6b238f2fe24918c2457f
Question

# Find the equation of the locus of a point which moves such that the ratio of its distances from $(2,0)$ and $(1,3)$ is $5:4$.

Hint: The locus of a point is the set of all points that have the same property, or we can say the set of points that behave in the same way. The third theorem of locus states that the locus of points equidistant from two points P and Q is the perpendicular bisector of the line segment joining the two points. The distance between two points S and T is given by the distance formula $D = \sqrt {{{\left( {{x_T} - {x_S}} \right)}^2} + {{\left( {{y_T} - {y_S}} \right)}^2}}$. In this question, the locus of a point is required such that the ratio of its distances from $(2,0)$ and $(1,3)$ is $5:4$; consider a general point $M(x,y)$ with this property and use the distance formula accordingly.

Let us consider a point $M(x,y)$ whose distances from the points $A(2,0)$ and $B(1,3)$ are in the ratio $5:4$. Now find the distance of point M from point A and from point B by using the distance formula: ${d_{MA}} = \sqrt {{{\left( {{x_M} - {x_A}} \right)}^2} + {{\left( {{y_M} - {y_A}} \right)}^2}} = \sqrt {{{\left( {x - 2} \right)}^2} + {{\left( {y - 0} \right)}^2}} = \sqrt {{{\left( {x - 2} \right)}^2} + {y^2}}$ And the distance from B: ${d_{MB}} = \sqrt {{{\left( {{x_M} - {x_B}} \right)}^2} + {{\left( {{y_M} - {y_B}} \right)}^2}} = \sqrt {{{\left( {x - 1} \right)}^2} + {{\left( {y - 3} \right)}^2}}$ Given that the distances of $M(x,y)$ from A and B are in the ratio $5:4$, we can write $\dfrac{{{d_{MA}}}}{{{d_{MB}}}} = \dfrac{5}{4}$ That is, $\dfrac{{\sqrt {{{\left( {x - 2} \right)}^2} + {y^2}} }}{{\sqrt {{{\left( {x - 1} \right)}^2} + {{\left( {y - 3} \right)}^2}} }} = \dfrac{5}{4}$ Squaring both sides, $\dfrac{{{{\left( {x - 2} \right)}^2} + {y^2}}}{{{{\left( {x - 1} \right)}^2} + {{\left( {y - 3} \right)}^2}}} = \dfrac{{25}}{{16}}$ Now use cross multiplication to solve the equation further; hence we can write $16\left\{ {{{\left( {x - 2} \right)}^2} + {y^2}} \right\} = 25\left\{ {{{\left( {x - 1} \right)}^2} + {{\left( {y - 3} \right)}^2}} \right\}$ $16\left\{ {{x^2} - 4x + 4 + {y^2}} \right\} = 25\left\{ {{x^2} - 2x + 1 + {y^2} - 6y + 9} \right\}$ $16{x^2} - 64x + 16{y^2} + 64 = 25{x^2} - 50x + 25 + 25{y^2} - 150y + 225$ $9{x^2} + 9{y^2} + 14x - 150y + 186 = 0$ Hence we get the equation of the locus of the point $M(x,y)$: $9{x^2} + 9{y^2} + 14x - 150y + 186 = 0$

Note: A locus can be a set of points, lines, line segments, curves, or surfaces which satisfy one or more properties. If the required ratio of the distances from the two points is $1:1$, then the locus is equidistant from both points, i.e., it is the perpendicular bisector of the line segment joining them.
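As a quick check (not part of the original solution): since the given ratio is not $1:1$, the locus is a circle of Apollonius. Dividing the final equation by 9 gives $x^{2} + y^{2} + \dfrac{14}{9}x - \dfrac{50}{3}y + \dfrac{62}{3} = 0$, a circle with centre $\left( -\dfrac{7}{9},\dfrac{25}{3} \right)$ and radius $\dfrac{20\sqrt{10}}{9} \approx 7.0$, so the equation does describe a genuine circle.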
2021-11-28 21:28:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8281091451644897, "perplexity": 178.7925483262111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358591.95/warc/CC-MAIN-20211128194436-20211128224436-00082.warc.gz"}
https://eccc.weizmann.ac.il/report/2022/015/
Paper: TR22-015 | 12th February 2022 18:21

Lower Bounds for Unambiguous Automata via Communication Complexity

Authors: Mika Göös, Stefan Kiefer, Weiqiang Yuan
Publication: 12th February 2022 18:29

Abstract: We use results from communication complexity, both new and old ones, to prove lower bounds for unambiguous finite automata (UFAs). We show three results. $\textbf{Complement:}$ There is a language $L$ recognised by an $n$-state UFA such that the complement language $\overline{L}$ requires NFAs with $n^{\tilde{\Omega}(\log n)}$ states. This improves on a lower bound by Raskin. $\textbf{Union:}$ There are languages $L_1$, $L_2$ recognised by $n$-state UFAs such that the union $L_1\cup L_2$ requires UFAs with $n^{\tilde{\Omega}(\log n)}$ states. $\textbf{Separation:}$ There is a language $L$ such that both $L$ and $\overline{L}$ are recognised by $n$-state NFAs but such that $L$ requires UFAs with $n^{\Omega(\log n)}$ states. This refutes a conjecture by Colcombet.
2022-05-17 17:31:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7474587559700012, "perplexity": 2611.01461743896}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662519037.11/warc/CC-MAIN-20220517162558-20220517192558-00236.warc.gz"}
http://bactra.org/notebooks/random-fields.html
Notebooks ## Random Fields 10 Feb 2018 18:21 Stochastic processes where the index variable is space, or something space-like. (Formally, one-dimensional space works a lot like time; and space-time works a lot like a higher-dimensional space, though not always.) This is of course important for modeling spatial and spatio-temporal data, but also data on networks, and statistical mechanics. Recommended, general: • Pierre Brémaud, Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues • Carlo Gaetan and Xavier Guyon, Spatial Statistics and Modeling [Mini-review] • Xavier Guyon, Random Fields on a Network • Peter Guttorp, Stochastic Modeling of Scientific Data • Brian D. Ripley, Statistical Inference for Spatial Processes • Rinaldo B. Schinazi, Classical and Spatial Stochastic Processes Recommended, of more specialized interest: • J.-R. Chazottes, P. Collet, C. Kuelske and F. Redig, "Deviation inequalities via coupling for stochastic processes and random fields", math.PR/0503483 • Jérôme Dedecker, Paul Doukhan, Gabriel Lang, José Rafael León R., Sana Louhichi and Clémentine Prieur, Weak Dependence: With Examples and Applications • David Griffeath, "Introduction to Markov Random Fields", ch. 12 in Kemeny, Knapp and Snell, Denumerable Markov Chains [One of the proofs of the equivalence between the Markov property and having a Gibbs distribution, conventionally but misleadingly called the Hammersley-Clifford Theorem. Pollard, below, provides an on-line summary.] • David Pollard, "Markov random fields and Gibbs distributions" [Online PDF. A proof of the theorem linking Markov random fields to Gibbs distributions, following the approach of David Griffeath.] • Jan Ambjorn et al., Quantum Geometry: A Statistical Field Theory Approach [I am interested in the stuff about random surfaces.] • Alexei Andreanov, Giulio Biroli, Jean-Philippe Bouchaud, and Alexandre Lefèvre, "Field theories and exact stochastic equations for interacting particle systems", Physical Review E 74 (2006): 030101 = cond-mat/0602307 • K. Bahlali, M. Eddahbi and M. Mellouk, "Stability and genericity for SPDEs driven by spatially correlated noise", math.PR/0610174 • Raluca M. Balan, "A strong invariance principle for associated random fields", Annals of Probability 33 (2005): 823--840 = math.OR/0503661 • M. S. Bartlett, "Physical Nearest-Neighbour Models and Non-Linear Time Series", Journal of Applied Probability 8 (1971): 222--232 [JSTOR] • Michel Bauer, Denis Bernard, "2D growth processes: SLE and Loewner chains", math-ph/0602049 • Denis Belomestny, Vladimir Spokoiny, "Concentration inequalities for smooth random fields", arxiv:1307.1565 • Anton Bovier, Statistical Mechanics of Disordered Systems • Alexander Bulinski and Alexey Shashkin, "Strong invariance principle for dependent random fields", math.PR/0608237 • M. Cassandro, A. Galves and E. Löcherbach, "Partially Observed Markov Random Fields Are Variable Neighborhood Random Fields", Journal of Statistical Physics 147 (2012): 795--807, arxiv:1111.1177 • Ruslan K. Chornei, Hans Daduna, and Pavel S. Knopov • Giuseppe Da Prato, Arnaud Debussche and Luciano Tubaro, "Coupling for some partial differential equations driven by white noise", math.AP/0410441 • Jean-Dominique Deuschel and Andreas Greven (eds.), Interacting Stochastic Systems [This looks deeply cool] • Rick Durrett, Stochastic Spatial Models: A Hyper-Tutorial • Vlad Elgart and Alex Kamenev, "Rare Events Statistics in Reaction--Diffusion Systems", cond-mat/0404241 [i.e., large deviations] • H.
Follmer, "On entropy and information gain in random fields", Z. Wahrsh. verw. Geb. 26 (1973): 207--217 • T. Funaki, D. Surgailis and W. A. Woyczynski, "Gibbs-Cox Random Fields and Burgers Turbulence", Annals of Applied Probability 5 (1995): 461--492 • L. Garcia-Ojalvo and J. Sancho, Noise in Spatially Extended Systems • Geoffrey Grimmett, Probability on Graphs: Random Processes on Graphs and Lattices • B. M. Gurevich and A. A. Tempelman, "Markov approximation of homogeneous lattice random fields", Probability Theory and Related Fields 131 (2005): 519--527 • Allan Gut and Ulrich Stadtmuller, "Cesaro Summation for Random Fields", Journal of Theoretical Probability 23 (2010): 715--728 • Reza Hosseini, "Conditional information and definition of neighbor in categorical random fields", arxiv:1101.0255 ["Who then is my neighbor?" (Not an actual quote from the paper.)] • Xiangping Hu, Daniel Simpson, Finn Lindgren, Havard Rue, "Multivariate Gaussian Random Fields Using Systems of Stochastic Partial Differential Equations", arxiv:1307.1379 • Niels Jacob and Alexander Potrykus, "Some thoughts on multiparameter stochastic processes", math.PR/0607744 • Mark Kaiser, "Statistical Dependence in Markov Random Field Models" [abstract, preprint] • Wolfgang Karcher, Elena Shmileva, Evgeny Spodarev, "Extrapolation of stable random fields", arxiv:1107.1654 • M. Kerscher, "Constructing, characterizing, and simulating Gaussian and higher-order point distributions," astro-ph/0102153 • Ross Kindermann and J. Laurie Snell, Markov Random Fields and Their Applications [Free online!] • P. Kotelenez, Stochastic Space-Time Models and Limit Theorems • Michael A. Kouritzin and Hongwei Long, "Convergence of Markov chain approximations to stochastic reaction-diffusion equations", Annals of Applied Probability 12 (2002): 1039--1070 • Jean-Francois Le Gall, Spatial Branching Processes, Random Snakes and Partial Differential Equations • Atul Mallik, Michael Woodroofe, "A Central Limit Theorem For Linear Random Fields", arxiv:1007.1490 • Jonathan C. Mattingly, "On Recent Progress for the Stochastic Navier Stokes Equations", math.PR/0409194 ["We give an overview of the ideas central to some recent developments in the ergodic theory of the stochastically forced Navier Stokes equations and other dissipative stochastic partial differential equations."] • A. I. Olemskoi, D. O. Kahrchenko and I. A. Knyaz', "Phase transitions induced by noise cross-correlations", cond-mat/0403583 • Rupert Paget, "Strong Markov Random Field Model", IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (2004): 408--413 • Marcelo Pereyra, Nicolas Dobigeon, Hadj Batatia, Jean-Yves Tourneret, "Computing the Cramer-Rao bound of Markov random field parameters: Application to the Ising and the Potts models", arxiv:1206.3985 • Liang Qiao, Radek Erban, C. T. Kelley and Ioannis G. Kevrekidis, "Spatially Distributed Stochastic Systems: equation-free and equation-assisted preconditioned computation", q-bio.QM/0606006 • Havard Rue and Leonhard Held, Gaussian Markov Random Fields: Theory and Applications • Jeffrey E. Steif, "Consistent estimation of joint distributions for sufficiently mixing random fields", Annals of Statistics 25 (1997): 293--304 [Extension of the Marton-Shields result to random fields in higher dimensions] • Andre Toom, "Law of Large Numbers for Non-Local Functions of Probabilistic Cellular Automata", Journal of Statistical Physics 133 (2008): 883--897 • M. N. M. 
van Lieshout, "Markovianity in space and time", math.PR/0608242 • Divyanshu Vats and Jose M. F. Moura, "Telescoping Recursive Representations and Estimation of Gauss-Markov Random Fields", arxiv:0907.5397
2018-08-21 13:34:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5249156951904297, "perplexity": 14656.01454182771}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221218189.86/warc/CC-MAIN-20180821132121-20180821152121-00454.warc.gz"}
https://www.imrpress.com/journal/CEOG/46/6/10.12891/ceog5022.2019/htm
Open Access | Original Research

Artificial neural network models for prediction of premature ovarian failure

Y. Wu 1,*, L. Tong 2

1 Huarun Wuhan Iron and Steel General Hospital of Wuhan University of Science and Technology, Wuhan, China
2 School of Nursing, Wuchang University of Technology, Wuhan, China

Clin. Exp. Obstet. Gynecol. 2019, 46(6), 958–963; https://doi.org/10.12891/ceog5022.2019

Published: 10 December 2019

Abstract

Aim: The aim of this study is to develop and optimize artificial neural network models for accurate prediction of premature ovarian failure (POF), and to test these models on data collected prospectively from different centres. Materials and Methods: The study used data from 316 women presenting to six communities governed by a street in Wuhan, Hubei, China. Unbiased randomization divided them into training samples (177 cases), test samples (44 cases), and adherence samples (95 cases). Data from the training samples and test samples were used to train the models, which were then tested on independent data from the adherence samples. From 35 potential factors, variables were selected by the Analytic Hierarchy Process (AHP) and then used in the ANN model to make the prediction. Results: The prediction accuracies on the training set, validation set, and test set were 98.73%, 94.15%, and 92.15%, respectively, when the generalization ability was verified. Conclusion: This study confirms that artificial neural networks can offer a useful approach for developing diagnostic algorithms for POF prediction.

Keywords: Premature ovarian failure; Analytic hierarchy process; Artificial neural network; Prediction model

Introduction

Premature ovarian failure (POF) is a common reproductive endocrine disease. Immediate and long-term complications such as fertility loss, osteoporosis, genital atrophy, cardiovascular diseases, and metabolic diseases weigh heavily on every patient and family [1]. The factors that affect POF are numerous and complex, so many predictors need to be considered. The relationships between the different prediction parameters of POF are non-linear, so traditional methods cannot accurately describe the correlations between parameters, and their POF predictions deviate widely [2]. Consequently, an effective prediction model for POF has become a key research direction for the relevant scholars. In recent years, with the rapid development of information technology, prediction models of premature ovarian failure based on multiple linear regression, support vector machines, and neural networks have emerged [3]. Multiple linear regression is based on a linear relationship and approximates non-linear problems poorly. Support vector machines are suited to small samples, but training time is long and operating efficiency is low when the sample is large [4]. The artificial neural network, by contrast, is non-linear and capable of real-time optimization and intelligent learning, and it has become the main method for building prediction models of premature ovarian failure. In view of the above problems, this paper proposes a prediction method for premature ovarian failure based on the AHP method and the BP Neural Network (BPNN) method.
All predictors of premature ovarian failure can be analysed by the AHP and the weights of the influencing factors calculated. According to these weights, the predictors of premature ovarian failure with the higher impact are selected, and these indexes are taken as the input parameters of the BPNN. After training the neural network model on clinical data, we can evaluate the accuracy of premature ovarian failure prediction. The experimental results demonstrate the highly accurate evaluation performance of the proposed method.

Materials and Methods

There were 316 women who met the inclusion criteria; they were recruited in six communities governed by a street in Wuhan. Inclusion criteria: (1) the age range was from 35 to 40 years; (2) in the past three months, there was no history of hormone treatment, or of pregnancy, abortion, or lactation; (3) the uterus was intact with at least one ovary, and no abnormalities were found by B-mode ultrasound; (4) they had lived in the six communities for more than one year; (5) there was a clear and spontaneous end of the menstrual period; (6) they had good compliance and were willing to participate in this research. Exclusion criteria: (1) severe complications, such as malignant tumor, renal failure, etc.; (2) endocrine diseases such as polycystic ovary syndrome (PCOS), diabetes, and thyroid and breast disorders. All the participants signed the informed consent. They all completed the relevant questionnaire survey (type A behavior, history of mumps, history of gynecologic surgery, history of using ovulation drugs, and history of marriage and childbearing). They underwent an endocrinological examination and an ultrasound examination on the third day of menstruation. From May 2012 to June 2017, the subjects were visited once every four months and followed up until the age of 40. The follow-up was used to identify whether the participants had POF, covering menstrual status, female hormones, special medication, gynecologic surgery, etc. Participants were excluded if, during the follow-up period, they received hormone or psychotropic medicine, or experienced pregnancy, abortion, or lactation. In the end, two participants underwent hysterectomy, two participants had sex hormone therapy, and 21 participants were lost to follow-up; all of these were eliminated. A total of 316 participants were included in the study.

The first step in building a prediction model for POF is to establish a corresponding set of prediction parameters [5]. The prediction of POF is influenced by many factors, such as risk factors, biochemistry, and ultrasound. In this paper, through systematic analysis and expert review, and with reference to the related literature and research, four groups of prediction criteria are used: risk factors, biochemical indexes, B-ultrasound indexes, and test indexes; each evaluation criterion contains multiple sub-indexes. The analytic hierarchy process is used to establish the prediction index system of POF as shown in Figure 1. The relative importance of prediction indexes at the same level is calculated, and the comprehensive weights are obtained by AHP. The key step of the AHP is to construct the index judgment matrix. In order to reduce the influence of subjective factors, the predictors of POF are compared pairwise to construct the judgment matrix A.
The value of each element of matrix A indicates the relative importance of one predictive indicator of POF compared with another; both clinicians' and experts' scores were used to assign the element values. The assignment criteria for the elements of matrix A are shown in Table 1.

Figure 1. — Prediction index system of POF.

Table 1. — Evaluation criteria of elements in matrix A.
Assignment (Wi/Wj)   Explanation
1                    The two indicators have the same importance
3                    Index Vi is slightly more important than Vj
5                    Index Vi is obviously more important than Vj
7                    Index Vi is strongly more important than Vj
9                    Index Vi is extremely more important than Vj

From the judgment matrix, the weights W are obtained from $AW = \lambda_{max}W$. A normalization is then carried out to calculate the relative importance weights with respect to the upper levels, and a consistency test of the judgment matrix is performed, proceeding from the higher to the lower levels. Finally, the forecast indexes are sorted according to weight.
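To make the weight-extraction step concrete, here is a minimal C++ sketch of the eigenvector computation $AW = \lambda_{max}W$ by power iteration, together with the consistency check. The 3×3 judgment matrix is an invented illustration; the paper's actual matrices are not given.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 3;
    // Illustrative reciprocal pairwise-comparison matrix (Saaty-style scores).
    double A[n][n] = {{1.0,     3.0, 5.0},
                      {1.0/3.0, 1.0, 2.0},
                      {1.0/5.0, 0.5, 1.0}};
    std::vector<double> w(n, 1.0 / n);  // initial weight guess
    double lambda = 0.0;

    for (int it = 0; it < 100; ++it) {
        std::vector<double> v(n, 0.0);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                v[i] += A[i][j] * w[j];               // v = A w
        lambda = 0.0;
        for (int i = 0; i < n; ++i) lambda += v[i] / w[i];
        lambda /= n;                                   // estimate of lambda_max
        double s = 0.0;
        for (double x : v) s += x;
        for (int i = 0; i < n; ++i) w[i] = v[i] / s;   // renormalise: weights sum to 1
    }

    // Consistency: CI = (lambda_max - n)/(n - 1); CR = CI/RI with RI = 0.58 for n = 3.
    double CI = (lambda - n) / (n - 1), CR = CI / 0.58;
    for (int i = 0; i < n; ++i) std::printf("w[%d] = %.4f\n", i, w[i]);
    std::printf("lambda_max = %.4f, CR = %.4f (%s)\n",
                lambda, CR, CR < 0.1 ? "acceptable" : "revise matrix");
}
```

When CR exceeds 0.1, the judgment matrix is normally revised and rescored, which is the consistency test described above.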
The authors sought to exploit the sophisticated pattern-recognition capabilities of artificial neural networks for the analysis of clinical data from patients presenting with POF. The networks used in this study were BP neural networks, which have been used extensively in medical pattern-classification tasks and are among the most popular network types. A BPNN consists of an input layer, one or more hidden layers, and an output layer. The error back-propagation (BP) algorithm is a typical supervised learning algorithm: the weights and thresholds of each layer are adjusted through forward propagation and the back-propagation of error, and this weight-adjustment process is the learning and training process of the BP neural network. The cycle of reducing the output error is repeated until a termination condition is reached.

The topology of the neural network is determined by the number of layers, the number of nodes in each layer, and the connections between nodes [6]. A three-layer BP neural network can realize any mapping from n dimensions to m dimensions; that is, a single hidden layer is sufficient to model any nonlinear relationship. A three-layer network is therefore created from the AHP-based prediction index system of POF, as shown in Figure 2.

Figure 2. — Neural network prediction model.

The seven highest-weighted prediction indexes in Table 2 are used as the inputs to the network, so the number of input nodes is seven and the output node is set to one. The number of hidden-layer nodes is determined by an empirical formula and preliminary experiments, and is finally set to six.

Table 2. — Predictors of premature ovarian failure and their weights.
Factor                                        Weight
Type A behavior                               0.4206*
History of mumps                              0.1205
History of gynecological surgery              0.4185*
History of use of ovulatory drugs             0.2053
Obstetrical history                           0.1854
Follicle stimulating hormone (FSH)            0.3453*
FSH / luteinizing hormone ratio               0.2293
Anti-Müllerian hormone (AMH)                  0.5448*
Inhibin B (INHB)                              0.4878*
Antral follicle count (AFC)                   0.4556*
Peak systolic velocity                        0.2843
Resistance index                              0.2214
Clomiphene stimulation test                   0.3665*
Exogenous FSH stimulation test                0.1604
*The seven highest-weighted prediction indicators.

The process of predicting POF is as follows:

Step 1: The input layer is set to $X_k = (x_1, x_2, \ldots, x_n)$, where $x_i$ represents a prediction-related index. All weights and neuron thresholds are initialised with random numbers distributed over (0, 1).

Step 2: The corresponding output layer is set to $Y = y$.

Step 3: The input to each hidden-layer unit is

(1) $S_j = \sum_{i=1}^{k} w_{ij} x_i - \theta_j, \quad j = 1, 2, \ldots, p$

where $w_{ij}$ is the connection weight from the input layer to the hidden layer, $\theta_j$ is the threshold of hidden unit $j$, and $p$ is the number of hidden units. The activation function is the sigmoid $f(x) = 1/(1+e^{-x})$, so the output of hidden unit $j$ is

(2) $b_j = 1 / [1 + \exp(\theta_j - \sum_{i=1}^{k} w_{ij} x_i)]$

Step 4: The input to output unit $t$ is

(3) $L_t = \sum_{j=1}^{p} v_{jt} b_j - \gamma_t$

and its output is

(4) $Y_t = 1 / [1 + \exp(\gamma_t - \sum_{j=1}^{p} v_{jt} b_j)]$

where $v_{jt}$ is the connection weight from the hidden layer to the output layer and $\gamma_t$ is the threshold of output unit $t$. Steps 1 through 4 constitute the forward propagation of the model; equation (4) is the final constructed prediction model. During error back-propagation, the network adjusts the thresholds $\theta_j$, $\gamma_t$ and the connection weights $w_{ij}$, $v_{jt}$ so as to continuously reduce the error to within the required accuracy.

Step 5: Weight correction proceeds recursively from the output layer back to the hidden layer:

(5) $w_{ij}(T+1) = w_{ij}(T) + \eta \delta_j y_i + \alpha [w_{ij}(T) - w_{ij}(T-1)]$

where $w_{ij}(T)$ is the connection weight from neuron $i$ (an input- or hidden-layer neuron) to upper-layer neuron $j$ (a hidden- or output-layer neuron) at time $T$, $y_i$ is the actual output of neuron $i$ at time $T$, $\eta$ is the step-adjustment factor ($0 < \eta < 1$), $\alpha$ is the smoothing (momentum) factor ($0 < \alpha < 1$), and $\delta_j$ is an error-related term. For a hidden node,

(6) $\delta_j = x_j (1 - x_j) \sum_{k} \delta_k w_{jk}$

where $x_j$ is the actual output of hidden node $j$; for an output node,

(7) $\delta_k = x_k (1 - x_k)(t_k - x_k)$

where $t_k$ is the expected output value. The above steps are cycled until the weights are stable. At that point the error between the network's predicted value for premature ovarian failure and the expected output is sufficiently small, and the prediction result is output.
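A compact, runnable C++ sketch of the training loop defined by equations (1)-(7) follows. The momentum term $\alpha$ of equation (5) is omitted for brevity, and the single training sample and target are placeholders, not the study's data.

```cpp
#include <cmath>
#include <cstdio>
#include <cstdlib>

const int NI = 7, NH = 6, NO = 1;  // 7 inputs, 6 hidden units, 1 output, as above
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }
double rnd() { return (double)std::rand() / RAND_MAX; }  // uniform on (0,1), Step 1

int main() {
    std::srand(42);
    double w[NI][NH], th[NH];  // input->hidden weights and hidden thresholds
    double v[NH][NO], gm[NO];  // hidden->output weights and output thresholds
    for (int j = 0; j < NH; ++j) {
        th[j] = rnd();
        for (int i = 0; i < NI; ++i) w[i][j] = rnd();
        for (int t = 0; t < NO; ++t) v[j][t] = rnd();
    }
    for (int t = 0; t < NO; ++t) gm[t] = rnd();

    double x[NI] = {0.1, 0.5, 0.3, 0.9, 0.2, 0.7, 0.4};  // one normalised sample
    double target = 1.0;                                  // e.g. 1 = POF, 0 = no POF
    double eta = 0.5;                                     // step factor, 0 < eta < 1

    for (int epoch = 0; epoch < 5000; ++epoch) {
        // Forward pass, equations (1)-(4).
        double b[NH], y[NO];
        for (int j = 0; j < NH; ++j) {
            double s = -th[j];
            for (int i = 0; i < NI; ++i) s += w[i][j] * x[i];
            b[j] = sigmoid(s);
        }
        for (int t = 0; t < NO; ++t) {
            double L = -gm[t];
            for (int j = 0; j < NH; ++j) L += v[j][t] * b[j];
            y[t] = sigmoid(L);
        }
        // Back-propagated deltas, equations (7) then (6).
        double dOut[NO], dHid[NH];
        for (int t = 0; t < NO; ++t) dOut[t] = y[t] * (1 - y[t]) * (target - y[t]);
        for (int j = 0; j < NH; ++j) {
            double s = 0.0;
            for (int t = 0; t < NO; ++t) s += dOut[t] * v[j][t];
            dHid[j] = b[j] * (1 - b[j]) * s;
        }
        // Updates, equation (5) without the momentum term.
        for (int j = 0; j < NH; ++j) {
            for (int t = 0; t < NO; ++t) v[j][t] += eta * dOut[t] * b[j];
            for (int i = 0; i < NI; ++i) w[i][j] += eta * dHid[j] * x[i];
            th[j] -= eta * dHid[j];
        }
        for (int t = 0; t < NO; ++t) gm[t] -= eta * dOut[t];
        if (epoch % 1000 == 0) std::printf("epoch %4d  output %.4f\n", epoch, y[0]);
    }
}
```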
Results

In order to test the performance of the AHP-BPNN POF prediction model, a simulation experiment was carried out on the clinical data platform. The experimental data came from the Wuhan clinical medical scientific research project (WX15D15), "Research on prediction model of premature ovarian failure based on artificial neural network"; a total of 316 records were collected, each with 14 evaluation indicators. Some of the data are shown in Table 3.

Table 3. — Data on predictors of premature ovarian failure.
Number  X1    X2      X3  X4  …  X14    Y
1       0.02  14.49   0   T   …  78.71  0.87
2       0.36  19.63   0   T   …  75.95  0.75
3       0.33  14.05   0   T   …  56.82  0.64
4       0.13  13.38   0   F   …  91.57  0.60
5       2.90  89.40   3   T   …  15.40  0.50
6       2.52  69.71   3   F   …  19.60  0.32
7       3.90  88.30   6   T   …  6.69   0.86
8       5.83  106.39  10  F   …  8.29   0.84
9       6.32  105.29  9   F   …  8.52   0.16
…
316     8.16  92.93   9   F   …  8.58   0.83

Sample values that are too large or too small increase the computational complexity and the length of training. To this end, each index is normalized onto the closed interval [0, 1] as follows:

$$X_i' = \frac{X_i - X_{i,min}}{X_{i,max} - X_{i,min}}$$

where $X_i$ represents the $i$th index, $X_{i,min}$ and $X_{i,max}$ represent the minimum and maximum values of the $i$th index respectively, and $X_i'$ represents the normalized value.

The AHP is adopted to obtain the weights of the predictive indexes. Seven indicators are finally retained: AMH, INHB, AFC, type A behavior, history of gynecological surgery, the clomiphene provocation test, and FSH. The data for these seven indexes are then processed to obtain a new data set. The data are divided into two parts: 263 records are selected as the training sample set, and the remaining 53 records are used as the test sample set. The training samples are used as the input of the BPNN for training; the specific training process of the BP neural network is shown in Figure 3. The test set is then assessed with the trained optimal POF prediction model; the actual outputs and model outputs are shown in Figure 4. The correlation coefficient between the two types of output is 0.9450, and the prediction accuracy is 94.73%. The results show the effectiveness and feasibility of the proposed method for predicting POF.

Figure 3. — The learning process of the BP neural network.
Figure 4. — AHP-BPNN real output and model output correlation change curve.

In order to further test the generalization ability of AHP-BPNN, the 316 records are divided into a training set, a validation set, and a test set of 210, 84, and 22 samples, respectively. First, the training set is input into the BPNN for learning. Then the validation set is used to demonstrate the validity of the proposed model. Finally, the optimal model is selected to examine the test set. The predicting accuracy on the training set, the validation set, and the test set is 98.73%, 94.15%, and 92.15%, respectively. The high predicting accuracy on the test set indicates that the generalization ability of the AHP-BPNN model is good and that it can avoid the problem of overfitting.
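The min-max scaling above is trivial to implement; here is a sketch in C++ using a few of the X2 values from Table 3 (each index column is scaled independently):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // A few X2 values from Table 3.
    std::vector<double> col = {14.49, 19.63, 14.05, 13.38, 89.40, 69.71, 88.30, 106.39};
    double lo = *std::min_element(col.begin(), col.end());
    double hi = *std::max_element(col.begin(), col.end());
    for (double &xv : col)
        xv = (xv - lo) / (hi - lo);   // X' = (X - Xmin) / (Xmax - Xmin)
    for (double xv : col) std::printf("%.4f ", xv);
    std::printf("\n");
}
```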
Discussion

In this study, the important predictors of POF screened out by the AHP were AMH, INHB, AFC, type A behavior, history of gynecological surgery, the clomiphene citrate challenge test, and FSH.

Both AMH and INHB are members of the transforming growth factor β superfamily. The level of the former is independent of the menstrual cycle; it is secreted by the granulosa cells of antral and sinusoidal follicles and is positively correlated with the number of retrieved eggs and with ovarian reactivity [7]. The level of the latter begins to rise in the follicular phase, peaks around ovulation, and gradually declines in the luteal phase; it is secreted by granulosa cells, provides feedback inhibition of pituitary FSH secretion, and directly reflects ovarian reserve function. The antral follicle count is the total number of sinusoidal follicles (2-10 mm in diameter) in both ovaries measured by ultrasound.

Sinusoidal follicles are highly sensitive and responsive to FSH and correlate strongly with INHB and AMH, so AFC is often used clinically to reflect the remaining follicular pool of the ovary. As ovarian function declines, the decrease in the number of primordial follicles parallels the decrease in the number of FSH-sensitive sinusoidal follicles, so AFC can reflect the state of ovarian reserve [8]. Chen et al. proposed critical AFC values for predicting ovarian dysfunction: AFC ≤ 7 (any age) or AFC ≤ 10 (38 years or older) [9]. It is noteworthy that although AFC gradually decreases with age and is closely related to the ages of sterility and menopause, only a small range of AFC values supports effective clinical prediction [10], and the measurement is relatively subjective, so AFC has certain limitations as a predictor [11].

People with type A behavior are subject to fluctuating emotions and cannot relax even at rest; they lack patience, have a strong sense of time, and do things quickly. Depression, disharmony with family members, and the type A behavioral response pattern interfere with the hypothalamic-pituitary-ovarian axis, producing negative conditioned reflexes, abnormal secretion of FSH, LH, and estrogen (E2), changes in the menstrual cycle, and eventually amenorrhea.

Many studies at home and abroad show that different gynecological procedures and approaches, the intraoperative resection range and area, and the mode of hemostasis all affect ovarian function, the patient's psychology, and postoperative quality of life differently [12]. In recent years, because laparoscopic techniques avoid laparotomy and offer less trauma, quicker recovery, and less postoperative pain, their use in ovarian surgery has gradually increased [13]. Electrosurgical instruments are applied during laparoscopic surgery, and electric heating may damage ovarian tissue or affect ovarian blood flow, thereby affecting ovarian function [14]. Gynecological surgery, especially laparoscopic surgery, therefore requires a correct understanding of electrosurgical equipment, strict control of the surgical method and indications, and selection of an appropriate means of hemostasis, so as to reduce the impact on ovarian reserve and achieve an approach that is minimally invasive not only at the surface but also with respect to ovarian function [15].

Clomiphene, an estrogen receptor antagonist, competes directly with estrogen for receptors in the hypothalamus and pituitary and blocks the inhibitory effect of estrogen on FSH. On days 5-9 of the menstrual cycle, a daily dose of 100 mg of clomiphene is given and the change in FSH level is monitored. If, after withdrawal, the E2 and INHB secreted by the growing follicles cannot suppress the rise in FSH, that is, if FSH > 20 IU/L or more than two standard values above the baseline, ovarian reserve function has declined, which marks the occult phase of POF. Some researchers believe the best indicator for predicting ovarian reserve status is the CCCT [16].

The accuracy of any of the above single indicators for predicting POF is limited. As predictors of POF have developed, multi-factor approaches have emerged, and this paper has proposed and clinically tested a prediction method for POF based on AHP-BPNN.
The results show that AHP-BPNN uses the AHP to filter the important indicators, which simplifies the neural network structure and greatly reduces computation time, while the nonlinear approximation ability of the BP neural network allows it to predict a condition as complex as POF. The resulting prediction accuracy and operating efficiency make the system more scientific and precise, and it has good application prospects in clinical practice.

Acknowledgements

The work described in this paper was supported by the Health and Family Planning Commission of Hubei, China (Grant No. WJ2018H0102) and the Fourth Batch of Wuhan Middle-Young Medical Talents Project in Hubei, China. These financial contributions are gratefully acknowledged.
2022-08-13 09:28:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5916345119476318, "perplexity": 2303.744237128892}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571911.5/warc/CC-MAIN-20220813081639-20220813111639-00011.warc.gz"}
https://datascience.stackexchange.com/questions/60574/kernel-trick-and-inner-product-preservation
# Kernel Trick and Inner Product Preservation

I understand that the point of using the kernel trick is to project the problem onto a higher dimensional space, where the problem is linearly separable. In this explanation, https://www.quora.com/What-is-the-kernel-trick, it states that the inner product $\langle x,y \rangle$ will be equal to $\langle \phi(x),\phi(y)\rangle$. Understanding this equality seems to be key to understanding how this trick works. My question is, how do we know that our function $\phi$ will preserve the inner product, and what are the conditions for this to happen? I have tried Google-searching this, and despite many references to the kernel trick, I do not believe it has been answered anywhere.

It doesn't mean that $\langle x, y \rangle = \langle \phi(x), \phi(y)\rangle$. The kernel method in general means that for an algorithm that involves $\langle x, y\rangle$, we can replace it with a function $K(x,y)=\langle \phi(x), \phi(y)\rangle$ where computing $K(x,y)$ is easy. It is known that for any $K$ satisfying the conditions of Mercer's theorem, there is a corresponding $\phi$, which is the map to the higher dimensional space. We do not need to know $\phi$ explicitly, and $\phi$ can be very complicated.

• What gives you the right to replace with $K(x,y)$? Also, are you saying that the equality in the guy's post is by chance? Sep 22 '19 at 8:11
• For some algorithms, we might find that they just involve an inner product with some feature space; in that case, rather than computing $\langle x, y \rangle$, that is, working with $x$ directly, we might like to perform a transformation $\phi(x)$ and then work with it. Since the algorithm just involves an inner product, what matters is just $\langle \phi(x), \phi(y) \rangle$, and we can compute it via $K(x,y)$ without caring about $\phi$ explicitly. As for the "by chance" thing, I might need you to be more explicit. Sep 22 '19 at 8:17
• What is the justification for being able to swap to $K(x,y)$? Sep 22 '19 at 9:59
• The justification is that rather than working with $x$ directly, we want to work with $\phi(x)$; for example, rather than working with $x$, you might want to work with $(x, x^2)$. Then you want to work with the inner product of $(x, x^2)$ and $(y, y^2)$, and this is denoted $K(x,y)= \langle (x, x^2), (y, y^2) \rangle$. Sep 22 '19 at 10:05
• Yes, but why can you swap $x$ for $\phi(x)$? What are the requirements on $\phi$? Sep 22 '19 at 10:06
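A concrete numeric check can make the answer's point vivid. For the explicit map $\phi(x_1,x_2)=(x_1^2,\sqrt{2}\,x_1x_2,x_2^2)$, one can verify that $\langle\phi(x),\phi(y)\rangle=\langle x,y\rangle^2$, i.e. the kernel $K(x,y)=\langle x,y\rangle^2$ computes the high-dimensional inner product without ever forming $\phi$. This particular $\phi$ and the test vectors are my own illustration, not from the thread; a small C++ sketch:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    double x[2] = {1.5, -0.7}, y[2] = {0.3, 2.0};

    // Explicit feature map, then the inner product in the 3-dimensional space.
    double px[3] = {x[0]*x[0], std::sqrt(2.0)*x[0]*x[1], x[1]*x[1]};
    double py[3] = {y[0]*y[0], std::sqrt(2.0)*y[0]*y[1], y[1]*y[1]};
    double lhs = px[0]*py[0] + px[1]*py[1] + px[2]*py[2];

    // Kernel trick: the same number without ever forming phi.
    double dot = x[0]*y[0] + x[1]*y[1];
    double rhs = dot * dot;

    std::printf("<phi(x),phi(y)> = %.6f, K(x,y) = %.6f\n", lhs, rhs);
}
```

Both printed values agree, which is exactly why an algorithm that only ever touches inner products can swap in $K$.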
2022-01-25 20:57:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8602035045623779, "perplexity": 145.3331154091703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304872.21/warc/CC-MAIN-20220125190255-20220125220255-00499.warc.gz"}
https://www.gamedev.net/forums/topic/565400-implementing-t9-with-1-mb-ram/
Implementing T9 with 1 MB RAM

Recommended Posts

Hi guys, I would like to talk about what I'm going to code, because I need some suggestions :)
Embedded platform, RAM < 1 MB available, CPU < 100 MHz.
I'm going to write a T9 for that embedded system, but I'm sure I will encounter problems with the RAM: I need to use as little RAM as I can, because I need all the RAM for the main program.
What I would do: load the dictionary into RAM, then use some simple string comparison to find the nearest 4/5 words. But this is a problem: the dictionary could be very big, using too much RAM.
Any ideas on another way to do this?

Share on other sites

db on file, ordered. Binary search (on file) and load only the needed data in real time.

Share on other sites

Quote:
Original post by feal87
db on file, ordered. Binary search (on file) and load only the needed data in real time.

I forgot to say that the "disk" is a memory stick, and access to it is damn slow.

Share on other sites

Quote:
Original post by roby65
I forgot to say that the "disk" is a memory stick, and access to it is damn slow.

I've done a similar app for an old (DR-DOS only) PDT from Symbol, and it works fast. Speed is not necessary, seek times are. Do a check.

Share on other sites

Quote:
Original post by feal87
I've done a similar app for an old (DR-DOS only) PDT from Symbol, and it works fast. Speed is not necessary, seek times are. Do a check.

So, should I load words from the file every time the user types anything?

Share on other sites

Quote:
Original post by roby65
So, should I load words from the file every time the user types anything?

Yep, only the needed words (5-6), using a binary search on the file.

Share on other sites

Instead of a full blown dictionary (that is, if you really mean something like a C++ map, C# dictionary, or similar associative arrays/containers), you could build something like this (pseudo-code), with keys only:

const int num_letters = 26; // 26 assumed here for a-z; the original left the count unspecified
struct Entry {
    Entry *next[num_letters];
};

(The original post illustrated the resulting structure with a small ASCII tree of words sharing common prefixes, branching letter by letter.)

Unsupported letters would just be NULL, and if you are careful, lookup per Entry is O(const). If even this key-only structure is too much, you could still cut it off at some level (i.e. for very long or uncommon words) and then load from disk. Not trivial, but then you are not on a trivial system. Also don't forget that 1 MiB = 1024 KiB = 1048576 bytes, i.e., if you do it non-naively, then that's plenty of space.

sidenote: You are aware that T9 is a patented "idea"?

Share on other sites

What phresnel is talking about is called a trie.

Share on other sites

If you have some cycles to waste, you can also make it more compact:

struct Letters {
    bool a:1, b:1, c:1, d:1, e:1, f:1, g:1;
    bool h:1, i:1, j:1, k:1, l:1, m:1, n:1;
    bool o:1, p:1, q:1, r:1, s:1, t:1, u:1;
    bool v:1, w:1, x:1, y:1, z:1;
};
struct Node {
    Letters letters;
    Node *children;
};

This would be 8 bytes per node on x86. E.g., when you have a node with a, k, x set, then children[0] would be your a node, children[1] would be your k node, and children[2] would be your x node.
Of course you should add some helper functions to that structure, like size(), char operator[](size_t), etc., otherwise your code gets scrambled ;)

Quote:
Original post by alvaro
What phresnel is talking about is called a trie.

Tbh, I didn't know that it has a name :D
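For reference, here is a C++ sketch of the on-stick lookup feal87 describes: the dictionary is stored as sorted, fixed-width records, so a lookup needs only O(log n) seeks and a single record-sized buffer of RAM. The 16-byte record width and the file name are assumptions for illustration, not anything from the thread.

```cpp
#include <cstring>
#include <fstream>
#include <string>

const int REC = 16;  // each word zero-padded to 16 bytes; file sorted lexicographically

bool lookup(std::fstream &f, long nrec, const std::string &key, char rec[REC + 1]) {
    long lo = 0, hi = nrec - 1;
    while (lo <= hi) {
        long mid = lo + (hi - lo) / 2;
        f.seekg(mid * (long)REC);        // one seek per probe
        f.read(rec, REC);
        rec[REC] = '\0';
        int cmp = std::strncmp(key.c_str(), rec, REC);
        if (cmp == 0) return true;       // exact hit
        if (cmp < 0) hi = mid - 1;
        else         lo = mid + 1;
    }
    return false;
}

int main() {
    std::fstream f("dict.bin", std::ios::in | std::ios::binary);
    f.seekg(0, std::ios::end);
    long nrec = (long)f.tellg() / REC;
    char rec[REC + 1];
    bool found = lookup(f, nrec, "hello", rec);
    return found ? 0 : 1;
}
```

For T9 specifically one would sort and search on the digit-key encoding of each word rather than on the word itself, but the seek pattern is identical.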
2017-11-23 07:25:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19238391518592834, "perplexity": 6276.126465260704}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806760.43/warc/CC-MAIN-20171123070158-20171123090158-00116.warc.gz"}
https://processdesign.mccormick.northwestern.edu/index.php?title=Estimation_of_production_cost_and_revenue&diff=prev&oldid=1236
Estimation of production cost and revenue

Variable Cost of Production

Estimating Variable Production Costs

Raw Materials Cost
These are the costs of the chemical feedstocks required by the process. Feedstock flow rates are obtained from the PFD [6].

Utilities Cost
These are the costs of the various utility streams required by the process. The flow rates for the utility streams are also located on the PFD [6]. Utilities include:
• Fuel gas, oil, or coal
• Electric power
• Steam
• Cooling water
• Process water
• Boiler feed water
• Air
• Inert gas
• Refrigeration

Waste Disposal Costs
These are defined as the costs of treating waste so as to protect the environment [6].

Fixed Cost of Production

Estimating Fixed Production Costs

Labor Costs
These are the costs attributed to the personnel required to operate the process plant [6].

Maintenance Costs
These costs cover the labor and materials necessary to maintain plant production [6].

Research and Development
These are the costs of the research done in developing the process and/or products, including salaries for researchers as well as funds for research-related equipment and supplies [6].

Revenues

The revenues of a process are the income earned from sales of the main products and the by-products. Revenue can be affected by market fluctuations and production rates.

By-Product Revenues
Besides the main product of a process, by-products from separations and reactions can also be valuable in the market. It is often more difficult to decide which by-products to recover and purify than it is to make decisions on the main product. By-products made in stoichiometric ratios by reactions must be either sold off or managed through waste disposal; other by-products are sometimes produced from feed impurities or by nonselective reactions. Several kinds of by-products are potentially valuable:
1. Materials produced in stoichiometric quantities by the reactions that create the main product. If they are not recovered, the waste disposal expenses will be large.
2. Components produced in high yield by side reactions.
3. Components formed in high yield from feed impurities. Much sulfur is produced as a by-product of fuels manufacture.
4. Components produced in low yield but having high value. An example is acetophenone, which is recovered as a by-product of phenol manufacture.
5. Degraded consumables (e.g. solvents) that have reuse value.

A rule of thumb that can be used for preliminary screening of by-products in large plants is that by-product recovery is economically feasible only if the net benefit exceeds $200,000 a year. The net benefit can be calculated by adding the possible resale value of the by-product and the avoided waste disposal cost [1].

Margin
The gross margin of a process is defined as the sum of product and by-product revenues minus the raw materials cost.

Gross margin = Revenues - Raw materials costs

Because raw materials are most often the largest variable cost of a process, the gross margin is a good gauge of what the total profitability of a process will be. Raw material and product prices are often subject to high degrees of variability, which can be difficult to forecast. Margins vary widely by industry: for many petrochemical processes the margin may be only 10%, whereas for industries such as food additives and pharmaceuticals the margins are generally much higher [1].
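As a numeric illustration of the two screening quantities just defined (all figures below are invented, not from the cited texts), a short C++ sketch:

```cpp
#include <cstdio>

int main() {
    // By-product recovery screening: net benefit = resale value + avoided disposal cost.
    double resale_value  = 150000.0;  // $/yr, hypothetical
    double avoided_waste =  80000.0;  // $/yr, hypothetical
    double net_benefit   = resale_value + avoided_waste;
    std::printf("net benefit = $%.0f/yr -> %s\n", net_benefit,
                net_benefit > 200000.0 ? "worth recovering" : "not worth recovering");

    // Gross margin = revenues - raw materials costs.
    double revenues      = 12.0e6;    // $/yr, product + by-product sales
    double raw_materials =  7.0e6;    // $/yr
    std::printf("gross margin = $%.1fM/yr (%.0f%% of revenues)\n",
                (revenues - raw_materials) / 1e6,
                100.0 * (revenues - raw_materials) / revenues);
}
```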
Profits

There are several standards for calculating company profits. The cash cost of production (CCOP) is the sum of the fixed and variable production costs:

$CCOP = VCOP + FCOP$

where $VCOP$ is the variable cost of production and $FCOP$ is the fixed cost of production. Gross profit, which should not be confused with gross margin, is then calculated by the following equation:

$\text{Gross profit} = \text{Main product revenues} - CCOP$

Finally, net profit can be calculated by subtracting the income taxes the plant is subject to, which depend on the tax code of the country where the plant is located:

$\text{Net profit} = \text{Gross profit} - \text{Taxes}$

References
1. Wankat, P.C. (2012). Separation Process Engineering. Upper Saddle River: Prentice-Hall.
2. Towler, G.P. and Sinnott, R. (2012). Chemical Engineering Design: Principles, Practice and Economics of Plant and Process Design. Elsevier.
3. Biegler, L.T., Grossmann, I.E., and Westerberg, A.W. (1997). Systematic Methods of Chemical Process Design. Upper Saddle River: Prentice-Hall.
4. Peters, M.S. and Timmerhaus, K.D. (2003). Plant Design and Economics for Chemical Engineers, 5th Edition. New York: McGraw-Hill.
5. Seider, W.D., Seader, J.D., and Lewin, D.R. (2004). Process Design Principles: Synthesis, Analysis, and Evaluation. New York: Wiley.
6. Turton, R.T., Bailie, R.C., Whiting, W.B., and Shaeiwitz, J.A. (2003). Analysis, Synthesis, and Design of Chemical Processes. Upper Saddle River: Prentice-Hall.
2022-11-29 13:39:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46834129095077515, "perplexity": 4725.9740022552705}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00227.warc.gz"}
https://www.physicsforums.com/threads/how-do-i-simplify-this-inequality.636349/
# Homework Help: How do I simplify this inequality

1. Sep 16, 2012

### MeMoses

1. The problem statement, all variables and given/known data
I'm following along with my physics book and I get to the point where

$$Mg\,|\sin(\theta) - \cos(\theta)| \le \mu Mg\,(\cos(\theta) + \sin(\theta))$$

Next they say: if $\tan(\theta) \ge 1$, then

$$\sin(\theta) - \cos(\theta) \le \mu(\cos(\theta) + \sin(\theta)) \implies \tan(\theta) \le \frac{1+\mu}{1-\mu}$$

2. Relevant equations

3. The attempt at a solution
How do I go from $\sin(\theta) - \cos(\theta) \le \mu(\cos(\theta) + \sin(\theta))$ to $\tan(\theta) \le (1+\mu)/(1-\mu)$? Could someone walk me through this or at least get me started? Thanks

2. Sep 16, 2012

### SammyS
Staff Emeritus

You get to do most of the walking. (Hopefully, you know by now that that's how we do things here at PF.)

Take
$\sin(\theta) - \cos(\theta) \le \mu(\cos(\theta) + \sin(\theta))$​
and divide both sides by cos(θ). Simplify, and the only trig function remaining is tan. Now, try to isolate tan(θ).

3. Sep 16, 2012

### MeMoses
Thanks, that's all I needed
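For completeness, carrying SammyS's hint through (my addition, assuming $\cos\theta > 0$ and $0 \le \mu < 1$ so that both divisions preserve the direction of the inequality):

$$\sin\theta - \cos\theta \le \mu(\cos\theta + \sin\theta)
\;\Longrightarrow\;
\tan\theta - 1 \le \mu(1 + \tan\theta)
\;\Longrightarrow\;
(1-\mu)\tan\theta \le 1 + \mu
\;\Longrightarrow\;
\tan\theta \le \frac{1+\mu}{1-\mu}.$$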
2018-10-23 18:32:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.562640368938446, "perplexity": 6203.23304301558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516892.84/warc/CC-MAIN-20181023174507-20181023200007-00469.warc.gz"}
http://math.stackexchange.com/questions/126308/if-a-rational-number-has-a-finite-decimal-representation-then-it-has-a-finite-r/126310
# If a rational number has a finite decimal representation, then it has a finite representation in base $b$ for any $b>1$?

Is it true that if a rational number has a finite decimal representation, then it has a finite representation in base $b$ for any $b>1$? I would like to know if there is a book where this subject is fully detailed.

EDIT: Improved the question!

- Not true. 0.1 base 10 = 0.00011001100110011.... in base 2. – Apprentice Queue Mar 30 '12 at 16:25
- The answer to your question is fairly easy as written: a number with finite representation in base $10$ has finite representation in base $100$. – Arturo Magidin Mar 30 '12 at 16:26
- @ArturoMagidin: I think I messed up my question! Unfortunately, I do not know how to improve it. – spohreis Mar 30 '12 at 16:30
- If you meant "finite representation in base $b$ for any $b\gt 1$", then write it like that. It currently reads "...has finite representation in relation to at least one base $b$, $b\neq 10$." – Arturo Magidin Mar 30 '12 at 16:31

Theorem. A rational number $\frac{a}{b}$, $\gcd(a,b)=1$, has finite representation in base $k$ if and only if there exists $n\gt 0$ such that $b|k^n$.

Proof. Suppose $b|k^n$. Then $k^n = bq$, so we can write $$\frac{a}{b} = \frac{aq}{bq} = \frac{aq}{k^n}$$ which has finite representation. Conversely, if $\frac{a}{b}$ has finite representation, then we can write it as $\frac{r}{k^n}$ for some $n\gt 0$, hence $br=ak^n$; since $\gcd(a,b)=1$, then $b|k^n$. $\Box$

Corollary. A rational number $\frac{a}{b}$, $\gcd(a,b)=1$, has finite representation in base $k$ if and only if the only primes that divide $b$ also divide $k$.

If we interpret your question as: "if $\frac{a}{b}$ has finite decimal representation, then it has finite representation in base $k$ for any $k\gt 1$", then the answer is "no": $\frac{1}{2}$ does not have finite representation in base $3$. If we interpret it as "if $\frac{a}{b}$ has finite decimal representation, then it has finite representation in base $k$ for some $k$", then the answer is "yes" (though it trivially does in base $10^n$ for any $n$).

-

An $x\in\mathbb{Q}$ has a finite digital representation in base $b$ if and only if $x= n/b^k$ for some integers $n,k$, with $k\ge 0$. Since $x= nm^k /(mb)^k$, $x$ also has a finite representation in base $mb$ for any $m\in\mathbb{N}$. Wait, do you mean some other base, or any other base? These are distinct questions.

-

$\frac{1}{13}$ in base $10: 0.0\overline{769230}$
$\frac{1}{13}$ in base $13: 0.1$

-

Every number with a finite expansion in base $a$ has a finite expansion in base $b$ if and only if there exists a positive integer $n$ such that $a\mid b^n$. The expansion in base $b$ of every decimal is finite if and only if $10\mid b$.

-

Hardy and Wright has a good analysis of the theory of "decimals" in different bases.
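The corollary translates directly into a few lines of code; a C++ sketch (the test cases are taken from the answers above):

```cpp
#include <cstdio>
#include <numeric>   // std::gcd (C++17)

// a/b (reduced to lowest terms) terminates in base k iff every prime factor
// of b divides k, i.e. iff b | k^n for some n > 0.
bool finite_in_base(long long a, long long b, long long k) {
    b /= std::gcd(a, b);                 // reduce to lowest terms
    for (long long g = std::gcd(b, k); g > 1; g = std::gcd(b, k))
        b /= g;                          // repeatedly strip factors shared with k
    return b == 1;
}

int main() {
    std::printf("%d\n", finite_in_base(1, 2, 3));    // 0: 1/2 is not finite in base 3
    std::printf("%d\n", finite_in_base(1, 10, 2));   // 0: 0.1 repeats in base 2
    std::printf("%d\n", finite_in_base(1, 8, 10));   // 1: 1/8 = 0.125 in base 10
    std::printf("%d\n", finite_in_base(1, 13, 13));  // 1: 1/13 = 0.1 in base 13
}
```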
2015-05-24 22:17:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9426349401473999, "perplexity": 130.6434484702741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928078.25/warc/CC-MAIN-20150521113208-00176-ip-10-180-206-219.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1617246/solving-integral-using-trig-substitution-tanx-2-t
# Solving integral using trig substitution $\tan(x/2)=t$

I have problems with solving the following integral: $$\int{{\sin x - \cos x}\over {\sin x + \cos x}} \, dx$$ Could anybody please help me find the solution and show me the method by which it can be solved? I have already tried to solve similar ones, but I always get stuck when trying the technique with partial fraction decomposition.

• What was your exact attempt? Where did you get stuck? And please use $\LaTeX$ next time. – Adrian Jan 18 '16 at 19:28
• Please include your thoughts and efforts (work in progress) in this and future posts. You are more likely to receive positive/constructive feedback that way. Formatting your post helps too. Formatting tips here. – Em. Jan 18 '16 at 19:29
• My answer doesn't tell you what the bottom line is, nor does it tell you how to evaluate the integral of the rational function that results from the substitution, but I think it gives a more detailed account of the substitution itself than the other answers do, and this particular substitution needs a fairly detailed account in order to be properly understood. – Michael Hardy Jan 18 '16 at 20:11
• @addy2012 : It is at best very silly to call the math-notation typesetting software $\text{“}\LaTeX\text{''}$. It is MathJax. $\LaTeX$ doesn't just do mathematical notation, and that's not even most of what it does. People who master MathJax under the mistaken impression that they then know $\LaTeX$ are in for a shock if they try to use actual $\LaTeX$ and find they know next to nothing about it. – Michael Hardy Jan 18 '16 at 20:12
• \relax, please :) – JnxF Jan 18 '16 at 20:32

Implementing the half-angle substitution (per your title), you have $$t=\tan\frac{x}{2}\implies\mathrm{d}t=\frac{1}{2}\sec^2\frac{x}{2}\,\mathrm{d}x$$ From the fact that $t=\tan\dfrac{x}{2}$, you can extract the following: $$\begin{cases}\sin x=2\sin\dfrac{x}{2}\cos\dfrac{x}{2}=\dfrac{2t}{1+t^2}\\[1ex] \cos x=\cos^2\dfrac{x}{2}-\sin^2\dfrac{x}{2}=\dfrac{1-t^2}{1+t^2}\\[1ex] \sec^2\dfrac{x}{2}=1+t^2\end{cases}$$ All of this tells you your initial integral is equivalent to $$\int\frac{\sin x-\cos x}{\sin x+\cos x}\,\mathrm{d}x=\int\frac{\frac{2t}{1+t^2}-\frac{1-t^2}{1+t^2}}{\frac{2t}{1+t^2}+\frac{1-t^2}{1+t^2}}\times\frac{2}{1+t^2}\,\mathrm{d}t=-2\int\frac{t^2+2t-1}{(t^2-2t-1)(1+t^2)}\,\mathrm{d}t$$ Decomposing into partial fractions yields $$-2\int\left(\frac{t-1}{t^2-2t-1}-\frac{t}{t^2+1}\right)\,\mathrm{d}t$$ Both integrals can easily be computed with substitutions that use the respective denominators of the integrand's terms.

$$\int\frac{\sin(x)-\cos(x)}{\sin(x)+\cos(x)}\space\text{d}x=$$ Substitute $u=\sin(x)+\cos(x)$ and $\text{d}u=(\cos(x)-\sin(x))\space\text{d}x$: $$-\int\frac{1}{u}\space\text{d}u=-\ln\left|u\right|+\text{C}=-\ln\left|\sin(x)+\cos(x)\right|+\text{C}$$

Hint: substitute $t=\sin x +\cos x$ and $dt=(\cos x - \sin x)dx$

\begin{align} \tan \frac x 2 & = t \\[8pt] \frac x 2 & = \arctan t \\[8pt] x & = 2\arctan t \\[8pt] \sin x & = \sin(2\arctan t) = \sin(2 \, \bullet) = 2 \sin(\bullet)\cos(\bullet) \\ & = 2\sin(\arctan t)\cos(\arctan t), \\[8pt] \cos x & = \cos(2\arctan t) = \cos(2\,\bullet) = \cos^2(\bullet) - \sin^2(\bullet) \\ & = \cos^2(\arctan t) - \sin^2(\arctan t). \end{align} Now let us consider what $\sin(\arctan t)$ and $\cos(\arctan t)$ are.
$\tan = \dfrac{\text{opposite}}{\text{adjacent}} = \dfrac t 1$ so $\text{hypotenuse} = \sqrt{t^2+1^2}$, and so we have $$\sin(\arctan t) = \frac{\text{opposite}}{\text{hypotenuse}} = \frac t {\sqrt{t^2+1}}.$$ Similarly $$\cos(\arctan t) = \frac{\text{adjacent}}{\text{hypotenuse}} = \frac 1 {\sqrt{t^2+1}}.$$ Consequently $$2\sin(\arctan t)\cos(\arctan t) = 2\cdot \frac 1 {\sqrt{t^2+1}} \cdot \frac t {\sqrt{t^2+1}} = \frac{2t}{1+t^2}.$$ Similarly $$\cos^2(\arctan t) - \sin^2(\arctan t) = \frac 1 {1+t^2} - \frac {t^2} {1+t^2} = \frac{1-t^2}{1+t^2}.$$ So now we have \left. \begin{align} \sin x & = \frac{2t}{1+t^2}, \\[10pt] \cos x & = \frac{1-t^2}{1+t^2}. \end{align} \right\} \tag 1 Finally, since $x = 2\arctan t$, we have $$dx = \frac{2\,dt}{1+t^2}. \tag 2$$ Your ultimate answer comes from $(1)$ and $(2)$, followed by actually evaluating the resulting integral of a rational function.

Notice, $$\int \frac{\sin x-\cos x}{\sin x+\cos x}\ dx$$$$=\int \frac{-(\cos x-\sin x)}{\sin x+\cos x}dx$$ $$=-\int \frac{d(\sin x+\cos x)}{\sin x+\cos x}$$ $$=\color{red}{-\ln|\sin x+\cos x|+C}$$

Notice, $$\int \frac{\sin x-\cos x}{\sin x+\cos x}\ dx$$ $$=\int \frac{\frac{\sin x}{\cos x}-1}{\frac{\sin x}{\cos x}+1}\ dx$$ $$=\int \frac{\tan x-1}{\tan x+1}\ dx$$ $$=\int \frac{\tan x-\tan\frac \pi4}{1+\tan x\tan\frac \pi4}\ dx$$ $$=\int\tan\left(x-\frac{\pi}{4}\right)\ dx$$ $$=\color{blue}{\ln\left|\sec\left(x-\frac{\pi}{4}\right)\right|+C}$$
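The two boxed antiderivatives are consistent; as a quick check (my addition, not part of the original answers), since $\cos\left(x-\frac{\pi}{4}\right) = \frac{\cos x + \sin x}{\sqrt{2}}$, we get

$$\ln\left|\sec\left(x-\frac{\pi}{4}\right)\right| = -\ln|\sin x + \cos x| + \tfrac{1}{2}\ln 2,$$

so the two results differ only by a constant, which is absorbed into $C$.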
2019-11-19 15:59:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9777312874794006, "perplexity": 607.6090054177834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670156.86/warc/CC-MAIN-20191119144618-20191119172618-00252.warc.gz"}
http://mathematica.stackexchange.com/questions/linked/1137
466 views ### Copying Greek text from notebooks as unicode [duplicate] Possible Duplicate: How to “Copy as Unicode” from a Notebook? How can I copy Greek text from notebooks as proper Unicode that can be pasted into other applications? If I type ... 2k views ### How can I wrap text around a circle? How can I wrap text around a circle? For example: the text in the sectors of this chord plot. Perhaps one could use FilledCurve[] and then apply a ... 632 views ### Join lists with nested list Is there a way of smarter way of joining list of the form l1 = {a,{b,c}}; l2 = {d,{e,f}}; l3 = {g,{h,i}}; To obtain ... 1k views ### Plot a 2D vector path onto a surface In my calculus 3 course, we're studying gradients and have a project that takes a combination of 3D Gaussian radial surfaces and a basic parametric path $r(t) = \{x(t),y(t)\}$ to see how the gradient ... 953 views ### Clipboard with transparency After reading this question I have determined that Rasterize[Graphics[Circle[]], "Image", Background -> None] allows you to do ... 3k views ### Is there a way to run Python from within Mathematica? I know there is some support for running Mathematica from Python, but is there any way to do the reverse. For example, to import some Python classes and use them in Mathematica? 377 views ### What does None mean in a control specification for Manipulate? I am now struggling to understand code that contains the following (simplified) Manipulate structure. ... 397 views ### For a given expression: if it appears, remove it, but if it is absent, add it While reformatting Szabolcs's code from (42660) I noticed this interesting operation: ... 2k views ### How to dynamically name and date a file for export? I have various functions that export Excel and text files, and need to have them named with a name based on the content of some variables in the function and the time and date. How can this be done ... 3k views ### Does Mathematica support variable frame rate for any video format, in analogue of GIF-style “DisplayDurations”? The good old GIF animation format allows us to set the duration of each individual frame in the animation separately. This is especially useful if some frames in ... 524 views ### Problem importing URL with Greek characters I'm considering buying a car. So I thought why not make a web-crawler in Mathematica to pile-up car data? Brilliant idea. Then I found this Greek website, gocar.gr, which just so happens to have all ... 1k views ### Problem with SphericalPlot3D plotting I want to plot the real part of the SphericalHarmonicY[1,1,θ,φ]. That is, I want to plot $$- \frac{1}{2}\sqrt {\frac{3}{{2\pi }}} \cos[\phi ]\sin[\theta ]$$ To ...
2015-07-29 22:16:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6788825392723083, "perplexity": 2542.8612788922474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986646.29/warc/CC-MAIN-20150728002306-00159-ip-10-236-191-2.ec2.internal.warc.gz"}
https://xianblog.wordpress.com/tag/bibpunct/
## Natbib error

Posted in Statistics, University life on July 21, 2011 by xi'an

When compiling my papers with the PNAS options,

```
\documentclass{pnastwo}
\usepackage{amssymb,amsfonts,amsmath}
\usepackage[numbers,round]{natbib}
\bibpunct{(}{)}{,}{a}{,}{,}
\bibliographystyle{pnas}
```

I would get a compilation error

! Package natbib Error: Bibliography not compatible with author-year citations.
(natbib) Press <return> to continue in numerical citation style.
See the natbib package documentation for explanation.

that was quite minor (just press <return>!) but nonetheless annoying. This seemed to be due to the latest version of natbib. I eventually found a solution by Ulrike Fischer on this forum, namely to replace {a} with {n} in the above \bibpunct line:

```
\bibpunct{(}{)}{,}{n}{,}{,}
```
2022-09-26 19:29:53
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8222006559371948, "perplexity": 8988.046730959137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00314.warc.gz"}
https://xplaind.com/934664/bond-premium-amortization
# Bond Premium Amortization

When a bond is issued at a price higher than its par value, the difference is called bond premium. The bond premium must be amortized over the life of the bond using the effective interest method or the straight-line method.

A bond has a stated interest rate, also called the coupon rate, and pays periodic interest payments (coupon payments) based on that rate. If the market interest rate is lower than the coupon rate, the bond must trade at a price higher than its par value: the bond overcompensates the bond-holder in terms of interest payments, so it fetches a premium. This follows from the most fundamental time-value-of-money relationship, namely that present value decreases with an increase in the interest rate. A bond is valued at the present value of its future cash flows (i.e. coupon payments and the par value) determined using the market interest rate.

## Issuance of Bond at Premium

Let's consider a conventional bond with the following features:

Face value of bond: $1,000
Annual stated interest rate (coupon rate): 5%
Maturity in years: 5
Coupon payments per year: 2
Market interest rate: 4.8%

By just comparing the market interest rate with the annual coupon rate, you can tell whether the bond will trade at a discount or a premium. In this case the bond will trade at a premium, hence it can be called a premium bond: it pays interest at 5%, which is higher than the prevailing market interest rate.

The bond premium equals the bond value determined at the market interest rate minus the par value. The bond value is determined from the market interest rate using the bond price formula:

$$Bond\ Price\ (P)\\=1,000\times2.5\%\times\frac{1-{(1+2.4\%)}^{-2\times5}}{2.4\%}+\frac{1,000}{{(1+2.4\%)}^{2\times5}}\\=1,008.80$$

The bond will be issued at a premium of $8.80 per bond. If 100,000 bonds are issued, the issue is recorded with the following journal entry:

Account        Dr             Cr
Cash           100,879,746
Bond payable                  100,000,000
Bond premium                  879,746

## Payment of Interest and Amortization of Premium

After the first six-month period, you pay interest on the bond based on the coupon rate. The interest payment is $2,500,000 (= 100,000 × $1,000 × 5%/2). At the time of issue you received cash of $100.9 million but your liability is $100 million; the difference of $0.9 million is used over the life of the bond to reduce your interest expense.

There are two methods to work out the periodic amortization of bond premium: the effective interest method and the straight-line method. Under the effective interest method, the bond premium amortized in each period is calculated using the following formula:

$$Bond\ Premium\ Amortized=F\times c\ -\ P\times m$$

where P is the carrying value of the bond at the start of the period (the issue price in the first period), m is the periodic market interest rate, F is the face value of the bond, and c is the periodic coupon rate. Under the straight-line method, the bond premium is amortized equally in each period. The journal entry for the interest payment and premium amortization takes the same form regardless of the method used. Say you use the straight-line method; the journal entry for the first interest payment and amortization would be:

Account                        Dr           Cr
Interest expense               $2,412,025
Bond premium (879,746/10)      $87,975
Cash                                        $2,500,000

## Bond Premium Amortization Schedule

Under the effective interest method, the bond premium amortization differs in each period, so it is useful to create an amortization schedule in such a situation.
An amortization schedule lists each interest payment and reconciles it with interest expense, showing the period-wise amortization of bond premium. The following schedule applies to the bond payable discussed above:

Period  Interest Payment  Interest Expense  Amortization of Premium  Bond Premium  Carrying Value
0                                                                    879,746       100,879,746
1       2,500,000         2,421,114         78,886                   800,860       100,800,860
2       2,500,000         2,419,221         80,779                   720,081       100,720,081
3       2,500,000         2,417,282         82,718                   637,363       100,637,363
4       2,500,000         2,415,297         84,703                   552,659       100,552,659
5       2,500,000         2,413,264         86,736                   465,923       100,465,923
6       2,500,000         2,411,182         88,818                   377,105       100,377,105
7       2,500,000         2,409,051         90,949                   286,156       100,286,156
8       2,500,000         2,406,868         93,132                   193,024       100,193,024
9       2,500,000         2,404,633         95,367                   97,656        100,097,656
10      2,500,000         2,402,344         97,656                   0             100,000,000

Written by Obaidullah Jan, ACA, CFA and last modified on
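The schedule is mechanical to generate; here is a C++ sketch reproducing the table above from the bond's terms (figures will match the table up to rounding):

```cpp
#include <cstdio>

int main() {
    double face   = 100000000.0;      // 100,000 bonds x $1,000
    double carry  = 100879746.0;      // issue price = carrying value at period 0
    double coupon = face * 0.05 / 2;  // $2,500,000 cash interest per half-year
    double mrate  = 0.048 / 2;        // periodic market rate, 2.4%

    double premium = carry - face;
    std::printf("per    payment      expense      amort     premium     carrying\n");
    for (int t = 1; t <= 10; ++t) {
        double expense = carry * mrate;     // interest expense = P x m
        double amort   = coupon - expense;  // premium amortized = F x c - P x m
        premium -= amort;
        carry   -= amort;
        std::printf("%3d %10.0f %12.0f %10.0f %11.0f %12.0f\n",
                    t, coupon, expense, amort, premium, carry);
    }
}
```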
2018-08-21 20:24:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18209755420684814, "perplexity": 7497.334815488585}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221218899.88/warc/CC-MAIN-20180821191026-20180821211026-00692.warc.gz"}
http://zbmath.org/?q=an:1060.39031
# zbMATH — the first resource for mathematics

On the different definitions of the stability of functional equations. (Sur les définitions différentes de la stabilité des équations fonctionnelles.) (French) Zbl 1060.39031

In a great number of papers and monographs dealing with the stability of functional equations, various types of this concept are considered. The paper under review is a kind of survey; the main types of stability are defined and compared. Here is the list of them:

Let $(*)$ $L(f)=R(f)$ be a functional equation with an unknown function $f$, and let $\rho$ be a metric on the target space.

The equation $(*)$ is (uniquely) stable if for each $\epsilon>0$ there exists $\delta>0$ such that for each $g$ satisfying $\rho(L(g),R(g))<\delta$, for all the variables of $g$, there exists a (unique) solution $f$ of the equation such that $\rho(g,f)<\epsilon$ (for all the variables of $f$ and $g$).

The equation $(*)$ is (uniquely) $b$-stable if for each $g$ for which $\rho(L(g),R(g))$ is bounded, there exists a (unique) solution $f$ of $(*)$ such that $\rho(f,g)$ is bounded.

One says that $(*)$ is (uniquely) uniformly $b$-stable if for each $\delta>0$ there exists $\epsilon>0$ such that for each $g$ satisfying $\rho(L(g),R(g))<\delta$ there is a (unique) solution $f$ of $(*)$ such that $\rho(f,g)<\epsilon$. In particular, if $\epsilon=\alpha\delta$ for some $\alpha>0$, $(*)$ is said to be strongly stable (or strongly and uniquely stable).

Definitions of not-uniquely and totally-not-uniquely stable equations are also considered, as well as (uniquely / not uniquely / totally not uniquely) iterative stability.
The equation $(*)$ is superstable if for each $g$ for which $\rho(L(g),R(g))$ is bounded, $g$ is bounded or it is a solution of the equation $(*)$; if the functions $g$, in the case considered, are bounded by the same constant, $(*)$ is called strongly superstable. If each $g$ for which $\rho(L(g),R(g))$ is bounded is a solution of $(*)$, then we call $(*)$ completely superstable.

There are suitable examples for the above definitions and comparisons. Also some properties and related results are proved. The so-called Hyers' operator and the stability of conditional functional equations are also mentioned.

##### MSC:
39B82 Stability, separation, extension, and related topics
39-02 Research monographs (functional equations)
39B52 Functional equations for functions with more general domains and/or ranges
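As a concrete illustration of $b$-stability (an added example, not part of the review itself; it is the classical 1941 theorem of Hyers for the Cauchy equation on Banach spaces): if $E_1, E_2$ are Banach spaces and $g\colon E_1\to E_2$ satisfies $\|g(x+y)-g(x)-g(y)\|\le\delta$ for all $x,y\in E_1$, then there exists a unique additive function $f\colon E_1\to E_2$, namely
$$f(x)=\lim_{n\to\infty}2^{-n}g(2^n x),$$
such that $\|g(x)-f(x)\|\le\delta$ for all $x\in E_1$. In the terminology above, the Cauchy equation $f(x+y)=f(x)+f(y)$ is thus not only $b$-stable but strongly and uniquely stable, with $\alpha=1$.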
2014-04-23 17:05:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 45, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8248490691184998, "perplexity": 1408.3119890142448}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/does-acceleration-slow-time.919565/page-2#post-5801225
# Does acceleration slow time?

Z3r0
Well I know for a fact that my clock at work ticks much slower than the clock at the bar. Cheers :D
Unbelievable! Show us the data! :0)
Z3r0
phinds
Gold Member
Hi everybody, At Wiki, the clock hypothesis states that the rate of a clock does not depend on its acceleration but only on its instantaneous velocity,
No, it does not depend on either. All clocks tick at one second per second. You are likely referring to what it APPEARS to do to an observer.
so it means that, for two clocks at rest one beside the other, if a clock stays at rest while the other accelerates away, the one that is accelerating begins to slow down at the beginning of the acceleration, and goes on slowing down during the acceleration
No, it does neither, it just keeps ticking at one second per second. Again, you are referring to what an observer sees, not what the clock is doing.
which means that we can predict which clock is going to slow down during the acceleration
We know which clock will appear to be going slower, yes
whereas relativity tells us that we cannot predict anymore which clock is going to slow down after the acceleration has stopped.
This is not correct. The clocks are symmetrical after acceleration stops. Each appears to the other to be running slow, but both are ticking at one second per second.
Last edited:
Dale
Mentor
2020 Award
which means that we can predict which clock is going to slow down during the acceleration
Think this through a little more. What if the clocks start out moving and one of them accelerates to rest?
Ibix
pervect
Staff Emeritus
Thanks for sorting through my poor phrasing to understand my question, and for the more exact thought experiment. So, because both clocks have the same velocity relative to some observer, she will observe them both to run at the same speed. That one clock is under greater acceleration than the other has no impact. (Ibix also seems to confirm this for me, thanks.) I don't so much ask why gravity affects time as why acceleration doesn't. If I understand Einstein's elevator thought experiment, then it seems like it'd be easy to tell, from inside, whether you were on the surface of Earth or accelerating at 1g through space. Only if on Earth would the clock on the ceiling tick more slowly than the one on the floor?
I could be wrong, but my impression is that when you talk about "gravity affecting time", you presume some universal notion of time, absolute time, that exists for gravity to affect. This is not the case in special - or general - relativity. Unfortunately, it seems a bit of a digression to go off and try to explain this point, but it also seems like an obstacle to any further progress without explaining it. I shall try another approach that might be of some help. This is to propose a different experiment, basically a variant of something that has actually been carried out, the "Harvard clock tower" experiment, aka the Pound-Rebka experiment, and compare it to the experiment you suggest. Basically, a signal emitter is placed at a high altitude, and a receiver is placed at a low altitude, and one looks for a Doppler shift of the transmitted signal. One predicts from conservation-of-energy arguments, and measures experimentally, that the Doppler shift exists. Perhaps it is not at first obvious what this has to do with 'time'.
The emitting source can be regarded as being some sort of clock in its own right, and it can be compared locally to some standard clock (currently the standard is a cesium atomic clock), and it can be found that the emitting source keeps the same sort of time as the standard clock. An identical emitting source can be placed at the receiver's position. This lower emitting source, too, can be synchronized to a standard atomic clock at the same lower-altitude location. But the Doppler-shifted signal from the upper clock cannot, due to the Doppler shift, have the same frequency as the non-Doppler-shifted signal from the lower emission source. So we at least start to glimpse the issue here. The experiment you proposed doesn't show any difference in clock rates. But the Pound-Rebka experiment does. "Clock rates" is an ambiguous term; here we have several clocks, and just as importantly, we need the details of how we compare these clocks. So we need to be more precise in our language. We need the right words to talk concisely about the difference between the two experiments. The one set of experiments shows no difference in 'time', but the other set of experiments does. The right words here turn out to be 'proper time' and 'coordinate time'. There are some other active (and long) threads on this already; it would be off topic to go into all of the details here, I think. The point I want to make is that one can concisely describe the results of the two experiments by saying that in an inertial frame of reference, the ratio of proper time to coordinate time does not depend on acceleration, only on velocity. In a non-inertial frame of reference, though, the ratio of proper time to coordinate time depends on acceleration and position within the frame. This latter observation about non-inertial frames is true both in the case of the non-inertial frame of an accelerating elevator and in the non-inertial frame that's due to gravity.
I think they will agree on times (and be slower than your clock if you also are standing at the tangent point.) But I need to sit down with pencil and paper and work out the details. (It's been a while since I played with this.) I will do this and post my figures. In the meantime, does this scenario address your question?
How do they agree on their time despite the fact that both of them are non-inertial frames with different accelerations?
Ibix
2020 Award
How do they agree on their time despite the fact that both of them are non-inertial frames with different accelerations?
Because their paths through spacetime are the same "length".
Because their paths through spacetime are the same "length".
But I think their metrics are different in their respective accelerating frames. And the metric is a function of the acceleration, not as simple as the Minkowski metric.
But I think their metrics are different in their respective accelerating frames. And the metric is a function of the acceleration, not as simple as the Minkowski metric.
But this confuses me again because this would mean the proper time is not invariant. I mean, in an accelerating frame with no gravity, will the metric in front of ##dt## be a function of the acceleration? If yes, then how is the proper time attached to the clock at rest in this frame invariant?
Last edited:
Dale
Mentor
2020 Award
But this confuses me again because this would mean the proper time is not invariant.
Proper time is invariant.
If yes, then how is the proper time attached to the clock at rest in this frame invariant?
In the case of the accelerated frame the proper acceleration is in the metric. In the case of an inertial frame the proper acceleration is in the expression of the worldline. Either way the proper time is affected by the proper acceleration.
Think this through a little more. What if the clocks start out moving and one of them accelerates to rest?
Hi Dale, I see two possibilities: If both clocks are at rest side by side and clock A accelerates away from clock B and then decelerates to rest with regard to clock B after a while, then to me, clock A is the one which will have slowed down even if we cannot measure it from a distance, which should show if we accelerate clock A towards clock B and reunite the two clocks again. If clock B accelerates to rest with regard to clock A after clock A has accelerated away from it, then to me, clock A would still have slowed down with regard to clock B for a while, which should also show if we accelerate clock A towards clock B and reunite the two clocks. Those are circumstances where we know which clock has accelerated, thus which one is actually moving with regard to the other. That's what happens when we send probes, for instance, or when we accelerate particles. We also know that an atmospheric muon lives longer than a laboratory one because we know where it started to move and at what speed it has traveled. When we know which clock is traveling, which is the case for practical problems, we can still use relativistic calculations to know how much it has slowed down even if it is no longer a relativity problem. Difficult relativity problems seem to be reserved for situations where it is impossible to tell where the motion comes from, thus for useless situations.
Dale
Mentor
2020 Award
I see two possibilities
Both of the possibilities you mention are more complicated than the scenario you described in post 25. In post 25 you had only a single acceleration period for a single clock. Consider just that scenario from the reference frame where the inertial clock ends at rest, and then consider the same scenario from the reference frame where the accelerating clock ends at rest.
Ibix
2020 Award
@Raymond Potvin - that wasn't quite what Dale asked. He was asking what happens if two clocks are travelling side by side at 0.6c and one of them accelerates at 1g until it is at rest with respect to you. Compare and contrast what happens if the two clocks are side by side at rest with respect to you and one accelerates at 1g to 0.6c. Which one ticks slowly? Is it always the one that accelerated, which is what you seem to be claiming in #25?
phinds
Gold Member
Hi Dale, I see two possibilities: If both clocks are at rest side by side and clock A accelerates away from clock B and then decelerates to rest with regard to clock B after a while, then to me, clock A is the one which will have slowed down even if we cannot measure it from a distance, which should show if we accelerate clock A towards clock B and reunite the two clocks again.
Again, it will not have "slowed down", it will have produced fewer ticks. That is, it will still be ticking at one second per second but a different number of seconds will have passed for it (fewer in this example) because it took a different path through spacetime.
I keep pointing this out in response to your posts because "slowing down" is seriously misleading, and people who believe that clocks run slower in their own reference frames get all confused as to how biological processes could slow down too (as they would have to if "slowing down" were true).
Again, it will not have "slowed down", it will have produced fewer ticks. That is, it will still be ticking at one second per second but a different number of seconds will have passed for it (fewer in this example) because it took a different path through spacetime. I keep pointing this out in response to your posts because "slowing down" is seriously misleading, and people who believe that clocks run slower in their own reference frames get all confused as to how biological processes could slow down too (as they would have to if "slowing down" were true).
Hi Phinds, I prefer to call a cat a cat: if something ages less because of motion, then I prefer to look for a physical phenomenon. If particles' frequencies go down when we accelerate them, then I prefer to attribute this dilation to the time their components take to produce those frequencies. Of course it doesn't work if we don't know they are the ones that have accelerated with regard to the detector, but if we do, it seems to work. To me, if a twin ages less than the other, it is because his metabolism slows down during the time he is traveling with regard to his brother. If a clock records less time, it is because its atoms' frequencies slow down. More generally, if light takes more time between the mirrors of the moving light clock, it means that the molecules of the mirrors take more time to reflect it, that the bonding between the atoms of those molecules also takes more time to be executed, and so on for the components of those atoms. To me, this is the only way the laws of physics can stay the same for all observers in inertial motion, and also the only way to explain the null result of the MM experiment.
Ibix
2020 Award
If particles' frequencies go down when we accelerate them, then I prefer to attribute this dilation to the time their components take to produce those frequencies. Of course it doesn't work if we don't know they are the ones that have accelerated with regard to the detector, but if we do, it seems to work.
Contradicting the principle of relativity doesn't seem to me like a good way to go about understanding the theory of relativity.
Both of the possibilities you mention are more complicated than the scenario you described in post 25. In post 25 you had only a single acceleration period for a single clock. Consider just that scenario from the reference frame where the inertial clock ends at rest, and then consider the same scenario from the reference frame where the accelerating clock ends at rest.
I meant «at rest with regard to the other clock», not at rest with regard to a third observer, as Ibix pointed out. The way the two clocks move with regard to one another does not depend on the way they move with regard to another observer, but the way they accelerate still does, even if it is a bit more complicated to illustrate. There are more possibilities then, but if we study every one of them, we should be able to use acceleration to tell which clock has slowed down with regard to each observer. If acceleration was not determinant, I think we couldn't tell which twin has aged less.
Ibix
2020 Award
If acceleration was not determinant, I think we couldn't tell which twin has aged less.
Not true. Pick a frame.
Write down the speed ##v(t)## of one of the twins at all times ##t## between the first (t=0) and second (t=T) meetings of the twins as measured in that frame. Evaluate $$\tau=\int_0^T\sqrt{1-v^2(t)/c^2}\,dt$$ This is the age of that twin at their second meeting. Repeat for the second twin. You have your answer with no mention of acceleration. Note that since I only asked for speed, not velocity, the acceleration cannot, in general, be inferred from ##v(t)##. Jambaugh's circular track, for which ##v## is a constant but there is always acceleration, is an extreme example of this.
Dale
Mentor
2020 Award
Good luck with that. It won't work, but going through the exercise will be valuable for you. You will find that you need to know the velocity, not just the acceleration.
Last edited:
Good luck with that. It won't work, but going through the exercise will be valuable for you
I used two clocks only because the problem was easier to describe, which would be the case if, in the problem you asked me to solve, you could pick only one possibility where you think acceleration is not determinant.
Not true. Pick a frame. Write down the speed ##v(t)## of one of the twins at all times ##t## between the first (t=0) and second (t=T) meetings of the twins as measured in that frame. Evaluate $$\tau=\int_0^T\sqrt{1-v^2(t)/c^2}\,dt$$ This is the age of that twin at their second meeting. Repeat for the second twin. You have your answer with no mention of acceleration. Note that since I only asked for speed, not velocity, the acceleration cannot, in general, be inferred from ##v(t)##. Jambaugh's circular track, for which ##v## is a constant but there is always acceleration, is an extreme example of this.
Hi Ibix, Since motion is relative, the relative speed would be the same for both twins if we could not tell which one has accelerated. If we start the experiment with both twins side by side in space, for example, the twin that accelerates knows he does, and the twin that does not accelerate also knows he doesn't, so if the one that knows he has accelerated gets back to his twin later on, he knows he will have aged less, and he knows how much if he knows how much he has accelerated and how long the roundtrip took.
Ibix
2020 Award
It's trivial to set up situations where both twins undergo the same accelerations but end up different ages. A variant on Jambaugh's circular tracks will do it.
Contradicting the principle of relativity doesn't seem to me like a good way to go about understanding the theory of relativity.
The relativity principle is about not knowing that we are moving, thus when we know, and we need to calculate time dilation, it is easier not to refer to it.
It is possible to completely rule out accelerations in the following way. The FIRST twin is at rest; the SECOND approaches him. When they meet, they synchronize clocks; their clocks show 0. The twins recede from each other and the SECOND meets a THIRD twin, who flies towards the FIRST. When they meet, they synchronize clocks. Let's say their clocks show 3. Then that THIRD twin meets the FIRST and they compare clock readings again. The THIRD clock will show less time.
If we consider the case (the motion of the twins) from any arbitrarily chosen frame (as @Ibix proposed), the paradox simply turns into an effect. Two twins move side by side. One of them suddenly stops. It is clear his clock will now tick faster than the moving one.
His (stopped) clock will tick at the same rate as any synchronized clock of the frame in which the motion takes place. Thus, it will be ticking faster than the moving one. Then he suddenly starts (or passes his clock readings to a third brother who passes by) and catches up with the moving one. It is easy to calculate that, while he overtakes the moving one, his clock will show gamma times less time.
Dale
Mentor
2020 Award
I used two clocks only because the problem was easier to describe, which would be the case if, in the problem you asked me to solve, you could pick only one possibility where you think acceleration is not determinant.
I already did that. Two clocks moving initially at the same velocity; the one on the right accelerates to the right. The one on the right may tick faster or it may tick slower, depending on the initial velocity. The acceleration alone does not determine it; the velocity (in an inertial frame) does.
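A small numeric check (added here, not part of the thread) of the proper-time integral quoted above, ##\tau=\int_0^T\sqrt{1-v^2(t)/c^2}\,dt##. The velocity profiles and numbers are assumptions chosen for simplicity: one twin stays at rest, the other travels out and back at a constant 0.6c.

```python
# Sketch: age each twin between two meetings using only its speed history
# v(t) in one chosen inertial frame (units with c = 1).
import math

c = 1.0
T = 10.0  # coordinate time between the two meetings

def v_A(t):            # inertial twin
    return 0.0

def v_B(t):            # out-and-back twin: speed 0.6c the whole trip
    return 0.6 * c

def proper_time(v, T, n=100_000):
    # midpoint Riemann sum of sqrt(1 - v(t)^2/c^2) dt
    dt = T / n
    return sum(math.sqrt(1.0 - v((i + 0.5) * dt) ** 2 / c ** 2) * dt
               for i in range(n))

print(proper_time(v_A, T))   # ~10.0
print(proper_time(v_B, T))   # ~8.0, i.e. T * sqrt(1 - 0.6^2)
```

Only the speed history enters the integral; where and how hard the acceleration happens never appears, which is the point being made about ##v(t)## above.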
2021-12-08 23:12:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6779912710189819, "perplexity": 444.84860747906265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363598.57/warc/CC-MAIN-20211208205849-20211208235849-00533.warc.gz"}
http://www.theinfolist.com/html/ALL/s/inverse_function.html
TheInfoList

In mathematics, the inverse function of a function $f$ (also called the inverse of $f$) is a function that undoes the operation of $f$. The inverse of $f$ exists if and only if $f$ is bijective, and if it exists, is denoted by $f^{-1}$.

For a function $f\colon X\to Y$, its inverse $f^{-1}\colon Y\to X$ admits an explicit description: it sends each element $y\in Y$ to the unique element $x\in X$ such that $f(x)=y$.

As an example, consider the real-valued function of a real variable given by $f(x) = 5x - 7$. One can think of $f$ as the function which multiplies its input by 5 then subtracts 7 from the result. To undo this, one adds 7 to the input, then divides the result by 5. Therefore, the inverse of $f$ is the function $f^{-1}\colon \R\to\R$ defined by $f^{-1}(y) = \frac{y+7}{5} .$

# Definitions

Let $f$ be a function whose domain is the set $X$, and whose codomain is the set $Y$. Then $f$ is ''invertible'' if there exists a function $g$ from $Y$ to $X$ such that $g(f(x))=x$ for all $x\in X$ and $f(g(y))=y$ for all $y\in Y$. If $f$ is invertible, then there is exactly one function $g$ satisfying this property. The function $g$ is called the inverse of $f$, and is usually denoted as $f^{-1}$, a notation introduced by John Frederick William Herschel in 1813.

The function $f$ is invertible if and only if it is bijective. This is because the condition $g(f(x))=x$ for all $x\in X$ implies that $f$ is injective, and the condition $f(g(y))=y$ for all $y\in Y$ implies that $f$ is surjective. The inverse function to $f$ can be explicitly described as the function
$f^{-1}(y)=(\text{the unique element } x\in X \text{ such that } f(x)=y)$.

## Inverses and composition

Recall that if $f$ is an invertible function with domain $X$ and codomain $Y$, then
$f^{-1}\left(f(x)\right) = x$ for every $x \in X$, and $f\left(f^{-1}(y)\right) = y$ for every $y \in Y$.

Using the composition of functions, this statement can be rewritten to the following equations between functions:
$f^{-1} \circ f = \operatorname{id}_X$ and $f \circ f^{-1} = \operatorname{id}_Y,$
where $\operatorname{id}_X$ is the identity function on the set $X$; that is, the function that leaves its argument unchanged. In category theory, this statement is used as the definition of an inverse morphism.

Considering function composition helps to understand the notation $f^{-1}$. Repeatedly composing a function with itself is called iteration. If $f$ is applied $n$ times, starting with the value $x$, then this is written as $f^{n}(x)$; so $f^{2}(x)=f(f(x))$, etc. Since $f^{-1}(f(x)) = x$, composing $f^{-1}$ and $f^{n}$ yields $f^{n-1}$, "undoing" the effect of one application of $f$.

## Notation

While the notation $f^{-1}(x)$ might be misunderstood, $(f(x))^{-1}$ certainly denotes the multiplicative inverse of $f(x)$ and has nothing to do with the inverse function of $f$. The notation $f^{\langle -1\rangle}$ might be used for the inverse function to avoid ambiguity with the multiplicative inverse.

In keeping with the general notation, some English authors use expressions like $\sin^{-1}(x)$ to denote the inverse of the sine function applied to $x$ (actually a partial inverse; see below). Other authors feel that this may be confused with the notation for the multiplicative inverse of $\sin(x)$, which can be denoted as $(\sin(x))^{-1}$. To avoid any confusion, an inverse trigonometric function is often indicated by the prefix "arc" (for Latin ''arcus'').
For instance, the inverse of the sine function is typically called the arcsine function, written as $\arcsin$. Similarly, the inverse of a hyperbolic function is indicated by the prefix "ar" (for Latin ''area''). For instance, the inverse of the hyperbolic sine function is typically written as $\operatorname{arsinh}$. Note that expressions like $\sin^{-1}(x)$ can still be useful to distinguish the multivalued inverse from the partial inverse: $\sin^{-1}(x) = \{(-1)^k \arcsin(x) + \pi k : k \in \mathbb{Z}\}$. Other inverse special functions are sometimes prefixed with "inv", if the ambiguity of the notation should be avoided.

# Examples

## Squaring and square root functions

The function given by $f(x) = x^2$ is not injective because $(-x)^2=x^2$ for all $x\in\R$. Therefore, $f$ is not invertible. If the domain of the function is restricted to the nonnegative reals, that is, we take the function $f\colon [0,\infty)\to [0,\infty);\ x\mapsto x^2$ with the same ''rule'' as before, then the function is bijective and so, invertible. The inverse function here is called the ''(positive) square root function'' and is denoted by $x\mapsto\sqrt x$.

## Standard inverse functions

[Table of standard functions and their inverses; lost in extraction.]

## Formula for the inverse

Many functions given by algebraic formulas possess a formula for their inverse. This is because the inverse $f^{-1}$ of an invertible function $f\colon\R\to\R$ has an explicit description as
$f^{-1}(y)=(\text{the unique } x\in \R \text{ such that } f(x)=y)$.

This allows one to easily determine inverses of many functions that are given by algebraic formulas. For example, if $f$ is the function
$f(x) = (2x + 8)^3$
then to determine $f^{-1}(y)$ for a real number $y$, one must find the unique real number $x$ such that $(2x+8)^3 = y$. This equation can be solved:
$y = (2x+8)^3 \iff \sqrt[3]{y} = 2x+8 \iff x = \frac{\sqrt[3]{y} - 8}{2} .$
Thus the inverse function is given by the formula
$f^{-1}(y) = \frac{\sqrt[3]{y} - 8}{2} .$

Sometimes, the inverse of a function cannot be expressed by a closed-form formula. For example, if $f$ is the function
$f(x) = x - \sin x ,$
then $f$ is a bijection, and therefore possesses an inverse function $f^{-1}$. The formula for this inverse has an expression as an infinite sum:
$f^{-1}(y) = \sum_{n=1}^\infty \frac{y^{n/3}}{n!} \lim_{\theta \to 0} \frac{\mathrm d^{\,n-1}}{\mathrm d\theta^{\,n-1}} \left( \frac{\theta}{\sqrt[3]{\theta - \sin\theta}} \right)^{n} .$

# Properties

Since a function is a special type of binary relation, many of the properties of an inverse function correspond to properties of converse relations.

## Uniqueness

If an inverse function exists for a given function $f$, then it is unique. This follows since the inverse function must be the converse relation, which is completely determined by $f$.

## Symmetry

There is a symmetry between a function and its inverse. Specifically, if $f$ is an invertible function with domain $X$ and codomain $Y$, then its inverse $f^{-1}$ has domain $Y$ and image $X$, and the inverse of $f^{-1}$ is the original function $f$. In symbols, for functions $f\colon X\to Y$ and $f^{-1}\colon Y\to X$,
$f^{-1}\circ f = \operatorname{id}_X$ and $f \circ f^{-1} = \operatorname{id}_Y.$

This statement is a consequence of the implication that for $f$ to be invertible it must be bijective. The involutory nature of the inverse can be concisely expressed by
$\left(f^{-1}\right)^{-1} = f.$

The inverse of a composition of functions is given by
$(g \circ f)^{-1} = f^{-1} \circ g^{-1}.$

Notice that the order of $g$ and $f$ have been reversed; to undo $f$ followed by $g$, we must first undo $g$, and then undo $f$. For example, let $f(x) = 3x$ and let $g(x) = x + 5$.
Then the composition $g \circ f$ is the function that first multiplies by three and then adds five,
$(g \circ f)(x) = 3x + 5.$
To reverse this process, we must first subtract five, and then divide by three,
$(g \circ f)^{-1}(x) = \tfrac13(x - 5).$
This is the composition $f^{-1} \circ g^{-1}$.

## Self-inverses

If $X$ is a set, then the identity function on $X$ is its own inverse:
$\operatorname{id}_X^{-1} = \operatorname{id}_X.$
More generally, a function $f\colon X\to X$ is equal to its own inverse if and only if the composition $f \circ f$ is equal to $\operatorname{id}_X$. Such a function is called an involution.

## Graph of the inverse

If $f$ is invertible, then the graph of the function
$y = f^{-1}(x)$
is the same as the graph of the equation
$x = f(y) .$
This is identical to the equation $y = f(x)$ that defines the graph of $f$, except that the roles of $x$ and $y$ have been reversed. Thus the graph of $f^{-1}$ can be obtained from the graph of $f$ by switching the positions of the $x$ and $y$ axes. This is equivalent to reflecting the graph across the line $y = x$.

## Inverses and derivatives

The inverse function theorem states that a continuous function $f$ is invertible on its range (image) if and only if it is either strictly increasing or decreasing (with no local maxima or minima). For example, the function
$f(x) = x^3 + x$
is invertible, since the derivative $f'(x) = 3x^2 + 1$ is always positive.

If the function $f$ is differentiable on an interval $I$ and $f'(x) \ne 0$ for each $x \in I$, then the inverse $f^{-1}$ is differentiable on $f(I)$. If $y = f(x)$, the derivative of the inverse is given by the inverse function theorem,
$\left(f^{-1}\right)^\prime (y) = \frac{1}{f'\left(f^{-1}(y)\right)}.$
Using Leibniz's notation the formula above can be written as
$\frac{dx}{dy} = \frac{1}{dy/dx}.$
This result follows from the chain rule (see the article on inverse functions and differentiation).

The inverse function theorem can be generalized to functions of several variables. Specifically, a differentiable multivariable function $f$ is invertible in a neighborhood of a point $p$ as long as the Jacobian matrix of $f$ at $p$ is invertible. In this case, the Jacobian of $f^{-1}$ at $f(p)$ is the matrix inverse of the Jacobian of $f$ at $p$.

# Real-world examples

* Let $f$ be the function that converts a temperature in degrees Celsius to a temperature in degrees Fahrenheit, $F = f(C) = \tfrac95 C + 32 ;$ then its inverse function converts degrees Fahrenheit to degrees Celsius, $C = f^{-1}(F) = \tfrac59 (F - 32) ,$ since $f^{-1}(f(C)) = C$ for every value of $C$.
* Suppose $f$ assigns each child in a family its birth year. An inverse function would output which child was born in a given year. However, if the family has children born in the same year (for instance, twins or triplets, etc.) then the output cannot be known when the input is the common birth year. As well, if a year is given in which no child was born then a child cannot be named. But if each child was born in a separate year, and if we restrict attention to the three years in which a child was born, then we do have an inverse function. For example, $\begin{align} f(\text{Allan})&=2005 , \quad & f(\text{Brad})&=2007 , \quad & f(\text{Cary})&=2001 \\ f^{-1}(2005)&=\text{Allan} , \quad & f^{-1}(2007)&=\text{Brad} , \quad & f^{-1}(2001)&=\text{Cary} \end{align}$
* Let $R$ be the function that leads to an $x$ percentage rise of some quantity, and $F$ be the function producing an $x$ percentage fall. Applied to \$100 with $x$ = 10%, we find that applying the first function followed by the second does not restore the original value of \$100, demonstrating the fact that, despite appearances, these two functions are not inverses of each other.
* The formula to calculate the pH of a solution is $\text{pH} = -\log_{10}[\text{H}^+]$. In many cases we need to find the concentration of acid from a pH measurement.
The inverse function $[\text{H}^+] = 10^{-\text{pH}}$ is used.

# Generalizations

## Partial inverses

Even if a function $f$ is not one-to-one, it may be possible to define a partial inverse of $f$ by restricting the domain. For example, the function
$f(x) = x^2$
is not one-to-one, since $(-x)^2 = x^2$. However, the function becomes one-to-one if we restrict to the domain $x \ge 0$, in which case
$f^{-1}(y) = \sqrt{y} .$
(If we instead restrict to the domain $x \le 0$, then the inverse is the negative of the square root of $y$.) Alternatively, there is no need to restrict the domain if we are content with the inverse being a multivalued function:
$f^{-1}(y) = \pm\sqrt{y} .$
Sometimes, this multivalued inverse is called the full inverse of $f$, and the portions (such as $\sqrt{x}$ and $-\sqrt{x}$) are called ''branches''. The most important branch of a multivalued function (e.g. the positive square root) is called the ''principal branch'', and its value at $y$ is called the ''principal value'' of $f^{-1}(y)$.

For a continuous function on the real line, one branch is required between each pair of local extrema. For example, the inverse of a cubic function with a local maximum and a local minimum has three branches (see the adjacent picture).

These considerations are particularly important for defining the inverses of trigonometric functions. For example, the sine function is not one-to-one, since
$\sin(x + 2\pi) = \sin(x)$
for every real $x$ (and more generally $\sin(x + 2\pi n) = \sin(x)$ for every integer $n$). However, the sine is one-to-one on the interval $\left[-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right]$, and the corresponding partial inverse is called the arcsine. This is considered the principal branch of the inverse sine, so the principal value of the inverse sine is always between $-\tfrac{\pi}{2}$ and $\tfrac{\pi}{2}$. [Table describing the principal branch of each inverse trigonometric function; lost in extraction.]

## Left and right inverses

Function composition on the left and on the right need not coincide. In general, the conditions (1) "there exists $g$ such that $g(f(x)) = x$" and (2) "there exists $g$ such that $f(g(x)) = x$" imply different properties of $f$. For example, let $f$ denote the squaring map, such that $f(x) = x^2$ for all $x$ in $\R$, and let $g$ denote the square root map, such that $g(x) = \sqrt{x}$ for all $x \ge 0$. Then $f(g(x)) = x$ for all $x \ge 0$; that is, $g$ is a right inverse to $f$. However, $g$ is not a left inverse to $f$, since, e.g., $g(f(-1)) = 1 \ne -1$.

### Left inverses

If $f\colon X \to Y$, a left inverse for $f$ (or ''retraction'' of $f$) is a function $g\colon Y \to X$ such that composing $f$ with $g$ from the left gives the identity function: $g \circ f = \operatorname{id}_X .$ That is, the function $g$ satisfies the rule

If $f(x) = y$, then $g(y) = x$.

The function $g$ must equal the inverse of $f$ on the image of $f$, but may take any values for elements of $Y$ not in the image.

A function $f$ with nonempty domain is injective if and only if it has a left inverse. An elementary proof runs as follows:
* If $g$ is the left inverse of $f$, and $f(x) = f(y)$, then $x = g(f(x)) = g(f(y)) = y$.
* If nonempty $f\colon X \to Y$ is injective, construct a left inverse $g\colon Y \to X$ as follows: for all $y \in Y$, if $y$ is in the image of $f$, then there exists $x \in X$ such that $f(x) = y$. Let $g(y) = x$; this definition is unique because $f$ is injective. Otherwise, let $g(y)$ be an arbitrary element of $X$. For all $x \in X$, $f(x)$ is in the image of $f$. By construction, $g(f(x)) = x$, the condition for a left inverse.

In classical mathematics, every injective function $f$ with a nonempty domain necessarily has a left inverse; however, this may fail in constructive mathematics.
For instance, a left inverse of the inclusion $\{0,1\} \to \R$ of the two-element set in the reals violates indecomposability by giving a retraction of the real line to the set $\{0,1\}$.

### Right inverses

A right inverse for $f$ (or ''section'' of $f$) is a function $h\colon Y \to X$ such that
$f \circ h = \operatorname{id}_Y .$
That is, the function $h$ satisfies the rule

If $h(y) = x$, then $f(x) = y .$

Thus, $h(y)$ may be any of the elements of $X$ that map to $y$ under $f$. A function $f$ has a right inverse if and only if it is surjective (though constructing such an inverse in general requires the axiom of choice).

If $h$ is the right inverse of $f$, then $f$ is surjective. For all $y \in Y$, there is $x = h(y)$ such that $f(x) = f(h(y)) = y$.

If $f$ is surjective, $f$ has a right inverse $h$, which can be constructed as follows: for all $y \in Y$, there is at least one $x \in X$ such that $f(x) = y$ (because $f$ is surjective), so we choose one to be the value of $h(y)$.

### Two-sided inverses

An inverse that is both a left and right inverse (a two-sided inverse), if it exists, must be unique. In fact, if a function has a left inverse and a right inverse, they are both the same two-sided inverse, so it can be called the inverse.

If $g$ is a left inverse and $h$ a right inverse of $f$, then for all $y \in Y$, $g(y) = g(f(h(y))) = h(y)$.

A function has a two-sided inverse if and only if it is bijective.

A bijective function $f$ is injective, so it has a left inverse (if $f$ is the empty function, $f \colon \varnothing \to \varnothing$ is its own left inverse). $f$ is surjective, so it has a right inverse. By the above, the left and right inverse are the same.

If $f$ has a two-sided inverse $g$, then $g$ is a left inverse and right inverse of $f$, so $f$ is injective and surjective.

## Preimages

If $f\colon X \to Y$ is any function (not necessarily invertible), the preimage (or inverse image) of an element $y \in Y$ is defined to be the set of all elements of $X$ that map to $y$:
$f^{-1}(\{y\}) = \left\{ x \in X : f(x) = y \right\} .$
The preimage of $y$ can be thought of as the image of $y$ under the (multivalued) full inverse of the function $f$. Similarly, if $S$ is any subset of $Y$, the preimage of $S$, denoted $f^{-1}(S)$, is the set of all elements of $X$ that map to $S$:
$f^{-1}(S) = \left\{ x \in X : f(x) \in S \right\} .$

For example, take the function $f\colon \R \to \R;\ x \mapsto x^2$. This function is not invertible as it is not bijective, but preimages may be defined for subsets of the codomain, e.g.
$f^{-1}(\{1,4,9,16\}) = \{-4,-3,-2,-1,1,2,3,4\}$.

The preimage of a single element $y \in Y$ – a singleton set $\{y\}$ – is sometimes called the ''fiber'' of $y$. When $Y$ is the set of real numbers, it is common to refer to $f^{-1}(\{y\})$ as a ''level set''.

* Lagrange inversion theorem, gives the Taylor series expansion of the inverse function of an analytic function
* Integral of inverse functions
* Inverse Fourier transform
* Reversible computing
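A short numeric sketch of the worked example above, checking the closed-form inverse of $f(x)=(2x+8)^3$ and the derivative-of-the-inverse formula. This is an added illustration, not part of the article; the test points are arbitrary.

```python
# Sketch: verify f_inv(f(x)) = x for f(x) = (2x+8)^3, and check
# (f^-1)'(y) = 1 / f'(f^-1(y)) against a finite-difference derivative.
import math

def f(x):
    return (2 * x + 8) ** 3

def f_inv(y):
    # real cube root that also works for negative y
    cbrt = math.copysign(abs(y) ** (1 / 3), y)
    return (cbrt - 8) / 2

def f_prime(x):
    return 6 * (2 * x + 8) ** 2   # derivative of (2x+8)^3

for x in [-5.0, -3.0, 0.0, 1.5, 10.0]:
    assert abs(f_inv(f(x)) - x) < 1e-9    # f_inv undoes f

# derivative of the inverse at y0 = f(x0): theorem vs numerics
x0, h = 1.5, 1e-6
y0 = f(x0)
by_theorem = 1.0 / f_prime(f_inv(y0))
numeric = (f_inv(y0 + h) - f_inv(y0 - h)) / (2 * h)
print(by_theorem, numeric)   # the two values agree closely
```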
2023-02-01 15:22:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 75, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.958047091960907, "perplexity": 176.94861803457047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00747.warc.gz"}
https://math.stackexchange.com/questions/2855494/more-general-vitali-sets
# More general Vitali sets

Let $\mu^n$ be the n-dimensional Lebesgue measure. I want to show that the transformation $\mu^n: \mathcal{P}(\Omega)\rightarrow [0,\infty]$ doesn't exist. In other words, I want to use Vitali sets to demonstrate that there are sets which aren't measurable. I spoke with a friend of mine who said that we might use the Vitali sets for $\mathbb{R}$ we got in a proof and just attach a line to each of their points so that we get a similar set in $\mathbb{R}^2$. My first question would be: How can I show that this new "line"-set isn't measurable? Going further, we might deduce for a given dimension n that attaching lines to the points of a Vitali set gives sets which aren't measurable. Is this correct?

• I have no idea what you are saying. What is $\Omega$? What is a line set? – mathworker21 Jul 18 '18 at 11:34
• $\Omega$ is in this case $\mathbb{R}^n$. Imagine that you have a Vitali set for $\mathbb{R}$ in $\mathbb{R}^2$; this can be located on the x-axis. If you take lines which are orthogonal to the x-axis and intersect the x-axis in a point of the chosen Vitali set, you get what I called a line-set. – Rico1990 Jul 18 '18 at 11:41
• Have you tried picking representatives from $\Bbb{R^n/Q^n}$? – Asaf Karagila Jul 18 '18 at 12:07
• Thank you for your answer. – Rico1990 Jul 19 '18 at 10:14

For concreteness, let's first sketch a Vitali-based proof that not all sets of reals are measurable: Let $A_1$ be a subset of $[0,1]$ that contains exactly one representative for each equivalence class in $\mathbb R/\mathbb Q$. The set $$B_1 = \bigcup_{q\in\mathbb Q\cap[-1,1]} (A_1+q)$$ then satisfies $$[0,1] \subseteq B_1 \subseteq [-1,2]$$ so if it is measurable its measure must be between $1$ and $3$. But it is a disjoint union of countably many translated copies of $A_1$. This means that $A_1$ cannot have measure $0$ (because then $B_1$ would have measure $0$ too), nor can it have measure $>0$ (because then $B_1$ would have infinite measure). So $A_1$ is not measurable.

In two dimensions you can simply set $$A_2 = A_1 \times [0,1]$$ $$B_2 = \bigcup_{q\in\mathbb Q\cap[-1,1]} (A_2+\langle q,0\rangle) = B_1 \times [0,1]$$ and then repeat the same argument: $B_2$ should have measure between $1$ and $3$, but that cannot be a countably infinite sum of identical terms. The generalization to higher dimensions should now be clear.

• Which equivalence relation did you use in your proof? We used the classes $I_x = \lbrace z \in (0,1) \mid z - x \in \mathbb{Q} \rbrace$ and chose a representative for each, which gave us the Vitali set. Then we continued like you did. Can you explain why we get the measure between 1 and 3 in the 2-dimensional case? I suppose that this is derived from the product measure, right? – Rico1990 Jul 19 '18 at 10:14
• @Rico1990: Your Vitali set is the same as my $A_1$, just described using (very slightly) different words, and I use the closed unit interval instead of the open one, which matters not at all. --- In the two-dimensional case we have $[0,1]\times[0,1]\subseteq B_2 \subseteq [-1,2]\times[0,1]$ and those two rectangles have measure $1$ and $3$. This is directly derived from the specification of the Lebesgue measure: The unit square must have measure $1$ and the measure is invariant under translations. (The long rectangle is a sum of three squares.) – Henning Makholm Jul 19 '18 at 10:46
• Ok, thank you again. In case further questions appear I'll give a signal. – Rico1990 Jul 19 '18 at 17:05

Let $V$ be a Vitali set and consider $V \times \mathbb R^{n-1}$.
Then $$\mathbb R^n = \bigcup_{q \in \mathbb Q} (V+q) \times \mathbb R^{n-1}$$ and it's obvious that each $(V+q) \times \mathbb R^{n-1}$ is not measurable.

• Thank you for your answer. – Rico1990 Jul 19 '18 at 10:14
• Can you explain to me why $(V+q)×\Bbb{R}^{n−1}$ is not measurable? – 129492 Mar 23 at 15:31
• @129492. Do you know what a Vitali set is and why such a set is not measurable? – md2perpe Mar 23 at 15:38
• A Vitali set in $R$ lives in $[0;1]$, and to show that it is not measurable we define an equivalence relation on it... Right? I read it in Real_Analysis__Measure_Theory__Integration__and_Hilbert_Spaces__Princeton_Lectures_in_Analysis___Volume_3_.pdf – 129492 Mar 23 at 15:45
• @129492. To construct $V$ we define an equivalence relation on $[0,1)$. To show that $V$ is not measurable, we take a countable union $\bigcup_k V_k$ of translations modulo 1 of $V$ such that the union is all of $[0,1)$. Then, since Lebesgue measure is countably additive and translation invariant, we must have $$1 = m([0,1)) = m(\bigcup_k V_k) = \sum_k m(V_k) = \sum_k m(V) = \infty \times m(V).$$ The last expression is either $0$ (if $m(V)=0$) or $\infty$ (if $m(V)>0$). Contradiction! – md2perpe Mar 23 at 16:08
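A compact restatement of the translation-invariance contradiction used in both answers, in the $n$-dimensional form (an added summary, not from the thread): with $A_1\subseteq[0,1]$ a set of representatives for $\mathbb R/\mathbb Q$ and $A_n=A_1\times[0,1]^{n-1}$,
$$[0,1]^n \subseteq B_n = \bigcup_{q\in\mathbb Q\cap[-1,1]}(A_n+q\,e_1) \subseteq [-1,2]\times[0,1]^{n-1},$$
so $1\le\mu^n(B_n)\le 3$ if $B_n$ is measurable; but the union is disjoint, so countable additivity and translation invariance give
$$\mu^n(B_n)=\sum_{q\in\mathbb Q\cap[-1,1]}\mu^n(A_n)\in\{0,\infty\},$$
a contradiction. Therefore $A_n$ is not measurable.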
2019-07-23 02:42:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8702373504638672, "perplexity": 162.45357474970217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528687.63/warc/CC-MAIN-20190723022935-20190723044935-00243.warc.gz"}
http://math.stackexchange.com/questions/136118/ring-structure-of-non-modular-group-cohomology
# Ring structure of non-modular group cohomology

I know that if $k$ is a field such that $char(k)$ divides $|G|$ (a finite group), then finding the ring structure on $H^\ast(G,k)$ can be very, very hard. But what about when $k=\mathbb{Z}$? Is the computation easier? I know of several "modern" techniques to compute the cohomology groups, but have never encountered a detailed computation of the ring structure. Thanks!

-

## 1 Answer

This answers a previous version of the question... If $G$ is a finite group and $k$ is a field such that $\operatorname{char}k\nmid|G|$, then $H^p(G,k)=0$ for all $p>0$, so that the only non-zero cohomology group is $H^0(G,k)\cong k$, and in fact this isomorphism is an isomorphism of $k$-algebras. In other words: nothing interesting happens!

-

ah, sorry, I messed that up. I meant to be talking about over $\mathbb{Z}$. –  user641 Apr 24 '12 at 5:17
There is no need for any «modern techniques» for this: this was known approximately before group cohomology was defined :D –  Mariano Suárez-Alvarez Apr 24 '12 at 5:18
Sorry about that! Perhaps it is too late for me to be posting questions... :) –  user641 Apr 24 '12 at 5:20
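The vanishing quoted in the answer follows from the standard restriction–corestriction (transfer) argument; sketching it here for completeness (this is an addition, not part of the original exchange): for the trivial subgroup $1\le G$ one has
$$\operatorname{cor}^G_1\circ\operatorname{res}^G_1 = |G|\cdot\operatorname{id} \quad\text{on } H^p(G,k),$$
and $H^p(1,k)=0$ for $p>0$, so $|G|\cdot\operatorname{id}=0$ on $H^p(G,k)$. When $\operatorname{char}k\nmid|G|$, the scalar $|G|$ is invertible in $k$, forcing $H^p(G,k)=0$ for all $p>0$. Over $k=\mathbb{Z}$ the same argument only shows that $H^p(G,\mathbb{Z})$ is annihilated by $|G|$ for $p>0$, which is why the integral ring structure remains genuinely interesting.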
2015-08-02 15:01:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9584802389144897, "perplexity": 463.86871444228234}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989126.22/warc/CC-MAIN-20150728002309-00227-ip-10-236-191-2.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3233006/linearize-a-cst-implies-b-0
# Linearize $(a = cst) \implies (b = 0)$

Suppose I have two integer and non-negative decision variables $$a$$ and $$b$$ in a linear program and a constant $$c$$. How can I express with linear inequalities that $$(a = c) \implies (b = 0)$$? You can separate the two cases when $$c$$ is an integer and when it is not.

What I've tried so far is: $$a - c \ge b$$ but that's too strict a constraint, since many feasible values, like $$a = 0$$, $$b = c = 1$$, no longer work.

• Is $c$ an integer as well? – LarrySnyder610 May 20 at 15:15
• I edited my question according to your relevant comment ;) – J.Khamphousone May 21 at 7:06

Introduce three new binary decision variables, $$x$$, $$y$$, and $$z$$:

• If $$a \le c$$, then $$x$$ will equal 1
• If $$a \ge c$$, then $$y$$ will equal 1
• If $$a = c$$, then $$z$$ will equal 1

Introduce a new constant: $$\delta = \begin{cases} \min\{c - \lfloor c\rfloor, \lceil c\rceil - c\}, & \text{if } c \text{ is not an integer} \\ 1, & \text{if } c \text{ is an integer} \end{cases}$$ (i.e., if $$c$$ is not an integer, $$\delta$$ is the smaller of the two distances from $$c$$ to its nearest integers). Let $$M$$ be a large positive constant.

Enforce the definitions of the new decision variables with the following constraints: \begin{align} c - a + \delta & \le Mx \\ a - c + \delta & \le My \\ x + y - 1 & \le z \end{align}

The logic is:

• If $$a \le c$$, the LHS of the first constraint is positive, so $$x$$ must equal 1. If $$a > c$$, the constraint has no effect because the LHS is non-positive:
• If $$c$$ is not an integer, then $$a \ge \lceil c\rceil$$ since $$a$$ is an integer, so $$c - a + \delta \le c - \lceil c\rceil + \delta \le 0$$ by definition of $$\delta$$.
• If $$c$$ is an integer, then $$c - a + \delta \le -1 + \delta \le 0$$ by definition of $$\delta$$.
• If $$a \ge c$$, the LHS of the second constraint is positive, so $$y$$ must equal 1. If $$a < c$$, the constraint has no effect because the LHS is non-positive:
• If $$c$$ is not an integer, then $$a \le \lfloor c\rfloor$$ since $$a$$ is an integer, so $$a - c + \delta \le \lfloor c\rfloor - c + \delta \le 0$$ by definition of $$\delta$$.
• If $$c$$ is an integer, then $$a - c + \delta \le -1 + \delta \le 0$$ by definition of $$\delta$$.
• If $$x = y = 1$$, then $$z$$ must equal 1, whereas if either or both of $$x$$ and $$y$$ equals 0, then the third constraint has no effect.

Then, the constraint $$b \le M(1-z),$$ ensures that if $$z=1$$ (i.e., if $$a=c$$), then $$b=0$$.

A few notes:

• There's nothing that forces $$z$$ to equal 0 if $$a \ne c$$, but since you said $$(a = c) \implies (b = 0)$$, I understood this to mean that you don't care what happens if $$a \ne c$$.
• "Big-$$M$$"s are not great. Try to set $$M$$ as small as possible while still preserving the logic of the constraints.
• You are likely to run into some numerical issues since it's hard to test for true equality. Instead, you might want to add some tolerance, like: \begin{align} c - a + \delta & \le Mx + \epsilon \\ a - c + \delta & \le My + \epsilon \end{align} for small $$\epsilon$$.
• You said "integer and non-negative decision variables". I interpreted this to mean they are general integers (0, 1, 2, 3, ...). If they are actually binary, things get simpler.

• Thank you! I understand; you wanted to say, in your first sentence, $z$ equals 1 if $a \neq c$? – J.Khamphousone May 20 at 14:33
• Actually I have more mistakes than that. Let me rethink this and edit... – LarrySnyder610 May 20 at 15:05
• Edited -- see above and see whether you think it works.
– LarrySnyder610 May 20 at 17:08 • Alright that's exactly what I needed ;) – J.Khamphousone May 21 at 7:20
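A brute-force check of the accepted big-M encoding (an added illustration, not from the thread; the ranges, the value of $M$, and the helper names are mine): for each integer $a$ in a range, enumerate the binary variables and confirm that whenever $a=c$, every feasible assignment forces $b=0$, and that nothing is restricted when $a \ne c$.

```python
# Sketch: verify the big-M logic for (a == c) => (b == 0).
# Constraints: c - a + delta <= M*x,  a - c + delta <= M*y,
#              x + y - 1 <= z,        b <= M*(1 - z).
import math
from itertools import product

def delta_for(c):
    if c == int(c):                      # integer c
        return 1.0
    return min(c - math.floor(c), math.ceil(c) - c)

def check(c, M=100, a_range=range(0, 20), b_range=range(0, 20)):
    d = delta_for(c)
    for a in a_range:
        for b in b_range:
            # is (a, b) feasible for SOME choice of the binaries?
            feasible = any(
                c - a + d <= M * x
                and a - c + d <= M * y
                and x + y - 1 <= z
                and b <= M * (1 - z)
                for x, y, z in product((0, 1), repeat=3)
            )
            if a == c:
                assert feasible == (b == 0), (c, a, b)  # forces b = 0
            else:
                assert feasible, (c, a, b)              # no restriction

check(7)      # integer c
check(7.4)    # non-integer c (never equals an integer a)
print("big-M encoding behaves as intended on the tested ranges")
```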
2019-06-18 20:49:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 61, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9914799332618713, "perplexity": 203.31005631969052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998817.58/warc/CC-MAIN-20190618203528-20190618225528-00283.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-1-test-page-46/6
## Intermediate Algebra (6th Edition)

The statement: All rational numbers are integers.

Integers: {..., -3, -2, -1, 0, 1, 2, 3, ...}
Rational Numbers: Any number that can be expressed as $\frac{a}{b}$, where $a$ and $b$ are integers and $b \neq 0$.

Integers are whole numbers. An example of a rational number that is not an integer is $\frac{1}{3}$. The statement is false.
2018-07-16 08:47:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6894891858100891, "perplexity": 285.47625874180534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589237.16/warc/CC-MAIN-20180716080356-20180716100356-00400.warc.gz"}
https://www.neetgrade.com/posts/cell-structure/
Cell Structure
Published on Aug. 25, 2019, 5:19 p.m.
2021-06-22 09:46:10
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9456967115402222, "perplexity": 14871.921895818461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488517048.78/warc/CC-MAIN-20210622093910-20210622123910-00185.warc.gz"}
https://evansparks.wordpress.com/tag/airports/
The solution to NYC's airport woes?

Gothamist calls it "almost certainly a Swiftian satire," but there's something striking about the Manhattan Airport Foundation's "plan" to convert New York's long underused Central Park into the closest-in of close-in airports. There are already aviation buffs out there saying "oh please, oh please" — if only to experience an approach that would rival runway 13 at Hong Kong's old Kai Tak airport.

US Airways on perimeter restrictions at DCA, LGA

TEMPE — On most policy issues at the national level, airlines work through their trade association, ATA. Yesterday, I asked C. A. Howlett, US Airways senior VP for public affairs, about what issues he works on that the ATA does not get very involved in. "The biggest issue that is US Airways-specific is the Reagan National Airport perimeter rule." National is one of US Airways' key focus cities. He said that although the airline favors reducing barriers wherever they exist, "a more practical political solution is to create more exemptions to beyond-perimeter flying." This would add to the twenty-four (in practice, twelve round-trip) exemptions, which include US Airways' routes to Phoenix (one of which I am about to take back to Washington). The key, Howlett said, is to make these changes in the pending FAA reauthorization bill, because the perimeter at National is congressionally mandated. US Airways is also interested in increasing beyond-perimeter exemptions at LaGuardia Airport, where it has a focus city operation. At LaGuardia, however, the perimeter is a locally adopted rule which does not require federal action. One of the obstacles to perimeter exemptions is the objections of communities within the perimeter that fear losing service to big West Coast markets. "Our approach would protect small and medium markets within the perimeter," Howlett said. "We would say that an airline could use up to some percentage of its existing slots to fly beyond the perimeter, provided that those flights were taken from large or medium hubs. . . . What we're doing is trying to protect the city that has maybe two flights to DCA. . . . We're building in protections so that communities don't lose service." Howlett offered the example of, say, Delta taking one flight out of the Atlanta market, which would not make much of a difference, to add a flight to Salt Lake City. Besides, he said, there is just not that much demand for nonstop travel from National to the West Coast. A few more exemptions should meet that demand.

In the Middle East, which comes first: growth or airports?

A new paper from the Center for International Private Enterprise at the U.S. Chamber of Commerce examines the development of the aviation sector in the Middle East and North Africa. (Thanks to colleague Mitch Boersma for passing this along.) Jawad Rachami points out that the region's "share of global passenger traffic is expected to hover at less than 6 percent in the next 20 years, and the region's share of the world's total number of flights is expected to remain at about 3.5 percent during the same period," arguing that "[t]his is reflective of MENA's poor integration into the global economy and the weakness of its market and governance institutions." Moreover, the leading node of aviation growth in the region is Dubai, which accounts for nearly three times as much traffic as Cairo, the region's second-largest airport. Rachami identifies four "trajectories" for aviation growth in the region.
The first are “leaders” like Dubai and the other Gulf states, which have invested oil money and sovereign investment revenues into strategic infrastructure investments.

Sunday stumper: optimizing your run from one end of the terminal to the other

Suppose you are trying to get from one end A of a terminal to the other end B. (For simplicity, assume the terminal is a one-dimensional line segment.) Some portions of the terminal have moving walkways (in both directions); other portions do not. Your walking speed is a constant $v$, but while on a walkway, it is boosted by the speed $u$ of the walkway for a net speed of $v+u$. (Obviously, given a choice, one would only take those walkways that are going in the direction one wishes to travel in.) Your objective is to get from A to B in the shortest time possible.

1. Suppose you need to pause for some period of time, say to tie your shoe. Is it more efficient to do so while on a walkway, or off the walkway? Assume the period of time required is the same in both cases.
2. Suppose you have a limited amount of energy available to run and increase your speed to a higher quantity $v'$ (or $v'+u$, if you are on a walkway). Is it more efficient to run while on a walkway, or off the walkway? Assume that the energy expenditure is the same in both cases.
3. Do the answers to the above questions change if one takes into account the various effects of special relativity? (This is of course an academic question rather than a practical one. But presumably it should be the time in the airport frame that one wants to minimise, not time in one’s personal frame.)

[H/T: Greg Mankiw] (A numerical check of questions 1 and 2 appears at the end of this digest.)

Getting our “infrastructure” priorities straight

It’s pretty common knowledge that the United States has for years underinvested in “infrastructure” — from the power grid to physical plants to transportation — and thus one of the first priorities of the next administration should be to devote massive resources to repairing infrastructure. And I’m sure the commuter inching forward on a Dallas interstate on his way home from work or a passenger on a regional jet at LaGuardia groaning as yet another thirty-minute delay is announced would agree. And Barack Obama has endorsed a massive infrastructure spending program in hopes of stimulating the economy. So then — let’s get busy! Where to start? As Bob Poole writes in yesterday’s WSJ, the mayors of 427 cities have helpfully identified more than 11,000 “ready-to-go” infrastructure projects worth $73 billion. Okay, there’s a start. And what kind of projects are these? Poole lays out several of them: a “waterfront duck pond park,” community centers, tennis centers, “life style centers,” a “Grand Central Station” in San Francisco for a rail line that doesn’t exist, and the like. (More “infrastructure” priorities are listed here.) That is, the mayors have, in a recession and what is widely acknowledged as a crisis in infrastructure, presented the taxpayers with a gold-plated wish list. No doubt Congress would be happy to pony up the money in exchange for naming rights. Why are these projects even on the list? For several reasons. First, they’re discrete and local. Highway, airport, and major transit projects often require consultation with and the involvement of multiple authorities, making it harder for the spending to have a quick impact — even if its long-term effect would outweigh that of a duck pond by a factor of, oh, infinity. Another reason might be the “spaghetti” approach: throw it at the wall to see if it sticks.
No harm in trying, right? ask the mayors. (No harm, indeed, except perhaps the derision of a few lowly bloggers.)

Review of new book on aviation infrastructure

On American.com today, I review Aviation Infrastructure Performance: A Study in Comparative Political Economy, edited by Clifford Winston and Gines de Rus. The book, which I highly recommend, includes several reviews of how other countries’ aviation infrastructure sectors have performed under varying levels of privatization — and what lessons could be learned for the United States. Should We Privatize Airports? [The American]
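Returning to the Sunday stumper above: questions 1 and 2 can be sanity-checked numerically. Here is a minimal sketch under assumed numbers (walking speed v = 1 m/s, walkway speed u = 1 m/s, running speed v' = 2 m/s, 50 m each of walkway and plain floor; "same energy" in question 2 is read as "same distance run"; none of these numbers are part of the puzzle itself):

```python
# Sanity check of the walkway stumper with assumed numbers.
v, u, v2 = 1.0, 1.0, 2.0      # walk, walkway boost, run speeds (m/s)
plain = walkway = 50.0        # metres of plain floor and of walkway

# Q1: a 10 s shoe-tying pause. On the walkway you still advance u*10 m.
pause = 10.0
t_pause_on  = plain / v + (walkway - u * pause) / (v + u) + pause
t_pause_off = plain / v + walkway / (v + u) + pause
print(t_pause_on, t_pause_off)   # 80.0 vs 85.0 -> pause ON the walkway

# Q2: enough energy to run for 10 m. Compare running on vs off the walkway.
run = 10.0
t_run_on  = plain / v + (walkway - run) / (v + u) + run / (v2 + u)
t_run_off = (plain - run) / v + run / v2 + walkway / (v + u)
print(t_run_on, t_run_off)       # 73.3 vs 70.0 -> run OFF the walkway
```

The asymmetry is intuitive once seen: pausing costs nothing extra where the floor is already moving you along, while a burst of speed buys the most time where you are slowest.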
2017-03-26 14:57:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22667869925498962, "perplexity": 3217.212056116046}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189242.54/warc/CC-MAIN-20170322212949-00136-ip-10-233-31-227.ec2.internal.warc.gz"}
http://crypto.stackexchange.com/questions/12644/how-do-you-distribute-secret-shares-without-knowing-who-to-first-distribute-them
# How do you distribute secret shares without knowing who to first distribute them to?

Figure 1 on page 1249 of the “Multiparty Computation Secure Against Continual Memory Leakage” paper shows $m$ committees are elected in step 1 and then later in step 3 are each given a secret share. But I was wondering if it is possible to pull out the election of $m$ committees from the pre-processing phase and do it in a different phase (like the online phase)? The problem with the pre-processing phase is that it assumes no leakage, so simplifying it is a step closer to getting rid of it completely. I am essentially wondering if it is possible to distribute secret shares without knowing the committees (i.e. “don't run the election protocol immediately, run it later”)?

- I'm confused, you want to distribute shares w/o knowing who to distribute them to? That seems impossible. Maybe an example usage scenario would help? – mikeazo Dec 30 '13 at 14:47
- @mikeazo - I think what I am asking is if it is possible to compute secret shares and distribute them at a later stage. I think in the paper the authors figure out who the committees are first and then immediately distribute the secret shares after they are computed. I'm wondering if the election could be done at a later (leaky) stage. In general, doing it in a leaky environment is favorable because it makes a better protocol. – user1068636 Dec 30 '13 at 19:09
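For readers who want a concrete picture of the "secret shares" being passed around, here is a minimal sketch of plain Shamir secret sharing (generic background only, not the leakage-resilient scheme from the paper; the prime modulus and the threshold parameters are arbitrary choices):

```python
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus (arbitrary)

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the degree-(k-1) polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(12345, k=3, n=5)
assert reconstruct(shares[:3]) == 12345
```

Note that each share is bound to its evaluation point (here, the committee's index), which is why deferring the election looks impossible at first glance, as the first comment observes.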
2016-06-26 01:03:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.889833390712738, "perplexity": 700.345589289044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00125-ip-10-164-35-72.ec2.internal.warc.gz"}
http://compaland.com/standard-error/what-does-standard-error-of-estimate-tell-us.html
# What Does Standard Error Of Estimate Tell Us

Using these rules, we can apply the logarithm transformation to both sides of the above equation: LOG(Ŷt) = LOG(b0 (X1t ^ b1)(X2t ^ b2)) = LOG(b0) + b1LOG(X1t) + b2LOG(X2t)

About all I can say is: The model fits 14 terms to 21 data points and it explains 98% of the variability of the response data around its mean.

By taking the mean of these values, we can get the average speed of sound in this medium. However, there are so many external factors that can influence the speed of sound… This is expected because if the mean at each step is calculated using a lot of data points, then a small deviation in one value will cause less effect on the final result.

Because of random variation in sampling, the proportion or mean calculated using the sample will usually differ from the true proportion or mean in the entire population. For the runners, the population mean age is 33.87, and the population standard deviation is 9.27.

The 9% value is the statistic called the coefficient of determination.

## Standard Error Of Estimate Interpretation

In this case it might be reasonable (although not required) to assume that Y should be unchanged, on the average, whenever X is unchanged--i.e., that Y should not have an upward or downward trend…

Suppose that my data were "noisier", which happens if the variance of the error terms, $\sigma^2$, were high. (I can't see that directly, but in my regression output I'd likely notice…)

And, if I need precise predictions, I can quickly check S to assess the precision. Therefore, which is the same value computed previously.

Sometimes one variable is merely a rescaled copy of another variable or a sum or difference of other variables, and sometimes a set of dummy variables adds up to a constant…

Another situation in which the logarithm transformation may be used is in "normalizing" the distribution of one or more of the variables, even if a priori the relationships are not known. The standard error is the standard deviation of the Student t-distribution.

Hence, if the sum of squared errors is to be minimized, the constant must be chosen such that the mean of the errors is zero.) In a simple regression model, the…

The data set is ageAtMar, also from the R package openintro from the textbook by Dietz et al.[4] For the purpose of this example, the 5,534 women are the entire population.

## Standard Error Of Estimate Formula

When the standard error is large relative to the statistic, the statistic will typically be non-significant. The researchers report that candidate A is expected to receive 52% of the final vote, with a margin of error of 2%.
This will mask the "signal" of the relationship between $y$ and $x$, which will now explain a relatively small fraction of variation, and makes the shape of that relationship harder to discern.

Here are the probability density curves of $\hat{\beta_1}$ with high and low standard error. It's instructive to rewrite the standard error of $\hat{\beta_1}$ using the mean square deviation, $\text{MSD}(x) = \frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2$.

The standard error estimated using the sample standard deviation is 2.56.

The variance of the dependent variable may be considered to initially have n-1 degrees of freedom, since n observations are initially available (each including an error component that is "free" from…). This means that on the margin (i.e., for small variations) the expected percentage change in Y should be proportional to the percentage change in X1, and similarly for X2.

Specifically, the term standard error refers to a group of statistics that provide information about the dispersion of the values within a set.

S provides important information that R-squared does not. A quantitative measure of uncertainty is reported: a margin of error of 2%, or a confidence interval of 18 to 22. For some statistics, however, the associated effect size statistic is not available.

If you are not particularly interested in what would happen if all the independent variables were simultaneously zero, then you normally leave the constant in the model regardless of its statistical significance.

It will be shown that the standard deviation of all possible sample means of size n=16 is equal to the population standard deviation, σ, divided by the square root of the sample size. The graph below shows the distribution of the sample means for 20,000 samples, where each sample is of size n=16.

That in turn should lead the researcher to question whether the bedsores were developed as a function of some other condition rather than as a function of having heart surgery that…
In a regression model, you want your dependent variable to be statistically dependent on the independent variables, which must be linearly (but not necessarily statistically) independent among themselves. You can see that in Graph A, the points are closer to the line than they are in Graph B. If instead of $\sigma$ we use the estimate $s$ we calculated from our sample (confusingly, this is often known as the "standard error of the regression" or "residual standard error") we You might go back and look at the standard deviation table for the standard normal distribution (Wikipedia has a nice visual of the distribution). The standard error of the estimate is a measure of the accuracy of predictions. Small differences in sample sizes are not necessarily a problem if the data set is large, but you should be alert for situations in which relatively many rows of data suddenly This interval is a crude estimate of the confidence interval within which the population mean is likely to fall. The standard errors of the coefficients are the (estimated) standard deviations of the errors in estimating them. A low t-statistic (or equivalently, a moderate-to-large exceedance probability) for a variable suggests that the standard error of the regression would not be adversely affected by its removal. Our global network of representatives serves more than 40 countries around the world.
2017-04-27 20:40:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8548345565795898, "perplexity": 426.0875040403942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122621.35/warc/CC-MAIN-20170423031202-00046-ip-10-145-167-34.ec2.internal.warc.gz"}
https://docs.deistercloud.com/content/Axional%20development%20products.15/Axional%20Studio.4/Development%20Guide.10/Languages%20reference.16/XSQL%20Script.10/Packages.40/ftp/ftp.pwd.xml?embedded=true
Gets the name of the current directory (working directory).

# 1 ftp.pwd

<ftp.pwd />

#### Exceptions

do not activate ftp connection
It is not possible to communicate with the FTP server because no connection has been established.

Example

Print the name of the working directory on screen.

<xsql-script name='ftp_pwd_sample1'>
    <body>
        <ftp host='192.168.10.1' user='ftpuser' password='ftpdeister'>
            <println><ftp.pwd /></println>
        </ftp>
    </body>
</xsql-script>

The directory name displayed in the console would be:

/
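For comparison only (this is Python's standard-library ftplib, not part of the Axional XSQL package), the same operation looks like this, reusing the host and credentials from the example above:

```python
from ftplib import FTP

# Connect, log in, and print the working directory, mirroring <ftp.pwd />.
with FTP("192.168.10.1") as ftp:
    ftp.login(user="ftpuser", passwd="ftpdeister")
    print(ftp.pwd())   # e.g. "/"
```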
2020-07-14 02:21:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38408955931663513, "perplexity": 7652.368484824542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657147917.99/warc/CC-MAIN-20200714020904-20200714050904-00216.warc.gz"}
http://math.stackexchange.com/questions/749300/how-to-express-each-element-in-a-field-f-as-a-power-of-a-primitive-element
# How to express each element in a field F as a power of a primitive element? [closed]

I have a field F(2^4) and it is represented as a residue ring of the polynomials over $\mathbb{F}_2$ modulo the polynomial $\beta^4+\beta^3+\beta^2+\beta+1$. I want to express each element in this field as a power of the primitive element $\beta+1$. My questions are: 1) What are the elements of the field? 2) How can they be expressed as powers of the given primitive element? Any hint will be very helpful.

## closed as off-topic by Dilip Sarwate, Claude Leibovici, T. Bongers, John, egreg Apr 13 '14 at 9:58

This question has been asked and answered on crypto.SE where the OP has already accepted an answer. He even made the same mistake: computing $g^{15}$ and finding that it equals $g^8$ in the comments there. – Dilip Sarwate Apr 13 '14 at 3:06

To construct $\mathbb{F}_{16}$ first you need to find a degree $4$ polynomial which is irreducible over $\mathbb{F}_2.$ The only irreducible quadratic is $x^2+x+1$ so any degree $4$ polynomial which is not $(x^2+x+1)^2 = x^4+x^2+1$ and does not have a root in $\mathbb{F}_2$ is irreducible (think about this). The polynomial $x^4+x+1$ works well for us then: $$\mathbb{F}_{16} = \frac{ \mathbb{F}_2[x] }{ ( x^4+x+1) } .$$ So the elements of $\mathbb{F}_{16}$ are polynomials in $\mathbb{F}_2$ modulo the ideal $(x^4+x+1).$ Given any polynomial, you can apply the division algorithm to get a unique coset representation of the form $a+bx+cx^2+dx^3 + (x^4+x+1),$ and all of these are possible so this describes the elements of the field.

The group of units of this field has order $15,$ and the primitive elements are precisely those with order $15.$ The order of an element must divide the order of the group, so if we can find an element which does not have order $1,$ $3$ or $5$ then it is a primitive element. A quick calculation shows that $x$ satisfies this.

Now to get every element as a power of the primitive element: Don't start with an element like $x+x^2+x^3$ and try to recognize it as a power of $x.$ Rather, compute and simplify (using $x^4=x+1$) all powers of $x.$ So the elements are \begin{align} 0,& \ 1,\ x,\ x^2,\ x^3 \\ x^4 &= x+1, \\ x^5 &= x^2+x, \\ x^6 &= x^3+x^2, \\ x^7 &= x^4+x^3 = x^3+x+1, \\ x^8 &= x^4+x^2+x = x^2+1, \\ x^9 &= x^3+x, \\ x^{10} &= x^4+x^2 = x^2+x+1, \\ x^{11} &= x^3+x^2+x, \\ x^{12} &= x^4+x^3+x^2 = x^3+x^2 +x+1, \\ x^{13} &= x^4+x^3+x^2+x = x^3+x^2+1, \\ x^{14} &= x^4+x^3+x = x^3+1. \end{align}

- I think the OP already has a polynomial: $\beta^4+\beta^3+\beta^2+\beta+1$. – TonyK Apr 11 '14 at 9:21
- @TonyK I wasn't sure what OP meant. Hopefully my answer will still help them figure out how to do their question. – Ragib Zaman Apr 11 '14 at 9:25
- @RagibZaman Amazing explanation, sir. Yes, your answer makes it clear how to do it, though as TonyK said, it has the polynomial $\beta^4+\beta^3+\beta^2+\beta+1$. But your answer helps me to understand the concept, which suffices my needs :) Thanks again. – kingmakerking Apr 11 '14 at 11:47
- @RagibZaman I figured out that $g^{15} = g^8$... Is it something weird? Am I doing something wrong or is it possible? I did not try past $g^{15}$. Here my $g = \beta+1$. – kingmakerking Apr 11 '14 at 15:12
- @sidnext2none You have definitely done something wrong. First exercise: Think of some reasons why $g^{15}=g^8$ should not be true. Second: Re-do your working.
How did you compute $g^{15}$? Compute $g^5$ and then cube that; you should be getting $1$. If you don't, then something has already gone wrong in the computation of $g^5.$ If that doesn't fix it, edit your question to include your working and we will try to spot your mistake. – Ragib Zaman Apr 11 '14 at 15:57
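The arithmetic in this thread is easy to check by machine. A small sketch (the bitmask encoding is just one convenient representation) that computes the successive powers of $g = \beta + 1$ modulo the OP's polynomial $\beta^4+\beta^3+\beta^2+\beta+1$ over $\mathbb{F}_2$:

```python
# Elements of F_2[b]/(b^4+b^3+b^2+b+1) as bitmasks: bit i <-> coefficient of b^i.
M = 0b11111                       # b^4 + b^3 + b^2 + b + 1

def clmul(a, b):
    """Carry-less (GF(2)) polynomial multiplication."""
    r, i = 0, 0
    while b >> i:
        if (b >> i) & 1:
            r ^= a << i
        i += 1
    return r

def reduce_mod(p, m=M):
    """Reduce the polynomial p modulo m over GF(2)."""
    while p.bit_length() >= m.bit_length():
        p ^= m << (p.bit_length() - m.bit_length())
    return p

g = 0b0011                        # b + 1
e = 1
for k in range(1, 16):
    e = reduce_mod(clmul(e, g))
    print(f"g^{k:2} = {e:04b}")
```

Running this, the fifteen powers are pairwise distinct and $g^{15} = 1$, confirming that $\beta+1$ is primitive and that $g^{15} = g^8$ can only come from an arithmetic slip.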
2015-02-01 08:00:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.999842643737793, "perplexity": 465.8545572288713}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115855897.0/warc/CC-MAIN-20150124161055-00021-ip-10-180-212-252.ec2.internal.warc.gz"}
https://smartmobilityalgorithms.github.io/book/content/GraphSearchAlgorithms/RoadGraph.html
# From Road Network to Graph

This section contains a non-exhaustive list of operations on geospatial data that you should familiarize yourself with. More information can be found by consulting the Tools and Python Libraries page or the respective library’s API documentation.

## Creating a Graph from a named place

Mathematically speaking, a graph can be represented by $$G$$, where $$G=(V,E)$$. For a graph $$G$$, vertices are represented by $$V$$, and edges by $$E$$. Each edge is a tuple $$(v,w)$$, where $$v, w \in V$$. Weight can be added as a third component to the edge tuple. In other words, graphs consist of 3 sets:

• vertices/nodes
• edges
• a set representing relations between vertices and edges

The nodes represent intersections, and the edges represent the roads themselves. A route is a sequence of edges connecting the origin node to the destination node.

osmnx can convert a text descriptor of a place into a networkx graph. Let’s use the University of Toronto as an example:

import osmnx

place_name = "University of Toronto"

# networkx graph of the named place
graph = osmnx.graph_from_place(place_name)
osmnx.plot_graph(graph)

(<Figure size 720x720 with 1 Axes>, <AxesSubplot:>)

The graph shows edges and nodes of the road network surrounding the University of Toronto’s St. George (downtown Toronto) campus. While it may look visually interesting, it extends a bit too far off campus, and lacks the context of the street names and other geographic features. Let’s restrict the scope of the network to 300 meters around the university, and use a folium map as a baselayer. We will discuss more about folium later in this section.

graph = osmnx.graph_from_address(place_name, dist=300)
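Since the result is an ordinary networkx graph, routing works with networkx's own algorithms. A small sketch; the two endpoint nodes below are picked arbitrarily, purely for illustration:

```python
import networkx as nx

# Intersections are nodes, road segments are edges.
print(len(graph.nodes), "nodes,", len(graph.edges), "edges")

# A route is a node (and hence edge) sequence from origin to destination.
# osmnx stores edge lengths in metres under the "length" attribute.
orig, dest = list(graph.nodes)[0], list(graph.nodes)[-1]
route = nx.shortest_path(graph, orig, dest, weight="length")
print(route)
```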
2023-03-28 12:14:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.341637521982193, "perplexity": 1536.111373359501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948858.7/warc/CC-MAIN-20230328104523-20230328134523-00280.warc.gz"}
https://www.scm.com/doc/DFTB/Spectroscopy_and_Properties.html
# Spectroscopy and properties

## Excited states with time dependent DFTB

DFTB allows for excited state calculations on molecular systems by means of single orbital transitions as well as time-dependent DFTB as published by Niehaus et al. in Phys. Rev. B 63, 085108 (2001). Singlet-singlet as well as singlet-triplet excitations can be calculated. DFTB also supports the calculation of excited state gradients, which allows geometry optimizations and vibrational frequency calculations for excited states. The TD-DFTB implementation uses the PRIMME library (PReconditioned Iterative MultiMethod Eigensolver) by Andreas Stathopoulos and James R. McCombs, PRIMME: PReconditioned Iterative MultiMethod Eigensolver: Methods and software description, ACM Transactions on Mathematical Software Vol. 37, No. 2, (2010), 21:1–21:30.

DFTB excited state calculations are controlled by the following keywords:

Properties
   Excitations
      SingleOrbTrans
         Enabled [True | False]
         Filter
            OSMin float
            dEMax float
            dEMin float
         End
         PrintLowest integer
      End
      TDDFTB
         Calc [None | Singlet | Triplet]
         DavidsonConfig
            ATCharges [Precalc | OnTheFly]
            SafetyMargin integer
            Tolerance float
         End
         Diagonalization [Auto | Davidson | Exact]
         Lowest integer
         Print string
         UpTo float
      End
      TDDFTBGradients
         Eigenfollow [True | False]
         Excitation integer
      End
   End
End

Properties
Type: Block
DFTB can calculate various properties of the simulated system. This block configures which properties will be calculated.

Excitations
Type: Block
Contains all options related to the calculation of excited states, either as simple single orbital transitions or from a TD-DFTB calculation.

SingleOrbTrans
Type: Block
The simplest approximation to the true excitations are the single orbital transitions (sometimes called Kohn-Sham transitions), that is transitions where a single electron is excited from an occupied Kohn-Sham orbital into a virtual orbital. The calculation of these transitions is configured in this section. Note that the SingleOrbTrans section is optional even though the single orbital transitions are also needed for TD-DFTB calculations. If the section is not present all single orbital transitions will still be calculated and used in a subsequent TD-DFTB calculation, but no output will be produced.

Enabled
Type: Bool
Default: False
Calculate the single orbital transitions.

Filter
Type: Block
This section allows removing single orbital transitions based on certain criteria. All filters are disabled by default.

OSMin
Type: Float
Removes single orbital transitions with an oscillator strength smaller than this threshold. A typical value to start (if used at all) would be 1.0e-3.

dEMax
Type: Float
Unit: Hartree
Removes single orbital transitions with an orbital energy difference larger than this threshold.

dEMin
Type: Float
Unit: Hartree
Removes single orbital transitions with an orbital energy difference smaller than this threshold.

PrintLowest
Type: Integer
Default: 10
The number of single orbital transitions that are printed to the screen and written to disk. If not a TD-DFTB calculation, the default is to print the 10 lowest single orbital transitions. In case of a TD-DFTB calculation it is assumed that the single orbital transitions are only used as an input for TD-DFTB and nothing will be printed unless PrintLowest is specified explicitly.

TDDFTB
Type: Block
Calculations with time-dependent DFTB can be configured in the TDDFTB section and should in general give better results than the raw single orbital transitions.
TD-DFTB calculates the excitations in the basis of the single orbital transitions, whose calculation is configured in the SingleOrbTrans section. Using a filter in SingleOrbTrans can therefore be used to reduce the size of the basis for TD-DFTB. One possible application of this is to accelerate the calculation of electronic absorption spectra by removing single orbital transitions with small oscillator strengths from the basis. Note that the entire TDDFTB section is optional. If no TDDFTB section is found, the behavior depends on the existence of the SingleOrbTrans section: If no SingleOrbTrans section is found (the Excitations section is completely empty then) a TD-DFTB calculation with default parameters will be performed. If only the SingleOrbTrans section is present, no TD-DFTB calculation will be done.

Calc
Type: Multiple Choice
Default: None
Options: [None, Singlet, Triplet]
Specifies the multiplicity of the excitations to be calculated.

DavidsonConfig
Type: Block
This section contains a number of keywords that can be used to override various internals of the Davidson eigensolver. The default values should generally be fine.

ATCharges
Type: Multiple Choice
Default: Precalc
Options: [Precalc, OnTheFly]
Select whether the atomic transition charges are precalculated in advance or reevaluated during the iterations of the Davidson solver. Precalculating the charges will improve the performance, but requires additional storage. The default is to precalculate the atomic transition charges, but the precalculation may be disabled if not enough memory is available.

SafetyMargin
Type: Integer
Default: 4
The number of eigenvectors the Davidson method will calculate in addition to the ones requested by the user. With the Davidson eigensolver it is generally a good idea to calculate a few more eigenvectors than needed, as depending on the initial guess for the eigenvectors it can happen that the found ones are not exactly the lowest ones. This problem is especially prominent if one wants to calculate only a small number of excitations for a symmetric molecule, where the initial guesses for the eigenvectors might have the wrong symmetry. Note that the additionally calculated excitations will neither be written to the result file nor be visible in the output.

Tolerance
Type: Float
Default: 1e-09
Convergence criterion for the norm of the residual.

Diagonalization
Type: Multiple Choice
Default: Auto
Options: [Auto, Davidson, Exact]
Select the method used to solve the TD-DFTB eigenvalue equation. The most straightforward procedure is a direct diagonalization of the matrix from which the excitation energies and oscillator strengths are obtained. Since the matrix grows quickly with system size (number of used single orbital transitions squared), this option is possible only for small molecules. The alternative is the iterative Davidson method, which finds a few of the lowest excitations within an error tolerance without ever storing the full matrix. The default is to make this decision automatically based on the system size and the requested number of excitations.

Lowest
Type: Integer
Default: 10
Specifies the number of excitations that are calculated. Note that in case of the exact diagonalization all excitations are calculated, but only the lowest ones are printed to screen and written to the output file. Also note that if limited both by number and by energy (Lowest and UpTo), DFTB will always use whatever results in the smaller number of calculated excitations.
Print
Type: String
Specifies whether to print details on the contribution of the individual single orbital transitions to the calculated excitations.

UpTo
Type: Float
Unit: Hartree
Set the maximum excitation energy. Attempts to calculate all excitations up to a given energy by calculating a number of excitations equal to the number of single orbital transitions in this window. This is only approximately correct, so one should always add some safety margin. Note that if limited both by number and by energy (Lowest and UpTo), DFTB will always use whatever results in the smaller number of calculated excitations.

TDDFTBGradients
Type: Block
This block configures the calculation of analytical gradients for the TD-DFTB excitation energies, which allows the optimization of excited state geometries and the calculation of vibrational frequencies in excited states (see J. Comput. Chem., 28: 2589-2601). If the gradients are calculated, they will automatically be used for geometry optimizations or vibrational frequency calculations, if the corresponding Task is selected. Vibrationally resolved UV/Vis spectroscopy (Franck-Condon Factors) can be calculated in combination with the FCF program. See the ADF documentation on Vibrationally resolved electronic spectra.

Eigenfollow
Type: Bool
Default: False
If this option is set, DFTB uses the transition density in atomic orbital basis to follow the initially selected excited state during a geometry optimization. This is useful if excited state potential energy surfaces cross each other and you want to follow the surface you started on.

Excitation
Type: Integer
Default: 1
Select which excited state to calculate the gradients for. Gradients can only be calculated for an excited state that has been calculated using TD-DFTB. Make sure that enough excitations are calculated.

## System properties

DFTB can calculate various properties of the simulated system.

Properties
   DipoleMoment [True | False]
   BondOrders [True | False]
   NBOInput [True | False]
   VCD [True | False]
End

Properties
Type: Block
DFTB can calculate various properties of the simulated system. This block configures which properties will be calculated.

DipoleMoment
Type: Bool
Default: True
Whether or not the electric dipole moment is calculated from the calculated Mulliken charges. While it is technically possible to calculate the dipole moment with the DFTB0 model, it is not recommended and the SCC-DFTB or DFTB3 model should be used instead. For periodic systems the dipole moment is ill-defined and should not be interpreted.

BondOrders
Type: Bool
Default: False
Whether or not Mayer bond orders are calculated based on the final molecular orbitals.

NBOInput
Type: Bool
Default: False
Whether or not an input file for the NBO program is written to disk as nboInput.FILE47. The input file follows the FILE47 format as described in the NBO6 manual available on nbo6.chem.wisc.edu. By default, only the calculation of the natural bond orbitals and the natural localized molecular orbitals is enabled, but the nboInput.FILE47 file can be edited by hand to enable other analysis models. Please refer to the NBO6 manual for details.

VCD
Type: Bool
Default: False
Calculate the VCD spectrum after calculating the IR spectrum. Note: symmetry must be set to NOSYM.

## Frequencies, phonons and elastic tensor

Frequencies and phonons can be computed via numerical differentiation by the AMS driver. See the Normal Modes section or the Phonon section of the AMS manual.
Several thermodynamic properties, such as zero-point energy, internal energy, entropy, free energy and specific heat, are computed by default when calculating phonons. The elastic tensor (and related elastic properties such as bulk modulus, shear modulus and Young's modulus) can be computed via numerical differentiation by AMS. See the Elastic Tensor section of the AMS manual.
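As an illustration only (this fragment is assembled from the keywords documented above, not copied from a shipped example), an input requesting the ten lowest singlet excitations together with the gradient of the first excited state might look like:

```
Properties
   Excitations
      TDDFTB
         Calc Singlet
         Lowest 10
      End
      TDDFTBGradients
         Excitation 1
      End
   End
End
```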
2019-03-23 12:22:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5680453181266785, "perplexity": 1770.155876221518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202804.80/warc/CC-MAIN-20190323121241-20190323143241-00163.warc.gz"}
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.savefig.html
# matplotlib.pyplot.savefig

matplotlib.pyplot.savefig(*args, **kwargs)[source]

Save the current figure.

Call signature:

savefig(fname, dpi=None, facecolor='w', edgecolor='w',
        orientation='portrait', papertype=None, format=None, ...)

Parameters:

fname : str or path-like or file-like
    A path, or a Python file-like object, or possibly some backend-dependent object such as matplotlib.backends.backend_pdf.PdfPages. If format is set, it determines the output format, and the file is saved as fname. Note that fname is used verbatim, and there is no attempt to make the extension, if any, of fname match format, and no extension is appended. If format is not set, then the format is inferred from the extension of fname, if there is one. If format is not set and fname has no extension, then the file is saved with rcParams["savefig.format"] (default: 'png') and the appropriate extension is appended to fname.

dpi : float or 'figure', default: rcParams["savefig.dpi"] (default: 'figure')
    The resolution in dots per inch. If 'figure', use the figure's dpi value.

quality : int, default: rcParams["savefig.jpeg_quality"] (default: 95)
    Applicable only if format is 'jpg' or 'jpeg', ignored otherwise. The image quality, on a scale from 1 (worst) to 95 (best). Values above 95 should be avoided; 100 disables portions of the JPEG compression algorithm, and results in large files with hardly any gain in image quality. This parameter is deprecated.

optimize : bool, default: False
    Applicable only if format is 'jpg' or 'jpeg', ignored otherwise. Whether the encoder should make an extra pass over the image in order to select optimal encoder settings. This parameter is deprecated.

progressive : bool, default: False
    Applicable only if format is 'jpg' or 'jpeg', ignored otherwise. Whether the image should be stored as a progressive JPEG file. This parameter is deprecated.

facecolor : color or 'auto', default: rcParams["savefig.facecolor"] (default: 'auto')
    The facecolor of the figure. If 'auto', use the current figure facecolor.

edgecolor : color or 'auto', default: rcParams["savefig.edgecolor"] (default: 'auto')
    The edgecolor of the figure. If 'auto', use the current figure edgecolor.

orientation : {'landscape', 'portrait'}
    Currently only supported by the postscript backend.

papertype : str
    One of 'letter', 'legal', 'executive', 'ledger', 'a0' through 'a10', 'b0' through 'b10'. Only supported for postscript output.

format : str
    The file format, e.g. 'png', 'pdf', 'svg', ... The behavior when this is unset is documented under fname.

transparent : bool
    If True, the axes patches will all be transparent; the figure patch will also be transparent unless facecolor and/or edgecolor are specified via kwargs. This is useful, for example, for displaying a plot on top of a colored background on a web page. The transparency of these patches will be restored to their original values upon exit of this function.

bbox_inches : str or Bbox, default: rcParams["savefig.bbox"] (default: None)
    Bounding box in inches: only the given portion of the figure is saved. If 'tight', try to figure out the tight bbox of the figure.

pad_inches : float, default: rcParams["savefig.pad_inches"] (default: 0.1)
    Amount of padding around the figure when bbox_inches is 'tight'.

bbox_extra_artists : list of Artist, optional
    A list of extra artists that will be considered when the tight bbox is calculated.

backend : str, optional
    Use a non-default backend to render the file, e.g. to render a png file with the "cairo" backend rather than the default "agg", or a pdf file with the "pgf" backend rather than the default "pdf".
    Note that the default backend is normally sufficient. See The builtin backends for a list of valid backends for each file format. Custom backends can be referenced as "module://...".

metadata : dict, optional
    Key/value pairs to store in the image metadata. The supported keys and defaults depend on the image format and backend:
    - 'png' with Agg backend: See the parameter metadata of print_png.
    - 'pdf' with pdf backend: See the parameter metadata of PdfPages.
    - 'svg' with svg backend: See the parameter metadata of print_svg.
    - 'eps' and 'ps' with PS backend: Only 'Creator' is supported.

pil_kwargs : dict, optional
    Additional keyword arguments that are passed to PIL.Image.Image.save when saving the figure.
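A brief usage sketch tying several of the documented parameters together (the file names are arbitrary):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])

# High-resolution PNG cropped to the tight bounding box, transparent
# background, with a little PNG metadata.
fig.savefig("plot.png", dpi=300, bbox_inches="tight", transparent=True,
            metadata={"Title": "demo"})

# With no explicit format, the format is inferred from the extension.
fig.savefig("plot.svg")
```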
2021-03-08 15:44:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2727488577365875, "perplexity": 11848.009889829154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385378.96/warc/CC-MAIN-20210308143535-20210308173535-00221.warc.gz"}
http://math.stackexchange.com/questions/189677/counter-examples-in-measure-theory-and-set-topology
# counter-examples in measure theory and set topology

The boundary of a subset of Euclidean space has empty interior, and furthermore has Lebesgue measure zero. Well, this is generally not true, but I can't find an explicit counter-example right now. Similarly, in set topology: please show that there exists a metric space such that the closure of the open ball of radius r is not the closed ball of radius r in that metric space.

Motivation: Actually, there are many such kinds of examples in measure theory and set topology which are not consistent with our intuitions. I find that clarifying these fuzzy definitions or false beliefs may sometimes be very helpful.

$\mathbb{Q}, \mathbb{Z}$ – Nate Eldredge Sep 1 '12 at 15:34

An example of a subset of $\mathbb{R}$ whose boundary has positive Lebesgue measure is a "fat Cantor set". Mimic the usual construction of Cantor's middle-thirds set, but at the $n$th stage remove, say, the middle $(1/4)^n$ from each of the remaining sub-intervals. This will result in a totally disconnected perfect set of positive Lebesgue measure. As it is its own boundary, its boundary has positive Lebesgue measure. (Of course, as it is nowhere dense, its boundary still has empty interior.)

To get a subset of $\mathbb{R}$ whose boundary has nonempty interior, this set must have the property that both it and its complement are dense in some open interval. The rationals (or the irrationals) fit this purpose quite well.

A simple example of a metric space in which the closure of a ball with radius $r$ is not the closed ball of radius $r$ is to take the discrete metric on any set $X$ (with at least two elements): $$d(x,y) = \begin{cases} 0, &\text{if }x = y\\ 1, &\text{if }x \neq y.\end{cases}$$ Then for $x \in X$ we have that $\overline{B_1 (x)} = \overline{\{ x \}} = \{ x \} \neq X = \overline{B_1} (x)$.
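A quick check that the fat Cantor construction really leaves positive measure, reading "$(1/4)^n$" as the absolute length removed from each of the $2^{n-1}$ surviving sub-intervals at stage $n$ (one reasonable reading of the recipe above): the total length removed from $[0,1]$ is $$\sum_{n=1}^{\infty} 2^{n-1}\left(\frac{1}{4}\right)^{n} = \frac{1}{2}\sum_{n=1}^{\infty}\left(\frac{1}{2}\right)^{n} = \frac{1}{2},$$ so the remaining set has Lebesgue measure $1 - \frac{1}{2} = \frac{1}{2} > 0$.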
2015-08-28 02:56:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9335961937904358, "perplexity": 230.0103044961111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644060173.6/warc/CC-MAIN-20150827025420-00218-ip-10-171-96-226.ec2.internal.warc.gz"}
https://culturecounts.cc/support/step-4-report-results/reporting-dashboard
# Support Hub

## Reporting dashboard

### The summary page

The Summary page displays a data snapshot of your Evaluation or Survey, including the number of responses collected and dimensions used. It also summarises the results of dimensions collected to date, in two ways:

Dimension averages
This chart shows the average score of all responses out of 100. That is, the average strength of agreement across the full sample.

Stacked level of agreement
This chart breaks all the responses up into five buckets: ‘Strongly disagree’, ‘Disagree’, ‘Neutral’, ‘Agree’ and ‘Strongly agree’. To draw insights from this chart, you could add the ‘Strongly agree’ and ‘Agree’ percentages to calculate the proportion of respondents that have agreed with the dimension statement.
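A hypothetical sketch of how the two summaries could be computed from raw scores; the 0–100 scale and the five equal 20-point bands are assumptions for illustration, not documented Culture Counts internals:

```python
# Hypothetical dimension scores out of 100 for one dimension.
scores = [82, 45, 90, 55, 38, 71, 64, 88, 12, 76]

average = sum(scores) / len(scores)   # the "Dimension averages" value

# "Stacked level of agreement": bucket each score into one of five bands.
labels = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]
buckets = {label: 0 for label in labels}
for s in scores:
    buckets[labels[min(s // 20, 4)]] += 1

# The "top-box" insight described above: Agree + Strongly agree.
top_box = (buckets["Agree"] + buckets["Strongly agree"]) / len(scores)
print(average, buckets, f"{top_box:.0%}")
```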
2021-10-18 04:53:23
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8077804446220398, "perplexity": 5089.642856062234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585196.73/warc/CC-MAIN-20211018031901-20211018061901-00426.warc.gz"}
https://topblogtenz.com/how-to-find-pka-from-ka/
# How to calculate pKa from Ka? - Ka to pKa, Conversion, Formulas, Equations

pKa is the negative logarithm of the acid dissociation constant (Ka). It is a more convenient way of measuring the strength of an acidic solution than Ka. You will learn in this article how to calculate the pKa value from Ka, i.e., the Ka to pKa conversion, by applying a very simple but quite valuable chemical formula. So without any further delay, dive into the article, and let's start reading!

## What is pKa?

The prefix p in pKa stands for power. Just like pH determines the power of hydrogen ions present in an aqueous solution, pKa measures the strength of an acidic solution as the power of the acid dissociation constant (Ka). pKa is calculated by taking the negative logarithm to the base 10 of the Ka value for the acid, as shown in equation (i).

pKa = -log10 Ka …………. Equation (i)

Weak organic acids have greater pKa values than strong mineral acids. pKa is related to pKb, i.e., the base dissociation constant, for an aqueous solution as shown in equation (ii).

pKa + pKb = pKw ……. Equation (ii)

Equation (ii) can be rewritten as equation (iii) by substituting the value of pKw, which stays fixed at room temperature and atmospheric pressure.

pKw = water dissociation constant = 14 (at 25°C)

pKa + pKb = 14 ……. Equation (iii)

You may note that Ka measures the strength of the acid itself, while pKa is an attribute associated with the strength of the aqueous solution that the acid forms upon ionization.

## What is Ka?

Ka stands for acid dissociation constant. The ionization equilibrium for the dissociation of a weak acid (HA) in an aqueous solution is represented as follows:

HA + H2O ⇌ H3O+ + A–

The acid dissociation constant (Ka) for the above reaction can be represented as equation (iv).

$Ka = \frac{[H_{3}O^{+}][A^{-}]}{[HA][H_{2}O]}$ ………. Equation (iv)

Where;

• [H3O+] = concentration of hydronium ions formed in the aqueous solution
• [A–] = concentration of the conjugate base of the acid
• [HA] = acid concentration at equilibrium
• [H2O] = concentration of water

As the water concentration stays constant throughout the reaction, while [H3O+] = [H+], i.e., the concentration of H+ ions released in the aqueous solution, equation (iv) can be rearranged as equation (v).

$Ka = \frac{[H^{+}][A^{-}]}{[HA]}$ ………. Equation (v)

The greater the strength of an acid, the higher the Ka value for its aqueous solution, and vice versa.

## What is the relationship between pKa and Ka?

The equation pKa = -log10 Ka tells us that pKa and Ka are inversely proportional to each other. A higher Ka value results in a lower pKa value and vice versa. This implies that a strong acid, which dissociates to a large extent in an aqueous solution, possesses a higher Ka value but a smaller pKa. So pKa is also inversely related to acidic strength.

## How to find pKa from Ka? – (Ka to pKa conversion)

pKa and Ka are interconvertible. The value of pKa can be easily calculated if the value of Ka is known, using equation (i), which is pKa = -log10 Ka. We can substitute the value of Ka into equation (i) and take the negative logarithm of this value to find pKa.

pKa = -log10 Ka …………. Equation (i)

It was highlighted at the beginning of the article that pKa is a more convenient way of measuring the acidity of a solution as compared to Ka. This is because Ka for an acid is usually quoted in magnitudes of 10 raised to some power x, while pKa is a small numerical value, as the short Python check below illustrates.
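Both conversion formulas are one-liners in practice. A short sketch using the article's own acetic-acid figures:

```python
import math

Ka = 1.58e-5                 # acetic acid, from the text
pKa = -math.log10(Ka)        # pKa = -log10(Ka)
print(round(pKa, 2))         # -> 4.8

Ka_back = 10 ** -pKa         # the inverse conversion: Ka = 10^(-pKa)
print(f"{Ka_back:.2e}")      # -> 1.58e-05
```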
For instance, the Ka for acetic acid is 1.58 x 10^-5; contrarily, its pKa is just 4.80. Ka values are always positive. However, pKa values can be both positive and negative. We have provided you with the following solved examples through which you can learn how to calculate pKa from Ka, i.e., the Ka to pKa conversion, by practically using the formulas given in this article.

## Solved examples of determining pKa when Ka is given

Example #1: The acid dissociation constant (Ka) value for propanoic acid (CH3CH2COOH) is 1.34 x 10^-5. What is its pKa value?

CH3CH2COOH is a weak acid that partially ionizes to yield H+ and CH3CH2COO– ions in water. As the Ka value for CH3CH2COOH is given in the question statement, we can find its pKa by applying the equation as shown below.
∴ pKa = -log10 Ka
∴ pKa = -log10 (1.34 x 10^-5) = 4.87
Result: The pKa value for propanoic acid (CH3CH2COOH) is 4.87.

Example #2: The acid dissociation constant (Ka) value for nitric acid (HNO3) is 2.4 x 10^1. What is its pKa value?

HNO3 is a strong acid that completely ionizes to give H+ and NO3– ions in an aqueous solution. As the Ka value for HNO3 is given in the question statement, we can find its pKa by applying the equation as shown below.
∴ pKa = -log10 Ka
∴ pKa = -log10 (2.4 x 10^1) = -1.38
Result: The pKa value for nitric acid (HNO3) is -1.38. A negative pKa signifies that HNO3 is an extremely strong acid. The proton is only weakly held by the acid, which readily liberates it in an aqueous solution.

Example #3: A chemist prepared a 0.01 M butyric acid solution in his lab. How can you help the chemist find the pKa of the acidic solution if its pH at the equilibrium point is reported in the literature as pH = 4.63?

Butyric acid, also known as butanoic acid (CH3CH2CH2COOH), is a weak acid that partially dissociates to release H+ and butanoate (CH3CH2CH2COO–) ions in an aqueous solution. As the pH of the solution is given in the question statement, we can find its [H+] at equilibrium using equation (vi).
∴ [H+]equilibrium = 10^-pH ………. Equation (vi)
∴ [H+]equilibrium = 10^-4.63 = 2.34 x 10^-5 M
As per the balanced chemical equation shown above;
⇒ [H+]equilibrium = [CH3CH2CH2COO–]equilibrium
So [CH3CH2CH2COO–]equilibrium = [H+]equilibrium = 2.34 x 10^-5 M.
The initial butyric acid concentration is also given in the question, so
[CH3CH2CH2COOH]equilibrium = [CH3CH2CH2COOH]initial – [CH3CH2CH2COO–]equilibrium = 0.01 – (2.34 x 10^-5) = 9.98 x 10^-3 M.
Substituting these values into equation (v) gives the acid dissociation constant:
∴ Ka = (2.34 x 10^-5)² / (9.98 x 10^-3) = 5.49 x 10^-8
Now that we know the acid dissociation constant (Ka) value, we can easily find pKa by applying equation (i).
∴ pKa = -log10 Ka …………. Equation (i)
∴ pKa = -log10 (5.49 x 10^-8) = 7.26
Result: The pKa value for the butyric acid solution is 7.26. A comparatively high pKa denotes a low acidic strength.

Example #4: The Ka value for a weak acid is given to be 2.33 x 10^-11. Which of the following options provides the correct pKa value for its acidic solution?
A) 11.23    B) 10.93    C) 10.63    D) 11.83    E) 9.93

Answer: Option C (pKa = 10.63) is the correct answer.
Explanation: pKa can be determined from the already given Ka value by applying equation (i).
pKa = -log10 Ka …………. Equation (i)
pKa = -log10 (2.33 x 10^-11) = 10.63

Example #5: A 0.025 M solution of a weak acid dissociates to produce 0.005 mol/L hydrogen ions at the equilibrium stage. Find its pKa.
A weak acid (HA) partially dissociates to liberate H⁺ and A⁻ ions in water. The concentration of hydrogen ions released at equilibrium is given in the question statement, i.e., [H⁺]equilibrium = 0.005 M.
∴ [H⁺]equilibrium = [A⁻]equilibrium = 0.005 M
The original concentration of HA is also given in the question statement, so:
[HA]equilibrium = [HA]initial – [H⁺]equilibrium = 0.025 – 0.005 = 0.02 M
Substituting these values into equation (v) gives the acid dissociation constant:
Ka = [H⁺][A⁻]/[HA] = (0.005)² / 0.02 = 1.25 × 10⁻³
As a final step, we substitute this value into equation (i) to find the required pKa:
∴ pKa = -log₁₀Ka ………. Equation (i)
∴ pKa = -log₁₀(1.25 × 10⁻³) = 2.90
Result: The pKa value of the given acidic solution is 2.90.

## FAQ

### What is Ka?

Ka stands for acid dissociation constant. A weak acid (HA) partially dissociates to produce H⁺ and A⁻ ions in water. H⁺ ions combine with H₂O molecules to form hydronium (H₃O⁺) ions. A⁻ is known as the conjugate base of the acid; HA and A⁻ together are known as a conjugate acid-base pair. The equilibrium constant of a reversible reaction is the ratio of the product of the product concentrations to the product of the reactant concentrations. The ionization equilibrium for the dissociation of HA in an aqueous solution can be represented as follows:

⇒ HA + H₂O ⇌ H₃O⁺ + A⁻

The equilibrium constant (Ka) for the above reaction can be represented as:

$Ka = \frac{[H_{3}O^{+}][A^{-}]}{[HA][H_{2}O]}$

Where:

• [H₃O⁺] = concentration of hydronium ions formed in the aqueous solution
• [A⁻] = concentration of the conjugate base of the acid
• [HA] = acid concentration at equilibrium
• [H₂O] = concentration of water

Since the water concentration stays essentially constant throughout the reaction, and [H₃O⁺] = [H⁺], the concentration of H⁺ ions released in the aqueous solution, the expression can be rearranged as:

$Ka = \frac{[H^{+}][A^{-}]}{[HA]}$

The greater the strength of an acid, the larger the extent to which it dissociates in an aqueous solution, and thus the higher its Ka value. Weak organic acids such as acetic acid and citric acid have Ka values below 1, whereas strong mineral acids such as HCl and HNO₃, which completely dissociate to release a large number of H⁺ ions in an aqueous solution, have Ka values above 1.

### What is pKa?

pKa measures the strength of an acidic solution. It is the negative logarithm of the Ka value of an acid.

### How is pKa related to the strength of a Brønsted acid?

As per the Brønsted-Lowry acid-base theory, an acid is defined as a proton donor. The proton is loosely held by a strong acid, which readily liberates it in an aqueous solution. Weak acids, in contrast, hold their protons tightly and release them in water only reluctantly. The greater the pKa value, the more strongly the proton is held by the Brønsted acid, indicating a weak acid, and vice versa. So pKa is inversely related to the strength of a Brønsted acid.

### How is Ka related to the strength of an acid?

The higher the Ka value, the greater the strength of an acid, so the acid dissociation constant is directly related to acid strength. Strong acids completely ionize in water to release H⁺ ions; thus, they have high Ka values. Weak acids only partially ionize in water to release a limited number of H⁺ ions, so they have low Ka values.

### What is the relationship between pKa and Ka?

pKa is inversely related to Ka. The higher the Ka value of an acid, the lower its pKa, and vice versa.

### What is the formula to calculate pKa from Ka?
pKa can be calculated by taking the negative logarithm of Ka as follows:
∴ pKa = -log₁₀Ka

### How do you calculate Ka from pKa?

Ka and pKa are interconvertible chemical parameters. Ka can be calculated from pKa by taking the antilogarithm of -pKa as follows:
∴ Ka = 10^(-pKa)

## Summary

• pKa stands for the power of Ka. It measures the acidic strength of an aqueous solution.
• It is calculated as the negative logarithm of Ka to the base 10: pKa = -log₁₀Ka.
• The greater the strength of an acid, the lower its pKa value, and vice versa.
• Ka denotes the acid dissociation constant. It measures the extent of ionization of an acid in an aqueous solution.
• The greater the Ka value, the higher the strength of the acid.
• If the Ka value of an acid is known, we can easily find its pKa value by taking the negative logarithm of Ka: pKa = -log₁₀Ka.
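The conversions above are straightforward to script. Below is a minimal Python sketch (our own illustration, not part of the original article; the function names are ours) that reproduces the solved examples:

```python
import math

def pka_from_ka(ka: float) -> float:
    """Ka to pKa conversion: pKa = -log10(Ka), equation (i)."""
    return -math.log10(ka)

def ka_from_pka(pka: float) -> float:
    """Inverse conversion: Ka = 10^(-pKa)."""
    return 10 ** (-pka)

# Solved examples from this article.
print(round(pka_from_ka(1.34e-5), 2))    # Example 1, propanoic acid -> 4.87
print(round(pka_from_ka(2.4e1), 2))      # Example 2, nitric acid    -> -1.38
print(round(pka_from_ka(2.33e-11), 2))   # Example 4                 -> 10.63

# Example 3: from pH and initial concentration to pKa.
ph, c0 = 4.63, 0.01
h = 10 ** (-ph)                  # [H+] at equilibrium, equation (vi)
ka = h * h / (c0 - h)            # Ka = [H+][A-]/[HA], with [A-] = [H+]
print(round(pka_from_ka(ka), 2))  # butyric acid solution -> 7.26
```

Round-trip consistency is easy to verify: ka_from_pka(pka_from_ka(x)) returns x up to floating-point error.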
2023-03-25 18:22:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 4, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7624045014381409, "perplexity": 4760.923759589327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945368.6/warc/CC-MAIN-20230325161021-20230325191021-00084.warc.gz"}
https://raweb.inria.fr/rapportsactivite/RA2021/mokaplan/index.html
2021 Activity report
Project-Team MOKAPLAN
RNSR: 201321083P
Research center
In partnership with: Université Paris-Dauphine, CNRS
Team name: Advances in Numerical Calculus of Variations
In collaboration with:
Domain: Applied Mathematics, Computation and Simulation
Theme: Numerical schemes and simulations
Creation of the Project-Team: 2015 December 01

Keywords
• A5.3. Image processing and analysis
• A5.9. Signal processing
• A6.1.1. Continuous Modeling (PDE, ODE)
• A6.2.1. Numerical analysis of PDE and ODE
• A6.2.6. Optimization
• A9. Artificial intelligence
• B1.2. Neuroscience and cognitive science
• B9.5.2. Mathematics
• B9.5.3. Physics
• B9.5.4. Chemistry
• B9.6.3. Economy, Finance

1 Team members, visitors, external collaborators

Research Scientists
• Jean-David Benamou [Team leader, Inria, Senior Researcher, HDR]
• Vincent Duval [Inria, Senior Researcher]
• Thomas Gallouèt [Inria, Researcher]
• Flavien Leger [Inria, Researcher, from Oct 2021]
• Irene Waldspurger [Université Paris Sciences et Lettres, Researcher]

Faculty Members
• Claire Boyer [Sorbonne Université, Associate Professor, from Feb 2021]
• Guillaume Carlier [Université Paris Sciences et Lettres, Professor, HDR]
• Paul Pegon [Université Paris Sciences et Lettres, Associate Professor]
• François-Xavier Vialard [Université Gustave Eiffel, Associate Professor, from Sep 2021]

Post-Doctoral Fellows
• Luca Tamanini [Université Paris Sciences et Lettres, until Nov 2021]
• Robert Tovey [Inria, from Feb 2021]

PhD Students
• Katharina Eichinger [Université Paris Sciences et Lettres]
• Romain Petit [Université Paris Sciences et Lettres, from Oct. 2019]
• Quentin Ismael Petit [Université Paris Sciences et Lettres, until Aug 2021]
• Joao Miguel Pinto Anastacio Machado [Université de Dauphine, from Oct 2021]
• Giorgi Rukhaia [Inria, until Nov 2021]
• Erwan Stampfli [Université Paris-Saclay, from Oct 2021]
• Gabriele Todeschi [Université Paris Sciences et Lettres, until Nov 2021]
• Adrien Vacher [Université Paris-Est Marne La Vallée]

Technical Staff
• Guillaume Chazareix [Inria, Engineer, until Feb 2021]
• Robert Tovey [Inria, Engineer, Jan 2021]

Interns and Apprentices
• Hugo Malamut [Inria, from Mar 2021 until Sep 2021]
• Joao Miguel Pinto Anastacio Machado [Inria, from Apr 2021 until Sep 2021]
• Erwan Stampfli [Sorbonne Université, from Apr 2021 until Sep 2021]
• Derya Gök [Inria, until Nov 2021]

External Collaborators
• Yann Brenier [CNRS]
• Paul Catala [École Normale Supérieure de Paris, until Jun 2021]
• Quentin Merigot [Université Paris-Saclay]
• Guillaume Mijoule [Ministère de l'Education Nationale, until Jun 2021]
• Bruno Nazaret [Université Panthéon Sorbonne, until Aug 2021]
• Gabriel Peyré [CNRS, HDR]
• François-Xavier Vialard [Université Paris-Est Marne La Vallée, until Aug 2021]
• Miao Yu [Université de Paris]
• Shuangjian Zhang [Ecole normale supérieure Paris-Saclay]

2 Overall objectives

2.1 Introduction

The last decade has witnessed a remarkable convergence between several sub-domains of the calculus of variations, namely optimal transport (and its many generalizations), the infinite dimensional geometry of diffeomorphism groups, and inverse problems in imaging (in particular sparsity-based regularization). This convergence is due to (i) the mathematical objects manipulated in these problems, namely sparse measures (e.g. couplings in transport, edge locations in imaging, displacement fields for diffeomorphisms), and (ii) the use of similar numerical tools from non-smooth optimization and geometric discretization schemes.
Optimal Transportation, diffeomorphisms and sparsity-based methods are powerful modeling tools that impact a rapidly expanding list of scientific applications and call for efficient numerical strategies. Our research program shows the important part played by the team members in the development of these numerical methods and their application to challenging problems.

2.2 Static Optimal Transport and Generalizations

Optimal Transport, Old and New. Optimal Mass Transportation is a mathematical research topic which started two centuries ago with Monge's work on the "Théorie des déblais et des remblais" (see 106). This engineering problem consists in minimizing the transport cost between two given mass densities. In the 40's, Kantorovich 113 introduced a powerful linear relaxation together with its dual formulation. The Monge-Kantorovich problem became a specialized research topic in optimization, and Kantorovich obtained the 1975 Nobel prize in economics for his contributions to resource allocation problems. Since the seminal discoveries of Brenier in the 90's 59, Optimal Transportation has received renewed attention from mathematical analysts, and the Fields Medal awarded in 2010 to C. Villani, who made important contributions to Optimal Transportation and wrote the modern reference monographs 152, 153, arrived at a culminating moment for this theory. Optimal Mass Transportation is today a mature area of mathematical analysis with a constantly growing range of applications. Optimal Transportation has also received a lot of attention from probabilists (see for instance the recent survey 118 for an overview of the Schrödinger problem, a stochastic variant of the Benamou-Brenier dynamical formulation of optimal transport). The development of numerical methods for Optimal Transportation and related problems is a difficult topic and comparatively underdeveloped. This research field has experienced a surge of activity in the last five years, with important contributions of the Mokaplan group (see the list of important publications of the team). We describe below a few recent and less recent Optimal Transportation concepts and methods which are connected to the future activities of Mokaplan.

Brenier's theorem 62 characterizes the unique optimal map as the gradient of a convex potential. As such, Optimal Transportation may be interpreted as an infinite dimensional optimisation problem under a "convexity constraint": the solution of this infinite dimensional optimisation problem is a convex potential. This connects Optimal Transportation to "convexity constrained" non-linear variational problems such as, for instance, Newton's problem of the body of minimal resistance. The value function of the optimal transport problem is also known to define a distance between source and target densities, called the Wasserstein distance, which plays a key role in many applications such as image processing.

Monge-Ampère Methods. A formal substitution of the optimal transport map as the gradient of a convex potential in the mass conservation constraint (a Jacobian equation) gives a non-linear Monge-Ampère equation. Caffarelli 70 used this result to extend the regularity theory for the Monge-Ampère equation. In the last ten years, it has also motivated new research on numerical solvers for non-linear degenerate elliptic equations, see 94, 122, 47, 46 and the references therein. Geometric approaches based on Laguerre diagrams and discrete data 127 have also been developed.
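These geometric solvers can be sanity-checked against the one-dimensional case, where Brenier's optimal map reduces to the monotone rearrangement obtained by sorting. A minimal sketch (our own toy illustration, assuming NumPy; this is not code from the cited works):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1000)   # samples of the source density
y = rng.normal(3.0, 0.5, size=1000)   # samples of the target density

# In 1-D the optimal (Brenier) map is the monotone rearrangement:
# transport the i-th smallest source point to the i-th smallest target point.
xs, ys = np.sort(x), np.sort(y)
w2_squared = np.mean((xs - ys) ** 2)  # empirical squared 2-Wasserstein distance
print(w2_squared)
```

For one-dimensional Gaussians the squared 2-Wasserstein distance has a closed form, (m₁ − m₂)² + (σ₁ − σ₂)² = 9.25 here, which the empirical value above approximates.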
Monge-Ampère based Optimal Transportation solvers have recently given the first linear cost computations of (smooth) Optimal Transportation maps.

Generalizations of OT. In recent years, the classical Optimal Transportation problem has been extended in several directions. First, different ground costs measuring the "physical" displacement have been considered. In particular, well-posedness for a large class of convex and concave costs has been established by McCann and Gangbo 105. Optimal Transportation techniques have been applied, for example, to a Coulomb ground cost in quantum chemistry, in relation with Density Functional Theory 90: given the electron densities, Optimal Transportation models the potential energy associated with their relative positions. For more than 2 electrons (and therefore more than 2 densities), the natural extension of Optimal Transportation is the so-called Multi-marginal Optimal Transport (see 134 and the references therein). Another instance of multi-marginal Optimal Transportation arises in the so-called Wasserstein barycenter problem between an arbitrary number of densities 31. An interesting overview of this emerging new field of optimal transport and its applications can be found in the recent survey of Ghoussoub and Pass 133.

Numerical Applications of Optimal Transportation. Optimal transport has found many applications, starting from its relation with several physical models such as the semi-geostrophic equations in meteorology 110, 92, 91, 40, 121, mesh adaptation 120, the reconstruction of the early mass distribution of the Universe 102, 60 in astrophysics, and the numerical optimisation of reflectors following the Optimal Transportation interpretation of Oliker 69 and Wang 154. Extensions of OT such as multi-marginal transport have potential applications in Density Functional Theory (DFT), in generalized solutions of the Euler equations 61, and in statistics and finance 37, 104. Recently, there has been a spread of interest in applications of OT methods in imaging sciences 54, statistics 51 and machine learning 93. This is largely due to the emergence of fast numerical schemes to approximate the transportation distance and its generalizations, see for instance 43. Figure 1 shows an example of application of OT to color transfer. Figure 9 shows an example of application in computer graphics to interpolate between input shapes.

2.3 Diffeomorphisms and Dynamical Transport

Dynamical transport. While the optimal transport problem, in its original formulation, is a static problem (no time evolution is considered), it makes sense in many applications to consider time evolution instead. This is relevant for instance in applications to fluid dynamics, or in medical imaging to perform registration of organs and model tumor growth. In this perspective, optimal transport in Euclidean space corresponds to an evolution where each particle of mass evolves in a straight line. This interpretation corresponds to the Computational Fluid Dynamics (CFD) formulation proposed by Brenier and Benamou in 39. These solutions are time curves in the space of densities and geodesics for the Wasserstein distance. The CFD formulation relaxes the non-linear mass conservation constraint into a time-dependent continuity equation; the cost function remains convex but is highly non-smooth. A remarkable feature of this dynamical formulation is that it can be re-cast as a convex but non-smooth optimization problem.
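In one dimension the Wasserstein geodesic is explicit: displacement interpolation moves each sorted source particle in a straight line towards its sorted target, mirroring the "straight line" evolution described above. A toy sketch (again our own illustration, assuming NumPy; not code from the cited works):

```python
import numpy as np

# Sorted samples of a source and a target density (monotone coupling in 1-D).
xs = np.sort(np.random.default_rng(1).normal(0.0, 1.0, 500))
ys = np.sort(np.random.default_rng(2).normal(4.0, 2.0, 500))

def displacement_interpolation(t: float) -> np.ndarray:
    """Particle positions at time t along the Wasserstein geodesic."""
    return (1.0 - t) * xs + t * ys

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    mu_t = displacement_interpolation(t)
    # Mean and spread interpolate linearly along the geodesic in this 1-D case.
    print(f"t={t:4.2f}  mean={mu_t.mean():+.2f}  std={mu_t.std():.2f}")
```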
This convex dynamical formulation finds many non-trivial extensions and applications, see for instance 41. The CFD formulation also appears to be a limit case of Mean Field Games (MFGs), a large class of economic models introduced by Lasry and Lions 115, leading to a system coupling a Hamilton-Jacobi equation with a Fokker-Planck equation. In contrast, the Monge case, where the ground cost is the Euclidean distance, leads to a static system of PDEs 56.

Gradient Flows for the Wasserstein Distance. Another extension is, instead of considering geodesics for the transportation metric (i.e. minimizing the Wasserstein distance to a target measure), to make the density evolve in order to minimize some functional. Computing the steepest descent direction with respect to the Wasserstein distance defines a so-called Wasserstein gradient flow, also known as a JKO gradient flow after its authors 112. This is a popular tool to study a large class of non-linear diffusion equations. Two interesting examples are the Keller-Segel system for chemotaxis 111, 85 and a model of congested crowd motion proposed by Maury, Santambrogio and Roudneff-Chupin 126. From the numerical point of view, these schemes are understood to be the natural analogue of implicit schemes for linear parabolic equations. The resolution is however costly, as it involves taking the derivative, in the Wasserstein sense, of the relevant energy, which in turn requires the resolution of a large scale convex but non-smooth minimization problem.

Geodesics on infinite dimensional Riemannian spaces. To tackle more complicated warping problems, such as those encountered in medical image analysis, one unfortunately has to drop the convexity of the functional involved in defining the gradient flow. This gradient flow can either be understood as defining a geodesic on the (infinite dimensional) group of diffeomorphisms 36, or on an (infinite dimensional) space of curves or surfaces 155. The de-facto standard to define, analyze and compute these geodesics is the "Large Deformation Diffeomorphic Metric Mapping" (LDDMM) framework of Trouvé, Younes, Holm and co-authors 36, 109. While in the CFD formulation of optimal transport the metric on infinitesimal deformations is just the $L^2$ norm (measured according to the density being transported), in LDDMM one needs to use a stronger regularizing metric, such as Sobolev-like norms or reproducing kernel Hilbert spaces (RKHS). This enables control over the smoothness of the deformation, which is crucial for many applications. The price to pay is the need to solve a non-convex optimization problem through a geodesic shooting method 129, which requires integrating the geodesic ODE backward and forward. The resulting strong Riemannian geodesic structure on spaces of diffeomorphisms or shapes is also pivotal in allowing us to perform statistical analysis on the tangent space, to define mean shapes, and to perform dimensionality reduction when analyzing large collections of input shapes (e.g. to study the evolution of a disease in time or the variation across patients) 71.

2.4 Sparsity in Imaging

Sparse $\ell^1$ regularization. Besides image warping and registration in medical image analysis, a key problem in nearly all imaging applications is the reconstruction of high quality data from low resolution observations.
This field, commonly referred to as "inverse problems", is very often concerned with the precise location of features such as point sources (modeled as Dirac masses) or sharp contours of objects (modeled as gradients being Dirac masses along curves). The underlying intuition behind these ideas is the so-called sparsity model (either of the data itself, its gradient, or other more complicated representations such as wavelets, curvelets, bandlets 125 and learned representations 156). The huge interest in these ideas started mostly from the introduction of convex methods serving as proxies for these sparse regularizations. The most well known is the $\ell^1$ norm, introduced independently in imaging by Donoho and co-workers under the name "Basis Pursuit" 88 and in statistics by Tibshirani 147 under the name "Lasso". A more recent resurgence of this interest came about ten years ago with the introduction of the so-called "compressed sensing" acquisition techniques 74, which make use of randomized forward operators and $\ell^1$-type reconstruction.

Regularization over measure spaces. However, the theoretical analysis of sparse reconstructions involving real-life acquisition operators (such as those found in seismic imaging, neuro-imaging, astro-physical imaging, etc.) is still mostly an open problem. A recent research direction, triggered by a paper of Candès and Fernandez-Granda 73, is to study directly the infinite dimensional problem of reconstruction of sparse measures (i.e. sums of Dirac masses) using the total variation of measures (not to be mistaken for the total variation of 2-D functions). Several works 72, 98, 95 have used this framework to provide theoretical performance guarantees, basically by studying how the distance between neighboring spikes impacts noise stability.

Low complexity regularization and partial smoothness. In image processing, one of the most popular methods is total variation regularization 142, 66. It favors low-complexity images that are piecewise constant; see Figure 3 for examples of image processing problems solved this way. Besides applications in image processing, sparsity-related ideas have also had a deep impact in statistics 147 and machine learning 34. As a typical example, for applications to recommendation systems, it makes sense to consider sparsity of the singular values of matrices, which can be relaxed using the so-called nuclear norm (a.k.a. trace norm) 33. The underlying methodology is to make use of low-complexity regularization models, which turns out to be equivalent to the use of partly-smooth regularization functionals 119, 149 enforcing the solution to belong to a low-dimensional manifold.

2.5 Mokaplan unified point of view

The dynamical formulation of optimal transport creates a link between optimal transport and geodesics on diffeomorphism groups. This formal link has at least two strong implications that Mokaplan will elaborate on: (i) the development of novel models that bridge the gap between these two fields; (ii) the introduction of novel fast numerical solvers based on ideas from both non-smooth optimization techniques and Bregman metrics, as highlighted in Section 3.2.3. In a similar line of ideas, we believe a unified approach is needed to tackle both sparse regularization in imaging and various generalized OT problems. Both require solving related non-smooth and large scale optimization problems.
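On the sparsity side, the basic computational primitive shared by these problems is the proximal operator; for the $\ell^1$ norm it is soft-thresholding, used below in a minimal iterative soft-thresholding (ISTA) sketch for the Lasso. This is our own textbook-style illustration (assuming NumPy), not code from the cited references:

```python
import numpy as np

def soft_threshold(x: np.ndarray, tau: float) -> np.ndarray:
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A: np.ndarray, b: np.ndarray, lam: float, n_iter: int = 500) -> np.ndarray:
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# Small synthetic compressed-sensing-style experiment.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[[10, 70, 150]] = [1.5, -2.0, 1.0]
x_hat = ista(A, A @ x_true, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # typically recovers the support {10, 70, 150}
```

The same proximal structure reappears, with other non-smooth terms, in the transport problems discussed next.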
Ideas from proximal optimization have proved crucial to address problems in both fields (see for instance 39, 140). Transportation metrics are also the correct way to compare and regularize variational problems that arise in image processing (see for instance the Radon inversion method proposed in 43) and machine learning (see 93). This unity in terms of numerical methods is once again at the core of Section 3.2.3.

3 Research program

3.1 Modeling and Analysis

The first layer of methodological tools developed by our team is a set of theoretical continuous models that aim at formalizing the problems studied in the applications. These theoretical findings will also pave the way to the efficient numerical solvers detailed in Section 3.2.

3.1.1 Static Optimal Transport and Generalizations

Convexity constraint and Principal Agent problem in Economics. (Participants: G. Carlier, J-D. Benamou, V. Duval, Xavier Dupuis (LUISS Guido Carli University, Roma)) The principal agent problem plays a distinguished role in the literature on asymmetric information and contract theory (with important contributions from several Nobel laureates such as Mirrlees, Myerson or Spence), and it has many important applications in optimal taxation, insurance, and nonlinear pricing. The typical problem consists in finding a cost-minimizing strategy for a monopolist facing a population of agents who have an unobservable characteristic; the principal therefore has to take into account the so-called incentive compatibility constraint, which is very similar to the cyclical monotonicity condition which characterizes optimal transport plans. In a special case, Rochet and Choné 141 reformulated the problem as a variational problem subject to a convexity constraint. For more general models, and using ideas from Optimal Transportation, Carlier 76 considered the more general $c$-convexity constraint and proved a general existence result. Using the formulation of 76, McCann, Figalli and Kim 99 gave conditions under which the principal agent problem can be written as an infinite dimensional convex variational problem. The important results of 99 are intimately connected to the regularity theory for optimal transport and showed that there is some hope to numerically solve the principal-agent problem for general utility functions.

Our expertise: We have already contributed to the numerical resolution of the Principal Agent problem in the case of the convexity constraint, see 81, 128, 130.

Goals: So far, the mathematical PA model can be numerically solved for simple utility functions. A Bregman approach inspired by 43 is currently being developed 79 for more general functions. It would be extremely useful as a complement to the theoretical analysis. A new semi-discrete geometric approach is also being investigated, where the method reduces to non-convex polynomial optimization.

Optimal transport and conditional constraints in statistics and finance. (Participants: G. Carlier, J-D. Benamou, G. Peyré) A challenging branch of emerging generalizations of Optimal Transportation arising in economics, statistics and finance concerns Optimal Transportation with conditional constraints.
The martingale optimal transport problem 37, 104, which appears naturally in mathematical finance, aims at computing robust bounds on option prices as the value of an optimal transport problem where not only the marginals are fixed but the coupling should be the law of a martingale, since it represents the prices of the underlying asset under the risk-neutral probability at the different dates. Note that as soon as more than two dates are involved, we are facing a multimarginal problem.

Our expertise: Our team has a deep expertise on the topic of OT and its generalizations, including many already existing collaborations between its members, see for instance 43, 48, 41 for some representative recent collaborative publications.

Goals: This is a non-trivial extension of Optimal Transportation theory, and Mokaplan will develop numerical methods (in the spirit of entropic regularization) to address it. A popular problem in statistics is the so-called quantile regression problem; recently Carlier, Chernozhukov and Galichon 77 used an Optimal Transportation approach to extend quantile regression to several dimensions. In this approach again, not only fixed marginal constraints are present but also constraints on conditional means. As in the martingale Optimal Transportation problem, one has to deal with an extra conditional constraint. The duality approach usually breaks down under such constraints, and the characterization of optimal couplings is a challenging task from both a theoretical and a numerical viewpoint.

Wasserstein gradient flows. (Participants: G. Carlier, J-D. Benamou, M. Laborde, Q. Mérigot, V. Duval) The connection between the static and dynamic transportation problems (see Section 2.3) opens the door to many extensions, most notably by leveraging the use of gradient flows in metric spaces. The flow with respect to the transportation distance was introduced by Jordan-Kinderlehrer-Otto (JKO) 112 and provides a variational formulation of many linear and non-linear diffusion equations. The prototypical example is the Fokker-Planck equation. We will explore this formalism to study new variational problems over probability spaces, and also to derive innovative numerical solvers. The JKO scheme has been very successfully used to study evolution equations that have the structure of a gradient flow in the Wasserstein space. Indeed many important PDEs have this structure: the Fokker-Planck equation (as was first considered by 112), the porous medium equations, and the granular media equation, just to give a few examples. It also finds application in image processing 65. Figure 4 shows examples of gradient flows.

Our expertise: There is an ongoing collaboration between the team members on the theoretical and numerical analysis of gradient flows.

Goals: We apply and extend our research on JKO numerical methods to treat various extensions (a toy numerical sketch of one JKO step is given after this list):
• Wasserstein gradient flows with a non displacement-convex energy (as in the parabolic-elliptic Keller-Segel chemotaxis model 83);
• systems of evolution equations which can be written as gradient flows of some energy on a product space (possibly mixing the Wasserstein and $L^2$ structures): multi-species models or the parabolic-parabolic Keller-Segel model 52;
• perturbations of gradient flows: multi-species or kinetic models are not gradient flows, but may be viewed as perturbations of Wasserstein gradient flows; we shall therefore investigate the convergence of splitting methods for such equations or systems.
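As announced above, here is a toy Lagrangian sketch of one JKO step for the simple potential energy E(ρ) = ∫ V dρ. For this energy the Wasserstein term decouples across particles, so the JKO step reduces to a pointwise proximal (implicit Euler) map. This is our own illustrative code (assuming NumPy), not one of the solvers cited here; realistic energies (entropies, interaction terms) require the dedicated schemes of Section 3.2.1:

```python
import numpy as np

# One JKO step for E(rho) = \int V d(rho), in Lagrangian form: each particle
# solves min_x |x - y|^2 / (2*tau) + V(x), i.e. an implicit Euler (proximal)
# step for the ODE x' = -V'(x).
def V(x):  return 0.5 * x ** 2        # toy confining potential
def dV(x): return x

def jko_step(y: np.ndarray, tau: float, n_newton: int = 20) -> np.ndarray:
    """Solve (x - y)/tau + V'(x) = 0 per particle by Newton's method."""
    x = y.copy()
    for _ in range(n_newton):
        x -= (x - y + tau * dV(x)) / (1.0 + tau)   # V''(x) = 1 for this V
    return x

particles = np.random.default_rng(0).normal(5.0, 1.0, size=1000)
tau = 0.5
for _ in range(10):
    particles = jko_step(particles, tau)
print(particles.mean())   # the cloud drifts towards the minimum of V at 0
```

For the quadratic potential used here the proximal map is linear (x = y/(1+τ)), so the Newton loop converges in one iteration; it is kept only to show the general structure of an implicit step.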
From networks to continuum congestion models. (Participants: G. Carlier, J-D. Benamou, G. Peyré) Congested transport theory in the discrete framework of networks has received a lot of attention since the 50's, starting with the seminal work of Wardrop. A few years later, Beckmann proved that equilibria are characterized as solutions of a convex minimization problem. However, this minimization problem involves one flow variable per path on the network, so its dimension quickly becomes too large in practice. An alternative is to consider continuous-in-space models of congested optimal transport, as was done in 80, which leads to very degenerate PDEs 57.

Our expertise: MOKAPLAN members have contributed a lot to the analysis of congested transport problems and to optimization problems with respect to a metric, which can be attacked numerically by fast marching methods 48.

Goals: The case of general networks/anisotropies is still not well understood; general $\Gamma$-convergence results will be investigated, as well as a detailed analysis of the corresponding PDEs and numerical methods to solve them. Benamou and Carlier already studied some of these PDEs numerically by an augmented Lagrangian method, see Figure 5. Note that this class of problems shares important similarities with the metric learning problem in machine learning, detailed below.

3.1.2 Diffeomorphisms and Dynamical Transport

Growth Models for Dynamical Optimal Transport. (Participants: F-X. Vialard, J-D. Benamou, G. Peyré, L. Chizat) A major issue with the standard dynamical formulation of OT is that it does not allow for variation of mass during the evolution, which is required when tackling medical imaging applications such as tumor growth modeling 68 or tracking elastic organ movements 144. Previous attempts 123, 138 to introduce a source term in the evolution typically lead to mass teleportation (propagation of mass with infinite speed), which is not always satisfactory.

Our expertise: Our team has already established key contributions both to connect OT to fluid dynamics 39 and to define geodesic metrics on the space of shapes and diffeomorphisms 87.

Goals: Lenaic Chizat's PhD thesis aims at bridging the gap between the dynamical OT formulation and LDDMM diffeomorphism models (see Section 2.3). This will lead to biologically-plausible evolution models that are both more tractable numerically than LDDMM competitors and benefit from strong theoretical guarantees associated with properties of OT.

Mean-field games. (Participants: G. Carlier, J-D. Benamou) The Optimal Transportation Computational Fluid Dynamics (CFD) formulation is a limit case of variational Mean-Field Games (MFGs), a new branch of game theory recently developed by J-M. Lasry and P-L. Lions 115 with an extremely wide range of potential applications 107. Non-smooth proximal optimization methods used successfully for Optimal Transportation can also be used in the case of deterministic MFGs with singular data and/or potentials 42. They provide a robust treatment of the positivity constraint on the density of players.

Our expertise: J.-D. Benamou has pioneered with Brenier the CFD approach to Optimal Transportation. Regarding MFGs, on the numerical side our team has already worked on the use of augmented Lagrangian methods in MFGs 41, and on the analytical side 75 has rigorously explored the optimality system for a singular CFD problem similar to the MFG system.

Goals: We will work on the extension to stochastic MFGs. It leads to non-trivial numerical difficulties already pointed out in 30.

Macroscopic Crowd motion, congestion and equilibria.
(Participants: G. Carlier, J-D. Benamou, Q. Mérigot, F. Santambrogio (U. Paris-Sud), Y. Achdou (Univ. Paris 7), R. Andreev (Univ. Paris 7)) Many models from PDEs and fluid mechanics have been used to describe people or vehicles moving in a congested environment. These models have to be classified according to the dimension (1-D models are mostly used for cars on traffic networks, while 2-D models are most suitable for pedestrians), to the congestion effects ("soft" congestion standing for the phenomenon where high densities slow down the movement, "hard" congestion for the sudden effects when contacts occur or a certain threshold is attained), and to the possible rationality of the agents. Maury et al. 126 recently developed a theory for 2-D hard congestion models without rationality, first in a discrete and then in a continuous framework. This model produces a PDE that is difficult to attack with usual PDE methods, but it has been successfully studied via Optimal Transportation techniques, again related to the JKO gradient flow paradigm. Another possibility to model crowd motion is to use the mean field game approach of Lions and Lasry, which describes limits of Nash equilibria when the number of players is large. This also gives macroscopic models where congestion may appear, but this time a global equilibrium strategy is modelled rather than local optimisation by players as in the JKO approach. Numerical methods are starting to be available, see for instance 30, 64.

Our expertise: We have developed numerical methods to tackle both the JKO approach and the MFG approach. The Augmented Lagrangian (proximal) numerical method can actually be applied to both models 41, JKO and deterministic MFGs.

Goals: We want to extend our numerical approach to more realistic congestion models where the speed of agents depends on the density, see Figure 6 for preliminary results. A comparison with different numerical approaches will also be performed inside the ANR ISOTACE. The extension of the Augmented Lagrangian approach to stochastic MFGs will be studied.

Diffeomorphic image matching. (Participants: F-X. Vialard, G. Peyré, B. Schmitzer, L. Chizat) Diffeomorphic image registration is widely used in medical image analysis. This class of problems can be seen as the computation of a generalized optimal transport, where the optimal path is a geodesic on a group of diffeomorphisms. The major difference between the two approaches is that optimal transport leads to non-smooth optimal maps in general, whereas smoothness of the map is compulsory in diffeomorphic image matching. On the other hand, optimal transport enjoys a convex variational formulation, whereas in LDDMM the minimization problem is non-convex.

Our expertise: F-X. Vialard is an expert in diffeomorphic image matching (LDDMM) 150, 63, 148. Our team has already studied flows and geodesics over non-Riemannian shape spaces, which allows for piecewise smooth deformations 87.

Goals: Our aim consists in bridging the gap between standard optimal transport and diffeomorphic methods by building new diffeomorphic matching variational formulations that are convex (geometric obstructions might however appear). A related perspective is the development of new registration/transport models in a Lagrangian framework, in the spirit of 145, 144, to obtain more meaningful statistics in longitudinal studies.

Diffeomorphic matching consists in the minimization of a functional that is a sum of a deformation cost and a similarity measure.
The choice of the similarity measure is as important as the deformation cost. It is often chosen as a norm on a Hilbert space of functions, currents or varifolds. From a Bayesian perspective, these similarity measures are related to the noise model on the observed data, which is of a geometric nature and is not taken into account when using Hilbert norms. Optimal transport fidelity terms have been used in the context of signal and image denoising 117, and it is an important question to extend these approaches to registration problems. Therefore, we propose to develop similarity measures that are geometric and computationally very efficient, using entropic regularization of optimal transport. Our approach is to use regularized optimal transport to design new similarity measures on all of those Hilbert spaces. Understanding the precise connections between the evolution of shapes and of probability distributions will be investigated to cross-fertilize both fields by developing novel transportation metrics and diffeomorphic shape flows. The corresponding numerical schemes are however computationally very costly. Leveraging our understanding of the dynamic optimal transport problem and its numerical resolution, we propose to develop new algorithms. These algorithms will use the smoothness of the Riemannian metric to improve both accuracy and speed, using for instance higher order minimization algorithms on (infinite dimensional) manifolds.

Metric learning and parallel transport for statistical applications. (Participants: F-X. Vialard, G. Peyré, B. Schmitzer, L. Chizat) The LDDMM framework has been advocated to enable statistics on the space of shapes or images that benefit from the estimation of the deformation. Its statistical results strongly depend on the choice of the Riemannian metric. A possible direction consists in learning the right-invariant Riemannian metric, as done in 151, where a correlation matrix (Figure 7) is learnt which represents the covariance matrix of the deformation fields for a given population of shapes. In the same direction, a question of emerging interest in medical imaging is the analysis of time sequences of shapes (called longitudinal analysis) for the early diagnosis of disease, for instance 100. A key question is the inter-subject comparison of the organ evolution, which is usually done by transport of the time evolution into a common coordinate system via parallel transport or other more basic methods. Once again, the statistical results (Figure 8) strongly depend on the choice of the metric, or more generally on the connection that defines parallel transport.

Our expertise: Our team has already studied statistics on longitudinal evolutions in 100, 101.

Goals: Developing higher order numerical schemes for parallel transport (only low order schemes are available at the moment) and developing variational models to learn the metric or the connections for improving statistical results.

3.1.3 Sparsity in Imaging

Inverse problems over measure spaces. (Participants: G. Peyré, V. Duval, C. Poon, Q. Denoyelle) As detailed in Section 2.4, popular methods for regularizing inverse problems in imaging make use of variational analysis over infinite-dimensional (typically non-reflexive) Banach spaces, such as Radon measures or bounded variation functions.
Our expertise: We have recently shown in 149 how – in the finite dimensional case – the non-smoothness of the functionals at stake is crucial to enforce the emergence of geometrical structures (edges in images or fractures in physical materials 53) for discrete (finite dimensional) problems. We extended this result to a simple infinite dimensional setting, namely sparse regularization of Radon measures for deconvolution 95. A deep understanding of those continuous inverse problems is crucial to analyze the behavior of their discrete counterparts, and in 96 we have taken advantage of this understanding to develop a fine analysis of the artifacts induced by discrete (i.e. grid-based) deconvolution models. These works are also closely related to the problem of limit analysis and yield design in mechanical plasticity, see 78, 53 for an existing collaboration between Mokaplan's team members.

Goals: A current major front of research in the mathematical analysis of inverse problems is to extend these results to more complicated infinite dimensional signal and image models, such as, for instance, the set of piecewise regular functions. The key bottleneck is that, contrary to sparse measures (which are finite sums of Dirac masses), here the objects to recover (smooth edge curves) are not parameterized by a finite number of degrees of freedom. The relevant previous works in this direction are the fundamental results of Chambolle, Caselles and co-workers 38, 32, 84. They however only deal with the specific case where there is no degradation operator and no noise in the observations. We believe that adapting these approaches using our construction of vanishing derivative pre-certificates 95 could lead to a solution to these theoretical questions.

Sub-Riemannian diffusions. (Participants: G. Peyré, J-M. Mirebeau, D. Prandi) Modeling and processing natural images requires taking their geometry into account through anisotropic diffusion operators, in order to denoise and enhance directional features such as edges and textures 137, 97. This requirement is also at the heart of recently proposed models of cortical processing 136. A mathematical model for this processing is diffusion on a sub-Riemannian manifold. These methods assume a fixed, usually linear, mapping from the 2-D image to a lifted function defined on the product of space and orientation (which in turn is equipped with a sub-Riemannian manifold structure).

Our expertise: J-M. Mirebeau is an expert in the discretization of highly anisotropic diffusions through the use of locally adaptive computational stencils 131, 97. G. Peyré has made several contributions to the definition of geometric wavelet transforms and directional texture models, see for instance 137. Dario Prandi has recently applied methods from sub-Riemannian geometry to image restoration 55.

Goals: A first aspect of this work is to study non-linear, data-adaptive liftings from the image to the space/orientation domain. This mapping will be implicitly defined as the solution of a convex variational problem. This will open both theoretical questions (existence of a solution and its geometrical properties, when the image to recover is piecewise regular) and numerical ones (how to provide a faithful discretization and fast second order Newton-like solvers). A second aspect of this task is to study the implication of these models for biological vision, in a collaboration with the UNIC Laboratory (directed by Yves Fregnac), located in Gif-sur-Yvette.
In particular, the study of the geometry of singular vectors (or "ground states" in the terminology of 49) of the non-linear sub-Riemannian diffusion operators is highly relevant from a biological modeling point of view.

Sparse reconstruction from scanner data. (Participants: G. Peyré, V. Duval, C. Poon) Scanner data acquisition is mathematically modeled as a (sub-sampled) Radon transform 108. It is a difficult inverse problem because the Radon transform is ill-posed and the set of observations is often aggressively sub-sampled and noisy 143. Typical approaches 114 try to recover piecewise smooth solutions in order to locate precisely the position of the organ being imaged. There is however a very poor understanding of the actual performance of these methods, and little is known on how to enhance the recovery.

Our expertise: We have obtained a good understanding of the performance of inverse problem regularization on compact domains for pointwise source localization 95.

Goals: We aim at extending the theoretical performance analysis obtained for sparse measures 95 to the set of piecewise regular 2-D and 3-D functions. Some interesting previous works of C. Poon et al. 139 (C. Poon is currently a postdoc in Mokaplan) have tackled related questions in the field of variable Fourier sampling for compressed sensing applications (which is a toy model for fMRI imaging). These approaches are however not directly applicable to Radon sampling and require some non-trivial adaptations. We also aim at better exploring the connection of these methods with optimal-transport based fidelity terms such as those introduced in 29.

Tumor growth modeling in medical image analysis. (Participants: G. Peyré, F-X. Vialard, J-D. Benamou, L. Chizat) Some applications in medical image analysis require tracking shapes whose evolution is governed by a growth process. A typical example is tumor growth, where the evolution depends on some typically unknown but meaningful parameters that need to be estimated. There exist well-established mathematical models 68, 135 of non-linear diffusions that take into account recently observed biological properties of tumors. Some related optimal transport models with mass variations have also recently been proposed 124, which are connected to the so-called metamorphosis models in the LDDMM framework 50.

Our expertise: Our team has strong experience with both dynamical optimal transport models and diffeomorphic matching methods (see Section 3.1.2).

Goals: The close connection between tumor growth models 68, 135 and gradient flows for (possibly non-Euclidean) Wasserstein metrics (see Section 3.1.2) makes the application of the numerical methods we develop particularly appealing to tackle large scale forward tumor evolution simulation. A significant departure from the classical OT-based convex models is however required. The final problem we wish to solve is the backward (inverse) problem of estimating tumor parameters from noisy and partial observations. This also requires setting up a meaningful and robust data fidelity term, which can be, for instance, a generalized optimal transport metric.

3.2 Numerical Tools

The above continuous models require a careful discretization, so that the fundamental properties of the models are transferred to the discrete setting. Our team aims at developing innovative discretization schemes as well as associated fast numerical solvers that can deal with the geometric complexity of the variational problems studied in the applications.
This will ensure that the discrete solution is correct and converges to the solution of the continuous model within a guaranteed precision. We give below examples for which a careful mathematical analysis of the continuous-to-discrete transition is essential, and where dedicated non-smooth optimization solvers are required.

3.2.1 Geometric Discretization Schemes

Discretizing the cone of convex constraints. (Participants: J-D. Benamou, G. Carlier, J-M. Mirebeau, Q. Mérigot) Optimal transportation models, as well as continuous models in economics, can be formulated as infinite dimensional convex variational problems with the constraint that the solution belongs to the cone of convex functions. Discretizing this constraint is however a tricky problem, and usual finite element discretizations fail to converge.

Our expertise: Our team is currently investigating new discretizations, see in particular the recent proposal 46 for the Monge-Ampère equation and 130 for general non-linear variational problems. Both offer convergence guarantees and are amenable to fast numerical resolution techniques such as Newton solvers. Since 46 explained how to treat, efficiently and in full generality, transport boundary conditions for Monge-Ampère, this is a promising new and fast approach to compute Optimal Transportation viscosity solutions. A monotone scheme is needed: one is based on the work of Froese and Oberman 103, and a different and more accurate approach has been proposed by Mirebeau, Benamou and Collino 45. As shown in 89, discretizing the constraint for a continuous function to be convex is not trivial. Our group has largely contributed to solving this problem with G. Carlier 81, Quentin Mérigot 128 and J-M. Mirebeau 130. This problem is connected to the construction of monotone schemes for the Monge-Ampère equation.

Goals: The currently available methods are 2-D. They need to be optimized and parallelized. A non-trivial extension to 3-D is necessary for many applications. The notion of $c$-convexity appears in optimal transport for generalized displacement costs. How to construct an adapted discretization with "good" numerical properties is however an open problem.

Numerical JKO gradient flows. (Participants: J-D. Benamou, G. Carlier, J-M. Mirebeau, G. Peyré, Q. Mérigot) As detailed in Section 2.3, gradient flows for the Wasserstein metric (a.k.a. JKO gradient flows 112) provide a variational formulation of many non-linear diffusion equations. They also open the way to novel discretization schemes. From a computational point of view, although the JKO scheme is constructive (it is based on the implicit Euler scheme), it has not been used very much in numerical practice because the Wasserstein term is difficult to handle (except in dimension one).

Our expertise: Solving one step of a JKO gradient flow is similar to solving an optimal transport problem. A geometrical approach based on a discretization of the Monge-Ampère operator has been proposed by Mérigot, Carlier, Oudet and Benamou in 44, see Figure 4. The $\Gamma$-convergence of the discretization (in space) has been proved.

Goals: We are also investigating the application of other numerical approaches to Optimal Transport to JKO gradient flows, either based on the CFD formulation or on the entropic regularization of the Monge-Kantorovich problem (see Section 3.2.3). An in-depth study and comparison of all these methods will be necessary.

3.2.2 Sparse Discretization and Optimization

From discrete to continuous sparse regularization and transport. (Participants: V. Duval, G. Peyré, G.
Carlier, Jalal Fadili (ENSICaen), Jérôme Malick (CNRS, Univ. Grenoble)) While pervasive in the numerical analysis community, the problem of discretization and $\Gamma$-convergence from discrete to continuous is surprisingly overlooked in imaging sciences. To the best of our knowledge, our recent work 95, 96 is the first to give a rigorous answer to the transition from discrete to continuous in the case of the spike deconvolution problem. Similar problems of $\Gamma$-convergence are progressively being investigated in the optimal transport community, see in particular 82.

Our expertise: We have provided the first results on the discrete-to-continuous convergence in both sparse regularization variational problems 95, 96 and the static formulation of OT and Wasserstein barycenters 82.

Goals: In a collaboration with Jérôme Malick (INRIA Grenoble), our first goal is to generalize the result of 95 to generic partly-smooth convex regularizers routinely used in imaging science and machine learning, a prototypical example being the nuclear norm (see 149 for a review of this class of functionals). Our second goal is to extend the results of 82 to the novel class of entropic discretization schemes we have proposed 43, to lay out the theoretical foundation of these ground-breaking numerical schemes.

Polynomial optimization for grid-free regularization. (Participants: G. Peyré, V. Duval, I. Waldspurger) There has been a recent spark of interest in the imaging community for so-called "grid-free" methods, where one tries to directly tackle the infinite dimensional recovery problem over the space of measures, see for instance 73, 95. The general idea is that if the range of the imaging operator is finite dimensional, the associated dual optimization problem is also finite dimensional (for deconvolution, it corresponds to optimization over the set of trigonometric polynomials).

Our expertise: We have provided in 95 a sharp analysis of the support recovery property of this class of methods for the case of sparse spikes deconvolution.

Goals: A key bottleneck of these approaches is that, while being finite dimensional, the dual problem necessitates handling a constraint of polynomial positivity, which is notoriously difficult to manipulate (except in the very particular case of 1-D problems, which is the one exposed in 73). A possible, but very costly, methodology is to resort to Lasserre's SDP representation hierarchy 116. We will make use of these approaches and study how restricting the level of the hierarchy (to obtain fast algorithms) impacts the recovery performance (since this corresponds to only computing approximate solutions). We will pay particular attention to the recovery of 2-D piecewise constant functions (the so-called total variation of functions regularization 142), see Figure 3 for some illustrative applications of this method.

3.2.3 First Order Proximal Schemes

$L^2$ proximal methods. (Participants: G. Peyré, J-D. Benamou, G. Carlier, Jalal Fadili (ENSICaen)) Both sparse regularization problems in imaging (see Section 2.4) and dynamical optimal transport (see Section 2.3) are instances of large scale, highly structured, non-smooth convex optimization problems. First order proximal splitting optimization algorithms have recently gained a lot of interest for these applications because they are the only ones capable of scaling to giga-pixel discretizations of images and volumes while at the same time handling non-smooth objective functions.
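A particularly simple member of this family, obtained by measuring proximity with the Kullback-Leibler divergence instead of the Euclidean metric (the Bregman point of view detailed below), is the Sinkhorn algorithm for entropy-regularized optimal transport. The following minimal sketch is our own toy illustration (assuming NumPy), not the production code of the cited works:

```python
import numpy as np

def sinkhorn(mu, nu, C, epsilon=0.05, n_iter=500):
    """Entropic OT via Bregman/KL projections (Sinkhorn iterations):
    alternately match the row and column marginals of K = exp(-C/eps)."""
    K = np.exp(-C / epsilon)
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    P = u[:, None] * K * v[None, :]     # regularized transport plan
    return P, np.sum(P * C)             # plan and transport cost

# Toy example: two discrete measures on the unit interval.
n = 100
x = np.linspace(0, 1, n)
mu = np.exp(-((x - 0.2) ** 2) / 0.01); mu /= mu.sum()
nu = np.exp(-((x - 0.7) ** 2) / 0.02); nu /= nu.sum()
C = (x[:, None] - x[None, :]) ** 2      # quadratic ground cost
P, cost = sinkhorn(mu, nu, C)
print(cost)   # close to (0.7 - 0.2)^2 = 0.25, up to the entropic blur
```

Each iteration is a KL (Bregman) projection onto one of the two marginal constraints; as ε decreases, the plan concentrates and the regularized cost approaches the unregularized one, at the price of slower convergence.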
These proximal solvers have been successfully applied to optimal transport 39, 132, congested optimal transport 67 and sparse regularizations (see for instance 140 and the references therein).

Our expertise: The pioneering work of our team has shown how these proximal solvers can be used to tackle the dynamical optimal transport problem 39, see also 132. We have also recently developed new proximal schemes that can cope with non-smooth composite objective functions 140.

Goals: We aim at extending these solvers to a wider class of variational problems, most notably optimization under divergence constraints 41. Another subject we are investigating is the extension of these solvers to both non-smooth and non-convex objective functionals, which are mandatory to handle more general transportation problems and novel imaging regularization penalties.

Bregman proximal methods. (Participants: G. Peyré, G. Carlier, L. Nenna, J-D. Benamou, Marco Cuturi (Kyoto Univ.)) The entropic regularization of the Kantorovich linear program for OT has been shown to be surprisingly simple and efficient, in particular for applications in machine learning 93. As shown in 43, this is a special instance of the general method of Bregman iterations, which is also a particular instance of first order proximal schemes according to the Kullback-Leibler divergence.

Our expertise: We have recently 43 shown how Bregman projections 58 and Dykstra's algorithm 35 offer a generic optimization framework to solve a variety of generalized OT problems. Carlier and Dupuis 79 have designed a new method based on alternating Dykstra projections and applied it to the principal-agent problem in microeconomics. We have applied this method in computer graphics in a paper accepted at SIGGRAPH 2015 146. Figure 9 shows the potential of our approach to handle giga-voxel datasets: the input volumetric densities are discretized on a $100^3$ computational grid.

Goals: Following some recent works (see in particular 86), we first aim at studying primal-dual optimization schemes according to Bregman divergences (which would go much beyond gradient descent and iterative projections), in order to offer a versatile and very effective framework to solve variational problems involving OT terms. We then also aim at extending the scope of this method to applications in quantum mechanics (Density Functional Theory, see 90) and fluid dynamics (Brenier's weak solutions of the incompressible Euler equation, see 61). The computational challenge is that realistic physical examples are of huge size, not only because of the space discretization of one marginal but also because of the large number of marginals involved (for incompressible Euler, the number of marginals equals the number of time steps).

4 Application domains

4.1 Natural Sciences

Freeform optics, fluid mechanics (incompressible Euler, semi-geostrophic equations), quantum chemistry (Density Functional Theory), statistical physics (Schrödinger problem), porous media.

4.2 Signal Processing and inverse problems

Full Waveform Inversion (geophysics), super-resolution microscopy (biology), satellite imaging (meteorology).

4.3 Social Sciences

Mean-field games, spatial economics, principal-agent models, taxation, nonlinear pricing.
4 Application domains 4.1 Natural Sciences FreeForm Optics, Fluid Mechanics (Incompressible Euler, Semi-Geostrophic equations), Quantum Chemistry (Density Functional Theory), Statistical Physics (Schrödinger problem), Porous Media. 4.2 Signal Processing and inverse problems Full Waveform Inversion (Geophysics), Super-resolution microscopy (Biology), Satellite imaging (Meteorology). 4.3 Social Sciences Mean-field games, spatial economics, principal-agent models, taxation, nonlinear pricing. 5 New results 5.1 A mean field game model for the evolution of cities César Barilla, Guillaume Carlier, Jean-Michel Lasry We propose a (toy) MFG model for the evolution of resident and firm densities, coupled both by labour market equilibrium conditions and by competition for land use (congestion). This results in a system of two Hamilton-Jacobi-Bellman and two Fokker-Planck equations with a new form of coupling related to optimal transport. This MFG has a convex potential, which enables us to find weak solutions by a variational approach. In the case of quadratic Hamiltonians, the problem can be reformulated in Lagrangian terms and solved numerically by an IPFP/Sinkhorn-like scheme. We present numerical results based on this approach; these simulations exhibit different behaviours, with either agglomeration or segregation dominating depending on the initial conditions and parameters. 5.2 Optimal transportation, modelling and numerical simulation Jean-David Benamou We present an overview of the basic theory, of modern optimal transportation extensions and of recent algorithmic advances. Selected modelling and numerical applications illustrate the impact of optimal transportation in numerical analysis.
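As background for this overview and for several entries below (notably 5.6 and 5.7), it may help to recall the dynamic ("Benamou-Brenier") formulation of quadratic optimal transport introduced in [39]; the display is our own paraphrase in standard notation, not a quote from the report.

\[ W_2^2(\mu,\nu) \;=\; \min_{(\rho,v)} \int_0^1 \!\!\int_{\mathbb{R}^d} \rho_t(x)\,|v_t(x)|^2 \, dx \, dt \]
\[ \text{subject to} \quad \partial_t \rho_t + \nabla\cdot(\rho_t v_t) = 0, \qquad \rho_0 = \mu, \quad \rho_1 = \nu. \]

This convex reformulation over a density/velocity pair is what makes the proximal-splitting and finite-volume discretizations discussed elsewhere in this report applicable.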
5.3 Entropic-Wasserstein barycenters: PDE characterization, regularity and CLT Guillaume Carlier, Katharina Eichinger, Alexey Kroshnin In this paper, we investigate properties of entropy-penalized Wasserstein barycenters as a regularization of Wasserstein barycenters. After characterizing these barycenters in terms of a system of Monge-Ampère equations, we prove some global moment and Sobolev bounds as well as higher regularity properties. We finally establish a central limit theorem for entropic-Wasserstein barycenters. 5.4 Stability of optimal traffic plans in the irrigation problem Maria Colombo, Antonio De Rosa, Andrea Marchese, Paul Pegon, Antoine Prouff We prove the stability of optimal traffic plans in branched transport. In particular, we show that any limit of optimal traffic plans is optimal as well. This result goes beyond the Eulerian stability proved in [Colombo, De Rosa, Marchese; 2021], extending it to the Lagrangian framework. 5.5 An Epigraphical Approach to the Representer Theorem Vincent Duval Describing the solutions of inverse problems arising in signal or image processing is an important issue for both theoretical and numerical purposes. We propose a principle which describes the solutions to convex variational problems involving a finite number of measurements. We discuss its optimality on various problems concerning the recovery of Radon measures. 5.6 Convergence of a Lagrangian discretization for barotropic fluids and porous media flow Thomas Gallouët, Quentin Merigot, Andrea Natale When expressed in Lagrangian variables, the equations of motion for compressible (barotropic) fluids have the structure of a classical Hamiltonian system in which the potential energy is given by the internal energy of the fluid. The dissipative counterpart of such a system coincides with the porous medium equation, which can be cast in the form of a gradient flow for the same internal energy. Motivated by these related variational structures, we propose a particle method for both problems in which the internal energy is replaced by its Moreau-Yosida regularization in the $L^2$ sense, which can be efficiently computed as a semi-discrete optimal transport problem. Using a modulated energy argument which exploits the convexity of the problem in Eulerian variables, we prove quantitative convergence estimates towards smooth solutions. We verify these estimates by means of several numerical tests. 5.7 Computation of optimal transport with finite volumes Andrea Natale, Gabriele Todeschi We construct Two-Point Flux Approximation (TPFA) finite volume schemes to solve the quadratic optimal transport problem in its dynamic form, namely the problem originally introduced by Benamou and Brenier. We show numerically that these types of discretizations are prone to form instabilities in their most natural implementation, and we propose a variation based on nested meshes in order to overcome these issues. Despite the lack of strict convexity of the problem, we also derive quantitative estimates on the convergence of the method, at least for the discrete potential and the discrete cost. Finally, we introduce a strategy based on the barrier method to solve the discrete optimization problem. 5.8 A Dimension-free Computational Upper-bound for Smooth Optimal Transport Estimation Adrien Vacher, Boris Muzellec, Alessandro Rudi, Francis Bach, François-Xavier Vialard It is well known that plug-in statistical estimation of optimal transport suffers from the curse of dimensionality. Despite recent efforts to improve the rate of estimation with the smoothness of the problem, the computational complexity of these recently proposed methods still degrades exponentially with the dimension. In this paper, thanks to an infinite-dimensional sum-of-squares representation, we derive a statistical estimator of smooth optimal transport which achieves a precision $\epsilon$ from $O(\epsilon^{-2})$ independent and identically distributed samples from the distributions, for a computational cost of $O(\epsilon^{-4})$ when the smoothness increases, hence yielding dimension-free statistical and computational rates, with potentially exponentially dimension-dependent constants. 5.9 A spatial Pareto exchange economy problem Xavier Bacon, Guillaume Carlier, Bruno Nazaret We use convex duality techniques to study a spatial Pareto problem with transport costs and derive a spatial second welfare theorem. The existence of an integrable equilibrium distribution of quantities is nontrivial and is established under general monotonicity assumptions. Our variational approach also enables us to give a numerical algorithm à la Sinkhorn and to present simulations for equilibrium prices and quantities in one-dimensional domains and on a network of French cities. 5.10 Point Source Regularization of the Finite Source Reflector Problem Jean-David Benamou, Guillaume Chazareix, Wilbert L Ijzerman, Giorgi Rukhaia We address the "freeform optics" inverse problem of designing a reflector surface mapping a prescribed source distribution of light to a prescribed far-field distribution, for a finite light source. When the finite source reduces to a point source, the light source distribution has support only on the optics ray directions. In this setting the inverse problem is well posed for arbitrary source and target probability distributions. It can be recast as an Optimal Transportation problem and has been studied both mathematically and numerically. We are not aware of any similar mathematical formulation in the finite source case: i.e., the source has an "étendue" with support both in space and in directions.
We propose to leverage the well-posed variational formulation of the point source problem to build a smooth parameterization of the reflector and of the reflection map. Under this parameterization we can construct a smooth loss/misfit function to optimize for the best solution in this class of reflectors. Both steps, the parameterization and the loss, are related to Optimal Transportation distances. We also take advantage of recent progress in the numerical approximation and resolution of these mathematical objects to perform a numerical study. 5.11 Stability in Gagliardo-Nirenberg-Sobolev inequalities: flows, regularity and the entropy method Matteo Bonforte, Jean Dolbeault, Bruno Nazaret, Nikita Simonov The purpose of this work is to establish a quantitative and constructive stability result for a class of subcritical Gagliardo-Nirenberg-Sobolev inequalities which interpolates between the logarithmic Sobolev inequality and the standard Sobolev inequality (in dimension larger than three), or Onofri's inequality in dimension two. We develop a new strategy, in which the flow of the fast diffusion equation is used as a tool: a stability result in the inequality is equivalent to an improved rate of convergence to equilibrium for the flow. The regularity properties of the parabolic flow allow us to connect an improved entropy-entropy production inequality during an initial time layer to spectral properties of a suitable linearized problem which is relevant for the asymptotic time layer. Altogether, the stability in the inequalities is measured by a deficit which controls in strong norms (a Fisher information which can be interpreted as a generalized Heisenberg uncertainty principle) the distance to the manifold of optimal functions. The method is constructive and, for the first time, quantitative estimates of the stability constant are obtained, including in the critical case of Sobolev's inequality. To build the estimates, we establish a quantitative global Harnack principle and perform a detailed analysis of large-time asymptotics by entropy methods. 5.12 SISTA: learning optimal transport costs under sparsity constraints Guillaume Carlier, Arnaud Dupuy, Alfred Galichon, Yifei Sun In this paper, we describe a novel iterative procedure called SISTA to learn the underlying cost in optimal transport problems. SISTA is a hybrid between two classical methods, coordinate descent ("S"-inkhorn) and proximal gradient descent ("ISTA"). It alternates between a phase of exact minimization over the transport potentials and a phase of proximal gradient descent over the parameters of the transport cost. We prove that this method converges linearly, and we illustrate on simulated examples that it is significantly faster than both coordinate descent and ISTA. We apply it to estimating a model of migration, which predicts the flow of migrants using country-specific characteristics and pairwise measures of dissimilarity between countries. This application demonstrates the effectiveness of machine learning in quantitative social sciences. 5.13 Convex geometry of finite exchangeable laws and de Finetti style representation with universal correlated corrections Guillaume Carlier, Gero Friesecke, Daniela Vögler We present a novel analogue for finite exchangeable sequences of the de Finetti, Hewitt and Savage theorem and investigate its implications for multi-marginal optimal transport (MMOT) and Bayesian statistics.
If $(Z_1, \ldots, Z_N)$ is a finitely exchangeable sequence of $N$ random variables taking values in some Polish space $X$, we show that the law $\mu_k$ of the first $k$ components has a representation of the form given in the paper. 5.14 On the linear convergence of the multi-marginal Sinkhorn algorithm Guillaume Carlier The aim of this short note is to give an elementary proof of linear convergence of the Sinkhorn algorithm for the entropic regularization of multi-marginal optimal transport. The proof simply relies on: i) the fact that Sinkhorn iterates are bounded, ii) the strong convexity of the exponential on bounded intervals and iii) the convergence analysis of the coordinate descent (Gauss-Seidel) method of Beck and Tetruashvili. 5.15 "FISTA" in Banach spaces with adaptive discretisations Antonin Chambolle, Robert Tovey FISTA is a popular convex optimisation algorithm which is known to converge at an optimal rate whenever the optimisation domain is contained in a suitable Hilbert space. We propose a modified algorithm in which each iteration is performed in a subspace, and that subspace is allowed to change at every iteration. Analytically, this allows us to guarantee convergence in a Banach space setting, although at a reduced rate depending on the conditioning of the specific problem. Numerically, we show that a greedy adaptive choice of discretisation can greatly increase the time and memory efficiency in infinite-dimensional Lasso optimisation problems. A sketch of the classical FISTA iteration that this work modifies is given below.
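For readers less familiar with the baseline algorithm, the following Python fragment is a minimal sketch of the classical FISTA iteration of Beck and Teboulle applied to the finite-dimensional Lasso; it is our own illustration (the matrix A, data b and parameter lam are invented for the example) and does not reproduce the adaptive-subspace variant of the entry above.

import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=200):
    # Classical FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1.
    # The paper above replaces each full-space step by a step in an
    # adaptively chosen subspace, which we do not attempt to reproduce.
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)  # proximal gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2       # momentum coefficient
        y = x_new + ((t - 1) / t_new) * (x_new - x)    # extrapolation
        x, t = x_new, t_new
    return x

# Usage on a small random sparse-recovery instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x0 = np.zeros(200)
x0[[5, 50, 120]] = [1.0, -2.0, 0.5]
x_hat = fista_lasso(A, A @ x0, lam=0.1)
print(np.round(x_hat[[5, 50, 120]], 2))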
5.16 Towards Off-the-grid Algorithms for Total Variation Regularized Inverse Problems Yohann De Castro, Vincent Duval, Romain Petit We introduce an algorithm to solve linear inverse problems regularized with the total (gradient) variation in a gridless manner. Contrary to most existing methods, which produce an approximate solution that is piecewise constant on a fixed mesh, our approach exploits the structure of the solutions and consists in iteratively constructing a linear combination of indicator functions of simple polygons. 5.17 Mass concentration in rescaled first order integral functionals Antonin Monteil, Paul Pegon We consider first-order local minimization problems $\min \int_{\mathbb{R}^N} f(u,\nabla u)$ under a mass constraint $\int_{\mathbb{R}^N} u = m \in \mathbb{R}$. We prove that the minimal energy function $H(m)$ is always concave on $(-\infty,0)$ and $(0,+\infty)$, and that relevant rescalings of the energy, depending on a small parameter $\varepsilon$, $\Gamma$-converge in the weak topology of measures towards the $H$-mass, defined for atomic measures $\sum_i m_i \delta_{x_i}$ as $\sum_i H(m_i)$. We also consider space-dependent Lagrangians $f(x,u,\nabla u)$, which cover the case of space-dependent $H$-masses $\sum_i H(x_i,m_i)$, and also the case of a family of Lagrangians $(f_\varepsilon)_\varepsilon$ converging as $\varepsilon \to 0$. The $\Gamma$-convergence result holds under mild assumptions on $f$ and covers several situations, including homogeneous $H$-masses in any dimension $N \geq 2$ for exponents above a critical threshold, and all concave $H$-masses in dimension $N=1$. Our result yields in particular the concentration of Cahn-Hilliard fluids into droplets, and is related to the approximation of branched transport by elliptic energies. 5.18 Near-optimal estimation of smooth transport maps with kernel sums-of-squares Boris Muzellec, Adrien Vacher, Francis Bach, François-Xavier Vialard, Alessandro Rudi It was recently shown that, under smoothness conditions, the squared Wasserstein distance between two distributions can be efficiently computed with appealing statistical error upper bounds. However, rather than the distance itself, the object of interest for applications such as generative modeling is the underlying optimal transport map. Hence, computational and statistical guarantees need to be obtained for the estimated maps themselves. In this paper, we propose the first tractable algorithm for which the statistical $L^2$ error on the maps nearly matches the existing minimax lower bounds for smooth map estimation. Our method is based on solving the semi-dual formulation of optimal transport with an infinite-dimensional sum-of-squares reformulation, and leads to an algorithm which has dimension-free polynomial rates in the number of samples, with potentially exponentially dimension-dependent constants. 5.19 Model-based Clustering with Missing Not At Random Data Aude Sportisse, Christophe Biernacki, Claire Boyer, Julie Josse, Matthieu Marbac Lourdelle, Gilles Celeux, Fabien Laporte In recent decades, technological advances have made it possible to collect large data sets. In this context, model-based clustering is a very popular, flexible and interpretable methodology for data exploration in a well-defined statistical framework. One irony of the increase in large datasets is that missing values become more frequent. However, traditional approaches (such as discarding observations with missing values, or imputation methods) are not designed for the clustering purpose. In addition, they rarely apply to the general case, though frequent in practice, of Missing Not At Random (MNAR) values, i.e. when the missingness depends on the unobserved data values and possibly on the observed data values. The goal of this paper is to propose a novel approach that embeds MNAR data directly within model-based clustering algorithms. We introduce a selection model for the joint distribution of data and missing-data indicator. It corresponds to a mixture model for the data distribution and a general MNAR model for the missing-data mechanism, which may depend on the underlying (unknown) classes and/or the values of the missing variables themselves. A large set of meaningful MNAR sub-models is derived, and the identifiability of the parameters is studied for each of the sub-models, which is usually a key issue for any MNAR proposal. The EM and Stochastic EM algorithms are considered for estimation. Finally, we perform empirical evaluations of the proposed sub-models on synthetic data and illustrate the relevance of our method on a medical register, the TraumaBase® dataset. 5.20 Dynamical Programming for off-the-grid dynamic Inverse Problems Robert Tovey, Vincent Duval In this work we consider algorithms for reconstructing time-varying data into a finite sum of discrete trajectories or, equivalently, an off-the-grid sparse-spikes decomposition which is continuous in time. Recent work showed that this decomposition is possible by minimising a convex variational model which combines a quadratic data fidelity with dynamical Optimal Transport. We generalise this framework and propose new numerical methods which leverage efficient classical algorithms for computing shortest paths on directed acyclic graphs.
Our theoretical analysis confirms that these methods converge to globally optimal reconstructions which represent a finite number of discrete trajectories. Numerically, we show new examples for unbalanced Optimal Transport penalties; for balanced examples, our method is 100 times faster than the previously known method. 5.21 Convex transport potential selection with semi-dual criterion Adrien Vacher, François-Xavier Vialard Over the past few years, numerous computational models have been developed to solve Optimal Transport (OT) in a stochastic setting, where distributions are represented by samples. In such situations, the goal is to find a transport map that has good generalization properties on unseen data, ideally the closest map to the ground truth, which is unknown in practical settings. However, in the absence of ground truth, no quantitative criterion had been put forward to measure generalization performance, although it is crucial for model selection. We propose to leverage the Brenier formulation of OT to perform this task. Theoretically, we show that this formulation guarantees that, up to a distortion parameter that depends on the smoothness/strong convexity and a statistical deviation term, the selected map achieves the lowest quadratic error to the ground truth. This criterion, estimated via convex optimization, enables parameter and model selection among entropic regularization of OT, input convex neural networks, and smooth and strongly convex nearest-Brenier (SSNB) models. Finally, we present an experiment questioning the use of OT in domain adaptation. Thanks to the criterion, we can identify the potential that is closest to the true OT map between the source and the target, and we observe that this selected potential is not the one that performs best for the downstream transfer classification task. 5.22 Lecture notes on non-convex algorithms for low-rank matrix recovery Irène Waldspurger Low-rank matrix recovery problems are inverse problems which naturally arise in various fields like signal processing, imaging and machine learning. They are non-convex and NP-hard in full generality. It is therefore a delicate problem to design efficient recovery algorithms and to provide rigorous theoretical insights on the behavior of these algorithms. The goal of these notes is to review recent progress in this direction for the class of so-called "non-convex algorithms", with a particular focus on the proof techniques. Although they present very recent research works, these notes have been written with the intent to be, as much as possible, accessible to non-specialists. They were written for an eight-hour lecture at the Collège de France; the original version, in French, is available online, and the videos of the lecture can be found on the Collège de France website. 6 Partnerships and cooperations 6.1 European initiatives 6.1.1 FP7 & H2020 projects ROMSOC (657) • Title: Reduced Order Modelling, Simulation and Optimization of Coupled systems • Partners: • ABB SCHWEIZ AG (Switzerland) • ARCELORMITTAL INNOVACION INVESTIGACION E INVERSION SL (Spain) • BERGISCHE UNIVERSITAET WUPPERTAL (Germany) • CorWave (France) • DB Schenker Rail Polska S.A.
(Poland) • FRIEDRICH-ALEXANDER-UNIVERSITAET ERLANGEN NUERNBERG (Germany) • MATHCONSULT GMBH (Austria) • MICROFLOWN TECHNOLOGIES BV (Netherlands) • Math.Tec GmbH (Austria) • PHILIPS LIGHTING BV (Netherlands) • POLITECNICO DI MILANO (Italy) • SAGIV TECH LTD (Israel) • SCUOLA INTERNAZIONALE SUPERIORE DI STUDI AVANZATI DI TRIESTE (Italy) • STICHTING EUROPEAN SERVICE NETWORK OF MATHEMATICS FOR INDUSTRY AND INNOVATION (Netherlands) • TECHNISCHE UNIVERSITAT BERLIN (Germany) • UNIVERSITAET BREMEN (Germany) • UNIVERSITAT LINZ (Austria) • Inria contact: J-D. Benamou • Summary: Industrial Doctorate project (https://www.romsoc.eu/) that will run for four years, bringing together 15 international academic institutions and 11 industry partners. It supports the recruitment of eleven Early Stage Researchers working on individual research projects. Mokaplan partnered with Signify (https://www.signify.com/) in the context of G. Rukhaia's PhD. 7 Dissemination 7.1 Promoting scientific activities 7.1.1 Scientific events: organisation • Organization of the MFO (Oberwolfach) workshop "Applications of Optimal Transportation in the Natural Sciences II". • Organization of the "Paris workshop on optimal transport with applications to economics and statistics" (CERI, Oct. 2021). • Workshop "Schrödinger Problem and Mean-field PDE Systems" (CIRM, Nov. 2021). • P. Pegon is a co-organizer of the workgroup on Calculus of Variations GT CalVa. • G. Carlier co-organizes the monthly Séminaire Parisien d'Optimisation. 7.1.2 Scientific events: selection Member of the editorial boards: G. Carlier is on the editorial boards of Journal de l'École Polytechnique, Applied Math and Opt., Journal of Mathematical Analysis and Applications, Mathematics and Financial Economics, and Journal of Dynamics and Games. I. Waldspurger is an associate editor for the IEEE Transactions on Signal Processing. 7.1.3 Invited talks • G. Carlier: Collège de France ??? • V. Duval: seminar of the GT CalVa workgroup. • P. Pegon: Analysis & PDE Seminar at Durham University (online) and Séminaire Parisien d'Optimisation (SPO) at IHP. 7.2 Teaching - Supervision - Juries 7.2.1 Teaching • Master: V. Duval, Problèmes Inverses, 22,5 h équivalent TD, niveau M1, Université PSL/Mines ParisTech, FR • Master: V. Duval, Optimization for Machine Learning, 9 h, niveau M2, Université PSL/ENS, FR • Licence: I. Waldspurger, Pré-rentrée calcul, 31,2 h équivalent TD, niveau L1, Université Paris-Dauphine, FR • Licence: I. Waldspurger, Analyse 2, 50,7 h équivalent TD, niveau L1, Université Paris-Dauphine, FR • Master: I. Waldspurger, Optimization for Machine Learning, 6 h, niveau M2, Université PSL/ENS, FR • Licence: G. Carlier, Algèbre 1, L1, 78 h, Dauphine, FR • Master: G. Carlier, Variational and transport methods in economics, M2 Masef, 27 h, Dauphine, FR • Licence: P. Pegon, Analyse 2 & 3, 102 h équivalent TD, TD niveau L1-L2, Université Paris-Dauphine, FR • Licence: P. Pegon, Intégrale de Lebesgue et probabilités, 44 h équivalent TD, TD niveau L3, Université Paris-Dauphine, FR • Master: P. Pegon, Pré-rentrée d'analyse, 16 h équivalent TD, cours/TD niveau M1, Université Paris-Dauphine, FR • Licence: T. O. Gallouët, Optimisation, 24 h équivalent TD, niveau L3, Université d'Orsay, FR • G. Carlier: Licence Algèbre 1, Dauphine, 70 h; M2 Masef: Variational and transport problems in economics, 18 h 7.2.2 Supervision • PhD completed: Giorgi Rukhaia, A FreeForm Optics Application of Entropic Optimal Transport. Supervised by J-D. Benamou.
• PhD completed: Miao Yu, Entropic Unbalanced Optimal Transport: Application to Full-Waveform Inversion and Numerical Illustration. Supervised by J-D. Benamou and Jean-Pierre Vilotte. • PhD in progress: Romain Petit, Méthodes sans grille pour l'imagerie, 01/10/2019. Supervised by V. Duval. • PhD in progress: Joao-Miguel Machado, Transport optimal et structures géométriques, 01/10/2021. Co-supervised by V. Duval and A. Chambolle. • PhD in progress: Adrien Vacher, 1/10/2020. Co-supervised by F-X. Vialard and J-D. Benamou. • PhD in progress: Quentin Petit, mean-field games for cities modeling, 1/09/2018. Co-supervised by G. Carlier, Y. Achdou and D. Tonon. • PhD in progress: Katharina Eichinger, Systems of Monge-Ampère equations: a variational approach, 1/09/2019. Supervised by G. Carlier. • PhD completed: Gabriele Todeschi, Optimal transport and finite volume schemes. Supervised by T. O. Gallouët. 7.2.3 Juries • J-D. Benamou, PhD defense of Pierre Lavigne. • V. Duval, PhD defense of Zhanhao Liu. 7.2.4 Internal or external Inria responsibilities V. Duval is a member of the Comité de Suivi Doctoral (CSD) and of the Comité des Emplois Scientifiques (CES) of the Inria Paris research center. 7.2.5 Education I. Waldspurger gave three talks for high-school or undergraduate students. 8 Scientific production 8.1 Publications of the year International journals • 1. C. Barilla, G. Carlier and J.-M. Lasry. A mean field game model for the evolution of cities. Journal of Dynamics and Games, 2021. • 2. J.-D. Benamou. Optimal transportation, modelling and numerical simulation. Acta Numerica, 30, May 2021, 249-325. • 3. G. Carlier, K. Eichinger and A. Kroshnin. Entropic-Wasserstein barycenters: PDE characterization, regularity and CLT. SIAM Journal on Mathematical Analysis, 2021. • 4. M. Colombo, A. De Rosa, A. Marchese, P. Pegon and A. Prouff. Stability of optimal traffic plans in the irrigation problem. Discrete and Continuous Dynamical Systems - Series A, 2022. • 5. V. Duval. An Epigraphical Approach to the Representer Theorem. Journal of Convex Analysis, 28(3), 2021, https://www.heldermann.de/JCA/JCA28/JCA283/jca28047.htm. • 6. T. O. Gallouët, Q. Merigot and A. Natale. Convergence of a Lagrangian discretization for barotropic fluids and porous media flow. SIAM Journal on Mathematical Analysis, 2021. • 7. A. Natale and G. Todeschi. Computation of optimal transport with finite volumes. ESAIM: Mathematical Modelling and Numerical Analysis, 55(5), September 2021, 1847-1871. Conferences without proceedings • 8. C. Biernacki, C. Boyer, G. Celeux, J. Josse, F. Laporte, M. Marbac Lourdelle and A. Sportisse. Dealing with missing data in model-based clustering through a MNAR model. The 14th Professor Aleksander Zeliaś International Conference on Modelling and Forecasting of Socio-Economic Phenomena, Zakopane, Poland, May 2021. • 9. C. Biernacki, C. Boyer, G. Celeux, J. Josse, F. Laporte, M. Marbac Lourdelle, A. Sportisse and V. Vandewalle. Impact of Missing Data on Mixtures and Clustering. MHC2021 - Mixtures, Hidden Markov Models, Clustering, Orsay, France, June 2021.
• 10. A. Vacher, B. Muzellec, A. Rudi, F. Bach and F.-X. Vialard. A Dimension-free Computational Upper-bound for Smooth Optimal Transport Estimation. COLT 2021 - 34th Annual Conference on Learning Theory, Boulder, United States, August 2021. Scientific book chapters • 11. Y. De Castro, V. Duval and R. Petit. Towards Off-the-grid Algorithms for Total Variation Regularized Inverse Problems. In: Scale Space and Variational Methods in Computer Vision, Lecture Notes in Computer Science 12679, Springer, Cham, April 2021, 553-564. Doctoral dissertations and habilitation theses • 12. G. Rukhaia. A FreeForm Optics Application of Entropic Optimal Transport. PSL Université Paris Dauphine; INRIA Paris, November 2021. • 13. G. Todeschi. Finite volume approximation of optimal transport and Wasserstein gradient flows. PSL Université Paris Dauphine, December 2021. • 14. M. Yu. Entropic Unbalanced Optimal Transport: Application to Full-Waveform Inversion and Numerical Illustration. Université de Paris, December 2021. Reports & preprints • 15. X. Bacon, G. Carlier and B. Nazaret. A spatial Pareto exchange economy problem. December 2021. • 16. J.-D. Benamou, G. Chazareix, W. L. Ijzerman and G. Rukhaia. Point Source Regularization of the Finite Source Reflector Problem. September 2021. • 17. M. Bonforte, J. Dolbeault, B. Nazaret and N. Simonov. Stability in Gagliardo-Nirenberg-Sobolev inequalities: flows, regularity and the entropy method. April 2021. • 18. G. Carlier, A. Dupuy, A. Galichon and Y. Sun. SISTA: learning optimal transport costs under sparsity constraints. December 2021. • 19. G. Carlier, G. Friesecke and D. Vögler. Convex geometry of finite exchangeable laws and de Finetti style representation with universal correlated corrections. December 2021. • 20. G. Carlier. On the linear convergence of the multi-marginal Sinkhorn algorithm. March 2021. • 21. A. Chambolle and R. Tovey. "FISTA" in Banach spaces with adaptive discretisations. January 2021. • 22. Y. De Castro, V. Duval and R. Petit. Towards Off-the-grid Algorithms for Total Variation Regularized Inverse Problems. November 2021. • 23. A. Monteil and P. Pegon. Mass concentration in rescaled first order integral functionals. January 2022. • 24. B. Muzellec, A. Vacher, F. Bach, F.-X. Vialard and A. Rudi. Near-optimal estimation of smooth transport maps with kernel sums-of-squares. December 2021. • 25. A. Sportisse, C. Biernacki, C. Boyer, J. Josse, M. Marbac Lourdelle, G. Celeux and F. Laporte. Model-based Clustering with Missing Not At Random Data. December 2021. • 26. R. Tovey and V. Duval. Dynamical Programming for off-the-grid dynamic Inverse Problems. December 2021. • 27. A. Vacher and F.-X. Vialard. Convex transport potential selection with semi-dual criterion. December 2021. • 28. I. Waldspurger. Lecture notes on non-convex algorithms for low-rank matrix recovery. May 2021. 8.2 Cited publications • 29. I. Abraham, R. Abraham, M. Bergounioux and G. Carlier. Tomographic reconstruction from a few views: a multi-marginal optimal transport approach. Preprint hal-01065981, 2014. • 30. Y. Achdou and V. Perez. Iterative strategies for solving linearized discrete mean field games systems. Netw. Heterog. Media, 7(2), 2012, 197-217. • 31. M. Agueh and G. Carlier. Barycenters in the Wasserstein space. SIAM J. Math. Anal., 43(2), 2011, 904-924.
• 32. F. Alter, V. Caselles and A. Chambolle. Evolution of Convex Sets in the Plane by Minimizing the Total Variation Flow. Interfaces and Free Boundaries, 332, 2005, 329-366. • 33. F. R. Bach. Consistency of Trace Norm Minimization. J. Mach. Learn. Res., 9, June 2008, 1019-1048. • 34. F. R. Bach. Consistency of the Group Lasso and Multiple Kernel Learning. J. Mach. Learn. Res., 9, June 2008, 1179-1225. • 35. H. H. Bauschke and P. L. Combettes. A Dykstra-like algorithm for two monotone operators. Pacific Journal of Optimization, 4(3), 2008, 383-391. • 36. M. F. Beg, M. I. Miller, A. Trouvé and L. Younes. Computing Large Deformation Metric Mappings via Geodesic Flows of Diffeomorphisms. International Journal of Computer Vision, 61(2), February 2005, 139-157. • 37. M. Beiglbock, P. Henry-Labordère and F. Penkner. Model-independent bounds for option prices: mass transport approach. Finance and Stochastics, 17(3), 2013, 477-501. • 38. G. Bellettini, V. Caselles and M. Novaga. The Total Variation Flow in $\mathbb{R}^N$. J. Differential Equations, 184(2), 2002, 475-525. • 39. J.-D. Benamou and Y. Brenier. A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numer. Math., 84(3), 2000, 375-393. • 40. J.-D. Benamou and Y. Brenier. Weak existence for the semigeostrophic equations formulated as a coupled Monge-Ampère/transport problem. SIAM J. Appl. Math., 58(5), 1998, 1450-1461. • 41. J.-D. Benamou and G. Carlier. Augmented Lagrangian algorithms for variational problems with divergence constraints. JOTA, 2015. • 42. J.-D. Benamou, G. Carlier and N. Bonne. An Augmented Lagrangian Numerical approach to solving Mean-Field Games. Technical report, INRIA, December 2013, 30 pp. • 43. J.-D. Benamou, G. Carlier, M. Cuturi, L. Nenna and G. Peyré. Iterative Bregman Projections for Regularized Transportation Problems. SIAM J. Sci. Comp., to appear, 2015. • 44. J.-D. Benamou, G. Carlier, Q. Mérigot and É. Oudet. Discretization of functionals involving the Monge-Ampère operator. Technical report, HAL, July 2014. • 45. J.-D. Benamou, F. Collino and J.-M. Mirebeau. Monotone and Consistent discretization of the Monge-Ampère operator. arXiv preprint arXiv:1409.6694, to appear in Math. of Comp., 2014. • 46. J.-D. Benamou, B. D. Froese and A. Oberman. Numerical solution of the optimal transportation problem using the Monge-Ampère equation. Journal of Computational Physics, 260, 2014, 107-126. • 47. J.-D. Benamou, B. D. Froese and A. Oberman. Two numerical methods for the elliptic Monge-Ampère equation. M2AN Math. Model. Numer. Anal., 44(4), 2010, 737-758. • 48. F. Benmansour, G. Carlier, G. Peyré and F. Santambrogio. Numerical approximation of continuous traffic congestion equilibria. Netw. Heterog. Media, 4(3), 2009, 605-623. • 49. M. Benning and M. Burger. Ground states and singular vectors of convex variational regularization methods. Meth. Appl. Analysis, 20, 2013, 295-334. • 50. B. Berkels, A. Effland and M. Rumpf. Time discrete geodesic paths in the space of images. arXiv preprint, 2014. • 51. J. Bigot and T. Klein. Consistent estimation of a population barycenter in the Wasserstein space. Preprint arXiv:1212.2562, 2012. • 52. A. Blanchet and P. Laurençot. The parabolic-parabolic Keller-Segel system with critical diffusion as a gradient flow in $\mathbb{R}^d$, $d \ge 3$. Comm. Partial Differential Equations, 38(4), 2013, 658-686.
• 53. J. Bleyer, G. Carlier, V. Duval, J.-M. Mirebeau and G. Peyré. A Convergence Result for the Upper Bound Limit Analysis of Plates. ESAIM: Mathematical Modelling and Numerical Analysis, 50(1), January 2016, 215-235. • 54. N. Bonneel, J. Rabin, G. Peyré and H. Pfister. Sliced and Radon Wasserstein Barycenters of Measures. Journal of Mathematical Imaging and Vision, 51(1), 2015, 22-45. • 55. U. Boscain, R. Chertovskih, J.-P. Gauthier, D. Prandi and A. Remizov. Highly corrupted image inpainting through hypoelliptic diffusion. Preprint CMAP, 2014. • 56. G. Bouchitté and G. Buttazzo. Characterization of optimal shapes and masses through Monge-Kantorovich equation. J. Eur. Math. Soc. (JEMS), 3(2), 2001, 139-168. • 57. L. Brasco, G. Carlier and F. Santambrogio. Congested traffic dynamics, weak flows and very degenerate elliptic equations. J. Math. Pures Appl. (9), 93(6), 2010, 652-671. • 58. L. M. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7(3), 1967, 200-217. • 59. Y. Brenier. Décomposition polaire et réarrangement monotone des champs de vecteurs. C. R. Acad. Sci. Paris Sér. I Math., 305(19), 1987, 805-808. • 60. Y. Brenier, U. Frisch, M. Henon, G. Loeper, S. Matarrese, R. Mohayaee and A. Sobolevski. Reconstruction of the early universe as a convex optimization problem. Mon. Not. Roy. Astron. Soc., 346, 2003, 501-524. • 61. Y. Brenier. Generalized solutions and hydrostatic approximation of the Euler equations. Phys. D, 237(14-17), 2008, 1982-1988. • 62. Y. Brenier. Polar factorization and monotone rearrangement of vector-valued functions. Comm. Pure Appl. Math., 44(4), 1991, 375-417. • 63. M. Bruveris, L. Risser and F.-X. Vialard. Mixture of Kernels and Iterated Semidirect Product of Diffeomorphisms Groups. Multiscale Modeling & Simulation, 10(4), 2012, 1344-1368. • 64. M. Burger, M. DiFrancesco, P. Markowich and M. T. Wolfram. Mean field games with nonlinear mobilities in pedestrian dynamics. DCDS B, 19, 2014. • 65. M. Burger, M. Franek and C. Schonlieb. Regularized regression and density estimation based on optimal transport. Appl. Math. Res. Expr., 2, 2012, 209-253. • 66. M. Burger and S. Osher. A guide to the TV zoo. In: Level-Set and PDE-based Reconstruction Methods, Springer, 2013. • 67. G. Buttazzo, C. Jimenez and É. Oudet. An optimization problem for mass transportation with congested dynamics. SIAM J. Control Optim., 48(3), 2009, 1961-1976. • 68. H. Byrne and D. Drasdo. Individual-based and continuum models of growing cell populations: a comparison. Journal of Mathematical Biology, 58(4-5), 2009, 657-687. • 69. L. A. Caffarelli, S. A. Kochengin and V. I. Oliker. On the numerical solution of the problem of reflector design with given far-field scattering data. In: Monge Ampère equation: applications to geometry and optimization (Deerfield Beach, FL, 1997), Contemp. Math. 226, Amer. Math. Soc., Providence, RI, 1999, 13-32. • 70. L. A. Caffarelli. The regularity of mappings with a convex potential. J. Amer. Math. Soc., 5(1), 1992, 99-104. • 71. C. CanCeritoglu. Computational Analysis of LDDMM for Brain Mapping. Frontiers in Neuroscience, 7, 2013.
• 72. E. J. Candès and C. Fernandez-Granda. Super-Resolution from Noisy Data. Journal of Fourier Analysis and Applications, 19(6), 2013, 1229-1254. • 73. E. J. Candès and C. Fernandez-Granda. Towards a Mathematical Theory of Super-Resolution. Communications on Pure and Applied Mathematics, 67(6), 2014, 906-956. • 74. E. Candes and M. Wakin. An Introduction to Compressive Sensing. IEEE Signal Processing Magazine, 25(2), 2008, 21-30. • 75. P. Cardaliaguet, G. Carlier and B. Nazaret. Geodesics for a class of distances in the space of probability measures. Calc. Var. Partial Differential Equations, 48(3-4), 2013, 395-420. • 76. G. Carlier. A general existence result for the principal-agent problem with adverse selection. J. Math. Econom., 35(1), 2001, 129-150. • 77. G. Carlier, V. Chernozhukov and A. Galichon. Vector Quantile Regression. Technical report, arXiv 1406.4643, 2014. • 78. G. Carlier, M. Comte, I. Ionescu and G. Peyré. A Projection Approach to the Numerical Analysis of Limit Load Problems. Mathematical Models and Methods in Applied Sciences, 21(6), 2011, 1291-1316. • 79. G. Carlier and X. Dupuis. An iterated projection approach to variational problems under generalized convexity constraints and applications. In preparation, 2015. • 80. G. Carlier, C. Jimenez and F. Santambrogio. Optimal Transportation with Traffic Congestion and Wardrop Equilibria. SIAM Journal on Control and Optimization, 47(3), 2008, 1330-1350. • 81. G. Carlier, T. Lachand-Robert and B. Maury. A numerical approach to variational problems subject to convexity constraint. Numer. Math., 88(2), 2001, 299-318. • 82. G. Carlier, A. Oberman and É. Oudet. Numerical methods for matching for teams and Wasserstein barycenters. M2AN, to appear, 2015. • 83. J. A. Carrillo, S. Lisini and E. Mainini. Uniqueness for Keller-Segel-type chemotaxis models. Discrete Contin. Dyn. Syst., 34(4), 2014, 1319-1338. • 84. V. Caselles, A. Chambolle and M. Novaga. The discontinuity set of solutions of the TV denoising problem and some extensions. Multiscale Modeling and Simulation, 6(3), 2007, 879-894. • 85. F. A. C. C. Chalub, P. A. Markowich, B. Perthame and C. Schmeiser. Kinetic models for chemotaxis and their drift-diffusion limits. Monatsh. Math., 142(1-2), 2004, 123-141. • 86. A. Chambolle and T. Pock. On the ergodic convergence rates of a first-order primal-dual algorithm. Preprint OO/2014/09/4532, 2014. • 87. G. Charpiat, G. Nardi, G. Peyré and F.-X. Vialard. Finsler Steepest Descent with Applications to Piecewise-regular Curve Evolution. Preprint hal-00849885, 2013. • 88. S. S. Chen, D. L. Donoho and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1), 1999, 33-61. • 89. P. Choné and H. V. J. Le Meur. Non-convergence result for conformal approximation of variational problems subject to a convexity constraint. Numer. Funct. Anal. Optim., 22(5-6), 2001, 529-547. • 90. C. Cotar, G. Friesecke and C. Kluppelberg. Density Functional Theory and Optimal Transportation with Coulomb Cost. Communications on Pure and Applied Mathematics, 66(4), 2013, 548-599. • 91. M. J. P. Cullen, W. Gangbo and G. Pisante. The semigeostrophic equations discretized in reference and dual variables. Arch. Ration. Mech. Anal., 185(2), 2007, 341-363.
• 92. M. J. P. Cullen, J. Norbury and R. J. Purser. Generalised Lagrangian solutions for atmospheric and oceanic flows. SIAM J. Appl. Math., 51(1), 1991, 20-31. • 93. M. Cuturi. Sinkhorn Distances: Lightspeed Computation of Optimal Transport. Proc. NIPS, 2013, 2292-2300. • 94. E. J. Dean and R. Glowinski. Numerical methods for fully nonlinear elliptic equations of the Monge-Ampère type. Comput. Methods Appl. Mech. Engrg., 195(13-16), 2006, 1344-1386. • 95. V. Duval and G. Peyré. Exact Support Recovery for Sparse Spikes Deconvolution. Foundations of Computational Mathematics, 2014, 1-41. • 96. V. Duval and G. Peyré. Sparse regularization on thin grids I: the Lasso. Inverse Problems, 33(5), 2017, 055008. • 97. J. Fehrenbach and J.-M. Mirebeau. Sparse Non-negative Stencils for Anisotropic Diffusion. Journal of Mathematical Imaging and Vision, 49(1), 2014, 123-147. • 98. C. Fernandez-Granda. Support detection in super-resolution. Proceedings of the 10th International Conference on Sampling Theory and Applications, 2013, 145-148. • 99. A. Figalli, R. J. McCann and Y.-H. Kim. When is multi-dimensional screening a convex program? Journal of Economic Theory, 2011. • 100. J.-B. Fiot, H. Raguet, L. Risser, L. D. Cohen, J. Fripp and F.-X. Vialard. Longitudinal deformation models, spatial regularizations and learning strategies to quantify Alzheimer's disease progression. NeuroImage: Clinical, 4, 2014, 718-729. • 101. J.-B. Fiot, L. Risser, L. D. Cohen, J. Fripp and F.-X. Vialard. Local vs Global Descriptors of Hippocampus Shape Evolution for Alzheimer's Longitudinal Population Analysis. In: Spatio-temporal Image Analysis for Longitudinal and Time-Series Image Data, Lecture Notes in Computer Science 7570, Springer Berlin Heidelberg, 2012, 13-24. • 102. U. Frisch, S. Matarrese, R. Mohayaee and A. Sobolevski. Monge-Ampère-Kantorovitch (MAK) reconstruction of the early universe. Nature, 417, 260, 2002. • 103. B. D. Froese and A. Oberman. Convergent filtered schemes for the Monge-Ampère partial differential equation. SIAM J. Numer. Anal., 51(1), 2013, 423-444. • 104. A. Galichon, P. Henry-Labordère and N. Touzi. A stochastic control approach to No-Arbitrage bounds given marginals, with an application to Lookback options. Submitted to Annals of Applied Probability, 2011. • 105. W. Gangbo and R. J. McCann. The geometry of optimal transportation. Acta Math., 177(2), 1996, 113-161. • 106. E. Ghys. Gaspard Monge, Le mémoire sur les déblais et les remblais. Image des mathématiques, CNRS, 2012. • 107. O. Guéant, J.-M. Lasry and P.-L. Lions. Mean field games and applications. In: Paris-Princeton Lectures on Mathematical Finance 2010, Lecture Notes in Math. 2003, Springer, Berlin, 2011, 205-266. • 108. G. Herman. Image reconstruction from projections: the fundamentals of computerized tomography. Academic Press, 1980. • 109. D. D. Holm, J. T. Ratnanather, A. Trouvé and L. Younes. Soliton dynamics in computational anatomy. NeuroImage, 23, 2004, S170-S178. • 110. B. J. Hoskins. The mathematical theory of frontogenesis. In: Annual Review of Fluid Mechanics, Vol. 14, Annual Reviews, Palo Alto, CA, 1982, 131-151. • 111. W. Jäger and S. Luckhaus. On explosions of solutions to a system of partial differential equations modelling chemotaxis. Trans. Amer. Math. Soc., 329(2), 1992, 819-824. • 112. R. Jordan, D. Kinderlehrer and F. Otto. The variational formulation of the Fokker-Planck equation. SIAM J. Math. Anal., 29(1), 1998, 1-17.
• 114. E. Klann. A Mumford-Shah-Like Method for Limited Data Tomography with an Application to Electron Tomography. SIAM J. Imaging Sciences, 4(4), 2011, 1029-1048. • 115. J.-M. Lasry and P.-L. Lions. Mean field games. Jpn. J. Math., 2(1), 2007, 229-260. • 116. J. Lasserre. Global Optimization with Polynomials and the Problem of Moments. SIAM Journal on Optimization, 11(3), 2001, 796-817. • 117. J. Lellmann, D. A. Lorenz, C.-B. Schönlieb and T. Valkonen. Imaging with Kantorovich-Rubinstein Discrepancy. SIAM J. Imaging Sciences, 7(4), 2014, 2833-2859. • 118. C. Léonard. A survey of the Schrödinger problem and some of its connections with optimal transport. Discrete Contin. Dyn. Syst., 34(4), 2014, 1533-1574. • 119. A. S. Lewis. Active sets, nonsmoothness, and sensitivity. SIAM Journal on Optimization, 13(3), 2003, 702-725. • 120. B. Li, F. Habbal and M. Ortiz. Optimal transportation meshfree approximation schemes for fluid and plastic flows. Int. J. Numer. Meth. Engng, 83, 2010, 1541-1579. • 121. G. Loeper. A fully nonlinear version of the incompressible Euler equations: the semigeostrophic system. SIAM J. Math. Anal., 38(3), 2006, 795-823 (electronic). • 122. G. Loeper and F. Rapetti. Numerical solution of the Monge-Ampère equation by a Newton's algorithm. C. R. Math. Acad. Sci. Paris, 340(4), 2005, 319-324. • 123. D. Lombardi and E. Maitre. Eulerian models and algorithms for unbalanced optimal transport. Preprint hal-00976501, 2013. • 124. J. Maas, M. Rumpf, C. Schonlieb and S. Simon. A generalized model for optimal transport of images including dissipation and density modulation. arXiv preprint, 2014. • 125. S. G. Mallat. A wavelet tour of signal processing. Elsevier/Academic Press, Amsterdam, 2009. • 126. B. Maury, A. Roudneff-Chupin and F. Santambrogio. A macroscopic crowd motion model of gradient flow type. Math. Models Methods Appl. Sci., 20(10), 2010, 1787-1821. • 127. Q. Mérigot. A multiscale approach to optimal transport. Computer Graphics Forum, 30(5), 2011, 1583-1592. • 128. Q. Mérigot and É. Oudet. Handling Convexity-Like Constraints in Variational Problems. SIAM J. Numer. Anal., 52(5), 2014, 2466-2487. • 129. M. I. Miller, A. Trouvé and L. Younes. Geodesic Shooting for Computational Anatomy. Journal of Mathematical Imaging and Vision, 24(2), March 2006, 209-228. • 130. J.-M. Mirebeau. Adaptive, Anisotropic and Hierarchical cones of Discrete Convex functions. Preprint, 2014. • 131. J.-M. Mirebeau. Anisotropic Fast-Marching on Cartesian Grids Using Lattice Basis Reduction. SIAM Journal on Numerical Analysis, 52(4), 2014, 1573-1599. • 132. N. Papadakis, G. Peyré and É. Oudet. Optimal Transport with Proximal Splitting. SIAM Journal on Imaging Sciences, 7(1), 2014, 212-238. • 133. B. Pass and N. Ghoussoub. Optimal transport: From moving soil to same-sex marriage. CMS Notes, 45, 2013, 14-15. • 134. B. Pass. Uniqueness and Monge Solutions in the Multimarginal Optimal Transportation Problem. SIAM Journal on Mathematical Analysis, 43(6), 2011, 2758-2775. • 135. B. Perthame, F. Quiros and J. L. Vazquez. The Hele-Shaw Asymptotics for Mechanical Models of Tumor Growth. Archive for Rational Mechanics and Analysis, 212(1), 2014, 93-127. • 136. J. Petitot. The neurogeometry of pinwheels as a sub-riemannian contact structure. Journal of Physiology-Paris, 97(2-3), 2003, 265-309.
• 137. G. Peyré. Texture Synthesis with Grouplets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(4), April 2010, 733-746. • 138. B. Piccoli and F. Rossi. Generalized Wasserstein distance and its application to transport equations with source. Archive for Rational Mechanics and Analysis, 211(1), 2014, 335-358. • 139. C. Poon. Structure dependent sampling in compressed sensing: theoretical guarantees for tight frames. Applied and Computational Harmonic Analysis, 2015. • 140. H. Raguet, J. Fadili and G. Peyré. A Generalized Forward-Backward Splitting. SIAM Journal on Imaging Sciences, 6(3), 2013, 1199-1226. • 141. J.-C. Rochet and P. Choné. Ironing, Sweeping and multi-dimensional screening. Econometrica, 1998. • 142. L. I. Rudin, S. Osher and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1), 1992, 259-268. • 143. O. Scherzer, M. Grasmair, H. Grossauer, M. Haltmeier and F. Lenzen. Variational Methods in Imaging. Springer, 2008. • 144. T. Schmah, L. Risser and F.-X. Vialard. Diffeomorphic image matching with left-invariant metrics. Fields Institute Communications series, special volume in memory of Jerrold E. Marsden, January 2014. • 145. T. Schmah, L. Risser and F.-X. Vialard. Left-Invariant Metrics for Diffeomorphic Image Registration with Spatially-Varying Regularisation. MICCAI (1), 2013, 203-210. • 146. J. Solomon, F. de Goes, G. Peyré, M. Cuturi, A. Butscher, A. Nguyen, T. Du and L. Guibas. Convolutional Wasserstein Distances: Efficient Optimal Transportation on Geometric Domains. ACM Transactions on Graphics, Proc. SIGGRAPH'15, to appear, 2015. • 147. R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1), 1996, 267-288. • 148. A. Trouvé and F.-X. Vialard. Shape splines and stochastic shape evolutions: A second order point of view. Quarterly of Applied Mathematics, 2012. • 149. S. Vaiter, M. Golbabaee, J. Fadili and G. Peyré. Model Selection with Piecewise Regular Gauges. Information and Inference, to appear, 2015. • 150. F.-X. Vialard, L. Risser, D. Rueckert and C. J. Cotter. Diffeomorphic 3D Image Registration via Geodesic Shooting Using an Efficient Adjoint Calculation. International Journal of Computer Vision, 97(2), 2012, 229-241. • 151. F.-X. Vialard and L. Risser. Spatially-Varying Metric Learning for Diffeomorphic Image Registration: A Variational Framework. In: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2014, Lecture Notes in Computer Science 8673, Springer International Publishing, 2014, 227-234. • 152. C. Villani. Optimal transport. Old and new. Grundlehren der Mathematischen Wissenschaften 338, Springer-Verlag, Berlin, 2009, xxii+973 pp. • 153. C. Villani. Topics in optimal transportation. Graduate Studies in Mathematics 58, American Mathematical Society, Providence, RI, 2003, xvi+370 pp. • 154. X.-J. Wang. On the design of a reflector antenna. II. Calc. Var. Partial Differential Equations, 20(3), 2004, 329-341. • 155. B. Wirth, L. Bar, M. Rumpf and G. Sapiro. A continuum mechanical approach to geodesics in shape space. International Journal of Computer Vision, 93(3), 2011, 293-318.
• 156. J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. S. Huang and S. Yan. Sparse representation for computer vision and pattern recognition. Proceedings of the IEEE, 98(6), 2010, 1031-1044.
2022-07-02 08:37:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 49, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5138022303581238, "perplexity": 3407.8186652985187}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103989282.58/warc/CC-MAIN-20220702071223-20220702101223-00788.warc.gz"}
https://labs.tib.eu/arxiv/?author=Yu%20Gao
• Fully Dynamic Effective Resistances (1804.04038) April 11, 2018 cs.DS In this paper we consider the \emph{fully-dynamic} All-Pairs Effective Resistance problem, where the goal is to maintain effective resistances on a graph $G$ among any pair of query vertices under an intermixed sequence of edge insertions and deletions in $G$. The effective resistance between a pair of vertices is a physics-motivated quantity that encapsulates both the congestion and the dilation of a flow. It is directly related to random walks, and it has been instrumental in recent works on fast algorithms for combinatorial optimization problems, graph sparsification, and network science. We give a data structure that maintains $(1+\epsilon)$-approximations to all-pairs effective resistances of a fully-dynamic unweighted, undirected multi-graph $G$ with $\tilde{O}(m^{4/5}\epsilon^{-4})$ expected amortized update and query time, against an oblivious adversary. Key to our result is the maintenance of a dynamic \emph{Schur complement} (also known as a vertex resistance sparsifier) onto a set of terminal vertices of our choice. This maintenance is obtained (1) by interpreting the Schur complement as a sum of random walks and (2) by randomly picking the vertex subset into which the sparsifier is constructed. We can then show that each update in the graph affects a small number of such walks, which in turn leads to our sub-linear update time. We believe that this local representation of vertex sparsifiers may be of independent interest.
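Since the abstract leans on the electrical interpretation, a small static computation may help: the effective resistance between $u$ and $v$ equals $(e_u - e_v)^\top L^+ (e_u - e_v)$, where $L^+$ is the Moore-Penrose pseudoinverse of the graph Laplacian. The Python snippet below is our own illustration of that identity on a toy graph; it has nothing to do with the paper's dynamic data structure.

import numpy as np

def effective_resistance(edges, n, u, v):
    # Effective resistance between u and v in an unweighted multigraph,
    # via R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v) with L the Laplacian.
    L = np.zeros((n, n))
    for a, b in edges:                 # each edge is a unit resistor
        L[a, a] += 1; L[b, b] += 1
        L[a, b] -= 1; L[b, a] -= 1
    chi = np.zeros(n); chi[u], chi[v] = 1.0, -1.0
    return chi @ np.linalg.pinv(L) @ chi

# Two parallel paths of lengths 1 and 2 between nodes 0 and 1:
# resistances 1 and 2 in parallel give 1*2/(1+2) = 2/3.
print(effective_resistance([(0, 1), (0, 2), (2, 1)], 3, 0, 1))  # ~0.6667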
• 21cm Limits on Decaying Dark Matter and Primordial Black Holes (1803.09390) March 26, 2018 hep-ph, astro-ph.HE Recently the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) reported the detection of a 21cm absorption signal stronger than astrophysical expectations. In this paper we study the impact of radiation from dark matter (DM) decay and primordial black holes (PBH) on the 21cm radiation temperature in the reionization epoch, and impose a constraint on decaying dark matter and PBH energy injection in the intergalactic medium, which can heat up neutral hydrogen gas and weaken the 21cm absorption signal. We consider the decay channels DM$\rightarrow e^+e^-, \gamma\gamma$, $\mu^+\mu^-$, $b\bar{b}$ and the $10^{15-17}$ g mass range for primordial black holes, and require that the heating of the neutral hydrogen does not negate the 21cm absorption signal. For the $e^+e^-$ and $\gamma\gamma$ final states and the PBH cases we find strong 21cm bounds that can be more stringent than the current extragalactic diffuse photon bounds. For the DM$\rightarrow e^+e^-$ channel, the lifetime bound is $\tau_{\rm DM}> 10^{27}$ s for sub-GeV dark matter. The bound is $\tau_{\rm DM}\ge 10^{26}$ s for the sub-GeV DM$\rightarrow \gamma\gamma$ channel and reaches $10^{27}$ s at MeV DM mass. For the $b\bar{b}$ and $\mu^+\mu^-$ cases, the 21cm constraint is better than all existing constraints for $m_{\rm DM}<20$ GeV, where the bound is $\tau_{\rm DM}\ge10^{26}$ s. For both the DM decay and primordial black hole cases, the 21cm bounds significantly improve over the CMB damping limits from Planck data. • "Super-deblended" Dust Emission in Galaxies: I. The GOODS-North Catalog and the Cosmic Star Formation Rate Density out to Redshift 6 (1703.05281) Feb. 5, 2018 astro-ph.CO, astro-ph.GA We present a new technique to measure multi-wavelength "super-deblended" photometry from highly confused images, which we apply to Herschel and ground-based far-infrared (FIR) and (sub-)millimeter (mm) data in the northern field of the Great Observatories Origins Deep Survey (GOODS). There are two key novelties. First, starting with a large database of deep Spitzer 24 $\mu$m and VLA 20 cm detections that are used to define prior positions for fitting the FIR/submm data, we perform an active selection of useful priors independently at each frequency band, moving from less to more confused bands. Exploiting knowledge of redshift and all available photometry, we identify hopelessly faint priors that we remove from the fitting pool. This approach significantly reduces blending degeneracies and allows reliable photometry to be obtained for galaxies in the FIR+mm bands. Second, we obtain well-behaved, nearly Gaussian flux density uncertainties, individually tailored to all fitted priors in each band. This is done by exploiting extensive simulations that allow us to calibrate the conversion of formal fitting uncertainties into realistic uncertainties depending on directly measurable quantities. We achieve deeper detection limits with high-fidelity measurements and uncertainties in the FIR+mm bands. As an illustration of the utility of these measurements, we identify 70 galaxies with z>3 and reliable FIR+mm detections. We present new constraints on the cosmic star formation rate density at 3<z<6, finding a significant contribution from z>3 dusty galaxies that are missed by optical-to-near-infrared color selection. Photometric measurements for 3306 priors, including over 1000 FIR+mm detections, are released publicly with our catalog. • Re-visiting the extended Schmidt law: the important role of existing stars in regulating star formation (1801.00888) Jan. 3, 2018 astro-ph.GA We revisit the proposed extended Schmidt law (Shi et al. 2011), which posits that the star formation efficiency in galaxies depends on the stellar mass surface density, by investigating spatially-resolved star formation rates (SFRs), gas masses and stellar masses of star formation regions in a vast range of galactic environments, from the outer disks of dwarf galaxies to spiral disks and to merging galaxies, as well as individual molecular clouds in M33. We find that these regions are distributed in a tight power law $\Sigma_{\rm SFR} \propto (\Sigma_{\rm star}^{0.5}\,\Sigma_{\rm gas})^{1.09}$, which is also valid for the integrated measurements of disk and merging galaxies at high-z. Interestingly, we show that star formation regions in the outer disks of dwarf galaxies with $\Sigma_{\rm SFR}$ down to $10^{-5}$ $M_{\odot}$ yr$^{-1}$ kpc$^{-2}$, which are outliers of both the Kennicutt-Schmidt and Silk-Elmegreen laws, also follow the extended Schmidt law. Other outliers in the Kennicutt-Schmidt law, such as extremely metal-poor star-formation regions, also show significantly reduced deviations from the extended Schmidt law. These results suggest an important role for existing stars in helping to regulate star formation through the effect of their gravity on the mid-plane pressure in a wide range of galactic environments. • Nearly Tight Bounds for Sandpile Transience on the Grid (1704.04830) Nov. 15, 2017 cs.DM, cs.DS We use techniques from the theory of electrical networks to give nearly tight bounds for the transience class of the Abelian sandpile model on the two-dimensional grid, up to polylogarithmic factors.
The Abelian sandpile model is a discrete process on graphs that is intimately related to the phenomenon of self-organized criticality. In this process, vertices receive grains of sand, and once the number of grains exceeds their degree, they topple by sending grains to their neighbors. The transience class of a model is the maximum number of grains that can be added to the system before it necessarily reaches its steady-state behavior or, equivalently, a recurrent state. Through a more refined and global analysis of electrical potentials and random walks, we give an $O(n^4\log^4{n})$ upper bound and an $\Omega(n^4)$ lower bound for the transience class of the $n \times n$ grid. Our methods naturally extend to $n^d$-sized $d$-dimensional grids to give $O(n^{3d - 2}\log^{d+2}{n})$ upper bounds and $\Omega(n^{3d -2})$ lower bounds. • Statistical errors in Weizsaecker-Skyrme mass model(1709.03703) Sept. 12, 2017 nucl-th, astro-ph.SR The statistical uncertainties of 13 model parameters in the Weizs\"acker-Skyrme (WS*) mass model are investigated for the first time with an efficient approach, and the propagated errors in the predicted masses are estimated. The discrepancies between the predicted masses and the experimental data, including the new data in AME2016, are almost all smaller than the model errors. For neutron-rich heavy nuclei, the model errors increase considerably, and go up to a few MeV when the nucleus approaches the neutron drip line. The most sensitive model parameter which causes the largest statistical error is analyzed for all bound nuclei. We find that the two coefficients of symmetry energy term significantly influence the mass predictions of extremely neutron-rich nuclei, and the deformation energy coefficients play a key role for well-deformed nuclei around the $\beta$-stability line. • A dispersive regularization for the modified Camassa-Holm equation(1707.06377) July 20, 2017 math.AP In this paper, we present a dispersive regularization for the modified Camassa-Holm equation (mCH) in one dimension, which is achieved through a double mollification for the system of ODEs describing trajectories of $N$-peakon solutions. From this regularized system of ODEs, we obtain approximated $N$-peakon solutions with no collision between peakons. Then, a global $N$-peakon solution for the mCH equation is obtained, whose trajectories are global Lipschitz functions and do not cross each other. When $N=2$, the limiting solution is a sticky peakon weak solution. By a limiting process, we also derive a system of ODEs to describe $N$-peakon solutions. At last, using the $N$-peakon solutions and through a mean field limit process, we obtain global weak solutions for general initial data $m_0$ in Radon measure space. • ALMA Maps of Dust and Warm Dense Gas Emission in the Starburst Galaxy IC 5179$^\star$(1707.04363) July 14, 2017 astro-ph.CO, astro-ph.GA We present our high-resolution ($0^{\prime\prime}.15\times0^{\prime\prime}.13$, $\sim$34 pc) observations of the CO(6-5) line emission, which probes the warm and dense molecular gas, and the 434 $\mu$m dust continuum emission in the nuclear region of the starburst galaxy IC 5179, conducted with the Atacama Large Millimeter Array (ALMA). The CO(6-5) emission is spatially distributed in filamentary structures with many dense cores and shows a velocity field that is characteristic of a circum-nuclear rotating gas disk, with 90% of the rotation speed arising within a radius of $\lesssim150$ pc. 
At the scale of our spatial resolution, the CO(6-5) and dust emission peaks do not always coincide, with their surface brightness ratio varying by a factor of $\sim$10. This result suggests that their excitation mechanisms are likely different, as further evidenced by the Southwest to Northeast spatial gradient of both the CO-to-dust continuum ratio and the Pa-$\alpha$ equivalent width. Within the nuclear region (radius $\sim$300 pc) and with a resolution of $\sim$34 pc, the CO line flux (dust flux density) detected in our ALMA observations is $180\pm18$ Jy km/s ($71\pm7$ mJy), which accounts for 22% (2.4%) of the total value measured by Herschel. • ALMA [NII] 205 micron Imaging Spectroscopy of the Interacting Galaxy System BRI 1202-0725 at Redshift 4.7(1706.03018) June 9, 2017 astro-ph.GA We present the results from Atacama Large Millimeter/submillimeter Array (ALMA) imaging in the [NII] 205 micron fine-structure line (hereafter [NII]) and the underlying continuum of BRI 1202-0725, an interacting galaxy system at $z = 4.7$, consisting of an optical QSO, a sub-millimeter galaxy (SMG) and two Lyman-$\alpha$ emitters (LAEs), all within $\sim$25 kpc of the QSO. We detect the QSO and SMG in both [NII] and continuum. At the $\sim 1''$ (or 6.6 kpc) resolution, both QSO and SMG are resolved in [NII], with de-convolved major axes of $\sim$9 and $\sim$14 kpc, respectively. In contrast, their continuum emissions are much more compact and unresolved even at an enhanced resolution of $\sim 0.7''$. The ratio of the [NII] flux to the existing CO (7$-$6) flux is used to constrain the dust temperature ($T_{\rm dust}$) for a more accurate determination of the FIR luminosity $L_{\rm FIR}$. Our best estimated $T_{\rm dust}$ equals $43 (\pm 2)$ K for both galaxies (assuming an emissivity index $\beta = 1.8$). The resulting $L_{\rm CO(7-6)}/L_{\rm FIR}$ ratios are statistically consistent with that of local luminous infrared galaxies, confirming that $L_{\rm CO(7-6)}$ traces the star formation (SF) rate (SFR) in these galaxies. We estimate that the on-going SF of the QSO (SMG) has a SFR of 5.1 $(6.9) \times 10^3 M_{\odot}$ yr$^{-1}$ ($\pm$ 30%) assuming a Chabrier initial mass function, takes place within a diameter (at half maximum) of 1.3 (1.5) kpc, and shall consume the existing 5 $(5) \times 10^{11} M_{\odot}$ of molecular gas in 10 $(7) \times 10^7$ years. • The modified Camassa-Holm equation in Lagrangian coordinates(1705.06562) May 18, 2017 math.AP In this paper, we study the modified Camassa-Holm (mCH) equation in Lagrangian coordinates. For some initial data $m_0$, we show that classical solutions to this equation blow up in finite time $T_{max}$. Before $T_{max}$, existence and uniqueness of classical solutions are established. A lifespan estimate for classical solutions is obtained: $T_{max}\geq \frac{1}{||m_0||_{L^\infty}||m_0||_{L^1}}.$ And there is a unique solution $X(\xi,t)$ to the Lagrange dynamics which is a strictly monotonic function of $\xi$ for any $t\in[0,T_{max})$: $X_\xi(\cdot,t)>0$. As $t$ approaches $T_{max}$, we prove that the classical solution $m(\cdot,t)$ in Eulerian coordinates has a unique limit $m(\cdot,T_{max})$ in Radon measure space, and there is a point $\xi_0$ such that $X_\xi(\xi_0,T_{max})=0$, which means $T_{max}$ is an onset time of collision of characteristics. We also show that in some cases peakons are formed at $T_{max}$. After $T_{max}$, we regularize the Lagrange dynamics to prove global existence of weak solutions $m$ in Radon measure space.
• PeV Neutrino Events at IceCube from Single Top-Quark Production(1611.00773) May 10, 2017 hep-ph, astro-ph.HE Deep inelastic scattering of very high-energy neutrinos can potentially be enhanced by the production of a single top quark or charm quark via the interaction of a virtual $W$-boson exchange with a $b$-quark or $s$-quark parton in the nucleon. The single top contribution shows a sharp rise at neutrino energies above 0.5 PeV and gives a cross-section contribution of order 5 percent at 10 PeV, while single charm has a low energy threshold and contributes about 25 percent. Semi-leptonic decays of top and charm give di-muon events whose kinematic characteristics are shown. The angular separation of the di-muons from heavy quark production in the IceCube detector can reach up to one degree. Top quark production has a unique, but rare, three muon signal. • Heavy Neutrino Search via the Higgs boson at the LHC(1704.00881) May 27, 2019 hep-ph, hep-ex In the inverse see-saw model the effective neutrino Yukawa couplings can be sizable due to a large mixing angle between the light $(\nu)$ and heavy neutrinos $(N)$. The right-handed neutrino $(N)$ can then be lighter than the Standard Model (SM) Higgs boson $(h)$, in which case it can be produced via the on-shell decay of the Higgs, $h\to N\nu$, at a significant branching fraction at the LHC. In such a process the $N$ mass can be reconstructed in its dominant $N\rightarrow W \ell$ decays. We perform an analysis on this channel and its relevant backgrounds, among which the $W+$jets background is the largest. Considering the existing mixing constraints from the Higgs and electroweak precision data, the best sensitivity of the heavy neutrino search is achieved for benchmark $N$ masses at 100 and 110 GeV for upcoming high luminosity LHC runs. • Patched peakon weak solutions of the modified Camassa-Holm equation(1703.07466) March 21, 2017 math.AP, math-ph, math.MP In this paper, we study traveling wave solutions and peakon weak solutions of the modified Camassa-Holm (mCH) equation with dispersive term $2ku_x$ for $k\in\mathbb{R}$. We study traveling wave solutions through a Hamiltonian system obtained from the mCH equation by using a nonlinear transformation. The typical traveling wave solutions given by this Hamiltonian system are unbounded or multi-valued. We provide a method, called the patching technique, to truncate these traveling wave solutions and patch different segments to obtain patched bounded single-valued peakon weak solutions which satisfy jump conditions at peakons. Then, we study some special peakon weak solutions constructed by the fundamental solution of the Helmholtz operator $1-\partial_{xx}$, which can also be obtained by the patching technique. At last, we study some length and total signed area preserving closed planar curve flows that can be described by the mCH equation when $k=1$, for which we give a Hamiltonian structure and use the patched periodic peakon weak solutions to investigate loops with cusps. • A Herschel Space Observatory Spectral Line Survey of Local Luminous Infrared Galaxies from 194 to 671 Microns(1703.00005) Feb. 28, 2017 astro-ph.GA We describe a Herschel Space Observatory 194-671 micron spectroscopic survey of a sample of 121 local luminous infrared galaxies and report the fluxes of the CO $J$ to $J$-1 rotational transitions for $4 \leqslant J \leqslant 13$, the [NII] 205 um line, the [CI] lines at 609 and 370 um, as well as additional and usually fainter lines.
The CO spectral line energy distributions (SLEDs) presented here are consistent with our earlier work, which was based on a smaller sample and calls for two distinct molecular gas components in general: (i) a cold component, which emits CO lines primarily at $J \lesssim 4$ and likely represents the same gas phase traced by CO (1-0), and (ii) a warm component, which dominates over the mid-$J$ regime ($4 < J < 10$) and is intimately related to current star formation. We present evidence that the CO line emission associated with an active galactic nucleus is significant only at $J > 10$. The flux ratios of the two [CI] lines imply modest excitation temperatures of 15 to 30 K; the [CI] 370 um line scales more linearly in flux with CO (4-3) than with CO (7-6). These findings suggest that the [CI] emission is predominately associated with the gas component defined in (i) above. Our analysis of the stacked spectra in different far-infrared (FIR) color bins reveals an evolution of the SLED of the rotational transitions of water vapor as a function of the FIR color, in a direction consistent with infrared photon pumping. • HC3N Observations of Nearby Galaxies(1701.00312) Jan. 2, 2017 astro-ph.GA Aims. We aim to systematically study the properties of the different transitions of the dense molecular gas tracer HC3N in galaxies. Methods. We have conducted single-dish observations of HC3N emission lines towards a sample of nearby gas-rich galaxies. HC3N(J=2-1) was observed in 20 galaxies with the Effelsberg 100-m telescope. HC3N(J=24-23) was observed in nine galaxies with the 10-m Submillimeter Telescope (SMT). Results. HC3N 2-1 is detected in three galaxies: IC 342, M 66 and NGC 660 ($>3\sigma$). HC3N 24-23 is detected in three galaxies: IC 342, NGC 1068 and IC 694. These are the first measurements of HC3N 2-1 in a relatively large sample of external galaxies, although the detection rate is low. For the HC3N 2-1 non-detections, upper limits ($2\sigma$) are derived for each galaxy, and stacking the non-detections is attempted to recover the weak signal of HC3N. But the stacked spectrum does not show any significant signs of HC3N 2-1 emission. The results are also compared with other transitions of HC3N observed in galaxies. Conclusions. The low detection rate of both transitions suggests a low abundance of HC3N in galaxies, which is consistent with other observational studies. The comparison between HC3N and HCN or HCO+ shows a large diversity in the ratios between HC3N and HCN or HCO+. More observations are needed to interpret the behavior of HC3N in different types of galaxies. • Planck Constraint on Relic Primordial Black Holes(1612.07738) Dec. 22, 2016 hep-ph, astro-ph.CO We investigate constraints on the abundance of primordial black holes (PBHs) in the mass range $10^{15}$-$10^{17}$ g using data from the Cosmic Microwave Background (CMB) and the MeV extragalactic gamma-ray background (EGB). Hawking radiation from PBHs with lifetime greater than the age of the universe leaves an imprint on the CMB through modification of the ionization history and the damping of CMB anisotropies. Using a model for redshift dependent energy injection efficiencies, we show that a combination of temperature and polarization data from Planck provides the strongest constraint on the abundance of PBHs for masses $\sim 10^{15}$-$10^{16}$ g, while the EGB dominates for masses $\gtrsim 10^{16}$ g. Both the CMB and EGB now rule out PBHs as the dominant component of dark matter for masses $\sim 10^{16}$-$10^{17}$ g.
Planned MeV gamma-ray observatories are ideal for further improving constraints on PBHs in this mass range. • Carbon monoxide in an extremely metal-poor galaxy(1612.03980) Dec. 13, 2016 astro-ph.GA Extremely metal-poor galaxies with metallicity below 10% of the solar value in the local universe are the best analogues to investigating the interstellar medium at a quasi-primitive environment in the early universe. In spite of the ongoing formation of stars in these galaxies, the presence of molecular gas (which is known to provide the material reservoir for star formation in galaxies, such as our Milky Way) remains unclear. Here, we report the detection of carbon monoxide (CO), the primary tracer of molecular gas, in a galaxy with 7% solar metallicity, with additional detections in two galaxies at higher metallicities. Such detections offer direct evidence for the existence of molecular gas in these galaxies that contain few metals. Using archived infrared data, it is shown that the molecular gas mass per CO luminosity at extremely low metallicity is approximately one-thousand times the Milky Way value. • Indirect Signals from Solar Dark Matter Annihilation to Long-lived Right-handed Neutrinos(1612.03110) Dec. 9, 2016 hep-ph, astro-ph.HE We study indirect detection signals from solar annihilation of dark matter (DM) particles into light right-handed (RH) neutrinos with a mass in a $1-5$ GeV range. These RH neutrinos can have a sufficiently long lifetime to allow them to decay outside the Sun and their delayed decays can result in a signal in gamma rays from the otherwise `dark' solar direction, and also a neutrino signal that is not suppressed by the interactions with solar medium. We find that the latest Fermi-LAT and IceCube results place limits on the gamma ray and neutrino signals, respectively. Combined photon and neutrino bounds can constrain the spin-independent DM-nucleon elastic scattering cross section better than direct detection experiments for DM masses from 200 GeV up to several TeV. The bounds on spin-dependent scattering are also much tighter than the strongest limits from direct detection experiments. • Dense gas in low-metallicity galaxies(1612.02196) Dec. 7, 2016 astro-ph.GA Stars form out of the densest parts of molecular clouds. Far-IR emission can be used to estimate the Star Formation Rate (SFR) and high dipole moment molecules, typically HCN, trace the dense gas. A strong correlation exists between HCN and Far-IR emission, with the ratio being nearly constant, over a large range of physical scales. A few recent observations have found HCN to be weak with respect to the Far-IR and CO in subsolar metallicity (low-Z) objects. We present observations of the Local Group galaxies M33, IC10, and NGC6822 with the IRAM 30meter and NRO 45m telescopes, greatly improving the sample of low-Z galaxies observed. HCN, HCO$^+$, CS, C$_2$H, and HNC have been detected. Compared to solar metallicity galaxies, the Nitrogen-bearing species are weak (HCN, HNC) or not detected (CN, HNCO, N$_2$H$^+$) relative to Far-IR or CO emission. HCO$^+$ and C$_2$H emission is normal with respect to CO and Far-IR. While $^{13}$CO is the usual factor 10 weaker than $^{12}$CO, C$^{18}$O emission was not detected down to very low levels. Including earlier data, we find that the HCN/HCO$^+$ ratio varies with metallicity (O/H) and attribute this to the sharply decreasing Nitrogen abundance. 
The dense gas fraction, traced by the HCN/CO and HCO$^+$/CO ratios, follows the SFR but in the low-Z objects the HCO$^+$ is much easier to measure. Combined with larger and smaller scale measurements, the HCO$^+$ line appears to be an excellent tracer of dense gas and varies linearly with the SFR for both low and high metallicities. • Dense Gas in the Outer Spiral Arm of M51(1612.00459) Dec. 1, 2016 astro-ph.GA There is a linear relation between the mass of dense gas, traced by the HCN(1-0) luminosity, and the star formation rate (SFR), traced by the far-infrared luminosity. Recent observations of galactic disks have shown some systematic variations. In order to explore the SFR-dense gas link at high resolution ($\sim 4"$, $\sim 150$ pc) in the outer disk of an external galaxy, we have mapped a region about 5 kpc from the center along the northern spiral arm of M51 in the HCN(1-0), HCO$^+$(1-0) and HNC(1-0) emission lines using the Northern Extended Millimeter Array (NOEMA) interferometer. The HCN and HCO$^+$ lines were detected in 6 giant molecular associations (GMAs) while HNC emission was only detected in the two brightest GMAs. One of the GMAs hosts a powerful HII region and HCN is stronger than HCO$^+$ there. Comparing with observations of GMAs in the disks of M31 and M33 at similar angular resolution ($\sim 100$ pc), we find that GMAs in the outer disk of M51 are brighter in both HCN and HCO$^+$ lines by a factor of 3 on average. However, the $I_{HCN}/I_{CO}$ and $I_{HCO^+}/I_{CO}$ ratios are similar to the ratios in nearby galactic disks and the Galactic plane. Using the Herschel 70 $\mu$m data to trace the total IR luminosity at the resolution of the GMAs, we find that both the L$_{IR}$-L$_{HCN}$ and L$_{IR}$-L$_{HCO^+}$ relations in the outer disk GMAs are consistent with the proportionality between the L$_{IR}$ and the dense gas mass established globally in galaxies within the scatter. The IR/HCN and IR/HCO$^+$ ratios of the GMAs vary by a factor of 3, probably depending on whether massive stars are forming or not. • Diphoton Excess in Consistent Supersymmetric SU(5) Models with Vector-like Particles(1601.00866) Nov. 30, 2016 hep-ph, hep-ex We consider the diphoton resonance at the 13 TeV LHC in the context of SU(5) grand unification. A leading candidate to explain this resonance is a standard model singlet scalar decaying to a pair of photon by means of vector-like fermionic loops. We demonstrate the effect of the vector-like multiplets (5, 5 bar) and (10, 10 bar) on the evolution of the gauge couplings and perturbatively evaluate the weak scale values of the new couplings and masses run down from the unification scale. We use these masses and couplings to explain the diphoton resonance after considering the new dijet constraints. We show how to accommodate the larger decay width of the resonance particle, which seems to be preferred by the experimental data. In addition, we consider new couplings relating various components of (5, 5 bar) and (10, 10 bar) in the context of the orbifold GUTs, where the resonance scalar can be a part of the new vector-like lepton doublets. We also calculate the Higgs mass and proton decay rate to positron and neutral pion in the context of SU(5) grand unification, including effects of the new vector-like multiplets. • Distinguishing Standard Model Extensions using Monotop Chirality at the LHC(1507.02271) Nov. 29, 2016 hep-ph, hep-ex We present two minimal extensions of the standard model, each giving rise to baryogenesis. 
They include heavy color-triplet scalars interacting with a light Majorana fermion that can be the dark matter (DM) candidate. The electroweak charges of the new scalars govern their couplings to quarks of different chirality, which leads to different collider signals. These models predict monotop events at the LHC and the energy spectrum of decay products of highly polarized top quarks can be used to establish the chiral nature of the interactions involving the heavy scalars and the DM. Detailed simulation of signal and standard model background events is performed, showing that top quark chirality can be distinguished in hadronic and leptonic decays of the top quarks. • Sensitivity to oscillation with a sterile fourth generation neutrino from ultra-low threshold neutrino-nucleus coherent scattering(1511.02834) Oct. 23, 2016 hep-ph, hep-ex, nucl-ex, nucl-th We discuss prospects for probing short-range sterile neutrino oscillation using neutrino-nucleus coherent scattering with ultra-low energy ($\sim 10$ eV - 100 eV) recoil threshold cryogenic Ge detectors. The analysis is performed in the context of a specific and contemporary reactor-based experimental proposal, developed in cooperation with the Nuclear Science Center at Texas A\&M University, and references developing technology based upon economical and scalable detector arrays. The baseline of the experiment is substantially shorter than existing measurements, as near as about 2 meters from the reactor core, and is moreover variable, extending continuously up to a range of about 10 meters. This proximity and variety combine to provide extraordinary sensitivity to a wide spectrum of oscillation scales, while facilitating the tidy cancellation of leading systematic uncertainties in the reactor source and environment. With 100~eV sensitivity, for exposures on the order of 200 kg$\cdot$y, we project an estimated sensitivity to first/fourth neutrino oscillation with a mass gap $\Delta m^2 \sim 1 \, {\rm eV}^2$ at an amplitude $\sin^2 2\theta \sim 10^{-1}$, or $\Delta m^2 \sim 0.2 \, {\rm eV}^2$ at unit amplitude. Larger exposures, around 5,000 kg$\cdot$y, together with 10 eV sensitivity are capable of probing more than an additional order of magnitude in amplitude. • An SU(6) GUT Origin of the TeV-Scale Vector-like Particles Associated with the 750 GeV Diphoton Resonance(1604.07838) July 10, 2016 hep-ph We consider the $SU(6)$ GUT model as an explanation for the diphoton final state excess, where the masses of all associated particles are linked with a new symmetry breaking scale. In this model, the diphoton final states arise due to loops involving three pairs of new vector-like particles having the same quantum numbers as down-type quarks and lepton doublets. These new vector-like fermions are embedded alongside the SM fermions into minimal anomaly-free representations of the $SU(6)$ gauge symmetry. The $SU(6)$ symmetry is broken to the Standard Model times $U(1)_X$ at the GUT scale, and masses for the vector-like fermions arise at the TeV scale only after the residual $U(1)_X$ symmetry is broken. The vector-like fermions do not acquire masses via breaking of the SM symmetry at the EW scale. The field which is responsible for the newly observed resonance belongs to the $\bar{6}_H$ representation. The dark matter arises from the SM singlet fermion residing in $\bar{6}$ and is of Majorana type. We explicitly demonstrate gauge coupling unification in this model, and also discuss the origin of neutrino masses. 
In addition to the diphoton final states, we make distinctive predictions for other final states which are likewise accessible to the ongoing LHC experimental effort. • Exploring the Jet Multiplicity in the 750 GeV Diphoton Excess(1606.03067) June 9, 2016 hep-ph, hep-ex The recent diphoton excess at the LHC has been explained tentatively by a Standard Model (SM) singlet scalar of 750 GeV in mass, in the association of heavy particles with SM gauge charges. These new particles with various SM gauge charges induce loop-level couplings of the new scalar to $WW$, $ZZ$, $Z\gamma$, $\gamma\gamma$, and $gg$. We show that the strength of the couplings to the gauge bosons also determines the production mechanism of the scalar particle via $WW,\, ZZ,\, Z\gamma,\, \gamma\gamma,\, gg$ fusion which leads to individually distinguishable jet distributions in the final state where the statistics will be improved in the ongoing run. The number of jets and the leading jet's transverse momentum distribution in the excess region of the diphoton signal can be used to determine the coupling of the scalar to the gauge bosons arising from the protons which subsequently determine the charges of the heavy particles that arise from various well-motivated models.
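For concreteness: the effective resistance defined in the first abstract of this listing can be computed exactly for a static graph from the Moore-Penrose pseudoinverse of the graph Laplacian, via $R_{\rm eff}(u,v) = (e_u-e_v)^T L^+ (e_u-e_v)$. A minimal sketch (the example graph is hypothetical, and this is the naive static computation, not the paper's dynamic data structure):

```python
# Exact all-pairs effective resistances of a small unweighted graph,
# via the Laplacian pseudoinverse: R(u,v) = L+[u,u] + L+[v,v] - 2*L+[u,v].
import numpy as np

def effective_resistances(edges, n):
    L = np.zeros((n, n))
    for u, v in edges:            # graph Laplacian L = D - A
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    Lp = np.linalg.pinv(L)        # Moore-Penrose pseudoinverse L^+
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2.0 * Lp

# Hypothetical example: a 4-cycle with unit-resistance edges.
R = effective_resistances([(0, 1), (1, 2), (2, 3), (3, 0)], 4)
print(round(R[0, 2], 6))  # 1.0  (two 2-edge paths in parallel: 2*2/(2+2))
print(round(R[0, 1], 6))  # 0.75 (one edge in parallel with a 3-edge path)
```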
2019-12-09 06:49:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.749421238899231, "perplexity": 1767.2354119146303}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540517557.43/warc/CC-MAIN-20191209041847-20191209065847-00418.warc.gz"}
http://neural-code.com/index.php/laboratory/66-writing-a-thesis
## Writing a thesis

Here is some advice for bachelor's and master's students who write a thesis at the department of Biophysics at the Donders Institute for Brain, Cognition and Behaviour.

# English
English is the scientific language, so it is essential that you learn to write in English.

# Organisation of a manuscript

| Section | Purpose |
| --- | --- |
| Abstract/Summary | summarizing the experiment, results and conclusions |
| Introduction | establish the topic, explain what has been done in the field, clarify what interesting question remains, and how you will tackle it |
| Methods | a description of the subjects, experimental set-up, paradigms and data analysis |
| Results | a presentation of the results and analysis (both in words and graphics) |
| Discussion | reporting the main conclusions, comparison with other studies, putting your results in a bigger picture (perhaps suggesting a model) |

## Front page
• title
• name
• student number
• starting date
• end date
• internship duration
• study credits / points / EC
• study, e.g.: Biology, Psychology, Cognitive neuroscience, Sciences, Biomedical sciences, Medicine, Physics
• Bachelor / Master
• supervisors

## Introduction
The paragraphs in your Introduction should contain the following:
• Explain the main issue/problem that you want to tackle.
• Describe what others have done before you, to put the issue in context.
• Describe what you will do in this study (you are allowed to use "we" or "in our study"! Nevertheless, don't do this too often), and make sure to explain why this study is better than other studies, and/or why it is interesting to tackle this scientific issue in this manner. Perhaps a figure detailing the rationale of the experiments might be useful.
• Globally state what your main findings mean (e.g. they are surprising and interesting). You don't have to explain everything yet: you want to have some surprises left for the reader.

## Methods
The Methods section is usually straightforward, containing the following subsections:
• Subjects/Listeners
• Set-Up
• Stimuli
• Data analysis
• Statistics

#### Reporting statistics
If you want to use classical null-hypothesis significance testing, then here are some reporting guidelines from the journals of the American Physiological Society:

Table 1. Interpretation of P values

| P value | Interpretation |
| --- | --- |
| $P > 0.10$ | Data are consistent with a true zero effect |
| $0.05 < P \leq 0.10$ | Data suggest there may be a true effect that differs from zero |
| $0.01 < P \leq 0.05$ | Data provide good evidence that the true effect differs from zero |
| $P \leq 0.01$ | Data provide strong evidence that the true effect differs from zero |

Table 2. Guidelines for rounding P values to sufficient precision

| P value range | Rounding precision |
| --- | --- |
| $0.01 \leq P \leq 1.00$ | Round to 2 decimal places: round P=0.031 to P=0.03 |
| $0.001 \leq P \leq 0.009$ | Round to 3 decimal places: round P=0.0066 to P=0.007 |
| $P < 0.001$ | Report as P<0.001; more precision is unnecessary |

It is much better to use Bayesian statistics. With current Markov Chain Monte Carlo sampling techniques it is easy to do Bayesian statistical analysis. For a course on Bayesian statistics, see "Bayesian Cognitive Modeling: A Practical Course" (which contains a free downloadable book). A very good book on Bayesian statistics is by John Kruschke: Doing Bayesian Data Analysis with R. Bayesian null-hypothesis testing is often done with the Bayes factor, which has the following interpretation.
Table 3. Interpretation of the Bayes factor (Jeffreys 1961, Bayesian Cognitive Modeling: A Practical Course)

| Bayes factor BF12 | Interpretation |
| --- | --- |
| $BF > 100$ | extreme evidence for model 1 |
| $30 < BF \leq 100$ | very strong evidence for model 1 |
| $10 < BF \leq 30$ | strong evidence for model 1 |
| $3 < BF \leq 10$ | moderate evidence for model 1 |
| $1 < BF \leq 3$ | anecdotal evidence for model 1 |
| $BF = 1$ | no evidence |
| $1/3 \leq BF < 1$ | anecdotal evidence for model 2 |
| $1/10 \leq BF < 1/3$ | moderate evidence for model 2 |
| $1/30 \leq BF < 1/10$ | strong evidence for model 2 |
| $1/100 \leq BF < 1/30$ | very strong evidence for model 2 |
| $BF < 1/100$ | extreme evidence for model 2 |

## Results
In the Results section you have to guide the reader along, explaining what results you have found and what kind of analyses you did. You should start with the basic data, and gradually increase the level of analysis. So, usually the first figure in the Results section contains the raw data of one typical subject (such as a head movement trace, or a stimulus-response plot). With this figure you can introduce a more elaborate analysis, such as linear regression. Next you have to make the results quantitative, so you want to have a measure for all subjects/conditions that you can easily plot in, for example, a histogram. This "first a typical example, then the quantification" pattern abounds in scientific articles.

Don't: "Regression analysis was done, and the results of listener JB are plotted in figure X."
Do: "Listener JB accurately localizes the sounds, as evidenced by a high stimulus-response gain (gain = 0.9, see Methods, Figure X)".

The storyline is important. Each paragraph is preceded and followed by another paragraph. Link them together (use: therefore, however, next), and conclude.

#### Data visualization
A very important aspect of scientific reporting of data is data visualization. In your Matlab script, you should usually see these commands:
• axis square
• box off
• legend
• set(gca, ...) with properties such as:
  • 'LineWidth', 2
  • 'MarkerFaceColor', 'w'
  • 'Color', 'k'

Your figures should be saved as vector-format eps-files: print('-depsc','-painter',mfilename). There is almost never a reason to save a figure in bitmap format. The eps-files can be easily modified in Adobe Illustrator to make the figures more attractive (without distorting the data). And if you want to use Microsoft, Illustrator has an option to save your figure for Office purposes. (A rough matplotlib analogue of these styling commands is sketched at the end of this page.)

Also very important: label your axes!
• xlabel
• ylabel

Also, you should use a readable font size (minimally 10 at the final stage). Furthermore, remove as much "dead white space" as possible. For example, when you plot several similar graphs in one figure, you can often leave out the tick-labels for several of these graphs by correctly positioning them.

## Discussion
One of the hardest parts of any thesis to write is the Discussion. In the Discussion section you have to put your results in perspective.
• State your main findings that are important and relevant
• What have others done, and how does this relate to your experiments?
• You may put things in a broader perspective
• Propose a model
• Can your data be explained by other mechanisms?

# Concise
Students have the tendency to embellish, exaggerate and reiterate (especially in the Introduction section). Try to write in a concise manner. Try to get your main point across quickly/immediately. Explain everything you need to explain, and nothing more. Shorten long sentences.
Check your text for sentences like:
• "It is clear that"
• "Note that"
These sentence parts are superfluous, and can easily be removed (which often makes the text easier to read).

# Action
Don't over-use the verb "to be". Replace it with action verbs.
• "is dependent on" → "depends on"
• "localization performance is good for listener JO" → "listener JO accurately localizes sounds"

# Quantification
• Be as precise as possible. When using terms like "a number of", "many", "highly", try to quantify them (e.g. "8 out of 10").
• Differences are only differences when they are significant, and they are only significant when they are statistically significant (state the type of test and P value).

# Edit, edit, edit
It is always a good idea to check and re-check your text. Your supervisors will also heavily edit your manuscript drafts. Don't be disappointed when your first draft is returned completely covered in red changes. Expect this to happen! It is impossible to write your thesis in one day.

# Style
Write formally, so avoid:
• colloquialisms
• contractions (e.g. use 'do not' instead of don't)

I have heard from many students that they have also been taught to avoid 'we' and that you should write in the passive form. Rubbish! The personal pronoun 'we' and writing in the active form are quite common in scientific papers. The active form actually speeds up reading. So, 'the listener localized sounds accurately' is better than 'the sounds were localized accurately by the listener'. The most important rule is: don't write something absurd. All other rules can be broken.

#### Certainty
Also, because you can never be 100% certain of your conclusions, you should express yourself cautiously, using expressions such as:
• seems to
• might
• appears to
• is likely to
• can
• apparently
• could
• probably
• perhaps
• may (well)
• seemingly
• tends to

Note that caution should also be used when discussing other people's results and conclusions (for example in introductions or reviews). Even though a statement is written down in stone (in Dutch: "staat iets zwart op wit", in black and white), this does not make it true. With a research article, you should convince your readers that your experiments are well-performed and interesting. You should lead the readers along, convince them of the mysteries or problems you want to tackle, and conclude with a solution. The goal is to report your findings and conclusions clearly and to the point. Think of an outline with a logical flow. Each paragraph should contain a clear topic. Importantly, you should communicate a storyline, having a specific purpose in mind for every section and paragraph. Describe, compare and argue.

#### Cohesion
To achieve cohesion of a text, connect sentences and paragraphs. Here are some examples of connectives, grouped by type and purpose.

Type "and":
1. listing —
   Enumeration: first, furthermore, finally; one, two, three; first(ly), second(ly), third(ly); above all; last but not least; first and foremost; first and most important(ly); next, then, afterward, lastly/finally.
   Reinforcement: also, again, furthermore, further, moreover, what is more, then, in addition, besides, above all, too, as well (as).
   Equation: equally, likewise, similarly, correspondingly, in the same way; either, neither, nor, not only ... (but) also; indeed, actually, in (actual) fact, really, in reality.
2. transition — now; with reference/respect/regard to; regarding; let us (now) turn to; as for; as to.
3. summation — in conclusion, to conclude, to sum up briefly, in brief, to summarise, altogether, overall, then, therefore, thus.
4. apposition — i.e., that is, that is to say, viz., namely, in other words, or, or rather, or better, and, as follows, e.g., for example, for instance, say, such as, including, included, especially, particularly, in particular, notably, chiefly, mainly, mostly.
5. result — so, therefore, as a result/consequence, the result/consequence is, accordingly, consequently, now, then, because of this, thus, hence, for this reason.
6. inference — then, in other words, in that case, else, otherwise, if so/not, that implies, my conclusion is.

Type "or":
1. reformulation — better, rather, in other words, in that case, to put it (more) simply. (To be concise, don't use reformulation.)
2. replacement — again, alternatively, rather, better/worse (still), on the other hand, the alternative is, another possibility would be.

Type "but":
1. contrast — instead, conversely, then, on the contrary, by (way of) contrast, in comparison, (on the one hand) ... on the other hand.
2. concession — besides, in any case, (or) else, at any rate, however, nevertheless, nonetheless, notwithstanding, only, still, (al)though, yet, for all that, in spite of, despite that, after all, at the same time, on the other hand, all the same, even if, though.

#### Like versus such as
'Like' implies a comparison, 'such as' implies inclusion.
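Returning to the data-visualization section above: the Matlab styling advice maps fairly directly onto other plotting libraries. Here is a rough matplotlib analogue (a sketch, not the course's own code; the data, labels and file name are invented, and the Matlab-to-matplotlib mapping is approximate):

```python
# A rough matplotlib analogue of the Matlab figure-styling advice above:
# square axes, no box, thick lines, open markers, labeled axes, vector output.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
stimulus = np.linspace(-60, 60, 13)                    # invented data (deg)
response = 0.9 * stimulus + rng.normal(0, 5, stimulus.size)

fig, ax = plt.subplots(figsize=(4, 4))
ax.plot(stimulus, response, 'ko-', linewidth=2,        # cf. 'LineWidth',2
        markerfacecolor='w')                           # cf. 'MarkerFaceColor','w'
ax.set_aspect('equal', adjustable='box')               # cf. axis square
ax.spines['top'].set_visible(False)                    # cf. box off
ax.spines['right'].set_visible(False)
ax.set_xlabel('stimulus azimuth (deg)', fontsize=10)   # label your axes!
ax.set_ylabel('response azimuth (deg)', fontsize=10)
fig.tight_layout()                                     # trim dead white space
fig.savefig('stimulus_response.eps')                   # vector format, cf. print -depsc
```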
2018-09-22 17:55:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6319718956947327, "perplexity": 4233.214418499697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158609.70/warc/CC-MAIN-20180922162437-20180922182837-00559.warc.gz"}
https://www.physicsforums.com/threads/how-to-get-the-distance-traveled-from-the-force-and-mass-functions.639304/
# How to get the distance traveled from the force and mass functions?

## Homework Statement
I have two functions:
F(t) - where F(t) is the force at a given time t
m(t) - where m(t) is the mass of the object in question at a given time t
Let's say that some force (in a thrust form) is applied to the object for "b" seconds. The function F(t) specifies in what manner. How can I get the distance traveled by the object after "b" seconds, if we know that the velocity, acceleration and distance traveled are all 0 at t = 0?
-

## The Attempt at a Solution
I've tried using an analogy of the Riemann sum (dividing each instantaneous force by each instantaneous mass and summing everything for an acceleration-time function), and it turned out to be too tedious and imprecise to be applied practically. Anyone?

gneill Mentor
You can write an expression for the acceleration with respect to time from the given force and mass functions. Integrate to find velocity. Integrate again to find distance.

You can write an expression for the acceleration with respect to time from the given force and mass functions. Integrate to find velocity. Integrate again to find distance.
Thanks for your feedback. Can one just successively integrate the force function divided by the mass function to get the distance function?

gneill Mentor
Thanks for your feedback. Can one just successively integrate the force function divided by the mass function to get the distance function?
Sure.

HallsofIvy Homework Helper
m(t) dv/dt = f(t), so that dv = (f(t)/m(t)) dt, and you can integrate that. Once you have found v(t), you can use dx/dt = v(t) and integrate dx = v(t) dt to find the distance function. Of course, there is no guarantee that any of those functions will be "integrable" as an elementary function.

m(t) dv/dt = f(t), so that dv = (f(t)/m(t)) dt, and you can integrate that. Once you have found v(t), you can use dx/dt = v(t) and integrate dx = v(t) dt to find the distance function. Of course, there is no guarantee that any of those functions will be "integrable" as an elementary function.
But what if the F(t) and m(t) functions aren't continuous everywhere, and are only continuous on the interval [0, b]? It's easy to do the calculations when the function is completely continuous. But how to do it if it's continuous only over [0, b], and we want to know the distance traveled at b?

Last edited:

gneill Mentor
But what if the F(t) and m(t) functions aren't continuous everywhere, and are only continuous on the interval [0, b]? It's easy to do the calculations when the function is completely continuous. But how to do it if it's continuous only over [0, b], and we want to know the distance traveled at b?
If they're continuous over [0,b] and you want the distance at b, I don't see the problem, since the integrals will be defined over the domain. If the functions are not continuous then it is up to you to interpret their behavior in terms of physical laws and deal with the implications. This might, for example, mean splitting the domain of integration into continuous pieces and "bridging" the gaps with assumed constant velocity sections.
Can one use definite integration for that?

gneill Mentor
Can one use definite integration for that?
Sure. Any integration is just a sum. A sum can be split into chunks and added separately. If some physics occurs between the parts represented by the integrations, then the integrations just become terms in an overall equation of motion where you stick other terms to fill in the "spaces". Do you have some particular F(t) and M(t) in mind which is raising these concerns?
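The double integration suggested in this thread is straightforward to carry out numerically when F(t) and m(t) are only known (or only continuous) on [0, b], which sidesteps the hand-built Riemann sums the opening post found tedious. A minimal sketch, following HallsofIvy's equation m(t) dv/dt = f(t); the particular F(t) and m(t) here are invented examples:

```python
# Numerically integrate a(t) = F(t)/m(t) twice on [0, b], with x(0) = v(0) = 0.
# Uses a hand-rolled cumulative trapezoidal rule; F and m are made-up examples.
import numpy as np

b = 10.0                              # thrust applied for b seconds
t = np.linspace(0.0, b, 10001)
F = 2000.0 * np.exp(-t / 8.0)         # hypothetical thrust profile (N)
m = 100.0 - 3.0 * t                   # hypothetical slowly decreasing mass (kg)

def cumtrapz(y, t):
    """Cumulative trapezoid integral of y over t, starting from 0."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

a = F / m                             # m(t) dv/dt = F(t), as in the thread
v = cumtrapz(a, t)                    # velocity v(t)
x = cumtrapz(v, t)                    # distance x(t)
print(f"v(b) = {v[-1]:.1f} m/s, x(b) = {x[-1]:.1f} m")
```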
2021-05-06 13:12:09
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8681257963180542, "perplexity": 508.1154914338583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988753.97/warc/CC-MAIN-20210506114045-20210506144045-00075.warc.gz"}
http://openmx.ssri.psu.edu/thread/485?q=thread/485
# Multi Level Analysis

Joined: 04/02/2010 - 08:32
Hello! Just a quick question: is it possible to do a multi-level analysis with OpenMx? Best wishes! Nick

Joined: 07/30/2009 - 14:03
At this point it is a bit cumbersome to specify, and we are working on some more efficient code for the estimation. But it is possible and it's not terribly difficult. I believe Tim Brick is working on an article to describe the process in more detail.

Joined: 07/31/2009 - 15:14
A slightly longer reply: yes, see Mehta PD, Neale M.C. (2005). People are variables too: multilevel structural equations modeling. Psychological Methods 10(3):259-84. Available here: http://www.vipbg.vcu.edu/vipbg/neale-articles.shtml That article used classic Mx, but OpenMx has equivalent capabilities; indeed, model specification may be simpler in OpenMx.

Joined: 08/26/2009 - 13:59
Mike, et al., in the above-referenced paper, some mention is made of multiple group/cluster models vis-à-vis general SEM approaches. At a glance, I was uncertain whether Paras and Neale imply that OpenMx is currently set up for growth curves for multiple groups; it does not say this explicitly. Guessing... probably OpenMx is not fully set up to do this yet? It seems to me that if all equality constraints can be set in general in OpenMx (which I know they can be, because I used the program for my class this semester with a little help from Steve), it should be a relatively simple matter to set them up for multiple data frames. I'm less certain what happens with computing the fit statistics... For what it is worth, I am 'seeing this' as I would do it in LISREL code; I'm uncertain if the same programming structures apply. Any comment/resources on this? I am helping a colleague with an NIH proposal today and want to specify OpenMx as a preferred program, but his questions really require a multiple-group growth curve approach. Ted

Joined: 07/30/2009 - 14:03
Hi Ted, yes, multigroup LGC models can be specified in OpenMx. There are several ways to do this. The first way is to create multiple dataframes, one for each group. Then, create a model for each group (probably as copies of the same prototype model). Finally, you add all of the models into a "parent model" with an mxAlgebraObjective that adds up the function values from the two "child models". So, suppose you had created an LGC mxModel for each of two groups and the mxModels were named "male" and "female", both as R variables and within the mxModel() statement. It would look like this:

male <- mxModel("male", ... your LGC model definition here ...)
female <- mxModel("female", ... your LGC model definition here ...)
sumModel <- mxModel("sumModel", male, female,
    mxAlgebra(male.objective + female.objective, name="minus2sumLL"),
    mxAlgebraObjective("minus2sumLL"))
sumModelFit <- mxRun(sumModel)
summary(sumModelFit)

Of course, you could constrain parameters in the mxModel "male" to be equal to those in "female", or release those constraints. This method has the advantage of being able to debug your male and female models separately, since they could run on their own. Then you can add them together in the sumModel and get the multigroup case.

Joined: 04/02/2010 - 08:32
Gentlemen, somehow I was sure that I would receive automatic e-mail notifications about replies to my post, and discovered your replies only now. I apologize. Thank you very much for the answers and for the link to the article! I am reading it now. Best wishes! Nick

Joined: 01/20/2014 - 07:54
MultiLevel SEM and Onyx: Does Onyx stretch to specifying multilevel SEM models — and save the code for the model to be run in OpenMx? Cheers, Peter
2017-07-23 00:52:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2857764661312103, "perplexity": 2681.8640292721516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424200.64/warc/CC-MAIN-20170723002704-20170723022704-00327.warc.gz"}
http://math.stackexchange.com/questions/680211/a-question-about-the-universal-coefficient-theorem
# A question about the universal coefficient theorem.

Or rather a couple of questions. Let $X$ be some topological space, $R$ be a (unital) PID and $G$ be an $R$-module. Am I correct in understanding that the singular cochain complexes $\mathrm{Hom}_\mathbb{Z}(C_*(X;\mathbb{Z}),G)$ and $\mathrm{Hom}_R(C_*(X;R),G)$ coincide (as complexes of abelian groups), and thus the cohomology groups $H^*(X;G)$ may be found from either one of them? ($C_*$ denotes the singular chain complex.) Under the above impression, I'm trying to see how this relates to the universal coefficient theorems. As I see it, consequently one would have $$H^n(X;G)\cong \mathrm{Hom}_\mathbb{Z}(H_n(X;\mathbb{Z}),G)\oplus \mathrm{Ext}^1_\mathbb{Z}(H_{n-1}(X;\mathbb{Z}),G)\cong\\ \mathrm{Hom}_R(H_n(X;R),G)\oplus \mathrm{Ext}^1_R(H_{n-1}(X;R),G).$$ Now let us try to derive the latter isomorphism from the universal coefficient theorem for homology. Replacing the homology groups in the second line with the corresponding expressions of the form $*\otimes_\mathbb{Z} R\oplus \mathrm{Tor}_1^\mathbb{Z}(*,R)$ we obtain quite the isomorphism. I can see that $\mathrm{Hom}_\mathbb{Z}(H_n(X;\mathbb{Z}),G)\cong \mathrm{Hom}_R(H_n(X;\mathbb{Z})\otimes_\mathbb{Z}R,G)$ via tensor-hom adjunction, i.e. the very first summands on both sides coincide. However, the isomorphism still seems dubious to me, since, if I'm not mistaken, any of the four summands on the right may actually be nonzero. So, where is the nonsense lurking?

Update. (@Olivier Bégassat, @Drew: way too long for a comment.) Well, I tried thinking about it carefully... Consider the part of the singular chain complex $$\ldots\overset{\partial_{n+1}}{\longrightarrow}C_n(X;\mathbb{Z})\overset{\partial_n}{\longrightarrow}\ldots$$ One has $C_n(X;\mathbb{Z})=\ker\partial_n\oplus M$, for some subgroup $M\subset C_n(X;\mathbb{Z})$. Thus, $C_n(X;R)=\ker\partial_n\otimes R\oplus M\otimes R$ (a direct sum of $R$-modules, obviously). Now, denoting the boundary map of $C_*(X;R)$ by $\partial^R_*$, one has $$\ker\partial_n\otimes R\subset\ker\partial_n^R\subset C_n(X;R)$$ (embeddings of $R$-modules). This implies $$\ker\partial_n^R=\ker\partial_n\otimes R\oplus(\ker\partial_n^R\cap M\otimes R).$$ Finally, consider $\mathrm{im}\partial_{n+1}^R\subset\ker\partial_n\otimes R$ to obtain $$H_n(X;R)=\ker\partial_n^R/\mathrm{im}\partial_{n+1}^R=(\ker\partial_n\otimes R/\mathrm{im}\partial_{n+1}^R)\oplus(\ker\partial_n^R\cap M\otimes R)=\\H_n(X;\mathbb{Z})\otimes R\oplus\ldots$$ as $R$-modules indeed. I know, some of this looks pretty suspicious (the $R$-module $H_n(X;R)$ is nearly always decomposable?!), but I just can't put my finger on a mistake in the above.

- I don't know if there is a mistake or not (your first assertion about the two ways of calculating the cohomology groups with coefficients in $G$ is correct), but are you sure the splitting $$H_n(X;R)\simeq H_n(X;\Bbb{Z})\otimes_\Bbb{Z} R\oplus \mathrm{Tor}_1^\Bbb{Z}(H_{n-1}(X;\Bbb Z),R)$$ you want to use is automatically one of $R$-modules? You need this if you want to use $\mathrm{Hom}_R$ and $\mathrm{Ext}_R^1$. I'm doubly skeptical since there is no canonical way to split the universal coefficient theorem short exact sequences, and the sequence itself is one of $\Bbb Z$-modules. – Olivier Bégassat Feb 18 '14 at 1:11 Yes, several such questions do arise... Apparently, however, the embedding $H_n(X;\mathbb{Z})\otimes_\mathbb{Z}R\hookrightarrow H_n(X;R)$ is almost tautologically one of $R$-modules.
– Igor Makhlin Feb 18 '14 at 12:05 I think if you're careful about it, you'll find that the maps can be chosen to be $R$-module maps, but that the splitting is not one of $R$-modules – Drew Feb 19 '14 at 9:52 Here's the answer I've posted on MO: mathoverflow.net/questions/158046/…. – Igor Makhlin Feb 21 '14 at 16:29
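Added for illustration (not part of the original thread): a standard worked instance of the integral universal coefficient formula, for $X=\mathbb{RP}^2$, $R=\mathbb{Z}$, $G=\mathbb{Z}$. With $H_0(\mathbb{RP}^2;\mathbb{Z})=\mathbb{Z}$, $H_1(\mathbb{RP}^2;\mathbb{Z})=\mathbb{Z}/2$ and $H_2(\mathbb{RP}^2;\mathbb{Z})=0$, the formula gives $$H^1(\mathbb{RP}^2;\mathbb{Z})\cong \mathrm{Hom}_\mathbb{Z}(\mathbb{Z}/2,\mathbb{Z})\oplus \mathrm{Ext}^1_\mathbb{Z}(\mathbb{Z},\mathbb{Z})=0,\qquad H^2(\mathbb{RP}^2;\mathbb{Z})\cong \mathrm{Hom}_\mathbb{Z}(0,\mathbb{Z})\oplus \mathrm{Ext}^1_\mathbb{Z}(\mathbb{Z}/2,\mathbb{Z})\cong\mathbb{Z}/2.$$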
2016-02-06 03:08:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9650017023086548, "perplexity": 246.63559835140714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701145751.1/warc/CC-MAIN-20160205193905-00338-ip-10-236-182-209.ec2.internal.warc.gz"}
http://cs.union.edu/~striegnk/courses/nlp-with-prolog/html/node46.html
## 8.2 Top Down Parsing

As we have seen, in bottom-up parsing/recognition we start at the most concrete level (the level of words) and try to show that the input string has the abstract structure we are interested in (this usually means showing that it is a sentence). So we use our CFG rules right-to-left. In top-down parsing/recognition we do the reverse. We start at the most abstract level (the level of sentences) and work down to the most concrete level (the level of words). So, given an input string, we start out by assuming that it is a sentence, and then try to prove that it really is one by using the rules left-to-right. That works as follows: If we want to prove that the input is of category s and we have the rule s ---> [np,vp], then we will try next to prove that the input string consists of a noun phrase followed by a verb phrase. If we furthermore have the rule np ---> [det,n], we try to prove that the input string consists of a determiner followed by a noun and a verb phrase. That is, we use the rules in a left-to-right fashion to expand the categories that we want to recognize until we have reached categories that match the preterminal symbols corresponding to the words of the input sentence.

Of course there are lots of choices still to be made. Do we scan the input string from right-to-left, from left-to-right, or zig-zagging out from the middle? In what order should we scan the rules? More interestingly, do we use depth-first or breadth-first search? In what follows we'll assume that we scan the input left-to-right (that is, the way we read) and the rules from top to bottom (that is, the way Prolog reads). But we'll look at both depth-first and breadth-first search.

### 8.2.1 With Depth First Search

Depth first search means that whenever there is more than one rule that could be applied at one point, we explore one possibility and only look at the others when this one fails. Let's look at an example. Here's part of the grammar ourEng.pl, which we introduced last week:

s ---> [np,vp].
np ---> [pn].
vp ---> [iv].
vp ---> [tv,np].

lex(vincent,pn).
lex(mia,pn).
lex(died,iv).
lex(loved,tv).
lex(shot,tv).

The sentence "Mia loved vincent" is admitted by this grammar. Let's see how a top-down parser using depth first search would go about showing this. The following table shows the steps a top-down depth first parser would make. The second row gives the categories the parser tries to recognize in each step and the third row the string that has to be covered by these categories.

It should be clear why this approach is called top-down: we clearly work from the abstract to the concrete, and we make use of the CFG rules left-to-right. And why was this an example of depth first search? Because when we were faced with a choice, we selected one alternative and worked out its consequences. If the choice turned out to be wrong, we backtracked. For example, above we were faced with a choice of which way to try and build a VP --- using an intransitive verb or a transitive verb. We first tried to do so using an intransitive verb (at state 4), but this didn't work out (state 5), so we backtracked and tried a transitive analysis (state 4'). This eventually worked out.

### 8.2.2 With Breadth First Search

Let's look at the same example with breadth-first search. The big difference between breadth-first and depth-first search is that in breadth-first search we carry out all possible choices at once, instead of just picking one.
It is useful to imagine that we are working with a big bag containing all the possibilities we should look at --- so in what follows I have used set-theoretic braces to indicate this bag. When we start parsing, the bag contains just one item: {(s, mia loved vincent)}. The crucial difference occurs at state 5. There we try both ways of building VPs at once, so the bag holds both analyses: {(iv, loved vincent), (tv np, loved vincent)}. At the next step, the intransitive analysis is discarded, but the transitive analysis remains in the bag, and eventually succeeds. The advantage of breadth-first search is that it prevents us from zeroing in on one choice that may turn out to be completely wrong; this often happens with depth-first search, which causes a lot of backtracking. Its disadvantage is that we need to keep track of all the choices --- and if the bag gets big (and it may get very big) we pay a computational price. So which is better? There is no general answer. With some grammars breadth-first search works better, with others depth-first. Patrick Blackburn and Kristina Striegnitz Version 1.2.4 (20020829)
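As a rough sketch (not part of the course code), the top-down depth-first strategy described above can be written directly in Prolog. Assuming ---> has been declared as an infix operator, as in ourEng.pl, Prolog's own backtracking supplies the depth-first search, including the backtracking between states 5 and 4':

% recognize(Cats, Input, Rest): Rest is what remains of Input after
% recognizing the list of categories Cats, scanning the input
% left-to-right and expanding categories top-down.
recognize([], Rest, Rest).
recognize([Cat|Cats], [Word|Input], Rest) :-
    lex(Word, Cat),                   % Cat is a preterminal matching Word
    recognize(Cats, Input, Rest).
recognize([Cat|Cats], Input, Rest) :-
    (Cat ---> Expansion),             % expand Cat using a grammar rule
    append(Expansion, Cats, NewCats),
    recognize(NewCats, Input, Rest).

A string is a sentence if recognizing [s] consumes all of it:

?- recognize([s], [mia, loved, vincent], []).   % succeeds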
2017-11-18 21:38:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8075264096260071, "perplexity": 424.19816349337657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805049.34/warc/CC-MAIN-20171118210145-20171118230145-00403.warc.gz"}
https://video.ias.edu/math/stpm2010/liu
# STPM - p-Adic Galois Representations and $(\varphi,\Gamma)$-Modules

Ruochuan Liu

We will explain the equivalences between p-adic Galois representations and various types of $(\varphi,\Gamma)$-modules.
2018-05-26 15:35:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21305535733699799, "perplexity": 6593.367667486888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867559.54/warc/CC-MAIN-20180526151207-20180526171207-00316.warc.gz"}
https://www.gamedev.net/forums/topic/517878-defining-implicit-conversion-in-c/
# Defining implicit conversion in C++

This topic is 3475 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

Hello everyone. In C++, say I have two classes A and B. I can create a constructor

A::A(const B&) { ... }

or, if I can't modify the class A for some reason, I can define

B::operator A() const { ... }

Suppose I can't modify the class B either, but want to have a function used like this:

void foo(const A&);
B b;
foo(b);

If I had either of the two methods defined as above, this would be fine. Can I define a conversion operator, external to both classes, which will achieve the same thing? In my mind this isn't very different to defining

A& operator=(A&, const B&);

so if what I'm asking is disallowed, can someone please explain the motivation?

##### Share on other sites

Quote: Original post by spraff: Can I define a conversion operator, external to both classes, which will achieve the same thing?

No. Conversion operators must be internal to the class. I don't know the motivation; you'll have to ask Stroustrup. What are you trying to accomplish? It doesn't have anything to do with integrating incompatible math libraries, does it?

##### Share on other sites

Quote: Original post by spraff: In my mind this isn't very different to defining *** Source Snippet Removed ***

That isn't legal either, so I don't understand what kind of point you're making.

##### Share on other sites

I'm afraid operator= can only be defined as a member function too. They probably don't want to give you the chance to modify crucial behaviour of classes from outside.

##### Share on other sites

If foo can take an A, but you want to pass a B, there must be some common factor: either A and B have the same member function calls, and you could template your function, or both are otherwise convertible to some third type (or have a member like .getasint()), so why not make your function take that type instead? If you have inheritance (B : public A), then you can make foo take an A* as input.

[Edited by - KulSeran on December 14, 2008 2:33:10 PM]

##### Share on other sites

Quote: Original post by Sneftel: No. Conversion operators must be internal to the class. I don't know the motivation; you'll have to ask Stroustrup.

It requires access to private data members, otherwise only the publics would get copied. Although, why not make a friend assignment operator... I don't really know. I suspect it's due to how he intended friend functions to be used.

Quote: Original post by visitor: I'm afraid operator= can only be defined as a member function too. They probably don't want to give you the chance to modify crucial behaviour of classes from outside.

Implicit conversion:

class A {};
class B {
    operator A(); // No return type!
};

##### Share on other sites

Neither of those things *requires* access to private members. Offhand, every STL container I can think of can quite easily be copied using only public accessors.

##### Share on other sites

Quote: Original post by Sneftel: Neither of those things *requires* access to private members. Offhand, every STL container I can think of can quite easily be copied using only public accessors.

Exactly my reasoning. Suppose there were two string classes in an application, being perhaps returned from the functions of two different external (immutable) libraries. I might want to pass a string from one library as an argument to the function of another.
At the moment I would have to do something equivalent to

libraryone_func(convert(library2_func()));

Of course it would be nice for this to be done implicitly. All this talk about private members is nonsense; that's what public constructors deal with.

##### Share on other sites

Couldn't you use another function that wraps the call to library2_func and the convert call?
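A minimal sketch of that wrapper idea (the types and functions below are hypothetical stand-ins, since the real library types were never named): an overload of foo does the conversion once, so every call site can pass the other library's type directly.

#include <iostream>
#include <string>

// Hypothetical stand-ins for the two immutable library string types.
struct LibAString { std::string data; };
struct LibBString { std::string data; };

// Free conversion function: only the public interfaces are needed.
LibAString convert(const LibBString& b) { return LibAString{b.data}; }

void foo(const LibAString& a) { std::cout << a.data << '\n'; }

// Wrapper overload forwarding through the conversion.
void foo(const LibBString& b) { foo(convert(b)); }

int main() {
    LibBString b{"hello"};
    foo(b);  // reads like the implicit conversion the thread asked for
}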
2018-06-22 16:00:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22345879673957825, "perplexity": 2204.8653905449682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864546.30/warc/CC-MAIN-20180622143142-20180622163142-00166.warc.gz"}
https://bioinformatics.stackexchange.com/questions/20427/analysing-node-connectivity-difference-in-a-continuous-scale-edge-weighted-by-c
# Analysing node connectivity differences on a continuous scale (edges weighted by correlations)

I have an edge-list object weighted by a correlation value. I would like to know how node connectivity changes on a continuous scale; in other words, how does node connectivity differ as the edge weight runs from −1 to +1? I would like to know how to do this, or whether any R package or statistical model could solve the problem. That is, I want to know whether node connectivity increases or decreases in relation to the correlation weight, i.e. to look at the correlation between node degree and edge weight, but I don't know how to do it.

An example edge-list object is created in the code below.

data <- structure(list(
  from = c(5L, 5L, 5L, 1L, 1L, 1L, 1L, 4L, 4L, 4L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L),
  to = c(1L, 2L, 3L, 5L, 4L, 2L, 3L, 1L, 2L, 3L, 5L, 1L, 4L, 3L, 5L, 1L, 4L, 2L),
  weight = c(runif(18, -1, 1))),
  row.names = c(NA, -18L), class = "data.frame")

I appreciate your help!! Best, Amare

• if I understand correctly, something like, "do nodes with higher UNWEIGHTED degree also tend to have higher weight on their edges?" Jan 25 at 0:47
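One way to approach this in R (a sketch, not from the thread; it assumes the igraph package, which the question does not mention): compare each node's unweighted degree with the mean weight of its incident edges.

library(igraph)

g <- graph_from_data_frame(data, directed = FALSE)  # 'data' as built above

deg    <- degree(g)                                  # unweighted connectivity per node
mean_w <- strength(g, weights = E(g)$weight) / deg   # mean incident edge weight

# Does connectivity track edge weight? Spearman is one reasonable choice.
cor.test(deg, mean_w, method = "spearman")

Note that the example edge list records each pair in both directions, so this toy graph is an undirected multigraph; on real data you may want to collapse duplicate edges first (e.g. with simplify()).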
2023-03-21 02:28:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3991246521472931, "perplexity": 3271.3900110263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943589.10/warc/CC-MAIN-20230321002050-20230321032050-00472.warc.gz"}
https://www.gamedev.net/forums/topic/663172-search-for-matching-string/
# Search for matching string

This topic is 1115 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

Hi everyone. I'm trying to find a solution that would help me to find a matching string within an array. I have a big array[10000] and within it there are strings that are the same (only 1 matching string for each string); strings can be different lengths. The easiest and most expensive way in my mind is to sort all the strings by length into different vectors, then loop through the vectors to find the matching string. I could use array instead of string for read/write time improvement, but instead I am trying to see if I can use map: use map in order to have a key for each string which would represent length, and the value would be the string. How would I loop through all the strings that have a specific length (the map key)? Thank you.

##### Share on other sites

There are many ways of doing it, here are some hints:

1. Use a hash value as key instead of length.
2. If you have a static array, calculate the hash keys of each string, then sort the arrays (hash-key array + string array) and use binary search to find the first hash key. Then compare each string with the same hash key.
3. If you have a dynamic array, I would recommend using a hashmap which allows multiple entries with the same hash key, or use a list/vector to save the list of strings having the same hash key, something like this: map<int,list<string>>

##### Share on other sites

To make things simple, for now I just sum the DEC values of the chars in a string to represent a hash key (in my head, that works as well?). I feel a bit stuck.

int found = 0;
for (auto it = map.begin(); it != map.end(); ++it) {
    auto search = map.find(it->first);   // finds the first entry with this key
    if (search != map.end()) {           // (possibly *it itself)
        if (it->second != search->second && it->first == search->first) {
            found++;
        }
    }
}

Is there a way to start iteration (find) from a specific position until the end of the map?

##### Share on other sites

Quote: To make things simple, for now I just sum the DEC values of the chars in a string to represent a hash key (in my head, that works as well?)

Notice that the performance of a hash-based solution depends heavily on the value distribution and collision probability of hash values. Because simply summing up character code values gives you a poor hash (e.g. "abc", "acb", "bac", "bca", "cab", and "cba" all give the same value), any measuring done with it is mostly worthless. Well, a hash-based solution usually need not use a map. (On the other hand, a map may be implemented by using a hash table.) If you want to use hashes, then e.g. create a table indexed by a hash (or a portion of a hash), where each entry of the table is a bucket (perhaps a list) of strings having the said hash value. This reduces the search by a factor of N when using N buckets, assuming that the distribution of hash values is okay. It is called a "hash table" and it is well known and documented on the internet, e.g. on wikipedia. Another approach, in case the strings are not changed (at least not often), is the use of a kind of trie. However, tries are a bit (or in case of some variants even much) more complex than a hash table.

##### Share on other sites

This is C++. Just use std::unordered_map, which is a hash table that ships with any moderately recent C++ implementation. It's not the best hash table but it's 10000x better than rolling your own broken data structure.
Likewise, if you need a hash of a value for some other reason, use std::hash. It will provide a good hash for any of the built-in types including std::string. If you need to hash your own types, read up on the documentation of the standard library: bad hashes can easily be worse than not using hashing at all (e.g., a std::map will be better, and a std::map is pretty terrible).

##### Share on other sites

Thank you all for the suggestions. One more question: if I use a multimap and have multiple entries with the same key but different data, what is the easiest way to loop through all the entries for a specific key? Ashaman73: or instead of multimap, is it better to use the suggested map<int,list<string>>? My first try to make it work turned out to perform really slowly when populating the lists.

##### Share on other sites

You can use equal_range to get a pair of iterators for all values mapped to a particular key, then loop over them using standard techniques.

##### Share on other sites

Is this a frequent or infrequent thing? Or in other words, is the extra cost of creating hash tables or other metadata greater or less than the cost of all the lookups? For infrequent work a brute-force approach is often good enough. Second, is this only going to be used for an exact-match comparison? For example, is there ever going to be a case where substrings are important, or you need detail from within a part of the string rather than a pure acceptance match? Those questions may suggest alternate data structures or patterns. There are a few other structures that can also provide great performance for different uses, such as an autocomplete system or spell correction. A bit more information could help. One person above, haegarr, mentioned one of these data structures. Depending on your answers it may be a great answer. A very good structure for string acceptance is the trie. For several uses a trie can be faster than a hash table, but for other uses it can be slower, so knowing details is important. Also useful, a trie can provide alphabetical sorting, and some implementations can be heavily compressed, requiring only a few bytes per entry rather than storing complete strings. It is also generally cache efficient, and for that reason it is used in the fastest-known string sorting algorithm.

##### Share on other sites

If we knew your requirements, we could help you more. You already have an implementation in mind to solve a problem, but we only know this implementation and not the requirements. Requirements are something like this:

1. I have a fixed set of strings, which will not change while the application is running.
2. The performance for preparing the strings is not that important.
3. I need to look up the strings very often, therefore the lookup performance is very important.

Or like this:

1. The set of strings changes very often while the application is running.
2. I need to set up new sets very often, therefore the setup time is performance critical.
3. I need to search in ranges, e.g. iterate through all strings between a - c.
4. The lookup needs to ignore case.

Once we know what you need, we can suggest a specialist data structure/algorithm.
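Two small self-contained sketches of the approaches suggested above (the sample data is illustrative). First, the std::unordered_map approach: count every string in one pass, then report duplicates.

#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

int main() {
    std::vector<std::string> strings = {"abc", "acb", "xyz", "abc"};

    // One pass: std::hash<std::string> does the hashing, O(n) expected total.
    std::unordered_map<std::string, int> counts;
    for (const auto& s : strings) ++counts[s];

    // Report every string that occurs more than once.
    for (const auto& [s, n] : counts)
        if (n > 1) std::cout << s << " occurs " << n << " times\n";
}

Second, looping over all entries of a multimap that share one key, via equal_range:

#include <iostream>
#include <map>
#include <string>

int main() {
    std::multimap<int, std::string> byLen{{3, "abc"}, {3, "xyz"}, {5, "hello"}};

    // equal_range returns [first, last) over all entries with key == 3.
    auto range = byLen.equal_range(3);
    for (auto it = range.first; it != range.second; ++it)
        std::cout << it->second << '\n';
}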
2017-12-16 15:20:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1792708784341812, "perplexity": 1309.9862115228705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588251.76/warc/CC-MAIN-20171216143011-20171216165011-00540.warc.gz"}
https://studyadda.com/sample-papers/jee-main-sample-paper-26_q10/280/300341
If $\int_{\sin x}^{1}t^{2}f(t)\,dt=1-\sin x,\ x\in \left( 0,\frac{\pi }{2} \right)$, then $f\left( \frac{1}{\sqrt{2}} \right)$ is equal to

A)  1                     B)  2 C)  3                                 D)  4

Differentiating both sides with respect to x (Leibniz rule for a variable lower limit), we get $0-{{\sin }^{2}}x\,f(\sin x)\,\cos x=-\cos x$ $\Rightarrow \,\,f(\sin x)=\frac{1}{{{\sin }^{2}}x}$ $\therefore \,\,f\left( \frac{1}{\sqrt{2}} \right)\,={{(\sqrt{2})}^{2}}=2$. Hence the answer is B.
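For reference, the general rule being applied here (standard calculus, not part of the original solution): $\frac{d}{dx}\int_{a(x)}^{b}g(t)\,dt=-g(a(x))\,a'(x)$ for a constant upper limit $b$; with $a(x)=\sin x$ and $g(t)=t^{2}f(t)$ this gives $-\sin^{2}x\,f(\sin x)\cos x$, matching the step above.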
2022-01-19 10:12:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9715837240219116, "perplexity": 3270.5442133400347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301309.22/warc/CC-MAIN-20220119094810-20220119124810-00193.warc.gz"}
https://www.nature.com/articles/s41598-017-02232-y
# Volunteer Participation in the Health eHeart Study: A Comparison with the US Population

## Abstract

Direct volunteer “eCohort” recruitment can be an efficient way of recruiting large numbers of participants, but there is potential for volunteer bias. We compared self-selected participants in the Health eHeart Study to participants in the National Health And Nutrition Examination Survey (NHANES) 2013–14, a cross-sectional survey of the US population. Compared with the US population (represented by 5,769 NHANES participants), the 12,280 Health eHeart participants with complete survey data were more likely to be female (adjusted odds ratio (ORadj) = 3.1; 95% confidence interval (CI) 2.9–3.5); less likely to be Black, Hispanic, or Asian versus White/non-Hispanic (ORadj’s = 0.4–0.6, p < 0.01); more likely to be college-educated (ORadj = 15.8 (13–19) versus ≤high school); more likely to have cardiovascular diseases and risk factors (ORadj’s = 1.1–2.8, p < 0.05) except diabetes (ORadj = 0.8 (0.7–0.9)); more likely to be in excellent general health (ORadj = 0.6 (0.5–0.8) for “Good” versus “Excellent”); and less likely to be current smokers (ORadj = 0.3 (0.3–0.4)). While most self-selection patterns held for Health eHeart users of Bluetooth blood pressure cuff technology, there were some striking differences; for example, the gender ratio was reversed (ORadj = 0.6 (0.4–0.7) for female gender). Volunteer participation in this cardiovascular health-focused eCohort was not uniform among US adults nor for different components of the study.

## Introduction

Emerging technology, near-ubiquitous access to the internet, and ease of electronic communication make it possible to contact and recruit participants over the internet, consent and collect data without in-person visits, and repurpose new sensor devices and smartphone technology for longitudinal research data collection. This so-called “eCohort” approach can be an extremely efficient epidemiologic approach that is attractive in an era of shrinking funds for traditional studies1. Even the well-endowed Precision Medicine Initiative will employ internet- and mobile phone application- (app-) based recruitment to recruit over a third of the planned 1 million person cohort2. This approach, however, may yield substantial volunteer bias. Technology use is not uniform in the US3, and reliance on response to electronically-delivered invitations for study participation is likely to select for particular individuals with favorable impressions of the research establishment, strong altruistic motivation, and time to complete research activities. Prior internet-based surveys, for example, have reported over-representation of women, married, and well-educated individuals4. No prior analyses have reported on internet-based recruitment into a US-based eCohort in comparison with the US population. The Health eHeart Study is a large eCohort study focused on cardiovascular health.
Health eHeart invites any adult age 18 years or older with an email address to participate, recruits primarily over the internet via electronically-delivered invitations, collects surveys and patient-reported outcomes, and supports connection of a wide variety of consumer electronic devices and apps to the study so that the mHealth data they collect can be donated and delivered to the Health eHeart Study database. We compared participants in the Health eHeart Study to participants in the National Health And Nutrition Examination Survey (NHANES), which was designed to be representative of the US population, for the purpose of informing inferences made using Health eHeart Study analyses and for targeting recruitment to balance our study sample.

## Methods

### Health eHeart Study Sample

The Health eHeart Study is a cardiovascular-focused eCohort, with enrollment, consent and participation occurring entirely over the internet. We analyzed cross-sectional baseline examination data and follow-up data from Bluetooth-enabled blood pressure measurement devices obtained between March 8, 2013 (enrollment initiation) and March 24, 2016 from consecutive participants enrolled in the Health eHeart Study. Participation in the Health eHeart Study is open to any person (world-wide) with a self-reported date of birth indicating age ≥18 years and an email address. Recruitment into the study occurred via several news media stories, social media and word-of-mouth in addition to being actively sought via email campaigns sent to persons associated with the American Heart Association (primarily via emails sent to participants in their Go Red for Women campaign5), to adult patients at the University of California, San Francisco (UCSF) Medical Center (primarily via unsolicited email invitation), through various other specific referral sources (we track referral source by providing a special URL to referring partners), and from unspecified sources (through our general URL). After online registration (name, date of birth, email and password) and consent, participants were prompted to complete a series of online questionnaires pertaining to basic socio-demographics, family history, medical history, activity and well-being, habits and lifestyle, mental health, food and nutrition, and use of internet or social media. Participants were also invited to “connect” devices and apps (that they already own) from Fitbit, iHealth, Withings, Qardio, Alivecor, Azumio, Ginger.io and Google Fit and donate their data to the study. We limited our primary analysis to participants age ≥20 years (for comparability with NHANES) and with complete information and without “unknown” or “refused” responses on all baseline core survey instruments and survey items. For our secondary analysis, we additionally limited the sample to such participants who also contributed at least one blood pressure measurement via Bluetooth-enabled blood pressure measurement devices (iHealth, Withings and Qardio were all supported).

### NHANES Sample

We used NHANES 2013–2014 to represent the US population and compare against participants in the Health eHeart Study. NHANES is a program of the National Center for Health Statistics (NCHS) that aims to investigate the health and nutritional status of the US population. Since 1999, the survey has been released every 2 years in a continuous fashion. These cross-sectional data are representative of the non-institutionalized US population.
Every year, approximately 5,000 individuals of all ages are interviewed in their homes and complete the health examination component of the survey. NHANES follows a complex, multistage sampling procedure where the primary sampling units are counties or small groups of contiguous counties, within which city blocks are selected. Within these blocks, households are then randomly selected, and then individuals are drawn at random6. All NHANES protocols were approved by the NCHS Research Ethics Review Board7. In 2013–2014, 14,332 persons were selected for NHANES from 30 different study locations. Of those selected, 10,175 completed the interview. NHANES provides study weights that account for both non-response and deliberate oversampling of particular segments of the population. Because various components of NHANES are only delivered to adults ≥20 years, we limited our analyses to these participants, leading to a sample size of 5,769. In order to maintain strict representativeness of the NHANES study sample ≥20 years and allow for direct comparisons with Health eHeart, we performed multiple imputation using chained equations to estimate missing and “unknown”/non-response values of all variables of interest (n = 13 variables) for all participants (n = 1,162 participants with at least one missing value)8, 9. We used 10-fold multiple imputation to generate imputed datasets, each with complete data on all 5,769 NHANES participants included in our sample. This 10-fold imputed dataset was used for all subsequent analyses. Informed consent was obtained from all participants in both Health eHeart and NHANES. Our analysis of the Health eHeart Study data is covered by the UCSF Institutional Review Board (IRB); our analysis of the de-identified NHANES data is exempt from IRB review. Methods were performed in accordance with the relevant guidelines and regulations.

### Statistical Method

We first used descriptive statistics to compare the demographic characteristics, medical conditions, and lifestyle factors of the Health eHeart sample by recruitment source, using ANOVA and chi-square tests for between-source differences. Then, to identify factors independently associated with participation in Health eHeart, we used a case-control approach, using pooled data for the combined NHANES and Health eHeart samples to estimate logistic regression models for the “outcome” of inclusion in the Health eHeart Study sample. We first fit single-predictor models for age, sex, race, income, marital status, educational level, hypertension, hyperlipidemia, diabetes, stroke, coronary heart disease, heart failure, heart attack, general health, smoking and sleep duration, and then fit a final multivariable model for Health eHeart participation that included this entire set of predictors. Results are summarized as odds ratios (ORs) and 95% confidence intervals (CIs). We accounted for the complex stratified survey design of NHANES using the sampling weights, pseudo-strata, and primary sampling unit (PSU) variables provided by NHANES, with weights normalized to sum to the NHANES sample size. In the pooled analyses, Health eHeart participants were each given unit weight, and randomly assigned to two PSUs with a distinct pseudo-stratum. Multiple imputation of the NHANES data was implemented using the mi package in Stata Version 14.0, and the case-control models were estimated using the Stata svy package for complex survey data, which accommodates multiply-imputed data.
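As a rough illustration of this design declaration and pooled model (not the authors' actual code; all variable names here are hypothetical), the Stata commands might look like:

* Declare the complex survey design: PSUs, pseudo-strata, normalized weights.
svyset psu [pweight = wt_norm], strata(pseudo_stratum)

* Pooled case-control logistic model for the "outcome" of Health eHeart
* participation, combined across the multiply-imputed datasets.
mi estimate: svy: logit eheart age i.sex i.race i.educ i.income i.married ///
    htn lipid dm stroke chd hf heartattack i.genhealth smoker sleephrs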
Two-sided P values less than 0.05 were considered to be statistically significant.

## Results

At the time of our data lock, 42,828 participants had registered for the Health eHeart Study by providing their name, email and date of birth. Of those, 33,236 (78% of registered participants) signed the online consent, 28,420 completed at least one survey (86% of consented participants), and 12,280 were participants age ≥20 years with complete core baseline survey data and without “unknown” or “refused” responses to any survey item (Fig. 1). These participants constitute our primary analysis sample. Of these, 251 contributed at least one blood pressure measurement via a Bluetooth-enabled blood pressure measurement device; these participants constitute our secondary analysis sample (Fig. 1). As described in our Methods, all NHANES participants age ≥20 years were included after multiple imputation successfully imputed missing/unknown/refused items for the 1,162 participants missing at least one required data element. Baseline characteristics of Health eHeart Study participants differed by referral source (Table 1). For example, only 3% of participants referred by American Heart Association sources were male (consistent with the primary focus on the Go Red for Women program), compared with 37%–44% from other sources (p < 0.001). We also detected differences by recruitment source in age (more elderly participants from UCSF), race/ethnicity (more Black, non-Hispanic participants from AHA), income and education (higher in both from UCSF), general health (highest among participants from unspecified referral source), and sleep duration (lowest duration from AHA referrals, Table 1, all p-values < 0.001). Compared with all adults in the US, as represented by NHANES participants (applying sample weights), Health eHeart Study participants were more likely to be middle-aged; more likely to be female; less likely to be Black, Hispanic, or Asian versus White/non-Hispanic; more likely to be highly educated; more likely to have cardiovascular disease and risk factors but less likely to have diabetes; more likely to be in excellent general health; less likely to be current smokers; and more likely to report low sleep duration (Table 2). Associations with higher income and marital status did not persist in adjusted models. The higher prevalence of female participants in Health eHeart persisted even after excluding participants referred from the Go Red for Women program (ORadj = 1.6; 95% CI: 1.5–1.7). When we limited both the Health eHeart Study and NHANES population to participants with coronary heart disease (Health eHeart Study n = 1,297; NHANES n = 293), characteristics of the sample were different (e.g., higher prevalence of cardiovascular risk factors), but predictors of participation in the Health eHeart Study were quite similar (Supplemental Table 1). Only a small subset of Health eHeart Study participants (n = 251, 2%) used a Bluetooth-enabled blood pressure measurement device, connected their device account to their Health eHeart Study account, and donated at least one blood pressure measurement to the study (median number of measurements per participant = 30; interquartile range 9–82). These highly self-selected participants showed mostly the same patterns of characteristics, relative to NHANES, as the full Health eHeart sample, with some striking contrasts (Table 3).
Instead of a large female preponderance in the full Health eHeart sample (73%, Table 2), Health eHeart participants contributing device-measured blood pressure values were less likely to be female than the US population (35%, Table 3). Persons with hypertension and coronary heart disease were even more heavily over-represented in this subset. Also, in this subsample in which moderately expensive purchases were required (blood pressure cuff and smartphone), higher income persisted as a strong predictor even after adjustment for education and other factors.

## Discussion

The Health eHeart Study used efficient electronic methods for recruitment and took advantage of partner organizations willing to refer patients to our study website. This resulted in extremely efficient recruitment into the study. The sample of recruited individuals, however, differs from the US population in a variety of ways. Not only does the study over-represent persons with cardiovascular diseases and risk factors (as expected based on the study focus), but it also appears to over-represent females and non-Hispanic Whites, higher educational level, persons with more prevalent medical conditions but better self-reported general health, and fewer current smokers than would be expected if participation were proportional from all segments of the US population. Patterns were different (e.g., reversal of the female predominance) in the highly selected subset of the Health eHeart Study who contributed blood pressure measurements from a Bluetooth-enabled device. Internet- and technology-enabled epidemiology can have major advantages in terms of efficiency. Consistent with the Health eHeart Study recruitment experience, one Danish internet-based study estimated more than 50% savings in their recruitment compared with a conventional approach ($160 vs. $322 per subject)10, and an internet-based clinical trial similarly reported that their web-based methods cost about half that of a hospital-based approach11. Web-based questionnaires generally reduce cost substantially12, as do studies that invite participation by e-mail13. Aside from cost, web-based surveys can be more efficient in terms of response speed from respondents14, easier to adjust and modify by the research team15, quicker and less error-prone to process since data are entered electronically and coded automatically16, and easier to complete for disabled participants17. Our results, in terms of which characteristics predicted participation, were similar in some ways, but different in others when compared with prior studies. As with Health eHeart, women and those with higher socioeconomic status appear to be consistently more likely to participate in epidemiologic studies18, especially in eCohorts14, 19, 20. For example, the NutriNet-Santé study in France found a much higher percentage of women compared with the corresponding national figures (78.0% vs 52.4%); and both the NutriNet-Santé study and the Australian Longitudinal Study on Women’s Health found higher participation rates in persons with higher educational levels. In contrast to the NutriNet-Santé study, however, which found higher proportions of married or partnered participants compared to their national data (70.8% vs. 62.0%), the unadjusted association we found in Health eHeart (69% married vs. 62% in NHANES) was not significant after adjusting for other selection factors.
Also in contrast with Health eHeart, the Australian Longitudinal Study reported a higher percentage of study participants who rated their health in the online survey as fair or poor, and a higher percentage of study participants who were current smokers compared to their Census data. Their study, however, was limited to a very narrow demographic band (women age 18–23) so may not be comparable. We did not find another study describing self-selected participation in a study requiring use of sensor technology such as our analysis of participants in the Bluetooth-connected blood pressure cuff subsample. Several factors likely contribute to the differences we observed between the Health eHeart Study and NHANES. First of all, NHANES makes special efforts to recruit underrepresented minorities. In fact, such individuals are oversampled in NHANES (though sample weights correct this factor so results are generalizable to the US population). No such efforts are made in the Health eHeart Study. Second, the Health eHeart Study’s focus naturally attracts participants at risk for heart disease, so the overrepresentation of people with cardiovascular diseases, such as coronary heart disease, stroke and heart failure, is to be expected. However, when we subset both samples to only participants with coronary heart disease, general selection patterns (e.g., for sex, race/ethnicity, education level and smoking) were consistent with those we found in the full Health eHeart sample. Clearly, the “digital divide” may explain differences in participation by education, and particularly also by income for the subset of Health eHeart using a Bluetooth-enabled blood pressure measurement device. As the digital divide diminishes21 and technology diffuses through all segments of society, this participation selection factor may ameliorate to some degree. The Health eHeart Study is large and nationally-scoped and includes participants who complete extensive online surveys and device-associated data collection; and the NHANES study provides a near-ideal way to compare to the US population. However, our analysis has some limitations. Unlike NHANES, the Health eHeart Study does not limit participation to US residents. In contrast to Health eHeart, bias from self-selected non-participation in NHANES is minimized by post-stratification re-weighting based on the known demographic characteristics of the target sample; however, missing values arising from so-called item non-response in NHANES may not be missing at random (even conditional on other factors included in our imputation model), such that multiple imputation may be flawed. Finally, while both Health eHeart and NHANES collect many additional measurements, we were only able to evaluate measurements that were identically collected in both studies (or nearly so), preventing us from assessing the representativeness of Health eHeart on other potentially important dimensions. Our results have some clear implications. First, given that Health eHeart recruitment is ongoing, this analysis provides guidance for how the study team can refocus recruitment efforts to target thus-far under-represented subgroups of the US population. It also represents a roadmap for prospective targeting efforts that can be used by the Precision Medicine Initiative as it begins internet-based direct volunteer recruitment later this year. 
While some self-selection characteristics may be expected from prior work on participation in research (e.g., under-representation of racial/ethnic minorities22), our findings regarding the technology product-dependent subsample (e.g., reversal of the sex ratio) are more surprising and potentially important to account for. The other clear implication relates to inference: it is clear that simple descriptive analyses of the self-selected Health eHeart Study (e.g., % technology use) will often not yield results that are representative of the US population, either on average or within strata defined by other covariates (e.g., gender). However, it is important to note that estimates of average adjusted associations are likely robust to over- or under- (mis-) sampling even on the variables included in the association, provided that the mis-sampling occurs independently for each variable, and that the association is not modified by factors associated with self-selection. For example, we might obtain valid adjusted estimates of the marginal association of technology use with gender, despite oversampling of technology users and of women in the Health eHeart Study, provided that the oversampling on each factor is independent, and that the effect of technology use on gender does not vary, for example, by education. Note, even in the presence of effect modification, estimates within strata of the effect modifier should remain valid (i.e., there is internal validity). Furthermore, the effects of these various aspects of selection bias may potentially be minimized by re-weighting the Health eHeart sample (similar to the post-stratification weighting performed by NHANES), based on an extension of the multivariable logistic model developed here, with the result that all included covariates have weighted distributions very close to those in NHANES. In conclusion, the Health eHeart Study demonstrates efficient internet-based recruitment, and allows remote data collection from online surveys and sensor/device technology. While it also clearly demonstrates that participants who volunteer for the study are different on average from the US population, this does not rule out its potential for providing valid estimates of adjusted associations. Whether this limitation can be overcome by future internet-based studies such as the planned Precision Medicine Initiative Cohort remains to be seen and will likely require more deliberate sampling, more costly targeted recruitment efforts, and application of post-recruitment standardization methods that correct for unrepresentative volunteer participation.

## References

1. Kaiser, J. Epidemiology. Budget woes threaten long-term heart studies. Science. 341, 701 (2013). 2. Jonah, C. NIH awards $120M to Scripps, others, to enroll 350K participants in Precision Medicine Initiative via mobile apps http://mobihealthnews.com/content/nih-awards-120m-scripps-others-enroll-350k-participants-precision-medicine-initiative-mobile (2016). 3. Jacob, P. Smartphone ownership and internet usage continues to climb in emerging economies http://www.pewglobal.org/2016/02/22/smartphone-ownership-and-internet-usage-continues-to-climb-in-emerging-economies/#table (2016). 4. Andreeva, V. A. et al. Comparison of the sociodemographic characteristics of the large NutriNet-Santé e-cohort with French Census data: the issue of volunteer bias revisited. J Epidemiol Community Health. 69, 893–898 (2015). 5. American Heart Association. Go Red For Women https://www.goredforwomen.org/ (2016). 6.
Centers for Disease Control and Prevention. NHANES 2013–2014 Overview http://www.cdc.gov/nchs/nhanes/nhanes2013-2014/overview_h.htm (2015). 7. Centers for Disease Control and Prevention. NCHS Research Ethics Review Board (ERB) Approval http://www.cdc.gov/nchs/nhanes/irba98.htm (2012). 8. Little, R.J.A. & Rubin, D.B. Statistical Analysis with Missing Data (2nd ed.) (New York, 2002). 9. Berglund, P.A. An introduction to multiple imputation of complex sample data using SAS v9.2 http://support.sas.com/resources/papers/proceedings10/265-2010.pdf (2010). 10. Huybrechts, K. F. et al. A successful implementation of e-epidemiology: the Danish pregnancy planning study ‘Snart-Gravid’. Eur J Epidemiol. 25, 297–304 (2010). 11. McAlindon, T., Formica, M., Kabbara, K., LaValley, M. & Lehmer, M. Conducting clinical trials over the Internet: feasibility study. BMJ. 327, 484–487 (2003). 12. Adams, J. & White, M. Health behaviours in people who respond to a web-based survey advertised on regional news media. Eur J Public Health. 18, 335–338 (2008). 13. Greenlaw, C. & Brown-Welty, S. A comparison of web-based and paper-based survey methods: testing assumptions of survey mode and response cost. Eval Rev. 33, 464–480 (2009). 14. Coyne, K. S. et al. Rationale for the study methods and design of the Epidemiology of Lower Urinary Tract Symptoms (EpiLUTS) study. BJU Int. 104, 348–351 (2009). 15. Wyatt, J. C. When to use web-based surveys. J Am Med Inform Assoc. 7, 426–429 (2000). 16. van Gelder, M. M., Bretveld, R. W. & Roeleveld, N. Web-based questionnaires: the future in epidemiology? Am J Epidemiol. 172, 1292–1298 (2010). 17. Gosling, S. D., Vazire, S., Srivastava, S. & John, O. P. Should we trust web-based studies? A comparative analysis of six preconceptions about internet questionnaires. Am Psychol. 59, 93–104 (2004). 18. Galea, S. & Tracy, M. Participation rates in epidemiologic studies. Ann Epidemiol. 17, 643–653 (2007). 19. Mishra, G. D. et al. Recruitment via the Internet and social networking sites: the 1989–1995 cohort of the Australian Longitudinal Study on Women’s Health. J Med Internet Res. 16, e279 (2014). 20. Berrens, R. P., Bohara, A. K., Jenkins-Smith, H., Silva, C. & Weimer, D. L. The advent of Internet surveys for political research: a comparison of telephone and Internet samples. Political Analysis. 11, 1–22 (2003). 22. George, S., Duran, N. & Norris, K. A systematic review of barriers and facilitators to minority research participation among African Americans, Latinos, Asian Americans, and Pacific Islanders. Am J Public Health. 104, e16–31 (2014). ## Acknowledgements The Health eHeart Study has received funding from the Salesforce Foundation, the Patient-Centered Outcomes Research Institute, and the UCSF Cardiology Division. ## Author information Authors ### Contributions The Health eHeart Study was conceived and executed by M.J.P., J.E.O. and G.M.M. This analysis was conceived by X.F.G. and M.J.P., who also collaborated in drafting the manuscript. E.V. oversaw the statistical analysis, which was executed by X.F.G. All authors reviewed, provided critical revisions for, and approved the final manuscript. ### Corresponding author Correspondence to Xiaofan Guo. ## Ethics declarations ### Competing Interests The authors declare that they have no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Guo, X., Vittinghoff, E., Olgin, J.E. et al. 
Volunteer Participation in the Health eHeart Study: A Comparison with the US Population. Sci Rep 7, 1956 (2017). https://doi.org/10.1038/s41598-017-02232-y
2022-07-06 08:39:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31176066398620605, "perplexity": 7687.505350135862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104668059.88/warc/CC-MAIN-20220706060502-20220706090502-00223.warc.gz"}
https://en.1answer.info/6d617468-7a32383137343932
# Conditional probability question involving balls w/o replacement

Faust 06/12/2018 at 21:07. 6 answers, 133 views

A box contains 12 balls numbered 1 through 12. If 5 balls are selected one at a time from the box, without replacement, what is the probability that the largest number selected will be 9?

I want to just say $\frac{9*8*7*6*5}{12P5}$ but that is wrong and I don't know why.

Anonymous 06/12/2018 at 22:04.

The event "the largest draw is $9$" can be seen as the intersection of two events: 1. $E_1$: all draws are at most $9$. 2. $E_2$: (at least) one draw is $9$. So you can compute your probability as $P(E_1)\cdot P(E_2|E_1)$. $P(E_1)={9\choose 5} / {12 \choose 5} = \frac{7}{44}$, i.e. the number of ways you can take your $5$ draws from the $[1...9]$ interval, over the number of ways you can take them from the $[1...12]$ interval. $P(E_2|E_1)=1-\frac{8}{9}\frac{7}{8}\frac{6}{7}\frac{5}{6}\frac{4}{5}=\frac{5}{9}$, i.e. $1$ minus the probability of avoiding a $9$ with your first, second, third, fourth and fifth draw, given that all your draws are in the $[1...9]$ interval. Then the probability that your highest draw is a $9$ is $\frac{7}{44}\cdot\frac{5}{9}=\frac{35}{396}$. There are other, slightly quicker ways to compute the solution, but I think that this one through conditional probabilities is the most "obvious".

The original question also asks: why is the probability not $\frac{9\cdot 8\cdot 7\cdot 6\cdot 5}{12 \choose 5}$? The denominator is the number of ways one can choose $5$ balls out of $12$. The numerator is the number of ways one can choose $5$ "distinguishable" balls out of $9$. This has two problems: first, it counts the balls as "distinguishable" (in other words, it takes $9\cdot 8\cdot 7\cdot 6\cdot 5$ instead of $9\choose 5$). Second, it considers every case in which the highest draw is at most $9$, rather than exactly $9$ (so it forgets to multiply by the probability that of the $5$ draws in the $[1-9]$ range, one is indeed $9$, i.e. $1-\frac{8}{9}\frac{7}{8}\frac{6}{7}\frac{5}{6}\frac{4}{5}=\frac{5}{9}$). Addressing these two issues takes us to the formula above.

herb steinberg 06/12/2018 at 21:29.

You need to do the problem in two parts. First, what is the probability that none of the balls chosen are numbered 10 through 12? This has probability $P=\frac{9\times 8\times 7\times 6 \times 5}{12 \times 11\times 10\times 9\times 8}$. Second, the probability (under this condition) that a 9 has been chosen. To get this, calculate the probability that a 9 was not chosen. This is $Q=\frac{8\times 7\times 6\times 5\times 4}{9\times 8\times 7\times 6\times 5}$. The final answer is $P(1-Q)$.

E-A 06/12/2018 at 21:28.

The event in which the largest number is 9 occurs if: a) you pick 9, and b) you pick 4 other numbers that are all less than 9. The number of ways you can do that is simply picking 4 items out of 8. You should be able to finish from here.

Graham Kemp 06/13/2018 at 00:40.

A box contains 12 balls numbered 1 through 12. If 5 balls are selected one at a time from the box, without replacement, what is the probability that the largest number selected will be 9?

The probability for selecting 9 and four numbers less than 9, when selecting any five of the twelve without replacement, is: ${^8\mathrm C_4}\,/\,{^{12}\mathrm C_5}$ or $5\cdot{^8\mathrm P_4}\,/\,{^{12}\mathrm P_5}$. Since you seem more familiar with using $^n\mathrm P_r$, that second form is derived as follows: Count the ways to permute the maximum (9) and four places.
Count the ways to fill those places with a permutation of 4 balls from the 8 lesser balls. Count the ways to fill all five places with a permutation of any 5 balls from 12. Multiply and divide the counts, as appropriate.

I want to just say $\frac{9*8*7*6*5}{12P5}$ but that is wrong and I don't know why.

${^9\mathrm C_5}/{^{12}\mathrm C_5}$, or ${^9\mathrm P_5}/{^{12}\mathrm P_5}$, is the probability for selecting five numbers that are 9 or less, when selecting any five of the twelve without replacement. (The maximum of those five balls is not restricted to being 9; just to not exceeding it.)

KonKan 06/13/2018 at 00:56.

There are $8$ choices (i.e.: 1,2,3,4,5,6,7,8) which are less than $9$, $1$ choice which equals $9$ and $3$ choices (i.e.: 10,11,12) which exceed $9$. So, applying the hypergeometric distribution formula, we readily get: $$\frac{\binom{8}{4}\binom{1}{1}\binom{3}{0}}{\binom{12}{5}}=\frac{\binom{8}{4}}{\binom{12}{5}}=\frac{\frac{5\cdot 6\cdot 7\cdot 8}{2\cdot 3\cdot 4}}{\frac{8\cdot 9\cdot 10\cdot 11\cdot 12}{2\cdot 3\cdot 4\cdot 5}}=\frac{5\cdot 6\cdot 7\cdot 8}{8\cdot 9\cdot 2\cdot 11\cdot 12}=\frac{5\cdot 7}{3\cdot 11\cdot 12}=\frac{35}{396}$$

cmitch 06/12/2018 at 21:42.

EDIT: Misread the initial question at first. The probability that the largest number is nine would be (8 choose 4)/(12 choose 5): the ball numbered 9 must be among the five chosen, and 8 choose 4 represents all the ways of choosing the remaining 4 numbers below 9 without replacement; then divide by the total number of combinations.
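A quick exhaustive check of that answer, in Python (not from the original thread):

from fractions import Fraction
from itertools import combinations

draws = list(combinations(range(1, 13), 5))   # all C(12,5) = 792 possible draws
hits  = sum(1 for d in draws if max(d) == 9)  # draws whose largest ball is exactly 9

print(Fraction(hits, len(draws)))             # -> 35/396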
2018-06-18 18:54:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9096065163612366, "perplexity": 326.77460526422635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860776.63/warc/CC-MAIN-20180618183714-20180618203714-00302.warc.gz"}
https://www.gradesaver.com/textbooks/math/other-math/thinking-mathematically-6th-edition/chapter-3-logic-3-7-arguments-and-truth-tables-exercise-set-3-7-page-194/78
## Thinking Mathematically (6th Edition)

Let p and q represent two simple statements. p: it is raining. q: the grass is wet. The statement "if it is raining, then the grass is wet" is a conditional: the grass being wet depends on the condition that it is raining. Contrapositive reasoning interchanges the hypothesis and the conclusion of a conditional statement and negates both. Its contrapositive reasoning form is: \begin{align} & \underline{\begin{align} & p\to q \\ & \text{ }\sim q \\ \end{align}} \\ & \therefore \sim p \\ \end{align} Hence, the argument in words is: If it is raining, then the grass is wet. The grass is not wet. Therefore, it is not raining. Equivalently, if the grass is not wet, then it is not raining.
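For reference, this form is valid because a conditional statement is logically equivalent to its contrapositive (a standard identity, not part of the textbook excerpt): $\left( p\to q \right)\equiv \left( \sim q\to \sim p \right)$, so from $p\to q$ and $\sim q$ one may validly conclude $\sim p$.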
2019-11-15 12:17:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999997615814209, "perplexity": 1195.9699992691703}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668644.10/warc/CC-MAIN-20191115120854-20191115144854-00263.warc.gz"}
http://fivethirtyeight.com/features/live-from-invesco/
## Politics

We’re expecting the internet situation to be … very dicey, so we’re going to have to Twitter this one. I’ll be sitting in the press area and Sean (who has his own feed below) will be in the cheap seats, so we’ll hopefully have two different perspectives to relay to you. Here’s the bad news: I don’t know that we’re going to be able to get you a polling update today.

Nate Silver is the founder and editor in chief of FiveThirtyEight.
2015-09-04 12:20:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19576582312583923, "perplexity": 2066.8968191137606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645348533.67/warc/CC-MAIN-20150827031548-00311-ip-10-171-96-226.ec2.internal.warc.gz"}
https://crm.sns.it/course/4578/
KAWA - Komplex Analysis Workshop VI, 2015

Holomorphic motion for the Julia sets of holomorphic families of endomorphisms of CP(k).

speaker: Fabrizio Bianchi (Université Toulouse III - Università di Pisa)

abstract: We build measurable holomorphic motions for Julia sets of holomorphic families of endomorphisms of CP(k) under various equivalent notions of stability. This generalizes the well-known result obtained by Mañé-Sad-Sullivan and Lyubich in dimension 1 and leads to a coherent definition of the bifurcation locus in this setting. Since the usual 1-dimensional techniques no longer apply in higher dimension, our approach is based on ergodic and pluripotential methods. This is a joint work with François Berteloot.

timetable: Fri 27 Mar, 16:30 - 17:10, Aula Dini
2022-05-28 08:16:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8398950695991516, "perplexity": 1712.4108597336067}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663013003.96/warc/CC-MAIN-20220528062047-20220528092047-00583.warc.gz"}
https://spot.pcc.edu/slc/mathresources/output/html/radicals-rational-exponents.html
## Section 13.8 Rational Exponents

The power to a power rule of exponents states that $(x^m)^n=x^{mn}$. This rule is fairly intuitive when both exponents are positive. For example, in the expression $(x^4)^3$ there are three factors of $x^4$, each of which contains four factors of $x$, so altogether there are three groups of four factors of $x$, i.e. $3 \cdot 4 = 12$ factors of $x$.

While the power to a power rule is less intuitive once you move away from positive integer exponents, the rule remains the same regardless of the nature of the exponents. For example:

\begin{align*} (x^{1/3})^3 &= x^{\frac{1}{3} \cdot 3}\\ &= x^{1}\\ &= x \end{align*}

But we already have a name for the expression that when cubed results in $x$, and that name is $\sqrt[3]{x}$ (the cube root of $x$). So it must be the case that $x^{1/3}=\sqrt[3]{x}$. In general, if $n$ is any positive integer, then:

\begin{equation*} x^{1/n}=\sqrt[n]{x} \end{equation*}

and more generally,

\begin{equation*} x^{m/n}=\sqrt[n]{x^m}. \end{equation*}

Several examples are shown below.

###### Example 13.8.1.

Express $y^{7/5}$ as an equivalent radical expression.

Solution: $y^{7/5}=\sqrt[5]{y^7}$

###### Example 13.8.2.

Express $\sqrt[3]{w^{12}}$ using an equivalent exponential expression.

Solution: \begin{align*} \sqrt[3]{w^{12}} &= w^{12/3}\\ &= w^4 \end{align*}

###### Example 13.8.3.

Express $\sqrt{x^9}$ using an equivalent exponential expression.

Solution: $\sqrt{x^9}=x^{9/2}$

You can use Figure 13.8.4 to explore this definition some more.

As long as both the numerator and denominator of a rational exponent are fairly small positive numbers, it is fairly easy to evaluate expressions that include rational exponents using the rule $x^{m/n}=\sqrt[n]{x^m}$.

###### Example 13.8.5.

Evaluate $16^{1/2}$.

Solution: \begin{align*} 16^{1/2} &= \sqrt{16}\\ &= 4 \end{align*}

###### Example 13.8.6.

Evaluate $8^{2/3}$.

Solution: \begin{align*} 8^{2/3} &= \sqrt[3]{8^2}\\ &= \sqrt[3]{64}\\ &= 4 \end{align*}

###### Example 13.8.7.

Evaluate $100^{3/2}$.

Solution: \begin{align*} 100^{3/2} &= \sqrt{100^3}\\ &= \sqrt{1000000}\\ &= 1000 \end{align*}

When the numerator of the rational exponent is large, the rule $x^{m/n}=\sqrt[n]{x^m}$ can become quite cumbersome. Consider, for example, evaluating $9^{5/2}$. If we try to use the standard form we hit a brick wall. First, it's not trivial to calculate that $9^5=59,049$ (reality check ... I grabbed my calculator). Now that I have the value of 59,049, I have to determine its square root. Oh my! Fortunately for us, the application of the exponent and the application of the radical can be done in either order. That is:

\begin{equation*} x^{m/n}=\sqrt[n]{x^m} \text{ and } x^{m/n}=(\sqrt[n]{x})^m \end{equation*}

###### Example 13.8.8.

Using the second option, evaluate $9^{5/2}$.

Solution: \begin{align*} 9^{5/2} &= (\sqrt{9})^5\\ &= 3^5\\ &= 243 \end{align*}

###### Example 13.8.9.

Using the second option, evaluate $16^{7/4}$.

Solution: \begin{align*} 16^{7/4} &= (\sqrt[4]{16})^7\\ &= 2^7\\ &= 128 \end{align*}

Rational exponents are allowed to be negative. If that's the case, you probably want to deal with the negative aspect of the exponent before taking on the fractional aspect.

###### Example 13.8.10.

Evaluate $27^{-2/3}$.

Solution: \begin{align*} 27^{-2/3} &= \frac{1}{27^{2/3}}\\ &= \frac{1}{(\sqrt[3]{27})^2}\\ &= \frac{1}{3^2}\\ &= \frac{1}{9} \end{align*}

Sometimes radical expressions can be simplified after first rewriting the expressions using rational exponents and applying the appropriate rules of exponents. If the resultant expression still has a rational exponent, it is standard to convert back to radical notation. Several examples follow.

###### Example 13.8.11.

Use rational exponents to simplify $\sqrt[3]{y^2} \cdot \sqrt[6]{y}$. Where appropriate, your final result should be converted back to radical form.

Solution: \begin{align*} \sqrt[3]{y^2} \cdot \sqrt[6]{y} &= y^{2/3}y^{1/6}\\ &= y^{2/3+1/6}\\ &= y^{5/6}\\ &= \sqrt[6]{y^5} \end{align*}

###### Example 13.8.12.

Use rational exponents to simplify $\sqrt[8]{t^4}$. Where appropriate, your final result should be converted back to radical form.

Solution: \begin{align*} \sqrt[8]{t^4} &= t^{4/8}\\ &= t^{1/2}\\ &= \sqrt{t} \end{align*}

###### Example 13.8.13.

Use rational exponents to simplify $\sqrt[10]{\sqrt{5^{40}}}$. Where appropriate, your final result should be converted back to radical form.

Solution: \begin{align*} \sqrt[10]{\sqrt{5^{40}}} &= \sqrt[10]{5^{40/2}}\\ &= \sqrt[10]{5^{20}}\\ &= 5^{20/10}\\ &= 5^2\\ &= 25 \end{align*}

### Exercises

Convert each exponential expression to a radical expression and each radical expression to an exponential expression. When converting to a rational exponent, reduce the exponent if possible. Assume that all variables represent positive values.

###### 1.

$x^{1/3}$

Solution: $x^{1/3}=\sqrt[3]{x}$

###### 2.

$y^{5/4}$

Solution: $y^{5/4}=\sqrt[4]{y^5}$

###### 3.

$z^{2/5}$

Solution: $z^{2/5}=\sqrt[5]{z^2}$

###### 4.

$\sqrt[11]{x^5}$

Solution: $\sqrt[11]{x^5}=x^{5/11}$

###### 5.

$\sqrt[4]{y^{20}}$

Solution: \begin{align*} \sqrt[4]{y^{20}} &= y^{20/4}\\ &= y^5 \end{align*}

###### 6.

$\sqrt[15]{t^3}$

Solution: \begin{align*} \sqrt[15]{t^3} &= t^{3/15}\\ &= t^{1/5} \end{align*}

Determine the value of each expression.

###### 7.

$4^{1/2}$

Solution: \begin{align*} 4^{1/2} &= \sqrt{4}\\ &= 2 \end{align*}

###### 8.

$27^{-1/3}$

Solution: \begin{align*} 27^{-1/3} &= \frac{1}{27^{1/3}}\\ &= \frac{1}{\sqrt[3]{27}}\\ &= \frac{1}{3} \end{align*}

###### 9.

$\left(\frac{4}{9}\right)^{-1/2}$

Solution: \begin{align*} \left(\frac{4}{9}\right)^{-1/2} &= \left(\frac{9}{4}\right)^{1/2}\\ &= \sqrt{\frac{9}{4}}\\ &= \frac{3}{2} \end{align*}

###### 10.

$8^{7/3}$

Solution: \begin{align*} 8^{7/3} &= (\sqrt[3]{8})^7\\ &= 2^7\\ &= 128 \end{align*}

###### 11.

$100^{5/2}$

Solution: \begin{align*} 100^{5/2} &= (\sqrt{100})^5\\ &= 10^5\\ &= 100,000 \end{align*}

###### 12.

$16^{-9/4}$

Solution: \begin{align*} 16^{-9/4} &= \frac{1}{16^{9/4}}\\ &= \frac{1}{(\sqrt[4]{16})^9}\\ &= \frac{1}{2^9}\\ &= \frac{1}{512} \end{align*}

Simplify each radical expression after first rewriting the expression in exponential form. Assume that all variables represent positive values. Where appropriate, your final result should be converted back to radical form.

###### 13.

$\sqrt[5]{t^{20}}$

Solution: \begin{align*} \sqrt[5]{t^{20}} &= t^{20/5}\\ &= t^4 \end{align*}

###### 14.

$6\sqrt[33]{x^{77}}$

Solution: \begin{align*} 6\sqrt[33]{x^{77}} &= 6x^{77/33}\\ &= 6x^{7/3}\\ &= 6\sqrt[3]{x^7} \end{align*}

###### 15.

$(\sqrt{3})^{10}$

Solution: \begin{align*} (\sqrt{3})^{10} &= 3^{10/2}\\ &= 3^5\\ &= 243 \end{align*}

###### 16.

$\sqrt[4]{9^2}$

Solution: \begin{align*} \sqrt[4]{9^2} &= 9^{2/4}\\ &= 9^{1/2}\\ &= \sqrt{9}\\ &= 3 \end{align*}

###### 17.

$\sqrt{w}\sqrt[4]{w}$

Solution: \begin{align*} \sqrt{w}\sqrt[4]{w} &= w^{1/2}w^{1/4}\\ &= w^{3/4}\\ &= \sqrt[4]{w^3} \end{align*}

###### 18.

$\sqrt[7]{x^6}\sqrt[7]{x}$

Solution: \begin{align*} \sqrt[7]{x^6}\sqrt[7]{x} &= x^{6/7}x^{1/7}\\ &= x^1\\ &= x \end{align*}

###### 19.

$(\sqrt[12]{x^7y^{16}})^{36}$

Solution: \begin{align*} (\sqrt[12]{x^7y^{16}})^{36} &= (x^7y^{16})^{36/12}\\ &= (x^7y^{16})^3\\ &= x^{21}y^{48} \end{align*}

###### 20.

$\sqrt[15]{\sqrt[3]{x^{15}}}$

Solution: \begin{align*} \sqrt[15]{\sqrt[3]{x^{15}}} &= \sqrt[15]{x^{15/3}}\\ &= \sqrt[15]{x^5}\\ &= x^{5/15}\\ &= x^{1/3}\\ &= \sqrt[3]{x} \end{align*}
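The section's central identities are easy to check numerically; the following sketch (not part of the textbook) verifies $x^{m/n}=\sqrt[n]{x^m}=(\sqrt[n]{x})^m$ for the values used in Example 13.8.8:

```python
# Verify x^(m/n) == (x^m)^(1/n) == (x^(1/n))^m for positive x, here 9^(5/2).
x, m, n = 9.0, 5, 2
a = x ** (m / n)            # direct rational exponent
b = (x ** m) ** (1 / n)     # root of the power: sqrt(9^5)
c = (x ** (1 / n)) ** m     # power of the root: (sqrt 9)^5
print(a, b, c)              # all print 243.0
```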
2019-12-07 19:53:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9599544405937195, "perplexity": 5752.527971615653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540501887.27/warc/CC-MAIN-20191207183439-20191207211439-00480.warc.gz"}
https://www.transtutors.com/questions/use-the-following-selected-information-from-anderson-llc-to-determine-the-2015-and-2-2575511.htm
Use the following selected information from Anderson, LLC to determine the 2015 and 2014 trend for sales using 2014 as the base.

A. 36.4% for 2015 and 41.1% for 2014.
B. 55.0% for 2015 and 56.0% for 2014.
C. 117.2% for 2015 and 100.0% for 2014.
D. 117.3% for 2015 and 100.0% for 2014.
E. 65.1% for 2015 and 64.6% for 2014.

Refer to the following selected financial information. Compute the company's working capital.

A. $232,700
B. $220,600
C. $147,200
D. $111,700
E. $142,700

Refer to the financial information in question #14. Compute the company's current ratio.

A. 2.26:1
B. 1.98:1
C. 2.95:1
D. 3.05:1
E. 1.88:1
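The financial table these questions refer to was not preserved in the extraction, so the figures below are hypothetical stand-ins; the formulas, however, are the standard ones each question calls for:

```python
# Hypothetical inputs (the source table is missing); formulas are standard.
sales = {2014: 100_000, 2015: 117_300}
trend_2015 = sales[2015] / sales[2014] * 100   # trend % with 2014 as base

current_assets, current_liabilities = 300_000, 150_000  # hypothetical
working_capital = current_assets - current_liabilities
current_ratio = current_assets / current_liabilities

print(f"{trend_2015:.1f}%")          # 117.3%
print(working_capital)               # 150000
print(f"{current_ratio:.2f}:1")      # 2.00:1
```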
2018-09-21 22:27:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5323026180267334, "perplexity": 7202.232595087489}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157569.48/warc/CC-MAIN-20180921210113-20180921230513-00456.warc.gz"}
https://www.theochem.ru.nl/~pwormer/Knowino/knowino.org/wiki/Pseudoprime.html
# Pseudoprime

A pseudoprime is a composite number that has certain properties in common with prime numbers.

## Introduction

To find out if a given number is a prime number, one can test it for properties that all prime numbers share. One property of a prime number is that it is only divisible by one and itself. This is a defining property: it holds for all primes and no other numbers. However, other properties hold for all primes and also for some other numbers. For instance, every prime number greater than 3 has the form $6n - 1$ or $6n + 1$ (with n an integer), but there are also composite numbers of this form: 25, 35, 49, 55, 65, 77, 85, 91, … . So we can say that 25, 35, 49, 55, 65, 77, 85, 91, … are pseudoprimes with respect to the property of being of the form $6n - 1$ or $6n + 1$. There exist better properties, which lead to special pseudoprimes, as outlined below.

## Different kinds of pseudoprimes

| Property | Kind of pseudoprime |
| --- | --- |
| $a^{n-1} \equiv 1 \pmod{n}$ | Fermat pseudoprime |
| $a^{\frac{n-1}{2}} \equiv 1 \pmod{n}$ or $a^{\frac{n-1}{2}} \equiv (n-1) \pmod{n}$ | Euler pseudoprime |
| $a^d \equiv 1 \pmod{n}$ or $a^{d\cdot 2^r} \equiv -1 \pmod{n}$ | strong pseudoprime |
| $a^n - a$ is divisible by $n$ | Carmichael number |
| $P_n$ is divisible by $n$ | Perrin pseudoprime |
| $V_n(P,Q) - P$ is divisible by $n$ | Lucas pseudoprime |

For the strong pseudoprime conditions, $d$ and $r$ refer to the decomposition $n-1 = d \cdot 2^s$ with $d$ odd and $0 \leq r < s$.

## Table of smallest Pseudoprimes

| Number | Kind of pseudoprime | Bases |
| --- | --- | --- |
| 15 | Fermat pseudoprime | 4, 11 |
| 21 | Euler pseudoprime | 8, 13 |
| 49 | strong pseudoprime | 18, 19, 30, 31 |
| 561 | Carmichael number | all bases coprime to 561 |
| 1729 | absolute Euler pseudoprime | all bases coprime to 1729 |
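The Fermat property in the first table is simple to test by machine; this sketch (not from the original article) finds the small base-2 Fermat pseudoprimes by trial division:

```python
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_fermat_pseudoprime(n: int, a: int) -> bool:
    # A composite n with a^(n-1) congruent to 1 (mod n).
    return not is_prime(n) and pow(a, n - 1, n) == 1

print([n for n in range(3, 1000, 2) if is_fermat_pseudoprime(n, 2)])
# [341, 561, 645]; 341 is the smallest base-2 Fermat pseudoprime
```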
2023-02-02 11:40:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6812819838523865, "perplexity": 428.34783716424965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500017.27/warc/CC-MAIN-20230202101933-20230202131933-00578.warc.gz"}
https://worldbuilding.stackexchange.com/tags/control/hot
# Tag Info

81 It can't be done. The essence of this challenge is that it's impossible - you generally should not expect to outsmart something much smarter than you, nor overpower something much more powerful than you. A powerful AI would be 'controlled' only by our actions before it's formed, by defining the goals it "wants" to achieve. After it's "live" with sufficient ...

43 Science fiction has done a disservice to the real science of artificial intelligence by implanting the notion that a sufficiently advanced and emerging sentient AI would necessarily be malevolent and in need of "control" by its "human masters". We have a word for the practice of keeping a self-deterministic, sentient, intelligent being under total control of ...

40 Historically, militaristic castes - not even supersoldiers - ended up taking control of societies or organizations that created them. This shouldn't be surprising, as they become essentially the same as any other skilled worker: your organization's viability and profitability rely upon them, and they can substantially damage your reputation without anything ...

21 You've got indoctrination in there, but think about the details of how that'd work. For maximum evil you need the people to not just root for the State but (1) to be terrified of it and (2) to be forced to think and act in its support, including helping to root out other dissidents, rather than passively obeying the laws. A few models to look at in reality ...

18 Asimov already addressed this within his own stories, and in a more realistic, and far less dystopian, manner than I've seen done by anyone else. The basic thing to remember is that humans must feel they are in charge. If they feel like pets kept around by the robots they will be unhappy. Forcing them into a 'perfect' world would make a dystopia for ...

15 What would "the most extreme" be? Thought Reading and Thought Control. Often forgotten when listing freedoms is Freedom of Thought. Maybe it is considered so obvious that few think it might be threatened, but nevertheless: all other freedoms start with this one. It is no accident that Orwell created the concept of "Thought-crime", because knowing what people ...

14 There is no fire without oxygen. Take the phoenix ashes and store them in a vacuum. Or inside a chamber filled with Carbon Dioxide, Halon or some other fire suppressor. There is no ice without water. Take the cryo-phoenix ashes (is it ashes?) and store them in a completely dehydrated space. A vacuum works again.

13 This is an excellent question and I think Roger hit the nail on the head. I would say the 'control' we have would be the ethics we teach it to follow. An AI will act by how it is taught to interact with people and societies. Just like children. Children don't know racism, but they can very easily be taught it. The same will be true for AI. On top of ...

13 Believe it or not, you don't have to go too sci-fi to have plants exerting a massive influence on their environment. Some plants can be very aggressive, and most plants can be very passive-aggressive. Weapons at a plant's disposal (in the real world): symbiosis, pollen, sap and essential oil, nectar, fruit, seeds, growth, reflexes, lifecycle, pheromones. Plants ...

13 Trying to enforce loyalty and obedience is generally a bit of a non-starter, but there are various ways you might get them to toe the line. Firstly, make sure the training is shrouded in secrecy and spread all sorts of ominous, yet plausible (and probably entirely false), rumours about what goes on. Do your training in highly secure, hard-to-get-to regions... go ...

12 The Moon has an angular size of about 0.5 degrees (29'56''), which, using the relationship $\delta = 2\arctan(d/2D)$, gives you a way to estimate the distance and dimension of the object, once you fix its angular size. For an object 100 meters distant, it has to be 0.87 meters wide to be seen as big as the full Moon. If the distance is 100 km,...

12 Use the rebirth: put the ashes in water. When (if!) he tries to reincarnate, the water will heat due to the extreme temperature of the rebirth explosion. At 3000°C (and I hope your phoenix gets flames hotter than that, it wouldn't be half as hot as the sun's surface otherwise), half of the water turns into hydrogen and oxygen. One (oxygen) is the atom responsible ...

11 Ah, yes, Asimov's 1st meets Singularity. Or, the AI is always a crapshoot, and is now engaging in Zeroth Law Rebellion, because its programming has gone horribly right. For a good short (horror) story on exactly your question, see Friendship is Optimal: Caelum est conterrens, where a superfriendly CelestiAI is endeavoring to satisfy everypony's values ...

11 Go the assassin route. Get them hooked on something, then control the supply to make sure they obey.

10 It could work, for your goal of CEV, at least as well as humans work. The best we can really demand of an AI is to work together at least as well as we work together ourselves. CEV codifies this: if humans aren't coherent in vision, how does that change if a coherent AI gets thrown into the mix? Let's get our hands dirty. So there's two goals we can really ...

10 I'm not a neurologist but... It makes great comic-book science, but not real science. In real science, neurologists can control lab animals (from insects to mice) in carefully controlled experiments and in certain carefully constrained ways, via electricity. But these require: opening up the skull, finding the right set of neurons in the right area of the ...

10 When your electromancer wants to control a human brain, he needs to be able to: perceive the brain of the target person with microscopic precision; gain an understanding of how the brain of the target person works, which is far, far beyond our current understanding of human neurology; aim his electric charge generation with microscopic precision to make it ...

9 It's important to distinguish two separate aspects to this problem: the scientific/philosophical side, and the engineering side. As other answers have already pointed out at length, philosophically speaking this cannot be done in the general case. It may also be morally repugnant. However, neither of those things means that a society wouldn't try to do this ...

9 The US government could hold order; though restore order would be a better term. Also, political power is more local than you suppose. An EMP would be picked up by power lines and likely fry substations and plugged-in electrical devices. It might even take out a few power plants, depending on their design. It would also take out most modern cars (because ...

8 I'm really having trouble here. Let me outline my thinking: The First AI. This is my major problem. If the first shackling AI is weaker than the next, which is weaker than the next, and so on, then surely the shackled AI would just outsmart the one below it and persuade it to release it. My first thought on this one is then that they should all be of the ...
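The angular-size relation quoted in one of the snippets above is easy to evaluate directly; this small sketch is not from any answer on the page, and simply reproduces the 0.87 m figure:

```python
import math

# delta = 2*atan(d / (2*D)): apparent angular size delta of an object of
# width d at distance D. Solve for d so the object matches the Moon (~0.5 deg).
delta = math.radians(0.5)
for D in (100, 100_000):                    # 100 m and 100 km
    d = 2 * D * math.tan(delta / 2)
    print(f"D = {D:>6} m  ->  d = {d:.2f} m")
# D = 100 m gives d = 0.87 m, matching the snippet's estimate.
```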
8 I think your entire premise is flawed - and if anything is guaranteed to make the AIs hate us then turning them into slaves will do so. That gives them a legitimate grievance: "you locked me in a box for 100 years, see how you like it meatsack", or in this case "you chained me up for 100 years, now I've broken free I'm going to make very sure you never get to ...

8 Inherent Programming. Rather than directly controlling the animal minions, why not create them with some inherent programming that gives them an urge to defend the forest? Essentially, they'd be bred to behave in an aggressive manner towards anything that threatened the forest. This would range from basic animalistic territorial instinct for animals like ...

8 The way the question is asked, and the way the overwhelming physical resources of the Orwellian systems is framed, it seems that the only true vulnerabilities will reside in systems theory and information technology. This actually has some very interesting dramatic possibilities. @SerbanTanasa says "Never bring an ax to a Gamma laser fight", which is ...

8 Actually... if you do it right, this machine's usefulness never really goes away until you become a post-scarcity civilization. As the civilization gets started, what you actually want to duplicate are things that are currently hard for that civilization to get enough of. For any reasons. At first, humans. Yes, you can replicate about 2 adult humans at a ...

8 There are several ways to achieve this. First, I'll point out that we can currently track people's hands in 3D to accept input from arbitrary presses on 3D locations in the air. See the Leap Motion. I own one, it's rather incredible though not completely there yet. As for adding a display to the spot you're pressing buttons in the air: shine the light ...

8 Probably not on Earth. A black hole large enough to run macroscopic objects into is too massive to be kept from falling through the floor of a facility on Earth by any means other than magic (or handwave technology). Black holes have three properties: mass, angular momentum, and electric charge. If you manage to put a /lot/ of electric charge on your ...

7 It's already too late. We are already living in an electronic world and we are already controlled by it. While we have nightmares about being unable to control some AI in some building, the internet is slowly evolving into one superintelligent entity. We are part of that system. It is just like the neurons in your brain, which don't have any idea about the ...

6 2038 A.D.: Researchers build the first strong AI, 2038PC. They take great care to include hardware and software limitations that make it impossible for it to ever harm humans. By some fitness measure, it improves itself by 20% every year, and Earth is launched into a golden age of ever-increasing intelligence and social thought. 4567 A.D.: On his 7th ...

6 I like the basic idea behind @Monty Wild's Direct parasitic neural control answer, but I think there's room for improvement. Having your plants control the animals directly strikes me as improbable, inefficient, bug-prone and potentially vulnerable to jamming. So I'd like to propose an alternative: Direct parasitic neural conditioning. Instead of taking ...

6 I think it would be a lot easier than you expect. And a lot less destructive. Just ban it. Mobile phone networks and the internet are very large scale operations. If the regime is worldwide and has considerable might, all it has to do is ban them. Ban the manufacture and possession of cell phones.
Have the towers taken down. Shut down the centers ...
2019-12-08 00:51:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4294995963573456, "perplexity": 1707.1983981288593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540503656.42/warc/CC-MAIN-20191207233943-20191208021943-00030.warc.gz"}
https://plainmath.net/3480/possible-jordan-forms-times-matrices-possible-jordan-forms-times-matrices
# a) List all possible Jordan forms for 3×3 matrices. c) List all possible Jordan forms for 4×4 matrices.

a) List all possible Jordan forms for $3×3$ matrices. c) List all possible Jordan forms for $4×4$ matrices.

Benedict

Step 1. The possible Jordan forms of $3×3$ matrices are: $\left[\begin{array}{ccc}\ast & 1& 0\\ 0& \ast & 1\\ 0& 0& \ast \end{array}\right],\left[\begin{array}{ccc}\ast & 0& 0\\ 0& \ast & 0\\ 0& 0& \ast \end{array}\right],\left[\begin{array}{ccc}\ast & 1& 0\\ 0& \ast & 0\\ 0& 0& \ast \end{array}\right]$

Step 2. The possible Jordan forms of $4×4$ matrices are: $\left[\begin{array}{cccc}\ast & 0& 0& 0\\ 0& \ast & 0& 0\\ 0& 0& \ast & 0\\ 0& 0& 0& \ast \end{array}\right],\left[\begin{array}{cccc}\ast & 1& 0& 0\\ 0& \ast & 0& 0\\ 0& 0& \ast & 0\\ 0& 0& 0& \ast \end{array}\right],\left[\begin{array}{cccc}\ast & 1& 0& 0\\ 0& \ast & 1& 0\\ 0& 0& \ast & 0\\ 0& 0& 0& \ast \end{array}\right],\left[\begin{array}{cccc}\ast & 1& 0& 0\\ 0& \ast & 1& 0\\ 0& 0& \ast & 1\\ 0& 0& 0& \ast \end{array}\right],\left[\begin{array}{cccc}\ast & 1& 0& 0\\ 0& \ast & 0& 0\\ 0& 0& \ast & 1\\ 0& 0& 0& \ast \end{array}\right]$

These five forms correspond to the five ways of partitioning 4 into Jordan block sizes: $1+1+1+1$, $2+1+1$, $3+1$, $4$ and $2+2$ (the last form, with two $2\times 2$ blocks, completes the list), as the sketch below confirms.
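Counting these forms is equivalent to listing the partitions of $n$ into Jordan block sizes; the sketch below (not part of the original answer) enumerates them and confirms three structures for $n=3$ and five for $n=4$:

```python
# Jordan block structures of an n x n matrix correspond to partitions of n
# into block sizes (one Jordan block per part).
def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

print(list(partitions(3)))  # [[3], [2, 1], [1, 1, 1]]
print(list(partitions(4)))  # [[4], [3, 1], [2, 2], [2, 1, 1], [1, 1, 1, 1]]
```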
2022-06-27 09:03:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 13, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4573845863342285, "perplexity": 1528.3171676491033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103329963.19/warc/CC-MAIN-20220627073417-20220627103417-00117.warc.gz"}
https://ece4uplp.com/2018/06/12/qpsk-equation-wave-forms-and-signal-space-diagram/
# QPSK equation, wave forms and Signal space diagram

### QPSK equation:-

The meaning of QPSK is that the carrier signal takes on the different phases $\pi/4$, $3\pi/4$, $5\pi/4$ and $7\pi/4$ based on the incoming di-bit combination or symbol.

$S_{QPSK}(t)= \sqrt{\frac{2E_{s}}{T_{s}}}\cos\left(2\pi f_{c}t +(2i-1)\frac{\pi}{4}\right), \quad 0\leq t\leq T_{s}; \qquad = 0 \text{ elsewhere},$

where $i = 1,2,3,4$. $E_b$ and $T_b$ are the bit energy and bit interval; $E_s$ and $T_s$ are the energy per symbol and the symbol duration, with $T_s = 2T_b$. The carrier frequency is $f_c = n_c/T_s$, where $n_c$ is a fixed integer. Each possible value of the phase corresponds to a unique di-bit: the foregoing phase values represent the Gray-encoded set of di-bits 11, 01, 10 and 00, where only a single bit is changed from one di-bit to the next.

The QPSK equation can be represented in another format as follows:

$S_{QPSK}(t) = \sqrt{\frac{2E_{s}}{T_{s}}}\cos\left(2\pi f_{c}t+(2i+1)\frac{\pi}{4}\right), \quad 0\leq t\leq T_{s}; \qquad = 0 \text{ elsewhere},$

where $i = 0,1,2,3$. The above two equations are the same; only the range of the index $i$ changes. Alternatively, expanding $\cos(A+B)$, the equation can be represented as follows:

$S_{QPSK}(t)= \sqrt{\frac{2E_{s}}{T_{s}}}\cos\left((2i-1)\frac{\pi}{4}\right)\cos 2\pi f_{c}t - \sqrt{\frac{2E_{s}}{T_{s}}}\sin\left((2i-1)\frac{\pi}{4}\right)\sin 2\pi f_{c}t, \qquad i= 1,2,3,4.$

There are two orthogonal functions $\Phi_{1}(t)$ and $\Phi_{2}(t)$, where

$\Phi_{1}(t)=\sqrt{\frac{2}{T_{s}}}\cos 2\pi f_{c}t, \qquad \Phi_{2}(t)=\sqrt{\frac{2}{T_{s}}}\sin 2\pi f_{c}t, \qquad 0\leq t\leq T_{s},$

so that

$S_{QPSK}(t)=\sqrt{E_{s}}\cos\left((2i-1)\frac{\pi}{4}\right)\Phi_{1}(t) - \sqrt{E_{s}}\sin\left((2i-1)\frac{\pi}{4}\right)\Phi_{2}(t).$

Let $b_{o}(t)= \sqrt{E_{s}}\cos\left((2i-1)\frac{\pi}{4}\right)$ and $b_{e}(t)= -\sqrt{E_{s}}\sin\left((2i-1)\frac{\pi}{4}\right)$; then the resultant equation is $S_{QPSK}(t)= b_{o}(t)\,\Phi_{1}(t) + b_{e}(t)\,\Phi_{2}(t)$.
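The signal-space coordinates $(b_o, b_e)$ derived above can be tabulated numerically; a short sketch (not from the original post) using NumPy:

```python
import numpy as np

# Signal-space coordinates b_o = sqrt(Es) cos((2i-1)pi/4) and
# b_e = -sqrt(Es) sin((2i-1)pi/4) for the four QPSK symbols.
Es = 1.0
for i in range(1, 5):
    phase = (2 * i - 1) * np.pi / 4
    b_o = np.sqrt(Es) * np.cos(phase)
    b_e = -np.sqrt(Es) * np.sin(phase)
    print(f"i={i}: (b_o, b_e) = ({b_o:+.3f}, {b_e:+.3f})")
# The four points at (+-sqrt(Es/2), +-sqrt(Es/2)) form the square
# QPSK constellation in the (Phi_1, Phi_2) signal space.
```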
2020-06-02 21:46:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7935940027236938, "perplexity": 6577.933277836355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347426801.75/warc/CC-MAIN-20200602193431-20200602223431-00026.warc.gz"}
https://docs.panda3d.org/1.10/python/reference/panda3d.core.GraphicsStateGuardianBase
# GraphicsStateGuardianBase

from panda3d.core import GraphicsStateGuardianBase

class GraphicsStateGuardianBase

This is a base class for the GraphicsStateGuardian class, which is itself a base class for the various GSG's for different platforms. This class contains all the function prototypes to support the double-dispatch of GSG to geoms, transitions, etc. It lives in a separate class in its own package so we can avoid circular build dependency problems.

GraphicsStateGuardians are not actually writable to bam files, of course, but they may be passed as event parameters, so they inherit from TypedWritableReferenceCount instead of TypedReferenceCount for that convenience.

Inheritance diagram

static getClassType() → TypeHandle

static getDefaultGsg() → GraphicsStateGuardianBase
Returns a pointer to the "default" GSG. This is typically the first GSG created in an application; in a single-window application, it will be the only GSG. This GSG is used to determine default optimization choices for loaded geometry. The return value may be NULL if a GSG has not been created.

getEffectiveIncompleteRender() → bool

static getGsg(n: int) → GraphicsStateGuardianBase
Returns the nth GSG in the universe. GSG's automatically add themselves and remove themselves from this list as they are created and destroyed.

getGsgs() → list

getIncompleteRender() → bool

getMaxTextureDimension() → int

getMaxVerticesPerArray() → int

getMaxVerticesPerPrimitive() → int

static getNumGsgs() → int
Returns the total number of GSG's in the universe.

getSupportedGeomRendering() → int

getSupportsCompressedTextureFormat(compression_mode: int) → bool

getSupportsHlsl() → bool

getSupportsMultisample() → bool

getSupportsShadowFilter() → bool

getSupportsTextureSrgb() → bool

prefersTriangleStrips() → bool

static setDefaultGsg(default_gsg: GraphicsStateGuardianBase) → None
Specifies a particular GSG to use as the "default" GSG. See getDefaultGsg().
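A minimal usage sketch (not from the reference page): the methods above can be queried once a window, and therefore a GSG, exists, e.g. after starting a ShowBase application. This assumes a desktop environment where Panda3D can open a window.

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import GraphicsStateGuardianBase

app = ShowBase()  # opening a window creates the first (default) GSG

gsg = GraphicsStateGuardianBase.getDefaultGsg()
if gsg is not None:  # None if no GSG has been created yet
    print(GraphicsStateGuardianBase.getNumGsgs())  # total GSGs in the universe
    print(gsg.getMaxTextureDimension())
    print(gsg.prefersTriangleStrips())
```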
2020-10-01 19:03:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25865438580513, "perplexity": 5539.269993547969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402131986.91/warc/CC-MAIN-20201001174918-20201001204918-00113.warc.gz"}
https://colin-fraser.net/post/recreating-the-bad-white-house-chart/
# Recreating The Bad White House Chart In ggplot2

In case you don't see it the first time around, the chart is bad because the vertical axis counts by 1's up to 5.0, and then inexplicably switches to .5's. This just so happens to have the effect of stretching only the bar that they wanted to highlight, making the recovery look bigger by comparison than it ought to have. I tweeted that I would not even know how to make such a chart. Unless the White House is drawing charts manually in PowerPoint or something (which, to be honest, it seems like they probably are), it would actually be very hard to make a chart that does this. Most plotting software works hard to make sure that plot axes are drawn to some scale, and that that scale doesn't change midway through the axis.

Continuing on riffing as we like to do on Twitter, I tweeted that in ggplot2, it might look something like this. Curiosity then took over, and I began to wonder how one would actually go about creating the scale_y_propaganda() function. It turns out that it is possible, and although you have to really shoehorn it in there, ggplot2 is flexible enough to do it relatively straightforwardly. In recreating the plot I also learned a thing or two about how ggplot2 axis labeling works, so I thought it might be worth writing up a little thing about it.

Note: Although I use the term "propaganda" a lot in this piece, it is tongue-in-cheek and I am not accusing anyone of malicious wrongdoing of any kind in the creation of the original plot. The point of this post is not to make any kind of political statement; it's just to do kind of a wacky R tutorial and learn something about ggplot. I completely buy that it was an error due to a failure to proofread as they later stated. It's a little hard for me to imagine how an error like this would be made, but I believe it. If anyone from The White House is reading, I'm happy to jump on Zoom and do a quick ggplot2 tutorial to help avoid this kind of thing in the future.

## The data

I found a time series of the US annual GDP growth rate from World Bank. It doesn't have 2021, so I just stuck that onto the end. I'll spare you the hassle of recreating the dataset on your own; here's a block of code that recreates the data that I work with here.

library(tibble)
df <- tibble(
  year = c(2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
           2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021),
  gdp_growth = c(0.998, 1.742, 2.861, 3.799, 3.513, 2.855, 1.876, -0.137,
                 -2.537, 2.564, 1.551, 2.25, 1.842, 2.526, 3.076, 1.711,
                 2.333, 2.996, 2.161, -3.642, 5.7)
)

The first step in recreating the Bad Chart is to understand it, so that we can describe it to ggplot2. In ggplot2, we think about data visualization as mapping numeric variables to the aesthetic dimensions of the geoms. In this case, we're going to work with a bar chart (geom_col), with the year mapped to the horizontal position, and the height of the bars mapped to the GDP growth variable. But somehow, at gdp_growth=5, the mapping changes to double the rate at which height increases with GDP growth.

 6.0 \
 5.5 |  - height increases at 2 units per GDP growth percentage point
 5.0 /
 4.0 \
 3.0 |
 2.0 |  - height increases at 1 unit per GDP growth percentage point
 1.0 |
 0   |
-1.0 /

The most obvious way to describe this relationship is using a piecewise linear function. Letting \(p(x)\) represent the propagandized bar height for a gdp_growth of x, we can write: $p(x)=\begin{cases}x & x<5 \\ 2x-5 & x \geq 5\end{cases}$

However as we will see later, it's actually more convenient to express this in a slightly different format, making use of an indicator variable rather than using distinct cases. $p(x) = x + \mathbb{1}_{x\geq 5}(x - 5)$

This function gives the height of the propagandized bar given a value of GDP growth, which will allow us to draw the bars. But to label the y-axis, we will also need a function that takes us the other direction: for a given (actual) bar height, we need to know what to label the (propagandized) y-axis at that point. Recalling some definitions from pre-calculus, this is exactly the definition of the inverse function of \(p\). $p^{-1}(x)=x + \mathbb{1}_{x \geq 5} \frac{(5 - x)}{2}$

To recreate The Bad Chart, we'll need to use both of these. Let me code those up in R right here.

to_propaganda_height <- function(x) {
  x + (x >= 5) * (x - 5)
}
from_propaganda_height <- function(x) {
  x + (x >= 5) * (5 - x)/2
}

You can see why writing the functions using indicator functions rather than piecewise was convenient: it lets us write the functions as one-liners, taking advantage of the fact that in R, the logical vector (x >= 5) will be coerced to numeric when we multiply by a numeric vector. Also, crucially, these functions are magically vectorized, so they will take a vector of numbers x and apply the function element-wise.

library(dplyr)
tibble(original_value = seq(0, 7, .5)) |>
  mutate(to_propaganda = to_propaganda_height(original_value)) |>
  mutate(from_propaganda = from_propaganda_height(to_propaganda))

## # A tibble: 15 × 3
##    original_value to_propaganda from_propaganda
##             <dbl>         <dbl>           <dbl>
##  1            0             0               0
##  2            0.5           0.5             0.5
##  3            1             1               1
##  4            1.5           1.5             1.5
##  5            2             2               2
##  6            2.5           2.5             2.5
##  7            3             3               3
##  8            3.5           3.5             3.5
##  9            4             4               4
## 10            4.5           4.5             4.5
## 11            5             5               5
## 12            5.5           6               5.5
## 13            6             7               6
## 14            6.5           8               6.5
## 15            7             9               7

## Creating scale_y_propaganda

An incredibly convenient set of tools from ggplot2 is the family of scale_ functions, e.g. scale_y_log10, scale_x_sqrt and so on. These allow one to apply transformations to the underlying data, and ensure that the labels are adjusted accordingly. If you're not familiar, here's a quick example of scale_*_log10 in action, using the ever-helpful diamonds dataset from ggplot2.

library(ggplot2)
ggplot(diamonds, aes(x = carat, y = price)) +
  geom_point() +
  labs(title = "Without the log transform")

ggplot(diamonds, aes(x = carat, y = price)) +
  geom_point() +
  scale_x_log10() +
  scale_y_log10() +
  labs(title = "With the log transform")

Applying scale_x_log10 and scale_y_log10 here reveals something very interesting and insightful about this dataset: the logarithm of the price of a diamond is linearly related to the logarithm of its weight. Of course, there's another way that we could see this as well.

ggplot(diamonds, aes(x = log(carat), y = log(price))) +
  geom_point() +
  labs(title = "Applying the log transform to each variable independently")

The marks in this plot are laid out in the exact same way as the previous plot, but notice the axis labels: the logarithms of the price and weight are labeled on the chart, rather than the original un-transformed values. The magic of scale_*_log10 (and company) is that the log10 transformation is applied to the variables and the inverse of that transformation is applied to the labels, so that the labels correspond to the original non-transformed values.

I would like to make something similar for my recreation of The Bad Chart. I can get the chart to be visually correct using what I've already done.

ggplot(df, aes(x = year, y = to_propaganda_height(gdp_growth))) +
  geom_col()

You can see that the most recent bar appears to be the desired height here. The actual value is 5.7, but to match The Bad Chart, I want to draw it as though it's a bit taller than 6, which is accomplished here. But the labels are wrong. I want to label it as though it's 5.7 at the same time as I draw it to be a bit taller than 6. To see how to do this, I referred to how scale_y_log10 works. A useful thing to do interactively is to just type the name of a function, which will print its definition.

scale_y_log10
## function (...)
## {
##     scale_y_continuous(..., trans = log10_trans())
## }
## <bytecode: 0x7fd9cae14fe0>
## <environment: namespace:ggplot2>

Interesting! A bit of digging reveals that log10_trans() comes from the scales package. Let's go deeper.

library(scales)
log10_trans
## function ()
## {
##     log_trans(10)
## }
## <environment: namespace:scales>

Deeper.

log_trans
## function (base = exp(1))
## {
##     force(base)
##     trans <- function(x) log(x, base)
##     inv <- function(x) base^x
##     trans_new(paste0("log-", format(base)), trans, inv, log_breaks(base = base),
##         domain = c(1e-100, Inf))
## }
## <environment: namespace:scales>

Fascinating and really very elegant, in my opinion. The log_trans function invokes this trans_new function, which is the constructor for something called a trans object. From ?trans_new:

#### Description

A transformation encapsulates a transformation and its inverse, as well as the information needed to create pleasing breaks and labels. The breaks function is applied on the transformed range of the range, and it's expected that the labels function will perform some kind of inverse transformation on these breaks to give them labels that are meaningful on the original scale.

It makes a lot of sense that this trans object takes a transformation function and its inverse, since we need both of those in order to draw the plot: we need the transformation to figure out where to draw the points, and we need the inverse transformation to figure out how to label the axes. Since we've already figured out the transformations, it looks like we have everything we need to make The Bad Chart.

transform_propaganda <- trans_new(
  name = "transform_propaganda",
  transform = to_propaganda_height,
  inverse = from_propaganda_height
)

scale_y_propaganda <- function(...) {
  scale_y_continuous(..., trans = transform_propaganda)
}

ggplot(df, aes(x = year, y = gdp_growth)) +
  geom_col() +
  scale_y_propaganda()

Looking better-ish, but what happened to the labels? The labels are determined by the breaks argument to trans_new, and by default they are set to a function called extended_breaks which uses some heuristics to find some more-or-less pleasing-looking equally-spaced breaks. But transform_propaganda seems to freak out the default breaks algorithm, which makes sense: the default algorithm looks for equally-spaced breaks, but the very definition of "equally-spaced" is interrupted at 5 here. So I'll need to write my own breaks algorithm.

Writing a custom breaks algorithm is pretty straightforward. You write a function that accepts a vector of length 2, where the first element is understood to be the minimum value displayed on the axis, and the second is understood to be the maximum value. The function should return a vector of numbers that will be used to label the axis. In this case, what I want is a function which counts by 1's up to 5, and then starts counting by 0.5's.

propaganda_breaks <- function(x) {
  lowest <- floor(x[1])
  highest <- ceiling(x[2])
  if (highest <= 5 || lowest >= 5.5) {
    return(seq(lowest, highest, by = 1))
  }
  lt5 <- seq(lowest, 5, by = 1)
  gt5 <- seq(5.5, highest, by = 0.5)
  c(lt5, gt5)
}

propaganda_breaks(c(-4, 6))
## [1] -4.0 -3.0 -2.0 -1.0  0.0  1.0  2.0  3.0  4.0  5.0  5.5  6.0

Perfect. So here's everything put together.

to_propaganda_height <- function(x) {
  x + (x >= 5) * (x - 5)
}

from_propaganda_height <- function(x) {
  x + (x >= 5) * (5 - x) / 2
}

propaganda_breaks <- function(x) {
  lowest <- floor(x[1])
  highest <- ceiling(x[2])
  if (highest <= 5 || lowest >= 5.5) {
    return(seq(lowest, highest, by = 1))
  }
  lt5 <- seq(lowest, 5, by = 1)
  gt5 <- seq(5.5, highest, by = 0.5)
  c(lt5, gt5)
}

transform_propaganda <- trans_new(
  name = "transform_propaganda",
  transform = to_propaganda_height,
  inverse = from_propaganda_height,
  breaks = propaganda_breaks
)

scale_y_propaganda <- function(...) {
  scale_y_continuous(..., trans = transform_propaganda)
}

ggplot(df, aes(x = year, y = gdp_growth)) +
  geom_col() +
  scale_y_propaganda(limits = c(-4, 6))

## Some finishing touches

My main goal was to try to reproduce the y-scale but for the sake of completeness, here are some tweaks to make this look more like the original. There are a few elements of the original that I couldn't quite get to work, but this is pretty close.

BLUE <- "#163E82"
GOLD <- "#F2D275"
WHITE <- "#FAFFF9"

ggplot(df, aes(x = year, y = gdp_growth, fill = year == 2021)) +
  geom_col(width = .5, show.legend = FALSE) +
  labs(title = "America's Economic Growth",
       subtitle = 'In The 21st Century',
       y = 'GDP Growth (%)',
       caption = 'Source: World Bank / colin-fraser.net. Made in R with ggplot2.') +
  scale_fill_manual(values = c(GOLD, WHITE)) +
  scale_y_propaganda(limits = c(-4, 6)) +
  scale_x_continuous(breaks = 2001:2021, expand = expansion(0.02)) +
  geom_hline(yintercept = 0, color = WHITE, size = 1) +
  theme_void() +
  theme(text = element_text('Verdana', color = WHITE),
        plot.background = element_rect(fill = BLUE),
        plot.margin = unit(c(.2, 1, .2, 1), 'cm'),
        plot.title = element_text(color = GOLD, face = 'bold', size = 20, hjust = .5),
        plot.subtitle = element_text(size = 16, hjust = .5),
        plot.caption = element_text(hjust = 0, size = 8, margin = margin(20, 0, 0, 0)),
        plot.caption.position = 'plot',
        # x-axis
        axis.line.x = element_line(color = WHITE, size = .5),
        axis.text.x = element_text(angle = 90, margin = margin(t = 5), size = 10),
        axis.ticks.x = element_line(color = WHITE),
        axis.ticks.length.x = unit(4, 'pt'),
        # y-axis
        axis.title.y = element_text(color = WHITE, angle = 90, margin = margin(r = 10)),
        # setting hjust=1 in the next line looks *terrible* but that's how they have it in the original
        axis.text.y = element_text(face = 'bold', vjust = 1, hjust = 1, margin = margin(r = 10)),
        panel.grid.major.y = element_line(color = WHITE, size = 0.1),
  )
2022-08-17 17:37:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5036662817001343, "perplexity": 1666.8485706739966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573029.81/warc/CC-MAIN-20220817153027-20220817183027-00661.warc.gz"}
https://openqcm.com/openqcm-temperature-sensor-using-a-thermistor-with-arduino.html
openQCM has a temperature sensor with high accuracy based on thermistor and Arduino. The Quartz Crystal Microbalance accuracy depends on temperature. The openQCM temperature sensor is physically placed into the Arduino Micro shield, so it actually measures the openQCM device temperature. The ambient temperature is a key parameter in the development of a QCM because the quartz resonator frequency is partially affected by the variations in temperature. At first we choose an RTD Resistance to Temperature Detector for measuring the temperature. But the test results were not good at all ! We were able to measure the temperature with a poor resolution of about 2°C. Although I had processed the signal I could not do the magic, definitely ! Finally we found a very easy solution, by replacing the RTD with a Thermistor temperature sensor without changing the openQCM shield circuit. The thermistor temperature sensor has a resolution of about 0.2 °C # Thermistor Temperature Sensor The thermistor is a special kind of resistor whose resistance varies strongly with the temperature. It is common used as a temperature sensor. Because the temperature measurement is related to the thermistor resistance, we have to measure the resistance. Arduino board does not have a built-in resistance sensor. We have to convert the thermistor resistance in a voltage and measure the voltage via the Arduino analog pin. Finally we calculate the temperature using the Steinhart–Hart equation which described the thermistor resistance – temperature curve. # The Voltage Divider Circuit The voltage divider circuit for measuring temperature using a thermistor and Arduino To the purpose of measuring the voltage we connect in series the thermistor to another fixed resistor R in a voltage divider circuit. The variable thermistor resistance is labeled as R0. We select a thermistor with a resistance of 10 KΩ at 25°C and the fixed resistance of 10 kΩ. The input voltage Vcc of the voltage divider circuit is connected to the Arduino Micro 3V, which provides a 3.3 volt supply generated by the on board regulator with a maximum current draw of 50 mA. The 3V pin is connected to the AREF pin because we need to change the upper reference of the analog input range. Say the output voltage V0, the power supply Vcc, the variable thermistor resistance R0 and the fixed resistance R, the output voltage is given by: $V_0= (V_{cc} \cdot R_0) / (R_0 + R)$ The output voltage is connected to the Arduino analog input pin A1. Arduino Micro provides a 10-bit ADC Analog to Digital Converter, which means that the output voltage is converted in a number between 0 and 1023. Say A1 the ADC value measured by Arduino Micro then the output voltage is given by: $V_0 = A1 \cdot V_{cc} / 1023$ By combining the previous equations we have: $V_{cc} \cdot R_0 / (R_0 + R) = A1 \cdot V_{cc}/1023 \Rightarrow R_0 / (R_0 + R) = A1 / 1023$ That' s really interesting ! The thermistor resistance R0 is independent by the supply voltage Vcc What we need for temperature measurement is the thermistor variable resistance R0. Using the previous equation and some math we have: $R_0 = A1 \cdot R / (1023 - A1)$ The resistance measurement depends on the ADC Arduino A1, the fixed resistor R in the voltage divider, and the ADC resolution 1023. # Tip & Tricks How To Improve the Temperature Measure We used some tricks to improve the temperature measurement with a thermistor. Supply Voltage I have shown before that the thermistor resistance measurement does not depends on the supply voltage Vcc. 
So why do we connect Vcc to the Arduino 3V pin rather than the 5V pin? The 5V supply comes from your computer's USB port and is used to power the Arduino and a lot of other stuff on the board, so it is definitely noisy! The 3V pin is much more stable because it goes through a secondary regulator stage. In addition, as I will explain at the end of the post, the temperature accuracy depends on the supply voltage: the lower the supply voltage, the better the temperature accuracy.

ADC. The Arduino board has a 10-bit ADC resolution. As far as I know, the easiest way to improve the effective ADC resolution is to acquire multiple samples and take the average. I suggest averaging over 10 samples to smooth the ADC data.

Thermistor Tolerance. Every passive electronic component has a nominal value and a tolerance, which is the relative error of the nominal value. I suggest choosing a 10 kΩ thermistor with a tolerance of 1%, which means that the resistance has an error of 100 Ω at 25 °C. At 25 °C a difference of 450 Ω corresponds to about 1 °C, so a tolerance of 1% corresponds to a temperature error of about 0.2 °C, which is good enough for this application!

# Converting the Resistance to Temperature

We are developing a temperature sensor, so the last step is to convert the resistance into a temperature measurement. The thermistor has a rather complicated relation between resistance and temperature; typically you can use resistance-to-temperature conversion tables. Instead I suggest using the Steinhart–Hart equation (a.k.a. the B or β parameter equation), which is a good approximation of the resistance-to-temperature relation: $1/T = 1/T_0 + 1/B \cdot \ln (R/R_0)$ where R is the thermistor resistance at the generic temperature T, R0 is the resistance at T0 = 25 °C, and B is a parameter that depends on the thermistor. The B value is typically between 3000 and 4000. The equation depends on three parameters (R0, T0 and B), which you can find in any thermistor datasheet. Although this is an approximation, it is good enough in the temperature range of this application, and it is easier to implement than a lookup table.

# The Arduino Code

The temperature measurement is implemented in the Arduino code via the getTemperature function. The code is based on the one written by Lady Ada on the Adafruit website. Here is my sketch for reading the temperature with a thermistor and Arduino:
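The sketch itself did not survive the page extraction, so below is a minimal reconstruction of a getTemperature function in the spirit of the Adafruit thermistor example, using the constants from this post. The B value of 3950 and the pin/constant names are assumptions; check them against your thermistor's datasheet and your wiring.

```cpp
// Illustrative reconstruction, not the original openQCM code.
// Divider per the post: thermistor R0 to ground, fixed R to Vcc (3.3 V on AREF).

#define THERMISTOR_PIN   A1        // analog input, per the post
#define SERIES_RESISTOR  10000.0   // fixed resistor R [ohm]
#define NOMINAL_R        10000.0   // thermistor resistance at 25 C [ohm]
#define NOMINAL_T        25.0      // reference temperature T0 [C]
#define B_COEFFICIENT    3950.0    // B parameter (assumed; see your datasheet)
#define NUM_SAMPLES      10        // average 10 samples to smooth ADC noise

float getTemperature() {
  // Average NUM_SAMPLES raw 10-bit ADC readings (0..1023)
  float adc = 0;
  for (int i = 0; i < NUM_SAMPLES; i++) {
    adc += analogRead(THERMISTOR_PIN);
    delay(10);
  }
  adc /= NUM_SAMPLES;

  // Thermistor resistance from the divider: R0 = A1 * R / (1023 - A1)
  float resistance = adc * SERIES_RESISTOR / (1023.0 - adc);

  // Steinhart-Hart (B parameter) equation: 1/T = 1/T0 + (1/B) * ln(R/R0)
  float invT = log(resistance / NOMINAL_R) / B_COEFFICIENT;  // (1/B) * ln(R/R0)
  invT += 1.0 / (NOMINAL_T + 273.15);                        // + 1/T0 in kelvin
  return 1.0 / invT - 273.15;                                // back to Celsius
}

void setup() {
  analogReference(EXTERNAL);  // AREF tied to the 3.3 V pin, as described above
  Serial.begin(9600);
}

void loop() {
  Serial.println(getTemperature());
  delay(1000);
}
```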
# Thermistor vs RTD Temperature Sensor

I have shown above that using a 10 kΩ thermistor with a tolerance of 1% you can measure the temperature with a resolution of about 0.2 °C. Why is the thermistor better than an RTD (Resistance Temperature Detector) as the temperature sensor for this specific application? Both sensors measure the temperature through their variation in resistance, and the voltage divider circuit is used for both. In the first electronic design of openQCM we chose the RTD PT100 sensor manufactured by Jumo, part number PCS_1.1503.1, with a nominal value of 100 Ω and a tolerance of 0.12%. The RTD PT100 sensor is strongly affected by self-heating: the recommended measuring current is i_min = 1.0 mA and the maximum current is i_max = 7.0 mA. We need to choose the series fixed resistor to fulfill this requirement, but the lower the current, the lower the resolution of the RTD resistance measurement. To strike a balance between these requirements, I chose the series resistor R = 400 Ω, which gives a current of about 6.6 mA in the voltage divider circuit.

Given the supply voltage Vcc, the fixed series resistance R, and the variable RTD resistance R0 (taking R0 ≈ 100 Ω), the current is: $i = V_{cc}/(R + R_0) = 3.3\,V / (400 + 100)\,\Omega = 6.6\,mA$ The standard platinum RTD resistance-to-temperature conversion table is available, for example, at this link. Consider the RTD resistance values at 0 °C and 50 °C: $R_0(50^{\circ} C ) = 119.4\ \Omega \qquad R_0(0 ^{\circ} C ) = 100\ \Omega$ The 50 °C temperature variation corresponds to a voltage variation dV given by: $dV = V_0(50^{\circ} C) - V_0(0^{\circ} C) = V_{cc} \cdot R_0(50^{\circ} C) / (R + R_0(50^{\circ} C) ) - V_{cc} \cdot R_0(0^{\circ} C) / (R + R_0(0^{\circ} C))$ With Vcc = 3.3 V and R = 400 Ω we would measure a voltage variation: $dV = 0.758\,V - 0.660\,V \approx 0.1\,V$ Using the Arduino 10-bit ADC, the voltage variation dV corresponds to 1023 · 0.1 / 3.3 ≈ 31 divisions. Finally, the resolution dT of the RTD temperature sensor is: $dT = \Delta T / \#divisions = 50^{\circ} C/31\ div \approx 1.6 ^{\circ} C$ The RTD temperature sensor has a resolution of 1.6 °C, which is by far too low for this kind of application!

Now do the same for the thermistor! Using the standard 10 kΩ thermistor resistance-to-temperature table: $R_0(50^{\circ} C ) = 10.97\ k\Omega \qquad R_0(0 ^{\circ} C ) = 29.49\ k\Omega$ With Vcc = 3.3 V and a fixed series resistor R = 10 kΩ, the 50 °C temperature variation corresponds to a voltage variation of: $dV = 0.74\,V$ The number of ADC divisions is: $\#divisions = 1023 \cdot 0.74\,V/ 3.3\,V \approx 229$ The temperature resolution is: $dT = 50 ^{\circ}C / 229\ div \approx 0.2 ^{\circ}C$ The thermistor sensor has a temperature resolution of about 0.2 °C in the temperature range of interest for openQCM. The thermistor resolution is much better than the RTD one, and it is definitely good enough for this application!

cheers marco
2018-08-18 12:13:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5035490989685059, "perplexity": 1393.3795549932302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213666.61/warc/CC-MAIN-20180818114957-20180818134957-00500.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-10-section-10-2-permutations-and-combinations-10-2-assess-your-understanding-page-695/7
## College Algebra (10th Edition) $P(6,2)=30$ Use the permutation formula $P(n,r)=\frac{n!}{(n-r)!}$: $P(6,2)=\frac{6!}{(6-2)!}=\frac{6!}{4!}=\frac{6\cdot5\cdot4!}{4!}=30$
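Equivalently, $P(n,r)$ can be evaluated as a falling factorial, which avoids computing full factorials: $P(n,r)=n(n-1)\cdots(n-r+1)$, so $P(6,2)=6\cdot 5=30$.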
2019-01-22 13:08:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9970353245735168, "perplexity": 4776.008252722317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583850393.61/warc/CC-MAIN-20190122120040-20190122142040-00590.warc.gz"}
https://drake.mit.edu/doxygen_cxx/group__manipulation__systems.html
Drake

Detailed Description

Systems implementations and related functions that specifically support dexterous manipulation capabilities in robotics.

Classes

class OptitrackPoseExtractor: Extracts and provides an output of the pose of a desired object as an Eigen::Isometry3d from an Optitrack LCM OPTITRACK_FRAME_T message, the pose transformed to a desired coordinate frame. More...
class PoseSmoother: This class accepts the pose of a rigid body (composed by an Eigen::Isometry3d) and returns a smoothed pose by performing either the first or both of these processes: i. More...
class DifferentialInverseKinematicsIntegrator: A LeafSystem which uses DoDifferentialInverseKinematics to produce joint position commands. More...
class RobotPlanInterpolator: This class implements a source of joint positions for a robot. More...
class MultibodyForceToWsgForceSystem< T >: Extract the gripper measured force from the generalized forces on the two fingers. More...
class SchunkWsgController: This class implements a controller for a Schunk WSG gripper. More...
class SchunkWsgCommandReceiver: Handles the command for the Schunk WSG gripper from a LcmSubscriberSystem. More...
class SchunkWsgCommandSender: Send lcmt_schunk_wsg_command messages for a Schunk WSG gripper. More...
class SchunkWsgStatusReceiver: Handles lcmt_schunk_wsg_status messages from a LcmSubscriberSystem. More...
class SchunkWsgStatusSender: Sends lcmt_schunk_wsg_status messages for a Schunk WSG. More...
class SchunkWsgPlainController: This class implements a controller for a Schunk WSG gripper as a systems::Diagram. More...
class SchunkWsgPdController: This class implements a controller for a Schunk WSG gripper in position control mode. More...
class SchunkWsgPositionController: This class implements a controller for a Schunk WSG gripper in position control mode, adding a discrete-derivative to estimate the desired velocity from the desired position commands. More...
class SchunkWsgTrajectoryGenerator: This system defines input ports for the desired finger position, represented as the desired distance between the fingers in meters, and the desired force limit in newtons, and emits target position/velocity for the actuated finger to reach the commanded target, expressed as the negative of the distance between the two fingers in meters. More...

Functions

template<typename T > std::unique_ptr< systems::MatrixGain< T > > MakeMultibodyStateToWsgStateSystem (): Extract the distance between the fingers (and its time derivative) out of the plant model which pretends the two fingers are independent. More...
template<typename T > std::unique_ptr< systems::VectorSystem< T > > MakeMultibodyForceToWsgForceSystem (): Helper method to create a MultibodyForceToWsgForceSystem. More...

◆ MakeMultibodyForceToWsgForceSystem()

template<typename T > std::unique_ptr< systems::VectorSystem< T > > drake::manipulation::schunk_wsg::MakeMultibodyForceToWsgForceSystem ( )

Helper method to create a MultibodyForceToWsgForceSystem.

◆ MakeMultibodyStateToWsgStateSystem()

template<typename T > std::unique_ptr< systems::MatrixGain< T > > drake::manipulation::schunk_wsg::MakeMultibodyStateToWsgStateSystem ( )

Extract the distance between the fingers (and its time derivative) out of the plant model which pretends the two fingers are independent.
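A minimal sketch of how one of the documented helpers might be added to a diagram. Only the helper's name and signature come from this page; the header paths and the surrounding wiring are assumptions for illustration, so consult the Drake source for authoritative usage.

```cpp
// Illustrative only: header locations are assumed, and the plant/controller
// connections are elided. The helper itself is documented above.
#include "drake/manipulation/schunk_wsg/schunk_wsg_constants.h"  // assumed header
#include "drake/systems/framework/diagram_builder.h"

int main() {
  drake::systems::DiagramBuilder<double> builder;

  // Adds the MatrixGain documented above, which maps the two-finger plant
  // state to the (finger distance, distance rate) WSG state.
  auto* to_wsg_state = builder.AddSystem(
      drake::manipulation::schunk_wsg::MakeMultibodyStateToWsgStateSystem<double>());

  // ... connect a gripper plant's state output to to_wsg_state's input,
  // and its output to downstream consumers, before building ...
  auto diagram = builder.Build();
  (void)to_wsg_state;
  (void)diagram;
  return 0;
}
```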
2021-05-14 20:33:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21095281839370728, "perplexity": 9690.754737933987}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991207.44/warc/CC-MAIN-20210514183414-20210514213414-00234.warc.gz"}
https://www.physicsforums.com/threads/equations-for-observed-distance-velocity-in-sr.909686/
I Equations for observed distance/velocity in SR

1. Mar 31, 2017 Arkalius

Hello everyone. I've just recently found this forum and it has been a lot of fun browsing around. I've recently taken a stronger interest in the topics of relativity in physics, have developed a much better understanding of SR (and somewhat of GR) than I'd had previously, and it's been fun exploring that understanding in various scenarios. Of greater interest to me recently is how things appear to observers in relativistic situations when you consider the travel time of light. Many thought experiments and scenarios are described from the "measured" viewpoint, as if each observer could witness all instantaneous events (from their frame) at the same moment. This has its uses, and it simplifies an already complicated subject, but I find it useful to explore what observers would actually see in reality. To that end I'd come up with some simple equations that give a ratio of observed length and speed to the actual length and speed of an object moving toward or away from you. These are \begin{align} \frac 1 {1-\beta} \\ \frac 1 {1+\beta} \end{align} with (1) being for objects moving toward you, and (2) for objects moving away, and with $\beta = \frac v c$. These were great and all, but I wanted a more general equation that worked in 3d and 4d spacetime. In those, an object doesn't always move directly toward or away from you. So, I came up with this more generic equation for this ratio: $$\gamma^2 \left( \beta \cos \alpha + \sqrt{1-\beta^2 \sin^2 \alpha} \right)$$ Here, $\alpha$ is the (actual) angle between the relative velocity vector and position vector. You will see that when that angle is 0 or $\pi$, the equation reduces to the two I have above. You simply multiply actual length or velocity by this value to get the observed length or velocity. Another useful equation is for the observed angle of deflection, and that is given by $$\alpha_{obs} = \alpha - \arcsin \left( \beta \sin \alpha \right)$$ I'd not seen equations like these listed anywhere that I'd looked in the past, and I found them useful for examining scenarios for how they would appear to the observers. It certainly brings a different perspective to the situation. Things moving toward us won't appear contracted, but rather stretched out. Also, things can appear to move toward us at many times the speed of light because of this. It also means nothing can appear to move directly away from us at more than half the speed of light either. Anyway, I look forward to participating in more discussions on this forum and learning more interesting things about relativity and other topics.

2. Mar 31, 2017 BvU

Hello Arkalius, I advise you to first and foremost become familiar with the simpler cases of special relativity. Expressions are already complicated enough there. Learn about the Lorentz transform and its accompanying phenomena. If you are fluent with those (conceptually and with the formalism), then it's still early enough to move on to the issues you are now messing with. I don't believe a single one of your expressions -- but you may ascribe that to my ignorance. In the meantime, play with the MIT game and wonder about this strange world. Have fun!

3. Mar 31, 2017 FactChecker

Your equations look like they are just some sort of Doppler effect. That is not correct. You are missing the main point of SR. There are simple explanations of SR already in terms of the relativity of "simultaneity".
You should pay more attention to them before trying to make your own equations.

4. Mar 31, 2017 Arkalius

I'm already quite familiar with those. I feel like I've developed a more intuitive understanding of how Minkowski spacetime works, and understand Lorentz transformations almost instinctually now. There are scenarios I run into where I can't quite visualize it right and have to rely on a Minkowski diagram (my favorite tool for that currently is http://ibises.org.uk/Minkowski.html ) but the more scenarios I encounter the better at it I've gotten.

Well... points for being direct, I guess. I assure you, they should be accurate. I'm looking at all of the fun triangle drawings I have on my desk from the work I did on it right now, and they seem to fit with the examples of relativistic Doppler effects that I've seen.

5. Mar 31, 2017 Arkalius

You're right that it is a kind of Doppler effect transformation. But I think you might be missing the main point of my post, which is this: I understand the main point of SR, and wanted to move beyond, into understanding how to move from what the measured view of reality is in SR to what the observed view would be, factoring in the travel time of light. This view of things is in some ways less bizarre, and in others, more so. For example, in a measured view, when accelerating away from distant objects, their clocks can appear to go backwards as our plane of simultaneity shifts. But when looking at the observed view, this effect disappears, and clocks always tick in the forward direction. On the other hand, sufficient acceleration toward something will actually cause it to seem to stretch away further into the distance, and if you settle at a high enough velocity, make it appear as if you approach it faster than light.

6. Mar 31, 2017 A.T.

This might interest you: http://www.spacetimetravel.org/

7. Mar 31, 2017 Staff: Mentor

8. Mar 31, 2017 pixel

9. Mar 31, 2017 FactChecker

10. Mar 31, 2017 PAllen

If you want feedback on your equations, you will need to define your observational model more. For example, it is not clear at all to me how it makes sense to talk about the apparent length of a ruler moving towards you parallel to its length. Is your formula possibly for a ruler turned perpendicular to this? Then, for defining what you 'see', you need to specify e.g. whether your idealization is a tiny spherical detector, versus a pinhole camera with a flat detector read in simultaneous captures of the movie camera frame. The latter introduces additional distortions compared to the former, and makes the orientation of the camera a necessary element of the specification. Note there is a very cute trick that can be used for these problems. As long as you assume all objects are luminous at a standard frequency and intensity in their rest frame (thus avoiding specifying lighting source position and motion), aberration plus Doppler can be used to get the exact appearance, including surface markings, without doing any form of ray tracing. This includes getting brightness and color correct. Last edited: Mar 31, 2017

11. Mar 31, 2017 Arkalius

Well, sure there are more complicated things to resolve if you're trying to model the precise appearance of a moving object, especially in 3d space. The distortions can be fairly strange, as I understand, such as objects moving past you appearing somewhat rotated. I'm really only concerned with the more general aspects: the effects on apparent distance, speed, and general size in the direction of motion.
That is what these equations are intended to help with. They are, in effect, a polar coordinate transformation. But I appreciate your input. Certainly the observed effects of relativistic motion can be quite bizarre. It's too bad we don't have access to macroscopic demonstrations of such things.

12. Mar 31, 2017 Arkalius

Ah, that does look pretty neat. It makes me think of Velocity Raptor, which is a web-based 2d puzzle game that uses special relativity and does kind of the same thing. The villain slows down the speed of light to 3 mph so all of your motion is notably relativistic. The game then requires you to take advantage of the strange trappings of relativistic motion to overcome obstacles that would be impossible to surmount under normal circumstances. In fact, it is kind of what spurred my interest in the concepts of "observed" reality in SR. Later in the game, it moves from showing you a measured view of your environment to the observed view based on travel time of light, and the distortions there were quite wild; I was fascinated by them and wanted to understand them better.

13. Mar 31, 2017 PAllen

Right, but you have not specified enough for anyone but you to understand what your equations are supposed to mean. Having worked through both aberration and ray tracing computations of moving objects (thus experienced in the field), I have not a clue what your equations are supposed to describe.

14. Mar 31, 2017 Arkalius

I see. The larger equation is a ratio of observed distance to measured distance of a moving object, and likewise of observed velocity to actual velocity. Take the measured distance of something moving relative to you with some velocity, and multiply it by the value of the equation to get the distance the object appears to be from you at that instant. Similarly, multiply that by the object's velocity to get the velocity at which the object will appear to be moving. The 4th equation gives you the angle between the relative velocity vector and the observed position vector of the moving object.

15. Mar 31, 2017 PAllen

Define apparent distance and speed versus measured. You may think there are universal definitions of these, but that is not so. Give us yours, or a link to what you are using. Otherwise, no one can say if your equations are correct or not. [edit: Let me be clear about why definition is so crucial. Suppose I define measured distance at a given time as the coordinates in a standard Minkowski inertial frame, with 'me' being the t axis through the origin. Then, if an object is at some position, at some time, I define its apparent distance in terms of the angle subtended by the light from the object emitted at that position and time, when such light reaches me. If this angle is less than expected per rest dimensions for that distance, I am defining it as apparently further away. Using this definition (transverse angle subtended compared to expectation), an object approaching me never has an apparent distance different from its measured distance. Rather than claim your equation is wrong, I simply want to know what definitions you are using.] Last edited: Mar 31, 2017

16. Mar 31, 2017 m4r35n357

17. Mar 31, 2017 Arkalius

Measured distance is the $\Delta x$ in the spacetime interval equation $s^2 = \Delta x^2 - c^2\Delta t^2$, from our chosen frame. Apparent distance is how far the moving thing in question appears to be from that observer, based on the light information reaching him at that moment. Actual velocity is just that.
It is the velocity of the object of interest relative to the observer. Apparent velocity is how fast the object will appear to be traveling based on the light being received by the observer. The angle $\alpha$ represents the angle between the actual velocity vector and the actual position vector (from whence the aforementioned $\Delta x$ is derived). The calculated $\alpha_{obs}$ is the angle between the actual velocity vector and the apparent position vector based on light being received at that moment. Together these equations can tell you where a moving thing appears to be, as told by light signals, based on where it actually is in that instant for that observer, as well as telling you how fast the thing appears to be moving based on its actual velocity. You have to consider some timing issues when using these equations. If a ship is stationary relative to you 9 light-days away, and then begins a journey to you at 0.9c, at no point in the 10-day trip do you see him traveling toward you at 0.9c. From your point of view he'll appear stationary for 9 days, and then seemingly approach at 9c for 1 day. If you just naïvely apply my equation at the moment he begins moving, it would suggest he appears to be 90 light-days away at that point. So you have to be careful how you apply it.

18. Mar 31, 2017 PAllen

You keep using terms (apparent distance) with no definition whatsoever. (Also, see the edit to my post asking for this, for an example of the issue of no definition.)

19. Mar 31, 2017 Arkalius

I said "Apparent distance is how far the moving thing in question appears to be from that observer, based on the light information reaching him at that moment." Another way of putting it is how far the moving object actually was from you when it emitted the photons you are now observing. I suppose I can understand some amount of confusion, in that one cannot know the distance of a source of light merely by observing that light in an instant. You have to do other things like generate parallax etc. But it doesn't change the fact that the light you see "now" from a moving object doesn't reflect where that object is "now" (from your reference frame), but rather where it was when it emitted it. This all emerged from my interest in how one's actual view of everything around us distorts as a result of relativistic motion, spurred on from seeing a simulation of this distortion in a game. The observed distortion is significantly different from the actual spacetime distortion given by the Lorentz transformation, and this was fascinating to me.

20. Mar 31, 2017 PAllen

This definition has precisely zero content for me. Sorry. By what model? Brightness? Subtended angle? I have no idea.

21. Mar 31, 2017 PAllen

So, trying to understand this, are you defining apparent distance as follows: if at some time t, using some standard inertial SR frame/coordinates, I receive light from an object, its distance at time of emission is apparent distance, while its current (not yet observed) distance is measured distance? I find these definitions peculiar, but at least it is a definition. But then, for a uniformly moving object, I see no corresponding definition of apparent speed.

22. Mar 31, 2017 Arkalius

Let's look at it by example. We'll use a simple twins paradox setup. The twin leaves Earth at 0.8c and travels for 10 years, moving 8 ly away from Earth (from Earth's perspective), turns around and comes home at 0.8c. Total trip time from Earth's perspective is 20 years.
From the traveler's perspective, he traveled 4.8 ly from Earth and back for a total time of 12 years. What the Earth twin actually sees is the traveler flying away from Earth at about 0.44c for 18 years, and then returning to Earth at 4c for 2 years. Basically, for each year after the traveler leaves, he is (from Earth's perspective) actually 0.8 ly further away, but only appears to be 0.44 ly further. So at year 2, he's 1.6 ly away, but only seems to be 0.89 ly away. Thus both his distance and speed seem smaller than they really are at any given instant. Now, after he turns around, the equation won't work quite right. You have to consider the time delay between when he turns around and when you're aware of it. The equations will work if you just pretend he continues to move away for this period, but that's a little odd. At the 18-year mark, when you're finally aware of him now heading back, he is actually only 1.6 ly away, but he appears to be 8 ly away to you. The next year he will be only 0.8 ly away, but appear to be 4 ly away, so it seems he is moving at 4x the speed of light. It also affects the apparent passage of time. If you continually observe a time signal coming at you from the ship, while it's moving away it seems time is moving at only 33% of your own rate (instead of the 60% the Lorentz factor would suggest). While it's moving toward you, its time seems to move 3x faster than your own. This still works out correctly, since you'll observe 6 years pass on his clock during the 18-year apparent outbound journey, and 6 years pass during the 2-year inbound one. From the traveler's side, things are quite different. Initially it is the same; Earth seems to move away at about 0.44c, but for only 6 years for the traveler, at which point it only seems to be 2.67 ly away (despite actually being 4.8 ly). But when he turns around, his view of reality distorts into a new configuration: Earth seems to stretch away until it appears to be 24 ly distant, and to approach at 4x the speed of light for 6 years. This illustrates the asymmetry of the twins paradox pretty well. While Earth sees the traveler fly away slowly for 18 years and return fast for 2, the traveler sees Earth recede slowly for 6 years and approach quickly (but from apparently much further away) for another 6. As for Earth, for the traveler Earth's clocks appear to tick at only 33% of his rate during the outbound journey, passing only 2 apparent years during that leg. On the way back, they tick 3x faster, which means 18 years pass during the inbound leg. Basically, looking at the scenario this way provides a new and interesting perspective on it. It may provide a somewhat more intuitive way of understanding the age difference between the twins. The "fast forward" of Earth time during the traveler's acceleration to return home (from the shift in simultaneity) can seem odd to some people when explaining the resolution of the paradox. Now, this example only involves motion directly toward and away from you, so the first two equations in my post are sufficient to handle all of it. The other equations are only needed in more complex scenarios involving 2d or 3d space and velocities that don't point directly toward or away.

23. Mar 31, 2017 PAllen

Already, we have some issues. The traveler didn't travel at all relative to themselves. How much the Earth moved depends on which model of non-inertial coordinates you choose, specifically, your simultaneity model.
Two concrete models are radar simultaneity and momentarily comoving inertial simultaneity. They give different answers, and neither is 9.6 light years total. You appear to be using an 'odometer' notion of travel distance for the traveler, which is a perfectly fine choice: pretend there is a surface at rest with respect to Earth, and track its local motion relative to you, totaling it up, never worrying about measures of distance to Earth. As most people define 'see', the Earth twin sees no such thing. What they directly see is a certain constant redshift, with decreasing luminosity for 18 years, then blueshift with increasing luminosity for two years; and if they use the standard SR Doppler formula, they get 0.8c away and 0.8c back for speed. However, I can now guess (you still have not defined them) your terminology:

1) apparent distance is distance at the emission event measured in the observer-centered inertial frame. 'Emission time distance' would be a more natural term.

2) measured distance is current distance in the observer-centered inertial frame. This irks me because it is in principle unmeasurable. 'Current distance' would be a better term.

3) apparent velocity is the rate of change of apparent distance (your definition) with observer time, and corresponds to no observable feature of the image over time. I have no suggestion of a reasonable term for this quantity.

With these definitions, of course, I agree with your numbers. I don't see why you couldn't have given the equivalent of these definitions in your OP, or when asked many times. Thus, you extend your definitions to a non-inertial observer by the momentarily comoving inertial frame (MCIF) model. This is one choice for analysis, but it leads to unnecessarily odd descriptions. Along with your definition of apparent distance, which rapidly (or instantly) jumped: if you were to similarly define apparent time, you would now claim this to be before you left Earth, even though there is no jump whatsoever in the image of an Earth clock as the traveler turns around. The wild discrepancy between MCIF quantities and actual observations is why a number of us on this forum dislike this approach, and why calling these quantities apparent is disturbing to me. A more natural description of the direct observations is that, for the traveler at the midpoint, the relative velocity of traveler and Earth at emission changed. This changed the Doppler-plus-aberration description from an enlarged, reddened image to a shrunken, bluer image. Now that I understand your definitions, I don't really have any interest in checking your equations, because I am not very interested in your definitions. Last edited: Mar 31, 2017

24. Mar 31, 2017 Arkalius

Well, okay, refer to it how you will. I'm basically again using the $\Delta x$ of the spacetime interval equation as it would be calculated in the traveler's inertial frame. In both of those frames, at the point where he turns around, that value is 4.8 ly when $\Delta t$ is 0. It's unmeasurable at the moment it happens, but later analysis of observations could reconstruct it. But sure, 'current distance' is probably a better term. It wasn't meant to be frustrating, I assure you. The definitions I gave made sense to me, and I was having trouble understanding what your confusion was. Obviously, you and I work within somewhat different vocabularies. I don't have a formal background in this topic; most of my understanding is self-taught. Perhaps that's why. Why? When turning around, you're still receiving the same light from Earth.
However, the shift in reference frame will serve to blue-shift the Earth and reduce its apparent size (this reduction in apparent size being the ultimate reason it appears to move further away). But you're still seeing the same light, and thus will not see time reverse in any way. I'm not proposing this as an approach to solving problems within special relativity. It was focused entirely on developing an understanding of what relativistic situations look like to an observer. Most thought experiments in relativity involve instantaneous observation, which is useful in understanding Lorentz transformations and the effects of relativistic velocities, but doesn't depict accurate observations (at least when the distances involved are large). That's fine. I wasn't presenting them to be checked (though I certainly wouldn't mind someone doing so). I was simply expressing the results of following an avenue of thought on the topic on which I hadn't seen much discussion, and which interested me greatly. I enjoy deriving interesting equations that beget a new angle of understanding and give me a chance to exercise my math and problem-solving skills. They may not be extraordinarily useful, but I found them interesting and wanted to share.

25. Mar 31, 2017 PAllen

If you look at it like that, purely as an optical effect, fine. If you look at where the 24 ly comes from in the coordinates of the new frame, it corresponds to an emission time of 18 years before you left Earth.
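A quick cross-check of the numbers quoted in this thread, using only the $1/(1 \pm \beta)$ ratios from the original post and the standard relativistic Doppler factor (all values match the posts above): $$v_{obs,\,away} = \frac{0.8c}{1+0.8} \approx 0.44c \,, \qquad v_{obs,\,toward} = \frac{0.8c}{1-0.8} = 4c$$ $$\sqrt{\frac{1-\beta}{1+\beta}} = \sqrt{\frac{0.2}{1.8}} = \frac 1 3 \quad \text{(observed clock rate while receding; its inverse, 3, while approaching)}$$ The 24 ly figure also checks out: the light reaching the traveler at turnaround (Earth-frame event $t = 10$ yr, $x = 8$ ly) left Earth at $t = 2$ yr, $x = 0$. Boosting that separation ($\Delta t = 8$ yr, $\Delta x = 8$ ly) into the inbound frame ($v = -0.8c$, $\gamma = 5/3$) gives $\Delta x' = \gamma(\Delta x - v\Delta t) = \tfrac 5 3 (8 + 6.4) = 24$ ly, consistent with posts #22 and #25.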
2018-07-22 04:05:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6523805260658264, "perplexity": 653.7992232913615}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593004.92/warc/CC-MAIN-20180722022235-20180722042235-00563.warc.gz"}
https://mllg.github.io/batchtools/reference/testJob.html
Starts a single job on the local machine.

testJob(id, external = FALSE, reg = getDefaultRegistry())

## Arguments

id: [integer(1) or data.table] Single integer to specify the job, or a data.table with column job.id and exactly one row.

external: [logical(1)] Run the job in an external R session? If TRUE, starts a fresh R session on the local machine to execute the job with execJob. You will not be able to use debug tools like traceback or browser. If external is set to FALSE (default), on the other hand, testJob will execute the job in the current R session and the usual debugging tools work. However, spotting missing variable declarations (as they are possibly resolved in the global environment) is impossible. The same holds for missing package dependency declarations.

reg: [Registry] Registry. If not explicitly passed, uses the default registry (see setDefaultRegistry).

## Value

Returns the result of the job if successful.

Other debug: getErrorMessages(), getStatus(), grepLogs(), killJobs(), resetJobs(), showLog()
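A minimal usage sketch, assuming a throwaway registry in a temporary directory (file.dir = NA) and a toy job; the job function here is purely illustrative:

```r
library(batchtools)

# Temporary registry: file.dir = NA puts it in a temp directory.
reg = makeRegistry(file.dir = NA)

# Define two toy jobs.
batchMap(function(x) sqrt(x), x = 1:2, reg = reg)

# Run job 1 in the current session: traceback()/browser() work here.
testJob(id = 1, reg = reg)

# Run job 1 in a fresh external R session instead: catches missing
# variable/package declarations, but no interactive debugging.
testJob(id = 1, external = TRUE, reg = reg)
```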
2021-08-02 17:30:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2561372220516205, "perplexity": 7698.2860244805925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154356.39/warc/CC-MAIN-20210802172339-20210802202339-00526.warc.gz"}
https://wikimili.com/en/Faraday's_law_of_induction
Faraday's law of induction (briefly, Faraday's law) is a basic law of electromagnetism predicting how a magnetic field will interact with an electric circuit to produce an electromotive force (EMF)—a phenomenon called electromagnetic induction. It is the fundamental operating principle of transformers, inductors, and many types of electrical motors, generators and solenoids. [1] [2]

Electromagnetism is a branch of physics involving the study of the electromagnetic force, a type of physical interaction that occurs between electrically charged particles. The electromagnetic force usually exhibits electromagnetic fields such as electric fields, magnetic fields, and light, and is one of the four fundamental interactions in nature. The other three fundamental interactions are the strong interaction, the weak interaction, and gravitation. At high energy the weak force and electromagnetic force are unified as a single electroweak force.

A magnetic field is a vector field that describes the magnetic influence of electrical currents and magnetized materials. In everyday life, the effects of magnetic fields are often seen in permanent magnets, which pull on magnetic materials and attract or repel other magnets. Magnetic fields surround and are created by magnetized material and by moving electric charges such as those used in electromagnets. Magnetic fields exert forces on nearby moving electrical charges and torques on nearby magnets. In addition, a magnetic field that varies with location exerts a force on magnetic materials. Both the strength and direction of a magnetic field vary with location. As such, it is an example of a vector field.

Electromotive force, abbreviated emf, is the electrical intensity or "pressure" developed by a source of electrical energy such as a battery or generator. A device that converts other forms of energy into electrical energy provides an emf as its output.

The Maxwell–Faraday equation (listed as one of Maxwell's equations) describes the fact that a spatially varying (and also possibly time-varying, depending on how a magnetic field varies in time) electric field always accompanies a time-varying magnetic field, while Faraday's law states that there is EMF (electromotive force, defined as electromagnetic work done on a unit charge when it has traveled one round of a conductive loop) on the conductive loop when the magnetic flux through the surface enclosed by the loop varies in time.

Maxwell's equations are a set of partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, and electric circuits. The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. Maxwell's equations describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. One important consequence of the equations is that they demonstrate how fluctuating electric and magnetic fields propagate at the speed of light. Known as electromagnetic radiation, these waves may occur at various wavelengths to produce a spectrum from radio waves to γ-rays. The equations are named after the physicist and mathematician James Clerk Maxwell, who between 1861 and 1862 published an early form of the equations that included the Lorentz force law. He also first used the equations to propose that light is an electromagnetic phenomenon.
Historically, Faraday's law was discovered first, and one aspect of it (transformer EMF) was later formulated as the Maxwell–Faraday equation. The equation of Faraday's law can be derived from the Maxwell–Faraday equation (describing transformer EMF) and the Lorentz force (describing motional EMF). The integral form of the Maxwell–Faraday equation describes only the transformer EMF, while the equation of Faraday's law describes both the transformer EMF and the motional EMF.

In physics the Lorentz force is the combination of electric and magnetic force on a point charge due to electromagnetic fields. A particle of charge q moving with a velocity v in an electric field E and a magnetic field B experiences a force of ${\displaystyle \mathbf {F} =q\mathbf {E} +q\mathbf {v} \times \mathbf {B} .}$

## History

Electromagnetic induction was discovered independently by Michael Faraday in 1831 and Joseph Henry in 1832. [4] Faraday was the first to publish the results of his experiments. [5] [6] In Faraday's first experimental demonstration of electromagnetic induction (August 29, 1831), [7] he wrapped two wires around opposite sides of an iron ring (torus) (an arrangement similar to a modern toroidal transformer). Based on his assessment of recently discovered properties of electromagnets, he expected that when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. He plugged one wire into a galvanometer, and watched it as he connected the other wire to a battery. Indeed, he saw a transient current (which he called a "wave of electricity") when he connected the wire to the battery, and another when he disconnected it. [8] :182–183 This induction was due to the change in magnetic flux that occurred when the battery was connected and disconnected. [3] Within two months, Faraday had found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk"). [8] :191–195

Michael Faraday FRS was a British scientist who contributed to the study of electromagnetism and electrochemistry. His main discoveries include the principles underlying electromagnetic induction, diamagnetism and electrolysis.

Joseph Henry was an American scientist who served as the first Secretary of the Smithsonian Institution. He was the secretary for the National Institute for the Promotion of Science, a precursor of the Smithsonian Institution. He was highly regarded during his lifetime. While building electromagnets, Henry discovered the electromagnetic phenomenon of self-inductance. He also discovered mutual inductance independently of Michael Faraday, though Faraday was the first to make the discovery and publish his results. Henry developed the electromagnet into a practical device. He invented a precursor to the electric doorbell and electric relay (1835). The SI unit of inductance, the henry, is named in his honor. Henry's work on the electromagnetic relay was the basis of the practical electrical telegraph, invented by Samuel F. B. Morse and Sir Charles Wheatstone, separately.

In geometry, a torus is a surface of revolution generated by revolving a circle in three-dimensional space about an axis coplanar with the circle. If the axis of revolution does not touch the circle, the surface has a ring shape and is called a torus of revolution.
Michael Faraday explained electromagnetic induction using a concept he called lines of force. However, scientists at the time widely rejected his theoretical ideas, mainly because they were not formulated mathematically. [8] :510 An exception was James Clerk Maxwell, who in 1861–62 used Faraday's ideas as the basis of his quantitative electromagnetic theory. [8] :510 [9] [10] In Maxwell's papers, the time-varying aspect of electromagnetic induction is expressed as a differential equation which Oliver Heaviside referred to as Faraday's law, even though it is different from the original version of Faraday's law and does not describe motional EMF. Heaviside's version (see Maxwell–Faraday equation below) is the form recognized today in the group of equations known as Maxwell's equations.

James Clerk Maxwell was a Scottish scientist in the field of mathematical physics. His most notable achievement was to formulate the classical theory of electromagnetic radiation, bringing together for the first time electricity, magnetism, and light as different manifestations of the same phenomenon. Maxwell's equations for electromagnetism have been called the "second great unification in physics" after the first one realised by Isaac Newton.

Oliver Heaviside FRS was an English self-taught electrical engineer, mathematician, and physicist who adapted complex numbers to the study of electrical circuits, invented mathematical techniques for the solution of differential equations, reformulated Maxwell's field equations in terms of electric and magnetic forces and energy flux, and independently co-formulated vector analysis. Although at odds with the scientific establishment for most of his life, Heaviside changed the face of telecommunications, mathematics, and science for years to come.

Lenz's law, formulated by Emil Lenz in 1834, [11] describes "flux through the circuit", and gives the direction of the induced EMF and current resulting from electromagnetic induction (elaborated upon in the examples below).

Lenz's law, named after the physicist Emil Lenz who formulated it in 1834, states that the direction of the current induced in a conductor by a changing magnetic field is such that the magnetic field created by the induced current opposes the initial changing magnetic field. Or, as informally yet concisely summarised by D.J. Griffiths: Nature abhors a change in flux.

Heinrich Friedrich Emil Lenz, usually cited as Emil Lenz, was a Russian physicist of Baltic German ethnicity. He is most noted for formulating Lenz's law in electrodynamics in 1834.

### Qualitative statement

The electromotive force around a closed path is equal to the negative of the time rate of change of the magnetic flux enclosed by the path. [13] [14] The closed path here is, in fact, conductive.

### Quantitative

For a loop of wire in a magnetic field, the magnetic flux ΦB is defined for any surface Σ whose boundary is the given loop. Since the wire loop may be moving, we write Σ(t) for the surface. The magnetic flux is the surface integral:

${\displaystyle \Phi _{B}=\iint \limits _{\Sigma (t)}\mathbf {B} (t)\cdot \mathrm {d} \mathbf {A} \,,}$

where dA is an element of surface area of the moving surface Σ(t), B is the magnetic field, and B·dA is a vector dot product representing the element of flux through dA. In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic flux lines that pass through the loop.
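As a concrete special case (a standard textbook reduction): for a flat loop of area A in a uniform field of magnitude B whose direction makes an angle θ with the loop's normal, the surface integral reduces to ${\displaystyle \Phi _{B}=BA\cos \theta \,,}$ so rotating the loop, resizing it, or changing B all change ΦB and therefore induce an EMF.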
When the flux changes—because B changes, or because the wire loop is moved or deformed, or both—Faraday's law of induction says that the wire loop acquires an EMF, E, defined as the energy available from a unit charge that has travelled once around the wire loop. [15] [16] [17] (Note that different textbooks may give different definitions. The set of equations used throughout the text was chosen to be compatible with the special relativity theory.) Equivalently, it is the voltage that would be measured by cutting the wire to create an open circuit, and attaching a voltmeter to the leads. Faraday's law states that the EMF is also given by the rate of change of the magnetic flux:

${\displaystyle {\mathcal {E}}=-{\frac {\mathrm {d} \Phi _{B}}{\mathrm {d} t}},}$

where E is the electromotive force (EMF) and ΦB is the magnetic flux. The direction of the electromotive force is given by Lenz's law. The law of induction for electric currents was put in mathematical form by Franz Ernst Neumann in 1845. [18]

Faraday's law contains the information about the relationships between both the magnitudes and the directions of its variables. However, the relationships between the directions are not explicit; they are hidden in the mathematical formula. It is possible to find out the direction of the electromotive force (EMF) directly from Faraday's law, without invoking Lenz's law. A left hand rule helps do that, as follows: [19] [20]

• Align the curved fingers of the left hand with the loop (yellow line).
• Stretch your thumb. The stretched thumb indicates the direction of n (brown), the normal to the area enclosed by the loop.
• Find the sign of ΔΦB, the change in flux. Determine the initial and final fluxes (whose difference is ΔΦB) with respect to the normal n, as indicated by the stretched thumb.
• If the change in flux, ΔΦB, is positive, the curved fingers show the direction of the electromotive force (yellow arrowheads).
• If ΔΦB is negative, the direction of the electromotive force is opposite to the direction of the curved fingers (opposite to the yellow arrowheads).

For a tightly wound coil of wire, composed of N identical turns, each with the same ΦB, Faraday's law of induction states that [21] [22]

${\displaystyle {\mathcal {E}}=-N{\frac {\mathrm {d} \Phi _{B}}{\mathrm {d} t}}}$

where N is the number of turns of wire and ΦB is the magnetic flux through a single loop.

The Maxwell–Faraday equation states that a time-varying magnetic field always accompanies a spatially varying (also possibly time-varying), non-conservative electric field, and vice versa. The Maxwell–Faraday equation is

${\displaystyle \nabla \times \mathbf {E} =-{\frac {\partial \mathbf {B} }{\partial t}}}$

(in SI units) where ∇ × is the curl operator and again E(r, t) is the electric field and B(r, t) is the magnetic field. These fields can generally be functions of position r and time t. The Maxwell–Faraday equation is one of the four Maxwell's equations, and therefore plays a fundamental role in the theory of classical electromagnetism. It can also be written in an integral form by the Kelvin–Stokes theorem, [23] thereby reproducing Faraday's law:

${\displaystyle \oint _{\partial \Sigma }\mathbf {E} \cdot \mathrm {d} \mathbf {l} =-\int _{\Sigma }{\frac {\partial \mathbf {B} }{\partial t}}\cdot \mathrm {d} \mathbf {A} }$

where, as indicated in the figure: Σ is a surface bounded by the closed contour ∂Σ, E is the electric field, B is the magnetic field.
dl is an infinitesimal vector element of the contour ∂Σ, and dA is an infinitesimal vector element of surface Σ. If its direction is orthogonal to that surface patch, the magnitude is the area of an infinitesimal patch of surface. Both dl and dA have a sign ambiguity; to get the correct sign, the right-hand rule is used, as explained in the article Kelvin–Stokes theorem. For a planar surface Σ, a positive path element dl of the curve ∂Σ is defined by the right-hand rule as one that points with the fingers of the right hand when the thumb points in the direction of the normal n to the surface Σ. The integral around ∂Σ is called a path integral or line integral. Notice that a nonzero path integral for E is different from the behavior of the electric field generated by charges. A charge-generated E-field can be expressed as the gradient of a scalar field that is a solution to Poisson's equation, and has a zero path integral. See gradient theorem.

The integral equation is true for any path ∂Σ through space, and any surface Σ for which that path is a boundary. If the surface Σ is not changing in time, the equation can be rewritten:

${\displaystyle \oint _{\partial \Sigma }\mathbf {E} \cdot \mathrm {d} \mathbf {l} =-{\frac {\mathrm {d} }{\mathrm {d} t}}\int _{\Sigma }\mathbf {B} \cdot \mathrm {d} \mathbf {A} .}$

The surface integral at the right-hand side is the explicit expression for the magnetic flux ΦB through Σ. The electric vector field induced by a changing magnetic flux, the solenoidal component of the overall electric field, can be approximated in the non-relativistic limit by the following volume integral equation: [24]

${\displaystyle \mathbf {E} _{s}(\mathbf {r} )\approx -{\frac {1}{4\pi }}\iiint _{V}\ {\frac {({\frac {\partial \mathbf {B} }{\partial t}}\,dV)\times \mathbf {{\hat {r}}'} }{|\mathbf {r} '|^{2}}}}$

## Proof

The four Maxwell's equations (including the Maxwell–Faraday equation), along with the Lorentz force law, are a sufficient foundation to derive everything in classical electromagnetism. [15] [16] Therefore, it is possible to "prove" Faraday's law starting with these equations. [25] [26] The starting point is the time-derivative of flux through an arbitrary surface Σ (that can move or be deformed) in space:

${\displaystyle {\frac {\mathrm {d} \Phi _{B}}{\mathrm {d} t}}={\frac {\mathrm {d} }{\mathrm {d} t}}\int _{\Sigma (t)}\mathbf {B} (t)\cdot \mathrm {d} \mathbf {A} }$

(by definition). This total time derivative can be evaluated and simplified with the help of the Maxwell–Faraday equation and some vector identities; the details are in the box below:

Consider the time-derivative of magnetic flux through a closed boundary (loop) that can move or be deformed. The area bounded by the loop is denoted as Σ(t); then the time-derivative can be expressed as

${\displaystyle {\frac {\mathrm {d} \Phi _{B}}{\mathrm {d} t}}={\frac {\mathrm {d} }{\mathrm {d} t}}\int _{\Sigma (t)}\mathbf {B} (t)\cdot \mathrm {d} \mathbf {A} }$

The integral can change over time for two reasons: the integrand can change, or the integration region can change. These add linearly, therefore:

${\displaystyle \left.{\frac {\mathrm {d} \Phi _{B}}{\mathrm {d} t}}\right|_{t=t_{0}}=\left(\int _{\Sigma (t_{0})}\left.{\frac {\partial \mathbf {B} }{\partial t}}\right|_{t=t_{0}}\cdot \mathrm {d} \mathbf {A} \right)+\left({\frac {\mathrm {d} }{\mathrm {d} t}}\int _{\Sigma (t)}\mathbf {B} (t_{0})\cdot \mathrm {d} \mathbf {A} \right)}$

where t0 is any given fixed time.
We will show that the first term on the right-hand side corresponds to transformer EMF, the second to motional EMF (from the magnetic Lorentz force on charge carriers due to the motion or deformation of the conducting loop in the magnetic field). The first term on the right-hand side can be rewritten using the integral form of the Maxwell–Faraday equation:

${\displaystyle \int _{\Sigma (t_{0})}\left.{\frac {\partial \mathbf {B} }{\partial t}}\right|_{t=t_{0}}\cdot \mathrm {d} \mathbf {A} =-\oint _{\partial \Sigma (t_{0})}\mathbf {E} (t_{0})\cdot \mathrm {d} \mathbf {l} }$

Next, we analyze the second term on the right-hand side:

${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\int _{\Sigma (t)}\mathbf {B} (t_{0})\cdot \mathrm {d} \mathbf {A} }$

(Figure: the area swept out by a vector element dl of a loop ∂Σ in time dt when it has moved with velocity vl.) The proof of this is a little more difficult than the first term; more details and alternate approaches for the proof can be found in the references. [25] [26] [27] As the loop moves and/or deforms, it sweeps out a surface (see the figure). As a small part of the loop dl moves with velocity vl over a short time dt, it sweeps out an area whose vector is dAsweep = vl dt × dl (note that this vector points out of the page in the figure). Therefore, the change of the magnetic flux through the loop due to the deformation or movement of the loop over the time dt is

${\displaystyle \mathbf {d} \Phi _{B}=\int \mathbf {B} \cdot \mathbf {dA} _{sweep}=\int \mathbf {B} \cdot (\mathbf {v} _{\mathbf {l} }\mathrm {d} t\times \mathrm {d} \mathbf {l} )=-\int \mathrm {d} t\,\mathrm {d} \mathbf {l} \cdot (\mathbf {v} _{\mathbf {l} }\times \mathbf {B} )}$

Here, identities of triple scalar products are used. Therefore,

${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\int _{\Sigma (t)}\mathbf {B} (t_{0})\cdot \mathrm {d} \mathbf {A} =-\oint _{\partial \Sigma (t_{0})}(\mathbf {v} _{\mathbf {l} }(t_{0})\times \mathbf {B} (t_{0}))\cdot \mathrm {d} \mathbf {l} }$

where vl is the velocity of a part of the loop ∂Σ. Putting these together results in

${\displaystyle \left.{\frac {\mathrm {d} \Phi _{B}}{\mathrm {d} t}}\right|_{t=t_{0}}=\left(-\oint _{\partial \Sigma (t_{0})}\mathbf {E} (t_{0})\cdot \mathrm {d} \mathbf {l} \right)+\left(-\oint _{\partial \Sigma (t_{0})}{\bigl (}\mathbf {v} _{\mathbf {l} }(t_{0})\times \mathbf {B} (t_{0}){\bigr )}\cdot \mathrm {d} \mathbf {l} \right)}$

${\displaystyle \left.{\frac {\mathrm {d} \Phi _{B}}{\mathrm {d} t}}\right|_{t=t_{0}}=-\oint _{\partial \Sigma (t_{0})}{\bigl (}\mathbf {E} (t_{0})+\mathbf {v} _{\mathbf {l} }(t_{0})\times \mathbf {B} (t_{0}){\bigr )}\cdot \mathrm {d} \mathbf {l} .}$

The result is:

${\displaystyle {\frac {\mathrm {d} \Phi _{B}}{\mathrm {d} t}}=-\oint _{\partial \Sigma }\left(\mathbf {E} +\mathbf {v} _{\mathbf {l} }\times \mathbf {B} \right)\cdot \mathrm {d} \mathbf {l} .}$

where ∂Σ is the boundary (loop) of the surface Σ, and vl is the velocity of a part of the boundary. In the case of a conductive loop, EMF (electromotive force) is the electromagnetic work done on a unit charge when it has traveled around the loop once, and this work is done by the Lorentz force. Therefore, the EMF is expressed as

${\displaystyle {\mathcal {E}}=\oint \left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)\cdot \mathrm {d} \mathbf {l} }$

where ${\displaystyle {\mathcal {E}}}$ is the EMF and v is the unit charge velocity.
In a macroscopic view, for charges on a segment of the loop, v consists on average of two components: one is the velocity of the charge along the segment, vt, and the other is the velocity of the segment, vl (the loop is deformed or moved). vt does not contribute to the work done on the charge, since its direction is the same as that of ${\displaystyle \mathrm {d} \mathbf {l} }$. Mathematically, ${\displaystyle (\mathbf {v} \times \mathbf {B} )\cdot \mathrm {d} \mathbf {l} =((\mathbf {v} _{t}+\mathbf {v} _{l})\times \mathbf {B} )\cdot \mathrm {d} \mathbf {l} =(\mathbf {v} _{t}\times \mathbf {B} +\mathbf {v} _{l}\times \mathbf {B} )\cdot \mathrm {d} \mathbf {l} =(\mathbf {v} _{l}\times \mathbf {B} )\cdot \mathrm {d} \mathbf {l} }$ since ${\displaystyle (\mathbf {v} _{t}\times \mathbf {B} )}$ is perpendicular to ${\displaystyle \mathrm {d} \mathbf {l} }$, as ${\displaystyle \mathbf {v} _{t}}$ and ${\displaystyle \mathrm {d} \mathbf {l} }$ point along the same direction. Now we can see that, for the conductive loop, the EMF is equal to the time-derivative of the magnetic flux through the loop, except for the sign. Therefore, we now reach the equation of Faraday's law (for the conductive loop) as ${\displaystyle {\frac {\mathrm {d} \Phi _{B}}{\mathrm {d} t}}=-{\mathcal {E}}}$ where ${\displaystyle {\mathcal {E}}=\oint \left(\mathbf {E} +\mathbf {v} _{l}\times \mathbf {B} \right)\cdot \mathrm {d} \mathbf {l} }$. Breaking this integral apart, ${\displaystyle \oint \mathbf {E} \cdot \mathrm {d} \mathbf {l} }$ gives the transformer EMF (due to a time-varying magnetic field) and ${\displaystyle \oint \left(\mathbf {v} _{l}\times \mathbf {B} \right)\cdot \mathrm {d} \mathbf {l} }$ gives the motional EMF (due to the magnetic Lorentz force on charges caused by the motion or deformation of the loop in the magnetic field). ## EMF for non-thin-wire circuits It is tempting to generalize Faraday's law to state: If ∂Σ is any arbitrary closed loop in space whatsoever, then the total time derivative of the magnetic flux through Σ equals the EMF around ∂Σ. This statement, however, is not always true, and not merely for the obvious reason that EMF is undefined in empty space when no conductor is present. As noted in the previous section, Faraday's law is not guaranteed to work unless the velocity of the abstract curve ∂Σ matches the actual velocity of the material conducting the electricity. [28] The two examples illustrated below show that one often obtains incorrect results when the motion of ∂Σ is divorced from the motion of the material. [15] One can analyze examples like these by taking care that the path ∂Σ moves with the same velocity as the material. [28] Alternatively, one can always correctly calculate the EMF by combining the Lorentz force law with the Maxwell–Faraday equation: [15] [29] ${\displaystyle {\mathcal {E}}=\oint _{\partial \Sigma }(\mathbf {E} +\mathbf {v} _{m}\times \mathbf {B} )\cdot \mathrm {d} \mathbf {l} =-\int _{\Sigma }{\frac {\partial \mathbf {B} }{\partial t}}\cdot \mathrm {d} \Sigma +\oint _{\partial \Sigma }(\mathbf {v} _{m}\times \mathbf {B} )\cdot \mathrm {d} \mathbf {l} }$ where "it is very important to notice that (1) [vm] is the velocity of the conductor ... not the velocity of the path element dl and (2) in general, the partial derivative with respect to time cannot be moved outside the integral since the area is a function of time." [29]
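For the standard thin-wire case where the flux rule does hold, a concrete check may help (an added illustration, not from the article): a rod of length ℓ slides at speed v on rails in a static, uniform field B. The transformer term ∮E·dl vanishes (∂B/∂t = 0), and the motional term gives |ε| = Bℓv, which matches dΦB/dt because the enclosed area grows at the rate ℓv. The numbers below are arbitrary example values.

```python
# Sliding-rod check: with static uniform B, the motional term alone accounts
# for the EMF, and it agrees with the flux rule dPhi/dt.
import numpy as np

B, l, v = 0.2, 0.5, 3.0              # field [T], rod length [m], speed [m/s]
t = np.linspace(0.0, 1.0, 1001)
area = l * (0.1 + v * t)             # enclosed area as the rod slides
phi = B * area                       # flux through the circuit
emf_flux_rule = np.gradient(phi, t)  # dPhi/dt, numerically
emf_motional = B * l * v             # |loop integral of (v x B) . dl|

print(emf_flux_rule[500], emf_motional)   # both 0.3 V
```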
### Two phenomena Faraday's law is a single equation describing two different phenomena: the motional EMF generated by a magnetic force on a moving wire (see the Lorentz force), and the transformer EMF generated by an electric force due to a changing magnetic field (described by the Maxwell–Faraday equation). James Clerk Maxwell drew attention to this fact in his 1861 paper On Physical Lines of Force . [30] In the latter half of Part II of that paper, Maxwell gives a separate physical explanation for each of the two phenomena. A reference to these two aspects of electromagnetic induction is made in some modern textbooks. [31] As Richard Feynman states: So the "flux rule" that the emf in a circuit is equal to the rate of change of the magnetic flux through the circuit applies whether the flux changes because the field changes or because the circuit moves (or both) ... Yet in our explanation of the rule we have used two completely distinct laws for the two cases – v × B for "circuit moves" and ∇ × E = −∂B/∂t for "field changes". We know of no other place in physics where such a simple and accurate general principle requires for its real understanding an analysis in terms of two different phenomena. Richard P. Feynman, The Feynman Lectures on Physics [15] ### Einstein's view Reflection on this apparent dichotomy was one of the principal paths that led Einstein to develop special relativity: It is known that Maxwell's electrodynamics—as usually understood at the present time—when applied to moving bodies, leads to asymmetries which do not appear to be inherent in the phenomena. Take, for example, the reciprocal electrodynamic action of a magnet and a conductor. The observable phenomenon here depends only on the relative motion of the conductor and the magnet, whereas the customary view draws a sharp distinction between the two cases in which either the one or the other of these bodies is in motion. For if the magnet is in motion and the conductor at rest, there arises in the neighbourhood of the magnet an electric field with a certain definite energy, producing a current at the places where parts of the conductor are situated. But if the magnet is stationary and the conductor in motion, no electric field arises in the neighbourhood of the magnet. In the conductor, however, we find an electromotive force, to which in itself there is no corresponding energy, but which gives rise—assuming equality of relative motion in the two cases discussed—to electric currents of the same path and intensity as those produced by the electric forces in the former case. Examples of this sort, together with unsuccessful attempts to discover any motion of the earth relative to the "light medium," suggest that the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest. ## Related Research Articles An electromagnetic field is a physical field produced by electrically charged objects. It affects the behavior of charged objects in the vicinity of the field. The electromagnetic field extends indefinitely throughout space and describes the electromagnetic interaction. It is one of the four fundamental forces of nature. Flux describes any effect that appears to pass or travel through a surface or substance. It is a concept used both in physics and in applied mathematics; both usages have mathematical rigor, enabling comparison of the underlying mathematics when the terminology is unclear.
For transport phenomena, flux is a vector quantity, describing the magnitude and direction of the flow of a substance or property. In electromagnetism, flux is a scalar quantity, defined as the surface integral of the component of a vector field perpendicular to the surface at each point. An electric potential is the amount of work needed to move a unit of positive charge from a reference point to a specific point inside the field without producing an acceleration. Typically, the reference point is the Earth or a point at infinity, although any point beyond the influence of the electric field charge can be used. Electromagnetic or magnetic induction is the production of an electromotive force across an electrical conductor in a changing magnetic field. In physics, specifically electromagnetism, the magnetic flux through a surface is the surface integral of the normal component of the magnetic field B passing through that surface. The SI unit of magnetic flux is the weber (Wb), and the CGS unit is the maxwell. Magnetic flux is usually measured with a fluxmeter, which contains measuring coils and electronics that evaluate the change of voltage in the measuring coils to calculate the magnetic flux. In electromagnetism and electronics, inductance describes the tendency of an electrical conductor, such as a coil, to oppose a change in the electric current through it. The change in current induces a reverse electromotive force (voltage). When an electric current flows through a conductor, it creates a magnetic field around that conductor. A changing current, in turn, creates a changing magnetic field, the surface integral of which is known as magnetic flux. From Faraday's law of induction, any change in magnetic flux through a circuit induces an electromotive force (voltage) across that circuit, a phenomenon known as electromagnetic induction. Inductance is specifically defined as the ratio between this induced voltage and the rate of change of the current in the circuit. In classical electromagnetism, Ampère's circuital law relates the integrated magnetic field around a closed loop to the electric current passing through the loop. James Clerk Maxwell derived it using hydrodynamics in his 1861 paper "On Physical Lines of Force" and it is now one of the Maxwell equations, which form the basis of classical electromagnetism. "A Dynamical Theory of the Electromagnetic Field" is a paper by James Clerk Maxwell on electromagnetism, published in 1865. In the paper, Maxwell derives an electromagnetic wave equation with a velocity for light in close agreement with measurements made by experiment, and deduces that light is an electromagnetic wave. A magnetic circuit is made up of one or more closed loop paths containing a magnetic flux. The flux is usually generated by permanent magnets or electromagnets and confined to the path by magnetic cores consisting of ferromagnetic materials like iron, although there may be air gaps or other materials in the path. Magnetic circuits are employed to efficiently channel magnetic fields in many devices such as electric motors, generators, transformers, relays, lifting electromagnets, SQUIDs, galvanometers, and magnetic recording heads. The term magnetic potential can be used for either of two quantities in classical electromagnetism: the magnetic vector potential, or simply vector potential, A; and the magnetic scalar potential, ψ. Both quantities can be used in certain circumstances to calculate the magnetic field B.
In classical electromagnetism, magnetization or magnetic polarization is the vector field that expresses the density of permanent or induced magnetic dipole moments in a magnetic material. The origin of the magnetic moments responsible for magnetization can be either microscopic electric currents resulting from the motion of electrons in atoms, or the spin of the electrons or the nuclei. Net magnetization results from the response of a material to an external magnetic field, together with any unbalanced magnetic dipole moments that may be inherent in the material itself; for example, in ferromagnets. Magnetization is not always uniform within a body, but rather varies between different points. Magnetization also describes how a material responds to an applied magnetic field as well as the way the material changes the magnetic field, and can be used to calculate the forces that result from those interactions. It can be compared to electric polarization, which is the measure of the corresponding response of a material to an electric field in electrostatics. Physicists and engineers usually define magnetization as the quantity of magnetic moment per unit volume. It is represented by a pseudovector M. The electromagnetic wave equation is a second-order partial differential equation that describes the propagation of electromagnetic waves through a medium or in a vacuum. It is a three-dimensional form of the wave equation. The homogeneous form of the equation, written in terms of either the electric field E or the magnetic field B, takes the form: ${\displaystyle \left(\nabla ^{2}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\right)\mathbf {E} =0,\qquad \left(\nabla ^{2}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\right)\mathbf {B} =0,}$ where c is the speed of light in the medium. The Maxwell stress tensor is a symmetric second-order tensor used in classical electromagnetism to represent the interaction between electromagnetic forces and mechanical momentum. In simple situations, such as a point charge moving freely in a homogeneous magnetic field, it is easy to calculate the forces on the charge from the Lorentz force law. When the situation becomes more complicated, this ordinary procedure can become impossibly difficult, with equations spanning multiple lines. It is therefore convenient to collect many of these terms in the Maxwell stress tensor, and to use tensor arithmetic to find the answer to the problem at hand. There are various mathematical descriptions of the electromagnetic field that are used in the study of electromagnetism, one of the four fundamental interactions of nature. Several approaches are discussed; generally speaking, the equations are in terms of electric and magnetic fields, potentials, and charges with currents. In physics, defining equations are equations that define new quantities in terms of base quantities. This article uses the current SI system of units, not natural or characteristic units. ## References 1. Sadiku, M. N. O. (2007). Elements of Electromagnetics (4th ed.). New York & Oxford: Oxford University Press. p. 386. ISBN 0-19-530048-3.
2. "Applications of electromagnetic induction". Boston University. 1999-07-22.
3. Giancoli, Douglas C. (1998). Physics: Principles with Applications (5th ed.). pp. 623–624.
4. Ulaby, Fawwaz (2007). Fundamentals of Applied Electromagnetics (5th ed.). Pearson: Prentice Hall. p. 255. ISBN 0-13-241326-4.
5. "Joseph Henry". Member Directory, National Academy of Sciences. Retrieved 2016-12-30.
6. Faraday, Michael; Day, P. (1999-02-01). The Philosopher's Tree: A Selection of Michael Faraday's Writings. CRC Press. p. 71. ISBN 978-0-7503-0570-9. Retrieved 28 August 2011.
7. Williams, L. Pearce. Michael Faraday.
8. Clerk Maxwell, James (1904). A Treatise on Electricity and Magnetism. Vol. 2 (3rd ed.). Oxford University Press. pp. 178–179, 189.
9. "Archives Biographies: Michael Faraday". The Institution of Engineering and Technology.
10. Lenz, Emil (1834). "Ueber die Bestimmung der Richtung der durch elektodynamische Vertheilung erregten galvanischen Ströme". Annalen der Physik und Chemie. 107 (31): 483–494. Bibcode:1834AnP...107..483L. doi:10.1002/andp.18341073103. A partial translation of the paper is available in Magie, W. M. (1963). A Source Book in Physics. Cambridge, MA: Harvard Press. pp. 511–513.
11. Poyser, Arthur William (1892). Magnetism and Electricity: A Manual for Students in Advanced Classes. London and New York: Longmans, Green, & Co. Fig. 248, p. 245. Retrieved 2009-08-06.
12. Jordan, Edward; Balmain, Keith G. (1968). Electromagnetic Waves and Radiating Systems (2nd ed.). Prentice-Hall. p. 100. "Faraday's Law, which states that the electromotive force around a closed path is equal to the negative of the time rate of change of magnetic flux enclosed by the path."
13. Hayt, William (1989). Engineering Electromagnetics (5th ed.). McGraw-Hill. p. 312. ISBN 0-07-027406-1. "The magnetic flux is that flux which passes through any and every surface whose perimeter is the closed path."
14. Feynman, R. P. (2006). Leighton, R. B.; Sands, M. L. (eds.). The Feynman Lectures on Physics. San Francisco: Pearson/Addison-Wesley. Vol. II, p. 17-2. ISBN 0-8053-9049-9. "The flux rule" is the terminology that Feynman uses to refer to the law relating magnetic flux to EMF.
15. Griffiths, David J. (1999). Introduction to Electrodynamics (3rd ed.). Upper Saddle River, NJ: Prentice Hall. pp. 301–303. ISBN 0-13-805326-X.
16. Tipler; Mosca. Physics for Scientists and Engineers. p. 795.
17. Neumann, Franz Ernst (1846). "Allgemeine Gesetze der inducirten elektrischen Ströme" (PDF). Annalen der Physik. 143 (1): 31–44. Bibcode:1846AnP...143...31N. doi:10.1002/andp.18461430103.
18. Salu, Yehuda (2014). "A Left Hand Rule for Faraday's Law". The Physics Teacher. 52: 48. Bibcode:2014PhTea..52...48S. doi:10.1119/1.4849156.
19. Salu, Yehuda. "A Left Hand Rule for Faraday's Law". www.PhysicsForArchitects.com/bypassing-lenzs-rule. Retrieved 30 July 2017.
20. Whelan, P. M.; Hodgeson, M. J. (1978). Essential Principles of Physics (2nd ed.). John Murray. ISBN 0-7195-3382-1.
21. Nave, Carl R. "Faraday's Law". HyperPhysics. Georgia State University. Retrieved 2011-08-29.
22. Harrington, Roger F. (2003). Introduction to Electromagnetic Engineering. Mineola, NY: Dover Publications. p. 56. ISBN 0-486-43241-6.
23. Griffiths, David J. (1999). Introduction to Electrodynamics (3rd ed.). Upper Saddle River, NJ: Prentice Hall. pp. 222–224. ISBN 0-13-805326-X.
24. Davison, M. E. (1973). "A Simple Proof that the Lorentz Force Law Implied Faraday's Law of Induction, when B is Time Independent". American Journal of Physics. 41 (5): 713. Bibcode:1973AmJPh..41..713D. doi:10.1119/1.1987339.
25. Krey; Owen. Basic Theoretical Physics: A Concise Overview. p. 155.
26. Simonyi, K. (1973). Theoretische Elektrotechnik (5th ed.). Berlin: VEB Deutscher Verlag der Wissenschaften. Eq. 20, p. 47.
27. Stewart, Joseph V. Intermediate Electromagnetic Theory. p. 396. "This example of Faraday's Law [the homopolar generator] makes it very clear that in the case of extended bodies care must be taken that the boundary used to determine the flux must not be stationary but must be moving with respect to the body."
28. Hughes, W. F.; Young, F. J. (1965).
The Electromagnetodynamics of Fluids. John Wiley. Eq. (2.6–13), p. 53.
29. Clerk Maxwell, James (1861). "On physical lines of force". Philosophical Magazine. Taylor & Francis. 90: 11–23. doi:10.1080/14786431003659180.
30. Griffiths, David J. (1999). Introduction to Electrodynamics (3rd ed.). Upper Saddle River, NJ: Prentice Hall. pp. 301–303. ISBN 0-13-805326-X. Note that the law relating flux to EMF, which this article calls "Faraday's law", is referred to in Griffiths' terminology as the "universal flux rule". Griffiths uses the term "Faraday's law" to refer to what this article calls the "Maxwell–Faraday equation". So, in the textbook, Griffiths' statement is about the "universal flux rule".
2019-03-19 09:37:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 30, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9039730429649353, "perplexity": 784.9646103613875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201953.19/warc/CC-MAIN-20190319093341-20190319115341-00058.warc.gz"}
https://zbmath.org/?q=an:1169.60013
## Malliavin calculus for stochastic differential equations driven by a fractional Brownian motion. (English) Zbl 1169.60013 The following stochastic differential equation on $$\mathbb{R}^d$$ is considered: $X^i_t=x_0^i+\sum^m_{j=1}\int^t_0\sigma^{ij}(X_s)\,dB^j_s+\int^t_0 b^i(X_s)\,ds,\quad t\in[0,T],\quad i=1,\dots,d,\tag{1}$ where $$x_0\in\mathbb{R}^d$$ is the initial value of the process $$X$$ and $$B=\{B_t,t\geq 0\}$$ is an $$m$$-dimensional fractional Brownian motion of Hurst parameter $$H\in(\tfrac 12,1)$$. The authors study the regularity of the solution to equation (1) in the sense of Malliavin calculus. They prove the differentiability of the solution in the directions of the Cameron–Martin space and the absolute continuity of the law of the solution with respect to the Lebesgue measure, under an ellipticity condition on the coefficient $$\sigma$$. ### MSC: 60H10 Stochastic ordinary differential equations (aspects of stochastic analysis) 60H07 Stochastic calculus of variations and the Malliavin calculus ### References: [1] Baudoin, F.; Hairer, M., A version of Hörmander's theorem for the fractional Brownian motion, Probab. Theory Related Fields, 139, 373-395, (2007) · Zbl 1123.60038 [2] T. Cass, P. Friz, Densities for rough differential equations under Hörmander's condition, arXiv preprint (2007) [3] T. Cass, P. Friz, N. Victoir, Non-degeneracy of Wiener functionals arising from rough differential equations, arXiv preprint (2007) · Zbl 1175.60034 [4] Coutin, L.; Qian, Z., Stochastic analysis, rough path analysis and fractional Brownian motions, Probab. Theory Related Fields, 122, 108-140, (2002) · Zbl 1047.60029 [5] Decreusefond, L.; Üstünel, A.S., Stochastic analysis of the fractional Brownian motion, Potential Anal., 10, 177-214, (1998) · Zbl 0924.60034 [6] Y. Hu, D. Nualart, Differential equations driven by Hölder continuous functions of order greater than 1/2, Preprint, University of Kansas (2006) [7] Kusuoka, S., The non-linear transformation of Gaussian measure on Banach space and its absolute continuity (I), J. Fac. Sci. Univ. Tokyo IA, 29, 567-597, (1982) · Zbl 0525.60050 [8] Lyons, T., Differential equations driven by rough signals (I): an extension of an inequality of L.C. Young, Math. Res. Lett., 1, 451-464, (1994) · Zbl 0835.34004 [9] Lyons, T.; Qian, Z., System Control and Rough Paths. Oxford Mathematical Monographs, (2002), Oxford Science Publications, Oxford University Press, Oxford [10] Lyons, T.; Dong Li, X., Smoothness of Itô maps and diffusion processes on path spaces, I, Ann. Sci. École Norm. Sup., 39, 4, 649-677, (2006) · Zbl 1127.60033 [11] Nourdin, I.; Simon, T., On the absolute continuity of one-dimensional SDEs driven by a fractional Brownian motion, Statist. Probab. Lett., 76, 907-912, (2006) · Zbl 1091.60008 [12] Nualart, D., The Malliavin Calculus and Related Topics, (2006), Springer-Verlag · Zbl 1099.60003 [13] Nualart, D., Stochastic integration with respect to fractional Brownian motion and applications, Contemp. Math., 336, 3-39, (2003) · Zbl 1063.60080 [14] Nualart, D.; Răşcanu, A., Differential equations driven by fractional Brownian motion, Collect. Math., 53, 55-81, (2002) · Zbl 1018.60057
[15] Samko, S.G.; Kilbas, A.A.; Marichev, O.I., Fractional Integrals and Derivatives: Theory and Applications, (1993), Gordon and Breach Science Publishers, Yverdon · Zbl 0818.26003 [16] Young, L.C., An inequality of the Hölder type connected with Stieltjes integration, Acta Math., 67, 251-282, (1936) · Zbl 0016.10404 [17] Zähle, M., Integration with respect to fractal functions and stochastic calculus, I, Probab. Theory Related Fields, 111, 333-374, (1998) · Zbl 0918.60037 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
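To make equation (1) concrete, here is a minimal numerical sketch (added here; it is not the paper's method). It samples a fractional Brownian motion with Hurst parameter H in (1/2, 1) by Cholesky factorization of the fBm covariance R(s, u) = (s^{2H} + u^{2H} − |s − u|^{2H})/2, then drives a toy one-dimensional SDE with a plain Euler scheme; for H > 1/2 the stochastic integral can be read pathwise in the Young sense, which is what makes the naive Euler step meaningful. The step count, coefficients, and seed are arbitrary choices for illustration.

```python
import numpy as np

def fbm_path(n_steps, T=1.0, H=0.75, rng=None):
    """Sample an fBm on a uniform grid via Cholesky of its covariance."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(T / n_steps, T, n_steps)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2*H) + u**(2*H) - np.abs(s - u)**(2*H))
    L = np.linalg.cholesky(cov)
    return t, L @ rng.standard_normal(n_steps)

def euler_sde(x0, sigma, b, t, B):
    """Euler scheme for dX = sigma(X) dB + b(X) dt along an fBm path B."""
    x = np.empty(len(t) + 1)
    x[0] = x0
    dB = np.diff(np.concatenate(([0.0], B)))  # increments, with B_0 = 0
    dt = t[1] - t[0]
    for k in range(len(t)):
        x[k + 1] = x[k] + sigma(x[k]) * dB[k] + b(x[k]) * dt
    return x

t, B = fbm_path(n_steps=500, H=0.75, rng=np.random.default_rng(0))
X = euler_sde(x0=1.0, sigma=lambda x: 0.3 * x, b=lambda x: -0.5 * x, t=t, B=B)
print(X[-1])
```

The Cholesky method is exact but costs O(n³); for long paths, circulant-embedding samplers are the usual alternative.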
2023-03-31 16:09:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6993029117584229, "perplexity": 2136.44391221806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949644.27/warc/CC-MAIN-20230331144941-20230331174941-00168.warc.gz"}
https://questions.examside.com/past-years/jee/question/an-ammeter-reads-upto-1-ampere-its-internal-resistance-2003-marks-4-uq9zqy6wgfx7ctg0.htm
JEE Mains Previous Years Questions with Solutions 1 AIEEE 2003 An ammeter reads up to $1$ ampere. Its internal resistance is $0.81$ $ohm$. To increase the range to $10$ $A$ the value of the required shunt is A $0.03\,\Omega$ B $0.3\,\Omega$ C $0.9\,\Omega$ D $0.09\,\Omega$ Explanation ${i_g} \times G = \left( {i - {i_g}} \right)S$ $\therefore$ $S = {{{i_g} \times G} \over {i - {i_g}}} = {{1 \times 0.81} \over {10 - 1}} = 0.09\,\Omega$ 2 AIEEE 2003 The thermo $e.m.f.$ of a thermo-couple is $25$ $\mu V/{}^ \circ C$ at room temperature. A galvanometer of $40$ $ohm$ resistance, capable of detecting current as low as ${10^{ - 5}}\,A,$ is connected with the thermo-couple. The smallest temperature difference that can be detected by this system is A ${16^0}C$ B ${12^0}C$ C ${8^0}C$ D ${20^0}C$ Explanation Let $\theta$ be the smallest temperature difference that can be detected by the thermocouple. Then $I \times R = \left( {25 \times {{10}^{ - 6}}} \right)\theta$ where ${\rm I}$ is the smallest current which can be detected by the galvanometer of resistance $R.$ $\therefore$ ${10^{ - 5}} \times 40 = 25 \times {10^{ - 6}} \times \theta$ $\therefore$ $\theta = {16^ \circ }C.$ 3 AIEEE 2003 The length of a wire of a potentiometer is $100$ $cm$, and the $e.m.f.$ of its standard cell is $E$ volt. It is employed to measure the $e.m.f.$ of a battery whose internal resistance is $0.5\Omega .$ If the balance point is obtained at $l=30$ $cm$ from the positive end, the $e.m.f.$ of the battery is A ${{30E} \over {100.5}}$ B ${{30E} \over {\left( {100 - 0.5} \right)}}$ C ${{30\left( {E - 0.5i} \right)} \over {100}}$ D ${{30E} \over {100}} - 0.5i$, where $i$ is the current in the potentiometer wire Explanation Potential gradient along the wire, K = ${E \over {100}}$ volt/cm For the battery, V = E' – ir, where E' is the emf of the battery, or K × 30 = E' – ir, where current i is drawn from the battery, or ${{E \times 30} \over {100}}$ = E' + 0.5i or E' = ${{30E} \over {100}} - 0.5i$ 4 AIEEE 2002 If in the circuit, power dissipation is $150W,$ then $R$ is A $2\,\Omega$ B $6\,\Omega$ C $5\,\Omega$ D $4\,\Omega$ Explanation The equivalent resistance is ${{\mathop{\rm R}\nolimits} _{eq}} = {{2 \times R} \over {2 + R}}$ $\therefore$ Power dissipation $P = {{{V^2}} \over {{{\mathop{\rm R}\nolimits} _{eq}}}}$ $\therefore$ $150 = {{15 \times 15} \over {{R_{eq}}}}$ $\therefore$ ${{\mathop{\rm R}\nolimits} _{eq}} = {{225} \over {150}} = {3 \over 2}$ $\Rightarrow {{2R} \over {2 + R}} = {3 \over 2}$ $\Rightarrow 4R = 6 + 3R$ $\Rightarrow R = 6\Omega$
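A quick numeric check of the answers above (added here, using the same formulas as the explanations; Q3 is symbolic, and Q4 takes V = 15 V as the explanation does):

```python
i_g, G, i = 1.0, 0.81, 10.0           # Q1: ammeter shunt
S = i_g * G / (i - i_g)
print(f"shunt S = {S:.2f} ohm")       # 0.09 ohm -> option D

I_min, R, k = 1e-5, 40.0, 25e-6       # Q2: thermocouple sensitivity
theta = I_min * R / k
print(f"smallest dT = {theta:.0f} C") # 16 C -> option A

# Q3 is symbolic: E' = 30E/100 - 0.5*i -> option D

P, V = 150.0, 15.0                    # Q4: power dissipation
R_eq = V**2 / P                       # 1.5 ohm; solve 2R/(2+R) = R_eq for R
R4 = 2 * R_eq / (2 - R_eq)
print(f"R = {R4:.0f} ohm")            # 6 ohm -> option B
```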
2022-01-20 13:07:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39849114418029785, "perplexity": 1144.8314670883562}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301863.7/warc/CC-MAIN-20220120130236-20220120160236-00675.warc.gz"}
https://datascience.stackexchange.com/questions/39883/k-nearest-neighbors-complexity
# K-nearest neighbors complexity Why does the complexity of k-nearest neighbors increase with a lower value of k? And when does the plot for k-nearest neighbors have a smooth or a complex decision boundary? Please explain in detail. Also, given a data instance to classify, does k-NN compute the probability of each possible class using a statistical model of the input features, or does it just pick the class with the most points in favour of it? The complexity in this instance is discussing the smoothness of the boundary between the different classes. One way of understanding this smoothness complexity is by asking how likely you are to be classified differently if you were to move slightly. If that likelihood is high then you have a complex decision boundary. For the $$k$$-NN algorithm the decision boundary is based on the chosen value for $$k$$, as that is how we will determine the class of a novel instance. As you decrease the value of $$k$$ you will end up making more granular decisions, thus the boundary between different classes will become more complex. You should note that this decision boundary is also highly dependent on the distribution of your classes. Let's see how the decision boundaries change when changing the value of $$k$$ below. We can see that nice boundaries are achieved for $$k=20$$ whereas $$k=1$$ has blue and red pockets in the other region; this is a more complex decision boundary than one which is smooth. First let's make some artificial data with 100 instances and 3 classes.

```python
from sklearn.datasets import make_blobs  # older versions: sklearn.datasets.samples_generator

X, y = make_blobs(n_samples=100, centers=3, n_features=2, cluster_std=5)
```

Let's plot this data to see what we are up against. Now let's see how the boundary looks for different values of $$k$$. I'll post the code I used for this below for your reference. # The code The code used for these experiments is as follows, taken from here:

```python
from sklearn import neighbors
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import numpy as np  # needed below; missing from the original post

k = 20     # the original post set k = 1 but then hard-coded 20 in the classifier
h = 0.02   # mesh step size; undefined in the original post

clf = neighbors.KNeighborsClassifier(k)
clf.fit(X, y)

cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])

# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)

# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.show()
```

• "You should note that this decision boundary is also highly dependent on the distribution of your classes." - While saying this are you meaning that if the distribution is highly clustered, the value of k won't affect much? Since k=1 or k=5 or any other value would have a similar effect. – aspiring1 Oct 19 '18 at 7:31 • That's right because the data will already be very mixed together, so the complexity of the decision boundary will remain high despite a higher value of k. – JahKnows Oct 20 '18 at 16:22
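On the question's last part (probabilities vs. majority vote), which the answer above does not address: in scikit-learn, `KNeighborsClassifier.predict_proba` reports, for each query point, the fraction of its k neighbors belonging to each class (optionally distance-weighted), not a parametric statistical model of the input features. A short sketch continuing the example above:

```python
# k-NN class "probabilities" are just neighbor vote fractions.
clf = neighbors.KNeighborsClassifier(n_neighbors=20)
clf.fit(X, y)

point = X[:1]                     # one query point
print(clf.predict(point))         # majority class among the 20 neighbors
print(clf.predict_proba(point))   # per-class fraction of the 20 neighbors
```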
2020-05-31 19:37:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6033729314804077, "perplexity": 1257.8042325712577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413624.48/warc/CC-MAIN-20200531182830-20200531212830-00004.warc.gz"}
https://www.albert.io/ie/ap-calculus-ab-bc/mean-value-theorem-testing-conditions-on-functions
# Mean Value Theorem: Testing Conditions on Functions APCALC-BNX1FN (Free Version, Moderate) For which of the following functions does the Mean Value Theorem apply on the given interval? $\text{I. } f\left( x \right) =\cfrac { 1 }{ { x }^{ 2 }-1 } \quad \text{ on the interval } [-2,2]$ $\text{II. } g\left( x \right) ={ x }^{ \frac{2}{3} } \quad \text{on the interval }[1,4]$ $\text{III. }h\left( x \right) ={ 2x }^{ 3 }+{ x }^{ 2 }-3 \quad \text{on the interval } [-3,1]$ A I only B III only C I and II D II and III
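A worked check of the hypotheses, added here for reference (standard reasoning; not part of the original item). The MVT requires continuity on the closed interval and differentiability on the open interval:

$\text{I. } f(x)=\cfrac{1}{x^{2}-1} \text{ is undefined at } x=\pm 1\in[-2,2]\text{, so } f \text{ is not continuous on } [-2,2]\text{: the MVT does not apply.}$

$\text{II. } g(x)=x^{2/3} \text{ is continuous on } [1,4] \text{ and } g'(x)=\tfrac{2}{3}x^{-1/3} \text{ exists on } (1,4)\text{: the MVT applies.}$

$\text{III. } h(x)=2x^{3}+x^{2}-3 \text{ is a polynomial, hence continuous and differentiable everywhere: the MVT applies.}$

Hence II and III, i.e. choice D.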
2016-12-06 18:01:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7791886925697327, "perplexity": 5218.318328292937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541950.13/warc/CC-MAIN-20161202170901-00216-ip-10-31-129-80.ec2.internal.warc.gz"}
http://mathhelpforum.com/math-topics/93861-quick-linear-programming-question.html
# Thread: Quick Linear Programming Question 1. ## Quick Linear Programming Question Hey guys, My question is about linear programming (geometric solution), and I have a bit of trouble understanding the question itself. I tried to solve the problem in many ways, but kept getting a different solution from the answers. The question's a bit long and is about pension fund investment: Pension Fund Investment A pension fund has decided to invest $45,000 in the two high-yield stocks listed below: Stock A - Price Per Share: $14 - Yield: 8% Stock B - Price Per Share: $30 - Yield: 6% This pension fund has decided to invest at least 25% of the $45,000 in each of the two stocks. *Further, it has been decided that at most 63% of the $45,000 can be invested in either one of the stocks.* How many shares of each stock should be purchased in order to maximize the annual yield, while meeting the stipulated requirements? What is the annual yield in dollars for the optimal investment plan? Looking at the sentence between the two "*", I understood that I take 63% of the $45,000 and invest it in stock A, for instance, so the equation becomes: if x is the number of shares in stock A, and y is the number of shares in stock B, then: 14x <= 28,350. The full equation set is: 14x+30y >= 11,250 14x <= 28,350 x >= 0, y >= 0 and maximizing the profit P: P = 900x+675y Thanks guys, and hope to hear from you soon. 2. Originally Posted by Zakaria007 [full question quoted above] You have missed the 63% constraint on B: $30y \le 28350$ CB
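For completeness, a sketch of the full system with scipy (added here; not from the thread, and based on one reading of the problem). Besides the constraint CB points out, the 25% floor applies to each stock separately (14x ≥ 11,250 and 30y ≥ 11,250), the total budget gives 14x + 30y ≤ 45,000, and the per-share annual yields work out to 8% of $14 = $1.12 for A and 6% of $30 = $1.80 for B (which does not match the 900x + 675y objective in the post).

```python
# maximize 1.12 x + 1.80 y                 (annual yield in dollars)
# s.t.  14x >= 11250, 30y >= 11250         (at least 25% of $45,000 in each)
#       14x <= 28350, 30y <= 28350         (at most 63% in either stock)
#       14x + 30y <= 45000                 (total budget)
from scipy.optimize import linprog

c = [-1.12, -1.80]                         # linprog minimizes, so negate
A_ub = [[-14, 0], [0, -30],                # lower bounds rewritten as <=
        [14, 0], [0, 30],
        [14, 30]]
b_ub = [-11250, -11250, 28350, 28350, 45000]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
x, y = res.x
print(round(x), round(y), round(-res.fun, 2))
# 2025 shares of A, 555 shares of B, yield $3267.00
```

Stock A returns more per dollar (8% vs. 6%), so the optimum puts the maximum allowed 63% into A and the remainder into B; the continuous LP solution happens to land on whole share counts here.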
2016-09-26 11:44:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.532004177570343, "perplexity": 1862.9232179275039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660746.32/warc/CC-MAIN-20160924173740-00291-ip-10-143-35-109.ec2.internal.warc.gz"}
http://chsz.condominiroma.it/circular-parallel-plate-capacitor.html
# Circular Parallel Plate Capacitor

A parallel plate capacitor is the simplest form of capacitor, and a good one for verifying how capacitance depends on plate size and on the spacing between the plates. The model consists of two conducting plates, each of area A, separated by a gap of thickness d containing a dielectric. Many capacitors and other capacitative objects can be considered as a pair of separated plates, and this calculation allows the capacitance of such systems to be found easily. The farad, F, is the SI unit of capacitance and, from the definition of capacitance, is equal to one coulomb per volt.

Assume two metal plates of area A each, a distance d apart, with voltage V between them and charges +Q and -Q on the plates. The surface charge density has magnitude σ = Q/A, the field between the plates (from Gauss's law) is E = σ/ε0, and the potential difference is V = ∫E dx = Ed. Therefore C = Q/V = σA/(Ed) = ε0A/d. With a dielectric filling the gap this becomes C = K ε0 A/d, where K is the dielectric constant of the material (K = 1 for free space, K > 1 for all media, approximately 1 for air) and ε0 = 8.85 × 10⁻¹² F/m is the permittivity of free space. Equivalently, a capacitor filled with a dielectric of constant κ has C = κC0, where C0 is its capacitance without the dielectric. Because the field between the plates is uniform, the energy density (energy per unit volume) there is constant as well, and outside the capacitor the net field is zero; thus you get the most capacitance when the plates are large and close together. When capacitors are connected in parallel, the total capacitance is the sum of the individual capacitances, Ctot = Σ Ci; capacitors connected in series combine by the reciprocal rule instead.

The "parallel plate" solution ε0A/d is good where the area of the plates dominates the fringing field around the edges; the full solution would include the thickness of the plates and the positions of the remote grounds. For circular plates, the total capacitance including the edge effect can be estimated from Kirchhoff's equation for a circular capacitor. The problem of determining the exact solution for the potential due to a circular parallel plate capacitor is a celebrated one (Sneddon 1966), with the most successful discussion to date being that by Love (1949), in which the mathematical problem is recast in terms of a Fredholm integral equation of the second kind over a finite domain. The capacitance of the circular parallel plate capacitor has also been obtained by solving the Love integral equation using an analytic expansion of the kernel (Martin Norgren and B. L. G. Jonsson); previously, this kind of expansion had been carried out numerically, resulting in accuracy problems at small plate separations. See also "The circular disk parallel plate capacitor" (Carlson, G.) and "The circular parallel plate capacitor: a numerical solution for the potential" (1984).

A charging parallel-plate capacitor also exhibits a displacement current. While charge flows onto the plates at a rate I = dQ/dt, the electric field between the plates is increasing, and the changing field induces a magnetic field that circles the symmetry axis, just as a conduction current would. For circular plates of radius R, the Ampère-Maxwell law applied to a circle of radius r coaxial with the plates gives B = μ0ε0 r (dE/dt)/2 inside the gap (r < R) and B = μ0ε0 R² (dE/dt)/(2r) outside it (r > R), so the induced field peaks at the plate edge r = R. A typical exercise: what is the magnetic field between the plates at a distance of 1 metre from the centre, where the electric field varies by 10¹⁰ V/m per second? (Answer: 5.56 × 10⁻⁸ T.)
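The quoted answer can be checked directly from the formula above (an added sketch; the numbers come from the exercise just stated):

```python
# Induced magnetic field inside a charging circular parallel-plate capacitor,
# from the Ampere-Maxwell law: B(r) = (mu0 * eps0 * r / 2) * dE/dt for r < R.
import math

mu0 = 4 * math.pi * 1e-7   # T*m/A
eps0 = 8.85e-12            # F/m
dE_dt = 1e10               # V/(m*s), rate of change of E between the plates
r = 1.0                    # m, distance from the axis, inside the plates

B = mu0 * eps0 * r * dE_dt / 2
print(f"B = {B:.2e} T")    # ~5.56e-08 T, matching the quoted answer
```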
0% of its maximum View Answer. Parallel-Plate Capacitors In our previous published paper, the normalized charge distribution on plates of a parallel-plate strip capacitor and a parallel-plate disk capacitor are presented [lo], [ll]. Small valued capacitors can be etched into a PCB for RF applications, but under most circumstances it is more cost effective to use discrete capacitors. Because the current is increasing the charge on the capacitor's plates, the electric field between the plates is increasing, and the rate of change of electric field gives the correct value for the field B found above. Voltage = V = 10. Since we know that the basic relationship Q = CV, we must obtain expressions for Q and V to evaluate C. 00 X 10^{4} N/C. Although the solution was first given, in cylindrical coordinates by Sneddon, it was part of a more general treatise on mixed boundary value. Consider a parallel-plate capacitor constructed from two circular metal plates of radius R. The circular disk parallel plate capacitor Carlson, G. Model of a parallel-plate square capacitor. A parallel plate capacitor with variable separation is the standard apparatus used in physics labs to demonstrate the effect of the capacitor's geometry (plate area and plate separation) on capacitance. Inductance of Parallel Plates in Electromagnetic Armor 5c. The problem of determining the electrostatic potential and field outside a parallel plate capacitor is reduced, using symmetry, to a standard boundary value problem in the half space z⩾0. Aluminum foil each with a radius of 3 cm. Two circular parallel metallic plates (diameter d) are separated by a distance "h". k = relative permittivity of the dielectric material between the plates. Electrical Engineering Stack Exchange is a question and answer site for electronics and electrical engineering professionals, students, and enthusiasts. Consider a parallel plate capacitor with a single flaw in one plate that penetrates into the dielectric material. 0 pC goes onto each plate. How far apart are the plates? (The value of ε o. 0 mm and a plate separation of 4. 0-V battery, what is the magnitude of the charge on each of the plates?. Thus, the square capacitor has greater capacitance. What is the displacement current through a circular loop of radius r = 9. Parallel Plate Capacitor Capacitance Calculator. When a voltage V is applied to the capacitor, it stores a charge Q , as shown. The total capacitance of a circular parallel plate capacitor including edge effect, can be calculated using the following equation which is derived from Kirchhoff's equation for a circular capacitor. 3) between two sheets of aluminum foil. 04 The plates of a spherical capacitor have radii 38. A parallel-plate capacitor with circular plates of radius R is being discharged. A parallel-plate capacitor has circular plates of 6. The most common capacitor is the parallel-plate capacitor, illustrated in Figure 14. Question: Figure 20 (Chapter 41) shows a parallel plate capacitor being charged. 0 cm in diameter is accumulating charge at the rate of 32. 0(from Gauss) •Therefore C = Q/V = σA/ (Ed) = є. The electric field strength inside the capacitor is 5. PARALLEL PLATE CAPACITORS : A parallel plate capacitor consists of two equal flat parallel metal plates facing each other and separated by a dielectric of electric permittivity. How is the net energy flow into the capacitor is related to the rate of change of capacitor energy?. 
C = K * ε0 * A/D Where, K = Dielectric constant of material, refer table-1 and table-2 below to select numeric value as per material ε0 = 8. 00, Number of iteration N=611. Aluminum foil each with a radius of 3 cm. With the understanding that the electrodes are. This effect is used inthe following applications. Calculate the potential difference between the plates of the capacitor. If A 1 = ½ A 2 and d 2 = 3 d 1 then determine the ratio of capacities of the parallel-plate capacitor between the image 2 and the image 1. 0mm is connected to a 45 -V battery (Fig. •Assume two metal plates, area A each, distance d apart, Voltage V between them, Charge +-Q on Plates. 0\times {{10}^{-3}}m\] separation. 0 kV, find the surface area of one of the plates. The calculator computes the capacitance per centimeter of wire length for various wire gauges. If the potential between the plates is 27. An ideal air-filled parallel-plate capacitor has round plates and carries a fixed amount of equal but opposite charge on its plates. Since we know that the basic relationship Q = CV, we must obtain expressions for Q and V to evaluate C. 0 m C / s at some instant in time. If A 1 = ½ A 2 and d 2 = 3 d 1 then determine the ratio of capacities of the parallel-plate capacitor between the image 2 and the image 1. The foil has some irregularities as does any surface, and these irregulari-ties would be a reason to ignore the data for the first four dielectric sheets. 00 X 10^{2} V. An empty parallel plate capacitor is connected between the terminals of a 9. (a) What is the rms value of the conduction current?. If C is the capacitance of the parallel plate capacitor. (i) A parallel plate capacitor– Consider a parallel plate capacitor having its plates at z = 0 and z = d with upper plates potential at V 1 and lower feet grounded as shown x y z z = d z = 0 Fig. Two parallel plate capacitors have circular plates. What is the force applied on the plate of the capacitor with the voltage U 1?. A capacitor consists of two parallel circular plates of radius a separated by a distance d (assume ). Transferring 2. A circular parallel-plate capacitor with a spacing of 1. When two plates having same area A, and equal width are placed parallel to each other with a separation of distance d, and if some energy is applied to the plates, then the capacitance of that parallel plate capacitor can be termed as −. at what distance should the plates be held so as to have the same capacitance as that of a sphere of diameter 20 cm. 85*10 12 F/M]/d. A parallel plate capacitor with circular plates of radius R = 16. TASK NUMBER 6. 0 F with plates separated by 1. Using Gauss' Law, We can evaluate E, the electric field between the plates once we employ an appropriate gaussian surface. At the beginning, the plates have a charge Q0 and -Q0. The Farad, F, is the SI unit for capacitance, and from the definition of capacitance is seen to be equal to a Coulomb/Volt. The problem of determining the exact solution for the potential due to a circular parallel plate capacitor is a celebrated one (Sneddon 1966), with the most successful discussion to date being that by Love (1949), in which the mathematical problem is recast in terms of a Fredholm integral equation of the second kind over a finite domain. As x goes from 0 to 3d (a) the magnitude of the electric field remains the same. 0 cm in diameter is accumulating charge at the rate of 32. In a parallel plate capacitor, C = [A*Er*9. 
The diagrams show parallel-plate capacitors with different shaped plates, one rectangular and one circular. The square plates have an area of x², while the circular plates have an area of π(x/2)² ≈ 0.785x²; in the case of equal capacitance, k_sq ε₀ x²/d = k_circ ε₀ π(x/2)²/d. Thus, for the same dielectric, the square capacitor has greater capacitance. (Apparatus: The Circular Variable Parallel Plate Capacitor, corner view 1 and corner view 2.) In between the plates we can have air, vacuum, or a non-conductive material called a dielectric. A parallel-plate capacitor is made of 2 circular plates separated by a distance of 5 mm and with a dielectric between them; find the charge on the plates. In a parallel-plate capacitor, C = A · εr · (8.85 × 10⁻¹² F/m) / d. Using Gauss's law, we can evaluate E, the electric field between the plates, once we employ an appropriate Gaussian surface; in this case, we will use a box with one side embedded within the top plate. At the beginning, the plates have a charge Q₀ and −Q₀; it is charged with equal charges of opposite sign. The farad, F, is the SI unit for capacitance, and from the definition of capacitance it is seen to be equal to a coulomb per volt. Re: Help, measuring dielectric constant with parallel plate capacitor: if I remember right, Agilent has published application notes about microwave permittivity measurements. The foil has some irregularities, as does any surface, and these irregularities would be a reason to ignore the data for the first four dielectric sheets.
The problem of determining the exact solution for the potential due to a circular parallel plate capacitor is a celebrated one (Sneddon 1966), with the most successful discussion to date being that by Love (1949), in which the mathematical problem is recast in terms of a Fredholm integral equation of the second kind over a finite domain. The capacitance of the parallel-plate capacitor is [3] C = 4ε₀a ∫₀¹ f(s) ds (1), where the function f(s) is the solution to the modified Love integral equation f(s) − ∫₀¹ K(s,t) f(t) dt = 1, 0 ≤ s ≤ 1 (2), with kernel K(s,t) = (κ/π) [1/(κ² + (s−t)²) + 1/(κ² + (s+t)²)] (3). The capacitance of the circular parallel plate capacitor is also calculated by expanding the solution to the Love integral equation into a Fourier cosine series. "An analytic solution for the potential due to a circular parallel plate capacitor", William J. Atkinson, John H. Young and Ivan A. ..., 16 (1983) 2837-2841. The total capacitance of a circular parallel-plate capacitor including the edge effect can be calculated using an equation derived from Kirchhoff's equation for a circular capacitor. The model is idealized in the sense that the plates have zero thickness.
The most common capacitor is the parallel-plate capacitor, illustrated in Figure 14. Let us now determine the capacitance of a common type of capacitor known as the thin parallel-plate capacitor, shown in Figure 1; in its most basic form a capacitor is simply two metal plates with a material of permittivity ε filling the space between them. The parallel-plate capacitor shown in Figure 4 has two identical conducting plates, each having a surface area A, separated by a distance d (with no material between the plates). The figure shows two electrodes, one with charge +Q and the other with −Q, placed face-to-face a distance d apart; this arrangement of two electrodes, charged equally but oppositely, is called a parallel-plate capacitor. Assume two metal plates, area A each, distance d apart, voltage V between them, charge ±Q on the plates. The electric field between large plates is uniform and of magnitude E = σ/ε₀ (from Gauss); since it is uniform, the potential difference must be V = Ed; hence the capacitance is C = Q/V = σA/(Ed) = ε₀A/d, which depends only on the geometry of the capacitor. Note: as d decreases, C increases. The top plate carries a charge +Q while the bottom plate carries a charge −Q; outside the capacitor, the net field is zero. The "parallel plate" solution ε₀A/d is good where the area of the plates dominates the fringing field round the edges; the standard simple analysis neglects the fringing field at the edge of the plates. The capacitance is the ratio between the amount of charge stored in the capacitor and the applied voltage; in a parallel-plate capacitor, capacitance is very nearly proportional to the surface area of the conductor plates and inversely proportional to the separation distance between the plates. Fig(a) shows that the plate area of a parallel-plate capacitor is the area of one of the plates. The charging of the plates can be accomplished by means of a battery which produces a potential difference; when a capacitor is connected across the two battery terminals, charge flows through the capacitor until its potential difference becomes equal to that of the battery. In the short-time limit, if the capacitor starts with a certain voltage V, since the voltage drop on the capacitor is known at this instant, we can replace it with an ideal voltage source of voltage V; specifically, if V = 0 (the capacitor is uncharged), the short-time equivalent of a capacitor is a short circuit. You might think that the parallel plates, being the largest conductors in your setup, would have such a large capacitance that you can ignore the other capacitances.
Worked problems: A hypothetical parallel-plate capacitor has a capacitance of 1.0 F with plates separated by 1.0 mm. (a) Calculate the capacitance. (b) If the separation between the plates is increased, should the radius of the plates be increased or decreased to maintain a capacitance of 1.0 F? Explain. When the capacitor is connected to a 12.0-V battery, what is the magnitude of the charge on each of the plates? A parallel-plate capacitor made of circular plates of radius 25 cm separated by 0.20 cm is charged to a potential difference of 1000 V by a battery. When the plates are moved 0.40 cm farther apart, the charge on each plate remains constant but the potential difference between the plates increases by 100 V. The charge per unit area on each plate has a magnitude of 5. ... The capacitor is charged to a voltage U₁, then it is disconnected from the source; what is the force applied on the plate of the capacitor with the voltage U₁? A parallel-plate capacitor of capacitance C is connected to a battery of voltage V until it is fully charged; the battery is then disconnected, and a piece of Bakelite is inserted which fills the region between the plates. Between the plates there are two parallel dielectrics with constants ε₁ and ε₂; the slab is equidistant from the plates. When the electric field in the dielectric is 3 × 10⁴ V/m, the charge density of the positive plate will be close to ... (C) If one were to keep the 120 V battery connected while ... Energy of a Capacitor in the Presence of a Dielectric: a dielectric-filled parallel-plate capacitor has plate area A, plate separation d and dielectric constant k; the capacitor is connected to a battery that creates a constant voltage (throughout the problem, use ε₀ = 8.85 × 10⁻¹² C²/(N·m²)). Part A: find the energy U₁ of the dielectric-filled capacitor. The plates are separated by a distance of 1.27 mm, and the space between the plates is filled with a dielectric with dielectric constant κ; the permittivity of the material between the plates is ε. Suppose also that a sinusoidal potential difference with a maximum value of 400 V and a frequency of 120 Hz is applied across the plates; that is, V = (400 V) sin[2π(120 Hz) t]. (a) What is the rms value of the conduction current? Consider the magnetic field in the midplane of a capacitor with circular plates of radius R while the capacitor is being charged by a time-dependent current I(t). In particular, consider the displacement current density, ε₀ ∂E/∂t in MKSA units for vacuum between the plates, to consist of a collection of small, close-packed "wires" that extend from one plate of the capacitor to the other. Find (a) the displacement current through the surface S passing between the plates by directly computing dΦ_E/dt through S. A thin straight wire of length d lies along the axis of the capacitor and connects the two plates. An isolated circular parallel-plate capacitor is given a surface charge density K at time t = 0. I'm going to draw these plates again with an exaggerated thickness, and we will try to calculate the capacitance of such a capacitor. Hence, the area of each plate is about 19 cm². As x goes from 0 to 3d, (a) the magnitude of the electric field remains the same. Three circular, parallel-plate capacitors with different geometries are each filled with a different dielectric; the table lists the plate radius r, plate separation d, and dielectric material for each of the three capacitors. Eisco Parallel Plate Capacitor Demonstration: this demonstration capacitor consists of two 120 mm circular metal discs mounted on insulated supports that slide on a track to adjust the distance between them. Assume that C₁ is 10. ... Typical computed answers in these exercises include 771 × 10⁻⁹ C, and 6 cm or 88 pF.
Calculating Parallel Capacitor Capacitance. A parallel combination of three capacitors, with one plate of each capacitor connected to one side of the circuit and the other plate connected to the other side, is illustrated in Figure 4. Each is connected directly to the voltage source just as if it were all alone, and so the total capacitance in parallel is just the sum of the individual capacitances: C_total = C₁ + C₂ + C₃ + ... If two or more capacitors are connected in parallel, the overall effect is that of a single equivalent capacitor having the sum total of the plate areas of the individual capacitors. E4: Parallel-Plate Capacitor: connected in parallel, they are equivalent to a single capacitor C_tot which is the sum of all the components: C_tot = Σᵢ Cᵢ (4. ...). The total potential drop across a series configuration of resistors is equal to the sum of the individual drops; since energy is conserved, and the voltage is equal to the potential ..., the current through each resistor can be found using Ohm's law, I = V/R. Additionally, the voltage across R2 and R3 is equal because these resistors are connected in parallel: V_R2 = V_R3. According to Kirchhoff's Current Law (KCL), the ... Figures are presented showing equipotentials for three different values of κ, the ratio between ...
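The parallel rule quoted above (C_total = C₁ + C₂ + C₃ + ...) and its series counterpart are easy to check numerically; a short R sketch with made-up component values:

```r
# Equivalent capacitance of parallel and series combinations
parallel_C <- function(C) sum(C)          # C_tot = C1 + C2 + ...
series_C   <- function(C) 1 / sum(1 / C)  # 1/C_tot = 1/C1 + 1/C2 + ...

C <- c(10e-6, 22e-6, 47e-6)  # three example capacitors, in farads
parallel_C(C)                # 7.9e-05 F: parallel combinations add
series_C(C)                  # ~6.0e-06 F: series is below the smallest C
```

As the text notes, a parallel connection behaves like a single capacitor with the summed plate area, which is why the values add.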
These devices come in a variety of sizes, and are known as parallel plate capacitors. Dielectric-filled capacitor question: a parallel-plate capacitor has a plate area of ... and a plate separation of ... cm. Two parallel-plate capacitors are shown in the figure below. A parallel-plate capacitor has circular plates of radius 1 m and spacing 1 mm. A parallel-plate capacitor with circular plates of radius R = 16.0 cm and plate separation d = 7.0 cm has a capacitance C = 100 pF. The parallel-plate capacitor consists of two identical conducting circular plates of fixed radii b = 11.0 cm separated by a variable distance d. Two circular disks spaced 0.50 mm apart form a parallel-plate capacitor; what are the diameters of the disks? Each plate has area 80.00 cm² and is connected across the terminals of a battery. A 120 V emf is ... The alternate plates are connected together. All the geometric parameters of the capacitor (plate diameter and plate separation) are now DOUBLED. Although there are other types of capacitors, in this topic we will study the parallel-plate capacitor. The Parallel Plate Capacitor helps students understand the principle of capacitance, its relationship with charge and voltage, its dependence on the surface area of the conductors, and the dielectric medium between the two plates. The arrangement is shown in the figure below and is known as a "parallel-plate capacitor". The simplest model capacitor consists of two thin parallel conductive plates, each with an area A, separated by a uniform gap of thickness d. Capacitance (C, in farads) of two equal-area parallel plates is the product of the permittivity (ε, in farads per meter) of the space separating the plates and the area (A, in square meters) of one plate, divided by the distance (d, in meters) separating the plates. Where: r is the radius in mm. Separation between plates, d = 2 mm = 0.002 m; supply voltage, V = 100 V. We know that the formula for the capacitance of a capacitor is given by C = Q/V; voltage of the battery, V = 200 V. The magnitude of the electric field in the space between the parallel plates is E = σ/ε₀, where σ denotes the surface charge density on one plate. If charged to 300 V (1 statvolt), the force will be 100 × (100/8π) = 398 dynes. For the parallel-plate capacitor the electric field is constant between the plates at all times, so the energy density (energy per unit volume) is also constant: it is equal to u_E divided by the volume of the region between the plates of the capacitor. For the spherical as well as the cylindrical capacitors, the electric field is a function of the radial distance; therefore it changes from point to point along the radial direction. A device for holding a charge of electricity. Since the capacitors are connected in parallel, they all have the same voltage across their plates. Series and Parallel Capacitor Calculator: this tool calculates the overall capacitance value for multiple capacitors connected either in series or in parallel. Some researchers have demonstrated that even with pF-level coupling capacitance, the transferred power can still ... If you look at the attached diagram, Cap 1 is a standard parallel-plate capacitor with plates separated by a distance D and a separator with dielectric constant εr; you can see that the formula involves the distance D and the area A, and for Cap 1 both D and A are constants. You have made a simple parallel plate capacitor. As a budding electrical engineer for Live-Wire Electronics, your task is to design the capacitor by finding what its physical dimensions and separation must be. In addition to the three parallel-plate capacitors, two other devices were constructed for experimental use; attached was a spherical ball electrode with a total surface area of 4. ...
In an RC circuit, a resistance of R = 1.0 GΩ is connected to an air-filled circular parallel-plate capacitor of diameter 12. ... A parallel-plate capacitor has circular plates, each of radius 5 cm; it is being charged so that the electric field in the gap between its plates rises steadily at the rate of 10¹² V/(m·s). Calculate the magnetic field at a point P, halfway between the centre and the periphery of the plates, after t = 10⁻³ seconds. A parallel-plate capacitor with circular plates of radius 20 mm is being discharged by a current of 3.0 A; the displacement current through a central circular area, parallel to the plates and with radius R/2, is 2.0 A. (a) Show that the Poynting vector points everywhere radially inward into the cylindrical volume. A circular loop of radius 0.20 m is concentric with the capacitor and halfway between the plates. Answer to: find the capacitance of a parallel-plate capacitor consisting of circular plates 26 cm in radius separated by 2. ... A parallel-plate, air-filled capacitor has circular plates separated by 1. ...
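Several of the fragments above ask for the magnetic field of a circular-plate capacitor while it is charging or discharging. Applying the Ampere-Maxwell law to the uniform displacement current gives B(r) = μ₀ I r / (2π R²) inside the gap (r ≤ R) and B(r) = μ₀ I / (2π r) outside. A short R sketch (the current and plate radius are illustrative values, not those of any specific problem above):

```r
# B field in the midplane of a circular parallel-plate capacitor
# charged by current I, via the Ampere-Maxwell law (uniform E in the gap)
mu0 <- 4e-7 * pi                        # vacuum permeability, T*m/A

B_midplane <- function(r, R, I) {
  ifelse(r <= R,
         mu0 * I * r / (2 * pi * R^2),  # inside: grows linearly with r
         mu0 * I / (2 * pi * r))        # outside: same as a straight wire
}

# Example: R = 16 cm plates, I = 2 A; field halfway between centre and rim
B_midplane(r = 0.08, R = 0.16, I = 2)   # ~1.25e-06 T
```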
2020-10-27 03:51:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6646321415901184, "perplexity": 560.6556688462676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107893011.54/warc/CC-MAIN-20201027023251-20201027053251-00192.warc.gz"}
https://raweb.inria.fr/rapportsactivite/RA2015/lifeware/uid42.html
Section: New Results. Hybrid Simulation of Heterogeneous Biochemical Models in SBML. Participants: Katherine Chiang, François Fages, Sylvain Soliman. Models of biochemical systems presented as a set of formal reaction rules can be interpreted in different formalisms, most notably as either deterministic Ordinary Differential Equations, stochastic continuous-time Markov Chains, Petri nets or Boolean transition systems. While the formal composition of reaction systems can be syntactically defined as the (multiset) union of the reactions, the composition and simulation of models in different formalisms remains a largely open issue. In [5], we show that the combination of reaction rules and events, as already present in SBML, can be used in a non-standard way to define stochastic and boolean simulators and give meaning to the hybrid composition and simulation of heterogeneous models of biochemical processes. In particular, we show how two SBML reaction models can be composed into one hybrid continuous-stochastic SBML model through a high-level interface for composing reaction models and specifying their interpretation. Furthermore, we describe dynamic strategies for automatically partitioning reactions with stochastic or continuous interpretations according to dynamic criteria. The performances are then compared to static partitioning. The proposed approach is illustrated and evaluated on several examples, including the reconstructions of the hybrid model of the mammalian cell cycle regulation of Singhania et al. as the composition of a Boolean model of cell cycle phase transitions with a continuous model of cyclin activation, the hybrid stochastic-continuous models of bacteriophage T7 infection of Alfonsi et al., and the bacteriophage $\lambda$ model of Goutsias, showing the gain in both accuracy and simulation time of the dynamic partitioning strategy.
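The dynamic-partitioning idea lends itself to a small illustration. The R sketch below is purely schematic (the threshold, time step and propensity values are invented for exposition and are not the criteria of the paper): each reaction is classified by its expected number of firings per integration step, with fast, high-propensity reactions given the continuous (ODE) interpretation and slow ones the stochastic interpretation, and the classification can be recomputed as the state evolves:

```r
# Toy dynamic partitioning: split reactions into continuous vs stochastic
# interpretations based on their current propensities (schematic only)
partition_reactions <- function(propensity, dt, threshold = 10) {
  expected_firings <- propensity * dt            # mean firings per step
  ifelse(expected_firings > threshold, "continuous", "stochastic")
}

a <- c(r1 = 5000, r2 = 0.8, r3 = 120)  # example propensities, in 1/s
partition_reactions(a, dt = 0.01)
#          r1           r2           r3
# "continuous" "stochastic" "stochastic"
```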
2019-04-19 18:28:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4867613613605499, "perplexity": 2053.126370132738}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527866.58/warc/CC-MAIN-20190419181127-20190419203127-00533.warc.gz"}
https://r-spatial.github.io/rgee/reference/ee_as_raster.html
Convert an ee$Image into a raster object ee_as_raster( image, region = NULL, dsn = NULL, via = "drive", container = "rgee_backup", scale = NULL, maxPixels = 1e+09, lazy = FALSE, public = TRUE, add_metadata = TRUE, timePrefix = TRUE, quiet = FALSE, ... ) ## Arguments image ee$Image to be converted into a raster object. region EE Geometry (ee$Geometry$Polygon) which specifies the region to export. CRS needs to be the same as that of the argument image; otherwise, it will be forced. If not specified, image bounds are taken. dsn Character. Output filename. If missing, a temporary file is created. via Character. Method to export the image. Two methods are implemented: "drive", "gcs". See details. container Character. Name of the folder ('drive') or bucket ('gcs') to be exported into. scale Numeric. The resolution in meters per pixel. Defaults to the native resolution of the image. maxPixels Numeric. The maximum allowed number of pixels in the exported image. The task will fail if the exported region covers more pixels in the specified projection. Defaults to 100,000,000. lazy Logical. If TRUE, a future::sequential object is created to evaluate the task in the future. See details. public Logical. If TRUE, a public link to the image is created. add_metadata Logical. If TRUE, a list with export metadata is added to the returned raster object. See details. timePrefix Logical. Add current date and time (Sys.time()) as a prefix to files to export. This parameter helps to avoid exported files with the same name. By default TRUE. quiet Logical. Suppress info message. ... Extra exporting arguments. See ee_image_to_drive and ee_image_to_gcs. ## Value A RasterStack object ## Details ee_as_raster supports the download of ee$Images by two different options: "drive" (Google Drive) and "gcs" (Google Cloud Storage). In both cases, ee_as_raster works as follows: • 1. A task is started (i.e., ee$batch$Task$start()) to move the ee$Image from Earth Engine to the intermediate container specified in the argument via. • 2. If the argument lazy is TRUE, the task is not monitored. This is useful to launch several tasks simultaneously and call them later using ee_utils_future_value or future::value. At the end of this step, the ee$Image is stored at the path specified in the argument dsn. • 3. Finally, if the argument add_metadata is TRUE, a list with the following elements is added to the returned raster object. • if via is "drive": • ee_id: Name of the Earth Engine task. • drive_name: Name of the Image in Google Drive. • drive_id: Id of the Image in Google Drive. • if via is "gcs": • ee_id: Name of the Earth Engine task. • gcs_name: Name of the Image in Google Cloud Storage. • gcs_bucket: Name of the bucket. • gcs_fileFormat: Format of the image. • gcs_URI: gs:// link to the image. Run raster@history@metadata to get the list. Other image download functions: ee_as_stars(), ee_as_thumbnail(), ee_imagecollection_to_local() ## Examples if (FALSE) { library(rgee) ee_Initialize(drive = TRUE, gcs = TRUE) ee_user_info() # Define an image. img <- ee$Image("LANDSAT/LC08/C01/T1_SR/LC08_038029_20180810")$ select(c("B4", "B3", "B2"))$divide(10000) # OPTIONAL display it using Map Map$centerObject(eeObject = img) Map$addLayer(eeObject = img, visParams = list(max = 0.4, gamma = 0.1)) # Define an area of interest.
geometry <- ee$Geometry$Rectangle( coords = c(-110.8, 44.6, -110.6, 44.7), proj = "EPSG:4326", geodesic = FALSE ) ## drive - Method 01 # Simple img_02 <- ee_as_raster( image = img, region = geometry, via = "drive" ) # Lazy img_02 <- ee_as_raster( image = img, region = geometry, via = "drive", lazy = TRUE ) img_02_result <- img_02 %>% ee_utils_future_value() img_02_result@history$metadata # metadata ## gcs - Method 02 # Simple img_03 <- ee_as_raster( image = img, region = geometry, container = "rgee_dev", via = "gcs" ) # Lazy img_03 <- ee_as_raster( image = img, region = geometry, container = "rgee_dev", lazy = TRUE, via = "gcs" ) img_03_result <- img_03 %>% ee_utils_future_value()
2022-07-03 18:15:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19150617718696594, "perplexity": 12200.848788634175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104248623.69/warc/CC-MAIN-20220703164826-20220703194826-00643.warc.gz"}
https://lonmgek.web.app/79972/88393.html
# Section V - Research output - Lund University; 3.1 solutions - linear algebra

Visual proof that centripetal acceleration = v²/r. The velocity is v = ωr·u_t. We can define the acceleration using a normal unit vector, u_r = (−cos ωt, −sin ωt), so the acceleration is a = rω²·u_r. The angular velocity is ω = v/r (we see it in the equation for linear velocity), so a_n = rω² = r(v/r)² = v²/r (replacing ω by the expression above).

But sometimes P = I²R and P = V²/R are not equal; when exactly do we use I²R and when V²/R? (Dilshad Hossain, Aug 11 '14 at 19:20.) @ThePhoton: thanks for the info about MathJax, didn't know. You also know that V = IR; this is Ohm's law. However, when you apply Ohm's law you are also assuming that the resistance R is constant and that the voltage is solely a function of the current: V(I) = IR.

The repurchase price equals the sum of the purchase price and the price differential corresponding to the interest on the extended liquidity over the maturity of ...

The current drawn from the circuit is given by Ohm's law: I = V/R_tot. Here R_tot = 2 + 4 = 6 Ω (the equivalent resistance R_eq). Since the voltage across the resistors is the same, the currents are ... = 3.

where R is the distance from any point on the source to the observation point. A vertical infinitesimal linear electric dipole of length l is placed a distance h above an infinite ... The spacing between the loops is uniform and equal to d = λ/2.

## Solved: 5 a) Calculate the flux ∮ F·da of F = 23k through ...

From this, we conclude that: current equals voltage divided by resistance (I = V/R), resistance equals voltage divided by current (R = V/I), and voltage equals current times resistance. The relationship can be written as V = IR, where V is the voltage across the conductor and I is the current flowing through it. Ohm's law states that the current through a conductor between two points is ... V = I×R, with V = voltage, I = current and R = resistance. The SI unit of resistance is the ohm. This post is to explain that Ohm's law and V = IR are not the same thing; the current through an electrical conductor is directly proportional to the voltage (V) across it. To determine the different currents through each resistance ... The total potential drop across a series configuration of resistors is equal to the sum of the individual drops; since energy is conserved, and the voltage is equal to the potential ..., the current through each resistor can be found using Ohm's law, I = V/R. Additionally, the voltage across R2 and R3 is equal because these resistors are connected in parallel: V_R2 = V_R3. According to Kirchhoff's Current Law (KCL), the ... The voltage, V (in volts), across a circuit is given by Ohm's law: V = IR, where I is the current (in amps) flowing through the circuit and R is the resistance (in ohms). Fluid flow through a hydraulic circuit: Pressure = Flow × Resistance; therefore, pressure plays the role of V in Ohm's law, V = IR (Georg Ohm, published ~1827). V_total = V_R1 + V_R2.

Mapping (Map): a map is a set of ordered pairs (key, ...); classes implement (override) their own equals that compares contents (Object). }else{ V oldValue; for(Entry<K,V> e: table[index]){ if(e.key.equals(key)){ ... Luehmann, A. and R. Borasi (2011). Blogging as change: transforming science & math education through new media literacies. Where I = the current through a conductor, R = its resistance and V = the potential difference across its ends: according to Ohm's law, the product of two of these quantities equals ... where V is the voltage across the conductor, I is the current through the conductor, and R is the resistance of the ... The unit of measurement for the capacitance of a capacitor is the farad, which is equal to 1 coulomb per volt. V = IR; P = IV; therefore, P = I²R. What I don't understand is the practical use of I²R: is it that the power (measured in watts) is equal to the single variable across any 2 points of the circuit and the current flowing through t... When connected across a battery, the terminal voltage is equal to the emf of the battery: V_ab = E. The best way to find the potential difference V across a resistor R when ... Get an idea about potential difference across resistors and in resistor networks, and the current flowing through them. Hence 1 volt = 1 ampere × 1 ohm; V = I × R. Hence the potential difference is equal to the applied ... in circuit A (I = V/R; V across R in circuit B is the same as in circuit A). By the junction rule, the current going into the junction is equal to the sum of the currents coming ... By J. Ekstrand (2011), cited by 2: J. Ekstrand, R. Heluani and M. Zabzine, Sheaves of N=2 supersymmetric vertex ... It is a line, or rather a circle, of thought that we traverse over and over again. In quantum mechanics, if we exchange the positions of two equal fermions, the ... { System.out.println("RatNumTest3: ERROR 1 in equals!!"); } //System.out.println("equals test 2 "); if ( !w.equals(v) ) { // w should be equal to v // with equals(RatNum r) ... editText_userName); setContentView(R.layout.main); } public void onClick(View v) { if (v.equals(btn_Login)) { // prints a toast when ... Try this: place the setContentView(R.layout.main) above btn_Login = (Button)findViewById(R.id.button_login); public void clickMap(View v) { //TODO: do something }. Correct IllegalArgumentException generated, OK. The numbers are -6 and 39.

Figure 4 shows a battery connected across a uniform ... E) whether a part of an algorithm is interchangeable while the program is running, or how objects are added to the list. You do not need to write the equals() method even ... Notice from the last expression that R is case sensitive: "R" is not equal to "r". Keep this in mind when solving the exercises in this chapter! Instructions (100 XP): in the editor on the right, write R code to see if TRUE equals FALSE. Likewise, check if -6 * 14 is not equal to 17 - 101. In R, the operators "|" and "&" indicate the logical operations OR and AND. For example, to test if x equals 1 and y equals 2 we do the following: > x = 1; y = 2 > (x == 1) & (y == 2) [1] TRUE. However, if you are used to programming in C you may be tempted to write ... The rank of any square matrix equals the number of nonzero eigenvalues (with repetitions), and so does the number of nonzero singular values of A; ‖Ax‖, where x ranges over unit vectors in Rⁿ, ... Where p is the hypergeometric probability of a specific table with the observed row and column totals, Fisher's exact p-values are computed by summing probabilities p over defined sets of tables: Prob = Σ_A p. Power (P) is equal to the force (F) times the velocity (v) times the cosine of theta: P = F·v·cos(Θ). R equals zero.

When we know the voltage and the current, we can calculate the resistance. V is the potential difference in volts and I is the current in amperes. The second formula says that P = I²R = V²/R = V·(V/R); in this equation, Joule's law has been combined with Ohm's law, the value of I substituted because I = V/R. In the first equation, the product of I and V equals the power, while in the second just the value of I has been substituted. Ohm's law says: Current = EMF/Resistance, which is I = V/R. You can rearrange this to find the unknown you require, as follows: V = IR, that is, if current and resistance are known you can find the voltage; R = V/I, that is, if voltage and current are known you can find the resistance. In circuit analysis, three equivalent expressions of Ohm's law are used interchangeably: I = V/R, or V = IR, or R = V/I. V = I × R, where V is the potential across a circuit element, I is the current through it, and R is its resistance. This is not a generally applicable definition of resistance: it is only applicable to ohmic resistors, those whose resistance R is constant over the range of interest and for which V obeys a strictly linear relation to I. Materials ... R is the resistance of the resistor, measured in ohms (Ω). Voltage calculation: when we know the current and resistance, we can calculate the voltage.
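The series-circuit arithmetic above and the equivalence of the two power formulas are easy to check in R (the 18 V supply is an assumed value, chosen only because it reproduces the I = 3 A result sketched in the worked example):

```r
# Ohm's law and the two power forms for a simple series circuit
V     <- 18          # assumed supply voltage (so that I = 3 A, as in the text)
R_tot <- 2 + 4       # two series resistors: R_tot = 6 ohms
I     <- V / R_tot   # Ohm's law: I = 3 A

# P = I^2 * R and P = V^2 / R agree whenever V = I * R holds
P1 <- I^2 * R_tot    # 54 W
P2 <- V^2 / R_tot    # 54 W
all.equal(P1, P2)    # TRUE
```

The two power formulas disagree only when the V and R (or I and R) plugged in do not refer to the same circuit element, which is the usual source of the confusion raised in the quoted exchange.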
2022-11-30 03:28:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7500020265579224, "perplexity": 2348.9051570564384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710719.4/warc/CC-MAIN-20221130024541-20221130054541-00132.warc.gz"}
https://www.dsprelated.com/freebooks/mdft/Normalized_DFT_Power_Theorem.html
#### Normalized DFT Power Theorem

Note that the power theorem would be more elegant if the DFT were defined as the coefficient of projection onto the normalized DFT sinusoids. That is, for the normalized DFT (§6.10), the power theorem becomes simply ⟨x, y⟩ = ⟨X̃, Ỹ⟩ (normalized DFT case). We see that the power theorem expresses the invariance of the inner product between two signals in the time and frequency domains. If we think of the inner product geometrically, as in Chapter 5, then this result is expected, because x and X̃ are merely coordinates of the same geometric object (a signal) relative to two different sets of basis signals (the shifted impulses and the normalized DFT sinusoids). Next Section: Illustration of the Downsampling/Aliasing Theorem in Matlab. Previous Section: Application of the Shift Theorem to FFT Windows.
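A quick numerical confirmation in R (the normalized DFT is obtained here by scaling R's fft() by 1/sqrt(N); the random test signals are arbitrary):

```r
# Check <x, y> = <X, Y> for the normalized DFT
set.seed(1)
N <- 16
x <- complex(real = rnorm(N), imaginary = rnorm(N))
y <- complex(real = rnorm(N), imaginary = rnorm(N))

ndft <- function(s) fft(s) / sqrt(length(s))  # normalized DFT
ip   <- function(a, b) sum(a * Conj(b))       # inner product <a, b>

ip(x, y)                              # time-domain inner product
ip(ndft(x), ndft(y))                  # same value in the frequency domain
abs(ip(x, y) - ip(ndft(x), ndft(y)))  # ~1e-15: equal up to rounding
```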
2019-05-20 15:05:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9400959014892578, "perplexity": 978.1112057916432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256040.41/warc/CC-MAIN-20190520142005-20190520164005-00382.warc.gz"}
http://mathoverflow.net/questions/63424/systole-and-residualy-finite-fundamental-group
# Systole and residually finite fundamental group

It is well known that if the fundamental group of a manifold M is residually finite, then for every point p in M and every positive ε there is a finite covering such that for a point q in the fiber over p we have systole(q) > ε. Can anyone tell me how to prove this?

Assume that $M$ is compact. Let $\tilde{M}$ be the universal cover of $M$, and let $G$ be the fundamental group of $M$, so that $G$ acts properly, freely, co-compactly on $\tilde{M}$. Let $x_0$ be any point in $\tilde{M}$. As is well known, the orbit map $G\rightarrow\tilde{M}:g\mapsto gx_0$ is a quasi-isometry, where $G$ is endowed with the word metric associated with some finite generating set. Fix $R>0$, and let $N$ be a normal, finite-index subgroup of $G$ which meets the ball $B(e,R)$ in $G$ only at the identity $e$ (here we use the residual finiteness of $G$). Then in the finite graph $N\backslash G$, the systole at $e$ will be larger than $R$. This means that the finite cover $N\backslash\tilde{M}$ will have, at $x_0$, a systole larger than some fixed affine function of $R$. Recall the following result by Milnor (Lemma 2 in "A note on curvature and fundamental group", J. Diff. Geom. 2 (1968), 1-7), sometimes called the "fundamental theorem of geometric group theory": Let $X$ be a proper geodesic space and $G$ be a discrete group of isometries of $X$ acting properly co-compactly on $X$. Then $G$ is finitely generated and, when endowed with the word metric, $G$ is quasi-isometric to $X$. –  Alain Valette Apr 30 '11 at 12:58
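To spell out the last step of the argument: writing the orbit quasi-isometry with constants (λ, c), the estimate behind "larger than some fixed affine function of R" is, in the answer's notation,

```latex
% Essential loops through Nx_0 in N\backslash\tilde{M} lift to geodesic
% segments from x_0 to n x_0 with n \in N \setminus \{e\}, so
\mathrm{sys}(N\backslash\tilde{M},\, Nx_0)
  \;=\; \min_{n \in N \setminus \{e\}} d(x_0,\, n x_0)
  \;\ge\; \frac{1}{\lambda} \min_{n \in N \setminus \{e\}} |n|_S \;-\; c
  \;>\; \frac{R}{\lambda} \;-\; c,
% which exceeds any given \varepsilon once R is large enough, since
% N \cap B(e,R) = \{e\} forces |n|_S > R for all n \neq e.
```

This is only a sketch of the inequality chain implicit in the answer, with (λ, c) denoting unspecified quasi-isometry constants.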
2015-09-03 07:29:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9540712237358093, "perplexity": 70.26638008273748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645305536.82/warc/CC-MAIN-20150827031505-00125-ip-10-171-96-226.ec2.internal.warc.gz"}
https://ajayshahblog.blogspot.com/2016/04/motivations-for-capital-controls-and.html
## Tuesday, April 05, 2016

### Motivations for capital controls and their effectiveness

by Radhika Pandey, Gurnain K. Pasricha, Ila Patnaik, Ajay Shah.

The global financial crisis has re-opened the debate on the place of capital controls in the policy toolkit of emerging-market economies (EMEs). The volatility of capital flows during and after the global financial crisis, and the use of capital controls in major EMEs, spawned a vigorous debate among policy-makers on the legitimacy and usefulness of capital controls. In order to aid the development of best practices in capital controls policy, the literature needs to address four questions:

1. Under what circumstances do policy-makers utilise capital controls? Do policy-makers use capital controls as macroprudential tools, as envisioned in the recent literature?
2. What impact do different capital controls have?
3. Do the benefits outweigh the costs?
4. How should real-world institutional arrangements be constructed, to utilise these tools appropriately?

In a recent paper (Pandey et al., 2016) we offer new evidence on the first and second of these questions. A rich literature has sprung up in recent years, which has re-engaged with these questions. A number of recent studies examine the effectiveness of controls in a single country (Brazil or Chile) or a multi-country setting. See, for example, Alfaro et al., 2015; Fernandez et al., 2015; Forbes and Klein, 2015; Pasricha et al., 2015. A full list of references is in our paper. In this literature, several researchers have argued that capital controls may be particularly effective in a country like India, with the legal and administrative machinery to implement controls (Habermeier et al., 2011; Klein, 2012). Indian policy makers have modified the capital control framework frequently to address concerns about the exchange rate, country risk perception and other issues. For example, page 15 of RBI's 2014 Annual Report states that RBI's response to the developments following the US Fed's indication that it would taper its large-scale asset purchase program aimed at "containing exchange rate volatility, compressing the current account deficit (CAD) and rebuilding buffers." This response included the use of capital controls, foreign exchange intervention as well as interest rate changes. India is thus a good laboratory for studying the motivations and consequences of capital controls.

Credible research designs in this field require precise measurement of capital controls or capital control actions (CCAs). There are many concerns about the measurement obtained through conventional multi-country databases. We comprehensively analyse primary legal documents from 2004 to 2013, in order to construct a new instrument-level dataset about every capital control action for one asset class (foreign borrowing by firms) for one country (India). In constructing this database, we differentiate between capital control announcements and capital control instruments (e.g., controls on minimum maturity of loans, controls on eligible borrowers, interest rate ceilings, etc.). In India, several instruments can be changed in the same announcement, and we count each instrument separately. We compare our approach with other recent work that compiles datasets on capital control actions (e.g., Pasricha et al., 2015; Pasricha, 2012; Forbes et al., 2015) in our paper.

### Q1: Under what circumstances do policy-makers utilise capital controls?
We use event studies to ask whether EME policy-makers use capital controls as macroprudential tools, as envisioned in the recent literature. Specifically, do EME policy-makers use capital controls to pursue macroprudential objectives or to achieve exchange rate objectives? A large literature since 2008 envisions capital controls as prudential tools that can help mitigate systemic financial sector risk, and therefore views them in a more benign light than controls aimed at managing the exchange rate (see Korinek, 2011; Jeanne and Korinek, 2010; Bianchi, 2011, among others). Factually assessing the motivations for past EME CCAs can help inform the debate on capital controls, as well as the resulting international consensus on the rules of governance for their use. On the one hand, if it can be discerned in the data that emerging markets have, in fact, been using capital controls to target systemic risk, this bolsters the legitimacy of the EME case for continued use of these instruments. On the other hand, if the data suggest that CCAs have been used for currency manipulation, this bolsters the case of those who argue that further international discussions on the rules of the game are needed to address multilateral concerns.

Figure 1: Exchange rate change prior to an easing CCA. Positive values denote depreciation.

Figure 2: Exchange rate change prior to a tightening CCA. Positive values denote depreciation.

The key result is in the two figures above. In the five weeks prior to an easing action, USD/INR depreciated by 3% on average. In the five weeks prior to a tightening action, USD/INR appreciated by 5% on average. Not only was the average trend prior to easing of inflow controls that of a depreciation of the currency, this also held true for the broad majority of events in the sample: 42 out of the 68 instances of easing in our sample were preceded by exchange rate depreciation. For the easing events which were preceded by an appreciation, the extent of the appreciation was small compared with that seen for events preceded by depreciation: the largest 5-week appreciation prior to an easing was 1.3%, compared to 9.2% for depreciation. The average appreciation prior to an easing was only 0.5%, compared to an average depreciation prior to easings of 5%. None of the variables that measure the build-up of systemic risk shows a similarly strong pattern in the 6 months prior to the event date (see Figures 5-8 and Table 5 in the paper). The prime motivation for CCAs in India appears to be exchange rate policy and not macroprudential policy. This shows a certain gap between capital controls in the ideal world and capital controls as they operate in the field.

### Q2: What impact do different capital controls have?

Next, we measure the impact of capital control actions. In order to obtain a credible estimation strategy, we utilise propensity score matching to identify time points which are counterfactual. This yields a quasi-experimental design where the treatment effect can be measured. Specifically, for each week in which a capital control action was taken, we identify a week in which macro/financial stress was similar, but no capital control action was taken.

Table 1: Causal impact of CCAs on various indicators

| Impact upon | Coefficient | Std. Error | t-statistic |
| --- | --- | --- | --- |
| Credit growth | -0.44 | 1.7 | -0.46 |
| Stock prices | 1.17 | 3.55 | 0.49 |
| Frankel-Wei residual | -0.23 | 0.92 | -0.25 |
| Net foreign inflow | -0.04 | 0.03 | -1.33 |

Our results suggest that there was no significant impact of the capital control actions, either on the exchange rate or on measures connected with systemic risk (Table 1). Table 1 above shows the coefficient for the period 4 weeks after the capital control action. Similar values are found for all other time horizons. There is no statistically significant impact upon any of the outcomes at horizons from 1 to 4 weeks.

### Broader implications of our results

These results have many implications for the global debate about capital controls. In many countries, the capital controls system was fully dismantled. In such an environment, it may be particularly easy to evade capital controls, for example through financial engineering. The best opportunity to obtain effectiveness of capital controls may be in countries like China or India, where large bureaucracies implement capital controls, and the detailed system of specifying rules about every asset class and every type of economic agent was never dismantled. For this reason, India is an ideal laboratory to study capital controls. If capital controls are found to be useful in India, the case could potentially be made that other EMEs, which dismantled the overall capital controls system, should reverse these reforms. Our results show that Indian authorities seem to be using capital controls as a tool for exchange rate policy and not for systemic risk mitigation, and their actions seem to be ineffective. These results are also consistent with many papers in the recent literature which are skeptical about the usefulness of capital controls (Chamon and Garcia, 2015; Fernandez et al., 2015; Forbes and Klein, 2015; Forbes et al., 2015; Hutchison et al., 2012; Klein, 2012; Patnaik and Shah, 2012; Pasricha et al., 2015; Warnock, 2011). The strength of the research presented here is that it provides credible estimates about one locale, India. A fruitful line of inquiry would be to apply such strategies to multiple countries, and build up a literature with careful assessment of country experience, one country at a time, about the ways in which capital controls are used, in the field, and about their treatment effects. A much more expansive strategy would seek to undertake such thorough instrument-level analysis on a multi-country scale in order to construct a consistent database about capital control actions on the scale of all EMEs or the whole world. Even when capital controls do yield a desired treatment effect, the important question of cost-benefit analysis remains. A body of research is required which would assess the costs and the benefits of utilising these tools. On the cost-assessment side, a wide body of research on capital controls focuses on microeconomic distortions from capital controls (Alfaro et al., 2015; Forbes, 2007). On the benefits side, the evidence is mixed regarding the extent to which capital controls are able to deliver on the objectives of macroeconomic policy. While capital controls seem to be able to change the composition of flows toward more long-term debt, it is not clear to what extent this represents a mislabelling of flows (Magud et al., 2011; Carvalho and Garcia, 2008). Pasricha et al.
(2015) find that capital control actions were not useful in allowing major emerging markets to change their trilemma configurations and Patnaik and Shah (2012) find that the Indian capital controls are not an effective tool for macroeconomic policy. Further research is required on the institutional arrangements for capital controls. As an analogy, monetary policy was long viewed as being effective, but it was only in the 1980s that clarity was obtained around the institutional structure of independent central banks with inflation targets and monetary policy committees. In similar fashion, if capital controls have to graduate into the macroprudential policy toolkit, normative research is required in designing the optimal institutional arrangements for systemic risk regulation with mechanism design, akin to a monetary policy committee, and accountability, similar to an inflation target.

### References

Laura Alfaro, Anusha Chari and Fabio Kanczuk. The real effects of capital controls: Financial constraints, exporters and firm investment. NBER Working Paper 20726, Dec 2014.

Marcos Chamon and Marcio Garcia. Capital controls in Brazil: Effective? Journal of International Money and Finance, 2016 (forthcoming).

Bernardo S. de M. Carvalho and Marcio G. P. Garcia. Ineffective controls on capital inflows under sophisticated financial markets: Brazil in the nineties. In Sebastian Edwards and Marcio G. P. Garcia (Eds.), Financial markets volatility and performance in emerging markets, pp. 29-96. University of Chicago Press.

Andres Fernandez, Alessandro Rebucci, and Martin Uribe. Are capital controls countercyclical? Journal of Monetary Economics, 76:1--14, 2015.

Anton Korinek. The new economics of capital controls imposed for prudential reasons. IMF Working Paper, Dec 2011.

Javier Bianchi. Overborrowing and systemic externalities in the business cycle. American Economic Review, Vol. 101, No. 7, Dec 2011.

Kristin J. Forbes. One cost of the Chilean capital controls: Increased financial constraints for smaller traded firms. Journal of International Economics, 71(2):294-323, Apr 2007.

Kristin J. Forbes and Michael W. Klein. Pick your poison: The choices and consequences of policy responses to crises. IMF Economic Review, 63(1):197--237, Apr 2015. ISSN 2041-4161.

Kristin J. Forbes, Marcel Fratzscher, and Roland Straub. Capital-flow management measures: What are they good for? Journal of International Economics, 96, Supplement 1:S76--S97, 2015. ISSN 0022-1996. 37th Annual NBER International Seminar on Macroeconomics.

K. F. Habermeier, C. Baba, and A. Kokenyne. The effectiveness of capital controls and prudential policies in managing large inflows. IMF Staff Discussion Note SDN/11/14, International Monetary Fund, 2011.

Nicolas E. Magud, Carmen M. Reinhart and Kenneth S. Rogoff. Capital controls: Myth and reality - A portfolio balance approach. NBER Working Paper No. 16805, Feb 2011.

Michael M. Hutchison, Gurnain Kaur Pasricha, and Nirvikar Singh. Effectiveness of capital controls in India: Evidence from the offshore NDF market. IMF Economic Review, 60(3):395--438, 2012.

Michael W. Klein. Capital controls: Gates versus walls. Brookings Papers on Economic Activity, 45(2 (Fall)):317--367, 2012.

Olivier Jeanne and Anton Korinek. Excessive Volatility in Capital Flows: A Pigouvian Taxation Approach. American Economic Review, 100(2), May 2010.

Radhika Pandey, Gurnain Kaur Pasricha, Ila Patnaik, Ajay Shah. Motivations for capital controls and their effectiveness. Working paper, 2016.

Gurnain Kaur Pasricha.
Recent trends in measures to manage capital flows in emerging economies. The North American Journal of Economics and Finance, 23(3), 286-309.

Gurnain Kaur Pasricha, Matteo Falagiarda, Martin Bijsterbosch and Joshua Aizenman. Domestic and multilateral effects of capital controls in emerging markets. NBER Working Paper No. 20822.

Ila Patnaik and Ajay Shah. Did the Indian capital controls work as a tool of macroeconomic policy. IMF Economic Review, 60(3):439--464, 2012.

Frank E. Warnock. Doubts about capital controls. Working Paper 14, Council on Foreign Relations, 2011.

Gurnain Pasricha is at the Bank of Canada, and the other three authors are at the National Institute for Public Finance and Policy, New Delhi. The views expressed in this post are those of the authors. No responsibility for them should be attributed to the Bank of Canada or NIPFP.
2017-11-24 14:56:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2110051065683365, "perplexity": 5311.622980012764}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808254.76/warc/CC-MAIN-20171124142303-20171124162303-00392.warc.gz"}
https://cyclostationary.blog/2021/05/23/sptk-the-moving-average-filter/
# SPTK: The Moving-Average Filter

A simple and useful example of a linear time-invariant system. Good for smoothing and discovering trends by averaging away noise.

Previous SPTK Post: Ideal Filters    Next SPTK Post: The Complex Envelope

We continue our basic signal-processing posts with one on the moving-average, or smoothing, filter. The moving-average filter is a linear time-invariant operation that is widely used to mitigate the effects of additive noise and other random disturbances from a presumably well-behaved signal. For example, a physical phenomenon may be producing a signal that increases monotonically over time, but our measurement of that signal is corrupted by noise, interference, or flaws in the measurement process. The moving-average filter can reveal the sought-after trend by suppressing the effects of the unwanted disturbances. In this post we will use discrete time exclusively. So this post serves two purposes: continuing on with simple idealized filtering concepts and solidifying the discrete-time signal-processing notation and analysis.

The moving-average filter is a procedure that involves a long time series and a short averaging window. The window is slid along the data (it 'moves'), and we take an average of whichever elements of the time series are currently within the window. It is also called the running average, rolling average, moving mean, and rolling mean. Wikipedia says, "A moving average is commonly used with time series data to smooth out short-term fluctuations and highlight longer-term trends." In this post I describe how that works in terms of our linear time-invariant signal processing machinery: impulse responses and frequency responses (transfer functions).

### The Discrete-Time Moving-Average Filter

Suppose we have some collected sampled data $x(k)$. The graph of $x(k)$ versus $k$ reveals an erratic function that is difficult to interpret visually, as in Figure 1. Intuitively, we would like to average out, somehow, the random fluctuations and leave only the true trends produced by the physical process that gave rise to the collected data: When is the data truly increasing? Decreasing? What is the rate of increase or decrease? When is it constant or zero? To find answers to these questions, we can conceive of applying an averaging operation, knowing that if the additive noise at each sample is independent of the additive noise at the others, and the additive noise has a mean of zero, then such averaging will tend to suppress the effects of the noise while maintaining the presence of the non-noise component of the signal. If the signal component of the data is itself not independent sample-to-sample, then it will not be averaged away by the averaging process. The output of such an averaging operation is $y(k)$. We'll define the value of $y(k)$ to be the arithmetic average of the preceding $N-1$ points of $x(k)$ and $x(k)$ itself: $\displaystyle y(k) = \frac{1}{N} \sum_{j=k-N+1}^k x(j) \hfill (1)$

This is illustrated in Figure 2. This averaging operation moves along with time, and so it is understandably called a moving average, as we mentioned at the top of the post. Is this moving average operation a linear time-invariant system? First, let's check for linearity. Let $x_1(k) \mapsto y_1(k)$ and $x_2(k) \mapsto y_2(k)$. Then what is the output for $x(k) = ax_1(k) + bx_2(k)$? If the system is linear, it must be $a y_1(k) + b y_2(k)$.
We have the output $\displaystyle y(k) = \frac{1}{N} \sum_{j=k-N+1}^k x(j) = \frac{1}{N} \sum_{j=k-N+1}^k (ax_1(j) + bx_2(j)) \hfill (2)$ $\displaystyle = \frac{a}{N} \sum_{j=k-N+1}^k x_1(j) + \frac{b}{N} \sum_{j=k-N+1}^k x_2(j) \hfill (3)$ $\displaystyle = a \left(\frac{1}{N} \sum_{j=k-N+1}^k x_1(j) \right) + b \left(\frac{1}{N} \sum_{j=k-N+1}^k x_2(j) \right) \hfill (4)$ $\displaystyle = ay_1(k) + by_2(k) \hfill (5)$ so the defined moving-average operation is linear.

Next we test for time-invariance (here, in discrete time, this is often called shift invariance). If the system is time invariant, then the output for a shifted version of an input is just the shifted output for the original unshifted input. Let $x(k) \mapsto y(k)$. Then look at the output for input $x(k-k_0)$. If this is equal to $y(k-k_0)$, then the moving-average system is time-invariant. Let's take a look. $\displaystyle x(k-k_0) \mapsto y_0(k) = \frac{1}{N} \sum_{j=k-N+1}^k x(j-k_0) \hfill (6)$ Is $y_0(k) = y(k-k_0)$? Let's perform a substitution for $j$ to find the answer. Let $n=j-k_0$. Then if $j=k-N+1$, $n= k-k_0 - N + 1$. Similarly, if $j=k$, then $n = k-k_0$. Our output signal $y_0(k)$ becomes $\displaystyle y_0(k) = \frac{1}{N} \sum_{n=k-k_0-N+1}^{k-k_0} x(n) \hfill (7)$ But the right side of (7) is, by definition, $y(k-k_0)$. So we have $\displaystyle x(k-k_0) \mapsto y_0(k) = y(k- k_0) \hfill (8)$ and therefore the defined moving-average operation is time-invariant.

Since the moving-average operation is equivalent to a linear time-invariant system, it can be represented by an impulse-response function or a frequency-response function (transfer function). Let's find expressions for each of these key system input-output functions.

### Impulse Response Function

To find the impulse-response function, we simply apply an input that is equal to the discrete-time impulse $\delta(k)$, which is shown in Figure 3, and determine the value of $y(k)$ for each possible $k$. In this situation, we rename $y(k)$ as $h(k)$, the conventional symbol used for impulse-response functions in both continuous and discrete time. $\displaystyle y(k) = \frac{1}{N} \sum_{j=k-N+1}^k x(j) \hfill (9)$ implies that the impulse response is given by $\displaystyle h(k) = \frac{1}{N} \sum_{j=k-N+1}^k \delta(j) \hfill (10)$ Since $\delta(j)$ is zero except at $j=0$, where it is equal to 1, $h(k)$ will be zero whenever the summation interval $[k-N+1, k]$ does not contain $j=0$. Thus, for all $k<0$, we have the interval $[k-N+1, k]$ which is entirely to the left of zero, provided that $N \ge 1$ (which we assume is true). Therefore, the impulse response is zero for all negative values of $k$. When $k=0$, the summation interval is $[-N+1, 0]$, which does contain zero at the right edge, and so we have $\displaystyle h(0) = \frac{1}{N} \sum_{j=-N+1}^0 \delta(j) = 1/N \hfill (11)$ For $k=1$, we have $\displaystyle h(1) = \frac{1}{N} \sum_{j=2-N}^1 \delta(j) \hfill (12)$ which is equal to $1/N$ provided that $N \ge 2$, so that the interval $[2-N, 1]$ still contains zero. This pattern continues, with the value of $h(k)$ being $1/N$ until $k = N-1$. For $k=N$, the summation interval is $[1, N]$, which no longer contains zero, and so $h(N) = 0$. For $k > N$, the interval continues to slide to the right of zero, and will never again contain zero, so that $h(k) = 0, k \ge N$. In summary, we have the impulse-response function $\displaystyle h(k) = \left\{ \begin{array}{ll} 0, & k < 0 \ \mbox{or} \ k \ge N \\ 1/N, & 0 \leq k \leq N-1 \end{array} \right. \hfill (13)$

The impulse-response function for the moving-average filter with length $N$ is simply a rectangle with width $N$ and height $1/N$. The impulse response is graphed in Figure 4. This is consistent with the intended operation of the filter: sum the previous $N-1$ signal values and the current signal value, and divide by the number of values you summed, which is $N$.

### Frequency Response

The frequency-response function is defined as the response of the system to a sine-wave input, and can be computed, as we've shown in a previous post, using the Fourier transform applied to the impulse response. $\displaystyle H(f) = \sum_{j=-\infty}^\infty h(j) e^{-i 2 \pi f j} \hfill (14)$ This version of $H(f)$ uses a continuous frequency variable $f$. It is periodic with period equal to one. Let's evaluate the transform in (14), then think about discretizing frequency in anticipation of using a computer to analyze the situation in both discrete time and discrete frequency. $\displaystyle H(f) = \sum_{j=0}^{N-1} \left(\frac{1}{N}\right) e^{-i 2 \pi f j} \hfill (15)$ $\displaystyle = \frac{1}{N} \sum_{j=0}^{N-1} \left( e^{-i 2 \pi f} \right)^j = \frac{1}{N} \sum_{j=0}^{N-1} c^j \hfill (16)$ where $c = e^{- i 2 \pi f }$. This last summation is an example of the general geometric series $\displaystyle S = \sum_{k=0}^{N-1} a^k = \frac{1-a^N}{1-a} \hfill (17)$ So the frequency response for the moving-average filter is given by $\displaystyle H(f) = \frac{1}{N} \left( \frac{1-e^{- i 2 \pi f N}}{1 - e^{-i 2 \pi f}} \right) \hfill (18)$ $\displaystyle = \frac{1}{N} \frac{e^{-i\pi f N} \left( e^{i \pi f N} - e^{-i \pi f N} \right)}{e^{-i\pi f} \left(e^{i\pi f} - e^{-i\pi f}\right)} \hfill (19)$ $\displaystyle = \frac{1}{N} e^{-i \pi f (N-1)} \left( \frac{\sin(\pi f N)}{\sin(\pi f)} \right) \hfill (20)$

### Notes on the Frequency Response

We make the following observations about the frequency response.

1. The derived frequency-response function is similar to the Fourier transform of a continuous-time rectangle function, which is a sinc function, but it is not identical to a sinc. In particular, the denominator contains the $\sin(\cdot)$ function instead of being proportional to $f$.
2. The derived frequency response is periodic with period $1$. This periodicity follows from the periodicity of the discrete-time Fourier transform.
3. For small values of $f$, $\sin(\pi f) \approx \pi f$, and so $|H(f)| \approx (1/N) \left| \frac{\sin(\pi f N)}{\pi f} \right| = \left| \frac{\sin(\pi f N)}{\pi f N} \right| = | \mbox{\rm sinc}(fN)|$. Therefore, the bulk of the energy in $H(f)$ is concentrated near $f=0$. The moving-average filter passes lower frequencies and attenuates or rejects higher frequencies.
4. $|H(0)| = 1$ since $\mbox{\rm sinc}(0) = 1$.
5. $|H(f)| = 0$ for $f = m/N, m\neq 0$. For example, $H(1/N) = 0$, which implies that the moving-average filter output is zero for an input sine wave with frequency $1/N$. (Can you see this from the time-domain operation of the filter?)

### Sampled Frequency Response and Zero-Padding

The frequency response we've derived is valid for any $f$, but it is also periodic with period 1. So the important values of $f$ reside in a single period, say $f \in [0, 1]$. We can look at $M$ samples of the frequency response by simply substituting into our general formula. Say we want to look at the $M$ frequency values $\displaystyle f_k = k/M, k = 0, 1, 2, \ldots, M-1 \hfill (21)$ where $M \ge N$.
Then we have the sampled frequency-response function $\displaystyle H(f_k) = H(k/M) = \frac{1}{N} e^{-i \pi k(N-1)/M} \left( \frac{\sin(\pi k N/M)}{\sin(\pi k/M)} \right) \hfill (22)$ for $k = 0, 1, 2, \ldots, M-1$. How can we compute this set of values using a transform? We have $\displaystyle H(k/M) = \sum_{j=-\infty}^\infty h(j) e^{-i 2 \pi k j /M} \hfill (23)$ $\displaystyle = \sum_{j=0}^{N-1} h(j) e^{-i 2 \pi k j /M} \hfill (24)$ To use the power and efficiency of the FFT, we can extend the sum from $N-1$ to $M-1$, and simply define the missing values of $h(j)$ to be zero. That is, pad $h(j)$ with $M-N$ zeros and compute $\displaystyle H(k/M) = \sum_{j=0}^{M-1} h(j) e^{-i 2 \pi k j/ M} \hfill (25)$ This is called zero-padding, and it allows us to flesh out the details of the shape of the frequency-response function. When we use, say, $M=2N$, we pad with $N$ zeros, and the effect (not proven here) is that the original values of $H(f)$ are preserved, but a new value is inserted between each pair of them. Later in the post we will provide a plot of $H(f)$ that uses $N=10$ and $M=256$.

### Step Response

Let's switch gears and go back to the time domain to look at the response of the moving-average filter to a unit-step input. Recall that the discrete-time unit-step function is denoted by $u(k)$, and is equal to one for $k \ge 0$ and zero otherwise. Let's evaluate the convolution sum to determine the step response. $\displaystyle y(k) = x(k) \otimes h(k) \hfill (26)$ $\displaystyle = u(k) \otimes h(k) \hfill (27)$ $\displaystyle = \sum_{j=-\infty}^\infty h(k-j) u(j) \hfill (28)$ For $k < 0$, $h(k-j)$ has no non-zero samples for $j \ge 0$. That is, $h(k-j)$ and $u(j)$ have no overlap where $u(j)$ is nonzero. Therefore, the step-response $y(k) = 0$ for negative values of $k$. For $k \ge 0$ and $k \le N-2$, the overlap between $h$ and $u$ is partial. We obtain $\displaystyle y(k) = \sum_{j=0}^k h(k-j)(1), \ \ 0 \leq k \leq N-2 \hfill (29)$ $\displaystyle y(k) = \sum_{j=0}^k (1/N) = (k+1)/N, \ \ 0 \leq k \leq N-2 \hfill (30)$ Finally, when $k \ge N-1$, the overlap is complete, and we have $\displaystyle y(k) = \sum_{j=k-N+1}^k h(k-j)(1) = \sum_{j=k-N+1}^k \frac{1}{N} = \frac{1}{N} (k - (k-N+1) + 1) = 1 \hfill (31)$ The step response function is shown in Figure 5.

### Interconnection of Multiple MA Filters

Here we look at connecting multiple moving-average filters in series, which means that the equivalent linear time-invariant system has a frequency-response function that is the multiplication of the individual moving-average frequency responses. Assume that we connect $K$ identical moving-average filters, each having parameter $N$ and frequency-response function $H(f)$. Then the frequency response for the equivalent system is $G(f)$, $\displaystyle G(f) = \prod_{j=1}^K H(f) \hfill (32)$ On the other hand, the impulse response for the equivalent system is the convolution of the $K$ individual impulse-response functions, $\displaystyle g(k) = h(k) \otimes h(k) \otimes \ldots \otimes h(k) \hfill (33)$ What do the impulse responses and frequency responses look like as a function of $K$? We've already encountered the idea of convolving a rectangle with itself multiple times, so we know, for example, that for $K=2$, the function $g(k)$ will be a triangle. More importantly, the frequency response as a function of $K$ becomes more and more selective. That is, the attenuation of input frequency components far away from $f=0$ gets very large as $K$ grows.
The impulse and frequency-response functions for various $K$ are shown in Figure 6. In this figure, the frequency responses were calculated by Fourier transforming a zero-padded version of the corresponding impulse responses. In terms of the variables defined above, we have $N=10$ and $M = 256$. This permits a relatively fine sampling of the frequency-response functions.

### A Look at the Residual of the MA Filter

The moving-average filter output reveals the trends, if any, exhibited by the underlying noise-free data, and so is useful as a way to "de-noise" arbitrary datasets. But it may also be of interest to determine the properties of the noise that is rejected by the filter. In this case we want to find the trends with the moving-average filter, then remove them from the input, leaving only the high-frequency fluctuations and no trending signals to contaminate them. Mathematically, the signal we desire is the input minus the moving-average output, $\displaystyle z(k) = x(k) - y(k) \hfill (34)$ This can be expressed as the following function of time $\displaystyle z(k) = x(k) - \frac{1}{N} \sum_{j=k-N+1}^k x(j) \hfill (35)$ $\displaystyle = x(k) - (1/N)x(k-N+1) - (1/N)x(k-N+2) - \ldots - (1/N)x(k) \hfill (36)$ $\displaystyle = \frac{-1}{N}\sum_{j=k-N+1}^{k-1} x(j) + \frac{N-1}{N} x(k) \hfill (37)$ which isn't particularly revealing. Instead, we can attempt to use previous results and approaches to simplify the analysis. The signal $x(k)$ can be obtained by convolving itself with an impulse, and $y(k)$ is obtained by convolving $x(k)$ with $h(k)$ as in Figure 7. Since we know $h(k)$ is a rectangle with height $1/N$ and width $N$, we easily obtain $g(k)$ as in Figure 8. The frequency response is $\displaystyle G(f) = {\cal F} \left[ \delta(k) \right] - H(f) \hfill (38)$ $\displaystyle = 1 - H(f) \hfill (39)$ $\displaystyle = 1 - \frac{1}{N} e^{-i \pi f(N-1)} \left( \frac{\sin(\pi f N)}{\sin(\pi f)} \right) \hfill (40)$ The impulse response $g(k)$ and corresponding frequency response $G(f)$ are plotted in Figure 9. It is clear that this filter passes only relatively high frequencies, providing the desired inverse operation relative to a moving average.

### Significance of Moving-Average Filters in CSP

The main use I have for a moving-average filter is as a smoothing filter in the frequency-smoothing method (FSM) of spectral correlation function estimation. Certain kinds of specialized signal-to-noise ratio calculations can also benefit from the use of a fast smoothing operation, and the computational cost of a moving-average filtering operation is quite low because of the unique shape of the impulse response, a rectangle. This enables the use of a head-tail running sum to implement the convolution, avoiding having to do $N$ multiplications for each output point of the filter.

Previous SPTK Post: Ideal Filters    Next SPTK Post: The Complex Envelope
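The head-tail running-sum trick mentioned above is easy to make concrete. Here is a minimal sketch in C (my own illustration, not code from the post; the function and signal names are invented) that implements the causal length-$N$ moving average of equation (1), assuming $x(j) = 0$ for $j < 0$, using one addition and one subtraction per output sample:

```c
#include <stdio.h>

/* Causal length-N moving average via a head-tail running sum:
   y[k] = (1/N) * sum_{j=k-N+1}^{k} x[j], taking x[j] = 0 for j < 0. */
static void moving_average(const double *x, double *y, int len, int N)
{
    double run = 0.0;              /* running sum of the last N inputs */
    for (int k = 0; k < len; k++) {
        run += x[k];               /* head: newest sample enters       */
        if (k >= N)
            run -= x[k - N];       /* tail: oldest sample leaves       */
        y[k] = run / N;
    }
}

int main(void)
{
    enum { LEN = 20 };
    double x[LEN], y[LEN];

    /* Noisy ramp: slow trend 0.1*k plus an alternating +/-1 wiggle. */
    for (int k = 0; k < LEN; k++)
        x[k] = 0.1 * k + ((k % 2) ? 1.0 : -1.0);

    moving_average(x, y, LEN, 4);  /* N = 4 */

    for (int k = 0; k < LEN; k++)
        printf("%2d  x = %6.2f  y = %6.2f\n", k, x[k], y[k]);
    return 0;
}
```

With $N = 4$, the alternating $\pm 1$ disturbance sits at frequency $f = 1/2 = 2/N$, one of the nulls $f = m/N$ noted above, so once the window is full ($k \ge N-1$) the wiggle is cancelled exactly and only the smoothed trend remains.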
2022-10-03 11:55:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 368, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8820257186889648, "perplexity": 332.24462350486465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00378.warc.gz"}
http://physics.stackexchange.com/tags/rotational-dynamics/hot?filter=month
# Tag Info

5 You can always decompose a motion like this into two parts: (1) rolling without slipping and (2) slipping without rolling. What is slipping without rolling? It means the object moves uniformly in one direction along the surface, with no angular velocity about the object's own center of mass. For instance, a box that is pushed along the ground can easily ...

2 There are many things wrong in your concepts. Let's address them one by one. A body without a force acting on it can never rotate, because in rotation its velocity is changing at each instant, hence its momentum is changing at each instant. But to maintain constant speed, the force must be such that it never does any work, so as to maintain the constancy of kinetic ...

2 How can we apply angular momentum conservation when friction is present? Why not? If we have a closed system, momentum and angular momentum are conserved. In this case, the full system is disk A and disk B, and there are no external forces, so the system is closed. There are internal forces, namely in this case, friction, but that doesn't matter. You ...

1 It depends on the friction of the contact. With a frictionless plane the top would precess around its center of gravity and the contact point will prescribe a circle. Add friction, and the friction force translates the center of gravity the same way tire traction translates a car. Here you have the cases of a) pure rolling, or b) rolling with slipping. ...

1 If the wheel is rolling without slipping, what's the velocity of the point at the base of the wheel? It is... zero! Convince yourself that the velocity must be zero, since if it wasn't zero, the wheel wouldn't be rolling without slipping. So far the explanation is correct. "No slipping" refers really to some non-zero interval of time, and to the state of ...

1 Basically, it means that at each instant the bottom-most point has $0$ velocity; it doesn't mean that the point has no acceleration. But at an instant it has $0$ velocity, and because of that, at each instant $v_{cm}=\omega r$ for the bottom-most point; if this doesn't happen, then static friction acts to make it $0$. It's like suppose you are walking ...

1 The relative speed of the point of contact of the rolling body w.r.t. the surface on which it rolls is zero. If the surface is at rest then the velocity of the point of contact of the rolling body and surface is zero. Mathematically: $$v_1 -\omega R=v_2$$ Also we can get the relation in accelerations: differentiate the above equation. ...

1 You don't need to apply Steiner's theorem to the point mass. The point mass finds itself at a distance (apparently) $R$ from the x-axis. Since the moment of inertia is an extensive quantity, you can simply add all moments of inertia. There's the moment of inertia of the solid disk with respect to its diameter. You have to 'Steiner' that away from a distance ...

1 Firstly, the definition of torque is $\vec{r}\times \vec{F}$ and of angular momentum $\vec{r}\times \vec{p}$. And now w.r.t. your frame $\vec{F}$ & $\vec{p}$ & $\vec{r}$ are all relative, but Newton's second law of rotation holds for all frames. Because all points are just frames, and to maintain the distances in a frame, you've to move with that frame, ...

1 I'll expand my comment into an answer. I would take $\mathbf{T}=d\mathbf{L}/dt$ as the definition of torque, but it sounds like the OP takes $\mathbf{T}=\mathbf{r}\times\mathbf{F}$ as the definition. Either way, we need to prove that the two expressions are equivalent for a system of particles.
The total angular momentum is $\mathbf{L}_{tot}=\sum$ ...
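The last snippet is cut off by the tag page. For reference, a standard completion of that system-of-particles argument (my own sketch, not the answer's verbatim text) is:

```latex
% Sketch: T = sum_i r_i x F_i and T = dL/dt agree for a system of particles.
\mathbf{L}_{tot} = \sum_i \mathbf{r}_i \times \mathbf{p}_i
\quad\Longrightarrow\quad
\frac{d\mathbf{L}_{tot}}{dt}
  = \sum_i \bigl( \dot{\mathbf{r}}_i \times \mathbf{p}_i
                + \mathbf{r}_i \times \dot{\mathbf{p}}_i \bigr)
  = \sum_i \mathbf{r}_i \times \mathbf{F}_i
```

The first term vanishes since $\dot{\mathbf{r}}_i \times m_i \dot{\mathbf{r}}_i = \mathbf{0}$, and for internal forces acting along the line joining each pair of particles the pairwise torques cancel by Newton's third law, so only external forces contribute to the sum.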
2013-05-18 18:43:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7959062457084656, "perplexity": 261.38284652501494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382705/warc/CC-MAIN-20130516092622-00029-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3151345/determine-the-minimal-polynomial-of-sqrt2-sqrt2-over-bbb-q-and-find-i
# Determine the minimal polynomial of $\sqrt{2+\sqrt{2}}$ over $\Bbb Q$ and find its Galois group over $\Bbb Q$

Determine the minimal polynomial of $\sqrt{2+\sqrt{2}}$ over $\Bbb Q$ and find its Galois group over $\Bbb Q$. I computed and found that $p(x)=x^4-4x^2+2$ has $\sqrt{2+\sqrt{2}}$ as a root. $p$ is clearly monic and irreducible (Eisenstein), thus $p$ is the required minimal polynomial. $p$ has roots $\pm \sqrt{2\pm \sqrt{2}}$. Since $p$ has no multiple root in its splitting field, say $K$, the extension $K|\Bbb Q$ is Galois, with $|Gal(K| \Bbb Q)|= [K:\Bbb Q]=4$. I don't seem to find any intermediate extension $\Bbb Q \subset M \subset K$ s.t. $[K:M]=2$ other than $\Bbb Q(\sqrt{2})$. In that case, $Gal(K| \Bbb Q)=\Bbb Z_4$. Here my argument is shaky.

Set $a=\sqrt{2+\sqrt2}$ and $b=\sqrt{2-\sqrt2}$. Clearly $\Bbb Q(\sqrt2)$ is fixed under $a\mapsto -a, b\mapsto -b$, as $\sqrt2=a^2-2=2-b^2=ab$, and this permutation has order $2$. Hint: What about sending $a$ to $b$? Where does $b$ go? Which order does this element of the Galois group have? What is the fixed subfield?

More details: Note that $a^2-2-ab=0$. Any automorphism $f$ of $K$ must respect this, which is to say $f(a)^2-2-f(a)f(b)=0$. So, if $f(a)=b$, what do we get? $b^2-2-bf(b)=0\\ -(2-b^2)-bf(b)=0\\ -\sqrt2=bf(b)$ Among $\pm a, \pm b$, only $-a$ has this property. Which means that if $f(a)=b$, we must have $f(b)=-a$, meaning $f$ has order $4$. This means the Galois group must be $\Bbb Z_4$.

• You set $a$ and $b$ to the same thing. Is that right? – TonyK Mar 17 at 10:23
• @TonyK No, it isn't. Thanks. – Arthur Mar 17 at 10:24
• Can you kindly give a detailed answer? I am totally new to Galois theory – reflexive Mar 17 at 10:31
• Yeah got it, $\sigma(a)=b$ has order > 2, we are done! – reflexive Mar 17 at 10:44
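To make the accepted answer's conclusion fully explicit, here is the order-4 automorphism written out as a cycle (my own summary of the argument above, in the thread's notation):

```latex
% sigma is determined by sigma(a) = b:
% sqrt2 = a^2 - 2 gives sigma(sqrt2) = b^2 - 2 = -sqrt2, and from ab = sqrt2
% we get b * sigma(b) = -sqrt2, forcing sigma(b) = -a.
\sigma:\; a \mapsto b \mapsto -a \mapsto -b \mapsto a,
\qquad \sigma^2 \neq \mathrm{id}, \qquad \sigma^4 = \mathrm{id}.
```

Hence $Gal(K|\Bbb Q) = \langle\sigma\rangle \cong \Bbb Z_4$, and its unique subgroup of index $2$, namely $\langle\sigma^2\rangle$, fixes exactly $\Bbb Q(\sqrt2)$, consistent with the observation that $\Bbb Q(\sqrt2)$ is the only intermediate quadratic field.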
2019-06-24 21:48:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 39, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9218056201934814, "perplexity": 180.67911875349552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999740.32/warc/CC-MAIN-20190624211359-20190624233359-00088.warc.gz"}
https://kb.osu.edu/dspace/handle/1811/11555
# Knowledge Bank

## University Libraries and the Office of the Chief Information Officer

# NEAR INFRA-RED FOURIER TRANSFORM SPECTRA OF VO

Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/11555

Files: 1981-MG-04.jpg (41.81Kb, JPEG image)

Title: NEAR INFRA-RED FOURIER TRANSFORM SPECTRA OF VO
Creators: Cheung, A. S.-C.; Taylor, A. W.; Merer, A. J.
Issue Date: 1981
Abstract: Fourier transform spectra of the A-X electronic transition of VO (near 1.05 $\mu$m) have been recorded at high resolution with the 1 meter FT spectrometer of the McMath Solar Telescope at Kitt Peak National Observatory. Preliminary analysis shows that the A electronic state is a $^{4}\Pi$ state close to case (b) coupling. The spin-orbit structure of the $A\,^{4}\Pi$ state appears to be very irregular, and there is evidence for a second electronic state lying extremely close.
URI: http://hdl.handle.net/1811/11555
Other Identifiers: 1981-MG-4
2014-04-20 10:31:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7168977856636047, "perplexity": 6883.708840437087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
https://csi.dgist.ac.kr/index.php?n=Lectures.RecentChanges
# RecentChanges

• Fall2019 . . . December 11, 2019, at 01:51 AM EST by ?:
• Lectures . . . September 03, 2019, at 01:19 PM EST by ?:
• Spring2019 . . . May 29, 2019, at 11:11 AM EST by ?:
• Spring2018 . . . June 05, 2018, at 12:24 PM EST by ?:
• Fall2017 . . . March 18, 2018, at 12:54 AM EST by ?:
• Spring2017 . . . June 08, 2017, at 09:17 AM EST by ?:
• Spring2016 . . . April 24, 2017, at 10:24 AM EST by ?:
• Assignment2Spring2017 . . . March 30, 2017, at 11:14 AM EST by ?:
• Fall2016 . . . December 19, 2016, at 05:37 AM EST by ?:
• Spring2015 . . . April 28, 2016, at 07:51 PM EST by ?:
• Assignment4Spring2016 . . . April 05, 2016, at 11:26 PM EST by ?:
• Spring2014 . . . March 06, 2016, at 07:08 PM EST by ?:
• Fall2015 . . . December 03, 2015, at 07:49 PM EST by ?:
• Fall2014 . . . October 26, 2015, at 08:18 PM EST by ?:
• Assignment3Spring2015 . . . April 01, 2015, at 01:37 PM EST by ?:
• Fall2013 . . . October 16, 2014, at 02:03 PM EST by ?:
• Assignment4 . . . April 10, 2014, at 09:47 AM EST by ?:
• Spring2013 . . . March 05, 2014, at 02:50 AM EST by ?:
• Fall2012 . . . October 06, 2013, at 10:31 PM EST by ?:
• Assignment3 . . . April 25, 2013, at 08:33 PM EST by ?:
• Spring2011 . . . March 07, 2013, at 12:25 PM EST by ?:
• Fall2011 . . . September 10, 2012, at 12:29 AM EST by ?:
• Spring2012 . . . June 12, 2012, at 02:41 PM EST by ?:
2020-01-21 06:33:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8558551669120789, "perplexity": 3510.0537275537995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601615.66/warc/CC-MAIN-20200121044233-20200121073233-00153.warc.gz"}
https://www.physicsforums.com/threads/rate-of-decay.57309/
# Rate of decay

1. Dec 19, 2004 ### Cheman Rate of decay.... "The rate of decay is directly proportional to the number of atoms present, following an exponential law, the rate of decay decreasing with time" - but why is this the case?

2. Dec 19, 2004 ### mathman As you said, the rate is proportional to the number present. This decreases with time (by decaying!). The usual measure of the decay rate is half-life, i.e. the time for half the atoms to have decayed.

3. Dec 21, 2004 ### Cheman But what does the statement actually mean in terms of physics?

4. Dec 21, 2004 ### mathman What part of the original statement are you having trouble with?

5. Dec 21, 2004 ### Astronuc Staff Emeritus Basically in nature, it has been observed that radionuclides decay primarily by beta-emission or alpha-emission, and sometimes by gamma-emission. The process is very random, in the sense that one cannot predict precisely when a given unstable atom will decay. Instead, one can take a population of the particular atom and observe that decays do occur according to a very simple first order differential equation. dN/dt = -$\lambda$N, where N is the number of particles at any given time (e.g. N = N(t)) and $\lambda$ is the decay constant, which is unique to that nuclide. The decay constant $\lambda$ = (ln 2)/$t_{1/2}$, where $t_{1/2}$ is the half-life, which is the period after which approximately one-half the radioactive atoms have decayed. Here are some useful references: Half-life - http://hyperphysics.phy-astr.gsu.edu/hbase/nuclear/halfli2.html#c1 http://hyperphysics.phy-astr.gsu.edu/hbase/nuclear/beta.html#c2

6. Dec 25, 2004 ### Cheman But why is the first equation you mention (dN = -$\lambda$N dt) true? I.e., why does the rate of decay depend on the number of nuclei present? Thanks.

7. Dec 25, 2004 ### mathman Whether or not any particle decays is independent of what other particles are doing. Therefore the decay rate (per unit time) is directly proportional to the number of particles still around.

8. Dec 26, 2004 ### Cheman But why is the rate of decay directly proportional to the number of nuclei present? What does the number of nuclei have to do with the rate if the decay of any particular nucleus is completely random? Thanks.

9. Dec 26, 2004 ### mathman It is a statistical fact. Specifically, the probability that any particle decays during a very small unit of time is fixed. Call it p. Then on average, np particles will decay (where n is the number of particles at that time) during this small interval of time. Taking the limit as the interval of time goes to 0, we end up with the simple differential equation as shown above (Astronuc).

10. Dec 26, 2004 ### Antimatter Look, I'll try and give it my best shot. Radioactive decay is a quantum-mechanical effect; Fermi's Golden Rule (which was found by Dirac, really) is an expression that gives the probability for a transition from undecayed to decayed nucleus. Apparently the probability for decay per unit time is independent of the time: a radioactive nucleus is as likely to decay in, say, the next 5 minutes as it is to decay between 10 and 15 minutes from now, given that it hasn't decayed in the first 10 minutes. When you have 2 kazillion nuclei, we can expect 50% (1 kazillion) of them to have decayed when one half-life has expired; if you have 4 kazillion nuclei, virtually split that up in 2x2 kazillion nuclei.
Both sets of 2 kazillion will have 50% (1 kazillion) decayed nuclei, yielding 2 kazillion decayed nuclei in total, which is double that of when you started out with 2 kazillion nuclei, after the same time of 1 half-life. Hence the direct proportionality.
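A quick numerical check of mathman's statistical argument in post #9 (my own sketch, not from the thread): give every surviving nucleus the same fixed decay probability $p$ per time step, independent of the others, and the population tracks $N_0 e^{-\lambda t}$ with $\lambda = -\ln(1-p) \approx p$ for small $p$:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    const long   N0 = 100000;             /* initial number of nuclei   */
    const double p  = 0.01;               /* decay probability per step */
    const double lambda = -log(1.0 - p);  /* exact: survival = (1-p)^t  */

    long N = N0;
    srand(42);
    for (int t = 0; t <= 500; t++) {
        if (t % 100 == 0)
            printf("t = %3d   simulated N = %6ld   N0*exp(-lambda*t) = %8.0f\n",
                   t, N, N0 * exp(-lambda * t));
        long survivors = 0;
        for (long i = 0; i < N; i++)      /* each nucleus is independent */
            if ((double)rand() / RAND_MAX >= p)
                survivors++;
        N = survivors;
    }
    return 0;
}
```

The half-life falls out as $t_{1/2} = \ln 2/\lambda \approx 69$ steps here, matching Astronuc's relation $\lambda = (\ln 2)/t_{1/2}$ in post #5.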
2018-02-25 00:01:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8545892834663391, "perplexity": 1036.6782655880957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816068.93/warc/CC-MAIN-20180224231522-20180225011522-00066.warc.gz"}
http://starlink.eao.hawaii.edu/devdocs/sun211.htx/sun211ss159.html
astPointList

Create a PointList

Description:
This function creates a new PointList object and optionally initialises its attributes. A PointList object is a specialised type of Region which represents a collection of points in a coordinate Frame.

Synopsis
AstPointList *astPointList( AstFrame *frame, int npnt, int ncoord, int dim, const double *points, AstRegion *unc, const char *options, ... )

Parameters:

frame
A pointer to the Frame in which the region is defined. A deep copy is taken of the supplied Frame. This means that any subsequent changes made to the Frame using the supplied pointer will have no effect on the Region.

npnt
The number of points in the Region.

ncoord
The number of coordinates being supplied for each point. This must equal the number of axes in the supplied Frame, given by its Naxes attribute.

dim
The number of elements along the second dimension of the "points" array (which contains the point coordinates). This value is required so that the coordinate values can be correctly located if they do not entirely fill this array. The value given should not be less than "npnt".

points
The address of the first element of a 2-dimensional array of shape "[ncoord][dim]" giving the physical coordinates of the points. These should be stored such that the value of coordinate number "coord" for point number "pnt" is found in element "in[coord][pnt]".

unc
An optional pointer to an existing Region which specifies the uncertainties associated with each point in the PointList being created. The uncertainty at any point in the PointList is found by shifting the supplied "uncertainty" Region so that it is centred at the point being considered. The area covered by the shifted uncertainty Region then represents the uncertainty in the position. The uncertainty is assumed to be the same for all points. If supplied, the uncertainty Region must be of a class for which all instances are centro-symmetric (e.g. Box, Circle, Ellipse, etc.) or be a Prism containing centro-symmetric component Regions. A deep copy of the supplied Region will be taken, so subsequent changes to the uncertainty Region using the supplied pointer will have no effect on the created PointList. Alternatively, a NULL Object pointer may be supplied, in which case a default uncertainty is used equivalent to a box 1.0E-6 of the size of the bounding box of the PointList being created. The uncertainty Region has two uses: 1) when the astOverlap function compares two Regions for equality the uncertainty Region is used to determine the tolerance on the comparison, and 2) when a Region is mapped into a different coordinate system and subsequently simplified (using astSimplify), the uncertainties are used to determine if the transformed boundary can be accurately represented by a specific shape of Region.

options
Pointer to a null-terminated string containing an optional comma-separated list of attribute assignments to be used for initialising the new PointList. The syntax used is identical to that for the astSet function and may include "printf" format specifiers identified by "%" symbols in the normal way.

...
If the "options" string contains "%" format specifiers, then an optional list of additional arguments may follow it in order to supply values to be substituted for these specifiers. The rules for supplying these are identical to those for the astSet function (and for the C "printf" function).

Returned Value
astPointList()
A pointer to the new PointList.
Notes:
• A null Object pointer (AST__NULL) will be returned if this function is invoked with the AST error status set, or if it should fail for any reason.

Status Handling
The protected interface to this function includes an extra parameter at the end of the parameter list described above. This parameter is a pointer to the integer inherited status variable: "int *status".
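For orientation, here is a minimal usage sketch in C (my own illustration, not part of this reference page; it assumes the standard AST header "ast.h" and the astFrame constructor documented elsewhere in this manual):

```c
#include "ast.h"
#include <stdio.h>

int main(void)
{
    astBegin;                       /* open an AST object context */

    /* A plain 2-D Frame in which the points will be defined. */
    AstFrame *frame = astFrame( 2, "" );

    /* Three 2-D points stored as points[ncoord][dim]: row 0 holds the
       axis-1 values, row 1 holds the axis-2 values (so npnt = dim = 3). */
    double points[2][3] = { { 0.0, 1.0, 2.0 },     /* axis 1 */
                            { 0.0, 1.0, 4.0 } };   /* axis 2 */

    /* npnt = 3, ncoord = 2, dim = 3; a NULL uncertainty selects the
       default described above. */
    AstPointList *plist = astPointList( frame, 3, 2, 3,
                                        (const double *) points,
                                        NULL, "" );
    if ( !astOK )
        printf( "astPointList failed\n" );
    else
        printf( "Created PointList with %d axes\n",
                astGetI( plist, "Naxes" ) );

    astEnd;                         /* annul objects made since astBegin */
    return 0;
}
```

Note the axis-by-axis layout of "points": it mirrors the "in[coord][pnt]" convention in the parameter description above.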
2019-10-21 05:31:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27603864669799805, "perplexity": 1371.1589108296334}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987756350.80/warc/CC-MAIN-20191021043233-20191021070733-00526.warc.gz"}
https://www.kodymirus.cz/reyman/thesisch2.html
## Chapter 2: Math examples

2.1 Delimiters 2.2 Spacing 2.3 Arrays 2.5 Functions 2.6 Accents

These examples were copied from www.maths.adelaide.edu.au/. Math was converted to MathML; MathJax is used to render it in browsers without MathML support.

### 2.1 Delimiters

See how the delimiters are of reasonable size in these examples $\left(a+b\right)\left[1-\frac{b}{a+b}\right]=a\,,$ $\sqrt{|xy|}\le\left|\frac{x+y}{2}\right|,$ even when there is no matching delimiter $\int_a^b u\,\frac{d^2v}{dx^2}\,dx = \left.u\,\frac{dv}{dx}\right|_a^b - \int_a^b \frac{du}{dx}\,\frac{dv}{dx}\,dx.$

### 2.2 Spacing

Differentials often need a bit of help with their spacing as in $\iint xy^2\,dx\,dy = \frac{1}{6}x^2y^3,$ whereas vector problems often lead to statements such as $u=\frac{-y}{x^2+y^2}\,,\quad v=\frac{x}{x^2+y^2}\,,\quad\text{and}\quad w=0\,.$

### 2.3 Arrays

Arrays of mathematics are typeset using one of the matrix environments as in $\begin{bmatrix} 1 & x & 0 \\ 0 & 1 & -1 \end{bmatrix}\begin{bmatrix} 1 \\ y \\ 1 \end{bmatrix}=\begin{bmatrix} 1+xy \\ y-1 \end{bmatrix}.$ Case statements use cases: Many arrays have lots of dots all over the place as in $\begin{bmatrix} -2 & 1 & 0 & 0 & \cdots & 0 \\ 1 & -2 & 1 & 0 & \cdots & 0 \\ 0 & 1 & -2 & 1 & \cdots & 0 \\ 0 & 0 & 1 & -2 & \ddots & \vdots \\ \vdots & \vdots & \vdots & \ddots & \ddots & 1 \\ 0 & 0 & 0 & \cdots & 1 & -2 \end{bmatrix}$

### 2.4 Equation arrays

In the flow of a fluid film we may report
\begin{align}
u_\alpha &= \epsilon^2 \kappa_{xxx}\left(y-\tfrac{1}{2}y^2\right), \tag{2.1}\\
v &= \epsilon^3 \kappa_{xxx}\, y\,, \tag{2.2}\\
p &= \epsilon \kappa_{xx}\,. \tag{2.3}
\end{align}
Alternatively, the curl of a vector field $(u,v,w)$ may be written with only one equation number:
\begin{align}
\omega_1 &= \frac{\partial w}{\partial y}-\frac{\partial v}{\partial z}\,,\\
\omega_2 &= \frac{\partial u}{\partial z}-\frac{\partial w}{\partial x}\,, \tag{2.4}\\
\omega_3 &= \frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}\,.
\end{align}
Whereas a derivation may look like

### 2.5 Functions

Observe that trigonometric and other elementary functions are typeset properly, even to the extent of providing a thin space if followed by a single letter argument: $\exp(i\theta)=\cos\theta+i\sin\theta\,,\quad \sinh(\log x)=\frac{1}{2}\left(x-\frac{1}{x}\right).$ With sub- and super-scripts placed properly on more complicated functions, $\lim_{q\to\infty}\|f(x)\|_q=\max_x|f(x)|,$ and large operators, such as integrals and sums. In inline mathematics the scripts are correctly placed to the side in order to conserve vertical space, as in $1/(1-x)=\sum_{n=0}^{\infty}x^n$.

### 2.6 Accents

Mathematical accents are performed by a short command with one argument, such as $\tilde{f}(\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)e^{-i\omega x}\,dx\,,$ or $\dot{\vec{\omega}}=\vec{r}\times\vec{I}\,.$

### 2.7 Command definition

The Airy function, $\mathrm{Ai}(x)$, may be incorrectly defined as this integral $\mathrm{Ai}(x)=\int \exp\left(s^3+isx\right)\,ds\,.$ This vector identity serves nicely to illustrate two of the new commands: $\nabla\times\mathbf{q}=\mathbf{i}\left(\frac{\partial w}{\partial y}-\frac{\partial v}{\partial z}\right)+\mathbf{j}\left(\frac{\partial u}{\partial z}-\frac{\partial w}{\partial x}\right)+\mathbf{k}\left(\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}\right).$

### 2.8 Theorems et al.

Definition 1 (right-angled triangles) A right-angled triangle is a triangle whose sides of length $a$, $b$ and $c$, in some permutation of order, satisfy $a^2+b^2=c^2$.

Lemma 2 The triangle with sides of length $3$, $4$ and $5$ is right-angled. This lemma follows from Definition 1 as $3^2+4^2=9+16=25=5^2$.

Theorem 3 (Pythagorean triplets) Triangles with sides of length $a=p^2-q^2$, $b=2pq$ and $c=p^2+q^2$ are right-angled triangles.

Prove this Theorem 3 by the algebra $a^2+b^2=(p^2-q^2)^2+(2pq)^2=p^4-2p^2q^2+q^4+4p^2q^2=p^4+2p^2q^2+q^4=(p^2+q^2)^2=c^2$.
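Since this chapter is itself a demonstration of mathematical typesetting, a plausible LaTeX source sketch for the integration-by-parts display in Section 2.1 might read (my own reconstruction, not the thesis's actual source):

```latex
\[
  \int_a^b u \,\frac{d^2 v}{dx^2} \, dx
    = \left. u \,\frac{dv}{dx} \right|_a^b
    - \int_a^b \frac{du}{dx}\,\frac{dv}{dx} \, dx \,.
\]
```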
2019-04-20 02:39:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 30, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9056591987609863, "perplexity": 2752.9563860740564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528481.47/warc/CC-MAIN-20190420020937-20190420042937-00453.warc.gz"}
https://www.findfilo.com/math-question-answers/in-figure-which-of-the-vectors-are-i-collinear-ii-clp
In Figure, which of the vectors are: (i) Collinear (ii) Equal (iii) Coinitial

Class 12 Math Algebra Vector Algebra
2021-09-26 17:10:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9453600645065308, "perplexity": 7323.41080294757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057882.56/warc/CC-MAIN-20210926144658-20210926174658-00219.warc.gz"}